Posted on: 21/04/2026
Description :
We're looking for a Sr Big Data Engineer who expects more from their career. It's a chance to extend and improve dunnhumby's Data Engineering Team, and an opportunity to work with a market-leading business to explore new opportunities for us and influence global retailers.
Key Responsibilities :
- Design end-to-end data solutions, including data lakes, data warehouses, ETL/ELT pipelines, APIs, and analytics platforms.
- Architect scalable and low-latency data pipelines using tools like Apache Kafka, Flink, or Spark Streaming to handle high-velocity data streams.
- Design and orchestrate end-to-end automation using orchestration frameworks such as Apache Airflow to manage complex workflows and dependencies.
- Design intelligent systems that can detect anomalies, trigger alerts, and automatically reroute or restart processes to maintain data integrity and availability.
- Define and implement data governance, metadata management, and data quality standards.
- Lead architectural reviews and technical design sessions to guide solution development.
- Partner with business and IT teams to translate business needs into data architecture requirements.
- Ensure security, compliance, and regulatory requirements are addressed in all data solutions.
- Evaluate and recommend improvements to existing data architecture and processes.
Technical Expertise :
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
- Extensive experience with high-level programming languages such as Python, Java, or Scala.
- 5+ years of experience in data architecture, data engineering, or a related field.
- Proficient in data pipeline tools such as Apache Spark, Kafka, Airflow, or similar.
- Experience with data governance frameworks and tools (e.g., Collibra, Alation, OpenMetadata).
- Strong knowledge of cloud platforms (Azure or Google Cloud), especially cloud-native data services.
- Experience working in Agile or DevOps environments.
- Experience with modern data stack tools (e.g., dbt, Snowflake, Databricks).
- Experience with Hive, Oozie, Airflow, HBase, MapReduce, and Spark, along with working knowledge of Hadoop/Spark toolsets.
- Extensive experience working with Git and process automation.
- In-depth understanding of relational database management systems (RDBMS) and data flow development.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1630081