Posted on: 24/04/2026
Description:
We are looking for a highly skilled Data Engineer to design, build, and optimize scalable data pipelines and architectures. You'll play a key role in enabling data-driven decision-making by ensuring high data quality, efficient processing, and reliable integration across systems.
Key Responsibilities:
- Design and develop robust data pipelines using tools such as PySpark, SQL, and Python (a short illustrative sketch follows this list).
- Build and maintain ETL workflows to collect, process, and transform large-scale datasets.
- Collaborate with data analysts, data scientists, and business teams to ensure data accessibility and reliability.
- Optimize data storage and query performance across data lakes and warehouses (e.g., Azure Data Lake, Snowflake, Databricks, Redshift, BigQuery).
- Implement data quality, lineage, and governance frameworks.
- Work closely with DevOps and cloud engineers for CI/CD integration and pipeline automation.
- Participate in code reviews, mentor junior engineers, and ensure adherence to best practices.
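To give a concrete picture of the pipeline work described above, here is a minimal PySpark sketch of a batch ETL job; the storage paths, column names, and table layout are illustrative assumptions, not details from this posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw events from the data lake (path is a hypothetical example).
raw = spark.read.parquet("abfss://raw@example.dfs.core.windows.net/orders/")

# Basic cleansing: deduplicate and apply a simple data quality rule.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate to a daily mart and write partitioned output for downstream use.
daily = clean.groupBy("order_date").agg(
    F.count("order_id").alias("order_count"),
    F.sum("amount").alias("revenue"),
)
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@example.dfs.core.windows.net/daily_orders/"
)

In practice a job like this would typically run on Databricks or another Spark cluster and be triggered by an orchestrator such as Airflow or Azure Data Factory.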
Technical Skills Required:
- Programming: Python, PySpark, SQL
- Big Data Frameworks: Spark, Databricks, Hadoop ecosystem
- Cloud Platforms: Azure / AWS / GCP (Azure preferred)
- Data Warehousing: Snowflake, Redshift, Synapse, or BigQuery
- ETL Tools: Azure Data Factory, Airflow, or similar orchestration tools (see the orchestration sketch after this list)
- Version Control & CI/CD: Git, Jenkins, or similar
- Strong understanding of data modeling, schema design, and performance optimization
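As a point of reference for the orchestration tools listed above, a minimal Airflow DAG sketch; it assumes Airflow 2.4+ for the schedule argument, and the DAG id, schedule, and callable are hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_orders_etl():
    # Placeholder: in practice this would submit the PySpark job above
    # to a Spark cluster or trigger a Databricks run.
    print("submitting orders ETL")

with DAG(
    dag_id="daily_orders_etl",        # hypothetical name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="orders_etl", python_callable=run_orders_etl)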
Nice to Have:
- Experience with streaming data such as Kafka or Kinesis (a streaming sketch follows this list)
- Familiarity with machine learning data pipelines
- Exposure to DataOps or MLOps frameworks
- Working knowledge of agile methodologies
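For the streaming item above, a minimal Spark Structured Streaming sketch that consumes a Kafka topic; the broker address, topic name, and lake paths are hypothetical, and the job needs the spark-sql-kafka connector package on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

# Subscribe to a Kafka topic (broker and topic are illustrative).
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to a string first.
events = stream.select(F.col("value").cast("string").alias("payload"))

# Land raw payloads in the lake; the checkpoint enables fault-tolerant restarts.
query = (
    events.writeStream.format("parquet")
          .option("path", "/lake/raw/orders_stream/")
          .option("checkpointLocation", "/lake/_checkpoints/orders_stream/")
          .start()
)
query.awaitTermination()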
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1631201