Posted on: 20/04/2026
Role Overview:
- The Senior Data Engineer will be responsible for building, optimizing, and maintaining scalable data pipelines across the Bronze, Silver, and Curated layers.
- This role focuses on high-performance ETL/ELT development, Delta Lake optimization, and close collaboration with DWH teams for downstream consumption.
- The engineer will act as a senior contributor, ensuring reliability, performance, and data quality across the platform.
Responsibilities:
Pipeline Development:
- Implement ingestion and transformation logic across the Bronze and Silver layers.
Curated Layer Logic:
- Implement hydration, merge, and upsert logic using Delta Lake.
- Ensure Curated datasets meet data-quality and business requirements.
- Handle late-arriving data and incremental updates.
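In Delta Lake, the merge/upsert duties above typically map to a `MERGE INTO` against the Curated table. A minimal pure-Python sketch of the keyed upsert semantics, including how a late-arriving record is handled (record shapes, field names, and timestamps are illustrative assumptions, not part of the posting):

```python
# Sketch of keyed upsert semantics with late-arriving data.
# In Delta Lake this is usually expressed as MERGE INTO; here the
# logic is simulated in plain Python. All record shapes ('id',
# 'value', 'event_ts') are hypothetical illustrations.

def upsert(target: dict, updates: list) -> dict:
    """Merge updates into target keyed by 'id', keeping the newest
    'event_ts' so a late-arriving (older) record never overwrites
    fresher data already present in the curated table."""
    merged = dict(target)
    for rec in updates:
        current = merged.get(rec["id"])
        if current is None or rec["event_ts"] >= current["event_ts"]:
            merged[rec["id"]] = rec  # insert new key or update in place
        # else: late-arriving record is older than what we hold; skip it
    return merged

curated = {1: {"id": 1, "value": "a", "event_ts": 100}}
batch = [
    {"id": 1, "value": "late", "event_ts": 90},   # late arrival, ignored
    {"id": 2, "value": "new", "event_ts": 120},   # fresh key, inserted
]
result = upsert(curated, batch)
```

The same guard (compare on an event timestamp, not arrival order) is what makes incremental reprocessing idempotent.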
Performance & Storage Optimization:
- Optimize Delta Lake tables for performance and cost.
- Select and tune appropriate storage formats (Parquet / Delta).
- Apply partitioning, compaction, and file sizing strategies.
- Tune Spark jobs for large-scale data processing.
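One concrete piece of the file-sizing strategy above can be sketched as a simple heuristic: pick an output file count so compacted files land near a target size. The 128 MiB target and the function name are assumptions for illustration; Delta's `OPTIMIZE` command applies its own bin-packing in practice.

```python
# Hedged sketch of a file-sizing heuristic used before compaction or
# repartitioning. The 128 MiB target is a common but assumed default;
# tune it per workload.
import math

def target_file_count(total_bytes: int,
                      target_file_bytes: int = 128 * 1024 * 1024) -> int:
    """Number of output files so each lands near the target size."""
    return max(1, math.ceil(total_bytes / target_file_bytes))

# e.g. compacting ~10 GiB of small files toward ~128 MiB each
n = target_file_count(10 * 1024**3)
```

A count like this is what you would feed to a `repartition(n)` / `coalesce(n)` before writing, when not relying on auto-compaction.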
Downstream & DWH Collaboration:
- Work closely with DWH and BI teams to support downstream consumption.
- Provide optimized datasets for Synapse and reporting workloads.
- Support data validation and reconciliation with Gold-layer outputs.
Engineering Best Practices:
- Implement basic CI/CD practices for data pipelines.
- Follow coding standards, documentation, and version control practices.
- Support production troubleshooting and performance tuning.
Required Skills & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
Experience:
- 5 to 8 years of experience in data engineering.
- Strong hands-on experience building pipelines on Azure.
Technical Skills:
- Strong proficiency in PySpark.
- Hands-on experience with Azure Databricks and Data Factory.
- Deep knowledge of Delta Lake tuning and optimization.
- Experience with storage optimization (Parquet, Delta).
- Strong SQL skills for transformation and validation.
Tools & Practices:
- Experience with Git and basic CI/CD pipelines.
- Familiarity with data quality and validation techniques.
- Experience working in Agile delivery models.
Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work independently on complex pipelines.
- Good communication and collaboration skills.
Nice-to-Have Skills:
- Experience supporting Synapse Dedicated SQL Pool.
- Exposure to streaming or near real-time pipelines.
- Familiarity with data governance or metadata tools.
Interview Process:
- Two rounds of discussion.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1629789