Posted on: 26/03/2026
Description:
- Build and maintain data pipelines using Azure Databricks and Azure Data Factory.
- Implement ingestion and transformation logic across Bronze and Silver layers.
- Support batch and incremental processing patterns.
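For illustration only: the incremental-processing pattern named above usually reduces to a high-watermark filter, where each run picks up only rows modified since the previous run. This is a minimal plain-Python sketch of that idea, not Databricks or ADF code; all field names here are hypothetical.

```python
from datetime import datetime

def incremental_load(source_rows, last_watermark):
    """Select only rows newer than the previous run's high-watermark.

    source_rows: list of dicts with a 'modified_at' datetime field.
    Returns (new_rows, new_watermark) so the caller can persist the
    watermark for the next run.
    """
    new_rows = [r for r in source_rows if r["modified_at"] > last_watermark]
    # If nothing new arrived, keep the old watermark unchanged.
    new_watermark = max((r["modified_at"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified_at": datetime(2026, 3, 1)},
    {"id": 2, "modified_at": datetime(2026, 3, 20)},
]
batch, wm = incremental_load(rows, datetime(2026, 3, 10))
# batch contains only id=2; the watermark advances to 2026-03-20
```

In a real Azure pipeline the watermark would typically live in a control table and the filter would be pushed into the source query rather than applied in memory.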
Curated Layer Logic:
- Ensure curated datasets meet data quality and business requirements.
- Handle late-arriving data and incremental updates.
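Handling late-arriving data generally means upsert (MERGE) semantics: an incoming record replaces the stored one only if it is newer, so out-of-order arrivals never clobber fresher data. A language-agnostic sketch of those semantics under that assumption (in Databricks this would be a Delta `MERGE INTO`; the `key`/`event_time` fields are illustrative):

```python
def upsert(target, updates):
    """Apply MERGE-style upsert semantics for late-arriving records.

    target: dict mapping business key -> record.
    updates: iterable of records, each with 'key' and 'event_time'.
    A record wins only if its event_time is strictly newer than the
    stored one, so a late (older) arrival is ignored.
    """
    for rec in updates:
        current = target.get(rec["key"])
        if current is None or rec["event_time"] > current["event_time"]:
            target[rec["key"]] = rec
    return target

state = {"a": {"key": "a", "event_time": 5, "val": 1}}
arrivals = [
    {"key": "a", "event_time": 3, "val": 9},  # late: older than stored, ignored
    {"key": "b", "event_time": 1, "val": 2},  # new key: inserted
]
state = upsert(state, arrivals)
```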
Performance & Storage Optimization:
- Select and tune appropriate storage formats (Parquet / Delta).
- Apply partitioning, compaction, and file sizing strategies.
- Tune Spark jobs for large-scale data processing.
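The file-sizing strategy referenced above boils down to simple arithmetic: small-file compaction picks an output file count so each file lands near a target size (128 MB is a common starting point for Parquet/Delta, though the right value is workload-dependent). A sketch of that calculation:

```python
import math

def target_file_count(total_bytes, target_file_bytes=128 * 1024 * 1024):
    """Choose how many files to write so each lands near the target size.

    Rounds up so no file exceeds the target by much, and always returns
    at least one file for tiny datasets.
    """
    return max(1, math.ceil(total_bytes / target_file_bytes))

# 10 GiB of data compacted into ~128 MiB files -> 80 output files
files = target_file_count(10 * 1024**3)
```

In Databricks the same outcome is usually reached declaratively (e.g. Delta Lake's `OPTIMIZE` command) rather than by computing counts by hand, but the sizing logic is the same.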
Downstream & DWH Collaboration:
- Provide optimized datasets for Synapse and reporting workloads.
- Support data validation and reconciliation with Gold layer outputs.
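Validation and reconciliation against Gold-layer outputs commonly starts with row-count and column-total checks between layers. A minimal sketch of that check, assuming an illustrative `amount` measure (in practice this would run as SQL against Synapse or Delta tables):

```python
def reconcile(silver_rows, gold_rows, amount_key="amount"):
    """Compare row counts and a column total between two layers.

    Returns a dict of discrepancies; an empty dict means the layers
    reconcile on these two basic checks.
    """
    issues = {}
    if len(silver_rows) != len(gold_rows):
        issues["row_count"] = (len(silver_rows), len(gold_rows))
    silver_total = sum(r[amount_key] for r in silver_rows)
    gold_total = sum(r[amount_key] for r in gold_rows)
    if silver_total != gold_total:
        issues["amount_total"] = (silver_total, gold_total)
    return issues

clean = reconcile([{"amount": 10}, {"amount": 5}],
                  [{"amount": 10}, {"amount": 5}])
dirty = reconcile([{"amount": 10}], [{"amount": 7}])
```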
Engineering Best Practices:
- Follow coding standards, documentation, and version control practices.
- Support production troubleshooting and performance tuning.
Experience:
- Strong hands-on experience building pipelines on Azure.
- Experience working with large datasets and distributed processing.
Technical Skills:
- Hands-on experience with Azure Databricks.
- Strong experience with Azure Data Factory.
- Deep knowledge of Delta Lake tuning and optimization.
- Experience with storage optimization (Parquet, Delta).
- Strong SQL skills for transformation and validation.
Tools & Practices:
- Familiarity with data quality and validation techniques.
- Experience working in Agile delivery models.
Soft Skills:
- Ability to work independently on complex pipelines.
- Good communication and collaboration skills.
Nice to Have:
- Exposure to streaming or near real-time pipelines.
- Familiarity with data governance or metadata tools.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1623986