Posted on: 25/01/2026
Description:
- Build and manage data processing workloads on Databricks, including notebooks, jobs, clusters, and repos.
- Implement and optimize Delta Lake architectures ensuring reliability, performance, and data quality.
- Develop ETL/ELT pipelines handling batch, micro-batch, and (where applicable) streaming workloads.
- Orchestrate workflows using tools such as Azure Data Factory, Airflow, Synapse Pipelines, or Prefect.
- Work with diverse data sources including relational databases, cloud object storage (ADLS / S3), and event streams.
- Implement CI/CD pipelines for data engineering workflows using Git and automated testing frameworks.
- Apply best practices for data modeling, schema evolution, partitioning, and Slowly Changing Dimensions (SCD); a minimal SCD Type 2 sketch follows this list.
- Collaborate with platform, DevOps, and security teams to implement IaC (Terraform / ARM / Bicep) and cloud best practices.
- Monitor and troubleshoot data pipelines using logging, metrics, and observability tools (Application Insights, Prometheus, Grafana).
- Ensure adherence to data governance, privacy, security, and compliance requirements.
- Provide technical guidance and code reviews to junior engineers.
- Communicate technical concepts effectively to non-technical stakeholders.
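To illustrate the Delta Lake, partitioning, and SCD responsibilities above, here is a minimal sketch of an SCD Type 2 upsert with the Delta Lake MERGE API in PySpark. The table name, path, and the customer_id / email / row_hash / is_current / start_date / end_date columns are hypothetical assumptions for illustration, not details from this posting.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical source and target; real names depend on the workspace.
    updates = spark.read.format("parquet").load("/mnt/raw/customers")
    target = DeltaTable.forName(spark, "silver.customers")

    current = target.toDF().where("is_current = true")

    # New versions of records whose tracked attributes changed.
    changed = (
        updates.alias("s")
        .join(current.alias("t"), F.col("s.customer_id") == F.col("t.customer_id"))
        .where(F.col("s.row_hash") != F.col("t.row_hash"))
        .select("s.*")
    )

    # Stage changed rows twice: with a null merge key (forces insertion of the
    # new version) and keyed (lets the matched branch close the old version).
    staged = changed.withColumn("merge_key", F.lit(None).cast("string")).unionByName(
        updates.withColumn("merge_key", F.col("customer_id"))
    )

    (
        target.alias("t")
        .merge(staged.alias("s"), "t.customer_id = s.merge_key AND t.is_current = true")
        .whenMatchedUpdate(
            condition="t.row_hash <> s.row_hash",
            set={"is_current": F.lit(False), "end_date": F.current_date()},
        )
        .whenNotMatchedInsert(
            values={
                "customer_id": "s.customer_id",
                "email": "s.email",
                "row_hash": "s.row_hash",
                "is_current": "true",
                "start_date": "current_date()",
                "end_date": "null",
            }
        )
        .execute()
    )

Unchanged records match but fail the hash condition, so they are untouched; brand-new keys fall through to the insert branch; changed records are closed out by their keyed copy and re-inserted as current rows by their null-keyed copy.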
Mandatory Skills & Qualifications:
- Strong programming expertise in Python and PySpark.
- Proven experience with Databricks and Delta Lake (or similar managed Spark platforms).
- Experience with workflow orchestration tools (ADF, Airflow, Synapse Pipelines, Prefect, etc.).
- Solid understanding of data storage formats (Parquet, Delta, ORC).
- Hands-on experience with cloud data storage (Azure Data Lake Storage preferred; S3 acceptable).
- Strong knowledge of ETL/ELT principles, data modeling, and performance optimization.
- Experience implementing CI/CD for data pipelines; a sketch of the kind of automated test a CI run would execute follows this list.
- Strong analytical, troubleshooting, and debugging skills.
- Excellent written and verbal communication skills.
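As context for the CI/CD requirement, here is a minimal sketch of a unit test that an automated pipeline could run on every commit, using pytest and a local SparkSession. The add_row_hash helper is a hypothetical transformation invented for this example.

    import pytest
    from pyspark.sql import SparkSession, functions as F


    def add_row_hash(df, cols):
        # Hypothetical transformation under test: a deterministic hash over
        # the tracked columns, used to detect changed records.
        return df.withColumn("row_hash", F.sha2(F.concat_ws("||", *cols), 256))


    @pytest.fixture(scope="session")
    def spark():
        # Local session so the test runs in CI without a cluster.
        return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()


    def test_row_hash_detects_changes(spark):
        df = spark.createDataFrame(
            [("c1", "a@example.com"), ("c2", "b@example.com")],
            ["customer_id", "email"],
        )
        hashed = add_row_hash(df, ["customer_id", "email"]).orderBy("customer_id")

        # Same input yields the same hashes...
        again = add_row_hash(df, ["customer_id", "email"]).orderBy("customer_id")
        assert hashed.collect() == again.collect()

        # ...and changing a tracked column changes them.
        changed = add_row_hash(
            df.withColumn("email", F.lit("x@example.com")),
            ["customer_id", "email"],
        ).orderBy("customer_id")
        assert hashed.collect() != changed.collect()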
Good to Have Skills:
- Hands-on exposure to regulated or compliance-driven IT environments.
- Familiarity with Azure services and cloud-native data architectures.
- Experience with monitoring, alerting, and observability frameworks.
- Knowledge of DevOps and SRE practices in data platforms.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1605962