Posted on: 11/11/2025
Description:
Looking for a Senior Data Engineer skilled in Azure Databricks, Apache Spark (PySpark), and Unity Catalog to design, optimise, and govern large-scale data pipelines on Azure.
Responsibilities:
- Build and optimise ETL/ELT pipelines using Databricks and PySpark.
- Implement Unity Catalog for secure data governance, lineage, and access control.
- Integrate with Azure Data Lake, Blob Storage, Event Hubs, and Delta Lake.
- Automate workflows using Azure Data Factory or Databricks Workflows.
- Develop CI/CD pipelines, monitor clusters, and troubleshoot performance issues.
- Collaborate with analysts and business teams to deliver scalable data solutions.
- Handle API integrations for diverse data sources.
Requirements:
- Azure Databricks, Apache Spark (PySpark), SQL, Python.
- Unity Catalog, Delta Lake, CI/CD, IaC (Databricks CLI, DABs), RBAC, Encryption.
- Experience with MLflow, Kafka, Delta Live Tables, dbt, Synapse Link, and Azure Functions is a plus.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1572949