Posted on: 15/08/2025
Job Description:
- Architect, configure, and optimize Databricks pipelines for large-scale data processing within an Azure data lakehouse environment.
- Set up and manage Azure infrastructure components including Databricks Workspaces, Azure Containers (AKS/ACI), Storage Accounts, and Networking.
- Design and implement a monitoring and observability framework using tools like Azure Monitor, Log Analytics, and Prometheus/Grafana.
- Collaborate with platform and data engineering teams to enable microservices-based architecture for scalable and modular data solutions.
- Drive automation and CI/CD practices using Terraform, ARM templates, and GitHub Actions/Azure DevOps.
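A core step in pipelines like these is merging incremental batches into a curated table. Below is a minimal, library-free sketch of the upsert ("merge") logic that a Delta Lake `MERGE` performs; the record schema and `id` key are illustrative assumptions, not part of the posting.

```python
# Toy sketch of Delta-style merge (upsert) semantics: records with a
# matching key are overwritten, unmatched records are inserted.
# The dict-per-record schema keyed on "id" is a hypothetical example.

def merge_upsert(target, updates, key="id"):
    """Merge `updates` into `target` and return a new, key-sorted list."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # update if key matches, insert otherwise
    return sorted(merged.values(), key=lambda r: r[key])

silver = [{"id": 1, "status": "old"}, {"id": 2, "status": "ok"}]
batch = [{"id": 1, "status": "new"}, {"id": 3, "status": "ok"}]
result = merge_upsert(silver, batch)
# result: id 1 updated, id 2 unchanged, id 3 inserted
```

In a real Databricks pipeline this logic would be expressed declaratively with `MERGE INTO` on a Delta table rather than in plain Python; the sketch only shows the update-if-match / insert-if-not rule.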
Required Skills & Experience:
- Strong hands-on experience with Azure Databricks, Delta Lake, and Apache Spark.
- Deep understanding of Azure services: Resource Manager, AKS, ACR, Key Vault, and Networking.
- Proven experience in microservices architecture and container orchestration.
- Expertise in infrastructure-as-code, scripting (Python, Bash), and DevOps tooling.
- Familiarity with data governance, security, and cost optimization in cloud environments.
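The Spark experience asked for above centers on distributed transformations. As a single-process illustration only, the classic `flatMap -> map -> reduceByKey` pattern can be sketched in plain Python; the helper names mirror the RDD API but are hypothetical stand-ins, not Spark calls.

```python
from functools import reduce
from itertools import chain

# Single-process illustration of the transformation pattern Spark
# distributes across a cluster. Names echo the RDD API for clarity.

def flat_map(f, data):
    # flatMap: apply f to each element and flatten the results
    return list(chain.from_iterable(f(x) for x in data))

def reduce_by_key(f, pairs):
    # reduceByKey: combine all values sharing a key with f
    acc = {}
    for k, v in pairs:
        acc[k] = f(acc[k], v) if k in acc else v
    return acc

lines = ["spark delta lake", "delta lake"]
words = flat_map(str.split, lines)         # split each line into words
pairs = [(w, 1) for w in words]            # map each word to (word, 1)
counts = reduce_by_key(lambda a, b: a + b, pairs)
# counts == {"spark": 1, "delta": 2, "lake": 2}
```

On Databricks the same word count would run over partitioned data with genuine parallelism; the point here is only the shape of the computation.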
Bonus:
- Experience with event-driven architectures (Kafka/Event Grid).
- Knowledge of data mesh principles and distributed data ownership.
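The event-driven bonus item boils down to decoupling producers from consumers via topics. A minimal in-process sketch of that publish/subscribe contract follows; `EventBus` and the topic name are illustrative assumptions, reducing what Kafka topics or Event Grid subscriptions provide to a dict of callbacks.

```python
from collections import defaultdict

# Minimal in-process publish/subscribe sketch of the event-driven
# pattern. Real systems (Kafka, Event Grid) add durability, ordering,
# and delivery guarantees that this toy bus deliberately omits.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # register a callback for a topic
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # fan the event out to every subscriber of the topic
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("file-landed", received.append)
bus.publish("file-landed", {"path": "raw/2025/08/15/data.json"})
# received now holds the published event
```

The design point is that the producer never references its consumers, which is what makes event-driven architectures modular and independently scalable.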
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1530033