Posted on: 09/12/2025
Role : Senior/Lead Azure Data Engineer
Experience : 8-14 Years
Location : Pune, Hyderabad, Gurugram, Noida, Bangalore (Hybrid)
Primary Objective :
Lead the design, development, and deployment of ETL (Extract, Transform, Load) processes and data pipelines with Databricks on the Azure cloud platform, ensuring high performance, scalability, and reliability for enterprise data integration and analytics.
Key Responsibilities :
ETL Architecture & Design :
- Define and implement ETL strategies for data ingestion, transformation, and loading into Azure-based data lakes or warehouses.
- Design reusable ETL frameworks and patterns for structured and unstructured data.
Cloud Platform Expertise :
- Utilize Azure services such as Azure Data Factory (ADF) for building and scheduling ETL workflows.
- Implement best practices for security, cost optimization, and performance tuning.
Data Integration :
- Integrate data from multiple sources (on-prem, cloud, APIs) into centralized repositories.
- Ensure data quality, consistency, and lineage tracking; a minimal data-quality sketch follows below.
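By way of illustration only, a data-quality gate of this kind might look like the following minimal PySpark sketch; the storage path, column names, and checks are hypothetical placeholders, not part of any actual codebase for this role.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

# Hypothetical landing-zone path; substitute the real source.
orders = spark.read.parquet("abfss://landing@myaccount.dfs.core.windows.net/orders/")

# Simple consistency checks before promoting data to the curated zone.
total = orders.count()
null_keys = orders.filter(F.col("order_id").isNull()).count()
duplicates = total - orders.dropDuplicates(["order_id"]).count()

# Fail fast so downstream consumers never read bad data.
if null_keys > 0 or duplicates > 0:
    raise ValueError(f"Data-quality failure: {null_keys} null keys, {duplicates} duplicates")
```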
Automation & Orchestration :
- Build automated ETL pipelines using Azure Data Factory (ADF) or Apache Airflow.
- Implement CI/CD for ETL deployments, as sketched below.
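One common ingredient of such a CI/CD setup is scripting notebook deployment against the Databricks workspace import REST API, as in this sketch; the environment variable names and paths are assumptions for illustration.

```python
import base64
import os
import requests

# Hypothetical environment variables injected by the CI/CD system.
HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-xxxx.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]

def deploy_notebook(local_path: str, workspace_path: str) -> None:
    """Push a notebook source file from the repo into the Databricks workspace."""
    with open(local_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("utf-8")
    resp = requests.post(
        f"{HOST}/api/2.0/workspace/import",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "path": workspace_path,
            "format": "SOURCE",
            "language": "PYTHON",
            "content": content,
            "overwrite": True,
        },
        timeout=30,
    )
    resp.raise_for_status()

deploy_notebook("etl/transform_orders.py", "/Shared/etl/transform_orders")
```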
Monitoring & Optimization :
- Set up monitoring for ETL jobs and troubleshoot performance issues.
- Optimize ETL processes for large-scale datasets and real-time streaming.
Leadership & Collaboration :
- Lead a team of ETL developers and data engineers.
- Collaborate with data architects, analysts, and business stakeholders to define requirements.
Required Skills :
- External Sources : APIs, on-prem databases, flat files (CSV, Parquet, JSON).
- Tools : Azure Data Factory (ADF) for orchestration, Databricks connectors.
- Apache Spark : Strong knowledge of Spark (PySpark, Spark SQL) for distributed processing.
- Data Cleaning & Normalization : Handling nulls, duplicates, schema evolution.
- Performance Optimization : Partitioning, caching, broadcast joins.
- Delta Lake : Implementing ACID transactions, time travel, and schema enforcement (see the PySpark sketch after this list).
- Azure Data Factory (ADF) : Building pipelines to orchestrate Databricks notebooks.
- Azure Key Vault : Secure credential management (see the secret-scope sketch after this list).
- Azure Monitor & Logging : For ETL job monitoring and alerting.
- Networking & Security : VNET integration, private endpoints.
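To make the Spark and Delta Lake expectations above concrete, here is a minimal PySpark sketch as it might run on a Databricks cluster, where Delta Lake is available by default; the paths, table layout, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical raw event feed and a small customer dimension table.
raw = spark.read.json("abfss://landing@myaccount.dfs.core.windows.net/events/")
dim_customer = spark.read.format("delta").load("/mnt/curated/dim_customer")

cleaned = (
    raw.dropDuplicates(["event_id"])            # drop duplicate events
       .na.fill({"country": "unknown"})         # normalize nulls in a known column
       .filter(F.col("event_ts").isNotNull())   # discard rows missing the timestamp
)

# Broadcast the small dimension table so the join avoids a full shuffle.
enriched = cleaned.join(broadcast(dim_customer), on="customer_id", how="left")

# Append to Delta with additive schema evolution; partition for pruning.
(enriched.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .partitionBy("event_date")
    .save("/mnt/curated/fact_events"))

# Delta time travel: read an earlier version of the table for audits or rollback.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/curated/fact_events")
```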
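Similarly, for credential management, a Databricks notebook can pull secrets from an Azure Key Vault-backed secret scope through dbutils; the scope name, key name, and JDBC coordinates below are placeholders.

```python
# Runs inside a Databricks notebook, where `spark` and `dbutils` are predefined.
# The scope "kv-etl" is assumed to be backed by Azure Key Vault.
jdbc_password = dbutils.secrets.get(scope="kv-etl", key="sql-etl-password")

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", jdbc_password)   # never hard-code credentials
          .load())
```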
Ideal Candidate Profile :
- 8+ years in data engineering/ETL roles, with at least 4 years in Azure cloud ETL leadership. Azure certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect Expert) preferred.
- Strong communication and team leadership skills.
Functional Area : Data Engineering
Job Code : 1587424