Posted on: 12/11/2025
Description :
Position Summary :
We are seeking a highly skilled Azure Databricks Engineer to design, develop, and optimize large-scale data pipelines and analytical solutions within the Azure cloud ecosystem.
The ideal candidate will have hands-on experience with Azure Databricks, Azure Data Factory (ADF), and PySpark, along with a strong understanding of distributed data processing, ETL development, and data integration best practices.
Key Responsibilities :
Data Engineering & Pipeline Development :
- Design, build, and maintain ETL/ELT pipelines in Azure Databricks using PySpark and Azure Data Factory, ensuring data reliability, accuracy, and performance.
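For illustration only, a minimal sketch of the kind of PySpark ETL job this responsibility covers, run in an Azure Databricks notebook; the storage paths, container names, and column names below are assumptions, not details from this posting or any specific project:

```python
# Minimal PySpark ETL sketch for Azure Databricks.
# All paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw CSV files landed in an ADLS Gen2 container (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/2025/11/")
)

# Transform: basic cleansing and enrichment.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_amount") > 0)
       .withColumn("order_date", F.to_date("order_timestamp"))
)

# Load: write an analytics-ready Delta table, partitioned by date.
(
    clean.write
         .format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .save("abfss://curated@examplelake.dfs.core.windows.net/orders_clean/")
)
```

In practice, a job like this would typically be triggered and monitored from an ADF pipeline rather than run ad hoc.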
Data Integration :
- Integrate and transform structured, semi-structured, and unstructured data from multiple data sources into unified, analytics-ready datasets.
Performance Optimization :
- Implement performance tuning, caching, and partitioning strategies to improve Databricks and Spark job efficiency.
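As a rough sketch of what such tuning can look like in PySpark, shown with hypothetical table names and partition counts that are assumptions for illustration:

```python
# Sketch of common Spark tuning techniques: repartitioning, caching,
# broadcast joins, and partitioned writes. Table names and numbers are illustrative.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

events = spark.read.table("bronze.events")        # hypothetical large fact table
countries = spark.read.table("ref.countries")     # hypothetical small dimension

# Repartition on the join key so related rows are co-located before heavy work.
events = events.repartition(200, "country_code")

# Cache a dataset that is reused by several downstream aggregations.
events.cache()

# Broadcast the small dimension table to avoid a shuffle-heavy join.
daily = (
    events.join(broadcast(countries), "country_code")
          .groupBy("country_code", "event_date")
          .agg(F.count("*").alias("event_count"))
)

# Write partitioned output so later queries can prune by date.
(
    daily.write
         .format("delta")
         .mode("overwrite")
         .partitionBy("event_date")
         .saveAsTable("silver.daily_events")
)
```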
Cloud Architecture & Deployment :
- Leverage Azure services (Azure Data Lake Storage, Azure Synapse Analytics, Azure Key Vault, Azure Event Hubs, etc.) for data ingestion, transformation, and storage.
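One common pattern in this area is reading ADLS Gen2 data from Databricks with credentials held in Azure Key Vault. A minimal sketch follows; the scope, secret, account, and container names are placeholders, and it assumes the `spark` and `dbutils` objects that Databricks notebooks provide:

```python
# Sketch: access ADLS Gen2 from Databricks using a Key Vault-backed secret scope.
# Scope, secret, account, and container names are hypothetical.
storage_account = "examplelake"

# dbutils.secrets.get reads a secret from a Databricks secret scope; when the scope
# is backed by Azure Key Vault, the value is retrieved from Key Vault at runtime.
account_key = dbutils.secrets.get(scope="kv-backed-scope", key="adls-account-key")

# Configure Spark to authenticate to the storage account with that key.
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)

# Read ingested data from the data lake (hypothetical container and path).
df = spark.read.parquet(f"abfss://landing@{storage_account}.dfs.core.windows.net/events/")
```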
Automation & CI/CD :
- Work with DevOps teams to automate deployment and monitoring processes using Azure DevOps pipelines, Git integration, and version control best practices.
Collaboration :
- Collaborate with data scientists, BI developers, and business analysts to translate analytical requirements into technical solutions.
Data Governance & Security :
- Ensure data quality, lineage, and compliance with data governance, privacy, and security standards within Azure.
Technical Skills & Expertise :
- Strong experience in Azure Databricks, Azure Data Factory (ADF), and PySpark.
- Proficiency in Python for data engineering and automation tasks.
- Familiarity with Azure Data Lake Storage (ADLS), Azure Synapse Analytics, and SQL-based data modeling.
- Understanding of Spark architecture, data partitioning, job scheduling, and performance optimization.
- Experience with data orchestration, workflow automation, and error handling in ADF pipelines.
- Hands-on with CI/CD implementation, Git, and Azure DevOps workflows.
- Working knowledge of ETL best practices, data transformation logic, and data quality frameworks.
- Familiarity with Power BI or other visualization tools (optional but desirable).
Preferred Candidate Profile :
- Experience : 3 - 5 years in Azure Cloud, Azure Data Factory, and PySpark.
- Educational Qualification : Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Strong analytical, problem-solving, and debugging skills.
- Excellent communication and collaboration abilities within cross-functional teams.
- Ability to deliver quality results under tight deadlines and evolving priorities.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1574166