hirist

Job Description

Job Summary:

We are seeking an experienced Azure Data Engineer to design, build, and optimize scalable data pipelines and platforms on Microsoft Azure. The ideal candidate will have strong hands-on experience with Azure data services, Databricks, and big data processing frameworks, and will work closely with cross-functional teams to deliver reliable, high-quality data solutions that support analytics and business decision-making.

Key Responsibilities:


- Design, develop, and maintain end-to-end data pipelines using Azure services.
- Build and optimize batch and real-time data processing solutions using Azure Databricks, PySpark, and Structured Streaming.
- Implement data ingestion and orchestration using Azure Data Factory (ADF).
- Manage and optimize data storage solutions using Azure Data Lake Storage (ADLS Gen2).
- Develop and maintain curated data layers using Azure Synapse Analytics / Delta Lake.
- Ensure data quality, reliability, performance, and cost optimization across platforms.
- Implement data security, governance, and access controls using Unity Catalog and Azure security best practices.
- Collaborate with data analysts, data scientists, architects, and business stakeholders to translate requirements into technical solutions.
- Monitor, troubleshoot, and resolve data pipeline and performance issues.
- Contribute to architecture decisions, coding standards, and best practices.
- Mentor junior engineers and provide technical guidance when required.


Required Skills & Qualifications:


Technical Skills:

- Strong experience with the Azure data engineering stack:
  - Azure Data Factory (ADF)
  - Azure Data Lake Storage (ADLS Gen2)
  - Azure Synapse Analytics
  - Azure Databricks
  - Azure Stream Analytics (nice to have)
- Hands-on expertise in:
  - PySpark, Python, and SQL
  - Databricks Delta Lake and Delta Live Tables (DLT)
  - Structured Streaming for real-time data processing
- Strong understanding of ETL/ELT, data modeling, and data warehousing concepts.
- Experience working with large-scale, high-volume, and real-time datasets.
- Familiarity with CI/CD, Git, and infrastructure-as-code concepts (ARM, Bicep, or Terraform is a plus).

