Posted on: 19/08/2025
We are looking for a passionate and detail-oriented Azure Data Engineer to join our growing data team. In this role, you will be responsible for designing, building, and optimizing scalable data platforms in the cloud. The ideal candidate has strong expertise in the Azure ecosystem, hands-on experience with Databricks and PySpark, and a solid background in data modeling, ELT, and data integration. If you enjoy solving complex data challenges and turning data into meaningful business insights, this role is for you.
Key Responsibilities:
- Design and implement robust, scalable, and secure data pipelines using Azure Data Factory and Databricks.
- Develop, optimize, and maintain PySpark code for large-scale data processing and transformation (see the sketch after this list).
- Design and implement ELT processes to support analytical and reporting requirements.
- Build and maintain logical and physical data models to support business intelligence, analytics, and reporting.
- Ingest and integrate structured, semi-structured, and unstructured data from multiple sources, ensuring data quality, consistency, and governance.
- Collaborate closely with data architects, analysts, and business stakeholders to deliver end-to-end data solutions.
- Implement best practices for code quality, version control, and automation using GitHub Actions, Azure DevOps, PyTest, and SonarQube.
- Monitor, troubleshoot, and optimize performance of pipelines and data processes.
- Stay up to date with new trends in cloud data engineering, evaluating new tools and frameworks for adoption.
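As an illustration of the kind of PySpark transformation work described above, here is a minimal sketch of an ingest-and-cleanse step; the dataset, column names, and storage paths are hypothetical examples, not details from this posting.

```python
# Minimal PySpark sketch of an ingest-and-transform step.
# All paths, columns, and names below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_load").getOrCreate()

# Read raw, semi-structured input (e.g. JSON landed by a Data Factory copy activity).
raw = spark.read.json("/mnt/raw/sales/2025-08-19/")

# Basic cleansing and typing before loading into a curated zone.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount").isNotNull())
)

# Write as Delta for downstream ELT and reporting.
curated.write.format("delta").mode("overwrite").save("/mnt/curated/sales/")
```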
Required Skills & Experience :
Must-Have :
- Hands-on experience with Databricks for data engineering workflows.
- Strong knowledge of Azure Data Factory for pipeline orchestration.
- Solid programming skills in PySpark for big data processing.
Mastery of:
- Cloud ecosystems (Azure / AWS / SAP).
- Data Modeling (dimensional, relational, and modern warehouse/lakehouse approaches).
- ELT methodologies for scalable data integration.
Additional Skills / Tools:
- Data integration & ingestion from multiple sources.
- Data manipulation and processing at scale.
- Proficiency with SQL Databases, Synapse Analytics, Stream Analytics, Glue, Airflow, Kinesis, and Redshift.
- CI/CD knowledge with GitHub Actions / Azure DevOps.
- Testing and code quality tools such as SonarQube and PyTest (a short PyTest sketch follows this list).
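To illustrate the testing side of the stack, here is a minimal PyTest sketch for a small PySpark transformation; the function, fixture, and values are hypothetical and shown only as an example of unit-testing pipeline logic.

```python
# Hypothetical transformation under test and a minimal PyTest case for it.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_amount_with_tax(df, rate=0.2):
    """Append an amount_with_tax column; a stand-in for real pipeline logic."""
    return df.withColumn("amount_with_tax", F.col("amount") * (1 + rate))


@pytest.fixture(scope="module")
def spark():
    # Small local session so the test runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_amount_with_tax(spark):
    df = spark.createDataFrame([(1, 100.0)], ["order_id", "amount"])
    result = add_amount_with_tax(df, rate=0.1).collect()[0]
    assert result["amount_with_tax"] == pytest.approx(110.0)
```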
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1531666