Posted on: 25/09/2025
Key Responsibilities:
- Design, build, and maintain data pipelines using Azure Data Factory, Databricks, and Data Lake.
- Develop efficient, reusable, and scalable ETL/ELT processes in Python, PySpark, and SQL.
- Implement and maintain data models, transformations, and integrations for large-scale data platforms.
- Ensure data quality, governance, and security compliance in all processes.
- Collaborate with data scientists, analysts, and business teams to enable data-driven decision-making.
- Apply CI/CD practices and work with Azure DevOps for deployment and version control.
- Troubleshoot and optimize performance of data pipelines, APIs, and integrations.
- Ensure data security and compliance across platforms and services.
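The pipeline work described above follows a classic extract-transform-load shape. As a minimal sketch of that flow in plain Python (the real role would use Azure Data Factory and Databricks/PySpark; the stage names and sample data here are hypothetical):

```python
import csv
import io
import json

# Toy raw input: one record has a missing amount, which the transform
# stage drops as a basic data-quality rule.
RAW_CSV = "user_id,amount\n1,10.5\n2,\n3,7.0\n"

def extract(raw: str) -> list[dict]:
    """Read raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop rows with missing amounts and cast fields to proper types."""
    return [
        {"user_id": int(r["user_id"]), "amount": float(r["amount"])}
        for r in rows
        if r["amount"]
    ]

def load(rows: list[dict]) -> str:
    """Serialize the cleaned records; a real load step would write
    to Azure Data Lake rather than return a JSON string."""
    return json.dumps(rows)

result = load(transform(extract(RAW_CSV)))
```

The same extract/transform/load separation carries over to PySpark, where each stage becomes a DataFrame read, a chain of transformations, and a write to managed storage.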
Technical Skills & Experience:
- 7+ years of experience in Data Engineering with strong expertise in Azure-based solutions.
- Proficiency in Python, PySpark, and SQL (including advanced queries, performance tuning).
- Hands-on experience with Azure services:
  - Azure Databricks (data processing, ML model support)
  - Azure Data Factory (ETL/ELT orchestration)
  - Azure Data Lake (storage and data management)
  - Azure DevOps (CI/CD pipelines, version control)
- Strong understanding of CI/CD principles for data workflows.
- Good knowledge of API/web service security and data security practices.
- Experience with relational databases (SQL Server, Oracle, MySQL, etc.).
- Familiarity with ML libraries (added advantage).
- Excellent problem-solving, troubleshooting, and communication skills.
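"Advanced queries" in the SQL requirement typically means constructs such as window functions. A small self-contained illustration, using SQLite via Python's standard library (the table and column names are invented for the example; the target databases in the posting are SQL Server, Oracle, and MySQL):

```python
import sqlite3

# Rank each region's daily sales with a window function, a common
# "advanced SQL" pattern for per-group top-N analysis.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [
        ("east", "2025-09-01", 100.0),
        ("east", "2025-09-02", 250.0),
        ("west", "2025-09-01", 180.0),
        ("west", "2025-09-02", 90.0),
    ],
)
rows = conn.execute(
    """
    SELECT region, day, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
    """
).fetchall()
```

Each region's highest-amount day gets rank 1, independently of the other regions; the same `RANK() OVER (PARTITION BY ...)` syntax works on SQL Server, Oracle, and MySQL 8+.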
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1552499