Posted on: 28/08/2025
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT processes using PySpark, SQL, and Python (a minimal sketch follows this list).
- Build and manage data workflows on Azure Data Lake, Azure Data Factory (ADF), and Databricks.
- Collaborate with data scientists, analysts, and other stakeholders to understand data needs and ensure data quality and availability.
- Implement data modeling, transformation, and integration solutions from various structured and unstructured data sources.
- Design and implement CI/CD pipelines for data engineering projects using industry best practices.
- Monitor and optimize performance of data systems, ensuring high availability and reliability.
- Document data workflows, schemas, and architectures to support knowledge sharing and maintenance.
- Actively participate in code reviews and contribute to a culture of continuous improvement.
- Communicate effectively with technical and non-technical stakeholders, translating business requirements into technical solutions.
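For illustration, a minimal PySpark sketch of the kind of ETL step described in the first responsibility. The storage account, container names, columns, and paths here are hypothetical; on Databricks this would typically run as a job orchestrated by ADF:

from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already provided as `spark`; building
# one here keeps the sketch self-contained elsewhere.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files landed in the data lake (path is hypothetical).
raw = spark.read.option("header", True).csv(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/"
)

# Transform: cast types, drop rows missing the key, derive a partition date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write Delta output, partitioned by date, to the curated zone
# (Delta Lake is built into Databricks runtimes).
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))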
Required Qualifications:
- Proficiency in SQL, Python, and PySpark for data processing and transformation.
- Experience working with Azure Data Lake Storage, Azure Data Factory, and Databricks.
- Solid understanding of data warehousing concepts, data modeling, and data integration.
- Experience with CI/CD tools and deployment practices (e.g., Azure DevOps, Git, Jenkins); a testing sketch follows this list.
- Strong analytical, problem-solving, and debugging skills.
- Excellent verbal and written communication skills.
- Strong team player with the ability to work independently when needed.
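And a sketch of how pipeline code can be unit-tested as part of the CI/CD practices listed above. pytest, the local-mode session, and the function under test are assumptions for illustration, not a prescribed setup:

import pytest
from pyspark.sql import SparkSession, functions as F

def add_order_date(df):
    # Hypothetical transformation under test: derive a date from a timestamp string.
    return df.withColumn("order_date", F.to_date("order_ts"))

@pytest.fixture(scope="session")
def spark():
    # Local SparkSession so the suite runs on a CI agent without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_order_date(spark):
    df = spark.createDataFrame(
        [("1", "2025-08-28 10:00:00")], ["order_id", "order_ts"]
    )
    out = add_order_date(df)
    assert out.first()["order_date"].isoformat() == "2025-08-28"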
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1536790