Posted on: 12/11/2025
Description:
Key Responsibilities:
- Lead the design, development, and implementation of data pipelines and analytics solutions using Databricks, Azure Data Factory (ADF), Synapse, and PySpark/Python.
- Collaborate with stakeholders to interpret design specifications and create STTM (Source to Target Mapping) and technical documentation.
- Ensure best practices in data engineering, ETL, and data warehousing are followed across projects.
- Mentor and guide junior developers, providing technical leadership and fostering a culture of knowledge sharing.
- Troubleshoot, optimize, and enhance existing pipelines and workflows for performance, reliability, and scalability.
- Work closely with cross-functional teams to understand business requirements and translate them into technical solutions.
- Work from client locations as needed and support project delivery timelines.
Required Skills & Expertise:
- 10+ years of IT experience in Data Warehousing and ETL.
- 5+ years of hands-on experience with Databricks.
- Strong foundation in cloud data engineering using Azure, ADF, and Synapse.
- Proficiency in PySpark and Python for large-scale data processing.
- Knowledge of data modeling, pipeline orchestration, and performance optimization.
- Excellent leadership, mentoring, and communication skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1573299