Posted on: 28/10/2025
Description:
- Work closely with business stakeholders, analysts, and data scientists to understand data requirements and deliver reliable solutions.
- Optimize ETL workflows for performance, scalability, and reliability.
- Implement best practices for data ingestion, transformation, and integration across multiple sources.
- Ensure data quality, governance, and security across the data lifecycle.
- Troubleshoot and resolve issues related to data pipelines, storage, and performance.
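For context, the transform-and-validate work described above can be sketched in plain Python (standing in for the PySpark/ADF stack; the field names `id` and `amount` are hypothetical):

```python
def transform(records):
    """Normalize raw records and drop rows that fail basic quality checks."""
    cleaned = []
    for row in records:
        # Quality gate: require a non-empty id and a parseable amount
        if not row.get("id"):
            continue
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue
        cleaned.append({"id": row["id"].strip(), "amount": round(amount, 2)})
    return cleaned
```

In a production pipeline the same pattern would typically run as a PySpark job orchestrated by an ADF pipeline, with rejected rows routed to a quarantine table rather than silently dropped.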
Required Skills & Qualifications:
- Strong experience in building large-scale data pipelines and ETL workflows.
- Hands-on expertise in PySpark for data processing and transformation.
- Proficiency in Azure Data Factory (ADF) for orchestrating and automating workflows.
- Solid understanding of Python for scripting, data handling, and automation.
- Strong SQL skills and ability to work with relational and non-relational databases.
- Good knowledge of data warehousing concepts and performance optimization.
- Exposure to the Azure ecosystem (Data Lake, Databricks, Synapse Analytics, etc.) preferred.
- Excellent problem-solving, analytical, and communication skills.
Nice to Have (Optional):
- Knowledge of data governance, security, and compliance frameworks.
- Familiarity with real-time data streaming technologies (Kafka, Event Hubs, etc.).
Additional Details:
- Contract/Full-Time: Full-Time.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1565923