Posted on: 09/11/2025
Responsibilities:
- Design and build robust, scalable ETL/ELT pipelines using Python, SQL, and Spark.
- Develop and optimize data workflows for performance, reliability, and cost efficiency.
- Implement data integration solutions across multiple sources and destinations.
- Ensure high standards of data quality, accuracy, and governance.
- Collaborate with analytics, engineering, and business teams to deliver reliable data solutions.
- Maintain and monitor data pipelines and troubleshoot issues proactively.
Requirements:
- Strong programming skills in Python and SQL.
- Hands-on experience with Apache Spark for big data processing.
- Experience with cloud platforms (Azure / AWS) and their data services (e.g., Azure Data Factory, AWS Glue, S3, Redshift).
- Familiarity with CI/CD practices and data version control is a plus.
- Excellent problem-solving and communication skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1571808