Posted on: 23/09/2025
Key Responsibilities:
- Design, develop, and maintain data pipelines using AWS services such as S3, Glue Studio, Redshift, Athena, and EMR.
- Write efficient SQL queries for data processing, transformation, and analysis.
- Develop and optimize Python scripts to automate data workflows (an illustrative sketch follows this list).
- Manage batch job scheduling and monitor dependencies to ensure reliable data delivery.
- Collaborate with data analysts, data scientists, and business teams to enable data-driven decision-making.
- Troubleshoot and resolve issues related to data quality, performance, and scalability.
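As an illustration of the Python automation described above, here is a minimal sketch that starts and monitors an AWS Glue job with boto3. It assumes a hypothetical job named daily_sales_etl and default AWS credentials; a production pipeline would add retries, alerting, and dependency checks.

import time

import boto3

# Glue client; the region here is an assumption for this sketch.
glue = boto3.client("glue", region_name="us-east-1")


def run_glue_job(job_name: str, poll_seconds: int = 30) -> str:
    """Start a Glue job run and poll it until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
            return state
        time.sleep(poll_seconds)  # still STARTING/RUNNING; wait and re-check


if __name__ == "__main__":
    # Hypothetical job name used only for this example.
    final_state = run_glue_job("daily_sales_etl")
    print(f"Glue job finished with state: {final_state}")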
Required Qualifications:
- 4-8 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with AWS data services: S3, Glue Studio, Redshift, Athena, and EMR.
- Strong proficiency in SQL and Python for large-scale data processing.
- Experience with batch job scheduling and managing complex data dependencies.
- Solid understanding of data modeling, ETL processes, and performance optimization.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
Functional Area: Data Engineering
Job Code: 1551182