Posted on: 12/02/2026
Role Description:
We are seeking a skilled professional to design, build, and maintain scalable ETL/data pipelines on AWS using Glue, Lambda, and S3 to enable reliable and high-performance data processing.
- Monitor, optimize, and troubleshoot workflows to ensure data accuracy, consistency, and operational stability.
- Apply a good understanding of AWS monitoring and logging (e.g., CloudWatch metrics, logs, and alarms) to keep pipelines observable.
- Collaborate with business and engineering teams to translate data requirements into automated transformation, integration, and migration solutions.
- Enforce best practices across data quality, security, governance, and compliance.
- Maintain clear documentation and continuously enhance platform efficiency and reliability.
Key Responsibilities:
- Design and implement robust ETL pipelines using AWS Glue, Lambda, and S3.
- Monitor and optimize the performance of data workflows and batch processing jobs.
- Troubleshoot and resolve issues related to data pipeline failures, inconsistencies, and performance bottlenecks.
- Collaborate with cross-functional teams to define data requirements and ensure data quality and accuracy.
- Develop and maintain automated solutions for data transformation, migration, and integration tasks.
- Implement best practices for data security, data governance, and compliance within AWS environments.
- Continuously improve and optimize AWS Glue jobs, Lambda functions, and S3 storage management.
- Maintain comprehensive documentation for data pipeline architecture, job schedules, and issue resolution processes.
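As a purely illustrative sketch of the first two responsibilities (not part of the role description itself), an AWS Lambda function might parse S3 object-created events and start a Glue ETL job for each new file; the Glue job name, bucket, and argument names below are placeholders:

```python
import json


def parse_s3_event(event):
    """Extract (bucket, key) pairs from an S3 ObjectCreated event payload."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
        if rec.get("eventSource") == "aws:s3"
    ]


def handler(event, context):
    """Lambda entry point: start a Glue job run for each newly landed object.

    'example-etl-job' is a placeholder; in practice the job name would come
    from configuration such as an environment variable.
    """
    import boto3  # imported lazily so parse_s3_event stays testable offline

    glue = boto3.client("glue")
    runs = []
    for bucket, key in parse_s3_event(event):
        resp = glue.start_job_run(
            JobName="example-etl-job",  # placeholder Glue job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        runs.append(resp["JobRunId"])
    return {"statusCode": 200, "body": json.dumps({"job_runs": runs})}
```

Keeping event parsing separate from the AWS API call makes the trigger logic unit-testable without live credentials.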
Required Skills and Experience:
- Strong experience with Data Engineering practices.
- Experience in AWS services, particularly AWS Glue, Lambda, S3, and other AWS data tools.
- Proficiency in SQL, Python, PySpark, NumPy, and related tools, with experience working with large-scale datasets.
- Experience in designing and implementing ETL pipelines in cloud environments.
- Expertise in troubleshooting and optimizing data processing workflows.
- Familiarity with data warehousing concepts and cloud-native data architecture.
- Knowledge of automation and orchestration tools in a cloud-based environment.
- Strong problem-solving skills and the ability to debug and improve the performance of data jobs.
- Excellent communication skills and the ability to work collaboratively with cross-functional teams.
- Knowledge of dbt and Snowflake is a plus.
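The transformation and data-quality work implied by the skills above can be sketched in plain Python (in a Glue job this would typically be PySpark instead); the record fields `id` and `amount` and the quality rules are hypothetical examples:

```python
from typing import Iterable


def clean_records(records: Iterable[dict]) -> list[dict]:
    """Deduplicate by primary key and drop rows that fail basic quality checks.

    Assumes each record has hypothetical fields 'id' and 'amount'; keeps the
    first occurrence of each id and requires a non-negative numeric amount.
    """
    seen = set()
    out = []
    for rec in records:
        rec_id = rec.get("id")
        if rec_id is None or rec_id in seen:
            continue  # skip duplicates and rows missing a key
        amount = rec.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            continue  # quality rule: amount must be a non-negative number
        seen.add(rec_id)
        out.append(rec)
    return out
```

The same keep-first-per-key, filter-invalid-rows pattern maps directly onto PySpark's `dropDuplicates` and `filter` operations at scale.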
Preferred Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Experience with other AWS data services like Redshift, Athena, or Kinesis.
- Familiarity with Python or other scripting languages for data engineering tasks.
- Experience with containerization and orchestration tools like Docker or Kubernetes.
Location: Candidates should be based in Gurgaon or Hyderabad.
We are an Equal Opportunity Employer:
We value diversity at Incedo.
We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1612068