
Incedo - Senior Data Engineer - Python/PySpark

hirist.tech
Gurgaon/Gurugram
4 - 10 Years

Posted on: 30/01/2026

Job Description

Note: If shortlisted, you will be invited to initial rounds on 7th February 2026 (Saturday) in Gurugram.


Role Description:


We are seeking a skilled professional to maintain and support batch jobs in a legacy environment. The role involves managing and monitoring ETL processes, addressing issues, and enhancing existing PL/SQL scripts. The ideal candidate will have strong expertise in Informatica, SQL Server, and data warehousing concepts, along with experience in troubleshooting and improving batch job performance.

Key Responsibilities:

- Design and implement robust ETL pipelines using AWS Glue, Lambda, and S3.

- Monitor and optimize the performance of data workflows and batch processing jobs.

- Troubleshoot and resolve issues related to data pipeline failures, inconsistencies, and performance bottlenecks.

- Collaborate with cross-functional teams to define data requirements and ensure data quality and accuracy.

- Develop and maintain automated solutions for data transformation, migration, and integration tasks.

- Implement best practices for data security, data governance, and compliance within AWS environments.

- Continuously improve and optimize AWS Glue jobs, Lambda functions, and S3 storage management.

- Maintain comprehensive documentation for data pipeline architecture, job schedules, and issue resolution processes.
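As a rough illustration only (this sketch is not part of the posting, and every record and field name in it is hypothetical), the transform-and-validate step of such a batch pipeline might look like the following pure-Python fragment; in practice the same logic would typically run inside a PySpark job on AWS Glue:

```python
from datetime import datetime

def transform_records(raw_records):
    """Clean and validate raw order records before loading.

    Normalises ids and timestamps and casts amounts to float;
    malformed rows are collected separately so a failed batch can
    be inspected and reprocessed rather than silently dropped.
    """
    clean, rejected = [], []
    for rec in raw_records:
        try:
            clean.append({
                "order_id": str(rec["order_id"]),
                "amount": float(rec["amount"]),
                "order_date": datetime.strptime(
                    rec["order_date"], "%Y-%m-%d"
                ).date().isoformat(),
            })
        except (KeyError, ValueError, TypeError):
            rejected.append(rec)
    return clean, rejected

# Example batch with one malformed row.
raw = [
    {"order_id": 101, "amount": "19.99", "order_date": "2026-01-30"},
    {"order_id": 102, "amount": "not-a-number", "order_date": "2026-01-30"},
]
clean, rejected = transform_records(raw)
```

Separating rejected rows from clean ones is one common way to make pipeline failures debuggable, which is the troubleshooting emphasis the responsibilities above describe.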

Required Skills and Experience:

- Strong experience with Data Engineering practices.

- Experience with AWS services, particularly AWS Glue, Lambda, S3, and other AWS data tools.

- Proficiency in SQL, Python, PySpark, and NumPy, with experience working with large-scale data sets.

- Experience in designing and implementing ETL pipelines in cloud environments.

- Expertise in troubleshooting and optimizing data processing workflows.

- Familiarity with data warehousing concepts and cloud-native data architecture.

- Knowledge of automation and orchestration tools in a cloud-based environment.

- Strong problem-solving skills and the ability to debug and improve the performance of data jobs.

- Excellent communication skills and the ability to work collaboratively with cross-functional teams.

- Good to have: knowledge of dbt and Snowflake.
