Posted on: 22/01/2026
Note: If shortlisted, you will be invited to the initial rounds on 7th February 2026 (Saturday) in Gurugram.
Job Description:
Location: Gurgaon
Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
- Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
- Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
- Implement data governance and security best practices to ensure compliance and data integrity.
- Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
- Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in data engineering, with a focus on designing and building data pipelines.
- Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
- Strong programming skills in languages such as Python, Java, or Scala.
- Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1605026