Posted on: 16/07/2025
About the Opportunity :
We are seeking experienced Data Engineers skilled in AWS and Python to join our data-driven teams across Hyderabad, Pune, and Gurugram. If you're passionate about building scalable data pipelines, enabling advanced analytics, and solving real-world data challenges in a cloud-first environment, this is the right role for you.
Key Responsibilities :
Data Pipeline Development :
- Design, develop, and maintain large-scale, real-time, and batch ETL/ELT data pipelines on AWS
- Process and transform structured and unstructured data for analytics and reporting (a batch example is sketched below)
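For illustration only, a minimal batch ETL step of the kind described above might look like the sketch below. It assumes a PySpark runtime (for example EMR or a Glue Spark job), and the bucket names and fields are placeholders; a real pipeline would add schema validation, incremental loads, and error handling.

```python
# Minimal sketch of a batch ETL step on AWS, assuming a PySpark runtime;
# bucket names and field names are placeholders, not a real data model.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Extract: raw JSON events previously landed in S3
raw = spark.read.json("s3://example-raw-bucket/orders/2025/07/")

# Transform: keep completed orders and build a daily revenue aggregate
daily_revenue = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned Parquet for downstream analytics (Athena, Redshift Spectrum)
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders_daily_revenue/"))
```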
Cloud Data Engineering :
- Work extensively with AWS services such as S3, Glue, Lambda, Redshift, EMR, and Athena (a small boto3 example is sketched below)
- Ensure security, scalability, and performance of cloud-native data platforms
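As a rough sketch of driving these services from Python, the snippet below submits an Athena query with boto3 and polls for completion. The database, table, region, and result-bucket names are placeholders, and production code would add retries, pagination, and properly scoped IAM credentials.

```python
# Minimal sketch of calling AWS services from Python via boto3;
# database, table, region, and bucket names are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Submit an Athena query over data catalogued in Glue and stored in S3
query = athena.start_query_execution(
    QueryString="SELECT order_date, revenue FROM orders_daily_revenue LIMIT 10",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-query-results/"},
)
execution_id = query["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    status = athena.get_query_execution(QueryExecutionId=execution_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows)} rows (including the header row)")
```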
Python-Based Development :
- Write modular, testable Python code for data ingestion and processing (see the sketch below)
- Optimize performance of Python-based data transformation scripts
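As an example of what "modular and testable" can mean in practice, the sketch below keeps pure transformation logic separate from I/O so it can be unit-tested without AWS access; the record shape and field names are assumptions for illustration.

```python
# Minimal sketch of a modular, testable transformation step; record shape
# and field names are assumptions. I/O is deliberately kept out so the
# function can be unit-tested (e.g. with pytest) without AWS access.
from typing import Iterable, Iterator


def clean_records(records: Iterable[dict]) -> Iterator[dict]:
    """Drop records without an id and normalise the amount field to a float."""
    for record in records:
        if not record.get("id"):
            continue
        yield {**record, "amount": float(record.get("amount", 0))}


def test_clean_records():
    raw = [{"id": "a1", "amount": "10.5"}, {"amount": "3"}]
    assert list(clean_records(raw)) == [{"id": "a1", "amount": 10.5}]
```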
Collaboration & Best Practices :
- Work with data scientists, architects, and business teams to understand data requirements
- Implement CI/CD, data quality checks, and monitoring solutions for production pipelines (a simple quality gate is sketched below)
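For orientation, a data quality gate for a production pipeline can be as simple as the sketch below, which fails the run when a batch is empty or required fields are missing. The thresholds and field names are placeholders; in practice such a check would run as a pipeline task and emit metrics or alerts (for example to CloudWatch).

```python
# Minimal sketch of a data quality gate; thresholds and field names are
# placeholders, and a real pipeline would also publish metrics/alerts.
from typing import Sequence


class DataQualityError(Exception):
    """Raised to fail the pipeline run when a batch does not meet expectations."""


def check_quality(rows: Sequence[dict], required_fields: Sequence[str], min_rows: int = 1) -> None:
    if len(rows) < min_rows:
        raise DataQualityError(f"Expected at least {min_rows} rows, got {len(rows)}")
    for field in required_fields:
        missing = sum(1 for row in rows if row.get(field) in (None, ""))
        if missing:
            raise DataQualityError(f"{missing} rows are missing required field '{field}'")
```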
Desired Skills :
- Strong command of Python for data processing
- Expertise in the AWS data stack (S3, Glue, Lambda, Redshift, etc.)
- Familiarity with SQL, data lakes, data modeling, and Spark (preferred)
- Experience with Airflow, Terraform, or similar orchestration and IaC tools (a minimal Airflow example is sketched below)
- Understanding of DevOps practices, Git, and Agile methodology
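For orientation only, a daily pipeline of this shape is commonly orchestrated with an Airflow DAG along the lines of the sketch below (assuming Airflow 2.4+); the DAG name, schedule, and task bodies are placeholders.

```python
# Minimal sketch of daily orchestration with Airflow's TaskFlow API
# (assuming Airflow 2.4+); task bodies are placeholders for real steps.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 7, 1), catchup=False)
def orders_daily_pipeline():

    @task
    def extract() -> str:
        # e.g. land raw files in S3 and return the batch path
        return "s3://example-raw-bucket/orders/"

    @task
    def transform(batch_path: str) -> str:
        # e.g. trigger a Glue or Spark job for the batch
        return "s3://example-curated-bucket/orders_daily_revenue/"

    @task
    def quality_check(curated_path: str) -> None:
        # e.g. run the data quality gate before publishing
        pass

    quality_check(transform(extract()))


orders_daily_pipeline()
```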
Eligibility Criteria :
Education :
- UG : B.Tech/B.E. in Computer Science, IT, or related technical fields
- PG : M.Tech/MCA preferred but not mandatory
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1513881