Posted on: 02/12/2025
Location: Gurgaon
Job Type: Full Time
Experience: 3-6 Years
Job Description:
We are looking to hire exceptional candidates from Tier 1 engineering institutions in India (IITs, NITs, BITS, DTU, and other top-ranked universities) who bring strong technical expertise and a passion for innovation.
Stack:
- Python (Pandas, PySpark, Airflow)
- PostgreSQL, DynamoDB, S3
- AWS Glue, Redshift (optional: Snowflake)
- Kafka or RabbitMQ (streaming pipelines)
Responsibilities:
- Design and build scalable ETL pipelines for ingestion/reporting
- Work with semi-structured and unstructured data (JSON, logs, etc.)
- Partner with backend and data science teams to enable ML-driven features
- Monitor data health and pipeline performance
- Implement observability, validation, and testing in pipelines
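The validation and testing responsibility above can be sketched as a simple record-level check on semi-structured JSON log lines. This is an illustrative standard-library sketch, not this team's actual pipeline code; the field names and function names are assumptions:

```python
import json

# Fields every event record is assumed to carry (illustrative schema)
REQUIRED_FIELDS = {"event_id", "timestamp", "payload"}

def validate_record(raw: str):
    """Parse one JSON log line; return the record, or None if it is invalid."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(record, dict) or not REQUIRED_FIELDS <= record.keys():
        return None
    return record

def validate_batch(lines):
    """Split a batch into valid records and a rejected count for monitoring."""
    valid, rejected = [], 0
    for line in lines:
        record = validate_record(line)
        if record is None:
            rejected += 1
        else:
            valid.append(record)
    return valid, rejected
```

In an orchestrated pipeline (e.g. an Airflow task), the rejected count would be emitted as a metric so data-health dashboards can alert on quality regressions.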
Requirements:
- 3-5 years of data engineering experience
- Strong in Python, SQL, and data modeling
- Hands-on with workflow tools like Airflow or Prefect
- Familiar with S3, partitioning, cost-efficient pipelines
- Bonus: event-driven ETL or ad/commerce datasets
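"Partitioning" in the S3 requirement above usually means laying out object keys by a date column (Hive-style `dt=` prefixes) so downstream queries scan only the relevant prefixes instead of the whole table. A minimal key-building sketch; the bucket layout and names are illustrative assumptions:

```python
from datetime import date

def partitioned_key(table: str, event_date: date, part: int) -> str:
    """Build a Hive-style partitioned S3 key, e.g. table/dt=YYYY-MM-DD/part-0000.parquet."""
    return f"{table}/dt={event_date.isoformat()}/part-{part:04d}.parquet"
```

Writing and querying along such prefixes is one common way to keep scan costs down in Athena/Redshift Spectrum-style engines.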
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1583561