Posted on: 18/08/2025
Roles and Responsibilities:
- Develop, monitor, and maintain data pipelines.
- Create and maintain an optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Diagnose, route, evaluate, and resolve incidents.
- Analyze the root cause of incidents.
- Create incident closure reports.
Requirements:
- BE degree in Computer Science or equivalent from premier institutes such as IITs, NITs, or IIMs.
- Minimum 3 years of experience in data management.
- Experience with data modeling, data warehousing, and building ETL pipelines
- Hands-on experience with Spark SQL and Spark Streaming.
- Hands-on experience with Airflow or Luigi
- Comfortable working with Python and shell scripting.
- Good understanding of the Hadoop ecosystem.
- Experience with data warehouses such as Redshift and databases such as Postgres and MariaDB.
- Ability to implement webhooks if required.
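For candidates gauging the ETL-pipeline requirement above, the day-to-day work can be sketched in plain Python. This is a minimal, illustrative extract-transform-load flow; the function names, field names, and in-memory "warehouse" are hypothetical and not part of this job description:

```python
# Illustrative ETL sketch only -- all names and data here are hypothetical.

def extract(rows):
    """Simulate reading raw records from a source system."""
    return [dict(r) for r in rows]

def transform(records):
    """Normalize fields and drop records missing a primary key."""
    cleaned = []
    for r in records:
        if r.get("user_id") is None:
            continue  # skip incomplete records
        cleaned.append({
            "user_id": r["user_id"],
            "amount": round(float(r.get("amount", 0)), 2),
        })
    return cleaned

def load(records, warehouse):
    """Append transformed records to an in-memory 'warehouse' table."""
    warehouse.extend(records)
    return len(records)

warehouse = []
raw = [{"user_id": 1, "amount": "19.999"}, {"user_id": None, "amount": "5"}]
loaded = load(transform(extract(raw)), warehouse)
```

Production pipelines would replace the in-memory list with a warehouse such as Redshift and schedule the steps with an orchestrator such as Airflow, but the extract/transform/load structure is the same.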
Immediate joining preferred; candidates must be available within 15 to 30 days.
Posted in: Data Analytics & BI
Functional Area: Data Engineering
Job Code: 1530813