Posted on: 31/10/2025
Description :
The ideal candidate will have strong expertise in Snowflake, the Hadoop ecosystem, PySpark, and SQL, and will play a key role in enabling data-driven decision-making across the organization.
Key Responsibilities :
- Design, develop, and optimize robust data pipelines using PySpark and SQL (a minimal sketch follows this list).
- Implement and manage data warehousing solutions using Snowflake.
- Work with large-scale data processing frameworks within the Hadoop ecosystem.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and governance across all data platforms.
- Monitor and troubleshoot data pipeline performance and reliability.
- Automate data workflows and implement best practices for data engineering.
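To make the PySpark and Snowflake responsibilities above concrete, here is a minimal sketch of the kind of pipeline described: it reads raw data, applies a basic data-quality filter and a daily aggregate, and loads the result into Snowflake via the Snowflake Spark connector. This is an illustrative assumption, not part of the posting; all paths, table names, and connection values are placeholders, and the connector JAR is assumed to be on the Spark classpath.

```python
# Minimal PySpark -> Snowflake pipeline sketch (all names are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest raw events (path and schema are hypothetical).
orders = spark.read.parquet("s3://raw-zone/orders/")

# Transform: drop records failing a basic quality check,
# then aggregate revenue per day.
daily_revenue = (
    orders
    .filter(F.col("order_id").isNotNull())
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Load into Snowflake via the Snowflake Spark connector
# (every sfOptions value below is a placeholder).
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(
    daily_revenue.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_REVENUE")
    .mode("overwrite")
    .save()
)
```

In practice a job like this would also log row counts and validation results to support the monitoring and data-quality duties listed above.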
Required Qualifications :
- 5+ years of experience in data engineering or related roles.
- Strong skills in data engineering, Snowflake, data pipelines, and Airflow (a minimal DAG sketch follows this list).
- Proven ability to design, develop, and maintain robust data pipelines and ETL workflows.
- Hands-on experience with Azure Data Services (Data Factory, Blob Storage, Synapse, etc.).
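For the Airflow qualification above, the sketch below shows a minimal daily DAG that could orchestrate a pipeline like the one sketched earlier. It is an assumption for illustration, not the employer's setup: the DAG id, task, and callable are hypothetical, and the `schedule` parameter assumes Airflow 2.4 or later.

```python
# Minimal Airflow DAG sketch for a daily ETL run (names are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_pipeline():
    # Placeholder for triggering the PySpark job sketched above,
    # e.g. via spark-submit or a managed Spark service.
    print("pipeline triggered")


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_pipeline", python_callable=run_pipeline)
```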
Posted By
Devendra Karlekar
Associate Recruitment Consultant at EXL Services.com (I) Pvt. Ltd.
Last Active: 3 Dec 2025
Posted in
Data Analytics & BI
Functional Area
Data Science
Job Code
1568051