Posted on: 05/08/2025
About the Role:
RoleTwit Services is looking for a highly skilled and experienced ETL Developer to join our growing data team in Pune.
This role is critical in ensuring high-quality, reliable, and optimized data flow across various systems and platforms.
The ideal candidate will have a proven track record of building and maintaining ETL pipelines, ensuring data accuracy and consistency, and integrating data from various sources into enterprise data warehouses or data lakes.
A passion for clean code, robust testing, and performance tuning is essential.
We are looking for someone who brings a quality-first mindset, a deep understanding of data integration best practices, and a strong commitment to data integrity and continuous improvement.
Key Responsibilities:
- Design, develop, and maintain efficient, scalable, and reliable ETL processes and workflows to support enterprise data management and analytics
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements and build appropriate solutions
- Ensure data quality, consistency, completeness, and integrity across the pipeline and warehouse systems
- Implement data validation, cleansing, transformation, and reconciliation processes
- Monitor and troubleshoot ETL processes to resolve issues quickly and effectively
- Automate data ingestion and transformation tasks using industry-standard tools and scripting languages
- Contribute to performance tuning of long-running jobs and optimize query performance within ETL workflows
- Develop test automation strategies for data pipelines, ensuring all edge cases and data anomalies are captured
- Maintain documentation for ETL processes, data models, and data flow diagrams
- Participate in code reviews, design sessions, and agile sprint ceremonies
Required Skills and Experience:
- Minimum 5 years of hands-on experience as an ETL Developer in enterprise environments
- Strong expertise in ETL tools such as Informatica, Talend, Apache NiFi, SSIS, or equivalent platforms
- Excellent knowledge of SQL, stored procedures, and performance tuning on RDBMS like Oracle, PostgreSQL, SQL Server, or MySQL
- Hands-on experience in integrating large datasets from multiple sources including APIs, flat files, databases, and cloud storage
- Proficiency in at least one programming or scripting language, such as Python, shell scripting, or Java, for data manipulation
- Strong understanding of data warehousing concepts, data lakes, dimensional modeling, and data governance principles
- Familiarity with test automation for data pipelines and unit testing frameworks
- Experience with performance tuning, error handling, and data quality frameworks
- Understanding of cloud platforms like AWS, Azure, or GCP is preferred
- Strong problem-solving skills, attention to detail, and ability to work in cross-functional teams
Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field
- Experience working in Agile/Scrum project environments
- Certifications in ETL tools or cloud data engineering platforms are a plus
- Exposure to big data frameworks and stream processing tools such as Apache Kafka, Spark, or Hadoop is an advantage
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1524076