
Python & ETL Engineer - Data Pipeline

Servhigh Global Services Private Limited
Multiple Locations
5 - 8 Years
4.8 (2+ Reviews)

Posted on: 31/08/2025

Job Description

Role Overview :

We are seeking a highly skilled Python & ETL Engineer with strong expertise in building data pipelines, automation workflows, and scalable ETL solutions. The ideal candidate will have hands-on experience with Python-based ETL frameworks, data ingestion pipelines, and cloud-native deployments (preferably on AWS).

The role involves designing and developing end-to-end data solutions for ingestion, transformation, validation, and reporting, enabling analytics and business intelligence across large datasets.


Key Responsibilities :

- Design, develop, and maintain ETL pipelines for structured and unstructured data.

- Build Python-based automation scripts for data ingestion, transformation, and validation (see the sketch after this list).

- Develop and optimize data pipelines on cloud platforms (AWS preferred) using S3, Lambda, ECS, EC2, DynamoDB, and other services.

- Work with relational databases (MySQL, SQL Server, Oracle PL/SQL) for data extraction, transformation, and loading.

- Implement rule-based data validation and ensure high-quality, reliable data flows.

- Collaborate with cross-functional teams to deliver data-driven solutions, dashboards, and reporting tools.

- Support deployment through CI/CD pipelines (Docker, AWS CodeBuild/ECS) for seamless production rollout.

- Ensure scalability, performance, and security of data workflows.
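
The sketch below is illustrative only, not this team's actual codebase: a minimal Python ETL flow of the kind described above, assuming pandas, boto3, and SQLAlchemy are available. The bucket, keys, column names, connection string, and table are hypothetical placeholders.

```python
# Minimal illustrative ETL sketch: extract a CSV from S3, apply a simple
# transformation, run rule-based validation, and load the result into MySQL.
# All names (bucket, key, columns, DSN, table) are hypothetical placeholders.
import io

import boto3
import pandas as pd
from sqlalchemy import create_engine


def extract(bucket: str, key: str) -> pd.DataFrame:
    """Pull a CSV object from S3 into a DataFrame."""
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
    return pd.read_csv(io.BytesIO(obj["Body"].read()))


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names, drop exact duplicates, stamp the load time."""
    df = df.rename(columns=str.lower).drop_duplicates()
    df["loaded_at"] = pd.Timestamp.now(tz="UTC")
    return df


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Rule-based validation: keep only rows with an id and a non-negative amount."""
    return df[df["id"].notna() & (df["amount"] >= 0)]


def load(df: pd.DataFrame, table: str) -> None:
    """Append the validated frame into a MySQL table."""
    engine = create_engine("mysql+pymysql://user:password@host/dbname")  # placeholder DSN
    df.to_sql(table, engine, if_exists="append", index=False)


if __name__ == "__main__":
    frame = extract("example-bucket", "raw/orders.csv")
    load(validate(transform(frame)), "orders_clean")
```

In a production pipeline each stage would typically carry logging, retries, and CloudWatch metrics, but the extract → transform → validate → load shape stays the same.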

Required Skills & Experience :

- 5+ years of experience in Python development with a focus on data processing and ETL pipelines.

- Strong knowledge of Python data libraries (Pandas, NumPy); Scikit-learn and TensorFlow preferred.

- Proven experience in ETL development (Python-based or cloud-native pipelines).

- Proficiency in SQL (MySQL, Oracle PL/SQL, SQL Server) for data operations.

- Hands-on experience with AWS services (S3, Lambda, ECS, EC2, DynamoDB, CloudWatch).

- Exposure to automation, data validation, and anomaly detection (a brief sketch follows this list).

- Strong analytical and problem-solving skills.

- Ability to work in Agile environments and deliver sprint-based features.
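
As a concrete illustration of rule-based validation with simple anomaly flagging, here is a short pandas sketch; the column name, z-score cutoff, and sample values are invented for the example.

```python
# Hypothetical anomaly flagging: mark rows whose value sits more than
# z_cutoff standard deviations from the column mean. Column names and
# sample data are placeholders, not details from this role.
import pandas as pd


def flag_anomalies(df: pd.DataFrame, column: str, z_cutoff: float = 3.0) -> pd.DataFrame:
    """Add an is_anomaly column based on a z-score rule."""
    z = (df[column] - df[column].mean()) / df[column].std(ddof=0)
    return df.assign(is_anomaly=z.abs() > z_cutoff)


if __name__ == "__main__":
    data = pd.DataFrame({"amount": [10, 12, 11, 9, 10, 11, 12, 10, 9, 11, 10, 12, 500]})
    print(flag_anomalies(data, "amount"))  # only the 500 row is flagged
```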


Good to Have :

- Experience with traditional ETL tools (Informatica, Talend, SSIS).

- Exposure to data visualization/dashboarding tools.

- Knowledge of containerization (Docker, Kubernetes).


Why Join Us?

- Opportunity to work on cloud-first, modern ETL pipelines.

- Exposure to end-to-end data engineering workflows, from ingestion to reporting.

- Collaborative and innovative work culture with room for creativity.

