Posted on: 11/09/2025
About the Role:
This role requires a deep understanding of Python (Pandas/PySpark), advanced SQL, and ETL workflows, with a proven track record of delivering scalable, high-performance data solutions.
The ideal candidate will also have experience with cloud platforms and modern data warehouse ecosystems.
Key Responsibilities:
ETL Development & Optimization:
- Design, develop, and optimize complex ETL workflows and data pipelines using Python and PySpark.
- Implement efficient data transformations, cleansing, and enrichment processes to ensure high data quality and integrity.
- Optimize ETL jobs for scalability, performance, and cost-efficiency in distributed environments.
Workflow Orchestration & Automation:
- Ensure job reliability with automated monitoring, alerting, and error-handling mechanisms.
- Build reusable components and frameworks for ETL workflow management.
Data Engineering & Cloud Integration:
- Implement best practices for cloud data pipelines, including storage optimization, security, and access management.
- Work with data warehouses (Snowflake, Redshift, BigQuery) to design efficient data models and query performance improvements.
Collaboration & Stakeholder Management:
- Partner with stakeholders to translate business requirements into robust technical solutions.
- Provide technical guidance on data architecture, pipeline best practices, and optimization strategies.
- Collaborate with cross-functional teams to ensure alignment on data requirements and delivery timelines.
Required Skills & Experience:
Core Technical Skills:
- Expert-level proficiency in Python (Pandas, PySpark).
- Strong knowledge of SQL (query optimization, stored procedures, analytical queries).
- Hands-on experience with ETL design, development, and performance tuning.
Tools & Platforms:
- Cloud platforms: AWS / GCP / Azure.
- Data Warehouses: Snowflake, Redshift, BigQuery.
Additional Competencies:
- Experience with CI/CD pipelines and version control (Git).
- Strong problem-solving, debugging, and performance tuning skills.
- Ability to work independently and in a collaborative, agile environment.
Preferred Qualifications:
- Knowledge of data modeling techniques and best practices.
- Exposure to DevOps practices for data engineering.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1544719