Posted on: 15/10/2025
Description:
- Design, build, and maintain efficient and reliable data pipelines for ingestion, transformation, and delivery.
- Optimize data processing workflows using Python, PySpark, and SQL.
- Work with cloud-native infrastructure (AWS, Terraform, CloudFormation) to deploy and manage data environments.
- Implement best practices for data validation, data quality, and incremental data loading (see the PySpark sketch after this list).
- Develop event-driven and API-based data ingestion frameworks.
- Collaborate with cross-functional teams including analysts, product owners, and data scientists to understand data needs.
- Drive performance improvements, scalability, and maintainability of data solutions.
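As a rough illustration of the incremental-loading responsibility above, here is a minimal watermark-based PySpark sketch; the bucket paths, table layout, and column names are hypothetical placeholders, not part of this role's actual stack:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("incremental-load-sketch").getOrCreate()

    # Highest timestamp already present in the target (the "watermark").
    target = spark.read.parquet("s3://example-bucket/curated/orders/")
    watermark = target.agg(F.max("updated_at")).first()[0]

    # Pull only source rows newer than the watermark.
    source = spark.read.parquet("s3://example-bucket/raw/orders/")
    delta = source.filter(F.col("updated_at") > F.lit(watermark))

    # Basic data-quality gate: drop rows missing the primary key.
    valid = delta.filter(F.col("order_id").isNotNull())

    valid.write.mode("append").parquet("s3://example-bucket/curated/orders/")

In practice the watermark would usually be persisted in a control table rather than recomputed from the target on every run.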
Key Skills & Expertise:
Cloud & Infrastructure:
- Proven hands-on experience with AWS services
- Experience using Snowflake for cloud data warehousing
- Infrastructure as Code: Terraform/CloudFormation (a boto3 sketch follows this list)
- Familiarity with CI/CD pipelines (Jenkins, Bitbucket, etc.)
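Since CloudFormation and CI/CD pipelines appear above, a minimal sketch of driving a CloudFormation deployment from Python with boto3; the stack name, region, and template file are hypothetical:

    import boto3

    cf = boto3.client("cloudformation", region_name="us-east-1")

    with open("data_pipeline_stack.yaml") as f:
        template_body = f.read()

    # Create the stack; changes to an existing stack go through update_stack.
    cf.create_stack(
        StackName="example-data-pipeline",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

    # Block until the stack reaches CREATE_COMPLETE (raises on failure).
    cf.get_waiter("stack_create_complete").wait(StackName="example-data-pipeline")

In a CI/CD setup (Jenkins, Bitbucket Pipelines) a script like this would run as a deploy stage, with the template kept in version control.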
Programming & Scripting:
- Strong proficiency in Python and PySpark
- Excellent SQL skills for complex querying and data modeling
ETL & Data Modeling:
- Expertise in building scalable ETL/ELT pipelines
- Experience with:
- Event-driven architectures (see the sketch after this list)
- API-based data ingestion
- Archival strategies
- Incremental data loading
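For the event-driven item above, a minimal sketch of an AWS Lambda handler that reacts to S3 object-created events and stages new files for the pipeline; the bucket names and staging prefix are hypothetical:

    import urllib.parse

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # S3 event notifications deliver one or more records per invocation.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            # Copy the new object into a staging prefix for downstream processing.
            s3.copy_object(
                Bucket="example-staging-bucket",
                Key=f"incoming/{key}",
                CopySource={"Bucket": bucket, "Key": key},
            )
        return {"processed": len(event["Records"])}

The same pattern extends to API-based ingestion by swapping the S3 trigger for a scheduled poll of the source API with a persisted cursor.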
Preferred Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field
- Exposure to real-time data streaming tools (Kafka, Kinesis, etc.) is a plus
- Familiarity with data governance and compliance frameworks
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1561410