Posted on: 30/10/2025
Description:
- Design and implement scalable data pipelines using AWS services such as S3, Glue, and EMR, with PySpark for distributed processing.
- Develop and maintain robust data transformation scripts using Python and SQL.
- Optimize data storage and retrieval using AWS database services like Redshift, RDS, and DynamoDB.
- Build and manage data warehousing layers tailored to specific business use cases.
- Apply strong ETL and data modeling skills to ensure efficient data flow and structure.
- Ensure high data quality, availability, and consistency to support analytics and reporting needs.
- Work closely with data analysts and business stakeholders to understand data requirements and deliver actionable insights.
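The responsibilities above center on Python- and SQL-based transformations feeding a warehouse layer. As a minimal, self-contained sketch (using an in-memory SQLite database as a stand-in for a warehouse such as Redshift; the table and column names are invented for illustration), a transformation step might look like:

```python
import sqlite3

# Hypothetical example: derive a cleaned warehouse-layer table from raw data.
# "raw_orders" and "orders_clean" are invented names; amounts are in integer
# cents to keep the arithmetic exact.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, qty INTEGER)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 999, 2), (2, 450, 1), (3, 999, 3)],
)

# Transformation: compute an order total and filter out empty orders.
conn.execute(
    """
    CREATE TABLE orders_clean AS
    SELECT id, amount_cents * qty AS order_total_cents
    FROM raw_orders
    WHERE qty > 0
    """
)
rows = conn.execute(
    "SELECT id, order_total_cents FROM orders_clean ORDER BY id"
).fetchall()
print(rows)  # [(1, 1998), (2, 450), (3, 2997)]
```

In a production pipeline the same SELECT-into-a-derived-table pattern would typically run against Redshift or Athena rather than SQLite.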
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Hands-on experience with AWS data services such as S3, Glue Studio, Redshift, Athena, and EMR.
- Strong proficiency in SQL and Python for data processing.
- Experience with batch job scheduling and managing data dependencies.
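"Managing data dependencies" in batch scheduling usually means running jobs in topological order of the data they consume. A minimal sketch using only the Python standard library (the job names are invented; real pipelines would use an orchestrator rather than this hand-rolled graph):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each job maps to the set of jobs whose
# output it consumes (its predecessors).
deps = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_sales": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_sales"},
}

# static_order() yields jobs so that every job appears after its dependencies.
run_order = list(TopologicalSorter(deps).static_order())
print(run_order)
```

Here the extracts run first in some order, then the transform, then the warehouse load; a scheduler would execute each job as its predecessors complete.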
Preferred Skills:
- Expertise in data warehousing, ETL frameworks, and big data processing.
- Familiarity with the pharmaceutical domain is a plus.
- Experience with data lake architecture and schema evolution.
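Schema evolution in a data lake commonly means reading records written under an older schema after new fields have been added. A toy sketch of the usual default-filling approach (field names and defaults are invented for illustration):

```python
# Hypothetical current schema: "region" was added after early records were
# written, so older records lack it and get a schema-level default on read.
SCHEMA_DEFAULTS = {"region": "unknown"}

def read_record(raw: dict) -> dict:
    # Fill any fields missing from older records with schema defaults;
    # fields present in the record win over the defaults.
    return {**SCHEMA_DEFAULTS, **raw}

old = read_record({"id": 1, "amount": 500})                  # pre-"region" record
new = read_record({"id": 2, "amount": 700, "region": "EU"})  # current schema
print(old["region"], new["region"])  # unknown EU
```

Table formats such as those readable by Glue and Athena apply the same idea declaratively, resolving added or renamed columns against the table's current schema at query time.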
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1567060