Posted on: 19/11/2025
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and architectures using AWS services.
- Implement ETL/ELT processes using AWS Glue, Lambda, and Step Functions (see the Glue job sketch after this list).
- Work with structured and unstructured data across S3, Redshift, and other AWS data services.
- Develop data integration workflows to collect, process, and store data efficiently.
- Optimize performance and cost of data pipelines.
- Monitor and troubleshoot data pipeline failures using CloudWatch and related tools (a minimal alarm sketch follows the Required Skills list below).
- Collaborate with data analysts, data scientists, and other stakeholders to ensure data availability and quality.
- Apply best practices for security and governance of data assets on AWS.
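In practice, the ETL/ELT responsibility above usually centers on Glue job scripts. Below is a minimal PySpark sketch of such a job, assuming a Glue Data Catalog database `raw_db` with an `orders` table and an S3 output bucket; all names and the column mappings are placeholders for illustration, not details from this posting.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw records catalogued by a Glue crawler
# ("raw_db" / "orders" are placeholder names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Transform: rename/cast columns into the curated schema.
curated = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
        ("order_ts", "string", "order_ts", "timestamp"),
    ],
)

# Load: write Parquet to the curated S3 zone (placeholder bucket path).
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```

A job like this would typically be triggered on a schedule or from a Step Functions state machine, with Lambda handling lightweight pre- and post-processing steps.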
Required Skills:
2+ years of experience with AWS services such as:
- AWS Glue
- AWS Lambda
- Amazon S3
- Amazon EC2
- Amazon Redshift
- CloudWatch
- Experience in building and maintaining ETL pipelines.
- Knowledge of data lake and data warehouse architecture.
- Familiarity with DevOps tools and CI/CD pipelines is a plus.
- Good understanding of data governance and security best practices on AWS.
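One concrete form the CloudWatch skill above often takes is a metric alarm on a pipeline Lambda's Errors metric. The boto3 sketch below creates such an alarm; the function name `etl-ingest`, the alarm name, and the SNS topic ARN are all placeholders, not details from this posting.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the ingestion Lambda reports any error within a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="etl-ingest-lambda-errors",          # placeholder alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "etl-ingest"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",               # no data means no invocations, not a failure
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:data-alerts"],  # placeholder SNS topic
)
```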
Preferred Qualifications:
- Experience with other cloud platforms (Azure, GCP) is a plus.
- Exposure to tools such as Apache Airflow, Kafka, or Snowflake is an added advantage (see the Airflow DAG sketch below).
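Apache Airflow, mentioned above, orchestrates pipelines as DAGs of dependent tasks. Below is a minimal sketch using Airflow 2.4+ syntax; the DAG id and task bodies are placeholders that, in a real pipeline, would wrap Glue, Lambda, or Redshift steps.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extracting")    # placeholder: e.g. pull raw files from S3


def transform():
    print("transforming")  # placeholder: e.g. trigger a Glue job


def load():
    print("loading")       # placeholder: e.g. COPY into Redshift


with DAG(
    dag_id="daily_orders_etl",      # placeholder DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3                  # run the tasks in sequence
```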
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1577213