Posted on: 29/08/2025
Role Overview:
We are seeking a highly skilled AWS Data Engineer to design, develop, and maintain scalable data pipelines and cloud-based data solutions.
The ideal candidate has hands-on expertise in AWS cloud services, data lake/warehouse architectures, ETL/ELT pipelines, and big data frameworks to support analytics and business insights.
What you do:
- Design, build, and optimize data pipelines for structured/unstructured data.
- Develop and manage ETL/ELT workflows using AWS services (Glue, Lambda, EMR, Kinesis).
- Work with data storage solutions such as S3, Redshift, DynamoDB, RDS.
- Implement streaming & batch data processing with Spark, Kinesis, Kafka.
- Ensure data quality, governance, and security standards across systems.
- Collaborate with Data Scientists, Analysts, and stakeholders to enable self-service analytics and ML pipelines.
- Monitor, troubleshoot, and improve pipeline performance.
- Apply CI/CD and Infrastructure as Code practices for deployment (Terraform, CloudFormation, Jenkins).
What we expect:
- Strong programming skills in Python, PySpark, or Scala.
- Hands-on experience with SageMaker.
- Hands-on experience with AWS services:
  - Compute & Storage: S3, EC2, Lambda, EMR
  - Data Integration: Glue, Kinesis, Step Functions
  - Databases/Warehousing: Redshift, RDS, DynamoDB
- Knowledge of data modeling, schema design, data lakes, and data warehouses.
- Experience with big data frameworks (Spark, Hadoop, Kafka).
- Proficiency in SQL & database optimization.
- Knowledge of CI/CD pipelines, Git, DevOps practices.
Good to Have:
- Familiarity with Databricks or ML pipelines.
- Exposure to BI tools (QuickSight, Tableau, Power BI).
- Containerization & orchestration (Docker, Kubernetes, EKS).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1537812