Posted on: 12/11/2025
Description:
We are seeking an experienced AWS Data Engineer to design, build, and maintain scalable data pipelines and cloud-based data platforms on Amazon Web Services (AWS). The ideal candidate will have strong expertise in data integration, data modeling, ETL development, and cloud data architecture, with a focus on performance, scalability, and security.
Key Responsibilities:
- Design and implement data ingestion, transformation, and storage pipelines using AWS services such as Glue, Lambda, EMR, Redshift, and S3.
- Develop and optimize ETL/ELT workflows to support analytics, data science, and reporting requirements.
- Collaborate with data scientists, analysts, and business teams to understand data needs and ensure reliable data delivery.
- Build and maintain data lake and data warehouse architectures on AWS.
- Work with both structured and unstructured data, ensuring high quality, consistency, and availability.
- Manage data security, governance, and compliance according to organizational standards.
- Implement data validation, quality checks, and monitoring frameworks for pipelines.
- Optimize performance and cost across storage, compute, and data processing layers.
- Leverage Infrastructure as Code (IaC) tools like Terraform or CloudFormation for environment setup and automation.
- Support DevOps and CI/CD practices for automated data pipeline deployments.
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- 5-8 years of professional experience in data engineering, ETL development, or cloud data solutions.
- Hands-on expertise in AWS data services such as AWS Glue, S3, Lambda, Redshift, EMR, Athena, Step Functions, and Kinesis.
- Strong proficiency in SQL and Python for data processing and automation.
- Solid understanding of data modeling (OLTP and OLAP), data warehousing concepts, and performance tuning.
- Experience with ETL tools (AWS Glue, Talend, Informatica, dbt, or similar).
- Familiarity with big data technologies such as Spark, Hadoop, or PySpark.
- Knowledge of version control (Git) and CI/CD pipelines for data projects.
- Strong understanding of data security, encryption, and IAM policies in AWS.
Preferred Skills:
- Experience with streaming data solutions (Kafka, Kinesis Data Streams, or Amazon MSK).
- Familiarity with modern data stack tools like dbt, Airflow, Snowflake, or Databricks.
- Exposure to MLOps or data science pipeline integration.
- Knowledge of API-based data integration and RESTful services.
- AWS certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate/Professional.
Soft Skills:
- Strong problem-solving and analytical abilities.
- Excellent communication and collaboration skills.
- Ability to work independently and deliver in agile environments.
- Detail-oriented with a focus on data quality and reliability.
Posted By
Ranjith Chandran
Delivery Manager at Lavu Tech Solutions Sdn Bhd
Last Active: N/A (this job was posted through a third-party tool).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1573352