
AWS Data Engineer - ETL/Redshift

ACME SERVICES PRIVATE LIMITED
Bangalore
5 - 8 Years
Rating: 4.5 (105+ Reviews)

Posted on: 20/08/2025

Job Description

We are seeking a skilled and experienced AWS Data Engineer to join our team.


The ideal candidate will be a hands-on expert in designing, building, and managing robust and scalable data pipelines on the Amazon Web Services (AWS) platform.


This role is crucial for our data-driven initiatives, requiring a strong understanding of cloud-based data solutions and a passion for optimizing data workflows.


You will work on complex ETL (Extract, Transform, Load) processes, ensuring data integrity, security, and performance.


Key Responsibilities:


- Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines using a range of AWS services.


- ETL Processes: Implement and manage ETL processes that extract data from various sources, transform it for analysis, and load it into data lakes or data warehouses (a minimal sketch follows this list).


- Data Workflow Optimization: Continuously monitor and optimize data workflows and processing jobs to improve performance, reduce latency, and ensure cost-effectiveness.


- Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality, clean, and accessible data.


- Cloud Security and Governance: Implement and maintain data security best practices on AWS, including access control, encryption, and compliance with data governance policies.


- Automation: Automate data-related tasks and infrastructure management using scripting and Infrastructure as Code (IaC) principles.


- Troubleshooting: Identify, diagnose, and resolve complex data-related issues and system failures.
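
To give a flavor of the ETL work described above, here is a minimal PySpark sketch of such a pipeline. The bucket paths and column names are hypothetical placeholders, not the actual sources this role would work with:

```python
# Minimal ETL sketch (hypothetical buckets/columns): extract raw CSV from S3,
# apply simple transformations, and load partitioned Parquet into a data lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: read raw CSV files landed in S3 (hypothetical path).
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: cast types, drop malformed rows, derive a partition column.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet into the curated zone of the data lake.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))
```

In practice a job like this would typically run on AWS Glue or Amazon EMR, with the AWS Glue Data Catalog tracking the resulting curated tables.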


Required Skills & Experience:


- Hands-on AWS Expertise: Strong experience with core AWS data services, including AWS Glue, Amazon S3, and AWS Lambda.


- Programming Proficiency: Expert-level skills in Python for scripting, automation, and data manipulation.


- Distributed Data Processing: Proven experience with PySpark for processing large-scale datasets.


- Data Engineering Fundamentals: Solid understanding of data warehousing, data modeling, and ETL/ELT concepts.


- SQL Knowledge: Proficient in SQL for data querying and manipulation.


- Additional AWS Services: Experience with Amazon Redshift, Amazon Athena, and AWS Step Functions is a plus.


- Security and Performance: Knowledge of AWS security best practices (e.g., IAM, KMS) and experience with performance tuning and cost optimization (see the sketch after this list).


- Problem-Solving: Excellent analytical and problem-solving skills with meticulous attention to detail.
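
To illustrate the security expectation above, here is a minimal boto3 sketch of writing an object to S3 with server-side encryption under a customer-managed KMS key. The bucket name and key alias are hypothetical:

```python
# Minimal sketch (hypothetical bucket and key alias): upload an object to S3
# encrypted with a customer-managed KMS key rather than the default SSE-S3.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-curated-bucket",       # hypothetical bucket
    Key="reports/daily_summary.json",
    Body=b'{"status": "ok"}',
    ServerSideEncryption="aws:kms",        # use SSE-KMS for this object
    SSEKMSKeyId="alias/example-data-key",  # hypothetical customer-managed key
)
```

Note that the caller's IAM role needs both s3:PutObject on the bucket and kms:GenerateDataKey on the key, which is the access-control half of the same requirement.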

