Lead Data Engineer - AWS Platform

Helius
Hyderabad
6 - 8 Years

Posted on: 09/01/2026

Job Description

Description:


Please note that this role strictly requires working from our Hyderabad office:

WaveRock SEZ, 3rd Floor, IT Park Road Number 2, Nanakramguda, Gachibowli, Telangana 500032, India.

Key Skills:


- Strong AWS knowledge in designing, supporting, and optimizing data architecture (Amazon S3, AWS Glue, AWS RDS, AWS DMS, MWAA/Airflow, EC2, IAM).

- Hands-on experience with AWS Glue scripting (Python / PySpark) for ETL processing (see the sketch after this list).

- Good knowledge of Apache Airflow and AWS MWAA for job orchestration and monitoring.

- Experience in AWS DMS for data replication and migration use cases.

- Working knowledge of Amazon RDS and database integrations.

- Experience managing and troubleshooting EC2 instances in production environments.

- Hands-on experience with CloudFormation templates for infrastructure provisioning.

- Strong knowledge of Python, SQL, PySpark, and UNIX shell scripting.

- Intermediate SQL knowledge in Oracle and PostgreSQL.

- Working experience with Oracle PL/SQL.

- Working experience with Snowflake.

- Good understanding of ETL concepts, data pipelines, data ingestion, and transformations.

- Knowledge of cloud optimization techniques for performance, reliability, and cost.

- Ability to document processes, workflows, and operational runbooks.
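
To give a flavour of the AWS Glue scripting referenced above, here is a minimal, illustrative PySpark sketch of a Glue ETL job. The database, table, column, and bucket names (raw_db, orders, order_id, s3://example-curated-bucket) are placeholders for illustration, not details of this role.

# Minimal AWS Glue (PySpark) job sketch; all resource names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (hypothetical database/table).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Simple transformation: drop rows missing the primary key.
cleaned = source.toDF().dropna(subset=["order_id"])

# Write back to S3 as partitioned Parquet (placeholder bucket).
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()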

Required Qualifications:


- Experience as a Data Engineer: 6+ years.

- Bachelor's degree in Computer Science, Engineering, or a STEM-related field.

- 4+ years of hands-on experience working with AWS data services.

- Understanding of data warehousing, ETL concepts, and data quality checks.

- Familiarity with AWS IAM, logging, and monitoring (CloudWatch).

- Experience with CI/CD pipelines and Git is a plus.

Responsibilities:


- Design, develop, and maintain ETL/ELT pipelines using AWS Glue (PySpark / Python).

- Build and orchestrate workflows in Python using Apache Airflow on AWS MWAA (see the DAG sketch after this list).

- Manage and optimize data storage and access in Amazon S3.

- Work with AWS DMS to support data migration and replication from source systems.

- Develop and maintain CloudFormation templates for infrastructure provisioning.

- Manage and monitor EC2 instances, including performance tuning and troubleshooting.

- Write shell scripts for automation, monitoring, and operational tasks.

- Perform data ingestion, transformation, and validation from Oracle and PostgreSQL databases, including Oracle PL/SQL development.

- Work with Snowflake and Snowpipe, including integration with AWS.

- Write and optimize SQL queries for data extraction, reconciliation, and reporting.

- Monitor job executions, handle failures, and implement retry and alerting mechanisms.

- Collaborate with cross-functional teams to understand data requirements and deliver reliable data solutions.

- Ensure data security, compliance, and best practices across AWS services.
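
As a rough illustration of the orchestration, retry, and alerting responsibilities above, here is a minimal Airflow DAG sketch of the kind that could run on AWS MWAA. The DAG id, Glue job name, schedule, and alert address are hypothetical.

# Minimal Airflow DAG sketch for MWAA; all names are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

default_args = {
    # Retry and alerting behaviour of the kind described above.
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,
    "email": ["data-alerts@example.com"],
}

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Trigger the (hypothetical) Glue job and block until it finishes.
    run_glue_job = GlueJobOperator(
        task_id="run_orders_etl",
        job_name="orders-etl-job",
        wait_for_completion=True,
    )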

Good to Have:


- Experience with CI/CD pipelines for data workloads.

- Knowledge of partitioning, performance tuning, and cost optimization in AWS.

- Exposure to data governance, metadata management, or auditing frameworks.

- Prior experience in large-scale data migration or modernization projects.

Soft Skills:


- Strong problem-solving and analytical skills.

- Excellent communication and stakeholder management skills.

- Ability to collaborate effectively with cross-functional teams.

- Self-motivated, proactive, and solution-oriented.

- Willingness to learn and adapt to new tools and technologies.

