
AWS Data Engineer - Python/PySpark

TECHRACERS PRIVATE LIMITED
Hyderabad
3 - 5 Years
Rating: 4.3 (5+ Reviews)

Posted on: 19/11/2025

Job Description

Key Responsibilities :

- Design, develop, and maintain scalable data pipelines and architectures using AWS services.

- Implement ETL/ELT processes using AWS Glue, Lambda, and Step Functions (a brief Glue/PySpark sketch follows this list).

- Work with structured and unstructured data across S3, Redshift, and other AWS data services.

- Develop data integration workflows to collect, process, and store data efficiently.

- Optimize performance and cost of data pipelines.

- Monitor and troubleshoot data pipeline failures using CloudWatch and related tools.

- Collaborate with data analysts, data scientists, and other stakeholders to ensure data availability and quality.

- Apply best practices for security and governance of data assets on AWS.
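
As a rough illustration of the pipeline work described above, here is a minimal AWS Glue (PySpark) job sketch. The bucket paths, column names, and job parameters are illustrative placeholders, not details from this posting.

    # Minimal AWS Glue (PySpark) job sketch -- paths and column names are placeholders.
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])

    sc = SparkContext()
    glue_context = GlueContext(sc)
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw JSON events from the S3 landing zone
    raw = spark.read.json("s3://example-raw-bucket/events/")

    # Basic cleanup: drop duplicates and rows without a primary key
    clean = raw.dropDuplicates(["event_id"]).filter(raw["event_id"].isNotNull())

    # Write curated data back to S3 as partitioned Parquet
    (clean.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3://example-curated-bucket/events/"))

    job.commit()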


Required Skills :


- 3+ years of experience in Python, SQL, and PySpark.

- 2+ years of experience with AWS services such as :


- AWS Glue

- AWS Lambda

- Amazon S3

- Amazon EC2

- Amazon Redshift

- CloudWatch

- Experience in building and maintaining ETL pipelines (a Lambda-to-Glue trigger sketch follows this list).

- Knowledge of data lake and data warehouse architecture.

- Familiarity with DevOps tools and CI/CD pipelines is a plus.

- Good understanding of data governance and security best practices on AWS.
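
For context on how the listed services typically fit together, below is a hypothetical sketch of an AWS Lambda handler (Python) that starts a Glue job when a new object lands in an S3 bucket. The job name, argument key, and bucket layout are illustrative assumptions, not details from this posting.

    # Hypothetical Lambda handler: starts a Glue job run on an S3 "object created" event.
    # Job name and argument names are placeholders.
    import boto3

    glue = boto3.client("glue")

    def lambda_handler(event, context):
        # S3 event notifications carry the bucket and object key in the Records list
        s3_info = event["Records"][0]["s3"]
        bucket = s3_info["bucket"]["name"]
        key = s3_info["object"]["key"]

        # Kick off the downstream Glue ETL job, pointing it at the new object
        response = glue.start_job_run(
            JobName="example-curate-events-job",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        return {"job_run_id": response["JobRunId"]}

When such a trigger fails, the CloudWatch logs and metrics emitted by both the Lambda function and the Glue job are the usual starting point for the monitoring and troubleshooting work mentioned in the responsibilities above.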


Preferred Qualifications :


- AWS Certified Data Analytics Specialty or AWS Certified Solutions Architect.

- Experience with other cloud platforms (Azure, GCP) is a plus.

- Exposure to tools like Apache Airflow, Kafka, or Snowflake is an added advantage.

