hirist

AWS Data Engineer

Nazztec Private Limited
8 - 10 Years
Multiple Locations

Posted on: 20/03/2026

Job Description

Company Overview:

Nazztec Private Limited is a leading technology solutions provider specializing in data engineering, cloud computing, and analytics. We empower businesses across various sectors, including finance, healthcare, and e-commerce, to unlock the value of their data through innovative and scalable solutions. Our expertise lies in building robust data pipelines, implementing advanced analytics platforms, and providing actionable insights that drive business growth.

Role Overview:

As an AWS Data Engineer at Nazztec, you will be instrumental in designing, developing, and maintaining our cloud-based data infrastructure. You will collaborate closely with data scientists, analysts, and other engineers to build scalable and efficient data pipelines that support our clients' analytical needs. Your work will directly impact the ability of our clients to make data-driven decisions and gain a competitive edge in their respective industries.

Key Responsibilities:

- Design and implement scalable and reliable data pipelines using AWS services such as EMR, Glue, Lambda, and S3 to ingest, process, and store large datasets.

- Develop and maintain data models and schemas that support various analytical use cases, ensuring data quality and consistency.

- Optimize data processing workflows for performance and cost-efficiency, leveraging best practices for cloud-based data engineering.

- Collaborate with data scientists and analysts to understand their data requirements and provide them with the necessary data infrastructure and tools.

- Automate data pipeline deployments and monitoring using infrastructure-as-code principles and DevOps practices.

- Troubleshoot and resolve data pipeline issues, ensuring data availability and accuracy for downstream applications.

- Implement data governance policies and procedures to ensure data security and compliance with regulatory requirements.
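A pipeline step of the kind described above can be sketched as a small S3-triggered AWS Lambda handler: cleanse incoming CSV records, then write the curated output back to S3. This is an illustrative sketch, not Nazztec's actual pipeline; the field names (`order_id`, `amount`), the `curated/` prefix, and the `transform_orders` helper are hypothetical.

```python
import csv
import io
import json


def transform_orders(raw_csv: str) -> list[dict]:
    """Parse raw CSV order records, drop malformed rows, and
    normalise amounts to floats -- a typical cleansing step in an
    ingestion pipeline. Schema is a hypothetical example."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        try:
            cleaned.append({
                "order_id": row["order_id"],
                "amount": round(float(row["amount"]), 2),
            })
        except (KeyError, TypeError, ValueError):
            continue  # skip rows that fail validation
    return cleaned


def handler(event, context):
    """Hypothetical S3-triggered Lambda entry point: read the
    uploaded object, cleanse it, and write JSON lines to a
    curated prefix. Bucket and key come from the S3 event."""
    import boto3  # provided by the AWS Lambda runtime
    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    cleaned = transform_orders(body)
    s3.put_object(
        Bucket=bucket,
        Key=f"curated/{key}.jsonl",
        Body="\n".join(json.dumps(r) for r in cleaned),
    )
    return {"rows": len(cleaned)}
```

For larger datasets the same cleansing logic would typically run as a Spark job on EMR or a Glue job rather than in Lambda; the Lambda form is shown only because it keeps the sketch self-contained.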

Required Skillset:

- Demonstrated ability to design and implement data pipelines using Spark and PySpark.

- Proven expertise in working with AWS services such as EMR, Glue, Lambda, and S3.

- Strong understanding of data modeling, data warehousing, and ETL concepts.

- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.

- Excellent problem-solving and communication skills, with the ability to collaborate effectively with cross-functional teams.

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

- Ability to work in a fast-paced, agile environment and adapt to changing priorities.

- 8-10 years of experience in Data Engineering.
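The infrastructure-as-code skills listed above can be illustrated with a minimal CloudFormation template assembled in plain Python: a single versioned S3 bucket for a raw ingestion zone. The bucket name, logical ID `RawDataBucket`, and the `raw_zone_template` function are illustrative placeholders, not part of any real deployment.

```python
import json


def raw_zone_template(bucket_name: str) -> str:
    """Return a minimal CloudFormation template (as JSON) that
    provisions one versioned S3 bucket for a raw data zone."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Raw-zone S3 bucket for pipeline ingestion",
        "Resources": {
            "RawDataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    # Versioning protects raw landings from
                    # accidental overwrites by upstream producers.
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
        "Outputs": {
            "BucketArn": {
                "Value": {"Fn::GetAtt": ["RawDataBucket", "Arn"]}
            }
        },
    }
    return json.dumps(template, indent=2)
```

In practice the same resource would more often be declared directly in Terraform HCL or a CloudFormation YAML file; generating the template from code is shown here only to keep the example self-contained and testable.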
