Posted on: 10/12/2025
Description:
Urgent opening for AWS Data Engineers (Remote)
Experience: 6+ years
Work timings: 1:00 p.m. to 10:00 p.m. (Mon-Fri)
Contract duration: 3 months (can be extended)
Mandatory:
- AWS Data Engineering and AWS services (Glue, S3, Redshift, EMR, Lambda, Step Functions, Kinesis, Athena, and IAM)
- Python, PySpark, and Apache Spark; data modelling; on-prem/cloud data warehousing; DevOps
Tech Stack:
- Cloud Platform: AWS Data Engineering
- AWS Services: Glue, S3, Redshift, EMR, Lambda, Step Functions, Kinesis, Athena, IAM
- Programming: Python, PySpark, Apache Spark
- Data Management: Data Modelling, On-Prem/Cloud Data Warehouse
- DevOps: CI/CD, Automation, Deployment, Monitoring
Job Description:
We are seeking an experienced AWS Data Engineer with 6+ years of experience and a strong understanding of large, complex, multi-dimensional datasets. The ideal candidate will design, develop, and maintain scalable data pipelines and transformation frameworks using AWS-native tools and modern data engineering technologies.
The role requires hands-on experience with AWS data engineering services and strong data modelling expertise. Exposure to Veeva API integration is a plus (not mandatory).
Responsibilities:
- Design, develop, and optimize data ingestion, transformation, and storage pipelines on AWS.
- Manage and process large-scale structured, semi-structured, and unstructured datasets efficiently.
- Build and maintain ETL/ELT workflows using AWS-native tools such as Glue, Lambda, EMR, and Step Functions (see the PySpark sketch after this list).
- Design and implement scalable data architectures leveraging Python, PySpark, and Apache Spark.
- Develop and maintain data models and ensure alignment with business and analytical requirements.
- Work closely with stakeholders, data scientists, and business analysts to ensure data availability, reliability, and quality.
- Manage on-premises and cloud data warehouse databases and optimize their performance.
- Stay updated with emerging trends and technologies in data engineering, analytics, and cloud computing.
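For illustration, here is a minimal sketch of the kind of ETL pipeline described above: a PySpark job that ingests raw JSON from S3, applies basic cleansing, and writes partitioned Parquet to a curated zone. Bucket names, paths, and column names are hypothetical placeholders, and this is one possible shape under those assumptions, not a prescribed implementation.

```python
# Minimal PySpark ETL sketch: S3 JSON -> cleanse -> partitioned Parquet.
# All bucket names, paths, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Ingest: read semi-structured JSON landed in S3 (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/orders/2025/10/")

# Transform: deduplicate, normalize timestamps, drop invalid rows.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to the curated zone, where it can be
# queried via Athena or loaded into Redshift.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))
```

The same script could run as an AWS Glue job or an EMR step, with Step Functions handling orchestration.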
Requirements:
- Mandatory: Proven hands-on experience with the AWS data engineering stack, including but not limited to:
- AWS Glue, S3, Redshift, EMR, Lambda, Step Functions, Kinesis, Athena, and IAM.
- Proficiency in Python, PySpark, and Apache Spark for data transformation and processing.
- Strong understanding of data modelling principles and ability to design and maintain conceptual, logical, and physical data models.
- Experience working with one or more modern data platforms such as Snowflake, Dataiku, or Alteryx (good to have, not mandatory).
- Familiarity with on-prem/cloud data warehouse systems and migration strategies.
- Solid understanding of ETL design patterns, data governance, and best practices in data quality and security.
- Knowledge of DevOps for data engineering: CI/CD pipelines and Infrastructure as Code (IaC) using Terraform/CloudFormation (good to have, not mandatory; see the automation sketch after this list).
- Excellent problem-solving, analytical, and communication skills.
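As a hedged illustration of the automation side (using boto3 scripting rather than the Terraform/CloudFormation IaC mentioned above), the sketch below triggers a deployed Glue job and polls it to a terminal state, the sort of step a CI/CD pipeline might run after deployment. The job name is a hypothetical placeholder.

```python
# Sketch: trigger a deployed Glue job from a CI/CD step and wait for it
# to finish. The job name "orders-etl-sketch" is a hypothetical placeholder.
import time

import boto3

glue = boto3.client("glue")

# Start a run of the deployed Glue job.
run = glue.start_job_run(JobName="orders-etl-sketch")
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    job_run = glue.get_job_run(JobName="orders-etl-sketch", RunId=run_id)
    state = job_run["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
        break
    time.sleep(30)

print(f"Glue job run {run_id} finished with state {state}")
```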
Desirable Candidate:
- Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Experience with cloud data engineering tools/components/technologies such as AWS Glue, EMR, S3 & EC2.
- A continual learning mindset to keep up with emerging trends in the data science field.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1587217