Posted on: 21/08/2025
Job Summary:
Role: AWS Data Architect Leadership
Experience: 15 to 22 Years
We are seeking a highly skilled and experienced Technical Leader for our AWS Data Engineering practice.
The ideal candidate will be responsible for defining the Data Engineering strategy for the practice, architecting scalable, enterprise-level data solutions, and driving the implementation of data projects on AWS.
This role requires a deep understanding of AWS services and data engineering best practices.
In this role, you will be responsible for establishing and enhancing the company's Data Engineering Services Practice.
You will work closely with senior stakeholders to understand business needs and deliver technical solutions.
The role is well-suited for a technically proficient individual looking to thrive in a dynamic and fast-paced environment.
Responsibilities:
Technical Leadership:
What is Required:
- Bachelor's or Master's degree in Engineering or Technology (BE/ME or BTech/MTech).
- 15+ years of hands-on technical experience in the data space.
- At least 4 end-to-end implementations of large-scale data projects.
- Experience working on projects across multiple geographic regions.
- Extensive experience with a variety of projects, including on-premises to AWS migrations, modernization, greenfield implementations, and cloud-to-cloud migrations.
- Proficiency with AWS data services such as AWS Glue, Redshift, S3, Athena, EMR, Lambda, and RDS.
- Strong understanding of AWS architecture and best practices for data engineering.
- Proficiency in managing AWS IAM roles, policies, and permissions.
- Proficiency in SQL and Python for data processing and transformation.
- Strong understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
- Experience with data integration from various sources including batch and real-time data streams.
- Familiarity with data serialization formats such as Avro, Parquet, and ORC.
- Expertise in optimizing data pipelines and query performance.
- Experience with monitoring and troubleshooting data pipelines.
- Proficiency in performance tuning and optimization of distributed computing environments.
- Experience with data governance frameworks and practices.
- Understanding of data lifecycle management and data retention policies.
- Ability to implement and manage data quality frameworks and processes.
- Hands-on experience with big data processing frameworks like Apache Spark, Hadoop, and Kafka.
- Knowledge of stream processing technologies and frameworks.
- Experience with data visualization tools such as Power BI or Tableau.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1533317