Posted on: 11/09/2025
AWS Engineer (Experience : 5-8 Years)
Are you passionate about building scalable, high-performance data solutions on AWS? We are looking for a seasoned AWS Engineer with 5-8 years of experience to join a dynamic team and take ownership of cloud-based data architectures. This role offers an exciting opportunity to work hands-on with AWS services, define infrastructure-as-code, and collaborate with senior engineers to drive innovation in data processing and management.
Role Overview :
As an AWS Engineer, you will be responsible for developing and optimizing data processing pipelines on AWS that handle structured and semi-structured data at scale. The role requires expertise in AWS Glue, Lambda, Athena, and serverless architecture patterns. You will work closely with architects and data engineers to build resilient, efficient, and cost-effective solutions that support both batch and streaming data workloads. Your contributions will directly impact the organization's data-driven capabilities, and you'll have the opportunity to mentor peers and influence architectural decisions.
Key Responsibilities :
- Design, develop, and optimize AWS Glue, Lambda, and Athena workloads for data processing and transformation
- Implement infrastructure automation using AWS CDK to define and deploy cloud resources efficiently and reliably
- Work with lakehouse architectures on AWS, Apache Iceberg, and related frameworks to manage and organize large volumes of structured and semi-structured data
- Build serverless data architectures for both batch and real-time data processing pipelines
- Collaborate with senior engineers, architects, and stakeholders to enhance scalability, performance, and fault tolerance
- Ensure best practices for cloud governance, security, and cost optimization are followed
- Perform troubleshooting, monitoring, and tuning of data workflows for optimal throughput and minimal latency
Must-Have Skills :
- Strong experience with AWS cloud services: Glue, Lambda, Athena, lakehouse architectures, and Apache Iceberg
- Expertise in AWS CDK for Infrastructure-as-Code deployments
- Programming proficiency in Python, PySpark, Spark SQL, TypeScript, Scala, or Java
- Hands-on experience with AWS data lakes and data warehousing solutions
- Solid understanding of data pipelines, data modeling, and batch/stream processing patterns
- Ability to write optimized, reusable, and maintainable code in cloud environments
- Strong problem-solving skills and collaborative mindset to work with cross-functional teams
Nice-to-Have Skills :
- Experience with real-time data streaming technologies such as Amazon Kinesis, Kinesis Data Firehose, and Apache Kafka
- Practical knowledge of Apache Iceberg for data lake management and schema evolution
- Familiarity with NoSQL databases like DynamoDB for fast, scalable access patterns
- Exposure to monitoring tools, logging frameworks, and performance tuning on AWS
- Experience with DevOps practices and CI/CD pipelines tailored for data workloads
Why Join Us :
- Work on cutting-edge cloud-native architectures in a fast-paced environment
- Collaborate with experts and shape the data strategy of the organization
- Opportunities to learn, grow, and lead within a supportive technical community
- Work remotely with flexible schedules that respect work-life balance
- Build scalable solutions that impact millions of users and data transactions
If you are excited to take on challenging cloud projects and want to be part of an innovative team driving AWS-powered data solutions, we want to hear from you!
Apply now and share your resume with us to embark on this rewarding journey!
Posted in : DevOps / SRE
Functional Area : DevOps / Cloud
Job Code : 1544730