Posted on: 11/04/2026
Company Overview:
We are a leading provider of cutting-edge data solutions, empowering businesses across a range of sectors to unlock the full potential of their data. We specialize in building and managing robust data pipelines and analytics platforms, enabling data-driven decision-making for our clients. We operate across industries such as Finance, Healthcare, and E-commerce, processing and analyzing large volumes of data to deliver actionable insights.
Role Overview:
As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable, reliable data pipelines on the AWS cloud platform. You will collaborate closely with data scientists, analysts, and other engineers to understand data requirements and deliver high-quality data solutions. Your work will directly enable our clients to gain valuable insights from their data, improve business processes, and make informed strategic decisions.
Key Responsibilities:
- Design and implement robust ETL processes to ingest, transform, and load data from various sources into the data warehouse for efficient data analysis.
- Develop and maintain data models that support business requirements and ensure data quality and consistency for stakeholders.
- Build and optimize data pipelines using AWS services such as S3, Glue, Lambda, EMR, and Redshift, ensuring high performance and scalability for data scientists and analysts.
- Automate data processing tasks and monitoring to improve efficiency and reduce manual intervention for the data engineering team.
- Troubleshoot and resolve data-related issues to minimize downtime and ensure data availability for business users.
- Collaborate with cross-functional teams to understand data requirements and deliver effective data solutions for various business needs.
Required Skillset:
- Demonstrated ability to design and implement data warehousing solutions using AWS services like Redshift, S3, and Glue.
- Proven expertise in developing ETL pipelines using Python and related libraries.
- Strong understanding of data modeling principles and experience in designing relational and dimensional data models.
- Proficiency in writing complex SQL queries for data extraction, transformation, and loading.
- Hands-on experience with Big Data technologies such as Spark and Hadoop.
- Solid understanding of Linux operating systems and command-line tools.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Bachelor's degree in Computer Science, Engineering, or a related field.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1627739