Posted on: 06/03/2026
Description :
An AWS Redshift Data Engineer designs, develops, and optimizes high-performance data pipelines and warehousing solutions. Key responsibilities include ETL/ELT development, complex SQL tuning, schema modeling, managing cluster performance (provisioned or serverless), and integrating with AWS services like S3, Glue, and Kinesis to support analytics.
Key Responsibilities :
- Data Pipeline Development : Build and maintain scalable ELT/ETL pipelines to ingest, transform, and deliver data into Redshift.
- Redshift Optimization : Monitor and tune Redshift clusters, optimize SQL queries, and manage Workload Management (WLM).
- Data Modeling : Design efficient schema, distribution keys, and sort keys for optimal analytical performance.
- Architecture & Migration : Migrate, modernize, and manage Redshift environments, including transitioning to Redshift Serverless.
- Data Integration : Utilize tools like AWS Glue, S3, Kinesis, and Lambda for data ingestion and processing.
- Collaboration & Documentation : Work with analysts to meet reporting needs and produce technical documentation.
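As an illustration of the data-modeling responsibility above, here is a minimal Redshift DDL sketch; the table and column names are hypothetical:

```sql
-- Hypothetical fact table: DISTKEY co-locates rows that join on customer_id,
-- and a compound SORTKEY on event_date speeds range-restricted scans.
CREATE TABLE sales_fact (
    sale_id      BIGINT IDENTITY(0,1),
    customer_id  BIGINT        NOT NULL,
    event_date   DATE          NOT NULL,
    amount       DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
COMPOUND SORTKEY (event_date, customer_id);
```

Choosing the distribution key to match the most common join column avoids cross-node data shuffles, while the leading sort-key column should match the most common filter predicate.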
Requirements & Skills :
- Experience : Typically 5-10+ years in data engineering with deep hands-on experience in Amazon Redshift.
- Technical Skills : Advanced SQL (stored procedures, views), Python for data processing, and experience with data modeling concepts.
- AWS Services : Strong knowledge of Redshift, S3, Glue, Kinesis, and IAM.
- Performance Tuning : Expertise in optimizing complex SQL and large-scale data processing.
- Preferred Certifications : AWS Certified Data Engineer Associate or Data Analytics Specialty.
Technology Stack :
Database : Amazon Redshift (Provisioned/Serverless).
ETL/Compute : AWS Glue (PySpark), Lambda, EMR.
Storage/Streaming : Amazon S3, Kinesis Data Streams.
Languages : SQL, Python.
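To illustrate the S3-to-Redshift ingestion work in this stack, a minimal COPY sketch; the bucket path, table name, and IAM role ARN are placeholders:

```sql
-- Hypothetical bulk load of Parquet files from S3 into a Redshift table.
-- The IAM role must grant Redshift read access to the bucket.
COPY sales_fact
FROM 's3://example-bucket/sales/2026/03/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS PARQUET;
```

COPY parallelizes the load across the cluster's slices, which is why bulk loads from S3 are preferred over row-by-row INSERTs in Redshift pipelines.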
Availability : Immediate joiner; currently serving notice period until month end.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1618355