Posted on: 24/09/2025
Requirements:
- Strong proficiency in writing complex, optimized SQL queries (especially for Amazon Redshift).
- Experience with Apache Spark (preferably on AWS EMR) for big data processing.
- Proven experience using AWS Glue for ETL pipelines (working with RDS, S3, etc.).
- Strong understanding of data ingestion techniques from diverse sources (files, APIs, relational DBs).
- Solid hands-on experience with Amazon Redshift: data modeling, optimization, and query tuning.
- Familiarity with Amazon QuickSight for building dashboards and visual analytics.
- Proficient in Python or PySpark for scripting and data transformation (a short PySpark sketch follows this list).
- Understanding of data pipeline orchestration, version control, and basic DevOps practices.
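
To make the core skill set concrete, here is a minimal PySpark sketch of the kind of job this role involves: reading raw JSON from S3, deduplicating, aggregating, and writing curated Parquet back to S3 for a downstream Redshift load. The bucket paths and column names (order_id, order_ts, amount) are hypothetical, not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue-etl").getOrCreate()

# Ingest raw order events from S3 (hypothetical bucket and schema).
orders = (
    spark.read.json("s3://example-raw-bucket/orders/")
    .dropDuplicates(["order_id"])                 # drop replayed events
    .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate to one row per day: total revenue and order count.
daily_revenue = (
    orders.groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count("order_id").alias("orders"),
    )
)

# Write partitioned Parquet for a downstream Redshift COPY or Spectrum query.
(
    daily_revenue.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/daily_revenue/")
)
```

The same script could run on EMR or, with minor changes (a GlueContext and job bookmarks), as an AWS Glue job.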
Good-to-have Skills:
- Knowledge of other AWS services (Lambda, Step Functions, Athena, CloudWatch).
- Experience with workflow orchestration tools such as Apache Airflow (a minimal DAG sketch follows this list).
- Exposure to real-time streaming tools (Kafka, Kinesis, etc.).
- Familiarity with data security, compliance, and governance best practices.
- Experience with infrastructure as code (e.g., Terraform, CloudFormation).
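
For the Airflow point above, a minimal sketch of how such a pipeline might be orchestrated. The DAG id, task names, and callables are hypothetical, and the schedule parameter assumes Airflow 2.4+ (older versions use schedule_interval).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_spark_etl():
    # Placeholder: in practice this might submit the PySpark job above
    # to EMR or trigger an AWS Glue job run.
    print("ETL step triggered")

def refresh_dashboards():
    # Placeholder: e.g., kick off a QuickSight dataset refresh via boto3.
    print("Dashboard refresh triggered")

with DAG(
    dag_id="nightly_orders_pipeline",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="run_spark_etl", python_callable=run_spark_etl)
    refresh = PythonOperator(task_id="refresh_dashboards", python_callable=refresh_dashboards)

    etl >> refresh  # dashboards refresh only after the ETL step succeeds
```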
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1551500