Posted on: 13/10/2025
Description:
Job Summary:
We are seeking a highly skilled Data Engineer to design, build, and optimize scalable data pipelines using Snowflake, AWS services, and Apache Spark. You will be responsible for real-time and batch data ingestion, transformation, and orchestration across cloud platforms.
Notice Period: 0 to 15 days.
Key Responsibilities:
- Develop and maintain data pipelines using AWS Glue, Lambda, EMR, and Snowflake.
- Implement real-time ingestion using Snowpipe and Streams for CDC (Change Data Capture).
- Write efficient PySpark or Scala Spark jobs for large-scale data processing.
- Automate workflows and orchestrate jobs using AWS Step Functions, Airflow, or similar tools.
- Optimize Snowflake queries and warehouse performance.
- Collaborate with Data Scientists, Analysts, and DevOps teams to deliver reliable data solutions.
- Monitor and troubleshoot data pipeline failures and latency issues.
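To illustrate the CDC responsibility above: Snowpipe ingests raw change records and a Snowflake Stream exposes them as INSERT/UPDATE/DELETE rows to be merged into a target table. The following is a minimal, pure-Python sketch of that merge pattern only; the table layout, record shape, and `apply_cdc` function are illustrative assumptions, not part of Snowflake's API.

```python
# Hypothetical sketch of the change-data-capture (CDC) merge pattern that
# Snowflake Streams + Tasks automate: a stream of change records
# (INSERT / UPDATE / DELETE) is applied to a target table keyed by row id.
# All names and shapes here are illustrative.

def apply_cdc(target: dict, changes: list) -> dict:
    """Apply CDC change records to a target table (a dict keyed by row id)."""
    for change in changes:
        action, row = change["action"], change["row"]
        if action == "DELETE":
            target.pop(row["id"], None)   # drop the row if it exists
        else:
            target[row["id"]] = row       # INSERT and UPDATE both upsert
    return target

if __name__ == "__main__":
    table = {1: {"id": 1, "name": "alice"}}
    stream = [
        {"action": "INSERT", "row": {"id": 2, "name": "bob"}},
        {"action": "UPDATE", "row": {"id": 1, "name": "alicia"}},
        {"action": "DELETE", "row": {"id": 2}},
    ]
    print(apply_cdc(table, stream))  # → {1: {'id': 1, 'name': 'alicia'}}
```

In Snowflake itself this logic is typically a single `MERGE` statement driven by the stream's metadata columns, scheduled via a Task.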
Required Skills:
- Strong experience with Snowflake architecture, SQL, and performance tuning.
- Hands-on expertise in AWS Glue, Lambda, S3, EMR, and CloudWatch.
- Proficiency in Apache Spark (PySpark or Scala).
- Familiarity with Snowpipe, Streams, and Tasks in Snowflake.
- Knowledge of CI/CD tools and infrastructure-as-code (Terraform, CloudFormation).
- Experience with version control (Git) and agile development practices.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1560056