Posted on: 14/02/2026
Description:
Experience:
- 3+ years of experience working in data engineering or big data environments.
- Experience with AWS services such as EMR, Glue, and Redshift, as well as Snowflake and Databricks.
- Hands-on experience with AWS, Spark, Python, and SQL.
- Solid understanding of data modeling and data warehouse design principles.
- Experience with distributed data frameworks such as Spark (Core, SQL, Streaming) or Flink.
- Hands-on experience with Snowflake or Databricks.
- Experience setting up data warehouse, data lake, or lakehouse platforms end-to-end.
- Industry knowledge in Retail, Logistics, FSI, or Manufacturing would be an added advantage.
Roles & Responsibilities:
- Develop, maintain, and optimize ETL pipelines for batch and streaming data
- Work with data architects to implement scalable data models and warehouse structures
- Write efficient SQL and PySpark code for data transformation and analytics
- Support data quality, governance, and monitoring processes
- Collaborate with business and analytics teams to understand data requirements and ensure timely data delivery
- Participate in performance tuning and troubleshooting of data workflows
- Contribute to documentation, standards, and automation across data engineering projects
- Collaborate with cross-functional teams to ensure seamless data delivery and alignment with business goals
Key Skills (Mandatory):
- Technologies: AWS, Spark, Python, SQL, Airflow, Kafka, Snowflake or Databricks
- Frameworks: Spark Core, Spark SQL, Spark Streaming/Flink
- Soft Skills: Customer communication, team coordination, problem-solving, ownership
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1612794