Posted on: 29/04/2026
Job Description:
We are seeking a highly skilled Data Engineer to join our team and help build scalable data pipelines, integrate machine learning workflows, and optimize data platforms for actionable insights. This role plays a critical part in enabling data-driven solutions for sustainability initiatives and innovation.
Responsibilities:
- Design, develop, and maintain scalable data pipelines using SQL, Python, and PySpark on Databricks for efficient data processing
- Build and optimize ETL processes and data ingestion frameworks
- Work on modern data architecture using Lakehouse principles
- Handle end-to-end project ownership including requirement understanding, design, and delivery
- Collaborate with cross-functional teams and stakeholders
- Stay updated with the latest advancements in Databricks and the data engineering ecosystem
- Ensure strong data governance practices, including data quality, compliance, and cataloging, using tools such as Unity Catalog or Hive Metastore
Primary Skills:
- Strong hands-on experience in Python and advanced SQL
- Mandatory experience in Databricks, including:
i. Unity Catalog
ii. Data ingestion & ETL pipelines
iii. Performance optimization
- Exposure to the latest Databricks features, such as:
i. LakeFlow
ii. dbt integration
iii. AI/BI Genie
- Strong communication skills, with the ability to lead modules independently
- A clear understanding of, and vision for, AI-driven data solutions
- Experience using Terraform for infrastructure as code.
- Expertise in data warehousing concepts and solutions.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1632098