Posted on: 20/04/2026
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or dbt to implement transformation logic and dimensional data models (star and snowflake schemas)
- Implement and manage Slowly Changing Dimensions (SCD Type 1, Type 2, etc.) to support historical data tracking and analytics (a minimal SCD Type 2 sketch follows this list)
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, data scientists, and business stakeholders to deliver reliable, well-modelled data products
- Apply data governance, data modelling standards, and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
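
For illustration of the SCD Type 2 pattern referenced above, a minimal BigQuery-style SQL sketch follows; every table and column name (warehouse.dim_customer, staging.stg_customer, customer_id, email) is hypothetical, and in practice this logic would typically be generated through Dataform or dbt rather than hand-written:

    -- Hypothetical SCD Type 2 merge: expire the current row when a tracked
    -- attribute changes, and insert the new version alongside new customers.
    MERGE warehouse.dim_customer d
    USING (
      -- Incoming snapshot rows, keyed for matching against current dimension rows.
      SELECT customer_id AS merge_key, customer_id, email
      FROM staging.stg_customer
      UNION ALL
      -- Changed rows duplicated with a NULL key so they fall through to the
      -- INSERT branch as the new current version.
      SELECT CAST(NULL AS STRING), s.customer_id, s.email
      FROM staging.stg_customer s
      JOIN warehouse.dim_customer cur
        ON s.customer_id = cur.customer_id
      WHERE cur.is_current AND s.email <> cur.email
    ) s
    ON d.customer_id = s.merge_key AND d.is_current
    WHEN MATCHED AND d.email <> s.email THEN
      -- Close out the old version of a changed customer.
      UPDATE SET is_current = FALSE, valid_to = CURRENT_DATE()
    WHEN NOT MATCHED THEN
      -- Insert brand-new customers and the new versions of changed ones.
      INSERT (customer_id, email, valid_from, valid_to, is_current)
      VALUES (s.customer_id, s.email, CURRENT_DATE(), DATE '9999-12-31', TRUE);

The UNION ALL in the source is the standard trick for doing both steps in one statement: matched rows get expired, and the NULL-keyed duplicates of those same rows are inserted as the new current version.
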
Requirements:
- 5-8 years of hands-on Data Engineering experience
- Strong ETL/ELT fundamentals - pipeline design, transformation logic, and end-to-end ownership
- Solid understanding of data warehousing concepts, including dimensional modelling, star schema, and snowflake schema design (see the sketch after this list)
- Hands-on experience implementing Slowly Changing Dimensions (SCD Type 1, Type 2, and/or hybrid approaches)
- Proficiency with Dataform or dbt (preferred); strong SQL is a must
- Experience with BigQuery (preferred) or equivalent cloud data warehouse (Redshift, Snowflake, Synapse)
- Cloud platform experience: GCP (preferred), AWS, or Azure - including object storage (GCS, S3, ADLS)
- Exposure to data migration projects and/or data mesh principles
- Programming skills in Python or SQL; Spark/PySpark is a plus
- Bachelor's or Master's degree in Computer Science, Engineering, or related field
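
Likewise for illustration of the dimensional modelling called out above, a minimal star-schema sketch in BigQuery-style DDL; the dataset, table, and column names are hypothetical, and since BigQuery does not enforce foreign keys the relationships are noted in comments:

    -- Hypothetical star schema: one fact table joined to two dimensions
    -- via surrogate keys.
    CREATE TABLE sales.dim_customer (
      customer_key INT64,   -- surrogate key
      customer_id  STRING,  -- natural/business key
      email        STRING,
      valid_from   DATE,    -- SCD Type 2 validity window
      valid_to     DATE,
      is_current   BOOL
    );

    CREATE TABLE sales.dim_date (
      date_key  INT64,      -- surrogate key, e.g. 20260420
      full_date DATE,
      year      INT64,
      month     INT64
    );

    CREATE TABLE sales.fct_orders (
      order_id     STRING,
      customer_key INT64,   -- references sales.dim_customer.customer_key
      date_key     INT64,   -- references sales.dim_date.date_key
      amount       NUMERIC
    );

A snowflake schema would further normalise the dimensions (for example, splitting customer geography into its own table), trading simpler joins for reduced redundancy.
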
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1629802