Posted on: 26/10/2025
Responsibilities:
- Design and implement data pipelines using Databricks and AWS services (e.g., S3, Glue, Lambda, Redshift).
- Architect and manage the Medallion architecture (Bronze, Silver, Gold layers) within Databricks (a minimal PySpark sketch follows this list).
- Implement and maintain Unity Catalog and Delta Tables, and ensure robust data governance and lineage.
- Develop and optimise SQL queries for high performance across large datasets.
- Design and maintain data models supporting analytical and reporting needs.
- Implement Slowly Changing Dimensions (SCD) for historical data tracking.
- Apply normalisation and denormalisation techniques for efficient data storage and retrieval.
- Identify and apply optimisation techniques for query performance and resource utilisation.
- Collaborate with data scientists, analysts, and business teams to deliver high-quality data solutions.
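The responsibilities above centre on building a Medallion pipeline on Delta tables. Below is a minimal PySpark sketch of the Bronze to Silver to Gold flow, assuming hypothetical S3 paths, pre-existing bronze/silver/gold schemas, and an e-commerce orders feed; it illustrates the pattern rather than prescribing an implementation.

# Minimal Medallion sketch; paths, schemas, and column names are
# illustrative assumptions, and the bronze/silver/gold schemas are
# assumed to already exist in the workspace catalog.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw orders from S3 as-is, tagged with ingestion time.
bronze = (
    spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: clean and conform: dedupe, fix types, drop bad rows.
silver = (
    spark.read.table("bronze.orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: aggregate for reporting, e.g. daily revenue per customer.
gold = (
    spark.read.table("silver.orders")
    .groupBy(F.to_date("order_ts").alias("order_date"), "customer_id")
    .agg(F.sum("amount").alias("daily_revenue"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")

For the query-optimisation bullets, the Gold table could then be compacted and clustered on the usual filter column, e.g. OPTIMIZE gold.daily_revenue ZORDER BY (customer_id) in Databricks SQL.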
Requirements:
- Strong expertise in Databricks and AWS Data Services (S3, Glue, Redshift, Lambda, IAM).
- Excellent command of SQL and data modelling best practices.
- In-depth understanding of Medallion architecture (Bronze, Silver, Gold).
- Experience with Unity Catalog, Delta Lake, and Delta Tables.
- Proficiency in Python or PySpark for data transformation and ETL.
- Experience with SCDs, data normalisation/denormalisation, and query optimisation (see the Type 2 merge sketch after this list).
- Prior experience on an e-commerce project is required.
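For the SCD requirement, here is a sketch of a Type 2 merge using Delta Lake's Python merge API; the table names (staging.customers, silver.customers) and the change-detection column (address) are assumptions for illustration, not part of the posting.

# SCD Type 2 sketch (hypothetical tables and columns): expire the
# current version of changed rows, then append the new versions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

updates = spark.read.table("staging.customers")  # hypothetical daily feed
target = DeltaTable.forName(spark, "silver.customers")

# Step 1: close out current rows whose tracked attribute changed.
(
    target.alias("t")
    .merge(updates.alias("u"),
           "t.customer_id = u.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> u.address",  # simplified change test
        set={"is_current": "false",
             "valid_to": "current_timestamp()"})
    .execute()
)

# Step 2: update rows with no surviving current version (new customers,
# or customers just expired above) are appended as the current version.
current = spark.read.table("silver.customers").filter("is_current = true")
new_rows = (
    updates.join(current, "customer_id", "left_anti")
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
)
new_rows.write.format("delta").mode("append").saveAsTable("silver.customers")

Two passes (merge, then anti-join append) keep the sketch short; a production job would typically fold both into a single MERGE over a staged union of inserts and updates.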
Good to Have:
- Familiarity with BI tools (e.g., Power BI, Tableau) and data visualisation best practices.
- Exposure to CI/CD pipelines, Terraform, or DevOps for data engineering workflows.
Functional Area: Data Engineering
Job Code: 1564788