Posted on: 17/04/2026
Role: Data Warehouse & Engineering
Role Overview:
The Analytics Engineer owns the data warehouse transformation layer: the curated, documented, and tested tables that every analyst, data scientist, and ML engineer relies on.
Using dbt as the primary transformation tool on top of Redshift or Snowflake, you turn raw event streams, application databases, and third-party feeds into clean, well-modelled, and trustworthy data products.
You are the person who makes reliable data access easy for the entire organisation.
Key Responsibilities:
- Design and maintain the dbt project: staging models (raw → cleaned), intermediate models (business logic), and mart/semantic-layer models (fact/dim tables, aggregated metrics); a staging sketch follows this list.
- Implement and enforce data quality testing (dbt tests, Great Expectations, or Elementary): uniqueness, non-null, referential integrity, and custom statistical range tests; a sample singular test follows this list.
- Build and maintain the semantic layer / metrics layer (dbt Metrics, Cube, or Looker LookML) so that metric definitions are single-sourced and consistent across all consumers.
- Optimise warehouse performance: table materialisation strategies (table vs. incremental vs. view), clustering/sort keys, query profiling, and cost monitoring on Redshift or Snowflake; an incremental example follows this list.
- Collaborate with Data Engineers on ingestion pipelines: define data contracts, document expected schemas, and flag upstream issues early.
- Partner with Data Scientists and ML Engineers to model feature tables and analytical marts that serve offline training, portfolio analysis, and experimentation.
- Maintain comprehensive dbt documentation (model descriptions, column-level lineage, freshness SLAs) so the data catalogue is always accurate.
- Participate in data governance: PII classification, column-level access policies, and data lineage tooling (dbt lineage graph + Monte Carlo or Atlan).
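
For illustration only, a minimal sketch of the staging layer described above, assuming a hypothetical raw source app.orders and a model named stg_orders:

-- models/staging/stg_orders.sql (hypothetical): rename, cast, and lightly
-- clean the raw source; no business logic at this layer
select
    id                              as order_id,
    user_id,
    cast(amount as numeric(18, 2))  as order_amount,
    created_at                      as ordered_at
from {{ source('app', 'orders') }}
where id is not null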
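
A dbt singular test is plain SQL that returns the rows violating an expectation, and the test fails if any rows come back. A hedged sketch, assuming the stg_orders model above and an illustrative upper bound on order size:

-- tests/assert_order_amount_in_range.sql (hypothetical singular test):
-- fails if any order has a negative or implausibly large amount
select
    order_id,
    order_amount
from {{ ref('stg_orders') }}
where order_amount < 0
   or order_amount > 100000  -- assumed business ceiling, for illustration only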
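
And a sketch of the incremental materialisation strategy mentioned above, again with hypothetical names; the is_incremental() branch limits each run to recent data instead of rebuilding the full table:

-- models/marts/fct_daily_orders.sql (hypothetical incremental model)
{{ config(
    materialized='incremental',
    unique_key='order_date'
) }}

select
    date_trunc('day', ordered_at) as order_date,
    count(*)                      as order_count,
    sum(order_amount)             as total_order_amount
from {{ ref('stg_orders') }}
{% if is_incremental() %}
  -- on incremental runs, reprocess only a trailing window of days
  where ordered_at >= (select dateadd(day, -3, max(order_date)) from {{ this }})
{% endif %}
group by 1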
Required Skills & Qualifications:
- 3 to 7 years in an analytics engineering, BI engineering, or senior data analyst role with hands-on dbt experience.
- Expert SQL: window functions, recursive CTEs, semi-structured JSON handling, and performance tuning on columnar warehouses (see the example after this list).
- Hands-on dbt experience (dbt Core or dbt Cloud): writing models, macros, tests, snapshots, and seeds; managing environments and deployment jobs.
- Deep experience with Redshift or Snowflake: Redshift distribution styles / sort keys, Snowflake clustering / dynamic tables / Snowpark, or both.
- Understanding of dimensional modelling principles (Kimball): fact/dimension design, slowly changing dimensions, bridge tables (a snapshot sketch follows this list).
- Experience with data quality testing frameworks and data observability concepts.
- Git proficiency: version-controlled dbt projects, PR-based workflow, branch deployment environments.
- Working knowledge of a BI tool (Looker, Tableau, or Metabase) to understand how warehouse models are consumed.
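
As an illustration of the SQL expectations above, a hypothetical Snowflake query combining a window function with semi-structured JSON traversal (raw_events and its payload column are assumed names):

-- latest event per user, pulling a field out of a VARIANT payload
select
    user_id,
    payload:device:os::string as device_os,  -- JSON path + cast (Snowflake syntax)
    event_ts
from raw_events
qualify row_number() over (
    partition by user_id
    order by event_ts desc
) = 1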
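
And a sketch of how a Type 2 slowly changing dimension is typically captured in dbt via a snapshot (hypothetical users source; dbt itself adds the dbt_valid_from / dbt_valid_to validity columns):

-- snapshots/users_snapshot.sql (hypothetical)
{% snapshot users_snapshot %}

{{ config(
    target_schema='snapshots',
    unique_key='user_id',
    strategy='timestamp',
    updated_at='updated_at'
) }}

select
    user_id,
    email,
    plan_tier,
    updated_at
from {{ source('app', 'users') }}

{% endsnapshot %}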
Nice to Have:
- Experience with a metrics/semantic layer tool (dbt Semantic Layer / MetricFlow, Cube, or AtScale).
- Familiarity with streaming/near-real-time ingestion patterns and how they interact with warehouse materialisation.
- Knowledge of data catalogue and lineage tools (Atlan, Alation, DataHub, or Monte Carlo).
- Exposure to Python-based data transformation (dbt Python models, Snowpark, or pandas on Spark via Glue).
Tech Stack:
- dbt / Airflow / Python
- Snowflake / Redshift
- SQL
- Looker / BI
- Git
- Data tests
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1629399