Posted on: 04/12/2025
Description:
Responsibilities:
- Architect and implement scalable reporting data models in Snowflake to support ThoughtSpot and Power BI consumption.
- Design and maintain robust semantic models, data sharing mechanisms, and APIs for downstream consumers.
- Partner with the Data Platform team to align ingestion, conformance, and consumption patterns across ADLS, Iceberg, Airflow, and IICS.
- Define and implement data governance, quality validation, and security standards across reporting pipelines.
- Build performant, reusable data transformations and APIs for reporting and dashboards.
- Integrate with Snowflake Data Sharing, webhooks, and REST/GraphQL endpoints to deliver customer-facing insights (a sketch of such an endpoint follows this list).
- Lead proofs of concept for new reporting frameworks and data sharing capabilities.
- Ensure reliability, accuracy, and auditability across all data presented to end users.
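By way of illustration, a customer-facing insight endpoint of the kind described above might look like the minimal sketch below. FastAPI, snowflake-connector-python, and the REGIONAL_SALES view are assumptions chosen for the example, not a description of the team's actual stack:

# Illustrative sketch only: a minimal REST endpoint exposing reporting data.
# FastAPI and the ANALYTICS.REPORTING.REGIONAL_SALES view are hypothetical.
from fastapi import FastAPI, HTTPException
import snowflake.connector

app = FastAPI()

def fetch_regional_sales(region: str) -> list[dict]:
    # Credentials would come from a secrets manager in practice.
    conn = snowflake.connector.connect(
        account="example_account", user="example_user", password="...",
        warehouse="REPORTING_WH", database="ANALYTICS", schema="REPORTING",
    )
    try:
        cur = conn.cursor(snowflake.connector.DictCursor)
        # Parameterised query: the connector's default pyformat binding.
        cur.execute(
            "SELECT order_month, total_sales "
            "FROM regional_sales WHERE region = %s ORDER BY order_month",
            (region,),
        )
        return cur.fetchall()
    finally:
        conn.close()

@app.get("/insights/sales/{region}")
def regional_sales(region: str):
    rows = fetch_regional_sales(region)
    if not rows:
        raise HTTPException(status_code=404, detail="Unknown region")
    return {"region": region, "monthly_sales": rows}

Serving from a dedicated reporting warehouse and binding parameters rather than interpolating them are the kind of performance and security habits the role calls for.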
Requirements:
- 8+ years of experience in data engineering, data products, or BI systems.
- Snowflake - modelling, performance optimisation, RBAC, and data sharing.
- SQL - advanced queries, window functions, analytical optimisations.
- ThoughtSpot / Power BI - data modelling, embedding, semantic layer design.
- Python / PySpark - data transformations and automation (an illustrative sketch follows this list).
- Airflow, Informatica IICS, or equivalent for orchestration.
- Azure ecosystem - ADLS, ADF, and related services.
- Strong understanding of data modelling and lakehouse patterns (star schema, Iceberg table format, medallion architecture).
- Familiarity with Kafka / Debezium for CDC-based ingestion.
- Experience building or consuming REST / GraphQL APIs for data products.
- Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
- A Master's or PhD in Computer Science, Data Science, Machine Learning, Artificial Intelligence, or Statistics is a strong plus.
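As a rough indication of the expected skill level, the PySpark sketch below mirrors the SQL window-function pattern (SUM(...) OVER (PARTITION BY ... ORDER BY ...) and LAG(...)); the orders data and column names are hypothetical:

# Illustrative sketch of a window-function transformation in PySpark;
# the orders DataFrame and its columns are invented for the example.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window-demo").getOrCreate()

orders = spark.createDataFrame(
    [("acme", "2025-01", 120.0), ("acme", "2025-02", 90.0),
     ("globex", "2025-01", 300.0), ("globex", "2025-02", 410.0)],
    ["customer", "order_month", "amount"],
)

# Per-customer window ordered by month: running total plus
# month-over-month delta, the same logic as SQL window functions.
w = Window.partitionBy("customer").orderBy("order_month")

report = (
    orders
    .withColumn("running_total", F.sum("amount").over(w))
    .withColumn("prev_amount", F.lag("amount").over(w))
    .withColumn("mom_delta", F.col("amount") - F.col("prev_amount"))
)

report.show()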
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1584653