Posted on: 09/10/2025
Responsibilities:
- Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake (a minimal model sketch follows this list).
- Design layered data models (e.g., staging, intermediate, mart layers / medallion architecture) aligned with dbt best practices.
- Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake.
- Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring.
- Apply advanced dbt capabilities including macros, packages, custom tests, sources, exposures, and documentation using dbt docs.
- Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines.
- Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies.
- Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets.
- Mentor junior engineers, lead architectural/code reviews, and help establish reusable frameworks and standards.
- Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting setup.
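By way of illustration, a minimal dbt staging model of the kind these pipelines are built from might look as follows; the source, model, and column names are hypothetical:

    -- models/staging/stg_customers.sql
    -- Staging layer: rename and type raw columns, nothing more.
    {{ config(materialized='view') }}

    with source as (
        select * from {{ source('raw', 'customers') }}
    ),

    renamed as (
        select
            id as customer_id,
            name as customer_name,
            created_at::timestamp_ntz as created_at
        from source
    )

    select * from renamed

In a test-driven setup, this model would be paired with unique and not_null tests on customer_id declared in its schema.yml.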
Required Qualifications:
- 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.
Technical Skills:
Cloud Data Warehouse & Transformation Stack:
- Expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management.
- Experience in dbt development: modular model design, macros, tests, documentation, and version control using Git.
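A few representative Snowflake statements for the optimization work described above; the warehouse, schema, and table names are placeholders:

    -- Right-size a transformation warehouse and let it suspend quickly.
    alter warehouse transform_wh set warehouse_size = 'SMALL', auto_suspend = 60;

    -- Add a clustering key to a large fact table to improve pruning.
    alter table analytics.fct_orders cluster by (order_date);

    -- Monitor credit consumption per warehouse over the last seven days.
    select warehouse_name, sum(credits_used) as credits
    from snowflake.account_usage.warehouse_metering_history
    where start_time >= dateadd('day', -7, current_timestamp())
    group by warehouse_name
    order by credits desc;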
Orchestration and Integration:
- Proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory.
- Comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and APIs.
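For example, loading semi-structured files from S3 into Snowflake typically runs through an external stage and COPY INTO; the bucket, stage, integration, and table names below are illustrative:

    -- External stage over an S3 bucket (credentials come from a storage integration).
    create stage if not exists raw.events_stage
        url = 's3://example-bucket/events/'
        storage_integration = s3_int;

    -- Land JSON payloads in a VARIANT column for downstream dbt models to parse.
    create table if not exists raw.events_raw (payload variant);

    copy into raw.events_raw
        from @raw.events_stage
        file_format = (type = 'json');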
Data Modelling and Architecture:
- Dimensional modelling (star/snowflake schemas) and slowly changing dimensions (a snapshot sketch follows this list).
- Knowledge of modern data warehousing principles.
- Experience implementing Medallion Architecture (Bronze/Silver/Gold layers).
- Experience working with Parquet, JSON, CSV, or other data formats.
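As a sketch of the slowly-changing-dimensions point above, Type 2 history is commonly captured in dbt with a snapshot; the source and key names are hypothetical:

    -- snapshots/customers_snapshot.sql
    {% snapshot customers_snapshot %}

    {{
        config(
            target_schema='snapshots',
            unique_key='customer_id',
            strategy='timestamp',
            updated_at='updated_at'
        )
    }}

    select * from {{ source('raw', 'customers') }}

    {% endsnapshot %}

dbt then maintains the dbt_valid_from and dbt_valid_to columns automatically, preserving the full change history of each customer_id.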
Programming Languages:
- Python: for data transformation, notebook development, and automation.
- SQL: strong command of querying and performance tuning.
- Jinja (nice to have): exposure to Jinja templating for advanced dbt development.
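The Jinja exposure referred to here usually means dbt macros, i.e. reusable SQL fragments; the macro below is a made-up example:

    -- macros/cents_to_dollars.sql
    -- Centralizes a cast that would otherwise be repeated across models.
    {% macro cents_to_dollars(column_name, scale=2) %}
        ({{ column_name }} / 100)::number(16, {{ scale }})
    {% endmacro %}

A model would then call {{ cents_to_dollars('amount_cents') }} in its select list instead of repeating the cast.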
Data Engineering & Analytical Skills:
- ETL/ELT pipeline design and optimization.
- Exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have).
- Exposure to data quality and validation frameworks.
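On the validation side, a custom generic test in dbt is one common pattern; the test name and rule are illustrative:

    -- tests/generic/positive_value.sql
    -- Fails if any row of the tested column is zero or negative.
    {% test positive_value(model, column_name) %}

    select *
    from {{ model }}
    where {{ column_name }} <= 0

    {% endtest %}

Once defined, the test can be attached to any column in a schema.yml file, alongside dbt's built-in unique, not_null, and accepted_values tests.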
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1558045