Posted on: 11/11/2025
Description:
Key Responsibilities:
- Lead the design, development, and optimization of data pipelines and data warehouse solutions on Snowflake.
- Snowflake: table types, storage integrations, internal and external stages, streams, tasks, views, materialized views, Time Travel, Fail-safe, micro-partitions, warehouses, RBAC, the COPY command, file formats (CSV, JSON, and XML), Snowpipe, and stored procedures (SQL, JavaScript, or Python); see the SQL sketch after this list.
- Develop and maintain dbt models for data transformation, testing, and documentation.
- dbt: creating, running, and building models; scheduling; running dependent models; macros; Jinja templates (optional).
- Collaborate with cross-functional teams including data architects, analysts, and business stakeholders to deliver robust data solutions.
- Ensure high standards of data quality, governance, and security across pipelines and platforms.
- Leverage Airflow (or other orchestration tools) to schedule and monitor workflows.
- Integrate data from multiple sources using tools such as Fivetran, Qlik Replicate, or IDMC (at least one).
- Provide technical leadership, mentoring, and guidance to junior engineers in the team.
- Optimize costs, performance, and scalability of cloud-based data environments.
- Contribute to architectural decisions, code reviews, and best practices.
- CI/CD: Bitbucket or GitHub (at least one).
- Data modeling: entity-based models (sub-dimensions, dimensions, facts) and Data Vault (hubs, links, satellites); see the Data Vault sketch after this list.
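The Snowflake items above cover ingestion and change capture. Below is a minimal SQL sketch of how a stage, the COPY command, a stream, and a scheduled task typically fit together; every object name (my_s3_integration, raw.s3_stage, raw.orders_raw, transform_wh, staging.orders) is a hypothetical placeholder, not part of this role's environment.

```sql
-- External stage over cloud storage, using a pre-created storage integration
-- (all names here are illustrative assumptions).
CREATE STAGE IF NOT EXISTS raw.s3_stage
  URL = 's3://example-bucket/orders/'
  STORAGE_INTEGRATION = my_s3_integration
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);

-- Bulk-load staged files into a raw table.
COPY INTO raw.orders_raw
  FROM @raw.s3_stage
  ON_ERROR = 'CONTINUE';

-- Stream captures changes on the raw table; a task consumes them on a schedule.
CREATE STREAM IF NOT EXISTS raw.orders_stream ON TABLE raw.orders_raw;

CREATE TASK IF NOT EXISTS raw.load_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE = '15 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('raw.orders_stream')
AS
  INSERT INTO staging.orders
  SELECT order_id, customer_id, order_date, amount  -- assumed columns
  FROM raw.orders_stream;

-- Tasks are created suspended; resume to activate the schedule.
ALTER TASK raw.load_orders_task RESUME;
```

For continuous loading, Snowpipe replaces the scheduled COPY with a pipe (CREATE PIPE ... AS COPY INTO ...) driven by cloud storage event notifications.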
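For the Data Vault half of the data-model item, the sketch below shows one plausible hub / link / satellite layout as Snowflake DDL; the table and column names (hub_customer, link_customer_order, sat_customer, the hash-key columns) are illustrative assumptions, not a prescribed design.

```sql
-- Hub: one row per business key.
CREATE TABLE IF NOT EXISTS dv.hub_customer (
  customer_hk    BINARY(32)    NOT NULL,  -- hash of the business key
  customer_id    VARCHAR       NOT NULL,  -- business key
  load_dts       TIMESTAMP_NTZ NOT NULL,
  record_source  VARCHAR       NOT NULL
);

-- Link: relationship between hubs, keyed by the combined hash.
CREATE TABLE IF NOT EXISTS dv.link_customer_order (
  customer_order_hk BINARY(32)    NOT NULL,
  customer_hk       BINARY(32)    NOT NULL,
  order_hk          BINARY(32)    NOT NULL,
  load_dts          TIMESTAMP_NTZ NOT NULL,
  record_source     VARCHAR       NOT NULL
);

-- Satellite: descriptive attributes and their history for a hub.
CREATE TABLE IF NOT EXISTS dv.sat_customer (
  customer_hk    BINARY(32)    NOT NULL,
  load_dts       TIMESTAMP_NTZ NOT NULL,
  hash_diff      BINARY(32)    NOT NULL,  -- change-detection hash of the payload
  customer_name  VARCHAR,
  email          VARCHAR,
  record_source  VARCHAR       NOT NULL
);
```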
Required Skills & Experience:
- Strong hands-on expertise in Snowflake (data modeling, performance tuning, query optimization, security, and cost management).
- Proficiency in dbt (core concepts, macros, testing, documentation, and deployment); see the dbt model sketch after this list.
- Solid programming skills in Python (for data processing, automation, and integrations).
- Experience with workflow orchestration tools such as Apache Airflow.
- Exposure to ELT/ETL tools.
- Strong understanding of modern data warehouse architectures, data governance, and cloud-native environments.
- Excellent problem-solving, communication, and leadership skills.
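For the dbt proficiency above, here is a minimal sketch of an incremental model; the model name fct_orders, the upstream stg_orders reference, and the columns are assumptions made only for illustration.

```sql
-- models/marts/fct_orders.sql (hypothetical model)
-- ref() wires the dependency graph; the Jinja is_incremental() block limits
-- subsequent runs to rows newer than what the table already holds.
{{ config(materialized='incremental', unique_key='order_id') }}

select
    o.order_id,
    o.customer_id,
    o.order_date,
    o.amount
from {{ ref('stg_orders') }} as o

{% if is_incremental() %}
where o.order_date > (select max(order_date) from {{ this }})
{% endif %}
```

dbt run builds the model, dbt test runs the tests declared in the accompanying schema.yml, and dbt docs generate produces the project documentation.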
Good to Have:
- Hands-on experience with Databricks (PySpark, Delta Lake, MLflow).
- Exposure to other cloud platforms (AWS, Azure, or GCP).
- Experience in building CI/CD pipelines for data workflows.
Posted By: LumenData (Global Talent Acquisition at LUMENDATA SOLUTIONS INDIA PRIVATE LIMITED)
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1572698