Posted on: 20/01/2026
About the Job:
Role: Senior Snowflake Data Engineer
Experience: 5+ years
Location: Noida (work from office)
Availability: Immediate joiners
We are looking for a Senior Snowflake Data Engineer with 5+ years of hands-on experience to design, build, and optimize scalable, cloud-native data platforms for BFSI (banking, financial services, and insurance) clients.
This role demands strong expertise in Snowflake, SQL, and PySpark, along with experience in building production-grade data pipelines, data models, and transformations using modern ELT/ETL frameworks.
You will work closely with architects, analysts, and business stakeholders to deliver reliable, high-performance, analytics-ready data solutions.
Snowflake & Data Warehousing:
- Design, develop, and optimize Snowflake environments, including databases, schemas, stages, and virtual warehouses.
- Build fact and dimension models (star/snowflake schemas) to support analytics, reporting, and regulatory use cases.
- Implement performance optimization techniques such as clustering keys, micro-partition pruning, query profiling, and warehouse sizing.
- Handle large-scale data volumes with cost-efficient Snowflake design (see the illustrative sketch after this list).
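For illustration, a minimal sketch of the clustering and warehouse-sizing work described above, assuming the snowflake-connector-python package; the credentials, table, and warehouse names are placeholders, not details from this posting:

```python
# Minimal sketch: clustering and warehouse sizing via snowflake-connector-python.
# All credentials and object names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",         # use a secrets manager in real pipelines
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)
cur = conn.cursor()
try:
    # Define a clustering key so rows are co-located by the columns most
    # queries filter on, which improves micro-partition pruning.
    cur.execute(
        "ALTER TABLE fact_transactions CLUSTER BY (txn_date, account_id)"
    )

    # Right-size the warehouse and suspend it quickly when idle so credit
    # spend stays proportional to the actual workload.
    cur.execute(
        "ALTER WAREHOUSE transform_wh SET "
        "WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE"
    )
finally:
    cur.close()
    conn.close()
```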
SQL & Data Transformations:
- Write and optimize complex SQL queries, including CTEs, window functions, subqueries, and incremental logic.
- Develop reusable, scalable transformation logic for business-critical datasets.
- Implement data quality checks, validations, and reconciliation logic in SQL (see the sketch after this list).
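The incremental, window-function-based transformation and reconciliation logic listed above might look like the sketch below, reusing a Snowflake cursor such as the one opened earlier. Table and column names are invented for illustration, and the query strings use the connector's pyformat parameter binding:

```python
# Sketch: dedupe late-arriving records with a window function, merge the
# incremental slice, then reconcile counts. Names are illustrative only.
DEDUP_MERGE = """
MERGE INTO dim_customer AS tgt
USING (
    WITH ranked AS (
        SELECT
            customer_id,
            customer_name,
            updated_at,
            ROW_NUMBER() OVER (
                PARTITION BY customer_id
                ORDER BY updated_at DESC
            ) AS rn
        FROM stg_customer
        WHERE updated_at > %(watermark)s   -- incremental slice only
    )
    SELECT customer_id, customer_name, updated_at
    FROM ranked
    WHERE rn = 1                           -- keep latest version per key
) AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
    customer_name = src.customer_name,
    updated_at    = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, customer_name, updated_at)
    VALUES (src.customer_id, src.customer_name, src.updated_at)
"""

# Simple reconciliation: compare distinct source keys against target rows
# touched in the same slice, and fail loudly on a mismatch.
RECON = """
SELECT
    (SELECT COUNT(DISTINCT customer_id) FROM stg_customer
      WHERE updated_at > %(watermark)s) AS src_keys,
    (SELECT COUNT(*) FROM dim_customer
      WHERE updated_at > %(watermark)s) AS tgt_rows
"""

def run_incremental(cur, watermark: str) -> None:
    cur.execute(DEDUP_MERGE, {"watermark": watermark})
    cur.execute(RECON, {"watermark": watermark})
    src_keys, tgt_rows = cur.fetchone()
    if src_keys != tgt_rows:
        raise ValueError(f"Reconciliation failed: {src_keys} != {tgt_rows}")
```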
PySpark & Data Processing:
- Build and maintain PySpark-based data pipelines for large-scale batch processing.
- Apply Spark best practices such as partitioning, caching, broadcast joins, and file-format optimization (Parquet/Delta).
- Troubleshoot and optimize Spark job performance in cloud environments (see the sketch after this list).
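A short sketch of the Spark practices named above: a broadcast join for a small dimension, explicit shuffle and output partitioning, and Parquet I/O. Paths and column names are illustrative assumptions:

```python
# Sketch: PySpark batch enrichment with broadcast join and Parquet output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("txn-enrichment")
    .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
    .getOrCreate()
)

txns = spark.read.parquet("s3://bucket/raw/transactions/")   # large fact
branches = spark.read.parquet("s3://bucket/ref/branches/")   # small dim

# Broadcast the small dimension so the join avoids a full shuffle.
enriched = txns.join(F.broadcast(branches), on="branch_id", how="left")

# Cache only because the DataFrame feeds multiple downstream actions.
enriched.cache()

daily = (
    enriched
    .groupBy("txn_date", "branch_region")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("txn_count"))
)

# Partition output files by date so downstream readers can prune.
(daily.repartition("txn_date")
      .write.mode("overwrite")
      .partitionBy("txn_date")
      .parquet("s3://bucket/curated/daily_branch_totals/"))
```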
ETL/ELT & Orchestration:
- Design and implement end-to-end ETL/ELT pipelines from ingestion to consumption layers.
- Use orchestration tools such as Airflow to schedule, monitor, and manage workflows.
- Implement incremental loads, change data capture (CDC) patterns, and failure-recovery mechanisms (see the sketch after this list).
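As a sketch of orchestration with incremental loads and basic failure recovery, the Airflow DAG below uses retries and the data-interval boundary as a watermark. It assumes Airflow 2.4+ (for the `schedule` argument); the task bodies are stubs and all names are placeholders:

```python
# Sketch: nightly incremental ELT DAG with retry-based failure recovery.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_increment(**context):
    # Pull only rows changed since the start of this run's data interval;
    # real code would query the source with this watermark.
    start = context["data_interval_start"]
    print(f"Extracting rows changed since {start}")

def load_to_snowflake(**context):
    print("Loading staged files and running MERGE into target tables")

with DAG(
    dag_id="daily_incremental_elt",
    start_date=datetime(2026, 1, 1),
    schedule="0 2 * * *",   # nightly at 02:00
    catchup=False,
    default_args={
        "retries": 3,                          # automatic failure recovery
        "retry_delay": timedelta(minutes=10),
    },
) as dag:
    extract = PythonOperator(task_id="extract_increment",
                             python_callable=extract_increment)
    load = PythonOperator(task_id="load_to_snowflake",
                          python_callable=load_to_snowflake)
    extract >> load
```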
dbt & Modern Data Stack:
- Build and maintain dbt models, tests, and documentation for transformation layers.
- Implement data lineage, modular modeling, and version-controlled transformations.
- Collaborate on analytics engineering best practices (see the sketch after this list).
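dbt models themselves are version-controlled SQL/Jinja files; as a sketch of how runs and tests might be wired into Python automation, assuming dbt-core 1.5+ (which exposes a programmatic runner) and a placeholder "marts" selector:

```python
# Sketch: invoke dbt run then dbt test programmatically (dbt-core 1.5+),
# so failing schema tests stop the pipeline before data reaches dashboards.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

for command in (["run", "--select", "marts"],
                ["test", "--select", "marts"]):
    result: dbtRunnerResult = runner.invoke(command)
    if not result.success:
        raise RuntimeError(f"dbt {command[0]} failed")
```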
Cloud & Security:
- Work on cloud platforms such as AWS, Azure, or GCP, with Snowflake as the core warehouse.
- Ensure data security, governance, and compliance aligned with BFSI standards.
- Collaborate with DevOps teams on CI/CD, monitoring, and production readiness (see the sketch after this list).
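One governance control often used for BFSI data is Snowflake dynamic data masking (an Enterprise Edition feature). The sketch below applies a masking policy from Python via an existing cursor; the role, table, and column names are invented for illustration:

```python
# Sketch: column-level PII masking applied via an existing Snowflake cursor.
# Role, table, and column names are illustrative placeholders.
MASK_POLICY = """
CREATE MASKING POLICY IF NOT EXISTS pii_mask AS (val STRING)
RETURNS STRING ->
    CASE
        WHEN CURRENT_ROLE() IN ('COMPLIANCE_ANALYST') THEN val
        ELSE '***MASKED***'
    END
"""

APPLY_POLICY = """
ALTER TABLE dim_customer
MODIFY COLUMN pan_number
SET MASKING POLICY pii_mask
"""

def apply_governance(cur) -> None:
    """Create the masking policy and attach it to a sensitive column."""
    cur.execute(MASK_POLICY)
    cur.execute(APPLY_POLICY)
```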
Required Skills & Qualifications:
Must-Have:
- 5+ years of experience as a Data Engineer in production environments.
- Strong hands-on experience with Snowflake (data modeling, performance tuning, cost optimization).
- Advanced SQL skills (complex joins, window functions, query optimization).
- Strong PySpark experience for large-scale data processing.
- Solid understanding of data warehousing concepts and ETL/ELT patterns.
- Experience with Airflow or similar orchestration tools.
- Hands-on exposure to dbt for transformations and data modeling.
Good to Have:
- Experience in BFSI / regulated domains.
- Exposure to Cloudera Hadoop, Hive, Impala, or NiFi.
- Experience with CDC, streaming, or near real-time pipelines.
- Familiarity with CI/CD pipelines, Git-based workflows, and cloud monitoring.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1603649