Posted on: 11/12/2025
Key Responsibilities:
- Design, build, and optimize scalable data pipelines and data models.
- Manage and administer Snowflake environments, including virtual warehouses, secure shares, clustering, tagging, masking, dynamic tables, and performance tuning.
- Ensure cost efficiency in Snowflake workloads through monitoring and optimization techniques.
- Implement and maintain data governance, data quality checks, and metadata standards.
- Collaborate with cross-functional teams to understand data needs and deliver robust solutions.
- Develop automated scripts and transformations using Python for data processing.
- Leverage AI-assisted tools (GitHub Copilot, Claude Code) to improve design, coding, documentation, and reviews.
- Monitor system health, performance, and data reliability using industry tools and best practices.
- Contribute to architectural discussions, documentation, and design specifications.
- Use Git effectively for branching strategies, code reviews, and version control.
- Support production environments and troubleshoot performance, quality, and pipeline issues.
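To illustrate the kind of automated Python transformation work described above, here is a minimal sketch of a data-quality-checked cleaning step. The `Record` shape and the specific checks are illustrative assumptions, not part of the actual role's codebase.

```python
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    amount: float

def clean(records):
    """Drop records failing basic quality checks, then normalize the rest.

    Quality rules (assumed for illustration): user_id must be non-empty
    and amount non-negative. Normalization lowercases/strips the id and
    rounds the amount to two decimal places.
    """
    valid = [r for r in records if r.user_id.strip() and r.amount >= 0]
    return [Record(r.user_id.strip().lower(), round(r.amount, 2)) for r in valid]
```

In a real pipeline this step would typically sit between extraction and load, with rejected records routed to a quarantine table for review rather than silently dropped.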
Requirements:
- 6+ years of experience as a Data Engineer, Software Developer, or Administrator focused on data modeling, governance, and platform administration.
- 3+ years of hands-on Snowflake experience including virtual warehouse management, clustering, secure shares, cost control, tagging, masking, dynamic tables, and performance optimization.
- Expert-level SQL capability for analytics and database engineering.
- Intermediate Python skills for data pipelines and automation.
- Experience using AI-assisted code tools (GitHub Copilot, Claude Code).
- Strong experience with MSSQL, PostgreSQL, and both OLAP and OLTP systems.
- Familiarity with ETL/ELT tools, schema design for high-volume systems, and data health monitoring.
- Experience implementing data governance principles and standards.
- Solid understanding of cloud design principles (SaaS, PaaS, IaaS).
- Experience with structured file formats: XML, JSON, Parquet, CSV, and fixed-length.
- Hands-on experience with Git including branching, merging, and git-flow.
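Of the file formats listed above, fixed-length is the least standardized; as a rough sketch, parsing one usually means slicing each line against a column layout. The layout below (field names and widths) is a made-up example, not a format used by this employer.

```python
def parse_fixed_length(line, layout):
    """Slice one fixed-length record into named fields.

    layout is a list of (field_name, width) pairs; fields are read
    left to right and stripped of padding whitespace.
    """
    fields, pos = {}, 0
    for name, width in layout:
        fields[name] = line[pos:pos + width].strip()
        pos += width
    return fields

# Hypothetical 6-character record: 4-char id, 2-char quantity.
record = parse_fixed_length("A1  42", [("id", 4), ("qty", 2)])
```

JSON, CSV, and XML have standard-library parsers (`json`, `csv`, `xml.etree`), and Parquet is typically read with a columnar library such as pyarrow or pandas.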
Good to Have:
- Experience with large-scale cloud data platforms.
- Knowledge of infrastructure automation.
- Exposure to DevOps-driven workflows.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1587740