Posted on: 11/03/2026
Role Overview:
We are seeking a highly skilled Data Engineer to design, build, and optimize scalable data pipelines that power analytics and business decision-making. The role requires strong hands-on expertise in modern cloud data platforms, transformation frameworks, and enterprise-grade data governance practices.
Key Responsibilities:
- Design, develop, and maintain cloud-native data pipelines using Snowflake and SQL
- Implement end-to-end data ingestion (ETL/ELT) frameworks from multiple source systems
- Apply data warehousing best practices, including Slowly Changing Dimensions (SCDs)
- Build and manage transformation layers using dbt (Data Build Tool)
- Operationalize Medallion Architecture (Bronze, Silver, Gold layers)
- Ensure version control, CI/CD, and deployment governance across Dev, Stage, and Production using GitHub and pull-request workflows
- Perform SQL performance tuning, query optimization, and root-cause analysis for data issues
- Troubleshoot and resolve data quality issues, pipeline failures, and technical defects
- Conduct technical sprint demos showcasing completed deliverables to stakeholders
- Collaborate cross-functionally with analytics, platform, and business teams to deliver reliable data products
Required Skills & Experience:
- Strong hands-on experience with Snowflake
- Advanced proficiency in SQL
- Solid understanding of data warehousing concepts and dimensional modeling
- Experience with SCD implementation
- Proven expertise in ETL/ELT data acquisition pipelines
- Hands-on experience with dbt for transformations
- Working knowledge of GitHub, pull requests, and controlled deployment practices
- Experience with production support, issue resolution, and performance optimization
- Excellent communication and stakeholder interaction skills
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1619597