Posted on: 04/12/2025
About the Role:
We are seeking a highly skilled and experienced Data Engineer to design, build, and optimize enterprise-grade data pipelines, data warehouse architectures, and large-scale data processing systems.
The ideal candidate will have deep expertise in Python, Snowflake, SQL, and core Data Warehousing and ETL principles.
This role requires strong engineering discipline, a product mindset, and the ability to work with cross-functional teams to deliver scalable, reliable, and high-performance data solutions.
Core Technical Skills (Mandatory):
- Python (advanced proficiency) with a strong understanding of OOP concepts and modular development
- Snowflake (hands-on experience in development, performance tuning, and architecture)
- SQL development, query optimization, stored procedures, and database design
- Data Warehousing principles including dimensional modeling, star/snowflake schemas, and data mapping
- ETL concepts and experience working with ETL tools and orchestration frameworks
- Performance tuning, optimization of data workflows and SQL queries
- Strong analytical, debugging, and problem-solving skills
- Excellent oral and written communication skills, with the ability to collaborate in a multi-disciplinary environment
Key Responsibilities:
Data Pipeline Development:
- Design, develop, and maintain scalable and fault-tolerant ETL/ELT pipelines for batch and real-time data processing.
- Implement data ingestion frameworks from structured, semi-structured, and unstructured sources using Python and SQL.
- Build reusable components and automation scripts leveraging Python OOP principles (see the sketch below).
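For illustration, a minimal sketch of the kind of reusable, OOP-style component this implies, assuming hypothetical class, file, and column names (none of this reflects an actual codebase): an abstract base class fixes the extract/transform/load contract, and each concrete step plugs into a single run() entry point.

```python
import csv
from abc import ABC, abstractmethod
from typing import Any, Iterable


class PipelineStep(ABC):
    """Contract every step satisfies, keeping steps modular and testable."""

    @abstractmethod
    def extract(self) -> Iterable[dict[str, Any]]: ...

    @abstractmethod
    def transform(self, rows: Iterable[dict[str, Any]]) -> Iterable[dict[str, Any]]: ...

    @abstractmethod
    def load(self, rows: Iterable[dict[str, Any]]) -> None: ...

    def run(self) -> None:
        # Template method: one uniform entry point for every step.
        self.load(self.transform(self.extract()))


class OrdersCsvStep(PipelineStep):
    """Hypothetical concrete step: read a CSV, normalize keys, load rows."""

    def __init__(self, path: str) -> None:
        self.path = path

    def extract(self) -> Iterable[dict[str, Any]]:
        with open(self.path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(self, rows: Iterable[dict[str, Any]]) -> Iterable[dict[str, Any]]:
        # Standardize column names and trim stray whitespace.
        return ({k.lower(): v.strip() for k, v in row.items()} for row in rows)

    def load(self, rows: Iterable[dict[str, Any]]) -> None:
        for row in rows:
            print(row)  # placeholder: a real step would batch-insert into the warehouse


if __name__ == "__main__":
    OrdersCsvStep("orders.csv").run()
```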
Snowflake Engineering:
- Architect and manage Snowflake warehouses, including schema design, data modeling, clustering, and resource optimization.
- Implement Snowflake-specific features such as Streams, Tasks, Time Travel, and Secure Data Sharing (see the sketch after this list).
- Perform performance analysis and tuning of Snowflake workloads to ensure low latency and high throughput.
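To make the Streams/Tasks item concrete, here is a hedged sketch using the snowflake-connector-python package; the credentials, object names, schedule, and MERGE logic are all placeholders and deliberately simplified.

```python
import snowflake.connector

# Placeholder credentials; a real deployment would use a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()

# Stream: captures row-level changes on the source table.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE orders")

# Task: scheduled merge that fires only when the stream has data.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      MERGE INTO orders_curated t
      USING orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.status = s.status
      WHEN NOT MATCHED THEN INSERT (order_id, status) VALUES (s.order_id, s.status)
""")

# Tasks are created suspended; resume to start the schedule.
cur.execute("ALTER TASK merge_orders RESUME")
conn.close()
```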
Data Warehousing & Modeling:
- Apply best practices in data warehousing, including dimensional modeling, normalization/denormalization strategies, and data integration patterns.
- Translate business requirements into technical specifications and develop robust data models that support analytics, reporting, and data science initiatives (see the sketch below).
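As one example of the target shape, a hypothetical star schema for a sales subject area (all table and column names invented); the helper applies each DDL statement through an already-open Snowflake cursor, since execute() runs one statement per call.

```python
STAR_SCHEMA_DDL = [
    """CREATE TABLE IF NOT EXISTS dim_customer (
           customer_key  INTEGER PRIMARY KEY,
           customer_name VARCHAR,
           region        VARCHAR)""",
    """CREATE TABLE IF NOT EXISTS dim_date (
           date_key       INTEGER PRIMARY KEY,
           calendar_date  DATE,
           fiscal_quarter VARCHAR)""",
    # Fact rows reference both dimensions by surrogate key.
    """CREATE TABLE IF NOT EXISTS fact_sales (
           customer_key INTEGER REFERENCES dim_customer (customer_key),
           date_key     INTEGER REFERENCES dim_date (date_key),
           order_id     VARCHAR,
           quantity     INTEGER,
           net_amount   NUMBER(12, 2))""",
]


def create_star_schema(cursor) -> None:
    """Apply the schema, one DDL statement per execute() call."""
    for ddl in STAR_SCHEMA_DDL:
        cursor.execute(ddl)
```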
Data Quality & Governance:
- Establish data validation rules, monitoring frameworks, and automated quality checks across data pipelines (see the sketch after this list).
- Ensure compliance with data security, governance, and privacy policies.
- Maintain high data accuracy, consistency, and reliability across systems.
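A minimal sketch of what an automated quality check can look like, assuming the Snowflake connector's cursor (whose execute() returns the cursor) and a hypothetical orders_curated table; the rules themselves are arbitrary examples.

```python
def validate_orders(cursor, table: str = "orders_curated") -> None:
    """Fail loudly so the orchestrator can alert; each query returns a boolean."""
    checks = {
        "table_is_empty": f"SELECT COUNT(*) = 0 FROM {table}",
        "null_keys":      f"SELECT COUNT(*) > 0 FROM {table} WHERE order_id IS NULL",
        "duplicate_keys": f"""SELECT COUNT(*) > 0 FROM (
                                  SELECT order_id FROM {table}
                                  GROUP BY order_id
                                  HAVING COUNT(*) > 1)""",
    }
    failed = [name for name, sql in checks.items()
              if cursor.execute(sql).fetchone()[0]]
    if failed:
        raise ValueError(f"Data quality checks failed for {table}: {failed}")
```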
Cross-functional Collaboration:
- Work closely with product managers, analytics teams, BI developers, and business stakeholders to understand data requirements and deliver timely solutions.
- Partner with DevOps and cloud infrastructure teams to ensure smooth deployment, monitoring, and scaling of data services.
Troubleshooting & Optimization:
- Diagnose and resolve pipeline failures, data anomalies, and performance bottlenecks (see the sketch after this list).
- Continuously refine data architectures for improved reliability, maintainability, and cost efficiency.
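One hedged example of bottleneck triage: pull the slowest recent queries from Snowflake's SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view (which requires appropriate privileges and lags real time by some minutes) so tuning effort targets the worst offenders first; the thresholds here are arbitrary.

```python
SLOW_QUERY_SQL = """
    SELECT query_id,
           total_elapsed_time / 1000      AS seconds,
           bytes_spilled_to_local_storage AS spill_bytes,
           LEFT(query_text, 80)           AS query_preview
    FROM snowflake.account_usage.query_history
    WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
      AND total_elapsed_time > 60 * 1000  -- longer than one minute
    ORDER BY total_elapsed_time DESC
    LIMIT 20
"""


def slowest_queries(cursor):
    """Return the top offenders for manual review or automated alerting."""
    return cursor.execute(SLOW_QUERY_SQL).fetchall()
```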
Key Requirements:
- A minimum of 7 years of experience as a Data Engineer or in a similar role.
- Proven hands-on experience in Snowflake including warehouse configuration, schema design, and performance improvements.
- Strong mastery of SQL and database development concepts.
- Deep understanding of ETL/ELT frameworks, data integration tools, and cloud-based data ecosystems.
- Experience working with large datasets, data lakes, and distributed data platforms.
- Ability to write clean, modular, and efficient Python code aligned with OOP best practices.
- Strong attention to detail with the ability to manage multiple tasks and deliver high-quality outputs.
- Excellent communication and team collaboration skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1584878