Job Description

Key Responsibilities :

- Design, develop, and optimize ETL processes with a focus on Snowflake integration.

- Collaborate with stakeholders to gather and analyze requirements, translating them into technical specifications.

- Architect and implement efficient data pipelines to support various business needs using Snowflake and DBT.

- Perform data profiling, cleansing, and transformation to ensure data accuracy and consistency.

- Monitor and troubleshoot ETL jobs, identifying and resolving performance issues and data anomalies.

- Implement best practices for data integration, storage, and retrieval within the Snowflake environment.

- Work closely with data engineers, analysts, and business users to understand data requirements and deliver solutions that meet their needs.

- Stay updated with the latest trends and advancements in ETL technologies and AWS services.

- Design, develop, and optimize complex data pipelines within the Snowflake data warehouse environment.

- Implement scalable ETL processes to ingest, transform, and load data from various sources into Snowflake.

- Collaborate with data architects and analysts to design and implement efficient data models within Snowflake.

- Optimize SQL queries, database configurations, and data pipeline performance for enhanced efficiency and scalability.

- Set up and maintain GitHub repositories for version control of data engineering code, configurations, and scripts.

- Establish and enforce branching strategies, pull request workflows, and code review processes to ensure code quality and collaboration.

- Develop and implement robust data quality checks and validation processes to ensure the accuracy and integrity of data within Snowflake (a sketch of one such check appears after this list).

- Monitor data pipelines for anomalies, errors, and discrepancies, and implement proactive measures to maintain data quality.

- Automate deployment, monitoring, and management of data pipelines using orchestration tools such as Airflow or custom automation scripts (a minimal DAG sketch appears after this list).

- Continuously enhance automation processes to streamline data engineering workflows and minimize manual interventions.

- Document data engineering processes, pipeline configurations, and troubleshooting steps for knowledge sharing and reference.

- Provide mentorship and training to junior team members on Snowflake best practices, GitHub usage, and data engineering techniques.
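To illustrate the Airflow orchestration item above, here is a minimal sketch of a daily Snowflake load DAG, assuming Airflow 2.x with the apache-airflow-providers-snowflake package installed. The DAG id, connection id, stage, and table names are hypothetical placeholders, not part of the posting:

```python
# Hypothetical daily load: stage raw data into Snowflake, then transform it.
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="daily_snowflake_load",   # placeholder pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Copy raw files from an external stage (e.g. S3) into a landing table.
    load_raw = SnowflakeOperator(
        task_id="load_raw",
        snowflake_conn_id="snowflake_default",  # assumed connection id
        sql="COPY INTO raw.orders FROM @raw_stage/orders/;",
    )

    # Rebuild a reporting table from the landed data.
    transform = SnowflakeOperator(
        task_id="transform",
        snowflake_conn_id="snowflake_default",
        sql="""
            CREATE OR REPLACE TABLE analytics.daily_orders AS
            SELECT order_date, COUNT(*) AS order_count
            FROM raw.orders
            GROUP BY order_date;
        """,
    )

    load_raw >> transform  # run the transform only after the load succeeds
```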
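Likewise, the data quality item could be realized as a standalone check run by the orchestrator. A minimal sketch using snowflake-connector-python; the connection parameters and the table and column names are placeholders:

```python
# Hypothetical check: fail the pipeline if the key column of the table
# built above contains NULLs or duplicates.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder credentials; use a secrets manager in practice
    user="etl_user",
    password="...",
    warehouse="etl_wh",
    database="analytics",
    schema="public",
)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT
            COUNT(*) - COUNT(order_date)                   AS null_keys,
            COUNT(order_date) - COUNT(DISTINCT order_date) AS duplicate_keys
        FROM daily_orders
    """)
    null_keys, duplicate_keys = cur.fetchone()
    if null_keys or duplicate_keys:
        raise ValueError(
            f"daily_orders failed validation: {null_keys} NULL keys, "
            f"{duplicate_keys} duplicate keys"
        )
finally:
    conn.close()
```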

Required Skills and Qualifications :

- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.

- A minimum of 3 years of experience in ETL development.

- Proficiency in Snowflake, including design, development, and administration.

- Solid understanding of AWS services such as S3, Redshift, EC2, and Lambda.

- Strong SQL skills, with experience in complex query optimization and performance tuning.

- Experience with data modeling concepts and techniques.

- Proficiency in DBT (Data Build Tool) for data transformation and modeling (a minimal model sketch appears after this list).

- Extensive experience with version control systems, particularly GitHub, and proficiency in Git workflows and branching strategies.

- Solid understanding of data modeling principles, ETL processes, and data integration methodologies.

- Excellent problem-solving skills and attention to detail.

- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

- Proven track record of delivering high-quality solutions on time and within budget.
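dbt transformations are typically written as SQL models; on Snowflake, dbt also supports Python models via Snowpark. A minimal sketch of such a model, with a hypothetical upstream model and columns:

```python
# models/customer_totals.py -- hypothetical dbt Python model
import snowflake.snowpark.functions as F


def model(dbt, session):
    # Materialize the result as a table in the target schema.
    dbt.config(materialized="table")

    # Reference an upstream dbt model as a Snowpark DataFrame.
    orders = dbt.ref("stg_orders")  # assumed staging model

    # Aggregate order amounts per customer.
    return orders.group_by("customer_id").agg(
        F.sum("amount").alias("total_amount")
    )
```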

Preferred Qualifications :

- Snowflake certifications.

- Experience with other ETL tools and technologies.

- Familiarity with Agile development methodologies.

- Knowledge of data governance and compliance standards.

- Experience with Data Vault modeling and implementation.

- Familiarity with Python or other programming languages for data manipulation.

