Posted on: 02/12/2025
Description:
As a Lead Snowflake Data Engineer, you will be responsible for expanding and optimising the data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams. You will support software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that an optimal data delivery architecture remains consistent across ongoing projects.
Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Lead and mentor both onshore and offshore development teams, creating a collaborative environment.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, DBT, Python, AWS, and Big Data tools.
- Develop ELT processes to ensure the timely delivery of required data to customers.
- Implement data quality measures to ensure the accuracy, consistency, and integrity of data (a minimal sketch follows this list).
- Design, implement, and maintain data models that can support the organisation's data storage and analysis needs.
- Deliver technical and functional specifications to support data governance and knowledge sharing.
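As a concrete illustration of the data-quality and ELT work described above, below is a minimal Pandas sketch of a data-quality gate. The function, table, and column names are hypothetical examples for illustration, not part of any actual codebase for this role.

```python
import pandas as pd

# Hypothetical data-quality gate: enforce completeness and uniqueness
# before passing a data set downstream. All names are illustrative only.
def quality_gate(df: pd.DataFrame, key: str, required: list[str]) -> pd.DataFrame:
    """Drop rows that would violate basic accuracy/consistency rules."""
    # Completeness: every required column must exist...
    missing = [c for c in required if c not in df.columns]
    if missing:
        raise ValueError(f"missing required columns: {missing}")
    # ...and must be populated on every row we keep.
    clean = df.dropna(subset=required)
    # Uniqueness: the business key must not contain duplicates.
    return clean.drop_duplicates(subset=[key], keep="first")

orders = pd.DataFrame(
    {"order_id": [1, 2, 2, 3], "amount": [10.0, None, 20.0, 15.5]}
)
print(quality_gate(orders, key="order_id", required=["order_id", "amount"]))
```

In a production pipeline, checks like this would more commonly be expressed as DBT tests or warehouse-side constraints rather than a standalone script.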
Requirements:
- 6+ years of experience delivering consulting services to medium and large enterprises. Implementations must have included a combination of the following experiences:
- Data Warehousing or Big Data consulting for mid-to-large-sized organisations.
- Strong experience with Snowflake and Data Warehouse architecture.
- Understanding of Data Vault Methodology.
- Strong analytical skills with a thorough understanding of how to interpret customer business needs and translate those into a data architecture.
- SnowPro Core certification is highly desired.
- Hands-on experience with Python (Pandas, DataFrames, functions).
- Hands-on experience with SQL (stored procedures, functions), including debugging, performance optimisation, and database design.
- Strong experience with Apache Airflow and API integrations (see the DAG sketch after this list).
- Solid experience with at least one ETL/ELT tool (DBT, Coalesce, WhereScape, MuleSoft, Matillion, Talend, Informatica, SAP BODS, DataStage, Dell Boomi, etc.).
- Experience with Docker, DBT, data replication tools (SLT, Fivetran, Airbyte, HVR, Qlik, etc.), shell scripting, Linux commands, AWS S3, and Big Data technologies.
- Strong project management, problem-solving, and troubleshooting skills with the ability to exercise mature judgment.
- Enthusiastic, professional, and confident team player with a strong focus on customer success who can present effectively even under adverse conditions.
- Strong presentation and communication skills.
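To make the Airflow requirement above concrete, below is a minimal DAG sketch, assuming Airflow 2.4+ and the dbt CLI. The DAG id, schedule, and commands are illustrative assumptions, not this employer's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal two-step ELT DAG: land raw data, then run DBT transformations.
# All names and commands below are hypothetical.
with DAG(
    dag_id="daily_elt",                # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_to_s3",
        bash_command="python extract.py",  # hypothetical extraction script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run",            # standard dbt CLI entry point
    )
    extract >> transform  # run transformations only after extraction succeeds
```

The same structure extends naturally to replication tools such as Fivetran or Airbyte by swapping the extraction task for the relevant provider operator.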
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1583604