Posted on: 18/09/2025
We are seeking a highly skilled and motivated Data Engineer to join our data platform team.
The ideal candidate will have hands-on experience in building and optimizing data pipelines and architectures using Databricks and Snowflake.
You will play a crucial role in managing our data ecosystem, ensuring data accessibility, quality, and performance to drive key business decisions.
Key Responsibilities:
- Design & Development: Architect, build, and maintain robust, scalable ETL/ELT data pipelines using Databricks (PySpark/Scala) and Snowflake, ingesting, transforming, and loading data from a variety of sources into our data platform.
- Data Modeling & Warehousing: Develop and manage dimensional and relational data models within Snowflake, ensuring efficient data organization and high-performance querying for analytics and reporting.
- Performance Optimization: Troubleshoot, debug, and optimize existing data workflows and SQL queries in Databricks and Snowflake to improve performance and reduce compute costs.
- Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, and business stakeholders, to understand their data requirements and deliver reliable, well-structured datasets.
- Governance & Quality: Implement and maintain data governance frameworks, including data quality checks, lineage tracking, and security controls (e.g., RBAC, ACLs) in both Databricks and Snowflake.
- Automation & Orchestration: Automate data pipelines using dbt (Data Build Tool) for transformations and orchestration tools such as Airflow or Azure Data Factory, ensuring seamless data flow.
- Innovation: Stay up to date with the latest features and best practices in Databricks and Snowflake, and propose new solutions to improve our data infrastructure.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 3-6 years of hands-on experience as a Data Engineer or in a similar role.
- Proficiency in Python and advanced SQL is a must.
- Strong experience with Databricks for data processing, pipeline orchestration, and Spark application development.
- Extensive experience with Snowflake as a cloud data warehouse platform, including a deep understanding of its architecture (Virtual Warehouses, Snowpipe, etc.).
- Solid understanding of ETL/ELT processes, data modeling (Kimball, Inmon), and data warehousing concepts.
- Experience with at least one major cloud platform (AWS, Azure, or GCP).
- Knowledge of Azure is a plus.
- Familiarity with data governance, metadata management, and observability frameworks.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team in a fast-paced, agile environment.
Good to Have Skills:
- Experience with dbt (Data Build Tool) for data transformation and data modeling.
- Knowledge of streaming technologies like Kafka or Kinesis.
- Familiarity with CI/CD tools such as Jenkins, Azure DevOps, or GitHub Actions.
- Relevant certifications, such as Databricks Certified Data Engineer or Snowflake SnowPro Core.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1547656