Posted on: 22/12/2025
Job Summary:
We are seeking an experienced Data Engineer with strong expertise in Databricks, Snowflake, PySpark, Python, and SQL. The ideal candidate will be responsible for building and maintaining scalable data pipelines, optimizing ETL processes, and implementing CI/CD pipelines to support enterprise-level data platforms.
Key Responsibilities:
- Design, develop, and maintain ETL/data pipelines using Databricks, PySpark, and Python (a short sketch follows this list).
- Develop and optimize SQL queries and data models in Snowflake.
- Implement data ingestion, transformation, and validation processes.
- Build and maintain CI/CD pipelines for data engineering workflows.
- Ensure data quality, performance tuning, and monitoring of pipelines.
- Work with cross-functional teams to understand data requirements and deliver scalable solutions.
- Handle structured and semi-structured data from multiple sources.
- Follow data security, governance, and engineering best practices.
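To illustrate the kind of pipeline work this role involves, here is a minimal PySpark sketch of an ingest-transform-validate-load flow; the source path, column names, and target table are hypothetical, not specifics of this position:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Databricks provides a SparkSession automatically; this line covers local runs.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest semi-structured JSON from a hypothetical landing zone.
raw = spark.read.json("/mnt/raw/orders/")

# Transform: normalize a column name and derive a date column.
orders = (
    raw.withColumnRenamed("orderId", "order_id")
       .withColumn("order_date", F.to_date("order_ts"))
)

# Validate: drop rows missing the key or carrying negative amounts.
valid = orders.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))

# Load: append to a Delta table partitioned by date (hypothetical table name).
valid.write.format("delta").mode("append") \
     .partitionBy("order_date").saveAsTable("analytics.orders")
```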
Required Skills & Qualifications:
- Strong hands-on experience with Databricks and Apache Spark (PySpark).
- Proficiency in Python for data processing and automation.
- Advanced SQL skills for querying and performance tuning.
- Experience with Snowflake data warehouse (schemas, clustering, optimization); see the sketch after this list.
- Solid understanding of ETL processes and data pipeline architecture.
- Experience implementing CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins, etc.).
- Familiarity with version control systems (Git).
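For illustration, the sketch below runs a pruning-friendly query against a clustered Snowflake table using the snowflake-connector-python package; the connection parameters, table, and clustering key are placeholders:

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Filtering on the clustering key (order_date here, by assumption) lets
    # Snowflake prune micro-partitions instead of scanning the whole table.
    cur.execute(
        "SELECT order_id, amount FROM orders WHERE order_date >= %s",
        ("2025-01-01",),
    )
    for order_id, amount in cur.fetchall():
        print(order_id, amount)
finally:
    conn.close()
```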
Functional Area: Data Engineering
Job Code: 1593319