hirist

Senior Databricks Engineer/Data Architect - Python

FindErnest
Multiple Locations
10 - 15 Years
Rating: 5 | 14+ Reviews

Posted on: 19/01/2026

Job Description

Description:

Senior Databricks Engineer / Data Architect role



Job Summary:

We are seeking a highly skilled Senior Databricks Engineer to join our data team. The ideal candidate will have strong expertise in Python programming, ETL processes, and in-depth knowledge of the Medallion Architecture in modern Data Lakehouse environments. You will play a key role in designing, developing, and optimizing scalable data pipelines and analytics solutions on the Databricks platform.



Key Responsibilities:

- Design, develop, and maintain robust ETL pipelines using Databricks and Python.
- Implement and optimize the Medallion Architecture (Bronze, Silver, Gold layers) within our Data Lakehouse ecosystem.
- Collaborate with data engineers, data scientists, and business stakeholders to translate business requirements into scalable data solutions.
- Perform data ingestion, transformation, cleansing, and enrichment from various structured and unstructured data sources.
- Optimize Spark jobs for performance and cost-efficiency on Databricks.
- Implement best practices for data governance, security, and quality within the data pipelines.
- Mentor junior team members and contribute to improving team processes and standards.
- Troubleshoot and resolve data pipeline and platform-related issues promptly.
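For candidates less familiar with the Bronze/Silver/Gold layering mentioned above, the following is a minimal, illustrative sketch of the idea using plain Python structures. In a real Databricks pipeline these steps would be PySpark/Delta Lake transformations; all record fields and function names here are hypothetical.

```python
# Bronze layer: raw records ingested as-is, including duplicates and bad rows.
bronze = [
    {"id": "1", "amount": "100.5", "country": "IN"},
    {"id": "1", "amount": "100.5", "country": "IN"},  # duplicate record
    {"id": "2", "amount": "bad",   "country": "US"},  # malformed amount
    {"id": "3", "amount": "42.0",  "country": "US"},
]

def to_silver(rows):
    """Silver layer: cleanse, type-cast, and de-duplicate the raw records."""
    seen, silver = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # drop rows whose amount cannot be parsed
        if r["id"] in seen:
            continue  # drop duplicate ids
        seen.add(r["id"])
        silver.append({"id": r["id"], "amount": amount, "country": r["country"]})
    return silver

def to_gold(rows):
    """Gold layer: business-level aggregate, e.g. total amount per country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'IN': 100.5, 'US': 42.0}
```

The point of the pattern is that each layer has a single, auditable contract: Bronze preserves the raw input, Silver is clean and conformed, and Gold serves business consumers.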



Required Skills & Qualifications:

- 10+ years of experience in data engineering or a related field.
- Strong proficiency in Python programming and data-processing libraries (PySpark preferred).
- Hands-on experience with the Databricks platform and Apache Spark.
- Deep understanding of ETL concepts and their implementation in large-scale data environments.
- Expertise in Medallion Architecture and Data Lakehouse design patterns.
- Experience with data storage technologies such as Delta Lake and Parquet, and with cloud data platforms (AWS, Azure, or GCP).
- Familiarity with SQL and performance tuning of Spark SQL queries.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.



Preferred Qualifications:

- Experience with containerization (Docker/Kubernetes) and orchestration tools (Airflow, Azure Data Factory).
- Knowledge of CI/CD pipelines for data workflows.
- Exposure to machine learning pipelines and MLOps.

