Posted on: 20/02/2026
Description:
A Senior Databricks Developer with 6+ years of experience is responsible for designing, developing, deploying, and optimizing large-scale data processing solutions and pipelines on the Databricks platform, typically within a cloud ecosystem such as Azure, AWS, or GCP.
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable batch and streaming data pipelines and ETL (Extract, Transform, Load) processes; a minimal PySpark sketch follows this list.
- Optimization & Performance Tuning: Optimize Spark jobs and Databricks clusters for performance, efficiency, and scalability when handling large datasets; a tuning sketch also appears below.
- Collaboration: Work closely with data engineers, data scientists, business analysts, and architects to gather requirements, define scope, and ensure data solutions meet business needs.
- Data Quality & Governance: Implement data quality checks (the first sketch below includes a simple example), ensure data integrity and security, and adhere to data governance best practices, potentially using tools such as Unity Catalog.
- Deployment & Operations: Manage the deployment and maintenance of data solutions, including designing CI/CD pipelines with tools such as Azure DevOps or GitHub Actions, and monitoring production workflows.
- Technical Leadership: Provide technical guidance and mentorship to junior developers, and conduct code reviews to ensure best practices are followed.
- Documentation: Create and maintain comprehensive technical documentation for data architectures, processes, and workflows.
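To make the pipeline-development and data quality duties above concrete, the following is a minimal PySpark sketch of a batch ETL job on Databricks. It is illustrative only: the storage path, table names, and columns (order_id, order_ts, order_date) are hypothetical, and a production pipeline would add schema enforcement, fuller validation, and error handling.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw JSON landed in cloud storage (hypothetical ADLS path).
raw = spark.read.json("abfss://landing@examplestorage.dfs.core.windows.net/orders/")

# Transform: type the timestamp, derive a partition column, and apply a
# simple data quality gate (drop rows missing the business key, then dedupe).
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
)

# Load: append to a Delta table, partitioned so downstream reads can prune.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```

On Databricks the same job could be scheduled as a Workflow task and pointed at Unity Catalog-governed tables; the structure stays the same.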
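The performance tuning responsibility usually starts with a few standard levers: adaptive query execution, join strategy, and deliberate caching. The sketch below shows those levers under assumed table and column names; the right settings always depend on actual data volumes and skew, so treat this as a starting point rather than a prescription.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Let Adaptive Query Execution coalesce shuffle partitions and mitigate skew.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

orders = spark.table("analytics.orders_clean")      # large fact table (hypothetical)
customers = spark.table("analytics.dim_customer")   # small dimension (hypothetical)

# Broadcast the small side so the large table avoids a shuffle join.
joined = orders.join(broadcast(customers), "customer_id")

# Cache only because the result is reused across actions; release it when done.
joined.cache()
daily = joined.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
joined.unpersist()
```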
Required Qualifications and Skills:
- Experience: At least 5 years of experience in data engineering or a similar role, with proven hands-on expertise in Databricks and Apache Spark.
- Programming Languages: Strong proficiency in Python, PySpark, and SQL; Scala is also highly valued.
- Cloud Platform Expertise: Experience with a major cloud platform such as AWS, Azure, or GCP, and its related data services (e.g., Azure Data Factory and ADLS, or AWS S3, Glue, and Redshift).
- Data Technologies: In-depth knowledge of big data technologies, data warehousing, logical and physical data modeling, and Delta Lake (an upsert sketch follows this list), plus workflow orchestration tools such as Airflow.
- Education: Typically requires a Bachelor's degree in Computer Science, Information Technology, or a related technical field.
- Soft Skills: Excellent problem-solving, analytical, and communication skills, with the ability to work effectively in cross-functional, agile teams.
- Certifications (Bonus): Relevant certifications in Databricks or cloud platforms, such as Microsoft Certified: Azure Data Engineer Associate, can strengthen a candidate's profile.
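As a flavor of the Delta Lake knowledge called out above, here is a hedged sketch of an idempotent upsert using the delta-spark Python API; the staging and target table names and the order_id key are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Hypothetical staging table holding the latest batch of changes.
updates = spark.table("staging.orders_updates")

target = DeltaTable.forName(spark, "analytics.orders_clean")

# MERGE gives an atomic upsert: matching orders are updated, new ones are
# inserted, so re-running the same batch does not create duplicates.
(target.alias("t")
       .merge(updates.alias("s"), "t.order_id = s.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```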
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1614427