Posted on: 08/01/2026
Description : We're Hiring : Senior Databricks Developer.
Location : Baner, Pune (Onsite, 5 days a week).
Work Mode : Full-time | Office-based.
Availability : Immediate Joiners / Notice Period ≤ 30 Days.
Note : Local Pune candidates are preferred.
L2 interviews will be conducted in person; apply only if you are open to face-to-face interviews.
Position Summary :
This role requires a seasoned Databricks professional with strong technical depth and leadership capability. You will design, architect, and optimize data pipelines using Databricks, Apache Spark, Delta Lake, and cloud-native services (Azure, AWS, or GCP). As a Senior or Lead engineer, you will guide best practices, mentor junior engineers, and collaborate closely with data architects, data scientists, and business stakeholders.
Key Responsibilities :
- Lead the adoption and implementation of Databricks components including Workspace, Jobs, DLT (Delta Live Tables), Repos, and Unity Catalog.
- Build and optimize Delta Lake solutions aligned with Lakehouse and Medallion architecture standards.
- Partner with architects, engineering teams, and business stakeholders to translate complex business requirements into robust technical solutions.
- Establish coding standards, best practices, and reusable components for Databricks-based development.
- Drive CI/CD automation for Databricks deployments using Azure DevOps, GitHub Actions, or equivalent tools.
- Ensure end-to-end data governance, data quality, metadata management, and lineage tracking leveraging Unity Catalog or Azure Purview.
- Utilize orchestration tools such as Apache Airflow, Azure Data Factory, or Databricks Workflows for scheduling and monitoring.
- Tune Spark job performance, optimize cluster configurations, and implement cost-efficient compute strategies.
- Guide data modeling, SQL transformations, and data warehousing implementation using industry-standard best practices.
- Mentor and provide technical leadership to junior and mid-level data engineers.
- Participate in architecture reviews, solution design discussions, and roadmap planning.
Required Skills and Qualifications :
- Deep proficiency with Databricks Workspace, Jobs, DLT, Repos, Unity Catalog, and Lakehouse architecture.
- Advanced programming experience with PySpark and Spark SQL; Scala is an added advantage.
- Strong experience with at least one major cloud platform : Azure, AWS, or GCP.
- Proven expertise with Delta Lake, Lakehouse, and Medallion architecture patterns.
- Hands-on experience implementing CI/CD pipelines for Databricks using DevOps tooling.
- Familiarity with orchestration tools such as Airflow, ADF, or Databricks Workflows.
- Strong understanding of data governance, security frameworks, and enterprise data management.
- Experience integrating Databricks datasets with Power BI, including creating optimized data models, implementing Direct Lake or DirectQuery connections, and enabling performant, enterprise-grade reporting solutions.
- Experience leading technical teams or acting as a senior SME in critical data projects.
- Excellent analytical abilities, communication skills, and stakeholder management capability.
- Prior experience in healthcare data environments is a plus (not mandatory).
Work Model :
- 5 days onsite per week is mandatory.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1598421