Posted on: 28/04/2026
Role: Databricks Architect
Location: Pune / Gurgaon / Bangalore
Work Mode: Hybrid
About EXL:
EXL is a leading global analytics and digital solutions company that partners with clients to improve business outcomes and accelerate growth. With expertise across industries such as insurance, healthcare, banking, financial services, and logistics, EXL leverages advanced analytics, AI, digital transformation, and domain knowledge to deliver innovative solutions. Headquartered in New York, EXL operates in more than 50 offices worldwide, combining deep industry expertise with cutting-edge technology to help organisations enhance customer experience, optimise operations, and drive sustainable value.
About the Role:
We are looking for a Databricks Architect with deep Data Engineering expertise to build lakehouse solutions that are scalable, reliable, and secure. The architect will work closely with customers to design customised data platforms that accelerate analytics, reporting, and advanced use cases.
Key Responsibilities:
- Define lakehouse architecture: medallion (bronze/silver/gold) patterns, batch/streaming designs, and multi-workspace strategies.
- Design and implement data pipelines using Spark, Delta Lake, and Databricks workflows (Jobs/Workflows, DLT where applicable).
- Establish governance and security using Unity Catalog, access controls, lineage, and data quality gates.
- Optimise performance: cluster policies, autoscaling, partitioning, file sizing, caching, Spark tuning, and job orchestration.
- Build CI/CD and release governance for notebooks, repos, jobs, and infrastructure-as-code.
- Integrate Databricks with the enterprise ecosystem (cloud storage, event streaming, data warehouses, BI tools).
- Conduct solution workshops with customers; provide options and trade-offs; create phased implementation roadmaps aligned to business value.
- Mentor teams, enforce engineering standards, and ensure operational excellence (monitoring, incident response, SRE practices).
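To make the medallion (bronze/silver/gold) pattern named above concrete, here is a minimal, stdlib-only Python sketch of the layering idea. In a real Databricks lakehouse these layers would be Delta tables processed with Spark; plain Python records stand in here purely to show the shape of the flow, and all field names (`order_id`, `region`, `amount`) are invented for illustration.

```python
def to_silver(bronze):
    """Bronze -> silver: clean and deduplicate raw records (quality gate)."""
    seen = set()
    silver = []
    for rec in bronze:
        # Drop malformed records missing required fields
        if rec.get("order_id") is None or rec.get("amount") is None:
            continue
        # Deduplicate on the business key
        if rec["order_id"] in seen:
            continue
        seen.add(rec["order_id"])
        silver.append({
            "order_id": rec["order_id"],
            "region": rec.get("region", "unknown"),
            "amount": float(rec["amount"]),  # normalise types
        })
    return silver


def to_gold(silver):
    """Silver -> gold: aggregate cleaned records into a business summary."""
    totals = {}
    for rec in silver:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["amount"]
    return totals


# Raw ingested data, as-landed, including a duplicate and a malformed row
bronze = [
    {"order_id": 1, "region": "EU", "amount": "10.5"},
    {"order_id": 1, "region": "EU", "amount": "10.5"},    # duplicate
    {"order_id": 2, "region": "US", "amount": "7.0"},
    {"order_id": None, "region": "US", "amount": "3.0"},  # malformed
]

gold = to_gold(to_silver(bronze))
print(gold)  # {'EU': 10.5, 'US': 7.0}
```

The same progression (raw landing, cleansed/conformed, business-level aggregates) is what the architecture role would design at table and pipeline granularity with Delta Lake and Databricks Workflows.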
Must Have:
- 10+ years of experience with a strong Data Engineering background (ETL/ELT, distributed compute, production-grade pipelines).
- 4+ years hands-on Databricks experience in architecture/technical leadership roles.
- Strong experience in Apache Spark (PySpark/Scala), Delta Lake, pipeline design, and performance tuning.
- Experience with data orchestration and DevOps practices (Git, CI/CD, testing frameworks).
- Experience designing secure data platforms (RBAC, secrets, network/security integration, compliance considerations).
- Strong customer-facing skills: requirements discovery, solution design, and stakeholder management.
Good to Have:
- Streaming experience (Kafka/Event Hubs, Structured Streaming, CDC patterns).
- ML/AI enablement experience (MLflow, feature engineering, model lifecycle) as it relates to platform design.
- Cloud certifications or platform-specific certifications.
Education:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
Key Skills: Databricks, Python, Spark, Data Architecture, Data Pipelines
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1631757