Posted on: 19/12/2025
Role: Databricks Expert
Job Description:
Role Summary:
A Databricks expert with strong data engineering skills and deep AWS cloud expertise.
To lead the design, build, and deployment of data products on the Databricks Lakehouse platform, ensuring scalability, governance, security, and business value delivery.
The role requires mastery of Databricks features, Delta Lake, AWS ecosystem integration, and modern data engineering best practices.
Primary Responsibilities:
- Design, build, and manage data products and event-driven architectures using Databricks Lakehouse principles.
- Design end-to-end data pipelines using PySpark and Databricks SQL.
- Define the Data Product blueprint, including domain boundaries, ownership, SLAs, documentation, and quality rules.
- Implement modern ingestion frameworks using Auto Loader, Delta Live Tables, and Workflows.
- Develop multi-zone medallion architecture (Bronze/Silver/Gold) using Delta Lake (see the ingestion sketch after this list).
- Lead Databricks-on-AWS integration, including S3 access, IAM roles, and VPC networking.
- Implement Unity Catalog governance frameworks, fine-grained permissions, and lineage tracking.
- Drive automation & DevOps practices using Git, Databricks Repos, the Databricks CLI/SDK, and CI/CD pipelines.
- Build and optimize Spark workloads for performance and cost control (Photon/Serverless tuning).
- Lead performance reviews and code quality checks, and provide data engineering guidance to teams.
- Operationalize Machine Learning and advanced analytics with MLflow/Feature Store/Model Serving.
- Monitor and enhance reliability using job monitoring, observability dashboards, and alerts.
- Hands-on development of Power BI dashboards, DAX measures, data models, and DirectQuery/Import modes.
- Evaluate new Databricks features and adopt innovation into technical roadmap.
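For context, the ingestion and medallion work described above typically resembles the following minimal PySpark sketch. It is illustrative only: the S3 paths, JSON source format, column names, and Unity Catalog table names (sales.raw.orders_bronze, sales.curated.orders_silver) are assumptions, not details of this role.

```python
# Minimal medallion ingestion sketch (illustrative only).
# Assumes a Databricks cluster where `spark` is provided, plus hypothetical
# S3 paths, column names, and Unity Catalog tables.
from pyspark.sql import functions as F

LANDING_PATH = "s3://example-bucket/landing/orders/"      # hypothetical
SCHEMA_PATH = "s3://example-bucket/_schemas/orders/"      # hypothetical
CHECKPOINTS = "s3://example-bucket/_checkpoints/orders/"  # hypothetical

# Bronze: incremental file ingestion with Auto Loader (cloudFiles).
bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", SCHEMA_PATH)
    .load(LANDING_PATH)
    .withColumn("_ingested_at", F.current_timestamp())
)

(bronze_stream.writeStream
    .option("checkpointLocation", CHECKPOINTS + "bronze/")
    .trigger(availableNow=True)               # process available files, then stop
    .toTable("sales.raw.orders_bronze"))      # hypothetical Bronze table

# Silver: cleanse, cast, and deduplicate the Bronze feed.
# (A production pipeline would add a watermark to bound dedup state.)
silver_df = (
    spark.readStream.table("sales.raw.orders_bronze")
    .where(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
)

(silver_df.writeStream
    .option("checkpointLocation", CHECKPOINTS + "silver/")
    .trigger(availableNow=True)
    .toTable("sales.curated.orders_silver"))  # hypothetical Silver table
```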
Technical Skills Required:
- Strong programming experience in PySpark, SQL & Python for production-grade data pipelines.
- Deep mastery of Databricks, Delta Lake, Unity Catalog, Workflows, and SQL Warehouses.
- Expert knowledge of AWS services: S3, Glue, Lambda, EMR, Step Functions, CloudWatch, VPC networking.
- Experience building data products with versioning, discoverability, contracts, metadata & lineage.
- Good understanding of Infrastructure-as-Code with Terraform (workspaces, clusters, Unity Catalog policies, jobs, service principals).
- Strong foundation in data modeling, schema design, Lakehouse & Data Mesh principles.
- Familiar with governance & security frameworks (encryption, tokenization, row/column controls); see the governance sketch after this list.
- Experience with enterprise integrations (Power BI).
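As an illustration of the row/column-control expectations above, the following minimal sketch applies a Unity Catalog grant, a row filter, and a column mask from PySpark. The catalog, schema, table, and group names and the filter logic are hypothetical, and a Unity Catalog-enabled workspace is assumed.

```python
# Minimal Unity Catalog governance sketch (illustrative only).
# Assumes a Databricks cluster with Unity Catalog enabled and `spark` provided.
# Catalog/schema/table/group names and the filter logic are hypothetical.

# Grant read access on a curated table to an analyst group.
spark.sql("GRANT SELECT ON TABLE sales.curated.orders_silver TO `analysts`")

# Row-level security: non-admins only see US orders.
spark.sql("""
CREATE OR REPLACE FUNCTION sales.curated.us_only(region STRING)
RETURN is_account_group_member('admins') OR region = 'US'
""")
spark.sql("""
ALTER TABLE sales.curated.orders_silver
SET ROW FILTER sales.curated.us_only ON (region)
""")

# Column-level masking: hide e-mail addresses from non-PII readers.
spark.sql("""
CREATE OR REPLACE FUNCTION sales.curated.mask_email(email STRING)
RETURN CASE WHEN is_account_group_member('pii_readers') THEN email ELSE '***' END
""")
spark.sql("""
ALTER TABLE sales.curated.orders_silver
ALTER COLUMN email SET MASK sales.curated.mask_email
""")
```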
Qualifications:
- 4-10 years of professional Data Engineering experience.
- At least 3 years of hands-on Databricks experience at production scale.
- Proven experience in delivering Data Products on cloud Lakehouse platforms.
Soft Skills:
- Strong ownership mindset with Data Product thinking.
- Excellent communication and ability to translate architecture for business stakeholders.
- Ability to lead engineering teams and influence architecture decisions.
- Continuous innovation mindset with focus on automation, performance & reusability.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1592736