Posted on: 27/11/2025
Description:
Primary Skills (Must-Have):
- Apache Spark.
- Delta Lake.
- SQL Warehouse.
- Databricks Unity Catalog.
- Strong proficiency in Python, SQL, and data engineering pipelines (a brief illustrative sketch follows this list).
- Experience with DBT is an added advantage.
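As a rough illustration of the kind of batch work these skills cover, here is a minimal sketch, assuming a Databricks workspace with Unity Catalog enabled; the source path and the main.analytics.events_cleaned table name are placeholders introduced for this example, not part of the posting:

```python
# Minimal batch-pipeline sketch: ingest raw JSON, apply a simple
# transformation, and publish a Delta table registered in Unity Catalog.
# All paths and table names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

raw = spark.read.format("json").load("/Volumes/main/raw/events/")  # placeholder source

cleaned = (
    raw.filter(F.col("event_type").isNotNull())           # drop malformed records
       .withColumn("event_date", F.to_date("event_ts"))   # derive a partition column
)

(
    cleaned.write.format("delta")
           .mode("overwrite")
           .partitionBy("event_date")
           .saveAsTable("main.analytics.events_cleaned")   # catalog.schema.table
)
```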
Secondary Skills (Good-to-Have):
- Experience in data governance, security frameworks, and compliance:
- RBAC, SSO, encryption, lineage, and auditing.
- Ability to design scalable, secure, and cost-optimized cloud-native platforms.
- Experience implementing:
- CI/CD pipelines.
- DevOps practices.
- Infrastructure-as-Code (Terraform or similar).
- Hands-on experience designing, developing, and optimizing large-scale batch pipelines for ingestion, transformation, and analytics using Databricks.
- Experience building and managing real-time streaming pipelines (a brief sketch follows this list) using:
- Spark Structured Streaming.
- Delta Live Tables.
- Other Databricks-native streaming frameworks.
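For illustration only, a minimal Spark Structured Streaming sketch that appends Kafka events into a Delta table; the broker address, topic, checkpoint path, and table name are assumptions for this example, not requirements from the posting:

```python
# Minimal streaming sketch: read a Kafka topic with Spark Structured
# Streaming and append the events to a Delta table.
# Broker, topic, checkpoint location, and table name are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "events")                     # placeholder topic
         .load()
         .select(
             F.col("value").cast("string").alias("payload"),
             F.col("timestamp").alias("event_ts"),
         )
)

query = (
    events.writeStream.format("delta")
          .outputMode("append")
          .option("checkpointLocation", "/Volumes/main/chk/events/")  # placeholder path
          .toTable("main.analytics.events_stream")                    # catalog.schema.table
)
query.awaitTermination()  # keep the stream running until stopped
```

A Delta Live Tables version of the same flow would typically be written declaratively with the dlt Python API rather than managing the streaming query by hand.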
Role & Responsibilities:
- Serve as the Solution Architect responsible for designing and delivering Databricks-based data platforms.
- Work with enterprise stakeholders to translate business requirements into scalable architectural designs.
- Lead the development of high-performance batch and streaming pipelines.
- Define architecture standards, best practices, and governance frameworks.
- Ensure platform reliability, security, and cost efficiency across cloud environments.
- Guide cross-functional engineering teams and provide technical leadership throughout project lifecycles.
- Collaborate with DevOps, Cloud, and Security teams to ensure seamless integration and platform adoption.
Posted in: Data Engineering
Functional Area: Technical / Solution Architect
Job Code: 1580762