Databricks Architect - ETL/ELT Pipelines

INTRAEDGE TECHNOLOGIES PRIVATE LIMITED
6 - 8 Years
Chennai

Posted on: 16/03/2026

Job Description

Website : https://intraedge.com/

Job Title : Databricks Architect

Location : Chennai (Hybrid)

Experience Level : 6+ Years

Role Overview :



We are seeking a highly skilled Databricks Architect to lead the design, implementation, and optimization of scalable cloud-based data platforms.

This role requires deep expertise in Lakehouse architecture, Databricks, Apache Spark, and Delta Lake, along with proven experience delivering end-to-end enterprise data engineering solutions.

Key Responsibilities :


- Architect and implement enterprise-grade Lakehouse solutions using Databricks.

- Design and deliver end-to-end data engineering pipelines, including batch and real-time streaming solutions.

Lead implementation of :


- Cloud-based data lakehouse platforms integrating diverse data sources.

- Real-time data processing pipelines for operational and analytical use cases.

- Develop scalable ETL/ELT pipelines using PySpark, Scala, and SQL.

- Implement advanced data modeling solutions including 3NF, dimensional modeling, and enterprise data warehousing strategies.

- Design and build incremental data loading frameworks and metadata-driven ingestion pipelines.

- Establish data quality frameworks and governance standards.

- Implement and manage Unity Catalog, including fine-grained security and access controls.

- Leverage Databricks components such as :

1. Delta Live Tables.

2. Autoloader.

3. Structured Streaming.

4. Databricks Workflows.

- Integrate with orchestration tools (e.g., Apache Airflow).

- Drive CI/CD automation, deployment strategies, and DevOps best practices.

- Optimize performance of pipelines, Spark jobs, and compute resources.

- Provide architectural guidance and technical leadership across cross-functional teams.

- Engage with stakeholders and clients to translate business requirements into scalable technical solutions.
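As a purely illustrative sketch of the metadata-driven, incremental ingestion frameworks mentioned above (table names, the watermark column, and the in-memory config are hypothetical, not part of this role's actual stack), the core idea can be shown in plain Python:

```python
from dataclasses import dataclass

# Hypothetical ingestion metadata: one entry would exist per source table,
# typically loaded from a control table rather than hard-coded.
@dataclass
class TableConfig:
    name: str
    watermark_column: str
    last_watermark: int  # high-water mark recorded after the previous run

def incremental_load(config: TableConfig, rows: list[dict]) -> list[dict]:
    """Return only rows newer than the stored watermark, then advance it."""
    new_rows = [r for r in rows if r[config.watermark_column] > config.last_watermark]
    if new_rows:
        config.last_watermark = max(r[config.watermark_column] for r in new_rows)
    return new_rows

# Two runs over an overlapping source: only the delta is picked up the second time.
cfg = TableConfig(name="orders", watermark_column="updated_at", last_watermark=0)
batch1 = incremental_load(cfg, [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20}])
batch2 = incremental_load(cfg, [{"id": 1, "updated_at": 10}, {"id": 3, "updated_at": 30}])
```

In a Databricks setting the same pattern is usually expressed with Delta Lake MERGE or Auto Loader rather than a hand-rolled filter; the sketch only conveys the watermark-driven control flow.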

Required Skills :

Deep expertise in :



- Databricks and cloud-native storage/compute platforms.

- Apache Spark (batch & streaming).

- Delta Lake & Lakehouse architecture.

- Distributed data processing systems.

- Strong hands-on programming skills in Python, PySpark, Scala, and SQL.

