
Job Description

About Us :

Celebal Technologies is a leading solutions and services company operating in the fields of Data Science, Big Data, Enterprise Cloud & Automation.

We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes.

As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.


Job Summary :

We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks.

The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments.

This role requires hands-on technical expertise, including live coding during the interview process.


Key Responsibilities :

- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming (a minimal pipeline sketch follows this list).

- Architect and maintain Medallion Architecture with well-defined Bronze, Silver, and Gold layers.

- Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.

- Work with large volumes of structured and unstructured data, ensuring high availability and performance.

- Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.

- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.

- Establish best practices for code versioning, deployment automation, and data governance.
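
The kind of pipeline described above can be sketched roughly as follows. This is a minimal, illustrative PySpark example rather than a prescribed implementation; the broker address, topic, table names, and checkpoint path are placeholders.

```python
# Minimal sketch: Kafka -> Bronze layer ingest with Spark Structured Streaming
# on Databricks. All names and paths below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-bronze-ingest").getOrCreate()

# Read the raw event stream from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to string and keep ingest metadata.
bronze = raw.select(
    col("key").cast("string").alias("event_key"),
    col("value").cast("string").alias("payload"),
    col("topic"),
    col("timestamp").alias("ingest_ts"),
)

# Append into a Bronze Delta table; the checkpoint makes the stream restartable.
query = (
    bronze.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/orders_bronze")  # placeholder
    .toTable("bronze.orders_raw")                                    # placeholder
)
```

Silver and Gold layers would then typically be built as further streaming or batch transformations reading from the Bronze table.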


Required Technical Skills :


- Strong expertise in Azure Databricks and Spark Structured Streaming, including :

- Processing/trigger modes and output modes (append, update, complete).

- Checkpointing and state management.

- Experience with Kafka integration for real-time data pipelines.

- Deep understanding of Medallion Architecture.

- Proficiency with Databricks Autoloader and schema evolution (an illustrative Autoloader sketch follows this list).

- Deep understanding of Unity Catalog and foreign catalogs.

- Strong knowledge of Spark SQL, Delta Lake, and DataFrames.

- Expertise in performance tuning (query optimization, cluster configuration, caching strategies).

- Proven data management strategies.

- Excellent command of data governance and access management.

- Strong grounding in data modelling, data warehousing concepts, and Databricks as a platform.

- Solid understanding of Window functions.
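
As an illustration of the Autoloader and schema-evolution items above, a minimal Databricks-only sketch; the paths, file format, and table name are assumed placeholders.

```python
# Sketch: Databricks Autoloader (cloudFiles) ingest with schema evolution.
# Runs only on Databricks runtimes; all paths and names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    # Autoloader stores the inferred schema here and evolves it over time.
    .option("cloudFiles.schemaLocation", "/mnt/schemas/events")
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load("/mnt/raw/events/")  # placeholder landing path
)

(
    stream.writeStream.format("delta")
    .outputMode("append")  # append output mode for raw ingestion
    .option("checkpointLocation", "/mnt/checkpoints/events_bronze")
    .option("mergeSchema", "true")  # let newly added columns through to Delta
    .toTable("bronze.events")  # placeholder target table
)
```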


Proven experience in :

- Merge/Upsert logic (see the MERGE sketch after this list).

- Implementing SCD Type 1 and Type 2.

- Handling CDC (Change Data Capture) scenarios.

- Domain expertise in at least one of Retail, Telecom, or Energy.

- Real-time use case execution.

- Data modelling.
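
As referenced in the Merge/Upsert item above, a minimal Delta Lake MERGE sketch; table and column names are illustrative. SCD Type 2 would additionally insert a new row version with effective/expiry columns instead of updating in place.

```python
# Sketch: upsert (SCD Type 1 style) into a Silver Delta table with MERGE.
# Table and column names are placeholders for illustration only.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

target = DeltaTable.forName(spark, "silver.customers")   # placeholder target
updates = spark.table("bronze.customer_changes")         # placeholder CDC feed

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()     # overwrite attributes in place (Type 1)
    .whenNotMatchedInsertAll()  # new keys become new rows
    .execute()
)
```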

