
Job Description

Role : Data Engineer.

Location : Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in the office required).

Notice Period : Immediate to 15 days (immediate joiners strongly preferred).

Note : Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.

Job Title : Senior Software Engineer (SSE) - Kafka, Python, and Azure Databricks (Healthcare Data Project).

Experience : 5.5 to 8 years.

Role Overview :

We are looking for a highly skilled Data Engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects.

The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing.

This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities :

- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks (see the illustrative sketch after this list).

- Architect scalable data streaming and processing solutions to support healthcare data workflows.

- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.

- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).

- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.

- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.

- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.

- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.
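
For illustration only (not part of the role requirements), a minimal sketch of the kind of real-time pipeline described above: reading a Kafka topic with PySpark Structured Streaming on Azure Databricks and appending the parsed events to a Delta table. The broker address, topic name, schema fields, checkpoint path, and table name are placeholder assumptions.

```python
# Minimal sketch, assuming PySpark Structured Streaming on Azure Databricks.
# Broker, topic, schema fields, and table/checkpoint paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("healthcare-kafka-stream").getOrCreate()

# Hypothetical schema for an incoming JSON event.
event_schema = StructType([
    StructField("patient_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw stream from Kafka.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "healthcare-events")          # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; parse the JSON value into typed columns.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json_value")
       .select(F.from_json("json_value", event_schema).alias("event"))
       .select("event.*")
)

# Append the parsed events to a Delta table with checkpointing.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/healthcare_events")
    .outputMode("append")
    .toTable("bronze.healthcare_events")
)
query.awaitTermination()
```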

Required Skills & Qualifications :

- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.

- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing (a Python analogue is sketched after this list).

- Experience with Azure Databricks (or willingness to learn and adopt it quickly).

- Hands-on experience with cloud platforms (Azure preferred, AWS or GCP is a plus).

- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.

- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.

- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.

- Strong analytical skills, problem-solving mindset, and ability to lead complex data projects.

- Excellent communication and stakeholder management skills.
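
As a purely illustrative sketch of the stream-processing pattern referenced above: Kafka Streams itself is a JVM library, so this Python analogue uses the confluent-kafka client in a consume-transform-produce loop. The broker, topics, consumer group, and enrichment rule are placeholder assumptions, not project specifics.

```python
# Illustrative sketch only, using the confluent-kafka Python client
# (Kafka Streams itself is a JVM library). Broker, topics, group id,
# and the enrichment rule are placeholder assumptions.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # placeholder broker
    "group.id": "claims-enricher",        # placeholder consumer group
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "broker:9092"})

consumer.subscribe(["claims-raw"])        # placeholder input topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        claim = json.loads(msg.value())
        # Hypothetical enrichment: flag high-value claims.
        claim["high_value"] = claim.get("amount", 0) > 10_000
        producer.produce("claims-enriched", json.dumps(claim).encode("utf-8"))
        producer.poll(0)                  # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```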

