hirist

Senior Data Engineer - Kafka

Nasugroup
Bangalore
5 - 9 Years

Posted on: 15/12/2025

Job Description

We are looking for a Senior Data Engineer to build and run scalable data systems for real-time and batch analytics. This role requires strong experience with Apache Kafka for event streaming and Terraform for managing cloud infrastructure.

You will build reliable, high-performance data pipelines, support data access across teams, and help monitor and improve system health. Experience with APIs and microservices is required to integrate data services and make data easy to consume.

Role and Responsibilities:

- Design, operate, and optimize large-scale distributed data systems that support real-time and batch workloads.

- Manage and enhance event streaming ecosystems with a strong focus on Kafka.

- Improve system observability by building dashboards, metrics, logs, and alerts for data pipelines and streaming services.

- Ensure high availability, fault tolerance, and horizontal scalability across data ingestion, processing, and storage layers.

- Build and manage cloud-native data environments on AWS.

- Use Terraform to automate infrastructure provisioning and deployment.

- Ensure security, monitoring, and operational reliability across all data systems.

- Work with internal microservices that publish and consume data.

- Ensure smooth data integration between services and the broader data ecosystem.

- Collaborate with backend teams to maintain consistent schemas, data contracts, and service-level reliability.

- Build intuitive dashboards and data visualizations for insights into data quality, pipeline health, and system behavior.

- Use tools such as Grafana, CloudWatch, or similar for monitoring and observability.
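To illustrate the observability work described above, here is a minimal sketch of a consumer-lag alert check. All names, numbers, and thresholds are hypothetical; in practice the offsets would come from Kafka's admin API and the results would be exported to Grafana or CloudWatch as metrics.

```python
# Minimal sketch of a consumer-lag alert check for a streaming pipeline.
# All partition numbers, offsets, and thresholds below are invented for
# illustration; real values would come from Kafka's admin API.

def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag: latest offset in the log minus last committed offset."""
    return {
        partition: end_offsets[partition] - committed_offsets.get(partition, 0)
        for partition in end_offsets
    }

def lag_alerts(lag_by_partition, threshold):
    """Return the partitions whose lag exceeds the alert threshold."""
    return sorted(p for p, lag in lag_by_partition.items() if lag > threshold)

end = {0: 1_000, 1: 5_400, 2: 900}
committed = {0: 990, 1: 200, 2: 900}
lag = consumer_lag(end, committed)
print(lag)                    # {0: 10, 1: 5200, 2: 0}
print(lag_alerts(lag, 1000))  # [1]
```

A check like this, run on a schedule and wired to an alerting channel, is one common shape for the "metrics, logs, and alerts" responsibility listed above.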

Required Skills:

- Strong foundation in distributed computing and system internals.

- Hands-on experience with Kafka (partitioning, consumer groups, tuning, schema management).

- Excellent command of Python, SQL, and Apache Spark.

- Experience across relational, NoSQL, and graph database systems.

- Solid knowledge of AWS services relevant to data engineering.

- Strong working knowledge of Terraform and infrastructure automation.

- Experience working in high-volume, real-time data environments.
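As a sketch of the Kafka partitioning concept listed above: Kafka's default partitioner routes a keyed record by hashing the key bytes (with murmur2) modulo the partition count, so all records with the same key land on the same partition and stay ordered there. The version below substitutes MD5 for murmur2 purely to keep the example self-contained; it is not Kafka's actual hash.

```python
import hashlib

def partition_for_key(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition.

    Kafka's default partitioner applies murmur2 to the key bytes; MD5 here
    is a simplified stand-in that preserves the essential property: the same
    key always maps to the same partition, preserving per-key ordering.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key is always routed to the same partition:
p1 = partition_for_key(b"user-42", 12)
p2 = partition_for_key(b"user-42", 12)
print(p1 == p2)  # True
```

This same-key-same-partition guarantee is why choosing a good key matters: it determines both ordering semantics and how evenly load spreads across consumer-group members.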

Nice-to-Have:

- Exposure to containerization (Docker) and orchestration systems.

- Experience with a graph query language such as Cypher.

- Understanding of distributed storage engines and indexing concepts.
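For readers unfamiliar with graph query languages: a Cypher pattern such as `MATCH (a {name: "Ada"})-[:KNOWS]->(b) RETURN b` is, conceptually, a one-hop traversal over an adjacency structure. A toy sketch in plain Python (the graph data here is invented for illustration):

```python
# Toy in-memory graph; node names and edges are invented for illustration.
knows = {
    "Ada": ["Grace", "Alan"],
    "Grace": ["Alan"],
    "Alan": [],
}

def one_hop(graph, start):
    """Rough equivalent of: MATCH (a {name: start})-[:KNOWS]->(b) RETURN b"""
    return graph.get(start, [])

print(one_hop(knows, "Ada"))  # ['Grace', 'Alan']
```

A real graph database adds indexing, multi-hop pattern matching, and query planning on top of this basic idea, which is where the indexing concepts in the bullet above come in.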





Job Views: 11
Applications: 11
Recruiter Actions: 4

Functional Area

Data Engineering

Job Code

1590717