Job Description

Description:

We are looking for a skilled Kafka Data Engineer to design, develop, and manage real-time data pipelines and streaming integration solutions using Apache Kafka.

The role involves building scalable, high-performance Kafka-based architectures, integrating Kafka with multiple external systems, and ensuring reliability, security, and governance across streaming platforms.

Key Responsibilities:

- Design and develop Kafka-based data pipelines, streaming platforms, and integration solutions

- Implement and manage Kafka producers, consumers, topics, partitions, and schemas (a minimal producer sketch follows this list)

- Build and maintain Kafka connectors for integrating databases, APIs, and cloud services

- Optimize Kafka performance for throughput, low latency, scalability, and fault tolerance

- Monitor Kafka clusters and streaming applications to ensure stability and performance

- Troubleshoot and resolve Kafka-related issues across development and production environments

- Collaborate closely with DevOps, data engineering, and application teams

- Implement best practices for data security, compliance, and governance within Kafka ecosystems

- Maintain comprehensive documentation for Kafka configurations, schemas, pipelines, and operational processes
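
As a rough illustration of the producer side of such pipelines, the sketch below publishes a keyed event using the plain Java kafka-clients API. The broker address (localhost:9092), topic name (orders), and order-ID key are hypothetical placeholders, not details from this posting.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed local broker; replace with the real bootstrap servers.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability settings typical of fault-tolerant pipelines: wait for all
        // in-sync replicas and deduplicate broker-side retries.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by order ID keeps all events for one order on the same
            // partition, preserving per-key ordering.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Wrote to %s-%d @ offset %d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush();
        }
    }
}
```

Keying records by a stable business identifier is what makes per-entity ordering (and later log compaction) possible; acks=all plus idempotence trades a little latency for the fault tolerance the role calls for.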

Required Skills & Qualifications:

- Strong hands-on experience with Apache Kafka and Kafka ecosystem components

- Experience designing and implementing Kafka Connect and custom connectors

- Solid understanding of distributed systems, messaging, and event-driven architectures (the consumer-group sketch after this list illustrates partition-based scaling)

- Experience integrating Kafka with databases, REST APIs, and cloud platforms

- Knowledge of monitoring and observability tooling for Kafka (metrics, logging, alerting)

- Proficiency in at least one programming language such as Java, Scala, or Python
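
To illustrate the consuming side, here is a minimal consumer-group sketch using the same Java kafka-clients API; the broker address, topic, and group ID (order-processors) are again assumed placeholders. Instances sharing a group ID divide the topic's partitions among themselves, which is how Kafka consumers scale horizontally.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // All instances sharing this group ID split the topic's partitions between them.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Commit offsets manually, only after records are processed (at-least-once delivery).
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                        record.partition(), record.offset(), record.key(), record.value());
                }
                consumer.commitSync();
            }
        }
    }
}
```

Disabling auto-commit and calling commitSync() only after processing gives at-least-once delivery, so downstream handling should be idempotent.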

