Posted on: 13/03/2026
Job Description:
Responsibilities we entrust you with:
- Design and develop Kafka-based solutions for real-time data processing.
- Build and maintain Kafka producers, consumers, and topics.
- Optimize Kafka cluster performance, scalability, and reliability.
- Troubleshoot and resolve issues related to Kafka brokers, producers, and consumers.
- Integrate Kafka with other systems like databases, microservices, and third-party APIs.
- Develop efficient data pipelines for processing large volumes of streaming data.
- Ensure data integrity and fault tolerance in Kafka systems.
- Collaborate with cross-functional teams to design and implement event-driven architectures.
- Monitor and manage Kafka clusters and ensure optimal health and performance.
- Work with DevOps teams for deployment and continuous integration/continuous delivery (CI/CD) pipelines.
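The producer/consumer responsibilities above revolve around Kafka's partitioned-log model. A toy in-memory sketch of that model (plain Python for illustration, not a real Kafka client; the `InMemoryTopic` class and its method names are invented here) showing key-based partitioning and per-partition offsets:

```python
import hashlib

class InMemoryTopic:
    """Toy stand-in for a Kafka topic: a fixed set of append-only partition logs."""

    def __init__(self, num_partitions: int = 3):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key: str, value: str):
        # Mirrors the idea of Kafka's key-based partitioning: the same key
        # always maps to the same partition, preserving per-key ordering.
        idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.partitions)
        self.partitions[idx].append((key, value))
        return idx, len(self.partitions[idx]) - 1  # (partition, offset)

    def consume(self, partition: int, offset: int):
        # Consumers track their own offsets and read forward from a position.
        return self.partitions[partition][offset:]

topic = InMemoryTopic()
p1, _ = topic.produce("order-42", "created")
p2, _ = topic.produce("order-42", "paid")
assert p1 == p2  # same key lands on the same partition, so events stay ordered
```

In real Kafka the partitioner, offset tracking, and consumer-group rebalancing are handled by the brokers and client libraries; the sketch only illustrates why keys matter for ordering.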
Relevant work experience:
- Strong experience with Apache Kafka and Kafka Connect.
- Hands-on experience with Kafka Producers and Consumers.
- Proficiency in programming languages such as Java and Python.
- Familiarity with Kafka Streams.
- Experience with Kafka management tools (e.g., Confluent Control Center, Kafka Manager).
- Solid understanding of distributed systems and messaging queues.
- Knowledge of cloud platforms (AWS) and containerization (Docker, Kubernetes).
- Understanding of data serialization formats like Avro, JSON, or Protobuf.
- Experience with event-driven architectures and microservices is a plus.
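Since the requirements mention serialization formats, here is a minimal sketch (standard-library JSON only; Avro and Protobuf would instead use their schema-aware libraries) of the kind of value serializer/deserializer pair a Kafka producer and consumer are typically configured with:

```python
import json

def serialize_value(record: dict) -> bytes:
    """Encode a record as UTF-8 JSON bytes, the way a Kafka value serializer would."""
    return json.dumps(record, sort_keys=True).encode("utf-8")

def deserialize_value(payload: bytes) -> dict:
    """Decode message bytes back into a record on the consumer side."""
    return json.loads(payload.decode("utf-8"))

event = {"order_id": 42, "status": "paid"}
payload = serialize_value(event)
assert isinstance(payload, bytes)
assert deserialize_value(payload) == event  # lossless round-trip
```

Client libraries such as kafka-python accept a callable like `serialize_value` as the producer's `value_serializer`, so messages cross the wire as bytes while application code works with plain records.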
Posted in: DevOps / SRE
Functional Area: DevOps / Cloud
Job Code: 1620304