Posted on: 18/08/2025
Job Description :
Job Title : Kafka Developer
Location : Bangalore / Hyderabad (Hybrid Work Model)
Experience : 5 to 10 years
Shift Timings : 2:00 PM to 11:00 PM
About the Role :
We are looking for an experienced Kafka Developer to join our team and play a key role in designing, developing, and maintaining scalable big data solutions.
The ideal candidate will have strong expertise in Apache Kafka along with a solid background in Hadoop and HDFS.
This role requires hands-on experience with the big data ecosystem and data engineering best practices to support our data-driven initiatives.
Key Responsibilities :
- Design, develop, and maintain Kafka-based streaming data pipelines and real-time data processing solutions (see the Kafka Streams sketch after this list).
- Work with large datasets using Hadoop ecosystem tools and technologies.
- Implement and manage data ingestion, processing, and storage on HDFS.
- Collaborate with data engineers, architects, and analysts to deliver reliable, scalable, and high-performance data solutions.
- Optimize Kafka clusters and pipelines for performance, scalability, and fault tolerance.
- Troubleshoot and resolve issues related to Kafka, Hadoop, and related big data components.
- Participate in code reviews, testing, and documentation to ensure high-quality deliverables.
- Stay updated with the latest trends and advancements in big data technologies and propose improvements.
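To illustrate the kind of streaming pipeline this role involves, below is a minimal Kafka Streams topology in Java. This is a sketch only: the topic names (orders-raw, orders-enriched), the application id, and the broker address are hypothetical placeholders, and the uppercase mapping stands in for real enrichment logic.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-pipeline");    // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("orders-raw");   // hypothetical input topic
        raw.filter((key, value) -> value != null && !value.isEmpty()) // drop empty records
           .mapValues(value -> value.toUpperCase())                   // stand-in for real enrichment
           .to("orders-enriched");                                    // hypothetical output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close)); // clean shutdown
    }
}

The same read-transform-write shape generalizes to the joins, windowed aggregations, and stateful processing available in the Kafka Streams DSL.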
Mandatory Skills :
- Strong experience with Apache Kafka, including Kafka Streams, Kafka Connect, and cluster management.
- Hands-on experience with Big Data technologies, particularly Hadoop and HDFS (a Kafka-to-HDFS ingestion sketch follows this list).
- Proficiency in developing data pipelines and streaming data architectures.
- Solid understanding of distributed systems and data engineering concepts.
- Experience with data ingestion tools and ETL processes.
- Ability to work in a hybrid environment and flexibility with shift timings (2:00 PM to 11:00 PM).
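As a sketch of the Kafka-to-HDFS ingestion path listed above, the Java snippet below consumes one batch from a topic and writes it to HDFS before committing offsets. In practice a tool such as Kafka Connect's HDFS sink connector would usually handle this; here the broker address, topic name, and HDFS path are hypothetical, and the Hadoop configuration is assumed to come from core-site.xml / hdfs-site.xml on the classpath.

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class HdfsIngest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "hdfs-ingest");             // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // commit only after HDFS flush

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             FileSystem fs = FileSystem.get(new Configuration())) {           // reads core-site.xml / hdfs-site.xml
            consumer.subscribe(Collections.singletonList("events"));          // hypothetical topic
            Path out = new Path("/data/events/batch.txt");                    // hypothetical HDFS path
            try (FSDataOutputStream stream = fs.create(out, true)) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    stream.write((record.value() + "\n").getBytes(StandardCharsets.UTF_8));
                }
                stream.hflush();                                              // make the batch durable
            }
            consumer.commitSync();                                            // commit after the write succeeds
        }
    }
}

Committing offsets only after the HDFS flush is what gives at-least-once delivery: a crash before commitSync() replays the batch instead of losing it.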
Good to Have :
- Prior experience in data engineering roles.
- Knowledge of Spark, Flink, or other real-time data processing frameworks.
- Familiarity with cloud platforms (AWS, Azure, GCP) and related big data services.
- Experience with scripting and programming languages such as Python, Shell, or Java.
Qualifications :
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Skills :
- Kafka, Big Data, Hadoop, Python
Posted in : Data Analytics & BI
Functional Area : Big Data / Data Warehousing / ETL
Job Code : 1531004