Posted on: 19/09/2025
Position: Senior Data Engineer (ETL/Big Data/Streaming)
Experience: 8-10 years
Location: Onsite - Hyderabad
Job Overview:
The ideal candidate will have deep hands-on experience with Apache Flink, Confluent Kafka, and related technologies to design and implement large-scale data ingestion, processing, and analytics solutions.
Key Responsibilities:
- Design, develop, and optimize Flink applications for real-time stream processing and analytics.
- Build and manage scalable Confluent Kafka infrastructure, including brokers, producers, consumers, and Schema Registry.
- Implement real-time data transformations using ksqlDB.
- Develop and maintain Kafka Connect connectors, including custom connectors.
- Design and implement large-scale data ingestion, curation, and processing solutions.
- Work with big data technologies on cloud platforms (AWS, Azure, GCP).
- Ensure high performance, scalability, and reliability of streaming pipelines.
- Collaborate with cross-functional teams to deliver data-driven solutions.
- Troubleshoot and optimize data streaming applications and infrastructure.
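To illustrate the kind of real-time transformation work the role involves, here is a minimal ksqlDB sketch; the topic, stream, and column names are hypothetical, not taken from the posting:

```sql
-- Hypothetical: register a stream over an existing Kafka topic
CREATE STREAM orders_raw (
  order_id VARCHAR,
  region   VARCHAR,
  amount   DOUBLE
) WITH (KAFKA_TOPIC = 'orders', VALUE_FORMAT = 'JSON');

-- Continuous transformation: filter and reshape into a derived stream,
-- which ksqlDB materializes as a new Kafka topic
CREATE STREAM large_orders AS
  SELECT order_id, region, amount
  FROM orders_raw
  WHERE amount > 1000
  EMIT CHANGES;
```

Statements like these run continuously on the ksqlDB server, so a derived stream stays up to date as new records arrive on the source topic.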
Required Qualifications:
- Bachelor's degree or higher from a reputed university.
- 8-10 years of total experience, with the majority in ETL/ELT, big data, and Kafka.
- Proficiency in Flink application development for stream processing.
- Strong understanding of data streaming architectures.
- Extensive experience with Confluent Kafka ecosystem.
- Hands-on experience with ksqlDB and Kafka Connect.
- Solid experience in big data technologies and cloud platforms.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and in a team environment.
Good to Have:
- Experience with Google Cloud Platform (GCP).
- Exposure to the healthcare industry.
- Experience working in Agile methodologies.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1549087