Posted on: 18/07/2025
Role: Apache Flink Engineer
Experience: 6 to 12 years
Work Locations: Ramanujan IT City, Tharamani, Chennai & MindSpace Hi-Tech City, Madhapur, Hyderabad
Work Model: Hybrid
Time Zone: 3 PM to 12 AM IST (cab provided both ways)
Role Summary:
This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions.
This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.
Essential Responsibilities:
- Design & develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
- Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up-to-date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US to create POCs and MVPs.
- Provide regular updates on tasks, status, and risks to the project manager.
Required:
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience in implementing large scale data ingestion and curation solutions.
- Good hands-on experience with the big data technology stack on any cloud platform.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.
Good to have:
- Experience with Google Cloud.
- Healthcare industry experience.
- Experience with Agile methodologies.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1515056