Posted on: 04/08/2025
Responsibilities:
- Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services (see the producer sketch after this list).
- Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
- Demonstrate a strong understanding of the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
- Design and implement scalable ETL/ELT workflows to process large volumes of data efficiently.
- Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
- Implement robust monitoring, testing, and observability practices to ensure data platform reliability and performance.
- Uphold data security, governance, and compliance standards across all data operations.
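For illustration, here is a minimal sketch of the kind of Kafka producer work involved. It assumes the confluent-kafka Python client; the broker address and the "orders" topic are hypothetical placeholders, not this team's actual configuration.

```python
# Illustrative only: assumes the confluent-kafka Python client
# (pip install confluent-kafka). The broker address and the
# "orders" topic name are hypothetical placeholders.
import json

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder; an MSK/Confluent endpoint in practice
})

def on_delivery(err, msg):
    # Delivery callback: report whether the broker acknowledged the record.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")

# Produce a single JSON-encoded event, keyed so that all events for
# one order land on the same partition.
event = {"order_id": 42, "status": "created"}
producer.produce(
    "orders",
    key=str(event["order_id"]),
    value=json.dumps(event),
    callback=on_delivery,
)

# Serve delivery callbacks and block until all queued messages are sent.
producer.flush()
```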
Requirements:
- Minimum of 8 years of experience in data engineering or related roles.
- Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.); a minimal Lambda sketch follows this list.
- Proficient in Python, SQL, and Java (Java strongly preferred); candidates must be flexible enough to write code in either Python or Java.
- Experience with infrastructure-as-code tools (e.g., CloudFormation) and CI/CD pipelines.
- Excellent problem-solving skills and strong communication and collaboration abilities.
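As a second illustration, below is a sketch of an AWS Lambda handler for an MSK event-source mapping that lands raw records in S3. The bucket name and S3 key prefix are hypothetical placeholders; the event shape (records grouped under "topic-partition" keys, with base64-encoded values) follows the standard MSK trigger payload.

```python
# Illustrative only: an AWS Lambda handler for an MSK event-source
# mapping, writing each batch to S3. Bucket name and key prefix are
# hypothetical placeholders.
import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake-raw"  # placeholder bucket name

def handler(event, context):
    # MSK event payloads group records under "topic-partition" keys,
    # and each record's value arrives base64-encoded.
    rows = []
    for _topic_partition, records in event["records"].items():
        for record in records:
            value = base64.b64decode(record["value"]).decode("utf-8")
            rows.append(value)

    if rows:
        # Write the batch as newline-delimited JSON under a unique key.
        key = f"raw/orders/{uuid.uuid4()}.jsonl"
        s3.put_object(Bucket=BUCKET, Key=key, Body="\n".join(rows).encode("utf-8"))

    return {"written": len(rows)}
```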
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1524670