hirist

Job Description

Mandatory Skills :

- Core Java
- Spring Boot
- Hadoop, Flink

Job Description :


The Cloud Data Technologies (CDT) team oversees data infrastructure and the management of the end-to-end data lifecycle. As a Software Engineer, you will work on our Hadoop-based data warehouse, contributing to scalable and reliable big data solutions for analytics and business insights. This is a hands-on role focused on building, optimizing, and maintaining large data pipelines and warehouse infrastructure.

Key Responsibilities :

- Build and support Java-based applications in the data domain.


- Design, develop, and maintain robust data pipelines in Hadoop and related ecosystems, ensuring data reliability, scalability, and performance.

- Implement ETL processes for batch and streaming analytics requirements.

- Optimize and troubleshoot distributed systems for ingestion, storage, and processing.

- Collaborate with data engineers, analysts, and platform engineers to align solutions with business needs.

- Ensure data security, integrity, and compliance throughout the infrastructure.

- Maintain documentation and contribute to architecture reviews.

- Participate in incident response and operational excellence initiatives for the data warehouse.

- Continuously learn and apply new Hadoop ecosystem tools and data technologies.
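The ETL responsibilities above (extract raw records, transform them, load the results) can be illustrated with a toy batch transform in plain Java. The CSV schema, field names, and filter thresholds below are invented for illustration; a production pipeline on this team would run over Hadoop or Flink rather than in-memory streams.

```java
import java.util.List;
import java.util.stream.Collectors;

public class BatchEtlSketch {
    // Hypothetical record format: "userId,eventType,durationMs".
    record Event(String userId, String type, long durationMs) {
        static Event parse(String line) {
            String[] f = line.split(",");
            return new Event(f[0], f[1], Long.parseLong(f[2]));
        }
    }

    // Extract, transform, and load in one pass:
    // keep "click" events over 100 ms and emit "userId:durationMs".
    static List<String> run(List<String> rawRows) {
        return rawRows.stream()
                .map(Event::parse)                           // extract
                .filter(e -> e.type().equals("click"))       // transform: filter
                .filter(e -> e.durationMs() > 100)
                .map(e -> e.userId() + ":" + e.durationMs()) // transform: project
                .collect(Collectors.toList());               // load (here: in-memory)
    }

    public static void main(String[] args) {
        List<String> out = run(List.of(
                "u1,click,250", "u2,view,900", "u3,click,50"));
        System.out.println(out); // [u1:250]
    }
}
```

The same extract/filter/project/sink shape carries over directly to a distributed engine; only the operators' execution changes.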

Required Skills and Experience :

- Java + Spring Boot : Build and maintain microservices.

- Apache Flink : Develop and optimize streaming/batch pipelines.

- Cloud Native : Docker, Kubernetes (deploy, network, scale, troubleshoot).

- Messaging & Storage : Kafka; NoSQL key-value stores (e.g., Redis, Memcached, MongoDB).

- Python : Scripting, automation, data utilities.

- Ops & CI/CD : Monitoring (Prometheus/Grafana), logging, pipelines (Jenkins/GitHub Actions).

- Core Engineering : Data structures/algorithms, testing (JUnit/pytest), Git, clean code.
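The Flink skills listed above center on keyed, windowed aggregation over event streams, which Flink's DataStream API (`keyBy`/`window`) provides natively. As a minimal plain-Java sketch of the tumbling-window idea only, with an invented event shape and window size:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TumblingWindowSketch {
    // Hypothetical event: a timestamp in milliseconds plus a grouping key.
    record Evt(long tsMs, String key) {}

    // Assign each event to a fixed-size (tumbling) window by truncating its
    // timestamp, then count events per window per key. Flink performs the same
    // bucketing, but adds watermarks, out-of-order handling, and fault-tolerant state.
    static Map<Long, Map<String, Long>> countPerWindow(List<Evt> events, long windowMs) {
        Map<Long, Map<String, Long>> counts = new TreeMap<>();
        for (Evt e : events) {
            long windowStart = (e.tsMs() / windowMs) * windowMs;
            counts.computeIfAbsent(windowStart, w -> new HashMap<>())
                  .merge(e.key(), 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Evt> events = List.of(new Evt(100, "a"), new Evt(900, "a"), new Evt(1200, "b"));
        // Window [0, 1000) holds both "a" events; window [1000, 2000) holds the "b" event.
        System.out.println(countPerWindow(events, 1000));
    }
}
```

The distinction between this batch-style bucketing and true streaming is that a streaming engine emits each window's result as soon as the window closes, rather than after seeing all input.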

