Posted on: 03/02/2026
Description :
Software Engineer
Bangalore (Hybrid - 2 days office)
Experience : 5+ years
Job Description :
As a Software Engineer, you will work on our Hadoop-based data warehouse, contributing to scalable and reliable big data solutions for analytics and business insights.
This is a hands-on role focused on building, optimizing, and maintaining large data pipelines and warehouse infrastructure.
Key Responsibilities :
- Build and support Java-based applications in the data domain.
- Design, develop, and maintain robust data pipelines in Hadoop and related ecosystems, ensuring data reliability, scalability, and performance.
- Implement ETL processes for batch and streaming analytics requirements.
- Optimize and troubleshoot distributed systems for ingestion, storage, and processing.
- Collaborate with data engineers, analysts, and platform engineers to align solutions with business needs.
- Ensure data security, integrity, and compliance throughout the infrastructure.
- Maintain documentation and contribute to architecture reviews.
- Participate in incident response and operational excellence initiatives for the data warehouse.
- Continuously learn and apply new Hadoop ecosystem tools and data technologies.
Required Skills and Experience :
- Java + Spring Boot : Build and maintain microservices.
- Apache Flink : Develop and optimize streaming/batch pipelines.
- Cloud Native : Docker, Kubernetes (deploy, network, scale, troubleshoot).
- Messaging & Storage : Kafka; NoSQL key-value stores (Redis, Memcached, MongoDB, etc.).
- Python : Scripting, automation, data utilities.
- Ops & CI/CD : Monitoring (Prometheus/Grafana), logging, pipelines (Jenkins/GitHub Actions).
- Core Engineering : Data structures/algorithms, testing (JUnit/pytest), Git, clean code.
Posted in
Data Engineering
Functional Area
Big Data / Data Warehousing / ETL
Job Code
1609044