Posted on: 17/04/2026
Job Description:
We are seeking a skilled Data Engineer with strong Java expertise to design, build, and optimize scalable data platforms. The ideal candidate will have hands-on experience with Big Data technologies and will be responsible for developing efficient data pipelines to support analytics and business intelligence.
Key Responsibilities:
- Design and develop scalable data platform frameworks using Spark, Hadoop, Kafka, and Hive.
- Build and manage batch and real-time data pipelines for analytics and data processing needs.
- Develop high-performance data processing applications using Java, Spring Boot, and microservices architecture.
- Design and implement ETL/ELT pipelines, integrating structured and unstructured data from multiple sources.
- Optimize Spark jobs, SQL queries, and distributed systems for better performance and scalability.
- Work closely with data scientists, analysts, and engineering teams to deliver data-driven solutions.
- Implement CI/CD pipelines and monitoring systems to ensure data pipeline reliability and performance.
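To give a flavor of the batch ETL work described above, here is a minimal, illustrative sketch in plain Java. It is not part of the role's actual codebase: the class, sample data, and method names are hypothetical, and real pipelines at this scale would use Spark/Kafka APIs rather than in-memory streams.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of a batch ETL step: extract raw records,
// transform them, and aggregate per event type. Production pipelines
// would do the equivalent with Spark datasets, not in-memory lists.
public class PipelineSketch {

    // "Extract": raw CSV-like records (hypothetical sample data)
    static final List<String> RAW = List.of(
        "2026-04-17,clicks,120",
        "2026-04-17,views,900",
        "2026-04-18,clicks,95"
    );

    // "Transform": parse each row and sum counts per event type
    public static Map<String, Integer> aggregateByEvent(List<String> rows) {
        return rows.stream()
            .map(r -> r.split(","))
            .collect(Collectors.groupingBy(
                fields -> fields[1],                               // group by event name
                Collectors.summingInt(f -> Integer.parseInt(f[2])) // sum the count column
            ));
    }

    public static void main(String[] args) {
        // Totals: clicks = 215, views = 900
        System.out.println(aggregateByEvent(RAW));
    }
}
```

The same group-and-aggregate shape maps directly onto a Spark `groupBy`/`agg` over a distributed dataset, which is where the Spark-job optimization mentioned above comes in.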
Required Skills & Qualifications:
- Strong proficiency in Java (Spring Boot, microservices)
- Hands-on experience with Big Data technologies
- Strong knowledge of SQL & NoSQL databases
- Experience with ETL/ELT processes and data modeling
- Familiarity with CI/CD tools (Git, Jenkins)
- Understanding of distributed computing and large-scale data processing
- Strong analytical and problem-solving skills
- Ability to work in a fast-paced, agile environment
Posted by
Namrata Solanki
Technical Recruiter - Human Empowerment at NucleusTeq Consulting Private Limited
Posted in
Data Engineering
Functional Area
Big Data / Data Warehousing / ETL
Job Code
1629340