Posted on: 16/02/2026
Are you a Big Data expert ready to build enterprise-grade data systems at scale? If you're actively looking for a change, or are an immediate joiner eager for your next big move, this opportunity is for you.
We are looking for a seasoned Big Data Developer with deep expertise in the Hadoop ecosystem, Spark, Scala, and Python to drive high-performance, scalable data solutions.
What You'll Be Doing:
- Design, develop & maintain scalable Big Data solutions using Spark, Hive & Hadoop ecosystem tools
- Write clean, reusable, and optimized code in Scala & Python
- Work with massive datasets stored in HDFS & distributed systems
- Build enterprise-grade applications using OOP principles & design patterns
- Optimize data processing jobs for performance & reliability
- Collaborate with architects, engineers & business stakeholders
- Research and adopt new tools & technologies quickly
- Present technical updates clearly to stakeholders
What We're Looking For:
- 3 to 5 years of overall experience, with 3+ years in Big Data technologies
- Strong proficiency in Scala (certification preferred)
- Hands-on experience with Spark, Hive & HDFS
- Strong understanding of Object-Oriented Programming & Design Patterns
- Proficiency in Python
- Familiarity with Hadoop ecosystem
- Excellent communication skills
- Bachelor's degree in Computer Science or a related field (preferred)
Nice to Have :
- Basic knowledge of Java
- Unix/Shell scripting experience
- Exposure to additional scripting languages
- Understanding of distributed systems & data engineering best practices
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1612928