Posted on: 22/12/2025
Description :
Job Title : Big Data Hadoop Developer
Work mode : Monday to Friday (5 days WFO)
Experience : 2 to 10 years
Notice Period : Immediate Joiner
Must-Have Experience : Big Data, Spark, Hive, and the Hadoop ecosystem (HDFS, YARN, MapReduce)
About the Role : We are looking for an experienced Big Data Hadoop Developer to design and build scalable big data systems and pipelines. The ideal candidate will have deep expertise in Hadoop, Spark, Hive, and ETL development, along with strong problem-solving skills and the ability to work in a fast-paced environment.
Roles & Responsibilities :
- Design, develop, and maintain scalable big data systems and data pipelines.
- Implement data processing frameworks and optimize large datasets using Hadoop, Spark, and Hive.
- Develop and maintain ETL processes ensuring high data availability, accuracy, and quality for downstream applications.
- Collaborate with data architects, analysts, and business teams to translate business requirements into scalable big data solutions.
- Perform data validation, quality checks, and troubleshooting to ensure pipeline reliability.
- Optimize data storage, retrieval, and performance across the Hadoop ecosystem.
- Ensure security, governance, and compliance across big data environments.
Required Skills & Qualifications :
- 2-10 years of hands-on experience in Big Data technologies.
- Strong expertise in the Hadoop ecosystem (HDFS, MapReduce, YARN).
- Proficiency in Apache Spark (batch and streaming) and Hive.
- Experience in designing and implementing data pipelines and ETL workflows.
- Solid understanding of data optimization, partitioning, and performance tuning for large datasets.
- Familiarity with NoSQL databases (HBase, Cassandra, MongoDB) is an added advantage.
- Experience with Java, Scala, Python, or Shell scripting.
- Strong analytical and problem-solving skills.
Preferred Qualifications :
- Bachelor's/Master's degree in Computer Science, IT, or a related field.
- Experience with real-time data processing tools (Kafka, Flink, Storm).
- Exposure to data governance and security frameworks within big data environments.
Posted in : Data Engineering
Functional Area : Big Data / Data Warehousing / ETL
Job Code : 1593256