Posted on: 07/11/2025
Job Title : Big Data Hadoop Developer
Location : Hyderabad
Work Mode : Work From Office (WFO), 5 days a week
Experience Required : 4+ years
Key Skills : Big Data, Hadoop, Spark, Hive
Job Overview :
We are looking for an experienced Big Data Hadoop Developer to design, build, and optimize large-scale data processing systems. The ideal candidate should have strong expertise in the Hadoop ecosystem, hands-on experience with Spark and Hive, and the ability to develop efficient ETL pipelines.
Key Responsibilities :
- Design, develop, and maintain scalable big data systems and data pipelines.
- Implement data processing frameworks using Hadoop, Spark, and Hive.
- Develop and manage ETL workflows to ensure data accuracy, consistency, and availability.
- Collaborate with data architects, analysts, and business teams to translate requirements into robust big data solutions.
- Perform data validation, quality checks, and issue resolution for data pipelines.
- Optimize data storage, query performance, and cluster utilization in the Hadoop ecosystem.
- Ensure compliance with security, governance, and data management standards.
Required Skills & Qualifications :
- 4+ years of hands-on experience in Big Data technologies.
- Strong understanding of the Hadoop ecosystem (HDFS, MapReduce, YARN).
- Proficiency in Apache Spark (batch and streaming) and Hive.
- Experience in building and maintaining data pipelines and ETL processes.
- Strong knowledge of data optimization, partitioning, and performance tuning.
- Familiarity with NoSQL databases such as HBase, Cassandra, or MongoDB is an advantage.
- Experience with programming/scripting languages : Java, Scala, Python, or Shell.
- Strong analytical and problem-solving skills.
Posted in : Data Engineering
Functional Area : Big Data / Data Warehousing / ETL
Job Code : 1570977