Description :
Location : Hyderabad
Experience : 6-10 years
Key Responsibilities :
- Design, develop, and maintain scalable big data systems and pipelines.
- Implement data processing frameworks and optimize large datasets using tools such as Hadoop, Spark, and Hive.
- Develop and maintain ETL processes to ensure data availability, accuracy, and quality for downstream applications.
- Collaborate with data architects, analysts, and business stakeholders to translate requirements into scalable big data solutions.
- Perform data validation, quality checks, and troubleshooting to maintain reliability of pipelines.
- Optimize data storage, retrieval, and performance within the Hadoop ecosystem.
- Ensure security, governance, and compliance across big data environments.
Required Skills & Qualifications :
- 6-10 years of hands-on experience in Big Data technologies.
- Strong expertise in the Hadoop ecosystem (HDFS, MapReduce, YARN).
- Proficiency with Apache Spark (batch and streaming) and Hive.
- Experience in designing and implementing data pipelines and ETL processes.
- Knowledge of data optimization, partitioning, and performance tuning for large datasets.
- Familiarity with NoSQL databases (HBase, Cassandra, MongoDB) is a plus.
- Experience with scripting/programming languages (Java, Scala, Python, or Shell).
- Strong problem-solving skills.
Preferred Qualifications :
- Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
- Experience in real-time data processing with tools like Kafka, Flink, or Storm.
- Exposure to data governance and security frameworks in big data environments.
Posted in : Data Engineering
Functional Area : Big Data / Data Warehousing / ETL
Job Code : 1566495