Posted on: 12/12/2025
Required Technical Skill Set: Hadoop, Python, PySpark, Hive
Must Have:
- Hands-on experience with Hadoop, Python, PySpark, Hive, and Big Data ecosystem tools.
- Able to develop and tune queries and work on performance enhancement.
- Solid understanding of object-oriented programming and HDFS concepts.
- The candidate will be responsible for delivering code, setting up environments and connectivity, and deploying code to production after testing.
Good-to-Have:
- Good data warehouse (DWH) / data lake knowledge is preferable.
- Conceptual and creative problem-solving skills; ability to work with considerable ambiguity and to learn new and complex concepts quickly.
- Experience in working with teams in a complex organization involving multiple reporting lines.
- Good DevOps and Agile development framework knowledge.
Responsibility of / Expectations from the Role:
- Work as a developer on Cloudera Hadoop.
- Work with Hadoop, Python, PySpark, Hive SQL, and Big Data ecosystem tools.
- Strong functional and technical knowledge to deliver what is required; the candidate should be well acquainted with banking terminology.
- Strong DevOps and Agile development framework knowledge.
- Create Python/PySpark jobs for data transformation and aggregation.
- Experience with stream-processing systems such as Spark Streaming.
Posted in
Data Engineering
Functional Area
Big Data / Data Warehousing / ETL
Job Code
1589010