Posted on: 06/08/2025
Responsibilities:
- Write clean, maintainable, and efficient Scala code following best practices.
- Good knowledge of fundamental data structures and their usage.
- At least 8 years of experience designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies.
- Expertise in Spark Core, Spark SQL, and Spark Streaming (two minimal sketches follow this list).
- Experience with Hadoop, HDFS, Hive, and other Big Data technologies.
- Familiarity with data warehousing and ETL concepts and techniques.
- Expertise in database concepts and SQL/NoSQL operations.
- UNIX shell scripting for scheduling and running application jobs is an added advantage.
- At least 8 years of experience in project development life cycle activities and maintenance/support projects.
- Work in an Agile environment and participate in daily scrum stand-ups, sprint planning, reviews, and retrospectives.
- Understand project requirements and translate them into technical solutions that meet project quality standards.
- Ability to work in a team with diverse stakeholders and collaborate with upstream/downstream functional teams to identify, troubleshoot, and resolve data issues.
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication skills.
- Experience and a desire to work in a global delivery environment.
- Stay up to date with new technologies and industry trends in software development.
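
As a rough illustration of the Spark SQL and Scala skills listed above, here is a minimal sketch of a batch aggregation job. The input/output HDFS paths, column names, and job name are hypothetical, chosen only for the example:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyOrderAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailyOrderAggregation")
      .getOrCreate()

    import spark.implicits._

    // Read raw orders from HDFS (path and schema are assumptions).
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/orders")

    // Aggregate order totals per day using Spark SQL functions.
    val dailyTotals = orders
      .groupBy(to_date($"order_ts").as("order_date"))
      .agg(sum($"amount").as("total_amount"), count("*").as("order_count"))

    // Write results as Parquet, partitioned by date, for downstream Hive tables.
    dailyTotals.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("hdfs:///data/curated/daily_order_totals")

    spark.stop()
  }
}
```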
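
And a comparable sketch of a Spark Structured Streaming job. The Kafka broker address, topic name, and windowing choices are assumptions for illustration, not part of the role description:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrderStreamMonitor {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("OrderStreamMonitor")
      .getOrCreate()

    // Read a stream of raw events from Kafka (source config is an assumption;
    // requires the spark-sql-kafka-0-10 connector on the classpath).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .load()

    // Count events per 5-minute window on the Kafka event timestamp.
    val counts = events
      .groupBy(window(col("timestamp"), "5 minutes"))
      .count()

    // Write running counts to the console; a production job would
    // typically target HDFS, Hive, or another Kafka topic instead.
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```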
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1524792