Posted on: 25/08/2025
Key Responsibilities:
- Design, develop, and maintain large-scale, high-performance big data applications.
- Work with Hadoop, Hive, and Spark (Scala) to process and analyze large datasets.
- Build efficient, reusable, and reliable data solutions to support business needs.
- Collaborate with cross-functional teams (data scientists, analysts, product teams) to define data requirements.
- Develop and optimize SQL queries (preferably PostgreSQL) for data analysis and reporting.
- Write unit and integration tests using Scalatest to ensure code quality and reliability.
- Manage code versioning and collaboration using Git.
- Contribute to CI/CD pipelines (experience with Jenkins is a plus).
- Ensure data security, compliance, and governance standards are met.
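For candidates unfamiliar with the stack, the day-to-day work described above typically looks like the following Spark (Scala) sketch: read a Hive table, aggregate it, and write the result back. The table and column names (`sales.orders`, `order_date`, `amount`) are placeholders for illustration only, not part of this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyOrderTotals {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailyOrderTotals")
      .enableHiveSupport() // allows reading/writing Hive tables
      .getOrCreate()

    // Hypothetical source table; aggregate order amounts per day.
    val orders = spark.table("sales.orders")
    val totals = orders
      .groupBy(col("order_date"))
      .agg(sum(col("amount")).as("total_amount"))

    // Persist the result as a Hive table for downstream reporting.
    totals.write.mode("overwrite").saveAsTable("sales.daily_order_totals")
    spark.stop()
  }
}
```

Running this requires a Spark distribution with Hive support; it is a sketch of the kind of batch job this role involves, not a complete production pipeline.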
Required Skills & Qualifications:
- 5-11 years of hands-on experience in big data engineering.
- Strong expertise in Hadoop, Hive, and Spark (Scala).
- Solid experience with RDBMS concepts and at least one SQL database (PostgreSQL preferred).
- Experience in writing unit and integration tests using Scalatest.
- Proficiency in using Git for version control.
- Experience with CI/CD pipelines; Jenkins knowledge is an added advantage.
- Strong problem-solving and analytical skills with the ability to handle complex data challenges.
- Excellent communication and collaboration skills.
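The Scalatest requirement above can be illustrated with a minimal `AnyFunSuite` example; the function under test (`Totals.totalAmount`) is hypothetical and exists only for this sketch.

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical production code under test -- not part of the posting.
object Totals {
  def totalAmount(amounts: Seq[BigDecimal]): BigDecimal = amounts.sum
}

class TotalsSpec extends AnyFunSuite {
  test("totalAmount sums all line amounts") {
    assert(Totals.totalAmount(Seq(BigDecimal(10), BigDecimal(5))) == BigDecimal(15))
  }

  test("totalAmount of an empty sequence is zero") {
    assert(Totals.totalAmount(Seq.empty) == BigDecimal(0))
  }
}
```

Suites like this run with the ScalaTest dependency on the classpath (e.g. via sbt's `test` task).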
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1535195