hirist

Big Data Developer - PySpark/Scala

Posted on: 15/08/2025

Job Description

Key Responsibilities:

- Data Development: Design, develop, and maintain big data processing jobs using PySpark or Scala/Java.

- AWS Integration: Work extensively with AWS services such as EMR, S3, Glue, Airflow, RDS, and DynamoDB to build and manage data solutions.

- Database Management: Utilize both relational and NoSQL databases for data storage and retrieval.

- Microservices & Containers: Develop and deploy microservices or domain services, and work with technologies like Docker and Kubernetes.

- CI/CD: Implement and maintain CI/CD pipelines using tools like Jenkins to ensure efficient deployment.

Required Skills & Qualifications:

- Experience: 6-10 years of experience in Big Data development.

Mandatory Skills:

- PySpark

- Scala/Java

- AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)

- Jenkins (or other CI/CD tools)

Technical Knowledge:

- Experience with relational and NoSQL databases.

- Knowledge of microservices, domain services, or API gateways.

- Familiarity with containers (Docker, Kubernetes).

