Key Responsibilities:
- Data Development: Design, develop, and maintain big data processing jobs using PySpark or Scala/Java (a minimal PySpark sketch follows this list).
- AWS Integration: Work extensively with AWS services such as EMR, S3, Glue, Airflow, RDS, and DynamoDB to build and manage data solutions.
- Database Management: Utilize both relational and NoSQL databases for data storage and retrieval.
- Microservices & Containers: Develop and deploy microservices or domain services, and work with technologies like Docker and Kubernetes.
- CI/CD: Implement and maintain CI/CD pipelines using tools like Jenkins to ensure efficient, reliable deployments.
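For a concrete flavor of the data-development and AWS-integration work above, here is a minimal PySpark sketch that reads raw order events from S3, aggregates daily revenue per customer, and writes partitioned Parquet back to S3. It is illustrative only: the bucket, paths, column names (order_ts, customer_id, amount), and app name are all hypothetical placeholders, and on EMR the s3:// scheme resolves through EMRFS.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On EMR the Spark session comes from the cluster runtime;
# the builder settings here are placeholders.
spark = (
    SparkSession.builder
    .appName("daily-revenue-aggregation")  # hypothetical job name
    .getOrCreate()
)

# Read raw events from S3 (bucket and prefix are placeholders).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Aggregate revenue and order counts per customer per day.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(
        F.sum("amount").alias("daily_revenue"),
        F.count(F.lit(1)).alias("order_count"),
    )
)

# Write partitioned results back to S3 for downstream consumers
# (e.g., a Glue-catalogued table queried via Athena).
(
    daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_revenue/")
)

spark.stop()
```

A job like this would typically be submitted as a spark-submit step on an EMR cluster and scheduled by Airflow.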
Required Skills & Qualifications:
- Experience: 6-10 years in Big Data development.
Mandatory Skills:
- PySpark
- Scala/Java
- AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
- Jenkins (or other CI/CD tools)
Technical Knowledge:
- Experience with relational and NoSQL databases.
- Knowledge of microservices, domain services, or API gateways (see the service sketch after this list).
- Familiarity with containers (Docker, Kubernetes).
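To make the microservices item concrete, here is a minimal sketch of a domain service in Python using FastAPI, one common choice that the posting does not itself name. The service name, routes, and the in-memory store (standing in for RDS or DynamoDB) are all hypothetical.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical service name


class Order(BaseModel):
    order_id: str
    customer_id: str
    amount: float


# In-memory store; a real service would back this with RDS or DynamoDB.
ORDERS: dict[str, Order] = {}


@app.get("/health")
def health() -> dict:
    # Liveness endpoint, typically wired to a Kubernetes probe.
    return {"status": "ok"}


@app.post("/orders")
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Run locally with `uvicorn orders_service:app` (module name assumed); in the setup the posting describes, such a service would be packaged as a Docker image and deployed to Kubernetes behind an API gateway.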
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1530065