Posted on: 10/09/2025
Key Responsibilities:
- Develop, maintain, and optimize data pipelines using Databricks (SQL, PySpark) and AWS services (a minimal illustrative sketch follows this list).
- Assist in data integration, transformation, and basic analysis tasks to support business requirements.
- Collaborate with team members in an Agile, Test-Driven Development (TDD) environment.
- Write clean, efficient, and reusable Python code for data processing tasks.
- Work with AWS services such as S3 and EC2 for data storage and compute requirements.
- Participate in code reviews, sprint planning, and technical discussions.
- Support data quality checks and documentation of solutions.
- Ensure adherence to best practices in data engineering and software development.
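As a rough illustration of the pipeline work described above, the sketch below shows a minimal PySpark extract-transform-load job reading from and writing to S3. The bucket, paths, and column names (order_id, amount, order_ts) are hypothetical examples, not part of this posting; on Databricks the spark session is already provided, so the builder line is only needed when running the sketch standalone.

# A minimal pipeline sketch, assuming a hypothetical bucket "my-bucket"
# and illustrative columns order_id, amount, order_ts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Extract: read raw CSV files from S3 (path and schema are illustrative).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://my-bucket/raw/orders/")
)

# Transform: drop rows missing a key, enforce a numeric type,
# then aggregate order amounts by day.
clean = (
    raw.dropna(subset=["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
)
daily_totals = clean.groupBy(
    F.to_date("order_ts").alias("order_date")
).agg(F.sum("amount").alias("total_amount"))

# Load: write the result back to S3 as Parquet, partitioned by date.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://my-bucket/curated/daily_order_totals/"
)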
Good to Have:
- Experience with Databricks Workflows for automating data jobs.
- AWS and Databricks certifications are advantageous but not mandatory.
Technical Skills:
- Working knowledge of Databricks (SQL, PySpark).
- Basic understanding of AWS services (S3, EC2).
- Familiarity with development tools such as JIRA and Git.
- Knowledge of Agile methodologies.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1544140