Posted on: 26/02/2026
Role Overview:
We are looking for an experienced Data Engineer with strong expertise in Databricks, PySpark, and cloud-based data engineering. The ideal candidate has hands-on experience building and optimizing scalable data pipelines in AWS environments.
Core Skills:
- Databricks
- PySpark & Python
- SQL
- AWS Services
Key Responsibilities:
- Design, develop, and optimize scalable data pipelines
- Work extensively with Databricks using PySpark/Python and SQL
- Build and maintain data workflows in cloud environments (preferably AWS)
- Ensure data quality, performance optimization, and reliability
- Collaborate with stakeholders and work independently on assigned tasks
Required Skills:
- Proficient in Databricks
- Strong hands-on experience with PySpark/Python and SQL (3+ years)
- Experience with cloud platforms such as AWS (preferred) or Azure
- Experience in building data pipelines
- Working knowledge of Airflow
- Excellent communication skills
- Ability to work independently
Posted in
Data Engineering
Job Code
1616211