Posted on: 14/04/2026
About the Role:
We are looking for a Senior Data Engineer with strong expertise in Databricks and modern data platforms. The ideal candidate will be responsible for building scalable data pipelines, optimizing data workflows, and enabling data-driven decision-making across the organization.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks and Python
- Build and optimize ETL workflows for structured and unstructured data
- Work with Databricks Lakehouse architecture and implement best practices
- Manage and implement data governance using Unity Catalog
- Integrate AWS services such as S3, IAM, and VPC into data workflows
- Collaborate with cross-functional teams to deliver high-quality data solutions
- Build dashboards and support data visualization requirements
- Ensure data quality, reliability, and performance optimization
Must-Have Skills:
- Strong experience with Databricks platform
- Proficiency in Python for data engineering and ETL pipelines
- Hands-on experience with Unity Catalog
- Good knowledge of AWS services (S3, IAM, VPC, Glue/Lambda preferred)
- Solid understanding of Data Lake / Lakehouse architecture
- Experience in building dashboards and reporting solutions
Nice to Have:
- Experience with REST API development (Flask, FastAPI, etc.)
- Knowledge of authentication/authorization (OAuth, API keys, IAM roles)
- Strong query optimization and performance tuning skills
- Experience with PySpark optimization
- Exposure to ML/AI pipelines
- Familiarity with Databricks AI/BI capabilities
Experience: 4-10 years
Location: Remote
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1628177