Posted on: 07/11/2025
Description:
About the Role:
We are seeking a skilled Databricks Developer to design, develop, and optimize data pipelines and workflows within the Azure Databricks environment.
The ideal candidate will have strong expertise in PySpark, Python, and SQL, with hands-on experience building scalable and efficient data processing solutions.
Key Responsibilities:
- Design and develop data pipelines and ETL workflows using Databricks and PySpark.
- Optimize data storage, transformation, and retrieval processes for large-scale datasets.
- Collaborate with data engineers, analysts, and business stakeholders to deliver robust data solutions.
- Implement best practices for data quality, performance tuning, and error handling.
- Integrate Databricks solutions with cloud platforms (preferably Azure) and other data services.
- Write efficient SQL queries for data extraction, transformation, and analysis.
- Maintain documentation and support ongoing improvements in data infrastructure.
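To illustrate the kind of extract-transform work described above: the sketch below is not part of the posting, the table and column names are hypothetical, and a real pipeline would run on Databricks with PySpark; the stdlib `sqlite3` module is used here only to keep the example self-contained.

```python
import sqlite3

# In-memory database stands in for a source system (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 120.0, "EMEA"), (2, 80.0, "EMEA"), (3, 200.0, "APAC")],
)

# Transform: aggregate order amounts per region -- a typical rollup step
# before loading results into a warehouse or data lake table.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM raw_orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 200.0), ('EMEA', 200.0)]
```

In PySpark the same rollup would be expressed with `groupBy("region").sum("amount")` on a DataFrame, with partitioning and caching tuned for the dataset size.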
Required Skills & Qualifications:
- 5-8 years of experience in data engineering or development roles.
- Strong expertise in Databricks, PySpark, Python, and SQL.
- Experience in building and managing large-scale data pipelines.
- Solid understanding of data lake, ETL, and data warehousing concepts.
- Familiarity with Azure Data Services (ADF, Synapse, etc.) is an advantage.
- Strong analytical, debugging, and problem-solving skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1571055