Posted on: 29/09/2025
About the Role:
- Develop data processing workflows in Azure Databricks using PySpark.
- Work with Azure Data Lake Storage (ADLS Gen2) for efficient data storage and access.
- Ensure data quality, scalability, and performance in all solutions.
- Collaborate with BI teams to deliver insights via Power BI / Tableau.
- Apply best practices for big data processing, schema-on-read, and data modeling.
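The schema-on-read approach mentioned above can be sketched in a few lines. This is a plain-Python illustration of the idea (in the role itself this would typically be done with PySpark, e.g. `spark.read.schema(...)` over files in ADLS Gen2); the record fields and schema here are hypothetical:

```python
import json

# Schema-on-read: raw records are stored as-is (e.g. JSON lines in a data
# lake) and a schema is applied only when the data is read, not on write.
RAW_LINES = [
    '{"user_id": "42", "amount": "19.99", "country": "DE"}',
    '{"user_id": "7", "amount": "5.00"}',  # a missing field is fine at write time
]

# Hypothetical schema for illustration: field name -> type to cast to on read.
SCHEMA = {"user_id": int, "amount": float, "country": str}

def read_with_schema(lines, schema):
    """Apply the schema while reading; unknown fields are dropped, missing ones become None."""
    for line in lines:
        raw = json.loads(line)
        yield {field: cast(raw[field]) if field in raw else None
               for field, cast in schema.items()}

records = list(read_with_schema(RAW_LINES, SCHEMA))
```

Because the schema lives with the reader rather than the storage layer, the same raw files can later be read with a different or stricter schema without rewriting the data.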
Required Skills:
- Azure Data Factory (ADF) experience.
- Azure Databricks with PySpark (must-have).
- Python programming (PySpark).
- Big Data processing approaches & schema-on-read methodologies.
- Knowledge of Power BI / Tableau.
- Azure Synapse (added advantage).
- Power BI DAX (good-to-have).
Other Requirements:
- Strong collaboration skills with BI, analytics, and engineering teams.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1554073