Posted on: 11/08/2025
Databricks Consultant
Location : Remote
Experience : 5+ years
Job Overview :
The role involves building scalable data pipelines, implementing Delta Lake, and optimizing Spark jobs for performance.
Key Responsibilities :
- Develop and optimize Apache Spark jobs for large-scale data processing.
- Integrate Databricks with Azure services (ADLS, Synapse, ADF, etc.).
- Implement data governance, security, and access control in Databricks.
- Troubleshoot performance bottlenecks and optimize cluster configurations.
- Collaborate with data engineers and analysts to ensure seamless data workflows.
Required Skills :
- Strong expertise in Apache Spark optimization (partitioning, caching, tuning); see the brief sketch after this list.
- Experience with Azure cloud services (Blob Storage, ADLS, Synapse, etc.).
- Knowledge of CI/CD pipelines for Databricks (Azure DevOps, GitHub Actions).
- Familiarity with data modeling, ETL/ELT processes, and data warehousing.
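
As a rough, illustrative sketch only (not taken from the role or any specific codebase), the snippet below assumes PySpark on a Databricks cluster and shows two of the optimization techniques named above, repartitioning on a join key and caching a reused DataFrame; all table and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

# Illustrative sketch: table and column names are hypothetical.
spark = SparkSession.builder.getOrCreate()  # Databricks notebooks already provide `spark`

orders = spark.read.table("sales.orders")        # hypothetical Delta table
customers = spark.read.table("sales.customers")  # hypothetical Delta table

# Repartition on the join key so the shuffle aligns with the join,
# and cache the smaller, reused DataFrame to avoid recomputation.
orders_by_cust = orders.repartition(200, "customer_id")
customers_cached = customers.cache()

daily_revenue = (
    orders_by_cust.join(customers_cached, "customer_id")
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
)

# Write the result back as a Delta table.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("sales.daily_revenue")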
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1527204