Posted on: 15/12/2025
Description:
- Position: Databricks Developer
- Location: Hyderabad
- Experience: 5 to 8 Years
- Notice Period: Immediate Joiners Preferred
About the Role:
We are looking for an experienced Databricks Developer with strong expertise in building scalable data pipelines using Apache Spark, Delta Lake, and cloud-native data platforms. The ideal candidate must have hands-on experience developing on Databricks in AWS, Azure, or GCP environments.
Key Responsibilities:
- Develop scalable and high-performance data pipelines using Databricks (PySpark/Scala).
- Implement Delta Lake, optimized storage layers, and advanced Lakehouse architecture.
- Build batch and streaming ETL/ELT jobs for ingestion, transformation, and enrichment.
- Create, optimize, and maintain Databricks notebooks, clusters, jobs, and workflows.
- Work with cloud services such as AWS (Glue, S3), Azure (ADF, Synapse), or GCP (BigQuery), depending on project needs.
- Perform data modelling, schema design, and integration with BI/reporting systems.
- Ensure data quality, reliability, and performance through monitoring and optimization.
- Collaborate with data engineers, architects, and business teams for end-to-end data delivery.
- Apply best practices for security, governance, cluster optimization, and cost management within Databricks.
Required Technical Skills:
- Hands-on experience with Databricks on AWS / Azure / GCP.
- Strong expertise in Apache Spark (PySpark / Scala).
- Experience working with Delta Lake, Delta tables, and Lakehouse architecture.
- Proficiency in SQL and optimization techniques.
- Good understanding of ETL/ELT frameworks and pipeline orchestration.
- Experience with one or more cloud services:
1. AWS: Glue, S3, EMR, Lambda
2. Azure: ADF, Synapse, ADLS
3. GCP: Dataflow, Dataproc, BigQuery
- Knowledge of version control (Git), CI/CD pipelines.
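To give candidates a concrete sense of the Delta Lake and ELT work described above, here is a minimal Databricks SQL sketch (table and column names are hypothetical, not from this posting):

```sql
-- Create a Delta table (Delta is the default table format on Databricks)
CREATE TABLE IF NOT EXISTS sales_bronze (
  order_id STRING,
  amount   DOUBLE,
  order_ts TIMESTAMP
) USING DELTA;

-- MERGE for incremental upserts, a common ELT pattern on Delta tables
MERGE INTO sales_bronze AS t
USING sales_updates AS s
  ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Compact small files to improve downstream query performance
OPTIMIZE sales_bronze;
```

Day-to-day work in this role would involve patterns like the above, implemented via SQL, PySpark, or Scala as the project requires.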
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1590469