hirist

Databricks Developer - ETL/ELT Pipelines

NS Global Corporation
6 - 10 Years
Bangalore

Posted on: 28/04/2026

Job Description

Job Title: Databricks Developer

Location: Bangalore (On-site)

Employment Type: Full-time / Permanent

Experience Required: 6+ Years

Job Summary:


We are seeking an experienced Databricks Developer to design, develop, and optimize scalable data solutions. The ideal candidate will have strong expertise in Databricks, Apache Spark (PySpark), and modern data engineering practices. This role involves building high-performance data pipelines, driving automation initiatives, and collaborating with cross-functional teams to deliver robust and secure data platforms.


Key Responsibilities:


- Design, develop, and optimize end-to-end data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).
- Build scalable, high-performance data processing solutions leveraging distributed computing frameworks.
- Lead engineering initiatives focused on automation, performance tuning, and platform modernization.
- Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.
- Collaborate with stakeholders, data analysts, and business teams to translate business requirements into technical solutions.
- Ensure data quality, governance, and security across all data engineering processes.
- Troubleshoot, monitor, and optimize Spark jobs, Databricks clusters, and workflows for efficiency and reliability.
- Conduct code reviews and contribute to the development of reusable frameworks and best practices.
- Utilize AI-powered tools to enhance productivity, streamline workflows, and support engineering activities.
- Work extensively with Databricks Genie, including prompt engineering, workspace utilization, and automation use cases.


Required Skills & Qualifications:


- 6+ years of experience in Data Engineering or a related domain.
- Strong hands-on expertise with Databricks, including notebooks, Delta Lake, and job orchestration.
- Deep understanding of Apache Spark, including PySpark, Spark SQL, and performance optimization techniques.
- Proficiency in Python for data processing, automation, and framework development.
- Strong command of SQL, including complex query writing, performance tuning, and analytical functions.
- Hands-on experience with Databricks Genie and its application in engineering workflows.
- Experience with CI/CD pipelines and version control systems (Git-based workflows).
- Strong knowledge of data modeling concepts and ETL/ELT pipeline design.
- Experience with automation frameworks and scheduling tools.
- Solid understanding of distributed systems, big data architecture, and cloud-based data platforms.


Preferred Qualifications:


- Experience working in Agile/Scrum environments.
- Exposure to cloud platforms such as Azure, AWS, or GCP.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.


Key Competencies:


- Ability to work in a fast-paced, data-driven environment.
- Strong ownership mindset with attention to detail.
- Proactive approach to identifying and solving technical challenges.
- Continuous learning attitude toward emerging technologies and tools.

