Posted on: 31/01/2026
Description : Hiring for Data Engineer - Databricks
Role : Data Engineer - Databricks
Experience : 5 Years - 10 Years
Location : Bangalore (No Outstation Candidates)
Notice Period : (Immediate Joiners/Serving Notice)
Key skills : Databricks PySpark Developer
Skills & Experience :
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in ETL/Data Engineering roles with a strong focus on Databricks and PySpark.
- Strong proficiency in Python, with hands-on experience in developing and debugging PySpark applications.
- In-depth understanding of Apache Spark architecture, including RDDs, DataFrames, and Spark SQL.
- Expertise in SQL development and optimization for large-scale data processing.
- Proven experience working with data warehousing concepts and ETL frameworks.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills.
- Experience working on cloud platforms, preferably AWS.
- Hands-on experience with tools such as Databricks, Snowflake, Tableau, or similar data platforms.
- Strong understanding of data governance, data quality, and best practices in data engineering.
- Relevant certifications in Databricks, PySpark, Spark SQL, or cloud technologies.
Roles & Responsibilities :
ETL Development & Data Engineering :
1. Design, develop, and maintain scalable ETL processes using Databricks PySpark.
2. Extract, transform, and load data from heterogeneous sources into Data Lake and Data Warehouse environments.
3. Optimize ETL workflows for performance, scalability, and cost efficiency using Spark SQL and PySpark.
4. Implement robust error handling, logging, and monitoring mechanisms for ETL jobs.
5. Design and implement data solutions following Medallion Architecture (Bronze, Silver, Gold layers).
6. Ensure data is cleansed, enriched, validated, and optimized at each layer for analytics consumption.
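To illustrate the Medallion layering described above, here is a minimal, Spark-free sketch of the Bronze/Silver/Gold flow. It uses plain Python dicts purely for illustration; the record fields, validation rule, and function names are hypothetical, and a real Databricks pipeline would implement each layer with PySpark DataFrames and Delta tables.

```python
# Illustrative sketch of Medallion-style layering (Bronze -> Silver -> Gold).
# Field names and rules are hypothetical; a real pipeline would use PySpark
# DataFrames rather than Python dicts.

def bronze_ingest(raw_rows):
    """Bronze: land raw records as-is, tagging each with its layer."""
    return [dict(row, _layer="bronze") for row in raw_rows]

def silver_clean(bronze_rows):
    """Silver: validate and cleanse -- drop rows with a missing amount,
    normalise region names."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None:
            continue  # validation: reject incomplete records
        silver.append(dict(row, region=row["region"].strip().upper(),
                           _layer="silver"))
    return silver

def gold_aggregate(silver_rows):
    """Gold: aggregate to an analytics-ready summary per region."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]
    return totals

raw = [
    {"region": " emea ", "amount": 100},
    {"region": "APAC", "amount": None},   # fails validation at Silver
    {"region": "EMEA", "amount": 50},
]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))  # {'EMEA': 150}
```

The point of the sketch is the contract between layers: Bronze keeps everything, Silver enforces quality, and only Gold is exposed for analytics consumption.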
Data Pipeline Management :
1. Build and manage advanced data pipelines using Databricks Workflows.
2. Develop and maintain reliable, reusable, and scalable pipelines ensuring data quality and integrity.
3. Collaborate with cross-functional teams to translate business and analytics requirements into efficient data pipelines.
Data Analysis & Query Optimization :
1. Write, review, and optimize complex SQL queries for data transformation, aggregation, and analysis.
2. Perform query tuning and performance optimization on large-scale datasets within Databricks.
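As an illustration of the kind of query tuning item 2 refers to, the sketch below uses Python's built-in sqlite3 as a stand-in engine (table and column names are hypothetical) to show a common rewrite: filter and pre-aggregate the large fact table before joining the dimension, so the join processes fewer rows. In Spark SQL, the same shape also lets the date filter prune partitions at scan time.

```python
import sqlite3

# Stand-in engine for a Spark SQL warehouse; the schema is hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (customer_id INT, order_date TEXT, amount REAL);
    CREATE TABLE customers (customer_id INT, region TEXT);
    INSERT INTO orders VALUES (1, '2025-03-01', 100), (1, '2024-12-31', 40),
                              (2, '2025-06-15', 50);
    INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC');
""")

# Tuned shape: filter and aggregate the fact table *before* the join,
# rather than joining full tables and filtering afterwards.
query = """
    SELECT c.region, t.total
    FROM (
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        WHERE order_date >= '2025-01-01'   -- pushed-down filter
        GROUP BY customer_id
    ) AS t
    JOIN customers c ON c.customer_id = t.customer_id
    ORDER BY c.region
"""
print(con.execute(query).fetchall())  # [('APAC', 50.0), ('EMEA', 100.0)]
```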
Project Coordination & Continuous Improvement :
1. Participate in project planning, estimation, and delivery activities.
2. Stay updated with the latest features in Databricks, Spark, and cloud data platforms, and recommend best practices.
3. Document ETL processes, data lineage, metadata, and workflows to support data governance and compliance.
4. Mentor junior developers and contribute to team knowledge sharing where required.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1608506