Posted on: 30/07/2025
Job Role: Data Engineer
Work Mode: On-site/Hybrid
Location: Bangalore
Experience: 6+ years
We are looking for a self-driven and technically strong Data Engineer with 4-6 years of experience to join our growing team. The ideal candidate will be proficient in SQL, Databricks, Kafka, and PySpark, and capable of independently managing end-to-end (E2E) data deliverables. A strong understanding of Azure Data Factory (ADF), Azure Data Lake (ADL), REST APIs, and Business Intelligence (BI) tools is also expected.
Key Responsibilities:
- Develop, maintain, and optimize data pipelines using Databricks and PySpark.
- Write efficient and complex SQL queries for large-scale data processing and analytics.
- Deliver complete E2E solutions from ingestion to reporting.
- Collaborate with data analysts and business teams to gather and understand data requirements.
- Work independently and take ownership of data engineering tasks and delivery timelines.
- Integrate data from various sources, including REST APIs and cloud storage.
- Utilize tools such as ADF, ADL, and BI platforms to enable data consumption and reporting.
Required Skills:
- Very strong hands-on experience with SQL.
- Proficiency in Databricks and PySpark is mandatory.
- Experience in developing data pipelines using Azure Data Factory (ADF).
- Knowledge of Azure Data Lake (ADL) and data orchestration.
- Understanding of REST APIs and experience integrating them for data ingestion.
- Exposure to Business Intelligence (BI) tools like Power BI/Tableau (preferred).
- Strong analytical and problem-solving skills.
- Ability to work independently and manage project timelines efficiently.
Mandatory Skills: PySpark, Python, Kafka
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1522155