Posted on: 09/10/2025
Description :
Azure Databricks + SQL, by HR Neev Systems.
Experience : 5+ years.
Location : Hyderabad.
Job Description :
We are seeking an experienced Azure Databricks + SQL Developer / Big Data Engineer to design, develop, and maintain scalable data solutions on Azure. The role will focus on building efficient ETL/ELT pipelines, optimizing SQL queries, and leveraging Databricks and other Azure services for advanced data processing, analytics, and data platform engineering. The ideal candidate will have a strong background in both traditional SQL development and modern big data technologies on Azure.
Key Responsibilities :
- Develop, maintain, and optimize ETL/ELT pipelines using Azure Databricks (PySpark/Spark SQL).
- Write and optimize complex SQL queries, stored procedures, triggers, and functions in Microsoft SQL Server.
- Design and build scalable, metadata-driven ingestion pipelines for both batch and streaming datasets.
- Perform data integration and harmonization across multiple structured and unstructured data sources.
- Implement orchestration, scheduling, exception handling, and log monitoring for robust pipeline management.
- Collaborate with peers to evaluate and select appropriate tech stack and tools.
- Work closely with business, consulting, data science, and application development teams to deliver analytical solutions within timelines.
- Support performance tuning, troubleshooting, and debugging of Databricks jobs and SQL queries.
- Work with other Azure services such as Azure Data Factory, Azure Data Lake, Synapse Analytics, Event Hub, Cosmos DB, Streaming Analytics, and Purview when required.
- Support BI and Data Science teams in consuming data securely and in compliance with governance standards.
Required Skills & Experience :
- Proficiency in Microsoft SQL Server (T-SQL), including stored procedures, indexing, optimization, and performance tuning.
- Strong experience with Azure Data Factory (ADF), Databricks, ADLS, PySpark, and Azure SQL Database.
- Working knowledge of Azure Synapse Analytics, Event Hub, Streaming Analytics, Cosmos DB, and Purview.
- Proficiency in SQL, Python, and either Scala or Java, with debugging and performance optimization skills.
- Hands-on experience with big data technologies such as Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4j, and Elasticsearch.
- Strong understanding of file formats such as Delta Lake, Avro, Parquet, JSON, and CSV.
- Solid background in data modeling, data transformation, and data governance best practices.
- Experience designing and building REST APIs, with practical exposure to Data Lake or Lakehouse projects.
- Ability to work with large and complex datasets, ensuring data quality, governance, and security standards.
- Certifications such as DP-203 : Data Engineering on Microsoft Azure or Databricks Certified Developer (DE) are a plus.
Posted in : Data Engineering
Functional Area : Technical / Solution Architect
Job Code : 1558521