Posted on: 05/01/2026
Job Description :
Function : Software Engineering - Big Data / DWH / ETL
Skills : Data Analysis, ETL, Azure Data Factory, Databricks, Spark, SQL
You will be part of the operations team providing L2 support to a client, working either in specified business hours or in a 24x7 support model. The role provides Level-2 (L2) technical support for data platforms and pipelines built on Azure Data Factory (ADF), Databricks, SQL, and Python. It involves advanced troubleshooting, root cause analysis, code-level fixes, performance tuning, and collaboration with engineering teams to ensure data reliability and SLA compliance. You must adhere to ITIL processes for Incident, Problem, and Change management.
Responsibilities :
- Investigate complex failures in ADF pipelines, Databricks jobs, and SQL processes beyond L1 scope.
- Perform root cause analysis for recurring issues, document findings, and propose permanent fixes.
- Debug Python scripts, SQL queries, and Databricks notebooks to resolve data ingestion and transformation errors.
- Analyse logs, metrics, and telemetry using Azure Monitor, Log Analytics, and Databricks cluster logs.
- Apply hotfixes for broken pipelines, scripts, or queries in non-production and coordinate controlled deployment to production.
- Optimise ADF activities, Databricks jobs, and SQL queries for performance and cost efficiency.
- Implement data quality checks, schema validation, and error handling improvements.
- Handle incidents escalated from L1, ensuring resolution within SLA.
- Create and maintain the Known Error Database (KEDB) and contribute to Problem Records.
- Participate in Major Incident calls, provide technical insights, and lead recovery efforts when required.
- Enhance monitoring dashboards, alerts, and auto-recovery scripts for proactive issue detection.
- Develop Python utilities or Databricks notebooks for automated validation and troubleshooting (a minimal sketch follows this list).
- Suggest improvements in observability and alert thresholds.
- Ensure all changes follow the ITIL Change Management process and are properly documented.
- Maintain secure coding practices, manage secrets via Key Vault (see the second sketch below), and comply with data privacy regulations.
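
To illustrate the validation and data-quality responsibilities above, here is a minimal, hypothetical sketch of the kind of Python/PySpark utility this role might build. The table name sales.orders, the expected schema, and the key column order_id are all invented placeholders; real checks would follow the client's data contracts.

# Hypothetical, minimal sketch of a Python/PySpark validation utility.
# Table name, expected schema, and key column are invented placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided as `spark` on Databricks

# Placeholder data contract for an imaginary table.
EXPECTED_SCHEMA = {"order_id": "bigint", "amount": "double", "ingested_at": "timestamp"}

def validate_table(table_name: str) -> list:
    """Return human-readable data-quality findings; an empty list means healthy."""
    findings = []
    df = spark.table(table_name)

    # Schema validation: flag missing or re-typed columns.
    actual = {f.name: f.dataType.simpleString() for f in df.schema.fields}
    for col, expected_type in EXPECTED_SCHEMA.items():
        if col not in actual:
            findings.append("missing column: " + col)
        elif actual[col] != expected_type:
            findings.append(f"type drift on {col}: expected {expected_type}, got {actual[col]}")

    # Basic quality checks: empty loads and null keys are common L2 findings.
    if df.count() == 0:
        findings.append("table is empty")
    elif "order_id" in actual:
        null_keys = df.filter(df["order_id"].isNull()).count()
        if null_keys:
            findings.append(f"{null_keys} rows with a null order_id")

    return findings

for issue in validate_table("sales.orders"):
    print("DQ FAIL:", issue)

In practice a utility like this would write its findings to a monitoring table or raise an alert rather than print them.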
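Similarly, the secrets-management responsibility usually means fetching credentials from Azure Key Vault at runtime instead of hard-coding them. A minimal sketch using the azure-identity and azure-keyvault-secrets libraries, with the vault URL and secret name as placeholders:

# Minimal sketch of fetching a credential from Azure Key Vault at runtime.
# The vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # managed identity, CLI login, etc.
client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=credential,
)

# Fetch at runtime; never log or persist the secret value.
sql_password = client.get_secret("sql-conn-password").value

On Databricks specifically, the equivalent idiom is dbutils.secrets.get(scope="...", key="...") against a Key Vault-backed secret scope.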
Requirements :
- Azure Data Factory (ADF) : Deep understanding of pipeline orchestration, linked services, triggers, and custom activities.
- Databricks : Proficient in Spark, cluster management, job optimisation, and notebook debugging.
- SQL : Advanced query tuning, stored procedures, schema evolution, and troubleshooting.
- Python : Strong scripting skills for data processing, error handling, and automation.
- Azure Services : ADLS, Key Vault, Synapse, Log Analytics, Monitor (a Log Analytics query sketch follows this list).
- Familiarity with CI/CD pipelines (Azure DevOps/GitHub Actions) for data workflows.
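
As a concrete example of the Log Analytics skill listed above, the azure-monitor-query library can pull recent ADF pipeline failures for triage. This is a sketch only: the workspace ID is a placeholder, and the ADFPipelineRun table and its columns assume ADF diagnostic settings are routed to Log Analytics in resource-specific mode.

# Sketch of pulling recent ADF pipeline failures from Log Analytics.
# Workspace ID is a placeholder; the ADFPipelineRun table assumes ADF
# diagnostic settings in resource-specific mode.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

KQL = """
ADFPipelineRun
| where Status == 'Failed'
| project TimeGenerated, PipelineName, RunId
| order by TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=KQL,
    timespan=timedelta(hours=24),
)
if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(list(row))
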
Non-technical skills :
- Strong knowledge of ITIL (Incident, Problem, Change).
- Ability to lead technical bridges, communicate RCA, and propose permanent fixes.
- Excellent documentation and stakeholder communication skills.
- Drive Incident/Problem resolution by supporting key operational activities (delivery, fixes, and supportability) together with the operations team.
- Experience working in ServiceNow is preferred.
- Attention to detail is a must, with focus on quality and accuracy.
- Able to handle multiple concurrent tasks with appropriate prioritisation and strong time management skills.
- Flexible about work content and enthusiastic to learn.
- Strong relationship skills to work with multiple stakeholders across organisational and business boundaries at all levels.
Education Qualification and Certifications required, if any :
Certifications (Preferred) :
- Microsoft Certified : Azure Data Engineer Associate (DP-203).
- Databricks Certified Data Engineer Associate.
Posted by
Priyanka Ganapavarapu
Life Science Recruiter at Element Infomatics (India) Pvt. Ltd.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1596519