
Qentelli - Azure Data Engineer - Synapse Analytics

QENTELLI SOLUTIONS PRIVATE LIMITED
3 - 10 Years
Hyderabad

Posted on: 18/03/2026

Job Description


Role Summary :


We are seeking an Azure Data Engineer with strong, hands-on Microsoft Fabric experience to build and operate a scalable middle-layer data and integration platform. The role focuses on ingesting data from diverse sources, implementing transformation and business rules, and delivering curated, consumption-ready datasets/models for analytics and downstream applications.

Key Responsibilities :


- Fabric Implementation : Work on the Microsoft Fabric platform to design and implement robust data solutions, including OneLake architecture for efficient data storage and processing.

- Build & optimize data pipelines : Design, develop, and maintain scalable ingestion and transformation pipelines using Microsoft Fabric (Data Factory in Fabric / Pipelines), ADF/Synapse Pipelines, OneLake storage patterns, PySpark, Python, and SQL across structured and unstructured data.

- API-driven and scheduled workflows : Develop pipelines that ingest data from external APIs on a scheduled basis and initiate end-to-end downstream processing, supporting one or multiple daily runs through to curated and consumption-ready layers.

- Data ingestion & integration : Integrate data from cloud and on-prem sources including databases, third-party systems, files, and REST/SOAP APIs (auth, throttling, pagination, retries, and error handling).

- Transformation & data modeling : Build curated layers and consumption-ready models; implement incremental and batch processing logic; apply data modeling and transformation best practices aligned to reporting/analytics needs.

- SQL development & tuning : Develop and optimize complex queries, stored procedures, views, and datasets for efficient analytics and reporting; partner with analytics teams to meet performance SLAs.

- Performance tuning & cost optimization : Tune Spark jobs, ADF data flows and SQL workloads (partitioning, caching, parallelism, cluster sizing/configs) to improve reliability and reduce runtime/cost.

- Business logic implementation : Translate requirements into scalable rules (validation, eligibility, availability calculations), manage exceptions, audit logging, and ensure data consistency across systems.

- Data quality & validation : Implement automated data quality checks, validation frameworks, reconciliations, and monitoring to ensure trusted datasets.

- Security & compliance : Implement secure access via Azure AD, Managed Identities, RBAC, least privilege, and secure connectivity to data lake, Fabric/Synapse, and APIs.

- Automation & CI/CD : Build deployment automation using Azure DevOps/Git, promoting code across environments with consistent release practices; support testing and release activities.

- Monitoring & troubleshooting : Monitor pipelines and jobs using Spark UI and Azure Log Analytics; triage failures, perform root-cause analysis, and improve resiliency/runbooks.

- Collaboration : Work closely with architects, platform/DevOps engineers, analysts, and data scientists; participate in design sessions and code reviews; operate within Agile/Scrum delivery.
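As an illustration of the API-driven ingestion responsibilities above (pagination, retries, error handling), here is a minimal Python sketch. It is a generic pattern, not a specific product API: the page payload shape (`items`, `has_more`) and the injectable `fetch_page` callable are hypothetical.

```python
import time
from typing import Callable, Iterator


def fetch_paginated(
    fetch_page: Callable[[int], dict],
    max_retries: int = 3,
    backoff_seconds: float = 1.0,
) -> Iterator[dict]:
    """Yield records from a paginated API, retrying transient failures.

    `fetch_page(page)` is expected to return a dict shaped like
    {"items": [...], "has_more": bool}; the shape is illustrative.
    """
    page = 0
    while True:
        for attempt in range(max_retries):
            try:
                payload = fetch_page(page)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure
                # Exponential backoff before retrying a transient failure.
                time.sleep(backoff_seconds * (2 ** attempt))
        yield from payload["items"]
        if not payload.get("has_more"):
            return
        page += 1
```

In a Fabric or ADF pipeline, a fetcher like this would typically land records into a raw/bronze Lakehouse layer before the transformation and curation steps described above.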

Tools & Technologies :

- Fabric : Microsoft Fabric Workspaces, OneLake, Fabric Pipelines / Data Factory in Fabric, Lakehouse/Warehouse (as applicable)

- Azure : ADLS Gen2, Blob Storage, Synapse Analytics, App Service (as needed), Azure Databricks

- Languages : PySpark, Python, SQL (T-SQL)

- DevOps : Azure DevOps, Git, Terraform (preferred)

- Monitoring : Spark UI, Azure Log Analytics

- Data Governance : Azure Purview

- AI Tools : Copilot, Claude
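The automated data quality checks called out under Key Responsibilities can be expressed as a small rule-driven validator. The sketch below uses plain Python for clarity; in practice the same pattern maps onto PySpark column expressions. Rule names and row fields are illustrative, not from any specific framework.

```python
from typing import Callable, Dict, List


def run_quality_checks(
    rows: List[dict],
    rules: Dict[str, Callable[[dict], bool]],
) -> List[dict]:
    """Apply named validation rules to each row and collect failures.

    `rules` maps a rule name to a predicate over a row dict; failures
    are returned as {"row": index, "rule": name} records suitable for
    audit logging or reconciliation reports.
    """
    failures = []
    for i, row in enumerate(rows):
        for name, predicate in rules.items():
            if not predicate(row):
                failures.append({"row": i, "rule": name})
    return failures
```

A pipeline would typically run such checks after each transformation layer and either quarantine failing rows or fail the run, depending on the rule's severity.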

