Posted on: 07/04/2026
Description:
Job Title: Data Engineer & Microsoft Fabric Developer
Location: Hyderabad, India
Job Type: Contract
Key Responsibilities:
Data Engineering & Platform Optimization:
- Architect high-performance pipelines across structured and semi-structured data.
- Implement streaming/event-driven ingestion frameworks where applicable.
- Build data pipelines from source systems such as Caseware, HubSync, iManage, XCM, CCH, and SAP.
- Establish enterprise data quality controls and reconciliation frameworks.
- Optimize compute, storage, and cost efficiency across cloud platforms.
- Implement DevOps automation for data engineering assets.
- Lead architecture reviews and performance tuning initiatives.
Fabric Development:
- Lead the design, development, and optimization of data pipelines and semantic models using best practices.
- Architect and implement data engineering solutions using Microsoft Fabric (Lakehouse, Data Warehouse, OneLake) and Microsoft Azure Data Factory.
- Build and optimize enterprise semantic models, KPIs, and complex DAX calculations for performance and scalability.
- Implement row-level security (RLS), object-level security, and workspace governance standards.
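As an illustration of the RLS requirement above, a Fabric/Power BI semantic model typically enforces row-level security with a DAX filter expression attached to a role; a minimal sketch, assuming a hypothetical Sales fact table and a UserRegion security mapping table:

```dax
-- Hypothetical RLS filter on Sales: each signed-in user sees only rows
-- whose Region matches the mapping in a UserRegion security table.
Sales[Region] =
    LOOKUPVALUE (
        UserRegion[Region],
        UserRegion[UserPrincipalName], USERPRINCIPALNAME ()
    )
```

Object-level security and workspace governance standards are then layered on top, e.g. hiding sensitive columns from non-privileged roles and enforcing per-workspace deployment rules.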
Data Integration & Sources:
- Design and manage data ingestion pipelines from multiple sources, including:
- Relational databases (SQL Server, Azure SQL, Synapse)
- Cloud storage (ADLS Gen2, OneLake)
- REST APIs / Web APIs / SaaS application APIs
- Work with Fabric Data Pipelines, Dataflows Gen2, and Azure Data Factory for ETL/ELT processes.
- Ensure data quality, validation, and refresh reliability across batch and near real-time datasets.
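The API-ingestion bullet above usually comes down to paging through a REST endpoint until it is exhausted, then handing the records to a Fabric pipeline or Dataflow. A minimal sketch in plain Python, where `fetch_page` is a hypothetical stand-in for the real HTTP call (e.g. a `requests.get` against a SaaS API) so the paging logic stays testable offline:

```python
from typing import Callable, Iterator

def paged_records(fetch_page: Callable[[int], dict],
                  page_size: int = 100) -> Iterator[dict]:
    """Pull records from a paginated REST-style endpoint.

    A page shorter than `page_size` signals the end of the result set,
    which is a common (though not universal) SaaS API convention.
    """
    page = 0
    while True:
        body = fetch_page(page)
        records = body.get("items", [])
        yield from records
        if len(records) < page_size:
            break
        page += 1

# Usage: a fake two-page API returning 3 records total (page size 2).
def fake_api(page: int) -> dict:
    data = [[{"id": 1}, {"id": 2}], [{"id": 3}]]
    return {"items": data[page] if page < len(data) else []}

rows = list(paged_records(fake_api, page_size=2))
```

In production the same loop would also persist a watermark per source so refreshes stay incremental and retry-safe.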
Qualifications & Experience:
Required:
- 7 to 10 years of experience in Business Intelligence / Analytics / Data Engineering.
- Extensive hands-on experience with Microsoft Fabric, including Lakehouse, Data Warehouse, OneLake, and Data Pipelines.
- Deep hands-on expertise in:
a. Microsoft Fabric
b. Azure Data Factory / Synapse / ADLS Gen2
- Proven experience integrating data from REST APIs and external application APIs into BI and analytics platforms.
- Expertise in:
a. Dimensional modeling
b. Lakehouse architecture
c. SQL optimization
d. Delta handling & incremental loads
e. Enterprise security frameworks
- Experience with CI/CD pipelines for Power BI and data assets using Azure DevOps or GitHub.
- Excellent communication and client-facing skills, including interaction with senior stakeholders.
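The "Delta handling & incremental loads" expertise listed above boils down to a watermark filter plus a keyed merge, the pattern behind Delta Lake's `MERGE INTO` (WHEN MATCHED UPDATE / WHEN NOT MATCHED INSERT). A minimal sketch in plain Python, with dicts standing in for Delta tables and hypothetical function names, so no Spark session is needed:

```python
def incremental_batch(rows: list, watermark: str) -> list:
    """Select only rows modified after the last high-water mark,
    the usual filter feeding an incremental load."""
    return [r for r in rows if r["modified"] > watermark]

def merge_upsert(target: dict, changes: list, key: str = "id") -> dict:
    """Apply an incremental batch to a target keyed by `key`:
    update matching rows, insert new ones (last write wins per key)."""
    for row in changes:
        target[row[key]] = row
    return target

# Usage: a target of two rows receives one update and one insert;
# the unchanged row is filtered out by the watermark.
target = {1: {"id": 1, "v": "a",  "modified": "2026-01-01"},
          2: {"id": 2, "v": "b",  "modified": "2026-01-02"}}
source = [{"id": 2, "v": "b2", "modified": "2026-02-01"},
          {"id": 3, "v": "c",  "modified": "2026-02-03"},
          {"id": 1, "v": "a",  "modified": "2026-01-01"}]
batch = incremental_batch(source, watermark="2026-01-15")
merged = merge_upsert(target, batch)
```

On Fabric the same logic is typically expressed as a PySpark `DeltaTable.merge` or a T-SQL `MERGE` over Lakehouse tables, with the watermark persisted between runs.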
Preferred:
- Exposure to AI/ML-enabled analytics within the Fabric or Azure ecosystem.
- Microsoft certifications: Power BI, Fabric, Azure Data Engineer, or Azure Solutions Architect.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1626549