hirist

Data Solution Architect - Microsoft Fabric

MOURI TECH LIMITED
10 - 18 Years
Hyderabad

Posted on: 21/04/2026

Job Description


Job Role : Data Solutions Architect (Microsoft Fabric)

Client Location : Hyderabad

Shift : 12 PM - 9 PM IST

Joining : As early as possible

Employment : Full-time

About the Role :

We are seeking an experienced Data Solutions Architect (consultant) to take end-to-end ownership of our data ecosystem within a complex healthcare environment. This is a senior individual-contributor contract role with high visibility across the organisation, spanning two critical delivery streams :

- Enterprise Data Analytics & AI : Integrating multiple source systems into Microsoft Fabric and building a scalable, AI-ready data foundation that supports clinical, operational, and business intelligence.

- Web Portal Database Architecture : Designing and maintaining the relational database underpinning our new web portal and integrating that data into Fabric for centralised reporting and analytics.

The right candidate will be a technically deep, commercially aware architect who can operate autonomously, communicate clearly with both technical and non-technical stakeholders, and deliver production-grade solutions in a regulated elderly healthcare setting (HSE/GDPR/HIPAA-aligned).

Key Responsibilities :

Enterprise Data Integration & AI Readiness :

- Architect and implement end-to-end data pipelines ingesting data from diverse source systems (HR, Finance, Clinical, Facilities, Marketing, and external feeds) into Microsoft Fabric using Dataflows Gen2, Pipelines, and REST API integrations.

- Design and optimise a Medallion architecture (Bronze/Silver/Gold) on OneLake, ensuring data quality gates and transformation logic at each layer.

- Prepare AI-ready datasets including structured/unstructured data preparation, feature engineering, vector embeddings, and integration with Azure OpenAI or similar LLM services.

- Build semantic models and certified datasets in Power BI connected to Fabric Warehouses/Lakehouses for self-service analytics.

- Implement data cataloguing, lineage tracking, and metadata standards using Microsoft Purview integrated with Fabric.

Web Portal Database Architecture :

- Design logical and physical database schemas for the web portal (user management, RBAC, workflows, transactional records, audit logs) using Azure SQL Database or SQL Server.

- Apply performance tuning best practices : indexing strategies, query optimisation, partitioning, and connection pooling for high-concurrency portal workloads.

- Implement backup, point-in-time restore, geo-redundancy, and high-availability configurations aligned to HSE/healthcare uptime requirements.

- Manage end-to-end security : TDE, row-level security, dynamic data masking, and Azure AD-integrated authentication.

- Design and maintain incremental ELT pipelines from the portal DB into Fabric for operational reporting and audit analytics.
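
The incremental ELT pattern described above typically rests on a persisted high-water mark. A minimal sketch, with SQLite standing in for Azure SQL and a hypothetical `portal_orders` table:

```python
import sqlite3

def extract_incremental(conn, last_watermark):
    """Watermark-based incremental extract from a portal table.

    Pulls only rows modified since the last successful load, then returns
    the rows plus the new high-water mark to persist for the next run.
    (SQLite stands in for Azure SQL; table and column names are
    illustrative assumptions.)
    """
    cur = conn.execute(
        "SELECT id, status, modified_at FROM portal_orders "
        "WHERE modified_at > ? ORDER BY modified_at",
        (last_watermark,),
    )
    rows = cur.fetchall()
    # If nothing changed, keep the old watermark so the next run is a no-op.
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark
```

In Fabric this logic maps onto a parameterised Pipeline copy activity, with the watermark stored in a control table rather than in application code.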

Data Governance & Security :

- Establish and enforce data standards, naming conventions, stewardship accountability models, and data dictionaries across all platforms.

- Implement RBAC across Microsoft Fabric workspaces, OneLake, and the portal database, aligned to the principle of least privilege.

- Ensure full compliance with GDPR, HIPAA, HIQA, and HSE data protection standards; act as a subject matter expert during audits or compliance reviews.

- Build and operate a data quality framework covering validation rules, reconciliation checks, anomaly detection, and error logging, with automated alerting.
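
The core of such a quality framework is a set of named rules evaluated per batch, with failure counts feeding an alerting step. A minimal sketch (rule names and predicates are illustrative):

```python
def run_quality_checks(rows, rules):
    """Evaluate named validation rules over a batch and count failures.

    Each rule is a (name, predicate) pair; rows failing a predicate are
    collected per rule so an alerting step can fire whenever any count is
    non-zero. A sketch of the framework described above, not a production
    implementation.
    """
    failures = {name: [] for name, _ in rules}
    for row in rows:
        for name, predicate in rules:
            if not predicate(row):
                failures[name].append(row)
    # Return counts; the failing rows themselves would go to an error log.
    return {name: len(bad) for name, bad in failures.items()}
```

Real frameworks add reconciliation (source vs. target row counts), statistical anomaly detection, and persistence of results for trend analysis.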

CI/CD & DevOps for Data Platforms :


- Implement CI/CD pipelines for Fabric assets (Lakehouses, Warehouses, Notebooks, Pipelines, Dataflows Gen2) using Azure DevOps or GitHub Actions with Fabric Git integration.

- Version-control all database schema changes, stored procedures, and migration scripts using tools such as Flyway, Liquibase, or SSDT with automated test validation.

- Establish infrastructure as code (IaC) for Fabric capacity, Azure SQL, and supporting services using Terraform or Bicep.

- Define and maintain environment segregation (Dev / UAT / Prod), deployment gates, rollback strategies, and release approval workflows.
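
A small example of the kind of automated deployment gate this implies: validating Flyway-style versioned migration names before release. The `V<n>__<description>.sql` pattern is the common Flyway default; adapt to whichever tool (Flyway, Liquibase, SSDT) the team standardises on:

```python
import re

MIGRATION_RE = re.compile(r"^V(\d+)__[a-z0-9_]+\.sql$")

def validate_migrations(filenames):
    """CI gate: check migration file names and detect duplicate versions.

    Returns a list of problems; an empty list means the migration set is
    deployable. Illustrative of one check a pipeline gate might run, not
    a complete policy.
    """
    problems, versions = [], []
    for name in filenames:
        match = MIGRATION_RE.match(name)
        if not match:
            problems.append(f"bad name: {name}")
            continue
        versions.append(int(match.group(1)))
    if len(set(versions)) != len(versions):
        problems.append("duplicate version numbers")
    return problems
```

Wired into Azure DevOps or GitHub Actions, a non-empty result fails the pull request before the schema change reaches Dev, let alone UAT or Prod.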

Stakeholder Collaboration & Technical Leadership :

- Partner with application developers (.NET/Azure App Service), BI engineers, clinical informatics teams, and senior business stakeholders to align data solutions with strategic priorities.

- Translate ambiguous business problems into precise, scalable data and database designs; challenge requirements where necessary and propose alternatives.

- Communicate technical decisions clearly to non-technical audiences including senior leadership, using architecture diagrams, decision logs, and roadmaps.

- Oversee all active data and analytics engagements across the programme, tracking delivery progress, managing dependencies, and ensuring alignment to the overall data strategy.

- Design end-to-end solution architectures for new and existing workloads, covering data ingestion, transformation, storage, modelling, and consumption layers.

- Proactively identify risks, dependencies, and blockers; escalate early and drive resolution with a solutions-first mindset.

Offshore Team Management :

- Manage and coordinate offshore data engineering and BI development teams on a day-to-day basis, ensuring clarity of task assignments, priorities, and delivery timelines.

- Conduct regular structured check-ins with offshore teams, bridging time-zone and communication gaps to maintain delivery momentum.

- Define work packages with clear acceptance criteria, breaking down high-level requirements into actionable tasks for offshore engineers.

- Monitor quality of offshore deliverables and provide constructive, timely feedback to maintain high standards across all workstreams.

Code Review & Quality Assurance :

- Lead internal code reviews across SQL/T-SQL, PySpark, Python, and Fabric Notebook submissions from both onshore and offshore team members.

- Establish and enforce coding standards, peer review processes, and pull request workflows within Azure DevOps or GitHub.

- Identify anti-patterns, performance bottlenecks, and security vulnerabilities during review; document findings and drive remediation.

- Champion a culture of engineering excellence by promoting consistent, reviewable, and well-documented code across all data platform workstreams.
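
Simple automated checks can back up the human review process described above. A sketch that flags two well-known SQL anti-patterns in a submission (the pattern list is illustrative; real reviews go far beyond pattern matching):

```python
import re

ANTI_PATTERNS = [
    # SELECT * hides schema drift and breaks column-dependent consumers.
    ("select_star", re.compile(r"\bSELECT\s+\*", re.IGNORECASE)),
    # A DELETE ending without a WHERE clause wipes the whole table.
    ("delete_without_where",
     re.compile(r"\bDELETE\s+FROM\s+\w+\s*;", re.IGNORECASE)),
]

def review_sql(sql_text):
    """Return the names of anti-patterns detected in a SQL submission."""
    return [name for name, pattern in ANTI_PATTERNS if pattern.search(sql_text)]
```

Hooked into a pull request workflow, matches become review comments, leaving humans free to focus on design-level feedback.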

Mentoring BI & Data Engineering Teams :


- Provide hands-on mentoring to BI developers and data engineers, upskilling them in Microsoft Fabric, data modelling best practices, and modern ELT patterns.

- Run knowledge-sharing sessions, architectural walkthroughs, and internal workshops to elevate overall team capability.

- Support junior and mid-level team members in navigating complex technical problems, acting as a first point of escalation for technical questions.

- Contribute to onboarding materials, internal wikis, and technical documentation to build lasting institutional knowledge.

Hands-On Reporting & Data Engineering :

- Remain actively hands-on in building and optimising data engineering pipelines rather than acting purely in an oversight capacity, writing production-grade PySpark, T-SQL, and Dataflow logic where needed.

- Design and develop Power BI semantic models, certified datasets, and report templates that serve as the standard for the wider BI team.

- Perform data profiling, root cause analysis on data quality issues, and end-to-end testing of pipelines from source to report layer.

- Validate reporting outputs against source systems, ensuring accuracy, consistency, and traceability of all metrics consumed by business and clinical users.
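
The source-to-report validation above amounts to comparing per-metric totals between the two layers. A minimal sketch (metric names are illustrative):

```python
def reconcile(source_totals, report_totals, tolerance=0.0):
    """Compare per-metric totals between source system and report layer.

    Returns the metrics whose absolute difference exceeds the tolerance,
    plus metrics present on one side only (value paired with None).
    Illustrative of the reconciliation step described above.
    """
    mismatches = {}
    for metric in set(source_totals) | set(report_totals):
        src = source_totals.get(metric)
        rpt = report_totals.get(metric)
        if src is None or rpt is None:
            # Metric exists in only one layer - always worth flagging.
            mismatches[metric] = (src, rpt)
        elif abs(src - rpt) > tolerance:
            mismatches[metric] = (src, rpt)
    return mismatches
```

Run after each refresh, a non-empty result blocks certification of the affected datasets until the discrepancy is explained.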

Required Qualifications :

Experience :


- 10+ years in Data Architecture or Database Architecture roles, with at least 4 years in a senior/lead capacity.

- Proven hands-on delivery with Microsoft Fabric - Lakehouse, Warehouse, OneLake, Dataflows Gen2, Notebooks, and Fabric Pipelines.

- Deep expertise in relational database design using SQL Server, Azure SQL Database, or PostgreSQL, including schema design, normalisation, indexing, and query optimisation.

- Strong proficiency in SQL/T-SQL, PySpark, and ELT/ETL frameworks; experience with Delta Lake or Apache Parquet formats.

- Hands-on Power BI experience including semantic model design, certified dataset publishing, row-level security, and report development, not just oversight.

- Demonstrated experience managing offshore data engineering or BI teams, including task allocation, quality control, and cross-timezone coordination.

- Proven track record leading internal code reviews across SQL, PySpark, and Python codebases and driving engineering quality standards.

- Experience integrating data from REST APIs, SFTP, and event-based sources (e.g. Azure Event Hub / Service Bus).

- Demonstrable experience designing and operating production databases for web portals or enterprise applications.

- Hands-on CI/CD and DevOps experience applied to data platform and database environments.

- Prior experience in a regulated industry (healthcare, financial services, or public sector) with working knowledge of GDPR and data protection frameworks.

Technical Stack Familiarity :

- Microsoft Fabric (OneLake, Lakehouse, Warehouse, Dataflows Gen2, Notebooks, Real-Time Analytics)

- Azure SQL Database / SQL Server / PostgreSQL

- Azure Data Factory, Synapse Analytics (migration/integration context)

- Power BI (semantic models, certified datasets, row-level security)

- Microsoft Purview (data catalog, lineage, sensitivity labels)

- Azure DevOps / GitHub Actions / Fabric Git Integration

- Terraform / Bicep (IaC for Azure environments)

- PySpark / Python / T-SQL / Delta Lake

