Posted on: 02/02/2026
Role Summary:
We are looking for an experienced Data Engineer with strong expertise in Microsoft Fabric, Azure data services, and big data technologies.
The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and data platforms to support analytics and business intelligence use cases.
Roles & Responsibilities:
- Design, develop, and maintain scalable data pipelines using PySpark and Spark SQL
- Work extensively on Microsoft Fabric, including Lakehouse architecture and notebook-based development
- Build and manage ETL/ELT pipelines using Azure Data Factory (ADF)
- Implement and manage Medallion Architecture (Bronze, Silver, Gold layers)
- Optimize data models, data warehouses, and performance in Azure Synapse or similar platforms
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements
- Ensure data quality, reliability, and security across data platforms
- Troubleshoot and resolve data pipeline and performance issues
- Participate in code reviews and follow best practices for data engineering and cloud architecture
Required Skills & Qualifications:
- 6+ years of experience in Data Engineering, preferably in cloud-based environments
- Strong hands-on experience with:
1. PySpark & Spark SQL
2. Microsoft Fabric / Databricks
3. Azure Data Factory (ADF)
4. Azure Synapse Analytics
- Strong understanding of data modeling, data lake management, and data warehousing concepts
- Proven experience implementing Medallion Architecture
- Bachelor's or Master's degree in Computer Science, IT, or a related field
Posted by: Lead - Talent Acquisition - Talent Cloud at Netlink Software Private Limited
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1608851