Posted on: 20/07/2025
We are hiring a Data Engineer for one of our MNC clients.
Experience: 2 to 5 years
Location: Hybrid (Bangalore)
Job Type: 6-month contract, extendable
About the Role:
We are looking for a skilled Data Engineer with 2–5 years of experience to join our dynamic team. The ideal candidate will be responsible for designing and developing scalable, reusable, and efficient data pipelines using modern Data Engineering platforms such as Microsoft Fabric, PySpark, and Data Lakehouse architectures.
You will play a key role in integrating data from diverse sources, transforming it into actionable insights, and ensuring high standards of data governance and quality. This role requires a strong understanding of modern data architectures, pipeline observability, and performance optimization.
Key Responsibilities:
- Design and build robust data pipelines using Microsoft Fabric components including Pipelines, Notebooks (PySpark), Dataflows, and Lakehouse architecture.
- Ingest and transform data from a variety of sources such as cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (e.g., Salesforce, Workday), and REST/OpenAPI-based APIs.
- Develop and maintain semantic models and define standardized KPIs for reporting and analytics in Power BI or equivalent BI tools.
- Implement and manage Delta Tables across bronze/silver/gold layers using Lakehouse medallion architecture within OneLake or equivalent environments.
- Apply metadata-driven design principles to support pipeline parameterization, reusability, and scalability.
- Monitor, debug, and optimize pipeline performance; implement logging, alerting, and observability mechanisms.
- Establish and enforce data governance policies including schema versioning, data lineage tracking, role-based access control (RBAC), and audit trail mechanisms.
- Perform data quality checks including null detection, duplicate handling, schema drift management, outlier identification, and Slowly Changing Dimensions (SCD) management (a minimal illustrative sketch follows this list).
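To give a flavour of the day-to-day work, here is a minimal PySpark sketch of a bronze-to-silver promotion in a medallion-style Lakehouse, with basic null and duplicate checks before a Delta write. All table paths, column names, and the pipeline name are hypothetical placeholders, and the sketch assumes a Spark environment with Delta Lake already configured (as in Fabric notebooks or Databricks); it is not a prescribed implementation for this role.

```python
# Minimal sketch: promote raw (bronze) orders to a curated (silver) Delta table
# with simple data quality checks. Paths and columns are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("bronze-to-silver-orders")   # hypothetical pipeline name
    .getOrCreate()                        # assumes Delta Lake support is available
)

# Read the bronze layer; the path stands in for a Lakehouse/OneLake location.
bronze_df = spark.read.format("delta").load("Files/bronze/orders")

# Basic quality checks: drop rows missing key fields, de-duplicate on the business key,
# and stamp an ingestion timestamp as a simple audit column.
silver_df = (
    bronze_df
    .filter(F.col("order_id").isNotNull() & F.col("customer_id").isNotNull())
    .dropDuplicates(["order_id"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Surface a simple quality metric that could feed logging/alerting downstream.
null_rows = bronze_df.filter(F.col("order_id").isNull()).count()
print(f"Rows dropped for null order_id: {null_rows}")

# Write the silver layer as a Delta table; a production pipeline would typically
# use MERGE for incremental loads and SCD handling rather than a full overwrite.
silver_df.write.format("delta").mode("overwrite").save("Files/silver/orders")
```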
Required Skills & Qualifications :
- 25 years of hands-on experience in Data Engineering or related fields.
- Solid understanding of data lake/lakehouse architectures, preferably with Microsoft Fabric or equivalent tools (e.g., Databricks, Snowflake, Azure Synapse).
- Strong experience with PySpark, SQL, and working with dataflows and notebooks.
- Exposure to BI tools like Power BI, Tableau, or equivalent for data consumption layers.
- Experience with Delta Lake or similar transactional storage layers.
- Familiarity with data ingestion from SaaS applications, APIs, and enterprise databases.
- Understanding of data governance, lineage, and RBAC principles.
- Strong analytical, problem-solving, and communication skills.
Nice to Have:
- Prior experience with Microsoft Fabric and the OneLake platform.
- Knowledge of CI/CD practices in data engineering.
- Experience implementing monitoring/alerting tools for data pipelines.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1516009