Posted on: 15/07/2025
We are seeking a talented and experienced Data Engineer with a strong background in Microsoft Fabric to design, develop, and maintain robust, scalable, and secure data solutions.
You'll play a crucial role in building and optimizing data pipelines, data warehouses, and data lakehouses within the Microsoft Fabric ecosystem to enable advanced analytics and business intelligence.
Key Responsibilities:
- Design and Development: Architect, design, develop, and implement end-to-end data solutions within the Microsoft Fabric ecosystem, including Lakehouse, Data Warehouse, and Real-Time Analytics components.
- Data Pipeline Construction: Build, test, and maintain robust and scalable data pipelines for data ingestion, transformation, and curation from diverse sources using Microsoft Fabric Data Factory (Pipelines and Dataflows Gen2) and Azure Databricks (PySpark/Scala).
- Data Modeling & Optimization: Develop and optimize data models within Microsoft Fabric, adhering to best practices for performance, scalability, and data integrity (e.g., dimensional modeling).
- ETL/ELT Processes: Implement efficient ETL/ELT processes to extract data from various sources, transform it into suitable formats, and load it into the data lakehouse or analytical systems.
- Performance Tuning: Continuously monitor and fine-tune data pipelines and processing workflows to enhance overall performance and efficiency, especially for large-scale datasets.
- Data Quality & Governance: Design and implement data quality, validation, and reconciliation processes to ensure data accuracy and reliability.
- Security & Compliance: Ensure data security and compliance with data privacy regulations.
- Collaboration: Work closely with data architects, data scientists, business intelligence developers, and business stakeholders to understand data requirements and translate them into technical solutions.
- Automation & CI/CD: Implement CI/CD pipelines for data solutions within Azure DevOps or similar tools, ensuring automated deployment and version control.
- Troubleshooting: Diagnose and resolve complex data-related issues and performance bottlenecks.
- Documentation: Maintain comprehensive documentation for data architectures, pipelines, data models, and processes.
- Stay Updated: Keep abreast of the latest advancements in Microsoft Fabric, Azure data services, and data engineering best practices.
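To make the data quality and reconciliation responsibility above concrete, here is a minimal sketch in plain Python of the kind of validation step such pipelines include. The record layout, column names, and rules are hypothetical, not part of the role description:

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    valid_rows: list
    rejected_rows: list

def validate_orders(rows):
    """Split incoming order records into valid and rejected sets.

    Hypothetical rules: 'order_id' must be present and non-empty,
    and 'amount' must be a non-negative number.
    """
    valid, rejected = [], []
    for row in rows:
        amount = row.get("amount")
        if row.get("order_id") and isinstance(amount, (int, float)) and amount >= 0:
            valid.append(row)
        else:
            rejected.append(row)
    return ValidationResult(valid, rejected)

batch = [
    {"order_id": "A1", "amount": 120.0},
    {"order_id": None, "amount": 35.0},  # missing key -> rejected
    {"order_id": "A3", "amount": -5},    # negative amount -> rejected
]
result = validate_orders(batch)

# Reconciliation check: rows in must equal valid + rejected rows out.
assert len(batch) == len(result.valid_rows) + len(result.rejected_rows)
```

In a Fabric pipeline, the same split-and-reconcile pattern would typically run over Spark DataFrames, with rejected rows written to a quarantine table for review.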
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related quantitative field, or equivalent practical experience.
- 6+ years of hands-on experience as a Data Engineer or Data Architect.
- Mandatory hands-on experience with Microsoft Fabric, including its core components such as Lakehouse, Data Warehouse, and Data Factory (Pipelines, Dataflows Gen2), and Spark notebooks.
- Strong expertise in Microsoft Azure data services, including:
1. Azure Databricks (PySpark/Scala for complex data processing and transformations).
2. Azure Data Lake Storage Gen2 (for scalable data storage).
3. Azure Data Factory (for ETL/ELT orchestration).
- Proficiency in SQL for data manipulation and querying.
- Experience with Python or Scala for data engineering tasks.
- Solid understanding of data warehousing concepts, data modeling (dimensional, relational), and data lakehouse architectures.
- Experience with version control systems (e.g., Git, Azure Repos).
- Strong analytical and problem-solving skills with a keen eye for detail.
- Excellent communication (written and verbal) and interpersonal skills to collaborate effectively with cross-functional teams.
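As an illustration of the SQL and dimensional-modeling skills listed above, the sketch below builds a tiny star schema (one fact table joined to one dimension table) in an in-memory SQLite database. All table and column names are hypothetical examples, not a specification from this role:

```python
import sqlite3

# Illustrative star-schema fragment: a sales fact table joined to a
# product dimension, aggregated per product (dimensional modeling).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT
    );
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        revenue     REAL
    );
    INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
    INSERT INTO fact_sales  VALUES (10, 1, 3, 30.0),
                                   (11, 1, 2, 20.0),
                                   (12, 2, 1, 15.0);
""")

rows = conn.execute("""
    SELECT p.product_name, SUM(f.revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.product_name
    ORDER BY total_revenue DESC
""").fetchall()
# rows -> [('Widget', 50.0), ('Gadget', 15.0)]
```

In Microsoft Fabric, the same star-schema pattern would typically live in a Warehouse or a Lakehouse SQL analytics endpoint rather than SQLite; the modeling concepts carry over unchanged.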
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1513346