Posted on: 03/12/2025
Description:
- Data Factory, Notebooks, PySpark, Delta Live Tables, Dataflow Gen2, Shortcuts, Fabric Lakehouse/Warehouse, Copy Job, Mirroring, Eventstream, KQL Database, Fabric SQL DB, Semantic Model (optional), Fabric Data Agent.
- Experience integrating and modernizing data from on-premises, cloud, web service, API, and file sources: ADLS Gen2, SQL Database, Azure Synapse, Event Hub, SFTP, Salesforce, Dynamics 365, etc.
- Experience designing and developing metadata-driven frameworks for data engineering processes (see the PySpark sketch after this list).
- Strong programming, debugging, and performance tuning skills in Python and SQL.
- Strong architectural understanding of Fabric workloads, including the pros, cons, and cost trade-offs of proposing the right component for a given scenario.
- Good experience setting up Fabric workspaces, provisioning access, and managing capacity and cost control.
- Good experience with data modeling (both dimensional and 3NF).
- Good exposure to developing LLM/GenAI-powered applications.
- Sound understanding of CI/CD processes using Azure DevOps and Fabric deployment pipelines.
- Exposure to technologies like Neo4j, Cosmos DB, and vector databases is desirable.
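As an illustration of the metadata-driven framework skill called out above, here is a minimal PySpark sketch. It assumes a hypothetical control table (control.ingestion_metadata) that maps each source to a Delta target table in a Fabric Lakehouse; all table names, column names, and paths below are invented for illustration and are not part of this posting.

    # Minimal sketch of a metadata-driven ingestion loop on a Fabric Lakehouse.
    # Control table schema (hypothetical): one row per source, with columns
    # (source_name, source_path, file_format, target_table, load_type).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Read the control metadata that drives the whole ingestion run.
    control_rows = spark.read.table("control.ingestion_metadata").collect()

    for row in control_rows:
        # Read each source according to its declared format and path.
        df = spark.read.format(row["file_format"]).load(row["source_path"])
        writer = df.write.format("delta")
        if row["load_type"] == "overwrite":
            # Full reload: replace the target table's contents.
            writer.mode("overwrite").saveAsTable(row["target_table"])
        else:
            # Default to append for incremental loads; a real framework
            # would extend this branch with merge/upsert logic.
            writer.mode("append").saveAsTable(row["target_table"])

A production framework would typically add per-source logging, error handling, and watermark tracking on top of this loop; the sketch only shows the core pattern of driving ingestion from metadata rather than hard-coded pipelines.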
Posted in: Data Engineering
Functional Area: Technical / Solution Architect
Job Code: 1584138