Posted on: 18/03/2026
Description:
Key Responsibilities:
- Implement Medallion architecture (Bronze/Silver/Gold) using Delta tables in OneLake.
- Build ingestion pipelines from CSV/Excel/REST/SQL sources using Fabric Data Factory pipelines and Dataflows Gen2.
- Develop PySpark notebooks for cleansing, transformation, and aggregation.
- Design star schema in Gold layer; publish Semantic Model for reporting.
- Enable governance: lineage, workspace roles, sensitivity labels, data catalog visibility.
- Benchmark performance: load time, query latency, refresh duration; observe capacity/cost behavior.
- Document architecture, design choices, limitations, and comparison with legacy stack.
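The Bronze/Silver/Gold flow in the responsibilities above can be sketched conceptually. This is a plain-Python stand-in with hypothetical sample records and field names (`order_id`, `region`, `amount`), not actual PySpark/Delta code; in Fabric each layer would be a Delta table in OneLake processed by a PySpark notebook:

```python
# Conceptual Medallion pipeline: Bronze (raw) -> Silver (cleansed) -> Gold (aggregated).
# Plain-Python stand-in for illustration; field names are hypothetical.

# Bronze: raw ingested records, duplicates and bad rows included.
bronze = [
    {"order_id": "1", "region": "EU", "amount": "100.0"},
    {"order_id": "1", "region": "EU", "amount": "100.0"},  # duplicate
    {"order_id": "2", "region": "US", "amount": "bad"},    # unparseable amount
    {"order_id": "3", "region": "EU", "amount": "50.5"},
]

def to_silver(rows):
    """Cleanse: deduplicate by order_id, cast amount to float, drop bad rows."""
    seen, silver = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these rows instead
        seen.add(r["order_id"])
        silver.append({"order_id": r["order_id"], "region": r["region"], "amount": amount})
    return silver

def to_gold(rows):
    """Aggregate: total sales per region, a Gold-layer summary for reporting."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'EU': 150.5}
```

The same shape carries over to PySpark: `to_silver` becomes `dropDuplicates`/`cast`/`filter` on a DataFrame, and `to_gold` becomes a `groupBy("region").sum("amount")` written to a Gold Delta table.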
Must-Have Skills:
- Strong PySpark for ELT transformations on Delta Lake.
- Data modeling: star schema, Semantic Model design.
- Power BI: advanced DAX, report optimization, RLS.
- Advanced SQL for analytics workloads.
- Understanding of Medallion architecture and Lakehouse patterns.
- Knowledge of governance: lineage, access control, sensitivity labeling.
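The RLS and access-control items above boil down to one idea: a security predicate filters fact rows per user. A minimal plain-Python sketch, with a hypothetical user-to-region mapping (in Power BI this would be a DAX filter on a security role; in the Lakehouse, a predicate applied at query time):

```python
# Row-level security (RLS) sketch: each user sees only rows for regions they
# are granted. USER_REGIONS and FACT_SALES are hypothetical illustration data.

USER_REGIONS = {"alice": {"EU"}, "bob": {"EU", "US"}}  # assumed security mapping

FACT_SALES = [
    {"region": "EU", "amount": 100.0},
    {"region": "US", "amount": 75.0},
    {"region": "APAC", "amount": 40.0},
]

def rows_for(user, rows):
    """Apply the RLS predicate: keep only rows whose region is granted to the user."""
    allowed = USER_REGIONS.get(user, set())  # unknown users see nothing
    return [r for r in rows if r["region"] in allowed]

print([r["region"] for r in rows_for("alice", FACT_SALES)])  # ['EU']
```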
Good-to-Have Skills:
- REST API ingestion patterns and incremental loads.
- Partitioning, file sizing, and performance tuning strategies.
- Row- and column-level security design patterns.
- Awareness of Fabric capacity/SKU behavior and cost considerations.
- Comparative knowledge: Fabric vs. the legacy Synapse + ADLS + Power BI architecture.
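The incremental-load pattern mentioned above is usually watermark-based: pull only rows modified since the last run, then advance the watermark. A minimal plain-Python sketch with hypothetical source data; in Fabric the watermark would typically be persisted in a control table or pipeline variable:

```python
# Watermark-based incremental load sketch. SOURCE and its fields are
# hypothetical; timestamps are ISO-8601 strings, which compare lexically
# in the same order as chronologically.

SOURCE = [
    {"id": 1, "modified": "2026-03-01T10:00:00"},
    {"id": 2, "modified": "2026-03-05T09:30:00"},
    {"id": 3, "modified": "2026-03-10T12:15:00"},
]

def incremental_load(source, watermark):
    """Return rows newer than the watermark, plus the advanced watermark."""
    new_rows = [r for r in source if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows, wm = incremental_load(SOURCE, "2026-03-04T00:00:00")
print(len(rows), wm)  # 2 2026-03-10T12:15:00
```

Running the function again with the returned watermark yields no rows, which is the idempotence property an incremental REST/SQL ingestion pipeline relies on between scheduled refreshes.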
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1621698