Posted on: 20/11/2025
Description:
Key Responsibilities:
- Build and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and cloud-native tools (a minimal sketch follows this list).
- Optimize ingestion and transformation workflows for performance and cost efficiency.
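By way of illustration, here is a minimal PySpark sketch of the kind of pipeline described above: read raw files, apply a simple transformation, and write a curated Delta table. The paths, column names, and table name are hypothetical placeholders, not details from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical locations and names, for illustration only.
    RAW_PATH = "/mnt/raw/orders/"
    TARGET_TABLE = "analytics.orders_clean"

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw CSV files landed by an upstream ingestion process.
    raw = spark.read.option("header", True).csv(RAW_PATH)

    # Transform: normalize types and drop rows missing the business key.
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("order_id").isNotNull())
    )

    # Load: write a curated Delta table for downstream consumers.
    clean.write.format("delta").mode("overwrite").saveAsTable(TARGET_TABLE)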
Data Architecture & Modeling:
- Design modern data architectures (Data Lake / Lakehouse / Data Warehouse) using Databricks Lakehouse, Snowflake, Redshift, or BigQuery.
- Develop and maintain dimensional and semantic data models for analytics and BI (illustrated in the sketch below).
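As a hedged illustration of dimensional modeling in this stack, the sketch below builds a star-schema fact table by resolving a natural key against a dimension's surrogate key. The table and column names (dim_customer, customer_sk, and so on) are assumptions made for the example.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical curated source and conformed dimension.
    orders = spark.table("analytics.orders_clean")
    dim_customer = spark.table("analytics.dim_customer")

    # Resolve the natural key (customer_id) to the dimension's surrogate key
    # and keep foreign keys plus additive measures: a classic star schema.
    fact_orders = (
        orders.join(
            dim_customer.select("customer_id", "customer_sk"),
            on="customer_id",
            how="left",
        )
        .select(
            "order_id",
            F.col("customer_sk").alias("customer_key"),
            F.to_date("order_ts").alias("order_date"),
            "amount",
        )
    )

    fact_orders.write.format("delta").mode("overwrite").saveAsTable("analytics.fact_orders")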
Data Quality & Governance:
- Implement automated validation, quality checks, and monitoring within Databricks workflows (see the example after this list).
- Support metadata management, lineage tracking, and governance initiatives.
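One way automated validation can be wired into a Databricks workflow is to fail the task when a check breaks, so the failure surfaces in job monitoring. A minimal sketch, assuming the hypothetical fact table from the earlier example:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.table("analytics.fact_orders")  # hypothetical table under test

    failures = []

    # Rule 1: the business key must be present and unique.
    if df.filter(F.col("order_id").isNull()).count() > 0:
        failures.append("null order_id values")
    if df.select("order_id").distinct().count() != df.count():
        failures.append("duplicate order_id values")

    # Rule 2: measures must be non-negative.
    if df.filter(F.col("amount") < 0).count() > 0:
        failures.append("negative amounts")

    # Raising makes the quality gate visible as a failed task in the job run.
    if failures:
        raise ValueError("Data quality checks failed: " + "; ".join(failures))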
Collaboration & Stakeholder Management:
- Work closely with data scientists, analysts, and business teams to deliver curated, production-grade datasets.
- Support deployment and operationalization of ML models within Databricks and MLOps frameworks (a brief example follows).
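On the MLOps side, one common Databricks pattern is logging and registering models with MLflow so downstream jobs can load them by name and version. A sketch with a toy scikit-learn model; the registered model name is hypothetical:

    import mlflow
    import mlflow.pyfunc
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Toy model stands in for a real training pipeline.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = LogisticRegression().fit(X, y)

    # Log the model and register it under a (hypothetical) name so it can
    # be versioned, reviewed, and promoted through the model registry.
    with mlflow.start_run():
        mlflow.sklearn.log_model(
            model, "model", registered_model_name="orders_churn_model"
        )

    # Production jobs then load a pinned version by registry URI.
    loaded = mlflow.pyfunc.load_model("models:/orders_churn_model/1")
    print(loaded.predict(X[:5]))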
Automation & CI/CD:
- Automate pipelines using Azure services or Databricks Workflows.
- Implement CI/CD and IaC practices for robust data operations (one such practice is sketched below).
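A concrete CI/CD practice implied here is unit-testing transformations so pipeline changes are validated before deployment. A pytest-style sketch against a local Spark session; add_order_date is a hypothetical transformation under test:

    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    def add_order_date(df: DataFrame) -> DataFrame:
        # Hypothetical transformation under test: derive a date column.
        return df.withColumn("order_date", F.to_date("order_ts"))

    def test_add_order_date():
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        df = spark.createDataFrame(
            [("o1", "2025-11-20 10:00:00")], ["order_id", "order_ts"]
        )
        out = add_order_date(df)
        assert "order_date" in out.columns
        assert str(out.first()["order_date"]) == "2025-11-20"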
Cloud & Infrastructure:
- Manage and optimize cloud data infrastructure on Azure.
- Use Databricks for scalable compute, collaborative development, and cost-effective processing (an example cluster configuration follows).
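Cost-effective Databricks compute is largely a configuration question. Below is a hedged sketch of a job-cluster definition in the shape accepted by the Databricks Jobs API; every concrete value (runtime version, VM size, worker counts) is a placeholder to be tuned per workload:

    # Job-cluster definition in the shape used by the Databricks Jobs API.
    # All concrete values below are placeholders, not recommendations.
    job_cluster_spec = {
        "spark_version": "15.4.x-scala2.12",   # placeholder runtime version
        "node_type_id": "Standard_DS3_v2",     # placeholder Azure VM size
        "autoscale": {"min_workers": 2, "max_workers": 8},  # scale with load
        "spark_conf": {
            # Adaptive query execution right-sizes shuffle partitions at runtime.
            "spark.sql.adaptive.enabled": "true",
        },
        "azure_attributes": {
            # Spot capacity with an on-demand fallback trims compute cost.
            "availability": "SPOT_WITH_FALLBACK_AZURE",
        },
    }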
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1578045