Posted on: 13/09/2025
Job Description:
We are a team of data professionals who work with large datasets every day. With over 40 years of combined experience, we have a unique approach to data engineering.
Key Responsibilities:
- Design and develop big data pipelines using Azure Databricks, PySpark, and Delta Lake.
- Work with Azure Data Lake, Azure Synapse, Azure SQL Database, and Azure Data Factory to implement robust data solutions.
- Develop and maintain ETL/ELT workflows for efficient data ingestion, transformation, and processing.
- Implement data governance, security, and compliance best practices in Azure environments.
- Optimize Databricks clusters, workflows, and cost efficiency in cloud environments.
- Collaborate with data scientists, analysts, and business stakeholders to ensure high-quality data solutions.
- Implement CI/CD pipelines for data engineering workflows using Azure DevOps.
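The CI/CD responsibility above is commonly implemented with an Azure DevOps YAML pipeline that deploys notebooks to a Databricks workspace. The fragment below is an illustrative sketch only: the branch name, target workspace path, and variable names are assumptions, and credentials are expected to come from a secret variable group rather than the file itself.

```yaml
# Illustrative azure-pipelines.yml; paths and variable names are assumptions.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.10'

  - script: |
      pip install databricks-cli
      # DATABRICKS_HOST and DATABRICKS_TOKEN are supplied via a secret variable group
      databricks workspace import_dir notebooks /Shared/etl --overwrite
    env:
      DATABRICKS_HOST: $(DATABRICKS_HOST)
      DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
    displayName: Deploy notebooks to Databricks
```

In practice the deploy step would usually be gated behind a test stage and environment approvals; teams on newer tooling may prefer Databricks Asset Bundles over the legacy CLI shown here.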
Required Qualifications & Skills:
- Databricks Certified Data Engineer Associate (Preferred)
- Databricks Certified Data Engineer Professional
- Azure Cloud Services: Azure Databricks, Azure Data Factory, Azure Data Lake, Azure Synapse, Azure Functions
- Big Data & ETL: PySpark, SQL, Delta Lake, Kafka (Preferred)
- Programming: Python, SQL, Scala (Optional)
- Orchestration & Automation: Airflow, Azure DevOps, GitHub Actions
- Data Governance & Security: Unity Catalog, RBAC, PII masking
- Performance Optimization: Spark tuning, Databricks cluster configuration
- Experience: proven data engineering experience with a focus on Azure and Databricks
- Strong understanding of data modeling, warehousing, and governance best practices
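The PII-masking skill listed above is often implemented as a small masking function applied through a Spark UDF or a Unity Catalog column mask. The sketch below is a plain-Python illustration; the function name and the masking policy (keep the first character of the local part of an e-mail address) are assumptions, not details from the posting.

```python
import re

def mask_email(value):
    """Mask the local part of an e-mail address, keeping its first character.

    Non-e-mail strings are returned unchanged; None stays None, which keeps
    the function safe to apply over nullable columns.
    """
    if value is None:
        return None
    match = re.match(r"^([^@])([^@]*)(@.+)$", value)
    if not match:
        return value
    first, rest, domain = match.groups()
    return first + "*" * len(rest) + domain

# In Databricks this could be registered as a UDF, e.g.:
#   spark.udf.register("mask_email", mask_email)
```

For example, `mask_email("john.doe@example.com")` returns `"j*******@example.com"`. Built-in column-level masking in Unity Catalog is usually preferable where available, since it enforces the policy centrally rather than per query.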
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1545386