Posted on: 17/11/2025
Description:
- About the project: A data solution that gathers and transforms data from multiple heterogeneous systems across our client's organization into an Azure Data Lakehouse using Databricks and Azure Data Factory (ADF).
Responsibilities:
- Design complex ETL processes that load data from various sources into the Data Lakehouse.
- Build new and maintain existing data pipelines using Python to improve efficiency and reduce latency.
- Improve data quality through anomaly detection: build and work with internal tools that measure data and automatically detect changes.
- Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Perform data modeling and improve our existing data models for analytics.
- Collaborate with SMEs, architects, analysts, data scientists, and others to build solutions that integrate data from many of our enterprise data sources.
- Partner with stakeholders, including data, design, product, and executive teams, and assist them with data-related technical issues.
Requirements:
- 3-5 years of commercial experience in building and maintaining complex data pipelines.
- Proficiency in Python and SQL.
- Well-versed in the optimization of ETL processes.
- Solid skills in Bash scripting.
- Solid knowledge of relational (SQL) and NoSQL databases: PostgreSQL, Redshift, Snowflake, MongoDB.
- Knowledge of job scheduling and orchestration tools (e.g., Airflow).
- Good knowledge of cloud services (Azure).
- Solid expertise in data processing tools on Azure: Azure Data Factory, Databricks, Azure Synapse, as well as Elasticsearch.
- Working knowledge of CI/CD tooling (Docker, Jenkins, Terraform, Git).
- Good understanding of algorithms and data structures.
- Experience in schema and dimensional data design.
- Excellent communication skills, both written and verbal.
- English level: Upper-Intermediate or higher.
- Nice to have:
  - Databricks certification.
  - Microsoft Azure certification.
We offer:
- A competitive salary and a good compensation package.
- Personalized career growth.
- Professional development tools (mentorship program, tech talks and training sessions, centers of excellence, and more).
- Active tech communities with regular knowledge sharing.
- Education reimbursement.
- Memorable anniversary presents.
- Corporate events and team-building activities.
Functional Area: Data Engineering
Job Code: 1576362