hirist

Job Description

Position Overview:

The ideal candidate will have at least 5 years of experience in designing, developing and maintaining complex data pipelines on Azure.

Strong expertise in Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Databricks, and PySpark is required.

Key Responsibilities:

- Design, develop and maintain complex data pipelines on Azure to support business requirements.

- Migrate data from on-premises and cloud-based systems to Azure platforms, ensuring data integrity and security.

- Implement and optimize dimensional data models and data warehouses for scalable and efficient data processing.

- Work extensively with Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake and Azure SQL Database to manage and process large datasets.

- Write efficient and scalable code using Python and SQL to support data engineering workflows.

- Develop and maintain ETL/ELT processes, leveraging big data frameworks such as Apache Spark for large-scale data processing.

- Set up and manage Azure DevOps pipelines, Git repositories and CI/CD processes for seamless deployment and automation.

- Ensure compliance with Azure security best practices, including implementing Azure Active Directory (AAD), Key Vault, and role-based access control (RBAC) for secure data access.

- Collaborate with cross-functional teams to identify, develop and implement data solutions to meet organizational needs.

- Monitor and troubleshoot data workflows, ensuring high performance and reliability.
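As a rough illustration of the ETL pattern the responsibilities above describe, here is a minimal extract-transform-load sketch in plain Python. The dataset, column names, and functions are hypothetical; in this role the extract would come from ADF/ADLS and the transform would run as PySpark on Databricks.

```python
from datetime import datetime, timezone

# Hypothetical raw extract: in production these rows would be read from
# ADLS via ADF/Databricks, not held in an in-memory list.
RAW_ROWS = [
    {"order_id": "1001", "amount": "250.00", "region": "US", "ts": "2024-05-01T10:00:00"},
    {"order_id": "1002", "amount": "bad",    "region": "EU", "ts": "2024-05-01T11:30:00"},
    {"order_id": "1003", "amount": "99.50",  "region": "US", "ts": "2024-05-02T09:15:00"},
]

def transform(rows):
    """Cast types, drop rows that fail validation, and stamp the load time."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # real pipelines would quarantine bad records instead
        clean.append({
            "order_id": row["order_id"],
            "amount": amount,
            "region": row["region"],
            "event_ts": datetime.fromisoformat(row["ts"]),
            "loaded_at": datetime.now(timezone.utc),
        })
    return clean

def load(rows):
    """Aggregate into a tiny fact-style summary keyed by region."""
    summary = {}
    for row in rows:
        summary[row["region"]] = summary.get(row["region"], 0.0) + row["amount"]
    return summary

if __name__ == "__main__":
    print(load(transform(RAW_ROWS)))  # the malformed EU row is dropped
```

The same shape scales up directly: in PySpark, `transform` becomes DataFrame casts/filters and `load` becomes a grouped aggregation written back to the warehouse.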

Requirements:

- At least 5 years of experience in designing, developing and maintaining complex data pipelines on Azure.

- Strong expertise in ADF, ADLS, Databricks, and PySpark.

- Experience in data processing, transformation and performance optimization.

- Good communication skills to collaborate effectively with global teams.

- Experience working with US companies is a must.
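To make the dimensional-modeling requirement concrete, below is a minimal star-schema sketch. SQLite stands in for Azure SQL Database / Synapse, and all table and column names are hypothetical.

```python
import sqlite3

# In-memory SQLite as a stand-in for a cloud data warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One dimension table and one fact table referencing it (star schema).
cur.executescript("""
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT NOT NULL,
    region        TEXT NOT NULL
);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
    amount       REAL NOT NULL,
    sale_date    TEXT NOT NULL
);
""")

cur.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                [(1, "Acme", "US"), (2, "Globex", "EU")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, 250.0, "2024-05-01"),
                 (11, 1, 99.5,  "2024-05-02"),
                 (12, 2, 40.0,  "2024-05-02")])

# Typical warehouse query: aggregate the fact table by a dimension attribute.
cur.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer d USING (customer_key)
    GROUP BY d.region
    ORDER BY d.region
""")
print(cur.fetchall())  # [('EU', 40.0), ('US', 349.5)]
```

The same fact/dimension split is what the role's Synapse and Azure SQL work would build at scale.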

