hirist

Job Description

About the Role:

We are looking for a highly skilled Senior Azure Data Engineer to join our data engineering team on a contractual basis. The ideal candidate will have a strong background in building modern data pipelines and architectures on the Azure Data Platform. You should be proficient in PySpark, Azure Databricks, and Azure Data Factory (ADF), and have a solid understanding of data warehousing, data governance, and ETL practices.

This is a remote opportunity aligned with Indian Standard Time (IST) working hours. You will collaborate with cross-functional teams to deliver end-to-end data engineering solutions that are robust, scalable, and optimized for performance.

Key Responsibilities:

- Design, develop, and deploy scalable and secure data pipelines on the Azure platform using tools such as Databricks, Azure Data Factory (ADF), and ADLS Gen2.

- Write efficient and modular PySpark code to transform, cleanse, and enrich data from multiple sources.

- Work with structured and semi-structured data formats, ensuring high performance, reusability, and scalability of data solutions.

- Develop ETL workflows, data ingestion processes, and data transformations across cloud and on-prem environments.

- Collaborate with Data Architects, BI Engineers, Analysts, and business stakeholders to gather requirements and define data models.

- Apply best practices in data governance, security, data modeling, and documentation.

- Optimize existing data processes for performance and reliability in a distributed computing environment.

- Use Azure DevOps for code versioning, deployment automation, and CI/CD integration.

- Containerize data solutions, where applicable, using Docker or other container technologies.

- Participate actively in Agile ceremonies, SDLC processes, sprint planning, and daily standups.

- Propose and implement innovative data engineering solutions to solve complex business problems.

Must-Have Skills & Qualifications:

- 3+ years of hands-on experience in the data engineering domain with a focus on the Azure ecosystem.

- Strong programming skills in SQL, Python, and PySpark.

- Expertise in Azure Databricks, Azure Data Factory (ADF), Azure Data Lake Storage Gen2 (ADLS), and Azure Key Vault.

- Deep understanding of Apache Spark and distributed systems.

- Solid experience in building and maintaining ETL pipelines, data integration frameworks, and data warehouse solutions.

- Strong grasp of data modeling, data governance, and data quality principles.

- Experience working in Agile development environments and understanding of SDLC.

- Familiarity with Azure DevOps, including pipelines and repositories for CI/CD.

- Ability to break down complex problems into actionable data solutions with clear Data Flow Diagrams (DFDs).

- Quick thinker with excellent logical reasoning and communication skills, able to discuss technically complex topics with clarity.

- Experience with Docker/containerization for packaging and deployment of data solutions.
