
Job Description

About the Opportunity:


We are looking for an experienced Azure Data Engineer to design and implement high-performance data integration and transformation pipelines within the Microsoft Azure ecosystem.


This role requires strong hands-on expertise in Azure Data Factory (ADF), Databricks (PySpark), and SQL, with a solid understanding of big data processing and schema-on-read principles.


The ideal candidate will work closely with analytics and BI teams to build robust ETL/ELT pipelines, optimize data flow, and ensure scalability, accuracy, and security across enterprise data environments.


Key Responsibilities:


- Design, build, and manage ETL/ELT data pipelines using Azure Data Factory (ADF) and Azure Databricks (PySpark); a brief illustrative sketch follows this list.


- Develop scalable data ingestion, transformation, and processing solutions from multiple sources into Azure Data Lake or Synapse.


- Implement data validation, monitoring, and automation processes to ensure consistency and reliability.


- Collaborate with architects and data modelers to support data warehouse and data lake design.


- Optimize data pipelines for performance, scalability, and cost efficiency in Azure environments.


- Write complex SQL queries for data profiling, quality checks, and analytics enablement.


- Integrate Azure Synapse Analytics for advanced data orchestration and warehousing use cases.


- Work with reporting tools such as Power BI or Tableau to enable visualization and business reporting.


- Ensure adherence to data governance, security, and compliance standards within the Azure cloud ecosystem.
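

The responsibilities above centre on ADF-orchestrated Databricks (PySpark) pipelines. As a rough illustration only, the sketch below shows the kind of job such a pipeline might run: ingest raw CSV from ADLS Gen2, apply a simple transformation and validation, and publish a Delta table. The storage account, container names, paths, and column names are hypothetical placeholders, not part of this role's actual environment.

# Minimal PySpark sketch of an ingest-transform-validate-publish step.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders/"          # hypothetical
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales/orders/"  # hypothetical

# Ingest raw CSV files from the data lake.
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Transform: standardise a key column name, stamp the load date, drop exact duplicates.
curated = (
    orders
    .withColumnRenamed("OrderID", "order_id")
    .withColumn("load_date", F.current_date())
    .dropDuplicates()
)

# Validate before publishing: fail fast if the business key is missing.
null_keys = curated.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a null order_id; aborting load")

# Publish as a Delta table for downstream Synapse / Power BI consumption.
curated.write.format("delta").mode("overwrite").save(curated_path)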


Required Skills and Qualifications:


- Minimum of 5 years of experience in data engineering, with at least 3 years in Azure-based solutions.


- Strong proficiency in SQL, including complex joins, stored procedures, and performance tuning.


- Hands-on expertise with:


- Azure Data Factory (ADF) for ETL/ELT workflows.


- Azure Databricks using PySpark for distributed data processing.


- Azure Data Lake Storage (ADLS Gen2) for scalable storage and schema-on-read architecture.


- Understanding of big data principles, data pipelines, and schema-on-read methodologies (a short schema-on-read sketch follows this list).


- Experience developing and automating data transformations, monitoring, and CI/CD workflows.


- Strong scripting skills in Python (PySpark) for data manipulation and automation.


- Working knowledge of Power BI or Tableau for dashboarding and reporting.


- Good understanding of data governance, access control, and cloud security best practices.


- Excellent problem-solving, communication, and collaboration skills.
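

Several of the skills above touch on schema-on-read over ADLS Gen2. Purely as an assumption-laden sketch, the snippet below shows the idea: raw JSON lands in the lake untyped, and an explicit schema is applied only when the data is read for analysis. The path, schema, and field names are made up for illustration.

# Schema-on-read sketch: structure is imposed at read time, not at write time.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-schema-on-read").getOrCreate()

# Hypothetical schema for raw event JSON already sitting in ADLS Gen2.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.read
    .schema(event_schema)   # schema applied here, at read time
    .json("abfss://raw@examplestorage.dfs.core.windows.net/events/")  # hypothetical path
)

# Typical downstream use: aggregate for quality checks or analytics enablement.
daily_totals = (
    events
    .groupBy(F.to_date("event_time").alias("event_date"), "event_type")
    .agg(F.sum("amount").alias("total_amount"))
)
daily_totals.show()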


Preferred Skills:


- Experience with Azure Synapse Analytics or Azure SQL Data Warehouse.


- Knowledge of data orchestration and workflow tools such as Apache Airflow or Prefect (a minimal orchestration sketch follows this list).


- Exposure to Power BI DAX and data modeling for analytics.


- Familiarity with CI/CD pipelines using Azure DevOps or GitHub Actions.


- Certifications such as Microsoft Certified: Azure Data Engineer Associate are a plus.
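

For the orchestration tools mentioned above, a minimal Apache Airflow sketch is shown below, only to indicate the style of work: a daily DAG that runs an ingest task followed by a transform task. The DAG id, task names, and task bodies are placeholders; in practice the tasks might trigger ADF pipelines or Databricks jobs.

# Minimal Airflow DAG sketch (Airflow 2.x); names and task bodies are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_orders():
    # Placeholder: could trigger an ADF pipeline or a Databricks job here.
    print("ingesting raw orders")

def transform_orders():
    # Placeholder: could run a PySpark transformation here.
    print("transforming curated orders")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_orders", python_callable=ingest_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)
    ingest >> transform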

