hirist

Job Description

Key Responsibilities:

- Design, develop, and manage Azure Data Factory pipelines, datasets, dataflows, and integration runtimes.

- Implement extract, transform, and load (ETL) processes using ADF, Azure Databricks, and Azure Synapse.

- Develop and optimize SQL scripts to perform complex queries and transformations.

- Build and monitor data migration pipelines from on-premises systems to Azure SQL or Synapse Analytics.

- Migrate databases from on-prem SQL Server to Azure using Azure Database Migration Service (DMS) and Data Migration Assistant.

- Create and maintain Synapse pipelines for migrating data from Data Lake Gen2 to Azure SQL.

- Work with Azure Data Catalog to manage and document data assets and lineage.

- Implement and optimize Big Data solutions for batch, interactive, and real-time data processing.

- Monitor and troubleshoot pipeline runs, ensuring data accuracy, reliability, and performance.

- Collaborate with cross-functional teams to understand business requirements and deliver scalable data solutions.

Required Skills & Qualifications:

- 6+ years of experience in data engineering or ETL development.

- Hands-on expertise with Azure Data Factory (ADF) pipeline creation, scheduling, monitoring, and debugging.

- Strong experience with Azure Databricks, Azure Synapse, Azure SQL Database, and Azure Data Lake Gen2.

- Proficiency in SQL development, including complex queries, stored procedures, and performance tuning.

- Experience in data migration using Azure DMS and Data Migration Assistant.

- Understanding of Microsoft Fabric and its data integration capabilities.

- Strong knowledge of data architecture principles, ETL frameworks, and data modeling concepts.

- Experience working with Big Data, batch, real-time, and interactive processing solutions.

- Familiarity with Azure Data Catalog for metadata and lineage management.

Preferred Qualifications:

- Experience with Power BI, ADF CI/CD pipelines, and DataOps practices.

- Knowledge of Python, PySpark, or Scala for data transformation in Databricks.

- Exposure to Azure DevOps, Git, and monitoring tools such as Log Analytics or Application Insights.

- Strong problem-solving and analytical skills.
