hirist

Job Description

Role Overview:

ZettaMine is seeking a skilled and experienced Data Engineer to design, build, and maintain robust data pipelines and infrastructure on the Azure platform. The ideal candidate will have hands-on experience working with modern data technologies and tools, and a strong understanding of data modeling, ETL/ELT pipelines, and cloud-native services in Azure.

This role requires working closely with data architects, analysts, and business stakeholders to enable data-driven decision-making by ensuring the availability, quality, and reliability of enterprise data.

Key Responsibilities:

- Design, develop, and optimize scalable ETL/ELT pipelines for both batch and real-time/streaming data ingestion and processing.

- Develop robust, reusable, and high-performance data ingestion frameworks using Azure Data Factory (ADF) and Azure Databricks.

- Implement workflows to ingest structured and unstructured data from various on-prem and cloud sources.

- Build and maintain data solutions using Azure Data Services, including ADF, Azure Synapse Analytics, Data Lake, Blob Storage, and Databricks.

- Ensure seamless integration of data systems across different Azure services.

- Collaborate with platform and infrastructure teams to optimize data storage, access, and performance.

- Design and implement data models (star/snowflake, normalized/denormalized) suitable for analytical and operational workloads.

- Perform complex data transformations and aggregations to make data analytics-ready.

- Apply data quality and governance standards across pipelines.

- Optimize performance of data processes through indexing, partitioning, caching, and tuning of Spark SQL or T-SQL queries.

- Monitor pipeline performance, troubleshoot issues, and ensure reliability and scalability.

- Lead or support data migration initiatives from on-premises systems to Azure cloud environments.

- Integrate various data sources, including APIs, databases, flat files, and streaming services.

- Follow CI/CD practices using DevOps tools for automated deployment and testing.

- Maintain detailed documentation of workflows, data models, and system configurations.

- Advocate for and implement data security, privacy, and compliance best practices.
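The ingestion responsibilities above (incremental batch loads, reusable frameworks, reliability) commonly follow a watermark pattern: track the newest timestamp already loaded, and pull only rows beyond it on each run. The sketch below shows the idea with stdlib sqlite3 standing in for the real source and sink; in practice the same logic would run inside an ADF pipeline or a Databricks job against, e.g., an on-prem database and a Synapse table. All table and column names here are hypothetical.

```python
import sqlite3

# Hypothetical source and target stores; in a real pipeline these would be
# reached via ADF linked services or Databricks JDBC connections.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")

src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, 10.0, "2024-01-01"),
    (2, 25.5, "2024-01-02"),
    (3, 40.0, "2024-01-03"),
])

tgt.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
tgt.execute("CREATE TABLE watermark (table_name TEXT PRIMARY KEY, last_value TEXT)")
tgt.execute("INSERT INTO watermark VALUES ('orders', '2024-01-01')")

def incremental_load(table: str) -> int:
    """Copy only rows newer than the stored watermark, then advance it."""
    (last,) = tgt.execute(
        "SELECT last_value FROM watermark WHERE table_name = ?", (table,)
    ).fetchone()
    rows = src.execute(
        f"SELECT id, amount, updated_at FROM {table} WHERE updated_at > ?", (last,)
    ).fetchall()
    if rows:
        # Upsert keeps the load idempotent if a window is replayed.
        tgt.executemany(f"INSERT OR REPLACE INTO {table} VALUES (?, ?, ?)", rows)
        new_mark = max(r[2] for r in rows)
        tgt.execute(
            "UPDATE watermark SET last_value = ? WHERE table_name = ?",
            (new_mark, table),
        )
    tgt.commit()
    return len(rows)

loaded = incremental_load("orders")
print(loaded)  # 2 — only the rows newer than the 2024-01-01 watermark
```

Because the watermark advances only after a successful copy, re-running the job loads nothing new, which is the property that makes scheduled batch ingestion safe to retry.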

Required Skills & Experience:

- 6 to 10 years of experience in Data Engineering, with a strong focus on the Azure ecosystem

- Expertise in:

1. Python for data processing and transformation

2. SQL (T-SQL or Spark SQL) for querying and scripting

3. Azure Data Factory (ADF): pipelines, triggers, and linked services

4. Azure Synapse Analytics (or Azure SQL Data Warehouse)

5. Azure Databricks: notebooks, Spark, and data exploration & transformation

- Experience building data ingestion frameworks for batch and streaming data using tools such as Event Hubs, Kafka, or Azure Stream Analytics (preferred)

- Strong understanding of data modeling techniques, data warehousing, and analytics

- Experience with performance tuning, monitoring, and troubleshooting

- Familiarity with version control (Git) and CI/CD practices
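The data-modeling and SQL skills listed above usually come together in star-schema work: fact tables joined to dimension tables to produce analytics-ready aggregates. A minimal sketch, using stdlib sqlite3 so it is self-contained; the same query shape applies in T-SQL or Spark SQL, and all table and column names are illustrative.

```python
import sqlite3

# One dimension plus one fact table: the smallest possible star schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    revenue     REAL
);
""")
con.executemany("INSERT INTO dim_product VALUES (?, ?, ?)", [
    (1, "Widget", "Hardware"),
    (2, "Gadget", "Hardware"),
    (3, "License", "Software"),
])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)", [
    (100, 1, 2, 20.0),
    (101, 2, 1, 15.0),
    (102, 3, 5, 500.0),
])

# Typical analytics-ready aggregation: revenue rolled up by dimension attribute.
rows = con.execute("""
    SELECT d.category, SUM(f.revenue) AS total_revenue
    FROM fact_sales AS f
    JOIN dim_product AS d USING (product_key)
    GROUP BY d.category
    ORDER BY total_revenue DESC
""").fetchall()
print(rows)  # [('Software', 500.0), ('Hardware', 35.0)]
```

Keeping descriptive attributes in the dimension and only keys plus measures in the fact table is what makes such rollups cheap to write and easy to tune (e.g., by partitioning the fact table on the join key).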
