Posted on: 08/07/2025
About the Role:
We are seeking a highly experienced and skilled Senior Data Engineer with strong expertise in Azure, Snowflake, and Azure Data Factory (ADF). The ideal candidate will design and build efficient, secure, and scalable data pipelines and cloud data solutions that power analytics, machine learning, and enterprise data warehousing.
You will play a key role in modernizing the company's data ecosystem and driving advanced analytics through structured, semi-structured, and real-time data integrations.
Key Responsibilities:
- Design, build, and manage robust data pipelines and ETL/ELT workflows using Azure Data Factory, Azure Functions, and other Azure services.
- Develop and optimize data models in Snowflake to support BI, analytics, and ML use cases.
- Implement data architecture and frameworks that support large-scale, real-time data ingestion and transformation.
- Ensure data reliability, consistency, availability, and quality across all pipelines.
- Develop solutions on Azure Cloud Platform, integrating with services such as Azure Data Lake, Azure Synapse, Azure SQL, and Function Apps.
- Build reusable components and automate deployment using Terraform or other Infrastructure-as-Code tools.
- Manage and implement CI/CD pipelines using Azure DevOps, Argo, or equivalent frameworks.
- Implement and manage real-time data streaming using Kafka.
- Build event-driven data flows that support advanced analytics, alerting, and monitoring.
- Set up and configure monitoring tools such as Prometheus, Grafana, and Azure Application Insights to ensure system health and performance.
- Ensure compliance with data governance and security policies, and manage private endpoints, access controls, and encryption for data protection.
Required Qualifications:
- 5+ years of hands-on experience in data engineering, preferably in a cloud-native environment.
- Strong experience with Azure services, especially ADF, Function Apps, Azure SQL, Blob Storage, and Data Lake.
- Expertise in Snowflake, including schema design, performance tuning, data loading/unloading, and security best practices.
- Experience with streaming platforms such as Apache Kafka.
- Proficiency with Terraform, CI/CD pipelines, and DevOps practices.
- Solid experience in Python, SQL, and JSON/XML data processing.
- Knowledge of monitoring and logging tools like Prometheus, Grafana, Log Analytics, and App Insights.
- Familiarity with ML/AI deployment pipelines is a strong plus.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1509548