Posted on: 10/10/2025
About the Opportunity:
We are looking for a skilled Azure Data Engineer / Data Engineer with hands-on experience in Azure Data Factory, Snowflake, Databricks, and DBT to architect and implement large-scale data integration and transformation pipelines.
The ideal candidate will bring deep technical expertise in ETL/ELT design, data modeling, and big data processing, along with proficiency in Python and SQL for automating and optimizing complex data workflows.
You'll work in a high-impact role building and maintaining cloud-native data solutions that enable analytics, AI, and business intelligence initiatives across the organization.
This position is ideal for professionals passionate about designing high-performance, scalable, and secure data systems that serve as the backbone for enterprise decision-making.
What You'll Do:
- Design, develop, and maintain ETL/ELT pipelines using Azure Data Factory and Databricks for data ingestion, transformation, and orchestration.
- Implement data integration and transformation frameworks in Snowflake, ensuring high performance and scalability.
- Develop modular and reusable data pipelines with DBT for data transformation, lineage tracking, and testing.
- Build and manage data lakes and data warehouse solutions leveraging Azure Data Lake Storage (ADLS) and Snowflake.
- Write optimized SQL and Python scripts for data processing, validation, and automation.
- Collaborate with data scientists, BI developers, and product teams to ensure data availability, quality, and reliability.
- Design and implement data models (star/snowflake schemas) for analytics and reporting workloads.
- Optimize Spark and Databricks jobs for cost, performance, and scalability.
- Establish data quality validation, error handling, and automated monitoring frameworks.
- Integrate CI/CD pipelines for data workflows, ensuring repeatable, version-controlled deployments.
- Ensure compliance with data governance, security policies, and industry regulations.
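To give a flavor of the validation and error-handling work described above, here is a minimal sketch of a data-quality check in Python. The column names, rules, and thresholds are hypothetical examples, not part of any actual framework used on this team:

```python
# Minimal data-quality gate: split incoming rows into valid rows and
# quarantined errors. Column names ("order_id", "amount") are made up
# for illustration only.

def validate_rows(rows, required=("order_id", "amount")):
    """Return (valid, errors); errors pair each bad row with a reason."""
    valid, errors = [], []
    for row in rows:
        missing = [col for col in required if row.get(col) is None]
        if missing:
            errors.append((row, f"missing: {missing}"))
        elif row["amount"] < 0:
            errors.append((row, "negative amount"))
        else:
            valid.append(row)
    return valid, errors

rows = [
    {"order_id": 1, "amount": 19.99},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": -5.0},
]
valid, errors = validate_rows(rows)
print(len(valid), len(errors))  # 1 valid row, 2 quarantined
```

In practice a check like this would run inside a Databricks job or DBT test rather than plain Python, with failures routed to monitoring rather than printed.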
What You Bring:
- 6 to 10 years of professional experience in data engineering or ETL development, with a focus on cloud data platforms.
- Hands-on experience with Azure Data Factory, Azure Databricks, and Snowflake.
- Proficiency in SQL and Python for developing transformation logic, validation, and automation scripts.
- Expertise in data warehousing concepts, data lake architectures, and big data processing.
- Experience with DBT (Data Build Tool) for transformation management, modular pipelines, and testing.
- Strong understanding of Spark, Delta Lake, and Parquet for distributed data processing.
- Working knowledge of ETL/ELT pipeline orchestration, metadata management, and data lineage tracking.
- Experience implementing CI/CD pipelines and DevOps for data workflows.
- Familiarity with Azure cloud services, including ADLS, Synapse, and Key Vault.
- Strong analytical, troubleshooting, and performance optimization skills.
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
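The star-schema modeling mentioned above can be sketched with a toy fact table joined to a date dimension. This uses an in-memory SQLite database purely for illustration; the table and column names are invented, and a real warehouse would be Snowflake:

```python
import sqlite3

# Toy star schema: one fact table keyed to a date dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        date_key INTEGER REFERENCES dim_date(date_key),
        amount REAL
    );
    INSERT INTO dim_date VALUES (20250101, 2025, 1), (20250201, 2025, 2);
    INSERT INTO fact_sales VALUES
        (1, 20250101, 100.0), (2, 20250101, 50.0), (3, 20250201, 75.0);
""")

# Typical analytics query: join the fact to the dimension, aggregate by month.
rows = conn.execute("""
    SELECT d.year, d.month, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d USING (date_key)
    GROUP BY d.year, d.month ORDER BY d.month
""").fetchall()
print(rows)  # [(2025, 1, 150.0), (2025, 2, 75.0)]
```

The point of the shape is that reporting queries stay simple: one join per dimension, with all descriptive attributes (year, month) living in the dimension rather than the fact.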
Preferred Skills:
- Experience with data governance frameworks, role-based security, and compliance standards.
- Familiarity with AWS Redshift or GCP BigQuery for multi-cloud data integration.
- Exposure to Airflow, Prefect, or other workflow orchestration tools.
- Understanding of ML pipelines and feature store management within data ecosystems.
- Certifications in Azure Data Engineering, Snowflake, or Databricks are a plus.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1558111