Posted on: 25/08/2025
We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks. The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines while demonstrating a learning mindset to adapt to evolving technologies.
Key Responsibilities:
- Collaborate with business and IT stakeholders to define business and functional requirements for data solutions.
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake.
- Develop detailed technical designs, data flow diagrams, and future-state data architecture.
- Evangelize modern data modelling practices, including entity-relationship models, star schema, and Kimball methodology.
- Ensure data governance, quality, and validation by working closely with quality engineering teams.
- Write, optimize, and troubleshoot complex SQL queries, including DDL, DML, and performance tuning.
- Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing.
- Implement DevOps and CI/CD best practices for automated data pipeline deployments.
- Support real-time streaming data processing with Spark, Kafka, or similar technologies.
- Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions.
- Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset.
Required Skills & Qualifications:
Primary Requirements:
- SQL Expertise: Strong hands-on experience with DDL, DML, query optimization, and performance tuning.
- Programming Languages: Proficiency in Python or Java for data processing and automation.
- Data Modelling: Good understanding of entity-relationship modelling, star schema, and Kimball methodology.
- Cloud Data Engineering: Hands-on experience with Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks, and Snowflake.
- ETL Development: Experience building scalable ETL/ELT pipelines and data ingestion workflows.
- Ability to learn and apply Snowflake concepts as needed.
- Communication Skills: Strong presentation and communication skills to engage both technical and business stakeholders in strategic discussions.
- Financial Services Domain (optional): Knowledge of the financial services domain.
Good-to-Have Skills:
- DevOps & CI/CD: Experience with Git, Jenkins, Docker, and automated deployments.
- Streaming Data Processing: Experience with Spark, Kafka, or real-time event-driven architectures.
- Data Governance & Security: Understanding of data security, compliance, and governance frameworks.
- AWS: Knowledge of AWS cloud data solutions (Glue, Redshift, Athena, etc.) is a plus.
Notice Period: 30 days
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1535034