Posted on: 28/10/2025
Description:
Key Responsibilities:
- Lead the design and implementation of modern data platforms across Azure, AWS, and Snowflake.
- Translate business requirements into robust technical solutions covering ingestion, transformation, integration, warehousing, and validation.
- Architect, build, and maintain data pipelines for analytics, reporting, and machine learning use cases (see the sketch after this list).
- Develop and maintain ETL processes to move data from multiple sources into cloud data lakes and warehouses.
- Design and implement data models, lineage, and metadata management to ensure consistency and traceability.
- Optimize pipelines and workflows for performance, scalability, and cost efficiency.
- Enforce data quality, security, and governance standards across all environments.
- Support migration of legacy/on-premises ETL solutions to cloud-native platforms.
- Develop and tune SQL queries, database objects, and distributed processing workflows.
- Drive adoption of CI/CD, test automation, and DevOps practices in data engineering.
- Collaborate with architects, analysts, and data scientists to deliver end-to-end data solutions.
- Provide technical leadership, mentorship, and training to junior engineers.
- Produce and maintain comprehensive technical documentation.
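For illustration only, not a formal requirement: a minimal sketch of the kind of pipeline step described above, assuming PySpark; the S3 paths, column names, and aggregation logic are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

# Ingest: read raw JSON events from the lake (path is a placeholder).
events = spark.read.json("s3://raw-events/orders/2025/10/28/")

# Transform: drop bad records, then aggregate for reporting.
daily = (
    events
    .filter(F.col("order_id").isNotNull())
    .groupBy("region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Load: write the curated result back to the lake as Parquet.
daily.write.mode("overwrite").parquet("s3://curated/orders_daily/")
```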
Requirements & Skills:
- Strong experience designing and developing ETL/data pipelines on Azure, AWS, and Snowflake (see the sketch after this list).
- Proficiency in SQL, Python, and distributed processing (e.g., Spark, Databricks, EMR).
- Hands-on expertise with:
  - Azure: Data Factory, Synapse, Databricks, Azure SQL
  - AWS: Glue, Redshift, S3, Lambda, EMR
  - Snowflake: data warehousing, performance optimization, security features
- Solid understanding of data modeling, lineage, metadata management, and governance.
- Experience with CI/CD, infrastructure-as-code, and automation frameworks.
- Strong problem-solving and communication skills with the ability to work across teams.
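By way of illustration only: a minimal sketch of an S3-to-Snowflake ETL step of the kind listed above, assuming boto3, pandas, and the Snowflake Python connector (snowflake-connector-python[pandas]); all bucket, credential, and table names are hypothetical placeholders.

```python
import io

import boto3
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Extract: read a raw CSV from S3 (bucket and key are placeholders).
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="raw-data-bucket", Key="sales/2025/10/orders.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Transform: deduplicate and normalize column names for the warehouse.
df = df.drop_duplicates()
df.columns = [c.strip().upper().replace(" ", "_") for c in df.columns]

# Load: append the frame to a Snowflake staging table
# (connection parameters are placeholders).
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    # auto_create_table requires a recent connector version.
    write_pandas(conn, df, table_name="ORDERS_STAGING", auto_create_table=True)
finally:
    conn.close()
```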
Desired Profile:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- 6-10 years of progressive data engineering experience, with at least 5 years on cloud-based data platforms.
- Strong expertise in data modeling, database design, and warehousing concepts.
- Proficiency in Python (including Pandas, API integrations, and automation).
- Familiarity with varied data formats and sources (CSV, Parquet, JSON, APIs, relational and NoSQL databases), as shown in the sketch after this list.
- Exposure to modern orchestration and workflow tools, with a strong understanding of CI/CD practices.
- Experience with Databricks and Microsoft Fabric is a plus.
- Excellent analytical, problem-solving, and communication skills.
- Ability to evaluate new technologies and adopt them where appropriate.
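An illustrative sketch of the "varied data formats" point above, assuming pandas and requests; the file paths and API URL are hypothetical placeholders.

```python
import pandas as pd
import requests

# The same DataFrame API covers flat files in several formats
# (paths are placeholders).
csv_df = pd.read_csv("data/orders.csv")
parquet_df = pd.read_parquet("data/orders.parquet")  # needs pyarrow or fastparquet
json_df = pd.read_json("data/orders.json")

# A simple REST API integration: fetch JSON and flatten nested fields.
resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
api_df = pd.json_normalize(resp.json())
```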
Posted in: Data Analytics & BI
Functional Area: ML / DL Engineering
Job Code: 1565791