Posted on: 05/03/2026
Description:
Key Responsibilities:
- Lead the design, development, and maintenance of data pipelines and ETL processes for efficient data integration and transformation.
- Manage and optimize data storage and data flows on at least two of the following cloud ecosystems: GCP, AWS, Azure, Oracle Cloud.
- Work with large-scale datasets and streaming data, ensuring data quality, consistency, and reliability across systems.
- Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
- Implement and enforce data governance, security, and compliance standards.
- Develop and enhance cloud architecture that can be used for new proposals as well as for data engineering pipelines, refreshes, automations, and integrations.
- Monitor data pipelines, troubleshoot issues, and ensure high availability of data platforms.
- Optimize database performance and ensure cost-effective cloud resource utilization.
- Mentor junior engineers, provide technical guidance, and contribute to best practices in data engineering.
Qualifications:
- Strong proficiency in data storage and data flows on at least two of the following cloud ecosystems: GCP, AWS, Azure, Oracle Cloud.
- Hands-on experience with ETL tools (Oracle DI, Informatica, Talend, or similar).
- Advanced knowledge of SQL, PL/SQL, and database performance tuning.
- Solid understanding of data warehousing concepts and big data technologies.
- Strong skills in Python for data processing and automation.
- Experience with streaming data pipelines (Kafka, Spark Streaming).
- Experience developing or enhancing cloud and data-flow architecture, including data engineering pipelines, refreshes, automations, and integrations.
- Experience with application/data integrations involving internal and third-party APIs, MCPs, LLMs, and other multimodal language models is required.
- Web data harvesting, automation, and API integration are good-to-have skills.
- Knowledge of data modeling and data governance best practices.
- Exposure to containerization technologies (Docker, Kubernetes) is a plus.
- Strong analytical and problem-solving abilities.
- Excellent communication and collaboration skills.
- Ability to work independently, manage multiple priorities, and thrive in a fast-paced environment.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1618202