Posted on: 24/02/2026
Description:
- Design, build, and optimize scalable data engineering solutions in Azure/GCP cloud environments.
- Develop secure and reliable ETL/ELT pipelines using Python, PySpark, SQL, and UNIX/Linux scripting.
- Apply expertise in query optimization, data structures, transformations, metadata management, dependency tracking, and workload orchestration.
- Automate workflows using orchestration and scheduling tools.
Cloud & DevOps Enablement:
Work with cloud-native services, including:
- Kubernetes, containerized services, cluster management, cloud storage, and workspace management.
- Build and maintain CI/CD pipelines following DevOps best practices.
- Collaborate effectively using Git-based version control in multi-developer environments.
AI/ML & Agentic Systems Support:
Support and enhance AI/ML initiatives, including:
- Feature engineering, model deployment, and MLOps platform integrations.
- Work with AI/ML frameworks such as TensorFlow and PyTorch, and libraries such as scikit-learn and MLflow.
- Build solutions using multi-agent AI tech stacks, including:
  - PydanticAI
  - LangChain
  - LangGraph
- Exposure to Agent-to-Agent Protocol or Model Context Protocol (MCP) is highly desirable.
Cross-functional Collaboration & Agile Delivery:
- Partner with data scientists, ML engineers, architects, and business teams.
- Translate complex technical concepts for non-technical stakeholders.
- Participate in Agile ceremonies within Scrum or Kanban frameworks.
Service & Operational Excellence:
- Track and manage work effectively using the ServiceNow ticketing system.
- Ensure performance, scalability, and operational excellence across data and AI systems.
Required Skills & Qualifications:
- 5 to 9 years of experience in data engineering or related roles.
- Strong hands-on programming in Python, PySpark, SQL, and UNIX/Linux scripting.
- Proven experience designing and deploying data solutions in Azure or GCP (cloud certification preferred).
- Strong understanding of query tuning, distributed computing, data modeling, and data lifecycle management.
Familiarity with:
- CI/CD pipelines and DevOps tools
- Containerization (Docker), Kubernetes
- Cloud-native compute and storage services
- Excellent communication and documentation skills.
- Experience with automation and orchestration tools (Airflow, Databricks, Prefect, etc.).
- Good understanding of modern API and microservice architectures.
- Knowledge of Agile methodologies (Scrum, Kanban).
Posted in: Data Engineering
Functional Area: ML / DL / AI Research
Job Code: 1615338