Posted on: 30/10/2025
Description :
- Design and implement RAG (Retrieval-Augmented Generation) pipelines that integrate foundation models with enterprise data sources to generate accurate, context-aware responses (a minimal retrieval sketch follows this list).
- Build and orchestrate Agentic AI workflows, enabling autonomous, goal-driven LLM interactions through function calling and tool use.
- Fine-tune and evaluate foundation models via Watsonx.ai and Hugging Face integrations, ensuring relevance, safety, and alignment with business objectives.
- Integrate and manage vector databases (e.g., Milvus, FAISS, IBM Vela Vector DB) to power semantic search and contextual retrieval.
- Collaborate with cross-functional teams to gather requirements, define ML-driven KPIs, and translate business objectives into AI solutions.
- Ensure adherence to governance, model explainability, and ethical AI practices using Watsonx.governance.
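For illustration only, a minimal sketch of the retrieval step in such a RAG pipeline, using FAISS and a Hugging Face sentence-transformer; the encoder name, sample documents, and query are assumptions for the example, not part of the role description.

# Minimal RAG retrieval sketch: embed documents, index them in FAISS,
# and fetch the most relevant passages to ground an LLM prompt.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative stand-ins for enterprise documents (assumed content).
documents = [
    "watsonx.ai supports prompt tuning of foundation models.",
    "Vector databases enable semantic search over enterprise documents.",
    "Agentic workflows let an LLM call external tools to reach a goal.",
]

# Embed the documents with an assumed open-source encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True).astype("float32")

# Inner-product index; with normalized vectors this equals cosine similarity.
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(doc_vectors)

# Retrieve the top-k passages for a query; these would be injected into the
# LLM prompt as grounding context before generation.
query_vector = encoder.encode(
    ["How do I ground LLM answers in company data?"], normalize_embeddings=True
).astype("float32")
scores, ids = index.search(query_vector, 2)
print([documents[i] for i in ids[0]])

In a production pipeline the in-memory FAISS index would typically be replaced by a managed vector database such as Milvus, with the retrieved passages passed to the foundation model alongside the user query.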
Mandatory Skills :
- Strong understanding of LLMs (Large Language Models) and foundation model fine-tuning.
- Experience building RAG pipelines and Agentic AI workflows.
- Proficiency in Python, ML frameworks (PyTorch, TensorFlow), and Hugging Face Transformers.
- Familiarity with vector databases (Milvus, FAISS, or IBM Vela Vector DB).
- Knowledge of data governance, MLOps, and model lifecycle management.
Good to Have Skills :
- Understanding of cloud platforms (IBM Cloud, AWS, Azure, or GCP).
- Knowledge of containerization and microservice orchestration (Docker, Kubernetes).