Posted on: 23/12/2025
Description:
- Design and implement agentic AI pipelines using LangGraph, LangChain, CrewAI, or custom frameworks.
- Build robust retrieval-augmented generation (RAG) systems with vector databases.
- Fine-tune, evaluate, and deploy LLMs for task-specific applications.
- Integrate external tools and APIs into multi-agent workflows using dynamic tool/function calling (e.g., OpenAI's function-calling JSON schema).
- Develop memory modules such as short-term context, episodic memory, and long-term vector stores.
- Build scalable, cloud-native services using Python, Docker, and Terraform.
- Collaborate in agile, cross-functional teams to rapidly prototype and ship ML-based features.
- Monitor and evaluate agent performance using tailored metrics (e.g., success rate, hallucination rate).
- Ensure secure, reliable, and maintainable deployment of AI systems in production environments.
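To make the tool/function-calling responsibility above concrete, here is a minimal sketch of registering a tool via an OpenAI-style JSON schema and dispatching a model-emitted call to the matching Python function. The tool name (`get_weather`), its schema, and the simulated call are all illustrative assumptions, not part of the posting.

```python
import json

# Illustrative tool implementation (hypothetical example).
def get_weather(city: str) -> str:
    """Toy tool used to demonstrate the dispatch pattern."""
    return f"Sunny in {city}"

# OpenAI-style JSON schema describing the tool to the model.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Registry mapping tool names (as the model sees them) to callables.
TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Route a model-emitted tool call (name + JSON arguments) to code."""
    args = json.loads(arguments_json)
    return TOOL_REGISTRY[name](**args)

# In production the name and arguments come from the LLM response;
# here the call is simulated.
result = dispatch_tool_call("get_weather", '{"city": "Berlin"}')
```

The same registry pattern scales to multi-agent workflows: each agent exposes its own subset of the registry, and arguments are validated against the schema before dispatch.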
Your profile:
- 7+ years of professional experience in machine learning, NLP, or software engineering.
- Strong proficiency in Python and experience with ML libraries like PyTorch, TensorFlow, scikit-learn, and XGBoost.
- Hands-on experience with LLMs (e.g., GPT, Claude, LLaMA, Mistral) and NLP tooling such as LangChain and Hugging Face Transformers.
- Experience designing and implementing RAG pipelines with chunking, semantic search, and reranking.
- Familiarity with agent frameworks and orchestration techniques (e.g., planning, memory, role assignment).
- Deep understanding of prompt engineering, embeddings, and LLM architecture basics.
- Experience designing systems with role-based communication, coordination loops, and hierarchical planning.
- Ability to optimize agent collaboration strategies for real-world tasks.
- Solid foundation in microservice architectures, CI/CD, and infrastructure-as-code (e.g., Terraform).
- Experience integrating REST/GraphQL APIs into ML workflows.
- Strong collaboration and communication skills, with a builder's mindset and willingness to explore new approaches.
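The RAG pipeline experience mentioned above starts with chunking documents before embedding them for semantic search. As a minimal sketch (parameter defaults are illustrative assumptions), overlapping fixed-size chunks keep context that would otherwise be cut at chunk boundaries:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so
    sentences straddling a boundary appear in two adjacent chunks."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # how far the window advances each iteration
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break  # last window already covered the tail
    return chunks

doc = "".join(str(i % 10) for i in range(500))  # toy 500-char document
chunks = chunk_text(doc)
```

Each chunk would then be embedded and stored in a vector database; overlap size is a quality/cost trade-off, since larger overlaps duplicate more tokens per query.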
Bonus Qualifications:
- Experience with RLHF, LoRA, or parameter-efficient LLM fine-tuning.
- Familiarity with CrewAI, AutoGen, Swarm, or other multi-agent libraries.
- Exposure to cognitive architectures like task trees, state machines, or episodic memory.
- Prompt debugging and LLM evaluation practices.
- Awareness of AI security risks (e.g., prompt injection, data exposure).