- Design and deploy agentic AI workflows that connect Large Language Models (LLMs) and custom machine learning models into backend architectures, using frameworks like LangChain, LangGraph, PydanticAI, or Semantic Kernel.
- Design systems for multi-agent coordination, multi-step reasoning loops, and memory management.
- Build and maintain data ingestion pipelines and vector databases (e.g., Pinecone, Weaviate) to support Retrieval-Augmented Generation for AI agents.
- Implement tracing and observability (e.g., LangSmith, Langfuse) and build evaluation pipelines to measure accuracy, latency, and cost.
- Design testing strategies for non-deterministic AI outputs, including the implementation of guardrails and safety constraints.
- Deploy and orchestrate containerized services using Docker and Kubernetes on cloud platforms like AWS, GCP, or Azure (Azure preferred).
Preferred: Experience with Azure Foundry / Azure OpenAI
Qualifications:
- University degree in Computer Science or a related discipline.
- 5-8 years of professional experience in software engineering or backend systems.
- 2-3 years of relevant experience in AI technologies: building APIs, integrating LLMs, designing RAG pipelines, and deploying scalable microservices.
- Strong applied expertise with frameworks such as LangChain, FastAPI, and CrewAI, and with major cloud platforms.