Posted on: 13/11/2025
Description:
- Design, build, and deploy LLM-based applications using Python and Azure OpenAI Services.
- Develop Retrieval-Augmented Generation (RAG) pipelines with vector databases such as Azure AI Search (formerly Azure Cognitive Search), Pinecone, or FAISS (see the minimal RAG sketch after this list).
- Create intelligent agents using frameworks such as LangChain, LlamaIndex, or Semantic Kernel.
- Apply advanced prompting techniques (see the self-consistency sketch after this list), including:
  - Chain-of-Thought (CoT).
  - Tree-of-Thoughts (ToT).
  - ReAct, Self-Consistency, and Toolformer-style prompting.
- Implement reasoning strategies for multi-step decision-making and task decomposition.
- Orchestrate agentic AI workflows, integrating external tools, APIs, and business logic (see the tool-calling sketch after this list).
- Deploy scalable LLM services with Azure Functions, Docker, and serverless architectures (see the Azure Functions sketch after this list).
- Monitor and optimize model performance, latency, and reliability.
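
As an illustration of the RAG responsibility above, here is a minimal sketch that embeds a handful of documents with Azure OpenAI, indexes them in FAISS, and grounds a chat completion in the retrieved context. The deployment names, API version, environment variables, and sample documents are assumptions for illustration, not part of this posting.

```python
# Minimal RAG sketch: embed documents, retrieve by similarity, answer with Azure OpenAI.
# Deployment names, API version, and environment variables are assumptions.
import os

import faiss
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

DOCS = [
    "Invoices are processed within 5 business days.",
    "Refund requests must be filed within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 6pm CET.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Build an in-memory FAISS index over the document embeddings.
doc_vectors = embed(DOCS)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

def answer(question: str, k: int = 2) -> str:
    # Retrieve the k most similar documents and ground the prompt in them.
    _, ids = index.search(embed([question]), k)
    context = "\n".join(DOCS[i] for i in ids[0])
    resp = client.chat.completions.create(
        model="gpt-4o",  # Azure deployment name (assumption)
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do I have to request a refund?"))
```

In production the in-memory FAISS index would typically be replaced by a managed vector store such as Azure AI Search or Pinecone; the retrieval-then-generate flow stays the same.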
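The prompting bullet above can be illustrated with zero-shot Chain-of-Thought combined with Self-Consistency: sample several reasoning paths at a non-zero temperature and majority-vote the final answer. The prompt, deployment name, and answer-extraction rule below are illustrative assumptions.

```python
# Self-consistency sketch: sample several chain-of-thought completions and
# majority-vote the final answer. Deployment name and prompt are assumptions.
import os
from collections import Counter

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

COT_PROMPT = (
    "A warehouse ships 240 boxes per day. How many boxes does it ship in a "
    "31-day month? Let's think step by step, then give the final number on "
    "the last line as 'Answer: <number>'."
)

def final_answer(text: str) -> str:
    # Pull the value after the last 'Answer:' marker.
    return text.rsplit("Answer:", 1)[-1].strip()

def self_consistent_answer(prompt: str, samples: int = 5) -> str:
    votes = []
    for _ in range(samples):
        resp = client.chat.completions.create(
            model="gpt-4o",   # Azure deployment name (assumption)
            temperature=0.8,  # non-zero temperature yields diverse reasoning paths
            messages=[{"role": "user", "content": prompt}],
        )
        votes.append(final_answer(resp.choices[0].message.content))
    # The most common final answer across reasoning paths wins.
    return Counter(votes).most_common(1)[0][0]

print(self_consistent_answer(COT_PROMPT))
```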
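For the agentic-workflow bullet, here is a minimal tool-calling sketch with the Azure OpenAI chat API: the model decides when to call a business API, and the tool result is fed back so the model can produce the final reply. The get_order_status helper is a hypothetical stand-in, and the deployment name is an assumption.

```python
# Agentic tool-use sketch: expose a business API as a tool and let the model
# decide when to call it. The tool itself is a hypothetical stand-in.
import json
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def get_order_status(order_id: str) -> str:
    # Hypothetical stand-in for a real business API call.
    return json.dumps({"order_id": order_id, "status": "shipped"})

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order by its id.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order A-1042?"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
msg = resp.choices[0].message

if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_order_status(**args)
    # Feed the tool result back so the model can produce the final reply.
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)

print(resp.choices[0].message.content)
```

Frameworks such as LangChain, LlamaIndex, or Semantic Kernel wrap this same tool-calling loop with planning, memory, and multi-step orchestration.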
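For the deployment bullet, a sketch of a serverless endpoint using the Azure Functions Python v2 programming model (typically placed in function_app.py): one HTTP route that forwards a prompt to an Azure OpenAI deployment. The route name, deployment name, and environment variables are assumptions.

```python
# Serverless deployment sketch (Azure Functions Python v2 model): a single HTTP
# endpoint that forwards a prompt to an Azure OpenAI deployment.
import json
import os

import azure.functions as func
from openai import AzureOpenAI

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

@app.route(route="generate", methods=["POST"])
def generate(req: func.HttpRequest) -> func.HttpResponse:
    # Read the prompt from the request body and forward it to the model.
    prompt = req.get_json().get("prompt", "")
    resp = client.chat.completions.create(
        model="gpt-4o",  # Azure deployment name (assumption)
        messages=[{"role": "user", "content": prompt}],
    )
    return func.HttpResponse(
        json.dumps({"completion": resp.choices[0].message.content}),
        mimetype="application/json",
    )
```

Locally this can be run with the Azure Functions Core Tools (`func start`) and containerized with a standard Python Dockerfile for deployment.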
Must-Have Skills:
- Proven experience integrating Azure OpenAI (GPT models) into real-world applications.
- Strong understanding of LLMs, embeddings, vector search, and RAG architectures.
- Experience with at least one LLM orchestration framework: LangChain, LlamaIndex, or Semantic Kernel.
- Knowledge of structured prompting techniques and LLM reasoning models.
- Ability to integrate external APIs, cloud tools, and business data into LLM pipelines.
- Familiarity with Azure Functions, Docker, and cloud-native development.
- Excellent debugging and performance optimization skills.
Good-to-Have Skills:
- Background in building AI developer tools, automation bots, or assistants.
- Exposure to LLMOps practices: versioning, prompt evaluation, and monitoring (see the prompt-evaluation sketch after this list).
- Familiarity with traditional NLP tasks: summarization, classification, Q&A.
- Experience experimenting with prompting techniques: role prompting, reflection-based refinement, zero-shot CoT, etc.
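
For the LLMOps bullet above, a small sketch of prompt versioning and evaluation: two prompt variants are scored against a tiny labelled set so regressions can be caught before a new version ships. The prompts, test cases, scoring rule, and deployment name are illustrative assumptions.

```python
# LLMOps sketch: a tiny prompt-evaluation harness comparing versioned prompt
# variants against a small labelled set. All data here is illustrative.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

PROMPT_VERSIONS = {
    "v1": "Classify the sentiment of this review as positive or negative: {text}",
    "v2": "You are a sentiment classifier. Reply with exactly one word, "
          "'positive' or 'negative', for this review: {text}",
}

TEST_CASES = [
    ("The delivery was fast and the product works great.", "positive"),
    ("It broke after two days and support never answered.", "negative"),
]

def accuracy(prompt_template: str) -> float:
    # Score one prompt version by exact-label containment over the test set.
    hits = 0
    for text, expected in TEST_CASES:
        resp = client.chat.completions.create(
            model="gpt-4o",  # Azure deployment name (assumption)
            temperature=0,
            messages=[{"role": "user", "content": prompt_template.format(text=text)}],
        )
        hits += expected in resp.choices[0].message.content.lower()
    return hits / len(TEST_CASES)

for version, template in PROMPT_VERSIONS.items():
    print(version, accuracy(template))
```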