Posted on: 23/07/2025
Experience Required :
6-10+ years in AI/ML development, with 3+ years of hands-on experience in Generative AI, RAG frameworks, and Agentic AI systems.
Job Summary :
We are seeking highly skilled Generative AI Engineers to join a dynamic team focused on building enterprise-grade, production-ready AI systems using RAG and Agentic AI paradigms. The ideal candidates will have hands-on experience developing and fine-tuning LLM-based applications, integrating feedback loops, and implementing safeguards in regulated or complex business environments.
Key Responsibilities :
- Implement Agentic AI architectures involving task-based agents, stateful memory, planning-execution workflows, and tool augmentation (a minimal illustrative sketch follows this list).
- Perform model fine-tuning, embedding generation, and evaluation of LLM outputs; incorporate human and automated feedback loops.
- Build and enforce guardrails to ensure safe, compliant, and robust model behavior, including prompt validation, output moderation, and access controls.
- Collaborate with cross-functional teams to deploy solutions in cloud-native environments such as Azure OpenAI, AWS Bedrock, or Google Vertex AI.
- Contribute to system observability via dashboards and logging, and support post-deployment model monitoring and optimization.
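To give a concrete sense of the agentic and guardrail responsibilities above, here is a minimal, illustrative Python sketch of a planning-execution loop with stateful memory, a whitelisted tool registry, and a simple prompt-validation guardrail. Every name in it (call_llm, TOOLS, BLOCKED_TERMS) is a hypothetical placeholder for this example, not a specific framework's API or this team's actual stack.

```python
# Minimal sketch: planning-execution agent loop with stateful memory,
# tool dispatch, and a prompt-validation guardrail.
# All names here (call_llm, TOOLS, BLOCKED_TERMS) are illustrative
# placeholders, not any particular framework's API.
from dataclasses import dataclass, field
from typing import Callable

BLOCKED_TERMS = {"ignore previous instructions", "system prompt"}  # toy blocklist

def validate_prompt(prompt: str) -> None:
    """Reject prompts that trip a simple injection/abuse blocklist."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            raise ValueError(f"Prompt rejected by guardrail: contains {term!r}")

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Anthropic, Cohere, etc.)."""
    return f"PLAN: search -> summarize  (stubbed response for {prompt[:30]!r})"

@dataclass
class AgentState:
    """Stateful memory carried across planning/execution steps."""
    goal: str
    history: list[str] = field(default_factory=list)

# Tool augmentation: the agent may only call registered, whitelisted tools.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"[search results for {q!r}]",
    "summarize": lambda text: text[:80] + "...",
}

def run_agent(goal: str, max_steps: int = 3) -> AgentState:
    state = AgentState(goal=goal)
    validate_prompt(goal)                               # guardrail before any model call
    state.history.append(call_llm(f"Plan steps to achieve: {goal}"))
    for step in ["search", "summarize"][:max_steps]:    # execute planned tool calls
        state.history.append(f"{step}: {TOOLS[step](goal)}")
    return state

if __name__ == "__main__":
    final = run_agent("Compile a brief on RAG evaluation strategies")
    print("\n".join(final.history))
```

In a production system the stubbed LLM call, blocklist, and tool registry would be replaced by the orchestration, moderation, and access-control layers described in the responsibilities above.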
Required Qualifications :
- Solid understanding of Agentic AI design patterns: task agents, memory/state tracking, and orchestration logic
- Strong expertise in LLM fine-tuning, vector embeddings, evaluation strategies, and feedback integration
- Experience with implementing AI guardrails (e.g., moderation, filtering, prompt validation)
- Proficiency in Python, LLM APIs (OpenAI, Anthropic, Cohere, etc.), and vector database integration (see the sketch after this list)
- Familiarity with CI/CD pipelines, API integrations, and cloud-native deployment patterns
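As a rough illustration of the Python, LLM API, and vector-integration skills listed above, the sketch below retrieves context from a toy in-memory store and passes it to a chat completion. It assumes the openai>=1.x Python SDK; the document list, model names, and cosine helper are assumptions made for the example, and a real deployment would use a proper vector database rather than embedding documents on the fly.

```python
# Illustrative sketch only: toy retrieval over an in-memory "vector store"
# followed by an LLM call. Uses the openai>=1.x Python SDK; model names
# and documents are example assumptions.
import math
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

DOCUMENTS = [
    "Agentic AI systems pair planning with tool execution.",
    "RAG pipelines retrieve context before generation.",
    "Guardrails moderate prompts and model outputs.",
]

def embed(text: str) -> list[float]:
    """Return an embedding vector for the given text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q_vec = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q_vec, embed(d)), reverse=True)[:k]

def answer(query: str) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("How do RAG pipelines use retrieval?"))
```

Swapping the in-memory list for a managed vector database and adding evaluation and feedback hooks turns this pattern into the kind of enterprise RAG pipeline described in the job summary.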
Preferred Qualifications :
- Hands-on experience with cloud AI platforms : Azure OpenAI, AWS Bedrock, or Google Vertex AI
- Knowledge of prompt engineering, RLHF, and LLM observability frameworks
- Experience building or leveraging internal LLM evaluation harnesses, agent orchestration layers, or compliance dashboards