Posted on: 03/03/2026
Role Overview:
- Lead evaluation of AI-driven voice bots, agent assist, and summarization systems.
- Design LLM validation frameworks ensuring accuracy, safety, and latency compliance.
- Drive responsible AI and bias testing standards.
Core Responsibilities:
- Hallucination detection and response accuracy scoring.
- RAG and knowledge grounding validation.
- Synthetic call testing and AI load benchmarking.
- Prompt robustness and fallback validation.
- Sensitive data leakage and AI compliance testing.
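To illustrate the kind of work the responsibilities above describe, here is a minimal sketch of a knowledge-grounding check, one simple signal used in hallucination detection for RAG systems. It uses a token-overlap heuristic; the function name, stopword list, and sample strings are illustrative assumptions, not part of any specific framework.

```python
import re

# Illustrative stopword list; a real pipeline would use a fuller set.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}

def grounding_score(response: str, context: str) -> float:
    """Fraction of content words in the response that also appear in the
    retrieved context. Low scores suggest the response may be hallucinated."""
    resp_tokens = {t for t in re.findall(r"[a-z0-9]+", response.lower())
                   if t not in STOPWORDS}
    ctx_tokens = set(re.findall(r"[a-z0-9]+", context.lower()))
    if not resp_tokens:
        return 1.0  # empty response carries no ungrounded claims
    return len(resp_tokens & ctx_tokens) / len(resp_tokens)

# Hypothetical retrieved context and two candidate bot responses.
context = "Your refund was processed on March 3 and will post in 5 business days."
grounded = "The refund was processed on March 3."
ungrounded = "Your account has been permanently closed."

print(round(grounding_score(grounded, context), 2))    # → 1.0
print(round(grounding_score(ungrounded, context), 2))  # → 0.17
```

In practice this lexical overlap would be one signal among several (entailment models, LLM-as-judge scoring), but it shows the basic shape of an automated grounding validation.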
Ideal Background:
- Experience in LLM or conversational AI testing.
- Hands-on exposure to AI evaluation metrics.
- Strong understanding of AI safety and bias validation.
- Experience designing AI testing frameworks.
Posted in: Quality Assurance
Functional Area: ML / DL / AI Research
Job Code: 1617740