Posted on: 04/12/2025
Responsibilities:
- Design and implement end-to-end quality strategies for agentic AI systems, balancing manual exploratory testing with automation frameworks.
- Develop and maintain validation frameworks for multi-agent orchestration logic, ensuring correctness in planning, decision-making, and adaptive behaviours.
- Develop and run manual and automated test pipelines to validate end-to-end functionality, data flow, and system reliability across AI services.
- Perform exploratory testing of AI reasoning, tool usage, and agent collaboration workflows to uncover edge cases and emergent behaviours.
- Automate test coverage for APIs, microservices, and orchestration components using modern testing tools and frameworks.
- Create observability and monitoring solutions to assess agent accuracy, latency, and behavioural consistency.
- Collaborate with AI and backend developers to embed quality gates and automated checks into CI/CD pipelines.
- Evaluate and integrate testing tools for AI-specific workflows (prompt validation, response benchmarking, output scoring).
- Champion best practices for reproducibility, versioning, and manual/automated validation of AI model behaviours.
- Contribute to cross-team initiatives, R&D demos, hackathons, and innovation sprints.
Requirements:
- 5+ years of experience in Quality Engineering, SDET, or Software Testing (manual and automated) in distributed or AI-driven systems.
- Strong experience designing and executing both manual exploratory tests and automated test frameworks.
- Proficient in Python, Java, or Go for developing test automation scripts and frameworks.
- Experience working with distributed systems and microservices architectures (preferred).
- Hands-on experience with API testing, microservices validation, and CI/CD integration (GitHub Actions, Jenkins, etc.).
- Experience with AI technologies and frameworks that enable intelligent agents, automation, and contextual reasoning.
- Familiarity with cloud-native testing (AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes).
- Experience with observability tools (Grafana, Prometheus) to validate performance and system health.
- Strong analytical, debugging, and communication skills; able to translate complex AI behaviours into clear, testable criteria.
- Passionate about ensuring quality and trustworthiness in intelligent, autonomous systems.
Posted in: Quality Assurance
Functional Area: QA & Testing
Job Code: 1584632