
Quality Assurance Engineer - AI Enablement

Sheryl Strategic Solutions Pvt. Ltd.
Multiple Locations
3 - 6 Years

Posted on: 12/11/2025

Job Description

We're seeking an AI QA Enablement Specialist, an AI champion who will accelerate codeless automation and agentic AI workflows across QA. This role blends a deep foundation in software testing and QA methodologies with hands-on expertise in AI enablement and automation frameworks, leveraging tools such as Azure OpenAI Service, generative AI APIs, and codeless automation platforms. The ideal candidate will help modernize QA processes by building AI-augmented testing frameworks, integrating intelligent test generation, and improving speed, coverage, and quality across the SDLC.


Key Responsibilities:


- Design and implement AI-assisted testing workflows leveraging LLMs (e.g., Azure OpenAI, OpenAI GPT models) for test case generation, summarization, and defect prediction (an illustrative sketch follows this list).


- Develop prompt engineering and AI agent orchestration frameworks that enhance test coverage and reduce manual effort.


- Drive agentic QA initiatives enabling autonomous test creation, execution, and maintenance.


- Deploy, customize, and optimize codeless automation tools (e.g., Testim, Katalon, Leapwork, UiPath Test Suite, Functionize) integrated with AI workflows.


- Collaborate with DevOps teams to integrate AI-based test automation into CI/CD pipelines.


- Establish metrics and dashboards to measure automation ROI, AI inference accuracy, and test stability.


- Define AI readiness and enablement roadmaps for QA teams across multiple products.


- Build test data generation pipelines using generative AI models and synthetic data frameworks.


- Ensure model validation, ethical AI use, and quality governance in AI-assisted decision-making.


- Partner with software engineers, ML engineers, and DevOps teams to embed AI in QA workflows.


- Train QA teams on AI-first testing approaches and maintain documentation for AI model integration.


- Conduct POCs and pilot projects to evaluate new AI tools, frameworks, and codeless automation platforms.
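
To illustrate the kind of LLM-assisted test case generation described above, here is a minimal sketch that calls an Azure OpenAI chat deployment to draft structured test cases from a requirement. It is a sketch only: the deployment name, API version, environment variable names, and prompt wording are assumptions for illustration, not part of this posting.

```python
import os
import json
from openai import AzureOpenAI  # pip install openai>=1.0

# Hypothetical configuration -- endpoint, key, and deployment name are assumptions.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def generate_test_cases(requirement: str, deployment: str = "gpt-4o") -> list[dict]:
    """Ask the model for test cases as JSON so they can feed a downstream runner."""
    response = client.chat.completions.create(
        model=deployment,  # the Azure OpenAI *deployment* name, not the base model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a QA assistant. Return ONLY a JSON array of test cases, "
                    'each with "title", "steps" (list of strings), and "expected".'
                ),
            },
            {"role": "user", "content": f"Requirement: {requirement}"},
        ],
        temperature=0.2,
    )
    # A production version would guard against non-JSON replies (e.g., markdown fences).
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    for case in generate_test_cases("Users can reset their password via an emailed link"):
        print(case["title"])
```

Asking for structured JSON rather than free text keeps the generated cases machine-readable, which is what allows them to flow into codeless automation tools or CI/CD pipelines rather than requiring manual transcription.
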


Required Technical Skills & Competencies:


- Strong QA foundation: functional, regression, performance, and API testing experience.


- Automation experience: hands-on with Selenium, Playwright, Cypress, or similar.


- Working understanding of Azure OpenAI, OpenAI APIs, LangChain, or Semantic Kernel.


- Experience crafting structured prompts for test case generation or data validation (see the validation sketch after this list).


- Familiarity with REST APIs, JSON, YAML, and CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps).


- Exposure to Python, TypeScript, or PowerShell scripting for AI model or automation integration.


- Strong understanding of software development life cycle (SDLC) and QA best practices.
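
As a companion to the generation sketch above, the following illustrates the kind of structured-output validation implied by the skills list: LLM-generated test cases are checked against a minimal schema before they enter a suite. The field names and rules are hypothetical, chosen only for illustration.

```python
import json

# Hypothetical schema for generated test cases: field name -> expected type.
REQUIRED_FIELDS = {"title": str, "steps": list, "expected": str}

def validate_generated_cases(raw: str) -> list[dict]:
    """Reject malformed LLM output before it reaches the test runner."""
    cases = json.loads(raw)
    if not isinstance(cases, list):
        raise ValueError("expected a JSON array of test cases")
    for i, case in enumerate(cases):
        for field, expected_type in REQUIRED_FIELDS.items():
            if field not in case:
                raise ValueError(f"case {i} is missing '{field}'")
            if not isinstance(case[field], expected_type):
                raise ValueError(f"case {i}: '{field}' should be {expected_type.__name__}")
        if not case["steps"]:
            raise ValueError(f"case {i} has no steps")
    return cases

if __name__ == "__main__":
    sample = json.dumps([
        {"title": "Reset link expires",
         "steps": ["Request reset", "Wait 24h", "Open link"],
         "expected": "Link is rejected with an expiry message"},
    ])
    print(len(validate_generated_cases(sample)), "case(s) accepted")
```

Gating generated cases behind a schema check like this is one simple way to keep AI-assisted test creation auditable and stable inside a CI/CD pipeline.
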


Preferred Qualifications:


- Experience working with Azure OpenAI Service or similar cloud AI services (AWS Bedrock, Google Vertex AI).


- Familiarity with test data synthesis, model validation, and AI governance frameworks.


- Strong problem-solving, analytical, and cross-functional collaboration skills.
