Posted on: 01/12/2025
Description:
Role: QA Engineer - AI Engineering (Model Validation & Automation)
Role Overview:
The QA Engineer - AI Engineering is a specialized technical role requiring 2-6 years of experience in software testing, with a dedicated focus on validating complex AI/ML models, data pipelines, and AI-driven applications.
This role demands strong automation skills (Python preferred), a solid foundational understanding of ML workflows, and expertise in ensuring the quality, performance, and ethical compliance of AI systems.
Job Summary:
We are seeking a proactive QA Engineer (2-6 years of experience) with mandatory hands-on experience in automated testing and Python scripting to specialize in AI quality assurance.
The ideal candidate will be responsible for testing AI/ML models for critical factors like accuracy, consistency, and fairness, while also validating associated data pipelines and model outputs.
Key responsibilities include creating and executing automated test scripts for APIs and UI, performing rigorous performance and reliability testing of AI services (latency, throughput), and collaborating directly with ML engineers and data scientists to define robust test scenarios.
Key Responsibilities and Technical Deliverables:
AI/ML Model Validation and Testing:
- Test AI/ML models for critical quality attributes including accuracy, consistency, fairness (bias detection), model drift, and overall production quality.
- Validate large datasets, data pipelines, and raw/transformed model outputs to ensure data integrity, completeness, and suitability for model training and inference.
- Apply a foundational understanding of ML models, LLMs, NLP/CV systems, and relevant evaluation metrics (e.g., F1-score, perplexity, AUC).
Automation and Performance Testing:
- Create and execute automated test scripts (Python preferred) for validating API endpoints, user interfaces (UI), and model inference services.
- Apply hands-on experience with Python, automated testing, and QA tools such as PyTest, Postman, Robot Framework, or equivalent.
- Perform performance and reliability testing of AI services, rigorously measuring critical metrics like latency, throughput, and scalability under load.
- Implement API testing strategies and integrate automated tests within the CI/CD pipeline for continuous quality assurance.
Collaboration and Quality Management:
- Work closely with ML engineers, data scientists, and product teams to understand complex requirements and define comprehensive, domain-specific test scenarios.
- Identify defects, report issues with clear reproduction steps, and collaborate cross-functionally to drive quality improvements and root cause analysis.
- Maintain and manage detailed test plans, test cases, and quality documentation throughout the software and model development lifecycle.
- Demonstrate strong analytical, communication, and problem-solving skills to manage quality gates effectively.
Mandatory Skills & Qualifications:
- Experience: 2 to 6 years of QA/testing experience.
- Automation: Hands-on experience with Python, automated testing, and QA tools (PyTest, Postman, Robot Framework, etc.).
- AI Fundamentals: Basic understanding of ML models, LLMs, NLP/CV systems, and model evaluation metrics.
- Integration: Experience in API testing and CI/CD integration.
- Analytical: Strong analytical, communication, and problem-solving skills.
Preferred Skills:
- Experience with MLOps tools such as MLflow or Airflow for workflow management.
- Knowledge of LLM testing frameworks (e.g., Evals, Ragas, LangSmith) for Generative AI validation.
- Exposure to vector databases or major cloud platforms (AWS, GCP, Azure).
- Experience with load testing tools (e.g., JMeter, Locust) for performance verification.
Posted in: Quality Assurance
Functional Area: QA & Testing
Job Code: 1583477