Posted on: 05/11/2025
Description :
Key Responsibilities :
- Design and implement automated test frameworks for validating ML models' functional and scaling performance.
- Collaborate with ML engineers and data scientists to define robust testing strategies for model deployments.
- Develop and maintain backend automation scripts in Python or Java with frameworks such as Pytest or Rest Assured (see the sketch after this list).
- Conduct performance testing, identify bottlenecks, and ensure models meet SLA and throughput requirements.
- Validate end-to-end model lifecycle, including data ingestion, training, inference, and deployment validation.
- Work closely with DevOps to integrate automated tests within CI/CD pipelines for continuous delivery.
- Ensure comprehensive test coverage across APIs, data pipelines, and orchestration layers.
- Analyze, debug, and report issues effectively, collaborating with development and platform teams for resolution.
- Create test documentation, including test plans, test cases, and defect reports.
- Contribute to and enforce testing best practices and coding standards across the QA and ML teams.
- Mentor junior SDETs and promote a quality-first engineering culture.
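For illustration, a minimal sketch of the kind of backend test automation described above, written with Pytest and the requests library. The endpoint URL, payload shape, response field, and latency threshold are hypothetical placeholders, not details from this role.

```python
# Minimal sketch of backend API test automation with Pytest.
# INFERENCE_URL, the payload, and the 200 ms SLA are assumptions.
import time

import pytest
import requests

INFERENCE_URL = "http://localhost:8000/v1/predict"  # hypothetical endpoint


@pytest.fixture
def sample_payload():
    # Hypothetical feature vector; a real suite would draw from curated test data.
    return {"features": [0.1, 0.2, 0.3]}


def test_inference_returns_valid_prediction(sample_payload):
    resp = requests.post(INFERENCE_URL, json=sample_payload, timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Contract check: the response must expose a prediction field (assumed name).
    assert "prediction" in body


def test_inference_meets_latency_sla(sample_payload):
    start = time.perf_counter()
    requests.post(INFERENCE_URL, json=sample_payload, timeout=5)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Hypothetical SLA: a single request completes in under 200 ms.
    assert elapsed_ms < 200
```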
What We're Looking For :
Core Requirements :
- 6-8 years of experience in Testing and Automation, with strong backend automation experience.
- Solid understanding of testing fundamentals and best practices.
- Proficiency in Python or Java; must be comfortable coding and automating tests.
- Experience with backend test automation frameworks such as Rest Assured, Pytest, or similar tools.
- Exposure to ML application testing (validation of models, data pipelines, or inference systems).
- Strong understanding of system design, distributed backend architectures, and databases (SQL/NoSQL).
- Hands-on experience in performance testing of backend or ML systems (mandatory; see the throughput sketch after this list).
- Knowledge of application deployment and orchestration concepts (e.g., Docker, Kubernetes).
- Understanding of CI/CD pipelines and integration processes (good to have).
- Experience using prompt-based IDEs or editors like Cursor (good to have).
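As a rough illustration of the mandatory performance-testing requirement, the sketch below measures throughput and tail latency of a backend endpoint with a small thread pool. The URL, concurrency, and request count are assumptions, and a production load test would use a dedicated tool such as Locust or k6.

```python
# Throughput smoke check: fire N requests across a thread pool and report
# p95 latency and requests/second. All targets here are illustrative.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

INFERENCE_URL = "http://localhost:8000/v1/predict"  # hypothetical endpoint


def one_request() -> float:
    """Send a single request and return its latency in seconds."""
    start = time.perf_counter()
    requests.post(INFERENCE_URL, json={"features": [0.1, 0.2, 0.3]}, timeout=10)
    return time.perf_counter() - start


def measure_throughput(concurrency: int = 16, total: int = 200) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: one_request(), range(total)))
    wall = time.perf_counter() - start
    p95 = sorted(latencies)[int(0.95 * total)]
    print(f"p95 latency: {p95:.3f}s")
    return total / wall  # requests per second


if __name__ == "__main__":
    rps = measure_throughput()
    print(f"throughput: {rps:.1f} req/s")
```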
Preferred Skills :
- Familiarity with ML model lifecycle testing, including validation of inference accuracy, latency, and scalability (see the validation sketch after this list).
- Experience working with cloud environments (AWS/GCP/Azure).
- Exposure to observability and monitoring tools such as Grafana, Prometheus, or Datadog.
- Understanding of data integrity testing and API contract validation.
- Strong debugging, problem-solving, and analytical skills.
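To illustrate the model-lifecycle validation mentioned above, here is a hedged sketch that checks batch inference output against a human-verified golden set with an accuracy floor. The file paths, JSON shape, and 0.90 threshold are illustrative assumptions, not values from this posting.

```python
# Sketch: compare batch predictions to golden labels and enforce an
# accuracy floor. Paths, field layout, and threshold are hypothetical.
import json


def validate_inference_accuracy(predictions_path: str,
                                golden_path: str,
                                min_accuracy: float = 0.90) -> bool:
    with open(predictions_path) as f:
        predictions = json.load(f)  # assumed shape: {"example_id": "label", ...}
    with open(golden_path) as f:
        golden = json.load(f)       # same shape, human-verified labels

    shared = set(predictions) & set(golden)
    if not shared:
        raise ValueError("no overlapping ids between predictions and golden set")

    correct = sum(predictions[i] == golden[i] for i in shared)
    accuracy = correct / len(shared)
    print(f"accuracy on {len(shared)} examples: {accuracy:.3f}")
    return accuracy >= min_accuracy
```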
Posted in : Quality Assurance
Functional Area : QA & Testing
Job Code : 1570216