HamburgerMenu
hirist

Job Description

Description :

- Support the implementation and day-to-day execution of the Second Line of Defense (2LOD) Model Risk Management (MRM) program for high-risk models, with particular focus on Fraud detection models (Transaction Fraud & Merchant Fraud) and Generative AI / LLM-based systems deployed across Toast.

- Assist in maintaining and enhancing the Model Risk Management framework, including policies, procedures, validation standards, governance documentation, templates, and best practices aligned with evolving regulatory and industry expectations.

- Enforce model lifecycle standards across development, implementation, use, monitoring, recalibration, change management, governance, and decommissioning, ensuring appropriate controls for traditional ML models as well as GenAI systems (e.g., RAG architectures, copilots, AI-assisted decision tools).

- Contribute to the development, risk-tiering, and ongoing maintenance of a comprehensive model inventory, including assessment of model impact, intrinsic risk (complexity and methodology), reliance on model outputs, and emerging AI-specific risks.

- Perform independent model validation reviews under the guidance of senior leadership, covering conceptual soundness, data integrity, model methodology, performance metrics (e.g., AUC, precision/recall, calibration), stability, bias/fairness risk, explainability, and monitoring frameworks. Produce validation reports and track issue remediation plans through closure.

- For Fraud models, evaluate class imbalance handling, threshold optimization, cost-sensitive performance metrics, operational overlays, rule-based controls, and portfolio-level impact analyses.

- For Generative AI systems, validate systems and evaluate risks related to hallucination, prompt injection, adversarial vulnerabilities, data privacy and leakage, model explainability limitations, bias, guardrails, output monitoring, jailbreak testing, regression testing, and human-in-the-loop controls.

- Partner with Data Science, Data Engineering, Product/Engineering, Information Security, Legal/Compliance, Finance, Credit Risk and Business teams to obtain documentation, perform effective challenge, conduct validation and oversee performance monitoring

- Prepare reports and executive materials summarizing model risk issues, validation findings, monitoring insights, and remediation status for leadership review, risk committees, audit committees, and internal audit engagements.

- Research and stay informed on industry developments in fraud analytics, machine learning, Generative AI governance, and regulatory guidance (e.g., SR 11-7, OCC 2011-12, NIST AI RMF).

- Propose enhancements to strengthen Toasts Model Risk and AI Governance framework. Contribute to the development of model risk training materials and support delivery of training sessions to key stakeholders to enhance awareness of fraud model risk and AI-related risks.

- Support ad-hoc initiatives related to model risk governance, AI oversight, regulatory compliance, and enterprise risk management enhancements.

Preferred candidate profile :

- Advanced degree in Data Science, Statistics, Applied Mathematics, Computer Science, Engineering, or a related quantitative discipline.

- 5+ years of relevant industry experience in Model Risk Management, Model Validation, Data Science, or Machine Learning within fintech, banking, payments, technology, or consulting environments, with exposure to fraud analytics, credit risk, or AI/ML model governance.

- Strong understanding of machine learning methodologies and statistical foundations, including model development, validation techniques, performance evaluation, calibration, and stability analysis.

- Hands-on experience with fraud detection models (e.g., transaction monitoring, chargeback prediction, merchant risk scoring) and familiarity with concepts such as class imbalance handling, threshold optimization, precision/recall trade-offs, and cost-sensitive evaluation.

- Familiarity with Generative AI and LLM-based systems (e.g., RAG architectures, embeddings, prompt engineering, AI copilots) and awareness of associated risks such as hallucination, bias, prompt injection, and data leakage.

- Strong proficiency in Python and SQL; experience with data science and ML tools such as Spark, scikit-learn, XGBoost, LightGBM, TensorFlow, or PyTorch. Familiarity with software engineering best practices (version control, CI/CD, testing frameworks, Airflow, AWS or cloud platforms) is a plus.

- Knowledge of model risk management frameworks and regulatory guidance (e.g., SR 11-7, OCC 2011-12) and familiarity with emerging AI governance frameworks (e.g., NIST AI RMF) is highly desirable.

- Ability to translate complex quantitative methodologies into clear business insights and effectively challenge first-line model development teams while maintaining strong cross-functional partnerships.

- Demonstrated critical thinking, analytical rigor, and structured problem-solving skills, with the ability to manage multiple priorities and deliver high-quality outputs under tight deadlines.

- Excellent written and verbal communication skills, including experience drafting formal validation reports, executive summaries, and committee-level materials.


info-icon

Did you find something suspicious?

Similar jobs that you might be interested in