Posted on: 15/05/2026
Description:
- Define and execute the organisation's AI security strategy, covering threat modelling, risk assessment, and security architecture for GenAI, Agentic AI, RAG, and traditional ML systems.
- Lead LLM red-teaming and adversarial testing: design and run prompt injection attacks (direct, indirect, multi-turn), jailbreak assessments, data extraction probes, and model manipulation tests to identify vulnerabilities before production release.
- Architect and implement guardrails, input/output filtering, content moderation, hallucination detection, and toxicity screening pipelines to ensure safe and policy-compliant GenAI outputs.
- Secure Agentic AI systems by enforcing least-privilege tool access, sandboxed execution environments, action approval workflows, human-in-the-loop gates for high-risk operations, and agent behaviour boundary enforcement across multi-agent orchestration frameworks (LangChain, LangGraph, CrewAI, AutoGen).
- Design security controls for RAG pipelines: protect vector databases from poisoning attacks, enforce document-level access control in retrieval, prevent sensitive data leakage through embeddings, and validate retrieval-grounded outputs against source authority.
- Own AI data privacy engineering: implement PII detection and redaction, differential privacy, data anonymisation/pseudonymisation, consent management, and data minimisation practices across training datasets, fine-tuning corpora, and inference inputs/outputs.
- Drive compliance with GDPR, CCPA, EU AI Act, NIST AI RMF, ISO 42001, SOC 2, and industry-specific regulations (HIPAA, PCI-DSS) as they apply to AI/ML systems, ensuring audit readiness and documentation.
- Build AI security observability: deploy monitoring for anomalous model behaviour, adversarial input detection, data exfiltration attempts, agent action audit trails, and token-level cost anomaly alerts using SIEM integration and custom telemetry.
- Establish secure MLOps pipelines: model signing, provenance tracking, supply chain security for open-source models (SBOM for AI), secure model registries, encrypted model artefacts, and tamper-proof experiment tracking.
- Develop and deliver AI security training, threat awareness programmes, and secure-by-design guidelines for engineering, data science, and product teams across the organisation.
- Lead incident response for AI-specific security events: prompt injection breaches, model theft, training data poisoning, adversarial attacks in production, and agent autonomy failures.
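The guardrail and PII-redaction duties above can be sketched in miniature as a pre-model input filter. This is a purely illustrative, stdlib-only sketch: the patterns, function name `screen_input`, and the `[EMAIL_REDACTED]` placeholder are assumptions for this example, and production systems would use dedicated tooling (e.g. Presidio for PII, LLM Guard or NeMo Guardrails for injection detection) rather than a handful of regexes.

```python
import re

# Illustrative heuristics only; real guardrail frameworks use far richer
# detectors (classifiers, canary tokens, semantic similarity, etc.).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"reveal (your|the) system prompt",
]
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def screen_input(prompt: str) -> dict:
    """Flag likely prompt-injection phrasings and redact email-style PII."""
    flagged = any(
        re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS
    )
    redacted = EMAIL_RE.sub("[EMAIL_REDACTED]", prompt)
    return {"prompt": redacted, "injection_suspected": flagged}
```

A filter like this would sit in front of the model call, with flagged inputs routed to logging or blocking per policy.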
Required Qualifications:
- 8-13 years of combined experience in cybersecurity, AI/ML engineering, or security engineering, with 3+ years focused on AI/ML security.
- Bachelor's/Master's degree in Computer Science, Cybersecurity, AI/ML, or a related field.
- Deep understanding of LLM architectures, transformer internals, fine-tuning workflows, and GenAI application stacks, sufficient to identify and exploit security weaknesses.
- Hands-on experience with LLM red-teaming, prompt injection testing, jailbreak methodologies, and adversarial ML techniques (evasion, poisoning, model inversion, membership inference).
- Strong knowledge of AI privacy techniques: PII detection/redaction (Presidio, spaCy), differential privacy, federated learning, data anonymisation, and privacy-preserving ML.
- Proven experience securing agentic AI systems: tool-use access controls, agent sandboxing, action boundaries, and multi-agent trust frameworks.
- Familiarity with regulatory frameworks: GDPR, CCPA, EU AI Act, NIST AI RMF, ISO 42001, OWASP Top 10 for LLMs, and MITRE ATLAS.
- Proficient in Python, security tooling, and cloud security across AWS, Azure, or GCP.
Preferred Qualifications:
- Experience building AI guardrail frameworks (NVIDIA NeMo Guardrails, Guardrails AI, LLM Guard, Rebuff) and content safety systems.
- Background in offensive security, penetration testing, or red-team operations (OSCP, OSCE, GPEN certifications a plus).
- Hands-on experience with AI governance platforms (Fiddler, Arthur AI, Credo AI, IBM OpenPages) and model explainability tools (SHAP, LIME, Captum).
- Experience with secure multi-tenant RAG architectures, vector DB access controls, and embedding-level data isolation.
- Publications, conference talks, or CTF contributions in AI/ML security; certifications such as CISSP, CCSP, or AI-specific security credentials.
Technical Stack:
- LLM Security: prompt injection testing, jailbreak frameworks, OWASP LLM Top 10, MITRE ATLAS, Garak, PyRIT
- Guardrails: NVIDIA NeMo Guardrails, Guardrails AI, LLM Guard, Rebuff, Lakera Guard
- Privacy & PII: Presidio, spaCy NER, differential privacy (OpenDP), anonymisation, consent engines
- Agentic Security: LangChain/LangGraph security, agent sandboxing, tool ACLs, action approval gates
- RAG Security: vector DB access control, embedding isolation, retrieval validation, document-level ACLs
- Compliance: GDPR, CCPA, EU AI Act, NIST AI RMF, ISO 42001, SOC 2, HIPAA, PCI-DSS
- AI Governance: Fiddler, Arthur AI, Credo AI, SHAP, LIME, Captum, model cards, datasheets
- Cloud Security: AWS (IAM, GuardDuty, Bedrock Guardrails), Azure (Defender, Content Safety), GCP (DLP, VPC-SC)
- MLOps Security: model signing, SBOM for AI, secure registries, encrypted artefacts, audit trails
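The agentic-security items in the stack (tool ACLs, action approval gates, audit trails) can be sketched as a small policy object. Everything here is a hypothetical illustration: the class name `AgentPolicy`, the `HIGH_RISK_TOOLS` set, and the tool names are invented for the example, and a real deployment would hook equivalent checks into the orchestration framework's tool-calling layer (e.g. LangChain or LangGraph) rather than a standalone class.

```python
from dataclasses import dataclass, field

# Tools that require explicit human approval even when the agent is
# otherwise allowed to use them (illustrative names).
HIGH_RISK_TOOLS = {"delete_records", "wire_transfer"}


@dataclass
class AgentPolicy:
    """Least-privilege tool ACL with a human-in-the-loop gate."""

    allowed_tools: set
    audit_log: list = field(default_factory=list)

    def authorize(self, tool: str, human_approved: bool = False) -> bool:
        # Deny-by-default: the tool must be on the allow-list, and
        # high-risk tools additionally require human sign-off.
        allowed = tool in self.allowed_tools and (
            tool not in HIGH_RISK_TOOLS or human_approved
        )
        # Every decision is recorded; in production this trail would be
        # tamper-evident and shipped to the SIEM.
        self.audit_log.append((tool, allowed))
        return allowed
```

Keeping the approval gate separate from the allow-list means widening an agent's tool set never silently removes the human checkpoint on high-risk actions.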
What We Offer:
- Competitive salary with performance bonuses and equity; dedicated budget for security research tools and lab environments.
- Sponsorship for top-tier security and AI conferences (DEF CON AI Village, Black Hat, NeurIPS, USENIX Security) and certifications.
- Flexible hybrid work, comprehensive benefits, and the opportunity to define AI security standards for the organisation.
Posted in: CyberSecurity
Functional Area: ML / DL Engineering
Job Code: 1636267