hirist

Moder - AI Cyber Security Engineer/Specialist - Incident Management

MODER SOLUTIONS INDIA PRIVATE LIMITED
5 - 10 Years
₹25-40 LPA
Multiple Locations

Posted on: 28/04/2026

Job Description

Job Summary :


The Cybersecurity Engineer (Sr. AI Cybersecurity Specialist) leads the evaluation, implementation, and governance of secure AI practices across Cenlar's cloud and on-premises environments.


This role is responsible for guiding strategic AI adoption by assessing LLM applications, agentic systems, RAG pipelines, and model integrations for immediate security risks, establishing enterprise guardrails, and delivering rapid, intelligence-driven impact assessments to ensure responsible and compliant AI deployment.


The engineer also designs and orchestrates AI-assisted incident response frameworks capable of operating at machine speed to detect, contain, and neutralize evolving threats while maintaining the integrity and continuity of mission-essential services.


Through strategic coordination with Security, IT, and business stakeholders, this work materially strengthens Cenlar's ability to defend against AI-enabled risks and safeguard the confidentiality, integrity, and availability of data and systems.

Job Responsibilities :


AI Security Strategy & Governance :


- Serves as the senior SME for AI security, defining policies, standards, and architectural guardrails for GenAI, LLMs, agentic AI, and ML platforms aligned with NIST AI RMF, CIS Controls, and Cenlar security policy.


- Leads AI security risk assessments, evaluating adversarial ML threats (poisoning, evasion), prompt-based risks, output safety concerns, model theft, data leakage, and supply chain vulnerabilities.

- Establishes governance for model catalogs, sanctioned AI tools, shadow AI detection, and enterprise approval pathways.

- Embeds secure-by-design principles across AI development, testing, deployment, and monitoring pipelines.


LLM / Agentic System Security & Hardening

- Develops and validates secure configurations for LLM and RAG architectures, including retrieval permissioning, function calling safeguards, and agent workflow boundaries.


- Conducts AI red team exercises and adversarial testing; drives guardrail tuning, jailbreak prevention, and output safety assurance.


- Integrates AI security telemetry into SIEM/XDR/SOAR platforms to improve detection of misuse, exfiltration attempts, or integrity failures.

AI Incident Response & Defense Automation :

- Designs and operates AI-assisted incident response frameworks that detect, isolate, and contain threats in real time.


- Builds automated triage, containment, and recovery agents that preserve mission-essential services and enable graceful degradation under attack.

- Conducts integrity verification, evidence automation, forensic correlation, and post-incident AI behavior analysis.


- Leads cross-team coordination during AI-related incidents, integrating intelligence, engineering, and operational support.

Automation, Integration & Evidence Management :

- Develops automation pipelines for security policy enforcement, model telemetry ingestion, anomaly detection, and AI governance workflows.


- Integrates AI controls and evidence collection into ServiceNow GRC/SecOps for continuous monitoring, control testing, and audit readiness.


- Creates dashboards that visualize AI risk posture, compliance adherence, and control performance.

Training, Awareness & Collaboration :

- Provides training to engineers, developers, and business units on secure AI usage, responsible model interaction, and emerging adversarial threats.


- Partners with Information Security, Cloud Architecture, SOC/IR, GRC, and business stakeholders to embed AI security controls and align strategies.


- Monitors threat intelligence related to AI exploitation, synthetic attacks, and industry developments, translating insights into actionable controls.

Requirements, Education, Experience :

- Bachelor's degree in Computer Science, Cybersecurity, Information Systems, Data Engineering, or equivalent experience; Master's preferred.


- 5+ years of cybersecurity experience, including 1-3+ years focused on AI/ML security, LLM security, model governance, or agentic AI systems.


- Hands-on experience with AI security controls, LLM/RAG architecture hardening, adversarial testing, and secure MLOps.


- Experience with Azure OpenAI, Azure ML, AWS Bedrock/SageMaker, and hybrid AI deployments.

- Strong understanding of authentication, authorization, access governance, and Zero Trust identity controls related to AI workflows.


- Proficiency with Python, PowerShell, APIs, and automation tooling for policy enforcement and telemetry processing.


- Experience with NIST AI RMF, CIS Controls, ISO 27001, SOX, GLBA, FFIEC, and emerging AI regulatory guidance.


- Strong analytical, communication, and cross-team leadership skills.

Highly Preferred Certifications :

- CISSP, CCSP


- SANS GSEC


- Azure AI Engineer Associate


- AWS Machine Learning Specialty


- Google Professional Machine Learning Engineer


- Additional cloud, governance, or AI-focused certifications are beneficial.
