hirist

Job Description

Roles & Responsibilities:

- Design and develop AI agents for cybersecurity workflows such as automated reconnaissance, phishing analysis, vulnerability triage, and SOC playbook execution.

- Implement LLM-based architectures (prompt engineering, tool integrations) with a focus on security, accuracy, and reliability.

- Build and maintain secure model integration pipelines, ensuring protection against adversarial threats like prompt injection, data poisoning, and model misuse.

- Collaborate with security engineers, analysts, and red/blue teams to translate manual workflows into autonomous or semi-autonomous AI-driven systems.

- Ensure data governance, privacy compliance, and provenance tracking in all AI-related workflows.

- Maintain clear documentation of architectures, workflows, and safety measures for production systems.

Preferred Skills:

- Strong programming skills in Python (experience with production-grade software).

- Hands-on experience with LLM-based applications (LangChain, OpenAI API, Hugging Face, or similar frameworks).

- Familiarity with cybersecurity tools (e.g., Burp Suite, Nmap, OWASP ZAP).

- Experience with REST APIs, microservices, and workflow orchestration tools (e.g., Airflow, n8n, Zapier).

Nice-to-Have Skills:

- Background in SaaS product development or cybersecurity automation/SOC engineering.

- Experience with cloud platforms (AWS, GCP, Azure) and event streaming technologies (Kafka).

- Experience with MLOps tools such as Docker, Kubernetes, and MLflow.

- Experience with AI/ML-based automation or GenAI prompt engineering for security workflows.

- Prior work involving secure AI deployments in regulated or sensitive environments.

- Ability to set up MLOps pipelines for deploying and monitoring AI models and agents, including performance tracking, drift detection, and version control.

- Experience integrating AI agents with security tools and data sources such as SIEM, EDR, vulnerability scanners, and threat intelligence feeds.

- Experience conducting adversarial testing and AI red-teaming to evaluate and harden agents against malicious inputs.

Qualification & Experience:

- Experience: 4+ years

- Education: Bachelor's or Master's degree in Computer Science, AI/ML, Cybersecurity, or a related field (or equivalent practical experience).
