Posted on: 11/02/2026
About the job:
In This Role:
- Design and deploy LLM-powered scam detection systems using GPT-4, Claude, Llama, or similar foundation models
- Build semantic analysis pipelines using transformers and embeddings to identify phishing content and malicious patterns
- Implement RAG (Retrieval-Augmented Generation) systems with vector databases to match known scam signatures and emerging threats
- Create AI safety frameworks including content moderation, toxicity detection, and behavioral anomaly systems
- Deploy scalable AI systems on AWS using SageMaker, Lambda, and real-time inference endpoints
- Build deep learning models for pattern recognition and text classification
- Monitor adversarial attacks, conduct red-teaming exercises, and continuously improve detection algorithms against evolving threats
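The semantic-matching work described above can be sketched in miniature. The example below is purely illustrative: it swaps a real embedding model (e.g. sentence-transformers) and a vector database for a toy bag-of-words vectorizer and an in-memory list, and names like `SCAM_SIGNATURES` and the threshold value are assumptions, not part of this role's actual stack.

```python
import math
from collections import Counter

# Toy stand-in for a sentence-embedding model: a bag-of-words vector.
# A production pipeline would use a trained encoder instead.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative "known scam signature" corpus; a real system would store
# these as vectors in a vector database (Pinecone, Weaviate, ChromaDB).
SCAM_SIGNATURES = [
    "your account has been suspended verify your password now",
    "you have won a prize claim your reward by clicking this link",
]

def scam_score(message: str) -> float:
    """Return the highest similarity to any known scam signature."""
    m = embed(message)
    return max(cosine(m, embed(sig)) for sig in SCAM_SIGNATURES)

def is_suspicious(message: str, threshold: float = 0.5) -> bool:
    """Flag messages that closely resemble a known signature."""
    return scam_score(message) >= threshold
```

In a production RAG setup the nearest-neighbour lookup would happen in the vector store, with the retrieved signatures passed to an LLM for a final judgement rather than thresholded directly.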
What are we looking for?
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related field
- 3-5 years of experience building production AI systems, preferably in fraud detection, trust & safety, or security domains
- Proven track record of deploying LLM-based applications that handle adversarial or high-stakes scenarios
- Strong AWS proficiency with hands-on experience in SageMaker, Lambda, S3, and AI/ML services
- Strong LLM experience: API integration (OpenAI, Anthropic, etc.), prompt engineering, and guardrail implementation
- Proficiency with LLM frameworks: LangChain, LlamaIndex, Haystack, or similar orchestration tools
- Solid deep learning experience using PyTorch or TensorFlow, with understanding of transformer architectures
- Strong NLP skills with Hugging Face Transformers, sentence embeddings, and semantic similarity techniques
- Experience with vector databases (Pinecone, Weaviate, ChromaDB) for RAG systems
- Understanding of adversarial AI, model security, jailbreaking techniques, and AI safety principles
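As a minimal illustration of the guardrail and prompt-injection-defense skills listed above, the sketch below pattern-matches obvious injection phrasing before input reaches an LLM. The pattern list and function names are hypothetical; real defenses layer heuristics like these with classifier models and output-side filtering.

```python
import re

# Illustrative patterns only; not an exhaustive injection taxonomy.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

def flag_injection(user_input: str) -> list[str]:
    """Return the patterns a message matches, for logging or blocking."""
    lowered = user_input.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

def guarded_prompt(user_input: str) -> str:
    """Reject flagged input before it is ever interpolated into a prompt."""
    if flag_injection(user_input):
        raise ValueError("possible prompt injection detected")
    return f"Analyze the following message for scam indicators:\n{user_input}"
```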
You'll Stand Out If You Have:
- Experience in cybersecurity, fraud prevention, or trust & safety engineering
- Experience with AWS Bedrock or other managed LLM services
- Knowledge of model fine-tuning techniques (LoRA, QLoRA, PEFT, RLHF) and experience adapting LLMs for specific domains
- Familiarity with AI red-teaming, jailbreak detection, and prompt injection defenses
- Understanding of real-time streaming architectures (Kafka, Kinesis) for live threat detection
- Knowledge of multi-modal AI (text, image, audio) for detecting deepfakes and comprehensive scam detection
- Experience with computer vision for detecting fake documents or manipulated images
- AWS certifications (Machine Learning Specialty, Solutions Architect)
- Contributions to open-source AI safety or security projects
What will you get from us?
- Welcome to the good side, the home of scam protection! Work with industry-leading experts defining the future of cybersecurity and scam protection
- Be an AI pioneer, not a follower. Access industry-leading tools like Claude and Claude Code, with full support to integrate AI into your daily work while others are still figuring out policies. We're not asking "if" but "how" AI transforms our work, positioning you at the forefront of the industry.
- Thrive in our Fellowship culture where we empower, trust, challenge, and support each other in doing our best work.
- Flexible work that works for you: hybrid and remote options with team-agreed ways of working.
- Inclusive environment with flat, approachable leadership in our diverse global community.
- Comprehensive global benefits including Employee Share Savings Plan (ESSP), Fellow Member of the Board opportunities, and Annual Protect & Educate paid volunteer day.
- Wellbeing support through personal coaching services and one hour per week for personal recharging.
- Continuous growth via F-Secure Academy, Leadership programs, AI training, mentoring, and dedicated Learning Week.
- A security vetting may be conducted for the selected candidate in accordance with our employment process.