Posted on: 15/04/2026
About the Job:
We are seeking a Senior AI/ML Engineer to design, develop, and deploy production-grade AI/ML solutions within GSK's Digital & Tech organization.
This role focuses on building Generative AI applications, multi-agent systems, and advanced retrieval pipelines that drive measurable business impact.
You will work at the intersection of cutting-edge AI research and enterprise software engineering, collaborating with data scientists, platform engineers, and domain experts across R&D, supply chain, and commercial functions.
Essential Job Functions:
- Design, develop, and deploy Generative AI applications using LLMs (GPT-4, Claude, Gemini, open-source models) for enterprise use cases.
- Build and orchestrate multi-agent systems using frameworks like LangGraph, LangChain, CrewAI, or AutoGen with function calling and tool use.
- Implement Retrieval-Augmented Generation (RAG), Graph RAG, and hybrid retrieval pipelines using vector databases (Pinecone, Weaviate, Chroma, pgvector).
- Apply prompt engineering, chain-of-thought reasoning, and context engineering techniques to optimize model outputs.
- Fine-tune LLMs and embedding models for domain-specific tasks using LoRA, QLoRA, or full fine-tuning approaches.
- Implement guardrails, content filtering, and safety mechanisms for responsible AI deployment.
ML Engineering & MLOps:
- Build end-to-end ML pipelines: data ingestion, feature engineering, model training, evaluation, and deployment.
- Implement LLMOps practices: model versioning, A/B testing, prompt management, evaluation frameworks (LLM-as-judge, RAGAS, custom metrics).
- Deploy and manage LLM inference using frameworks such as vLLM, TensorRT-LLM, or DeepSpeed for latency and cost optimization.
- Monitor model performance, detect drift, and implement continuous improvement loops.
- Build observability for AI systems using LangSmith, Langfuse, or custom tracing solutions.
Architecture & Cloud:
- Architect scalable AI solutions on AWS (Bedrock, SageMaker, Lambda) or Azure (OpenAI Service, ML Studio).
- Containerize AI applications with Docker and deploy via Kubernetes, ECS, or serverless patterns.
- Design event-driven and API-first architectures for AI service integration with enterprise systems.
- Implement CI/CD pipelines for ML models and AI applications.
Collaboration & Leadership:
- Collaborate with data scientists, domain experts, and product owners to translate business problems into AI solutions.
- Conduct code reviews, architectural design reviews, and contribute to engineering standards.
- Mentor junior AI/ML engineers; lead technical knowledge-sharing sessions.
- Evaluate and recommend emerging AI technologies, frameworks, and approaches.
- Present AI solutions and results to technical and non-technical stakeholders.
Qualifications:
- Strong planning skills; able to work to a plan agreed with the technical lead/architect.
- Strong GenAI development skills alongside classical ML/NLP foundations.
- Planning capability: able to lead technical planning for AI sprints and commit to delivery plans crafted by engineering leads.
- Accountable, self-driven, with a bias toward shipping production-grade solutions.
Required:
- 10+ years of total experience in ML/AI or data science roles, with a minimum of 2 years building and deploying GenAI / LLM applications in production.
- Strong proficiency in Python and at least one ML framework (PyTorch / TensorFlow).
- Hands-on experience with RAG pipelines, vector databases, and LLM orchestration frameworks.
- Experience deploying AI solutions on cloud platforms (AWS or Azure) with containerization.
- Solid understanding of transformer architectures, attention mechanisms, and modern NLP techniques.
Preferred:
- Background in Graph RAG, knowledge graphs, or ontology-based information extraction.