Posted on: 22/07/2025
Job Role/Title : LLM Engineer
Location : Hyderabad / Visakhapatnam (Vizag)
About the Role :
We're looking for an experienced and motivated LLM Engineer to lead the fine-tuning and optimization of large language models on billion-scale datasets. In this high-impact role, you'll build scalable ML infrastructure, adapt open-source models to domain-specific data, and shape intelligent systems that operate at real-world scale.
You'll work with advanced open-source models (such as LLaMA, Mistral, or Falcon), implement parameter-efficient fine-tuning strategies, and build robust pipelines that transform raw behavioral and textual signals into aligned, production-ready models.
This is an ideal role for someone who enjoys solving deep learning challenges, scaling models across massive datasets, and laying the foundation for future intelligent AI systems.
What You'll Do :
- Fine-tune and adapt open-source LLMs (e.g., LLaMA, Mistral, Falcon) on massive domain-specific datasets
- Build scalable pipelines for data preprocessing and model fine-tuning using big data processing frameworks
- Apply fine-tuning strategies such as LoRA, PEFT, and RLHF to align models efficiently
- Optimize performance using distributed training techniques for multi-GPU or multi-node environments
- Design and manage experiments, run training cycles, and evaluate model performance across key quality metrics
- Monitor model behavior over time and iterate on tuning approaches for robustness and alignment
- Collaborate across engineering, infrastructure, and product teams to integrate models into user-facing applications
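To illustrate the parameter-efficient fine-tuning mentioned above, here is a minimal sketch of the core LoRA idea in plain PyTorch: the pretrained weights are frozen, and a trainable low-rank update is learned alongside them. This is a simplified illustration, not any particular production setup; the class name, ranks, and dimensions below are hypothetical, and in practice a library such as Hugging Face PEFT would be used instead of hand-rolling this.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank (LoRA) update.

    Forward pass computes base(x) + scale * x @ A^T @ B^T, where A (r x in)
    and B (out x r) are the only trainable parameters.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A: small random init; B: zeros, so the update is a no-op at start
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# Usage: only the low-rank factors are trained, so for a 64x64 layer with
# r=4 the trainable parameter count drops from 4096 (+64 bias) to 512.
layer = LoRALinear(nn.Linear(64, 64), r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```

Because B is zero-initialized, the wrapped layer initially reproduces the base model exactly, which is what makes LoRA safe to bolt onto a pretrained network.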
What We're Looking For :
You're someone who thrives on scaling deep learning systems, solving real-world modeling challenges, and improving models through experimentation and iteration.
- 3 - 5 years of experience fine-tuning LLMs or large-scale NLP models
- Deep understanding of Machine Learning, Deep Learning, NLP, and LLMs
- Strong proficiency in Python (required)
- Hands-on experience with PyTorch and Hugging Face models
- Familiarity with distributed training strategies for large models (e.g., DeepSpeed, FSDP)
- Experience handling and processing large datasets using big data frameworks
- Comfortable running experiments, evaluating results, and refining models through iterative tuning
- Experience working with or contributing to open-source LLMs
Nice to have :
- Familiarity with agentic workflows, including tool use, task chaining, or standards like Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication