
LLM Engineer - Data Modeling

Follex Technology
Multiple Locations
7 - 10 Years

Posted on: 08/09/2025

Job Description

Position: Senior LLM Engineer

Experience: 7+ years overall

Relevant Experience: 4+ years

Location: Hyderabad (Onsite)

Notice Period: Immediate joiners

Key Responsibilities:


- Model Expertise: Work with transformer models (GPT, BERT, T5, RoBERTa, etc.) across NLP tasks including text generation, summarization, classification, and translation.

- Model Fine-Tuning: Fine-tune pre-trained models on domain-specific datasets to optimize for summarization, text generation, question answering, and related tasks.

- Prompt Engineering: Design, test, and iterate on contextually relevant prompts to guide model outputs toward the desired performance.

- Instruction-Based Prompting: Implement and refine instruction-based prompting strategies to achieve contextually accurate results.

- Learning Approaches: Apply zero-shot, few-shot, and many-shot learning methods to maximize model performance without extensive retraining.

- Reasoning Enhancement: Leverage Chain-of-Thought (CoT) prompting for structured, step-by-step reasoning in complex tasks.

- Model Evaluation: Evaluate model performance using BLEU, ROUGE, and other relevant metrics; identify opportunities for improvement (see the brief sketch after this list).

- Deployment: Deploy trained and fine-tuned models into production environments, integrating with real-time systems and pipelines.

- Bias & Reliability: Identify, monitor, and mitigate issues related to bias, hallucinations, and knowledge cutoffs in LLMs.

- Collaboration: Work closely with cross-functional teams (data scientists, engineers, product managers) to design scalable and efficient NLP-driven solutions.
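
Purely as an illustration of the evaluation responsibility above (not part of the role description): a minimal sketch that scores hypothetical model summaries against reference summaries with ROUGE, assuming the Hugging Face `evaluate` and `rouge_score` packages are installed.

```python
# Minimal ROUGE evaluation sketch using the Hugging Face `evaluate` library.
# Assumes: pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical model outputs and human-written references.
predictions = ["The board approved the budget and delayed the product launch to Q3."]
references = ["The board approved next year's budget and pushed the product launch to Q3."]

# Returns aggregate ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-Lsum F-measures.
scores = rouge.compute(predictions=predictions, references=references)
print(scores)
```

The same pattern extends to other metrics, e.g. evaluate.load("bleu"), depending on the task.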

Must-Have Skills:


- 7+ years of overall experience in software/AI development, including at least 2 years with transformer-based NLP models.

- 4+ years of hands-on expertise with transformer architectures (GPT, BERT, T5, RoBERTa, etc.).

- Strong understanding of attention mechanisms, self-attention layers, tokenization, embeddings, and context windows.

- Proven experience in fine-tuning pre-trained models for NLP tasks (summarization, classification, text generation, translation, Q&A).

- Expertise in prompt engineering, including zero-shot, few-shot, and many-shot learning, and prompt template creation (see the illustrative prompt sketch after this list).

- Experience with instruction-based prompting and Chain-of-Thought prompting for reasoning tasks.

- Proficiency in Python and NLP libraries/frameworks such as Hugging Face Transformers, spaCy, NLTK, PyTorch, and TensorFlow.

- Strong knowledge of model evaluation metrics (BLEU, ROUGE, perplexity, etc.).

- Experience in deploying models into production environments.

- Awareness of bias, hallucinations, and limitations in LLM outputs.
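
As a plain-Python illustration of the prompt-engineering skills listed above (instruction-based prompting, few-shot examples, and Chain-of-Thought reasoning), here is a sketch; the support-ticket task, examples, and labels are hypothetical and not taken from this posting.

```python
# Illustrative few-shot + Chain-of-Thought prompt template (no model API calls).
# The ticket-triage task and the examples below are hypothetical.
FEW_SHOT_EXAMPLES = [
    {
        "ticket": "App crashes when I upload a photo larger than 10 MB.",
        "reasoning": "The report describes a crash tied to file size, so it is a defect, not a usage question.",
        "label": "bug",
    },
    {
        "ticket": "How do I export my invoices as CSV?",
        "reasoning": "The user is asking how to use an existing feature.",
        "label": "how-to",
    },
]

INSTRUCTION = (
    "You are a support-ticket triage assistant. "
    "Classify each ticket as 'bug', 'how-to', or 'feature-request'. "
    "Think step by step before giving the final label."
)

def build_prompt(new_ticket: str) -> str:
    """Assemble an instruction-based, few-shot prompt that elicits CoT-style reasoning."""
    parts = [INSTRUCTION, ""]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Ticket: {ex['ticket']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Label: {ex['label']}")
        parts.append("")
    parts.append(f"Ticket: {new_ticket}")
    parts.append("Reasoning:")  # the model continues with step-by-step reasoning, then a final label
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("Please add dark mode to the dashboard."))
```

Dropping the example list gives a zero-shot prompt; adding more examples gives many-shot prompting, with only the prompt assembly changing.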

Good to Have:


- Experience with LLM observability tools and monitoring pipelines.

- Exposure to cloud platforms (AWS, GCP, Azure) for scalable model deployment (a minimal serving sketch follows this list).

- Knowledge of MLOps practices for model lifecycle management.
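
For context on the deployment and MLOps items above, a minimal serving sketch that exposes a fine-tuned summarization model behind a REST endpoint; FastAPI, the /summarize route, and the local checkpoint path are assumptions made for illustration, not the employer's actual stack.

```python
# Minimal model-serving sketch: a fine-tuned summarizer behind a FastAPI endpoint.
# Assumes: pip install fastapi uvicorn transformers torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Hypothetical path to a locally fine-tuned checkpoint.
summarizer = pipeline("summarization", model="./finetuned-model")

class SummarizeRequest(BaseModel):
    text: str

@app.post("/summarize")
def summarize(req: SummarizeRequest):
    # Greedy decoding keeps responses deterministic, which simplifies monitoring.
    result = summarizer(req.text, max_length=128, min_length=16, do_sample=False)
    return {"summary": result[0]["summary_text"]}

# Run locally with: uvicorn serve:app --port 8000  (assuming this file is saved as serve.py)
```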

