Posted on: 18/07/2025
Title: AIML - LLM Gen AI Engineer
Key Responsibilities:
- Design, develop, and implement solutions leveraging transformer-based models for various NLP tasks, including text generation, summarization, question answering, classification, and translation.
- Work extensively with transformer models such as GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), T5 (Text-to-Text Transfer Transformer), RoBERTa, and similar architectures.
- Apply a deep understanding of model architectures, attention mechanisms, and self-attention layers to effectively utilize LLMs for generating human-like text.
- Lead fine-tuning of pre-trained LLMs and other transformer models on domain-specific datasets to optimize performance for the specialized tasks listed above (a brief fine-tuning sketch follows this list).
- Utilize knowledge of concepts like attention mechanisms, context windows, tokenization, and embedding layers in model development and optimization.
- Address and mitigate issues related to biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.
- Apply expertise in crafting clear, concise, and contextually relevant prompts to guide LLMs towards generating desired outputs, including the use of instruction-based prompting.
- Implement and experiment with zero-shot, few-shot, and many-shot learning techniques for maximizing model performance without extensive retraining.
- Iterate on prompts and prompt engineering strategies to refine outputs, rigorously test model performance, and ensure consistent and high-quality results.
- Craft prompt templates for repetitive tasks, ensuring they are adaptable to different contexts and inputs (see the prompt-template sketch after this list).
- Demonstrate expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks by encouraging step-by-step breakdowns.
- Contribute to the entire lifecycle of machine learning models in an NLP context, including training, fine-tuning, and deployment.
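As an illustrative sketch of the fine-tuning responsibilities above, not a prescribed implementation: the snippet below assumes the Hugging Face Transformers and Datasets libraries, uses RoBERTa as the base model, and uses the public "imdb" dataset purely as a stand-in for a domain-specific corpus.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# "imdb" is only a stand-in here; a real project would swap in its own
# domain-specific corpus.
dataset = load_dataset("imdb")

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    # Tokenize with truncation so inputs fit within the model's context window.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# RoBERTa with a freshly initialized classification head for a two-label task.
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

args = TrainingArguments(
    output_dir="./roberta-domain-ft",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    # Small subsets keep the sketch cheap to run; a real run would use the full splits.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```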
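Similarly, a minimal sketch of the reusable few-shot / chain-of-thought prompt templates mentioned above, written in plain Python; the worked example embedded in the template is invented purely for illustration.

```python
from string import Template

# A reusable few-shot, chain-of-thought prompt template. The single worked
# example encourages the model to reason step by step before answering.
COT_TEMPLATE = Template("""You are a helpful assistant. Answer step by step.

Example:
Q: A train travels 60 km in 1.5 hours. What is its average speed?
A: Distance is 60 km and time is 1.5 hours. Speed = 60 / 1.5 = 40 km/h.
The answer is 40 km/h.

Now answer the new question the same way.
Q: $question
A:""")

def build_prompt(question: str) -> str:
    """Fill the template with a task-specific question."""
    return COT_TEMPLATE.substitute(question=question)

if __name__ == "__main__":
    print(build_prompt("A cyclist covers 45 km in 3 hours. What is her average speed?"))
```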
Required Skills & Qualifications:
- 8+ years of overall experience working with transformer-based models and NLP tasks, with a dedicated focus on text generation, summarization, question answering, classification, and similar applications.
- Expertise in transformer models like GPT, BERT, T5, RoBERTa, and similar foundational models.
- Strong familiarity with model architectures, attention mechanisms, and self-attention layers that enable LLMs to generate human-like text.
- Proven experience in fine-tuning pre-trained models on domain-specific datasets for various NLP tasks.
- Strong working knowledge of context windows, tokenization, and embedding layers.
- Awareness of biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.
- Expertise in crafting clear, concise, and contextually relevant prompts to guide LLMs towards generating desired outputs, including instruction-based prompting.
- Experience using zero-shot, few-shot, and many-shot learning techniques to maximize model performance without retraining (a minimal zero-shot example follows this list).
- Proven experience in iterating on prompts to refine outputs, test model performance, and ensure consistent results.
- Demonstrated ability to craft prompt templates for repetitive tasks, ensuring adaptability to different contexts and inputs.
- Expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks.
- Proficiency in Python and extensive experience with key NLP libraries (e.g., Hugging Face Transformers, spaCy, NLTK).
- Solid experience in training, fine-tuning, and deploying machine learning models in an NLP context.
- Excellent problem-solving skills and a strong analytical mindset.
- Strong communication and collaboration skills, with the ability to work effectively in a remote or hybrid environment.
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
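As a minimal sketch of the zero-shot techniques and Hugging Face tooling listed above, under the assumption that the Transformers pipeline API is available; "facebook/bart-large-mnli" is one commonly used backbone for zero-shot classification.

```python
from transformers import pipeline

# Zero-shot classification: no task-specific retraining; candidate labels are
# supplied at inference time.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The quarterly revenue grew by 12% year over year.",
    candidate_labels=["finance", "sports", "technology"],
)
# Print the top-ranked label and its score.
print(result["labels"][0], result["scores"][0])
```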
Posted By
Dinesh Reddy
Senior Talent Acquisition Specialist at Zen Technologies Limited
Posted in
AI/ML
Functional Area
ML / DL Engineering
Job Code
1514819