hirist

Generative AI Engineer - Large Language Models

ENTENTE SOLUTIONS LLP
Gurgaon/Gurugram
2 - 5 Years

Posted on: 12/08/2025

Job Description

Job Title : Generative AI Engineer - Python | LLM | LangChain | HuggingFace | AWS/GCP/Azure

Experience : 3 to 5 Years (with minimum 2 years in Generative AI / LLM development)

Location : Gurgaon (work from office)

Employment Type : Full-time


About Us :


We are a fast-growing AI-first company building Generative AI products that solve real-world problems across industries. Our work blends cutting-edge AI research with production-grade software engineering to create scalable, high-performance AI systems.


Role Overview :

We are seeking a Generative AI Engineer with expertise in Large Language Models (LLMs), transformer architectures, and AI product development. This role demands hands-on skills in fine-tuning models, optimizing inference, and deploying AI solutions on cloud platforms. You will work with commercial models (OpenAI, Anthropic, Gemini) and open-source models (LLaMA, Mistral), and integrate them into production-grade applications.


Key Responsibilities :

- Build & Deploy : Design, develop, and deploy production-grade Generative AI applications (not just proof-of-concepts).

- LLM Integration : Integrate commercial APIs (OpenAI, Anthropic, Gemini) and open-source models (LLaMA, Mistral, Vicuna) using LangChain, LlamaIndex, HuggingFace Transformers.

- Fine-tuning : Train and fine-tune LLMs/SLMs using LoRA, QLoRA, and SFT, and optimize inference with NVIDIA Triton, CUDA, and TensorRT.

- Vector Search & RAG : Implement semantic search and Retrieval-Augmented Generation (RAG) pipelines using Chroma, Pinecone, FAISS, Weaviate, Qdrant.

- Observability & Evaluation : Use tools like LangFuse, PromptLayer, WandB to monitor and evaluate AI system performance.

- API Development : Build and integrate robust APIs with FastAPI or Flask, and orchestrate agentic workflows.

- Cloud & DevOps : Deploy on AWS, GCP, Azure using Docker, Kubernetes, Terraform, GitHub Actions.
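The vector-search and RAG responsibility above can be illustrated with a minimal sketch. This toy in-memory retriever uses bag-of-words cosine similarity in place of a real embedding model and vector database (Chroma, Pinecone, etc.); all names and documents here are hypothetical, and a production pipeline would swap in proper embeddings and approximate nearest-neighbour search.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. A real pipeline would
    # call an embedding model (via HuggingFace or a hosted API) instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; a vector DB does this
    # at scale with approximate nearest-neighbour indexes.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "LoRA fine-tunes large language models with low-rank adapters",
    "Kubernetes orchestrates containers across a cluster",
    "RAG augments a language model with retrieved context",
]

# Retrieval-Augmented Generation: fetch context, then prepend it to the
# prompt that would be sent to the LLM.
question = "how does RAG help a language model"
context = retrieve(question, docs, k=1)
prompt = f"Context: {context[0]}\n\nQuestion: {question}"
```

The retrieved snippet is stitched into the prompt so the model answers from supplied context rather than parametric memory alone.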


Required Skills & Qualifications :


Experience : 3 to 5 years in software engineering / machine learning with 2+ years in Generative AI.

Project Delivery : Must have delivered at least one end-to-end working GenAI product (to be demonstrated during the interview).

Technical Skills :

- Python (Advanced), LangChain, HuggingFace, LlamaIndex

- Transformer architecture, embeddings, tokenization, attention mechanisms

- LLM fine-tuning (LoRA, QLoRA, SFT)

- Vector databases (Chroma, Pinecone, FAISS, Weaviate)

- NVIDIA stack for inference optimization

- Cloud & Deployment : AWS SageMaker, GCP Vertex AI, Azure OpenAI, Kubernetes, Docker

Mindset : Research-driven and innovation-oriented, with strong problem-solving ability
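The LoRA fine-tuning skill listed above rests on a simple idea worth sketching: instead of updating a full weight matrix W (d_out × d_in), train two small matrices B (d_out × r) and A (r × d_in) with rank r ≪ min(d_out, d_in), so the adapted layer computes Wx + BAx. The layer sizes below are illustrative, not tied to any specific model.

```python
# Parameter-count comparison for a LoRA adapter on one linear layer.
d_out, d_in, r = 4096, 4096, 8   # illustrative transformer layer sizes

full_params = d_out * d_in          # full fine-tune: update all of W
lora_params = d_out * r + r * d_in  # LoRA: train only B and A

print(f"full fine-tune: {full_params:,} trainable parameters")
print(f"LoRA (r={r}):    {lora_params:,} trainable parameters")
print(f"reduction:      {full_params / lora_params:.0f}x fewer")
```

For these sizes LoRA trains 65,536 parameters instead of 16,777,216 (a 256x reduction), which is why it makes fine-tuning large models tractable on modest hardware.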

Preferred / Good to Have :

- RLHF, DPO, multi-agent AI systems

- Multi-modal AI (CLIP, LLaVA, BLIP)

- Model quantization (GGUF, GPTQ, AWQ)

- Contributions to open-source AI projects


Tech Stack Exposure :


- Languages & Frameworks : Python, LangChain, HuggingFace Transformers, FastAPI, Flask


- LLM APIs : OpenAI, Anthropic, Gemini, HuggingFace, LLaMA, Mistral

- Fine-tuning Tools : LoRA, QLoRA, SFT, DPO

- Vector Databases : Chroma, Pinecone, FAISS, Weaviate, Qdrant

- Cloud & Deployment : AWS, GCP, Azure, Docker, Kubernetes, Terraform

- Observability : LangFuse, PromptLayer, WandB


Why Join Us ?

- Opportunity to work on state-of-the-art Generative AI solutions from research to deployment

- Collaborate with top AI researchers and engineers

- Work on high-impact projects used globally

- Flexible work environment and growth-oriented culture

