hirist

Verveo Solutions - AI/Generative AI Engineer - Python/RAG

Verveo Solutions
2 - 5 Years
Bangalore

Posted on: 08/04/2026

Job Description

About the role:

We're looking for an early-career AI / Generative AI Engineer to join our ML/AI team.

You'll build, fine-tune, and deploy models (NLP / multimodal) and productionize GenAI features used by product teams. The role balances hands-on model work, prompt and dataset engineering, and production engineering responsibilities to deliver reliable, secure, and scalable AI services.

Key responsibilities:

- Design, fine-tune, and evaluate generative models (transformers) for tasks such as summarization, Q&A, code generation, and retrieval-augmented generation (RAG).

- Implement data pipelines for training and evaluation: dataset collection, cleaning, labeling, and augmentation.

- Develop, test, and maintain prompt engineering practices and templates; measure prompt drift and performance.

- Build RAG pipelines (embeddings, vector store selection, index management, retriever tuning).

- Containerize models and services (Docker), create reproducible deployments (FastAPI / Flask / .NET wrappers), and help deploy to staging/production (K8s, serverless, or cloud infra).

- Implement monitoring, logging, and evaluation metrics for model performance and data/feature drift.

- Work with product and infra teams to integrate AI features into user-facing apps and ensure secure usage (rate-limits, content filtering, PII redaction).

- Keep up with new model releases and evaluate third-party APIs (OpenAI, Anthropic, Meta, etc.) for integration.

- Write clear documentation, runbooks, and reproducible experiments.
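The RAG pipeline described above (embeddings, vector store, retriever) boils down to a retrieve-then-generate loop. A minimal sketch, using toy bag-of-words "embeddings" and in-memory cosine search purely for illustration; a production pipeline would swap in a learned embedding model (e.g. sentence-transformers) and a real vector store such as FAISS, and the corpus below is invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. A real pipeline would
    # call an embedding model here and index the vectors in a vector DB.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Retriever step of RAG: rank documents by similarity to the query
    # and return the top-k as context for the generator prompt.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "invoices are emailed on the first of each month",
    "password resets are handled by the identity team",
    "the identity team owns single sign-on and password policy",
]
context = retrieve("how do I reset my password", corpus)
# `context` is then inserted into the LLM prompt alongside the query.
```

Retriever tuning (chunk size, k, hybrid scoring) happens in `retrieve`; the generator never sees documents the retriever misses, which is why the bullet above singles out retriever tuning.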

Required qualifications:

- 2-5 years of professional experience in applied ML / NLP / generative model work.

- Strong Python skills and experience with ML frameworks: PyTorch (preferred) or TensorFlow.

- Experience with transformer models and libraries: Hugging Face Transformers, sentence-transformers, or equivalent.

- Experience with embeddings and vector DBs (e.g., FAISS, Milvus, Pinecone, Weaviate).

- Good understanding of model evaluation: ROUGE, BLEU, accuracy, F1, human-eval basics, and safety metrics.

- Solid software engineering fundamentals: Git, unit testing, code reviews, and RESTful APIs.

- Experience with LLM orchestration tools / agent frameworks (LangChain, LlamaIndex, LangGraph, Semantic Kernel, AutoGen).
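Of the evaluation metrics listed above, token-level F1 is the simplest to implement from scratch; this is the SQuAD-style formulation (precision/recall over the multiset of shared tokens between prediction and reference), shown here as a generic sketch with invented example strings:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    # Token-level F1 as used in extractive QA evaluation:
    # harmonic mean of precision and recall over shared tokens.
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# A short prediction covering part of the reference scores partial credit:
score = token_f1("the cat sat", "the cat sat on the mat")  # 2/3
```

Corpus-level metrics like ROUGE and BLEU add n-gram matching and brevity handling on top of this idea, which is why established implementations (e.g. `rouge-score`, `sacrebleu`) are preferred in practice.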

Preferred / nice-to-have:

- Knowledge of prompt engineering best practices and prompt templates.

- Experience with cloud platforms: AWS / GCP / Azure (SageMaker, Vertex AI, Bedrock, etc.).

- Exposure to MLOps tooling : MLflow, DVC.

- Familiarity with security / privacy practices for models (PII handling, content moderation).

- Experience with production monitoring for ML (Prometheus, Grafana, SLOs).

Soft skills:

- Strong problem-solving and debugging skills.

- Ability to communicate model trade-offs and limitations to non-ML stakeholders.

- Collaborative mindset: works well with product managers, backend engineers, and designers.

- Attention to reproducible experiments and documentation.

Deliverables / KPIs (first 3-6 months):

- Ship at least one end-to-end GenAI feature (from prototype through staged deployment) with documented evaluation results.

- Reproducible training/fine-tuning pipeline and an experiment tracking dashboard.

- Production-ready inference endpoint with basic monitoring and cost controls.

- Documented prompt templates and a rollback strategy for model releases.
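The "documented prompt templates" deliverable can start as simply as versioned templates with strict substitution, so a missing field fails loudly instead of silently shipping a malformed prompt. A minimal sketch using only the standard library; the template name, version tag, and fields are all illustrative:

```python
from string import Template

# A versioned prompt template. Keeping the version in the name supports
# the rollback strategy above: a release pins a known-good template.
SUMMARIZE_V2 = Template(
    "You are a concise assistant.\n"
    "Summarize the following text in at most $max_sentences sentences:\n\n"
    "$document"
)

def render(template: Template, **fields) -> str:
    # substitute() (unlike safe_substitute()) raises KeyError on any
    # missing field, catching template/caller drift before a request
    # ever reaches the model.
    return template.substitute(**fields)

prompt = render(SUMMARIZE_V2, max_sentences=2,
                document="Quarterly revenue rose 8%.")
```

Swapping `SUMMARIZE_V2` for `SUMMARIZE_V1` in one place is then the rollback; measuring the same eval suite against both versions is the drift check mentioned in the responsibilities.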
