
Job Description

About the Role:

We are looking for a highly experienced GenAI Technical Architect with strong domain expertise in Generative AI, Transformer architectures, and LLM-based solution implementation. The ideal candidate will lead technical programs and architect cutting-edge solutions using large language models (LLMs), Retrieval-Augmented Generation (RAG), and related GenAI technologies. This is a hybrid role combining deep hands-on knowledge, architecture-level thinking, and program management skills.

Key Responsibilities:

- Architect and lead the end-to-end implementation of GenAI solutions, including LLM selection, RAG integration, knowledge retrieval, and model deployment.

- Design and implement systems built on Transformer architectures (encoder and decoder models), leveraging frameworks such as Hugging Face Transformers and LangChain, or custom pipelines.

- Develop, evaluate, and deploy models from both the autoencoding (BERT, RoBERTa, DistilBERT) and autoregressive (GPT, LLaMA, Mistral, PaLM, BLOOM, Claude, CodeGen, OPT) paradigms.

- Implement RAG architectures over real-world datasets and search systems (a minimal retrieval sketch follows this list).

- Lead LLM fine-tuning and prompt engineering for domain-specific use cases and optimized performance (a fine-tuning sketch also follows this list).

- Utilize LangChain or similar frameworks to build intelligent pipelines and agents that interact with data and APIs.

- Drive the design and implementation of Knowledge Graphs, integrating structured and unstructured data for enterprise knowledge systems.

- Build and execute LLM evaluation pipelines using standard frameworks and metrics such as RAGAS, ROUGE, BLEU, and BERTScore.

- Collaborate with cross-functional teams (data science, product management, software engineering) and stakeholders to align technical roadmaps with business objectives.

- Mentor junior engineers and data scientists on GenAI best practices and tooling.

- Stay up to date with the latest advancements in Generative AI, NLP, and ML research and incorporate them into the company's AI strategy.
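
For context on the retrieval work described above, the sketch below shows one minimal way such a pipeline can be wired up: documents are embedded, indexed in FAISS, and the closest chunks are stitched into an LLM prompt. The embedding model, the sample documents, and the omitted generation call are illustrative assumptions, not a prescribed stack.

```python
# Minimal RAG retrieval sketch: embed documents, index them in FAISS,
# retrieve the nearest chunks for a query, and assemble an LLM prompt.
# Assumes `pip install sentence-transformers faiss-cpu`; model and data are illustrative.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Enterprise customers receive 24/7 support via a dedicated channel.",
    "The API rate limit is 1,000 requests per minute per key.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")          # small, fast embedding model
doc_vectors = embedder.encode(documents).astype("float32")

index = faiss.IndexFlatL2(doc_vectors.shape[1])             # exact L2 search over embeddings
index.add(doc_vectors)

query = "How long do customers have to return a product?"
query_vector = embedder.encode([query]).astype("float32")
_, top_ids = index.search(query_vector, 2)                  # indices of the 2 closest documents

context = "\n".join(documents[i] for i in top_ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The generation step would call whichever LLM the stack standardises on
# (e.g. a Hugging Face pipeline or a hosted API); it is omitted here.
print(prompt)
```

In production the in-memory list would be replaced by a chunked corpus and a managed vector store (Pinecone, Weaviate, Elasticsearch), but the retrieve-then-prompt shape stays the same.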
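Likewise, a bare-bones causal-LM fine-tuning loop of the kind referenced above might look like the following Hugging Face Trainer sketch; the checkpoint (`distilgpt2`), the toy dataset, and the hyperparameters are placeholders chosen only to keep the example runnable.

```python
# Minimal causal-LM fine-tuning sketch with the Hugging Face Trainer.
# A production setup would add evaluation, checkpointing, and typically
# parameter-efficient tuning such as LoRA.
# Assumes `pip install transformers datasets accelerate`.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"                                   # small model for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token                   # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy domain corpus; in practice this is the curated, task-specific dataset.
train_data = Dataset.from_dict({"text": [
    "Q: What is the refund window? A: 30 days from purchase.",
    "Q: How do I reach support? A: Via the enterprise help desk.",
]})
tokenized = train_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```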

Required Technical Skillset:

- Strong expertise in Transformer architectures, including both:

  - Autoencoding models: BERT, RoBERTa, DistilBERT

  - Autoregressive models: GPT (OpenAI), GPT-J, LLaMA, Claude, Mistral, PaLM, CodeGen, BLOOM, OPT, etc.

- Solid understanding of the differences between autoencoding and autoregressive models and their respective use cases (see the first sketch after this list).

- Proven experience building and deploying RAG-based systems (e.g., using FAISS, Elasticsearch, or vector databases such as Pinecone and Weaviate).

- Proficiency with LangChain for orchestrating LLM applications.

- Demonstrated experience fine-tuning LLMs on custom datasets or tasks.

- Strong understanding of MLOps and GenAI Ops pipelines, from experimentation to deployment.

- Experience in AI/ML development lifecycle, including data preparation, model training, evaluation, and monitoring.

- Exposure to Knowledge Graph design and implementation is highly desirable.

- Familiarity with evaluation frameworks and scoring metrics such as RAGAS, ROUGE, BLEU, and BERTScore (a brief scoring example follows this list).

- Programming expertise in Python and hands-on experience with Hugging Face Transformers, OpenAI APIs, LangChain, PyTorch/TensorFlow, LLM fine-tuning libraries.

- Knowledge of cloud-based AI platforms (AWS SageMaker, Azure ML, GCP Vertex AI) and containerization (Docker, Kubernetes).
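
To make the autoencoding vs. autoregressive distinction above concrete, the short sketch below contrasts masked-token prediction with left-to-right generation using Hugging Face pipelines; the checkpoints (`distilbert-base-uncased`, `gpt2`) are illustrative stand-ins for the larger models listed.

```python
# Contrast of the two Transformer paradigms, using Hugging Face pipelines.
# Assumes `pip install transformers torch`; checkpoints are illustrative.
from transformers import pipeline

# Autoencoding (encoder-only) models see the whole sentence at once and suit
# understanding tasks such as masked-token prediction or classification.
fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")
print(fill_mask("Retrieval-augmented generation grounds an LLM in external [MASK].")[0]["token_str"])

# Autoregressive (decoder-only) models predict the next token left to right
# and suit open-ended generation.
generator = pipeline("text-generation", model="gpt2")
print(generator("Retrieval-augmented generation grounds an LLM in", max_new_tokens=10)[0]["generated_text"])
```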
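As a brief scoring example for the metrics listed above, the snippet below computes ROUGE and BERTScore with the Hugging Face `evaluate` library on toy data; RAGAS-style, RAG-specific evaluation would typically wrap a similar loop around retrieved contexts as well.

```python
# Toy LLM-output scoring with two of the metrics listed above.
# Assumes `pip install evaluate rouge_score bert_score`; data is illustrative.
import evaluate

predictions = ["Customers may return products within 30 days."]
references = ["Our refund policy allows returns within 30 days of purchase."]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))  # rouge1/rouge2/rougeL scores

bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(scores["f1"])  # per-example semantic-similarity F1 from contextual embeddings
```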

