Posted on: 25/08/2025
Key Responsibilities :
LLM & Machine Learning :
- Work with a variety of LLMs including Hugging Face OSS models, GPT (OpenAI), Gemini (Google), Claude (Anthropic), Mixtral (Mistral), and LLaMA (Meta).
- Fine-tune and deploy LLMs for use cases such as summarization, Q&A, RAG (Retrieval Augmented Generation), chatbots, and document intelligence (a minimal summarization sketch follows this list).
- Evaluate and compare model performance and apply optimization strategies.
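To make the summarization use case concrete, here is a minimal sketch assuming the Hugging Face `transformers` library; the model name is an illustrative choice, not a prescribed stack.

```python
# Minimal sketch (not from the posting): summarization with a Hugging Face
# OSS model. The model name below is an illustrative choice, not a requirement.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "Retrieval Augmented Generation (RAG) pairs a retriever that pulls "
    "relevant documents from a knowledge base with a generator LLM that "
    "conditions its answer on the retrieved context."
)

# max_length / min_length bound the length of the generated summary.
result = summarizer(text, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```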
LLMOps & MLOps :
- Design and implement complete LLMOps workflows using tools such as:
  - MLflow for experiment tracking and model versioning (a tracking sketch follows this list).
  - LangChain, LangGraph, and LangFlow for LLM orchestration.
  - Langfuse and LlamaIndex for observability and indexing.
  - AWS SageMaker, Bedrock, and Azure AI for model deployment and management.
- Monitor, log, and optimize inference latency and model behavior in production.
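As a concrete illustration of the experiment-tracking piece, a minimal MLflow sketch is below; the experiment name, parameters, and metric values are placeholders, not values from the posting.

```python
# Minimal MLflow tracking sketch; names and values are placeholders.
import mlflow

mlflow.set_experiment("llm-summarization-finetune")

with mlflow.start_run(run_name="baseline"):
    # Log the hyperparameters of a fine-tuning run.
    mlflow.log_params({"base_model": "llama-3-8b", "learning_rate": 2e-5, "epochs": 3})
    # Log evaluation metrics once computed, so runs can be compared in the MLflow UI.
    mlflow.log_metric("rouge_l", 0.41)
    mlflow.log_metric("p95_latency_ms", 230)
```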
Databases & Vector Stores :
- Work with structured and unstructured data using MongoDB and PostgreSQL.
- Develop scalable data ingestion and transformation pipelines for AI training and inference (an ingestion sketch follows this list).
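A minimal ingestion sketch is below, assuming `pymongo` and `chromadb`; the connection string, database, collection, and field names are hypothetical.

```python
# Minimal sketch: ingest documents from MongoDB into a ChromaDB collection.
# Connection string, database, collection, and field names are hypothetical.
import chromadb
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")
source = mongo["knowledge_base"]["articles"]

chroma = chromadb.PersistentClient(path="./chroma_store")
collection = chroma.get_or_create_collection("articles")

docs, ids = [], []
for record in source.find({}, {"_id": 1, "body": 1}).limit(100):
    docs.append(record["body"])
    ids.append(str(record["_id"]))

# ChromaDB embeds the documents with its default embedding function
# unless the collection is configured with a custom one.
collection.add(documents=docs, ids=ids)
```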
Cloud & DevOps :
- Deploy and manage AI workloads on AWS and Azure cloud environments (a SageMaker invocation sketch follows).
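For the AWS side, a minimal invocation sketch using `boto3` against a SageMaker endpoint is below; the region, endpoint name, and payload schema are assumptions for illustration.

```python
# Minimal sketch: invoking a model hosted on an AWS SageMaker endpoint.
# Region, endpoint name, and payload schema are illustrative assumptions.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {"inputs": "Summarize: LLMOps covers deployment, monitoring, and versioning."}
response = runtime.invoke_endpoint(
    EndpointName="llm-summarizer-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```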
Programming & Integration :
- Build robust APIs and microservices in Python, with integrations in SQL and JavaScript where needed (a minimal API sketch follows this list).
- Develop UI interfaces or dashboards to visualize model outputs and system metrics.
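As an illustration of the API/microservice work, a minimal FastAPI sketch is below; FastAPI is an assumed framework choice (the posting only specifies Python), and the route and schema are placeholders.

```python
# Minimal sketch of a Python microservice exposing a model endpoint.
# FastAPI is an assumed framework choice; the route and schema are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="LLM inference service")

class SummarizeRequest(BaseModel):
    text: str
    max_chars: int = 500

@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    # Placeholder response; a real handler would call the deployed LLM here.
    return {"summary": req.text[: req.max_chars], "model": "placeholder"}
```

Run locally with `uvicorn main:app --reload` (assuming the file is named `main.py`).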
Essential Skills :
- Hands-on experience with multiple LLMs including GPT, Claude, Mixtral, Llama, etc.
- Working knowledge of LLMOps and orchestration tooling such as MLflow, LangChain, LangGraph, Langfuse, etc.
- Strong understanding of cloud-native AI deployment (AWS SageMaker, Bedrock, Azure AI).
- Proficient in vector databases like Pinecone and ChromaDB.
- Familiarity with DevOps best practices using Docker and Kubernetes.
- Proficient in Python, SQL, and JavaScript.
Preferred Qualifications :
- Familiarity with real-time or low-latency systems involving LLMs.
- Certification in AWS or Azure cloud platforms.
- Exposure to prompt engineering, model fine-tuning, and LLM evaluation techniques.