
Machine Learning Engineer - Data Modeling

Catalyst IQ
Bangalore
2 - 10 Years
4.4 | 2+ Reviews

Posted on: 25/08/2025

Job Description

Key Responsibilities:


LLM & Machine Learning:


- Work with a variety of LLMs including Hugging Face OSS models, GPT (OpenAI), Gemini (Google), Claude (Anthropic), Mixtral (Mistral), and LLaMA (Meta).


- Fine-tune and deploy LLMs for use cases such as summarization, Q&A, RAG (Retrieval-Augmented Generation), chatbots, and document intelligence (see the summarization sketch after this list).

- Evaluate and compare model performance and apply optimization strategies.
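
A minimal sketch of the kind of summarization workload described above, assuming the Hugging Face transformers library; the checkpoint name is illustrative and not specified in the role:

from transformers import pipeline

# Load an open-source summarization model from the Hugging Face Hub.
# "facebook/bart-large-cnn" is only an example checkpoint.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Large language models are increasingly used for summarization, Q&A, "
    "retrieval-augmented generation, chatbots, and document intelligence "
    "workloads across enterprise data pipelines."
)

# max_length / min_length bound the generated summary length in tokens.
result = summarizer(document, max_length=30, min_length=10)
print(result[0]["summary_text"])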

LLMOps & MLOps:


- Design and implement complete LLMOps workflows using tools such as: MLflow for experiment tracking and model versioning (see the MLflow sketch after this list).


- LangChain, LangGraph, LangFlow for LLM orchestration.


- Langfuse, LlamaIndex for observability and indexing.

- AWS SageMaker, Bedrock and Azure AI for model deployment and management.

- Monitor, log, and optimize inference latency and model behavior in production.
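
As an illustration of the experiment-tracking piece of such a workflow, a minimal MLflow sketch; the experiment name, parameters, and metric values are made up for the example:

import mlflow

mlflow.set_experiment("llm-summarization-eval")

with mlflow.start_run(run_name="bart-baseline"):
    # Record the configuration under evaluation.
    mlflow.log_param("model", "facebook/bart-large-cnn")
    mlflow.log_param("max_length", 30)

    # Record results (placeholder numbers; real values would come from an eval run)
    # so that runs can be compared and the best model promoted.
    mlflow.log_metric("rouge_l", 0.41)
    mlflow.log_metric("p95_latency_ms", 850.0)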

Databases & Vector Stores:


- Work with structured and unstructured data using MongoDB and PostgreSQL.


- Leverage vector databases such as Pinecone and ChromaDB for RAG-based applications (see the ChromaDB sketch after this list).

- Develop scalable data ingestion and transformation pipelines for AI training and inference.
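
A minimal sketch of the retrieval half of a RAG application, assuming ChromaDB's Python client with its default embedding function; the collection name and documents are illustrative:

import chromadb

client = chromadb.Client()  # in-memory; a persistent client would be used in production
collection = client.create_collection(name="knowledge-base")

# Ingest document chunks; ChromaDB embeds them with its default embedding function.
collection.add(
    documents=[
        "MLflow tracks experiments, parameters, and metrics.",
        "Pinecone and ChromaDB store embeddings for similarity search.",
    ],
    ids=["doc-1", "doc-2"],
)

# Retrieve the most relevant chunk for a question; in a full RAG pipeline
# the retrieved text would be passed to an LLM as context.
results = collection.query(query_texts=["How are experiments tracked?"], n_results=1)
print(results["documents"][0])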

Cloud & DevOps:


- Deploy and manage AI workloads on AWS and Azure cloud environments.


- Use Docker and Kubernetes for containerization and orchestration of LLM-based services.
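
Container images and deployments themselves live in Dockerfiles and Kubernetes manifests; as a small Python-side illustration, a health check over deployed LLM services using the official kubernetes client (the "llm-services" namespace is an assumption):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

# Report any pod in the (hypothetical) LLM services namespace that is not running.
for pod in v1.list_namespaced_pod(namespace="llm-services").items:
    if pod.status.phase != "Running":
        print(f"{pod.metadata.name}: {pod.status.phase}")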

Programming & Integration:


- Build robust APIs and microservices in Python, integrating with SQL and JavaScript where needed (see the API sketch after this list).


- Develop UI interfaces or dashboards to visualize model outputs and system metrics.
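
A minimal sketch of such a Python microservice, assuming FastAPI (not named in the posting) as the web framework; the endpoint and the stand-in model call are placeholders:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummarizeRequest(BaseModel):
    text: str

@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    # A real service would call the deployed LLM here;
    # truncation is only a stand-in for the model output.
    return {"summary": req.text[:200]}

# Run locally with: uvicorn main:app --reload (assuming this file is main.py)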


Essential Skills:


- Hands-on experience with multiple LLMs including GPT, Claude, Mixtral, Llama, etc.


- Expertise in MLOps/LLMOps frameworks: MLflow, LangChain, LangGraph, LangFlow, Langfuse, etc.

- Strong understanding of cloud-native AI deployment (AWS SageMaker, Bedrock, Azure AI).

- Proficient in vector databases like Pinecone and ChromaDB.

- Familiarity with DevOps best practices using Docker and Kubernetes.

- Proficient in Python, SQL, and JavaScript.

Preferred Qualifications:


- Previous experience building and deploying production-grade LLM or GenAI applications.

- Familiarity with real-time or low-latency systems involving LLMs.

- Certification in AWS or Azure cloud platforms.

- Exposure to prompt engineering, model fine-tuning, and LLM evaluation techniques.

