
P99Soft - Artificial Intelligence/Machine Learning Engineer - LLM Models

P99SOFT PRIVATE LIMITED
Hyderabad
2 - 4 Years
Rating: 4.7 (31+ Reviews)

Posted on: 26/10/2025

Job Description

About the Role

We are looking for an innovative and hands-on AI Engineer to design, develop, and deploy intelligent AI agents leveraging the Model Context Protocol (MCP) framework. This role requires a strong understanding of AI system design, LLM integration, and cloud-native deployment (preferably on Azure Foundry or equivalent platforms).


The ideal candidate will be passionate about building autonomous and context-aware AI systems that enhance productivity, automation, and data-driven decision-making.

This is an excellent opportunity for professionals with strong AI engineering and cloud experience to work on next-generation AI agent architectures, integrating machine learning models, APIs, and contextual reasoning systems.

Key Responsibilities :

- Design, develop, and operationalize AI Agents using the Model Context Protocol (MCP) for structured contextual interaction.

- Build modular and extensible agent frameworks capable of multi-turn reasoning, data retrieval, and contextual task execution.

- Integrate LLMs (Large Language Models) with external APIs, databases, and enterprise tools for intelligent task orchestration.

- Implement context management and memory persistence mechanisms within agents.

- Develop, fine-tune, and optimize machine learning and deep learning models using frameworks such as TensorFlow, PyTorch, or Scikit-learn.

- Implement model training pipelines for both supervised and unsupervised learning tasks.

- Ensure performance optimization of deployed models, focusing on latency, scalability, and accuracy.

- Utilize Python-based frameworks for data processing, feature engineering, and evaluation.

- Build and deploy AI/ML solutions on Azure Foundry, Azure Machine Learning, or equivalent cloud platforms (AWS Sagemaker, GCP Vertex AI).

- Manage end-to-end model lifecycle from experimentation to production deployment using CI/CD pipelines and MLOps best practices.

- Implement containerized solutions using Docker and orchestrate deployments with Kubernetes (AKS, EKS, or GKE).

- Ensure proper monitoring, logging, and scaling of AI workloads in production environments.

- Develop data ingestion and processing pipelines to support AI model training and inference.

- Work with RESTful APIs, GraphQL, and webhooks to connect AI agents to external systems and services.

- Manage structured and unstructured data, ensuring quality, integrity, and governance.

- Stay up-to-date with the latest advancements in MCP frameworks, AI agent orchestration, and Generative AI technologies.

- Experiment with open-source frameworks (LangChain, LlamaIndex, Haystack, Semantic Kernel) to prototype intelligent agent systems.

- Drive continuous improvement through R&D on AI/LLM optimization, retrieval-augmented generation (RAG), and hybrid AI systems; a minimal RAG sketch follows this list.
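
For illustration only, a minimal sketch of the retrieval-augmented generation (RAG) pattern referenced above. The embed() and complete() functions are placeholder stand-ins (not a prescribed library, model, or stack) used solely to keep the sketch self-contained:

```python
import numpy as np

# Placeholder embedding and LLM functions -- stand-ins for a real embedding model
# and an LLM provider SDK, used here only to make the sketch runnable end to end.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def complete(prompt: str) -> str:
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

# Toy document store with pre-computed embeddings acting as the retrieval index.
DOCS = [
    "MCP defines how agents exchange structured context with tools.",
    "RAG grounds LLM answers in documents retrieved at query time.",
    "Kubernetes orchestrates containerized model-serving workloads.",
]
INDEX = np.stack([embed(d) for d in DOCS])

def rag_answer(question: str, k: int = 2) -> str:
    """Retrieve the k most similar documents, then ask the LLM with that context."""
    q = embed(question)
    sims = INDEX @ q / (np.linalg.norm(INDEX, axis=1) * np.linalg.norm(q))
    context = "\n".join(DOCS[i] for i in np.argsort(sims)[::-1][:k])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return complete(prompt)

if __name__ == "__main__":
    print(rag_answer("How does RAG improve answer quality?"))
```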

Requirements :

- 5-8 years of total experience in AI, Data Science, or Software Engineering, with at least 2+ years in AI agent development or LLM-based systems.

- Proven experience with Model Context Protocol (MCP) and AI agent frameworks.

- Strong proficiency in Python and core ML/AI libraries: TensorFlow, PyTorch, Scikit-learn, NumPy, Pandas, etc.

- Deep understanding of cloud environments (Azure Foundry preferred; AWS/GCP is a plus).

- Experience with API integration, microservices, and data pipelines (ETL/ELT).

- Proficiency in version control systems (Git, GitHub, GitLab) and familiarity with DevOps/MLOps workflows.

- Excellent analytical thinking, debugging, and problem-solving skills.

