
BMC Software - Principal AI Engineer

BMC Software India Pvt. Ltd
10 - 16 Years
Multiple Locations

Posted on: 24/04/2026

Job Description
We are looking for an AI Engineer to build our next-generation Agentic AI platform from 0 to 1. This is a hands-on, delivery-driven role for a senior engineer who has shipped AI-powered products into real enterprise environments, understands the trade-offs of production AI systems, and takes full ownership of outcomes, not just models or demos.


You will work alongside our VP of Engineering, VP of AI, and product leadership to define and build the core AI systems of the platform. You will spend most of your time designing, coding, and shipping production-grade AI capabilities used by external B2B customers. Success is measured by reliability, controllability, cost efficiency, and customer impact, not novelty.


Here is how, through this exciting role, YOU will contribute to BMC's and your own success:


- Design, build, and evolve agentic AI systems that reason, plan, execute, and adapt in production environments.


- Take AI-driven features from concept to production in a true 0-to-1 product environment.


- Write and review high-quality production code (Python-first) across AI pipelines, inference services, orchestration layers, and supporting systems.


- Implement prompt engineering, tool use, memory, evaluation, and guardrails as first-class engineering concerns, not experiments.


- Design agent frameworks that balance autonomy with determinism, observability, and safety.


- Make pragmatic architectural trade-offs across latency, cost, accuracy, scalability, and maintainability.


- Integrate and operate LLMs (commercial and/or open-source) including model selection, fine-tuning strategies, embeddings, retrieval (RAG), and inference optimization.


- Address real-world issues: hallucinations, drift, prompt regressions, failure modes, and customer trust.


- Deploy and operate AI services across cloud platforms (AWS, Azure, GCP), including secure enterprise integrations and customer-specific deployments.


- Design scalable inference and orchestration architectures using containers, APIs, and distributed systems.


- Ensure the platform is shippable, debuggable, and supportable, not fragile or research-grade.


- Act with founder-level ownership: identify gaps, propose solutions, and move forward without waiting for perfect requirements.


To ensure you're set up for success, you will bring the following skillset & experience:


- 10+ years of professional software development experience, with significant time shipping B2B products used by external customers.


- Strong software engineering foundation with expert-level Python and experience designing production systems.


- Proven experience building, deploying, and operating AI-powered products in production, not just prototypes or research.


- Hands-on experience with LLMs and GenAI systems in real applications (e.g., agents, copilots, automation, decision systems).


Deep understanding of several of the following:


- Agent frameworks and orchestration


- Prompt engineering and tool-use patterns


- RAG architectures and vector search


- Model evaluation, feedback loops, and monitoring


- Safety, guardrails, and enterprise controls


Hands-on experience with several of the following in real systems:


- LangGraph and/or LangChain


- LlamaIndex


- Vector databases (e.g., Pinecone, Weaviate, FAISS, Milvus)


- Prompt engineering as a managed, versioned, testable artifact


- Experience deploying and operating LLMs using AWS SageMaker, Vertex AI, or equivalent managed platforms, or via direct API integrations (OpenAI, Anthropic)


- Experience designing multi-agent systems or complex agent workflows.


- Experience commercializing AI features under enterprise constraints (security, compliance, uptime).


- Comfort operating in ambiguity and making decisions with incomplete information.


Whilst these are nice to have, our team can help you develop in the following skills :


- Contributions to open-source GenAI tooling or internal frameworks used at scale


- Experience with Supervised fine-tuning, Parameter-efficient tuning methods (LoRA, QLoRA), reinforcement learning (RLHF) and preference optimization (PPO, DPO, GRPO).


- Experience deploying LLMs at scale (Kubernetes, model serving, GPU optimization).
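
To illustrate one of the practices named above, "prompt engineering as a managed, versioned, testable artifact", here is a minimal, hypothetical sketch of what that can mean in code: prompts stored in a registry keyed by name and version, with a content checksum so edits are visible in review and a deterministic render that can be regression-tested in CI. All names here (PromptVersion, REGISTRY, get_prompt) are illustrative, not part of any specific framework.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptVersion:
    """A single immutable, versioned prompt template."""
    name: str
    version: str
    template: str

    @property
    def checksum(self) -> str:
        # A content hash makes silent template edits detectable in review/CI.
        return hashlib.sha256(self.template.encode("utf-8")).hexdigest()[:12]

    def render(self, **kwargs) -> str:
        # Deterministic rendering: same inputs always produce the same prompt.
        return self.template.format(**kwargs)


# The "registry" here is an in-memory dict; in a real system this might be
# files under version control or a database table.
REGISTRY = {
    ("summarize", "1.2.0"): PromptVersion(
        name="summarize",
        version="1.2.0",
        template="Summarize the following text in one sentence:\n{text}",
    ),
}


def get_prompt(name: str, version: str) -> PromptVersion:
    """Resolve an exact prompt version; callers never use ad-hoc strings."""
    return REGISTRY[(name, version)]


# Regression-style check: the rendered prompt contains the injected input,
# and the checksum is stable for identical template content.
prompt = get_prompt("summarize", "1.2.0")
rendered = prompt.render(text="Agentic AI platforms need guardrails.")
print(rendered)
print(prompt.checksum)
```

Pinning call sites to an exact (name, version) pair is what makes prompt regressions diagnosable: a behavior change can be traced to a specific template revision rather than an untracked string edit.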
