hirist

Python Backend Engineer - LLM Applications

CRUX Consulting Services
4 - 6 Years
Chennai

Posted on: 24/02/2026

Job Description


We are seeking a seasoned Python Backend Engineer to join our AI Engineering team. This role sits at the intersection of robust backend architecture and cutting-edge Generative AI. You will move beyond simple chatbots to build complex, stateful AI Agents with LangGraph, and make them production-ready through rigorous evaluation with Arize AI.

If you have a "data-first" mindset and experience scaling LLM applications from prototype to production, we want to talk to you.

Core Responsibilities :

- AI Orchestration : Design and implement complex, cyclic agent workflows using LangGraph and LangChain.

- Backend Architecture : Develop and maintain high-performance, asynchronous APIs using FastAPI (preferred), Flask, or Django.

- RAG & Data Strategy : Build and optimize Retrieval-Augmented Generation (RAG) pipelines, including advanced chunking, embedding strategies, and metadata filtering.

- Data Engineering : Construct robust ETL pipelines to process and ingest unstructured data into Vector Databases (Pinecone, Milvus, or Weaviate).

- LLM Ops & Evaluation : Implement "Golden Datasets" and automated monitoring using Arize AI or Phoenix to track faithfulness, relevance, and latency.

- Scalability : Containerize applications using Docker and deploy scalable microservices on cloud environments (AWS/Azure).
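The "cyclic agent workflow" mentioned above can be illustrated with a minimal plain-Python state machine. This is a toy sketch of the pattern LangGraph formalizes (state flowing through nodes, with a conditional edge looping back), not LangGraph code itself; the node names and `max_steps` guard are invented for illustration:

```python
# Toy cyclic agent workflow: a state dict flows through nodes, and a
# conditional "edge" loops back to the draft step until a check passes.
# LangGraph provides the same pattern as StateGraph nodes/edges with
# persistence; this sketch only shows the control flow.

def draft(state):
    # "Generate" node: append one more word to the running answer.
    state["answer"] = (state["answer"] + " word").strip()
    return state

def review(state):
    # Conditional edge: mark done once the answer is long enough.
    state["done"] = len(state["answer"].split()) >= 3
    return state

def run_graph(state, max_steps=10):
    # max_steps bounds the cycle so a faulty condition cannot loop forever.
    for _ in range(max_steps):
        state = review(draft(state))
        if state["done"]:
            break
    return state

result = run_graph({"answer": "", "done": False})
print(result["answer"])  # the loop ran three times
```

In a real LangGraph build, `draft` and `review` would be graph nodes, the length check a conditional edge, and the state persisted via a checkpointer.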

Required Technical Skills :

- Python Mastery : 4+ years of professional experience with a focus on asynchronous programming and backend design patterns.

- Generative AI Stack : Proven track record of deploying LLM-based applications (OpenAI, Anthropic, or Llama).

- Agentic Frameworks : Hands-on experience with LangGraph (State management, conditional edges, and persistence) is a must.

- Vector Infrastructure : Proficiency in working with vector stores (FAISS, Pinecone, Azure AI Search) and understanding semantic search.

- Evaluation & Observability : Experience with Arize AI, Ragas, or similar frameworks to quantify LLM performance.

- DevOps Foundations : Strong knowledge of Docker, RESTful API design, and SQL/NoSQL databases.
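The semantic-search idea behind the vector stores listed above reduces to nearest-neighbour lookup by cosine similarity. A minimal sketch with hand-made toy vectors (a real system would get these from an embedding model and index them in FAISS, Pinecone, or Azure AI Search):

```python
import math

# Toy semantic search: rank documents by cosine similarity between
# "embedding" vectors. The vectors here are invented for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "invoice policy": [0.9, 0.1, 0.0],
    "refund policy": [0.5, 0.5, 0.0],
    "holiday schedule": [0.0, 0.2, 0.9],
}

query = [0.85, 0.2, 0.05]  # pretend embedding of a billing question
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # the document whose vector points the same way as the query
```

Vector databases do exactly this ranking, but over millions of vectors with approximate-nearest-neighbour indexes and metadata filters.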

Bonus Points (Good to Have) :

- Experience with LLM Caching strategies to reduce latency and costs.

- Familiarity with fine-tuning open-source models.

- Cloud-native development on Azure AI Studio or AWS Bedrock.
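The LLM caching strategy mentioned in the bonus list can be as simple as an exact-match cache keyed on a hash of the model and prompt, so repeated prompts skip the expensive model call. A hedged sketch (`fake_llm` is a stand-in for a real provider call; production setups often add TTLs or embedding-based "semantic" keys):

```python
import hashlib

# Toy exact-match LLM cache: key responses by a hash of (model, prompt).
# Repeated prompts are served from the cache, cutting latency and cost.

_cache = {}
calls = 0  # counts how often the "model" actually runs


def fake_llm(prompt):
    # Hypothetical stand-in for a real LLM API call.
    global calls
    calls += 1
    return prompt.upper()


def cached_complete(model, prompt):
    key = hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = fake_llm(prompt)
    return _cache[key]


print(cached_complete("toy", "hello"))  # model is called once
print(cached_complete("toy", "hello"))  # served from cache, no new call
```

Note the trade-off: exact-match caching only helps with literally repeated prompts; paraphrases need a semantic cache keyed on embeddings.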

