
Ardberg AI - Artificial Intelligence Engineer - RAG/LLM

ArdbergAI
5 - 8 Years
Hyderabad

Posted on: 14/04/2026

Job Description

About Ardberg AI:


Ardberg AI is a next-generation AI consulting and services firm helping organizations transform the way they work through the intelligent application of artificial intelligence. We partner with enterprises across industries to build and deploy production-grade AI systems - from intelligent automation and RAG pipelines to fine-tuned models and AI-native workflows.

We believe the future of software development is human-AI collaboration. At Ardberg AI, we are at the forefront of vibe coding - a discipline that leverages AI-assisted development environments to dramatically accelerate how engineers ideate, prototype, and ship solutions. Our team is global in perspective, diverse in thought, and relentless in our commitment to engineering excellence.

The Role:


We are looking for a seasoned AI Engineer who thrives at the intersection of AI research and pragmatic engineering. You will work alongside our consulting teams to design, build, and deploy cutting-edge AI solutions for clients across sectors. You bring deep technical fluency in large language models, retrieval-augmented generation, and machine learning - and you are native to AI-assisted development environments like Cursor, GitHub Copilot, and Replit.

As a practitioner of vibe coding, you know how to move fast without breaking things: using AI as a creative engineering partner, not just a code autocomplete tool. You will mentor colleagues, shape best practices, and help Ardberg AI stay ahead of the curve.

Key Responsibilities:


- Design and implement end-to-end AI systems including RAG pipelines, vector search infrastructure, and LLM-powered applications for enterprise clients.


- Champion vibe coding practices by leveraging AI-assisted development tools (Cursor, GitHub Copilot, Replit, etc.) to accelerate delivery without compromising code quality.

- Lead the fine-tuning, evaluation, and deployment of large language models (e.g., GPT, Claude, Llama, Mistral) tailored to client-specific use cases.

- Build and maintain prompt engineering frameworks, evaluation harnesses, and guardrail systems for production LLM deployments.

- Architect scalable vector database solutions using tools such as Pinecone, Weaviate, Qdrant, or pgvector to power semantic search and knowledge retrieval systems.

- Collaborate with cross-functional consulting teams to translate business requirements into technical AI architectures and working prototypes.

- Conduct code reviews, enforce engineering best practices, and contribute to the internal AI engineering playbook.

- Stay current with rapidly evolving AI research and tooling; evaluate and integrate new techniques and libraries as appropriate.

- Mentor junior AI engineers and contribute to a culture of continuous learning and knowledge sharing.

Required Qualifications:


- 5 - 8 years of professional software engineering experience, including at least 3 years focused on AI/ML systems.


- Hands-on experience with AI-assisted development tools (Cursor, GitHub Copilot, Replit, Amazon CodeWhisperer, or equivalent); you do not just use them, you master them.

- Deep expertise in large language models: prompt engineering, evaluation, fine-tuning, and API integration (OpenAI, Anthropic, Hugging Face, Cohere, etc.).

- Strong experience with Retrieval-Augmented Generation (RAG) architectures, embedding models, and vector databases (Pinecone, Weaviate, Qdrant, Chroma, pgvector).

- Proficiency in Python for ML/AI engineering; familiarity with frameworks such as LangChain, LlamaIndex, Haystack, or similar orchestration libraries.

- Experience with ML fine-tuning workflows including data preparation, PEFT/LoRA, RLHF, and model evaluation pipelines.

- Solid understanding of cloud AI services and deployment (AWS SageMaker, Azure AI, Google Vertex AI, or equivalent).

- Excellent communication skills - able to explain complex AI systems to technical and non-technical stakeholders alike.

Nice to Have:


- Experience working in a consulting or client-services environment.

- Familiarity with agentic AI frameworks (AutoGen, LangGraph) and multi-agent orchestration patterns.

- Prior work with multimodal models (vision + language) or speech AI systems.

- Contributions to open-source AI projects or published research.

- Knowledge of MLOps tools and practices (MLflow, Weights & Biases, DVC, Kubeflow).

- Exposure to edge AI deployment or on-premise LLM hosting (Ollama, vLLM, TGI).

What We Offer:


- Competitive compensation package benchmarked against global technology market standards.

- Exposure to cutting-edge AI projects across diverse industries and geographies.

- A culture that genuinely embraces vibe coding and AI-native workflows - we practice what we preach.

- Access to a curated library of AI tools, compute resources, and research subscriptions.

- Structured mentorship, learning stipends, and conference attendance support.

- A collaborative, inclusive, and intellectually stimulating team environment.

- Opportunity to shape the AI engineering practice of a fast-growing firm.
