We're seeking a Senior AI Engineer to design and ship production-grade agentic AI systems that automate complex workflows end-to-end.
This is a hands-on, senior role with significant technical ownership.
You'll work closely with the Chief Architect, product, engineering, and domain experts to translate ambiguous, high-impact problems into reliable AI-driven user experiences.
What Success Looks Like:
- Ship AI capabilities that measurably improve user outcomes (quality, time saved, throughput).
- Build systems that are reliable by design: evals, observability, safety, and cost/latency controls from day one.
- Iterate quickly using a tight loop of instrument → evaluate → improve → deploy.
What You'll Do:
Agentic AI Feature & Workflow Development:
- Build and integrate AI-driven features using LLM APIs (OpenAI / Azure OpenAI, Anthropic, Gemini on Vertex AI).
- Design and implement tool-using agents (structured function calling, schema validation, retries, fallbacks).
- Build multi-agent workflows when appropriate (e.g., planner/worker, reviewer/critic, specialist routing) and know when a simpler architecture is better.
- Create agentic workflows such as document understanding, extraction, reasoning over evidence, task automation, and multi-step decision support.
- Own context engineering end-to-end:
a. dynamic context assembly (retrieval + state + tool outputs).
b. context budgeting and compression/summarization.
c. grounding strategies to reduce hallucinations and improve consistency.
- Implement retrieval-augmented generation (RAG) and search workflows using off-the-shelf vector stores and embedding services.
Evaluation, Quality & Iteration (Core):
- Establish evaluation frameworks for accuracy, reliability, and output quality.
- Build task-specific eval suites: golden datasets, adversarial cases, regression tests, and rubric-based scoring.
- Set up automated evaluation pipelines and release gates (CI/CD-friendly) tied to prompt/model/version changes.
- Define and monitor online metrics (e.g., task success rate, human override rate, safety flags, latency, cost) and run experiments/A-B tests where appropriate.
- Use LLM-as-judge responsibly: calibrate, validate, and pair with human labels when needed.
Engineering, Integration & Observability:
- Develop scalable backend services and APIs that incorporate AI functionality.
- Integrate AI pipelines into existing cloud, microservices, and event-driven architectures.
- Implement observability and analytics for all AI features (tracing, evaluations, prompt versioning, cost tracking). Example tooling: Langfuse (and/or OpenTelemetry-compatible stacks).
- Ensure reliability, uptime, performance, and security of AI services.
- Build internal tooling for evaluation, testing, prompt/version management, and safe deployment.
Product & Collaboration:
- Partner with product managers, designers, the Chief Architect, and domain SMEs to shape AI-first solutions.
- Rapidly prototype concepts and iterate based on user feedback and measurable eval results.
- Translate business problems into well-structured AI workflows without requiring ML model training.
- Document system behavior, known failure modes, and operational playbooks.
Governance & Safety:
- Implement guardrails, checks, and fallback logic for safe and predictable AI behavior.
- Help define and follow compliance, privacy, and responsible AI guidelines.
What You Bring:
- Deep hands-on experience building agentic LLM systems from first principles: agent loops, tool interfaces, planning/replanning, memory/state, and failure handling.
Why Join:
- Build core AI capabilities that directly impact users and product strategy.
- Work on cutting-edge, real-world agentic systems focused on applied engineering (no model training required).
- High ownership, fast iteration cycles, and strong cross-functional collaboration.
- Competitive compensation and opportunities for rapid advancement.
What Your First 90 Days Could Look Like:
Ship one production agent workflow end-to-end with:
a. tracing + observability.
b. an offline eval suite with regression gates.
c. cost/latency targets and monitoring.
d. documented failure modes and fallback path.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment.
Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.