hirist

Job Description

Client: AI-native wearable device startup.

Backend Engineer - AI Platform (Immediate Joiner)

The Core Challenge:

AI systems generate massive amounts of data that must be processed, routed, and served with microsecond precision. Traditional backends break under AI workloads. We're building infrastructure that treats AI as a first-class citizen, where every service, every database query, and every message queue is designed for the unique demands of machine learning pipelines.

What You'll Engineer:

You'll design and implement the distributed systems that power AI applications. This means building event-driven architectures that can process millions of inference requests, vector similarity searches, and real-time model updates without breaking.
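As a hedged sketch of the event-driven shape described above (the names `InferenceRequest` and `worker` are illustrative, not from this posting), such a pipeline often reduces to producers feeding an `asyncio.Queue` that a pool of workers drains concurrently:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class InferenceRequest:          # illustrative event type
    request_id: int
    payload: str

async def worker(name: str, queue: asyncio.Queue, results: list) -> None:
    # Each worker drains the shared queue; variable model latency is
    # absorbed by the queue instead of blocking producers.
    while True:
        req = await queue.get()
        await asyncio.sleep(0)   # stand-in for an actual inference call
        results.append((name, req.request_id))
        queue.task_done()

async def main(n_requests: int = 8, n_workers: int = 3) -> list:
    queue: asyncio.Queue = asyncio.Queue(maxsize=100)
    results: list = []
    workers = [asyncio.create_task(worker(f"w{i}", queue, results))
               for i in range(n_workers)]
    for i in range(n_requests):
        await queue.put(InferenceRequest(i, f"event-{i}"))
    await queue.join()           # wait until every event is processed
    for w in workers:
        w.cancel()
    return results

results = asyncio.run(main())
```

The bounded queue gives backpressure for free: when workers fall behind, `queue.put` suspends producers rather than letting the backlog grow without limit.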

Infrastructure You'll Build:

- Microservices in Python (FastAPI/Django/Flask) that handle AI model orchestration

- Event streaming systems using Kafka for real-time data pipeline processing

- Database architectures: SQL for transactions, NoSQL for scale, vector DBs for embeddings

- Search and retrieval systems that return relevant results in <100ms

- Kubernetes deployments with auto-scaling, monitoring, and zero-downtime updates

- CI/CD pipelines that can deploy ML models and traditional services seamlessly

Technical Stack & Requirements:

Production Systems:

- 2-5 years building backend systems that handle real user traffic

- Python expertise: async programming, concurrency patterns, performance optimization

- Distributed systems: event sourcing, saga patterns, eventual consistency

- Database design: indexing strategies, query optimization, data modeling

- Cloud infrastructure: AWS/GCP/Azure, container orchestration, observability
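Of the distributed-systems patterns listed above, the saga pattern is easy to sketch in a few lines. This is a minimal illustration, not a production implementation; the step names (`reserve_stock`, `charge_card`, and so on) are invented for the example:

```python
def run_saga(steps):
    # Run (action, compensation) pairs in order; if any action fails,
    # undo the already-completed steps in reverse order. This is the
    # saga pattern's core idea for eventual consistency across services.
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        return False
    return True

log = []

def reserve_stock():  log.append("reserve")
def release_stock():  log.append("release")
def charge_card():    raise RuntimeError("payment declined")
def refund_card():    log.append("refund")

ok = run_saga([(reserve_stock, release_stock),
               (charge_card, refund_card)])
```

Because `charge_card` fails before its compensation is registered, only the completed `reserve_stock` step is rolled back, leaving every service in a consistent state.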

AI Platform Specifics:

- Understanding of ML model serving: batch vs streaming inference

- Experience with vector databases (Pinecone, Weaviate, Chroma) or search systems

- Knowledge of data streaming patterns for real-time ML feature generation

- Familiarity with ML orchestration tools and workflow management
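At their core, the vector databases named above answer one query: which stored embeddings are most similar to this one? A brute-force version fits in a few lines of plain Python (the documents and vectors below are made up for illustration); real systems swap the linear scan for an approximate index such as HNSW to meet latency budgets:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors over the product
    # of their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, embeddings, k=2):
    # Brute-force nearest-neighbour scan over all stored embeddings.
    scored = sorted(embeddings.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 1.0, 0.0],
}
nearest = top_k([1.0, 0.0, 0.0], docs)  # → ['doc-a', 'doc-b']
```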

The Technical Reality:

AI workloads are different. They require:

- Asynchronous processing pipelines that can handle variable latency

- Database schemas that can evolve as models change

- Caching strategies for computationally expensive operations

- Monitoring systems that track both infrastructure and model performance
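The simplest caching strategy for a computationally expensive, deterministic operation is memoisation. As a sketch only (the `embed` function is a stand-in, not a real model call), the standard library's `functools.lru_cache` covers the single-process case:

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=1024)
def embed(text: str) -> tuple:
    # Stand-in for an expensive operation such as embedding generation;
    # lru_cache memoises the result keyed by the argument, so repeat
    # calls with the same text skip recomputation.
    calls["count"] += 1
    return tuple(ord(c) for c in text)

embed("hello")
embed("hello")   # served from cache, no recomputation
embed("world")
```

Across processes or machines, the same idea typically moves into a shared store such as Redis, with an eviction policy and TTLs chosen to match how often the underlying models change.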

