Posted on: 29/10/2025
Description:
- End-to-End ML Lifecycle: Own the full pipeline, encompassing data ingestion, feature engineering, training, validation, and serving.
- Deployment & Orchestration: Lead the deployment of models into production using containerization (Docker) and container orchestration (Kubernetes).
- Utilize orchestrators such as Airflow or Vertex AI Pipelines for scheduled and event-driven execution (a minimal DAG sketch follows this list).
- Distributed Systems: Work effectively with distributed systems and big data technologies (Spark) to handle large-scale data processing and model training efficiently.
- NLP & Model Serving: Build and deploy robust Natural Language Processing (NLP) solutions, and implement low-latency model serving layers using modern frameworks like FastAPI.
- LLM & Vector Integration: Evaluate and integrate emerging technologies, including deploying models based on LLM architectures and managing high-scale data retrieval using Vector Databases.
- MLOps & Automation: Ensure models are production-ready by applying MLOps principles covering continuous delivery, monitoring, and system reliability.
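To give a concrete flavour of the scheduled pipeline execution described above, here is a minimal sketch of a daily training DAG using Airflow's TaskFlow API. The dag_id, schedule, and placeholder task bodies are illustrative assumptions, not details from this posting.

```python
# Hypothetical Airflow DAG sketching a scheduled ingest -> train -> validate pipeline.
# Task bodies are placeholders; dag_id, schedule, and return values are assumptions.
from datetime import datetime
from airflow.decorators import dag, task

@dag(dag_id="ml_pipeline_sketch", schedule="@daily",
     start_date=datetime(2025, 1, 1), catchup=False)
def ml_pipeline_sketch():
    @task
    def ingest() -> str:
        # Placeholder: pull the latest raw data and return its location.
        return "raw-data-location"

    @task
    def train(data_path: str) -> str:
        # Placeholder: train a model on the ingested data and return an artifact id.
        return f"model-trained-on-{data_path}"

    @task
    def validate(model_id: str) -> None:
        # Placeholder: run validation checks before the model is promoted.
        print(f"validated {model_id}")

    validate(train(ingest()))

ml_pipeline_sketch()
```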
Required Skills & Technical Expertise:
- SQL: Expertise in advanced SQL.
- ML Frameworks: Hands-on experience with major frameworks like TensorFlow and PyTorch.
- Backend & Serving: Experience with REST API design and implementation using frameworks like FastAPI (a minimal sketch follows this list).
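As a rough illustration of the REST serving work mentioned above, the sketch below exposes a single prediction endpoint with FastAPI. The request/response schemas, the fake_model placeholder, and the module name serving_sketch are assumptions for illustration only.

```python
# Hypothetical low-latency serving layer: a FastAPI app with a /predict endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-sketch")

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

def fake_model(text: str) -> tuple[str, float]:
    # Stand-in for a real NLP model; returns a dummy label and confidence.
    return ("positive" if "good" in text.lower() else "negative", 0.5)

@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest) -> PredictResponse:
    label, score = fake_model(request.text)
    return PredictResponse(label=label, score=score)

# Run locally with: uvicorn serving_sketch:app --reload
```

In a production deployment the model would be loaded once at startup (for example from a model registry) rather than re-created per request.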
Infrastructure & MLOps:
- Strong knowledge of CI/CD pipelines, Docker, and Kubernetes.
- Practical experience with Infrastructure as Code (IaC) tools, particularly Terraform.
- Expertise in working with Spark and orchestrators (Airflow/Vertex AI).
- Cutting-Edge Exposure: Familiarity with Vector Databases and LLM-based architectures is highly valued (see the retrieval sketch below).
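To illustrate the retrieval pattern that a vector database serves at scale, the sketch below runs a brute-force cosine-similarity lookup in NumPy. The embedding dimension and random vectors are assumptions; a real system would use embeddings from an actual encoder and delegate the search to a dedicated vector store.

```python
# Toy sketch of nearest-neighbour retrieval, the core operation of a vector database.
import numpy as np

rng = np.random.default_rng(0)
corpus_embeddings = rng.normal(size=(1000, 384)).astype(np.float32)  # assumed 384-dim vectors
query_embedding = rng.normal(size=(384,)).astype(np.float32)

# Cosine similarity between the query and every stored vector.
corpus_norm = corpus_embeddings / np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)
query_norm = query_embedding / np.linalg.norm(query_embedding)
scores = corpus_norm @ query_norm

top_k = 5
top_indices = np.argsort(scores)[::-1][:top_k]
print("top matches:", top_indices, scores[top_indices])
```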