Posted on: 21/01/2026
Description :
Title - Senior MLOps / DataOps Engineer
Exp - 3 to 8 years
Loc - Bangalore (Hybrid)
NP - Immediate to 20 days
Skills : Python, AWS, AI, ML, MLOps, NLP/LLM, GenAI, RAG
About the role :
We are looking for a Senior MLOps / DataOps Engineer to productionise and operate machine learning and data pipelines used in live public-sector forecasting systems. The role sits at the interface of research and deployment: you will work closely with researchers and data scientists to turn research-grade models and datasets into reliable, reusable production pipelines, and you will be the primary engineering owner of those pipelines. This is a high-ownership role in a small, lean team, requiring practical judgment, independence, pragmatism, and the ability to deliver reliable systems alongside ongoing translational research work.
Key responsibilities :
- Design, implement, and operate end-to-end ML and data pipelines on AWS (EC2, S3).
- Productionise research and forecasting models into robust, repeatable pipelines.
- Work with evolving, domain-driven datasets (e.g. health, climate, public-sector data).
- Implement and maintain pipelines using workflow orchestration frameworks (e.g. Mage, Airflow, Prefect), including externally managed orchestrators.
- Design and enforce data standards and shared schemas used across multiple models and pipelines.
- Ensure pipeline reliability through validation, monitoring, and sensible failure handling.
- Collaborate closely with researchers and data scientists who have limited engineering backgrounds, enabling them to work safely within production systems.
- Document pipelines, assumptions, and workflows, and provide walkthroughs or training as needed.
- Build reusable proof-of-concept and pipeline templates that can be extended across projects.
- Work with external collaborators and partners to integrate data, models, or tools into existing pipelines.
- Mentor junior data scientists on best practices in data handling, pipeline design, and reproducibility.
Required qualifications and experience :
- 3 to 8 years of experience in MLOps, ML Engineering, or Data Engineering roles.
- Demonstrated experience in productionising ML or data pipelines on AWS.
- Experience owning pipelines end-to-end, from data ingestion to outputs and failure handling.
- Experience working with messy, evolving datasets driven by research or domain constraints.
- Hands-on experience with at least one workflow orchestration framework (e.g. Mage, Airflow, Prefect).
- Strong proficiency in Python.
- Experience with data standardisation, schema design, and validation across multiple datasets or models.
- Comfortable writing clear documentation and supporting non-engineering users.
Nice to have :
- Open-source contributions or public repositories demonstrating ML, data engineering, or MLOps work.
- Experience with ML forecasting or time-series models.
- Familiarity with CI/CD practices for ML systems.
- Experience with ML experiment or model tracking tools (e.g. MLflow).
- Prior work in public health, climate, or public-sector data systems.
What we offer :
- Opportunity to build and operate real-world ML systems used in public-sector decision-making at the state and national levels.
- A high-ownership role in a small, capable, translational research team.
- Direct collaboration with researchers, policymakers, and external partners.
- Space to shape best practices, templates, and technical direction from the ground up.
Posted in : Data Engineering
Functional Area : ML / DL Engineering
Job Code : 1604515