Posted on: 01/12/2025
Description:
- Manage and optimize Databricks platform (Workspaces, Jobs, Unity Catalog, Delta Lake)
- Implement the full ML lifecycle: model training, versioning, deployment, monitoring, and retraining
- Track, manage, and govern ML experiments & models via MLflow
- Develop scalable data/ML pipelines with Python (pandas, scikit-learn, PyTorch/TensorFlow), PySpark & SQL
- Deploy and manage solutions on AWS (specifically SageMaker); knowledge of Docker/Kubernetes required
- Design and drive deployment strategies (A/B testing, blue-green & canary deployments)
- Create CI/CD workflows for ML using Jenkins/GitHub Actions/GitLab CI
- Monitor data quality, performance, and drift using Databricks Lakehouse Monitoring; integrate SHAP/LIME for explainability
- Automate end-to-end processes: data validation, feature generation, model building & deployment
- Collaborate across Data Science, Engineering, DevOps, and Business teams
- Mentor junior team members, create clear documentation, and contribute to standard operating procedures
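The drift-monitoring duty above is commonly backed by a statistic such as the Population Stability Index (PSI). The sketch below is a minimal pure-Python illustration of that check, not Databricks Lakehouse Monitoring itself; the 0.1 alert threshold is a conventional rule of thumb, not a product default.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live
    sample. Values near 0 mean the distributions match; > 0.1 is a
    common (conventional, not universal) signal of drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(sample, i):
        # Fraction of the sample falling in bin i; the last bin is
        # closed on the right so `hi` itself is counted.
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(1 for x in sample
                if left <= x < right or (i == bins - 1 and x == hi))
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

# Identical samples score ~0; a shifted live sample scores well above 0.1.
baseline = [i / 100 for i in range(100)]
print(psi(baseline, baseline))
print(psi(baseline, [x + 0.5 for x in baseline]))
```

In practice this comparison runs on feature columns of the serving data against the training baseline, with alerts wired into the CI/CD and retraining automation described above.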
Mandatory Skills:
- Databricks (core)
- MLflow, End-to-end MLOps & ML lifecycle
- Python, PySpark, AWS SageMaker, Docker/Kubernetes, CI/CD (Jenkins, GitHub Actions, GitLab CI)
Requirements:
- 4 to 6 years of experience in MLOps, Data Engineering, or AI/ML roles
- Strong background in building, deploying & maintaining ML models at scale in the cloud
Location: Pune, Bangalore, Noida, Gurgaon
Looking for Immediate Joiners
Posted in: DevOps / SRE
Functional Area: DevOps / Cloud
Job Code: 1583541