Posted on: 27/08/2025
Job Summary:
Key Responsibilities:
- Use libraries and frameworks such as scikit-learn, TensorFlow, PyTorch, or XGBoost for model development.
- Conduct feature engineering, data preprocessing, hyperparameter tuning, and model validation.
- Deploy ML models to production environments using APIs, containers (Docker), or cloud services like AWS SageMaker, Azure ML, or GCP AI Platform.
- Collaborate with software engineers and DevOps teams to integrate models into applications and ensure high availability and scalability.
- Work with structured and unstructured datasets from databases, APIs, or data lakes.
- Perform data cleaning, transformation, normalization, and exploratory data analysis (EDA) to extract insights and improve model accuracy.
- Monitor model performance post-deployment to detect drift or degradation.
- Build tools and processes to retrain, update, and version-control models.
- Document model behavior, performance metrics, and decisions for auditability.
- Translate complex business problems into ML/AI solutions.
- Communicate findings and recommendations to both technical and non-technical stakeholders.
- Stay updated with the latest research, tools, and best practices in AI/ML.
Required Skills & Qualifications:
- 2-5+ years of hands-on experience in AI/ML or Data Science roles.
- Experience with machine learning frameworks (e.g., TensorFlow, PyTorch, Keras, XGBoost).
- Strong understanding of data structures, algorithms, probability, statistics, and ML algorithms.
- Experience with model deployment and serving (e.g., Flask APIs, FastAPI, SageMaker, Docker, Kubernetes).
- Familiarity with SQL and/or NoSQL databases.
- Experience with cloud services (AWS, GCP, or Azure) and MLOps tools is a plus.