Posted on: 30/04/2026



Key Responsibilities:
- Design, develop, train, and deploy machine learning and deep learning models for real-world business use cases
- Build end-to-end ML pipelines including data ingestion, feature engineering, model training, evaluation, and deployment
- Implement and manage ML workflows using AWS services such as SageMaker, S3, EC2, Lambda, Glue, and Step Functions
- Deploy models as scalable APIs using Docker, REST endpoints, and CI/CD pipelines
- Monitor model performance and data drift in production, and define retraining strategies
- Collaborate with data engineering teams to ensure high-quality, reliable data pipelines
- Optimize model performance, cost, and scalability on AWS infrastructure
- Ensure solutions meet security, compliance, and governance standards
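For illustration only, the pipeline stages named above (data ingestion, feature engineering, model training, evaluation) can be sketched in a few lines of scikit-learn. The data here is synthetic and every name is hypothetical; a real pipeline would ingest from a production source such as S3:

```python
# Minimal sketch of an end-to-end ML pipeline (synthetic data, illustrative only).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# "Data ingestion": a synthetic stand-in for a real data source (e.g. S3/Glue)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Feature engineering + training captured as one reproducible pipeline object
pipe = Pipeline([
    ("scale", StandardScaler()),       # feature engineering step
    ("model", LogisticRegression()),   # model training step
])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pipe.fit(X_train, y_train)

# Evaluation on a holdout set before any deployment decision
acc = accuracy_score(y_test, pipe.predict(X_test))
print(f"holdout accuracy: {acc:.2f}")
```

Packaging the steps as a single `Pipeline` is what makes the later stages (deployment, monitoring, retraining) tractable: the same object that was evaluated is the one that gets serialized and served.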
Required Skills & Experience:
- 3+ years of experience in Machine Learning / AI development
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch)
- Hands-on experience with the AWS ML stack, including:
1. Amazon SageMaker (training, endpoints, pipelines)
2. S3, EC2, IAM, Lambda
3. Glue, Athena, Redshift (nice to have)
- Experience with model deployment and MLOps practices
- Solid understanding of supervised, unsupervised, and deep learning techniques
- Experience with REST APIs, Docker, and Git-based version control
- Strong problem-solving and communication skills
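As a sketch of the Docker/REST deployment skills listed above, a containerized model server might be packaged like this. The file names (`app.py`, `model.joblib`) and the FastAPI/uvicorn stack are assumptions for illustration, not a prescribed setup:

```dockerfile
# Hypothetical Dockerfile for serving a trained model as a REST endpoint.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
# requirements.txt is assumed to list e.g. fastapi, uvicorn, scikit-learn, joblib
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py model.joblib ./
EXPOSE 8080
# app.py is assumed to expose a FastAPI instance named "app"
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]
```

An image built from a file like this is what a CI/CD pipeline would push to a registry and deploy behind a scalable endpoint (e.g. ECS or SageMaker).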