Posted on: 08/09/2025
Job Description:
Key Responsibilities:
- Design and implement ML/NLP-driven solutions, focusing on LLMs, ASR/STT, and modern AI frameworks.
- Develop, optimize, and maintain production-grade ML workflows and pipelines.
- Work with APIs, system design, and integrations for scalable ML solutions.
- Experiment with LLM prompt engineering, chaining, fine-tuning, and RAG pipelines.
- Collaborate with data scientists and engineers to build and deploy ML models.
- Leverage tools such as Hugging Face Transformers, LangChain, and OpenAI APIs for model development.
- Deploy and monitor ML systems on cloud platforms (AWS/GCP).
- Maintain code quality using Git, Docker, and version-control best practices.
- Troubleshoot, profile, and optimize performance/latency of ML inference systems.
Requirements:
- 5-6 years of hands-on software engineering experience, ideally in ML/NLP-heavy roles.
- Strong proficiency in Python, with expertise in libraries such as NumPy and Pandas.
- Practical experience with LLMs (GPT, Llama, Mistral), prompt engineering, chaining, or fine-tuning.
- Exposure to ASR/STT technologies (Whisper, DeepSpeech, Kaldi).
- Strong understanding of system design, API integrations, and ML workflows in production.
- Experience with Hugging Face Transformers, LangChain, and OpenAI APIs.
- Knowledge of AWS/GCP and model deployment basics.
- Familiarity with Git, Docker, and collaborative version control workflows.
- Bonus: Background in competitive programming or strong data structures and algorithms (DSA) skills.
Nice to Have:
- Experience with speech/audio data pipelines or ASR model fine-tuning.
- Knowledge of RAG pipelines, embeddings, and vector databases.
- Skills in performance profiling, latency debugging, and scaling inference systems.