
Job Description

Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments.

The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure.

As a Principal Member of Technical Staff (PMTS), you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices.

You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services.

Additionally, experience with retrieval-augmented generation (RAG), fine-tuning of pre-trained LLMs, AI model evaluation, data pipeline automation, and optimization of cloud-based AI deployments is desirable.


Responsibilities :


AI-Powered Software Development & API Integration :

- Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization (an illustrative sketch follows this list).

- Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and Retrieval-Augmented Generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making.
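
As a rough illustration of the two items above, here is a minimal sketch of a FastAPI endpoint that embeds an incoming question with the OpenAI API, retrieves the closest document from a toy in-memory store, and grounds a chat completion on it. The model names, the DOCS corpus, and the /ask route are illustrative assumptions, not part of this role's actual stack.

```python
# Minimal RAG-style FastAPI sketch -- assumes fastapi and openai>=1.0 are installed
# and OPENAI_API_KEY is set. The document store is a toy in-memory list.
import numpy as np
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()

# Placeholder corpus; in practice this would come from a vector database.
DOCS = ["Claims over $10k require manual review.",
        "Prior authorization is needed for imaging procedures."]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

DOC_VECS = embed(DOCS)

class Query(BaseModel):
    question: str

@app.post("/ask")
def ask(q: Query):
    # Retrieve the most similar document by cosine similarity.
    qv = embed([q.question])[0]
    sims = DOC_VECS @ qv / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(qv))
    context = DOCS[int(sims.argmax())]
    # Generation parameters (temperature, max_tokens) are tuned per use case.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.2,
        max_tokens=300,
        messages=[{"role": "system", "content": f"Answer using this context: {context}"},
                  {"role": "user", "content": q.question}],
    )
    return {"answer": chat.choices[0].message.content, "context": context}
```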


Data Engineering & AI Model Performance Optimization :

- Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake (an illustrative sketch follows this list).

- Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency through generation-parameter tuning (temperature, top-k sampling, max token limits).
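
A minimal sketch of the kind of pipeline described above: documents are loaded with Pandas (a local Parquet file stands in for data landed from S3, Snowflake, or Kafka), embedded, normalized, and indexed in FAISS for top-k retrieval. The file name, the embed_batch placeholder, and the embedding dimension are assumptions for the example.

```python
# Sketch: batch-embed documents with Pandas and index them in FAISS for top-k retrieval.
# Assumes pandas, numpy, and faiss-cpu are installed; "documents.parquet" stands in
# for data landed from S3 / Snowflake / Kafka by an upstream job.
import faiss
import numpy as np
import pandas as pd

EMBED_DIM = 384  # illustrative; depends on the embedding model actually used

def embed_batch(texts: list[str]) -> np.ndarray:
    """Placeholder embedding step -- swap in OpenAI, Hugging Face, etc."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(texts), EMBED_DIM)).astype("float32")

# 1. Load and lightly clean the source data (columns assumed: doc_id, text).
df = pd.read_parquet("documents.parquet")
df = df.dropna(subset=["text"]).drop_duplicates("doc_id")

# 2. Embed and L2-normalize so inner product equals cosine similarity.
vectors = embed_batch(df["text"].tolist())
faiss.normalize_L2(vectors)

# 3. Build the index and run a top-k query.
index = faiss.IndexFlatIP(EMBED_DIM)
index.add(vectors)

query = embed_batch(["example search query"])
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)
print(df.iloc[ids[0]][["doc_id", "text"]], scores[0])
```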


Microservices, APIs & Security :


- Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API Key Authentication (an illustrative sketch follows this list).

- Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices.
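
One possible shape for the two items above, sketched with placeholder values: a PyJWT-based bearer-token check guards the route, and a small middleware logs per-request latency. A real deployment would validate tokens issued by an OAuth/OIDC identity provider and ship logs to an observability stack; SECRET and the /models/score route are stand-ins.

```python
# Sketch: JWT-protected FastAPI route with basic latency logging.
# Assumes fastapi and PyJWT are installed; SECRET is a placeholder only.
import logging
import time

import jwt
from fastapi import Depends, FastAPI, HTTPException, Request
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

SECRET = "change-me"  # placeholder; real tokens come from an identity provider
logger = logging.getLogger("api")
logging.basicConfig(level=logging.INFO)

app = FastAPI()
bearer = HTTPBearer()

def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    # Reject requests whose bearer token is missing, malformed, or expired.
    try:
        return jwt.decode(creds.credentials, SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")

@app.middleware("http")
async def log_latency(request: Request, call_next):
    # Record path, status, and latency for every request.
    start = time.perf_counter()
    response = await call_next(request)
    logger.info("path=%s status=%s latency_ms=%.1f",
                request.url.path, response.status_code,
                (time.perf_counter() - start) * 1000)
    return response

@app.get("/models/score")
def score(user: dict = Depends(current_user)):
    # Placeholder for a real model-inference call.
    return {"user": user.get("sub"), "score": 0.87}
```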


AI & Data Engineering Collaboration :

- Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions.

- Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation.


Cross-Functional Coordination and Communication :

- Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC2).

- Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency.


Mentorship & Knowledge Sharing :


- Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions.


Education & Experience Required :

- 12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions.

- Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles.

- Bachelor's degree or equivalent in Computer Science, Engineering, Data Science, or a related field.

- Proficiency in full-stack development, with expertise in Python (preferred for AI) and Java.

Experience with structured & unstructured data :

- SQL (PostgreSQL, MySQL, SQL Server).

- NoSQL (OpenSearch, Redis, Elasticsearch).

- Vector Databases (FAISS, Pinecone, ChromaDB).

Cloud & AI Infrastructure :

- AWS : Lambda, SageMaker, ECS, S3.

- Azure : Azure OpenAI, ML Studio.

- GenAI Frameworks & Tools : OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI.


- Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization.

- Proficiency in AI model evaluation (BLEU, ROUGE, BERTScore, cosine similarity) and responsible AI deployment (a small similarity example follows this list).

- Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams.

- Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption.
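
As a small illustration of the similarity-based evaluation mentioned in the list above, the snippet below scores a candidate answer against a reference using cosine similarity over placeholder embedding vectors; the vectors and any pass/fail threshold are assumptions for the example.

```python
# Sketch: cosine-similarity check between a reference answer and a model answer,
# using placeholder embeddings (any embedding model could supply the vectors).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; in practice these come from an embedding model.
reference = np.array([0.12, 0.80, 0.33, 0.05])
candidate = np.array([0.10, 0.75, 0.40, 0.02])

score = cosine_similarity(reference, candidate)
print(f"cosine similarity = {score:.3f}")  # flag regressions below a chosen threshold
```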

