Posted on: 21/04/2026
Description :
- 8+ years of experience with Python, SQL, PySpark, and at least one GenAI framework such as LangGraph, OpenAI Agents SDK, AutoGen, CrewAI, etc.
Duties & Responsibilities :
- Translate business requirements into scalable and well documented ML pipelines and AI solutions using Databricks, Azure AI, and Snowflake.
- Define and drive the strategic roadmap for GenAI and agentic AI adoption across business units.
- Lead architecture design for multi-agent systems using modular frameworks like LangChain and Azure AI Agent Service.
- Oversee development and deployment of AI agents for tasks such as customer outreach, research, and workflow automation.
- Establish governance frameworks for AI observability, access control, and guardrail enforcement using Unity Catalog and Azure Guardrails.
- Mentor and guide engineering teams on best practices in GenAI, MLOps, and agentic design patterns.
- Collaborate with product, data, and engineering leadership to align AI initiatives with business goals.
- Evaluate emerging technologies and integrate them into the enterprise AI stack (e.g., Semantic Kernel, Foundry SDK, etc.).
Basic Qualifications :
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field; a Master's degree or higher in Computer Science, AI, or a related field is preferred.
- 8+ years of experience in AI/ML engineering, with 3+ years in leadership roles.
- Deep expertise in GenAI, agentic architectures, and enterprise AI deployment.
- Experience leading cross-functional teams and managing large-scale AI programs.
- Strong track record in AI governance, security, and compliance.
Preferred Qualifications :
- Languages : Python, SQL, PySpark
- GenAI & Agentic Tools : LangChain, LangGraph, OpenAI SDK, Gemini, Azure AI Agent Service, Semantic Kernel
- Governance & Observability : Unity Catalog, Azure Guardrails, OpenTelemetry, Databricks AI Gateway
- Cloud & Data Platforms : Azure AI, Databricks, Snowflake, GCP Vertex AI
- Understanding of AI governance, including model explainability, fairness, and security (e.g., prompt injection, data leakage mitigation).