Posted on: 28/02/2026
"Women Candidates Preferred"
Role Overview:
At Schneider Electric's Digital Technology Centres (DTCs), we are building a next-generation enterprise AI delivery team and are seeking an experienced, detail-oriented AI-first Solutions Architect with deep technical expertise and strong business acumen.
You will play a critical role in architecting end-to-end AI solutions covering LLM integration, RAG pipelines, agentic frameworks, secure model hosting, inference governance, MLOps automation, and cost optimization - while also leveraging data engineering fundamentals and pipeline design as complementary skills that support robust AI delivery.
Ideal candidates will have 8-10 years of experience, bring passion and creativity to solve complex AI challenges, and thrive in a fast-moving, innovation-driven environment.
You will lead architectural decisions, guide cross-functional teams, and ensure seamless integration of AI technologies across data platforms, automation frameworks, and advanced AI workflows.
- Architect end-to-end LLM/SLM solutions including grounding, RAG, evaluation, prompt engineering, and agent-based workflows - ensuring best practices in security, compliance, and scalability.
- Design and operate multi-model architecture across Bedrock, OpenAI, Mistral, Anthropic via LLM APIM (routing, fallback, safe model selection).
- Define and implement LLM security architecture - prompt firewalls, input sanitization, output filters, jailbreak detection, responsible AI controls, and auditability.
- Establish evaluation & quality governance - define quality metrics, grounding scores, safety thresholds, and automated benchmarking (incl. regression checks) to ensure reliable and governed AI outputs.
- Define and optimize LLM cost strategy - model tiering, intelligent routing, token budgeting, semantic caching, and SLAs to balance quality, latency, and cost.
- Build agentic AI systems using LangChain, Copilot Studio, and enterprise-grade Agentic Process Automation (APA) frameworks, enabling multi-step reasoning, tool orchestration, workflow automation, and safe, scalable agent behaviors across business functions.
- Architect and automate end-to-end MLOps workflows, including ML/LLM pipelines, CI/CD for AI deployments, data lineage, feature workflows, and governance to ensure reliable and repeatable model delivery.
- Design scalable, secure model-serving infrastructure leveraging Kubernetes, IaC (Terraform/CloudFormation), and enterprise-grade monitoring, logging, and performance optimization for production AI workloads.
- Design and integrate data lakes, structured databases, and ML pipelines leveraging Databricks/AWS architecture for smooth data processing and AI workload orchestration.
- Oversee data ingestion, transformation, and governance workflows to maintain data privacy, quality, and accessibility for AI and RPA workloads.
- Drive architectural decisions, review technical approaches, and provide leadership to development teams on experimentation, prototyping, and solution delivery.
- Collaborate with business stakeholders to prioritize AI use cases and ensure architectural alignment with strategic objectives.
- Stay current on emerging AI technologies, especially in Generative AI and LLM domains, and drive integration of innovative solutions with existing platforms.
Required Skills & Qualifications:
Technical Experience:
- 8-10 years of hands-on experience in solution architecture with strong expertise in cloud-native AI systems, scalable design patterns, and enterprise integrations.
- Hands-on experience with LLMs and AI platform integration using AWS Bedrock and multi-provider APIs (Mistral, OpenAI, Anthropic) and modern orchestration frameworks.
- Strong proficiency with the LangChain ecosystem (LangChain, LangGraph, LangSmith) for agent workflows, orchestration, evaluation, and observability.
- Deep expertise in core cloud architecture and AWS services: Serverless, Microservices, S3, Redshift, SageMaker, Bedrock, Lambda, IAM.
- Proven experience architecting data lakes, ETL pipelines, and managing data engineering workflows (AWS S3, Redshift, Databricks).
- Proficiency in Python programming for building AI systems - API integrations, orchestration logic, automation scripts, evaluation harnesses, and SDK-based interactions with cloud and LLM platforms.
- Experience designing and deploying scalable ML/LLM pipelines, with strong understanding of MLOps best practices (CI/CD, automated pipelines, containerization, environment management).
- Hands-on experience with Databricks as part of ML/AI workflows and data/feature-processing ecosystems is a plus.
- Ability to architect scalable and resilient workloads using Kubernetes, container orchestration, and Infrastructure-as-Code (Terraform/CloudFormation).
- Solid grounding in observability - monitoring, tracing, logging, performance debugging, and operational excellence for distributed AI workloads.
- Experience in system integration and API design, connecting AI services with enterprise platforms and implementing security best practices for identity, privacy, and compliance.
- Strong business acumen with the ability to translate technical solutions into business value and prioritize use cases accordingly.
- Excellent collaboration, leadership, and communication skills to guide cross-functional teams through complex AI initiatives and influence architectural direction.
Consulting Experience:
- Proven track record in an IT consulting environment, engaging with large enterprises and MNCs in strategic data solutioning projects.
- Strong stakeholder management, business needs assessment, and change management skills.
Leadership & Soft Skills:
- Experience managing and mentoring small teams, developing technical skills in AI & Advanced Analytics domains.
- Ability to influence and align cross-functional teams and stakeholders.
- Excellent communication, documentation, and presentation skills.
- Strong problem-solving, analytical thinking, and strategic vision.
Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related quantitative field.
Preferred Certifications:
- AWS Certified Solutions Architect - Professional
- AWS Certified Machine Learning - Specialty
- Certified Kubernetes Administrator (CKA) or equivalent
- Terraform Associate or equivalent Infrastructure as Code certifications
- Certified Artificial Intelligence Practitioner (CAIP) or similar AI certifications
- Databricks Generative AI Engineer a plus
- Databricks Data Engineer Associate a plus
- Relevant RPA certifications (UiPath, Blue Prism) a plus
What We're Looking For:
- Self-starters who are highly motivated, ambitious, and eager to challenge the status quo.
- Innovative thinkers capable of solving complex architectural and system design challenges.
- Effective leaders who collaborate openly, share knowledge freely, and elevate team performance.
- Results-driven professionals who stay current on emerging AI trends and market developments, proactively upskilling and innovating.
- Straightforward, results-oriented individuals who value impact and accountability.
- Adaptable experts who stay on top of fast-evolving AI technologies and practices.
Why Join Us?
- Opportunity to shape and build an AI product portfolio that delivers meaningful business impact for SE Regions.
- Work alongside a motivated and innovative team that values learning, ownership, and excellence.
- Thrive in a culture that challenges the status quo and embraces diverse perspectives.