Ecolab

Lead AI Engineer

IND - Karnataka - Bangalore - EDC | Full time

ROLE SUMMARY

As a Lead AI Engineer, you will own the architecture and delivery of GenAI-based systems that integrate large language models (LLMs), multi-agent workflows, and embedding-powered retrieval solutions. You will guide cross-functional pods, define engineering standards, and drive innovation through scalable, production-grade intelligent applications. You will lead a team of associates, with both functional and administrative responsibilities.

KEY RESPONSIBILITIES

  • Architect enterprise-grade GenAI systems using modular LLM APIs, agent orchestration frameworks, and embedding pipelines
  • Design and implement autonomous agent workflows with context management, multi-agent coordination, and task delegation
  • Optimize performance, latency, and accuracy through experimentation with prompt strategies, retrieval layers, and caching logic
  • Lead solution reviews, enforce prompt safety and governance, and ensure alignment with security protocols
  • Collaborate with platform, product, and engineering leads to define reusable patterns and scalable AI capabilities
  • Guide engineering pods on GenAI design principles, system reliability, and prompt lifecycle management
  • Build and maintain reusable assets (SDKs, templates, shared agent logic) to accelerate delivery across teams
  • Stay up to date with advancements in LLM tooling, orchestration abstractions, and prompt optimization techniques

REQUIRED QUALIFICATIONS

  • 6 to 8+ years of experience in software, AI, or ML engineering roles, including significant experience designing, delivering, and operating production-grade GenAI or agentic AI applications
  • Proven experience leading the technical delivery of LLM-powered products or agent-based solutions, including solution design, engineering guidance, and operational readiness
  • Strong technical foundation in Python and modern backend engineering patterns, with practical experience building AI-enabled application services and APIs
  • Hands-on experience with Azure OpenAI, Azure AI Studio, Semantic Kernel, LangChain, AutoGen, or equivalent platforms and orchestration frameworks, including real-world use of LLM APIs, prompt workflows, tool calling, and agent coordination
  • Strong experience designing and implementing retrieval-augmented generation (RAG) and vector-based patterns using platforms such as Azure AI Search, Pinecone, Weaviate, FAISS, or equivalent
  • Experience building and deploying cloud-native AI services using technologies such as Azure Functions, Azure Container Apps, FastAPI, Docker, Azure DevOps, GitHub, or equivalent engineering and deployment platforms
  • Solid understanding of CI/CD, containerization, automated testing, and production deployment practices for AI-driven systems
  • Practical experience with observability and operational tooling such as Application Insights, OpenTelemetry, Azure Monitor, Datadog, New Relic, or equivalent, including monitoring of reliability, latency, and cost
  • Exposure to Model Context Protocol (MCP), agent-to-agent (A2A) interaction patterns, or similar context-sharing and distributed agent communication approaches
  • Strong ownership mindset across the full SDLC, including design, build, deployment, support, reliability improvement, and long-term maintainability
  • Proven ability to raise engineering quality through code reviews, technical mentoring, design guidance, and reuse of shared patterns and components
  • Strong collaboration and communication skills, with the ability to work effectively across engineering, architecture, product, and platform teams

PREFERRED QUALIFICATIONS

  • Experience leading the design or implementation of agentic AI workflows involving multi-step reasoning, tool orchestration, and reusable orchestration patterns
  • Experience with Microsoft AI Foundry, Azure Machine Learning, Azure AI / Copilot Studio, or equivalent platforms used for enterprise AI solution development and experimentation
  • Familiarity with enterprise integration and application ecosystems, including AI integration with APIs, workflow platforms, and downstream business systems
  • Experience contributing to reusable GenAI accelerators, prompt libraries, orchestration templates, internal AI developer platforms, or engineering toolkits
  • Familiarity with AI governance, safety, observability, and cost-management tooling, including token usage analytics, quality evaluation, and guardrail implementation
  • Experience supporting technical direction for other engineers through architecture reviews, implementation guidance, and technical mentoring
  • Ability to communicate complex technical decisions clearly to both engineers and non-technical stakeholders
  • Experience operating in a build-own-operate product environment with strong expectations around reliability, supportability, and continuous improvement