Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses around the world streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real time, driving efficiency and agility.
Trusted by a global community of 400,000 customers, Workato empowers organizations of every size to unlock new value and lead in today’s fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com.
Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company.
But, we also believe in balancing productivity with self-care. That’s why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives.
If this sounds right up your alley, please submit an application. We look forward to getting to know you!
Also, feel free to check out why:
Business Insider named us an “enterprise startup to bet your career on”
Forbes’ Cloud 100 recognized us as one of the top 100 private cloud companies in the world
Deloitte Tech Fast 500 ranked us as the 17th fastest-growing tech company in the Bay Area, and 96th in North America
Quartz ranked us the #1 best company for remote workers
We are looking for a Senior Python Engineer to play a key role in building the core of our AI platform. In this position, you will design and develop production-grade systems that power intelligent automation, agentic workflows, and large-scale retrieval services. This is a highly technical, hands-on role that involves close collaboration with product and platform teams to transform advanced AI concepts into reliable, scalable, and secure solutions used across our enterprise ecosystem.
Responsibilities
Design, build, and maintain AI-powered services and APIs, leveraging LLMs (OpenAI, Anthropic, Qwen, OSS models) and custom ML models.
Develop an enterprise-grade agentic framework that enables orchestration, retrieval, and collaboration between multiple AI agents.
Implement and optimize knowledge retrieval systems and agentic search capabilities using vector databases such as Qdrant and Elasticsearch.
Write well-structured, efficient, and testable Python code for production services, experimentation, and internal developer tools.
Build and maintain shared Python libraries and SDKs used across multiple applications and microservices.
Collaborate with cross-functional teams on architecture, internal protocols, and API standards to ensure consistency and reliability across the platform.
Develop and enhance monitoring, validation, and observability for production-grade AI solutions.
Drive the full software development lifecycle, from design and implementation to deployment, monitoring, and continuous improvement.
Identify and resolve performance bottlenecks, reliability issues, and scaling challenges in complex, data-intensive environments.
Participate in code reviews and technical discussions, mentoring other engineers and contributing to a culture of excellence.
Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
5+ years of experience as a Software Engineer, with strong proficiency in Python.
Proven track record of building and maintaining production-grade systems using Python.
Strong understanding of distributed systems, API design, and data-driven architectures.
Experience with relational and non-relational databases (PostgreSQL, Elasticsearch, Qdrant, or similar).
Familiarity with AI/ML system design, including LLM integration and evaluation pipelines.
Knowledge of DevOps and observability practices (CI/CD, monitoring, metrics, and model validation).
Experience working with multiple LLM providers (OpenAI, Anthropic, Qwen, open-source models).
Background in developer platforms or AI infrastructure services.
Familiarity with vector databases, semantic retrieval, and knowledge graph architectures.
Exposure to Langfuse, LiteLLM, LangChain, or similar frameworks.
Experience developing enterprise-scale SaaS or distributed backend systems.
Contributions to open-source projects in Python, AI, or infrastructure engineering.
Excellent communication skills, with the ability to convey complex technical ideas clearly to both technical and non-technical audiences.
Collaborative and proactive approach, comfortable working across teams in a dynamic environment.
Strong analytical and problem-solving abilities, with a focus on continuous improvement and innovation.
Curiosity and a genuine interest in emerging AI technologies and modern backend architectures.
Tech Stack
Python • FastAPI • LLM APIs (OpenAI, Anthropic, Qwen, OSS) • LiteLLM • Qdrant • PostgreSQL • Elasticsearch • Langfuse • Kubernetes • GitHub Actions • ArgoCD
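To give a feel for how some of these pieces fit together, here is a minimal sketch of a FastAPI service that routes a prompt through LiteLLM’s provider-agnostic completion interface. The endpoint path, request schema, and default model are illustrative assumptions, not Workato’s actual API.

```python
# Minimal sketch: an LLM-backed FastAPI endpoint using LiteLLM.
# The route, schema, and default model below are assumptions for illustration.
from fastapi import FastAPI
from pydantic import BaseModel
from litellm import completion

app = FastAPI()

class ChatRequest(BaseModel):
    prompt: str
    model: str = "gpt-4o-mini"  # any LiteLLM-supported provider/model works here

class ChatResponse(BaseModel):
    answer: str

@app.post("/v1/chat", response_model=ChatResponse)
def chat(req: ChatRequest) -> ChatResponse:
    # LiteLLM exposes an OpenAI-style interface, so the same call covers
    # OpenAI, Anthropic, Qwen, and self-hosted open-source models.
    resp = completion(
        model=req.model,
        messages=[{"role": "user", "content": req.prompt}],
    )
    return ChatResponse(answer=resp.choices[0].message.content)
```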
Example Projects
Building an evaluation and observability framework for AI model performance and reliability.
Developing an agentic orchestration platform that enables collaboration among multiple AI agents and tools.
Implementing semantic retrieval and agentic search capabilities over large enterprise knowledge bases.
Designing AI services that process and reason over high-volume real-world data at scale.
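As a rough illustration of the semantic retrieval project above, the core loop might look like the sketch below: embed the query with LiteLLM, then search a Qdrant collection for the nearest matches. The collection name, embedding model, and payload fields are assumptions made for this example.

```python
# Minimal sketch of semantic retrieval over a knowledge base.
# Collection name, embedding model, and payload layout are hypothetical.
from litellm import embedding
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

def semantic_search(query: str, top_k: int = 5) -> list[dict]:
    # Embed the query text via LiteLLM (any supported embedding model would do).
    vector = embedding(model="text-embedding-3-small", input=[query]).data[0]["embedding"]
    # Vector search against the (hypothetical) "knowledge_base" collection.
    hits = client.search(
        collection_name="knowledge_base",
        query_vector=vector,
        limit=top_k,
    )
    return [{"score": hit.score, "payload": hit.payload} for hit in hits]

if __name__ == "__main__":
    for result in semantic_search("How do I configure an on-prem agent?"):
        print(result["score"], result["payload"])
```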