LiteLLM

Site Reliability Engineer

Remote · Full-time

TL;DR

LiteLLM is an open-source LLM Gateway with 34K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We're rapidly expanding and seeking our 6th engineer, focused on owning reliability, performance, and infrastructure stability for the LiteLLM proxy.

What is LiteLLM

LiteLLM provides an open-source Python SDK and a Python FastAPI server that let you call 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
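
For a sense of the interface, here's a minimal sketch of two provider calls through the SDK (the model identifiers are illustrative, and API keys are assumed to be set in the environment):

    # Minimal sketch: same call shape for different providers, with both
    # responses coming back in the OpenAI format. Assumes OPENAI_API_KEY
    # and ANTHROPIC_API_KEY are set; model names below are illustrative.
    from litellm import completion

    messages = [{"role": "user", "content": "Hello, what can you do?"}]

    openai_response = completion(model="gpt-4o", messages=messages)
    claude_response = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

    print(openai_response.choices[0].message.content)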

We just hit $6M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.

Why do companies use LiteLLM Enterprise

Companies use LiteLLM Enterprise once they put LiteLLM into production and need enterprise features: Prometheus metrics for production monitoring, and the ability to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Token) authentication.

What you will be working on

Skills: Python, FastAPI, PostgreSQL, Redis, Kubernetes, Prometheus, performance profiling

As the SRE, you'll own the reliability and performance of the LiteLLM proxy in production. Our users run LiteLLM as a critical gateway handling millions of LLM requests; when it goes down, their entire AI stack goes down. You'll work directly with the CEO and CTO on critical projects including:

  • Fixing OOM issues: e.g. the Prisma Query Engine being unable to recover from an OOMKill in K8s deployments, and unbounded in-memory buffers in spend log transactions

  • Solving database connection problems: e.g. database query limits being reached under load, spend logs loading extremely slowly, Prisma connection pool exhaustion

  • Fixing race conditions and deadlocks: e.g. max_parallel_requests deadlocking API keys after provider timeouts (the counter is never released, so a Redis reset is required; see the first sketch after this list), PodLockManager releasing another pod's lock, in-memory cache increment race conditions

  • Performance optimization: e.g. update_database() doing 7 deep copies per request in the spend tracking hot path, health check fan-out overloading startup

  • Improving Redis/cache reliability: e.g. the budget limiter reading stale Redis data, cache sync issues between the in-memory and Redis layers

  • Production monitoring: making Prometheus metrics accurate (fixing missing/inf budget metrics), adding alerting, improving observability for multi-pod deployments

  • Making the proxy self-healing: graceful degradation when the DB or Redis is temporarily unavailable, connection retry logic, proper health checks (see the second sketch after this list)
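
To make the deadlock bullet concrete, here's a hedged sketch of the bug class involved; the names are hypothetical, not LiteLLM's actual internals. If a limiter slot is only released on the success path, a provider timeout leaks the counter until the key is deadlocked:

    # Hypothetical stand-in for a max_parallel_requests limiter (not
    # LiteLLM's real code): release the slot in finally so it is
    # returned on success, timeout, and cancellation alike.
    import asyncio

    class ParallelRequestLimiter:
        def __init__(self, max_parallel: int):
            self._sem = asyncio.Semaphore(max_parallel)

        async def call_provider(self, make_request, timeout: float = 30.0):
            await self._sem.acquire()
            try:
                # Bound the provider call so a hung request can't hold
                # its slot forever.
                return await asyncio.wait_for(make_request(), timeout=timeout)
            finally:
                # Always runs, so the counter is never leaked and no
                # manual Redis reset is needed.
                self._sem.release()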
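And for the self-healing bullet, a sketch of graceful cache degradation under the same caveat (hypothetical names, not LiteLLM's real cache layer): reads fall back to an in-memory dict when Redis is unreachable, rather than failing the request:

    # Hypothetical two-tier cache: best-effort Redis with an in-memory
    # fallback so a Redis outage degrades service instead of breaking it.
    import redis

    class DegradingCache:
        def __init__(self, redis_url: str):
            self._redis = redis.Redis.from_url(redis_url)
            self._local: dict[str, str] = {}

        def get(self, key: str) -> str | None:
            try:
                value = self._redis.get(key)
                return value.decode() if value is not None else self._local.get(key)
            except redis.exceptions.ConnectionError:
                # Redis is down: serve from the in-memory layer.
                return self._local.get(key)

        def set(self, key: str, value: str) -> None:
            self._local[key] = value
            try:
                self._redis.set(key, value, ex=60)
            except redis.exceptions.ConnectionError:
                pass  # best-effort write-through; Redis catches up on recovery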

What is our tech stack

The tech stack includes Python, FastAPI, Redis, PostgreSQL, Prisma ORM, Kubernetes, Prometheus, and Docker.

Who we are looking for

  • 1-4 years of experience running Python services in production at scale

  • Experience debugging OOMs, memory leaks, connection pool issues, and race conditions

  • Comfortable with PostgreSQL (query optimization, connection pooling, PgBouncer) and Redis

  • Kubernetes experience — you've dealt with pod restarts, resource limits, health probes, and multi-replica coordination

  • Familiarity with Prometheus/Grafana for monitoring and alerting

  • Passion for open source and user engagement

  • Strong work ethic and ability to thrive in small teams

  • Eagerness to talk to users and help solve real problems: our GitHub issues are full of production debugging sessions, and you'd be jumping into those directly

🚀 Y Combinator Company Info

Y Combinator Batch: W23
Team Size: 10 employees
Industry: B2B Software and Services
Company Description: Call every LLM API like it's OpenAI [100+ LLMs]

💰 Compensation

Salary Range: $120,000 - $180,000
Equity Range: 0.25% - 0.75%

📋 Job Details

Job Type: Full-time
Experience Level: 1+ years
Engineering Type: Backend

🛠️ Required Skills

Kubernetes, Python, Redis, Docker, PostgreSQL