Job Summary
The Red Hat Performance and Scale Engineering team is looking for an AI Performance Engineer to join the PSAP (Performance and Scale for AI Platforms) to support the performance and scalability characterization and tuning of Red Hat AI’s Agentic AI platform.
Red Hat AI is building an open‑source, end‑to‑end platform for building LLM‑powered agentic solutions on top of RHEL and OpenShift. From high‑volume data processing pipelines and Retrieval‑Augmented Generation (RAG) services to MCP orchestration through MCP gateways and a production‑ready Llama Stack, each scrum team ships a critical layer of our stack. As a performance engineer supporting this team, you will be the technical expert who ensures every layer performs and scales flawlessly in the hands of developers and enterprise customers.
This role needs a seasoned engineer who thinks creatively, adapts to rapid change, and is willing to learn and apply new technologies. You will join a vibrant open source culture and help promote performance and innovation in this Red Hat engineering team. The broader mission of the Performance and Scale team is to establish performance and scale leadership across the Red Hat product and cloud services portfolio. The scope includes component-level, system-level, and solution-level analysis and targeted enhancements. The team collaborates with engineering, product management, product marketing, and customer support, as well as Red Hat’s hardware and software ecosystem partners.
At Red Hat, our commitment to open source innovation extends beyond our products - it’s embedded in how we work and grow. Red Hatters embrace change – especially in our fast-moving technological landscape – and have a strong growth mindset. That's why we encourage our teams to proactively, thoughtfully, and ethically use AI to simplify their workflows, cut complexity, and boost efficiency. This empowers our associates to focus on higher-impact work, creating smarter, more innovative solutions that solve our customers' most pressing challenges.
What you will do
Define measurable KPIs / SLOs for throughput, latency, footprint, and cost across all Agentic AI Platform components.
Formulate performance test plans and execute benchmarks to characterize performance, drive improvements, and detect performance issues through data analysis, visualization, and thoughtful use of AI.
Develop and maintain tools, scripts, and automated solutions that streamline performance benchmarking tasks.
Work closely with cross-functional engineering teams to identify and address performance issues. For example:
RAG: profile vector DBs (PGVector, Milvus) and embedding models, tune ANN indexes and cache paths.
Agentic/MCP: stress‑test agent orchestration graphs, reduce tail latency of multi‑step chains.
Llama Stack: measure performance and capacity across the stack.
Partner with product DevOps teams to bake performance gates into GitHub Actions/OpenShift Pipelines.
Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling.
Triage field and customer escalations related to performance; distill findings into upstream issues and product backlog items.
Publish results, recommendations, and best practices through internal reports, presentations, external blogs, and official documentation.
Represent the team at internal and external conferences, presenting key findings and strategies.
What you will bring
Understanding of AI and LLMs
Familiarity with tools and systems to build agentic AI applications
Fluency in Python (data & ML)
Strong Linux systems engineering skills
Exceptional communication skills - able to translate raw performance numbers into customer value and executive narratives
Commitment to open‑source values
The following is considered a plus:
Master’s or PhD in Computer Science, AI, or a related field
History of open-source contributions
Hands‑on expertise with Kubernetes/OpenShift
Performance engineering expertise
Deep experience building agentic AI applications with popular orchestration frameworks such as LangChain and LangGraph
#LI-OA1
About Red Hat
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.