Accelerate research on strategic projects that enable trustworthy, robust, and reliable agentic systems, alongside a group of research scientists and engineers on a mission-driven team. Together, you will apply ML and other computational techniques to a wide range of challenging problems.
We’re a dedicated scientific community, committed to “solving intelligence” and ensuring our technology is used for widespread public benefit.
We’ve built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don’t set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.
As a Research Scientist in Strategic Initiatives, you will use your machine learning expertise to collaborate with other machine learning scientists and engineers within our strategic initiatives programs. Your primary focus will be on building technologies that make AI agents safer. AI agents are increasingly deployed in sensitive contexts with powerful capabilities: they can access personal data, confidential enterprise data and code, interact with third-party applications or websites, and write and execute code to fulfil user tasks. Ensuring that such agents are reliable, secure, and trustworthy is a major scientific and engineering challenge with huge potential impact. In this role, you will serve this mission by proposing and evaluating novel approaches to agentic safety, and by building prototype implementations and production-grade systems to validate and ship your ideas, in collaboration with a team of researchers and engineers from SSI and the rest of Google and GDM.
In order to set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:
In addition, the following would be an advantage: