The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. Together with our sister organization, the Center for AI Safety Action Fund, we address AI's toughest challenges through technical research, field-building initiatives, and policy engagement.
To achieve our mission, we run a wide range of programs dedicated to maximizing our positive impact. Some of our past achievements include: releasing a measure of AI capabilities relied on by all major AI companies, running a large compute cluster that has facilitated AI safety research cited over 16,000 times, and publishing a global statement on AI risk signed by Geoffrey Hinton, Yoshua Bengio, and top AI CEOs.
We're looking for dynamic operators to own and execute programs across public engagement, operations, publications, special projects, and research. Example projects include partnering with the team behind #TeamTrees to run a campaign on AGI, supporting researchers building benchmarks for deception and weaponization risks, standing up an AI safety hub in DC, and finding ways to engage YouTubers and long-form creators on AI safety. CAIS is a fast-moving, meritocratic organization: responsibilities and leadership grow for those who show initiative and consistently deliver.