Anthropic

Policy Design Manager, Age-Appropriate Design

San Francisco, CA | New York City, NY | Full Time

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on age-appropriate design and experiences, including child safety, age assurance, content classification, and adult sexual content. You will help define best practices for developers building on Claude for deployment to users across different developmental stages, design age-assurance policies that protect minors from inappropriate content and interactions, and establish clear boundaries for adult content and experiences. You will also advise cross-functional teams on opportunities for age-appropriate helpfulness, including beneficial use cases for younger users where appropriate. Safety is core to our mission, and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way.

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.

Responsibilities:

  • Serve as an internal subject matter expert, leveraging deep expertise in child safety, adult content, youth development, and age-appropriate design to:

    • Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases

    • Design evaluation frameworks for testing model performance in areas of expertise

  • Conduct regular reviews and testing of existing policies to identify and address gaps and ambiguities

  • Review flagged content to drive enforcement and policy improvements

  • Update our usage policies based on feedback from external experts and our enforcement team, as well as edge cases that you review

  • Work with safeguards product teams to identify and mitigate concerns, and collaborate on designing appropriate interventions for users across different age groups

  • Advise on age assurance approaches and content classification frameworks in partnership with Enforcement, Product, Engineering, and Legal teams

  • Educate and align internal stakeholders around our policies and our approach to safety in your focus area(s)

  • Keep up to date with new and existing AI policy norms, regulatory requirements (e.g., age-appropriate design codes), and industry standards, and use these to inform our decision-making on policy areas

You may be a good fit if you have experience:

  • As a researcher, subject matter expert, or trust & safety professional working in one or more of the following focus areas: child safety, youth online safety, age assurance, developmental science, content classification and rating systems, or adult content policy. Note: For this role, an advanced degree in developmental psychology, child development, education, or a related field is preferred.

  • Drafting or updating product and/or user policies, with the ability to effectively bridge technical and policy discussions

  • Designing or implementing age-appropriate experiences, age assurance mechanisms, or content classification/labeling systems

  • Working with generative AI products, including writing effective prompts for policy evaluations and classifier development

  • Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams

  • Understanding the challenges of developing and implementing product policies at scale, including in the content moderation space

  • Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations

  • Navigating and prioritizing work efforts amidst ambiguity

The annual compensation range for this role is listed below. 

Annual Salary:
$245,000 - $285,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications, which makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.