Anthropic

Content Moderation Specialist

New York City, NY; San Francisco, CA; Washington, DC | Full Time

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.

Our Regulatory Programs pillar is a key part of our overall Integrity & Compliance function and covers a range of compliance domains, including economic sanctions, US export controls, and regulatory compliance programs stemming from global AI safety regulation.

As a Content Moderation Specialist, you'll own day-to-day program management of Anthropic's global content moderation and online safety regulatory compliance program. Online safety regulation is one of the fastest-moving areas of technology law, and AI sits squarely in its sights. Regimes including the EU Digital Services Act, the UK Online Safety Act, the Australia Online Safety Act, and a growing set of emerging frameworks globally create novel obligations for how AI products are built, deployed, and governed. You will be at the forefront of translating those obligations into a defensible, well-documented compliance program — with regulatory risk assessments as the core of the work.

This is a deeply cross-functional role. You'll partner closely with internal counsel, Safeguards, and operations teams across Anthropic to build the compliance program and frameworks that demonstrate Anthropic meets its obligations under content regulation. This is a builder's role at a company that takes integrity seriously and moves fast — you'll exercise independent judgment on issues without clear precedent and help build durable programs that let Anthropic move quickly while honoring its obligations to regulators, customers, and the public.

Key responsibilities

  • Own the global content regulation risk assessment program, including the roadmap of required assessments across jurisdictions, a consistent and repeatable risk assessment methodology and framework, and the coordination of inputs, consultation, and approvals for each assessment

  • Build and maintain systems and trackers to assess, operationalize, and report on relevant regulatory requirements across Anthropic's products and jurisdictions

  • Partner with internal counsel, Safeguards, Policy, engineering, and operations teams to align internal practices with external commitments and legal obligations

  • Maintain a controls inventory and the compliance documentation library for content regulation, ensuring documentation is drafted, reviewed by the right stakeholders, and kept current

  • Conduct gap analysis when new or amended content regulations come into scope, and stand up the compliance readiness plan and workback for each

  • Provide regular written program status reporting to stakeholders and leadership, proactively surfacing stalled or at-risk items with a proposed path to unblock

  • Take on additional related work as the program evolves; job duties and responsibilities may change from time to time at Anthropic's discretion or as required by applicable law

Minimum qualifications

  • Experience managing regulatory or compliance programs at a technology company or in a regulated industry

  • Hands-on experience conducting or program-managing regulatory risk assessments, including coordinating inputs across multiple functions

  • Demonstrated ability to build and maintain compliance program artifacts, including policies, risk assessment documentation, controls inventories, program trackers, and readiness plans

  • A track record of executing cross-functionally, driving outcomes across legal, product, policy, and operations partners without direct authority

  • Excellent written and verbal communication skills, including producing clear program documentation and status reporting for senior stakeholders

  • Sound judgment and the ability to make decisions and move work forward with incomplete information in an evolving regulatory environment

Preferred qualifications

  • 5+ years of relevant experience in regulatory program management or content moderation compliance

  • Direct experience with online safety or content moderation regulation, such as the EU Digital Services Act, UK Online Safety Act, Australia Online Safety Act, or comparable regimes (strongly preferred)

  • Experience in trust and safety, online safety, or regulatory compliance at a large consumer technology platform

  • Prior experience in a Big 4 or other professional services firm advising on content regulation, online safety, or platform compliance engagements

  • Experience designing risk assessment methodologies or compliance frameworks from first principles

  • Experience with multi-jurisdictional compliance programs in a rapidly scaling environment

  • Familiarity with how generative AI products intersect with content and online safety regulation

Role-specific policy: For this role, we expect staff to work from one of our Washington, DC; San Francisco; or New York City offices at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.

The annual compensation range for this role is listed below. 

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:
$255,000 – $270,000 USD

Logistics

Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.