Escape

AI Engineer (Security Research)

Full-time

Location: Paris (hybrid: 3 days in office, occasional NYC trips)

Reports to: Mathieu (Head of Engineering), with weekly direct contact with founders Antoine Carossio (CTO) and Tristan Kalos (CEO)

Visa & relocation: We sponsor and relocate. France's passeport talent visa makes this fast for research profiles.

Compensation: €90k–€120k base + up to 0.2% equity (4-year vest, 1-year cliff)

Start date: ASAP

___

About Escape

Escape is offensive security for the teams that are 100x outnumbered. We build AI agents that find and exploit vulnerabilities in modern applications, APIs, and microservices, the way a real attacker would, but at the speed of a CI pipeline.

We're a Y Combinator W23 company. We just closed an $18M Series A in March 2026 led by Balderton Capital (with Uncorrelated Ventures, IRIS, and YC following on). Our customers and partners include Zoom, Schibsted, Wiz, Pandadoc, and a growing list of teams who've stopped pretending annual pen tests are enough. We're 40 people split between Paris and NYC, scaling to 80 in the next 12 months.

The product surface is real: ASM, business-logic-aware DAST, and AI Pentesting GA-ing this year. The competitive thesis is that LLMs collapsed the cost of offensive engineering by 1000x, and the companies that ship that capability into production first will own the next decade of the category. We're betting we're those companies.

___

The thesis

LLMs are getting good at security. Every new model finds more vulnerabilities, with fewer false positives, faster, and at lower cost than the model before it. Open-source models are 6–12 months behind the frontier, which means attackers will have these capabilities anyway.

This breaks the security market in three ways:

  1. The bar for "useful security tool" is rising fast. Off-the-shelf LLMs are already better than most dedicated scanners. Anything we build has to clear a moving target.
  2. Patching becomes the bottleneck. Vulns are easy to find but hard to fix safely. Backlogs will explode.
  3. A new class of defense will emerge: defenses against AI-powered attackers. Source code engineered to confuse LLMs reading it. Binaries that defend themselves against LLM-driven fuzzing. Honeypots that compromise pentesting agents and pop a reverse shell on the attacker. Adversarial examples, but for cybersecurity.

We want to make fear change sides. We want to hack the AI hackers.

To do that, we need someone who can do real research: not repurpose existing techniques, but produce IP that doesn't exist in the training data yet. That's the role.

___

What you'll actually do

You'll join Escape as one of the first members of a dedicated Research Team, reporting to Mathieu (Head of Engineering) with weekly direct contact with the founders (Tristan and Antoine). Concretely:

  • Probe the limits of current frontier models on offensive and defensive security tasks. Build our own benchmarks where the public ones lie or don't exist.
  • Invent and prototype novel defenses against LLM-powered attackers: adversarial inputs for code-reading models, traps for agentic pentesters, techniques that turn the asymmetry around.
  • Ship research as artifacts: papers, benchmarks, tools, responsible disclosures. We commit to one public output per month (post, disclosure, talk) and one major artifact per quarter (paper, benchmark release, or open-source tool). This is in your contract, not just a hope.
  • Run a weekly "research office hours" for our engineering team. Your job isn't to be a silo, it's to compound the rest of the company.
  • Represent Escape at conferences: DEF CON, Black Hat, USENIX Security, NeurIPS workshops, whatever fits. Travel budget is real.

What success looks like at 12 months: at least one piece of work that moves the conversation in the AI-security community, a benchmark or tool with external adoption, and clear evidence that the research function is a product and marketing weapon, not a cost center.

___

Who we're looking for

This is a research role with a high formal bar. You should have:

  • A graduate degree from a top program in ML, applied math, or CS. Examples in Europe: MVA (ENS Paris-Saclay), ENS Ulm, Polytechnique, Mines ParisTech, Télécom Paris, EPFL, ETH Zürich, Cambridge MPhil, Oxford, MIT, Stanford, CMU, or an equivalent program elsewhere. A PhD is a strong plus, especially in ML, security, programming languages, or formal methods.
  • At least 3 years of post-graduation professional experience in research, applied ML, or security. Internships and PhD time count partially, not in full.
  • Hybrid technical depth: real ML/LLM work (publications, open-source models, applied research at a frontier lab or strong industry team) and real security experience (CTFs, vuln research, pentesting, applied crypto, red team, or security engineering at a serious shop). We don't expect parity in both, but you must be credible on both sides.
  • A public track record: a paper at a top venue (NeurIPS, ICML, ICLR, USENIX Security, IEEE S&P, CCS, etc.), a CVE, a tool with traction, a benchmark people use, or a technical write-up that moved the conversation. We need to see how you think.
  • Comfort across the stack: reading PyTorch internals and disassembly, or willing to close the gap fast on whichever side you're weaker.
  • Strong, defensible opinions about what's bullshit in AI-security marketing. The space is full of theater. We want someone who sees through it.
  • A genuine desire to publish. If you want to keep your work secret, this isn't the right role.
  • Language: English is the working language. French is a nice-to-have, not required.

___

What we commit to in writing

We've thought about why research roles fail at startups. Here's how we're trying to avoid each failure mode:

  • Roadmap pressure: Your research time is contractually protected. The founders have signed off on this. Engineering urgency does not eat into it.
  • Isolation: We're hiring 2-3 researchers, not one. You'll have peers. If you join early, you'll help shape who else comes in.
  • Career path: A Senior Researcher / Research Lead track is being defined now and will be finalized before your start date. Internal promotion is the default expectation if the team grows.
  • Conference and publication budget: real, not symbolic.
  • Compute budget: real, not symbolic.

___

Why now

The window where this kind of research is novel is short. In 12 months, defenses against AI-powered attackers will be a category, not a curiosity. We want to define it, not catch up to it.

If reading this got your attention, send us:

  1. The most interesting thing you've shipped in the last 2 years (link, paper, repo, write-up).
  2. Two paragraphs on what you think the next non-obvious problem in AI-security is.

No cover letters. We'll read every submission ourselves.


🎯 Interview Process

Target time-to-offer: 2 weeks.
  • First call with our CTO Antoine (30 min).
  • Technical deep-dive on your past work with our Head of Engineering Mathieu (60 min).
  • Research-direction conversation with founders Tristan and Antoine (60 min).
  • Reference checks, then offer.