NVIDIA

Senior AI Security Researcher

US, NC, Durham (Full time)

NVIDIA is looking for a Senior AI Security Researcher to help define how frontier AI systems, agentic applications, and AI-enabled security automation are tested, attacked, defended, and safely deployed. You will build new methods, tools, evaluations, and proofs of concept that help NVIDIA understand and reduce security risk across AI models, AI platforms, autonomous agents, cloud services, developer tooling, and accelerated computing systems!

We are looking for a researcher who can move fluidly from open-ended research questions to application within working systems: someone who can discover novel failure modes, build rigorous evaluation harnesses, prototype adversarial and defensive techniques, and turn findings into practical mitigations for engineering teams. The right person may come from AI security, ML security, malware data science, cyber-defense research, adversarial ML, LLM security, offensive security, threat hunting, or applied security research at scale!

What You'll Be Doing:

  • Develop and answer open-ended AI security research questions that help NVIDIA understand, measure, and reduce risk in frontier models, agentic systems, AI platforms, and AI-enabled products.

  • Develop practical methods, prototypes, evaluations, or tools that reveal how AI systems can fail under adversarial conditions and how those risks can be mitigated.

  • Explore a range of AI security problems, such as LLM and agent security, adversarial testing, model evaluation, cyber-defense automation, vulnerability discovery, secure deployment, or autonomous response.

  • Translate research into usable outcomes for engineering and security teams, including proof-of-concept demonstrations, benchmarks, technical guidance, mitigations, and secure-by-design recommendations.

  • Collaborate across offensive security, product security, AI research, platform, cloud, and infrastructure teams to connect research insights with NVIDIA's highest-impact security priorities.

  • Help shape NVIDIA's AI-security research strategy by mentoring others, identifying emerging risks, and building repeatable practices for evaluating and defending AI systems.

What We Need to See:

  • 12+ years of experience in AI security, cybersecurity research, applied ML research, offensive security, cyber defense, or related technical fields.

  • Demonstrated record of original research and practical impact, such as deployed security ML systems, AI-security evaluations, CVEs, patents, publications, conference talks, open-source tools, production mitigations, or funded research programs.

  • Hands-on ability to build working research systems in Python and modern ML/data tooling such as PyTorch, JAX, TensorFlow, scikit-learn, Pandas, NumPy, Spark, BigQuery, or comparable platforms.

  • Experience with one or more AI-security areas: LLM security, adversarial ML, model evaluation, agent security, prompt injection, model backdoors, data poisoning, model abuse, secure RAG, synthetic data, or AI-enabled security automation.

  • Strong cybersecurity foundation, including threat modeling, adversary simulation, exploit or vulnerability research, malware analysis, network defense, threat hunting, detection engineering, digital forensics, secure code review, or incident-response automation.

  • Ability to work across ambiguous research problems and practical product constraints, translating findings into prioritized recommendations and measurable security outcomes.

  • Bachelor's degree or equivalent experience in Computer Science, Machine Learning, Cybersecurity, or a related field.

  • Experience leading AI-security research for major models, AI platforms, security products, or large-scale production systems.

  • A track record of building security ML systems that operate at real-world scale.

Ways to Stand Out from the Crowd:

  • Published work or public technical leadership in AI security, malware data science, adversarial ML, LLM security, cyber-defense automation, or offensive AI.

  • Experience developing benchmarks, challenge datasets, red-team tools, evaluation suites, or simulation environments for AI and security systems.

  • Deep knowledge of attacker tradecraft, including living-off-the-land techniques, supply-chain abuse, application-layer AI attacks, data exfiltration, and abuse of autonomous tooling.

  • Experience with low-level systems security.

  • History of mentoring researchers, winning or leading research programs, filing patents, publishing papers, or speaking at major security and AI venues.

In this role, your research will help NVIDIA build AI systems that are not only powerful, but trustworthy, resilient, and secure. You will work with world-class researchers, engineers, and security teams on problems that matter to NVIDIA's products, customers, and the broader AI ecosystem.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 224,000 USD - 356,500 USD for Level 5, and 272,000 USD - 431,250 USD for Level 6.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 12, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.