NVIDIA

Senior Performance Engineer - Deep Learning

US, CA, Santa Clara Full time

Our Deep Learning model performance engineering team at NVIDIA is hiring software engineers at all experience levels to build and optimize the libraries and tools that enable Deep Learning researchers and engineers to design, develop, and deploy efficient AI applications. We are an ambitious and diverse team that builds optimizations directly into mainstream open-source Deep Learning frameworks such as PyTorch and JAX, boosting performance at all levels of NVIDIA's AI stack. Our team has a wide collaborative footprint, working not only with multiple teams across NVIDIA but also with the broader open-source community to deliver state-of-the-art Deep Learning performance on the best AI platform in the world!

What you will be doing:

  • Build and support Transformer Engine, the open-source library for accelerating the training of Large Language Models.

  • Collaborate on systems research that improves Deep Learning model performance, such as training in extremely low precision, new parallelism methods, and more.

  • Implement, benchmark, and optimize new Deep Learning models such as LLMs straight out of groundbreaking research to scale efficiently on NVIDIA GPUs and systems.

  • Build and contribute to NVIDIA submissions on community benchmarks such as MLPerf.

  • Engage with the open-source community as well as support enterprise customers and partners by delivering the benefits of NVIDIA’s latest hardware and software innovations.

  • Influence the design of new hardware generations and core platform software components for NVIDIA hardware and systems.

What we need to see:

  • BS or equivalent experience in Computer Science, Electrical Engineering, or a related field.

  • 3+ years of experience in C++ and Python programming.

  • Strong background, experience, or coursework in parallel systems programming, preferably on GPUs.

  • Knowledge of Computer Architecture, Code Optimization, and/or Operating Systems.

  • Proven experience in developing large software projects.

  • Excellent verbal and written communication skills.

Ways to stand out from the crowd:

  • Experience in PyTorch, JAX, or any other DL framework.

  • Experience with performance analysis, profiling, and code optimization techniques, especially with multi-GPU or multi-node systems.

  • Knowledge of modern LLM architectures, attention mechanisms, and/or low-level DL libraries such as cuBLAS, cuDNN, and cuSOLVER.

  • Experience writing GPU kernels using CUDA, OpenAI Triton, CuTeDSL, Pallas, or other similar libraries.

  • Past contributions to the open-source community and/or experience working with multidisciplinary teams also showcase readiness for the team's responsibilities.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD to 241,500 USD for Level 3, and 184,000 USD to 287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 8, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.