NVIDIA

Senior Software Engineer, Deep Learning Inference

Tel Aviv, Israel · Full time

NVIDIA has been at the forefront of the deep learning revolution, pioneering innovations that have transformed the entire field. As the leading provider of GPUs and AI computing platforms, NVIDIA has empowered researchers and engineers worldwide to accelerate breakthroughs in artificial intelligence.

We seek a versatile Senior Software Engineer who is passionate about performance optimization and generative AI. Our team brings the latest research in LLM inference — from novel decoding strategies to quantization schemes — into production across NVIDIA's hardware lineup, from large data center servers to powerful edge devices. We work on the most advanced architectures in the field, with a focus on NVIDIA's own.

What you'll be doing:

  • Implement and optimize inference algorithms for LLM and omnimodal architectures, including hybrid Mamba-Transformer and mixture-of-experts models

  • Profile inference pipelines using NVIDIA's profiling and simulation tools. Correlate simulation predictions against real hardware across data center and edge devices

  • Write and tune GPU kernels (CUDA, Triton) for operators like fused MoE layers, SSM state updates, and quantized GEMMs

  • Solve distributed inference problems: expert parallelism, communication-compute overlap, collective tuning, multi-node deployment

  • Build production-grade software inside major open-source libraries: vLLM, SGLang, Dynamo, FlashInfer

  • Own optimization features end-to-end, from scoping through delivery, collaborating with research, product, and engineering teams worldwide

What we need to see:

  • B.Sc., M.Sc., or equivalent experience in Computer Science or Computer Engineering

  • 5+ years of hands-on software engineering experience in performance-critical systems

  • Solid understanding of deep learning architectures (e.g., Transformers, SSMs, MoE)

  • Experience with systems where hardware constraints matter: GPU programming, memory hierarchy, networking, or distributed computing

  • Strong software engineering fundamentals: clean design, extensibility, testability. Good judgment about when complexity is warranted

  • Effective communicator who works well across teams and time zones

  • Experience optimizing deep learning workloads on NVIDIA GPUs using roofline models, Nsight and PyTorch profilers, and end-to-end traces

Ways to stand out from the crowd:

  • Contributions to open-source inference runtimes and libraries: vLLM, SGLang, FlashInfer, Dynamo, or similar

  • Hands-on work with LLM quantization (FP8, NVFP4, MXFP8, mixed-precision) and practical understanding of numerical precision tradeoffs

  • Track record with distributed inference at scale: tensor parallelism, pipeline parallelism, expert parallelism, disaggregation, multi-node orchestration

  • Deep knowledge of the latest LLM architectural trends: multi-token prediction, sparse hybrid models, attention and state-space mechanisms

  • Experience with performance modeling and simulation-to-silicon correlation

NVIDIA is widely considered one of the world's most desirable employers in the technology field. We have some of the most forward-thinking and hardworking people working for us. If you're creative and autonomous, we want to hear from you! We are committed to fostering a diverse work environment and are proud to be an equal-opportunity employer. We highly value diversity in our current and future employees. We do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.