NVIDIA

Machine Learning Intern, Humanoid Loco-Manipulation - 2026

Shanghai, China - Full time

The NVIDIA Isaac Loco-Manipulation team is seeking exceptional machine learning interns to join our world-class robotics initiatives. As an intern, you’ll work alongside industry-leading experts, gaining hands-on experience and contributing to the future of humanoid loco-manipulation. We’re looking for strategic, ambitious, and creative individuals who are passionate about pushing the boundaries of robotics.

What you’ll be doing:

  • Collaborate with researchers and engineers to define and execute projects focused on humanoid loco-manipulation and mobile manipulation.

  • Contribute to GR00T and Cosmos foundation models.

  • Support the development of reference workflows in Isaac Lab and Newton.

  • Advance technologies for robot learning and synthetic data generation using human video datasets.

  • Design, implement, and deploy novel algorithms for humanoid robot locomotion and manipulation in both simulated and real-world environments.

  • Integrate your work with NVIDIA’s advanced robotics platforms.

  • Transfer your innovations into products, with deliverables including prototypes, patents, and/or publications in top conferences and journals.

What we need to see:

  • Currently enrolled in a PhD or Master’s program in Computer Science, Electrical Engineering, Robotics, or a related field, and available for the duration of the internship.

  • Strong programming skills in Python and C++; familiarity with deep learning frameworks (PyTorch, JAX, TensorFlow) and physics simulation tools (Isaac Sim/Lab, MuJoCo).

  • Demonstrated research or internship experience, with publications in top conferences.

  • Excellent communication and collaboration skills.

  • Experience with large-scale model training on GPU clusters is a plus.

Ways to stand out from the crowd:

  • Foundation models for robotics and 3D perception.

  • Learning from human video demonstrations; human-object reconstruction.

  • Humanoid loco-manipulation: whole-body control, dexterous and bimanual manipulation, locomotion.

  • Robotics simulation, sim-to-real and real-to-sim transfer.

  • Robot learning and reasoning, including imitation and reinforcement learning.

  • Vision-language-action (VLA) models.

  • Synthetic data generation for robotics research.