The Mission
Make general-purpose robots a reality.
The Challenge
We envision a future where robots assist with household chores and cooking, aid the elderly in maintaining their independence, and enable people to spend more time on the activities they enjoy most. To achieve this, robots need to operate reliably in messy, unstructured environments. Our mission is to answer the question: what will it take to create truly general-purpose robots that can accomplish a wide variety of tasks in settings like human homes, with minimal human supervision? We believe that the answer lies in cultivating large-scale datasets of physical interaction from a variety of sources and building on the latest advances in machine learning to learn general-purpose robot behaviors from this data.
The Team
The Learning From Videos (LFV) team in the Robotics division develops foundation models that leverage large-scale multi-modal data (RGB, depth, flow, semantics, actions, tactile, audio, etc.) from multiple domains (driving, robotics, indoors, outdoors, etc.) to power downstream embodied AI tasks. Our topics of interest include Video Generation, World Models, 4D Reconstruction, Multi-Modal Models, Multi-View Geometry, Data Augmentation, and Video-Language-Action models, with a primary focus on embodied applications. We are tackling some of the hardest scientific challenges in spatio-temporal reasoning and applying that progress to deploy autonomous agents in real-world unstructured environments, across both the robotics and driving domains.
The Opportunity
Our team is looking for a Research Engineer to own and drive the core data and model infrastructure that powers our research. As our foundation models scale in both data diversity and model complexity, we need a strong engineer who can bridge the gap between research ideas and production-grade systems. This is not a traditional software engineering role; you will work directly alongside research scientists, understand the research deeply enough to make independent technical decisions, and play a key role in enabling the team to move faster and train better models.
As a Research Engineer, you will be responsible for building and maintaining the infrastructure that ingests, unifies, and serves heterogeneous multi-modal datasets at scale; supporting and optimizing large-scale distributed training of diffusion and transformer models; and developing tools and pipelines that accelerate the research-to-results cycle. You will work closely with researchers to prototype new ideas, run experiments, and help ship our most successful models toward real-world applications.