XPENG Motors

Research Scientist Intern

Santa Clara, CA | Full Time
XPENG is a leading smart technology company at the forefront of innovation, integrating advanced AI and autonomous driving technologies into its products, including electric vehicles (EVs), electric vertical take-off and landing (eVTOL) aircraft, and robotics. With a strong focus on intelligent mobility, XPENG is dedicated to reshaping the future of transportation through cutting-edge R&D in AI, machine learning, and smart connectivity.
 
About the Role
We are actively seeking a full-time Research Scientist Intern to drive the modeling and algorithmic development of XPENG’s next-generation Vision-Language-Action (VLA) Foundation Model — the core brain that powers our end-to-end autonomous driving systems.
 
You will work closely with world-class researchers, perception and planning engineers, and infrastructure experts to design, train, and deploy large-scale multi-modal models that unify vision, language, and control. Your work will directly shape the intelligence that enables XPENG’s future L3/L4 autonomous driving products.
Key Responsibilities
  • Conduct research on designing and implementing large-scale multi-modal architectures (e.g., vision–language–action transformers) for end-to-end autonomous driving.
  • Design and integrate cross-modal alignment techniques (e.g., visual grounding, temporal reasoning, policy distillation, and imitation/reinforcement learning) to improve model interpretability and action quality.
  • Collaborate closely with researchers and engineers across the modeling and infrastructure teams.
  • Contribute to publications at top-tier AI/CV/ML conferences and present research findings.
Minimum Qualifications
  • Currently enrolled in a Master's or Ph.D. program in Computer Science, Electrical/Computer Engineering, or a related field, with a specialization in CV/NLP/ML.
  • Experience in multi-modal modeling (vision, language, or planning), with a deep understanding of representation learning, temporal modeling, and reinforcement learning techniques.
  • Strong proficiency in PyTorch and modern transformer-based model design.
Preferred Qualifications
  • Publication record in top-tier AI conferences (CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, etc.).
  • Prior experience building foundation or end-to-end driving models, or LLM/VLM architectures (e.g., ViT, Flamingo, BEVFormer, RT-2, or GRPO-style policies).
  • Knowledge of RLHF/DPO/GRPO, trajectory prediction, or policy learning for control tasks.
  • Familiarity with distributed training (DDP, FSDP) and large-batch optimization.
What We Offer
  • A collaborative, research-driven environment with access to massive real-world data and industry-scale compute.
  • Opportunity to work with top-tier researchers and engineers advancing the frontier of foundation models for autonomous driving.
  • Direct impact on the next generation of intelligent mobility systems.
  • Competitive compensation package and comprehensive benefits.
  • Infrastructure and computational resources to support your work.
  • Meals, snacks, and team-building activities.
 
We are an Equal Opportunity Employer. It is our policy to provide equal employment opportunities to all qualified persons without regard to race, age, color, sex, sexual orientation, religion, national origin, disability, veteran status, marital status, or any other protected category set forth in federal or state regulations.