XPENG is a leading smart technology company at the forefront of innovation, integrating advanced AI and autonomous driving technologies into its vehicles, including electric vehicles (EVs), electric vertical take-off and landing (eVTOL) aircraft, and robotics. With a strong focus on intelligent mobility, XPENG is dedicated to reshaping the future of transportation through cutting-edge R&D in AI, machine learning, and smart connectivity.
As a core member of our AI Infrastructure team, you will work at the intersection of Autonomous Driving and Foundation Models. We don't just process EB-scale perception data from tens of thousands of production vehicles; we are building the high-performance Data Engine that powers our next-generation AI. Your work will directly determine how our self-driving systems "learn" from massive datasets and define the cognitive ceiling of multi-modal models in the physical world.
Key Responsibilities
- Scalable Data Pipelines: Architect and build scalable, end-to-end pipelines that automate the ingestion, cleaning, and processing of PB-scale raw data for both production autonomy and multi-modal LLMs.
- Modern Lakehouse Architecture: Evolve our data storage solutions based on Apache Iceberg and Lance to implement efficient semantic indexing, metadata management, and data versioning.
- Training Throughput Optimization: Deeply optimize data loading and pre-fetching strategies to ensure maximum throughput for large-scale training on 10,000+ GPU clusters.
- Infrastructure Evolution: Support the seamless transition of foundation model data into actionable training sets, bridging the gap between raw vehicle logs and model-ready tokens.
Minimum Qualifications
- Engineering Excellence: BS/MS/PhD in Computer Science or a related field, with a proven track record of building large-scale distributed systems.
- Work Experience: 3-5 years of industry experience.
- Programming Mastery: Proficient in Python, C++, or Java, with a deep understanding of high-performance concurrent programming and systems design.
- Distributed Frameworks: Hands-on experience with at least one distributed processing framework, such as Ray or Spark.
- Lakehouse Expertise: Familiarity with Data Lakehouse concepts and practical experience with technologies like Iceberg and Lance.
Preferred Qualifications
- Experience building data warehouses for trillion-token datasets or PB-scale multi-modal data.
- Deep understanding of data access patterns in deep learning frameworks like PyTorch, DeepSpeed, or Megatron.
- Practical experience with Vector Databases, automated labeling toolchains, or data-centric AI workflows.
- Knowledge of storage formats optimized for AI (e.g., Parquet, Lance) and high-performance file systems.
The base salary range for this full-time position is $124,091-$210,000, in addition to bonus, equity and benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
We are an Equal Opportunity Employer. It is our policy to provide equal employment opportunities to all qualified persons without regard to race, age, color, sex, sexual orientation, religion, national origin, disability, veteran status, marital status, or any other protected category set forth in federal or state regulations.