The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems
A comprehensive list of papers about Robot Manipulation, including papers, codes, and related websites.
OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model
OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks
arXiv 2025: MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation
A comprehensive list of papers about dual-system VLA models, including papers, codes, and related websites.
Website of the paper: VLA Model-Expert Collaboration for Bi-directional Manipulation Learning
ProRobo3D Benchmark to be released...
VLA Implementation Code for Jake Kemple's UW Master's Thesis