Large and small-scale models’ fusion-driven proactive robotic manipulation control for human-robot collaborative assembly in Industry 5.0
This is our work on combining large and small models for robot control. We thank “ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation” for inspiring this work.
- Clone this repository with:
git clone https://github.com/mdx-box/LLM_for_robot.git
- Download Isaac Sim and configure it properly.
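As a quick sanity check that Isaac Sim and its Python environment are configured correctly, a standalone script like the sketch below can be run with Isaac Sim's bundled interpreter (e.g. python.sh). This assumes a release where SimulationApp is importable from omni.isaac.kit; newer releases may expose it under a different module.

# Minimal sanity check for the Isaac Sim Python environment (a sketch, not part of this repo).
from omni.isaac.kit import SimulationApp

# Start a headless kit instance; set "headless": False to open the GUI instead.
simulation_app = SimulationApp({"headless": True})
print("Isaac Sim started successfully.")
simulation_app.close()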
- Create the conda environment:
conda create -n LSM_SLM python=3.11
conda activate LSM_SLM
pip install -r requirements.txt
- Edit the config files according to your needs.
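The exact fields depend on the files shipped in this repository, but a config for this kind of project typically stores at least the scene .usd path and the model settings. The snippet below is only a hypothetical illustration of reading such a file; the path config/config.yaml and the key names are assumptions, so check the actual config files for the real schema.

# Hypothetical example of reading a project config; the path and key names are assumptions.
import yaml

with open("config/config.yaml", "r") as f:
    cfg = yaml.safe_load(f)

print(cfg.get("scene_usd_path"))        # e.g. path to the exported .usd scene
print(cfg.get("llm", {}).get("model"))  # e.g. which large model to call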
- Start the project. Two modes are provided: an automated one and a human-assisted one.
auto: python main_test.py
human: python main_human.py
- Notes: How to define your scene in Isaac Sim
6.1 Create your scene with modelling software such as SolidWorks, FreeCAD, or Blender.
6.2 Export your model in the .gltf format and import it into Blender. Set the geometry to the origin, then export it in the .usd format.
6.3 Open the Isaac Sim platform and open the exported .usd file. At this point there is only a model without any physical characteristics, so more details need to be added. First, it is recommended to give the object your desired color.
6.4 Copy the visual prim and rename the copy 'collision'. Move 'collision' into the same folder as 'visual'. Here, 'visual' refers to the visualization mesh, which makes the object visible when opened, while 'collision' is used to add the physical properties.
6.5 Add the rigid-body property to 'collision' (a scripted alternative is sketched after this list).
6.6 Finally, save your model as a .usd file and put its path in the config file.
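As a scripted alternative to doing steps 6.4 and 6.5 through the Isaac Sim GUI, the collision and rigid-body schemas can also be applied with the USD Python API. This is only a sketch under assumptions: the file name my_part.usd and the prim path /World/part/collision are placeholders for your own stage layout.

# Sketch: attach physics schemas to the 'collision' prim with the USD Python API.
from pxr import Usd, UsdPhysics

stage = Usd.Stage.Open("my_part.usd")                          # file exported from Blender (placeholder name)
collision_prim = stage.GetPrimAtPath("/World/part/collision")  # placeholder prim path

UsdPhysics.CollisionAPI.Apply(collision_prim)   # mark the prim as a collider
UsdPhysics.RigidBodyAPI.Apply(collision_prim)   # give it rigid-body dynamics (step 6.5)
UsdPhysics.MassAPI.Apply(collision_prim)        # optional: allows setting mass or density explicitly

stage.GetRootLayer().Save()                     # write the changes back to the .usd file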






