Control simulated robot with real leader #514
Conversation
Thanks to @marinabar, who tested the script on her setup.
I'm noticing quite a bit of the new script could be DRY'ed up, since it rehashes a fair bit of the original control_robot. I'm curious: what issues are you finding with using
Hey @apockill! You're right, we might be able to find a general solution in
So even though the two scripts resemble each other, I still think it is cleaner to keep them separate. What do you think? If you have some vision of how we can improve on that or merge the two scripts, I would be happy to chat or have a look :D
Force-pushed from f71be08 to 50922e1.
First quick review ;)
Thanks Michel :D
Co-authored-by: Remi <[email protected]>
Approved!
What this does
Adds a script `control_sim_robot.py` in `lerobot/scripts` that has the same functionality and interface as `control_robot.py`, but for simulated environments. The script has three control modes: teleoperation, data recording, and replay (see "How to test" below). Recorded datasets are uploaded to the hub under the repo id passed with the `--repo-id` option. The dataset created contains additional columns related to reinforcement learning, like `next.reward`, `next.success` and `seed`.
Simulation environments
Along with the `--robot-path` argument, the script requires a path to the configuration file of the simulation environment, defined in `lerobot/configs/env`. An example configuration file for gym_lowcostrobot, with its essential elements, is sketched below:
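A minimal sketch of what the env config could look like, assuming the same Hydra-style layout as the other files in `lerobot/configs/env` (the task name, dimensions and gym kwargs below are illustrative assumptions, not taken from this PR):

```yaml
# @package _global_
# Hypothetical lerobot/configs/env/gym_lowcostrobot.yaml
# The layout mirrors the other env configs; the task name, dimensions
# and gym kwargs are assumptions and may differ in the actual file.
fps: 25

env:
  name: gym_lowcostrobot        # gym package providing the simulated robot
  task: PickPlaceCube-v0        # any task registered by gym_lowcostrobot
  state_dim: 6                  # 6 joints on the low-cost arm
  action_dim: 6
  fps: ${fps}
  episode_length: 300
  gym:
    render_mode: human          # opens the MuJoCo viewer during teleoperation
    obs_type: pixels_agent_pos
```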
How to test
First install the gym_lowcostrobot environment and add the environment's config file in `yaml` format. Test teleoperation:
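Something roughly like this should work (the subcommand and the `--sim-config` flag are assumptions based on `control_robot.py`'s interface; check the script's `--help` for the exact names):

```bash
# Hypothetical example: teleoperate the simulated robot with the real leader arm.
python lerobot/scripts/control_sim_robot.py teleoperate \
    --robot-path lerobot/configs/robot/koch.yaml \
    --sim-config lerobot/configs/env/gym_lowcostrobot.yaml
```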
Test data collection and upload to hub:
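Again as a hedged sketch with assumed flag names (replace `${HF_USER}` with your hub username):

```bash
# Hypothetical example: record a couple of episodes in simulation and push them to the hub.
python lerobot/scripts/control_sim_robot.py record \
    --robot-path lerobot/configs/robot/koch.yaml \
    --sim-config lerobot/configs/env/gym_lowcostrobot.yaml \
    --repo-id ${HF_USER}/gym_lowcostrobot_test \
    --num-episodes 2
```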
Replay the episodes:
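For example (same assumed interface and repo id as above):

```bash
# Hypothetical example: replay the first recorded episode in the simulator.
python lerobot/scripts/control_sim_robot.py replay \
    --sim-config lerobot/configs/env/gym_lowcostrobot.yaml \
    --repo-id ${HF_USER}/gym_lowcostrobot_test \
    --episode 0
```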
In the script we save the `seed` in the dataset, which lets us reset the environment to the same state it was in during data collection and makes the replay successful. Finally, visualize the dataset:
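This can use the existing dataset visualization script, for example:

```bash
# Visualize the first recorded episode with lerobot's dataset visualization script.
python lerobot/scripts/visualize_dataset.py \
    --repo-id ${HF_USER}/gym_lowcostrobot_test \
    --episode-index 0
```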
TODO:
- Test with more simulation environments: brax, maniskill, IsaacLab, ...
- Add keyboard control of the end-effector.
Note: You might need to run the script with `mjpython` if you're on macOS and using MuJoCo (see the example after these notes).
Note: The current script requires a real leader arm in order to teleoperate sim environments. We can add support for keyboard control of the end-effector for people who don't have the real robot.
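For instance, the teleoperation command above would become (same assumed flags):

```bash
# MuJoCo's viewer on macOS has to run under mjpython instead of python.
mjpython lerobot/scripts/control_sim_robot.py teleoperate \
    --robot-path lerobot/configs/robot/koch.yaml \
    --sim-config lerobot/configs/env/gym_lowcostrobot.yaml
```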