Hello,
Thank you for providing this code. It works quite well.
I have trained on libero90 and reproduced some of the evaluation metrics. As I understand it, one of the relevant parameters to adjust is action_horizon (the chunk size). Is there a connection to the autoencoder when increasing the chunk size?
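To make the question concrete, here is a minimal sketch of how I picture the chunk size interacting with an action autoencoder. All names here (`ActionChunkAutoencoder`, `action_horizon`, `latent_dim`) are my own assumptions, not this repo's actual API:

```python
import torch
import torch.nn as nn

# Minimal sketch (hypothetical names, not this repo's API): an action
# autoencoder that flattens a chunk of actions, so the encoder input
# width scales with action_horizon * action_dim.
class ActionChunkAutoencoder(nn.Module):
    def __init__(self, action_dim: int, action_horizon: int, latent_dim: int):
        super().__init__()
        in_dim = action_dim * action_horizon  # grows linearly with chunk size
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim)
        )

    def forward(self, chunk: torch.Tensor):
        # chunk: (batch, action_horizon, action_dim)
        z = self.encoder(chunk.flatten(start_dim=1))
        recon = self.decoder(z).reshape_as(chunk)
        return recon, z

# Doubling action_horizon doubles the flattened input while latent_dim
# stays fixed, i.e. the latent must compress twice as much action signal,
# which is why I suspect the autoencoder capacity may need to change
# together with the chunk size.
model = ActionChunkAutoencoder(action_dim=6, action_horizon=16, latent_dim=16)
recon, z = model(torch.randn(4, 16, 6))
```

Is that the right mental model, or does the codebase handle longer chunks differently?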
It would be great if you could give a hint about which parameters to adjust, and whether there are further tweaks for better performance.
I'm currently trying to train the policy on a (cheap) 6-DOF robotic arm. It somewhat works, although ACT seems to perform better at the moment; I suspect this might be a training or configuration issue on my side. I have trained the encoder and prior using one-hot embeddings.
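For reference, this is roughly how I condition the encoder and prior on the one-hot task embedding. This is a simplified sketch with hypothetical names; in my single-task arm setup `num_tasks` is 1, so the embedding is constant, which I wonder might be part of the problem:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified sketch of my conditioning setup (hypothetical names).
# With a single task, the one-hot vector is constant, which I suspect
# could matter if the model expects informative task embeddings.
num_tasks, obs_dim, latent_dim = 1, 6, 16

encoder = nn.Linear(obs_dim + num_tasks, latent_dim)  # posterior over latents
prior = nn.Linear(obs_dim + num_tasks, latent_dim)    # prior over latents

obs = torch.randn(4, obs_dim)
task = F.one_hot(torch.zeros(4, dtype=torch.long), num_classes=num_tasks).float()
cond = torch.cat([obs, task], dim=-1)
z_posterior = encoder(cond)
z_prior = prior(cond)
```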
Thanks a lot!