I'm currently adapting my PPO code for the ManiSkill GPU sim (mostly based on CleanRL as well), which I've modified to keep everything on the GPU. I'm trying to squeeze out as much performance as possible and am reviewing the torch compile PPO continuous code right now.
A few questions:
- I can probably remove all the .to(device) calls on tensordicts, right? e.g.
https://github.com/pytorch-labs/LeanRL/blob/a416e61058ffa2dfe571bfe1d69a7f62d622b503/leanrl/ppo_continuous_action_torchcompile.py#L206-L209
And is the original non_blocking meant to ensure we don't eagerly move the data until we need it (e.g. the next inference step in the rollout)? See the first sketch after this list for what I have in mind.
- How expensive are .eval() and .train() calls, and why should they be avoided? I thought they were simple flag switches. The second sketch after this list shows my current understanding.
- Are there any environment-side optimizations to make RL faster? I'm aware that some transfers can be made non-blocking; I wonder if the same can be done for environment observations and rewards (third sketch after this list). Are there other tricks?
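
For context on the first question, here is a minimal sketch of the transfer pattern I'm referring to. The shapes are made up, and I'm assuming the data starts on the CPU; with the ManiSkill GPU sim the tensors should already be on the GPU, in which case the .to(device) ought to be a no-op and could presumably be dropped:

```python
# Hypothetical sketch of the tensordict transfer in question (made-up shapes).
import torch
from tensordict import TensorDict

device = torch.device("cuda")

# Pretend these came from a CPU-side env; with the ManiSkill GPU sim they
# would already live on the GPU and this copy should be unnecessary.
obs = torch.randn(4096, 42)
reward = torch.randn(4096)
td = TensorDict({"obs": obs, "reward": reward}, batch_size=[4096])

# My understanding: non_blocking=True queues the host-to-device copy on the
# current stream (it needs pinned source memory to truly overlap with compute);
# it does not defer the copy until the data is first used.
td_gpu = td.to(device, non_blocking=True)
```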
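
For the .eval()/.train() question, this toy example is roughly what I suspect is going on under torch.compile; please correct me if I'm off:

```python
# Hypothetical toy example: torch.compile guards on module.training for modules
# whose behavior depends on it (Dropout, BatchNorm), so toggling
# .train()/.eval() between calls can trigger re-tracing and recompilation.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.Dropout(0.1))
compiled = torch.compile(net)

x = torch.randn(2, 8)

net.train()
compiled(x)  # traced and compiled with training=True

net.eval()
compiled(x)  # guard on the training flag fails -> re-trace/recompile
```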
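
And to be concrete about the env-side question, this hypothetical sketch is the kind of non-blocking observation/reward handling I mean, assuming a CPU-side env that returns CPU tensors (with the GPU sim this presumably doesn't apply):

```python
# Hypothetical sketch: stage env outputs in pinned host buffers and copy them
# to the GPU with non_blocking=True so the transfers can overlap with the next
# policy forward pass. Shapes are made up.
import torch

num_envs, obs_dim = 4096, 42
device = torch.device("cuda")

obs_pinned = torch.empty(num_envs, obs_dim, pin_memory=True)
rew_pinned = torch.empty(num_envs, pin_memory=True)

def stage_and_transfer(obs_cpu: torch.Tensor, rew_cpu: torch.Tensor):
    # Copy into pinned staging buffers, then issue asynchronous H2D copies;
    # the overlap only happens because the source buffers are pinned.
    obs_pinned.copy_(obs_cpu)
    rew_pinned.copy_(rew_cpu)
    return (
        obs_pinned.to(device, non_blocking=True),
        rew_pinned.to(device, non_blocking=True),
    )
```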
Thanks! Looking forward to trying to set some training speed records with these improvements!