
Optimizing for GPU parallelized environments that return batched torch cuda tensors #13

@StoneT2000

I'm currently modifying my PPO code (mostly based on CleanRL as well) for the ManiSkill GPU sim so that everything runs on the GPU. I'm trying to squeeze out as much performance as possible and am reviewing the torch.compile PPO continuous-action code right now.

A few questions:

  1. I can probably remove all the `.to(device)` calls on tensordicts, right? e.g.
    https://github.com/pytorch-labs/LeanRL/blob/a416e61058ffa2dfe571bfe1d69a7f62d622b503/leanrl/ppo_continuous_action_torchcompile.py#L206-L209

And is the original `non_blocking` meant to ensure we don't eagerly move the data until we need it (e.g. at the next inference step in the rollout)?
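
For reference, here is roughly the distinction I have in mind (just a sketch with placeholder shapes and variable names, not the actual LeanRL code):

```python
import numpy as np
import torch

device = torch.device("cuda")

# CPU-sim case (what I believe the LeanRL script is written for): the env
# returns numpy arrays, so they are wrapped and copied to the GPU.
# non_blocking=True only overlaps the host->device copy with compute when the
# source tensor is in pinned (page-locked) memory; otherwise it behaves like a
# normal blocking copy.
raw_obs = np.zeros((4096, 39), dtype=np.float32)   # placeholder for envs.step() output
next_obs = torch.as_tensor(raw_obs).pin_memory().to(device, non_blocking=True)

# GPU-sim case (ManiSkill): the env already returns batched CUDA tensors.
# .to(device) on a tensor that is already on `device` with the same dtype
# returns the same tensor without copying, so these calls look droppable.
gpu_obs = torch.zeros((4096, 39), device=device)   # placeholder for the batched obs
next_obs = gpu_obs.to(device, non_blocking=True)
assert next_obs.data_ptr() == gpu_obs.data_ptr()   # no copy was made
```

My assumption is that in the GPU-sim case these lines are no-ops and can simply be dropped; I mainly want to confirm there is no hidden reason to keep them.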

  2. How bad are `.eval()` / `.train()` calls, and why should they be avoided? I thought they were just simple switches (see the sketch after this list).
  3. Are there any environment-side optimizations to make RL faster? I'm aware that some transfers can be made non-blocking; can the same be done for environment observations and rewards (sketch below)? Are there other tricks?
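
For question 2, my mental model of `.eval()` / `.train()` is that they are just a recursive flip of the `training` flag, roughly like this (a paraphrase, not the actual `torch.nn.Module` source):

```python
import torch.nn as nn

# Rough paraphrase of what I assume nn.Module.train()/.eval() amount to:
# recursively flipping the `training` attribute on every submodule.
def set_mode(module: nn.Module, mode: bool) -> nn.Module:
    module.training = mode
    for child in module.children():
        set_mode(child, mode)
    return module

agent = nn.Sequential(nn.Linear(39, 256), nn.Tanh(), nn.Linear(256, 8))
set_mode(agent, False)                                 # same effect as agent.eval()
assert all(not m.training for m in agent.modules())
```

If that is really all they do, I don't see where the cost would come from, unless flipping the flag interacts badly with torch.compile guards or CUDA graph capture; is that the concern?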
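
For question 3, the kind of thing I have in mind is, for example, streaming per-step rewards back to the CPU for logging without stalling the GPU sim (again just a sketch with made-up shapes and names):

```python
import torch

device = torch.device("cuda")
num_steps, num_envs = 32, 4096

# A device->host copy is only asynchronous when the destination lives in
# pinned memory, and it must be synchronized before the values are read.
reward_log = torch.empty(num_steps, num_envs, pin_memory=True)

for step in range(num_steps):
    reward_gpu = torch.rand(num_envs, device=device)   # placeholder for envs.step() rewards
    reward_log[step].copy_(reward_gpu, non_blocking=True)

torch.cuda.synchronize()   # make sure the async copies have landed
print("mean reward:", reward_log.mean().item())
```

Is that the sort of trick that is worth doing for observations and rewards, or are there bigger wins on the environment side?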

Thanks! Looking forward to trying to set some training speed records with these improvements!
