[RLlib] RLModule: InferenceOnlyAPI #47572
Conversation
Signed-off-by: sven1977 <[email protected]>
LGTM. Some questions here and there. Awesome PR that saves us a lot of code 👍
"""An API to be implemented by RLModules that have an inference-only mode.

Only the `get_non_inference_attributes` method needs to get implemented for
a RLModule to have the following functionality:
Small nit "a" -> "an RLModule" ;)
fixed
Only the `get_non_inference_attributes` method needs to get implemented for
a RLModule to have the following functionality:
- On EnvRunners (or when self.config.inference_only=True), RLlib will remove
Can we indent these? It makes it easier to read.
I'm not sure it'll pass the doctests, but let's try ...
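To make the API under discussion concrete, here is a minimal, Ray-free sketch of what implementing it can look like. The `InferenceOnlyAPI` class below is only a stand-in stub for the real RLlib class, and `MyPPOStyleModule`, `_Encoder`, and the attribute names are hypothetical, chosen to mirror the attribute paths that appear later in this review:

```python
import abc
from typing import List


class InferenceOnlyAPI(abc.ABC):
    # Stand-in stub for RLlib's InferenceOnlyAPI so this sketch runs
    # without Ray installed.
    @abc.abstractmethod
    def get_non_inference_attributes(self) -> List[str]:
        """Returns (possibly dotted) names of attributes not needed for inference."""


class _Encoder:
    # Hypothetical encoder with separate actor/critic sub-encoders.
    def __init__(self):
        self.actor_encoder = object()
        self.critic_encoder = object()


class MyPPOStyleModule(InferenceOnlyAPI):
    # Hypothetical module: the value-function head and the critic encoder
    # are only needed for training, not for computing actions.
    def __init__(self):
        self.encoder = _Encoder()
        self.vf = object()

    def get_non_inference_attributes(self) -> List[str]:
        ret = ["vf"]
        if hasattr(self.encoder, "critic_encoder"):
            ret.append("encoder.critic_encoder")
        return ret
```

With only this one override, the module opts into automatic stripping of the listed sub-components on inference-only builds.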
@abc.abstractmethod
def get_non_inference_attributes(self) -> List[str]:
    """Returns a list of names (str) of attributes of inference-only components.
This is confusing. The method is named get_NON_inference_attributes,
but the docstring says that it returns the attributes of inference-only components.
Maybe "Returns a list of names (str) of attributes not used in inference mode."
fixed and clarified
from ray.rllib.core.rl_module.rl_module import RLModuleSpec

spec = RLModuleSpec(module_class=..., inference_only=True)
Very nice example!!
# If an inference-only class AND self.config.inference_only is True,
# remove all attributes that are returned by
# `self.get_non_inference_attributes()`.
if self.config.inference_only and isinstance(self, InferenceOnlyAPI):
Nice test!
    continue
target = getattr(self, parts[0])
# Traverse from the next part on (if nested).
for part in parts[1:]:
This stops after the first nesting of modules. Can't we just use a recursive implementation that calls itself again?
Also, this first builds all the components (could be many) and then deletes them again. This could induce some initialization cost, which hurts performance in cases where workers need to be rebuilt often.
This is true, BUT users can always decide to not even build these components in the first place inside their setup(). In this case, the above functionality will simply ignore the (already) missing attributes (which it would have deleted otherwise).
I think the logic is actually fine. It does not stop after the first level, but goes all the way to the end of the specified attribute string, e.g. "a.b.c.d.e":
- If any sub-attribute along the way does not exist, break and delete nothing (the attribute is already missing).
- If "a.b.c.d.e" can be found -> delete only the "e" sub-attribute.
if (
    inference_only
    and not self.config.inference_only
    and isinstance(self, InferenceOnlyAPI)
I like this form of testing.
Yeah, me too. This is what Learners should do: check whether the given RLModules are actually compatible with them (or expose certain known APIs to work with).
This way, we can eliminate the need for algo-specific RLModule classes, freeing the user from having to think about these beforehand and making it easier to switch between algos later.
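A minimal sketch of such a Learner-side capability check, using a stand-in stub for the API class (the real check lives inside RLlib's Learner code, and `check_inference_only_compatibility` is a hypothetical helper name):

```python
import abc


class InferenceOnlyAPI(abc.ABC):
    # Stand-in stub for RLlib's InferenceOnlyAPI (sketch only).
    @abc.abstractmethod
    def get_non_inference_attributes(self):
        ...


def check_inference_only_compatibility(module, inference_only: bool):
    # Fail fast if an inference-only build is requested, but the given
    # module cannot provide one (it does not expose the API).
    if inference_only and not isinstance(module, InferenceOnlyAPI):
        raise ValueError(
            f"{type(module).__name__} does not implement InferenceOnlyAPI "
            "and therefore cannot be built with inference_only=True."
        )
```

Checking for exposed APIs rather than concrete classes is what allows the same RLModule to be reused across algorithms.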
@@ -85,6 +79,10 @@ def make_target_networks(self) -> None:
    if self.uses_dueling:
        self._target_vf = make_target_network(self.vf)

@override(InferenceOnlyAPI)
def get_non_inference_attributes(self) -> List[str]:
    return ["_target_encoder", "_target_af", "_target_vf"]
return ["_target_encoder", "_target_af"] + (["_target_vf"] if self.uses_dueling else [])
done
ret = ["vf"]
if hasattr(self.encoder, "critic_encoder"):
    ret += ["encoder.critic_encoder"]
return ret
return ["vf"] + ([] if self.config.model_config_dict["vf_share_layers"] else ["encoder.critic_encoder"])
done
@override(InferenceOnlyAPI)
def get_non_inference_attributes(self) -> List[str]:
    ret = ["qf", "target_qf", "qf_encoder", "target_qf_encoder"]
    if self.twin_q:
return ["qf", "target_qf", "qf_encoder", "target_qf_encoder"] + (["qf_twin", "target_qf_twin", "qf_twin_encoder", "target_qf_twin_encoder"] if self.twin_q else [])
Signed-off-by: sven1977 <[email protected]>
Signed-off-by: ujjawal-khare <[email protected]>
Introduce the RLModule `InferenceOnlyAPI` to simplify and generalize the concept of defining an inference-only version of one's (custom) RLModule.

Users who want to save space on the EnvRunners (on which all RLModules are constructed with the inference_only flag set to True) can now override a single method in their RLModule code (`get_non_inference_attributes`) that returns those attribute names (possibly nested) that point to sub-components of the model NOT needed for action computations. RLlib will then automatically:
a) remove those parts in inference-only mode.
b) adjust the state returned by `get_state(inference_only=True)` from an inference_only=False RLModule that implements the `InferenceOnlyAPI` (and is thus capable of having an inference-only version). This way, when getting the weights from a Learner (inference_only=False) for an EnvRunner (inference_only=True), the weight matrices are already reduced on the Learner, saving network traffic.

Why are these changes needed?
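The Learner-side weight reduction in (b) can be illustrated with a small, hypothetical sketch that filters a flat, dot-separated state dict by the non-inference attribute paths. This is not the actual RLlib implementation, and `reduce_state_for_inference` plus all key names are illustrative only:

```python
def reduce_state_for_inference(state, non_inference_attrs):
    # Drop every state entry whose (dotted) key falls under one of the
    # non-inference attribute paths, so that only the inference weights
    # travel from the Learner to the EnvRunners.
    def under(key, attr):
        return key == attr or key.startswith(attr + ".")

    return {
        k: v
        for k, v in state.items()
        if not any(under(k, a) for a in non_inference_attrs)
    }
```

Because the filtering happens before the state leaves the Learner, the training-only weights are never serialized or sent over the network.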
Related issue number

Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.