deprecation message for non-full backward hook #328
Comments
Thanks for flagging this. We are currently discussing this with the PyTorch team, as the newly proposed hooks are not ideal for our use case.
Thanks for letting me know!
Whoops, thanks for pointing out the old colab in the bug report template - fixed now. With full backward hooks, unfortunately, it's not as simple as just replacing the deprecated hooks with the new method - their behaviour doesn't always match. I'll leave this issue open for tracking progress in the future, as we definitely plan to address this eventually.
Having this same issue.
Thanks for flagging @mmsaki. Currently, it is a warning, so you can safely ignore it. We are working on a solution for the next version of PyTorch.
Has this issue been solved now?
I have the same problem.
Hi @dDCTRr, the warning for hooks still exists due to the reasons outlined by @ffuuugor in this comment; you can safely ignore it, though (apologies for the annoyance). That said, starting with Opacus 1.2.0, we support functorch-based per-sample gradient computation (no hooks, no warnings). To use this, simply set …
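(The comment above is truncated; from the rest of the thread, the setting in question appears to be the `grad_sample_mode` argument. Below is a minimal, hedged sketch of what enabling the functorch mode might look like, assuming `PrivacyEngine.make_private` accepts `grad_sample_mode` as suggested elsewhere in this thread; the toy model and hyperparameters are placeholders.)

```python
import torch
from opacus import PrivacyEngine
from torch.utils.data import DataLoader, TensorDataset

# Toy model and data, purely for illustration.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
data_loader = DataLoader(dataset, batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="functorch",  # functorch-based per-sample grads: no hooks, no warning
)
```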
I met the same error.
Has this issue been solved now?
It seems that the warning has no effect on code execution.
Hi @karthikprasad, I tried setting it as suggested. Thanks a lot for your help!
The displayed warning: (screenshot not preserved)
Summary: This PR is a collection of smaller fixes that will save us some deprecation issues in the future.

## 1. Updating to PyTorch 2.0

**Key files: grad_sample/functorch.py, requirements.txt**

`functorch` has been a part of core PyTorch since 1.13. Now they're going a step further and changing the API, while deprecating the old one. There's a [guide](https://pytorch.org/docs/master/func.migrating.html) on how to migrate. TL;DR: `make_functional` will no longer be part of the API, with `torch.func.functional_call()` being a (non-drop-in) replacement.

The key difference for us is that `make_functional()` creates a fresh copy of the module, while `functional_call()` uses the existing module. As a matter of fact, we need the fresh copy (otherwise all the hooks start firing and you enter nested madness), so I've copy-pasted a [gist](https://gist.github.com/zou3519/7769506acc899d83ef1464e28f22e6cf) from the official guide on how to get a full replacement for `make_functional` (see the first sketch after section 3 below).

## 2. New mechanism for gradient accumulation detection

**Key files: privacy_engine.py, grad_sample_module.py**

As [reported](https://discuss.pytorch.org/t/gan-raises-userwarning-using-a-non-full-backward-hook-when-the-forward-contains-multiple/175638/2) on the forum, clients are still getting the "non-full backward hook" warning even when using `grad_sample_mode="ew"`. Naturally, the `functorch` and `hooks` modes rely on backward hooks and can't be migrated to full hooks because of [reasons](#328 (comment)). However, `ew` doesn't rely on hooks, and it's unclear why the message should appear.

The reason, however, is simple. If the client is using Poisson sampling, we add an extra check to prohibit gradient accumulation (two Poisson batches combined is not a Poisson batch), and we do that by means of backward hooks.

~In this case, the backward hook serves a simple purpose and there shouldn't be any problems with migrating to the new method; however, that involves changing the checking method. That's because `register_backward_hook` is called *after* hooks on submodules, but `register_full_backward_hook` is called before.~

The strikethrough solution didn't work, because hook execution order is weird for complex graphs, e.g. for GANs. For example, if your forward call looks like this:

```
Discriminator(Generator(x))
```

then the top-level module's hook will precede the submodule hooks for `Generator`, but not for `Discriminator`.

As such, I've realised that gradient accumulation is not even supported in `ExpandedWeights`, so we don't have to worry about that. And the other two modes are both hooks-based, so we can just check the accumulation in the existing backward hook; no need for an extra hook (a hypothetical sketch follows after section 3). Deleted some code, profit.

## 3. Refactoring `wrap_collate_with_empty` to please pickle

Now here are two facts I didn't know before:

1) You can't pickle a nested function, e.g. you can't do the following:

```python
def foo():
    def bar():
        <...>
    return bar

pickle.dump(foo(), ...)
```

2) Whether or not `multiprocessing` uses pickle is Python- and platform-dependent. This affects our tests when we test `DataLoader` with multiple workers. As such, our data loader tests:

* Pass on CircleCI with python3.9
* Fail on my local machine with python3.9
* Pass on my local machine with python3.7

I'm not sure how common the issue is, but it's safer to just refactor `wrap_collate_with_empty` to avoid nested functions.
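For reference, the `make_functional` replacement pattern from the migration guide and the linked gist (point 1 above) looks roughly like the sketch below; the exact code that landed in `grad_sample/functorch.py` may differ in details:

```python
import copy

import torch
from torch.func import functional_call


def make_functional(mod: torch.nn.Module):
    """Return (fmodel, params) mimicking functorch.make_functional."""
    params_dict = dict(mod.named_parameters())
    params_names = tuple(params_dict.keys())
    params_values = tuple(params_dict.values())

    # functional_call() operates on the existing module, but we need a
    # fresh copy so that hooks on the original module don't fire
    # recursively. Deep-copy the module and move the copy to the "meta"
    # device so it holds no real parameter storage of its own.
    stateless_mod = copy.deepcopy(mod)
    stateless_mod.to("meta")

    def fmodel(new_params_values, *args, **kwargs):
        # Re-attach the caller-supplied parameter tensors by name.
        new_params_dict = dict(zip(params_names, new_params_values))
        return functional_call(stateless_mod, new_params_dict, args, kwargs)

    return fmodel, params_values
```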
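To make point 2 concrete, here is a hypothetical, simplified sketch of folding the accumulation check into the capture hook that the hooks-based modes already register; names and structure are illustrative, not Opacus's actual code:

```python
import torch
import torch.nn as nn

# Hypothetical flag: under Poisson sampling, accumulating gradients over
# two backward passes would merge two Poisson batches, which is no
# longer a Poisson batch, so accumulation must be rejected.
poisson_sampling = True


def capture_backprops_hook(module, grad_input, grad_output):
    for p in module.parameters(recurse=False):
        if poisson_sampling and getattr(p, "grad_sample", None) is not None:
            # grad_sample still holds results from a previous backward
            # pass, i.e. the client is accumulating gradients.
            raise ValueError(
                "Poisson sampling is not compatible with gradient accumulation"
            )
        # Placeholder for the real per-sample gradient computation.
        p.grad_sample = torch.zeros_like(p)


layer = nn.Linear(4, 4)
layer.register_backward_hook(capture_backprops_hook)  # the existing (non-full) hook
```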
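And to illustrate the two pickling facts from point 3 (hypothetical names, but the same shape as a collate-function factory like `wrap_collate_with_empty`):

```python
import pickle


# Fact 1: a function defined inside another function can't be pickled.
def make_collate(empty_value):
    def collate(batch):
        return batch if batch else empty_value
    return collate


try:
    pickle.dumps(make_collate([]))
except AttributeError as err:
    print(err)  # Can't pickle local object 'make_collate.<locals>.collate'


# The refactor: a module-level class with __call__ is equivalent in
# behaviour but picklable, so multi-worker DataLoaders work regardless
# of which serializer multiprocessing picks on a given platform.
class Collate:
    def __init__(self, empty_value):
        self.empty_value = empty_value

    def __call__(self, batch):
        return batch if batch else self.empty_value


restored = pickle.loads(pickle.dumps(Collate([])))
assert restored([1, 2]) == [1, 2]
```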
## 4. Fix benchmark tests

We don't really run `benchmarks/tests` on a regular basis, and some of them have been broken since we upgraded to PyTorch 1.13 (`API_CUTOFF_VERSION` doesn't exist anymore).

## 5. Fix flake8 config

The flake8 config no [longer supports](https://flake8.pycqa.org/en/latest/user/configuration.html) inline comments; the fix is due.

Pull Request resolved: #581

Reviewed By: alexandresablayrolles

Differential Revision: D44749760

Pulled By: ffuuugor

fbshipit-source-id: cf225f4134c049da4ee2eef53e1af3ef54d090bf
I am getting the same UserWarning.
🐛 Bug
I am getting the following warning when using opacus on my system:
```
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py:1025: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
  warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
```
To Reproduce
Steps to reproduce the behavior:
(Sorry, I had problems running Google Colab.)
Expected behavior
No deprecation warning.
Environment
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-1026-oem-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] numpydoc==1.1.0
[pip3] pytorch-ranger==0.1.1
[pip3] torch==1.10.1
[pip3] torch-optimizer==0.3.0
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] mypy_extensions 0.4.3 py39h06a4308_0
[conda] numpy 1.20.3 py39hf144106_0
[conda] numpy-base 1.20.3 py39h74d4b33_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch 1.10.1 py3.9_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-ranger 0.1.1 pyhd8ed1ab_0 conda-forge
[conda] torch-optimizer 0.3.0 pyhd8ed1ab_0 conda-forge
[conda] torchaudio 0.10.1 py39_cu102 pytorch
[conda] torchvision 0.11.2 py39_cu102 pytorch