
Implementing augmentation multiplicity using Functorch [Error: AttributeError: 'Tensor' object has no attribute '_forward_counter'] #626

Closed
mshubhankar opened this issue Feb 6, 2024 · 1 comment

Comments

@mshubhankar

🐛 Bug

I am trying to implement augmentation multiplicity (as implemented in Paper 1 and Paper 2) using Opacus's new functorch functionality. I am following the exact steps pointed out by @alexandresablayrolles in #455 and #575.
However, I am hitting a bug at the line predictions = fmodel(params, batch), which raises AttributeError: 'Tensor' object has no attribute '_forward_counter'. My intuition is that the _forward_counter attribute should be added to the model when calling make_functional(), but some code change may have broken this.

Any help is appreciated. Thanks!

Colab reproducible link
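
In short, the pattern from the colab is roughly the following sketch (simplified; model, batch, and the surrounding training loop stand in for the notebook's code):

    from functorch import make_functional

    # model has already been passed through privacy_engine.make_private(...),
    # so Opacus's per-sample-gradient hooks are attached to its modules
    fmodel, params = make_functional(model)

    # the failing line: the functional parameters are plain Tensors and no longer
    # carry the `_forward_counter` attribute that the Opacus hooks expect
    predictions = fmodel(params, batch)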

To Reproduce

Steps to reproduce the behavior:

  1. Open colab link
  2. Run all cells up to and including the train() function cell
  3. The failing line is marked with a comment

Expected behavior

I would expect the forward function to work with the new functorch functionality.

@HuanyuZhang
Contributor

This problem arises when functorch is used on a model that already has Opacus's hooks attached. Both functorch and the hooks serve the same purpose, computing per-sample gradients, so there is no need for them to coexist. A fix is to use the "no_op" grad_sample_module, which does not add hooks:

model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=args.sigma,
    max_grad_norm=max_grad_norm,
    clipping=clipping,
    grad_sample_mode="no_op",  # avoid adding hooks
)

We synced offline and confirmed that the fix works.
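
For completeness: with grad_sample_mode="no_op", Opacus no longer computes per-sample gradients itself, so the training loop has to supply them (e.g. via functorch) before calling optimizer.step(). A rough sketch of that step, not taken from this issue, assuming a cross-entropy loss, K augmented views per example, and that the parameter order returned by make_functional_with_buffers matches model.parameters():

    import torch.nn.functional as F
    from functorch import make_functional_with_buffers, grad, vmap

    # model and optimizer are the objects returned by
    # make_private(..., grad_sample_mode="no_op")
    fmodel, params, buffers = make_functional_with_buffers(model)

    def compute_loss(params, buffers, views, target):
        # views: (K, C, H, W) -- the K augmented copies of a single example
        logits = fmodel(params, buffers, views)
        # repeat the example's label for each of its K augmentations and
        # average the loss over them
        return F.cross_entropy(logits, target.expand(logits.shape[0]))

    # differentiate w.r.t. params (argument 0), then vectorize over the batch
    per_sample_grad_fn = vmap(grad(compute_loss), in_dims=(None, None, 0, 0))

    def training_step(images, labels):
        # images: (B, K, C, H, W), labels: (B,)
        per_sample_grads = per_sample_grad_fn(params, buffers, images, labels)
        # with "no_op" nothing else populates grad_sample, so hand the
        # per-sample gradients to the DPOptimizer, which clips and noises
        # them in step()
        for p, g in zip(model.parameters(), per_sample_grads):
            p.grad_sample = g
        optimizer.step()
        optimizer.zero_grad()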
