Support for quantized models #718
Comments
Thanks for the feature request! Can you add more details on what kind of quantization methods you want Opacus to support? It may be possible that we can still use Opacus out of the box for these. If not, we can work out what additional properties we would need to enable it.
Thank you for your response. Unfortunately, applying DP with Opacus is not supported in this case; I must use standard PyTorch parameters to compute per-sample gradients.
Thanks to @Omnyyah for the idea. Yes, quantization is an important feature and it is on our radar. However, we do not have a clear timeline for supporting it right now. If you have a proposal or prototype, we are happy to consult and discuss.
Thank you for your reply!
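For reference, a minimal sketch of the path that is supported today: Opacus computes per-sample gradients by hooking the ordinary float parameters of known layers (e.g. `nn.Linear`, `nn.Conv2d`) once `PrivacyEngine.make_private` wraps the model, which is why quantized/packed parameters fall outside it. The toy model, dataset, and hyperparameters below are illustrative placeholders, not recommendations.

```python
# Sketch of the currently supported setup: DP-SGD on a standard float model.
# Model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Toy dataset standing in for real training data.
dataset = TensorDataset(torch.randn(512, 784), torch.randint(0, 10, (512,)))
loader = DataLoader(dataset, batch_size=64)

# Opacus attaches per-sample-gradient hooks to supported float layers;
# the wrapped optimizer clips per-sample gradients and adds noise.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

model.train()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()  # clip per-sample grads, add noise, update float weights
```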
🚀 Feature
Please extend Opacus to support differentially private training of quantized models.
Motivation
Enabling efficient and private machine learning by combining differential privacy with model quantization.
Additional context
This would facilitate deploying private AI models on resource-constrained devices.
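Until quantization-aware DP training is supported, one interim route (an assumption on my part, not an existing Opacus feature) is to train in float32 with Opacus and apply post-training dynamic quantization only for deployment, since no gradients ever flow through the quantized weights. A minimal sketch, assuming `dp_trained_model` is the plain float module recovered after DP training (with any Opacus hooks detached):

```python
# Sketch: post-training dynamic quantization of an already DP-trained float model.
# `dp_trained_model` is a placeholder standing in for the model produced by DP training.
import torch
import torch.nn as nn

dp_trained_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
dp_trained_model.eval()

# Convert Linear weights to int8 after training; activations are quantized
# dynamically at inference time, so training never touches int8 parameters.
quantized_model = torch.ao.quantization.quantize_dynamic(
    dp_trained_model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    logits = quantized_model(torch.randn(1, 784))
print(logits.shape)  # torch.Size([1, 10])
```

This covers the deployment motivation above, but not the requested feature itself (per-sample gradients through quantized or fake-quantized layers).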