Support for quantized models #718

Open
Omnyyah opened this issue Jan 20, 2025 · 4 comments

@Omnyyah

Omnyyah commented Jan 20, 2025

🚀 Feature

Please extend Opacus to support differentially private training of quantized models.

Motivation

Enabling efficient and private machine learning by combining differential privacy with model quantization.

Additional context

This would facilitate deploying private AI models on resource-constrained devices.

@EnayatUllah
Contributor

Thanks for the feature request! Could you add more details on what kinds of quantization methods you would like Opacus to support? It may be possible to use Opacus out of the box for some of them; if not, we can work out what additional properties are needed to enable it.

@Omnyyah
Author

Omnyyah commented Jan 22, 2025

Thank you for your response.
The thing is, I am using Unsloth to train LLMs faster, and it uses "Dynamic 4-bit Quantization, which involves dynamically opting not to quantize certain parameters and this builds on top of BitsandBytes 4-bit. This approach delivers significant accuracy gains while only using <10% more VRAM than BnB 4-bit."

Unfortunately, applying DP with Opacus is not supported in this case: I must use standard PyTorch parameters to compute per-sample gradients.
BitsandBytes 4-bit parameters store weights in custom quantization blocks that break the usual PyTorch gradient flow, so Opacus can't attach its gradient hooks.
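To illustrate the requirement (a sketch in plain numpy, not Opacus code): for a standard linear layer, Opacus-style per-sample gradients are formed from the layer input (captured by a forward hook) and the per-sample output gradient (captured by a backward hook), both tied to an ordinary floating-point weight parameter. When the weight lives inside an opaque 4-bit block, this hook-based computation has nothing to attach to. Shapes and variable names below are illustrative assumptions.

```python
import numpy as np

# Per-sample gradients for y = x @ W.T (a plain linear layer).
# Opacus derives these from the per-sample inputs x and the
# per-sample output gradients g_out; both must flow through a
# standard nn.Parameter weight W.
rng = np.random.default_rng(0)
B, d_in, d_out = 4, 3, 2
x = rng.normal(size=(B, d_in))       # per-sample layer inputs
W = rng.normal(size=(d_out, d_in))   # plain float weight (what bnb 4-bit hides)
g_out = rng.normal(size=(B, d_out))  # per-sample grads w.r.t. the layer output

# Per-sample weight gradient i is the outer product g_out[i] x[i]^T.
per_sample = np.einsum("bo,bi->boi", g_out, x)   # shape (B, d_out, d_in)

# Summing over the batch recovers the ordinary aggregate gradient,
# which is all that standard (non-DP) backprop ever materializes.
aggregate = g_out.T @ x
assert np.allclose(per_sample.sum(axis=0), aggregate)
```

DP-SGD then clips each `per_sample[i]` individually before summing and adding noise, which is exactly why the aggregate gradient alone is not enough.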

@HuanyuZhang
Contributor

Thanks to @Omnyyah for the idea. Yes, quantization is an important feature that is on our radar. However, we do not have a clear timeline for supporting it right now. If you have a proposal or prototype, we are happy to consult and discuss.

@Omnyyah
Author

Omnyyah commented Jan 23, 2025

Thank you for your reply!
Would you please clarify what the proposal/prototype should contain?
