
How are qparams (scale and zero_point) determined after fusing Conv and BN layers? #4369

Open
gef1998 opened this issue Feb 27, 2025 · 0 comments

Comments


gef1998 commented Feb 27, 2025

During quantization (using pytorch_quantization), the qparams (scale and zero_point) of the original Conv layer are computed with a Calibrator. However, when the Conv and Batch Normalization (BN) layers are fused, the weights and biases of the fused Conv change, so the original qparams may no longer be applicable. Could you please explain how to correctly determine the new qparams (scale and zero_point) after this fusion?
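To make the concern concrete, here is a minimal NumPy sketch (not pytorch_quantization's actual implementation) of the standard Conv+BN folding math, together with a symmetric per-channel weight scale computed before and after fusion. Because folding multiplies each output channel's weights by `gamma / sqrt(var + eps)`, the per-channel amax changes, and any weight scale calibrated before fusion is stale:

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN into a preceding Conv.

    BN(conv(x)) = gamma * (W*x + b - mean) / sqrt(var + eps) + beta
    so the fused conv has:
      W_fused = W * gamma / sqrt(var + eps)   (per output channel)
      b_fused = (b - mean) * gamma / sqrt(var + eps) + beta
    """
    std = np.sqrt(var + eps)
    W_fused = W * (gamma / std)[:, None, None, None]  # W is (out_ch, in_ch, kH, kW)
    b_fused = (b - mean) * gamma / std + beta
    return W_fused, b_fused

def per_channel_weight_scale(W, num_bits=8):
    """Symmetric per-output-channel scale: amax / (2^(bits-1) - 1)."""
    amax = np.abs(W).max(axis=(1, 2, 3))
    return amax / (2 ** (num_bits - 1) - 1)

# Toy example: BN with identity statistics except a per-channel gamma.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 3, 3)).astype(np.float32)
b = np.zeros(4, dtype=np.float32)
gamma = np.array([1.0, 2.0, 0.5, 1.5], dtype=np.float32)
beta = np.zeros(4, dtype=np.float32)
mean = np.zeros(4, dtype=np.float32)
var = np.ones(4, dtype=np.float32)

W_fused, b_fused = fuse_conv_bn(W, b, gamma, beta, mean, var)
scale_before = per_channel_weight_scale(W)
scale_after = per_channel_weight_scale(W_fused)
# scale_after differs from scale_before by the same per-channel
# factor gamma / sqrt(var + eps) that scaled the weights.
```

The weight scale can simply be recomputed from `W_fused` in closed form as above, but the activation scale at the conv output is a different matter: after fusion the output distribution is the BN output, not the raw conv output, so an amax/histogram calibrator generally has to be re-run (or the calibration done with fusion already applied).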
