Why so large commit loss weight #36
Hi, author. I noticed that in the training code the commit loss weight is set to 1000, which is much higher than in EnCodec and SpeechTokenizer. Why such a large commit loss weight? Will it contribute to higher codebook usage, or could it trigger training instability? Thanks~

Comments
This parameter was configured a long time ago; I assume it was initially intended to keep the various loss terms on a similar order of magnitude.
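To make the balancing concrete, here is a minimal sketch of how a large commitment-loss weight enters a VQ-VAE-style training objective. The value 1000 comes from this issue; the loss names, the L1 reconstruction term, and the function signature are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

commit_loss_weight = 1000.0  # the weight discussed in this issue

def total_loss(x, x_hat, z_e, z_q):
    # Reconstruction term; for waveform/mel targets this is often
    # orders of magnitude larger than the raw commitment term.
    recon_loss = F.l1_loss(x_hat, x)
    # Commitment term: pulls encoder outputs z_e toward their (detached)
    # quantized codes z_q. A large weight can bring this term onto the
    # same order of magnitude as the reconstruction loss.
    commit_loss = F.mse_loss(z_e, z_q.detach())
    return recon_loss + commit_loss_weight * commit_loss
```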
Thanks for your reply. I would also like to know: how is the codebook usage rate computed?
During inference, codebook utilization can be calculated by recording the occurrence frequency of each codebook entry over the LibriTTS test-clean dataset.
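For anyone wanting to reproduce this, below is a hedged sketch of the counting procedure described above. `model.encode`, the dataloader, and `codebook_size` are placeholders for whatever the actual repository exposes; the only substance taken from this thread is counting entry frequencies and treating a code as used if it appears at least once.

```python
import torch

@torch.no_grad()
def codebook_usage(model, dataloader, codebook_size, device="cpu"):
    # Accumulate how often each codebook entry is selected.
    counts = torch.zeros(codebook_size, dtype=torch.long)
    for batch in dataloader:
        # Assumed to return integer code indices of shape (B, T).
        codes = model.encode(batch.to(device))
        counts += torch.bincount(codes.flatten().cpu(),
                                 minlength=codebook_size)
    # A code counts as "used" if it occurs at least once (threshold = 1).
    used = (counts >= 1).sum().item()
    return used / codebook_size
```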
So is the threshold for classifying a code as used set to 1, i.e., a code counts as used if it appears at least once?