Hello
I hope someone can help me understand why the KL is calculated as:
0.5 * torch.sum(torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar, dim=[1, 2, 3])
In the DiagonalGaussianDistribution listed here:
stable-diffusion/ldm/modules/distributions/distributions.py
Line 44 in 21f890f
I am asking because most VAE loss functions I can find use -1 times this calculation, like this: 0.5 * torch.sum(-torch.pow(self.mean, 2) - self.var + 1.0 + self.logvar, dim=[1, 2, 3])
And I cannot see any multiplication by -1 in the contperceptual loss, for instance:
https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/ldm/modules/losses/contperceptual.py#L83C57-L83C65
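For reference, here is a quick numerical sanity check of what I mean. It is a numpy sketch mirroring the two torch expressions above; `mean` and `logvar` are random stand-ins for `self.mean` / `self.logvar`, and the shapes are just an illustrative (batch, C, H, W) latent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent statistics, shaped like a (batch, C, H, W) latent,
# mimicking self.mean / self.logvar in DiagonalGaussianDistribution.
mean = rng.normal(size=(2, 4, 8, 8))
logvar = rng.normal(size=(2, 4, 8, 8))
var = np.exp(logvar)

# Expression used in distributions.py (per-sample sum over C, H, W):
kl_repo = 0.5 * np.sum(mean**2 + var - 1.0 - logvar, axis=(1, 2, 3))

# The opposite-sign expression seen in most VAE references, where this
# term appears inside the ELBO (i.e. it is subtracted from the objective):
elbo_term = 0.5 * np.sum(-(mean**2) - var + 1.0 + logvar, axis=(1, 2, 3))

# The two are the same quantity up to sign:
assert np.allclose(kl_repo, -elbo_term)

# kl_repo is elementwise non-negative (mu^2 >= 0 and e^l - 1 - l >= 0),
# consistent with it being a KL divergence that a loss could add
# directly with a positive weight.
assert np.all(kl_repo >= 0)
```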
Thank you very much in advance :)