
FID from the mainline code is different from https://github.com/mosaicml/diffusion/tree/ejyuen-patch-1 #88

viyjy opened this issue Oct 20, 2023 · 1 comment

viyjy commented Oct 20, 2023

Hi, I found that the current mainline code produces reasonable FID scores for the released pre-trained models, but a very high FID score for a model that was pre-trained using this codebase.
For example, for a checkpoint pre-trained on the LAION dataset, I get the following FID scores using fid-clip-evaluation.py:
Mainline -> 18.46875
ejyuen-patch-1 -> 14.32812

For another checkpoint, I get the following result:
Mainline -> 21.46875
ejyuen-patch-1 -> 15.89062

Note that for all of these FID calculations, I used the same COCO2014-10K dataset.
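For context on what the script is computing: FID fits a Gaussian (mean and covariance) to Inception-v3 features of the real and generated image sets and reports the Fréchet distance between the two Gaussians. A minimal sketch of just that distance (this is not the repo's fid-clip-evaluation.py, which also handles feature extraction and dataloading):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID applies this to the means/covariances of Inception-v3 pool features
    computed over the real and generated image sets.
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product; may pick up tiny
    # imaginary components from numerical error, so keep the real part.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Sanity check: identical distributions give a distance of 0.
mu = np.zeros(4)
sigma = np.eye(4)
print(round(frechet_distance(mu, sigma, mu, sigma), 6))  # → 0.0
```

Small score gaps like the ones above can therefore come not only from the model but from anything that shifts the feature statistics: image resizing/cropping, the feature extractor checkpoint, or the reference-set preprocessing.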

coryMosaicML (Collaborator) commented

Apologies for the long delay in getting a response to you on this. Can you share some more information about the model and training setup you're using and seeing worse FID with? We've matched or improved on the FID of the pre-trained models with this codebase on our training stack, but if the training data distribution differs, or if the training setup/config otherwise changes, one could easily get different results.
