Hi, I found that the current mainline code produces reasonable FID scores for the pre-trained models, but very high FID scores for models that are pre-trained using this codebase.
For example, for a checkpoint pre-trained on the LAION dataset, I get the following FID scores using fid-clip-evaluation.py:
Mainline -> 18.46875
ejyuen-patch-1 -> 14.32812
For another checkpoint, I get the following result:
Mainline -> 21.46875
ejyuen-patch-1 -> 15.89062
Note that for all of these FID calculations, I use the same COCO2014-10K dataset as the reference set.
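For context, here is a minimal sketch of how FID is typically computed, using torchmetrics. This is not the codebase's fid-clip-evaluation.py; the tensors and sample counts are placeholders, but it shows the metric both branches should agree on given identical reference and generated image sets:

```python
# Minimal FID sketch (NOT fid-clip-evaluation.py itself).
# Assumes `real_images` and `generated_images` are uint8 tensors of shape
# (N, 3, H, W) in [0, 255]; in the issue above the real set is COCO2014-10K.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# Placeholder batches; in practice these come from the reference dataset
# and from sampling the model under evaluation.
real_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
generated_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)
fid.update(generated_images, real=False)
print(f"FID: {fid.compute().item():.5f}")
```

Any discrepancy between branches on the same image sets would point to differences in preprocessing, feature extraction, or statistics accumulation rather than in the metric definition itself.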
Apologies for the long delay in getting back to you on this. Can you share more information about the model and training setup for which you're seeing this FID discrepancy? We've matched or improved on the FID of the pre-trained models with this codebase on our training stack, but if the training data distribution differs, or the training setup/config otherwise changes, one could easily get different results.