Missing keys when loading pretrained weights #278
Comments
I had the same problem.
With the older version, i.e. 0.7.0, the weights load without this error. You should check out #208.
Same problem for me.
@lirus7 The include_top argument controls whether the final fully-connected classification layer is included. You can confirm this by passing a dummy variable through two different versions of the model, one with include_top=True and one with include_top=False.
When we include the top the output is as expected: a 2-dimensional 1x1000 tensor, 1 for the batch dimension, 1000 for the possible ImageNet classes. However, when we exclude the top the output is a 4-dimensional tensor: there are two additional dimensions corresponding to the input image width and height dimensions, and the size of the second dimension, 1280, corresponds to the number of output channels in the final convolutional layer of the architecture:
Hope this helps, please let me know if you're experiencing different behavior when using include_top=False.
Hello all, please check out PR #290: it allows the model (latest version) to load the pre-trained weights when include_top=False. You can check by running the following code:

from efficientnet_pytorch import EfficientNet
from torch.utils import model_zoo

enet = EfficientNet.from_pretrained('efficientnet-b0', include_top=False)
w = model_zoo.load_url("https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth")
print(w['_conv_stem.weight'][0][0])
print(enet._conv_stem.weight[0][0])
Hello all, I'm still getting the same error:

assert not ret.unexpected_keys, 'Missing keys when loading pretrained weights {}'.format(ret.unexpected_keys)
AssertionError: Missing keys when loading pretrained weights: ['_fc.weight', '_fc.bias']

when simply running:

backbone = EfficientNet.from_pretrained('efficientnet-b7', include_top=False)

Was a solution ever found? Thanks.
When I don't need the top and am in doubt, I just set: model._fc = nn.Identity()
Python 3.8.7
efficientnet-pytorch==0.7.1
There's an issue loading the pre-trained weights (or any saved weights) when using include_top = False. It seems to be related to the latest release, 0.7.1, because the issue doesn't present when using the previous version: efficientnet-pytorch==0.7.0