cifar10 results #22
Hmm, that is odd. I assumed the model would train correctly this time, since the code was almost identical to the one posted by the author. The only differences I can think of are preprocessing and augmentation. The earlier result was after tons of restarts, and I did not use horizontal flips for it. Also, if you used the unmodified cifar10 script, does that mean you did not use densenet.preprocess_data(...)? I think the authors specifically mentioned in one of their comments on the GitHub issues that the particular mean/std normalization was necessary along with the scaling.
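For reference, the normalization being discussed amounts to roughly the following (a pure-NumPy sketch, not the repo's actual `densenet.preprocess_data`; the exact per-channel mean values and the helper name here are assumptions for illustration — the 0.017 scale factor is the one quoted in this thread):

```python
import numpy as np

def preprocess_sketch(x):
    """Subtract assumed per-channel RGB means, then apply the fixed
    0.017 scale factor mentioned in this thread. Input: images with
    channels last, values in [0, 255]."""
    x = x.astype("float32")
    means = np.array([123.68, 116.78, 103.94])  # assumed RGB means, not from the repo
    for c in range(3):
        x[..., c] = (x[..., c] - means[c]) * 0.017
    return x
```

With a mean near 124, a pixel value of 255 maps to roughly (255 - 124) * 0.017 ≈ 2.2, which matches the -2.x to 2.x range described below.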
They suggest random crops after zero padding, plus horizontal flips. Keras doesn't have built-in support for random crops, so instead I used the augmentation from earlier papers: rotation, scaling, and horizontal flips. I suggest removing horizontal flipping; I have found that it simply destroys performance, no matter how much I train models with it. Without it, convergence is faster and, of course, overfitting happens faster as well, but at least validation loss is higher (though the loss of flip invariance is somewhat bad). I hate to ask this since it takes so much time, but could you run it with the current modified version? And while it may not be ideal, you could initialize from these weights to speed it up and avoid needing 300 additional epochs. It will be very erratic in the beginning, since the earlier normalization was static at 0-1 and now the range is roughly -2.x to 2.x (this comes from (255 - 124) * 0.017). Still, it should not affect too many of the weights at the latter end of the network, so it may be better off.
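The "random crop after zero padding" augmentation that Keras lacks built-in support for can be sketched in a few lines of NumPy (the helper name and the 4-pixel padding are assumptions; 4 pixels is the amount commonly used for 32x32 CIFAR images):

```python
import numpy as np

def random_crop_with_padding(img, pad=4, rng=None):
    """Zero-pad an HxWxC image by `pad` pixels on each side, then take
    a random HxW crop from the padded result."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]
```

Something like this could be passed as a per-image preprocessing function to Keras's `ImageDataGenerator(preprocessing_function=...)` instead of rotation and scaling.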
By "exactly the same as the author's", I mean https://github.com/liuzhuang13/DenseNet/blob/master/models/densenet.lua#L67
Oh wait, disregard the above; it seems you ran it with the updated script. I don't really understand then. Perhaps the additional rotation and scaling are hurting performance?
Sorry, I meant the older version of the Titan X (there are two). I haven't really investigated, but I too found it a bit puzzling. Perhaps additional training would bring it down to the expected level.
Yeah, maybe that will work.
I found the same problem as ahundt: I ran 300 epochs using the TensorFlow 1.3 backend in Keras 2.0 and got the same result he did. I then changed the optimizer to SGD with momentum=0.9 and nesterov=True, with an initial learning rate of 0.1, dropped to 0.01 at epoch 150 and to 0.001 at epoch 225 until the end of the 300 epochs, but the result did not improve and stayed at approximately 92%. My GPU is an Nvidia GTX 1080 Ti. I'm confused because I have modified the hyperparameters again and again and cannot see any real improvement. Thanks for your time!
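The learning-rate schedule described above (0.1, then 0.01 at epoch 150, then 0.001 at epoch 225) can be written as a simple step function; a sketch under those stated values:

```python
def lr_schedule(epoch):
    """Step schedule: 0.1 until epoch 150, 0.01 until epoch 225,
    then 0.001 for the remainder of the 300 epochs."""
    if epoch < 150:
        return 0.1
    if epoch < 225:
        return 0.01
    return 0.001
```

In Keras this could be wired up with `keras.callbacks.LearningRateScheduler(lr_schedule)` alongside `SGD(lr=0.1, momentum=0.9, nesterov=True)`, matching the optimizer settings mentioned above.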
Perhaps related to #25? @LelouchVC, do you think you could try that version? My GPUs are currently occupied.
Hi, has this problem been solved? I also used this code, but did not get results as good as the paper reported. Any ideas?
I think the result is related to the Keras version; I saw another implementation that uses the Keras 1.0 API and got a result similar to the paper's.
Hi, thanks for your reply. Would you mind giving a link to that other Keras 1.0 code? Or have you tried this code with Keras 1.0? In my experience, Keras 1.0 and Keras 2.0 give different results.
Would you mind sending me a Keras 1.0 version? I am very anxious now because of my graduation thesis. My mailbox is [email protected]. Thank you!
I just ran a 300 epoch run using tensorflow and an unmodified cifar10.py from 54ed2d6 on a Titan X (old version) and got the following results. Here is the file: DenseNet-40-12-CIFAR10.h5.zip
This definitely doesn't seem as good as previous training runs from the readme, which cite 4.51% error.