
[Potential NaN bug] Loss may become NaN during training #1698

Open
Justobe opened this issue Jan 31, 2021 · 0 comments
Justobe commented Jan 31, 2021

Hello~

Thank you very much for sharing the code!

I tried to use my own dataset (with the same shape as MNIST) with the code. After some iterations, the training loss became NaN. After carefully checking the code, I found that the following line may trigger NaN in the loss:

In /python/algorithms/Classification Algorithms/MNIST Digit cassification using CNN.py: line 245

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

If y (the output of the softmax) contains 0, tf.log(y) evaluates to -inf because log(0) is undefined, and this may cause the loss to become NaN.

It could be fixed by making the following changes:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y + 1e-8), reduction_indices=[1]))

or

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.clip_by_value(y,1e-8,1.0)), reduction_indices=[1]))
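For reference, here is a small self-contained NumPy sketch (not the repository code; y_true and y_pred are just illustrative names) that reproduces the failure mode and shows that clipping avoids it:

import numpy as np

y_true = np.array([[0.0, 1.0]])  # one-hot label
y_pred = np.array([[0.0, 1.0]])  # softmax output that has saturated to exact 0/1

# Naive cross-entropy: log(0) -> -inf, and 0 * -inf -> nan, so the loss is nan.
naive = -np.sum(y_true * np.log(y_pred), axis=1).mean()
print(naive)  # nan (NumPy also emits a divide-by-zero warning)

# Clipping keeps the argument of log strictly positive, so the loss stays finite.
eps = 1e-8
safe = -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=1).mean()
print(safe)  # -0.0 (finite, no NaN)

Alternatively, if the logits before the softmax are available, tf.nn.softmax_cross_entropy_with_logits computes this loss directly from the logits in a numerically stable way.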

The same problem was also found at line 645:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_CNN), reduction_indices=[1]))
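Presumably the same clipping fix would work there as well, e.g. something like:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.clip_by_value(y_CNN, 1e-8, 1.0)), reduction_indices=[1]))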

Hope to hear from you ~

Thanks in advance! : )
