Commit ded78a6
Fix training script on TensorFlow v1.0 and above
Though the script works fine on r0.11, on r1.0 and above you hit this error (I tested on r1.0, r1.2 and r1.3):

```
# python mnist.py
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
Traceback (most recent call last):
  File "mnist.py", line 96, in <module>
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1558, in softmax_cross_entropy_with_logits
    labels, logits)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1512, in _ensure_xent_args
    "named arguments (labels=..., logits=..., ...)" % name)
ValueError: Only call `softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...)
```

Implemented the fix as described here: https://stackoverflow.com/a/42297021/112705

TESTING

After the change, I tested on TensorFlow r0.11 and r1.3, and the script runs successfully.
1 parent 0fc8b4e commit ded78a6

File tree

1 file changed: +1 −1 lines changed


mnist.py (+1 −1)

```diff
@@ -93,7 +93,7 @@ def conv_net(x, weights, biases, dropout):
 pred = conv_net(x, weights, biases, keep_prob)
 
 # Define loss and optimizer
-cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
+cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
 optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
 
 # Evaluate model
```
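For context on what the changed line computes, here is a rough NumPy sketch (illustration only, not part of this repo) of the per-example softmax cross-entropy that `tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)` performs; the parameter names `logits` and `labels` mirror the keyword arguments the fix introduces:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Shift by the row max for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    # log-softmax: shifted logits minus the log of the partition sum.
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Cross-entropy: negative sum of label-weighted log-probabilities.
    return -(labels * log_probs).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot target
loss = softmax_cross_entropy(logits, labels)  # ≈ 0.417
```

Because `labels` and `logits` play such different roles, swapping them silently changes the result, which is why TensorFlow 1.0 started requiring the named-argument form.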
