- Mar 2017
- www.tensorflow.org
- EXERCISE: The output of inference is un-normalized logits. Try editing the network architecture to return normalized predictions using tf.nn.softmax.
Does anyone know if this is still a valid exercise? Line 260 of cifar10.py specifically states:

    # linear layer(WX + b),
    # We don't apply softmax here because
    # tf.nn.sparse_softmax_cross_entropy_with_logits accepts the unscaled logits
    # and performs the softmax internally for efficiency.

Or is the exercise saying it would be useful to also get the scaled logits using tf.nn.softmax and keep both: passing the unscaled logits to the loss(logits, labels) call on line 72 of cifar10_train.py, as required by tf.nn.sparse_softmax_cross_entropy_with_logits(), and using the scaled logits to track the classifications from our inference? It'd be cool to get some thoughts on this!
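
For what it's worth, here is a minimal sketch of that second reading, assuming the TF 1.x API this tutorial targets. The placeholder tensors and shapes below are illustrative stand-ins for what cifar10.inference() and the input pipeline actually produce, not the tutorial's own code:

    import tensorflow as tf

    # Hypothetical stand-ins: 10 CIFAR-10 classes, batch dimension unknown.
    logits = tf.placeholder(tf.float32, [None, 10])  # unscaled, as from inference()
    labels = tf.placeholder(tf.int64, [None])        # integer class labels

    # The *unscaled* logits go to the loss; this op applies softmax
    # internally for numerical efficiency, per the comment in cifar10.py.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)
    loss = tf.reduce_mean(cross_entropy)

    # A separate softmax op yields normalized class probabilities for
    # tracking predictions; it feeds nothing back into the loss.
    predictions = tf.nn.softmax(logits)

On this reading the exercise doesn't conflict with the comment at line 260: the loss still consumes the unscaled logits, and the softmax op exists purely as an extra output for inspecting predictions.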