1 Matching Annotations
  1. Mar 2017
    1. EXERCISE: The output of inference are un-normalized logits. Try editing the network architecture to return normalized predictions using tf.nn.softmax.

      Does anyone know if this exercise is still valid? Line 260 of cifar10.py specifically states:

      # linear layer(WX + b),
      # We don't apply softmax here because
      # tf.nn.sparse_softmax_cross_entropy_with_logits accepts the unscaled logits
      # and performs the softmax internally for efficiency.
      

      Or is it saying that it would be useful to compute the scaled logits as well with tf.nn.softmax and keep both: passing the unscaled logits to the loss(logits, labels) call on line 72 of cifar10_train.py, as tf.nn.sparse_softmax_cross_entropy_with_logits() requires, and using the scaled logits to track the classifications our inference produces? It'd be cool to get some thoughts on this!
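
      For what it's worth, here's a minimal sketch of the "keep both" approach I mean, assuming TF 1.x. The placeholders and variable names are my own stand-ins for what the tutorial's inference() returns, not code from cifar10.py:

          import tensorflow as tf

          NUM_CLASSES = 10

          # Stand-ins for the tensors cifar10.inference() would produce.
          logits = tf.placeholder(tf.float32, [None, NUM_CLASSES])  # unscaled WX + b
          labels = tf.placeholder(tf.int64, [None])                 # integer class labels

          # The loss takes the *unscaled* logits; softmax is applied
          # internally for numerical efficiency, as the comment says.
          cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
              labels=labels, logits=logits)
          loss = tf.reduce_mean(cross_entropy)

          # A separate softmax op yields normalized probabilities for
          # inspecting predictions, without touching the loss path.
          probabilities = tf.nn.softmax(logits)
          predicted_class = tf.argmax(probabilities, axis=1)

      That way the training graph stays exactly as the tutorial has it, and the softmax output only exists for monitoring.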