11 Matching Annotations
  1. Dec 2017
  2. Aug 2017
  3. Jan 2016
    1. Well darn. Competition? Maybe with enough competition this web annotation thing might actually take off. Probably not.

  4. May 2015
    1. To propagate error through the convolutional layer, you simply need to multiply the incoming error by the derivative of the activation function as in the usual back propagation algorithm.

      This is the actual delta for the convolutional layer. This is the delta you want to save for weight updates.
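
      A minimal NumPy sketch of this step, assuming a sigmoid activation at the convolutional layer; the array names and shapes are illustrative, not from the original post:

      ```python
      import numpy as np

      # Illustrative shapes: (num_filters, height, width) for a single example.
      conv_activations = np.random.rand(2, 8, 8)   # a = f(net), the conv layer's post-activation output
      incoming_error = np.random.rand(2, 8, 8)     # error arriving from the pooling layer (already upsampled)

      # For a sigmoid, the derivative of the activation can be written from the activation itself:
      # f'(net) = a * (1 - a).
      activation_derivative = conv_activations * (1.0 - conv_activations)

      # Delta for the convolutional layer: incoming error times f'(net).
      # This is the delta that gets saved and used for the filter (weight) updates.
      delta_conv = incoming_error * activation_derivative
      ```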

    2. delta_pool

      delta_pool_upsampled
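
      These identifiers presumably refer to the error at the pooling layer and its upsampled version. A minimal sketch of how such an upsampling might look for a mean-pooling layer; the pooling size, shapes, and use of a Kronecker product here are assumptions for illustration:

      ```python
      import numpy as np

      pool_size = 2
      delta_pool = np.random.rand(2, 4, 4)   # error at the (mean-)pooling layer, one 4x4 slice per filter

      # Mean pooling averages each pool_size x pool_size block, so its error is spread back
      # evenly over that block; a Kronecker product with a constant block does the upsampling.
      delta_pool_upsampled = np.kron(
          delta_pool,
          np.ones((pool_size, pool_size)) / (pool_size ** 2),
      )
      # delta_pool_upsampled now has shape (2, 8, 8) and feeds the convolutional layer's delta.
      ```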

    3. delta

      delta_pool

    4. Let the incoming error to the pooling layer be given by

      $$\delta_i = g'(net_i) \sum_j{w_{ij} \cdot \delta_j} = \sum_j{w_{ij} \cdot \delta_j}.$$

      The value $$net_i$$ isn't really defined, but our activation function after pooling is just the linear function, so its derivative is always 1.

      Note that the equation above uses a flattened representation ($$\delta_i$$ instead of $$\delta_{i,j,k}$$ for the position $$(i, j)$$ in filter $$k$$).
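
      As a concrete instance of that equation, here is a small NumPy sketch with a linear (identity) activation after pooling, so $$g'(net_i) = 1$$ and the delta reduces to the plain weighted sum; the layer sizes and names are illustrative:

      ```python
      import numpy as np

      W = np.random.rand(10, 32)        # weights from the flattened pooling layer (32 units) to the next layer (10 units)
      delta_next = np.random.rand(10)   # delta_j at the next layer

      # With a linear activation after pooling, g'(net_i) = 1, so
      # delta_i = g'(net_i) * sum_j w_{ij} * delta_j is just the back-projected weighted sum.
      delta_pool_flat = W.T @ delta_next              # one delta per pooled unit i (flattened)
      delta_pool = delta_pool_flat.reshape(2, 4, 4)   # unflatten to (filter, row, col) if needed
      ```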

    5. First compute the error, δd, from the cross entropy cost function w.r.t. the parameters in the densely connected layer

      Nathan thinks this sentence makes no sense!
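
      One reading of the quoted step is the gradient of the cross-entropy cost at the densely connected (softmax) output layer. A minimal sketch of that gradient, assuming a softmax output and one-hot labels; the names and sizes are illustrative:

      ```python
      import numpy as np

      def softmax(z):
          z = z - z.max()            # shift for numerical stability
          e = np.exp(z)
          return e / e.sum()

      logits = np.random.rand(10)    # densely connected layer's outputs for one example
      labels = np.zeros(10)
      labels[3] = 1.0                # one-hot ground truth

      probs = softmax(logits)

      # For softmax combined with cross-entropy, the error at the dense layer simplifies to:
      delta_d = probs - labels
      # Gradients w.r.t. the dense layer's weights then follow from the outer product with its input.
      ```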

    1. 88 - 85.81 = 2.19. NOT 2.91. This error is propagated throughout the rest of the calculations.