 Dec 2017

www.theatlantic.com

where
s/where/whether/ ?

 Aug 2017

www.farnamstreetblog.com

Kuroko
Kokura

 Jan 2016

genius.com

Well darn. Competition? Maybe with enough competition this web annotation thing might actually take off. Probably not.

 May 2015

ufldl.stanford.edu

To propagate error through the convolutional layer, you simply multiply the incoming error by the derivative of the activation function, as in the usual backpropagation algorithm.
This is the actual delta for the convolutional layer, and it is the delta you want to save for weight updates.

delta_pool
delta_pool_upsampled

delta
delta_pool

Let the incoming error to the pooling layer be given by
$$\delta_i = g'(net_i) \sum_j{w_{ij} \cdot \delta_j} = \sum_j{w_{ij} \cdot \delta_j}.$$
The value $$net_i$$ isn't really defined, but the activation function after pooling is just the identity, so its derivative is always 1.
Note that the equation above uses a flattened representation ($$\delta_i$$ instead of $$\delta_{i,j,k}$$ for position $$(i,j)$$ in filter $$k$$).
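The upsample-then-scale step can be sketched in NumPy. This is a minimal illustration, assuming 2×2 mean pooling and a sigmoid activation in the convolutional layer; the array names `delta_pool` and `delta_pool_upsampled` follow the notes above, and the shapes and values are made up for the example:

```python
import numpy as np

# Illustrative shapes: one filter, a 4x4 convolved activation map,
# mean-pooled over 2x2 regions down to 2x2.
pool_dim = 2
activations = np.array([[0.2, 0.4, 0.6, 0.8],
                        [0.1, 0.3, 0.5, 0.7],
                        [0.9, 0.8, 0.2, 0.1],
                        [0.4, 0.6, 0.3, 0.5]])  # sigmoid outputs of the conv layer

# Incoming error at the pooling layer (one value per pooled unit).
delta_pool = np.array([[0.5, -0.2],
                       [0.1,  0.3]])

# Upsample: each pooled unit distributes its error evenly over the
# pool_dim x pool_dim region it averaged (hence the 1/pool_dim^2 factor).
delta_pool_upsampled = np.kron(delta_pool, np.ones((pool_dim, pool_dim))) / pool_dim**2

# Multiply by the derivative of the sigmoid, f'(z) = f(z)(1 - f(z)),
# to get the delta for the convolutional layer, the one saved for weight updates.
delta = delta_pool_upsampled * activations * (1 - activations)
print(delta.shape)  # (4, 4)
```

The `np.kron` trick just replicates each pooled error over its pooling region; dividing by `pool_dim**2` accounts for the averaging done on the forward pass.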

First compute the error, δd, from the cross-entropy cost function w.r.t. the parameters in the densely connected layer.
Nathan thinks this sentence makes no sense!
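One reading that does make sense: for a softmax output with the cross-entropy cost, the delta at the dense layer's output reduces to predictions minus labels, and that δd then drives the weight gradients. A hedged sketch of that standard identity (the variable names and data here are mine, not the tutorial's):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=0, keepdims=True)

# Hypothetical batch: 3 classes, 4 examples (one column per example).
z = np.array([[2.0, 0.5, 0.1, 1.0],
              [1.0, 2.5, 0.2, 0.3],
              [0.1, 0.5, 3.0, 0.2]])
labels = np.eye(3)[:, [0, 1, 2, 0]]  # one-hot ground truth, 3x4

probs = softmax(z)
# For softmax + cross-entropy the two derivatives cancel, so the delta at
# the dense layer's output is simply (probs - labels); no f'(z) factor.
delta_d = probs - labels
```

Each column of `delta_d` sums to zero, since both `probs` and `labels` columns sum to one.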


mste.illinois.edu

88 − 85.81 = 2.19, NOT 2.91. This error is propagated throughout the rest of the calculations.
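The correction is easy to check directly (rounding is needed only because of floating-point representation):

```python
# 88 minus 85.81: the page's 2.91 is a transposition of the correct 2.19.
diff = round(88 - 85.81, 2)
print(diff)  # 2.19
```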
