2 Matching Annotations
 Jan 2015

cs231n.github.io

There are other ways of performing the optimization (e.g. LBFGS), but Gradient Descent is currently by far the most common and established way of optimizing Neural Network loss functions.
Are there any studies that compare the pros and cons of different optimization procedures with respect to specific NN architectures (e.g., classical LeNets)?
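To make the annotated claim concrete, here is a minimal sketch of the gradient descent update on a simple least-squares loss (a toy stand-in; the setup, names, and learning rate are illustrative assumptions, not code from the course notes):

```python
import numpy as np

# Toy problem (assumed for illustration): minimize L(w) = ||Xw - y||^2
# by repeatedly stepping in the direction of the negative gradient.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
lr = 0.01  # step size; chosen small enough for this toy problem to converge
for _ in range(1000):
    grad = 2 * X.T @ (X @ w - y)  # gradient of the squared-error loss
    w -= lr * grad                # the gradient descent update

print(np.allclose(w, true_w, atol=1e-3))  # → True
```

Quasi-Newton methods such as L-BFGS additionally build a curvature estimate from past gradients, which is part of the trade-off the annotation asks about.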


cs231n.github.io

k-Nearest Neighbor Classifier
Is there a probabilistic interpretation of kNN? Say, something like "kNN is equivalent to [a probabilistic model] under the following conditions on the data and on k."
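One informal reading along these lines: the fraction of the k nearest neighbors carrying each label can be viewed as a crude estimate of p(y | x), with majority vote as the resulting classifier. A minimal sketch (function name and toy data are assumptions for illustration):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # L2 distances from the query x to every training point.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]  # indices of the k nearest neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    # counts / k is the informal p(y | x) estimate; predict its argmax.
    return labels[np.argmax(counts)]

X_train = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.05, 0.0]), k=3))  # → 0
```

Making this reading precise (e.g., consistency as k grows with the sample size) is exactly the kind of condition the annotation asks for.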
