15 Matching Annotations
- Mar 2018
- May 2016
-
caffe.berkeleyvision.org
-
0
Set to 0, as applying weight decay to biases normally has no benefit.
-
- Mar 2015
-
caffe.berkeleyvision.org
-
Data enters Caffe through data layers: they lie at the bottom of nets. Data can come from efficient databases (LevelDB or LMDB), directly from memory, or, when efficiency is not critical, from files on disk in HDF5 or common image formats. Common input preprocessing (mean subtraction, scaling, random cropping, and mirroring) is available by specifying TransformationParameters.
Data input
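A minimal sketch of such a data layer in the legacy prototxt syntax quoted elsewhere in these notes; the source path, batch size, and preprocessing values are hypothetical:

```
layers {
  name: "data"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "train_lmdb"   # hypothetical LMDB database path
    backend: LMDB
    batch_size: 64
  }
  transform_param {
    scale: 0.00390625      # 1/255, rescales pixel values to [0, 1]
    mirror: true           # random horizontal flips during training
    crop_size: 227         # random 227x227 crops during training
  }
}
```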
-
The BNLL (binomial normal log likelihood) layer computes the output as log(1 + exp(x)) for each input element x.
BNLL
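As a layer definition this might look like the following sketch (legacy syntax; the blob names are hypothetical):

```
layers {
  name: "bnll1"
  type: BNLL
  bottom: "in"
  top: "out"    # out = log(1 + exp(in)), elementwise
}
```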
-
The POWER layer computes the output as (shift + scale * x) ^ power for each input element x.
POWER
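A sketch of a POWER layer that squares its input, under the formula above (blob names hypothetical):

```
layers {
  name: "square"
  type: POWER
  bottom: "in"
  top: "out"
  power_param {
    power: 2
    scale: 1
    shift: 0    # out = (0 + 1 * in) ^ 2
  }
}
```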
-
specifies whether to leak the negative part by multiplying it by the slope value rather than setting it to 0.
ReLU leak
-
ReLU / Rectified-Linear and Leaky-ReLU
ReLU
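A leaky-ReLU sketch in the legacy syntax (slope value and blob names hypothetical); computing in-place is a common Caffe idiom for activation layers:

```
layers {
  name: "relu1"
  type: RELU
  bottom: "conv1"
  top: "conv1"             # in-place computation saves memory
  relu_param {
    negative_slope: 0.01   # leak factor; the default 0 gives standard ReLU
  }
}
```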
-
In ACROSS_CHANNELS mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape local_size x 1 x 1). In WITHIN_CHANNEL mode, the local regions extend spatially, but are in separate channels (i.e., they have shape 1 x local_size x local_size). Each input value is divided by (1 + (alpha/n) * sum_i x_i^2)^beta, where n is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary).
LRN definition
-
whether to sum over adjacent channels (ACROSS_CHANNELS) or nearby spatial locations (WITHIN_CHANNEL)
LRN
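A sketch of an LRN layer in the legacy syntax, using the parameter values popularized by AlexNet (layer and blob names hypothetical):

```
layers {
  name: "norm1"
  type: LRN
  bottom: "conv1"
  top: "norm1"
  lrn_param {
    local_size: 5    # n, the size of each local region
    alpha: 0.0001    # scaling parameter
    beta: 0.75       # exponent
    norm_region: ACROSS_CHANNELS
  }
}
```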
-
the pooling method. Currently MAX, AVE, or STOCHASTIC
pooling methods
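A pooling layer sketch in the legacy syntax (blob names and sizes hypothetical):

```
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX        # or AVE, or STOCHASTIC
    kernel_size: 2   # pool over 2x2 regions
    stride: 2        # non-overlapping: halves height and width
  }
}
```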
-
blobs_lr: 1      # learning rate multiplier for the filters
blobs_lr: 2      # learning rate multiplier for the biases
weight_decay: 1  # weight decay multiplier for the filters
weight_decay: 0  # weight decay multiplier for the biases
learning rate & weight decay
-
n * c_o * h_o * w_o, where h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1 and w_o likewise.
output size
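As a worked example of this formula, take an AlexNet-style first convolution layer: 227x227 input, 11x11 kernel, stride 4, no padding:

```
h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1
    = (227 + 2 * 0    - 11      ) / 4        + 1
    = 216 / 4 + 1
    = 55
```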
-
we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the i-th output group's channels are connected only to the i-th input group's channels.
group
-
specifies the number of pixels to (implicitly) add to each side of the input
pad
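Both parameters live in the convolution layer's convolution_param. A sketch in the legacy syntax, with values modeled on an AlexNet-style second convolution layer (layer and blob names hypothetical):

```
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    kernel_size: 5
    pad: 2       # 2 zero-padded pixels per side: with stride 1, h and w are unchanged
    group: 2     # output channels 0-127 see only input group 0; 128-255 see group 1
  }
}
```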
-
Caffe layers and their parameters are defined in the protocol buffer definitions for the project in caffe.proto. The latest definitions are in the dev caffe.proto.
proto file definitions
-