24 Matching Annotations
- Mar 2015
-
blog.csdn.net
-
If make is run with the "-I" or "--include-dir" option, make will also search the directories given by that option for included makefiles. If the directory <prefix>/include (normally /usr/local/include or /usr/include) exists, make will search it as well.
make include
-
To specify a particular Makefile, you can use make's "-f" or "--file" option.
make -f
-
Commands in a Makefile must begin with a [Tab] character.
tab
-
-
caffe.berkeleyvision.org
-
Data enters Caffe through data layers: they lie at the bottom of nets. Data can come from efficient databases (LevelDB or LMDB), directly from memory, or, when efficiency is not critical, from files on disk in HDF5 or common image formats. Common input preprocessing (mean subtraction, scaling, random cropping, and mirroring) is available by specifying TransformationParameters.
Data input
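For intuition, a minimal numpy sketch of those preprocessing steps (mean subtraction, scaling, random crop, mirroring); the function name and arguments are hypothetical, not Caffe's API.

    import numpy as np

    def transform(img, mean, scale=1.0, crop_size=227, mirror=True, rng=np.random):
        # Illustrative Caffe-style preprocessing on an HxWxC float array:
        # mean subtraction, scaling, random crop, random horizontal mirror.
        out = (img - mean) * scale
        h, w = out.shape[:2]
        y = rng.randint(0, h - crop_size + 1)   # random crop offsets
        x = rng.randint(0, w - crop_size + 1)
        out = out[y:y + crop_size, x:x + crop_size]
        if mirror and rng.rand() < 0.5:         # mirror half the time
            out = out[:, ::-1]
        return out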
-
The BNLL (binomial normal log likelihood) layer computes the output as log(1 + exp(x)) for each input element x.
BNLL
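A small numpy sketch of that formula (not Caffe's implementation), written in the numerically stable form log(1 + exp(x)) = max(x, 0) + log(1 + exp(-|x|)):

    import numpy as np

    def bnll(x):
        # elementwise log(1 + exp(x)), computed stably to avoid overflow for large x
        return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

    print(bnll(np.array([-10.0, 0.0, 10.0])))  # ~[4.5e-05, 0.693, 10.00005]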
-
The POWER layer computes the output as (shift + scale * x) ^ power for each input element x.
POWER
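The same formula as a plain numpy sketch (parameter names follow the quote above):

    import numpy as np

    def power_layer(x, power=1.0, scale=1.0, shift=0.0):
        # elementwise (shift + scale * x) ** power
        return np.power(shift + scale * x, power)

    print(power_layer(np.array([1.0, 2.0, 3.0]), power=2.0, scale=0.5, shift=1.0))
    # (1 + 0.5 * x)^2 -> [2.25, 4.0, 6.25]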
-
specifies whether to leak the negative part by multiplying it with the slope value rather than setting it to 0.
ReLU leak
-
ReLU / Rectified-Linear and Leaky-ReLU
ReLU
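A minimal numpy sketch of both variants; negative_slope = 0 gives the standard ReLU, a nonzero slope gives the leaky version described above.

    import numpy as np

    def relu(x, negative_slope=0.0):
        # keep positives; multiply the negative part by the slope instead of zeroing it
        return np.where(x > 0, x, negative_slope * x)

    x = np.array([-2.0, -0.5, 0.0, 1.5])
    print(relu(x, negative_slope=0.1))  # [-0.2, -0.05, 0.0, 1.5]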
-
In ACROSS_CHANNELS mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape local_size x 1 x 1). In WITHIN_CHANNEL mode, the local regions extend spatially, but are in separate channels (i.e., they have shape 1 x local_size x local_size). Each input value is divided by (1 + (α/n) * Σ_i x_i^2)^β, where n is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary).
LRN definition
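A sketch of the ACROSS_CHANNELS case in plain numpy, using the formula above on an N x C x H x W float array (illustrative only, not Caffe's implementation):

    import numpy as np

    def lrn_across_channels(x, local_size=5, alpha=1e-4, beta=0.75):
        # each value is divided by (1 + (alpha/n) * sum_i x_i^2)^beta, where the sum
        # runs over local_size nearby channels centered on it (zero-padded at the edges)
        n, c, h, w = x.shape
        half = local_size // 2
        padded = np.pad(x ** 2, ((0, 0), (half, half), (0, 0), (0, 0)))
        window_sum = np.zeros_like(x, dtype=float)
        for i in range(local_size):
            window_sum += padded[:, i:i + c]
        return x / (1.0 + (alpha / local_size) * window_sum) ** beta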
-
whether to sum over adjacent channels (ACROSS_CHANNELS) or nearby spatial locations (WITHIN_CHANNEL)
LRN
-
the pooling method. Currently MAX, AVE, or STOCHASTIC
pooling methods
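For intuition, a tiny numpy sketch of MAX and AVE pooling over non-overlapping windows (STOCHASTIC pooling instead samples a value from each window in proportion to its magnitude); this is illustrative, not Caffe's code:

    import numpy as np

    def pool2d(x, kernel=2, stride=2, method="MAX"):
        # pool an HxW array window by window, no padding
        h_o = (x.shape[0] - kernel) // stride + 1
        w_o = (x.shape[1] - kernel) // stride + 1
        out = np.empty((h_o, w_o))
        for i in range(h_o):
            for j in range(w_o):
                win = x[i * stride:i * stride + kernel, j * stride:j * stride + kernel]
                out[i, j] = win.max() if method == "MAX" else win.mean()
        return out

    x = np.arange(16, dtype=float).reshape(4, 4)
    print(pool2d(x, method="MAX"))  # [[ 5.  7.] [13. 15.]]
    print(pool2d(x, method="AVE"))  # [[ 2.5  4.5] [10.5 12.5]]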
-
blobs_lr: 1      # learning rate multiplier for the filters
blobs_lr: 2      # learning rate multiplier for the biases
weight_decay: 1  # weight decay multiplier for the filters
weight_decay: 0  # weight decay multiplier for the biases
learning rate & weight decay
-
n * c_o * h_o * w_o, where h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1 and w_o likewise.
output size
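That arithmetic in a small helper (integer division mirrors the floor the formula implies); the 227x227 / 11x11 / stride-4 numbers below are just a familiar AlexNet-style example, not from the quoted page:

    def conv_output_shape(n, c_o, h_i, w_i, kernel_h, kernel_w,
                          pad_h=0, pad_w=0, stride_h=1, stride_w=1):
        # n * c_o * h_o * w_o with h_o = (h_i + 2*pad_h - kernel_h) / stride_h + 1
        h_o = (h_i + 2 * pad_h - kernel_h) // stride_h + 1
        w_o = (w_i + 2 * pad_w - kernel_w) // stride_w + 1
        return n, c_o, h_o, w_o

    print(conv_output_shape(1, 96, 227, 227, 11, 11, stride_h=4, stride_w=4))
    # (1, 96, 55, 55)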
-
we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the ith output group channels will only be connected to the ith input group channels.
group
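A sketch of that connectivity using a grouped 1x1 convolution in numpy: the i-th block of output channels only mixes the i-th block of input channels. The function and shapes are hypothetical, chosen to keep the example short.

    import numpy as np

    def grouped_1x1_conv(x, weights, g):
        # x: (c_i, H, W); weights: list of g matrices, each (c_o // g, c_i // g)
        step = x.shape[0] // g
        outs = []
        for i in range(g):
            xi = x[i * step:(i + 1) * step]                    # i-th input group
            outs.append(np.tensordot(weights[i], xi, axes=1))  # mix only this group
        return np.concatenate(outs, axis=0)

    x = np.random.rand(4, 8, 8)                       # 4 input channels
    w = [np.random.rand(3, 2), np.random.rand(3, 2)]  # g = 2, 6 output channels total
    print(grouped_1x1_conv(x, w, g=2).shape)          # (6, 8, 8)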
-
specifies the number of pixels to (implicitly) add to each side of the input
pad
-
Caffe layers and their parameters are defined in the protocol buffer definitions for the project in caffe.proto. The latest definitions are in the dev caffe.proto.
proto file definitions
-
- Jan 2015
-
danielmiessler.com
-
A copy and paste reference
A copy and paste reference
-
Basic deletion options
Basic deletion options
-
Motion command reference
Motion command reference
-
Ctrl-i: jump to your previous navigation location
Ctrl-o: jump back to where you were
I can never get this to work, and I don't understand what it means.
-
j: move down one line
k: move up one line
h: move left one character
l: move right one character
Basic Motions
-
A search reference
/{string}: search for string
t: jump up to a character
f: jump onto a character
*: search for other instances of the word under your cursor
n: go to the next instance when you’ve searched for a string
N: go to the previous instance when you’ve searched for a string
;: go to the next instance when you’ve jumped to a character
,: go to the previous instance when you’ve jumped to a character
A Search Reference
-
-
hypothes.is
-
Leave a comment.
-
You can then add your own comments and tags.
I add my own comments and tags
Test blockquote.
-