24 Matching Annotations
  1. Mar 2015
    1. If make is invoked with the “-I” or “--include-dir” option, make will also search the directories given by that option for included makefiles. And if the directory <prefix>/include (normally /usr/local/include or /usr/include) exists, make will search it as well.

      make include

    2. To use a particular Makefile, you can pass make the “-f” or “--file” option

      make -f

    3. Commands in a Makefile must begin with a [Tab] character

      tab

    1. Data enters Caffe through data layers: they lie at the bottom of nets. Data can come from efficient databases (LevelDB or LMDB), directly from memory, or, when efficiency is not critical, from files on disk in HDF5 or common image formats. Common input preprocessing (mean subtraction, scaling, random cropping, and mirroring) is available by specifying TransformationParameters.

      Data input

    2. The BNLL (binomial normal log likelihood) layer computes the output as log(1 + exp(x)) for each input element x.

      BNLL
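
A minimal NumPy sketch of this transform (the function name is mine, and this is not Caffe's actual C++ implementation). It uses the standard numerically stable rewriting of log(1 + exp(x)) to avoid overflow for large x:

```python
import numpy as np

def bnll(x):
    # log(1 + exp(x)) computed stably:
    # for x > 0, rewrite as x + log(1 + exp(-x)) so exp never overflows
    return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

print(bnll(np.array([-1.0, 0.0, 1.0])))
```

At x = 0 this gives log(2) ≈ 0.693, and for large x it approaches x, as expected.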

    3. The POWER layer computes the output as (shift + scale * x) ^ power for each input element x.

      POWER
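
The same formula as a one-line NumPy sketch (function name and defaults are mine; the parameter defaults shown match the behavior of an identity transform):

```python
import numpy as np

def power_layer(x, power=1.0, scale=1.0, shift=0.0):
    # (shift + scale * x) ^ power, applied elementwise
    return (shift + scale * x) ** power

print(power_layer(np.array([1.0, 2.0]), power=2.0, scale=3.0, shift=1.0))
# (1 + 3*1)^2 = 16, (1 + 3*2)^2 = 49
```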

    4. specifies whether to leak the negative part by multiplying it with the slope value rather than setting it to 0.

      ReLU leak

    5. ReLU / Rectified-Linear and Leaky-ReLU

      ReLU
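
A NumPy sketch of both behaviors in one function (name is mine): with negative_slope = 0 it is the standard ReLU, and with a nonzero slope the negative part is multiplied by the slope instead of being zeroed, as described in the annotation above:

```python
import numpy as np

def leaky_relu(x, negative_slope=0.0):
    # positive part passes through; negative part is scaled by the slope
    # (slope = 0 recovers the plain ReLU, max(0, x))
    return np.where(x > 0, x, negative_slope * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0]), negative_slope=0.1))
```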

    6. In ACROSS_CHANNELS mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape local_size x 1 x 1). In WITHIN_CHANNEL mode, the local regions extend spatially, but are in separate channels (i.e., they have shape 1 x local_size x local_size). Each input value is divided by (1 + (α/n) Σ_i x_i²)^β, where n is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary).

      LRN definition
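
As a rough sketch of the ACROSS_CHANNELS case only (a naive loop, not Caffe's implementation; the function name is mine), each value is divided by (1 + (α/n) Σ x²)^β over a window of local_size channels:

```python
import numpy as np

def lrn_across_channels(x, local_size=5, alpha=1.0, beta=0.75):
    # x has shape (C, H, W); the normalization window spans local_size
    # channels centered at each channel, with zero padding at the ends
    c = x.shape[0]
    half = local_size // 2
    sq = x ** 2
    out = np.empty_like(x)
    for i in range(c):
        lo, hi = max(0, i - half), min(c, i + half + 1)
        denom = (1.0 + (alpha / local_size) * sq[lo:hi].sum(axis=0)) ** beta
        out[i] = x[i] / denom
    return out

# one channel, local_size 1, alpha = beta = 1: denominator is 1 + x^2
print(lrn_across_channels(np.ones((1, 1, 1)), local_size=1, alpha=1.0, beta=1.0))
```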

    7. whether to sum over adjacent channels (ACROSS_CHANNELS) or nearby spatial locations (WITHIN_CHANNEL)

      LRN

    8. the pooling method. Currently MAX, AVE, or STOCHASTIC

      pooling methods

    9. blobs_lr: 1      # learning rate multiplier for the filters
       blobs_lr: 2      # learning rate multiplier for the biases
       weight_decay: 1  # weight decay multiplier for the filters
       weight_decay: 0  # weight decay multiplier for the biases

      learning rate & weight decay

    10. n * c_o * h_o * w_o, where h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1 and w_o likewise.

      output size
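
The output-size formula above translates directly into a small helper (the function name and the integer division for the floor are my choices):

```python
def conv_output_size(h_i, w_i, kernel_h, kernel_w,
                     pad_h=0, pad_w=0, stride_h=1, stride_w=1):
    # h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1, and w_o likewise
    h_o = (h_i + 2 * pad_h - kernel_h) // stride_h + 1
    w_o = (w_i + 2 * pad_w - kernel_w) // stride_w + 1
    return h_o, w_o

# e.g. a 227x227 input, 11x11 kernel, stride 4, no padding
print(conv_output_size(227, 227, 11, 11, stride_h=4, stride_w=4))  # (55, 55)
```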

    11. we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the ith output group channels will be only connected to the ith input group channels.

      group
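
The grouped connectivity can be illustrated with a small hypothetical helper (name and return format are mine) that maps each output channel to the input channels it is allowed to see:

```python
def group_connectivity(c_in, c_out, g):
    # with g groups, output channels in group i connect only to the
    # input channels in group i; channels are split evenly across groups
    per_in, per_out = c_in // g, c_out // g
    return {o: list(range((o // per_out) * per_in,
                          (o // per_out) * per_in + per_in))
            for o in range(c_out)}

print(group_connectivity(4, 4, 2))
# {0: [0, 1], 1: [0, 1], 2: [2, 3], 3: [2, 3]}
```

With g equal to the number of channels this degenerates to a depthwise (channel-wise) convolution; with g = 1 it is the usual fully connected channel pattern.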

    12. specifies the number of pixels to (implicitly) add to each side of the input

      pad

    13. Caffe layers and their parameters are defined in the protocol buffer definitions for the project in caffe.proto. The latest definitions are in the dev caffe.proto.

      proto file definitions

  2. Jan 2015
    1. A copy and paste reference

      A copy and paste reference

    2. Basic deletion options

      Basic deletion options

    3. Motion command reference

      Motion command reference

    4. Ctrl-i: jump to your previous navigation location
       Ctrl-o: jump back to where you were

      I can never get this to work when I try it; I don't understand what it means.

    5. j: move down one line
       k: move up one line
       h: move left one character
       l: move right one character

      Basic Motions

    6. A search reference
       /{string}: search for string
       t: jump up to a character
       f: jump onto a character
       *: search for other instances of the word under your cursor
       n: go to the next instance when you’ve searched for a string
       N: go to the previous instance when you’ve searched for a string
       ;: go to the next instance when you’ve jumped to a character
       ,: go to the previous instance when you’ve jumped to a character

      A Search Reference

    1. Leave a comment.

    2. You can then add your own comments and tags.

      I add my own comments and tags

      Test blockquote.