47 Matching Annotations
  1. Apr 2023
  2. Apr 2022
    1. An interesting study came out recently

      Preprint of study: Broockman, D., & Kalla, J. (2022, April 1). The manifold effects of partisan media on viewers’ beliefs and attitudes: A field experiment with Fox News viewers. https://doi.org/10.31219/osf.io/jrw26

    1. Convolution Demo. Below is a running demo of a CONV layer. Since 3D volumes are hard to visualize, all the volumes (the input volume (in blue), the weight volumes (in red), the output volume (in green)) are visualized with each depth slice stacked in rows. The input volume is of size W1 = 5, H1 = 5, D1 = 3, and the CONV layer parameters are K = 2, F = 3, S = 2, P = 1. That is, we have two filters of size 3×3, and they are applied with a stride of 2. Therefore, the output volume has spatial size (5 - 3 + 2)/2 + 1 = 3. Moreover, notice that a padding of P = 1 is applied to the input volume, making the outer border of the input volume zero. The visualization below iterates over the output activations (green), and shows that each element is computed by elementwise multiplying the highlighted input (blue) with the filter (red), summing it up, and then offsetting the result by the bias.

      Best explanation/illustration of a convolution layer, and of how the numbers relate.
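
      A quick check of that arithmetic (a minimal sketch; the function is just the quoted formula (W - F + 2P)/S + 1):

      ```python
      def conv_output_size(w, f, s, p):
          """Spatial output size of a conv layer: (W - F + 2P) / S + 1."""
          assert (w - f + 2 * p) % s == 0, "hyperparameters must fit the input evenly"
          return (w - f + 2 * p) // s + 1

      # The demo's settings: W1 = 5, F = 3, S = 2, P = 1
      print(conv_output_size(5, 3, 2, 1))  # -> 3, matching the 3x3 output volume
      ```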

    2. Example 1. For example, suppose that the input volume has size [32x32x3], (e.g. an RGB CIFAR-10 image). If the receptive field (or the filter size) is 5x5, then each neuron in the Conv Layer will have weights to a [5x5x3] region in the input volume, for a total of 5*5*3 = 75 weights (and +1 bias parameter). Notice that the extent of the connectivity along the depth axis must be 3, since this is the depth of the input volume. Example 2. Suppose an input volume had size [16x16x20]. Then using an example receptive field size of 3x3, every neuron in the Conv Layer would now have a total of 3*3*20 = 180 connections to the input volume. Notice that, again, the connectivity is local in 2D space (e.g. 3x3), but full along the input depth (20).

      These two examples are the first two layers of Andrej Karpathy's wonderful working ConvNetJS CIFAR-10 demo here

    1. input (32x32x3): max activation 0.5, min -0.5; max gradient 1.08696, min -1.53051. conv (32x32x16): filter size 5x5x3, stride 1; max activation 3.75919, min -4.48241; max gradient 0.36571, min -0.33032; parameters: 16x5x5x3+16 = 1216

      The dimensions of these first two layers are explained here
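
      Both parameter counts above come from the same formula (K filters of F × F × D_in weights, plus K biases); a minimal sketch:

      ```python
      def conv_params(k, f, d_in):
          """Learnable parameters of a conv layer: K filters of F x F x D_in weights, plus K biases."""
          return k * f * f * d_in + k

      print(conv_params(1, 5, 3))   # 76: one neuron's 5*5*3 = 75 weights + 1 bias (Example 1)
      print(conv_params(16, 5, 3))  # 1216: the demo's conv layer, 16x5x5x3 + 16
      print(3 * 3 * 20)             # 180 connections per neuron in Example 2
      ```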

    1. Here the lower-level layers are frozen and are not trained; only the new classification head updates itself, learning from the features provided by the pre-trained, chopped-up model on the left.
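
      A common way to set this up in PyTorch (a sketch under assumed names; the resnet18 backbone and 10-class head are stand-ins, not the model from the figure):

      ```python
      import torch.nn as nn
      from torchvision import models

      backbone = models.resnet18(weights="IMAGENET1K_V1")  # stand-in pre-trained "chopped up" model
      for param in backbone.parameters():
          param.requires_grad = False                      # freeze the lower layers

      backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new classification head;
                                                             # its fresh parameters train as usual
      ```
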
    1. Starting from random noise, we optimize an image to activate a particular neuron (layer mixed4a, unit 11).

      And then we use that image as a kind of variable name, to refer to the neuron in a way that is more helpful than the layer number and neuron index within the layer. This explanation is via one of Chris Olah's YouTube videos (https://www.youtube.com/watch?v=gXsKyZ_Y_i8)
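
      The optimization itself is plain gradient ascent on the input. A bare-bones sketch (`model`, the hooked `layer` standing in for mixed4a, and the unit index are assumptions; real feature visualization adds regularizers and input transformations):

      ```python
      import torch

      def visualize_unit(model, layer, unit, steps=256, lr=0.05):
          img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from random noise
          opt = torch.optim.Adam([img], lr=lr)
          acts = {}
          layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
          for _ in range(steps):
              opt.zero_grad()
              model(img)
              loss = -acts["out"][0, unit].mean()  # maximize the unit's mean activation
              loss.backward()
              opt.step()
          return img.detach()  # the image that "names" the neuron
      ```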

  3. Feb 2022
    1. Studies show women and people of color tend to be paid less than White men in the same roles.

      Refers to pay between workers "in the same roles" but links to an article that uses a gross, unadjusted figure. Nothing in the link supports the claim being made, which is hardly surprising considering this claim has been debunked thousands of times over the last few decades.

  4. Jan 2022
  5. Oct 2021
  6. Aug 2021
  7. Nov 2020
  8. Oct 2020
  9. May 2020
  10. Apr 2019
  11. Mar 2019
  12. static.googleusercontent.com
    1. Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks

  13. Feb 2019
    1. Weighted Channel Dropout for Regularization of Deep Convolutional Neural Network

      This work, by Hou and Wang, was inspired by the following observation. Within a CNN's stack of convolutional layers, all of the channels produced by one layer are treated equally by the next. This suggests that such a "distribution" may not be optimal, since some features may turn out to be more useful than others. This is especially true in the higher (shallower) layers, where features are still traceable. Zhang et al. 2016 go a step further, showing that for each input image only a small number of channels in the higher layers are activated, while the neuron responses in the remaining channels are close to zero.

      From this, the authors propose a method that selects channels according to the relative magnitude of their activations, which can further serve as a special way of modeling the dependencies between channels. The main contribution of the work is Weighted Channel Dropout (WCD), a regularization method for the convolutional layers of a CNN.
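
      A minimal sketch of that idea (keep-probabilities weighted by channel activation strength; the paper's actual WCD procedure differs in its details):

      ```python
      import torch

      def weighted_channel_dropout(x, keep_ratio=0.9):
          """Sketch only, not the paper's exact procedure.

          x: feature maps of shape (N, C, H, W). Channels with larger mean
          activation are more likely to be kept; the rest are zeroed out.
          """
          scores = x.mean(dim=(2, 3)).clamp(min=0)                   # (N, C) channel strengths
          probs = scores / (scores.sum(dim=1, keepdim=True) + 1e-8)  # relative magnitudes
          keep = torch.bernoulli((probs * x.size(1) * keep_ratio).clamp(max=1.0))
          return x * keep[:, :, None, None]
      ```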

  14. Jan 2019
    1. A Survey of the Recent Architectures of Deep Convolutional Neural Networks

      Deep convolutional neural networks (CNNs) are a special type of neural network that have demonstrated state-of-the-art results on various competition benchmarks. The high performance achieved by deep CNN architectures on challenging benchmark tasks shows that innovative architectural ideas, together with parameter optimization, can improve CNN performance across a variety of vision-related tasks. This survey categorizes recent CNN architectural innovations into seven classes, based respectively on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention.

    2. Understanding Geometry of Encoder-Decoder CNNs

      Encoder-decoder networks using convolutional neural network (CNN) architectures have been widely used in the deep learning literature thanks to their excellent performance on various inverse problems in computer vision, medical imaging, and beyond. However, it remains difficult to obtain a coherent geometric view of why such architectures deliver the desired performance. Inspired by recent theoretical understanding of the universality, expressiveness, and optimization landscape of neural networks, as well as by convolutional frame theory, the authors provide a unified theoretical framework that helps to better understand the geometry of encoder-decoder CNNs. Their mathematical framework shows that encoder-decoder CNN architectures are closely related to nonlinear basis representations using combinatorial convolutional frames, whose expressiveness grows exponentially with network depth. They also show the importance of skip connections for expressiveness and for the optimization landscape.

    3. Explanatory Graphs for CNNs

      Q Zhang personally answers questions on Zhihu about the technical details and research philosophy behind Explanatory Graphs: http://t.cn/EqfQbAW [thumbs-up]

  15. Dec 2018
    1. CFUN: Combining Faster R-CNN and U-net Network for Efficient Whole Heart Segmentation

      The figures are beautifully done ~~~

    2. Deep Neural Networks for Automatic Classification of Anesthetic-Induced Unconsciousness

      spatio-temporo-spectral features.

    3. Using Convolutional Neural Networks to Classify Audio Signal in Noisy Sound Scenes

      First identify where the signal is, then filter the signal out; this is much like LIGO's playbook for finding event waveforms ~ And here is another application combining RNNs with CNNs ~

    4. Bag of Tricks for Image Classification with Convolutional Neural Networks

      A paper by Mu Li and colleagues! Pretty much a summary of the various tricks all by itself!

      Paper Summary

      Reddit: http://t.cn/Ey1gZKo

    5. Seeing in the dark with recurrent convolutional neural networks

      At a glance, some of the results are very close to my own paper's, and this paper has a great deal worth borrowing for me! It also shows that recurrency (recurrent-memory-like units) has a genuinely necessary place in pattern recognition!

  16. Oct 2018
  17. Mar 2018
  18. Jan 2018
    1. Although there are reports on CHH peptides in other crustacean taxa such as Armadillidium vulgare (Isopoda)22,23, Daphnia pulex (Cladocera)24 and Daphnia magna15, investigations beyond decapods have remained scant and the sequences of CHH/MIH/GIH genes in other crustacean taxa have remained elusive.

      This is interesting!

  19. Aug 2017
  20. Mar 2017
  21. Sep 2016
    1. computes the gradients with respect to the parameters and to the inputs

      i.e., the backward pass computes two gradients: one with respect to the parameters (for the weight update) and one with respect to the inputs (to propagate further back).
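
      For a fully connected layer Y = XW, the two gradients are two matrix products; a minimal numpy sketch:

      ```python
      import numpy as np

      # Forward: Y = X @ W, with X of shape (N, D_in) and W of shape (D_in, D_out).
      # Backward receives dY and returns both gradients the note refers to:
      def fc_backward(dY, X, W):
          dW = X.T @ dY  # gradient w.r.t. the parameters (used for the weight update)
          dX = dY @ W.T  # gradient w.r.t. the inputs (passed to the previous layer)
          return dW, dX
      ```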

    2. 1000 x 1024.

      The input always comes last.

  22. Jun 2016