5 Matching Annotations
  1. May 2023
    1. When you boil it down, AI today is deep learning, and deep learning is backprop—which is amazing, considering that backprop is more than 30 years old. It’s worth understanding how that happened—how a technique could lie in wait for so long and then cause such an explosion—because once you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one.

      The main contribution of Geoffrey Hinton was the idea of backprop, which enabled multilayer neural nets to learn.
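
      As a refresher on what that mechanism actually does, here is a minimal NumPy sketch of backprop on a one-hidden-layer network; the toy data, sizes, and variable names are mine, purely for illustration:

      ```python
      import numpy as np

      # Toy setup: 32 examples, 4 inputs, 8 hidden units, 1 output.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(32, 4))
      y = rng.normal(size=(32, 1))
      W1 = rng.normal(scale=0.1, size=(4, 8))
      W2 = rng.normal(scale=0.1, size=(8, 1))
      lr = 0.1

      for step in range(200):
          # Forward pass through the multilayer net.
          h = np.tanh(X @ W1)
          y_hat = h @ W2
          loss = np.mean((y_hat - y) ** 2)

          # Backward pass: the chain rule, applied layer by layer.
          d_yhat = 2 * (y_hat - y) / len(X)
          dW2 = h.T @ d_yhat
          d_h = d_yhat @ W2.T
          d_hpre = d_h * (1 - h ** 2)   # tanh'(x) = 1 - tanh(x)^2
          dW1 = X.T @ d_hpre

          # Gradient-descent update of both layers.
          W1 -= lr * dW1
          W2 -= lr * dW2
      ```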

  2. Feb 2019
    1. Decoupled Greedy Learning of CNNs

      One source of inefficiency when training backprop-based neural networks is that every layer must wait for the signal to propagate through the whole network before it can update. In this paper, the authors examine and analyze a training procedure called Decoupled Greedy Learning (DGL), which effectively removes this bottleneck.
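
      As I read the idea, each block gets a local auxiliary head and loss, and passes only a detached activation forward, so no block waits on a global backward pass. A toy PyTorch sketch of that reading, not the paper's code, with all names illustrative:

      ```python
      import torch
      import torch.nn as nn

      # Three blocks, each with its own auxiliary head and optimizer.
      blocks = nn.ModuleList(nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3))
      heads = nn.ModuleList(nn.Linear(32, 10) for _ in range(3))
      opts = [torch.optim.SGD(list(b.parameters()) + list(h.parameters()), lr=0.1)
              for b, h in zip(blocks, heads)]
      loss_fn = nn.CrossEntropyLoss()

      x = torch.randn(64, 32)               # toy batch
      y = torch.randint(0, 10, (64,))       # toy labels

      h = x
      for block, head, opt in zip(blocks, heads, opts):
          h = block(h)
          local_loss = loss_fn(head(h), y)  # layer-local objective
          opt.zero_grad()
          local_loss.backward()             # gradient stays inside this block
          opt.step()
          # Decoupling: detach so no gradient ever crosses block boundaries,
          # which is what would let blocks update in parallel.
          h = h.detach()
      ```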

  3. Dec 2018
    1. Linear Backprop in non-linear networks

      These days~ apparently you can skip computing the nonlinear gradients altogether... I smell a hint of DNNs reverting to their ANN ancestors.
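
      My reading of the trick, as a hedged toy sketch and not the paper's code: keep the nonlinear forward pass but drop the activation's derivative in the backward pass, treating the network as linear there:

      ```python
      import numpy as np

      # Same toy net as above, but ReLU forward and a "linearized" backward.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(32, 4))
      y = rng.normal(size=(32, 1))
      W1 = rng.normal(scale=0.1, size=(4, 8))
      W2 = rng.normal(scale=0.1, size=(8, 1))
      lr = 0.05

      for step in range(200):
          h = np.maximum(X @ W1, 0.0)       # nonlinear (ReLU) forward pass
          y_hat = h @ W2

          d_yhat = 2 * (y_hat - y) / len(X)
          dW2 = h.T @ d_yhat
          # Standard backprop would mask this with ReLU's derivative
          # (1 where the pre-activation is positive, 0 elsewhere);
          # the linearized backward simply skips that factor.
          d_hpre = d_yhat @ W2.T
          dW1 = X.T @ d_hpre

          W1 -= lr * dW1
          W2 -= lr * dW2
      ```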

  4. Nov 2018
    1. Accelerating Natural Gradient with Higher-Order Invariance

      Every time I see a paper studying the theory of gradient-based optimization, it feels utterly magical! Off-the-charts impressive....
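
      For context, the natural-gradient update preconditions the plain gradient with the inverse Fisher information matrix; this is the textbook form, not anything specific to this paper:

      $$\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta \mathcal{L}(\theta_t),
      \qquad F(\theta) = \mathbb{E}\left[\nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^\top\right]$$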

    2. Backprop Evolution

      This seems to say that the backpropagation algorithm still has plenty of room for optimization in its very functional form. Starting from elementary functions and common matrix operations, the authors explored combinations of these primitives and found some that easily outperform standard backpropagation.

      Which can't help but inspire a thought: why couldn't we also use a network to fit the gradient-update function itself?
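
      That thought is essentially the "learning to learn" line of work. A toy PyTorch sketch of the idea, with a task and names I made up for illustration:

      ```python
      import torch
      import torch.nn as nn

      # A tiny MLP stands in for the hand-designed update rule: it maps a
      # parameter's gradient to an update, and is itself trained ("meta-trained")
      # by backpropagating the final task loss through a short unrolled inner loop.
      update_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
      meta_opt = torch.optim.Adam(update_net.parameters(), lr=1e-2)

      for episode in range(300):
          w = torch.zeros(1, requires_grad=True)        # fresh inner parameter
          for t in range(10):                           # unrolled inner optimization
              loss = ((w - 3.0) ** 2).sum()             # toy task: drive w to 3
              g, = torch.autograd.grad(loss, w, create_graph=True)
              w = w + update_net(g.unsqueeze(0)).squeeze(0)  # learned update step
          meta_opt.zero_grad()
          ((w - 3.0) ** 2).sum().backward()             # meta-loss on the final w
          meta_opt.step()
      ```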