12 Matching Annotations
  1. Feb 2021
    1. The rationale is that it's actually clearer to initialize eagerly: you don't need to worry about timing or concurrency that way. Lazy init is inherently more complex than eager init, so there should be a reason to choose lazy over eager, rather than the other way around.
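      The complexity gap the note describes can be sketched as follows; this is a minimal illustration, and the names (`eagerConfig`, `getConfig`) are made up for the example, not taken from any annotated code:

      ```typescript
      // Eager: the value exists before any caller can observe it,
      // so there is no timing or concurrency question to answer.
      const eagerConfig = { retries: 3 };

      // Lazy: every access must first check whether the value exists;
      // in concurrent code this check would also need a lock or atomic.
      let lazyConfig: { retries: number } | undefined;

      function getConfig(): { retries: number } {
        if (lazyConfig === undefined) {
          lazyConfig = { retries: 3 }; // first-use initialization
        }
        return lazyConfig;
      }
      ```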
  2. Nov 2020
  3. Oct 2020
  4. Sep 2020
    1. It looks like the issue stems from having "svelte" listed as a dependency instead of a devDependency in package.json within the Sapper project. This causes `import 'svelte'` to load `set_current_component` from the rollup-excluded npm package instead of from within the Sapper-generated server.js.
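      A minimal sketch of the suggested fix, assuming a typical Sapper project layout (the version range here is a placeholder, not taken from the original issue):

      ```json
      {
        "devDependencies": {
          "svelte": "^3.0.0"
        }
      }
      ```

      With "svelte" under `devDependencies` rather than `dependencies`, the bundler resolves a single copy at build time instead of the generated server loading a second copy from node_modules.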
  5. Jun 2019
    1. RandomOut: Using a convolutional gradient norm to rescue convolutional filters

      Perhaps this time my advisor will finally believe that stability after network initialization has always been a real problem. Also, this paper runs on the excellent MXNet framework, which deserves a thumbs-up.

  6. Feb 2019
    1. Fixup Initialization: Residual Learning Without Normalization

      Regarding fitting performance, the design interplay between regularization and BN is always subtle, especially once the learning rate gets mixed in. The paper's authors have also discussed these questions on Reddit in connection with their own article.

  7. Jan 2019
    1. Generalization in Deep Networks: The Role of Distance from Initialization

      Goodfellow retweeted this paper.

      The authors emphasize the importance of a model's initialization parameters for explaining its generalization ability!