83 Matching Annotations
  1. Jul 2020
  2. Feb 2020
  3. Jan 2020
    1. s


    2. querry


    3. The positive statement in the previous corollary uses the fact that it is easy to learn random degree-\(k\) parities with neural nets and GD when \(k\) is finite, see for example [Bam19] for a specific implementation


  4. Aug 2019
    1. For simplicity, we focus on training only hidden weights \(\vec{W}\) in this paper and leave \(A\) and \(B\) at random initialization. Our result naturally extends to the case when \(A\), \(B\) and \(\vec{W}\) are jointly trained

      They do not train all the layers, but this is just for simplicity

  5. Jul 2019
    1. The data is generated from a Gaussian in both the linear and non-linear models

    1. \(\alpha_0 f_0 + \alpha_1 f_1 + \cdots + \alpha_N f_N\)

      This should have the opposite sign

  6. Feb 2019
  7. Jan 2019
  8. Dec 2018
    1. If you're a teacher, please oh please tell your students about Spaced Repetition (& other evidence-based study habits) early on.

      But take into account that spaced repetition apps don't work for kids (a clarification there would be nice).

      TL;DR: their memory works differently; their long-term memory is very bad, while their brain plasticity is high.

      The post is by Piotr Wozniak, the creator of SuperMemo and of the concept of spaced repetition (he probably deserves a mention as well: although Hermann Ebbinghaus discovered the spacing effect and forgetting curves, it was Wozniak who invented the spaced repetition algorithms).

  9. Nov 2018
    1. in principle each other

      in principle cancel each other

  10. arxiv.org
    1. The great achievements of PAC learning that made it successful are its generality and algorithmic applicability: PAC learning does not restrict the input domain in any way, and thus allows very general learning, without generative or distributional assumptions on the world. Another important feature is the restriction to specific hypothesis classes, without which there are simple impossibility results such as the “no free lunch” theorem. This allows comparative and improper learning of computationally-hard concepts


  11. Oct 2018
    1. Aim to consume less than 10% of total calories from saturated fat.

      1 internet point if you find a trusted reference

  12. Sep 2018
      TL;DR: Because SuperMemo's algorithm (SM-17) is better than Anki's (SM-2) and, among other things, the workload in Anki is too high for the same or worse results

      It is mentioned that in Anki you should change the New Interval for lapses to 40%

    2. New Interval for lapses COULDN’T BE ZERO. I would suggest setting it over 40%.

      True. In order to change it, go to the deck -> Deck options -> Lapses -> New Interval
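      A minimal sketch of what that setting does, assuming Anki's SM-2-style lapse handling, where a lapsed card's next interval is a percentage of its previous one (the function name here is ours, not Anki's):

```python
# Sketch of Anki's "New Interval" lapse setting (assumption: the lapsed
# card's next interval is old interval x percentage; function name is ours).
def interval_after_lapse(old_interval_days: float, new_interval_pct: float) -> float:
    """Interval a card gets after a lapse, given the New Interval setting."""
    return old_interval_days * new_interval_pct / 100.0

# At the default of 0% a mature 100-day card restarts from scratch;
# at the suggested 40% it keeps 40 days of its interval.
print(interval_after_lapse(100, 0))   # 0.0
print(interval_after_lapse(100, 40))  # 40.0
```

      This is why 0% is so punishing: one lapse throws away the card's entire history.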

  13. Aug 2018
    1. π

      This \(\pi\) should be \(\pi_{\mathrm{mix}}\), as at the end of the sentence

  14. Jul 2018
    1. They are using what Proposition A.1 says, which has a typo; it should be \(\frac{\Delta_k (t_j - t_{j-1})}{4} \exp (-t_{j-1} \Delta_k^2/2)\)

    2. \(\max_{1 \le k \le M} \sum_{j=1}^{M} \frac{\Delta_k t_j}{4} \exp(-t_{j-1} \Delta_k^2/2) - \frac{1}{\Delta_k}\)

      The \(\sum_{j=1}^M\) should be inside the bracket. The \(-\frac{1}{\Delta_k}\) is not part of the sum.

    3. t

      This numerator should be \(t_j - t_{j-1}\)

    4. 1

      A \(\Delta\) is missing here
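      Putting these annotated corrections together (sum inside the brackets, numerator \(t_j - t_{j-1}\), the missing \(\Delta_k\)), the intended display would presumably read:

```latex
\max_{1 \le k \le M} \left\{ \left[ \sum_{j=1}^{M}
  \frac{\Delta_k\,(t_j - t_{j-1})}{4}\,
  \exp\!\left(-t_{j-1}\Delta_k^2/2\right) \right]
  - \frac{1}{\Delta_k} \right\}
```

      This is only a reconstruction from the notes above, not the paper's own corrected statement.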

  15. Jun 2018
  16. May 2018
  17. Apr 2018
  18. Mar 2018
  19. Feb 2018
    1. Introducing Math

      Is there any shortcut to use instead of clicking the button for LaTeX or typing dollars or \( \)?

    1. \(\max_y \{ y^\top \beta - h(y) \}\)

      again, here it should be argmax

    2. \(\max_y \{ y^\top \beta - g(y) \}\)

      This should be argmax
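      For context, the max/argmax distinction here is presumably the usual one for convex conjugates: the conjugate's value is the max, while the quantity being characterized (e.g. the gradient of the conjugate) is the maximizer:

```latex
g^*(\beta) = \max_y \{ y^\top \beta - g(y) \},
\qquad
\nabla g^*(\beta) = \operatorname*{arg\,max}_y \{ y^\top \beta - g(y) \}
```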

  20. Dec 2017
    1. Table 1

      This 2010 paper, https://arxiv.org/abs/1005.2012, proves \(\tilde{O}(\frac{Mm}{\epsilon^2})\) in the primal case. It should be in the related work

  21. Nov 2017