9 Matching Annotations
1. Nov 2018
2. iphysresearch.github.io
1. Explaining Deep Learning Models - A Bayesian Non-parametric Approach

Papers discussing model interpretability are always intriguing. The authors note that prior work, based on a network's output, falls into two explanatory approaches: white-box and black-box explanation. This paper proposes a new black-box method (estimating the general sensitivity level of a target model to specific input dimensions) by constructing a DMM-MEN.

#### URL

3. Sep 2018
4. docs.pymc.io
1. conditional distribution for individual components can be constructed

So the conditional distribution of each component is conditioned on the other components?

2. p(y∣x)=∫p(y∣f,x)p(f∣x)df

\(y\) is the observed data, \(f\) is the latent function (the model), and \(x\) is the input variable.
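The marginalization over \(f\) can be checked numerically. A minimal sketch, with hypothetical numbers: at a fixed input \(x\), assume a prior \(f \sim N(0, 1)\) and likelihood \(y \mid f \sim N(f, 0.5^2)\); integrating out \(f\) then gives \(y \sim N(0, 1 + 0.5^2)\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D setup at a fixed input x:
#   prior over the latent function value: f ~ N(0, 1), i.e. p(f | x)
#   likelihood: y | f ~ N(f, 0.5^2), i.e. p(y | f, x)
# Marginalizing f analytically gives y ~ N(0, 1 + 0.25).
f = rng.normal(0.0, 1.0, size=500_000)  # draws from p(f | x)
y = rng.normal(f, 0.5)                  # draws from p(y | f, x)

# The empirical variance of y matches the analytic marginal.
print(round(y.var(), 2))  # ≈ 1.25
```

Sampling \(f\) and then \(y \mid f\) is exactly the Monte Carlo version of the integral \(\int p(y \mid f, x)\,p(f \mid x)\,df\).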

#### URL

5. am207.github.io
1. in equation B for the marginal of a gaussian, only the covariance of the block of the matrix involving the unmarginalized dimensions matters! Thus "if you ask only for the properties of the function (you are fitting to the data) at a finite number of points, then inference in the Gaussian process will give you the same answer if you ignore the infinitely many other points, as if you would have taken them all into account!" (Rasmussen)

key insight into Gaussian processes
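This marginalization property is easy to verify empirically: sampling from a joint Gaussian and discarding a coordinate yields the same distribution as the sub-block of the covariance matrix. A small NumPy sketch with an arbitrary 3×3 covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint covariance over three "points" of the function; marginalizing
# out the third dimension should leave the top-left 2x2 block untouched.
cov = np.array([[2.0, 0.5, 0.3],
                [0.5, 1.5, 0.4],
                [0.3, 0.4, 1.0]])
mean = np.zeros(3)

# Draw samples from the full 3-D Gaussian, then keep only the first
# two coordinates (i.e. marginalize the third empirically).
samples = rng.multivariate_normal(mean, cov, size=200_000)
empirical_cov = np.cov(samples[:, :2], rowvar=False)

# The empirical covariance matches the 2x2 sub-block of `cov`.
print(np.allclose(empirical_cov, cov[:2, :2], atol=0.05))  # True
```

This is why GP inference at a finite set of points can ignore the infinitely many other points: their rows and columns of the covariance simply never enter the calculation.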

#### URL

6. Jul 2018
7. am207.github.io

#### URL

8. Jan 2016
9. blogs.scientificamerican.com
1. P(B|E) = P(B) X P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
2. The probability that a belief is true given new evidence equals the probability that the belief is true regardless of that evidence times the probability that the evidence is true given that the belief is true divided by the probability that the evidence is true regardless of whether the belief is true. Got that?
3. Initial belief plus new evidence = new and improved belief.
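The formula from the quote, worked through with hypothetical numbers (a rare belief, P(B) = 0.01, and evidence that is much likelier when the belief is true):

```python
# Hypothetical numbers: prior belief P(B) = 0.01; the evidence is seen
# 90% of the time when B is true, P(E|B) = 0.9, and 5% of the time
# otherwise, P(E|not B) = 0.05.
p_b = 0.01
p_e_given_b = 0.9
p_e_given_not_b = 0.05

# P(E) via the law of total probability.
p_e = p_e_given_b * p_b + p_e_given_not_b * (1 - p_b)

# Bayes' theorem: P(B|E) = P(B) * P(E|B) / P(E).
p_b_given_e = p_b * p_e_given_b / p_e
print(round(p_b_given_e, 3))  # 0.154
```

Even strongly diagnostic evidence only raises the belief from 1% to about 15% here, because the prior P(B) is so small: the "new and improved belief" weighs both factors.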

#### URL

10. Oct 2015
11. davidar.io
1. Nearly all ap­pli­ca­tions of prob­a­bil­ity to cryp­tog­ra­phy de­pend on the fac­tor prin­ci­ple (or Bayes’ The­o­rem).

This is easily the most interesting sentence in the paper: Turing used Bayesian analysis for code-breaking during WWII.
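The "factor principle" the paper names is the odds form of Bayes' theorem: posterior odds = prior odds × likelihood ratio, applied multiplicatively as independent pieces of evidence arrive. A minimal sketch with made-up likelihood ratios (all numbers hypothetical, not from Turing's work):

```python
# Factor principle: each independent observation multiplies the odds
# between two hypotheses by its Bayes factor,
#   P(evidence | H1) / P(evidence | H2).
prior_odds = 1.0  # start with even odds between H1 and H2

# Hypothetical Bayes factors for three independent observations.
bayes_factors = [2.0, 0.5, 4.0]

posterior_odds = prior_odds
for bf in bayes_factors:
    posterior_odds *= bf

print(posterior_odds)  # 4.0, i.e. 4:1 in favour of H1
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)  # 0.8
```

Working in odds is what made the method practical for code-breaking: evidence accumulates by multiplication (or addition, in log-odds) rather than by re-normalizing a full posterior each time.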