64 Matching Annotations
  1. Jan 2023
    1. Hermeneutic circle

      In traditional humanities scholarship, the hermeneutic circle refers to the way we understand a part of a text in terms of our ideas about its overall structure and meaning -- while also, in a cyclic fashion, updating our beliefs about that overall structure and meaning in response to particular moments in the text.
  2. Aug 2022
  3. Apr 2022
  4. Aug 2021
  5. Jul 2021
  6. May 2021
  7. Apr 2021
  8. Mar 2021
    1. inference and learning in Bayesian networks.

      Learning as in machine learning?
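
      In this literature "learning" typically does mean machine learning in the statistical sense: estimating the network's parameters (or structure) from data, while "inference" means computing posteriors with the model held fixed. A minimal sketch with a hypothetical two-node network (all data synthetic, nothing from the annotated paper):

      ```python
      # Hypothetical two-node Bayesian network: rain -> wet grass.
      import numpy as np

      rng = np.random.default_rng(0)

      # --- Learning: estimate the CPT parameters from data (max likelihood) ---
      rain = rng.random(5000) < 0.2                 # synthetic ground truth
      wet = np.where(rain, rng.random(5000) < 0.9,  # P(wet | rain) = 0.9
                           rng.random(5000) < 0.1)  # P(wet | ~rain) = 0.1

      p_rain = rain.mean()
      p_wet_given_rain = wet[rain].mean()
      p_wet_given_dry = wet[~rain].mean()

      # --- Inference: with parameters fixed, apply Bayes' rule ---
      p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
      p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
      print(f"P(rain | wet) ≈ {p_rain_given_wet:.2f}")  # learned, then inferred
      ```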

  9. Feb 2021
  10. Dec 2020
  11. Oct 2020
  12. Sep 2020
  13. Aug 2020
  14. Jul 2020
  15. Jun 2020
    1. Friston, K. J., Parr, T., Zeidman, P., Razi, A., Flandin, G., Daunizeau, J., Hulme, O. J., Billig, A. J., Litvak, V., Moran, R. J., Price, C. J., & Lambert, C. (2020). Dynamic causal modelling of COVID-19. ArXiv:2004.04463 [q-Bio]. http://arxiv.org/abs/2004.04463

  16. May 2020
  17. Apr 2020
  18. Nov 2018
    1. Explaining Deep Learning Models - A Bayesian Non-parametric Approach

      Papers discussing model interpretability are always intriguing. The paper notes that prior work on explaining a network's output falls into two camps: white-box and black-box explanation. This paper proposes a new black-box method (estimating the general sensitivity level of a target model to specific input dimensions) by building a DMM-MEN.
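
      A generic way to get at that kind of per-dimension sensitivity score (a simplified finite-difference sketch, not the paper's DMM-MEN method):

      ```python
      # Simplified illustration of input-dimension sensitivity for a black-box
      # model -- a stand-in for the idea, not the paper's actual approach.
      import numpy as np

      def sensitivity(model, X, eps=1e-2):
          """Mean absolute output change per unit perturbation, per input dim."""
          base = model(X)
          scores = np.empty(X.shape[1])
          for i in range(X.shape[1]):
              X_pert = X.copy()
              X_pert[:, i] += eps
              scores[i] = np.abs(model(X_pert) - base).mean() / eps
          return scores

      # Toy black box: only dimensions 0 and 2 actually matter.
      black_box = lambda X: 3.0 * X[:, 0] - 2.0 * X[:, 2]
      X = np.random.default_rng(1).normal(size=(200, 5))
      print(sensitivity(black_box, X))  # large scores only at dims 0 and 2
      ```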

  19. Sep 2018
    1. conditional distribution for individual components can be constructed

      So the conditional distribution is conditioned on other components?
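
      If the components are jointly Gaussian -- which I take to be the setting here -- then yes: the conditional of one block given the rest has a closed form, the standard partitioned-Gaussian identity

      \[ p(x_1 \mid x_2) = \mathcal{N}\!\left(\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right) \]

      where \(\mu\) and \(\Sigma\) are partitioned into blocks over \(x_1\) and \(x_2\). So each component's conditional is indeed conditioned on (a linear function of) the other components.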

    2. \(p(y \mid x) = \int p(y \mid f, x)\, p(f \mid x)\, df\)

      \(y\) is the data, \(f\) is the model (the latent function), and \(x\) is the input variable.
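
      For the common special case of GP regression with a Gaussian likelihood and a zero-mean prior (assuming the standard setup; notation mine), this integral is analytic:

      \[ p(y \mid x) = \int \mathcal{N}(y \mid f, \sigma^2 I)\, \mathcal{N}(f \mid 0, K)\, df = \mathcal{N}(y \mid 0, K + \sigma^2 I) \]

      where \(K\) is the kernel matrix at the inputs \(x\) and \(\sigma^2\) is the noise variance.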

    3. in equation B for the marginal of a Gaussian, only the covariance of the block of the matrix involving the unmarginalized dimensions matters! Thus “if you ask only for the properties of the function (you are fitting to the data) at a finite number of points, then inference in the Gaussian process will give you the same answer if you ignore the infinitely many other points, as if you would have taken them all into account!” (Rasmussen)

      A key insight into Gaussian processes.
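
      A quick numerical check of this property (a sketch; the kernel and points are arbitrary):

      ```python
      # The GP marginalization property: the covariance over a few points is the
      # same whether or not the kernel matrix "includes" many other points.
      import numpy as np

      def rbf(a, b, ell=1.0):
          return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

      x_few = np.array([0.0, 1.0, 2.0])
      x_many = np.concatenate([x_few, np.linspace(-5.0, 5.0, 50)])  # superset

      K_few = rbf(x_few, x_few)
      K_many = rbf(x_many, x_many)

      # Marginalizing a Gaussian = reading off the relevant covariance block.
      assert np.allclose(K_many[:3, :3], K_few)
      print("Same marginal covariance over the 3 points either way.")
      ```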

  20. Jul 2018
  21. Jan 2016
    1. P(B|E) = P(B) × P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
    2. The probability that a belief is true given new evidence equals the probability that the belief is true regardless of that evidence times the probability that the evidence is true given that the belief is true divided by the probability that the evidence is true regardless of whether the belief is true. Got that?
    3. Initial belief plus new evidence = new and improved belief.
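
      A worked numerical instance of the formula above, with made-up numbers (B = "patient has the disease", E = "test is positive"):

      ```python
      # Worked instance of P(B|E) = P(B) × P(E|B) / P(E); all numbers invented.
      p_b = 0.01                  # prior: 1% of patients have the disease
      p_e_given_b = 0.90          # test sensitivity
      p_e_given_not_b = 0.05      # false-positive rate

      # P(E) by total probability, then Bayes' rule:
      p_e = p_e_given_b * p_b + p_e_given_not_b * (1 - p_b)
      p_b_given_e = p_b * p_e_given_b / p_e
      print(f"P(B|E) = {p_b_given_e:.3f}")   # ≈ 0.154: improved, still unlikely
      ```
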
  22. Oct 2015
    1. Nearly all applications of probability to cryptography depend on the factor principle (or Bayes’ Theorem).

      This is easily the most interesting sentence in the paper: Turing used Bayesian analysis for code-breaking during WWII.
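
      The factor principle is Bayes' theorem in odds form: posterior odds = prior odds times the factor \(P(E \mid B) / P(E \mid \lnot B)\). Turing scored the logarithm of the factor in decibans, so that independent pieces of evidence add. A sketch with invented numbers:

      ```python
      # Bayes' theorem in odds form (the "factor principle"); numbers invented.
      import math

      prior_odds = 1 / 999                  # e.g. a 1-in-1000 prior on B
      factor = 0.90 / 0.05                  # evidence is 18x likelier under B
      posterior_odds = prior_odds * factor

      decibans = 10 * math.log10(factor)    # Turing's additive unit of evidence
      print(f"posterior odds = {posterior_odds:.4f}, "
            f"weight of evidence = {decibans:.1f} decibans")
      ```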