65 Matching Annotations
  1. Jun 2023
    1. Children develop a succession of different, increasingly accurate, conceptions of the world and it at least appears that they do this as a result of their experience. But how can the concrete particulars of experience become the abstract structures of knowledge?

      I was unfamiliar with Bayesian learning/Bayesian inference before reading this article. I looked it up and found a helpful tool here: https://seeing-theory.brown.edu/bayesian-inference/index.html. Much of the material I read to familiarize myself with the topic discussed it in the context of machine learning. I can see how the idea of "how one should update one’s beliefs upon observing data" can apply to student learning, especially for young kids (see the sketch below).

      Kunin, D., Guo, J., Dae Devlin, T., & Xiang, D. (n.d.). Bayesian inference. Seeing Theory. https://seeing-theory.brown.edu/bayesian-inference/index.html
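      As a concrete sketch of that updating idea (my own hypothetical example, not from the article): with a Beta prior over a coin's bias, each observed flip updates the belief in closed form.

      ```python
      # Hypothetical sketch of Bayesian belief updating (Beta-Binomial
      # conjugacy): a Beta prior over a coin's P(heads) is revised after
      # each observed flip -- "updating beliefs upon observing data".
      from scipy import stats

      alpha, beta = 1.0, 1.0                   # Beta(1, 1) = uniform prior
      observations = [1, 1, 0, 1, 0, 1, 1, 1]  # 1 = heads, 0 = tails

      for flip in observations:
          alpha += flip        # each head raises alpha
          beta += 1 - flip     # each tail raises beta

      posterior = stats.beta(alpha, beta)
      print(f"posterior mean P(heads) = {posterior.mean():.2f}")  # 0.70
      ```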

  2. Jan 2023
    1. Hermeneutic circle

      In traditional humanities scholarship, the hermeneutic circle refers to the way in which we understand some part of a text in terms of our ideas about its overall structure and meaning -- but also the way in which we, in cyclic fashion, update our beliefs about the overall structure and meaning of the text in response to particular moments.
  3. Mar 2021
    1. inference and learning in Bayesian networks.

      Learning as in machine learning? (See the sketch below.)
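      A minimal hypothetical sketch of the "inference" half: computing a posterior in a two-node network (Rain -> WetGrass) by enumeration. "Learning", by contrast, would estimate these probability tables from data -- which is where the machine-learning sense comes in.

      ```python
      # Hypothetical two-node Bayesian network: Rain -> WetGrass.
      # Inference: compute P(Rain | Wet=True) by enumeration.
      p_rain = {True: 0.2, False: 0.8}             # prior P(Rain)
      p_wet_given_rain = {True: 0.9, False: 0.1}   # P(Wet=True | Rain)

      # Joint P(Rain=r, Wet=True), then normalize by P(Wet=True).
      joint = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
      evidence = sum(joint.values())
      posterior = {r: v / evidence for r, v in joint.items()}

      print(posterior)  # {True: ~0.69, False: ~0.31}
      ```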

  4. Jun 2020
    1. Friston, K. J., Parr, T., Zeidman, P., Razi, A., Flandin, G., Daunizeau, J., Hulme, O. J., Billig, A. J., Litvak, V., Moran, R. J., Price, C. J., & Lambert, C. (2020). Dynamic causal modelling of COVID-19. arXiv:2004.04463 [q-bio]. http://arxiv.org/abs/2004.04463

  5. Nov 2018
    1. Explaining Deep Learning Models - A Bayesian Non-parametric Approach

      Without question, papers discussing model interpretability are always intriguing. The paper notes that prior work, starting from a network's output, has produced two lines of explanation: white-box and black-box explanation. This paper proposes a new black-box method (the general sensitivity level of a target model to specific input dimensions) by constructing a DMM-MEN; a generic sketch of the black-box idea follows below.
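      To make the generic black-box idea concrete (a hypothetical illustration of sensitivity probing in general, not the paper's DMM-MEN): perturb one input dimension at a time and watch the output move.

      ```python
      import numpy as np

      # Hypothetical illustration of generic black-box sensitivity
      # probing (NOT the paper's DMM-MEN): perturb each input dimension
      # and measure how much the opaque model's output changes.
      def dimension_sensitivity(model, x, eps=1e-2):
          base = model(x)
          sens = np.zeros_like(x)
          for i in range(len(x)):
              x_pert = x.copy()
              x_pert[i] += eps
              sens[i] = abs(model(x_pert) - base) / eps
          return sens  # larger = output more sensitive to that dimension

      toy_model = lambda x: 3.0 * x[0] + 0.1 * x[1] ** 2  # stand-in network
      print(dimension_sensitivity(toy_model, np.array([1.0, 2.0])))
      # ~[3.0, 0.4]: dimension 0 dominates this toy model
      ```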

  6. Sep 2018
    1. conditional distribution for individual components can be constructed

      So the conditional distribution is conditioned on the other components?
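      Yes -- and for a jointly Gaussian vector this has a standard closed form (the textbook result, stated here for reference): if \((x_1, x_2)\) are jointly Gaussian with mean blocks \(\mu_1, \mu_2\) and covariance blocks \(\Sigma_{ij}\), then

      \[ x_1 \mid x_2 \sim \mathcal{N}\!\left( \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\; \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} \right), \]

      so each component's conditional is indeed shifted and narrowed by the other components.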

    2. \(p(y \mid x) = \int p(y \mid f, x)\, p(f \mid x)\, df\)

      \(y\) is the data (the observed outputs), \(f\) is the model (the latent function, which the integral marginalizes out), and \(x\) is the input variable.
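      Under the standard Gaussian-process regression assumptions (a zero-mean Gaussian prior over \(f\) and Gaussian observation noise -- assumptions added here, following the usual treatment), the integral has a closed form:

      \[ p(f \mid x) = \mathcal{N}(f \mid 0, K), \quad p(y \mid f, x) = \mathcal{N}(y \mid f, \sigma^2 I) \;\Rightarrow\; p(y \mid x) = \mathcal{N}(y \mid 0, K + \sigma^2 I), \]

      where \(K\) is the kernel matrix evaluated at the inputs \(x\).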

    3. in equation B for the marginal of a Gaussian, only the covariance block of the matrix involving the unmarginalized dimensions matters! Thus "if you ask only for the properties of the function (you are fitting to the data) at a finite number of points, then inference in the Gaussian process will give you the same answer if you ignore the infinitely many other points, as if you would have taken them all into account!" (Rasmussen)

      key insight into Gaussian processes
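      That insight is easy to check numerically (a hypothetical sketch with an RBF kernel): the covariance block for a few points of interest is identical whether or not extra points join the joint distribution.

      ```python
      import numpy as np

      # Check the GP marginalization property: the prior covariance of
      # f at 3 kept inputs is the same sub-block whether or not 100
      # extra inputs are included in the joint Gaussian.
      rbf = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

      x_keep  = np.array([0.0, 1.0, 2.0])
      x_extra = np.linspace(-5.0, 5.0, 100)
      x_all   = np.concatenate([x_keep, x_extra])

      K_small = rbf(x_keep, x_keep)        # kernel on kept points only
      K_big   = rbf(x_all, x_all)[:3, :3]  # same block of the big joint

      print(np.allclose(K_small, K_big))   # True: the rest can be ignored
      ```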

  7. Jan 2016
    1. P(B|E) = P(B) × P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
    2. The probability that a belief is true given new evidence equals the probability that the belief is true regardless of that evidence times the probability that the evidence is true given that the belief is true divided by the probability that the evidence is true regardless of whether the belief is true. Got that?
    3. Initial belief plus new evidence = new and improved belief.
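      Plugging numbers in (a hypothetical diagnostic-test example with made-up rates) shows how the update works:

      ```python
      # Hypothetical worked example of P(B|E) = P(B) * P(E|B) / P(E):
      # B = "patient has the disease", E = "test came back positive".
      p_b = 0.01              # P(B): prior prevalence, 1%
      p_e_given_b = 0.95      # P(E|B): sensitivity
      p_e_given_not_b = 0.05  # P(E|~B): false-positive rate

      # P(E) by total probability: P(E|B)P(B) + P(E|~B)P(~B)
      p_e = p_e_given_b * p_b + p_e_given_not_b * (1 - p_b)

      p_b_given_e = p_b * p_e_given_b / p_e
      print(f"P(B|E) = {p_b_given_e:.3f}")  # ~0.161: updated, still unlikely
      ```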
  8. Oct 2015
    1. Nearly all applications of probability to cryptography depend on the factor principle (or Bayes’ Theorem).

      This is easily the most interesting sentence in the paper: Turing used Bayesian analysis for code-breaking during WWII.
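      The factor principle is Bayes’ theorem in odds form: posterior odds = prior odds × likelihood ratio (the "factor", i.e. the Bayes factor). A hypothetical sketch of the sequential scoring Turing's team used, where each factor is logged in decibans (10 × log10 of the factor):

      ```python
      import math

      # Hypothetical sketch of the factor principle (odds form of Bayes):
      # posterior odds = prior odds * product of likelihood ratios.
      prior_odds = 1 / 1000       # e.g., odds that a key guess is right
      factors = [4.0, 2.5, 3.0]   # likelihood ratios from observations

      posterior_odds = prior_odds * math.prod(factors)
      decibans = sum(10 * math.log10(f) for f in factors)

      print(f"posterior odds: {posterior_odds:.3f}")  # 0.030
      print(f"evidence: {decibans:.1f} decibans")     # ~14.8
      ```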