 Jul 2023

xianblog.wordpress.com

weakly informative approach to Bayesian analysis
In [[Richard McElreath]]'s [[Statistical Rethinking]], he defines [[weakly informative priors]] (aka [[regularizing priors]]) as
priors that gently nudge the machine [which] usually improve inference. Such priors are sometimes called regularizing or weakly informative priors. They are so useful that non-Bayesian statistical procedures have adopted a mathematically equivalent approach, [[penalized likelihood]]. (p. 35, 1st ed.)


civil.colorado.edu

Science is not described by the falsification standard, as Popper recognized and argued. In fact, deductive falsification is impossible in nearly every scientific context. In this section, I review two reasons for this impossibility.
(1) Hypotheses are not models. The relations among hypotheses and different kinds of models are complex. Many models correspond to the same hypothesis, and many hypotheses correspond to a single model. This makes strict falsification impossible.
(2) Measurement matters. Even when we think the data falsify a model, another observer will debate our methods and measures. They don't trust the data. Sometimes they are right.
For both of these reasons, deductive falsification never works. The scientific method cannot be reduced to a statistical procedure, and so our statistical methods should not pretend.
Seems consistent with how Popper used the terms [[falsification]] and [[falsifiability]] noted here

Statistical Rethinking: A Bayesian Course with Examples in R and Stan, by Richard McElreath

Statisticians, for their part, can derive pleasure from scolding scientists, which just makes the psychological battle worse.
Note to self: don't do this.

So where do priors come from? They are engineering assumptions, chosen to help the machine learn. The flat prior in Figure 2.5 is very common, but it is hardly ever the best prior. You'll see later in the book that priors that gently nudge the machine usually improve inference. Such priors are sometimes called regularizing or weakly informative priors. They are so useful that non-Bayesian statistical procedures have adopted a mathematically equivalent approach, penalized likelihood. These priors are conservative, in that they tend to guard against inferring strong associations between variables.
p. 35 where [[Richard McElreath]] defines [[weakly informative priors]] aka [[regularizing priors]] in [[Bayesian statistics]]. Notes that non-Bayesian methods have a mathematically equivalent approach called [[penalized likelihood]].
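The equivalence McElreath mentions can be made concrete. For a linear model with Gaussian noise, a Gaussian prior on the coefficients gives a MAP estimate that matches ridge regression (a penalized likelihood) exactly, with penalty weight equal to the noise-to-prior variance ratio. A minimal numpy sketch of this correspondence (my own illustration; the variable names and simulated data are assumptions, not from the book):

```python
import numpy as np

# Sketch: for y = X @ beta + Normal(0, sigma2) noise, a prior
# beta ~ Normal(0, tau2 * I) yields a MAP estimate identical to
# ridge regression with penalty lam = sigma2 / tau2.

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
sigma2, tau2 = 1.0, 0.25
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Penalized likelihood (ridge): argmin ||y - X b||^2 + lam * ||b||^2
lam = sigma2 / tau2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian MAP under the Gaussian prior, computed from the posterior
# normal equations: (X'X/sigma2 + I/tau2) b = X'y/sigma2
beta_map = np.linalg.solve(X.T @ X / sigma2 + np.eye(p) / tau2,
                           X.T @ y / sigma2)

print(np.allclose(beta_ridge, beta_map))  # the two estimates coincide
```

A tighter prior (smaller `tau2`) corresponds to a larger penalty `lam`, which is the "conservative" behavior the quote describes: both shrink coefficients toward zero and guard against inferring strong associations.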

The other imagines instead that population size fluctuates through time, which can be true even when there is no selective difference among alleles.
McElreath is referring to \(\text{P}_{0\text{B}}\) (process model zeroB).

one assumes the population size and structure have been constant long enough for the distribution of alleles to reach a steady state
The population size & structure being "constant" is what [[Richard McElreath]] means by "equilibrium" in \(\text{P}_{0\text{A}}\) (process model zeroA), which corresponds to the null hypothesis
\(\text{H}_0: \text{``Evolution is neutral''}\)
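McElreath's point that several process models correspond to the same hypothesis can be simulated directly. Below is a hypothetical sketch (my illustration, not code from the book): two neutral Wright-Fisher drift models, one with constant population size (as in \(\text{P}_{0\text{A}}\)) and one with fluctuating size (as in \(\text{P}_{0\text{B}}\)). Neither includes selection, yet both produce allele frequency change, so frequency change alone cannot falsify \(\text{H}_0\).

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher(pop_sizes, p0=0.5):
    """Track one neutral allele's frequency across generations.
    Each generation, 2N gene copies are binomially resampled from
    the current frequency: pure drift, no selection."""
    p = p0
    freqs = [p]
    for N in pop_sizes:
        p = rng.binomial(2 * N, p) / (2 * N)
        freqs.append(p)
    return freqs

T = 200
constant_N = [500] * T                                    # like P0A
fluctuating_N = [int(500 + 400 * np.sin(t / 10)) for t in range(T)]  # like P0B

freqs_a = wright_fisher(constant_N)
freqs_b = wright_fisher(fluctuating_N)

# Both trajectories wander away from 0.5 even though both models are
# neutral; observing change in allele frequencies does not, by itself,
# distinguish between them or falsify neutrality.
print(freqs_a[-1], freqs_b[-1])
```

The specific population sizes and the sinusoidal fluctuation are arbitrary choices for illustration; the qualitative point (drift under both regimes) does not depend on them.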

Per Andrew Gelman's wiki:
Andrew Eric Gelman (born February 11, 1965) is an American statistician and professor of statistics and political science at Columbia University.
Gelman received bachelor of science degrees in mathematics and in physics from MIT, where he was a National Merit Scholar, in 1986. He then received a master of science in 1987 and a doctor of philosophy in 1990, both in statistics from Harvard University, under the supervision of Donald Rubin.[1][2][3]
