 Mar 2022

en.wikipedia.org

In 1925, Ronald Fisher advanced the idea of statistical hypothesis testing, which he called "tests of significance", in his publication Statistical Methods for Research Workers.[28][29][30] Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.[31] In a 1933 paper, Jerzy Neyman and Egon Pearson called this cutoff the significance level, which they named α. They recommended that α be set ahead of time, prior to any data collection.[31][32] Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed. In his 1956 publication Statistical Methods and Scientific Inference, he recommended that significance levels be set according to specific circumstances.[31]
The lofty p=0.05 is utter bullshit. It was just an arbitrary, made-up value with no real evidence behind it.
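For a sense of what that arbitrary cutoff actually means as a detection threshold, here's a quick sketch (Python stdlib only; the function name is mine) converting a significance level α into the equivalent z ("sigma") cutoff on a standard normal test:

```python
# Sketch: the sigma cutoff implied by a significance level alpha,
# assuming a standard-normal test statistic. Stdlib only.
from statistics import NormalDist

def alpha_to_z(alpha, two_tailed=True):
    """z threshold at which a standard-normal test rejects at level alpha."""
    tail = alpha / 2 if two_tailed else alpha
    return NormalDist().inv_cdf(1 - tail)

print(round(alpha_to_z(0.05), 3))         # ~1.96 sigma, two-tailed
print(round(alpha_to_z(0.05, False), 3))  # ~1.645 sigma, one-tailed
```

So Fisher's 0.05 is only about a 2-sigma bar, far below what physics treats as discovery.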


www.nature.com

only 2.5 sigma
That's a one-tailed p of about 0.62%, i.e. roughly a 99.38% confidence level, yet that's considered "weak". Would that we could do that in medicine or the social sciences.
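The arithmetic behind that figure is just the standard normal CDF; a minimal sketch (stdlib only, function names are mine):

```python
# Sketch: one-tailed confidence level for an n-sigma threshold,
# via the standard normal CDF Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
import math

def phi(z):
    """Standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sigma_to_confidence(sigma):
    """One-tailed confidence level for a given sigma threshold."""
    return phi(sigma)

print(f"{sigma_to_confidence(2.5):.4%}")  # ~99.38%
print(f"{sigma_to_confidence(5.0):.7%}")  # the particle-physics "5 sigma" bar
```

At 5 sigma the one-tailed p drops to roughly 3 in 10 million, which is why 2.5 sigma reads as "weak" in that field.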

 Feb 2021


Altman, D. G., & Bland, J. M. (1995). Statistics notes: Absence of evidence is not evidence of absence. BMJ, 311(7003), 485. https://doi.org/10.1136/bmj.311.7003.485

 Aug 2020

jamanetwork.com

Khan MS, Fonarow GC, Friede T, et al. Application of the Reverse Fragility Index to Statistically Nonsignificant Randomized Clinical Trial Results. JAMA Netw Open. 2020;3(8):e2012469. doi:10.1001/jamanetworkopen.2020.12469

 Jun 2020


Zhang, L., & Peixoto, T. P. (2020). Statistical inference of assortative community structures. arXiv:2006.14493 [cond-mat, physics, stat]. http://arxiv.org/abs/2006.14493
