 Aug 2022

news.ycombinator.com
 Nov 2021

www.ncbi.nlm.nih.gov

In this report, we investigated the performance of the omnibus test using simulated data. The hierarchical procedure is a widely used approach for comparing multiple (more than two) groups.[1] The omnibus test is intended to preserve the type I error rate by eliminating unnecessary post hoc analyses under the null of no group difference. However, our simulation study shows that the hierarchical approach is not guaranteed to work all the time: the omnibus and post hoc tests are not always in agreement. Since the goal of comparing multiple groups is to find groups with different means, a significant omnibus test gives a false alarm if none of the post hoc tests are significant. More importantly, we may also miss opportunities to detect group differences when the omnibus test is nonsignificant, since some or all post hoc tests may still be significant in that case.

Although we focus on the classic ANOVA model in this report, the same considerations and conclusions also apply to more complex models for comparing multiple groups, such as longitudinal data models [2]. For most models, post hoc tests with significance levels adjusted to account for multiple testing do not have exactly the same type I error rate as the omnibus test as in the case of ANOVA, so it is more difficult to evaluate the performance of the hierarchical procedure; the Bonferroni correction, for example, is generally conservative.

Given our findings, it seems important to always perform pairwise group comparisons, regardless of the significance status of the omnibus test, and to report findings based on those comparisons.
Post hoc not significant when omnibus test is significant.
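The disagreement described above is easy to see in code. This is a minimal sketch (not the report's actual simulation): it runs the omnibus one-way ANOVA and Bonferroni-adjusted pairwise t tests on one simulated data set with three groups; the group means and sizes are made-up values for illustration.

```python
# Sketch: omnibus one-way ANOVA vs. Bonferroni-adjusted pairwise t tests.
# The two conclusions need not agree: the omnibus test can be significant
# while no adjusted pairwise test is, and vice versa.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20  # observations per group (hypothetical)
groups = [rng.normal(loc=mu, scale=1.0, size=n) for mu in (0.0, 0.3, 0.6)]

# Omnibus test: H0 is that all three group means are equal.
f_stat, p_omnibus = stats.f_oneway(*groups)

# Post hoc: all pairwise t tests, Bonferroni-adjusted for 3 comparisons.
pairs = [(0, 1), (0, 2), (1, 2)]
p_pairwise = {(i, j): stats.ttest_ind(groups[i], groups[j]).pvalue
              for i, j in pairs}
alpha = 0.05
any_pair_sig = any(p < alpha / len(pairs) for p in p_pairwise.values())

print(f"omnibus p = {p_omnibus:.3f}, "
      f"any adjusted pairwise test significant: {any_pair_sig}")
```

Running this over many replications (varying the seed) is how one would estimate how often the omnibus and post hoc conclusions diverge.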

 Aug 2020

twitter.com

ReconfigBehSci (@SciBeh) (2020) this kind of piece behavioural scientists need to reject! A shallow understanding of the bias literature in an even shallower application to the pandemic: the idea that believing lockdowns brought down infection rates is an example of the "post hoc fallacy" is bizarre 1/3. Twitter. Retrieved from: https://twitter.com/SciBeh/status/1298939778340184065


www.journalofsurgicalresearch.com

Althouse, A. D. (2020). Post Hoc Power: Not Empowering, Just Misleading. Journal of Surgical Research, 0(0). https://doi.org/10.1016/j.jss.2019.10.049

 May 2019

stats.stackexchange.com

Multiple comparisons: It is not good practice to test for significant differences among pairs of group means unless the ANOVA suggests some such differences exist. Nevertheless, I admit it is tempting to take another look at the comparison of G1 with G3 (ignoring the existence of G2 and perhaps assuming normality), but then you should use a Welch t test to account for the differences in sample variances, and you should not make claims about the result unless the P-value is as low as .01 or .02. Looking at that difference more carefully might prompt a subsequent experiment.
Test for significance among pairs when the overall F test is not significant.
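A Welch t test as recommended in the quote above is a one-line call in scipy; `equal_var=False` is what selects the Welch (unequal-variances) version rather than Student's pooled-variance test. The G1/G3 data below are made-up values purely for illustration.

```python
# Welch t test for two groups with possibly unequal variances.
# equal_var=False selects Welch's test (Satterthwaite degrees of freedom).
import numpy as np
from scipy import stats

g1 = np.array([4.2, 5.1, 3.8, 4.9, 5.4, 4.0])  # hypothetical G1 values
g3 = np.array([6.0, 7.3, 5.5, 8.1, 6.9, 7.7])  # hypothetical G3 values

t_stat, p_value = stats.ttest_ind(g1, g3, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")

# Per the advice above, treat the result as a claim only if the P-value
# clears a stricter threshold (e.g. .01 or .02) than the usual .05,
# since this comparison was not protected by a significant omnibus test.
```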

 Jan 2016

tachesdesens.blogspot.com

Nothing would have changed
This is always a retrospective view. Always worry about the post hoc view that this represents. Perhaps the change was inevitable given the set of initial conditions the classroom represented at this point in time. Perhaps we need the hard rock problem that the status quo ante bellum represents. Perhaps it is just one narrative of many that are equally or more compelling.
