On 2015 Dec 09, Marcus Munafò commented:
Jones and colleagues [1] tested for small study bias (which may be caused by publication bias) in the literature on inhibitory control training for appetitive behavior change, using funnel plots and the Egger test. However, these methods are limited when the studies in the analysis span only a narrow range of sample sizes, as is the case here. Other methods may be more sensitive to publication bias, so we applied the excess of significance test developed by Ioannidis and Trikalinos [2].
The excess of significance test uses the best estimate of the true underlying effect size (e.g., the estimate from the largest single study, or from a fixed-effects meta-analysis) to estimate the statistical power of each individual study in a literature to detect that effect. The sum of these power values gives the number of studies expected to be statistically significant in that literature, which can then be compared with the observed number of significant studies using a binomial test. Using the pooled effect size estimates under a fixed-effects model for alcohol (d = 0.43) and food (d = 0.28), the expected number of significant studies is 4.2 and the observed number is 13 (P < 0.001), indicating an excess of significance in this literature.
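The calculation can be reproduced in a few lines. The sketch below is illustrative only: it assumes a set of two-group studies with hypothetical per-group sample sizes and a hypothetical count of significant results (the individual study sizes are not reported in this comment), uses statsmodels for the power calculations and scipy for the binomial comparison, and takes only the pooled estimate d = 0.43 from the meta-analysis.

```python
# Minimal sketch of the excess of significance test, under assumed inputs.
from scipy.stats import binomtest
from statsmodels.stats.power import TTestIndPower

d_pooled = 0.43                   # pooled fixed-effects estimate (alcohol)
group_ns = [15, 20, 24, 30, 18]   # hypothetical per-group sizes, for illustration
observed_sig = 4                  # hypothetical count of significant studies

power = TTestIndPower()
# Power of each study to detect d_pooled in a two-sample t-test,
# two-sided alpha = .05
powers = [power.power(effect_size=d_pooled, nobs1=n, ratio=1.0, alpha=0.05)
          for n in group_ns]
expected_sig = sum(powers)        # expected number of significant studies

# Binomial test: is the observed count of significant studies larger than
# expected, given per-study "success" probability E / n?
p_value = binomtest(observed_sig, n=len(group_ns),
                    p=expected_sig / len(group_ns),
                    alternative="greater").pvalue
print(f"Expected {expected_sig:.1f} significant studies, "
      f"observed {observed_sig}; P = {p_value:.3f}")
```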
Another way of characterizing this is via the average statistical power of studies in this literature, which is only 24%. This is consistent with evidence from other fields [3] and suggests that most studies in this literature are underpowered. To achieve 80% power at a two-sided 5% alpha, studies on alcohol would require at least 172 participants and studies on food at least 404, based on the effect size estimates from the meta-analysis.
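As a check on the quoted sample sizes, the short sketch below solves for the per-group n of a two-sample t-test at 80% power and a two-sided 5% alpha, using the statsmodels power solver; it reproduces the totals of 172 (alcohol) and 404 (food) quoted above.

```python
import math
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for label, d in [("alcohol", 0.43), ("food", 0.28)]:
    # Per-group n for a two-sample t-test, two-sided alpha = .05, power = .80
    n_per_group = solver.solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"{label}: d = {d}, total N >= {2 * math.ceil(n_per_group)}")
# alcohol: d = 0.43, total N >= 172
# food:    d = 0.28, total N >= 404
```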
Marcus R. Munafò, Andrew Jones and Matt Field
1. Jones, A., et al. Inhibitory control training for appetitive behavior change: a meta-analytic investigation of mechanisms of action and moderators of effectiveness. Appetite, 2016, 97, 16-28.
2. Ioannidis, J.P.A. and Trikalinos, T.A. An exploratory test for an excess of significant findings. Clinical Trials, 2007, 4, 245-53.
3. Button, K.S., et al. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 2013, 14, 365-76.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.