- Jul 2018
I think this paper and these data could be extremely useful for psychologists, but I also think it needs at least one more analysis: estimating effect sizes by research area while controlling for publication bias.
It's very hard to interpret these estimates given good evidence and arguments that researchers and journals select for p < .05. I think it's safe to assume that the estimates reported in this preprint are larger than the true averages (Simonsohn, Nelson, & Simmons, 2014).
One approach to estimating "selection bias adjusted" effects would be to estimate the effect size for each research area using the code provided in the p-curve effect size paper supplements (http://www.p-curve.com/Supplement/). You could then obtain confidence intervals or percentiles by bootstrapping (resampling studies with replacement and re-running the estimator on each resample), or write code that estimates the lower and upper bounds with the same method used to estimate the average effect.
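To make the suggestion concrete, here is a minimal sketch of both steps in Python (the supplement's own code is R; this is not that code, just the same logic from Simonsohn, Nelson, & Simmons, 2014, under the simplifying assumption that every study reports a two-sample t-test as a (t, df) pair): find the effect size d whose implied distribution of significant p-values best matches the observed ones, then bootstrap over studies for an interval.

```python
# Hypothetical sketch, NOT the supplement's code: p-curve effect estimation
# for two-sample t-tests, plus a percentile bootstrap CI over studies.
import numpy as np
from scipy import stats, optimize

def pcurve_d(t_obs, df, alpha=0.05):
    """Estimate Cohen's d by minimizing the KS distance between the
    observed pp-values and the uniform distribution they should follow
    if d were the true effect (Simonsohn, Nelson, & Simmons, 2014)."""
    t_obs, df = np.asarray(t_obs, float), np.asarray(df, float)
    t_crit = stats.t.isf(alpha / 2, df)   # two-tailed critical t
    keep = t_obs > t_crit                 # p-curve uses only p < .05
    t_obs, df, t_crit = t_obs[keep], df[keep], t_crit[keep]

    def ks_loss(d):
        # noncentrality for a two-cell design with n = (df + 2) / 2 per cell
        ncp = d * np.sqrt((df + 2) / 4)
        # pp-value: probability of a t this large, conditional on p < .05
        pp = stats.nct.sf(t_obs, df, ncp) / stats.nct.sf(t_crit, df, ncp)
        return stats.kstest(pp, "uniform").statistic

    return optimize.minimize_scalar(ks_loss, bounds=(0.0, 2.0),
                                    method="bounded").x

def bootstrap_ci(t_obs, df, n_boot=500, alpha=0.05, seed=1):
    """Percentile bootstrap CI: resample (t, df) pairs with replacement
    and re-run the p-curve estimator on each resample."""
    rng = np.random.default_rng(seed)
    t_obs, df = np.asarray(t_obs), np.asarray(df)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(t_obs), len(t_obs))  # resample studies
        boots.append(pcurve_d(t_obs[idx], df[idx]))
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])
```

Running `pcurve_d` once per research area, with `bootstrap_ci` for the bounds, would give the "selection bias adjusted" table I have in mind; for designs other than two-sample t-tests the noncentrality line would need to change, as in the supplement.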
This approach assumes the p-values all test the hypothesis of interest and are not subject to additional selection biases beyond p < .05 (see "Selecting p Values" in Simonsohn, Nelson, & Simmons, 2014, p. 540).
Hope this helps make the paper better!