- Apr 2020
-
psyarxiv.com
-
three dimensions. First, we estimated separate models in which we regressed trait ratings on a dummy variable indicating which trait was judged (coded 0 for gullibility and 1 for trustworthiness, for example), the relevant set of facial features (e.g., all demographic variables), and an interaction between the trait dummy and a specific facial feature (e.g., gender).
This is a small thing I noticed while reading the section on how you tested whether gullibility can be distinguished from trustworthiness.
Interaction terms built from dummy variables change how the lower-order "main effect" coefficients should be interpreted: with 0/1 coding, the coefficient for a predictor is its simple effect at the reference level of the other variable, not a main effect averaged across levels. Just in case, you should check whether effect coding your categorical variables changes your main-effect interpretations. I can explain more if you're curious how multiplying dummy codes gives the correct interaction but misleading main effects. Very interesting paper; sorry to focus on a small detail!
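In case a concrete example helps, here's a small Python sketch (hypothetical data and variable names, not your actual model) of the point: with dummy (0/1) coding the lower-order "gender" coefficient is the gender effect only at the reference trait, whereas with effect (-1/+1) coding it becomes the average gender effect across the two traits; the interaction's significance test is the same either way.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: ratings of two traits (0 = gullibility, 1 = trustworthiness)
# for faces varying in gender (0/1), with a built-in trait-by-gender interaction.
rng = np.random.default_rng(1)
n = 2000
trait = rng.integers(0, 2, n)
gender = rng.integers(0, 2, n)
rating = 3 + 0.5 * trait + 0.3 * gender + 0.4 * trait * gender + rng.normal(0, 1, n)
df = pd.DataFrame(dict(rating=rating, trait=trait, gender=gender))

# Dummy (0/1) coding: the "gender" coefficient (~0.3) is the gender effect
# for the trait coded 0 only, i.e., a simple effect, not an averaged main effect.
dummy_fit = smf.ols("rating ~ trait * gender", data=df).fit()

# Effect (-1/+1) coding: the "gender_e" coefficient (~0.25) is half the average
# 0-to-1 gender effect across both traits ((0.3 + 0.7) / 2 = 0.5, halved because
# gender_e spans two units). The interaction's t-test is unchanged; only its
# coefficient is rescaled.
df["trait_e"] = np.where(df["trait"] == 1, 1, -1)
df["gender_e"] = np.where(df["gender"] == 1, 1, -1)
effect_fit = smf.ols("rating ~ trait_e * gender_e", data=df).fit()

print(dummy_fit.params)
print(effect_fit.params)
```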
-
- Aug 2018
-
-
I really enjoyed this preprint. I have one comment: throughout the manuscript, you reference univariate effect sizes. It seems relevant to also reference a multivariate effect size (i.e., Mahalanobis D), if only to make the point that even small mean differences in univariate preferences, personality traits, and cognitive abilities can combine to create perceptibly large gender/sex differences. I know this topic is debated (and the authors are critical of D and what it means), but the thesis seems incomplete without reference to how preferences/traits/abilities could combine to create differences in which majors/jobs/careers men and women pursue and excel at (on average). I think the commentaries here (https://marcodgdotnet.files.wordpress.com/2014/04/delgiudice_etal_2012_comments_reply.pdf) and the examples described here (https://marcodgdotnet.files.wordpress.com/2014/04/delgiudice_2013_is-d_valid_ep.pdf) make the debate and the conceptual utility of D clear.
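To make that combination concrete, here's a toy numerical illustration in Python (all numbers made up) of the formula D = sqrt(d' R^-1 d): several individually small univariate d values can aggregate into a much larger multivariate distance, with the trait intercorrelations in R shrinking or amplifying the result.

```python
import numpy as np

# Toy example: ten preference/trait dimensions, each with only a small
# univariate difference of d = 0.3 (made-up numbers for illustration).
k = 10
d = np.full(k, 0.3)

# If the dimensions were uncorrelated, D reduces to sqrt(sum of d_i^2):
D_uncorrelated = np.sqrt(d @ d)              # ~0.95, despite every d being 0.3

# With a pooled within-group correlation matrix R, D = sqrt(d' R^-1 d);
# uniform positive correlations of r = .2 shrink it somewhat here.
R = np.full((k, k), 0.2) + 0.8 * np.eye(k)
D = np.sqrt(d @ np.linalg.solve(R, d))       # ~0.57 with these correlations

print(round(D_uncorrelated, 2), round(D, 2))
```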
-
- Jul 2018
-
psyarxiv.com
-
I think this paper and these data could be extremely useful for psychologists, but I also think this paper needs at least one more analysis: estimating effect sizes by research area, controlling for publication bias.
It's very hard to interpret these estimates given good evidence and arguments that researchers and journals select for p < .05. I think it's safe to assume that all of the estimates reported in this preprint are bigger than the true averages (Simonsohn, Nelson, & Simmons, 2014).
One approach to estimating "selection-bias-adjusted" effects would be to estimate the effect size for each research area using the code provided in the p-curve effect-size paper supplements (http://www.p-curve.com/Supplement/). You could estimate confidence intervals or percentiles using bootstrapping procedures, or write code to estimate the lower and upper bounds using the same methods used to estimate the average effect.
This approach assumes the p-values all test the hypothesis of interest and don't suffer from unique selection biases (see "Selecting p Values" in Simonsohn, Nelson, & Simmons, 2014, p. 540).
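For a rough sense of what that estimation involves, here's a minimal Python sketch of the p-curve effect-size idea (the official supplement code is in R; the function names, the equal-n two-sample setup, and the 0-2 search range below are my own simplifications): for a candidate effect size d, compute each significant result's pp-value (its p-value conditional on being significant under that d) and pick the d that makes the pp-values look most uniform.

```python
import numpy as np
from scipy import stats, optimize

def pp_values(d, t_obs, df, n_per_cell):
    """pp-values of significant two-sample t-tests under candidate effect size d
    (assumes equal cell sizes and results in the predicted direction)."""
    ncp = np.sqrt(n_per_cell / 2.0) * d          # noncentrality parameter
    t_crit = stats.t.ppf(0.975, df)              # two-tailed .05 cutoff
    # probability of exceeding the observed t, given that the result is significant
    return stats.nct.sf(t_obs, df, ncp) / stats.nct.sf(t_crit, df, ncp)

def ks_loss(d, t_obs, df, n_per_cell):
    """Distance of the pp-value distribution from uniform (smaller = better fit)."""
    return stats.kstest(pp_values(d, t_obs, df, n_per_cell), "uniform").statistic

def pcurve_effect_size(t_obs, df, n_per_cell):
    """Effect size d whose implied pp-values are closest to uniform."""
    args = (np.asarray(t_obs, float), np.asarray(df, float), np.asarray(n_per_cell, float))
    res = optimize.minimize_scalar(ks_loss, bounds=(0.0, 2.0), args=args, method="bounded")
    return res.x

# Made-up example: four significant t-tests (per-cell n, so df = 2n - 2).
print(pcurve_effect_size(t_obs=[2.3, 2.8, 2.1, 3.4], df=[38, 58, 28, 78],
                         n_per_cell=[20, 30, 15, 40]))
```

Bootstrapping the studies (resampling rows of t_obs/df/n and re-running the estimator) would give the percentile intervals mentioned above.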
Hope this helps make the paper better!