1 Matching Annotation
  1. Jul 2018
    1. On 2017 Aug 02, Gregory Francis commented:

      The journal Perspectives on Psychological Science used to have an online commenting system. It appears to have been discontinued, and all past comments removed. In January 2016, I published a comment on this article; I reproduce it below.

      Clarifying the role of data detectives

      Since my work was presented as an example of the "gotcha gang" and "witch hunt" activity (p. 892), I feel it is necessary to post a comment that more accurately describes the work in Francis (2012) and adds some relevant information.

      Francis (2012) did not simply critique Galak and Meyvis (2011) for having "too many replications"; rather, the critique was that Galak and Meyvis (2011) reported too many successful replications relative to the estimated power of the studies. Power is the probability that a randomly drawn sample produces a significant outcome for a given effect size (in this case, estimated from the studies reported by Galak and Meyvis (2011)). In Galak and Meyvis (2011), a high rate of success coincided with only modest estimated power, and that pattern suggests that there were missing studies or that the reported studies were run or analyzed improperly.
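      To make that pattern concrete, the sketch below shows the basic excess-success arithmetic. The power values are hypothetical placeholders, not the estimates from Francis (2012); the point is only to illustrate the calculation.

      ```python
      # Minimal sketch of the excess-success logic described above.
      # The power values are hypothetical, not the Francis (2012) estimates.
      from math import prod

      # Estimated power of each reported experiment (hypothetical values).
      estimated_powers = [0.6, 0.7, 0.55, 0.65, 0.6]

      # Assuming independent experiments, the probability that every one
      # of them yields a significant result is the product of the powers.
      p_all_significant = prod(estimated_powers)

      print(f"P(all {len(estimated_powers)} studies significant) = {p_all_significant:.3f}")
      # Prints ~0.090: an unbroken run of successes from modestly powered
      # studies is itself unlikely, which is the pattern that points to
      # missing studies or improper analysis.
      ```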

      As noted in Spellman's article, my analysis was essentially validated by the response from Galak and Meyvis (2012), who reported that there was a file drawer of unsuccessful experiments. The title of their response was "You could have just asked", meaning that it was not necessary to perform the power analysis to detect the existence of missing studies because they would have told anyone who asked about those unpublished studies. I was stunned by the Galak and Meyvis (2012) response because it suggests that the standard process of reading a scientific article involves thinking to yourself, "That's really interesting. I wonder if it is true. I will ask the authors." I thought the ludicrousness of this suggestion was too obvious to need clarification, but Spellman's implication that it was a "cool reply" indicates such a need.

      It is true, as Spellman notes, that "not everything can be, or should be, shared in papers," but a scientific article is supposed to present the facts that are relevant to its conclusions. Selective reporting (or improper data collection or analysis) withholds relevant facts and thereby calls the conclusions into doubt. Moreover, the impracticality of "just asking" the authors about missing experiments becomes clear when we think about the (inevitable) death of a scientist; are we supposed to discount a scientist's lifetime of work when they die? Since it was brought up in Spellman's manuscript, I feel obligated to mention that after reading the Galak and Meyvis (2012) response, I formally asked them for the details of their file drawer. Although we had a nice phone conversation discussing power and replication, they never provided the requested data (neither raw data nor summary statistics).

      When expressing her concerns about a "witch hunt" and "reviling people", Spellman confuses Galak and Meyvis (2011), which refers to the experimental findings and conclusions in a manuscript, with Jeff Galak and Tom Meyvis, who I suspect are nice guys trying to do good science. The observation that Galak and Meyvis (2011) is not as good a piece of science as it first appeared actually benefits Jeff Galak and Tom Meyvis, as well as other scientists who might want to build on those studies. Spellman calls my analysis an "unnecessary attack", but attacking ideas is a necessary part of scientific practice. I would hope other critics would have written a similar comment if they had identified flaws in the experimental design, realized that the questions were poorly phrased, or noticed that the statistics were miscalculated. Scientists should expect (and even hope) that their work will be critiqued, so that future studies and theories will be better.

      Although it is not central to the discussion about the "gotcha gang", I also want to mention that I object to Spellman's characterization of these discussions as part of a "war" or "revolution." This framing implies antagonism and animosity that, I hope, is largely absent. I believe that (nearly) everyone in the field wants to do good science and that psychological science addresses important topics that deserve the best science. I think a better analogy is one that is very familiar to most of us: education. Regardless of how much we already know, and regardless of our current standing in the field (from graduate students to editors of prominent journals), the recent debates about replication and data analysis indicate that we all have quite a bit to learn about both good scientific practice and the various ways that scientific investigations can be compromised. We are not on opposite "sides" and there are no teachers to tell us the right way to do things; we have to help each other learn. Sometimes that learning process involves criticism; other times it involves kudos. We cannot develop a healthy scientific field with just one approach or the other.

      Conflict of Interest: None declared


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
