1 Matching Annotation
  1. Jul 2018
    1. On 2017 Aug 30, Hilda Bastian commented:

      Although the authors draw conclusions here about cost and effectiveness of simply offering badges if certain criteria are met, the study does not support these claims. There are, for example, no data on costs for the journal, peer reviewers, or authors. Any conclusions about effectiveness are hampered by the study's design, and the lack of consideration and assessment of any potentially negative repercussions.

      It was not possible for the authors to study the effects of offering badges alone, as this intervention was part of a complex intervention: a package of 5 co-interventions, announced by the journal in November 2013 to begin taking effect from January 2014 (Eich E, 2014). All were designed to improve research transparency and/or reproducibility, and together they signaled a major change in editorial policy and practice. Any manuscript accepted for publication after 1 January, while being eligible for these badges, was also subject to additional editorial requirements for authors and reviewers. All authors submitting articles from 2014 faced additional reproducibility-related questions before submission, which included data disclosure assurances. Other authors have shown that although these did not all lead to the changes sought, there was considerable impact on some measures (Giofrè D, 2017).

      Data on the impact on submissions, editorial rejections, and the length of time until publication of accepted articles are not provided in this paper by Kidwell and colleagues. These would be necessary to gain perspective on the burdens and impact of the intervention package. I had a look at the impact on publications, though. It is clear from the data as collected in this study, and from a more extended timeframe based on analysis of e-publication dates, that the package of interventions appears to have led to a considerable drop in the publication of articles (see my blog post, Absolutely Maybe, 2017). The number of articles receiving badges is small: during the year covered by this study, from the awarding of the first badge, it was about 4 articles a month. That number first dropped and then rose, while at the same time the rate of publication by Psychological Science has fallen to less than half what it was in the year before this package of interventions was introduced. The result is a substantial increase in the percentage of badged articles, while the absolute number of compliant articles remains small.
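
      To illustrate the arithmetic with hypothetical round numbers (not the journal's actual figures): if a journal publishes 40 articles a month and 4 of them earn badges, that is 10%; if output then falls to 20 articles a month while the badged count stays at 4, the share doubles to 20%, even though the absolute number of compliant articles is unchanged.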

      Taken together, it appears likely that there was a process of "natural selection", on the side of both the journal and authors, leading to more rigorous reporting and sharing among the reduced number of articles reaching publication. The part that badges alone played in this is unknowable. Higher rates of compliance with such standards have been achieved without badges at other journals (see the blog post for examples). There are some data to suggest that disinclination to data disclosure is part of a range of practices adopted together more by some psychology researchers than others, in one of the studies that spurred Psychological Science to introduce these initiatives (PMID: 26173121). The data in Giofrè D, 2017 tend to support the hypothesis that there is a correlation between some of the data disclosure requirements in the co-interventions and data-sharing (see my follow-up blog post).

      In addition to not considering a range of possible effects of the practices, and not being able to isolate the impact of any one of the set of co-interventions, the study used only one data extractor and coder for each article. This is a particularly critical potential source of bias, as assessors could not be blinded to the journals, and the badging intervention was developed and promoted from within the author group.

      It would be useful if the authors could report in more detail what was required to satisfy the early screening question of "availability statement, yes or no". Was an explicit data availability statement required here, whether or not there were indeed additional data beyond what was included in the paper and its supplementary materials?

      It would be helpful if the authors could confirm the percentage of articles eligible for badges for which the offer of a badge was declined.

      At the heart of this badge approach for closed access journals is a definition of "open-ness" that enables potentially serious limitation of the methodological information and key explanatory data available outside paywalls. In de-coupling the part of the study included in the paper from the study's data, and allowing the data to be inaccessible to many who could potentially use them or offer useful critique, the intervention promotes a limited form of open-ness. The assumed trade-off is that this results in more open-ness than there would otherwise be. However, it may have the reverse effect, for example by encouraging authors to think that full open access does not matter and can be foregone with pride and without concern, and by encouraging journals to believe this "magic bullet" is an easy way out of more effective, intensive intervention.

      Disclosures: I have a long relationship with PLOS (which has taken a different approach to increasing openness), including blogging at its Blog Network, and am a user of the Open Science Framework (which is produced by the group promoting the badges). My day job is at NCBI, which maintains literature and data repositories.

      This comment was updated on 1 September with the two references and data on the question of correlation between data disclosure and data sharing, after John Sakaluk tweeted the Giofrè paper to me.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
