8 Matching Annotations
  1. Jun 2017
    1. Furthermore, the JIF – in its normalized variant – seems to differentiate more or less successfully between promising and uninteresting candidates not only in the short term, but also in the long term.

      Except that the effect sizes are too small for them to be credible in the absence of pre-registration of this hypothesis.

  2. Mar 2016
    1. De Rond, M., & Miller, A. N. (2005). Publish or perish—Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321–329.

      On how increased pressure to publish diminishes creativity.

    2. Atkin, P. A. (2002). A paradigm shift in the medical literature. British Medical Journal, 325(7378), 1450–1451.

      On the rise of sexy terms like "paradigm shift" in abstracts.

    3. Bonitz, M., & Scharnhorst, A. (2001). Competition in science and the Matthew core journals. Scientometrics, 51(1), 37–54.

      Matthew effect

    1. To publish. And sometimes publish in the right journals. ... In my discipline ... there's just a few journals, and if you're not in that journal, then your publication doesn't really count

      Importance of "top" journals

    1. Editors, Publishers, Impact Factors, and Reprint Income

      On the incentives for journal editors to publish papers they think might improve their journal's IF... and how citations are gamed.

  3. Jul 2015
    1. based on a scientific analysis of citation data

      The JIF is discredited in many reviews. See for example http://dx.doi.org/10.3389/fnhum.2013.00291. A recent independent review of metrics for the Higher Education Funding Council for England also strongly recommended against the use of measures like the JIF: http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/

  4. May 2015
    1. Surprisingly, even publications in prestigious journals or from several independent groups did not ensure reproducibility.

      This seems to be at least one reproducible result!