10 Matching Annotations
  1. Aug 2021
    1. Lately, metrics related to social usage and online comment have gained momentum — F1000Prime was established in 2002, Mendeley in 2008, and Altmetric.com (supported by Macmillan Science and Education, which owns Nature Publishing Group) in 2011.

      See altmetrics.

  2. Apr 2019
    1. And if a resurfaced tweet has an emotional resonance of x, then a passage in a book by which you were once moved must resonate at 100x.

      This is something that altmetrics should act on...

  3. Jun 2017
    1. Furthermore, the JIF, in its normalized variant, seems to differentiate more or less successfully between promising and uninteresting candidates not only in the short term, but also in the long term.

      Except that the effect sizes are too small for them to be credible in the absence of pre-registration of this hypothesis.

    2. The publication data are independent of each other.

      Which they clearly are not: a researcher at an elite institution will have a citation pattern more like that of another researcher at an elite institution than like that of a researcher in the same field at a non-elite institution.
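
      To make the clustering concrete: a quick simulation (the model and parameters are invented purely for illustration, not taken from the paper) shows how a shared institutional component produces a nonzero intraclass correlation, i.e. non-independent observations.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n_institutions, per_institution = 50, 20

      # Invented model: each researcher's citation level mixes a baseline
      # shared by their institution (the cluster effect) with individual noise.
      institution_effect = rng.normal(0.0, 1.0, n_institutions)
      citations = (np.repeat(institution_effect, per_institution)
                   + rng.normal(0.0, 1.0, n_institutions * per_institution))

      # Intraclass correlation: the share of variance attributable to institution.
      groups = citations.reshape(n_institutions, per_institution)
      between = groups.mean(axis=1).var(ddof=1)
      within = groups.var(axis=1, ddof=1).mean()
      print(f"ICC = {between / (between + within):.2f}")  # well above 0: not independent
      ```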

    3. 3,976 researchers who published their first paper in 1998 and at least one paper in 2012. One can expect that these researchers published more or less continuously over 15 years

      This is not something I would assume. This needs to be demonstrated.

  4. Oct 2016
    1. metrics on annotations/comments

      Nice example of the various #altmetrics that one could pull out of Hypothes.is

  5. May 2016
    1. a low correlation suggests that the new indicator predominantly reflects something other than scholarly quality

      or that the previous metric wasn't capturing that dimension of quality
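
      A toy simulation (dimensions and noise levels invented purely for illustration) shows how two indicators can each track a different dimension of quality and still correlate weakly with one another:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      n = 10_000

      # Invented model: "quality" is a composite of two independent dimensions.
      rigor = rng.normal(size=n)
      reach = rng.normal(size=n)
      quality = rigor + reach

      citations = rigor + rng.normal(0.0, 0.5, n)  # old metric tracks rigor
      altmetric = reach + rng.normal(0.0, 0.5, n)  # new metric tracks reach

      def corr(a, b):
          return np.corrcoef(a, b)[0, 1]

      print(corr(citations, altmetric))  # near 0: the two metrics barely correlate
      print(corr(citations, quality))    # yet each correlates with quality (~0.6)
      print(corr(altmetric, quality))
      ```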

    2. Elsevier continues its march into data analytics at a pace that should terrify anyone on the ground in HE

      Policy makers and administrators have used SciVal for years as a decision-support resource. As discussed on Twitter, there is an open standard for these metrics: http://www.snowballmetrics.com/

      The data are useful to help support decisions about HE policy, though they are more useful in STEM than in the humanities, partly due to the lack of identifiers and comprehensive indexing of outputs in the humanities.

      Hopefully that makes it a little less terrifying.

  6. Mar 2016
  7. Jan 2016
    1. There is a significant amount of innovation in peer review, with the more evolutionary approaches gaining more support than the more radical. For example, some variants of open peer review (e.g. disclosure of reviewer names either before or after publication; publication of reviewer reports alongside the article) are becoming more common. Cascade review (transferring articles between journals with reviewer reports) and even journal-independent (“portable”) peer review are establishing a small foothold. The most notable change in peer review practice, however, has been the spread of the “soundness not significance” peer review criterion adopted by open access “megajournals” like PLOS ONE and its imitators. Post-publication review has little support as a replacement for conventional peer review but there is some interest in its use as a complement to it (for example, the launch of PubMed Commons is notable in lending the credibility of PubMed to post-publication review). There is similar interest in “altmetrics” as a potentially useful complement to review and in other measures of impact. A new technology of potential interest for post-publication review is open annotation, which uses a new web standard to allow citable comments to be layered over any website (page 47).
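
      The "new web standard" mentioned here is presumably the W3C Web Annotation Data Model. As a sketch of what a citable, quote-anchored comment looks like under that model (the target URL and quoted text below are placeholder values, not from any real annotation):

      ```python
      import json

      # Minimal Web Annotation object: a textual comment anchored to an exact
      # quote on a web page via a TextQuoteSelector.
      annotation = {
          "@context": "http://www.w3.org/ns/anno.jsonld",
          "type": "Annotation",
          "body": {
              "type": "TextualBody",
              "value": "Post-publication review comment goes here.",
              "format": "text/plain",
          },
          "target": {
              "source": "https://example.org/some-article",  # placeholder URL
              "selector": {
                  "type": "TextQuoteSelector",
                  "exact": "soundness not significance",
              },
          },
      }
      print(json.dumps(annotation, indent=2))
      ```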