269 Matching Annotations
  1. Jun 2021
    1. Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 1–8. https://doi.org/10.1038/s41562-021-01142-4

    1. A short text "interview" with the authors of three works who posted versions of their books online for open review via annotation.

      These could be added to the example and experience of Kathleen Fitzpatrick.

    2. He notes that authors of such projects should consider the return on investment. It takes time to go through community feedback, so one needs to determine whether the payoff will be worthwhile. Nevertheless, if his next work is suitable for community review, he’d like to do it again.

      This is an apropos question. It is also somewhat contingent on what sort of platform the author "owns" for doing outreach and driving readers and participation.

    1. recently published book

      I was honored to interview Remi and Antero (along with other MITP authors) about collaborative community review and how it fit with their traditional peer review experience. The blog post can be found here.

    1. Publisher costs usually include copyediting/formatting and organizing peer review. While these content transformations are fundamental and beneficial, they alone cannot justify the typical APC (Article Publication Charge), especially since peer reviewers are not paid.

      But peer reviewers are largely responsible for generating the assertions you talk about in the next paragraph, which, apparently, justify the cost of publishing.

  2. May 2021
    1. However, the degraded performance across all groups at 6 weeks suggests that continued engagement with memorised information is required for long-term retention of the information. Thus, students and instructors should exercise caution before employing any of the measured techniques in the hopes of obtaining a ‘silver bullet’ for quick acquisition and effortless recall of important data. Any system of memorization will likely require continued practice and revision in order to be effective.

      Abysmally sad that this is presented without the context of the last century and a half of work on spaced repetition.

      I wonder how this point slipped past the reviewers and why it isn't at least discussed somewhat narratively here.

  3. Apr 2021
    1. The greatest degree of openness in this variant exists when there is transparency of authors, reviewers, and reviews alike. Open review procedures furthermore include the option of subsequently publishing the reviews as accompanying texts to a publication.

      In my view, full transparency would only be achieved if rejected submissions were also posted online, together with the reviews that led to their rejection. It seems to me that, in order to prevent opinion or citation cartels (or at least make them obvious), this would be even more important than naming the reviewers.

  4. Mar 2021
    1. I returned to another OER Learning Circle and wrote an ebook version of a Modern World History textbook. As I wrote this, I tested it out on my students. I taught them to use the annotation app, Hypothesis, and assigned them to highlight and comment on the chapters each week in preparation for class discussions. This had the dual benefits of engaging them with the content, and also indicating to me which parts of the text were working well and which needed improvement. Since I wasn't telling them what they had to highlight and respond to, I was able to see what elements caught students' attention and interest. And possibly more important, I was able to "mind the gaps", and rework parts that were too confusing or too boring to get the attention I thought they deserved.

      This is an intriguing off-label use case for Hypothes.is that sits within the realm of peer-review use cases.

      Dan is essentially using the idea of annotation as engagement within a textbook as a means of proactively improving it. He's mentioned it before in Hypothes.is Social (and Private) Annotation.

      Because one can actively see the gaps without readers necessarily being aware of their "review", this may be a far better method than asking for active reviews of materials.

      Reviewers are probably not as likely to actively mark sections they don't find engaging. Has anyone done research in this space on improving texts this way? Annotation certainly provides a means of helping to do so.

  5. Feb 2021
    1. The Rights Retention Strategy provides a challenge to the vital income that is necessary to fund the resources, time, and effort to provide not only the many checks, corrections, and editorial inputs required but also the management and support of a rigorous peer review process

      This is an untested statement and does not take into account the perspectives of those contributing to the publishers' revenue. The Rights Retention Strategy (RRS) relies on the author's accepted manuscript (AAM), and for an AAM to exist and to carry the added value of peer review, a Version of Record (VoR) must exist. Libraries recognise this fundamental principle and continue to subscribe to individual journals of merit and to support lucrative deals with publishers. From some (not all) librarians' and possibly funders' perspectives, these statements could undermine any mutual respect.

  6. Jan 2021
    1. ReconfigBehSci [@SciBeh]. (2021-01-27). new post on Scibeh's meta-science reddit describing the new rubric for peer review of preprints aimed at broadening the pool of potential 'reviewers' so that students could provide evaluations as well! https://reddit.com/r/BehSciMeta/comments/l64y1l/reviewing_peer_review_does_the_process_need_to/ please take a look and provide feedback! Twitter. Retrieved from: https://twitter.com/SciBeh/status/1354456393877749763

    1. Mambrini, A., Baronchelli, A., Starnini, M., Marinazzo, D., & De Domenico, M. (2020). PRINCIPIA: a Decentralized Peer-Review Ecosystem. Retrieved from: https://arxiv.org/pdf/2008.09011.pdf

  7. Nov 2020
  8. Oct 2020
    1. Senior colleagues indicate that I should not have to balance out publishing in “traditional, peer-reviewed publications” as well as open, online spaces.

      Do your colleagues who read your work, annotate it, and comment on it not count as peer-review?

      Am I wasting my time by annotating all of this? :) (I don't think so...)

  9. Sep 2020
  10. Aug 2020
  11. Jul 2020
    1. Authors should annotate code before the review occurs because annotations guide the reviewer through the changes

      Guide the reviewer during the review process

    2. It's also useful to watch internal process metrics, including:

      Inspection rate, defect rate, and defect density (see the metric sketch at the end of this list).

    3. Before implementing a process, your team should decide how you will measure the effectiveness of peer review and name a few tangible goals.

      Set a few tangible goals. "Fix more bugs" is not a good example.

    4. Code reviews in reasonable quantity, at a slower pace for a limited amount of time results in the most effective code review.

      Keep the inspection rate under 500 LOC per hour.

    5. The brain can only effectively process so much information at a time; beyond 400 LOC, the ability to find defects diminishes.

      <400 LOC
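
      As a rough illustration of how these process metrics and pacing guidelines fit together, here is a minimal Python sketch. It is not from the annotated article; the names, dataclass, and thresholds are illustrative assumptions based on the figures noted above, with the metrics defined in the usual way: LOC per hour, defects per hour, and defects per 1,000 LOC.

      ```python
      from dataclasses import dataclass

      @dataclass
      class ReviewSession:
          loc_reviewed: int    # lines of code inspected in this session
          hours_spent: float   # wall-clock time spent reviewing
          defects_found: int   # defects logged during the review

      def inspection_rate(s: ReviewSession) -> float:
          """LOC inspected per hour of review."""
          return s.loc_reviewed / s.hours_spent

      def defect_rate(s: ReviewSession) -> float:
          """Defects found per hour of review."""
          return s.defects_found / s.hours_spent

      def defect_density(s: ReviewSession) -> float:
          """Defects found per 1,000 LOC (KLOC) inspected."""
          return s.defects_found / (s.loc_reviewed / 1000)

      # Illustrative thresholds taken from the figures noted above.
      MAX_LOC_PER_REVIEW = 400   # keep each changeset under ~400 LOC
      MAX_LOC_PER_HOUR = 500     # keep the review pace under ~500 LOC/hour

      def within_guidelines(s: ReviewSession) -> bool:
          """True if the session stays within both pacing guidelines."""
          return (s.loc_reviewed <= MAX_LOC_PER_REVIEW
                  and inspection_rate(s) <= MAX_LOC_PER_HOUR)

      session = ReviewSession(loc_reviewed=350, hours_spent=1.0, defects_found=4)
      print(inspection_rate(session))    # 350.0 LOC/hour
      print(defect_rate(session))        # 4.0 defects/hour
      print(defect_density(session))     # ~11.4 defects/KLOC
      print(within_guidelines(session))  # True
      ```

      Tracking these numbers per session makes it easy to notice when reviews are being rushed past the rates the article warns about.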

  12. Jun 2020
  13. May 2020
  14. Apr 2020