289 Matching Annotations
  1. Jan 2024
  2. Mar 2023
    1. German academic publishing in Niklas Luhmann's day was dramatically different from that of the late 20th and early 21st centuries. There was no peer review, so Luhmann never faced the gatekeeping that academics face today, which only served to help increase his academic journal publication record. (28:30)

  3. Jul 2022
    1. Perhaps the most widely recognized failing of peer review is its inability to ensure the identification of high-quality work.

      stakesinscience

  4. May 2022
    1. Studying, done properly, is research, because it is about gaining insight that cannot be anticipated and will be shared within the scientific community under public scrutiny.

    1. or at least they pretend

      I don't think we're pretending. I know I'm not!

    2. Senior colleagues indicate that I should not have to balance out publishing in “traditional, peer-reviewed publications” as well as open, online spaces.

      Do your colleagues who read your work, annotate it, and comment on it not count as peer-review?

      Am I wasting my time by annotating all of this? :) (I don't think so...)

    1. He notes that authors of such projects should consider the return on investment. It takes time to go through community feedback, so one needs to determine whether the payoff will be worthwhile. Nevertheless, if his next work is suitable for community review, he’d like to do it again.

      This is an apropos question. It is also somewhat contingent on what sort of platform the author "owns" to be able to do outreach and drive readers and participation.

    2. A short text "interview" with the authors of three works that posted versions of their books online for an open review via annotation.

      These could be added to the example and experience of Kathleen Fitzpatrick.

    1. I returned to another OER Learning Circle and wrote an ebook version of a Modern World History textbook. As I wrote this, I tested it out on my students. I taught them to use the annotation app, Hypothesis, and assigned them to highlight and comment on the chapters each week in preparation for class discussions. This had the dual benefits of engaging them with the content, and also indicating to me which parts of the text were working well and which needed improvement. Since I wasn't telling them what they had to highlight and respond to, I was able to see what elements caught students' attention and interest. And possibly more important, I was able to "mind the gaps", and rework parts that were too confusing or too boring to get the attention I thought they deserved.

      This is an intriguing off-label use case for Hypothes.is, one that sits within the realm of peer-review use cases.

      Dan is essentially using the idea of annotation as engagement within a textbook as a means of proactively improving it. He's mentioned it before in Hypothes.is Social (and Private) Annotation.

      Because one can actively see the gaps without readers necessarily being aware of their "review", this may be a far better method than asking for active reviews of materials.

      Reviewers are probably less likely to actively mark sections they don't find engaging. Has anyone done research in this space on improving texts this way? Certainly annotation provides a means of helping to do this.

    1. However, the degraded performance across all groups at 6 weeks suggests that continued engagement with memorised information is required for long-term retention of the information. Thus, students and instructors should exercise caution before employing any of the measured techniques in the hopes of obtaining a ‘silver bullet’ for quick acquisition and effortless recall of important data. Any system of memorization will likely require continued practice and revision in order to be effective.

      Abysmally sad that this is presented without the context of any of the work on spaced repetition from the last century and a half.

      I wonder how this point slipped past the reviewers and why it isn't at least discussed somewhat narratively here.

  5. Apr 2022
  6. Mar 2022
  7. Feb 2022
  8. Dec 2021
    1. AIMOS. (2021, November 30). How can we connect #metascience to established #science fields? Find out at this afternoon’s session at #aimos2021 Remco Heesen @fallonmody Felipe Romeo will discuss. Come join us. #OpenScience #OpenData #reproducibility https://t.co/dEW2MkGNpx [Tweet]. @aimos_inc. https://twitter.com/aimos_inc/status/1465485732206850054

  9. Nov 2021
    1. I have no problem with publishers making a profit, or with peer reviewers doing their work for free. The problem I have is when there is such an enormous gap between those two positions.

      If publishers make billions in profit (and they do), while at the same time reviewers are doing a billion dollars' worth of work for free, that seems like a broken system.

      I think there are parallels with how users contribute value to social media companies. In both cases, users/reviewers are getting some value in return, but most of the value that's captured goes to the publisher/tech company.

      I'd like to see a system where more of the value accrues to the reviewers. This could be in the form of direct payment, although this is probably less preferable because of the challenges of trying to convert the value of different kinds of peer review into a dollar amount.

      Another problem with simply paying reviewers is that it retains the status quo; we keep the same system with all of its faults and redistribute profits. This is an OK option as it at least sees some of the value that normally accrues to publishers moving to reviewers.

      I also don’t believe that open access - in its current form - is a good option either. There are still enormous costs associated with publishing; the only difference is that those costs are now covered by institutions instead of the reader. The publisher still makes a heart-stopping profit.

      A more elegant solution, although more challenging, would be for academics to step away from publishers altogether and start their own journals, on their own terms.

    1. COVID-19 Living Evidence. (2021, November 12). As of 12.11.2021, we have indexed 257,633 publications: 18,674 pre-prints 238,959 peer-reviewed publications Pre-prints: BioRxiv, MedRxiv Peer-reviewed: PubMed, EMBASE, PsycINFO https://t.co/ytOhLG90Pi [Tweet]. @evidencelive. https://twitter.com/evidencelive/status/1459163720450519042

  10. Oct 2021
  11. Aug 2021
  12. Jul 2021
  13. Jun 2021
    1. Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 1–8. https://doi.org/10.1038/s41562-021-01142-4

    1. recently published book

      I was honored to interview Remi and Antero (along with other MITP authors) about collaborative community review and how it fit with their traditional peer review experience. The blog post can be found here.

    1. Publisher costs usually include copyediting/formatting and organizing peer review. While these content transformations are fundamental and beneficial, they alone cannot justify the typical APC (Article Publication Charge), especially since peer reviewers are not paid.

      But peer reviewers are largely responsible for generating the assertions you talk about in the next paragraph, which, apparently, justify the cost of publishing.

  14. May 2021
  15. Apr 2021
    1. The greatest degree of openness in this variant exists when there is transparency about the authors as well as about the reviewers and the reviews themselves. Open review procedures furthermore include the option of subsequently publishing the reviews as accompanying texts to a publication.

      In my view, full transparency would only be achieved if rejected submissions were also posted online, together with the reviews that led to their rejection. It seems to me that, in order to prevent opinion or citation cartels (or at least to make them obvious), this would be even more important than naming the reviewers.

  16. Mar 2021
  17. Feb 2021
    1. The Rights Retention Strategy provides a challenge to the vital income that is necessary to fund the resources, time, and effort to provide not only the many checks, corrections, and editorial inputs required but also the management and support of a rigorous peer review process

      This is an untested statement and does not take into account the perspectives of those contributing to the publishers' revenue. The Rights Retention Strategy (RRS) relies on the author's accepted manuscript (AAM), and for an AAM to exist and to have the added value of peer review, a Version of Record (VoR) must exist. Libraries recognise this fundamental principle and continue to subscribe to individual journals of merit and to support lucrative deals with publishers. From some (not all) librarians' and possibly funders' perspectives, these statements could undermine any mutual respect.

  18. Jan 2021
    1. ReconfigBehSci [@SciBeh]. (2021, January 27). new post on Scibeh's meta-science reddit describing the new rubric for peer review of preprints aimed at broadening the pool of potential 'reviewers' so that students could provide evaluations as well! https://reddit.com/r/BehSciMeta/comments/l64y1l/reviewing_peer_review_does_the_process_need_to/ please take a look and provide feedback! [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1354456393877749763

    1. Mambrini, A., Baronchelli, A., Starnini, M., Marinazzo, D., & De Domenico, M. (2020). PRINCIPIA: a Decentralized Peer-Review Ecosystem. Retrieved from: https://arxiv.org/pdf/2008.09011.pdf

  19. Nov 2020
  20. Oct 2020
  21. Sep 2020
  22. Aug 2020
  23. Jul 2020
    1. Authors should annotate code before the review occurs because annotations guide the reviewer through the changes

      Guide the reviewer during the review process

    2. It's also useful to watch internal process metrics, including:

      Inspection rate, defect rate, and defect density. (A rough sketch of how these are commonly computed appears at the end of these notes.)

    3. Before implementing a process, your team should decide how you will measure the effectiveness of peer review and name a few tangible goals.

      Set a few tangible goals; "fix more bugs" is not a good example of one.

    4. Code reviews in reasonable quantity, at a slower pace for a limited amount of time results in the most effective code review.

      Review at a rate of less than 500 LOC per hour.

    5. The brain can only effectively process so much information at a time; beyond 400 LOC, the ability to find defects diminishes.

      <400 LOC
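
      A minimal sketch, in Python, of how the process metrics mentioned above are commonly computed. The definitions used here (inspection rate as LOC reviewed per hour, defect rate as defects found per hour, defect density as defects per 1,000 LOC) are common conventions assumed for illustration, not formulas quoted from the article, and the numbers are purely hypothetical.

        # Assumed definitions (common conventions, not taken from the article):
        #   inspection rate = LOC reviewed per hour of review
        #   defect rate     = defects found per hour of review
        #   defect density  = defects found per 1,000 LOC reviewed

        def inspection_rate(loc_reviewed: int, review_hours: float) -> float:
            """LOC reviewed per hour of review."""
            return loc_reviewed / review_hours

        def defect_rate(defects_found: int, review_hours: float) -> float:
            """Defects found per hour of review."""
            return defects_found / review_hours

        def defect_density(defects_found: int, loc_reviewed: int) -> float:
            """Defects found per 1,000 LOC reviewed."""
            return 1000 * defects_found / loc_reviewed

        # Hypothetical example: a 350 LOC change reviewed in one hour with
        # 4 defects found stays under the ~400 LOC and ~500 LOC/hour guidance above.
        print(inspection_rate(350, 1.0))  # 350.0 LOC per hour
        print(defect_rate(4, 1.0))        # 4.0 defects per hour
        print(defect_density(4, 350))     # ~11.4 defects per KLOC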

  24. Jun 2020