Peer Review and Annotation

By pbrantley | 7 June, 2014

On May 15-16, 2014, approximately 60 attendees, from AAAS and arXiv to the W3C, joined Hypothes.is at an Alfred P. Sloan-funded meeting at the American Geophysical Union (AGU) offices in Washington, DC to explore new models of peer review and post-publication commentary in the sciences.

As background, the Alfred P. Sloan Foundation provided support to Hypothes.is and our partners AGU, the arXiv preprint repository at Cornell University, and the journal eLife to explore how peer review might be reimagined using annotation as one of a suite of open web tools enabling new forms of engagement and data linking. The promise of re-engineering peer review to take advantage of the web is a better workflow for reviewers and authors, with derivative benefits such as citability of academic review, reproducibility, and enhanced discovery. The meeting’s purpose was to explore how to get there.

First Day

On our first day, Kathleen Fitzpatrick of the MLA opened the meeting with a wide-ranging consideration of the intent of peer review, noting that it encompasses two separate but conflated functions: determining whether an article merits publication, and, if so, standing in as a proxy for the work’s authority. Peer review itself came out of a culture of publication scarcity, when it was critical to determine whether any specific article warranted the investment of effort necessary to make it available; in a networked age, we need filters rather than gatekeepers to cope with abundance.

KFitz speaking at Peer Review

To the extent that the network enables an explosion in the number of publications, it also promises something transformative at the heart of network culture’s openness. Perhaps, Kathleen observed, post-publication peer-to-peer review might become the filter we need.

Kathleen also made a particularly important and subtle observation about the uncertain incentives for people to comment on works using annotation. In a networked age where it is so easy to speak, “we don’t yet know how to read silence. Like the blog post that goes uncommented-upon: does the silence indicate a lack of interest, or that everyone simply read and nodded in agreement and moved on?”

In an overview of the messy process by which articles actually get submitted, reviewed, and published, Brooks Hanson of the AGU observed that one of Albert Einstein’s seminal 1905 papers, on Brownian motion, had no figures, only four references (including two to his own work), and no data. Submitted through a traditional publication track today, the article would never be published. By contrast, arXiv would have enabled Einstein to disseminate his ideas as preprints far more rapidly and effectively than he could ever have imagined. It is a sometimes frightening thought experiment to consider how widely our understanding of nuclear physics would have spread in the 1920s and 1930s with our current networked tools.

Other interesting talks from the morning sessions included Doug Schepers discussing the W3C’s interest in peer review as a mechanism for enhancing discussion of W3C standards development. The W3C has initiated the process leading to a formal working group on annotation, and has invited publishing companies to participate. Laura Paglione of ORCID gave an excellent overview of the opportunities for integrating identity frameworks. Josh Greenberg of the Alfred P. Sloan Foundation, Steve Meacham of the National Science Foundation, and Ari Petrosian of the National Institutes of Health discussed the grants review process from the program officer’s perspective.
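To make the annotation piece concrete, here is a rough sketch of what a single annotation could look like under the Open Annotation draft model that the W3C effort builds on, expressed as a Python dictionary. The article URI, comment text, and ORCID iD are invented for illustration; the field names follow the draft vocabulary.

    # A minimal, illustrative annotation under the Open Annotation draft
    # model: a reviewer's comment (the body) anchored to a quoted passage
    # of an article (the target). The URIs and text are placeholders.
    import json

    annotation = {
        "@context": "http://www.w3.org/ns/oa",
        "@type": "oa:Annotation",
        "motivatedBy": "oa:commenting",
        # The reviewer's comment is the annotation body...
        "hasBody": {
            "@type": "cnt:ContentAsText",
            "chars": "The derivation in section 3 assumes uncorrelated noise.",
            "format": "text/plain",
        },
        # ...and a selected passage of the article is the target.
        "hasTarget": {
            "@type": "oa:SpecificResource",
            "hasSource": "http://example.org/articles/12345",
            "hasSelector": {
                "@type": "oa:TextQuoteSelector",
                "exact": "we model the noise as independent draws",
            },
        },
        # Attributing the annotation to an ORCID iD is one way to do the
        # identity integration Laura Paglione described.
        "annotatedBy": {"@id": "http://orcid.org/0000-0002-1825-0097"},
    }

    print(json.dumps(annotation, indent=2))

The body-plus-target split is what would let the same comment be cited, aggregated, or filtered independently of the article it addresses.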

In the afternoon, there were rounds of presentations of annotation tools and frameworks. Authoring environments such as Authorea and writeLaTeX garnered much enthusiasm. A strong discussion of review tools followed, including speculation about the role GitHub might play in managing document revisions and linking them to reviews and annotations.
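The GitHub idea stayed speculative, but its core is easy to sketch: treat the manuscript as a versioned repository, and file each review as an issue that cites the exact revision it reviewed. In the snippet below the repository name, token, and commit are hypothetical placeholders; the endpoint is GitHub’s standard v3 issues API.

    # A speculative sketch of the GitHub-as-review-ledger idea: each
    # review becomes an issue pinned to the manuscript revision it
    # covers. REPO, TOKEN, and COMMIT are placeholders.
    import requests

    REPO = "example-lab/manuscript"   # hypothetical manuscript repo
    TOKEN = "..."                     # a personal access token
    COMMIT = "9fceb02"                # the revision under review

    issue = {
        "title": f"Review of revision {COMMIT}",
        "body": (
            f"This review applies to commit {COMMIT}.\n\n"
            "Section 2: the sample-size justification needs a power analysis."
        ),
        "labels": ["peer-review"],
    }

    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        json=issue,
        headers={"Authorization": f"token {TOKEN}"},
    )
    resp.raise_for_status()
    print("Review filed:", resp.json()["html_url"])

Because every issue points at a specific commit, the full revision history and the reviews attached to each stage remain linked and publicly auditable.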

At the end of the first day, John Unsworth, CIO of Brandeis University, advocated strongly and humorously for a flexible, lightweight, under-engineered approach to annotation: one that will solve some problems but not all, and make life easier in some ways but not others. Evoking Clay Shirky’s In Praise of Evolvable Systems, Unsworth noted:

  1. Only solutions that produce partial results when partially implemented can succeed.
  2. What is, is wrong. Because evolvable systems have always been adapted to earlier conditions and are always being further adapted to present conditions, they are always behind the times.
  3. Finally, Orgel’s Rule, named for the evolutionary biologist Leslie Orgel — “Evolution is cleverer than you are”.

Clay wrote, “Evolvable protocols start out weak and improve exponentially. … The Web is not the perfect hypertext protocol, just the best one that’s also currently practical. Infrastructure built on evolvable protocols will always be partially incomplete, partially wrong and ultimately better designed than its competition.” That prescription for annotation and peer review was embraced enthusiastically by the audience.

Second Day

The second day started with vivid explications of the review process by faculty researchers. Particularly poignant was a talk by David Cox, a neuroscience professor at Harvard University, who reminded the audience that there are few incentives for faculty to engage in public post-publication commentary: it is essentially a lose-lose proposition, returning little credit in exchange for potentially lifelong enmity. Even in pre-publication peer review, within the circumscribed communities of scholarly research, “anonymous” comments are poorly cloaked. David felt that systems like F1000Research, which accentuate open post-publication review, might foster wider use of annotation.

David Cox on Peer Review

Reviewing itself is generally not seen as a means of career advancement, and even enabling citation of reviews would not return commensurate value. Current reviewing platforms are not “web-native”: they are awkward and difficult to use, and often poorly integrated, or not integrated at all, into the other software applications that scientists use. Simple steps like supporting Google-, Twitter-, or Facebook-based authentication would reduce demands on researchers’ time and stem the proliferation of multiple online identities across scholarly systems.
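It is worth illustrating how little plumbing that would take. The sketch below uses the standard OAuth 2.0 authorization-code flow, via the requests-oauthlib library, to accept an existing Google identity; the client credentials and callback URL are placeholders for whatever a real review platform would register.

    # A minimal sketch of the OAuth 2.0 authorization-code flow (RFC 6749)
    # that would let a review platform accept an existing Google identity
    # instead of minting a new account. CLIENT_ID, CLIENT_SECRET, and
    # REDIRECT_URI are placeholders.
    from requests_oauthlib import OAuth2Session

    CLIENT_ID = "your-client-id"
    CLIENT_SECRET = "your-client-secret"
    REDIRECT_URI = "https://reviews.example.org/callback"  # hypothetical

    AUTH_URL = "https://accounts.google.com/o/oauth2/auth"
    TOKEN_URL = "https://accounts.google.com/o/oauth2/token"

    oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI,
                          scope=["openid", "email"])

    # Step 1: send the reviewer to Google's consent screen.
    authorization_url, state = oauth.authorization_url(AUTH_URL)
    print("Visit:", authorization_url)

    # Step 2: after consent, Google redirects back with a one-time code,
    # which the platform exchanges for a token identifying the reviewer.
    redirect_response = input("Paste the full callback URL: ")
    token = oauth.fetch_token(TOKEN_URL, client_secret=CLIENT_SECRET,
                              authorization_response=redirect_response)
    print("Signed in; token scopes:", token.get("scope"))

The reviewer never creates a new password, and the platform never stores one: it simply holds a token tied to an identity the researcher already maintains.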

David’s talk, and those of Satra Ghosh of MIT and Jasper van den Bosch of the University of Washington, drove an extended consideration of creating sheltered spaces where commentary and review could take place. There was a discussion of how journal clubs might be created dynamically to provide opportunities for collaborative review and commentary.

Towards the end of the day, Hypothes.is’ grant partners discussed our committed grant projects and explored ways of creating a shared community to spread ideas and continue experimentation. Todd Carpenter, executive director of NISO, gave an excellent closing summary that returned to many of John Unsworth’s earlier points and crystallized them in the context of our later discussions. It was an excellent meeting, and we look forward to working with this community to further explore peer review and the uses of annotation.
