Hypothesis for scientific research

Hypothesis is enjoying robust use in the sciences: in STEM education (e.g., Science in the Classroom), as a tool for scientists to critique reporting of science in the popular press (e.g., Climate Feedback), for journal clubs, and by individual researchers engaging in public or private group discussions of scientific papers. Some of these uses are conversational, as Hypothesis originally envisioned: people ask questions, get answers, make comments. Other annotations are more formal and authoritative: experts extract structured knowledge from the literature, annotate gene sequences with biological information, or supply clarifying information to published works.

Opening Meta

Yesterday, the scholarly communication + AI startup Meta signed an agreement to be acquired by the Chan Zuckerberg Initiative (CZI). Aside from the initial news a few weeks ago and Joe Esposito’s article in the Scholarly Kitchen, I’ve seen few people remark on it.

But it’s a big deal.

A serious piece of scholarly infrastructure is being made open, free, and effectively non-profit. Meta has built a cutting-edge system to mine scholarly papers new and old and to put the resulting data to diverse uses: predicting discoveries before they’re made, projecting the future impact of papers just hours old, and unlocking innumerable applications that apply computation at scale across the scientific literature. In what must have taken extraordinary patience, persistence, and a lot of finesse, they managed to secure access to some of the most strategic closed content in the scholarly world.

Annotating all Knowledge: Adventures in Interoperability

The Annotating All Knowledge Coalition was founded as a forum for accelerating the development of a pervasive, interoperable annotation layer across all scholarly works. Figuring out what, exactly, an interoperable annotation layer means was one of the first goals of the coalition. We took the first steps toward defining what an interoperable layer looks like and how it should operate at our face-to-face meetings at FORCE2016 and I Annotate. So what are the next steps?

Participants in both events felt strongly that the best way to move forward was to “just do it”: identify a use case where you need to share annotations across tools, content, platforms, and workflows, as the sketch below illustrates.
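One existing baseline for that kind of sharing is the W3C Web Annotation Data Model, the standard that Hypothesis annotations map onto. Here is a minimal sketch in Python of what a tool-neutral annotation record might look like; the target URL and quoted passage are invented for illustration.

```python
import json

# A minimal, tool-neutral annotation following the W3C Web Annotation
# Data Model (https://www.w3.org/TR/annotation-model/). The source URL
# and quoted text below are invented for illustration.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "This claim is supported by the 2015 replication study.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.org/articles/some-paper",
        "selector": {
            # A TextQuoteSelector anchors the annotation to the quoted
            # passage itself rather than to any one platform's internal
            # identifiers, which is what lets other tools re-anchor it.
            "type": "TextQuoteSelector",
            "exact": "our results were replicated in three labs",
            "prefix": "As shown previously, ",
            "suffix": ", suggesting the effect is robust.",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the record anchors to the quoted text rather than to a platform-specific ID, any conforming tool can re-anchor the same annotation on its own copy of the document.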

Register now for I Annotate 2017

Join us for I Annotate 2017, the fifth annual gathering dedicated to advancing digital annotation practices and technologies. With events in San Francisco on 3–6 May, I Annotate will continue to expand the annotation community to include more participants from education, journalism, publishing, research, science, and technology, focusing on themes of fact-checking, user engagement, and digital literacy.

Welcome Heather Staines, Director of Partnerships

I couldn’t be more thrilled to join the Hypothesis team as Director of Partnerships. As I wrote in my initial outreach to Hypothesis, sometimes you feel as if you have been preparing for something your entire life, as if there were a plan you were aware of only subconsciously. My long, winding road through scholarly content, ed tech, and standards finally makes sense: Hypothesis was the plan!

Our millionth annotation

It was getting close to midnight and the Hypothesis team was watching the counter of total annotations tick up: 999,646…999,752…999,845…by 10:37pm Pacific Time it was 999,959 and we knew we’d reach one million annotations that night. People all over the world were busy taking notes using Hypothesis—students, journalists, researchers, scientists, scholars—most without knowing that our team and the annotation community on social media were rooting for their work. Countdown tweets for a #millionannotations were starting to gather an audience. Who would add the millionth annotation?

Counting down to a million annotations

By the end of today, someone will make the one-millionth Hypothesis annotation.

Who will it be? Will they be fact-checking a news article? Linking crucial information to a scientific study? Unpacking a short story with other students? Collecting data for new research? We are about to find out!
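Anyone can watch the same counter we did: the public Hypothesis search API reports a total count alongside each page of results. Here is a minimal polling sketch, assuming the public https://api.hypothes.is/api/search endpoint and its total field; error handling is omitted.

```python
import json
import time
import urllib.request

# A minimal sketch of polling the public annotation count, assuming the
# Hypothesis search API at https://api.hypothes.is/api/search, which
# returns a "total" field alongside each page of results.
def total_annotations() -> int:
    with urllib.request.urlopen(
        "https://api.hypothes.is/api/search?limit=1"
    ) as resp:
        return json.load(resp)["total"]

while True:
    count = total_annotations()
    print(f"{count:,} annotations and counting…")
    if count >= 1_000_000:
        print("One million!")
        break
    time.sleep(60)  # check again in a minute
```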

Who says neuroscientists don’t need more brains? Annotation with SciBot

You might think that neuroscientists already have enough brains, but apparently not. Over 100 neuroscientists attending the recent annual meeting of the Society for Neuroscience (SFN) took part in an annotation challenge: modifying scientific papers to add simple references that automatically generate and attach Hypothesis annotations filled with key related information. To sweeten the pot, our friends at GigaScience gave researchers who annotated their own papers their very own brain hats.

But handing out brains is not just a conference gimmick. Thanks to our colleagues at the Neuroscience Information Framework (NIF), Hypothesis was once again featured at SFN, the largest gathering of neuroscientists in the world, attended by well over 30,000 people in San Diego, November 12–16, 2016. The annotation challenge at SFN was a demonstration of a much larger collaboration with NIF: to increase rigor and reproducibility in neuroscience by using the NIF’s new SciBot service to annotate publications automatically with links to the related materials and tools that researchers use in scientific studies.
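To make the mechanics concrete, here is a simplified sketch of what a SciBot-style pipeline does; this is illustrative, not SciBot’s actual code. It scans a paper’s text for Research Resource Identifiers (RRIDs) and creates one Hypothesis annotation per identifier through the Hypothesis API. The API token and paper URL are placeholders, and the RRID pattern is a simplification.

```python
import json
import re
import urllib.request

API_TOKEN = "YOUR_HYPOTHESIS_API_TOKEN"  # placeholder
PAPER_URL = "https://example.org/papers/neuro-study"  # placeholder

# RRIDs look like "RRID:AB_90755" or "RRID:SCR_002145"; this pattern is
# a simplification of what a real RRID scanner would match.
RRID_PATTERN = re.compile(r"RRID:\s*([A-Z]+_\w+)")

def annotate_rrids(paper_text: str) -> None:
    """Post one Hypothesis annotation per RRID found in the text."""
    for match in RRID_PATTERN.finditer(paper_text):
        rrid = match.group(1)
        payload = {
            "uri": PAPER_URL,
            "text": f"Resource record: https://scicrunch.org/resolver/RRID:{rrid}",
            "tags": ["RRID", rrid],
        }
        req = urllib.request.Request(
            "https://api.hypothes.is/api/annotations",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {API_TOKEN}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            print("Created annotation:", json.load(resp)["id"])

annotate_rrids("…the antibody (RRID:AB_90755) was used as described…")
```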

Science in the Classroom

The American Association for the Advancement of Science (AAAS), publisher of Science, provides an educational resource called Science in the Classroom (SitC) that “helps students understand the structure and workings of professional scientific research.” Graduate students provide annotations that are categorized as shown in the Learning Lens widget; readers select one …

Annotating to extract findings from scientific papers

David Kennedy is a neurobiologist who periodically reviews the literature in his field and extracts findings, which are structured interpretations of statements in scientific papers. He recently began using Hypothesis to mark up the raw materials for these findings, which he then manually compiles into a report. The report …
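As a sketch of how that compilation step might be automated rather than done by hand, one could pull the marked-up material back out through the public Hypothesis search API and group it by paper; the user account and tag below are hypothetical placeholders, not Kennedy’s actual workflow.

```python
import json
import urllib.request
from collections import defaultdict

# A sketch of pulling annotations back out via the public Hypothesis
# search API. The user account and tag are hypothetical placeholders.
SEARCH_URL = (
    "https://api.hypothes.is/api/search"
    "?user=acct:example_user@hypothes.is&tag=finding&limit=200"
)

with urllib.request.urlopen(SEARCH_URL) as resp:
    rows = json.load(resp)["rows"]

# Group annotation bodies by the paper (URI) they target.
findings_by_paper = defaultdict(list)
for row in rows:
    findings_by_paper[row["uri"]].append(row.get("text", ""))

for uri, findings in findings_by_paper.items():
    print(uri)
    for finding in findings:
        print("  -", finding)
```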