2 Matching Annotations
  1. Mar 2021
    1. I returned to another OER Learning Circle and wrote an ebook version of a Modern World History textbook. As I wrote this, I tested it out on my students. I taught them to use the annotation app, Hypothesis, and assigned them to highlight and comment on the chapters each week in preparation for class discussions. This had the dual benefits of engaging them with the content, and also indicating to me which parts of the text were working well and which needed improvement. Since I wasn't telling them what they had to highlight and respond to, I was able to see what elements caught students' attention and interest. And possibly more important, I was able to "mind the gaps", and rework parts that were too confusing or too boring to get the attention I thought they deserved.

      This is an intriguing off-label use of Hypothes.is, one that falls within the realm of peer review.

      Dan is essentially using annotation-as-engagement with a textbook as a means of proactively improving it. He's mentioned the idea before in Hypothes.is Social (and Private) Annotation.

      Because one can see the gaps without readers necessarily being aware that they're providing a "review", this may be a far better method than asking for active reviews of materials.

      Reviewers are probably less likely to actively mark sections they don't find engaging. Has anyone done research in this space on using annotation to improve texts? Certainly annotation provides a means of doing so; a minimal sketch of what automating the idea might look like follows below.
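
      As a rough sketch of how one might surface those gaps programmatically: the public Hypothes.is search API can return annotation counts per URL, so chapters with unusually few annotations become candidates for rework. The chapter URLs below are placeholders of my own, not Dan's actual textbook; only the API endpoint itself is real.

      ```python
      import requests

      # Public Hypothes.is search endpoint (documented at
      # https://h.readthedocs.io/en/latest/api/).
      API = "https://api.hypothes.is/api/search"

      # Hypothetical chapter URLs for an OER textbook; substitute real ones.
      CHAPTER_URLS = [
          "https://example.com/modern-world-history/chapter-01",
          "https://example.com/modern-world-history/chapter-02",
          "https://example.com/modern-world-history/chapter-03",
      ]

      def annotation_count(uri: str) -> int:
          """Return the number of public annotations anchored to a URI."""
          resp = requests.get(API, params={"uri": uri, "limit": 1}, timeout=10)
          resp.raise_for_status()
          return resp.json()["total"]

      if __name__ == "__main__":
          counts = {url: annotation_count(url) for url in CHAPTER_URLS}
          # Chapters with the fewest annotations are candidate "gaps":
          # sections too confusing or too dull to hold students' attention.
          for url, n in sorted(counts.items(), key=lambda kv: kv[1]):
              print(f"{n:4d}  {url}")
      ```

      Raw counts are a blunt instrument, of course: a low count might mean a dull chapter or just a short one, so normalizing by chapter length would be a sensible refinement.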

    1. He introduces the idea of the apophatic: what we can't put into words but which is important and vaguely understood. The term comes from Orthodox theology, where people defined God by saying what he was not.

      Too often as humans we're focused on what is immediately in front of us and not what is missing.

      The same problem plagues our science: we publish positive results but not negative ones.

      From an information-theoretic perspective, we're throwing away half (or more?) of the information we generate. We might be able to go much farther, much faster if we kept and published all of our results; a rough back-of-envelope sketch of that "half" claim follows below.

      Is there a better word for this negative information? #openquestions
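
      As a back-of-envelope reading of the claim above, under an assumption of my own (not the author's): model each experiment as an independent Bernoulli trial with probability p of a positive result.

      ```latex
      % Shannon entropy per result, in bits:
      \[
        H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)
      \]
      % At p = 1/2, H(1/2) = 1 bit, split evenly between positive and
      % negative outcomes, so publishing only the positives discards
      % roughly half of the bits generated. If positive results are
      % rarer (p < 1/2), negatives make up more than half of all results.
      ```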