29 Matching Annotations
  1. Nov 2017
    1. Rap Genius finally gives us the opportunity to find out. It's an ambitious mission, and one we are proud to get behind.

      I wonder whether Marc would still write this the same way today, in a world where web annotation is finally a W3C standard and Hypothesis looks much more like the future of web annotation than Genius...

    1. Students often lose access to their materials at the end of the semester. Students also often lose access to their own work as well, in the form of highlights, notes, and other annotations

      Hence the need for #OpenAnnotation to pair up with some of the other opens. Hypothes.is people are doing their part, but still.

  2. Nov 2016
    1. a specific area of the picture has been selected and rotated

      How can I comment on that specific area? According to http://iiif.io/api/presentation/2.0:

      Although normally annotations are used for associating commentary with the thing the annotation’s text is about, the Open Annotation model allows any resource to be associated with any other resource, or parts thereof, and it is reused for both commentary and painting resources on the canvas.
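One way to comment on a specific area, per the W3C Web Annotation model that grew out of Open Annotation, is a `FragmentSelector` using the Media Fragments `xywh=` syntax. A minimal sketch (the canvas URL is hypothetical, not a real IIIF endpoint):

```python
import json

# A W3C Web Annotation targeting a rectangular region of an image canvas,
# selected with a Media Fragments selector (xywh = x, y, width, height).
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "commenting",
    "body": {
        "type": "TextualBody",
        "value": "Comment on just this rotated area",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.org/iiif/book1/canvas/p1",  # hypothetical
        "selector": {
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            "value": "xywh=100,200,300,150",  # the selected region
        },
    },
}

print(json.dumps(annotation, indent=2))
```

The commentary body and the region-of-an-image target are both just resources associated on the canvas, which is exactly the reuse the quoted passage describes.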

    2. development of the IIIF technology
  3. Aug 2016
  4. Jun 2016
    1. Diigo’s Refocus Back to Annotation

      Had missed this announcement. The annotation scene has this interesting ambivalence between being old and new, forward-looking and somewhat nostalgic. Wish Diigo were forward-looking enough to get into Open Annotations.

  5. Apr 2016
    1. That latter article included these even more chilling paragraphs:

      As the W3C is having its international conference in a few days (and since we’re holding Open Knowledge Fest concurrently), the Open Annotation web standard could be an interesting topic for thoughtful dialogue. The first step is acknowledging that people’s needs differ widely when it comes to annotations. Doug Schepers has an insightful way to put it.

    1. one of the annotations is simply a link to a Google search for a phrase that’s been used.

      Glad this was mentioned. To the Eric Raymonds of this world, such a response sounds “perfectly legitimate”. But it’s precisely what can differentiate communities and make one more welcoming than the other. Case in point: Arduino-related forums, in contrast with the Raspberry Pi community. Was looking for information about building a device to track knee movement. Noticed that “goniometer” was the technical term for that kind of device, measuring an angle (say, in physiotherapy). Ended up on this page, where someone had asked a legitimate question about Arduino and goniometers. First, the question:

      Trying to make a goniometer using imu (gy-85). Hoe do I aquire data from the imu using the arduino? How do I code the data acquisition? Are there any tutorials avaible online? Thanks =)

      Maybe it wouldn’t pass the Raymond test for “smart questions”, but it’s easy to understand and a straight answer could help others (e.g., me).

      Now, the answer:

      For me, google found 87,000,000 hits for gy-85. I wonder why it failed for you.

      Wow. Just, wow.

      Then, on the key part of the question (the goniometer):

      No idea what that is or why I should have to google it for you.

      While this one aborted Q&A is enough to put somebody off Arduino forever, it’s just an example among many. Like Stack Overflow, Quora, and other geek hideouts, Arduino-related forums are filled with this kind of snarky #LMGTFY comment.

      Contrast this with the Raspberry Pi. Liz Upton said it best in a recent interview (ca. 25:30):

      People find it difficult to remember that sometimes when somebody comes along… and appears to be “not thinking very hard”, it could well be because they’re ten years old.

      And we understand (from the context and such) that it’s about appearance (not about actually “not thinking very hard”). It’s also not really about age.

      So, imagine this scenario. You’re teaching a class, seminar, workshop… Someone asks a question about using data from a device to build a goniometer. What’s the most appropriate strategy? Sure, you might ask the person to look for some of that information online. But there are ways to do so which are much more effective than the off-putting ’tude behind #LMGTFY. Assuming they do search for that kind of information, you might want to help them dig through the massive results to find something usable, a remarkably difficult task, and one misunderstood by someone who answers questions about goniometers without knowing the first thing about them.

      The same goes for the notion that a question which has already been asked isn’t a legitimate question. A teacher adopting this notion would probably have a very difficult time teaching anyone who’s not in an extremely narrow field. (Those teachers do exist, but they complain bitterly about their job.)

      Further, the same logic applies to the pedantry of correcting others. Despite the fact that English-speakers’ language ideology allows for a lot of non-normative speech, the kind of online #WordRage which leads to the creation of “language police” bots is more than a mere annoyance. Notice the name of this Twitter account (and the profile of the account which “liked” this tweet).

      Lots of insight from @BiellaColeman on people who do things “for the lulz”. Her work is becoming increasingly relevant to thoughtful dialogue on annotations.

  6. Mar 2016
    1. open – that is, to make public, transparent, and participatory

      Neat definition of “open”, very contextual, it sounds like.

  7. Jan 2016
  8. Dec 2015
    1. The EDUPUB Initiative VitalSource regularly collaborates with independent consultants and industry experts including the National Federation of the Blind (NFB), American Foundation for the Blind (AFB), Tech For All, JISC, Alternative Media Access Center (AMAC), and others. With the help of these experts, VitalSource strives to ensure its platform conforms to applicable accessibility standards including Section 508 of the Rehabilitation Act and the Accessibility Guidelines established by the Worldwide Web Consortium known as WCAG 2.0. The state of the platform's conformance with Section 508 at any point in time is made available through publication of Voluntary Product Accessibility Templates (VPATs).  VitalSource continues to support industry standards for accessibility by conducting conformance testing on all Bookshelf platforms – offline on Windows and Macs; online on Windows and Macs using standard browsers (e.g., Internet Explorer, Mozilla Firefox, Safari); and on mobile devices for iOS and Android. All Bookshelf platforms are evaluated using industry-leading screen reading programs available for the platform including JAWS and NVDA for Windows, VoiceOver for Mac and iOS, and TalkBack for Android. To ensure a comprehensive reading experience, all Bookshelf platforms have been evaluated using EPUB® and enhanced PDF books.

      Could see a lot of potential for Open Standards, including annotations. What’s not so clear is how they can manage to produce such ePub files while maintaining their DRM-focused practices. Heard about LCP (Lightweight Content Protection). But have yet to see a fully-accessible ePub which is also DRMed in such a way.

    1. want to synchronize their favorite sites across multiple devices

      Open Annotations could serve private purposes, like FoxMarks.

    1. Anyone can say Anything

      The “Open World Assumption” is central to this post and to the actual shift in paradigm when it comes to moving from documents to data. People/institutions have an alleged interest in protecting the way their assets are described. Even libraries. The Open World Assumption makes it sound quite chaotic, to some ears. And claims that machine learning will solve everything tend not to help the unconvinced too much. Something to note is that this ability to say something about a third party’s resource connects really well with Web annotations (which do more than “add metadata” to those resources) and with the fact that no-cost access to some item of content isn’t the end of openness.
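The “anyone can say anything” idea can be sketched as a plain subject–predicate–object triple: a third party publishing a statement about a resource they don’t control. The URIs and the claim below are illustrative, not a real dataset:

```python
# A third party asserts something about someone else's resource.
# Dublin Core's "subject" term is real; the resource URI and the
# claim itself are made up for illustration.
triple = (
    "https://example.com/photo/42",      # someone else's resource
    "http://purl.org/dc/terms/subject",  # Dublin Core "subject" predicate
    "Kneeling goniometer prototype",     # the third party's claim
)

subject, predicate, obj = triple
# Emit an N-Triples-style line; nothing about example.com's own
# description of the photo constrains what this store can say.
print(f'<{subject}> <{predicate}> "{obj}" .')
```

The point of the sketch: the statement lives entirely outside the resource owner’s control, which is precisely what makes some institutions nervous and what makes Web annotations possible.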

  9. Nov 2015
  10. Oct 2015
    1. machine-readable, ‘semantic’ annotations.

      Waiting for those to be promoted, through Hypothesis and other Open Annotations platforms.

    2. Separating discussion about a page from the page itself is an essential step to free discussion.

      Sounds like separating content from format (à la LaTeX), and there are interesting parallels between annotations, metadata, and formatting. All of these can be described as “layers”, in the common metaphor.

    3. sleek interactive infographic created by W3C’s Doug Schepers:

      Slick, neat, and informative.

    4. annotations can be stored independently from their target

      A powerful but misunderstood (or even difficult-to-grasp) feature of Open Annotations.
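The feature is easier to grasp with a toy model: the annotation is a standalone resource that merely points at its target by URI, so it can live in any store, with no cooperation from the target page. URLs below are hypothetical, and a real store would speak the W3C Web Annotation Protocol rather than this in-memory sketch:

```python
from collections import defaultdict

class AnnotationStore:
    """Toy store: annotations indexed by the URI of the page they target."""

    def __init__(self):
        self._by_target = defaultdict(list)

    def add(self, annotation):
        self._by_target[annotation["target"]].append(annotation)

    def for_target(self, target_uri):
        # All annotations pointing at a given page, regardless of
        # who wrote them or where the page is hosted.
        return self._by_target[target_uri]

store = AnnotationStore()
store.add({
    "id": "https://annotations.example/anno/1",  # hypothetical
    "target": "https://example.com/article",     # the annotated page
    "body": "A comment the page's owner never hosts or controls",
})

print(len(store.for_target("https://example.com/article")))  # → 1
```

Because the target is only a reference, the annotated page needs no modification, and the same page can accumulate annotation layers from many independent stores.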

    1. He gave the example of digital textbooks which can be updated as an example of how online technology could be better than traditional methods.

      Great argument for OERs, no? And Open Annotations, for that matter.

  11. Sep 2015
    1. The W3C Annotation Working Group has a joint deliverable with the W3C Web Application Working Group called “Robust Anchoring”. This deliverable will provide a general framework for anchoring; and, although defined within the framework of annotations, the specification can also be used for other fragment identification use cases. Similarly, the W3C Media Fragments specification [media-frags] may prove useful to address some of the use cases. Finally, the Streamable Package Format draft, mentioned above, also includes a fragment identification mechanism. Would that package format be adopted for EPUB+WEB, that fragment identification may also come to the fore as an important mechanism to consider.

      Anchors are a key issue. Hope that deliverable will suffice.
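The core idea behind robust anchoring can be sketched with a `TextQuoteSelector`: instead of a brittle character offset, the anchor records the quoted text plus surrounding context, so it can be re-found after the page changes. This is a simplified illustration, not the deliverable’s algorithm (real clients, such as Hypothesis’s, combine several selector types and fuzzier matching):

```python
def anchor(document: str, selector: dict) -> int:
    """Return the index where the selected quote starts in `document`, or -1."""
    # First try the quote with its recorded context on both sides.
    needle = selector["prefix"] + selector["exact"] + selector["suffix"]
    start = document.find(needle)
    if start != -1:
        return start + len(selector["prefix"])
    # Fall back to the exact quote alone if the context has changed.
    return document.find(selector["exact"])

selector = {
    "type": "TextQuoteSelector",
    "exact": "key issue",
    "prefix": "Anchors are a ",
    "suffix": ". Hope",
}

# The document has been edited since the annotation was made, yet the
# quote still anchors, because the selector carries its own context.
doc = "NEW PREAMBLE. Anchors are a key issue. Hope that deliverable will suffice."
print(anchor(doc, selector))  # prints the index where "key issue" now starts
```

Offset-based fragment identifiers break as soon as text is inserted upstream; quote-plus-context selectors survive exactly that kind of edit.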

  12. Aug 2015
    1. how the Internet would have turned out differently if users had been able to annotate everything

      Maybe a new phase in the Internet’s development will allow us to observe this.

  13. Jan 2014
    1. When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions

      I love this early UX imagining of the linking/annotation process by Vannevar Bush. What's notable here, of course, is that he suggested that creating links between things was a function that visitors (trailblazers) could perform. In a sense, to him the notion of a hypertext link and a clickable annotation with two targets were mutually interchangeable ideas. Today, these are distinct. The idea that a visitor can do this is only possible within the emerging idea of Open Annotation as we understand it now. It's why those of us exploring it are so excited about its potential.