6 Matching Annotations
  1. Jan 2018
    1. I have added a script to my websites today that will block annotations

      I’ve spent some time thinking about this type of blocking in the past and written about a potential solution. Kevin Marks had created a script to help prevent this type of abuse as well; his solution and some additional variants are freely available. — {cja}

  2. Apr 2016
    1. appreciate your help

      I think a major part of addressing abuse and providing consent is building in notifications, so that website owners are at least aware when their site is being marked up, highlighted, annotated, or commented on in other locations or by other platforms. With that knowledge, the site owner can then be provided with information and tools to allow or disallow such interactions, particularly the ability to block individual bad actors while still supporting positive additions, thought, and communication. Ideally this blocking wouldn't occur site-wide, which many may be tempted to do now as a knee-jerk reaction to recent events, but would be fine-grained enough to filter out the worst offenders.

      Toward the goal of notifying site owners, it would be great if any annotating activity triggered trackbacks, pingbacks, or the relatively newer and better Webmention protocol from the W3C, which grew out of the http://IndieWebCamp.com movement. Site owners would then at least receive notifications about activity on their site that might otherwise be invisible to them.
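      For reference, a Webmention notification is, per the W3C spec, just a form-encoded POST of a `source` and `target` URL to the receiver's advertised endpoint. A minimal sketch in Python (the endpoint and URLs here are hypothetical placeholders; real senders would first discover the endpoint from the target page's `Link` header or `<link rel="webmention">` tag):

      ```python
      from urllib.parse import urlencode
      from urllib.request import Request

      def build_webmention(endpoint: str, source: str, target: str) -> Request:
          """Build a Webmention notification: a form-encoded POST telling the
          target's endpoint that `source` links to (here, annotates) `target`."""
          body = urlencode({"source": source, "target": target}).encode("utf-8")
          return Request(
              endpoint,
              data=body,
              headers={"Content-Type": "application/x-www-form-urlencoded"},
              method="POST",
          )

      # Hypothetical example: an annotation service notifying a site owner.
      req = build_webmention(
          "https://example.com/webmention",        # assumed discovered endpoint
          "https://annotations.example/note/123",  # page holding the annotation
          "https://example.com/annotated-post",    # the page that was annotated
      )
      # urllib.request.urlopen(req) would send it; omitted to avoid network I/O.
      ```

      The endpoint then verifies that the source really links to the target before accepting the mention, which is what lets receivers reject spammy or fabricated notifications.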

      Perhaps there's also a way to implement filters or tools (a la Akismet on platforms like WordPress) that let site users mark materials as spam, abusive, or otherwise objectionable, so that those annotations are moved from public-facing to private. The original highlighter could still see their own notes, but the platform wouldn't be allowing the person's own website to act as a platform that gives reach to bad actors.

      Further, some site owners might appreciate graded filters (G, PG, PG-13, R, X) so that users, or even parents, can filter what they're willing to see. Consider also annotations on narrative works that might be posted as spoilers: how can these be guarded against? (Possibly with CSS and a spoiler tag?) Options can be built into the platform itself, with server-side options for truly hard cases.

      My coding skills are rustier than I wish they were, but I'm available to help/consult if needed.

    1. The editor of News Genius joined in with snarky and hostile comments.

      Funny how frequently this term comes up when talking about Genius. The difference between annotation platforms is largely a matter of usage. Usage of Genius has a lot to do with snarky comments made by “the smart kid at the back of the class”. My perception of Hypothesis is that it’s much more oriented towards diversifying voices. But that has less to do with the technical features of the platform than with the community adopting it.

    1. “The annotations I have seen are often more snark than substance,”

      Same experience, even in the Genius guidelines. The tool’s affordances (and name) revolve around snark. In the abstract, there’s nothing wrong with that. We need spaces for people to have fun, even if it’s at the expense of others. But the startup is based on a very specific idea of what constitutes useful commentary, an idea closer to pedantry, snark, intellectual bullying, and animated gifs than to respectful exchange.

  3. Mar 2016