3 Matching Annotations
  1. May 2017
    1. Concerns

      Here's another concern:

      Let's say this bot-powered annotation "agora" starts to operate as Mike has laid it out. The annotations from that credibility activity alone could become noisy, and much noisier still if 1,000 other bots start annotating for other purposes.

      One solution would be a dedicated "credibility" layer, where only annotations from "registered" or "approved" credibility annotators appear. A user could then activate just this layer to focus on credibility signals.

      But if there's going to be a "credibility" layer, who gets to gatekeep participation? One might imagine a trusted, "neutral" organization that publishes criteria for participation in the layer and admits annotators that meet them.

      And just as there could be multiple annotators, bot or human, with different points of view posting to such a layer, there could also be multiple credibility layers, each administered by a different organization with its own point of view, sources, and approach: a sort of agora of credibility agoras. Users could then pay attention to the credibility layers they find most useful (a rough sketch of this layered model appears at the end of this note).

      But users could also pay attention to just the credibility layer that most agrees with their established point of view. Would this simply move the same issues we see with the credibility of information to the credibility layer itself? Would #fakecred become an industry like #fakenews?
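
      To make the layered model above concrete, here is a minimal sketch, not any existing annotation API: the types (`Annotation`, `CredibilityLayer`) and the `visibleAnnotations` function are hypothetical names. The idea is simply that each gatekeeping organization publishes the set of annotators it approves, and a client filters the annotation feed down to the layers a user has activated.

      ```typescript
      // Hypothetical sketch of a "credibility layer" filter; all names are illustrative only.

      interface Annotation {
        id: string;
        annotatorId: string; // the bot or human account that posted it
        targetUri: string;   // the page being annotated
        body: string;
      }

      // One layer: a gatekeeping organization publishes which annotators it approves.
      interface CredibilityLayer {
        name: string;
        administeredBy: string;          // e.g. a trusted, "neutral" organization
        approvedAnnotators: Set<string>; // annotator ids meeting its published criteria
      }

      // Show only annotations whose authors are approved by at least one activated layer.
      function visibleAnnotations(
        all: Annotation[],
        activatedLayers: CredibilityLayer[],
      ): Annotation[] {
        return all.filter((a) =>
          activatedLayers.some((layer) => layer.approvedAnnotators.has(a.annotatorId)),
        );
      }

      // Two layers run by different organizations; the user chooses which to activate.
      const layerA: CredibilityLayer = {
        name: "credibility-a",
        administeredBy: "Org A",
        approvedAnnotators: new Set(["factbot-1", "reviewer-42"]),
      };
      const layerB: CredibilityLayer = {
        name: "credibility-b",
        administeredBy: "Org B",
        approvedAnnotators: new Set(["factbot-9"]),
      };

      const feed: Annotation[] = [
        { id: "1", annotatorId: "factbot-1", targetUri: "https://example.com", body: "Source checks out." },
        { id: "2", annotatorId: "spambot-7", targetUri: "https://example.com", body: "Buy now!" },
        { id: "3", annotatorId: "factbot-9", targetUri: "https://example.com", body: "Claim is disputed." },
      ];

      console.log(visibleAnnotations(feed, [layerA]));         // only factbot-1's annotation survives
      console.log(visibleAnnotations(feed, [layerA, layerB])); // factbot-1 and factbot-9
      ```

      The filter-bubble worry above maps directly onto the last two lines: a user who activates only one layer sees only what that layer's gatekeeper approves.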