- May 2017
Here's another concern:
Let's say this bot-powered annotation "agora" starts to operate as Mike has laid it out. The annotations from that credibility activity alone may start to become noisy—and much noisier if 1,000 other bots start annotating for other purposes.
One solution would be a dedicated "credibility" layer, where only the annotations from "registered" or "approved" credibility annotators appear. That way a user could activate just this credibility layer to focus on credibility signals.
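A minimal sketch of what such a filtered layer might look like, assuming each annotation carries an annotator identity that can be checked against a registry maintained by the layer's administrator. All names here (`Annotation`, `APPROVED_CREDIBILITY_ANNOTATORS`, the sample annotators) are hypothetical, not part of any real annotation system:

```python
# Hypothetical sketch: reducing a stream of annotations to a
# "credibility layer" containing only approved annotators.

from dataclasses import dataclass

@dataclass
class Annotation:
    author: str   # annotator identity (human or bot)
    target: str   # URL of the annotated document
    body: str     # the annotation text

# A registry the administering organization would maintain.
APPROVED_CREDIBILITY_ANNOTATORS = {"factcheck-bot", "jane@example.org"}

def credibility_layer(annotations):
    """Return only the annotations from approved credibility annotators."""
    return [a for a in annotations
            if a.author in APPROVED_CREDIBILITY_ANNOTATORS]

annotations = [
    Annotation("factcheck-bot", "https://example.com/story", "Source verified"),
    Annotation("random-bot", "https://example.com/story", "Buy widgets!"),
]
layer = credibility_layer(annotations)
assert [a.author for a in layer] == ["factcheck-bot"]
```

The gatekeeping question then reduces to who controls the registry, which is exactly the governance issue raised below.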
But if there's going to be a "credibility" layer, who is going to gatekeep participation? One might imagine a trusted, "neutral" organization that would publish criteria for participation in this credibility layer and enable annotators that meet such criteria.
And just as there could be multiple annotators—bot or human—with different points of view posting to such a layer, there could also be multiple credibility layers, each administered by a different organization with its own point of view, sources, and/or approaches. A sort of agora of credibility agoras. Users could then pay attention to the credibility layers they find most useful.
But users could also attend only to the credibility layer that most agrees with their established point of view. Would this simply move the same problems we see with the credibility of information up to the credibility layer itself? Would #fakecred become an industry like #fakenews?
Annotation as a Marketplace for Context
I'm generally in agreement with the problem area and solutions Mike proposes here. One thing we can say with some confidence is that the "automated, centralized, closed approaches" we have already seen attempting to address issues of this scope are unlikely to resolve the credibility problems in view here.
Could we use "agora" or some other term rather than "marketplace" to suggest that not all activities in the space will be mercantile?