1 Matching Annotations
  1. May 2022
    1. “It was 2017, I would say, when Twitter started really cracking down on bots in a way that they hadn’t before — taking down a lot of bad bots, but also taking down a lot of good bots too. There was an appeals process [but] it was very laborious, and it just became very difficult to maintain stuff. And then they also changed all their APIs, which are the programmatic interface for how a bot talks to Twitter. So they changed those without really any warning, and everything broke.”

      Just as political actors can chill speech, social media corporations can use changes in policy and APIs to stifle and chill speech online.

      This doesn't mean there aren't bad actors building bots to actively cause harm, but there is also a class of helpful and useful bots (tools) that can make a social space better or more interesting.

      How does one regulate this sort of speech? Perhaps the answer is simply not to algorithmically amplify these bots and their speech over that of humans.

      More and more I think the answer is to make online social interactions more like in-person interactions. Too much of social media gives an even bigger bullhorn to the crazy preacher on the corner of Main Street, whom the crowds once simply ignored. Social media has made it easier for us to shout him back down, and in doing so we only make him heard by more people. We need a negative feedback mechanism that dampens these effects online the same way they would have been dampened in person.
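
      As a purely illustrative sketch (no platform I know of implements this; the function, signals, and damping weight below are all invented for illustration), a feed-ranking score with this kind of negative feedback might subtract hostile engagement from a post's reach rather than adding it in:

      ```python
      import math

      def reach_score(likes: int, replies: int, angry_reactions: int) -> float:
          """Toy ranking score with negative feedback (hypothetical).

          Engagement-maximizing feeds treat every interaction as a reason
          to amplify. Here, hostile engagement (angry reactions, pile-on
          replies) dampens reach instead, mimicking the street corner
          where the ignored preacher simply isn't heard by more people.
          """
          approval = math.log1p(likes)                      # diminishing returns on approval
          backlash = math.log1p(replies + angry_reactions)  # shouting back suppresses reach
          damping = 1.5                                     # illustrative damping weight
          return max(0.0, approval - damping * backlash)

      # A merely popular post outranks one that provokes a pile-on:
      reach_score(likes=500, replies=20, angry_reactions=5)    # ~1.3
      reach_score(likes=50, replies=400, angry_reactions=300)  # 0.0, dampened to nothing
      ```

      Under scoring like this, piling onto a post actively reduces its distribution, which is exactly the in-person dynamic that the bullhorn model inverts.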