7 Matching Annotations
  1. Feb 2022
    1. If each of Facebook’s 15,000 U.S. moderators aggressively reviewed several dozen of the most active users and permanently removed those guilty of repeated violations, abuse on Facebook would drop drastically within days. But so would overall user engagement.

      This was powerful to me: it shows how the incentive system is screwed up. It also made me think the problem is not that abuse is infeasible to prevent, but that the people who could prevent it choose not to because of their business incentives. So there is opportunity here as well; it's a tractable problem.

    2. In addition to the torrent of vile posts, dozens of top users behaved in spammy ways. We don’t see large-scale evidence of bot or nonhuman accounts in our data, and comments have traditionally been the hardest activity to fake at scale. But we do see many accounts that copy and paste identical rants across many posts on different pages. Other accounts posted repeated links to the same misinformation videos or fake news sites. Many accounts also repeated one- or two-word comments—often as simple as “yes” or “YES !!”—dozens and dozens of times, an unusual behavior for most users. Whether this behavior was coordinated or not, these throwaway comments gave a huge boost to MSI, and signaled to Facebook’s algorithm that this is what users want to see.

      bots
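
      The copy-paste pattern described in the quote is easy to surface in a comment dump. A minimal sketch (the `(account_id, text)` record format is a hypothetical dump layout, not any real Facebook API) that flags accounts repeating identical text many times:

      ```python
      from collections import Counter, defaultdict

      def flag_copy_paste_accounts(comments, min_repeats=5):
          """Flag accounts that post identical comment text many times.

          `comments` is an iterable of (account_id, text) pairs --
          a hypothetical dump format assumed for illustration.
          """
          per_account = defaultdict(Counter)
          for account_id, text in comments:
              # Normalize lightly so "YES !!" and "yes !!" count as one string
              per_account[account_id][text.strip().lower()] += 1
          flagged = {}
          for account_id, texts in per_account.items():
              repeats = {t: n for t, n in texts.items() if n >= min_repeats}
              if repeats:
                  flagged[account_id] = repeats
          return flagged

      # Example: one account spamming "YES !!" across many posts
      sample = [("a1", "YES !!")] * 6 + [("a2", "interesting point")]
      print(flag_copy_paste_accounts(sample))  # → {'a1': {'yes !!': 6}}
      ```

      This only catches exact repeats; near-duplicate rants would need fuzzier matching, but the quote suggests even this naive check would flag many of the accounts described.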

    3. Of the 219 accounts with at least 25 public comments, 68 percent spread misinformation, reposted in spammy ways, published comments that were racist or sexist or anti-Semitic or anti-gay, wished violence on their perceived enemies, or, in most cases, several of the above.

      This surprised me

    4. the top 1 percent of accounts were responsible for 35 percent of all observed interactions; the top 3 percent were responsible for 52 percent.

      1.5 million people account for half the content. It reminds me of a similar statistic about Twitter: "what Twitter thinks" represents only a subset, and probably a self-selected subset, of the population.
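
      Concentration figures like these are easy to compute on any interaction dataset. A sketch using a synthetic Zipf-like activity distribution (purely illustrative; not the article's data, where the real distribution is evidently even more skewed):

      ```python
      def top_share(interactions, pct):
          """Fraction of total interactions produced by the top `pct` percent of accounts."""
          ordered = sorted(interactions, reverse=True)
          k = max(1, int(len(ordered) * pct / 100))
          return sum(ordered[:k]) / sum(ordered)

      # Synthetic heavy tail: account at rank r gets ~1/r of the top account's activity
      counts = [10_000 // r for r in range(1, 10_001)]
      print(f"top 1% share: {top_share(counts, 1):.0%}")
      print(f"top 3% share: {top_share(counts, 3):.0%}")
      ```

      Even this mild power law puts a large share of activity in the top few percent of accounts, which is why ranking signals like MSI are so easy for a small group to dominate.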