22 Matching Annotations
  1. Mar 2021
    1. Take control of it for yourself.

      Quite in contrast to the 2021 Congressional Investigation into Online Misinformation and Disinformation, which places the responsibility on the major platforms (FB, Twitter, YouTube) to moderate and control content.

    1. Q: So, this means you don’t value hearing from readers?
       A: Not at all. We engage with readers every day, and we are constantly looking for ways to hear and share the diversity of voices across New Jersey. We have built strong communities on social platforms, and readers inform our journalism daily through letters to the editor. We encourage readers to reach out to us, and our contact information is available on this How To Reach Us page.

      We have built strong communities on social platforms

      They have? Really?! I think it's more likely that the social platforms have built strong communities which happen to be talking about and sharing the paper's content. The paper doesn't have any content moderation or control capabilities on any of those platforms.

      Now it may be the case that there is a broader diversity of voices on those platforms than in their own comments sections. This means that a small proportion of potential trolls won't drown the signal in noise, as may happen in their onsite comments sections.

      If the paper is really listening on the other platforms, how are they doing it? Isn't reading some or all of it a large portion of content moderation? How do they get notifications of people mentioning them (is it only direct @mentions)?

      Couldn't/wouldn't an IndieWeb version of this help them, or work better?

    2. via Inquirer.com in Why we’re removing comments on most of Inquirer.com (03/18/2021 19:32:19)

    1. Many news organizations have made the decision to eliminate or restrict comments in recent years, from National Public Radio, to The Atlantic, to NJ.com, which did a nice job of explaining the decision when comments were removed from its site.

      A list of journalistic outlets that have removed comments from their websites.

    2. Experience has shown that anything short of 24-hour vigilance on all stories is insufficient.
    1. Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience.

      This and the prior note are also underpinned by the fact that only about 10% of people are responsible for the majority of posts, so if you can filter out the velocity that accrues to these people, you can effectively dampen down the crazy.

    2. In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

      The one thing many of these types of noxious content will have in common is the people at the fringes who are regularly promoting them. Why not latch onto that as a means of filtering? (A rough sketch of the idea follows below.)
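
      A minimal sketch of that idea in Python, under assumed, hypothetical data (Post, author_damping, and rank_feed are illustrative names, not anything from the article or any real platform API): instead of trying to catch each new noxious post individually, down-weight the feed ranking of authors whose past posts are disproportionately flagged, so their content loses velocity even when a given post slips past the classifiers.

      ```python
      from collections import Counter
      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          engagement: float  # raw engagement score from the ranking model
          flagged: bool      # did a moderation filter catch this post?

      def author_damping(history: list[Post], floor: float = 0.1) -> dict[str, float]:
          """Per-author multiplier in [floor, 1.0] based on how often that
          author's past posts were flagged. Heavy repeat offenders get their
          future reach damped across the board."""
          totals: Counter = Counter()
          flags: Counter = Counter()
          for post in history:
              totals[post.author] += 1
              if post.flagged:
                  flags[post.author] += 1
          return {a: max(floor, 1.0 - flags[a] / totals[a]) for a in totals}

      def rank_feed(feed: list[Post], damping: dict[str, float]) -> list[Post]:
          # Posts from frequently-flagged authors sink even when the new post
          # itself wasn't caught -- the "never seen before" content problem.
          return sorted(feed,
                        key=lambda p: p.engagement * damping.get(p.author, 1.0),
                        reverse=True)
      ```

      The appeal of this framing is that the signal being filtered on (who keeps posting flagged material) is far more stable than the content itself, which, as Schroepfer notes, mutates faster than the classifiers can learn it.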

    1. Lori Morimoto, a fandom academic who was involved in the earlier discussion, didn’t mince words about the inherent hypocrisy of the controversy around STWW. “The discussions of the fic were absolutely riddled with people saying they wished you could block and/or ban certain users and fics on AO3 altogether because this is obnoxious,” she wrote to me in an email, “and nowhere (that I can see) is there anyone chiming in to say, ‘BUT FREE SPEECH!!!’” Morimoto continued:

       But when people suggest the same thing based on racist works and users, suddenly everything is about freedom of speech and how banning is bad. When it’s about racism, every apologist under the sun puts in an appearance to fight for our rights to be racist assholes, but if it’s about making the reading experience less enjoyable (which is basically what this is — it’s obnoxious, but not particularly harmful except to other works’ ability to be seen), then suddenly our overwhelming concern with free speech seems to just disappear in a poof of nothingness.

      This is an interesting example of people papering over racism in the name of free speech.

  2. Feb 2021
    1. The solution, he said, was to identify “super-spreaders” of slander, the people and the websites that wage the most vicious false attacks.

      This would be a helpful approach to disinformation in general, from a journalistic perspective too.

  3. Jan 2021
    1. Group Rules from the Admins
       1. NO POSTING LINKS INSIDE OF POST - FOR ANY REASON
          We've seen way too many groups become a glorified classified ad & members don't like that. We don't want the quality of our group negatively impacted because of endless links everywhere. NO LINKS
       2. NO POST FROM FAN PAGES / ARTICLES / VIDEO LINKS
          Our mission is to cultivate the highest quality content inside the group. If we allowed videos, fan page shares, & outside websites, our group would turn into spam fest. Original written content only
       3. NO SELF PROMOTION, RECRUITING, OR DM SPAMMING
          Members love our group because it's SAFE. We are very strict on banning members who blatantly self promote their product or services in the group OR secretly private message members to recruit them.
       4. NO POSTING OR UPLOADING VIDEOS OF ANY KIND
          To protect the quality of our group & prevent members from being solicited products & services - we don't allow any videos because we can't monitor what's being said word for word. Written post only.

      Wow, that's strict.

    1. This has some interesting research which might be applied to better design for an IndieWeb social space.

      I'd prefer a more positive framing rather than this likely more negative one.

  4. Sep 2020
    1. What were the “right things” to serve the community, as Zuckerberg put it, when the community had grown to more than 3 billion people?

      This is just one of the contradictions of having a global communication medium/platform controlled by a single operator.

      It is extremely difficult to create global policies to moderate the conversations of 3 billion people across different languages and cultures. No team, no document, is qualified for such a task, because so much is dependent on context.

      The approach to moderation taken by federated social media like Mastodon makes a lot more sense. Communities moderate themselves, based on their own codes of conduct. On smaller servers, a strict code of conduct may not even be necessary - moderation decisions can be based on a combination of consensus and common sense (just like in real-life social groups and interactions). And there is no question of censorship, since a server's moderation actions don't apply to the whole network.

  5. Oct 2018
    1. "I am really pleased to see different sites deciding not to privilege aggressors' speech over their targets'," Phillips said. "That tends to be the default position in so many online 'free speech' debates which suggest that if you restrict aggressors' speech, you're doing a disservice to America—a position that doesn't take into account the fact that antagonistic speech infringes on the speech of those who are silenced by that kind of abuse."