52 Matching Annotations
  1. Jan 2024
    1. By its very nature, moderation is a form of censorship. You, as a community, space, or platform, are deciding who and what is unacceptable. In Substack’s case, for example, they don’t allow pornography but they do allow Nazis. That’s not “free speech” but rather a business decision. If you’re making moderation decisions based on financials, fine, but say so. Then platform users can make choices appropriately.
  2. Dec 2023
  3. Mar 2023
    1. content-moderation subsidiarity. Just as the general principle of political subsidiarity holds that decisions should be made at the lowest organizational level capable of making such decisions, content-moderation subsidiarity devolves decisions to the individual instances that make up the overall network.

      Content-moderation subsidiarity

      In the fediverse, content moderation decisions are made at low organization levels—at the instance level—rather than on a global scale.

    1. OpenAI also contracted out what’s known as ghost labor: gig workers, including some in Kenya (a former British Empire state, where people speak Empire English) who make $2 an hour to read and tag the worst stuff imaginable — pedophilia, bestiality, you name it — so it can be weeded out. The filtering leads to its own issues. If you remove content with words about sex, you lose content of in-groups talking with one another about those things.

      OpenAI’s use of human taggers

  4. Feb 2023
    1. Rozenshtein, Alan Z., Moderating the Fediverse: Content Moderation on Distributed Social Media (November 23, 2022). 2 Journal of Free Speech Law (2023, Forthcoming), Available at SSRN: https://ssrn.com/abstract=4213674 or http://dx.doi.org/10.2139/ssrn.4213674

      Found via Nathan Schneider

      Abstract

      Current approaches to content moderation generally assume the continued dominance of “walled gardens”: social media platforms that control who can use their services and how. But an emerging form of decentralized social media—the "Fediverse"—offers an alternative model, one more akin to how email works and that avoids many of the pitfalls of centralized moderation. This essay, which builds on an emerging literature around decentralized social media, seeks to give an overview of the Fediverse, its benefits and drawbacks, and how government action can influence and encourage its development.

      Part I describes the Fediverse and how it works, beginning with a general description of open versus closed protocols and then proceeding to a description of the current Fediverse ecosystem, focusing on its major protocols and applications. Part II looks at the specific issue of content moderation on the Fediverse, using Mastodon, a Twitter-like microblogging service, as a case study to draw out the advantages and disadvantages of the federated content-moderation approach as compared to the current dominant closed-platform model. Part III considers how policymakers can encourage the Fediverse, whether through direct regulation, antitrust enforcement, or liability shields.

    1. Internet ‘algospeak’ is changing our language in real time, from ‘nip nops’ to ‘le dollar bean’ by [[Taylor Lorenz]]

      shifts in language and meaning of words and symbols as the result of algorithmic content moderation

      instead of slow semantic shifts, content moderation is actively pushing shifts of words and their meanings


      article suggested by this week's Dan Allosso Book club on Pirate Enlightenment

    2. “you’ll never be able to sanitize the Internet.”
    3. Could it be the shift from person-to-person communication (known in both directions) to massive broadcast that is driving issues with content moderation? When it's person to person, one can simply choose not to interact and put the person beyond their individual pale. This sort of shunning is much harder to do with larger mass publics at scale in broadcast mode.

      How can bringing content moderation back down to the neighborhood scale help in the broadcast model?

    4. “Zuck Got Me For,” a site created by a meme account administrator who goes by Ana, is a place where creators can upload nonsensical content that was banned by Instagram’s moderation algorithms.
    5. “The reality is that tech companies have been using automated tools to moderate content for a really long time and while it’s touted as this sophisticated machine learning, it’s often just a list of words they think are problematic,” said Ángel Díaz, a lecturer at the UCLA School of Law who studies technology and racial discrimination.
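
      Díaz's point can be made concrete with a toy sketch: a "moderation system" that is nothing more than a word list. Everything here (the term set, the function name) is hypothetical, but it shows both failure modes raised in the other annotations from this month: legitimate in-group discussion is caught, while trivial "algospeak" respellings sail through.

      ```python
      import re

      # Hypothetical term list illustrating Díaz's point: much touted "AI"
      # moderation is effectively a list of words someone decided was problematic.
      BLOCKED_TERMS = {"sex", "nude", "kill"}  # illustrative only

      def naive_moderate(post: str) -> bool:
          """Return True if a simple wordlist filter would suppress this post."""
          words = re.findall(r"[a-z']+", post.lower())
          return any(word in BLOCKED_TERMS for word in words)

      naive_moderate("health educators discussing sex ed")  # True: in-group speech suppressed
      naive_moderate("s3x ed resources")                    # False: "algospeak" evasion
      ```
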
    6. Is algorithmic content moderation creating a new sort of cancel culture online?

    7. But algorithmic content moderation systems are more pervasive on the modern Internet, and often end up silencing marginalized communities and important discussions.

      What about non-marginalized toxic communities like Neo-Nazis?

    1. LaMDA's safety features could also be limiting: Michelle Taransky found that "the software seemed very reluctant to generate people doing mean things". Models that generate toxic content are highly undesirable, but a literary world where no character is ever mean is unlikely to be interesting.
  5. Dec 2022
    1. The hypothesis is that hate speech is met with other speech in a free marketplace of ideas. That hypothesis only functions if users are trapped in one conversational space. What happens instead is that users choose not to volunteer their time and labor to speak around or over those calling for their non-existence (or for the non-existence of their friends and loved ones) and go elsewhere... Taking their money and attention with them. As those promulgating the hate speech tend to be a much smaller group than those who leave, it is in the selfish interest of most forums to police that kind of signal jamming to maximize their possible user-base. Otherwise, you end up with a forum full mostly of those dabbling in hate speech, which is (a) not particularly advertiser friendly, (b) hostile to further growth, and (c) not something most people who get into this gig find themselves proud of.

      Battling hate speech is different when users aren't trapped

      When targeted users are not trapped on a platform, they have the choice to leave rather than explain themselves and/or overwhelm the hate speech. When those users leave, the platform becomes less desirable for others (the concentration of hate speech increases) and it becomes a vicious cycle downward.

    1. The trust one must place in the creator of a blocklist is enormous, because the most dangerous failure mode isn’t that it doesn’t block who it says it does, but that it blocks who it says it doesn’t and they just disappear.
  6. Nov 2022
    1. The problem when the asset is people is that people are intensely complicated, and trying to regulate how people behave is historically a miserable experience, especially when that authority is vested in a single powerful individual.
    2. The essential truth of every social network is that the product is content moderation, and everyone hates the people who decide how content moderation works.
  7. Oct 2022
    1. Some social media platforms struggle with even relatively simple tasks, such as detecting copies of terrorist videos that have already been removed. But their task becomes even harder when they are asked to quickly remove content that nobody has ever seen before. “The human brain is the most effective tool to identify toxic material,” said Roi Carthy, the chief marketing officer of L1ght, a content moderation AI company. Humans become especially useful when harmful content is delivered in new formats and contexts that AI may not identify. “There’s nobody that knows how to solve content moderation holistically, period,” Carthy said. “There’s no such thing.”

      Marketing officer for an AI content moderation company says it is an unsolved problem

    1. If the link is in a Proud Boys forum, would you not take any action against it, even if it’s like, “Click this link to help plan”?

      Are you asking if we have people out there clicking every link and checking if the forum comports with the ideological position that Signal agrees with?

      Yeah. I think in the most abstract way, I’m asking if you have a content moderation team.

      No, we don’t have that. We are also not a social media platform. We don’t amplify content. We don’t have Telegram channels where you can broadcast to thousands and thousands of people. We have been really careful in our product development side not to develop Signal as a social network that has algorithmic amplification that allows that “one to millions” amplification of content. We are a messaging platform. We don’t have a content moderation team because (1) we are fully private, we don’t see your content, we don’t know who you’re talking about; and (2) we are not a content platform, so it is a different paradigm.

      Signal's president, Meredith Whittaker, on Signal's product vision and the difference between Signal and Telegram.

      They deliberately steered the product away from "one to millions" amplification of content, like for example Telegram's channels.

  8. Jul 2022
  9. May 2022
    1. https://www.niemanlab.org/2022/05/reader-comments-on-news-sites-we-want-to-hear-what-your-publication-does/

      I'm curious if any publications have experimented with the W3C Webmention spec for notifications as a means of handling comments? Coming out of the IndieWeb movement, Webmention allows people to post replies to online stories on their own websites (potentially where they're less likely to spew bile and hatred in public) and send notifications to the articles they've mentioned. The receiving web page (an article, for example) can then choose to show all or even a portion of the response in the page's comments section. Other types of interaction beyond comments can also be supported here, including receiving "likes", "bookmarks", "reads" (indicating that someone actually read the article), etc. There are also tools like Brid.gy which bootstrap Webmention onto social media sites like Twitter to make them send notifications to an article which might have been mentioned in social spaces. I've seen many personal sites supporting this and one or two small publications supporting it, but I'm as yet unaware of larger newspapers or magazines doing so.
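
      The Webmention flow is small enough to sketch. Per the W3C Recommendation, the receiver advertises an endpoint (via an HTTP Link header or a link/a element) and the sender POSTs form-encoded source and target URLs to it. The sketch below, with hypothetical URLs, covers only the HTML link-element case and ignores Link headers and other attribute orderings the spec allows:

      ```python
      import re
      import urllib.parse
      import urllib.request

      def discover_endpoint(html: str, base_url: str) -> str | None:
          """Find a <link rel="webmention"> endpoint in a page's HTML.

          Simplified: the real spec also checks HTTP Link headers, <a>
          elements, and rel/href in either order."""
          m = re.search(
              r'<link[^>]+rel=["\']webmention["\'][^>]+href=["\']([^"\']*)["\']',
              html,
          )
          return urllib.parse.urljoin(base_url, m.group(1)) if m else None

      def build_webmention(endpoint: str, source: str, target: str) -> urllib.request.Request:
          """Prepare the sender's POST: form-encoded source and target URLs."""
          data = urllib.parse.urlencode({"source": source, "target": target}).encode()
          return urllib.request.Request(endpoint, data=data, method="POST")

      # Example (hypothetical URLs): discover the article's endpoint, then
      # notify it that our reply post links to it.
      page = '<link rel="webmention" href="/webmention">'
      endpoint = discover_endpoint(page, "https://example.com/post/1")
      req = build_webmention(endpoint, "https://mysite.example/reply", "https://example.com/post/1")
      ```
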

    2. The Seattle Times turns off comments on “stories that are of a sensitive nature,” said Michelle Matassa Flores, executive editor of The Seattle Times. “People can’t behave on any story that has to do with race.” Comments are turned off on stories about race, immigration, and crime, for instance.

      The Seattle Times turns off comments on stories about race, immigration, and crime because as their executive editor Michelle Matassa Flores says, "People can't behave on any story that has to do with race."

  10. Jul 2021
  11. Apr 2021
  12. Mar 2021
    1. Take control of it for yourself.

      quite in contrast to the 2021 Congressional Investigation into Online Misinformation and Disinformation which places the responsibility on major platforms (FB, Twitter, YouTube) to moderate and control content.

    1. Q: So, this means you don’t value hearing from readers?

      A: Not at all. We engage with readers every day, and we are constantly looking for ways to hear and share the diversity of voices across New Jersey. We have built strong communities on social platforms, and readers inform our journalism daily through letters to the editor. We encourage readers to reach out to us, and our contact information is available on this How To Reach Us page.

      We have built strong communities on social platforms

      They have? Really?! I think it's more likely the social platforms have built strong communities which happen to be talking about and sharing the paper's content. The paper doesn't have any content moderation or control capabilities on any of these platforms.

      Now it may be the case that there is a broader diversity of voices on those platforms than in their own comments sections. This means that a small proportion of potential trolls won't drown out the signal with noise, as may happen in their comments sections online.

      If the paper is really listening on the other platforms, how are they doing it? Isn't reading some or all of it a large portion of content moderation? How do they get notifications of people mentioning them (is it only direct @mentions)?

      Couldn't/wouldn't an IndieWeb version of this help them or work better?

    2. <small><cite class='h-cite via'> <span class='p-author h-card'>Inquirer.com</span> in Why we’re removing comments on most of Inquirer.com (<time class='dt-published'>03/18/2021 19:32:19</time>)</cite></small>

    1. Many news organizations have made the decision to eliminate or restrict comments in recent years, from National Public Radio, to The Atlantic, to NJ.com, which did a nice job of explaining the decision when comments were removed from its site.

      A list of journalistic outlets that have removed comments from their websites.

    2. Experience has shown that anything short of 24-hour vigilance on all stories is insufficient.
    1. Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience.

      This and the prior note are also underpinned by the fact that only 10% of people are going to be responsible for the majority of posts, so if you can filter out the velocity that accrues to these people, you can effectively dampen down the crazy.

    2. In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

      The one thing many of these types of noxious content WILL have in common is the people at the fringes who are regularly promoting it. Why not latch onto that as a means of filtering?
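
      The "latch onto the promoters" idea could be sketched as a simple frequency filter: instead of judging every new post, track which accounts repeatedly share already-flagged material and damp their reach past a threshold. The function name, inputs, and threshold below are hypothetical illustrations, not any platform's actual method.

      ```python
      from collections import Counter

      # Hypothetical sketch: the input is the author of each post that has
      # already tripped a moderation filter; accounts that recur at or above
      # the threshold are candidates for reduced amplification.
      def find_superspreaders(flagged_authors: list[str], threshold: int = 3) -> set[str]:
          counts = Counter(flagged_authors)
          return {author for author, n in counts.items() if n >= threshold}

      find_superspreaders(["ana", "bob", "ana", "ana", "cyd"])  # {"ana"}
      ```
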

    1. Lori Morimoto, a fandom academic who was involved in the earlier discussion, didn’t mince words about the inherent hypocrisy of the controversy around STWW. “The discussions of the fic were absolutely riddled with people saying they wished you could block and/or ban certain users and fics on AO3 altogether because this is obnoxious,” she wrote to me in an email, “and nowhere (that I can see) is there anyone chiming in to say, ‘BUT FREE SPEECH!!!’” Morimoto continued: But when people suggest the same thing based on racist works and users, suddenly everything is about freedom of speech and how banning is bad. When it’s about racism, every apologist under the sun puts in an appearance to fight for our rights to be racist assholes, but if it’s about making the reading experience less enjoyable (which is basically what this is — it’s obnoxious, but not particularly harmful except to other works’ ability to be seen), then suddenly our overwhelming concern with free speech seems to just disappear in a poof of nothingness.

      This is an interesting example of people papering around allowing racism in favor of free speech.

  13. Feb 2021
    1. The solution, he said, was to identify “super-spreaders” of slander, the people and the websites that wage the most vicious false attacks.

      This would be a helpful thing in general disinformation from a journalistic perspective too.

  14. Jan 2021
    1. Group Rules from the Admins

      1. NO POSTING LINKS INSIDE OF POST - FOR ANY REASON
      We've seen way too many groups become a glorified classified ad & members don't like that. We don't want the quality of our group negatively impacted because of endless links everywhere. NO LINKS

      2. NO POST FROM FAN PAGES / ARTICLES / VIDEO LINKS
      Our mission is to cultivate the highest quality content inside the group. If we allowed videos, fan page shares, & outside websites, our group would turn into spam fest. Original written content only

      3. NO SELF PROMOTION, RECRUITING, OR DM SPAMMING
      Members love our group because it's SAFE. We are very strict on banning members who blatantly self promote their product or services in the group OR secretly private message members to recruit them.

      4. NO POSTING OR UPLOADING VIDEOS OF ANY KIND
      To protect the quality of our group & prevent members from being solicited products & services - we don't allow any videos because we can't monitor what's being said word for word. Written post only.

      Wow, that's strict.

    1. This has some interesting research which might be applied to better design for an IndieWeb social space.

      I'd prefer a more positive framing rather than this likely more negative one.

  15. Sep 2020
    1. What were the “right things” to serve the community, as Zuckerberg put it, when the community had grown to more than 3 billion people?

      This is just one of the contradictions of having a global medium/platform of communication being controlled by a single operator.

      It is extremely difficult to create global policies to moderate the conversations of 3 billion people across different languages and cultures. No team, no document, is qualified for such a task, because so much is dependent on context.

      The approach to moderation taken by federated social media like Mastodon makes a lot more sense. Communities moderate themselves, based on their own codes of conduct. In smaller servers, a strict code of conduct may not even be necessary - moderation decisions can be based on a combination of consensus and common sense (just like in real life social groups and social interactions). And there is no question of censorship, since their moderation actions don't apply to the whole network.

  16. Oct 2018
    1. "I am really pleased to see different sites deciding not to privilege aggressors' speech over their targets'," Phillips said. "That tends to be the default position in so many online 'free speech' debates which suggest that if you restrict aggressors' speech, you're doing a disservice to America—a position that doesn't take into account the fact that antagonistic speech infringes on the speech of those who are silenced by that kind of abuse."