7 Matching Annotations
  1. Apr 2021
    1. American moderators are more likely to have the cultural context necessary to evaluate U.S. content that may involve bullying and hate speech, which often involve country-specific slang

      This mention of country-specific slang reminded me of Meredith Clark's 2015 piece on Black Twitter. Using the examples of #PaulasBestDishes and #SolidarityIsForWhiteWomen, Clark highlights how the use of culturally specific language and phrases has created a collective online community and identity known as Black Twitter (Clark 2015).

      Highlighting some of the funnier trends and hashtags Black Twitter made viral, this article by The Guardian puts it best when it describes Black Twitter as "a particular collective of black identities and voices on Twitter taking part in collective, culturally specific jokes and dialogues that affect the community."

      The article adds, "It was one of the first spaces that white people could see how creative black people are with our discourse, and how we used a technology that wasn’t originally designed for us."

      Members of the Black community are using Twitter to reclaim their voices as active, valued members of society. While the satirical humor and jokes are enjoyable, the deeper meaning behind Black Twitter is the creation of conversation and the chance for Black community members to be rightfully represented socially and politically.

      https://www.theguardian.com/technology/2019/dec/23/ten-years-black-twitter-watchdog

    2. Even with an ever-changing rulebook, moderators are granted only the slimmest margins of error. The job resembles a high-stakes video game in which you start out with 100 points — a perfect accuracy score — and then scratch and claw to keep as many of those points as you can. Because once you fall below 95, your job is at risk.

      The image above shows the kinds of sentences Facebook content moderators would have to decide to remove or keep up. The image is sourced from an article by The Guardian, which reported that "Leaked policies guiding moderators on what content to allow are likely to fuel debate about social media giant’s ethics."

      As a social media user, I would consider all of these sentences violent and unpleasant to see on Facebook. This goes to show that a moderator's job is open to interpretation yet leaves no room for error.
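
      To make the "slimmest margins of error" concrete, here is a minimal sketch of the accuracy math. Only the 95-point floor comes from the article; the audit size of 50 decisions per week is a hypothetical figure chosen for illustration. Under that assumption, just three disputed calls in a week would put a moderator's job at risk.

      ```python
      # Minimal sketch of the accuracy-score math described in the article.
      # Only the 95-point floor comes from the article; the audit size of
      # 50 decisions per week is an assumed, illustrative number.

      ACCURACY_FLOOR = 95.0  # falling below this puts the job at risk


      def accuracy_score(audited_decisions: int, errors: int) -> float:
          """Accuracy score out of 100 after a given number of errors."""
          return 100.0 * (audited_decisions - errors) / audited_decisions


      for errors in range(5):
          score = accuracy_score(audited_decisions=50, errors=errors)
          status = "at risk" if score < ACCURACY_FLOOR else "ok"
          print(f"{errors} errors -> {score:.1f} points ({status})")
      ```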

      An interpretation that really resonates with the difficulty of judging violent language reads,

      "...Violent language is most often not credible until [it] gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design. From this perspective language such as ‘I’m going to kill you’ or ‘Fuck off and die’ is not credible and is a violent expression of dislike and frustration.”

      https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sex-terrorism-violence

    3. Like Facebook itself, Workplace has an algorithmic News Feed that displays posts based on engagement. During a breaking news event, such as a mass shooting, managers will often post conflicting information about how to moderate individual pieces of content, which then appear out of chronological order on Workplace. Six current and former employees told me that they had made moderation mistakes based on seeing an outdated post at the top of their feed. At times, it feels as if Facebook’s own product is working against them. The irony is not lost on the moderators.

      When Twitter first came out in 2006, Tweets were displayed in reverse chronological order, showing only Tweets from the people you follow. More recently, Twitter switched to an algorithmic timeline. This timeline includes features like "While You Were Away" and "In Case You Missed It," which curate Tweets Twitter believes you could find valuable. Users then run into the same problem as this moderator: scrolling your feed, you can come across posts from two minutes ago, five hours ago, or days ago right next to each other.

      https://sproutsocial.com/insights/twitter-algorithm/

      The image above is a visual demonstration of what this new algorithm is doing:

      • The red "Tweets" represent the normal reverse-chronological news feed.
      • The white "Tweet" is dropped into the mix of your feed to grab your attention.

      Users are tricked into believing that what they are seeing is recent because of the surrounding Tweets, but this isn't actually the case. Dropping in Tweets from different times can cause users to react irrationally to a topic that happened hours, days, or even weeks ago.

      This relates to the frustration Facebook content moderators have with algorithms. In the moderators' case, they can make mistakes based on seeing an outdated post at the top of their feed, making it seem as though their own employer is working against them.
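
      As a rough illustration of why the ordering matters, here is a minimal sketch contrasting a reverse-chronological feed with an engagement-ranked one. The posts and engagement counts are invented for illustration; this is not how Facebook's or Twitter's ranking actually works, only a simplified stand-in for the idea.

      ```python
      # Minimal sketch: reverse-chronological ordering vs. engagement ranking.
      # The posts and engagement counts below are invented for illustration and
      # are not real ranking signals from Facebook's Workplace or Twitter.

      posts = [
          {"text": "Guidance update #3 (newest)", "hours_ago": 0.1, "engagement": 2},
          {"text": "Guidance update #2",          "hours_ago": 3.0, "engagement": 40},
          {"text": "Guidance update #1 (oldest)", "hours_ago": 26.0, "engagement": 90},
      ]

      # Reverse-chronological feed: newest post first, regardless of engagement.
      chronological = sorted(posts, key=lambda p: p["hours_ago"])

      # Engagement-ranked feed: most-engaged post first, regardless of recency,
      # which is how a day-old instruction can sit at the top of the feed.
      ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

      print("Chronological:", [p["text"] for p in chronological])
      print("Engagement-ranked:", [p["text"] for p in ranked])
      ```

      In the engagement-ranked output, the day-old "Guidance update #1" lands on top, which mirrors how a moderator could act on stale instructions that the feed keeps surfacing.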

    4. Collectively, the employees described a workplace that is perpetually teetering on the brink of chaos. It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place where employees can be fired for making just a few errors a week — and where those who remain live in fear of the former colleagues who return seeking vengeance.

      Upon further research, I discovered that Cognizant has left the content moderation business. The article (linked below) reports that "The professional services firm Cognizant will exit the content moderation business." After The Verge ran two follow-up investigations into the well-being of Cognizant employees, the company ended its partnership with Facebook when its 2020 contract was up.

      I don't want to believe that the founders and higher-ups at Cognizant are all corrupt people, but brushing these accusations under the rug isn't making the company look professional or respectable.

      What caused the workplace environment to get this out of hand? Given that Cognizant was contracted by a company as massive as Facebook, I would think Facebook would want nothing but the best for the people moderating its platform.

      Why are content moderators treated so poorly when they are essential to keeping Facebook looking like the family-friendly, connective social platform it appears to be?

      Read more about the investigation and Cognizant's exit at the link below:

      https://www.theverge.com/2019/10/30/20940956/cognizant-facebook-content-moderation-exit-business-conditions-investigation

    5. 15,000 content reviewers around the world

      "Google and Facebook Have Failed Us" by Alexis Madrigal shares her insights on how social media giants have failed at regulating their content, diminishing disinformation, and regulating what people chose to share. Madrigal claims, "More humans must be added to the decision-making process, and sooner the better" referring to the process of "dealing with rare, breaking news events" (Madrigal 2017).

      Comparing the size of Facebook, its 15,000 employed content moderators, the mass of posts shared every minute, and the moderators' comments in this article, it seems safe to claim that more moderators are needed to regulate Facebook and its users' posts.

      Given Madrigal's suggestion to add more people to content moderation, I am not sure how any social media company approaches this problem when articles like this one, ongoing investigations, and breaches of NDAs make the job look nothing but awful. Especially after reading this article, I have a hard time brainstorming recruitment tactics Facebook could use to hire new moderators.

      I wonder what Madrigal had in mind when making that statement, or whether it was a somewhat empty hope.

    6. She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people.

      Facebook has clear, descriptive lists of its Community Standards and Guidelines online. Section 13 falls under the Objectionable Content guideline, which limits users' posts related to hate speech, violent and graphic content, sexual solicitation, and more. While these rules are in place, users still turn to violent words, hurtful comments, and explicit images and videos. The link below leads to Facebook's official guidelines.

      https://www.facebook.com/communitystandards/introduction

    7. social network

      Nancy Baym, the author of "Communities and Networks," defines a social network site as any site that allows users to construct a public or semi-public profile within a bounded system, articulate a list of other users with whom they share a connection, and view and traverse their list of connections and those made by others within the system (Baym 2010).

      According to Baym's definition, Facebook qualifies as a social network because it is a site that makes it easy to connect and communicate with friends and family via a personal profile.

      Recently, though, the bounds of what makes a site a social network have expanded beyond the constraints of Baym's definition. As a social media user myself, I have witnessed social media shift from being a hub of gentle collaboration, appreciation of others, and genuine connection to a place that allows the spread of disinformation and invites you to compare and compete over the value of your life with strangers.

      While I know this isn't all that social networks are capable of providing their users, I think the opportunity to engage in these negative behaviors is becoming too easy.