24 Matching Annotations
  1. social-media-ethics-automation.github.io
    1. Devin Coldewey. Study finds Reddit's controversial ban of its most toxic subreddits actually worked. TechCrunch, September 2017. URL: https://techcrunch.com/2017/09/11/study-finds-reddits-controversial-ban-of-its-most-toxic-subreddits-actually-worked/ (visited on 2023-12-08).

      The Coldewey TechCrunch article on Reddit's subreddit bans offers a lot of insight into the moderation tradeoffs raised in this chapter. The Georgia Tech study it covers found that banning subreddits for hate speech actually reduced such activity on Reddit: users who stayed after the ban greatly decreased their use of hate speech, and users who migrated to other subreddits did not increase theirs. What I think the chapter understates, however, is the limits of the policy itself. Banning the hate subreddits may have 'solved' the issue on Reddit, but many of the banned users simply migrated to sites such as Voat and Gab, which openly encourage that kind of content.

    1. Without quality control moderation, the social media site will likely fill up with content that the target users of the site don’t want, and those users will leave. What content is considered “quality” content will vary by site, with 4chan considering a lot of offensive and trolling content to be “quality” but still banning spam (because it would make the site repetitive in a boring way), while most sites would ban some offensive content.

      The point that 'quality' depends on a platform's target audience is a perspective I need to keep in mind when thinking about moderation in the future. Before reading this, I assumed content moderation was an absolute necessity for every platform, since certain content could simply be classified as 'bad'. The 4chan example, however, shows that moderation can take a very different shape depending on how a site defines quality. That raises the question of whether content moderation is truly an ethical practice or simply another aspect of serving one's own target market.

  2. social-media-ethics-automation.github.io
    1. Robinson Meyer. Everything We Know About Facebook’s Secret Mood-Manipulation Experiment. The Atlantic, June 2014. URL: https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/ (visited on 2023-12-08).

      The Meyer Atlantic article about Facebook's secret mood experiment is genuinely disturbing from the perspective of this chapter. In 2012, the company modified the news feeds of almost 700,000 users, exposing some to more positive posts and others to more negative ones, and found that this exposure affected how positively or negatively those users then posted themselves. What makes the experiment even more troubling is the paradox: the chapter explains how social media can improve mental well-being, yet here the platform deliberately made some users feel worse without their awareness or consent. When asked about the significance of the findings, the lead author pointed to the minimal effect size in real-life terms. However, as the dean of one school of social work noted, emotionally vulnerable users could have been pushed toward depression or anxiety by the manipulated feed, and no one would ever know.

    1. One of the ways social media can be beneficial to mental health is in finding community (at least if it is a healthy one, and not toxic like in the last section). For example, if you are bullied at school (and by classmates on some social media platform), you might find a different online community online that supports you. Or take the example of Professor Casey Fiesler finding a community that shared her interests (see also her article [m26]):

      This one hits close to home for me. I think the difference between seeking out a community and simply being handed one by a platform is often overlooked. The Fiesler example is particularly relevant because it shows she was not just scrolling; she went looking for her own community. That element of agency seems crucial to whether being part of an online community actually benefits mental health. Could it be that being funneled into communities by algorithmic feeds like TikTok or Instagram Reels is less effective for this purpose than old-school forums or niche spaces people find on their own?

  3. social-media-ethics-automation.github.io
    1. Lauren Goode. I Called Off My Wedding. The Internet Will Never Forget. Wired, 2021. URL: https://www.wired.com/story/weddings-social-media-apps-photos-memories-miscarriage-problem/ (visited on 2023-12-07).

      Lauren Goode's Wired article exemplifies how recommendation algorithms are oblivious to the complexity of real life. The author had ended her engagement and called off her wedding, yet the apps kept surfacing wedding-related ads and anniversary reminders. To me, the most striking part is that this is an extreme case, but one that exposes a problem that cannot easily be overlooked: algorithms are trained to see patterns, not individuals. It makes me wonder whether it would make sense for social networks to let people opt out of specific life events or topics entirely.

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

      This explanation of recommendation algorithms really shows how much decision-making takes place behind the scenes every time we use an app. What strikes me is that the word 'recommendation' sounds a little too neutral, as if the system is doing us a favor. In practice, recommendation algorithms are usually optimizing some metric, such as engagement, that does not necessarily correspond to users' needs and interests, as the toy sketch below illustrates. It would be interesting to know whether platforms have any obligation to disclose the criteria behind these algorithms.
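
      To make the quoted definition concrete, here is a minimal sketch of a scoring-style recommendation algorithm in Python. The post fields, the weights, and the engagement-plus-freshness score are all invented for illustration; they are not the method of any real platform.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          post_id: str
          likes: int
          shares: int
          age_hours: float
          matches_user_interests: bool  # hypothetical signal from a user profile

      def score(post: Post) -> float:
          """A toy 'series of steps or rules' that turns a post into one number."""
          engagement = post.likes + 3 * post.shares      # weight shares more heavily
          freshness = 1 / (1 + post.age_hours)           # newer posts score higher
          interest_bonus = 10 if post.matches_user_interests else 0
          return engagement * freshness + interest_bonus

      def recommend(posts: list[Post], n: int = 3) -> list[Post]:
          """Show the user the n highest-scoring posts."""
          return sorted(posts, key=score, reverse=True)[:n]

      feed = [
          Post("a", likes=120, shares=5, age_hours=2.0, matches_user_interests=False),
          Post("b", likes=30, shares=1, age_hours=0.5, matches_user_interests=True),
          Post("c", likes=500, shares=40, age_hours=48.0, matches_user_interests=False),
      ]
      for p in recommend(feed):
          print(p.post_id, round(score(p), 2))
      ```

      Note that nothing in this sketch ever asks whether the user benefits from seeing the highest-scoring posts; it only maximizes the chosen metric, which is exactly the gap the annotation above points at.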

  4. social-media-ethics-automation.github.io
    1. Social model of disability. November 2023. Page Version ID: 1184222120. URL: https://en.wikipedia.org/w/index.php?title=Social_model_of_disability&oldid=1184222120#Social_construction_of_disability (visited on 2023-12-07).

      The social model of disability, referenced in this chapter, emphasizes the difference between an individual's impairment and the accessibility barriers imposed by society. Instead of treating disability as a flaw or deficiency, the model holds that people are disabled by social and physical barriers rather than by their impairments; in other words, people are disabled by society, not by their bodies. I find this an interesting approach, especially with respect to social media, which can either create or remove such barriers.

    1. Those with disabilities often find ways to cope with their disability, that is, find ways to work around difficulties they encounter and seek out places and strategies that work for them (whether realizing they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking and may take a mental or physical toll on the person masking, which others around them won’t realize.

      Masking is another term that highlights the harsh reality that we live in an environment largely unwilling to accommodate its inhabitants. If someone feels they need to disguise themselves just to get through the day, that says more about society than about the individual with the disability. The phrase "mental or physical toll" stands out to me because society admires people who "push through" but rarely asks why they have to push through at all.

    1. Lyra Hale. New Book Says Facebook Employees Abused Access to Track and Stalk Women. The Mary Sue, July 2021. URL: https://www.themarysue.com/facebook-employees-abused-access-target-women/ (visited on 2023-12-06).

      The Mary Sue story about Facebook employees abusing their access to stalk women is horrifying, and it connects directly to the chapter's point that our 'private' information is not really private. The most disturbing aspect is that this was not a breach or a hack but an internal matter: insiders used legitimate access to spy on women for personal reasons. It makes me wonder whether the greatest privacy risk we face is not big companies and governments as institutions, but the individuals working inside them.

  5. social-media-ethics-automation.github.io
    1. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      The example of sometimes wanting a company to be able to access our private messages, such as when someone sends us death threats, complicates the issue of privacy. Privacy is usually framed as a right that only needs protecting. This example made me realize that there are times when giving up a little privacy actually works in our favor. Where do we draw the line on when accessing private messages is acceptable?

  6. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      The Vox article by Kurt Wagner highlights that Facebook builds 'shadow profiles' on people who have never signed up for a Facebook account. It tracks them using tactics like tracking pixels embedded on third-party sites and contact lists that existing users upload to Facebook; the sketch below illustrates how the pixel tactic works.
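
      To make the tracking-pixel tactic concrete, here is a minimal sketch of the server side of a third-party tracker, written with Flask. The route name, the log fields, and the cookie name are my own illustration of the general mechanism, not Facebook's actual implementation.

      ```python
      import io
      from flask import Flask, request, send_file

      app = Flask(__name__)

      # A minimal 1x1 transparent GIF; any website can embed it as <img src="...">.
      PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
               b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
               b"\x00\x02\x02D\x01\x00;")

      @app.route("/pixel.gif")
      def pixel():
          # Every page that embeds this image makes the visitor's browser request it,
          # revealing information about the visit even if they have no account.
          visit = {
              "ip_address": request.remote_addr,
              "page_visited": request.headers.get("Referer"),
              "browser": request.headers.get("User-Agent"),
              "tracking_cookie": request.cookies.get("tracker_id"),
          }
          print(visit)  # a real tracker would store this and link visits over time
          return send_file(io.BytesIO(PIXEL), mimetype="image/gif")

      if __name__ == "__main__":
          app.run(port=8000)
      ```

      The key point is that the data collection happens on ordinary third-party pages: the embedding site adds one image tag, and the tracker's server quietly receives the visitor's IP address, the page they were reading, and any cookie it previously set.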

    1. Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence [h10].

      This is one of the most disturbing aspects discussed in the chapter. The revival of discredited pseudoscience like physiognomy under the guise of 'AI' illustrates how easily a new technology can be used as a vehicle for laundering dubious ideas. Wrapping phrenology in algorithms does not make it scientific; it only makes it harder for the ordinary citizen to object, since there is now an algorithm backing it. If anything, AI-powered physiognomy is more damaging than its original form, because it can be scaled and embedded in applications such as hiring assessments or security screening in ways 19th-century phrenologists could never have imagined.

  7. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      In her Scientific American piece (an excerpt from her book), Whitney Phillips argues that trolling did not appear out of nowhere; it mirrors tactics already used by mainstream media organizations. By watching what the media industry does to grab attention, trolls learned how to provoke people on the internet.

    1. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse”

      I agree with Film Crit Hulk completely on this. "Don't feed the trolls" has always sounded like telling the victim to take the beating rather than doing anything to stop the attacker in the first place. The logic is inverted: platforms have both the power and the means to ban abusive users, but often choose not to because engagement is profitable and moderation is expensive.

  8. social-media-ethics-automation.github.io
    1. Peter Aldhous. At First It Looked Like A Scientist Died From COVID. Then People Started Taking Her Story Apart. BuzzFeed News, August 2020. URL: https://www.buzzfeednews.com/article/peteraldhous/bethann-mclaughlin-twitter-suspension-fake-covid-death (visited on 2023-12-07).

      This article about a fabricated COVID death is an example of how fast misinformation can spread on social media when a story cannot be verified. What I find concerning is that the false story gained a lot of traction before anybody did their due diligence, showing that even when the truth comes out, the damage may already be done.

  9. social-media-ethics-automation.github.io
    1. Catfishing: Create a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone

      To me, it is very interesting to see how the phenomenon of catfishing has developed in the digital age. It seemed far less common in the past, but with the rise of social media, and now generative AI, we are seeing how harmful it can become. I believe platforms do not do enough to prevent catfishing, and innocent people get hurt believing the person they are interacting with is real. Because this creates real safety issues for users, social media platforms would benefit from adding stronger catfishing-prevention features.

  10. social-media-ethics-automation.github.io
    1. Comedy Central. Drunk History - John Adams and Thomas Jefferson Had Beef. February 2018. URL: https://www.youtube.com/watch?v=l6Ove4_JsCM (visited on 2023-11-24).

      Including the Drunk History segment on John Adams and Thomas Jefferson in a lesson about social media history is quite fascinating. It is a reminder that political antagonism and the use of media as a tool of the trade did not begin with modern-day social media sites; Adams and Jefferson used newspapers and pamphlets for their own political gain as well.

    1. One of the early ways of social communication across the internet was with Email [e5], which originated in the 1960s and 1970s. These allowed people to send messages to each other, and look up if any new messages had been sent to them.

      This passage shows how email marked one of the most significant moments in the evolution of communication: being able to move messages digitally made staying in touch far more convenient. It also reminded me that email was not always a public tool; in the 1960s and 1970s it was used mainly by researchers on networks funded by the U.S. military.

  11. social-media-ethics-automation.github.io
    1. Julia Evans. Examples of floating point problems. January 2023. URL: https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/ (visited on 2023-11-24).

      Reading the Julia Evans article, what stuck out to me was the odometer example: a program that was supposed to track about 10,000 km but only registered around 262 km because floating-point errors kept accumulating over time. I used to think of floating point as a very technical concern, but this example makes it feel much more concrete, and it makes me think twice about how much we should trust computers when it comes to accuracy. The small demo below shows the kind of rounding behavior involved.
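
      To see the kind of behavior behind that example, here is a small, self-contained demonstration of how floating-point numbers silently lose small additions. The specific values (262144 and an increment of 0.01) are my own toy numbers chosen to trigger the effect in 32-bit floats, not the exact figures from the article.

      ```python
      import numpy as np

      # Classic rounding surprise: 0.1 and 0.2 have no exact binary representation.
      print(0.1 + 0.2 == 0.3)   # False
      print(0.1 + 0.2)          # 0.30000000000000004

      # Accumulation problem: near 262144 (2**18), consecutive float32 values are
      # 0.03125 apart, so adding a small increment like 0.01 rounds away to nothing.
      total = np.float32(262144.0)
      print(total + np.float32(0.01) == total)   # True: the addition is lost

      # A running total therefore stalls: add 0.01 a thousand more times, gain nothing.
      for _ in range(1000):
          total = np.float32(total + np.float32(0.01))
      print(total)              # still 262144.0
      ```

      This is why long-running sums of small increments (distances, currency, sensor readings) are risky in low-precision floats: once the running total is large enough, each new increment is smaller than the gap between representable numbers and simply disappears.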

    1. Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata).

      Learning about metadata made me realize how much information about me goes out without my knowing it. Every photo I take carries a location and a timestamp, and most apps track when and how they are being used. It bothers me that so much of my information and daily schedule flows to these companies, which can potentially sell it to other groups or organizations. The short sketch below shows how easy that embedded photo metadata is to read.
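
      As a concrete illustration of how much metadata travels inside an ordinary photo, here is a short sketch that reads EXIF tags with the Pillow library. The filename 'photo.jpg' is a placeholder; which tags actually appear depends on the device and app that produced the file.

      ```python
      from PIL import Image, ExifTags

      # Open a local photo and read its EXIF metadata (the data *about* the image).
      img = Image.open("photo.jpg")   # placeholder path
      exif = img.getexif()

      # Map numeric tag IDs to human-readable names and print each entry.
      for tag_id, value in exif.items():
          name = ExifTags.TAGS.get(tag_id, tag_id)
          print(f"{name}: {value}")

      # Typical entries include DateTime (when the photo was taken) and Make/Model
      # (the device). GPS coordinates, when present, live in a sub-directory that
      # can be read with exif.get_ifd(0x8825).
      ```

      Running this on a phone photo taken with location services enabled will usually reveal exactly when and where it was taken, which is the kind of quiet disclosure the annotation above worries about.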

  12. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      The Huck Magazine article on click farms adds an observation I did not expect: it isn't just bots running automated scripts on servers, it is actual humans manually liking and following posts at scale. That blurs the distinction between "bot" and "human" in a way I think this chapter doesn't fully address, since the real issue is the artificial manipulation of metrics, regardless of who performs it.

    1. [Morten] Bay found that 50.9% of people tweeting negatively about “The Last Jedi” were “politically motivated or not even human,” with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place.

      The statistic that over half of the negative tweets about The Last Jedi came from bots or politically motivated accounts makes me rethink how I interpret online backlash. Combined with the 21.9% figure, it means only roughly 0.219 × 0.491 ≈ 11% of all the analyzed tweets were genuine negativity from ordinary users. Whenever I see backlash online, I assume it is group psychology at work, but this suggests a lot of it really comes down to bots and coordinated accounts. It makes me wonder how often public opinion is being shaped without anyone noticing.

    1. There are absolute moral rules and duties to follow (regardless of the consequences). They can be deduced by reasoning about the objective reality.

      The idea of deontology seems difficult to apply in real life because situations vary, and a strict set of rules does not always fit. For example, on social media, people may feel obligated to always tell the truth; following that rule rigidly could spread negativity and cause harm, which suggests that consequences may need to be weighed alongside moral rules.

    2. Actions are judged on the sum total of their consequences (utility calculus). The ends justify the means.

      This section on consequentialism is interesting to me because it judges actions by the sum total of their consequences, a kind of cumulative accounting. Personally, I don't fully agree with the idea: consequentialism can be problematic because it can justify harming a small group as long as a larger group benefits. For example, using social media algorithms to maximize engagement could end up promoting harmful content for the sake of exposure; the toy calculation below illustrates the worry.
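
      As a toy illustration of the "utility calculus" and the objection above, here is a small sketch that sums invented utility numbers across groups for two hypothetical platform decisions. Every group size and per-person value is made up purely for illustration.

      ```python
      # Toy utility calculus: total utility = sum over groups of (group size * utility per person).
      # All group sizes and per-person utilities below are invented for illustration.

      def total_utility(consequences):
          return sum(size * utility for size, utility in consequences.values())

      actions = {
          "boost_outrage_content": {
              "majority_of_users": (1_000_000, 2),    # mildly more engaging feed for many
              "targeted_minority": (5_000, -100),     # severe, concentrated harm for a few
          },
          "boost_neutral_content": {
              "majority_of_users": (1_000_000, 1),    # slightly less engaging feed
              "targeted_minority": (5_000, 0),        # no harm
          },
      }

      for name, consequences in actions.items():
          print(name, total_utility(consequences))
      # boost_outrage_content: 2_000_000 - 500_000 = 1_500_000
      # boost_neutral_content: 1_000_000 + 0       = 1_000_000
      ```

      A strict sum ranks the outrage-boosting action higher even though it concentrates serious harm on a small group, which is exactly the objection the annotation raises; whether that ranking is acceptable is the core dispute between the consequentialist and deontological framings in this chapter.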