12 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Is It Funny or Offensive? Comedian Impersonates FBI on Twitter, Makes MLK Assassination Joke. January 2020. URL: https://isitfunnyoroffensive.com/comedian-impersonates-fbi-on-twitter-makes-mlk-assassination-joke/ (visited on 2023-12-05).

      This article describes how a comedian pretended to be the official FBI account on Twitter and posted a joke about Martin Luther King Jr.'s assassination. While the intention may have been humor, the impersonation clearly crossed ethical boundaries by exploiting both historical trauma and public trust. I think this example connects directly to Chapter 7's discussion of trolling and "bad faith". The comedian's defense that it was "just a joke" mirrors the Schrödinger's asshole concept, using humor as a shield to deny accountability when the audience reacts negatively. It raises an important question: when does satire stop being social commentary and start becoming harm disguised as entertainment?

    1. One set of the early Internet-based video games were Multi-User Dungeons (MUDs [g14]), where you were given a text description of where you were and could say where to go (North, South, East, West) and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS [g15]). In these MUDs, players developed activities that we now consider trolling, such as “Griefing” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming” where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling such as 4chan [g16] (2003), Encyclopedia Dramatica [g17] (2004), and some forums on Reddit [g18] (2005).

      I find it fascinating that trolling began as a kind of "inside joke" among early Internet users. What started as playful testing of newcomers' knowledge later evolved into behaviors that intentionally cause harm or humiliation. Reading about the shift from lighthearted pranks to toxic online cultures like 4chan made me realize how easily community norms can slide when cruelty becomes entertainment. It also connects to Sartre's idea of "bad faith" mentioned earlier: the troll's refusal to take words seriously allows them to avoid responsibility. It makes me wonder whether trolling is less about anonymity and more about disengagement from empathy.

  3. social-media-ethics-automation.github.io
    1. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      This article analyzes how tweets posted from Donald Trump's Android device were significantly more negative and aggressive than those from his campaign's iPhone, which were more neutral and polished. The researchers used sentiment analysis to identify emotional differences between the two sets of tweets. I found this especially relevant to the discussion of inauthenticity in Chapter 6, because it shows that even when a social media account seems to belong to one person, multiple voices or personas may actually be behind it. This raises important ethical questions about authenticity and accountability online.
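      To see how this kind of comparison might work, here is a minimal Python sketch of the idea, assuming NLTK's VADER sentiment scorer (the original analysis was done in R with different tools, and the example tweets below are invented placeholders, not real data):

      ```python
      import nltk
      from nltk.sentiment import SentimentIntensityAnalyzer

      nltk.download("vader_lexicon", quiet=True)  # VADER's word-sentiment lexicon
      sia = SentimentIntensityAnalyzer()

      # Invented placeholder tweets, standing in for the two devices' posts.
      android_tweets = ["Totally dishonest and unfair coverage. Sad!"]
      iphone_tweets = ["Thank you for your support, Iowa! See you soon."]

      def mean_sentiment(tweets):
          """Average VADER compound score, from -1 (negative) to +1 (positive)."""
          return sum(sia.polarity_scores(t)["compound"] for t in tweets) / len(tweets)

      print("Android:", mean_sentiment(android_tweets))
      print("iPhone: ", mean_sentiment(iphone_tweets))
      ```

      Even this toy version shows the method's logic: score each set of posts separately, then compare the averages to see whether two "voices" are hiding behind one account.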

  4. social-media-ethics-automation.github.io
    1. Many people were upset at being deceived, and at the many levels of inauthenticity of Dr. McLaughlin’s actions, such as: Dr. McLaughlin pretended to be a person (@Sciencing_Bi) who didn’t exist. Dr. McLaughlin, as a white woman, created an account where she pretended to be a Native American (see more on “pretendians” [f11]). Dr. McLaughlin put herself at the center of the MeToo movement as it related to STEM, but then Dr. McLaughlin turned out to be a bully herself. Dr. McLaughlin used the fake @Sciencing_Bi to shield herself from criticism. From the NYTimes article [f10]: “‘The fact that @Sci-Bi was saying all these things about BethAnn, saying that BethAnn had helped her, it didn’t make me trust BethAnn — but it made me less willing to publicly criticize her because I thought that public criticism would be felt by the people she was helping,’ he said. ‘Who turned out to be fake.’” Though Dr. McLaughlin claimed a personal experience as a witness in a Title IX sexual harassment case, through the fake @Sciencing_Bi, she invented an experience of sexual harassment from a Harvard professor. This professor was being accused of sexual harassment by multiple real women, and these real women were very upset to find out that @Sciencing_Bi, who was trying to join them, was not a real person. Dr. McLaughlin, through the @Sciencing_Bi account, pretended to have an illness she didn’t have (COVID). She made false accusations against Arizona State University’s role in the (fake) person getting sick, and she was able to get attention and sympathy through the fake illness and fake death of the fake @Sciencing_Bi.

      This story about Dr. McLaughlin and the fake @Sciencing_Bi account really shocked me. It made me realize how easily emotional trust can be built, and then betrayed, on social media. As someone who often reads personal stories online, I had never thought deeply about whether those stories could be fabricated. What feels most unethical here is not only the lying itself, but how it exploited people's empathy for marginalized identities and real victims of harassment. It's frightening that authenticity online now requires skepticism, and that our compassion can be used as a tool for manipulation.

  5. Oct 2025
  6. social-media-ethics-automation.github.io
    1. Social networking service. November 2023. Page Version ID: 1186603996. URL: https://en.wikipedia.org/w/index.php?title=Social_networking_service&oldid=1186603996#History (visited on 2023-11-24).

      The Wikipedia article on Social networking services traces how early online communities evolved from simple bulletin boards to complex social ecosystems. What stood out to me was how the article highlights the shift from chronological, user-driven spaces to algorithmic, engagement-driven platforms. This connects directly to the discussion in Section 5.6 about "friction" and "affordances"--as platforms became more automated, they started shaping our behavior through invisible design choices. Reading this made me realize that social media's design evolution isn't just technical progress; it's a history of how digital architectures have gradually gained power over attention and emotion.
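      To make that shift concrete for myself, I sketched the two orderings in Python; the posts and the engagement formula are invented for illustration, not any platform's real ranking:

      ```python
      # Invented posts: "hours_old" is time since posting (smaller = newer).
      posts = [
          {"text": "Lunch photo", "hours_old": 1, "likes": 2, "comments": 0},
          {"text": "Outrage-bait hot take", "hours_old": 9, "likes": 95, "comments": 40},
          {"text": "Club meeting announcement", "hours_old": 5, "likes": 10, "comments": 1},
      ]

      # Chronological, user-driven feed: newest first, no platform judgment.
      chronological = sorted(posts, key=lambda p: p["hours_old"])

      # Algorithmic, engagement-driven feed: an (invented) score decides what you see.
      engagement_ranked = sorted(posts, key=lambda p: p["likes"] + 2 * p["comments"], reverse=True)

      print([p["text"] for p in chronological])
      print([p["text"] for p in engagement_ranked])
      ```

      The same three posts produce two very different feeds, which is exactly the "invisible design choice" the annotation above is about.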

    1. Friction [e30] is anything that gets in the way of a user performing an action. For example, if you have to open and navigate through several menus to find the privacy settings, that is significant friction. Or if one of the buttons has a bug and doesn’t work when you press it, so you have to find another way of performing that action, which is significant friction. Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press incredibly small and hard to press to make it harder to leave their ad: Fig. 5.6 An ad on a mobile device, which has an incredibly small, hard to press “x” button. You need to press that button to close the ad. If you miss the “x”, it takes you to more advertising.

      I think the example of "friction" is fascinating because it shows how small design choices can completely change user behavior. The Twitter pop-up that asks users to read the article before retweeting might seem minor, but it introduces a brief moment of reflection that slows down impulsive sharing. As someone who often scrolls through social media quickly, I realized how little time I spend verifying what I see online. Adding friction can be an ethical design choice--not to frustrate users, but to protect them from misinformation and emotional manipulation. It makes me wonder whether platforms should use more friction, not less, in areas related to mental health or political content.
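      As a thought experiment, here is a toy Python sketch of that kind of intentional friction, loosely modeled on the read-before-retweeting prompt; the function and flow are my own invention, not Twitter's actual code:

      ```python
      def confirm_retweet(post, user_opened_link):
          """Return True if the retweet should go through, False if the user backs out."""
          if post.get("has_link") and not user_opened_link:
              # One extra step -- a small, deliberate bit of friction.
              answer = input("You haven't opened this article. Retweet anyway? (y/n) ")
              return answer.strip().lower() == "y"
          return True

      confirm_retweet({"has_link": True}, user_opened_link=False)
      ```

      A single extra prompt doesn't forbid anything; it just makes the impulsive path slightly slower than the reflective one.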

    1. Thus, when designers of social media systems make decisions about how data will be saved and what constraints will be put on the data, they are making decisions about who will get a better experience. Based on these decisions, some people will fit naturally into the data system, while others will have to put in extra work to make themselves fit, and others will have to modify themselves or misrepresent themselves to fit into the system.

      I found this section particularly thought-provoking because it shows how neutral design decisions can quietly define who belongs in a system. As someone who has filled out many online forms as an international student, I've often experienced exactly what this paragraph describes--forms that assume every user lives in the U.S. or has a "first" and "last" name that fits English conventions. It reminds me that "fitting into the data" isn't just about usability but also about representation and identity. The example of address fields illustrates how technical defaults can privilege one group's reality while making others invisible. It makes me wonder how many times I've unconsciously adapted myself to technology, rather than technology adapting to me.
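      A minimal sketch of that design decision, with invented field names, might look like this in Python -- one schema quietly forces users into English naming and U.S. address conventions, the other leaves room for difference:

      ```python
      # A schema that assumes English naming conventions and a U.S. address.
      rigid_form = {
          "first_name": {"required": True},
          "last_name":  {"required": True},        # assumes everyone has exactly two names
          "state":      {"choices": ["AL", "AK"]}, # assumes a U.S. address (list abbreviated)
      }

      # A schema that lets more people describe themselves as they actually are.
      flexible_form = {
          "full_name": {"required": True},   # one free-text field fits more naming traditions
          "region":    {"required": False},  # optional, not limited to U.S. states
      }
      ```

      Neither schema looks "political" on its own, which is the point: the constraints decide who fits naturally and who has to bend.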

  7. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Anna Lauren Hoffmann. Data Violence and How Bad Engineering Choices Can Damage Society. Medium, April 2018. URL: https://medium.com/@annaeveryday/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4 (visited on 2023-11-24).

      Hoffmann's concept of "data violence" resonates strongly with this chapter's discussion of how data practices encode social power. She argues that when engineers treat datasets as neutral, they overlook how design decisions--such as labeling conventions or exclusion of marginalized voices--can reproduce harm. I found this perspective particularly striking after reading the earlier part of Chapter 4 on data collection and bias: it reframes technical errors not just as mistakes, but as moral and structural failures. It makes me think that ethical data work requires both technical literacy and historical awareness of inequality.

  8. social-media-ethics-automation.github.io
    1. Steven Tweedie. This disturbing image of a Chinese worker with close to 100 iPhones reveals how App Store rankings can be manipulated. February 2015. URL: https://www.businessinsider.com/photo-shows-how-fake-app-store-rankings-are-made-2015-2 (visited on 2024-03-07).

      The reading lists Tweedie's (2015) article with that photo of a worker controlling close to 100 iPhones. That example really hit me -- it shows how rankings and popularity online can be totally fake. It connects to the chapter's point that bots or fake accounts don't just add numbers, they can change what apps or voices get noticed in the first place.

    1. There are several ways computer programs are involved with social media. One of them is a “bot,” a computer program that acts through a social media account. There are other ways of programming with social media that we won’t consider a bot (and we will cover these at various points as well): The social media platform itself is run with computer programs, such as recommendation algorithms (chapter 12). Various groups want to gather data from social media, such as advertisers and scientists. This data is gathered and analyzed with computer programs, which we will not consider bots, but will cover later, such as in Chapter 8: Data Mining. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans, but are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:

      Reading about how bots are often used to amplify certain voices on social media made me think about my own experience on Twitter/X. Sometimes I notice trending hashtags that feel "unnatural," almost as if too many accounts are repeating the same message. It makes me wonder whether genuine user interest is actually being represented, or if it's the result of coordinated bot activity. This connects to the ethical concern raised in the chapter about authenticity: if bots distort what looks like public opinion, should platforms be responsible for filtering them out, or should users just learn to be skeptical? Personally, I feel it undermines trust in social media when I can't tell if I'm interacting with a real person or an automated script.
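      To check my own understanding of what counts as a bot here, I sketched the bare minimum in Python; the endpoint and token below are hypothetical, since every real platform has its own API and rules:

      ```python
      import requests

      API_URL = "https://example-social-site.com/api/post"  # hypothetical endpoint
      TOKEN = "YOUR_ACCOUNT_TOKEN"                          # hypothetical account credential

      def bot_post(text):
          """Publish a post through the account, the same way a human user's click would."""
          response = requests.post(
              API_URL,
              headers={"Authorization": f"Bearer {TOKEN}"},
              json={"text": text},
          )
          response.raise_for_status()

      bot_post("Hello! This post was made by a program.")
      ```

      Seeing how few lines it takes made the chapter's worry feel very real: a loop around bot_post and a handful of accounts is all a fake "trend" requires.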

  9. Sep 2025
    1. 2.1.1. What is Social Media?

      I find it interesting that the definition of "social media" shifts depending on who is using it -- researchers, companies, and everyday users. From my own experience, I sometimes think of YouTube more as entertainment than "social media", which makes me question whether strict definitions are even possible.

    1. Consequentialism: Sources [b46] [b47]. Actions are judged on the sum total of their consequences (utility calculus). The ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.” That is, what is moral is to do what makes the most people the most happy. Key figures: Jeremy Bentham [b48], 1700’s England; John Stuart Mill [b49], 1800’s England.

      While utilitarianism highlights the importance of maximizing happiness, I wonder if this approach can justify sacrificing minority rights on social media platforms. For example, could a company claim it is ethical to collect sensitive data from a small group if the majority benefits?
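      To test my worry, I tried a toy utility calculus in Python with invented numbers; a strict utilitarian sum can come out positive even when a small group is clearly harmed:

      ```python
      # Invented numbers: 1,000 users each gain 2 units of utility from a new
      # data-driven feature; 50 users each lose 30 units of privacy to power it.
      majority_gain = 1000 * 2    #  2000
      minority_harm = 50 * -30    # -1500

      total_utility = majority_gain + minority_harm
      print(total_utility)  # 500: positive, so a strict utility calculus approves
      ```

      The arithmetic hides the distribution: the 50 people bearing all the harm vanish into a single positive number, which is exactly the objection I was raising.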