20 Matching Annotations
  1. May 2025
  2. social-media-ethics-automation.github.io
    1. Kurt Wagner. Inside Twitter’s ambitious plan to clean up its platform. Vox, March 2019. URL: https://www.vox.com/2019/3/8/18245536/exclusive-twitter-healthy-conversations-dunking-research-product-incentives (visited on 2023-12-07).

      Twitter is known for sharp, sarcastic posts called “dunks,” which get lots of likes and retweets but often make conversations meaner. CEO Jack Dorsey wants to make Twitter healthier by encouraging more positive, respectful interactions. He asked researchers to help develop ways to measure “conversation health,” but progress has been slow due to privacy issues and legal delays. While some research projects stalled, Twitter is also building its own tools to detect toxic behavior. However, it’s hard to judge what makes a conversation “healthy,” and actually changing user behavior is proving very difficult.

    1. One concern with recommendation algorithms is that they can create filter bubbles (or “epistemic bubbles” or “echo chambers” [k14]), where people get filtered into groups and the recommendation algorithm only gives people content that reinforces, and doesn’t challenge, their interests or beliefs. These echo chambers allow people in the groups to freely have conversations among themselves without external challenge.

      I think it is very concerning that algorithms can intentionally feed people certain information, especially when social media company owners are involved in large-scale politics. When someone is sent down a rabbit hole by the algorithm and only given information that reinforces a one-sided view, it can create a divide within communities online and in real life.
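The reinforcement loop described above can be sketched in a few lines of Python. This is a hypothetical toy model (the topics, weights, and engagement rule are all made up, not any platform's real recommender), but it shows how "recommend more of what was engaged with" narrows a feed over repeated rounds:

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

# Hypothetical topics; engagement counts all start at zero.
engagement = {"politics": 0, "sports": 0, "cooking": 0, "travel": 0}

def recommend(counts, n=10):
    """Pick n posts, weighting each topic by past engagement
    (plus 1 so unseen topics still have a small chance)."""
    topics = list(counts)
    weights = [counts[t] + 1 for t in topics]
    return random.choices(topics, weights=weights, k=n)

# Simulate 20 rounds where the user only ever engages with "politics".
for _ in range(20):
    for topic in recommend(engagement):
        if topic == "politics":
            engagement[topic] += 1

# By now the feed is dominated by the one reinforced topic.
final_feed = recommend(engagement, n=100)
politics_share = final_feed.count("politics") / len(final_feed)
```

Even in this crude sketch, the reinforced topic ends up taking over most of the feed: the user never sees much of anything else, which is the filter-bubble dynamic in miniature.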

  3. Apr 2025
  4. social-media-ethics-automation.github.io
    1. David Robson. The women with superhuman vision. BBC, February 2022. URL: https://www.bbc.com/future/article/20140905-the-women-with-super-human-vision (visited on 2023-12-07). [j4]

      Concetta Antico, a tetrachromat, can perceive far more color variations than most people due to a rare genetic condition that gives her a fourth type of cone cell in her eyes. Tetrachromacy is thought to occur only in some women and allows individuals to see subtle color differences invisible to others. Around 12% of women may carry the genetic potential, but few actually have enhanced perception without training. Antico’s artistic practice and visual sensitivity set her apart, offering scientists a unique window into this rare condition. Her work may help others develop greater color awareness, and she hopes to one day help both tetrachromats and the color blind appreciate the full spectrum of color in the world.

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between the designers who make things and the disabled people whom things are made for. And unfortunately, as researcher Dr. Cynthia Bennett [j21] points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:

      I think it is super important to have a thorough process for becoming a designer. User-friendly design is very valuable and can be overlooked without the proper knowledge and experience. I have experienced how rigorous the process for the design major at UW is because of all the factors that go into creating successful design.

    1. Jacob Kastrenakes. Facebook stored millions of Instagram passwords in plain text. The Verge, April 2019. URL: https://www.theverge.com/2019/4/18/18485599/facebook-instagram-passwords-plain-text-millions-users (visited on 2023-12-06).

      Facebook admitted that it stored millions of Instagram and Facebook users’ passwords in plain text, making them accessible to over 20,000 employees internally. Facebook said the issue affected only tens of thousands of Instagram users at first, but it later revealed it was actually millions. Facebook claims no passwords were misused or leaked externally and isn’t urging users to change their passwords. This news adds to Facebook’s series of recent security problems.
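For contrast with the plain-text storage described in the article, standard practice is to store only a salted hash of each password, so that even insiders with database access cannot read the passwords themselves. Below is a minimal sketch using only Python's standard library (illustrative only; production systems use vetted schemes such as bcrypt or Argon2):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password itself."""
    salt = os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash from the typed password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

With this design, an employee who reads the stored `salt` and `digest` still cannot recover the original password, which is exactly the protection the plain-text logs lacked.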

    1. Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example: If you send “private” messages on a work system, your boss might be able to read them [i19]. When Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages [i20].

      This makes me wonder how many loopholes there are with privacy rules and if these are ever exploited. I am curious how easy it is to get away with violating privacy on these types of platforms, and if it happens without us knowing. As an average citizen, it seems like having my private information would be practically useless for these companies, but it is still concerning that they may have access to it.

  5. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      In 2007, Whitney Phillips explored trolling after her brother introduced her to 4chan, a site known for offensive and disruptive behavior. She learned that trolling involves online users intentionally provoking others for amusement, often with shocking or absurd content. She focused on trolls who embrace this disruptive behavior, not general online aggression or bullying. Trolling became especially popular on U.S. forums, with trolls creating chaos and manipulating media for laughs. One famous example was the "Jenkem" hoax, in which trolls spread fake rumors about a drug made from sewage to confuse the media and create outrage, showing how easily news outlets can be tricked for amusement.

    1. Before the internet, there were many activities that we would probably now call “trolling”, such as: Hazing (causing difficulty or suffering for people who are new to a group); Satire (e.g., A Modest Proposal [g9]), which takes a known form but does something unexpected or disruptive with it; and practical jokes / pranks.

      It is interesting how far the internet has taken the concept of trolling and normalized it in our society on a greater scale. The idea of messing with people seems to have been part of the human experience for a long time, but the internet made it much easier to be far crueler to people without any repercussions. It has gotten to the point where people can get paid to troll, which is crazy to me, like prank channels and people using trolling to gain engagement online for more views.

  6. social-media-ethics-automation.github.io
    1. Zoe Schiffer. She created a fake Twitter persona — then she killed it with COVID-19. The Verge, September 2020. URL: https://www.theverge.com/21419820/fake-twitter-persona-covid-death-munchausen-metoostem-co-founder (visited on 2023-11-24).

      BethAnn McLaughlin created a fake Twitter account called @Sciencing_Bi. She pretended to be a Native American, queer professor who talked about harassment in science and strongly supported McLaughlin. When people began to criticize McLaughlin’s behavior, she used this fake account to protect herself. She later tweeted that @Sciencing_Bi died from COVID-19 after being forced to teach in person. One person got suspicious and found out the account was fake. Experts say this kind of behavior is called “Munchausen by internet,” where people lie online to get attention and sympathy. McLaughlin may have created the fake professor to feel more liked and supported, but eventually the lie went too far and was exposed.

  7. social-media-ethics-automation.github.io
    1. Catfishing: creating a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone. Sockpuppet (or a “burner” account): creating a fake profile in order to argue a position (sometimes intentionally argued poorly to make the position look bad).

      It is very interesting to me how the creation of the internet has made it so much more common for people to hide behind fake personas. This seemed to be far less common and more difficult in the past, but with the rise of social media, people feel much more comfortable saying and doing things with the protection of a screen shielding them from any real-world consequences.

  8. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      Facebook collects data on people even if they don’t have an account. It does this by tracking visits to websites that use Facebook tools and through users who upload their phone contacts. This means Facebook can have info like your name, email, or browsing history even if you’ve never signed up. They say this data is used for things like security and analytics, and not to make full "shadow profiles." There’s no real way to completely stop Facebook from collecting this kind of data.

      This is a common case of a technological advancement designed to assist human interaction ending up negatively affecting us. Social media algorithms are designed to show people exactly what they want to see online, but this can be dangerous because it is very addictive: the algorithm is built specifically to make you spend more time on the app. This reminds me of the idea that when something is free (like social media), you are the product. We are all being used for our attention spans to watch ads all day.

  9. social-media-ethics-automation.github.io
    1. Mark R. Cheathem. Conspiracy Theories Abounded in 19th-Century American Politics. URL: https://www.smithsonianmag.com/history/conspiracy-theories-abounded-19th-century-american-politics-180971940/ (visited on 2023-11-24).

      This article highlights how conspiracy theories have been deeply rooted in American political culture since the early republic, being used to strategically manipulate public opinions. It’s interesting how tactics we see in modern elections were already present in the Jacksonian era. It shows how political literacy and skepticism have always been essential when receiving information.

    1. Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts

      This part is super interesting to me because I've always associated social media with the internet. It shows how the purpose of social media as a means to share updates and information is something that has existed in different mediums for thousands of years. It makes me think it's in our nature to interact in this way; modern social media just takes this idea to a further extent.

    1. “Tsze-kung asked, saying, ‘Is there one word which may serve as a rule of practice for all one’s life?’ The Master said, ‘Is not reciprocity such a word? What you do not want done to yourself, do not do to others.’” Confucius, Analects 15.23 [b9] (~500 BCE China)

       “There is nothing dearer to man than himself; therefore, as it is the same thing that is dear to you and to others, hurt not others with what pains yourself.” Gautama Buddha, Udānavarga 5:18 [b10] (~500 BCE Nepal/India)

       “That which is hateful to you do not do to another; that is the entire Torah, and the rest is its interpretation.” Hillel the Elder, Talmud Shabbat, folio 33a [b11] (~0 CE Palestine)

       “So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.” Jesus of Nazareth, Matthew 7:12 [b12] (~30 CE Palestine)

      The Golden Rule is a powerful principle that appears across many different cultures and religions. It’s fascinating how this idea of treating others as we want to be treated seems to be a universal value. We all understand the importance of kindness, and this idea is practically ingrained into everyone’s minds. I find it interesting how, despite its simplicity, this rule can be difficult to practice consistently in real life.

    1. Deontology

      I wanted to add a bit more info that expands on what deontology can include beyond just Kant’s ideas. Another important figure is W.D. Ross, who proposed a version of deontology based on "prima facie duties." These are moral obligations like keeping promises, preventing harm, and being just. These are duties we generally recognize as important, but they can sometimes conflict with each other. Unlike Kant, Ross didn’t think there was always one absolute rule to follow. Instead, he believed that in complex situations, we have to weigh our different duties and decide which one is most important in that context.

  10. social-media-ethics-automation.github.io
    1. Shannon Bond. Elon Musk wants out of the Twitter deal. It could end up costing at least $1 billion. NPR, July 2022. URL: https://www.npr.org/2022/07/08/1110539504/twitter-

      Elon Musk agreed to buy Twitter, but now he’s trying to back out of the deal. He claims Twitter gave him misleading info about how many fake or spam accounts are on the platform, and says they didn’t give him enough access to data. Twitter, on the other hand, says they’ve shared what they needed to and that Musk is just using this as an excuse to avoid paying the $44 billion. Now it’s turning into a legal battle. This situation shows how even huge business deals can fall apart over questions about data and trust, and how messy things get when that happens.

    1. “Twitter has repeatedly said that spam bots represent less than 5% of its total user base. [Elon] Musk, meanwhile, has complained that the number is much higher, and has threatened to walk away from his agreement to buy the company.” Musk’s Dispute With Twitter Over Bots Continues to Dog Deal [d15], by Kurt Wagner, Bloomberg, July 7, 2022

       The data in question here is what percentage of Twitter users are spam bots, which Twitter claimed was less than 5% and Elon Musk claimed is higher than 5%. Data points often give the appearance of being concrete and reliable, especially if they are numerical. So when Twitter initially came out with the claim that less than 5% of users are spam bots, it may have been accepted by most people who heard it. Elon Musk then questioned that figure and attempted to back out of buying Twitter [d16]; Twitter is accusing Musk’s complaint of being an invented excuse [d17] to back out of the deal, and the case is now in court [d17].

       When looking at real-life data claims and datasets, you will likely run into many different problems and pitfalls. Any dataset you find might have: missing data; erroneous data (e.g., mislabeled, typos); biased data; or manipulated data. Any one of those issues might show up in Twitter’s claim or Musk’s counterclaim, but even in the best of situations there is still a fundamental issue with claims like this: all data is a simplification of reality.

      This part is a good example of how even numbers that look official can be shaky. It shows how data can be used to support totally different sides, depending on who’s interpreting it or what someone’s trying to get out of it. Makes me think that just because a stat exists doesn’t mean it’s neutral or trustworthy.
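One concrete way to see how shaky a "% bots" figure can be: the number depends entirely on where the classifier draws the line between human and bot. Here is a toy illustration with made-up "bot likelihood" scores (these are hypothetical numbers, not real Twitter data):

```python
# Made-up "bot likelihood" scores for 20 hypothetical accounts
# (0 = clearly human, 1 = clearly a bot). Illustrative only, not real data.
scores = [0.02, 0.05, 0.08, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40,
          0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.90, 0.95]

def percent_bots(scores, threshold):
    """Call an account a bot if its score is at or above the threshold."""
    bots = [s for s in scores if s >= threshold]
    return 100 * len(bots) / len(scores)

strict_estimate = percent_bots(scores, threshold=0.90)  # narrow definition of "bot"
loose_estimate = percent_bots(scores, threshold=0.50)   # broad definition of "bot"
```

With the strict definition this sample is 10% bots; with the loose one it is 45%. Same underlying data, very different headline number, which is one reason two parties can honestly (or conveniently) arrive at conflicting figures.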

  11. social-media-ethics-automation.github.io
    1. This article about click farms talks about how people are paid to create fake engagement on social media just to make it look popular by interacting with things over and over again. It’s crazy to think how much of what we see online could be totally fake, and how these click farms mess with our idea of what’s really trending or important. Makes you wonder if the numbers we see are even real half the time.

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      The comparison to the donkey protest is really interesting. The idea that there is a disconnect between the thing that displays a message and what is controlling it is very fitting for bots. Just like the donkey was used without fully understanding its role, bots are run by people but do things without the bot itself knowing what it’s doing. This makes it hard to hold anyone responsible when a bot spreads misinformation or causes trouble.
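The donkey analogy can be made concrete with a few lines of hypothetical Python: the code mechanically performs the act, while the intention lives entirely with the operator who calls it (no real social media API is used here):

```python
# The "donkey": code that mechanically performs the act of posting. It has no
# model of what the words mean, so the intention belongs entirely to whoever
# calls it. Hypothetical sketch; posts are collected in a list, not sent anywhere.
posted_messages = []

def bot_post(message: str):
    """Perform the act of posting without any understanding of the content."""
    posted_messages.append(message)

# The operator supplies the intention; the bot just carries it out.
bot_post("A political slogan the bot cannot understand")
```

Like the donkey, `bot_post` would carry any message at all with equal indifference, which is exactly why responsibility has to trace back to the operator rather than the mechanism.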