22 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Oliver Tearle. Who Said, ‘A Lie Is Halfway Round the World Before the Truth Has Got Its Boots On’? June 2021. URL: https://interestingliterature.com/2021/06/lie-halfway-round-world-before-truth-boots-on-quote-origin-meaning/ (visited on 2023-12-08).

      The article explains how the quote “A lie is halfway around the world before the truth has its boots on” has changed over time. I thought it was interesting how many people wrongly credit it to Mark Twain or Churchill, which shows how we like to attach big names to a saying to make it sound more powerful. It really reminds me of how fast misinformation spreads on social media today; people share things so quickly that the truth never has time to catch up. Even though the quote is old, it still feels completely true in our world now.

    1. Biological Evolution

      I find the description of internet memes as "cultural genes" quite interesting. It reminds me that the evolution of online information mirrors biological evolution; the most "adapted" ideas survive because they spread faster or attract more attention. However, unlike biological evolution, internet memes don't require authenticity to survive. Social media algorithms therefore act like "natural selection," promoting the content with the highest engagement.

  3. Oct 2025
  4. social-media-ethics-automation.github.io
    1. Lauren Goode. I Called Off My Wedding. The Internet Will Never Forget. Wired, 2021. URL: https://www.wired.com/story/weddings-social-media-apps-photos-memories-miscarriage-problem/ (visited on 2023-12-07).

      This article made me realize how apps “remember” more than we want them to — and don’t know when to stop. The author’s experience shows how technology treats emotional pain like just another data point.

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated taking into account many factors, such as:

      I think it’s kind of scary how recommendation algorithms know so much about us, yet we barely know how they actually work. The fact that platforms keep their algorithms secret makes me feel uneasy — it’s like we’re being studied without knowing what the experiment is. I’ve definitely noticed times when I talked about something and it suddenly showed up on my feed.
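      The simplest recommendation algorithm the excerpt mentions, reverse chronological ordering, can be sketched in a few lines of Python. This is only an illustration of the idea; the `Post` structure and its field names are invented here, not taken from any real platform's API.

      ```python
      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class Post:
          author: str
          text: str
          created_at: datetime

      def reverse_chronological_feed(posts):
          """The simplest recommendation algorithm: show the newest posts first."""
          return sorted(posts, key=lambda p: p.created_at, reverse=True)

      posts = [
          Post("a", "first", datetime(2024, 1, 1)),
          Post("b", "second", datetime(2024, 1, 3)),
          Post("c", "third", datetime(2024, 1, 2)),
      ]
      feed = reverse_chronological_feed(posts)
      print([p.text for p in feed])  # newest first: ['second', 'third', 'first']
      ```

      More complicated algorithms would replace the sort key with a score computed from many signals (engagement, similarity to past clicks, and so on), which is exactly where the opacity the annotation worries about comes in.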

  5. social-media-ethics-automation.github.io
    1. David Robson. The women with superhuman vision. BBC, February 2022. URL: https://www.bbc.com/future/article/20140905-the-women-with-super-human-vision (visited on 2023-12-07).

      It really made me stop and think about what “normal vision” even means. If some women can see more colours than most people, then the typical human colour range is just one version of reality, and our “normal” might actually be a limitation.

  6. social-media-ethics-automation.github.io
    1. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined [j1]. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.

      I really like how this chapter explains that disability isn’t just about what someone can’t do, but about what society expects people should be able to do. It made me realize that so many “disabilities” come from poor design choices, not personal limits. For example, being short or colorblind only becomes a problem when things are built without those people in mind.

    1. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      I find it really concerning that Facebook decided not to notify users after such a massive data breach. It feels like they care more about protecting their reputation than protecting the people who use their platform. Even if the leaked information was already public, users still deserve to know when their data is being used or exposed in unsafe ways. I think this shows how weak data privacy laws are, especially when companies can make their own choices about whether to inform people.

  7. social-media-ethics-automation.github.io
    1. For example, a social media application might offer us a way of “Private Messaging” [i1] (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.

      I've rethought what privacy actually means online. I've always assumed my personal information was private and secure, but this reflects the trust we've unwittingly placed in these platforms. Once information is uploaded online, is it truly private? Even if we delete something, a copy might still remain on the server. So I'm now unsure whether deletion even means anything.

  8. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      I feel like this breaks a basic idea of consent: if you’ve never joined, you never agreed to be tracked. And yet the only options seem to be to accept it or avoid the web entirely, which is basically impossible. So I question how much control I really have. If the system can build a “shadow profile” of my interests without me opting in, is there truly any privacy left?

    1. The AT Protocol API lets you access a lot of the data that Bluesky tracks (since Bluesky is a more open social media protocol), but Bluesky probably tracks much more than they let you have access to (like what other social media platforms do).

      I think it’s interesting how the Bluesky API gives researchers access to certain data but still limits what they can see. It reminds me that even when a platform claims to be “open,” it still controls what kind of information we’re allowed to analyze. I wonder how much bias this creates in research if the data we get only shows a part.

  9. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      The Jenkem prank shows that trolls know exactly how to weaponize sensationalism and exploit weaknesses in journalistic culture to make a point. If journalists treated troll-generated stories with more skepticism instead of just chasing clicks, would trolls lose most of their power?

    1. Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior.

      I agree with the idea that just “not feeding the trolls” doesn’t always work. Sometimes ignoring them gives them more space to keep spreading hate, especially when the target is already being attacked or harassed. I think the article makes a good point that it’s unfair to put all the responsibility on the person being targeted.

  10. social-media-ethics-automation.github.io
    1. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      I think it’s interesting how the article showed that Trump’s tweets were not random but followed clear patterns. The idea that the tone and the device he used could show which tweets were his own made me realize how carefully public figures can use social media to shape their image. It also makes me think about the power a single post has: one tweet can change people’s opinions or even affect politics.
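      The core move in the article, splitting tweets by posting device and comparing their language, can be sketched very simply. The tweets and word list below are toy data invented for illustration, not the real dataset or method from the article (which used a statistical sentiment analysis).

      ```python
      # Toy data: each tweet records which device it was posted from.
      tweets = [
          {"source": "Android", "text": "The FAILING media is dishonest and sad!"},
          {"source": "iPhone", "text": "Thank you Ohio! #MakeAmericaGreatAgain"},
          {"source": "Android", "text": "Crooked coverage, so dishonest. Bad!"},
          {"source": "iPhone", "text": "Join me tomorrow in Florida!"},
      ]

      # A tiny, hand-picked "angry word" lexicon for the sketch.
      ANGRY_WORDS = {"failing", "dishonest", "sad", "crooked", "bad"}

      def angry_word_rate(tweets, source):
          """Fraction of words from the given device that are in ANGRY_WORDS."""
          words = []
          for t in tweets:
              if t["source"] == source:
                  words += [w.strip("!.,#").lower() for w in t["text"].split()]
          hits = sum(1 for w in words if w in ANGRY_WORDS)
          return hits / len(words)

      # In this toy sample, the Android tweets use angrier language.
      print(angry_word_rate(tweets, "Android"), angry_word_rate(tweets, "iPhone"))
      ```

      The article's finding was essentially this comparison done rigorously at scale: a consistent gap in tone between the two devices suggested two different authors.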

  11. social-media-ethics-automation.github.io
    1. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.

      This made me think about how people’s reactions to “fake” content depend on their expectations. Some fans felt betrayed, but others didn’t really care once they knew it was scripted. I feel like this shows that people don’t always need something to be 100% real to enjoy it, they just want to know what kind of relationship they’re in. It reminds me of how influencers act online now. Even if their posts are planned, as long as we know it’s part of their brand and not pretending to be completely natural, it still feels authentic in its own way.

  12. social-media-ethics-automation.github.io
    1. Mark R. Cheathem. Conspiracy Theories Abounded in 19th-Century American Politics. URL: https://www.smithsonianmag.com/history/conspiracy-theories-abounded-19th-century-american-politics-180971940/ (visited on 2023-11-24).

      I agree with the idea in the last paragraph: conspiracy theories are not new things; they've been a part of American politics almost since its inception. Perhaps we're just seeing an old pattern in new clothes.

    1. Fig. 5.2 A newer bulletin board system. In this one you can click on the thread you want to view, and threads can include things like images.

      It’s interesting how communication during Web 1.0 required more effort and focus. People had to log in, find a thread, and join conversations on purpose. I feel like that made interactions more meaningful compared to today’s. It makes me think that the technology back then encouraged deeper attention in communication, even though it was slower.

  13. social-media-ethics-automation.github.io
    1. Matt Binder. The majority of traffic from Elon Musk's X may have been fake during the Super Bowl, report suggests. February 2024. Section: Tech. URL: https://mashable.com/article/x-twitter-elon-musk-bots-fake-traffic (visited on 2024-03-31).

      From a personal perspective, as a user and content consumer, this article makes me question how much of what seems “popular” is genuinely popular. It also makes me want to scrutinize any claims of “viral reach”: how many of those views were real?

    1. Metadata: In addition to the main components of the images, sound, and video data, this information is often stored with metadata, such as: the time the image/sound/video was created; the location where the image/sound/video was taken; the type of camera or recording device used to create the image/sound/video; etc.

      This part got me thinking about how much invisible information we leave behind online. When I post content, I usually only consider the image or video itself, not embedded details like the time, location, or device used. This metadata can easily be used to track or identify individuals, raising serious privacy and security concerns. So I am wondering whether most social media users realize that deleting a post doesn't necessarily remove this hidden data.
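      The distinction the excerpt draws, content versus metadata riding along with it, can be sketched conceptually. The structure and field names below are invented for illustration; real image metadata lives in formats like EXIF, but the privacy point is the same.

      ```python
      # Conceptual sketch: an uploaded photo is pixel data plus hidden metadata.
      photo = {
          "pixels": b"...raw image bytes...",
          "metadata": {
              "timestamp": "2024-03-07T14:32:00",
              "gps": (47.6062, -122.3321),  # coordinates could reveal a home address
              "device": "Pixel 8",
          },
      }

      def strip_metadata(media):
          """Return a copy containing only the content itself, no metadata."""
          return {"pixels": media["pixels"], "metadata": {}}

      clean = strip_metadata(photo)
      print(clean["metadata"])  # {}
      ```

      Some platforms strip metadata like this on upload, but as the annotation notes, whatever the platform already collected may persist on its servers regardless.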

  14. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      The article describes how a single phone can masquerade as 20 different devices simply by switching its IP address, which means a single room of phones can pretend to be thousands of online users. What surprised me most was the article's description of workers sitting in a room, manually manipulating these phones to generate fake likes and follows. To me, this makes click farms feel like using real people as "bots" to trick the algorithm, with humans maintaining the illusion.

    1. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting.

      The example of the donkey protest reminded me that robots operate in a similar way. The donkey doesn't know what message it's sending, and the robot doesn't really "understand" what it's doing. But humans are still behind them, determining the message and the actions. I think this shows that we shouldn't view robots as neutral or harmless. They're just tools, and they always reflect the thoughts of the people who create or use them.

  15. Sep 2025
    1. “Tsze-kung asked, saying, ‘Is there one word which may serve as a rule of practice for all one’s life?’ The Master said, ‘Is not reciprocity such a word? What you do not want done to yourself, do not do to others.’”

      I think the Golden Rule is very interesting, since it appears across different cultures and traditions. This shows that empathy and fairness are values people have always pursued. But what makes me feel conflicted is that in modern society, especially on social media, this is often ignored. Many companies don’t really follow “What you do not want done to yourself, do not do to others.” On the contrary, if they can raise attention or boost popularity through public opinion, they will often stop at nothing to exploit users’ emotions and attention.

    1. “A person is a person through other people.”

      The Ubuntu idea that “a person is a person through other people” stood out to me. I like how it shifts the focus away from individual gain and toward community well-being. In the context of social media, this is very relevant, because platforms often encourage competition for likes and followers instead of fostering real connection. I feel this idea could really guide how platforms treat users and how users treat each other.