36 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Merriam-Webster. Definition of CAPITALISM. December 2023. URL: https://www.merriam-webster.com/dictionary/capitalism (visited on 2023-12-10).

      It defines capitalism as private ownership and free market competition, but ignores the extreme inequality of ownership and the enormous power held by capital holders. It also views "competition" as always leading to good results, when in reality, competition can degenerate into monopolies, exploitation, or companies disregarding ethics in pursuit of profit maximization.

    1. When shareholders buy stocks in a company, they are owed a percentage of the profits. Therefore it is the company leaders’ fiduciary duty [s11] to maximize the profits of the company (called the Friedman Doctrine [s12]). If the leader of the company (the CEO) intentionally makes a decision that they know will reduce the company’s profits, then they are cheating the shareholders out of money the shareholders could have had. CEOs mistakenly do things that lose money all the time, but doing so on purpose is a violation of fiduciary duty.

      CEOs are legally obligated to maximize profits, even if some decisions are clearly harmful to users or society. I always thought corporate greed was voluntary, but the explanation of fiduciary duty in the article makes me think that the system itself drives them to do so. So should we continue to blame individual leaders, or should we question the systemic structure that compels them to put shareholder interests above all else?

  3. social-media-ethics-automation.github.io
    1. Trauma and Shame. URL: https://www.oohctoolbox.org.au/trauma-and-shame (visited on 2023-12-10).

      When I read about the “attunement–break–repair” cycle, it felt completely different from how people actually react in real life. Caregivers are supposed to show a child that “the problem is the behavior, not you,” and I think this idea applies to how adults are treated online too. But honestly, most people don’t offer you that kind of repair at all.

    1. Reintegration: “Public shaming must aim at, and make possible, the reintegration of the norm violator back into the community, rather than permanently stigmatizing them.”

      I think it’s ironic that public shaming is supposed to leave room for people to eventually rejoin the community, because social media basically gives them no space to do that. The reading says shaming should allow for the possibility of reintegration, but online most people just want entertainment and don’t care whether someone ever gets the chance to repair anything. It makes the internet feel like a place that amplifies the “shame” part while intentionally deleting the “repair” part.

  4. Nov 2025
  5. social-media-ethics-automation.github.io
    1. Roni Jacobson. I’ve Had a Cyberstalker Since I Was 12. Wired, 2016. URL: https://www.wired.com/2016/02/ive-had-a-cyberstalker-since-i-was-12/ (visited on 2023-12-10).

      What shocked me was that law enforcement told her, "This isn't a crime because you didn't feel personal fear," which I found absurd. Why does our legal system still not treat online harassment as a personal harm?

    1. Suppose it’s been raining all day, and as I walk down the sidewalk, a car drives by, spraying me with water from the road. This does not make me happy. It makes me uncomfortable, since my clothes are wet, and it could hurt me if wet clothes means I get so cold I become ill. Or it could hurt me if I were on my way to an important interview, for which I will now show up looking sloppy. But the car has done nothing wrong, from a legal standpoint. There is no legal basis for reprisals, and indeed it would seem quite ridiculous if I tried to prosecute someone for having splashed me by driving near me. In a shared world, we sometimes wind up in each others’ splash zones.

      The “puddle-splashing” example made me notice how much of modern online harassment works the same way: each individual comment might look harmless on its own, but collectively they create real psychological damage. I’ve seen people dog-piled on social media, where no single message would qualify as illegal, but the overall effect is abusive.

  6. social-media-ethics-automation.github.io
    1. Daniel Oberhaus. Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors. Vice, November 2017. URL: https://www.vice.com/en/article/7x47bb/wikipedia-editors-elite-diversity-foundation (visited on 2023-12-08).

      I was surprised to discover that the majority of the website's content is handled by a very small group of editors, especially since most of those editors are male. So, how neutral and representative can Wikipedia truly be if the people shaping the information come from such a narrow group?

    1. When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren’t necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves.

      This reminds me of Wikipedia, but it also makes me wonder: if anyone can anonymously edit a page, what happens when someone adds misleading or inappropriate content? And what mechanisms exist to detect and correct these issues?
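Wikipedia's actual defenses combine human patrollers, watchlists, and machine-learning bots (ClueBot NG is a well-known example). As a toy illustration of the kind of heuristic such tools start from, here is a sketch that flags edits which blank most of a page or insert spam phrases; the function name and blocklist are invented for illustration, not Wikipedia's real rules.

```python
# Toy vandalism heuristic: flag edits that blank most of a page or
# insert phrases from a small blocklist. Real wiki tools (e.g. ClueBot NG)
# use machine learning plus human patrollers; this is only an illustration.

BLOCKLIST = {"buy now", "click here"}  # hypothetical spam phrases

def flag_edit(old_text: str, new_text: str) -> list[str]:
    """Return a list of reasons this edit looks suspicious (empty = OK)."""
    reasons = []
    # Large deletions are a classic vandalism signal.
    if old_text and len(new_text) < 0.2 * len(old_text):
        reasons.append("removed most of the page")
    added = new_text.lower()
    for phrase in BLOCKLIST:
        if phrase in added and phrase not in old_text.lower():
            reasons.append(f"inserted blocklisted phrase: {phrase!r}")
    return reasons

article = "Wikipedia is a free online encyclopedia edited by volunteers."
print(flag_edit(article, ""))                                  # blanking the page
print(flag_edit(article, article + " Click here for deals!"))  # spam insertion
print(flag_edit(article, article + " It launched in 2001."))   # ordinary edit
```

A flagged edit would then go to a human reviewer rather than being auto-reverted, which is roughly how low-confidence cases are handled in practice.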

  7. social-media-ethics-automation.github.io
    1. Brian Resnick. The 2018 Nobel Prize reminds us that women scientists too often go unrecognized. Vox, October 2018. URL: https://www.vox.com/science-and-health/2018/10/2/17929366/nobel-prize-physics-donna-strickland (visited on 2023-12-08).

      She became the third woman in history to win the Nobel Prize in Physics, but the fact that she didn't even have a Wikipedia page before winning the award highlights how many female scientists' achievements may go unnoticed or be underestimated. I wonder how many other outstanding women scientists have had their contributions buried behind the scenes simply because they haven't received the attention and recognition they deserve.

    1. they often hire teams in countries where they can pay workers less.

      I believe the working conditions of content moderators can cause emotional harm and injustice. It's been said that Facebook frequently recruits moderators from low-income countries and exposes them daily to violent or disturbing content, which could lead to psychological trauma. I think companies and governments should provide content moderators with appropriate compensation and psychological support, rather than treating them as easily replaceable labor.

  8. social-media-ethics-automation.github.io
    1. Mia Sato. YouTube reveals millions of incorrect copyright claims in six months. The Verge, December 2021. URL: https://www.theverge.com/2021/12/6/22820318/youtube-copyright-claims-transparency-report (visited on 2023-12-08).

      Automated moderation is unfair to smaller creators. Even if only a small fraction of claims are wrong, those errors can completely ruin a channel or its income. It feels like YouTube's system is more inclined to protect large companies than individual users.

    1. 14.5.3. Charles W. Mills and The Racialized Contract

      Mills argues that social contracts are often formulated and serve those in power, which is closely linked to how tech companies (mostly run by mainstream social groups) decide what online discourse is "acceptable." It feels unfair that content from marginalized groups is often more easily deleted, while biased content continues to exist.

  9. social-media-ethics-automation.github.io
    1. Anya Kamenetz. Facebook's own data is not as conclusive as you think about teens and mental health. NPR, October 2021. URL: https://www.npr.org/2021/10/06/1043138622/facebook-instagram-teens-mental-health (visited on 2023-12-08).

      Platforms like Instagram encourage people to compare their lives to unrealistic images. Even though Facebook is aware of these negative impacts, it hasn't made any real changes. I don't think social media is entirely bad, but the constant pressure to maintain a perfect image and get likes can easily damage a person's self-confidence. Instead of persuading teenagers to abandon social media, we should focus on teaching them how to use it healthily and helping them understand that what they see online isn't always true.

    1. 13.1.1. Digital Detox?

      I find the concept of "digital detox" interesting, but also overly idealistic. The article argues that simply viewing social media as harmful oversimplifies a complex reality, and I agree. As someone who uses social media daily, I find complete abstinence unrealistic. Instead, I believe it's important to recognize how platforms manipulate our attention and emotions. The issue isn't just the tools themselves, but how we use them. This perspective is more practical than simply labeling technology as "bad."

  10. social-media-ethics-automation.github.io
    1. Oliver Tearle. Who Said, ‘A Lie Is Halfway Round the World Before the Truth Has Got Its Boots On’? June 2021. URL: https://interestingliterature.com/2021/06/lie-halfway-round-world-before-truth-boots-on-quote-origin-meaning/ (visited on 2023-12-08).

      The article explains how the quote “A lie is halfway around the world before the truth has its boots on” has changed over time. I thought it was interesting how many people wrongly credit it to Mark Twain or Churchill, which shows how we like to attach big names to make a saying sound more powerful. It really reminds me of how fast misinformation spreads on social media today; people share things so quickly that the truth never has time to catch up. Even though the quote is old, it still feels completely true in our world now.

    1. Biological Evolution

      I find the description of internet memes as "cultural genes" quite interesting. It reminds me that the evolution of online information mirrors biological evolution: the most "adapted" ideas survive because they spread faster or attract more attention. However, unlike biological evolution, internet memes don't require authenticity to survive, because social media algorithms, acting like "natural selection," prioritize whatever content gets the highest engagement.
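The "algorithm as natural selection" analogy can be made concrete with a toy simulation: if a feed shows memes with probability proportional to engagement, the high-engagement meme comes to dominate regardless of whether it is true. All names and numbers below are invented for illustration.

```python
import random

# Toy "engagement selection": a feed repeatedly picks which meme to show,
# weighted by engagement, ignoring truthfulness entirely.
random.seed(0)  # fixed seed so the run is repeatable

memes = [
    {"name": "true-but-dull", "engagement": 1, "true": True},
    {"name": "false-but-fun", "engagement": 5, "true": False},
]

def show_feed(rounds: int) -> dict:
    """Count how often each meme is shown when picked by engagement weight."""
    shown = {m["name"]: 0 for m in memes}
    weights = [m["engagement"] for m in memes]
    for _ in range(rounds):
        pick = random.choices(memes, weights=weights)[0]
        shown[pick["name"]] += 1
    return shown

print(show_feed(1000))  # the false-but-engaging meme gets most impressions
```

The point of the sketch is only the selection pressure: nothing in the loop ever reads the `"true"` field.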

  11. Oct 2025
  12. social-media-ethics-automation.github.io
    1. Lauren Goode. I Called Off My Wedding. The Internet Will Never Forget. Wired, 2021. URL: https://www.wired.com/story/weddings-social-media-apps-photos-memories-miscarriage-problem/ (visited on 2023-12-07).

      This article made me realize how apps “remember” more than we want them to — and don’t know when to stop. The author’s experience shows how technology treats emotional pain like just another data point.

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated taking into account many factors, such as:

      I think it’s kind of scary how recommendation algorithms know so much about us, yet we barely know how they actually work. The fact that platforms keep their algorithms secret makes me feel uneasy — it’s like we’re being studied without knowing what the experiment is. I’ve definitely noticed times when I talked about something and it suddenly showed up on my feed.
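To make the contrast in the quoted passage concrete, here is a minimal sketch of the two approaches it names: reverse chronological ordering versus a weighted engagement score. The post fields and weights are invented stand-ins; real platforms combine far more signals than this.

```python
from datetime import datetime

# Two toy recommendation algorithms over the same posts. Field names
# ("time", "likes", "followed") are made up for illustration.
posts = [
    {"id": 1, "time": datetime(2024, 1, 3), "likes": 5,   "followed": True},
    {"id": 2, "time": datetime(2024, 1, 2), "likes": 900, "followed": False},
    {"id": 3, "time": datetime(2024, 1, 1), "likes": 40,  "followed": True},
]

def reverse_chronological(posts):
    """Simplest feed: newest first, like Twitter's 'See latest tweets' option."""
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def engagement_ranked(posts):
    """Score posts by likes, with a boost for accounts the user follows."""
    def score(p):
        return p["likes"] * (2.0 if p["followed"] else 1.0)
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in reverse_chronological(posts)])  # [1, 2, 3]
print([p["id"] for p in engagement_ranked(posts)])      # [2, 3, 1]
```

The same three posts produce two different feeds, which is exactly why the choice of ranking rule matters so much.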

  13. social-media-ethics-automation.github.io
    1. David Robson. The women with superhuman vision. BBC, February 2022. URL: https://www.bbc.com/future/article/20140905-the-women-with-super-human-vision (visited on 2023-12-07).

      It really made me stop and think about what “normal vision” even means. If some women can see more colours than most people, then the typical human colour range is just one version of reality, and our “normal” might actually be a limitation.

  14. social-media-ethics-automation.github.io
    1. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined [j1]. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.

      I really like how this chapter explains that disability isn’t just about what someone can’t do, but about what society expects people should be able to do. It made me realize that so many “disabilities” come from poor design choices, not personal limits. For example, being short or colorblind only becomes a problem when things are built without those people in mind.

    1. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      I find it really concerning that Facebook decided not to notify users after such a massive data breach. It feels like they care more about protecting their reputation than protecting the people who use their platform. Even if the leaked information was already public, users still deserve to know when their data is being used or exposed in unsafe ways. I think this shows how weak data privacy laws are, especially when companies can make their own choices about whether to inform people.

  15. social-media-ethics-automation.github.io
    1. For example, a social media application might offer us a way of “Private Messaging” [i1] (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.

      This made me rethink what privacy actually means online. I'd always assumed my personal information was private and secure, but this shows how much trust we've unwittingly placed in these platforms. Once information is uploaded online, is it truly private? Even if we delete something, a copy might still remain on the server, so I'm now unsure whether deleting anything actually accomplishes much.

  16. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      I feel like this breaks a basic idea of consent: if you’ve never joined, you never agreed to be tracked. And yet the only options seem to be to accept it or avoid the web entirely, which is basically impossible. So I question how much control I really have. If the system can build a “shadow profile” of my interests without me opting in, then is there truly any privacy left?

    1. The AT Protocol API lets you access a lot of the data that Bluesky tracks (since Bluesky is a more open social media protocol), but Bluesky probably track much more than they let you have access to (like what other social media platforms do).

      I think it’s interesting how the Bluesky API gives researchers access to certain data but still limits what they can see. It reminds me that even when a platform claims to be “open,” it still controls what kind of information we’re allowed to analyze. I wonder how much bias this creates in research if the data we get only shows part of the picture.
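As a sketch of what this "open" access looks like in practice, AT Protocol calls are plain HTTP requests to XRPC endpoints. The host and method name below match Bluesky's publicly documented endpoints as I understand them, but treat them as assumptions; the code only constructs a request URL and sends nothing.

```python
from urllib.parse import urlencode

# Sketch of how an AT Protocol (XRPC) query is shaped. The host and method
# name are assumptions based on Bluesky's public API docs; no request is sent.
PUBLIC_HOST = "https://public.api.bsky.app"

def xrpc_url(method: str, **params) -> str:
    """Build the URL for an XRPC query method such as app.bsky.feed.getAuthorFeed."""
    query = urlencode(params)
    return f"{PUBLIC_HOST}/xrpc/{method}" + (f"?{query}" if query else "")

url = xrpc_url("app.bsky.feed.getAuthorFeed", actor="bsky.app", limit=5)
print(url)
```

Whatever the endpoint returns is still only the slice of data Bluesky chooses to expose, which is the limitation the annotation above is pointing at.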

  17. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      The Jenkem prank shows that trolls know exactly how to weaponize sensationalism and exploit weaknesses in journalistic culture to make a point. If journalists treated troll-generated stories with more skepticism instead of just chasing clicks, would trolls lose most of their power?

    1. Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior.

      I agree with the idea that just “not feeding the trolls” doesn’t always work. Sometimes ignoring them gives them more space to keep spreading hate, especially when the target is already being attacked or harassed. I think the article makes a good point that it’s unfair to put all the responsibility on the person being targeted.

  18. social-media-ethics-automation.github.io
    1. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      I think it’s interesting how the article showed that Trump’s tweets were not random but followed clear patterns. The idea that the tone and device he used could reveal which tweets were his own made me realize how carefully public figures can use social media to shape their image. It also makes me think about the power a single post has; one tweet can change people’s opinions or even affect politics.
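The device-based analysis from the cited article can be sketched in miniature: group tweets by the client that posted them and compare word usage. The tweets and word list below are invented stand-ins for illustration, not the article's actual data or method (which used statistical sentiment analysis in R).

```python
# Toy version of the "which device posted it" analysis: compare the rate
# of negative words across clients. All data here is invented.
tweets = [
    {"source": "Android", "text": "The failing media is so dishonest. Sad!"},
    {"source": "Android", "text": "Crooked opponents, bad deal, total disaster"},
    {"source": "iPhone",  "text": "Thank you Ohio! #MakeAmericaGreatAgain"},
    {"source": "iPhone",  "text": "Join me tomorrow, tickets at the link"},
]
NEGATIVE = {"failing", "dishonest", "sad", "crooked", "bad", "disaster"}

def negative_rate(source: str) -> float:
    """Fraction of words that are 'negative' in tweets from one client."""
    words = [w.strip("!,.#").lower()
             for t in tweets if t["source"] == source
             for w in t["text"].split()]
    hits = sum(1 for w in words if w in NEGATIVE)
    return hits / len(words)

print(f"Android: {negative_rate('Android'):.2f}")
print(f"iPhone:  {negative_rate('iPhone'):.2f}")
```

Even this crude word-counting separates the two invented "voices," which is the intuition behind the article's finding that tone correlates with device.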

  19. social-media-ethics-automation.github.io
    1. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.

      This made me think about how people’s reactions to “fake” content depend on their expectations. Some fans felt betrayed, but others didn’t really care once they knew it was scripted. I feel like this shows that people don’t always need something to be 100% real to enjoy it, they just want to know what kind of relationship they’re in. It reminds me of how influencers act online now. Even if their posts are planned, as long as we know it’s part of their brand and not pretending to be completely natural, it still feels authentic in its own way.

  20. social-media-ethics-automation.github.io
    1. Mark R. Cheathem. Conspiracy Theories Abounded in 19th-Century American Politics. URL: https://www.smithsonianmag.com/history/conspiracy-theories-abounded-19th-century-american-politics-180971940/ (visited on 2023-11-24).

      I agree with the idea in the last paragraph: conspiracy theories are not new things; they've been part of American politics almost since its inception. Perhaps we're just seeing an old pattern in new clothes.

    1. Fig. 5.2 A newer bulletin board system. In this one you can click on the thread you want to view, and threads can include things like images.

      It’s interesting how communication during Web 1.0 required more effort and focus. People had to log in, find a thread, and join conversations on purpose. I feel like that made interactions more meaningful compared to today’s. It makes me think that technology back then encouraged deeper attention in communication, even though it was slower.

  21. social-media-ethics-automation.github.io
    1. Matt Binder. The majority of traffic from Elon Musk's X may have been fake during the Super Bowl, report suggests. February 2024. Section: Tech. URL: https://mashable.com/article/x-twitter-elon-musk-bots-fake-traffic (visited on 2024-03-31).

      From a personal perspective, as a user and content consumer, this article makes me question how much of what seems “popular” is genuinely popular. It also makes me want to scrutinize any claims of “viral reach”: how many of those views were real?

    1. Metadata: In addition to the main components of the images, sound, and video data, this information is often stored with metadata, such as: the time the image/sound/video was created; the location where the image/sound/video was taken; the type of camera or recording device used to create the image/sound/video; etc.

      This part got me thinking about how much invisible information we leave behind online. When I post content, I usually only consider the image or video itself, not embedded details like the time, location, or device used. This metadata can easily be used to track or identify individuals, raising serious privacy and security concerns. So I am wondering if most social media users realize that deleting a post doesn't necessarily remove this hidden data.
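One practical response to the quoted passage is stripping sensitive tags before sharing. This sketch uses a plain dict as a stand-in for real EXIF parsing (a library such as Pillow exposes the actual tags via `Image.getexif()`); the tag names mirror common EXIF fields, and the photo values are invented.

```python
# Sketch of stripping location/device metadata before sharing a photo.
# The dict is a stand-in for real EXIF data; tag names mirror common
# EXIF fields, but this is an illustration, not a parsing library.
SENSITIVE_TAGS = {"GPSLatitude", "GPSLongitude", "DateTimeOriginal", "Model"}

def strip_metadata(exif: dict) -> dict:
    """Return a copy of the metadata with sensitive tags removed."""
    return {tag: value for tag, value in exif.items()
            if tag not in SENSITIVE_TAGS}

photo_exif = {
    "Model": "Pixel 8",                        # device that took the photo
    "DateTimeOriginal": "2024:01:05 14:22:31", # when it was taken
    "GPSLatitude": 47.6062,                    # where it was taken
    "GPSLongitude": -122.3321,
    "Orientation": 1,                          # harmless display hint, kept
}
print(strip_metadata(photo_exif))  # {'Orientation': 1}
```

Some platforms strip this data on upload and some do not, which is why it is worth knowing what a file carries before posting it.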

  22. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      According to the article, a single phone can masquerade as 20 different devices simply by switching its IP address. This means a single room of phones can pretend to be thousands of online users. What surprised me most was the article's description of workers sitting in a room, manually manipulating these phones to generate fake likes and follows. To me, this makes click farms feel less like "bots" tricking the algorithm and more like real people being used to maintain the illusion.

    1. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting.

      The example of the donkey protest reminded me that robots operate in a similar way. The donkey doesn't know what message it's sending, and the robot doesn't really "understand" what it's doing. But humans are still behind them, determining the message and the actions. I think this shows that we shouldn't view robots as neutral or harmless. They're just tools, and they always reflect the thoughts of the people who create or use them.

  23. Sep 2025
    1. “Tsze-kung asked, saying, ‘Is there one word which may serve as a rule of practice for all one’s life?’ The Master said, ‘Is not reciprocity such a word? What you do not want done to yourself, do not do to others.’”

      I think the Golden Rule is very interesting, since it appears across different cultures and traditions. This shows that empathy and fairness are values people have always pursued. But what makes me feel conflicted is that in modern society, especially on social media, this is often ignored. Many companies don’t really follow “What you do not want done to yourself, do not do to others.” On the contrary, if they can raise attention or boost popularity through public opinion, they will often stop at nothing to exploit users’ emotions and attention.

    1. “A person is a person through other people.”

      Ubuntu stood out to me with its idea that “a person is a person through other people.” I like how it shifts the focus away from individual gain and toward community well-being. In the context of social media, this is very relevant, because platforms often encourage competition for likes and followers instead of fostering real connection. I feel this idea could really guide how platforms treat users and how users treat each other.