20 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Rick Paulas. What It Feels Like to Go Viral. Pacific Standard, June 2017. URL: https://psmag.com/economics/going-viral-is-like-doing-cartwheels-on-the-water-spout-of-a-giant-whale (visited on 2023-12-08).

      This article really resonates. The author describes going viral as "doing cartwheels on the water spout of a giant whale" - it sounds funny, but imagine how out of control it is. You post something on social media, and overnight it becomes the topic of discussion all over the world. The pressure and absurdity hit you all at once. It is a reminder that although going viral seems glamorous, it often comes with a huge psychological burden, exposure of privacy, and completely unpredictable consequences. In other words, the spotlight of going viral carries a lot of burning heat with it.

    1. Much of the internet has developed a culture of copying without necessarily giving attribution to where it came from.

      To be honest, this statement is not an exaggeration at all. Memes, pictures, and jokes spread so fast online that the original creators can hardly keep up: by the time you notice something, it has already been repackaged, filtered, and re-fonted by a dozen netizens, until it's impossible to tell who made it first. The internet may have a memory, but sometimes it seems to remember who spread a thing faster rather than who created it. Giving credit is not hard, though. A simple "cr: original creator" shows respect and keeps you from becoming a "content pirate", and everyone feels better for it.

  3. social-media-ethics-automation.github.io
    1. Lauren Feiner. DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising. CNBC, June 2022. URL: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html (visited on 2023-12-07).

      Many people think that algorithms are just "numbers running automatically", without emotions or biases. But in this news story, Facebook was investigated by the US Department of Justice because its ad-delivery system was suspected of discrimination, and later reached a settlement. The real irony is that no one was sitting there manually excluding certain groups; the platform's "optimization logic" automated this bias, scaled it up, and executed it at low cost. In other words, discrimination doesn't require a bad person; it only needs an "efficiency-first, click-through-rate-is-king" advertising system. On the surface it is helping businesses find "the people most likely to respond to the ad", but in practice it builds invisible thresholds into the real world, so that certain groups never see the same opportunities. "Technological neutrality" becomes a nice-sounding but empty slogan.
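
      To make that dynamic concrete, here is a minimal sketch (nothing like Facebook's actual system; the group names and click rates are invented) of how a delivery rule that only maximizes estimated click-through rate can end up showing an ad to one group exclusively, with no exclusion coded anywhere:

      ```python
      # Toy ad-delivery simulation: pure CTR maximization, no explicit exclusion.
      # All numbers are made up for illustration.
      estimated_ctr = {"group_a": 0.031, "group_b": 0.024}  # learned from past clicks

      impressions = {"group_a": 0, "group_b": 0}
      for _ in range(10_000):
          # "Efficiency-first": always show the ad to whichever group
          # currently has the higher estimated click-through rate.
          target = max(estimated_ctr, key=estimated_ctr.get)
          impressions[target] += 1

      print(impressions)  # {'group_a': 10000, 'group_b': 0}
      # group_b never sees the (housing) ad, even though nobody "excluded" them.
      ```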

    1. when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of intent or bias of the individual judges.

      The most heart-wrenching thing about this statement is that it reveals the problem does not stem from a single "bad judge"; the system itself is inherently biased. Sometimes, without anyone intentionally discriminating, the rules themselves automatically enforce discrimination, and even make people believe this is a "normal" or "neutral" procedure.

  4. Oct 2025
    1. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      I read this NPR report, "After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users", and I was really shocked. The data of 530 million people was leaked, and yet the company chose not to notify the users? This made me realize that big companies claim to "value privacy", but when problems occur, their first reaction is usually to protect themselves rather than their users. After reading it, I will be more cautious about the personal information I put on social media. Sometimes "the sense of security" is just an illusion.

  5. social-media-ethics-automation.github.io
    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      I think the sentence "While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely" is particularly true. We always say we want to protect our privacy, but in reality we still readily click "agree" and hand over our information without hesitation. Seeing this sentence made me reflect: perhaps we have become so accustomed to convenience that we have overlooked security.

  6. social-media-ethics-automation.github.io
    1. Christie Aschwanden. Science Isn’t Broken. FiveThirtyEight, August 2015. URL: https://fivethirtyeight.com/features/science-isnt-broken/ (visited on 2023-12-05).

      I was deeply impressed by this article, "Science Isn't Broken". The author points out that science is not an infallible system but a process of continuous trial, error, and correction. I really like this framing because it lets us see the human side of science: researchers can make mistakes and have biases, but it is precisely this attitude of continuous improvement that enables science to keep advancing. After reading it I felt inspired, and better able to understand that "making mistakes" is actually part of growth.

    1. something appears to be correlated, doesn’t mean that it is connected in the way it looks like.

      I particularly agree with the sentence quoted above: just because something seems related doesn't mean it actually is. In daily life we are often misled by "superficial connections", such as seeing two events occur together and subconsciously assuming they have a causal relationship. It reminds me that when looking at data or making judgments, I should think more carefully about the real reasons behind them instead of being led by numbers or coincidences.
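
      A minimal sketch of this trap, using made-up data and the classic ice-cream-and-drownings example: two things correlate strongly because a hidden third factor (temperature) drives both, and the correlation largely disappears once the confounder is held roughly fixed:

      ```python
      # Spurious correlation via a confounder (all data invented; Python 3.10+).
      import random
      import statistics

      random.seed(1)

      temperature = [random.uniform(10, 35) for _ in range(200)]
      ice_cream   = [t * 2.0 + random.gauss(0, 3) for t in temperature]  # driven by temp
      drownings   = [t * 0.5 + random.gauss(0, 2) for t in temperature]  # also driven by temp

      r = statistics.correlation(ice_cream, drownings)
      print(f"correlation(ice cream, drownings) = {r:.2f}")  # strong, roughly 0.8+

      # Hold the confounder roughly fixed: within a narrow temperature band,
      # the apparent link between the two is typically close to zero.
      narrow = [(i, d) for t, i, d in zip(temperature, ice_cream, drownings) if 20 <= t <= 22]
      if len(narrow) > 2:
          r_fixed = statistics.correlation([i for i, _ in narrow], [d for _, d in narrow])
          print(f"within a narrow temperature band = {r_fixed:.2f}")
      ```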

  7. social-media-ethics-automation.github.io
    1. Film Crit Hulk. Don’t feed the trolls, and other hideous lies. The Verge, July 2018. URL: https://www.theverge.com/2018/7/12/17561768/dont-feed-the-trolls-online-harassment-abuse (visited on 2023-12-05).

      I think the title of this article is clever and very true to life. Everyone used to say "don't feed the trolls", but this article presents a different perspective: ignoring them doesn't truly solve the problem. The harm caused by online trolls is real, and it isn't the case that "as long as you don't respond, everything will be fine". It makes me think that online interactions are quite similar to real life: we can't pretend malicious remarks don't exist; we need to confront them and build a healthier space for communication. The author writes in a straightforward, somewhat angry tone that conveys the urgency of "we can't keep pretending we haven't seen it".

    1. Trolling sometimes gives trolls a feeling of empowerment when they successfully cause disruption or cause pain.

      I think this sentence really resonates. Often, internet trolls aren't just trying to be "funny"; they seek a sense of existence or superiority by manipulating other people's emotions. Seeing others get angry makes them feel they have the upper hand. But this "sense of power" is hollow: a momentary emotional satisfaction with loneliness behind it. Instead of causing pain, it would be better to channel that urge to express oneself into something that actually sparks communication or reflection.

  8. social-media-ethics-automation.github.io
    1. lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited on 2023-11-24).

      I find the story of lonelygirl15 particularly intriguing. At first she shared her life on YouTube as an ordinary girl, and many people found her sincere and approachable, like a friend to confide in. Later it emerged that it was all a performance, a web series meticulously planned by actors and a production team. When the truth came out, the audience reacted strongly, not only because they had been deceived, but because the feeling of "being understood" and "being connected" suddenly vanished. I can understand that sense of betrayal, because the trust we invest in the online world is, in essence, as precious as trust in real life. Looking back, though, the lonelygirl15 phenomenon also reminds us that authenticity doesn't necessarily mean "complete factual truth"; it can be a more complex emotional experience. Even if it was a performance, the companionship and resonance people felt while watching were real. Perhaps this is the most contradictory and most human aspect of the internet age.

  9. social-media-ethics-automation.github.io
    1. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal only to then have it revealed that the signal was untrustworthy is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are even more heightened when we find we have been duped simply for someone else’s amusement at having done so.

      I can truly understand this statement. The feeling of being deceived is truly awful - not only because we were deceived, but also because we start to doubt our ability to make correct judgments. This reminds me of some "true stories" accounts I followed on social media earlier. Later, I discovered that they were actually fabricated. The sense of loss is deeper than just an information error. Perhaps the reason why we react so strongly to "falsehood" is that trust is an emotional investment for us. When others take advantage of this trust, we lose not only the authenticity of the information but also the sense of security between people.

  10. social-media-ethics-automation.github.io
    1. Tom Knowles. I’m so sorry, says inventor of endless online scrolling. The Times, April 2019. URL: https://www.thetimes.co.uk/article/i-m-so-sorry-says-inventor-of-endless-online-scrolling-9lrv59mdk (visited on 2023-11-24).

      This article describes how Aza Raskin, the inventor of "infinite scrolling", expressed regret for the social impact of his design. It stayed with me because it makes the idea of "technology neutrality" highly questionable. Chapter 5 discusses how social media keeps people addicted and endlessly refreshing, and this report shows that the designers behind it recognized the severity of the problem too. The source makes me reflect that the "convenience" of many social features is quietly taking our attention away. Raskin's remorse reminds us that design is not only a technical choice but an ethical one: developers are shaping people's behavior, not just their user experience.
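
      What struck me is how little machinery the pattern needs. A stripped-down sketch (function names invented; not Raskin's code) of the whole loop: whenever the user nears the bottom, fetch more, so the feed never presents an end:

      ```python
      # Minimal infinite-scroll loop (names invented; illustrative only).
      def fetch_next_batch(cursor: int, size: int = 10) -> tuple[list[str], int]:
          """Stand-in for a server call that always has more content to return."""
          return [f"post #{i}" for i in range(cursor, cursor + size)], cursor + size

      cursor = 0
      for _ in range(3):  # a real client repeats this every time you near the bottom
          posts, cursor = fetch_next_batch(cursor)
          # render(posts) would go here; note there is never a "you're done" state
      print(cursor)  # 30 -- and the feed would happily keep going forever
      ```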

    1. In Web 2.0 websites (and web applications), the communication platforms and personal profiles merged. Many websites now let you create a profile, form connections, and participate in discussions with other members of the site. Platforms for hosting content without having to create your own website (like Blogs) emerged.

      When I read this sentence, I realized that I live almost entirely in the Web 2.0 world. For me, the internet has always been interactive and open, a place where everyone can express themselves. But looking back, this freedom of "everyone can speak" has also brought a lot of anxiety, such as the need to constantly update and gain attention, or else feel "ignored by the network". This section pushes me to ask: is the "interaction" of social media self-expression, or forced participation? It also helps me understand why some people take up "digital decluttering"; it might be a way to regain control.

  11. social-media-ethics-automation.github.io
    1. Anna Lauren Hoffmann. Data Violence and How Bad Engineering Choices Can Damage Society. Medium, April 2018. URL: https://medium.com/@annaeveryday/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4 (visited on 2023-11-24).

      Reading Hoffmann's article left a deep impression on me. She points out that bad engineering decisions can amount to "data violence", which made me re-examine the point in Chapter 4 that all data is a simplified representation of reality. When we model or design data systems, every seemingly technical choice can exclude or misrepresent the experiences of certain groups. It made me think of a form I once saw whose gender options were only "male/female", with no option for anyone non-binary: seemingly just a data constraint, but in effect a form of harm. Hoffmann's argument reminds me that ethics should not be bolted on after a system is finished; it has to start from the very first step of data design.
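
      As a minimal, hypothetical sketch (field names and values invented), this is how a schema-level constraint quietly does the excluding, with no person making a decision at the moment of harm:

      ```python
      # A restrictive schema as a "bad engineering choice" (illustrative only).
      ALLOWED_GENDERS = {"male", "female"}  # the constraint that excludes people

      def save_profile(name: str, gender: str) -> dict:
          """Accept a profile only if it fits the schema's two allowed values."""
          if gender not in ALLOWED_GENDERS:
              raise ValueError(f"gender must be one of {sorted(ALLOWED_GENDERS)}")
          return {"name": name, "gender": gender}

      save_profile("Ada", "female")        # accepted
      # save_profile("Sam", "non-binary")  # rejected by the schema, not by any person
      ```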

    1. In addition to representing data with different data storage methods, computers can also let you add additional constraints on what can be saved. So, for example, you might limit the length of a tweet to 280 characters, even though the computer can store longer strings.

      The minute I read this sentence, I noticed that these "constraints" are not only there (in theory) to help the system work more efficiently; almost imperceptibly, they also mold our forms of expression. I remember when I first used Twitter, I always felt 280 characters were too few to express everything clearly, so I got used to cutting the "redundant emotional words" and keeping things terse. In a way, the system was training me to "think in data": my thoughts could only be expressed within the character limit the platform allowed. That's when I realized that data structures are never neutral. They have an enormous effect on how humans communicate, and they frequently embody the value judgments of their designers.
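
      A minimal sketch of the kind of application-level constraint the passage describes (illustrative only, not Twitter's actual code): the machine could store a longer string, but the validation layer refuses it:

      ```python
      # Application-level length constraint on top of unconstrained storage.
      MAX_TWEET_LENGTH = 280

      def validate_tweet(text: str) -> str:
          """Return the text unchanged if it fits; otherwise reject it."""
          if len(text) > MAX_TWEET_LENGTH:
              raise ValueError(
                  f"Tweet is {len(text)} characters; limit is {MAX_TWEET_LENGTH}."
              )
          return text

      validate_tweet("short enough")   # passes
      # validate_tweet("x" * 300)      # would raise ValueError
      ```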

  12. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      This article reveals the inner workings of "click farms": workers hired to click, download, and like over and over, manufacturing fake popularity. Unlike the antagonistic bots discussed in this chapter, the actors behind click farms are real people rather than programs, but the effect is very similar: both mislead the public through "false popularity". Reading it was a bit of a shock, because it made me realize that information manipulation is not just a "technical issue"; there is a huge gray labor market behind it. Compared with cold code, these real workers leave me with a more complicated feeling about "fake traffic": it is at once an automation problem and a problem of social and economic exploitation.

    1. Bots might have significant limits on how helpful they are, such as tech support bots you might have had frustrating experiences with on various websites. [3.2.2. Antagonistic bots:] On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers [c8], or making fake crowds that appear to support a cause (called Astroturfing [c9]).

      This chapter's distinction between helpful bots and antagonistic bots really struck a chord with me. On Twitter I once followed an account that automatically posted animal photos; every time I saw them I felt comforted, and even forgot it was a program running. Antagonistic bots produce a completely different feeling, especially when they are used to manipulate public opinion; they make people question whether anything they see is true. As ordinary users, perhaps we need a "digital literacy" mindset that keeps reminding us: not every active account has a real person behind it. That awareness itself might be the first step in resisting information manipulation.
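
      As a toy, hypothetical heuristic (thresholds and data invented; real bot detection is far more sophisticated), one could flag accounts whose posting rhythm no human could sustain:

      ```python
      # Crude bot-likeness check based only on posting rhythm (illustrative).
      from datetime import datetime, timedelta

      def looks_automated(post_times: list[datetime],
                          max_posts_per_hour: int = 30,
                          min_gap_seconds: float = 2.0) -> bool:
          """Flag if the posting rate or the shortest gap looks inhuman."""
          if len(post_times) < 2:
              return False
          times = sorted(post_times)
          span_hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-9)
          rate = len(times) / span_hours
          gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
          return rate > max_posts_per_hour or min(gaps) < min_gap_seconds

      now = datetime(2024, 1, 1, 12, 0)
      burst = [now + timedelta(seconds=i) for i in range(100)]  # 1 post/second
      print(looks_automated(burst))  # True: no human posts this fast
      ```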

  13. Sep 2025
    1. The question raised in this chapter is: why did so many people see Justine Sacco's tweet? I believe the key lies in the design of social media. Retweets, trending topics, and algorithmic recommendations allow an originally small-scale remark to be amplified at extraordinary speed. I had a similar experience: a complaint I once posted to my small circle was screenshotted and forwarded to a much larger group, leaving me surprised and out of control. After reading this chapter, I am even more aware that social media is not a "private space" but a public stage where anything can be magnified at any time.
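
      A back-of-the-envelope sketch (all numbers invented) of why reshare mechanics compound so quickly: if each hop of viewers reshares at even a small rate, the audience grows geometrically:

      ```python
      # Toy reshare cascade: each viewer reshares with small probability,
      # so reach multiplies at every hop. All numbers are made up.
      followers_per_user = 200
      reshare_rate = 0.02            # 2% of viewers pass it along

      reach, sharers = 0, 1          # starts with one poster
      for hop in range(1, 6):
          viewers = sharers * followers_per_user
          reach += viewers
          sharers = int(viewers * reshare_rate)
          print(f"hop {hop}: {viewers:>6,} new viewers, cumulative {reach:>7,}")
      # Each hop multiplies the audience by followers_per_user * reshare_rate = 4,
      # so five hops already reach about 68,000 people -- from a single post.
      ```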

    1. By the end, my biggest realization was that throughout history, across regions and cultures, humans have kept trying to answer one question: what, at the core, is right and what is wrong? From Confucianism to Taoism, from Kant's deontology to utilitarianism's "greatest happiness for the greatest number", each framework is like a pair of glasses that brings the mountain of morality into focus.

      What surprised me was how differently they look at it: some emphasize individual virtue, like virtue ethics; some focus on the consequences of actions, like utilitarianism; and some hardly look at individuals at all, caring only about relationships and communities themselves, like care ethics and Ubuntu. So I came to think that morality may not be reducible to a precise formula; it is more like a multi-dimensional map. When facing complex real-life choices, perhaps we should switch perspectives and observe through several pairs of glasses to reach more balanced judgments.

      For me, the lesson of this part is that we may not need to find the one correct answer so much as learn to keep questioning ourselves. These frameworks are not the end, but checkpoints along the way. Especially now, with social media and technology developing so quickly, moral questions have become not only more ambiguous but more complex. The frameworks are like lighthouses, reminding me in every choice not only to feel the mainstream direction of the current but also to explore the hidden possibilities in this world.