10 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited on 2023-11-24).

      I find the story of Lonelygirl15 particularly intriguing. At first she shared her life on YouTube as an apparently ordinary girl, and many viewers found her sincere and approachable, like a friend to confide in. Later it emerged that the whole thing was a performance, a web drama meticulously planned by actors and a production team. When the truth came out, the audience reacted strongly, not only because they had been deceived, but because the feeling of "being understood" and "being connected" suddenly vanished. I can understand that sense of betrayal, because the trust we invest in the online world is, in essence, as precious as trust in real life. Looking back, though, I also think the Lonelygirl15 phenomenon reminds us that authenticity doesn't necessarily mean "complete truth"; it can be a more complex emotional experience. Even if it was a performance, the sense of companionship and resonance people gained while watching was real. Perhaps this is the most contradictory and most human aspect of the internet age.

  3. social-media-ethics-automation.github.io
    1. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal only to then have it revealed that the signal was untrustworthy is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are even more heightened when we find we have been duped simply for someone else’s amusement at having done so.

      I can truly relate to this statement. Being deceived feels awful, not only because we were misled, but because we begin to doubt our own ability to judge correctly. It reminds me of some "true story" accounts I used to follow on social media, which I later discovered were fabricated. The resulting sense of loss runs deeper than a mere factual error. Perhaps we react so strongly to falsehood because trust is an emotional investment: when others exploit that trust, we lose not just the accuracy of the information but also a sense of security between people.

  4. social-media-ethics-automation.github.io
    1. Tom Knowles. I’m so sorry, says inventor of endless online scrolling. The Times, April 2019. URL: https://www.thetimes.co.uk/article/i-m-so-sorry-says-inventor-of-endless-online-scrolling-9lrv59mdk (visited on 2023-11-24).

      This article describes how Aza Raskin, the inventor of infinite scrolling, expressed regret over the social impact of his design. It stayed with me because it makes the idea of "technology neutrality" hard to defend. Chapter 5 discusses how social media keeps people hooked and endlessly refreshing, and this report shows that the designers behind those features recognized the severity of the problem themselves. This source made me reflect that the "convenience" of many social features is quietly taking our attention away. Raskin's remorse reminds us that design is not only a technical choice but an ethical one: developers are shaping people's behavior, not just their user experience.
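
      As a conceptual sketch (my own, in Python rather than real front-end code): the essence of infinite scrolling is a feed that never signals an end, so the interface offers no natural stopping point.

      ```python
      import itertools

      def endless_feed(posts):
          """Cycle through content forever; the reader never sees a 'last page'."""
          yield from itertools.cycle(posts)

      feed = endless_feed(["post A", "post B", "post C"])
      for _, post in zip(range(5), feed):  # the user, not the feed, must decide to stop
          print(post)
      ```

      The design choice Raskin regrets is exactly this: removing the pause that a "next page" click used to force.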

    1. In Web 2.0 websites (and web applications), the communication platforms and personal profiles merged. Many websites now let you create a profile, form connections, and participate in discussions with other members of the site. Platforms for hosting content without having to create your own website (like Blogs) emerged.

      When I read this sentence, I realized that I live almost entirely in the Web 2.0 world. For me the Internet has always been interactive and open, a place where everyone can express themselves. Looking back, though, this freedom of "everyone can speak" has also brought a lot of anxiety, such as the pressure to constantly post and gain attention, or else feel "ignored by the network". This section pushed me to ask: is the "interaction" of social media genuine self-expression, or forced participation? It also helps me understand why some people take up "digital decluttering"; it may be a way of regaining control.

  5. Oct 2025
  6. social-media-ethics-automation.github.io
    1. Anna Lauren Hoffmann. Data Violence and How Bad Engineering Choices Can Damage Society. Medium, April 2018. URL: https://medium.com/@annaeveryday/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4 (visited on 2023-11-24).

      Reading Hoffmann's article left a strong impression on me. She points out that bad engineering decisions can amount to "data violence", which made me re-examine the point in Chapter 4 that all data is a simplified representation of reality. When we model or design data systems, every seemingly technical choice can exclude or misrepresent the experiences of certain groups. It made me think of a form I once saw whose gender options were only "male/female", with no option for "non-binary": on the surface just a data constraint, but in effect a form of harm. Hoffmann's argument reminds me that ethics should not be bolted on after a system is finished; it has to start from the very first step of data design, as in the sketch below.
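
      To make that concrete, here is a minimal, hypothetical sketch (the field names and options are my own inventions, not taken from Hoffmann's article) of how a schema choice can quietly exclude people, and one way to loosen it:

      ```python
      from enum import Enum

      # A closed vocabulary: anyone outside these two options simply cannot
      # be represented. This is "data violence" in miniature.
      class GenderClosed(Enum):
          MALE = "male"
          FEMALE = "female"

      # One possible alternative: more options plus a free-text self-description,
      # so the schema does not force a binary choice.
      class Gender(Enum):
          MALE = "male"
          FEMALE = "female"
          NON_BINARY = "non-binary"
          SELF_DESCRIBED = "self-described"
          PREFER_NOT_TO_SAY = "prefer not to say"

      def make_profile(name, gender, self_description=None):
          """Build a profile record; free text is used only for SELF_DESCRIBED."""
          return {
              "name": name,
              "gender": gender.value,
              "gender_detail": self_description if gender is Gender.SELF_DESCRIBED else None,
          }

      print(make_profile("Alex", Gender.NON_BINARY))
      ```

      The point is not that this particular option list is the correct one, but that whoever writes the Enum decides who can be represented at all.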

    1. In addition to representing data with different data storage methods, computers can also let you add additional constraints on what can be saved. So, for example, you might limit the length of a tweet to 280 characters, even though the computer can store longer strings.

      The minute I read this sentence, I noticed that these "constraints" are not only there (in theory) to help the system work more efficiently; almost imperceptibly, they also mold our forms of expression. I remember when I first used Twitter, I always felt 280 characters were too few to express everything clearly, so I got used to cutting the "redundant" emotional words and keeping things terse. In a way, the system was training me to "think in data": my thoughts could only take the shapes the character limit allowed. That is when I realized that data structures are never neutral; they have an enormous effect on how humans communicate, and they frequently embody the values of their designers.
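
      As a small illustration (my own sketch, not code from the book; real platforms count characters in more complicated ways), such a limit is typically enforced at the application layer even though the underlying storage could hold far longer strings:

      ```python
      MAX_POST_LENGTH = 280  # a design choice by the platform, not a storage limit

      def validate_post(text):
          """Reject posts over the limit; the database itself could store much more."""
          if len(text) > MAX_POST_LENGTH:
              raise ValueError(
                  f"Post is {len(text)} characters; the limit is {MAX_POST_LENGTH}."
              )
          return text

      validate_post("Short enough.")   # passes
      # validate_post("x" * 300)       # would raise ValueError
      ```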

  7. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      This article reveals how "click farms" operate internally: workers are hired to click, download, and like over and over, manufacturing false popularity. Unlike the antagonistic bots discussed in this chapter, click farms are powered by real people rather than programs, but the effect is very similar: both mislead the public through fake popularity. Reading it shocked me a little, because it made me realize that information manipulation is not just a "technical issue"; there is a huge gray labor market behind it. Compared with cold code, these real workers leave me with a more complicated feeling about fake traffic: it is at once an automation problem and a problem of social and economic exploitation.

    1. Bots might have significant limits on how helpful they are, such as tech support bots you might have had frustrating experiences with on various websites. 3.2.2. Antagonistic bots: On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers [c8], or making fake crowds that appear to support a cause (called Astroturfing [c9]).

      This chapter's distinction between friendly bots and antagonistic bots really struck a chord with me. On Twitter I once followed an account that automatically posted animal photos; every time I saw one I felt comforted, and I would even forget that a program was behind it. Antagonistic bots produce a completely different feeling, especially when they are used to manipulate public opinion: they make people start to question whether anything they see is real. It made me wonder whether, as ordinary users, we need a kind of "digital literacy" mindset that constantly reminds us that not every active account has a real person behind it. That awareness itself might be the first step in resisting information manipulation. (Out of curiosity, I sketched below what such an auto-posting account might look like.)
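
      A toy sketch, entirely my own (the photo list and post_photo function are stand-ins, not a real platform API; actual bots would use a library such as tweepy or praw), of how a friendly auto-posting bot can run with no human in the loop:

      ```python
      import random
      import time

      # Stand-in content; a real bot would pull from an image source.
      ANIMAL_PHOTOS = ["otter_holding_hands.jpg", "red_panda_yawning.jpg", "penguin_chick.jpg"]

      def post_photo(filename):
          """Placeholder for a real platform API call."""
          print(f"[bot] posted {filename}")

      def run_bot(interval_seconds=3600, rounds=3):
          """Post a random photo on a fixed schedule; rounds is capped for the demo."""
          for _ in range(rounds):
              post_photo(random.choice(ANIMAL_PHOTOS))
              time.sleep(interval_seconds)

      run_bot(interval_seconds=1)  # short interval just for demonstration
      ```

      Nothing in the loop requires a person, which is exactly why it is so easy to forget that no one is there.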

  8. Sep 2025
    1. The question this chapter raises is: why did so many people see Justine Sacco's tweet? I believe the key lies in the design of social media. Retweets, trending topics, and algorithmic recommendation allow opinions meant for a small audience to be amplified rapidly. I have had a similar experience: a complaint I once posted to my friends-only feed was screenshotted and forwarded to a much larger group, leaving me feeling surprised and out of control. After reading this chapter, I am even more aware that social media is not a "private space" but a public stage where anything can be magnified at any moment (the toy model below shows how fast that magnification can run).
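
      A toy cascade model (my own simplification; all numbers are made up for illustration): if each person who sees a post reshares it with some small probability, the audience can explode within a few hops.

      ```python
      def expected_reach(followers, share_rate, hops):
          """Expected audience after `hops` rounds of resharing (toy model)."""
          reach, sharers = 0.0, 1.0
          for _ in range(hops):
              audience = sharers * followers   # each sharer reaches their followers
              reach += audience
              sharers = audience * share_rate  # some viewers reshare in turn
          return reach

      # One author, 170 followers per person on average, 2% reshare rate, 4 hops:
      print(f"{expected_reach(followers=170, share_rate=0.02, hops=4):,.0f}")  # ~9,395
      ```

      Because 170 × 0.02 is greater than 1, each hop reaches more people than the last; that multiplicative step is the amplification the chapter describes.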

    1. By the end, my biggest realization was that throughout history, across regions and cultures, humans have kept trying to answer one question: what, at its core, is right and what is wrong? From Confucianism to Taoism, from Kant's deontology to utilitarianism's "greatest happiness for the greatest number", each framework is like a pair of glasses that brings one face of the mountain of morality into focus. What surprised me was how differently they look: some emphasize individual virtues, like virtue ethics; some focus on the consequences of actions, like utilitarianism; and some hardly look at individuals at all, caring instead about relationships and communities themselves, like care ethics and Ubuntu. So I came to see that morality may not reduce to a precise formula; it is more like a multi-dimensional map. When facing complex real-life choices, perhaps we should switch perspectives and look through several pairs of glasses to reach more balanced judgments. For me personally, the lesson of this part is that we may not need to find the one correct answer so much as learn to keep questioning ourselves. In short, these frameworks are not the end of the road but checkpoints along the way. Especially now, with the rapid development of social media and technology, moral issues have become not only more ambiguous but also more complex. At such times these frameworks are like lighthouses, reminding me in every choice not only to feel the mainstream direction of the current but also to explore the hidden possibilities of this world.