27 Matching Annotations
  1. Jun 2025
  2. social-media-ethics-automation.github.io
    1. Ted Chiang. Will A.I. Become the New McKinsey? The New Yorker, May 2023. URL: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey (visited on 2023-12-10).

      I have a quick question about this post. If companies can use AI to cut costs and avoid responsibility the way they use McKinsey, how should we balance technological development against social justice?

    1. If you could magically change anything about how social media sites operate as businesses, what would it be?

      If I could do that, I would focus on users' well-being and authentic content rather than ad revenue and maximizing the time users spend on the platform. The original design of many social media platforms keeps people scrolling and refreshing, which amplifies strong moods and dishonest information. My ideal is to promote healthy socialization and encourage rational discussion while still letting the platforms earn a profit.

  3. May 2025
  4. social-media-ethics-automation.github.io
    1. Paul Billingham and Tom Parr. Enforcing social norms: The morality of public shaming. European J of Philosophy, 28(4):997–1016, December 2020. URL: https://onlinelibrary.wiley.com/doi/10.1111/ejop.12543 (visited on 2023-12-10), doi:10.1111/ejop.12543.

      This article is pretty interesting since it discusses whether public shaming can be ethically justified. The authors argue that public shaming may be reasonably justified when it serves to protect society rather than simple revenge. This makes me start to wonder: is much of the public judgment online a form of justified shaming? If we cannot keep these systems in balance, they may turn into public tyranny.

    1. The Nazi crimes, it seems to me, explode the limits of the law; and that is precisely what constitutes their monstrousness. For these crimes, no punishment is severe enough. It may well be essential to hang Göring, but it is totally inadequate.

      I agree with Hannah Arendt's point that some crimes are called crimes against humanity because they exceed the range of law and human morals. In some cases, judicial punishment and public shaming alone seem inadequate responses to the harm. So I think we have to admit that some unparalleled acts mark the red line of justice.

  5. social-media-ethics-automation.github.io
    1. Roni Jacobson. I’ve Had a Cyberstalker Since I Was 12. Wired, 2016. URL: https://www.wired.com/2016/02/ive-had-a-cyberstalker-since-i-was-12/ (visited on 2023-12-10).

      After reading this article, I feel really shocked and angry, since Roni was cyberstalked for so many years, starting when she was a child. Even though she switched social platforms and even called the police, the harassment never stopped. This makes me question whether platforms can really take responsibility for cases like this, since they focus more on group arguments or short-term conflicts. I think social platforms need to provide more comprehensive protection to give their users a real sense of security.

    1. Do you believe crowd harassment is ever justified?

      In my view, crowd harassment is never justified. Once people start to think that any means is acceptable as long as their aim is justice, it becomes another form of oppression, even when the targets are considered bad people. On the internet, fast spreading and high anonymity make it easy for this kind of justice to go bad. The voice of justice can ultimately turn into cyberbullying and cause severe consequences, including suicide.

  6. social-media-ethics-automation.github.io
    1. Kate Starbird, Ahmer Arif, and Tom Wilson. Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations. Proc. ACM Hum.-Comput. Interact., 3(CSCW):127:1–127:26, November 2019. URL: https://dl.acm.org/doi/10.1145/3359229 (visited on 2023-12-08), doi:10.1145/3359229.

      What surprised me the most is that the spread of falsehoods is usually not driven by malicious actors or AI; most of the time it is regular users who spread the information unwittingly. The article points out that the spread of falsehoods is a collaborative process. Because many users keep recreating and reworking the fake information unintentionally, it becomes more credible and impactful.

    1. For example, in the immediate aftermath of the 2013 Boston Marathon bombing, FBI released a security photo of one of the bombers and asked for tips. A group of Reddit users decided to try to identify the bomber(s) themselves.

      Although this ad hoc crowdsourcing was well-intentioned, it could still bring severe harm to others. Reddit users mistakenly identified a missing college student as the suspect, and this well-intentioned behavior caused huge pain to an innocent family. It makes me realize that even when a crowd of strangers bands together to help, they can still produce cyberbullying when there is no evidence. I have started to believe that technology itself is neutral; what matters is how people use it.

  7. social-media-ethics-automation.github.io
    1. Sarah T. Roberts. Behind the Screen. Yale University Press, September 2021. URL: https://yalebooks.yale.edu/9780300261479/behind-the-screen (visited on 2023-12-08).

      From Roberts’ research and interviews, we can see how these moderators struggle with economic pressure, time limits, and extremely traumatic content. This work is never easy “tap work”; it carries the risk of psychological trauma. I felt really shocked and sad while reading this. When we browse social media, we take it for granted that the content should be clean and safe. In fact, there are people behind the scenes silently enduring things we cannot see.

    1. The moderators then are given sets of content to moderate and have to make quick decisions about each item before looking at the next one.

      To save costs, many social media companies outsource moderation work, while ignoring the huge mental pressure these moderators face from extreme content such as child abuse and violent videos. I think emotional-relief activities like karaoke and drawing are not enough for them to release that stress. Reasonable shift rotations and regular support from mental-health professionals should be added to their schedules. Beyond that, is there a possibility for AI to filter some of the extreme content first?

  8. social-media-ethics-automation.github.io
    1. Robinson Meyer. Everything We Know About Facebook’s Secret Mood-Manipulation Experiment. The Atlantic, June 2014. URL: https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/ (visited on 2023-12-08).

      I am pretty shocked by this article by Robinson Meyer. The experiment revealed that Facebook intentionally altered what its users saw in order to observe their emotional reactions, without informing them. Even if this kind of emotional manipulation has some exploratory value in academia, doing it without users' knowledge should provoke serious controversy. It also makes me realize that a social media platform is no longer just an intermediary of information; it can actively shape an emotional environment for its users.

    1. “Tendency to continue to surf or scroll through bad news, even though that news is saddening, disheartening, or depressing. Many people are finding themselves reading continuously bad news about COVID-19 without the ability to stop or step back.”

      After reading the section on doomscrolling, I feel very connected to it. During 2020, when the pandemic was at its worst, I browsed Twitter and news websites for one to two hours every night before sleep, even though most of the content was negative information such as rising case counts and social conflict. Even though I knew it made me more anxious and made it harder to sleep, I just could not stop. In my opinion, I would rather know the worst case than know nothing about what is happening in the world. This also makes me wonder whether social media platforms should take some responsibility for users' mental health.

  9. social-media-ethics-automation.github.io
    1. Zack Whittaker. Facebook won't let you opt out of its phone number 'look up' setting. TechCrunch, March 2019. URL: https://techcrunch.com/2019/03/03/facebook-phone-number-look-up/ (visited on 2023-12-07).

      Zack Whittaker, in this TechCrunch article, points out how Facebook used phone numbers that users provided for 2FA for ad targeting and people-lookup features, and did not let users opt out of this setting, which raises worries about privacy and data transparency. This really makes me think about how much control users have over their own data in today's society.

    1. The flat earth movement (an absurd conspiracy theory that the earth is actually flat, and not a globe) gained popularity in the 2010s.

      After reading these posts, I started to think seriously about the difference between intention and impact. Even if these platforms do not intend to spread misleading information or incite violence, the impacts exist and have indeed caused serious problems. I think social media should take responsibility not only for its intentions but also for the results its design produces. It also makes me wonder: if these platforms survive on user participation and ad revenue, can they really make the right choice between commercial benefit and ethical responsibility?

  10. Apr 2025
    1. And unfortunately, as researcher Dr. Cynthia Bennett [j21] points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.”

      I was particularly struck by Dr. Cynthia Bennett's observation that when people with disabilities are involved in design, they are often marginalized and not even treated as real designers. This made me think that the issue of diversity in technology is not just about “who it serves” but also about “who has the leading voice.” A truly inclusive technology system doesn't just treat its diverse users as objects to be taken care of; it involves them in making decisions, creating, and defining standards. My question is: what actual platforms or organizations include people with disabilities or minorities as designers, not just as test subjects?

    1. David Ingram. Facebook fuels broad privacy debate by tracking non-users. Reuters, April 2018. URL: https://www.reuters.com/article/idUSKBN1HM0E9/ (visited on 2023-12-06).

      This Reuters article mentions that Facebook collects not only its users' data but also information about non-registered users, through techniques such as cookies. This detail really surprised me because, in general, a person who does not register for or use a service should not have data tracked or collected. In reality, even if you do not have a Facebook account, if you browse any website containing a Like or Share button, Facebook can track you through tracking cookies planted in your browser. This makes me think of a very serious problem: in the current internet environment, choosing not to join has become impossible. Even if you never actively use a platform, it can still collect your information implicitly. This dramatically weakens our control over privacy.

    1. Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles [i25], which are information about the user that the user didn’t provide or consent to

      In my view, this behavior is more stealthy and disquieting than a traditional hacker attack, since users never even provide the relevant information; it is inferred by the platforms' algorithms without their awareness. For example, Facebook can automatically build a shadow profile from your friends, their contact lists, and the posts you interact with, even if you never gave Facebook any information yourself. This makes me feel that however carefully I protect my own privacy, I cannot avoid having my data collected indirectly. It feels very unsettling, because privacy is no longer a matter of self-control: the people around me can also expose my privacy.
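      As a toy illustration of how inferred data can outrun consent, here is a minimal sketch (all names, fields, and the matching rule are invented for illustration, not Facebook's actual pipeline): two registered users upload their contact lists, and entries sharing a phone number get merged into a profile for someone who never signed up.

```python
from collections import defaultdict

# Hypothetical contact uploads from two registered users.
uploaded_contacts = [
    {"uploader": "alice", "name": "Dana R.",  "phone": "555-0101"},
    {"uploader": "bob",   "name": "Dana Roe", "phone": "555-0101"},
    {"uploader": "bob",   "name": "Eli K.",   "phone": "555-0199"},
]

# Key records by phone number: entries with the same number are
# merged into one profile, even for people who never joined.
shadow_profiles = defaultdict(lambda: {"names": set(), "knows": set()})
for c in uploaded_contacts:
    profile = shadow_profiles[c["phone"]]
    profile["names"].add(c["name"])
    profile["knows"].add(c["uploader"])

# "Dana" provided nothing herself, yet the platform now links
# her number to name variants and to part of her social graph.
print(shadow_profiles["555-0101"])
```

      The point of the sketch is that every record came from someone *else's* upload: the non-user's consent never enters the process.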

  11. social-media-ethics-automation.github.io
    1. A teenager on TikTok disrupted thousands of scientific studies with a single video – The Verge [h22]

      We often talk about the “era of big data” as if we can understand everything with data, but if the data itself is wrong, biased, or even maliciously manipulated, does it make sense to draw conclusions from it? This makes me question how much of the data analysis I usually see can be trusted.

  12. social-media-ethics-automation.github.io
    1. Is It Funny or Offensive? Comedian Impersonates FBI on Twitter, Makes MLK Assassination Joke. January 2020. URL: https://isitfunnyoroffensive.com/comedian-impersonates-fbi-on-twitter-makes-mlk-assassination-joke/ (visited on 2023-12-05).

      I think this situation is closely related to this chapter's discussion of trolling, especially when we judge whether a disruptive act has moral value. When Jaboukie imitated the FBI sneering at MLK's assassination, it can be seen as ironic disruption with political intent, trying to remind the public not to forget the FBI's role in that history. Personally, I think this kind of trolling with moral motivation is acceptable within some ethical frameworks.

    1. While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction.

      Previously, I thought internet trolls were just bored, but I never realized there is a group of people whose philosophy is that it does not matter whether something is true, and who use that stance to attack others. They claim to reject all rules, but in fact they end up protecting the privileges of certain groups, along lines such as gender and race. This makes me realize that “standing for nothing” can mask severe unfairness. At the same time, how can we quickly distinguish justified disruption from bad disruption?

  13. social-media-ethics-automation.github.io
    1. Emily St James. Trans Twitter and the beauty of online anonymity. Vox, September 2020. URL: https://www.vox.com/culture/21432987/trans-twitter-reddit-online-anonymity (visited on 2023-11-24).

      This article talks about how trans people use Twitter's anonymity to truly express themselves and gain a kind of freedom through protection. It points out that many trans people cannot disclose their identity publicly and constantly face violence and discrimination. Anonymity, however, gives them the chance to express their true selves, which breaks the traditional assumption that anonymity equals fakeness.

    1. Anonymity can also encourage authentic behavior.

      I find the point that anonymity can encourage both inauthentic and authentic behavior very impressive. I especially agree that anonymity can help people express themselves better. I have seen similar situations in many online group chats. For example, some people find the courage to talk about their mental health problems; this is not a way of hiding themselves but a chance to speak from the heart in a safe place. However, anonymity can also be a shield for evil. This makes me wonder whether the problem comes from anonymity itself or from the design of the platform, and whether platforms should be responsible for how anonymity is used.

  14. social-media-ethics-automation.github.io
    1. Julia Evans. Examples of floating point problems. January 2023. URL: https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/ (visited on 2023-11-24).

      Julia Evans gives many practical examples of the potential problems of floating-point numbers. For example, she mentions that when a 32-bit float is used to store an odometer reading, the gaps between representable numbers grow as the value increases; eventually each small increment is smaller than the gap, the reading stops advancing, and the recorded distance becomes incorrect.
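      The stuck-odometer failure can be reproduced in a short sketch. Python floats are 64-bit, so the helper below rounds through the IEEE 754 32-bit format to mimic a `float32` odometer; the variable names are my own, not from the article.

```python
import struct

def f32(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# float32 has a 24-bit significand, so above 2**24 (= 16,777,216)
# it can no longer represent every integer.
odometer = f32(2 ** 24)
tick = f32(1.0)

# Adding one more unit rounds straight back to the old value:
stuck = f32(odometer + tick)
print(stuck == odometer)          # True: the reading stops advancing

# At smaller magnitudes the same addition still works fine.
small = f32(1000.0)
print(f32(small + tick) == small) # False
```

      The same mechanism appears at any scale: once the value grows past the point where the increment is smaller than half the gap between representable numbers, every addition rounds away.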

    1. this saying in statistics: All models are wrong, but some are useful [d18]

      In mathematical modeling, we use formulas, models, and variables to represent various complicated things in reality. For example, we can use a function to describe population growth or a matrix to represent a transportation network. But we all know that these models are abstractions of reality. Here the article mentions that 2 + 2 = 4 only works when all four units are identical, while in reality every unit differs, just as apples come in different sizes. So we should always stay critical and humble when we use these models and data. After reading this chapter, I have a question: in daily life, can we fully rely on data to make decisions, or do we have to leave some space for non-data-driven judgment, since all data simplifies reality?

  15. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      I think this passage deeply reveals the world of the "click farms," which manipulate counts of likes, comments, and shares by recruiting large numbers of low-paid workers or using automated equipment to imitate user interactions. What impressed me most is the description of factories with rows of phones set up for remote control. This makes online interaction even less trustworthy.

    1. Only in Oman has the occasional donkey…been used as a mobile billboard to express anti-regime sentiments. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.”

      This metaphor reminds me of certain arguments and posts that once made me consider their perspectives seriously and even changed my thinking, until my friends told me the account was a bot. That experience really made me distrust the authenticity of information found online, and I realized that people like me are easily manipulated by automated programs. It also raises an interesting question: when a bot spreads fake information or incites hate speech, who takes responsibility?