33 Matching Annotations
  1. Jun 2025
  2. social-media-ethics-automation.github.io
    1. Ted Chiang. Will A.I. Become the New McKinsey? The New Yorker, May 2023. URL: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey (visited on 2023-12-10).

      "Will A.I. Become the New McKinsey?" by Ted Chiang. presents thought-provoking questions about AI's role in exacerbating rather than eliminating systemic inefficiencies. One aspect that jumped out to me was his analogy of AI to a consulting firm that automates the justification of damaging decisions under the pretext of optimization. It is consistent with the chapter's topic of regretted innovation and raises a compelling ethical question: are we developing instruments to fix issues, or to rationalize and accelerate them? This source adds to the argument by portraying AI not as a neutral instrument, but as something that reflects and potentially worsens our current priorities.

    1. But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney [u9] who hoped his invention of the cotton gin would reduce slavery in the United States, but only made it worse, or Alfred Nobel [u10] who invented dynamite (which could be used in construction or in war) and decided to create the Nobel prizes, or Albert Einstein regretting his role in convincing the US government to invent nuclear weapons [u11], or Aza Raskin regretting his invention of infinite scroll. [1] In response to Socrates’ story, his debate partner Phaedrus says, “Yes, Socrates, you can easily invent tales of Egypt, or of any other country.”

      The text discusses how inventors such as Eli Whitney, Alfred Nobel, and Albert Einstein experienced unforeseen repercussions from their ideas, leading to remorse. This brings to mind the ethical considerations we've explored in this course, particularly how inventors frequently can't predict the broader consequences of their technology. As someone interested in biochemistry and data science, I'm curious how future gene-editing techniques or AI-driven drug development might produce unexpected results. This section made me think about how important it is to anticipate long-term societal repercussions at the development stage, rather than waiting until something goes wrong.

  3. social-media-ethics-automation.github.io
    1. Merriam-Webster. Definition of CAPITALISM. December 2023. URL: https://www.merriam-webster.com/dictionary/capitalism (visited on 2023-12-10).

      The Merriam-Webster definition of capitalism emphasizes private ownership and market-driven decision-making, in stark contrast to the chapter's description of socialism, which is government-controlled. This contrast helps to highlight the ideological gulf between the two systems: capitalism stresses individual and corporate autonomy, whereas socialism prioritizes collective resource management. I believe that this distinction is crucial to many discussions regarding public policy and economic fairness.

    1. 19.1.2. Socialism: Let’s contrast capitalism with socialism. Socialism [s8], in contrast, is a system where: a government owns the businesses (sometimes called “government services”); a government decides what to make and what the price is (the price might be free, like with public schools, public streets and highways, public playgrounds, etc.); a government then may hire wage laborers [s2] at predetermined rates for their work, and the excess business profits or losses are handled by the government (for example, losses are covered by taxes, and excess may pay for other government services or go directly to the people, e.g., Alaska uses its oil profits to pay people to live there [s9]). As an example, there is one Seattle City Sewer system, which is run by the Seattle government. Having many competing sewer systems could actually make a big mess of the underground pipe system.

      I found the comparison between socialism and capitalism thought-provoking, especially the example of Seattle's sewer system. It made me think about how some public services benefit greatly from centralized control, as having numerous competing sewer systems would almost certainly result in logistical mayhem. I'd never considered public utilities in this way before, and it helped me realize why some services are better suited to government management than market competition.

  4. May 2025
  5. social-media-ethics-automation.github.io
    1. Trauma and Shame. URL: https://www.oohctoolbox.org.au/trauma-and-shame (visited on 2023-12-10).

      I went to the source "Trauma and Shame" on oohctoolbox.org.au, and it explains how shame can be strongly linked to early trauma and developmental experiences. One element that jumped out to me was that shame becomes toxic when a youngster believes they are essentially undeserving of love or belonging, particularly if this belief is reinforced over time. This reinforces the chapter's premise that parents should help their children differentiate their actions from their identities. The source also underlines the importance of caregivers consistently providing emotional safety to children in order for them to develop resilience to shame.

    1. Before we talk about public criticism and shaming and adults, let’s look at the role of shame in childhood. In at least some views about shame and childhood[1], shame and guilt hold different roles in childhood development [r1]: Shame is the feeling that “I am bad,” and the natural response to shame is for the individual to hide, or the community to ostracize the person. Guilt is the feeling that “This specific action I did was bad.” The natural response to feeling guilt is for the guilty person to want to repair the harm of their action. In this view [r1], a good parent might see their child doing something bad or dangerous, and tell them to stop. The child may feel shame (they might not be developmentally able to separate their identity from the momentary rejection). The parent may then comfort the child to let the child know that they are not being rejected as a person, it was just their action that was a problem. The child’s relationship with the parent is repaired, and over time the child will learn to feel guilt instead of shame and seek to repair harm instead of hide.

      I find the contrast between shame and guilt to be particularly illuminating, especially in the context of parenting. It made me think about how my own parents treated discipline. When I was younger and did something wrong, I recall them emphasizing what I did rather than characterizing me as a "bad kid"—which corresponds to the concept of encouraging guilt over shame. That type of response taught me to accept responsibility and correct my actions rather than feeling worthless. I'm curious, though, how this strategy would change across cultures where shame is employed more intentionally as a tool for social conformity.

  6. social-media-ethics-automation.github.io
    1. Doxing. December 2023. Page Version ID: 1189390304. URL: https://en.wikipedia.org/w/index.php?title=Doxing&oldid=1189390304 (visited on 2023-12-10).

      The Wikipedia page on doxing defines it as the act of publicly releasing previously private personal information about someone, sometimes with hostile intent. One element that struck me was how doxing is sometimes utilized as a type of online vigilantism, with people believing they are "serving justice" but actually causing genuine harm. This adds to the chapter's thesis that individual harassment has evolved—now it's not just private messages or threats, but a coordinated public shaming that can have severe offline implications like job loss, safety hazards, or mental health crises.

    1. Individual harassment (one individual harassing another individual) has always been part of human cultures, but social media provides new methods of doing so. There are many methods of doing this through social media. This can be done privately through things like: bullying, like sending mean messages through DMs; cyberstalking, continually finding the account of someone and creating new accounts to continue following them, or possibly researching the person’s physical location; hacking into an account or device to discover secrets, or make threats; tracking, where an abuser might track the social media use of their partner or child to prevent them from making outside friends, and may even install spy software on their victim’s phone; death threats / rape threats; etc.

      One section of the chapter that jumped out to me was the discussion of how abusers might utilize tracking and spyware on their victim's phones. This reminds me of previous situations in which survivors of domestic violence described how their partners monitored their location or texts without their permission. It's alarming how technology created for safety (such as location sharing or parental controls) can be twisted into tools of manipulation. It raises fundamental concerns about technology companies' ethical duties to build tools that prevent abuse. Should app creators be held liable if their tools are commonly misused in this manner?

  7. social-media-ethics-automation.github.io
    1. WIRED. How to Not Embarrass Yourself in Front of the Robot at Work. September 2015. URL: https://www.youtube.com/watch?v=ho1RDiZ5Xew (visited on 2023-12-08).

      The source "Beyond Being There" makes an essential point: rather of attempting to replicate the in-person experience, we should look into how computer-mediated environments might overcome physical restrictions. For example, it says that asynchronous communication allows for deeper contemplation and broader engagement, which may not be possible in real-time, face-to-face talks. This is consistent with my own experience in online learning environments, where discussion boards might yield more valuable insights than live discussions. The source reframes the purpose of communication technology in a way that allows for more creative and inclusive design choices.

    1. There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots [p5]. But there are ways that attempts to recreate in-person interactions inevitably fall short and don’t feel the same. Instead though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There [p6] (pdf here [p7]).

      The chapter's examination of the limitations of duplicating in-person contact via computers truly struck a chord with me. I've always believed that, while video conversations are handy, they don't really convey the nuances of face-to-face engagement, particularly subtle body language or sense of presence. The idea of changing the design aim away from "being there" and toward harnessing the specific capabilities of computer-mediated communication (as explained in the "Beyond Being There" source) struck me as a new and sensible approach. It made me think about how tools like shared documents and async messaging frequently enhance, rather than replace, human interaction.

  8. social-media-ethics-automation.github.io
    1. Anya Kamenetz. Facebook's own data is not as conclusive as you think about teens and mental health. NPR, October 2021. URL: https://www.npr.org/2021/10/06/1043138622/facebook-instagram-teens-mental-health (visited on 2023-12-08).

      Anya Kamenetz's NPR report highlights how Facebook's own research on teen mental health was murkier than headlines implied. One major takeaway from the piece is that, while some internal presentations claimed Instagram exacerbated body image issues for teen girls, other slides in the same research showed the effects were mixed or even beneficial in some cases. This reminds me to use caution when evaluating statistics, particularly when they are used to promote broad narratives in the media.

    1. Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox [m6], where people take some form of a break from social media platforms and digital devices. While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter.

      I found the segment on "digital detox" really thought-provoking. Personally, I've tried taking breaks from social media during stressful times, and it has really improved my focus and sleep. However, I agree with the chapter's premise that characterizing the internet as inherently hazardous simplifies a difficult problem. Social media may be both detrimental and beneficial—it all depends on how it is utilized. Rather than completely unplugging, learning to set boundaries and create a better online environment may be a more durable approach.

  9. social-media-ethics-automation.github.io
    1. Evolution of cetaceans. November 2023. Page Version ID: 1186568602. URL: https://en.wikipedia.org/w/index.php?title=Evolution_of_cetaceans&oldid=1186568602 (visited on 2023-12-08).

      I find it fascinating that the Evolution of Cetaceans article describes how whales evolved from land-dwelling hoofed mammals known as artiodactyls. One startling discovery was that early whale progenitors, such as Pakicetus, had functional legs and could survive both on land and in water. This transition from terrestrial to totally aquatic life exemplifies how much environmental constraints can influence evolution over millions of years. This evolutionary journey, I believe, also calls into question our beliefs about which sorts of mammals can adapt to marine life, and it is closely related to previous issues in this course concerning adaptability and environmental influence.

    1. For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a., inheritance). There are ways of duplicating that are built into social media platforms: actions such as liking, reposting, replying, and paid promotion get the original posting to show up for users more; actions like quote tweeting, or the TikTok Duet feature, let people see the original content, but modified with new context; social media sites also provide ways of embedding posts in other places, like in news articles. There are also ways of replicating social media content that aren’t directly built into the social media platform, such as copying images or text and reposting them yourself, taking screenshots, and cross-posting to different sites.

      One thing that struck me about this chapter was how the concept of replication and inheritance on social media reflects how ideas evolve in real life. When someone edits a post, such as adding comments in the form of a quote tweet or remixing it in a TikTok Duet, it reminds me of how memes and cultural trends emerge through reinterpretation. It makes me wonder how much control we actually have over our original stuff once it's online. I also wonder if this type of replication dilutes the original message or amplifies it by allowing additional voices to add to it.
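
      As a way to picture the replication-and-inheritance idea from the quoted passage, here is a minimal Python sketch (the `reshare` function and post structure are my own illustration, not from the textbook), in which each reshare copies whichever version it saw, so a modification travels with all later copies of that version.

      ```python
      # Minimal sketch of content replication with "inheritance":
      # a reshare copies whatever version it saw, so commentary added in a
      # quote-tweet/Duet-style repost persists in every later copy of that version.

      def reshare(post, modification=None):
          """Return a new copy of `post`, optionally with added commentary."""
          new_post = {
              "text": post["text"],
              "commentary": list(post["commentary"]),  # inherit prior commentary
          }
          if modification:
              new_post["commentary"].append(modification)
          return new_post

      original = {"text": "Check out this news story!", "commentary": []}

      quoted = reshare(original, "This headline is misleading, read the article.")
      copy_of_quoted = reshare(quoted)  # plain repost of the modified version

      # The added commentary is "inherited" by later replications of that version.
      print(copy_of_quoted["commentary"])
      # ['This headline is misleading, read the article.']
      ```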

  10. social-media-ethics-automation.github.io
    1. Zack Whittaker. Facebook won't let you opt out of its phone number 'look up' setting. TechCrunch, March 2019. URL: https://techcrunch.com/2019/03/03/facebook-phone-number-look-up/ (visited on 2023-12-07).

      Zack Whittaker's TechCrunch story sheds light on how Facebook exploited users' phone numbers, which were originally provided for two-factor authentication, to allow outsiders to look them up without giving users the option to opt out. This raises serious questions about consent and privacy, especially because users were led to believe their phone numbers were only used for security reasons. What struck me the most was that even privacy settings did not provide a comprehensive way to prevent this type of exposure. It's a striking example of how platforms can introduce systemic privacy problems independent of user intent or awareness. This got me thinking about how platform design decisions reflect underlying values—or lack thereof—regarding user autonomy and openness.

    1. Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences on the version of cocaine more commonly used by Black people, and lighter sentences on the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)

      Reading about the distinction between individual and systemic analysis made me reflect on how frequently we attribute blame to individuals without considering the bigger systems that influence their actions. The example of drug sentencing inequities in the United States was particularly striking, demonstrating how institutional racism may operate even when no individual means harm. It reminds me of public health discussions in which outcomes differ dramatically across racial and socioeconomic lines, not because of individual choices, but because of systemic impediments such as access to care or environmental exposures. This makes me wonder: how do we effectively improve systems that are "invisible" in everyday life but have such significant consequences?

  11. Apr 2025
  12. social-media-ethics-automation.github.io
    1. Social model of disability. November 2023. Page Version ID: 1184222120. URL: https://en.wikipedia.org/w/index.php?title=Social_model_of_disability&oldid=1184222120#Social_construction_of_disability (visited on 2023-12-07).

      This article covers the social model of disability, which holds that people are disabled not by their impairments, but by societal barriers such as inaccessible structures, technologies, or biased attitudes. This viewpoint is relevant to the design of social media platforms because it reframes accessibility not as a personal limitation, but as a societal responsibility. It moves the emphasis from "fixing the person" to "changing the environment," which largely aligns with the ideas presented in Chapter 10 of the course textbook.

    1. Those with disabilities often find ways to cope with their disability, that is, find ways to work around difficulties they encounter and seek out places and strategies that work for them (whether realizing they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking and may take a mental or physical toll on the person masking, which others around them won’t realize.

      Accessibility has always been one of my concerns. In my own assignments and reflections, I have mentioned many times how important accessibility is to the disabled community and how, in my opinion, the lack of attention to this issue in today's society is something we need to focus on.

    1. What assumptions does GDPR make about individuals or groups using social media, which might not be true or might cause problems? List as many as you can think of (bullet points encouraged). 9.5.2. Imagine (2-3 minutes, by yourself): Select one of the above assumptions that you think is important to address. Then write a 1-2 sentence scenario where a user faces difficulties because of the assumption you selected. This represents one way the design could exclude certain users. 9.5.3. Design (3-5 minutes, by yourself): Brainstorm ways to change the GDPR policy to avoid the scenario you wrote above. List as many different kinds of potential solutions you can think of – aim for ten or more (bullet points encouraged). 9.5.4. Expand (5-10 minutes, with others): Combine your list of critiques with someone else’s (or if possible, have a whole class combine theirs). You can also consider reading criticism of the GDPR: “What’s wrong with the GDPR?” – Politico [i31]; “How GDPR Is Failing” – Wired [i32].

      In the section on privacy safeguards and regulations, the article "ActivityPub, GDPR, and the Problem of Privacy" is a useful reference. This paper emphasizes a crucial tension: decentralized social media protocols like ActivityPub, which allow users to interact without a centralized authority, inherently complicate compliance with privacy legislation such as the GDPR. According to the source, the GDPR gives users the right to have their data removed ("right to be forgotten"), but in decentralized networks, once data is federated among numerous separate servers, it becomes extremely difficult—if not impossible—to properly erase. This raises serious ethical and legal concerns about whether decentralized platforms can truly guarantee user privacy, even if they are intended to encourage autonomy and control.

  13. social-media-ethics-automation.github.io
    1. When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” [i1] (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      This rings true: in my daily use of self-publishing platforms, my phone number and other details are in fact exposed quite often. At the same time, my personal preferences are quietly recorded and converted into shopping recommendations and the like.

    1. For example, social media data about who you are friends with might be used to infer your sexual orientation [h9]. Social media data might also be used to infer people’s: race; political leanings; interests; susceptibility to financial scams; being prone to addiction (e.g., gambling).

      It's true that for all of us, going online means tapping into the big pool of data and making our own contribution to it. Big data then pushes different content according to each person's preferences, and may even push products at different price points, like a press squeezing the last bit of value out of each person.

    1. Online advertisers can see what pages their ads are being requested on, and track users [h1] across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website.

      Obviously, this behavior is very annoying when all kinds of ads are bundled into the software. Sometimes I just want to use social media, but the ads are everywhere.
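
      To make the tracking mechanism in the quoted passage concrete, here is a small, purely hypothetical Python sketch (the function names and cookie-style user ID are my own, not any real ad network's API) of how an ad network that sees its ads requested on different pages could link those sightings and retarget the user on another site.

      ```python
      # Hypothetical sketch of cross-site ad retargeting: the ad network records
      # which pages it has seen a given user (identified by a tracking cookie) on,
      # then uses that history to pick ads on unrelated sites.

      from collections import defaultdict

      # user_id -> list of (site, page_topic) pairs where the network's ad was loaded
      sightings = defaultdict(list)

      def record_ad_request(user_id, site, page_topic):
          """Called whenever the network's ad is requested on a page."""
          sightings[user_id].append((site, page_topic))

      def choose_ad(user_id):
          """Pick an ad topic based on pages previously seen for this user."""
          topics = [topic for _, topic in sightings[user_id]]
          return f"ad about {topics[-1]}" if topics else "generic ad"

      record_ad_request("cookie-123", "amazon.com", "shoes")

      # Later, the same user loads a different site that carries this network's ads:
      print(choose_ad("cookie-123"))  # ad about shoes
      ```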

    1. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse”

      Absolutely, I agree with that; victims should not simply have to endure abuse. We should introduce ways for people to assert their rights and protect themselves when they are hurt online, extending even to the legal level.

    1. Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling [g4]), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously

      From my perspective, this kind of behavior is rude and shows a lack of respect for people. It can be genuinely hard for some people to recognize whether something is true or not, so rumors like these can end up hurting a lot of people.

    1. Another example of authenticity we can consider is the authenticity (or fake authenticity) of corporate brand accounts. In the late 2010s, a number of corporate brand Twitter accounts started breaking away from normal, safe and boring corporate topics and began interacting with each other playfully [f44], or addressing real and serious human concerns.

      There is no doubt that this approach does make things easier for many people, both readers and the staff behind the accounts. It works like a tool that serves the public.

  14. social-media-ethics-automation.github.io
    1. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity

      It's true that this differs from older habits, but it's also something a lot of people have grown familiar with over the last decade. Whether avatars are involved or not, many people seem increasingly willing to accept this form of “deception.”

    1. Many types of data on social media platforms are organized as lists, such as: lists of friends or followers; lists of posts; lists of photos in a post; lists of people who liked a post; etc.

      I learned about these structures in Python at one point, and now I'm willing to learn more about them. Lists and dictionaries are genuinely tricky topics, and I'm happy to review them again and master them.
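
      Since the quoted passage describes social media data organized as lists, here is a minimal Python sketch (with invented example data, in the spirit of the textbook's Python examples) of how the kinds of lists it names might be represented, including a dictionary for a post that itself contains lists.

      ```python
      # Sketch of social media data organized as lists and dictionaries.

      # A list of followers (each just a username here).
      followers = ["user_a", "user_b", "user_c"]

      # A list of posts, where each post is a dictionary whose values include lists.
      posts = [
          {
              "text": "Our trip to the beach!",
              "photos": ["beach1.jpg", "beach2.jpg"],  # list of photos in a post
              "liked_by": ["user_a", "user_c"],        # list of people who liked it
          },
          {
              "text": "Study group tonight?",
              "photos": [],
              "liked_by": ["user_b"],
          },
      ]

      # Looping over nested lists: count total likes across all posts.
      total_likes = sum(len(post["liked_by"]) for post in posts)
      print(f"{len(followers)} followers, {total_likes} total likes")
      ```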

    1. In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.

      There is no doubt that the invention of social media laid the groundwork for the life I enjoy today. Much of what I know or learn now comes through social media; whether it's something I'm interested in or anything else, I can find out everything I need to know.

    1. We’ve now looked at how different ways of storing data and putting constraints on data can make social media systems work better for some people than others, and we’ve looked at how this data also informs decision-making and who is taken into account in ethics analyses. Given all that can be at stake in making decisions on how data will be stored and constrained, choose one type of data a social media site might collect (e.g., name, age, location, gender, posts you liked, etc.), and then choose two different ethics frameworks and consider what each framework would mean for someone choosing how that data will be stored and constrained

      Interestingly, this resembles big data analytics, where everything is tagged and then categorized through a series of statistical and algorithmic steps, ultimately pushing out the posts and content that users want to see.

    1. The date of the tweet: Feb 10, 2020; the text of the tweet: “This is Woods. He’s here to help with the dishes. Specifically, the pre-rinse, where he licks every item he can. 12/10”; the photos in the tweet: three photos of a puppy on a dishwasher; the number of replies: 1,533; the number of retweets: 26.2K; the number of likes: 197.8K.

      To be honest, I do think this kind of data is somewhat significant for me. As the chapter shows, Twitter displays these numbers alongside the post, and they might decide whether I read it carefully or not.
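
      The fields in the quoted tweet could be gathered into a single Python dictionary; this is just an illustrative sketch of how such metadata might be stored (the field names are my own, not Twitter's actual API schema).

      ```python
      # Illustrative sketch: the quoted tweet's visible data as a Python dictionary.
      # Field names are invented for this example, not Twitter's real API schema.

      tweet = {
          "date": "2020-02-10",
          "text": ("This is Woods. He's here to help with the dishes. Specifically, "
                   "the pre-rinse, where he licks every item he can. 12/10"),
          "photo_count": 3,     # three photos of a puppy on a dishwasher
          "replies": 1_533,
          "retweets": 26_200,
          "likes": 197_800,
      }

      # Metrics like these are what a reader might skim before deciding
      # whether to read the post carefully.
      print(f"{tweet['likes']:,} likes and {tweet['retweets']:,} retweets")
      ```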

    1. Bay found that 50.9% of people tweeting negatively about “The Last Jedi” were “politically motivated or not even human,” with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place.

      Basically, many movies attract negative reviews or abuse like this from a variety of people, and for a variety of reasons: personal emotions, competitors, or political factors, to name a few.

    2. On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers [c8], or making fake crowds that appear to support a cause (called Astroturfing [c9]). As one example, in 2016, Rian Johnson, who was in the middle of directing Star Wars: The Last Jedi, got bombarded by tweets that all originated in Russia (likely making at least some use of bots).

      When I first started using Instagram, a lot of accounts (bots) added me as a friend. In reality, they just wanted to attack my device.

    1. This means that media, which includes painting, movies, books, speech, songs, dance, etc., all communicates in some way, and thus are social. And every social thing humans do is done through various mediums. So, for example, a war is enacted through the mediums of speech (e.g., threats, treaties, battle plans), coordinated movements, clothing (uniforms), and, of course, the mediums of weapons and violence.

      The web, like any medium, inevitably has two sides.