32 Matching Annotations
  1. Mar 2026
    1. As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate. We hope with this you can be a more informed user of social media, better able to participate, protect yourself, and make it a valuable experience for you and others you interact with. For example, you can hopefully recognize when someone is intentionally posting something bad or offensive (like the bad cooking videos we mentioned in the Virality chapter, or an intentionally offensive statement) in an attempt to get people to respond and spread their content. Then you can decide how you want to engage (if at all) given how they are trying to spread their content.

      This reading reminds me that understanding how social media platforms work can help users avoid being manipulated by algorithms or by people who intentionally post provocative content for attention. By being more aware of these strategies, we can make more thoughtful decisions about how we interact online and better protect our time, emotions, and personal data.

    1. In Plato’s Phaedrus (~370BCE), Socrates tells (or makes up) a story from Egypt critical of the invention of writing: Now in those days the god Thamus was the king of the whole country of Egypt, […] [then] came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; […] [W]hen they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: […] this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

      Socrates’ criticism of writing shows that people have long worried that new technologies might weaken human thinking or memory. This reminds me that many of today’s concerns about social media and the internet—such as relying too much on external information instead of critical thinking—are part of a much older debate about how technology shapes knowledge and wisdom.

    1. 20.1.1. Colonialism Defined# Colonialism is when one group or country subjugates another group, often imposing laws, religion, culture, and languages on that group, and taking resources from them. Colonialism is often justified by belief that the subjugated people are inferior (e.g., barbaric, savage, godless, backwards), and the superiority of the group doing the subjugation (e.g., civilized, advanced).

      Colonialism often justified domination by portraying the colonized people as inferior while framing the colonizers as more civilized or advanced. This ideology not only enabled the extraction of resources and control of land, but also imposed lasting cultural, linguistic, and political influences on the societies that were colonized.

    1. 9.1.4. Reflection Questions# In what ways do you see capitalism, socialism, and other funding models show up in the country you are from or are living in?

      China’s economy shows strong elements of state socialism, where the government maintains significant control over key industries and economic planning. At the same time, capitalist market mechanisms are widely used, allowing private businesses and market competition to drive economic growth.

    1. You might remember from Chapter 14 that social contracts, whether literal or metaphorical, involve groups of people all accepting limits to their freedoms. Because of this, some philosophers say that a state or nation is, fundamentally, violent. Violence in this case refers to the way that individual Natural Rights and freedoms are violated by external social constraints. This kind of violence is considered to be legitimated by the agreement to the social contract. This might be easier to understand if you imagine a medical scenario. Say you have broken a bone and you are in pain. A doctor might say that the bone needs to be set; this will be painful, and kind of a forceful, “violent” action in which someone is interfering with your body in a painful way. So the doctor asks if you agree to let her set the bone. You agree, and so the doctor’s action is construed as being a legitimate interference with your body and your freedom. If someone randomly just walked up to you and started pulling at the injured limb, this unagreed violence would not be considered legitimate. Likewise, when medical practitioners interfere with a patient’s body in a way that is non-consensual or not what the patient agreed to, then the violence is considered illegitimate, or morally bad.

      The idea that a state is “fundamentally violent” highlights how laws and social rules always restrict some individual freedoms, but this restriction is considered legitimate when people consent—explicitly or implicitly—to the social contract. Like a doctor setting a broken bone, the interference may be painful or forceful, yet it becomes morally justified through agreement and the expectation that it ultimately serves the individual and the collective good.

    1. Individual harassment can also be done publicly before an audience (such as classmates or family). For example:
       - Bullying: like posting public mean messages
       - Impersonation: making an account that appears to be from someone and having that account say things to embarrass or endanger the victim
       - Doxing: publicly posting identifying information about someone (e.g., full name, address, phone number, etc.)
       - Revenge porn / deep-fake porn
       - Etc.

      Public forms of harassment like bullying, impersonation, and doxing amplify harm because they turn private attacks into performances for an audience, increasing humiliation and long-term reputational damage. The permanence and shareability of digital content make these actions especially dangerous, as victims often lose control over their personal information and how they are represented online.

  2. Feb 2026
    1. What do you think a social media company’s responsibility is for the crowd actions taken by users on its platform?

      I think a social media company has some responsibility for crowd actions on its platform, especially when its design choices or algorithms amplify harmful behavior. Even if users ultimately make their own decisions, platforms still shape the environment and therefore share accountability for preventing large-scale harm.

    1. 16.1.1. Different Ways of Collaborating and Communicating# There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots. But there are ways that attempts to recreate in-person interactions inevitably fall short and don’t feel the same. Instead, though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There (pdf here). Some of the different characteristics that means of communication can have include (but are not limited to):

      This reading made me realize that computer-mediated communication should not always be judged by how closely it imitates face-to-face interaction. Instead, its unique features—like asynchronicity, scalability, and anonymity—can sometimes create new forms of connection that are not just different, but in certain contexts even more effective than being physically present.

    1. 15.1.6. Automated Moderators (bots)# Another strategy for content moderation is using bots, that is, computer programs that look through posts or other content and try to automatically detect problems. These bots might remove content, or they might flag things for human moderators to review.

      Automated moderators can quickly detect and remove harmful content at a large scale, which makes them efficient and cost-effective. However, they often struggle to understand context, sarcasm, or cultural differences, which can lead to unfair removals or missed harmful content.
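A toy sketch of the idea described above (not any real platform's system, and the blocklist words are made up): a simple keyword filter flags posts for human review, and the last example shows how easily such rules miss context or evasions.

```python
# Hypothetical blocklist for illustration only.
BLOCKLIST = {"spam-link", "scam"}

def moderate(post: str) -> str:
    """Flag a post for human review if it contains a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    if words & BLOCKLIST:
        return "flagged"   # hand off to a human moderator
    return "allowed"

print(moderate("Check out this scam!"))   # flagged
print(moderate("What a lovely day"))      # allowed
print(moderate("a SCAM-free deal"))       # allowed: "scam-free" slips past the word list
```

The third example is the point: a bot that only matches exact words has no notion of context, sarcasm, or slight rewordings, which is why real systems pair automation with human review.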

    1. 14.4. Government Censorship# Governments might also have rules about content moderation and censorship, such as laws in the US against Child Sexual Abuse Material (CSAM). China additionally censors various news stories in their country, like stories about protests. In addition to banning news on their platforms, in late 2022 China took advantage of Elon Musk having fired almost all Twitter content moderators to hide news of protests by flooding Twitter with spam and porn.

      Government censorship shows how moderation can protect people from real harm, such as banning CSAM, but it can also be used to control information and silence dissent. The example of China flooding Twitter with spam highlights how censorship can go beyond direct bans and instead manipulate the information environment to hide important events from public view.

    1. So you might find a safe space online to explore part of yourself that isn’t safe in public (e.g., Trans Twitter and the beauty of online anonymity). Or you might find places to share or learn about mental health (in fact, from seeing social media posts, Kyle realized that ADHD was causing many more problems in his life than just having trouble sitting still, and he sought diagnosis and treatment). There are also support groups for various issues people might be struggling with, like ADHD, or having been raised by narcissistic parents.

      Online spaces can offer a powerful sense of safety and belonging, especially for people who feel unable to express certain parts of themselves in public. Anonymity can create room for exploration, honesty, and connection that might not otherwise be possible offline. At the same time, social media can also serve as an entry point to self-understanding, as people encounter language and experiences that help them recognize patterns in their own lives. Support groups and online communities show how digital platforms, despite their flaws, can meaningfully reduce isolation and encourage people to seek help.

    1. Doomscrolling is: “Tendency to continue to surf or scroll through bad news, even though that news is saddening, disheartening, or depressing. Many people are finding themselves reading continuously bad news about COVID-19 without the ability to stop or step back.” Merriam-Webster Dictionary

      This issue became especially severe during the COVID-19 pandemic. Large-scale prevention and control measures intensified people’s sense of uncertainty and anxiety, which in turn led many to lose trust in governments and other perceived “authorities,” weakening public credibility overall. At the same time, heightened stress seemed to push people toward more extreme positions, fostering a kind of defensive aggression in public discourse. As a result, hostility and resentment became more visible in everyday interactions.

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennet’s research talk on this in the following Youtube Video:

      The way accessible design is often discussed creates an artificial divide between “designers” and “disabled users,” which reinforces exclusion rather than inclusion. Dr. Cynthia Bennett’s research highlights how disabled people are frequently treated as consultants or test subjects instead of being recognized as designers in their own right. This framing ignores lived experience as a form of expertise and limits what accessibility can become. If disabled people are not seen as real designers, then accessibility will always be partial and shaped by outsiders’ assumptions rather than real needs.

    1. Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.

      Universal Design shifts responsibility from individuals to systems, which feels fair on the surface, but fairness is not always the same as justice. In social media, treating everyone the same may still disadvantage people who start from very different positions. True justice may require platforms to offer different tools, protections, or visibility to different users based on their needs. So the goal should not only be “fairness,” but a more thoughtful form of equity that actively reduces harm rather than assuming equal access works for everyone.

    1. When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      I have always been skeptical about whether privacy on social media is truly “private.” In many cases, so-called private messages are still accessible to platform developers or automated systems, which means users are trusting companies to protect their privacy rather than actually controlling it themselves. While this access can be helpful in situations like reporting threats or harassment, it also raises questions about who ultimately benefits from this arrangement. If only social media companies are able to see and manage my private data, I am not sure that this kind of “privacy” genuinely serves users’ interests rather than the platforms’ own priorities.

    1. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time).

      It is interesting that using symbols, uppercase letters, and numbers does not significantly increase the difficulty of brute-force attacks, while increasing the length of a password dramatically raises the cost of cracking it. However, many social media platforms still emphasize “complex” password rules rather than encouraging longer passwords. This can create a false sense of security for users, who may believe their passwords are strong when they are not. Ironically, these complexity requirements can even make passwords harder to remember, leading users to reuse them or choose predictable patterns, which ultimately gives attackers more opportunities.

    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      This example shows how datasets can be unintentionally poisoned by social dynamics rather than malicious intent. When a single TikTok video goes viral, it can dramatically change who participates in a survey, skewing the data toward one narrow demographic. Even though the data may look large and complete, it no longer represents the population the researchers originally intended to study. This highlights how data collection is shaped by platforms and visibility, and why researchers must think carefully about how and where their data is gathered.

    1. It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:

      When working with large data sets, it becomes clear how easy it is to find patterns that are misleading. Two variables may appear to move together, but this relationship can be caused by chance or by other hidden factors. Psychology has a similar idea captured by the phrase “correlation does not imply causation.” This reminds me that seeing a pattern in data is not the same as understanding why it exists, and that conclusions should be made carefully rather than based on surface-level relationships.
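One way to see this effect is to generate data that is random by construction and watch strong-looking correlations appear anyway. A small sketch using only the standard library (the series count and length are arbitrary choices):

```python
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# 200 completely unrelated random time series, 10 points each.
series = [[random.random() for _ in range(10)] for _ in range(200)]

# Search all ~20,000 pairs for the strongest apparent relationship.
best = max(
    (abs(correlation(series[i], series[j])), i, j)
    for i in range(len(series)) for j in range(i + 1, len(series))
)
print(f"strongest |r| among unrelated pairs: {best[0]:.2f}")
```

Because every series is independent noise, any strong correlation the search finds is spurious by construction; with enough pairs to compare, a very high one almost always appears.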

  3. Jan 2026
    1. Youtuber Innuendo Studios talks about the way arguments are made in a community like 4chan: You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up. The Alt-Right Playbook: The Card Says Moops by Innuendo Studios While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character). Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling.
      But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      This reading helped me see that trolling in spaces like 4chan isn’t just about “free speech” or joking, but about a lack of care for truth and harm. The idea that arguments are made without concern for whether they are true explains why these communities drift toward extreme misogyny and racism. While trolls may claim a nihilistic or egoist stance, this feels less like a genuine ethical position and more like a shield to avoid responsibility. Under almost any serious moral framework, the deliberate disruption and harm caused by trolling is ethically wrong, especially when it consistently targets marginalized groups.

    1. and their extreme misogyny: Rule 30. There are no girls on the internet Rule 31. TITS or GTFO - the choice is yours [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out]

      Misogyny on the internet seems to be more severe than in real life—especially in the realm of online gaming. At first, I thought this was because gaming is a space that glorifies skill and power, where authority is tied almost exclusively to “game performance,” and stereotypes about women being worse at games lead to a loss of discursive power. However, if misogyny was already pervasive in the early internet, then I think there must be other contributing factors and explanations as well.

    1. Anonymity encouraging inauthentic behavior# Anonymity can encourage inauthentic behavior because, with no way of tracing anything back to you1, you can get away with pretending you are someone you are not, or behaving in ways that would get your true self in trouble.

      Anonymity can encourage people to act in ways they normally would not if their real identity were known. When there are no clear consequences, it becomes easier to pretend to be someone else or to say and do things that might cause trouble in real life. This lack of accountability often lowers people’s sense of responsibility toward others. Over time, anonymity can make online spaces feel less honest and more hostile, even though it can sometimes protect users as well.

    1. On social media, context collapse is a common concern, since on a social networking site you might be connected to very different people (family, different groups of friends, co-workers, etc.). Additionally, something that was shared within one context (like a private message), might get reposted in another context (publicly posted elsewhere).

      In fact, many young people today are already used to this situation. We are accustomed to showing different sides of ourselves in different contexts, and we strongly hope that these contexts do not interfere with one another. For example, we might act “cool” or relaxed around friends in order to fit in, while presenting ourselves as serious and hardworking in front of parents, teachers, or in school settings. When these different contexts merge or collide, people often feel a strong sense of discomfort, betrayal, or even resentment. This is not strange or abnormal—just as most people would not want their parents to closely observe their behavior at a party with friends. Because of this, people usually do not see themselves as fake or feel guilty for behaving differently across contexts; instead, it feels like a normal and necessary way of managing social life.

    1. Fig. 5.2 A newer bulletin board system. In this one you can click on the thread you want to view, and threads can include things like images.

      Although bulletin board systems originated decades ago, many websites today still use this forum-style structure. For example, the Dungeons & Dragons (D&D) community relies heavily on well-known bulletin board forums where players share discussions, homebrew rules, and game resources. Similarly, the fighting game MUGEN, which comes from the arcade era, has dedicated forums where users upload character models, stages, and other custom assets. These modern bulletin board systems show how this format continues to support niche communities by organizing discussions and resources in a clear, thread-based way.

    1. In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.

      Although blogs are often seen as a product of the early internet, many middle-aged and older users still use them as an important way to communicate and share ideas. I once followed a professor from one of my classes who regularly posted his personal observations and reflections on his blog. For example, he noticed that people in the cafeteria were less likely to choose orange trays than brown ones, and he speculated that there might be a scientific or psychological explanation behind this preference. What stood out to me was that many commenters—who appeared to be from the same age group based on their usernames and profile pictures—actively engaged with his posts, showing that blogs still function as a thoughtful and community-oriented space for discussion.

    1. In fact, I have always been puzzled by the collection of information such as “region” and “age.” Is it really necessary for companies to collect such information? These details do not guarantee that an account is used by a real person, since fake accounts can randomly generate plausible combinations of them, yet collecting them increases the risk of user information being leaked.

    1. Or if I want to see for a given account, how much they tweeted “yesterday,” what do I mean by “yesterday?” We might be in different time zones and have different start and end times for what we each call “yesterday.” Or for the person who posted it? Those might not be the same.

      As an international student, I have experienced this myself. Many social media platforms display a post’s timestamp in the viewer’s time zone (i.e., the device’s time zone) rather than the publisher’s. As a result, if you view the same tweet from different time zones, its displayed posting time appears to change.
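This can be sketched with Python's datetime module: a single stored UTC timestamp maps to different local dates for different viewers, so "yesterday" genuinely differs by viewer (the two fixed offsets below are just example time zones):

```python
from datetime import datetime, timezone, timedelta

# A tweet's timestamp is stored once, in UTC.
posted = datetime(2026, 3, 1, 2, 30, tzinfo=timezone.utc)

# Two viewers in different time zones see the same instant as different local times.
seattle = posted.astimezone(timezone(timedelta(hours=-8)))   # UTC-8
beijing = posted.astimezone(timezone(timedelta(hours=8)))    # UTC+8

print(seattle.date())  # 2026-02-28: for this viewer the tweet is from "yesterday"
print(beijing.date())  # 2026-03-01: for this viewer it is from "today"
```

The stored instant never changes; only its rendering does, which is why any query for "posts from yesterday" has to pick whose midnight counts.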

    1. What bots do you dislike?

      I dislike bot armies that are used to manipulate opinions or flood comment sections. In situations where people are supposed to think critically and make their own judgments, these bots create a lot of noise and confusion. They make it harder to tell what real people actually believe. Overall, they turn online discussions into a messy and unhealthy environment instead of a meaningful conversation.

    2. What bots do you find surprising? What bots do you like?

      Many bots on today’s video platforms are designed to recognize background music in videos and help users download audio or video clips. Even though these bots sometimes go against the rules of the platforms, they can be very useful for people who want to find songs or save content for personal use. I find these bots surprising because they can quickly identify music with high accuracy. I also like them because they make it much easier to explore and reuse media that would otherwise be hard to access.

    1. 18.3.2. Schadenfreude# Another way of considering public shaming is as schadenfreude, meaning the enjoyment obtained from the troubles of others. A 2009 satirical article from the parody news site The Onion satirizes public shaming as being for objectifying celebrities and being entertained by their misfortune: Media experts have been warning for months that American consumers will face starvation if Hollywood does not provide someone for them to put on a pedestal, worship, envy, download sex tapes of, and then topple and completely destroy. Nation Demands Fresh Celebrity Meat - The Onion

      I think this phenomenon is even more common on today’s internet, where almost everything—including suffering—is turned into entertainment. For example, in Sex Education, there is a storyline in which a school bully is revealed to be closeted and secretly in love with the gay student he targets. One scene, in which the bully publicly exposes his own physical insecurity as a form of “confession,” is played for humor and often provokes laughter. While the scene may be entertaining, I find it problematic because it turns school bullying into spectacle, overlooking the lasting psychological trauma experienced by the victim. The victim’s suffering cannot simply be erased or healed by understanding the bully’s motivations, even when those motivations are framed romantically.

    1. What do you think is the responsibility of tech workers to think through the ethical implications of what they are making?

      Kumail Nanjiani reflects on his experiences visiting tech companies and expresses concern that many developers give little to no thought to the ethical consequences of their innovations. He highlights how powerful technologies—such as privacy-invasive tools or manipulated media—are often created with a “can we do this?” mindset rather than “should we do this?”. The lack of prepared responses to ethical questions suggests that these issues are rarely discussed within tech culture. Nanjiani ultimately warns that once technology is released, its harms cannot easily be undone, making ethical responsibility crucial.