20 Matching Annotations
  1. Last 7 days
    1. 12.3. Evolution in social media: Let’s now turn to social media and look at how evolution happens there. As we said before, evolution occurs when there is: replication (with inheritance), variations or mutations, and natural selection, so let’s look at each of those.

      This section explains how social media content spreads in ways similar to biological evolution. Posts are copied and shared (replication), often changed through replies, quote tweets, or remixes (variation), and then selectively amplified based on human reactions, money, or platform algorithms (selection). Together, these forces shape which ideas go viral and which disappear, showing that platform design plays a powerful role in guiding online culture.
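
      To make the replication / variation / selection loop concrete, here is a toy simulation I sketched (my own, not from the book; all numbers are made up) where posts get reshared in proportion to their appeal, each copy drifts a little from its parent, and only the most engaging variants survive:

      ```python
      import random

      # Toy model of meme evolution: replication (resharing), variation
      # (each copy drifts slightly), and selection (only the most engaging
      # variants are kept). "Appeal" is an invented stand-in for engagement.

      def evolve(population, generations=10, carrying_capacity=100):
          for _ in range(generations):
              offspring = []
              for appeal in population:
                  offspring.append(appeal)  # the original post stays up
                  # Replication: more appealing posts get more shares.
                  n_shares = max(0, round(random.gauss(appeal, 1)))
                  for _ in range(n_shares):
                      # Variation: each copy mutates a little.
                      offspring.append(appeal + random.gauss(0, 0.2))
              # Selection: the audience/algorithm only amplifies the top posts.
              population = sorted(offspring, reverse=True)[:carrying_capacity]
          return population

      final = evolve([1.0] * 20)
      print(f"mean appeal after selection: {sum(final) / len(final):.2f}")
      ```

      Even with tiny random mutations, the mean appeal climbs generation after generation, which is the point: selection pressure, not any single author, drives the drift.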

    1. 11.1. What Recommendation Algorithms Do: When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

      Recommendation algorithms decide what we see on social media based on our past behavior, such as what we like, click, or spend time looking at. While this can make feeds feel personalized and engaging, it can also limit what we are exposed to and reinforce habits or interests without us realizing it, especially since these algorithms are usually hidden from users.
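
      Here is a minimal sketch of one common style of recommendation algorithm: score every candidate post for a user, then show the highest-scoring posts first. The features and weights below are my own made-up assumptions; real platforms combine far more signals:

      ```python
      # Score-and-rank sketch of a feed recommendation algorithm.
      # All field names and weights are illustrative assumptions.

      def score(post, user):
          s = 0.0
          if post["author"] in user["follows"]:
              s += 2.0  # boost posts from accounts the user follows
          # Boost posts whose topics overlap with the user's interests.
          s += 1.5 * len(set(post["topics"]) & set(user["interests"]))
          s += 0.1 * post["like_count"]  # popularity signal
          return s

      def recommend(posts, user, k=3):
          # Rank all candidate posts by score, highest first.
          return sorted(posts, key=lambda p: score(p, user), reverse=True)[:k]

      user = {"follows": {"alice"}, "interests": {"cats", "music"}}
      posts = [
          {"author": "alice", "topics": ["cats"], "like_count": 4},
          {"author": "bob", "topics": ["sports"], "like_count": 90},
          {"author": "carol", "topics": ["music", "cats"], "like_count": 2},
      ]
      print(recommend(posts, user))
      ```

      Notice how the weights quietly encode values: raising the like_count weight makes the feed chase virality, while raising the follows weight keeps it personal. That choice is exactly the hidden part users never see.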

    1. 10.3.1. Who gets designed for

      This section shows that design choices often reflect the assumptions and backgrounds of the people who create them. When designers are not diverse or do not include disabled people in the design process, products can unintentionally exclude or harm certain groups, like the soap dispenser that fails to recognize dark skin. It highlights why inclusive design and involving affected communities directly are essential for fairness and accessibility.

    1. 10.1. Disability: A disability is an ability that a person doesn’t have, but that their society expects them to have. For example:

      This section explains disability as something created by social expectations, not just by a person’s body or mind. A disability happens when environments, designs, or systems assume everyone has certain abilities, like walking, seeing, hearing, or focusing, and fail to accommodate people who don’t. What counts as a disability can change across societies and situations, and disabilities can be visible or invisible, permanent or temporary. Overall, the reading emphasizes that disability is closely tied to design choices and social norms, and many difficulties could be reduced if society made more inclusive accommodations.

  2. Feb 2026
    1. 9.3. Additional Privacy Violations: Besides hacking, there are other forms of privacy violations, such as:
      - Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example, if you send “private” messages on a work system, your boss might be able to read them. When Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages.
      - Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much.
      - Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, when former tech CEO John McAfee was a suspect in a murder in Belize, he hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala.
      - Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie-rating datasets, and at least some users’ data could be traced back to them.
      - Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information can be a privacy violation. This includes the creation of shadow profiles, which contain information about a user that the user didn’t provide or consent to.
      - Non-User Information: Social media sites might collect information about people who don’t have accounts, like Facebook does.

      I was surprised that privacy violations don’t only come from hacking, but also from normal everyday systems and data practices. Things like unclear rules, metadata, or “anonymous” datasets can still expose people without them realizing it. The example of photo metadata revealing someone’s hidden location shows how small technical details can create serious risks. This reminds me that data mining and data sharing can carry hidden consequences, so companies should be much more responsible and transparent.
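
      As a concrete illustration of the metadata point, here is a short sketch using the Pillow library to read the GPS coordinates a phone camera embeds in a JPEG (“photo.jpg” is a placeholder; this assumes the file actually carries GPS EXIF data):

      ```python
      from PIL import Image
      from PIL.ExifTags import TAGS, GPSTAGS

      img = Image.open("photo.jpg")
      exif = img._getexif() or {}  # long-standing Pillow API; returns None if no EXIF

      for tag_id, value in exif.items():
          if TAGS.get(tag_id) == "GPSInfo":
              # Translate numeric GPS tag IDs into readable names.
              gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
              print(gps)  # e.g. GPSLatitude / GPSLongitude as degree tuples
      ```

      A dozen lines like these are all it takes to turn a casually shared photo into a location disclosure, which is essentially what happened to McAfee.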

    1. 9.1. Privacy

      This section made me realize that privacy is not just about “hiding secrets,” but about control over our information and context. For example, we might want to talk differently with friends than with teachers or employers, but social media can mix everything together (context collapse). I also didn’t think about how “private messages” aren’t truly private since companies can still store and scan them. It feels like using social media always means giving up some privacy, even when we don’t notice it.

    1. 8.3. Mining Social Media Data: Data mining is the process of taking a set of data and trying to learn new things from it.

      This section shows that data mining can reveal surprising patterns, like predicting someone’s interests or political views from social media behavior. However, the part about spurious correlations is especially important. Just because two things move together does not mean one causes the other. The candle reviews and COVID example was funny but also a good reminder that data can be misleading. As a data science student, this makes me realize we must be careful when interpreting results and avoid jumping to conclusions too quickly.
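
      Here is a tiny demonstration of the trap: two invented series that have nothing to do with each other, but both happen to trend upward over time, end up with a correlation near 1.0:

      ```python
      import numpy as np

      # Two unrelated, made-up quantities that both grow over 24 months.
      months = np.arange(24)
      ice_cream_sales = 100 + 5 * months + np.random.default_rng(0).normal(0, 5, 24)
      python_downloads = 1000 + 40 * months + np.random.default_rng(1).normal(0, 40, 24)

      r = np.corrcoef(ice_cream_sales, python_downloads)[0, 1]
      print(f"correlation: {r:.2f}")  # close to 1.0 despite no causal link
      ```

      A shared trend alone can produce a high coefficient, which is why correlations like these need a causal story before we trust them.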

    1. 8.1. Sources of Social Media Data: Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like:
      - email address
      - name
      - profile picture
      - interests
      - friends

      Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might):
      - when users are logged on and logged off
      - who users interact with
      - what users click on
      - what posts users pause over
      - where users are located
      - what users send in direct messages to each other

      Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website. Additionally, social media might collect information about non-users, such as when a user posts a picture of themselves with a friend who doesn’t have an account, or a user shares their phone contact list with a social media site, some of whom don’t have accounts (Facebook does this). Social media platforms then use “data mining” to search through all this data to try to learn more about their users, find patterns of behavior, and, in the end, make more money.

      This section made me realize how much data social media platforms collect, even beyond what we intentionally share. I used to think they only stored basic info like my name or email, but they also track behaviors like what I click on, how long I look at posts, and even where I go online. It feels a little uncomfortable because many of these things happen without us noticing. It shows that our online actions can reveal a lot about us, not just what we directly say. This makes me think we should be more careful about privacy and what platforms are allowed to collect.
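
      To picture what this tracking looks like in practice, here is a sketch of a single logged “interaction event” (the field names are my own assumptions, not any platform’s real schema); every click, pause, or login could generate a record like this:

      ```python
      from dataclasses import dataclass

      @dataclass
      class InteractionEvent:
          user_id: str
          event_type: str     # e.g. "click", "pause_over_post", "login"
          target_post_id: str
          timestamp: float    # when the interaction happened
          location: str       # coarse location, e.g. inferred from IP address

      event = InteractionEvent("u123", "pause_over_post", "p456", 1718000000.0, "Seattle, WA")
      print(event)
      ```

      Any single record looks harmless; it is the millions of them per user, mined together, that become revealing.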

  3. Jan 2026
    1. 7.6. Ethics and Trolling

      This section aptly points out that “trolling itself is not inherently good or evil”; the key lies in what it disrupts and whether the social reality being disrupted holds legitimacy. What I find most illuminating is framing trolling within the context of “groups, norms, and power”: When trolling challenges unjust norms, it may hold moral value; but when it descends into nihilism, seeking only reactions and harm, it becomes nearly indefensible within any serious ethical framework.

    1. 7.1. What is trolling: Fig. 7.1: On Martin Luther King Jr. Day 2020, comedian Jaboukie Young-White used his verified-identity blue checkmark (before Elon Musk made blue checkmarks purchasable) to impersonate the official FBI account. He then made a trolling Tweet, pretending to be the FBI and referring to the theory that the FBI was behind the assassination of Martin Luther King Jr. (note: while this theory is not confirmed, the FBI definitely tried to get MLK to kill himself). Twitter quickly suspended Jaboukie’s account after this post, but many viewed his Tweet as a heroic (and funny) act of protest.

      This example shows how trolling relies on inauthentic presentation to deliberately provoke strong emotional reactions and disrupt trust. By impersonating a verified FBI account, Jaboukie’s tweet blurred the line between humor, protest, and deception, forcing users to confront both the FBI’s historical actions and the power of authority signals online. It raises an interesting ethical tension: even when trolling is used to make a political point or expose institutional hypocrisy, it still depends on misleading audiences, which can undermine the very trust it seeks to critique.

    1. 6.5. Parasocial Relationships: Another phenomenon related to authenticity which is common on social media is the parasocial relationship. Parasocial relationships are when a viewer or follower of a public figure (that is, a celebrity) feels like they know the public figure, and may even feel a sort of friendship with them, but the public figure doesn’t know the viewer at all. Parasocial relationships are not a new phenomenon, but social media has increased our ability to form both sides of these bonds. As comedian Bo Burnham put it: “This awful D-list celebrity pressure I had experienced onstage has now been democratized.” Learn more about parasocial relationships: StrucciMovies’ Fake Friends YouTube series, and Sarah Z’s How Fans Treat Creators (33 min).

      Parasocial relationships are really common on social media, especially with influencers and streamers who talk directly to the camera and share personal stories. It can feel authentic when creators clearly explain the limits of the relationship, like Mr. Rogers did by calling viewers “television friends.” But it can become inauthentic or harmful when creators encourage followers to believe the relationship is more mutual than it actually is.

    1. Early in the days of YouTube, one YouTube channel (lonelygirl15) started to release vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress.

      The lonelygirl15 case shows why authenticity is so important on social media. People felt upset not just because the story was fake, but because they trusted it as real and emotionally invested in it. This makes me realize that when platforms or creators blur the line between fiction and reality, it can easily break users’ trust, even if the content itself is entertaining.

    1. With that in mind, you can look at a social media site and think about what pieces of information could be available and what actions could be possible. Then for these you can consider whether they are:
      - low friction (easy)
      - high friction (possible, but not easy)
      - disallowed (not possible in any way)

      I found the discussion of affordances and friction especially thought-provoking, because design choices are not neutral—they actively guide user behavior. Features like infinite scroll reduce friction in a way that benefits engagement metrics, but from an ethical perspective (especially care ethics or virtue ethics), they can undermine users’ ability to rest, reflect, or disengage. This makes me think that “frictionless design” is not always ethically better, and sometimes intentional friction can actually support more responsible and humane use of social media.
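
      Following the exercise’s three categories, here is a small sketch of how one might audit a platform’s affordances by friction level (the actions and their classifications are examples I made up, not a description of any real site):

      ```python
      from enum import Enum

      class Friction(Enum):
          LOW = "easy"
          HIGH = "possible, but not easy"
          DISALLOWED = "not possible in any way"

      # Hypothetical audit of a few actions on an imagined platform.
      actions = {
          "reshare a post": Friction.LOW,                    # one tap
          "download your full data archive": Friction.HIGH,  # buried in settings
          "edit someone else's post": Friction.DISALLOWED,
      }

      for action, friction in actions.items():
          print(f"{action}: {friction.value}")
      ```

      Writing the audit out this way makes the design choices visible: every LOW entry is something the platform wants you doing.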

    1. 5.5.3. 8Chan (now 8Kun): 8Chan (now called 8Kun) is an image-sharing bulletin board site that was started in 2013. It has been host to white-supremacist, neo-Nazi, and other hate content. 8Chan has had trouble finding companies to host its servers and internet registration due to the presence of child sexual abuse material (CSAM), and for being the place where various mass shooters spread their hateful manifestos. 8Chan is also the source and home of the false conspiracy theory QAnon.

      I find these “virtually rule-free” platforms deeply contradictory. On one hand, they undeniably gave rise to much of the early internet culture and memes, yet on the other, they also provide fertile ground for extreme and violent content. When “free speech” is treated as the sole principle, it's easy to overlook the real people who get hurt as a result. This makes me believe that “no rules” itself is not a neutral choice.

    1. 3.2.5. Fake Bots: We also would like to point out that there are fake bots as well, that is, real people pretending their work is the result of a bot. For example, TikTok user Curt Skelton posted a video claiming that he was actually an AI-generated / deepfake character:

      I hadn't fully realized there were so many “unofficial” bots out there—like those that bypass API restrictions by simulating human clicks. This feels riskier than simply registering bots. Add to that fake bots (real people pretending to be bots), and it becomes harder to verify information sources, further eroding trust on the platform.

    2. 3.2.3. Corrupted bots: As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong in Vice’s “How to Make a Bot That Isn’t Racist.”

      I think the Tay example perfectly illustrates that “learning bots” are not neutral—what they learn depends entirely on their environment. If the platform itself is saturated with malicious content and the bot lacks sufficient filtering or constraints, it can quickly become corrupted and even amplify existing problems.
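
      Here is a minimal toy sketch of that failure mode: a “learning bot” that remembers whatever users send it, with no filtering at all, will repeat harmful phrases as readily as friendly ones. Tay’s actual model was far more sophisticated, but the shape of the problem is the same:

      ```python
      import random

      class NaiveLearningBot:
          """Toy bot that learns by storing every message it hears."""

          def __init__(self):
              self.learned = ["hello!"]

          def hear(self, message):
              self.learned.append(message)  # learns from anyone, unfiltered

          def speak(self):
              return random.choice(self.learned)

      bot = NaiveLearningBot()
      bot.hear("have a nice day")
      bot.hear("<something hateful>")  # malicious users can teach it this too
      print(bot.speak())               # may repeat the harmful phrase verbatim
      ```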

    1. 2.2.2. The “Golden Rule”: One widespread ethical principle is what English speakers sometimes call the “Golden Rule”:
      - “Tsze-kung asked, saying, ‘Is there one word which may serve as a rule of practice for all one’s life?’ The Master said, ‘Is not reciprocity such a word? What you do not want done to yourself, do not do to others.’” Confucius, Analects 15.23 (~500 BCE, China)
      - “There is nothing dearer to man than himself; therefore, as it is the same thing that is dear to you and to others, hurt not others with what pains yourself.” Gautama Buddha, Udānavarga 5:18 (~500 BCE, Nepal/India)
      - “That which is hateful to you do not do to another; that is the entire Torah, and the rest is its interpretation.” Hillel the Elder, Talmud Shabbat, folio 33a (~0 CE, Palestine)
      - “So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.” Jesus of Nazareth, Matthew 7:12 (~30 CE, Palestine)
      - And many more…

      The “Golden Rule” sounds simple enough, but it doesn’t always work well in practice because everyone’s feelings and boundaries are different. Especially on social media, judging behavior by thinking “I don’t mind, so others shouldn’t either” can overlook the feelings of those who are genuinely affected.

    1. Taoism: Act with unforced actions in harmony with the natural cycles of the universe. Trying to force something to happen will likely backfire. Rejects Confucian focus on ceremonies/rituals. Prefers spontaneity and play. Like how water (soft and yielding) can, over time, cut through rock. Key figures: Lao Tzu (~500 BCE, China) and Zhuangzi (~300 BCE, China).

      As a Chinese student, I actually resonate quite deeply with the Taoism mentioned here. Taoism emphasizes “governing through non-action” and following nature's course. This inclines me to question “forceful intervention” and “over-optimization” when considering social media and tech ethics. Sometimes, the more platforms try to control user behavior, the more likely they are to backfire—much like Taoism's idea that “the harder you try, the more unbalanced things become.”

    1. We’ve now looked at how different ways of storing data and putting constraints on data can make social media systems work better for some people than others, and we’ve looked at how this data also informs decision-making and who is taken into account in ethics analyses. Given all that can be at stake in making decisions on how data will be stored and constrained, choose one type of data a social media site might collect (e.g., name, age, location, gender, posts you liked, etc.), and then choose two different ethics frameworks and consider what each framework would mean for someone choosing how that data will be stored and constrained.

      This section made me realize that storing personal data on social media is not just a technical question, but also an ethical one. For example, age can be stored as a number, but platforms still need to decide how precise it should be and how it might be used or misused. It also made me question whether some data, like exact address, really needs to be stored at all given the privacy risks.
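
      To make the age example concrete, here is a sketch (the field choices are my own assumptions) of how the same fact can be stored at very different precisions, each with different privacy stakes:

      ```python
      import datetime
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class UserProfile:
          # Most precise, most sensitive: full birth date.
          birth_date: Optional[datetime.date] = None
          # Less precise: birth year only; enough for coarse age checks.
          birth_year: Optional[int] = None
          # Least precise: a bucket like "18-24"; enough for many ad systems.
          age_range: Optional[str] = None

      # A platform that only needs "is this user an adult?" could store just
      # the year, or an age range, instead of the full birth date.
      profile = UserProfile(birth_year=2001)
      print(profile)
      ```

      Choosing the least precise field that still serves the purpose is one simple way a schema decision becomes an ethical decision.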

    1. If we look at a data field like gender, there are different ways we might try to represent it. We might try to represent it as a binary field, but that would exclude people who don’t fit within a gender binary. So we might try a string that allows any values, but taking whatever text users end up typing might make data that is difficult to work with (what if they make a typo or use a different language?). So we might store gender using strings, but this time use a preset list of options for users to choose from, perhaps with a way of choosing “other,” and only then allow the users to type their own explanation if our categories didn’t work for them. Perhaps you question whether you want to store gender information at all. Now it’s your turn: choose some data that you might want to store on a social media site, and think through the storage types and constraints you might want to use:
      - Age
      - Name
      - Address
      - Relationship status
      - etc.

      I found the discussion about representing gender as data especially thoughtful, because it shows how technical design decisions can have real social consequences. Treating gender as a simple binary might make data easier to process, but it can erase people’s identities and experiences. I also like the idea of combining preset options with an “other” field, since it balances inclusivity with the need for usable and consistent data.
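
      Here is a sketch of the compromise the passage describes: a preset list with an “other” option plus free text, instead of a bare binary or an unconstrained string (the option names below are examples only, not a recommended taxonomy):

      ```python
      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional

      class Gender(Enum):
          WOMAN = "woman"
          MAN = "man"
          NONBINARY = "nonbinary"
          OTHER = "other"
          PREFER_NOT_TO_SAY = "prefer not to say"

      @dataclass
      class GenderField:
          choice: Gender
          other_text: Optional[str] = None  # free text, only for the OTHER choice

          def __post_init__(self):
              if self.choice is not Gender.OTHER and self.other_text:
                  raise ValueError("free text is only allowed with the 'other' option")

      print(GenderField(Gender.NONBINARY))
      print(GenderField(Gender.OTHER, other_text="genderfluid"))
      ```

      The constraint in __post_init__ is the “usable and consistent data” half of the trade-off; the OTHER branch is the inclusivity half.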