8 Matching Annotations
  1. Last 7 days
    1. 4Chan has various image-sharing bulletin boards, where users post anonymously. Perhaps the most infamous board is the “/b/” board for “random” topics. This board emphasizes “free speech” and “no rules” (with exceptions for child pornography and some other illegal content). In these message boards, users attempt to troll each other and post the most shocking content they can come up with. They also have a history of collectively choosing a target website or community and doing a “raid” where they all try to join and troll and offend the people in that community.

      This part really shows how anonymity plus a “no rules” mindset changes behavior. It feels like the goal shifts from sharing ideas to just getting reactions, no matter the harm. Calling it “free speech” sounds ideal in theory, but here it mostly seems to reward whoever can be the most shocking or offensive.

    1. Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      This comparison really stood out to me. It makes today’s misinformation problem feel less like a brand new crisis and more like a return to an older media pattern, just amplified by speed and scale. The idea that the 1900s were the “unusual” period, not the norm, kind of flips how I usually think about media history.

  2. Jan 2026
    1. If we look at a data field like gender, there are different ways we might try to represent it. We might try to represent it as a binary field, but that would exclude people who don’t fit within a gender binary. So we might try a string that allows any values, but taking whatever text users end up typing might make data that is difficult to work with (what if they make a typo or use a different language?). So we might store gender using strings, but this time use a preset list of options for users to choose from, perhaps with a way of choosing “other,” and only then allow the users to type their own explanation if our categories didn’t work for them. Perhaps you question whether you want to store gender information at all.

      I like how this example shows that even something that seems simple like a “data field” actually involves a lot of value judgments. Every way of storing gender has tradeoffs between inclusivity, usability, and data cleanliness, and there isn’t a purely technical solution. It also made me stop and think about whether collecting certain data is even necessary in the first place.
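
      The passage walks through a concrete data-modeling choice, so here is a minimal sketch (my own illustration, not from the text) of the last option it describes: a preset list of choices plus an “other” option with free text. The option names and field names are placeholders, not anything the book prescribes.

      ```python
      # Rough sketch of the "preset list plus 'other'" approach described above.
      # Option names and field names are placeholders, not from the book.

      PRESET_GENDER_OPTIONS = {"woman", "man", "non-binary", "prefer not to say", "other"}

      def store_gender(selected_option, self_description=None):
          """Validate a gender selection before saving it.

          selected_option: one of PRESET_GENDER_OPTIONS
          self_description: free text, used only when the user picks "other"
          """
          if selected_option not in PRESET_GENDER_OPTIONS:
              raise ValueError(f"Unknown option: {selected_option!r}")
          if selected_option == "other":
              # Free text is kept only for the "other" case, so most records stay
              # easy to aggregate while people outside the preset list aren't excluded.
              return {"gender": "other", "gender_self_description": self_description}
          return {"gender": selected_option}

      # One user picks a preset option; another self-describes.
      print(store_gender("non-binary"))
      print(store_gender("other", "genderfluid"))
      ```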

    1. Binary, consisting of 0s and 1s, makes it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values.

      This makes sense to me, especially how binary maps so cleanly onto True/False. It’s interesting that something as abstract as “truth” in code ultimately comes down to 0s and 1s, which feels very different from how messy and ambiguous truth can be in real life.
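
      To make that mapping concrete, here is a tiny Python illustration (mine, not from the text): True and False are built in, and under the hood they correspond to 1 and 0.

      ```python
      # Python's built-in booleans, and how they line up with 1 and 0.
      is_logged_in = True
      is_banned = False

      print(int(is_logged_in), int(is_banned))  # 1 0
      print(True == 1, False == 0)              # True True (bool is a subclass of int)
      ```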

    1. “Only in Oman has the occasional donkey…been used as a mobile billboard to express anti-regime sentiments. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.”

      The donkey example actually made this click for me. It’s a really clear way to show how intention and action can be separated, and how responsibility gets blurred when the “actor” doesn’t understand what it’s doing. Seeing bots framed this way helps me think less about blaming the account itself and more about the people behind it: whoever designed it, deployed it, or benefits from it, even if they’re far removed from the actual action.

    1. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans who are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm.

      I like this clarification because “bot” gets thrown around so loosely online. It makes sense to draw the line at whether an account is actually run by software versus a human, even if that human is being paid and tightly scripted. Calling click-farm workers “human computers” is kind of unsettling, but it does a good job of showing how inauthentic behavior isn’t always automated in a technical sense.

    1. When we (the authors) were young, as internet-based social media was starting to become commonly used, a popular sentiment we often heard was: “The internet isn’t real life.” This was used as a way to devalue time spent on social media sites, and to dismiss harms that occurred on them. Versions of this phrase are still around, such as in this tweet from statistician Nate Silver:

      I found this idea really interesting because it shows how outdated the phrase “the internet isn’t real life” has become. Online spaces now directly affect people’s mental health, relationships, and even job opportunities, so dismissing harm just because it happens online feels disconnected from reality. It makes me think about how language like this allows people and platforms to avoid taking responsibility for real consequences.

    1. Actions are judged on the sum total of their consequences (utility calculus). The ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.”

      I get why consequentialism is appealing, especially for platforms that rely on metrics like engagement or growth, but it also feels kind of risky. If decisions are made only based on what benefits the majority, harm to smaller or more vulnerable groups can easily be overlooked. In social media, this could mean justifying toxic or harmful content as long as it keeps most users active or entertained.
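
      To see how that worry plays out in numbers, here is a toy “utility calculus” (my own made-up scenario, not from the text): score each option by summing everyone’s gains and losses, then pick the larger total.

      ```python
      # Toy utility calculus: sum everyone's gains and losses for each option.
      def total_utility(effects):
          """effects: list of (group_size, utility_per_person) pairs."""
          return sum(size * utility for size, utility in effects)

      # Hypothetical moderation choice: leave up a popular but harassing meme?
      keep_it_up   = total_utility([(9_000, +1), (100, -20)])  # many mildly amused, a few badly hurt
      take_it_down = total_utility([(9_000, 0), (100, 0)])

      print(keep_it_up, take_it_down)  # 7000 0 -> the plain sum says "keep it up",
      # which is exactly the concern above: enough small benefits can outweigh
      # concentrated harm to a small group.
      ```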