8 Matching Annotations
  1. Last 7 days
    1. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      Infinite scrolling removes the "stopping point" from the interface; what changes is not the functionality itself but people's behavioral rhythm and the cost of self-control. It makes continuing to consume content the default option, treating attention as an extractable resource, a classic example of "design as governance." The sketch below shows how little code the mechanism requires.
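
      As a rough illustration of the mechanism the quoted passage describes, here is a minimal sketch (not any platform's actual code) of how an infinite-scroll feed can be wired up in browser TypeScript. The /api/posts endpoint and the #feed / #sentinel elements are assumptions made for the example; the pattern is the point: an invisible "sentinel" element near the bottom of the feed triggers loading the next batch, so the reader never reaches a stopping point.

      ```ts
      // Minimal infinite-scroll sketch (browser TypeScript).
      // Assumptions: a #feed container, a #sentinel element below it,
      // and a hypothetical paginated endpoint /api/posts?page=N.

      interface Post {
        id: string;
        text: string;
      }

      let page = 0;
      let loading = false;

      async function fetchNextPage(): Promise<Post[]> {
        const res = await fetch(`/api/posts?page=${page}`); // hypothetical endpoint
        page += 1;
        return res.json();
      }

      function appendPosts(posts: Post[]): void {
        const feed = document.querySelector("#feed")!;
        for (const post of posts) {
          const el = document.createElement("article");
          el.textContent = post.text;
          feed.appendChild(el);
        }
      }

      // The sentinel sits just below the last loaded post. Whenever it scrolls
      // into view, the next batch is fetched and appended, so there is never a
      // "next page" button or any other natural stopping point.
      const observer = new IntersectionObserver(async (entries) => {
        if (entries.some((e) => e.isIntersecting) && !loading) {
          loading = true;
          appendPosts(await fetchNextPage());
          loading = false;
        }
      });
      observer.observe(document.querySelector("#sentinel")!);
      ```

      A paged interface runs essentially the same fetch-and-append logic behind an explicit button click; the design difference is only where the stopping point sits, which is exactly the friction at issue.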

    2. Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down.

      The term "frictionless" is not value-neutral; it often disguises the platform's goals (time spent on the platform, engagement) as "better usability." The ethical question is: are the eliminated frictions actually the "brakes" that users need for reflection, disengagement, or setting privacy preferences?

    1. Such trends, which philosophers call ‘pernicious ignorance’, enable us to overlook inconvenient bits of data to make our utility calculus easier or more likely to turn out in favor of a preferred course of action.

      So-called "pernicious ignorance" is not simply a lack of knowledge, but rather a selective blindness reinforced by social reward structures. For example, when posting photos of volunteer trips abroad, we are more inclined to consider the attention, fundraising, and personal image enhancement they bring, while ignoring the consent rights of those being photographed, the risks of long-term stigmatization, and the harm caused by power imbalances. This makes actions that "appear beneficial" seem morally easier and more readily justifiable.

    2. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic.

      This statement highlights a frequently overlooked tension: the "accuracy" of moral judgment comes at the cost of complexity. The mechanics of social media (instant feedback, likes, shares) push us toward "quick and certain" conclusions, so we naturally decide on the basis of simplified data. The result is not simple miscalculation but the systematic exclusion of inconvenient consequences; the toy calculation below makes that concrete.
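
      To make the quoted point concrete, here is a toy sketch of a utility calculus as a probability-weighted sum over outcomes. Every number is invented for illustration; the point is only the shape of the failure: dropping inconvenient outcomes makes the arithmetic simpler and quietly changes the answer.

      ```ts
      // Toy expected-utility calculation; all numbers here are made up.
      interface Outcome {
        description: string;
        probability: number; // chance of this outcome, given the action
        utility: number;     // benefit (+) or harm (-) if it happens
      }

      function expectedUtility(outcomes: Outcome[]): number {
        return outcomes.reduce((sum, o) => sum + o.probability * o.utility, 0);
      }

      // "Convenient" calculus: only the outcomes we enjoy thinking about.
      const convenient: Outcome[] = [
        { description: "post raises awareness", probability: 0.6, utility: 10 },
        { description: "post is ignored",       probability: 0.4, utility: 0 },
      ];

      // More comprehensive calculus for the same action: the inconvenient
      // outcomes are included, and the probabilities are re-estimated.
      const comprehensive: Outcome[] = [
        { description: "raises awareness, no one harmed",   probability: 0.45, utility: 10 },
        { description: "post is ignored",                   probability: 0.30, utility: 0 },
        { description: "subject feels exposed/stigmatized", probability: 0.20, utility: -15 },
        { description: "photo reused out of context",       probability: 0.05, utility: -40 },
      ];

      console.log(expectedUtility(convenient));    // ≈ 6    -> looks clearly worth doing
      console.log(expectedUtility(comprehensive)); // ≈ -0.5 -> the evaluation flips
      ```

      The convenient version is easier to compute and comes out positive; the version that includes the outcomes pernicious ignorance lets us skip comes out negative for the very same action.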

  2. Jan 2026
    1. 3.2.3. Corrupted bots
       As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong in the Vice article "How to Make a Bot That Isn't Racist."

       3.2.4. Registered vs. Unregistered bots
       Most social media platforms provide an official way to connect a bot to their platform (called an Application Programming Interface, or API). This lets the social media platform track these registered bots and provide certain capabilities and limits to the bots (like a rate limit on how often the bot can post). But when some people want to get around these limits, they can make bots that don't use this official API, but instead open the website or app and have a program perform clicks and scrolls the way a human might. These are much harder for social media platforms to track, and platforms normally ban accounts doing this if they can figure out that is what is happening.

       3.2.5. Fake Bots
       We also would like to point out that there are fake bots as well, that is, real people pretending their work is the result of a bot. For example, TikTok user Curt Skelton posted a video claiming that he was actually an AI-generated / deepfake character:

      This passage works on three levels to remind us that "bots" do not equate to intelligence or objectivity. Tay's "contamination" shows that a machine-learning conversational bot absorbs the platform's biases as "language norms": when the training data comes from an environment full of provocation and racism, the system becomes an amplifier of prejudice, and the problem is not just a technical failure but a governance failure of treating a public platform as a safe training ground. The "registered vs. unregistered bots" distinction reveals the cat-and-mouse game between platform regulation and countermeasures: API restrictions act as rules and guardrails, while simulated clicks that bypass the API disguise automation as "human" and make it harder for the platform to track, showing that visibility and controllability are themselves forms of power (the sketch below contrasts the two routes). Finally, the "fake bots" point to another kind of deception: humans pretending to be AI to gain traffic, a sense of mystery, or immunity from responsibility. This blurs the line of "authenticity" and reminds us that in the attention economy, technological identity can itself be used for performance and marketing.
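
      The registered/unregistered distinction is concrete enough to sketch. The two fragments below are illustrative only: the endpoint, token, page URL, and selectors are hypothetical stand-ins, not any real platform's interface. A registered bot announces itself through the official API and is subject to the platform's rate limits; an unregistered bot drives a real browser (here via the Puppeteer library) so its clicks and keystrokes look like a person's.

      ```ts
      import puppeteer from "puppeteer";

      // (a) Registered bot: posts through the platform's official API.
      // The endpoint and token are hypothetical placeholders.
      async function postViaApi(text: string): Promise<void> {
        const res = await fetch("https://api.example-platform.com/v1/posts", {
          method: "POST",
          headers: {
            Authorization: `Bearer ${process.env.BOT_TOKEN}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ text }),
        });
        if (res.status === 429) {
          // The platform knows this is a bot, so it can enforce a rate limit.
          console.log("Rate limited; backing off.");
        }
      }

      // (b) Unregistered bot: no API at all. It opens the real website and
      // performs the clicks and typing a human would, which is much harder
      // for the platform to distinguish from an ordinary user.
      async function postViaBrowser(text: string): Promise<void> {
        const browser = await puppeteer.launch();
        const page = await browser.newPage();
        await page.goto("https://www.example-platform.com/compose"); // placeholder URL
        await page.type("#post-box", text); // selectors are assumptions
        await page.click("#post-button");
        await browser.close();
      }
      ```

      The asymmetry the annotation describes is visible in the code: route (a) is trackable and governable by design, while route (b) can only be detected after the fact from behavioral signals, which is why platforms ban such accounts when they manage to identify them.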

    2. On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers, or making fake crowds that appear to support a cause (called astroturfing). As one example, in 2016, Rian Johnson, who was in the middle of directing Star Wars: The Last Jedi, got bombarded by tweets that all originated in Russia (likely making at least some use of bots):

       "I've gotten a rush of tweets – coordinated tweets. Like, somewhere else on the internet there's like a group on the internet saying, 'Okay, everyone tweet Rian Johnson.' All from Russian accounts, and all begging me not to kill Admiral Hux in this movie." From: https://www.imdb.com/video/vi3962091545 (start at 7:49)

       After Star Wars: The Last Jedi was released, there was a significant online backlash. When a researcher looked into it: [Morten] Bay found that 50.9% of people tweeting negatively about "The Last Jedi" were "politically motivated or not even human," with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn't even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place. https://www.indiewire.com/2018/10/star-wars-last-jedi-backlash-study-russian-trolls-rian-johnson-1202008645/

       Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable. For example, the "Gender Pay Gap Bot" on Twitter is connected to a database on gender pay gaps for companies in the UK. Then, on International Women's Day, the bot automatically finds when any of those companies make an official tweet celebrating International Women's Day, and it quote tweets it with the pay gap at that company:

      This passage shifts the discussion of "bots" from neutral tools back into the context of power and manipulation: they can automate not only the dissemination of information but also the creation of false impressions of public opinion (fake followers, astroturfing) and targeted harassment (the coordinated attack on Rian Johnson). More notably, the research found that a large share of the negative tweets were "politically motivated or not even human," meaning the anger, ridicule, and boycotts we see online may not be a natural aggregation of genuine public opinion but an emotional landscape that has been organized, amplified, and fabricated. Finally, the "Gender Pay Gap Bot" provides a counterexample: this "adversarial" automation can be used for public accountability. By forcibly juxtaposing corporate holiday statements with structural data (pay gaps), it forces people to see the reality obscured by public-relations language; a sketch of how such a bot could work follows below. The key question is not whether "bots are good or bad," but who uses them and whose perceptions and interests they are used to shape.
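
      As a rough sketch of how an accountability bot in the style of the Gender Pay Gap Bot could work (the data loading, field names, wording, and quoteTweet helper below are hypothetical stand-ins, not the actual bot's code): it keeps a table of companies and their reported pay gaps, watches incoming official tweets, and replies to International Women's Day posts by quoting them alongside the company's own numbers.

      ```ts
      // Illustrative sketch only; all names and data below are placeholders.

      interface PayGapRecord {
        companyHandle: string;    // e.g. "@ExampleCorpUK"
        medianGapPercent: number; // median hourly pay gap, women vs. men
      }

      interface Tweet {
        id: string;
        authorHandle: string;
        text: string;
      }

      function loadUkPayGapData(): PayGapRecord[] {
        // Placeholder: the real bot draws on the UK's published pay gap reports.
        return [{ companyHandle: "@ExampleCorpUK", medianGapPercent: 12.3 }];
      }

      function quoteTweet(tweetId: string, comment: string): void {
        // Placeholder for the platform call that posts a quote tweet.
        console.log(`Quote-tweeting ${tweetId}: ${comment}`);
      }

      const payGapTable = loadUkPayGapData();

      function isIwdCelebration(tweet: Tweet): boolean {
        return /international women'?s day|#iwd\b/i.test(tweet.text);
      }

      // If an official account in the table celebrates International Women's Day,
      // juxtapose the celebration with the company's own reported numbers.
      function handleIncomingTweet(tweet: Tweet): void {
        const record = payGapTable.find((r) => r.companyHandle === tweet.authorHandle);
        if (record && isIwdCelebration(tweet)) {
          quoteTweet(
            tweet.id,
            `In this organisation, women's median hourly pay is ` +
              `${record.medianGapPercent}% lower than men's.`
          );
        }
      }

      handleIncomingTweet({
        id: "1",
        authorHandle: "@ExampleCorpUK",
        text: "Happy International Women's Day to all our colleagues! #IWD",
      });
      ```

      The design choice worth noticing is that the bot adds no new claims of its own; it only forces two things the company has already published (a celebratory statement and a statutory disclosure) into the same frame.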

    1. There is no right or wrong. Nothing matters.

      This statement sounds very "radical," as if it could free one from stress, but I think it can easily become a form of escapism: when we say "it doesn't matter," we are often avoiding things we actually care about. Even if there are no absolutely uniform "right answers" in the world, we still make choices every day based on relationships, consequences, and responsibilities, and those choices themselves demonstrate that things do matter to us. So nihilism can remind me not to be bound by external standards, but it cannot serve as an excuse to avoid responsibility.

    2. “A person is a person through other people.”

      This statement made me think: much of our sense of "who I am" doesn't appear out of thin air but is shaped within relationships. For example, when we are respected and trusted, we are more likely to become confident and kind; when we are ignored or hurt, we may become more withdrawn. This isn't a demand that "you must please everyone," but a reminder to weigh one more thing when making decisions: will my action help the other person feel more like "a complete person"? If it leads to greater dignity and recognition for both parties, it is often the better choice.