9 Matching Annotations
  1. Last 7 days
    1. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I think infinite scroll is a classic example of “friction-reducing design,” but its impact is actually a bit scary. In the past, with formats like search results, you had to click “next page” after finishing one page. While this action was a bit cumbersome, it provided a pause point, reminding you, “Should I stop now?” Infinite scroll completely removes that barrier. You just keep scrolling down, and content automatically loads, making you completely unaware of how long you've been scrolling.

      I think this is also why scrolling through social media is so addictive: it's not because we genuinely want to look for that long, but because the design eliminates every opportunity to stop. A small code sketch contrasting the two designs follows this note.
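
      To make the difference concrete, here is a minimal Python sketch (my own illustration, not code from any real platform) contrasting a paged design, where the user must explicitly ask for the next page, with an infinite feed, where the next batch loads automatically and there is no natural stopping point.

      ```python
      # A toy comparison between paged results, which need an explicit
      # "next page" action, and an infinite feed with no built-in pause point.

      POSTS = [f"post {i}" for i in range(1, 101)]  # stand-in for a content feed
      PAGE_SIZE = 10

      def get_page(page_number):
          """Return one page of posts; the user must ask for the next one."""
          start = page_number * PAGE_SIZE
          return POSTS[start:start + PAGE_SIZE]

      def infinite_feed():
          """Yield posts continuously; the next batch loads automatically."""
          page_number = 0
          while True:
              batch = get_page(page_number)
              if not batch:        # a real platform would fetch more from a server here
                  break
              yield from batch     # no button press, no decision point
              page_number += 1

      # Paged design: finishing a page is a natural moment to stop.
      print(get_page(0))

      # Infinite-scroll design: content just keeps arriving.
      for post in infinite_feed():
          print(post)
      ```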

    2. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press to close the ad incredibly small and hard to hit, making it harder to leave their ad:

      I find this example particularly relatable because I frequently encounter this issue myself when playing mobile games: the “X” button for closing ads is designed to be super tiny, making it incredibly difficult to tap. Sometimes you accidentally click into the ad page instead. On the surface, this seems like a minor design detail, but it's actually a deliberate tactic to increase friction, making it harder for users to leave the ad. For advertisers and platforms, this keeps users engaged longer and even generates accidental clicks, boosting revenue. But from the user's perspective, this design is downright annoying since it exploits our attention and clumsy interactions to “force” us into unwanted actions. I believe this goes beyond ordinary design; it's manipulative design.

  2. Jan 2026
    1. When we think about how data is used online, the idea of a utility calculus can help remind us to check whether we’ve really got enough data about how all parties might be impacted by some actions. Even if you are not a utilitarian, it is good to remind ourselves to check that we’ve got all the data before doing our calculus. This can be especially important when there is a strong social trend to overlook certain data. Such trends, which philosophers call ‘pernicious ignorance’, enable us to overlook inconvenient bits of data to make our utility calculus easier or more likely to turn out in favor of a preferred course of action.

      When I think about how data is used on the web, the concept of a utility calculus is genuinely useful, because it reminds us to ask whether we have really seen all the data before deciding that something does "more good than harm." Often we only work with the information we happen to have, and the missing data may be the most important part. I also agree with the text about pernicious ignorance: in reality it is easy for people to ignore data that makes them uncomfortable or that doesn't fit their position, so the result ends up supporting the choice they already wanted to make. This is especially true with social media and algorithmic recommendations, where what we see has already been filtered, so if we don't ask "what's missing?" our utility calculations can end up biased. The toy calculation after this note sketches how leaving out one piece of data can flip the result.
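
      Here is a toy utility calculus in Python, with entirely made-up numbers, only to show how quietly dropping one inconvenient piece of data can flip which option looks "net beneficial."

      ```python
      # A toy utility calculus with made-up numbers, illustrating how overlooking
      # one inconvenient piece of data can flip the apparent result.

      def total_utility(impacts):
          """Sum estimated benefits (+) and harms (-) across everyone affected."""
          return sum(impacts.values())

      # Hypothetical action: a platform sells detailed user data to advertisers.
      impacts = {
          "platform revenue":        +8,
          "advertiser targeting":    +5,
          "user convenience":        +1,
          "user privacy":            -9,
          "safety of at-risk users": -7,  # the inconvenient data that gets overlooked
      }

      # "Pernicious ignorance": the same calculation with that data quietly dropped.
      overlooked = {k: v for k, v in impacts.items() if k != "safety of at-risk users"}

      print("with all the data:       ", total_utility(impacts))     # -2 -> net harmful
      print("with the data overlooked:", total_utility(overlooked))  # +5 -> looks net beneficial
      ```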

    1. Gender: Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms:

      I found that many websites offer very different gender options, and I think gender is genuinely hard data to collect well. Often the way a person understands their own gender isn't even among the choices the site lists. To be fair and treat each user equally, we need to provide options that actually fit them (a small sketch of one possible approach follows this note).
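
      The sketch below is my own illustration, not something from the text: one way a form could avoid forcing users into a short fixed list is to offer a few common options plus "prefer not to say" and a free-text self-description. The option list and field names are assumptions for the example.

      ```python
      # A sketch of a gender field that doesn't force users into a fixed list:
      # suggested options, "prefer not to say", and a free-text self-description.

      from dataclasses import dataclass
      from typing import Optional

      SUGGESTED_OPTIONS = ["woman", "man", "non-binary", "prefer not to say"]

      @dataclass
      class GenderResponse:
          choice: Optional[str] = None            # one of SUGGESTED_OPTIONS, or None
          self_description: Optional[str] = None  # free text if no option fits

      def record_gender(choice=None, self_description=None):
          """Store whatever the user actually told us, without forcing a category."""
          if choice is not None and choice not in SUGGESTED_OPTIONS:
              raise ValueError("unknown option; use self_description for anything else")
          return GenderResponse(choice=choice, self_description=self_description)

      print(record_gender(choice="non-binary"))
      print(record_gender(self_description="genderfluid"))
      print(record_gender(choice="prefer not to say"))
      ```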

    1. Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable. For example, the “Gender Pay Gap Bot” on Twitter is connected to a database of gender pay gaps for companies in the UK. Then on International Women’s Day, the bot automatically detects when any of those companies makes an official tweet celebrating International Women’s Day and quote-tweets it with the pay gap at that company:

      It is "confrontational", but it has a social justice purpose - to use automation to counter the "pseudo-equality propaganda" of corporate marketing and bring the real structural problem (the wage gap) to the public. This example shows that some antagonistic bots can instead become tools for monitoring power.

    2. Bots might have significant limits on how helpful they are, such as tech support bots you might have had frustrating experiences with on various websites. 3.2.2. Antagonistic bots: On the other hand, some bots are made with the intention of harming, countering, or deceiving others.

      The "bot" itself is not good or bad, but depends on what it is designed for and how the rules of the platform constrain it. For example, friendly bots (automatic captioning, vaccine progress, red panda images) essentially improve the efficiency of information acquisition and enhance the user experience; antagonistic bots (spam, fake fans, astroturfing), however, can create false public opinion and make people think that "many people support/oppose a certain opinion", which directly affects public judgment

    1. Confucianism: Be and become an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful of parents, elders, and authorities; caring for children and the young; generous to family and others). These traits are typically expressed and achieved through ceremonies and rituals (including sacrifices to ancestors, music, and tea drinking), producing a harmonious society. Key figures: Confucius, ~500 BCE, China; Mengzi, ~350 BCE, China; Xunzi, ~300 BCE, China. Taoism: Act with unforced, natural action, in harmony with the natural cycles of the universe; trying to force things to happen is likely to backfire. Rejects the Confucian focus on ceremony and ritual, favoring spontaneity and playfulness instead. Like water (soft and yielding), which over time can cut through rock. Key figures: Laozi, ~500 BCE, China; Zhuangzi, ~300 BCE, China.

      I thought it was interesting that many of these frameworks try to describe what makes a “good person,” but they don’t always agree about what that actually looks like. For example, Confucianism emphasizes rituals and social roles, while Taoism encourages doing less and letting things unfold naturally. Reading them side-by-side made me realize that ethical behavior can depend a lot on what a culture values, not just on universal rules.

    1. We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look.

      I think this idea shows how social media can distort people's lives. When all we see is mostly good news, filters, and edited photos, it's easy to compare ourselves to something that was never real. Over time, this can affect our self-esteem and our expectations of what is "normal". This reminds me that we often forget that social media is more like a highlight reel than real life.

    1. Platforms can be minimalist, like Yo, which only lets you say “yo” to people and nothing else. Platforms can also be tailored for specific groups of people, like a social media platform for low-income blind people in India.

      I think it was interesting to see a minimalist platform like Yo placed alongside a platform built specifically for low-income blind people. The former looks "simple and easy to use," but having so few features means it can't do much. Specialized platforms, on the other hand, are more complex but genuinely help the people who need them most. It made me think: platform design really depends on who you want to serve.