7 Matching Annotations
  1. Last 7 days
    1. 2003 saw the launch of several popular social networking services: Friendster, Myspace, and LinkedIn. These were websites whose primary purpose was to build personal profiles, create a network of connections with other people, and communicate with them. Facebook was launched in 2004 and soon put most of its competitors out of business, while YouTube, launched in 2005, became a different sort of social networking site built around video.

      This section helped me understand how Web 2.0 fundamentally changed social media from static websites into interactive platforms centered around user profiles and ongoing updates. These early design choices still shape how users interact with feeds today and help explain why engagement and visibility became so important later on.

    1. Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      This section helped me understand how early social media platforms were not originally designed for large-scale influence or automation, but rather for small communities and personal connection. Seeing how features like feeds, likes, and sharing evolved over time makes it clear how design decisions made early on still shape user behavior and power dynamics today.

  2. Jan 2026
    1. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations).

      The “utility calculus” framing is helpful because it makes clear that ethical judgment depends on what data we choose to count and whose outcomes we include. Online, “pernicious ignorance” can look like focusing on likes/engagement while ignoring downstream harms (e.g., non-consensual images, harassment, or impacts on people we don’t personally know), which makes the calculation feel easier but morally distorted.

    1. Thus, when designers of social media systems make decisions about how data will be saved and what constraints will be put on the data, they are making decisions about who will get a better experience. Based on these decisions, some people will fit naturally into the data system, while others will have to put in extra work to make themselves fit, and others will have to modify themselves or misrepresent themselves to fit into the system.

      When a platform sets data constraints, its designers are not only making "technical" choices; they are deciding who counts as the default user. The example of the address form shows how people outside the hypothetical specification must either do additional work or eventually distort themselves and become "bad data". The system may later treat the mismatch as a user error rather than a design failure.

    1. This means that media, which includes painting, movies, books, speech, songs, dance, etc., all communicate in some way, and thus are social. And every social thing humans do is done through various mediums. So, for example, a war is enacted through the mediums of speech (e.g., threats, treaties, battle plans), coordinated movements, clothing (uniforms), and, of course, the mediums of weapons and violence.

      The definition of bots in this chapter highlights that automation exists on a spectrum rather than as a simple bot vs. human distinction. I found it interesting that many accounts we interact with daily may be partially automated, which challenges the assumption that bots are always deceptive or malicious. This makes me think that ethical concerns should focus more on transparency and intent, not just whether automation is involved.

    1. As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong in Vice's How to Make a Bot That Isn’t Racist

      The discussion of bots influencing public opinion raises important ethical questions about power and accountability. Even if a bot spreads accurate information, the scale and speed of automation can still distort public discourse. This suggests that ethical evaluation of bots should consider not only content accuracy but also their impact on human decision-making and democratic processes.