8 Matching Annotations
  1. Jan 2026
    1. In his television program, Mr. Rogers wanted all children to feel cared for and loved. To do this, he intentionally fostered a parasocial relationship with the children in his audience (he called them his “television friends”): “I give an expression of care every day to each child, to help him realize that he is unique. I end the program by saying, ‘You’ve made this day a special day, by just your being you. There’s no person in the whole world like you, and I like you, just the way you are.’” (Fred Rogers, requesting funds for PBS at the US Senate in 1969). Now, as children, my sister and I (Kyle) watched this program, felt the effects of what Fred Rogers was doing, and responded to it in different ways. I asked my mom to help me send him a letter asking if he was real, and I got a letter back explaining that he was indeed a real person:

      The section on parasocial relationships helped me understand why social media makes creators feel like “friends,” even when the relationship is one-sided. The Mr. Rogers example shows that parasocial relationships aren’t automatically bad, but they can become unethical if the creator isn’t clear about the limits of the relationship. This made me think about influencers today who intentionally blur those boundaries to seem more relatable.

    1. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal, only to have it revealed as untrustworthy, is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are heightened even further when we find we have been duped simply for someone else’s amusement.

    2. Early in the days of YouTube, one YouTube channel (lonelygirl15) started to release vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress.

      One thing that stood out to me was the idea that people react so strongly to inauthenticity because we hate being duped, not just misinformed. The example of lonelygirl15 shows how upsetting it feels when the type of connection being offered (a real person’s life) doesn’t match what’s actually happening. This made me realize that authenticity isn’t just about truth, but about trust and expectations in relationships online.

    1. Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts. Books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread.

      I found it interesting that things like graffiti and handwritten notes were considered early forms of social media. It shows that the need to share information and rumors existed long before modern technology.

    1. 8Chan (now called 8Kun) is an image-sharing bulletin board site that was started in 2013. It has been host to white-supremacist, neo-Nazi, and other hate content. 8Chan has had trouble finding companies to host its servers and internet registration due to the presence of child sexual abuse material (CSAM) and to its role as the place where various mass shooters spread their hateful manifestos. 8Chan is also the source and home of the false conspiracy theory QAnon.

      Reading about 8Chan made me realize how dangerous online platforms can become without moderation. It is disturbing that hate groups and mass shooters used these spaces to spread their ideas.

    1. Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define the color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).
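
      To make the highlighted pixel-grid description concrete, here is a minimal sketch in Python; the tiny two-row image and its color values are invented for illustration, not taken from the text:

      ```python
      # An image as a list of rows, where each row is a list of pixels,
      # and each pixel is three numbers: (red, green, blue).
      red = (255, 0, 0)
      green = (0, 255, 0)
      blue = (0, 0, 255)

      image = [
          [red, green, blue],   # first row of pixels
          [blue, red, green],   # second row of pixels
      ]

      # Look up one pixel: row index first, then column index.
      print(image[0][2])  # (0, 0, 255) -- the blue pixel at row 0, column 2
      ```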

      The discussion of dates and time zones shows that data is not always as objective as it appears. A label like “posted yesterday” can mean very different things depending on location, which could affect automated systems or data analysis outcomes. This makes me realize how technical design decisions can quietly influence ethical judgments and interpretations online.
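
      The “posted yesterday” point can be shown in a few lines of Python; the timestamp and the two zones below are arbitrary choices for the sketch:

      ```python
      # One UTC timestamp, read in two time zones: the local calendar
      # date differs, so "yesterday" depends on where the reader is.
      from datetime import datetime, timezone
      from zoneinfo import ZoneInfo  # Python 3.9+

      posted = datetime(2026, 1, 15, 2, 30, tzinfo=timezone.utc)  # made-up posting time

      for zone in ["America/Los_Angeles", "Asia/Tokyo"]:
          local = posted.astimezone(ZoneInfo(zone))
          print(zone, local.date())
      # America/Los_Angeles 2026-01-14  (still the 14th locally)
      # Asia/Tokyo 2026-01-15           (already the 15th locally)
      ```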

    1. Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata).

      What surprised me is how much information is classified as metadata rather than data. While the tweet text and images feel like the main content, metadata such as time, user identity, and engagement numbers can be even more powerful when analyzing behavior at scale. This raises ethical concerns because users may not realize how much information about them is being collected and interpreted beyond what they intentionally post.
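
      A hypothetical sketch of the data/metadata split described above, using a tweet-like post; every field name and value here is invented, not drawn from any real platform’s API:

      ```python
      # The "main" data is the post content; everything describing it
      # (who, when, how much engagement) is metadata about that data.
      post = {
          "text": "Good morning!",              # the main piece of data
          "metadata": {
              "author": "@example_user",        # invented account name
              "posted_at": "2026-01-15T02:30:00Z",
              "likes": 12,
              "retweets": 3,
          },
      }

      # Analysis at scale often leans on the metadata, not the text:
      print(post["metadata"]["posted_at"], post["metadata"]["likes"])
      ```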

    1. Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers.

      The idea that responsibility becomes divided among programmers, operators, and the bot’s own actions is really compelling. If something harmful happens, we can’t easily point to one responsible party. It feels similar to algorithmic decision-making today, where there is no single accountable human.
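
      A minimal, hypothetical sketch of that separation: one person writes the bot’s behavior, while a different person decides when and how often to run it. Nothing below uses a real platform API; all names are invented:

      ```python
      import time

      def bot_action():
          # Written by the programmer, who decides *what* the bot does.
          print("posting a scheduled update...")

      def run_on_schedule(action, interval_seconds, repeats):
          # Run by an operator, who decides *when* and *how often* it runs.
          for _ in range(repeats):
              action()
              time.sleep(interval_seconds)

      # The operator's schedule is independent of the programmer's
      # intentions encoded in bot_action -- the disconnect in the passage.
      run_on_schedule(bot_action, interval_seconds=60, repeats=3)
      ```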