16 Matching Annotations
  1. Last 7 days
    1. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL:

      The NPR article about Facebook’s 2021 data breach highlights a troubling issue: even when users’ personal information is exposed, companies may choose not to alert them. Facebook’s decision not to notify affected users suggests that companies sometimes do only what the law requires rather than what ethical responsibility demands. It made me think about how data privacy regulations still leave gaps that companies can exploit, and how users are often left unaware and unprotected after major breaches.

  2. social-media-ethics-automation.github.io
    1. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one

      I was surprised that even large companies sometimes store sensitive information like passwords insecurely. It made me realize that we often trust technology too much without questioning how safe it really is. Just because a company is big doesn’t mean it always handles user data properly. This section reminded me to be more critical and cautious when using online services.
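
      To make the “brute force” idea concrete, here is a minimal Python sketch of the kind of salted-hash storage the chapter describes. The function names and parameters are my own illustration, not code from the book or from any real site; it only shows why an attacker who steals the stored values still has to guess passwords one at a time.

      ```python
      # Toy illustration of salted password hashing (not a production scheme).
      # The database stores only (salt, digest), never the plaintext password.
      import hashlib
      import os
      import secrets

      def hash_password(password, salt=None):
          """Return (salt, digest) for storage; the plaintext is never saved."""
          salt = salt if salt is not None else os.urandom(16)
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
          return salt, digest

      def check_guess(guess, salt, stored_digest):
          """With only the stored values, an attacker must try guesses one by one."""
          _, digest = hash_password(guess, salt)
          return secrets.compare_digest(digest, stored_digest)

      salt, stored = hash_password("correct horse battery staple")
      print(check_guess("password123", salt, stored))                   # False
      print(check_guess("correct horse battery staple", salt, stored))  # True
      ```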

  3. social-media-ethics-automation.github.io
    1. Greg Miller. Researchers are tracking another pandemic, too—of coronavirus misinformation. Science, March 2020. URL

      The article talks about how false information about COVID-19 spreads like a virus. I learned that scientists are using tools usually used to study diseases to see how misinformation moves through social media. One part that stood out to me was how this false information can actually hurt people, for example by making them afraid of vaccines or less willing to follow health guidelines. I think this article is important because it shows that fake news isn’t just annoying; it can be dangerous, especially during a health crisis.

    1. By looking at enough data in enough different ways, you can find evidence for pretty much any conclusion you want.

      It reminded me of a time in high school when I was doing a group project on health trends. We found a strong correlation between energy drink sales and student stress levels, and at first, we were convinced one caused the other. But after talking to our teacher, we realized that both could actually be caused by something else—like exams. That experience taught me how easy it is to misread data. When I saw the chart about margarine consumption and divorce rates, it seemed silly at first, but it actually made a serious point. It showed how easy it is to find patterns that look convincing but have no real meaning. It made me think more critically about how data is used to support arguments, and how important it is to ask whether a connection truly makes sense.
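
      A quick way to see this point in action is to simulate it. The short Python script below uses entirely made-up random numbers (no real measurements) to compare many unrelated “trends” and report the strongest correlation it stumbles on; with enough comparisons, it almost always finds a pair that looks convincingly related even though nothing connects them.

      ```python
      # Toy demonstration: search enough unrelated data and a "pattern" appears.
      import random
      import statistics

      random.seed(0)

      def correlation(xs, ys):
          """Pearson correlation coefficient of two equal-length lists."""
          mx, my = statistics.mean(xs), statistics.mean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sum((x - mx) ** 2 for x in xs) ** 0.5
          sy = sum((y - my) ** 2 for y in ys) ** 0.5
          return cov / (sx * sy)

      # 50 completely unrelated "trends", each just 10 random yearly values.
      trends = [[random.random() for _ in range(10)] for _ in range(50)]

      # Compare every pair and keep the strongest-looking relationship.
      best = max(
          ((i, j, correlation(trends[i], trends[j]))
           for i in range(len(trends)) for j in range(i + 1, len(trends))),
          key=lambda item: abs(item[2]),
      )
      print(f"Trends {best[0]} and {best[1]} correlate at r = {best[2]:.2f}")
      # Across 1,225 comparisons, an |r| above 0.8 is likely despite pure chance.
      ```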

  4. social-media-ethics-automation.github.io
    1. Quinn Norton. Anonymous 101: Introduction to the Lulz. Wired, November 2011. URL:

      The Wired article Anonymous 101 provides a detailed look into the origins and inner workings of the Anonymous group, which is often linked with trolling culture. One detail I found especially interesting is how their use of humor, memes, and chaos isn't just for laughs—it also reflects a kind of protest or activism. This connects to the chapter’s point about trolling evolving from simple pranks to something more organized and community-based, like on 4chan.

    1. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS).

      Reading about how MUDs evolved into MMORPGs made me think about how much online gaming has changed over time. I remember playing games like World of Warcraft when I was younger, and it’s interesting to realize that those games came from such simple text-based beginnings. It’s kind of mind-blowing how far we've come in terms of game design and online interaction.

  5. Apr 2025
  6. social-media-ethics-automation.github.io
    1. Zoe Schiffer. She created a fake Twitter persona — then she killed it with COVID-19. The Verge, September 2020.

      This article tells the story of how a Twitter account pretending to be a female physician of color gained trust online by posting supportive messages about healthcare and science. Eventually, it was revealed the person behind the account was not who they claimed to be. One important detail was that the account had even faked a COVID death to gain sympathy. This shows how online “authenticity” can be manipulated, and how hard it is to verify who someone really is online—even when they seem supportive or professional.

  7. social-media-ethics-automation.github.io
    1. Authenticity in connection requires honesty about who we are and what we’re doing

      This quote made me think about how I sometimes change how I act just to be liked. For example, I might pretend everything’s fine when I’m actually not, or act interested in something just to fit in. But doing that is tiring, and it doesn’t lead to real friendships. This quote reminded me that people can only connect with the real me if I’m willing to show who I really am. If I always hide, then the connections I make won’t be real either.

  8. social-media-ethics-automation.github.io
    1. Nicholas Jackson and Alexis C. Madrigal. The Rise and Fall of Myspace

      The authors explain how MySpace’s early success came from giving users a high level of customization and creative freedom, but those same features later became part of its downfall. A detail that stood out to me was how MySpace pages became messy and hard to navigate, which eventually pushed users toward cleaner platforms like Facebook. This source helps me understand how user experience and design play a major role in whether a social media platform survives.

    1. With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.

      This sentence reminds me of how we now use platforms like Canvas to write discussion posts and share reflections. As a non-native English speaker, I really appreciate the chance to write and revise my thoughts before sharing. It gives me more confidence to participate. Blog-style platforms made it easier for people like me to join conversations in public spaces.

  9. social-media-ethics-automation.github.io
    1. Manuela López Restrepo. How the porn bots took over Twitter. NPR, March 2024. URL:

      I read the NPR article about bots on Twitter, and honestly I wasn’t too surprised that so many accounts aren’t real. I’ve noticed weird replies and fake-looking profiles before, but I didn’t realize how hard it is to actually tell what’s real anymore. It’s kind of frustrating, especially when people use Twitter to get news or serious info. Makes me think twice about trusting things I see online, even if the account looks “official.”

    1. What country are you from?

      This section made me realize how often we oversimplify complex real-life situations just to make data easier to handle. For example, the question “What country are you from?” really resonated with me. I’m from Shenzhen, China, but I’ve also spent time living in different places, and sometimes it feels hard to give a single answer. Are we talking about where I was born, where I grew up, or where I currently live? Real life is more complicated than what a data form can capture.
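
      As a small illustration of how a data schema forces that kind of flattening, here is a hypothetical Python sketch. The field names and values are my own, not anything from the book; the point is only that a single `country` field makes several different questions look like one.

      ```python
      # Hypothetical sketch: how a single form field flattens a complicated answer.
      from dataclasses import dataclass

      @dataclass
      class SimpleProfile:
          country: str  # one box for a question with several possible answers

      @dataclass
      class RicherProfile:
          country_of_birth: str
          country_grew_up_in: str
          country_of_residence: str

      # The single-field version forces a choice among meanings of "from".
      flat = SimpleProfile(country="China")

      # The richer version captures more, though it is still a simplification.
      fuller = RicherProfile(
          country_of_birth="China",
          country_grew_up_in="China",
          country_of_residence="(varies)",
      )
      print(flat)
      print(fuller)
      ```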

  10. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      This article explains how developers could’ve avoided what happened with Tay by using tools that already existed, like slur filters made by other bot creators. One thing that really stood out to me was how Microsoft ignored those tools, even though they were free and easy to access. It made me realize that sometimes the problem isn’t that we don’t know how to prevent these issues—it’s that people don’t take the time to use what’s already out there.

    1. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day.

      Reading about the Tay bot honestly shocked me. I knew AI could reflect biases in data, but I didn’t realize it could go downhill that fast. It reminded me of a time when I posted something pretty neutral online, and it somehow attracted a bunch of toxic or sarcastic replies. It’s weird how fast online spaces can turn negative—and Tay basically just absorbed all of that without any judgment. This really made me think about how important it is for developers to not just focus on what AI can do, but also what environments we’re placing it into. Without the right safeguards, even something meant to be harmless can turn harmful really fast.
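
      Putting the Vice article’s point together with the Tay story, here is a hypothetical Python sketch of the kind of safeguard it describes: a bot that only “learns” a phrase after screening it against a blocklist. The function names and the tiny placeholder blocklist are my own invention; real filters, like the ones other bot makers had already shared, are far larger and more carefully maintained.

      ```python
      # Hypothetical sketch of a learning bot with a blocklist filter.
      # "blocked_word_1" etc. are placeholders; a real blocklist would hold actual slurs.
      BLOCKLIST = {"blocked_word_1", "blocked_word_2"}

      learned_phrases = []

      def is_safe(text):
          """Reject any phrase containing a blocked word."""
          words = text.lower().split()
          return not any(word in BLOCKLIST for word in words)

      def learn_from_reply(reply):
          """Only remember replies that pass the filter; drop everything else."""
          if is_safe(reply):
              learned_phrases.append(reply)

      for reply in ["have a nice day", "blocked_word_1 plus something hateful"]:
          learn_from_reply(reply)

      print(learned_phrases)  # only the harmless reply was learned
      ```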

    1. “A person is a person through other people.”

      The idea that “a person is a person through other people” made me think about how identity isn’t just individual, but shaped by our relationships. Reading about Ubuntu helped me understand how, especially in post-colonial Africa, this belief supported healing and unity through community.

    2. Being and becoming an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful to parents, elders and authorities, taking care of children and the young; generous to family and others).

      Growing up in China, I often followed traditions like showing respect to elders and participating in family rituals, but I didn’t think much about why. This reading made me reflect on how those habits were shaping my ideas of what it means to be a “good person”—not through rules, but through relationships and small everyday actions.