6 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Caroline Delbert. Some People Think 2+2=5, and They’re Right. Popular Mechanics, October 2023. URL: https://www.popularmechanics.com/science/math/a33547137/why-some-people-think-2-plus-2-equals-5/ (visited on 2023-11-24).

      This source caught my attention because the title is surprising, but it makes an important point. I think the article helps show that numbers are not always as simple or objective as they first appear. In real-world situations, the meaning of a number often depends on definitions, assumptions, and context. That connects strongly to this chapter, especially the discussion of how measuring Twitter bots depends on how people define what they are counting.

    2. We have to be aware that we are always making these simplifications, try to be clear about what simplifications we are making, and think through the ethical implications of the simplifications we are making.

      The sentence “all data is a simplification of reality” really stood out to me. I like this point because it reminds us that data is never just a perfect copy of the real world. The apple example was simple, but it clearly showed that counting something as “one” can hide important differences. I think this also connects strongly to the Twitter bot example, because the result depends a lot on how people define words like “user” or “spam bot.” This made me realize that when we look at data, we should not only ask whether it is correct, but also ask what has been simplified or left out.
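The point about definitions shaping a count can be sketched in a few lines of Python. Everything below (the accounts, the fields, the thresholds) is invented purely for illustration; it is not how any real platform classifies bots:

```python
# Hypothetical accounts: each is described by two made-up features.
accounts = [
    {"posts_per_day": 300, "follows_back": False},
    {"posts_per_day": 12,  "follows_back": True},
    {"posts_per_day": 150, "follows_back": True},
]

# Definition A: a "bot" is any account posting more than 100 times a day.
bots_a = [a for a in accounts if a["posts_per_day"] > 100]

# Definition B: a "bot" posts more than 100 times a day AND never follows back.
bots_b = [a for a in accounts
          if a["posts_per_day"] > 100 and not a["follows_back"]]

print(len(bots_a))  # → 2
print(len(bots_b))  # → 1
```

The same three accounts give two different "bot counts," so the reported percentage of bots is a product of the chosen definition, not just of the data.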

  3. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL:

      This source caught my attention because the title already suggests that bias in bots is a real design problem, not just a technical mistake. It connects to the chapter by reminding us that even simple program structures can still produce harmful outcomes if the data, rules, or assumptions behind them are biased.

    2. One of the most common events to program for is around time: We can also tell programs to wait for a period of time, or start at a given time.

      This part made me think about how simple scheduling can make a bot feel much more active and intentional, even when it is doing a very basic task. A bot that posts at regular times may look more “human” or organized, which also raises ethical questions about transparency and whether other users should know they are interacting with automation.
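The quoted idea of programming around time can be sketched minimally in Python using the standard library. The `post` function here is a hypothetical stand-in for a real social media API call, and the specific delays are arbitrary:

```python
import time
from datetime import datetime, timedelta

def post(message):
    # Hypothetical stub standing in for a real social media API call.
    print(f"[{datetime.now():%H:%M:%S}] {message}")

def seconds_until(target):
    # How long to sleep so the program "starts at a given time";
    # never negative, in case the target is already in the past.
    return max(0.0, (target - datetime.now()).total_seconds())

# Wait for a period of time between actions:
post("first post")
time.sleep(1)  # pause one second before the next action

# Start at a given time (here, two seconds from now):
target = datetime.now() + timedelta(seconds=2)
time.sleep(seconds_until(target))
post("post released at the target time")
```

Even this tiny script shows how a regular posting rhythm comes from nothing more than `time.sleep`, which is part of why a scheduled bot can look deliberate or "human" to other users.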

  4. Mar 2026
    1. Just because we use an ethics framework to look at a situation doesn’t mean that we will come out with a morally good conclusion.

      I was most interested in the part saying ethics frameworks do not guarantee moral goodness. I agree because people can use the same framework to defend very different actions. This reminded me that ethical thinking in technology is not just about picking one theory, but about staying critical, comparing perspectives, and asking who might be harmed by a decision.

    2. Focuses on responsibilities and relational issues in the relationships you are invested in.

      I want to add to the discussion of the ethics of care. The reading says it focuses on responsibilities in relationships, but I think it is also useful for social media because it highlights emotional harm that rule-based frameworks may miss. For example, even if a platform applies the same rule to everyone, it may still fail vulnerable users who need more protection and support.