12 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      I really appreciated the inclusion of Whitney Phillips’s “Internet Troll Sub-Culture’s Savage Spoofing of Mainstream Media” (Scientific American, 2015) — it gives a grounded, cultural perspective on trolling that complements the ethical discussions nicely. I wonder whether there are more recent studies that explore how trolling has shifted into coordinated “political influence” efforts that would update or extend Phillips’s observations.

    1. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith[1]. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain why it is wrong depends on the specific framework).

      I think the section on Trolling and Nihilism raises an important point that some trolling communities don’t just push boundaries playfully but actually seem to treat ethics as irrelevant. What struck me is how this can lead to a kind of moral vacuum where the harm to people or groups is dismissed as part of the “game” of disruption.

  3. social-media-ethics-automation.github.io
    1. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      The varianceexplained.org analysis of Trump’s tweets is a great example of how data science can uncover patterns behind online personas — it really connects to this chapter’s discussion of authenticity. I found it fascinating how statistical language analysis can expose who’s “really” speaking behind a public account, showing that authenticity online can sometimes be measured rather than just perceived.
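
      To make the comparison concrete, here is a minimal sketch of the kind of source-based word-frequency analysis that post performs (the original was done in R; the file name and column names below are my assumptions, not the actual dataset):

      ```python
      # A sketch of comparing word frequencies between the Android and iPhone
      # halves of one account. "trump_tweets.csv" and its "source"/"text"
      # columns are hypothetical stand-ins for the real data.
      import csv
      from collections import Counter

      def word_counts_by_source(path):
          """Count lowercase word frequencies separately for each tweet source."""
          counts = {"Android": Counter(), "iPhone": Counter()}
          with open(path, newline="", encoding="utf-8") as f:
              for row in csv.DictReader(f):
                  if row["source"] in counts:
                      counts[row["source"]].update(row["text"].lower().split())
          return counts

      def distinctive_words(counts, top=10):
          """Words whose relative frequency skews most toward the Android half
          (add-one smoothing avoids division by zero for unseen words)."""
          totals = {s: sum(c.values()) or 1 for s, c in counts.items()}
          vocab = set(counts["Android"]) | set(counts["iPhone"])
          ratio = {
              w: ((counts["Android"][w] + 1) / totals["Android"])
                 / ((counts["iPhone"][w] + 1) / totals["iPhone"])
              for w in vocab
          }
          return sorted(ratio, key=ratio.get, reverse=True)[:top]

      counts = word_counts_by_source("trump_tweets.csv")  # hypothetical file
      print(distinctive_words(counts))
      ```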

  4. social-media-ethics-automation.github.io
    1. We value authenticity because it has a deep connection to the way humans use social connections to manage our vulnerability and to protect ourselves from things that threaten us. When we form connections, it is like all our respective vulnerabilities get entangled and tied together. We depend on each other, so if you betray me I face a loss in wellbeing. But also, since you did that, now you face a loss in wellbeing, as I no longer have your back. That means that both of us have an incentive not to betray or take advantage of each other, for our mutual protection.

      I was struck by the tension in the chapter between authenticity as honesty and authenticity as the matching of presentation and interaction. The idea that social media personas can “feel” authentic even when they’re partly performed makes me wonder: in platforms driven by algorithms and engagement metrics, do we discourage real authenticity because performative authenticity “wins” more often?

  5. social-media-ethics-automation.github.io
    1. Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.

      It’s cool to see Tom Standage’s Writing on the Wall: Social Media — The First 2,000 Years cited here. His historical framing helps us see that social media isn’t entirely new, only evolved. I’d also suggest adding Shoshana Zuboff’s The Age of Surveillance Capitalism as a counterpoint source: it connects design history with power, data extraction, and economic incentives, deepening the discussion of how “design” choices embed commercial values.

    1. Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down.

      I appreciated the tension you raised around friction vs. frictionless design. It really made me think: on one hand, low friction feels like “good usability,” but as you note, design can deliberately add friction to push users toward more thoughtful behavior. I wonder: is there a risk that too much deliberate friction becomes paternalistic or manipulative (assuming users can’t be trusted)?
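
      As a toy illustration (entirely hypothetical, not any platform’s real API), deliberate friction might look like a confirmation step that fires only when a user reshares a link they never opened, in the spirit of “read before you retweet” prompts:

      ```python
      # A minimal sketch of deliberate design friction: confirm before
      # resharing an unread article. Function names and the `was_opened`
      # flag are invented for illustration.
      def confirm_reshare(post: dict, was_opened: bool) -> bool:
          """Add friction only when the user reshares without having read."""
          if was_opened:
              return True  # no friction: the user engaged with the content
          answer = input(f"You haven't opened '{post['title']}'. Share anyway? [y/N] ")
          return answer.strip().lower() == "y"

      post = {"title": "Headlines vs. reality"}
      if confirm_reshare(post, was_opened=False):
          print("Resharing:", post["title"])
      else:
          print("Reshare cancelled.")
      ```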

  6. Oct 2025
  7. social-media-ethics-automation.github.io
    1. Sasha Costanza-Chock. Design Justice: Community-Led Practices to Build the Worlds We Need. The MIT Press, 2020. ISBN 978-0-262-35686-2, 978-0-262-04345-8. URL: https://directory.doabooks.org/handle/20.500.12854/78577 (visited on 2023-12-15), doi:10.7551/mitpress/12255.001.0001.

      I’m glad to see Design Justice: Community-Led Practices to Build the Worlds We Need by Sasha Costanza-Chock in the bibliography. This work deeply connects design choices with power, oppression, and equity. It would be interesting to more explicitly surface how “data simplification” (in 4.2) is a design choice that can perpetuate injustice.

    1. As you can see in the apple example, any time we turn something into data, we are making a simplification.[1] If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc.

      I really appreciated how this section highlights that any dataset is already a selection, a simplification, and that those choices shape the story the data can tell. And I wonder: in practical terms, how might we audit or compare different simplification choices across platforms to surface which versions produce more (or less) ethical distortion?
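
      As a toy sketch of the point (my own example, not from the chapter), here are three datafications of the same apples, each of which discards different information:

      ```python
      # Each way of turning the same apples into data is a different lossy
      # simplification; what we can later ask depends on which one we chose.
      from dataclasses import dataclass

      @dataclass
      class Apple:
          variety: str
          grams: int
          bruised: bool

      apples = [
          Apple("Fuji", 180, False),
          Apple("Fuji", 150, True),
          Apple("Gala", 170, False),
      ]

      count = len(apples)                         # treats every apple as equivalent
      total_grams = sum(a.grams for a in apples)  # keeps mass, drops variety/condition

      by_variety: dict[str, int] = {}             # keeps variety, drops the rest
      for a in apples:
          by_variety[a.variety] = by_variety.get(a.variety, 0) + 1

      print(count, total_grams, by_variety)
      # Each output answers some questions and makes others ("how many are
      # bruised?") unanswerable; the chosen simplification shapes the story.
      ```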

  8. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      Jeong’s piece is clever because it both acknowledges that bias is baked into datasets and algorithms and still offers practical design strategies for dealing with the harms. I’d love to see people push further by testing or critiquing Jeong’s prescriptions in real bot designs: do they scale? Do they fail under adversarial inputs?
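
      One concrete strategy in that design space is a pre-publication blocklist check, in the spirit of the wordfilter approach this discussion often points to. This minimal sketch uses placeholder tokens and invented names, not Jeong’s actual prescriptions:

      ```python
      # Refuse to post bot output that matches a blocklist. Substring matching
      # (rather than whole-word) also catches simple obfuscations, which speaks
      # to the "adversarial inputs" worry above.
      BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens for a real curated list

      def is_postable(text: str) -> bool:
          lowered = text.lower()
          return not any(word in lowered for word in BLOCKLIST)

      def post_if_safe(text: str) -> None:
          if is_postable(text):
              print("posting:", text)   # stand-in for a real posting API call
          else:
              print("blocked:", text)   # hold for human review instead

      post_if_safe("hello world")        # posting: hello world
      post_if_safe("contains xxslur1!")  # blocked: contains xxslur1!
      ```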

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      I liked the donkey metaphor — it really drives home how bots (or other tools) can act without “understanding,” which loosens the connection between intention and consequence. It makes me wonder: when a bot does something harmful (even unintentionally), should we treat that more like an accident or a responsibility failure — and how much should we hold the creator, operator, or even the platform accountable?

  9. Sep 2025
    1. It might help to think about ethical frameworks as tools for seeing inside of a situation. In medicine, when doctors need to see what’s going on inside someone’s body, they have many different tools for looking in, depending on what they need to know. An x-ray, an ultrasound, and an MRI all show different information about what’s happening inside the body. A doctor chooses what tool to use based on what she needs to know. An x-ray is great for seeing what’s happening with bones, but isn’t particularly helpful for seeing how a fetus’s development is progressing.

      I really liked the metaphor of ethical frameworks as “tools” for seeing different aspects of a situation. It emphasizes that no single moral theory will show everything. One question I had: how do you choose which pair of frameworks to apply in a given scenario, and what criteria should guide that choice?

    1. Alternative Ethics

      One additional ethics framework worth including is Discourse Ethics (associated with Jürgen Habermas), which emphasizes the ethics of communication: acting only under norms that all affected could accept in rational discourse. It is especially helpful in social media settings because these platforms involve many stakeholders (users, platform designers, advertisers, and more) engaging through mediated communication. Discourse Ethics adds tools for evaluating whether moderation policies, recommendation algorithms, or community norms are acceptable to all affected, rather than simply optimizing outcomes or following abstract rules.