30 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Paul Billingham and Tom Parr. Enforcing social norms: The morality of public shaming. European J of Philosophy, 28(4):997–1016, December 2020. URL: https://onlinelibrary.wiley.com/doi/10.1111/ejop.12543 (visited on 2023-12-10), doi:10.1111/ejop.12543.

      I find their framework interesting: Billingham and Parr don't just dismiss shaming out of hand, but carefully analyze when and how it might be morally justified. Their conditions (like proportionality, necessity, respect for privacy, non-abusiveness, and reintegration) seem well suited to thinking about social media shaming.

    1. Truth and Reconciliation Commission. In South Africa, when the oppressive and violent racist apartheid [r16] system ended, Nelson Mandela and Desmond Tutu set up the [Truth and Reconciliation Commission](https://en.wikipedia.org/wiki/Truth_and_Reconciliation_Commission_(South_Africa)) [r17]. The commission gathered testimony from both victims and perpetrators of the violence and oppression of apartheid. We could also consider this, in part, a large-scale public shaming of apartheid and those who hurt others through it. Unlike the Nuremberg Trials, the Truth and Reconciliation Commission gave a path for forgiveness and amnesty to the perpetrators of violence who provided their testimony.

      I appreciate how it doesn’t shy away from the limits of reconciliation. By comparing the Nuremberg Trials with the South African Truth and Reconciliation Commission, it highlights how not all wrongdoing can be “fixed” in the same way — and sometimes punishment, not forgiveness, is what justice requires.

  3. Nov 2025
  4. social-media-ethics-automation.github.io
    1. Roni Jacobson. I’ve Had a Cyberstalker Since I Was 12. Wired, 2016. URL: https://www.wired.com/2016/02/ive-had-a-cyberstalker-since-i-was-12/ (visited on 2023-12-10).

      Jacobson’s work impressed me because it shows how harassment can become a kind of background radiation in someone’s life—persistent, invisible to outsiders, and extremely draining. What stood out most is how the cyberstalker’s behavior followed her across platforms and into adulthood, showing that online harassment isn’t always an isolated flare-up; sometimes it becomes a long-term pattern that shapes someone’s sense of safety.

    1. Because social media spaces are to some extent private spaces, the moderators of those spaces can ask someone to leave if they wish. A Facebook group may have a ‘policy’ listed in the group info, which spells out the conditions under which a person might be blocked from the group. As a Facebook user, I could decide that I don’t like the way someone is posting on my wall; I could block them, with or without warning, much as if I were asking a guest to leave my house.

      I find the framing morally compelling — it challenges the common notion that only “illegal” speech is worth regulating or worrying about. Harassment that skirts legality can still deeply damage people. It underscores the responsibility of platforms and communities to think beyond just what is legally prohibited, and consider what is socially or ethically harmful.

  5. social-media-ethics-automation.github.io
    1. Foldit. September 2023. Page Version ID: 1175905648. URL: https://en.wikipedia.org/w/index.php?title=Foldit&oldid=1175905648 (visited on 2023-12-08).

      Foldit is an interesting case because it shows that crowdsourcing isn’t just about distributing simple microtasks — sometimes the crowd can outperform algorithms on highly complex scientific problems. I love how Foldit turns protein-folding into a game, and how ordinary players (not trained biochemists!) have actually contributed to real scientific discoveries.

    1. When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren’t necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves.

      This defines crowdsourcing pretty broadly, but are there important ethical differences between volunteer crowdsourcing (like Wikipedia) and paid microtask platforms (like paid data labeling)? How should we think about those differences, especially in a course about ethics and automation?

  6. social-media-ethics-automation.github.io
    1. Sarah T. Roberts. Behind the Screen. Yale University Press, September 2021. URL: https://yalebooks.yale.edu/9780300261479/behind-the-screen (visited on 2023-12-08).

      I think the labor issues Roberts raises are under-recognized. As a user I rarely think about the human moderators behind the platforms I use. It made me more aware of the ethics of consumption: the “clean” feed I enjoy is enabled by invisible people working in difficult conditions. That leads me to a question: how much responsibility should platforms bear for these moderators’ mental health, and how transparent should they be with end users about who does this work?

    1. 15.1.2. Untrained Staff. If you are running your own site and suddenly realize you have a moderation problem, you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret.

      I wonder if the “trained vs. untrained” dichotomy is too neat: even trained moderators face huge grey areas and an emotional toll. Training helps, but it doesn’t eliminate the risk.

  7. social-media-ethics-automation.github.io
    1. Rick Paulas. What It Feels Like to Go Viral. Pacific Standard, June 2017. URL: https://psmag.com/economics/going-viral-is-like-doing-cartwheels-on-the-water-spout-of-a-giant-whale (visited on 2023-12-08).

      This article by Rick Paulas provides a visceral, first-hand feel for what it means to go “viral” in social media contexts—describing it as something like “doing cartwheels on the water-spout of a giant whale.” That kind of metaphor really brings home how thrilling yet unstable virality is: fun, exhilarating, but also out of control and potentially dangerous.

    1. Additionally, content can be copied by being screenshotted, or photoshopped. Text and images can be copied and reposted with modifications (like a poem about plums [l17]). And content in one form can be used to make new content in completely new forms, like this “Internet Drama” song whose lyrics are from messages sent back and forth between two people in a Facebook Marketplace:

      As someone who uses social platforms and watches how memes and posts spread, I’ve noticed that sometimes the version that goes viral isn’t the original but a mutated one (someone adds a caption, remixes it, or cross-posts it to another network). The chapter’s point that inheritance matters jumped out at me: once a variation exists and spreads with the change, future copies carry that change. It matches what happens when a tweet gets quote-tweeted and then everyone repeats the quote-tweet version rather than the original tweet.

  8. Oct 2025
  9. social-media-ethics-automation.github.io
    1. Petter Törnberg. How digital media drive affective polarization through partisan sorting. Proceedings of the National Academy of Sciences, 119(42):e2207159119, October 2022. URL: https://www.pnas.org/doi/10.1073/pnas.2207159119 (visited on 2023-12-07), doi:10.1073/pnas.2207159119.

      It struck me how relevant this paper is to the chapter’s point that recommendation algorithms don’t just serve content but shape what we see and how we interpret it. The study shows how digital media can drive affective polarization via “partisan sorting” — which nicely connects to the chapter’s warning that algorithms can deepen divisions by reinforcing “you vs them” dynamics.

    1. 11.1.2. Reflections# What experiences do you have of social media sites making particularly good recommendations for you? What experiences do you have of social media sites making particularly bad recommendations for you?

      In my own experience, I’ve seen both good and bad recommendations: one time a platform surfaced a deeply niche topic I had read about once and I found it fascinating; another time it repeatedly suggested things that were totally irrelevant and frankly annoying.

  10. social-media-ethics-automation.github.io
    1. How to ADHD. What is Executive Function and Why Do We Need it? March 2021. URL: https://www.youtube.com/watch?v=H4YIHrEu-TU (visited on 2023-12-07).

      It’s interesting how the video explores the idea that accessibility features aren’t just “extras” for a few users, but foundational to designing inclusive social-media environments. It challenged me to rethink how often accessibility is treated as an afterthought rather than baked in from the start.

    1. We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.

      I really appreciated how the chapter highlights different approaches to accessibility — making the environment work for all, adapting tools for users, and the burden often being placed on the user instead of the design. One question I have is: what’s a practical checklist or metric developers could use early in the UI design to shift from the “modifying the person” model to the “making the tool adapt” model?

    1. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      I found the inclusion of Bowman’s article about the 530 million-user breach striking — it grounds the discussion of privacy in real, large-scale harm rather than abstract theory. From my experience, seeing such breaches makes the “privacy isn’t just about secrets, it’s about control and trust” line hit home.

  11. social-media-ethics-automation.github.io
    1. When we use social media platforms though, we at least partially give up some of our privacy.

      I found it interesting how users often feel they’ve lost control over their data; it reminds me of the moments when I accept a “Cookie/Privacy” pop-up without really reading it, then later wonder how much the platform knows about my interests.

  12. social-media-ethics-automation.github.io
    1. Catherine Stinson. The Dark Past of Algorithms That Associate Appearance and Criminality. American Scientist, January 2021. URL: https://www.americanscientist.org/article/the-dark-past-of-algorithms-that-associate-appearance-and-criminality (visited on 2023-12-05).

      I found Catherine Stinson’s “The Dark Past of Algorithms That Associate Appearance and Criminality” especially compelling — it highlights how seemingly neutral data-mining efforts (for example facial recognition or risk scoring) embed deep historical biases and reinforce harmful associations. It makes me reflect: when we apply mining methods in social-media contexts, it’s not just about data quality but also which associations we’re willing to carry forward.

  13. social-media-ethics-automation.github.io
    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      I found the discussion of unintended versus intentional data poisoning especially striking — it reminded me of how a viral trend on a platform can distort a research survey in ways the authors likely never anticipated. One thing I’m wondering though: given that many social-media datasets are collected passively and opportunistically, how can researchers realistically detect when the data has already been poisoned by normal platform usage (rather than a malicious actor)?
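      One partial answer to my own question, sketched as a hypothetical: researchers can at least compare a dataset’s demographic mix against the baseline they expected when designing the survey, so a sudden skew (like the one the viral video caused) becomes visible. The function name, the threshold, and the age bands below are my own illustration, not anything from the chapter:

```python
from collections import Counter

def demographic_skew(responses, baseline, threshold=0.2):
    """Flag demographic groups whose observed share of responses deviates
    from an expected baseline share by more than `threshold` (absolute
    difference in proportion). `responses` is a list of group labels, one
    per survey response; `baseline` maps group label -> expected share."""
    counts = Counter(responses)
    total = len(responses)
    flagged = {}
    for group, expected in baseline.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > threshold:
            flagged[group] = (observed, expected)
    return flagged

# A survey designed to be roughly balanced across age bands, after a
# viral video floods it with one demographic:
baseline = {"18-24": 0.25, "25-34": 0.25, "35-49": 0.25, "50+": 0.25}
responses = ["18-24"] * 70 + ["25-34"] * 15 + ["35-49"] * 10 + ["50+"] * 5
print(demographic_skew(responses, baseline))  # only "18-24" is flagged
```

      Of course this only catches skew along dimensions the researchers thought to measure; poisoning along an unmeasured dimension would sail right through, which is part of why the passive-collection case worries me.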

  14. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      I really appreciated the inclusion of Whitney Phillips’s “Internet Troll Sub-Culture’s Savage Spoofing of Mainstream Media” (Scientific American, 2015) — it gives a grounded, cultural perspective on trolling that complements the ethical discussions nicely. I wonder whether there are more recent studies that explore how trolling has shifted into coordinated “political influence” efforts that would update or extend Phillips’s observations.

    1. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith[1]. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      I think the section on Trolling and Nihilism raises an important point that some trolling communities don’t just push boundaries playfully but actually seem to treat ethics as irrelevant. What struck me is how this can lead to a kind of moral vacuum where the harm to people or groups is dismissed as part of the “game” of disruption.

  15. social-media-ethics-automation.github.io
    1. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      The varianceexplained.org analysis of Trump’s tweets is a great example of how data science can uncover patterns behind online personas — it really connects to this chapter’s discussion of authenticity. I found it fascinating how statistical language analysis can expose who’s “really” speaking behind a public account, showing that authenticity online can sometimes be measured rather than just perceived.
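      The post’s actual analysis was done in R with more careful statistics (comparing word usage between the Android and iPhone halves of the account). Just to illustrate the core idea for myself, here is a toy sketch in Python; the function name and the crude smoothed frequency ratio are my own simplification, not the post’s method:

```python
from collections import Counter

def distinctive_words(texts_a, texts_b, top_n=3):
    """Return the words most over-represented in corpus A relative to
    corpus B, using a smoothed (add-one) frequency ratio. A toy stand-in
    for the log-odds comparison real stylometric analyses use."""
    counts_a = Counter(w for t in texts_a for w in t.lower().split())
    counts_b = Counter(w for t in texts_b for w in t.lower().split())
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)
    ratio = {
        w: ((counts_a[w] + 1) / (total_a + len(vocab)))
           / ((counts_b[w] + 1) / (total_b + len(vocab)))
        for w in vocab
    }
    return sorted(ratio, key=ratio.get, reverse=True)[:top_n]

# Made-up mini-corpora standing in for the two halves of one account:
angry = ["crooked media sad", "sad loser sad"]
staged = ["thank you iowa", "join me in ohio"]
print(distinctive_words(angry, staged, top_n=1))
# "sad" should rank as the most over-represented word in the first corpus
```

      Even this crude version shows why the approach works: the vocabulary split between the two device halves is stark enough that simple frequency comparisons surface it.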

  16. social-media-ethics-automation.github.io
    1. We value authenticity because it has a deep connection to the way humans use social connections to manage our vulnerability and to protect ourselves from things that threaten us. When we form connections, it is like all our respective vulnerabilities get entangled and tied together. We depend on each other, so if you betray me I face a loss in wellbeing. But also, since you did that, now you face a loss in wellbeing, as I no longer have your back. That means that both of us have an incentive not to betray or take advantage of each other, for our mutual protection.

      I was struck by the tension in the chapter between authenticity as honesty and authenticity as the matching of presentation and interaction. The idea that social media personas can “feel” authentic even when they’re partly performed makes me wonder: in platforms driven by algorithms and engagement metrics, do we discourage real authenticity because performative authenticity “wins” more often?

  17. social-media-ethics-automation.github.io
    1. Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.

      It’s cool to see Tom Standage’s Writing on the Wall: Social Media — The First 2,000 Years cited here. His historical framing helps us see that social media isn’t entirely new, only evolved. I’d also suggest adding Shoshana Zuboff’s The Age of Surveillance Capitalism as a counterpoint source: it connects design history with power, data extraction, and economic incentives, deepening the discussion of how “design” choices embed commercial values.

    1. Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down.

      I appreciated the tension you raised around friction vs. frictionless design — it really made me think, on one hand, low friction feels like “good usability,” but as you note, design can deliberately add friction to push users toward more thoughtful behavior. I wonder: is there a risk that too much deliberate friction becomes paternalistic or manipulative (assuming users can’t be trusted)?

  18. social-media-ethics-automation.github.io
    1. Sasha Costanza-Chock. Design Justice: Community-Led Practices to Build the Worlds We Need. The MIT Press, 2020. ISBN 978-0-262-35686-2, 978-0-262-04345-8. URL: https://directory.doabooks.org/handle/20.500.12854/78577 (visited on 2023-12-15), doi:10.7551/mitpress/12255.001.0001.

      I’m glad to see Design Justice: Community-Led Practices to Build the Worlds We Need by Sasha Costanza-Chock in the bibliography. This work deeply connects design choices with power, oppression, and equity. It would be interesting to more explicitly surface how “data simplification” (in 4.2) is a design choice that can perpetuate injustice.

    1. As you can see in the apple example, any time we turn something into data, we are making a simplification.[1] If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc.

      I really appreciated how it highlights that any dataset is already a selection, a simplification, and that those choices shape the story the data can tell. And I wonder: in practical terms, how might we audit or compare different simplification choices across platforms to surface which versions produce more (or less) ethical distortion?

  19. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      It’s clever because it both acknowledges that bias is baked into datasets and algorithms, yet still tries to offer practical design strategies to deal with harms. I’d love to see people push further by testing or critiquing Jeong’s prescriptions in real bot designs: do they scale? Do they fail under adversarial inputs?

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      I liked the donkey metaphor — it really drives home how bots (or other tools) can act without “understanding,” which loosens the connection between intention and consequence. It makes me wonder: when a bot does something harmful (even unintentionally), should we treat that more like an accident or a responsibility failure — and how much should we hold the creator, operator, or even the platform accountable?

  20. Sep 2025
    1. It might help to think about ethical frameworks as tools for seeing inside of a situation. In medicine, when doctors need to see what’s going on inside someone’s body, they have many different tools for looking in, depending on what they need to know. An x-ray, an ultra-sound, and an MRI all show different information about what’s happening inside the body. A doctor chooses what tool to use based on what she needs to know. An x-ray is great for seeing what’s happening with bones, but isn’t particularly helpful for seeing how a fetus’s development is progressing.

      I really liked the metaphor of ethical frameworks as “tools” for seeing different aspects of a situation. It emphasizes that no single moral theory will show everything. One question I had: how do you choose which pair of frameworks to apply in a given scenario, and what criteria should guide that choice?

    1. Alternative Ethics

      One additional ethics framework that is worth including is Discourse Ethics. This framework emphasizes the ethics of communication: acting only under norms that all affected can accept in rational discourse. It helps especially in social media settings, because these platforms involve many stakeholders (users, platform designers, advertisers...) engaging via mediated communication. Discourse Ethics adds tools for evaluating whether moderation policies, recommendation algorithms, or community norms are acceptable to all affected rather than simply optimizing outcomes or following abstract rules.