22 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Rick Paulas. What It Feels Like to Go Viral. Pacific Standard, June 2017. URL: https://psmag.com/economics/going-viral-is-like-doing-cartwheels-on-the-water-spout-of-a-giant-whale (visited on 2023-12-08).

      This article by Rick Paulas provides a visceral, first-hand feel for what it means to go “viral” in social media contexts—describing it as something like “doing cartwheels on the water-spout of a giant whale.” That kind of metaphor really brings home how thrilling yet unstable virality is: fun, exhilarating, but also out of control and potentially dangerous.

    1. Additionally, content can be copied by being screenshotted, or photoshopped. Text and images can be copied and reposted with modifications (like a poem about plums [l17]). And content in one form can be used to make new content in completely new forms, like this “Internet Drama” song whose lyrics are from messages sent back and forth between two people in a Facebook Marketplace:

      As someone who uses social platforms and watches how memes / posts spread, I’ve observed that sometimes the version that goes viral isn’t the original but a mutated one (someone adds a caption, remix, or cross-posts to another network). The chapter’s point that inheritance matters jumped out: once a variation exists and spreads with the change, future copies carry that change. That resonates with seeing e.g. a tweet being quote-retweeted, then everyone repeats the quote-tweet version, not the original tweet.
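
      To make the inheritance point concrete for myself, here is a toy Python sketch (my own illustration, not from the chapter): each repost copies some existing version of a post and occasionally adds a modification, and once a modified version exists, later copies can inherit that change.

      ```python
      import random

      # Toy simulation: each repost copies an existing circulating version of a post,
      # and sometimes adds a modification (a caption, a remix, etc.). Once a modified
      # version is circulating, later copies can inherit the modification.
      def simulate_reposts(original, num_reposts=10, mutation_chance=0.3):
          versions = [original]                      # all versions currently circulating
          for _ in range(num_reposts):
              copied = random.choice(versions)       # a reposter picks some existing version
              if random.random() < mutation_chance:  # sometimes they change it
                  copied = copied + " +caption"
              versions.append(copied)
          return versions

      print(simulate_reposts("original post"))
      # Later entries often carry "+caption" (sometimes several), inherited from
      # earlier variants, even though the original post itself never changed.
      ```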

  3. social-media-ethics-automation.github.io
    1. Petter Törnberg. How digital media drive affective polarization through partisan sorting. Proceedings of the National Academy of Sciences, 119(42):e2207159119, October 2022. URL: https://www.pnas.org/doi/10.1073/pnas.2207159119 (visited on 2023-12-07), doi:10.1073/pnas.2207159119.

      It struck me how relevant this paper is to the chapter’s point that recommendation algorithms don’t just serve content but shape what we see and how we interpret it. The study shows how digital media can drive affective polarization via “partisan sorting” — which nicely connects to the chapter’s warning that algorithms can deepen divisions by reinforcing “you vs them” dynamics.

    1. 11.1.2. Reflections: What experiences do you have of social media sites making particularly good recommendations for you? What experiences do you have of social media sites making particularly bad recommendations for you?

      In my own experience, I’ve seen both “good” and “bad” recommendations: one time a platform surfaced a deeply niche topic I had read once and I found it fascinating; another time it repeatedly suggested things totally irrelevant and frankly annoying.

  4. Oct 2025
  5. social-media-ethics-automation.github.io
    1. How to ADHD. What is Executive Function and Why Do We Need it? March 2021. URL: https://www.youtube.com/watch?v=H4YIHrEu-TU (visited on 2023-12-07).

      It’s interesting how this section explores the idea that accessibility features aren’t just “extra” for a few users, but foundational for designing inclusive social-media environments. It challenged me to rethink how often accessibility is treated as an afterthought rather than baked in from the start.

    1. We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.

      I really appreciated how the chapter highlights different approaches to accessibility — making the environment work for all, adapting tools for users, and the burden often being placed on the user instead of the design. One question I have is: what’s a practical checklist or metric developers could use early in the UI design to shift from the “modifying the person” model to the “making the tool adapt” model?
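
      As a starting point for that question, here is a minimal, hypothetical example of one check that could run early in development: flag image posts that are missing alt text. The post dictionaries are invented for illustration; a real check would pull these fields from a platform’s API.

      ```python
      # Hypothetical posts; in practice these fields would come from a platform's API.
      posts = [
          {"id": 1, "has_image": True,  "alt_text": "A dog catching a frisbee"},
          {"id": 2, "has_image": True,  "alt_text": ""},
          {"id": 3, "has_image": False, "alt_text": None},
      ]

      def missing_alt_text(posts):
          """Return the ids of image posts that have no alt text."""
          return [p["id"] for p in posts if p["has_image"] and not p["alt_text"]]

      print(missing_alt_text(posts))  # [2]
      ```

      A check like this is only one line item on a larger checklist, but it shifts at least one burden from the user toward the design: the tool notices missing descriptions instead of leaving readers to go without.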

    1. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      I found the inclusion of Bowman’s article about the 530 million-user breach striking — it grounds the discussion of privacy in real, large-scale harm rather than abstract theory. From my experience, seeing such breaches makes the “privacy isn’t just about secrets, it’s about control and trust” line hit home.

  6. social-media-ethics-automation.github.io
    1. When we use social media platforms though, we at least partially give up some of our privacy.

      I found it interesting how users often feel they’ve lost control over their data — it reminds me of the moments when I accept a “Cookie/Privacy” pop-up without really reading it, then later wonder how much the platform knows about my interests.

  7. social-media-ethics-automation.github.io
    1. Catherine Stinson. The Dark Past of Algorithms That Associate Appearance and Criminality. American Scientist, January 2021. URL: https://www.americanscientist.org/article/the-dark-past-of-algorithms-that-associate-appearance-and-criminality (visited on 2023-12-05).

      I found Catherine Stinson’s “The Dark Past of Algorithms That Associate Appearance and Criminality” especially compelling — it highlights how seemingly neutral data-mining efforts (for example, facial recognition or risk scoring) embed deep historical biases and reinforce harmful associations. It makes me reflect: when we apply mining methods in social-media contexts, it’s not just about data quality but also about which associations we’re willing to carry forward.

  8. social-media-ethics-automation.github.io
    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      I found the discussion of unintended versus intentional data poisoning especially striking — it reminded me of how a viral trend on a platform can distort a research survey in ways the authors likely never anticipated. One thing I’m wondering though: given that many social-media datasets are collected passively and opportunistically, how can researchers realistically detect when the data has already been poisoned by normal platform usage (rather than a malicious actor)?
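
      One rough detection heuristic (my own sketch, not from the chapter): compare the demographic mix of responses collected before and after a suspected viral spike, and flag large shifts. The demographic labels and data below are invented for illustration.

      ```python
      from collections import Counter

      before = ["18-24", "25-34", "35-44", "45+", "25-34", "35-44", "45+", "18-24"]
      after  = ["18-24", "18-24", "18-24", "18-24", "25-34", "18-24", "18-24", "18-24"]

      def proportions(responses):
          """Share of responses falling into each demographic group."""
          counts = Counter(responses)
          total = len(responses)
          return {group: count / total for group, count in counts.items()}

      def max_shift(sample_a, sample_b):
          """Largest change in any group's share between the two samples."""
          p_a, p_b = proportions(sample_a), proportions(sample_b)
          groups = set(p_a) | set(p_b)
          return max(abs(p_b.get(g, 0) - p_a.get(g, 0)) for g in groups)

      print(max_shift(before, after))  # 0.625 -- a big skew toward one group; worth investigating
      ```

      Of course, this only works when there is a trustworthy “before” sample to compare against, which is exactly what passively collected social-media data often lacks.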

  9. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      I really appreciated the inclusion of Whitney Phillips’s “Internet Troll Sub-Culture’s Savage Spoofing of Mainstream Media” (Scientific American, 2015) — it gives a grounded, cultural perspective on trolling that complements the ethical discussions nicely. I wonder whether there are more recent studies that explore how trolling has shifted into coordinated “political influence” efforts that would update or extend Phillips’s observations.

    1. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith[1]. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      I think the section on Trolling and Nihilism raises an important point that some trolling communities don’t just push boundaries playfully but actually seem to treat ethics as irrelevant. What struck me is how this can lead to a kind of moral vacuum where the harm to people or groups is dismissed as part of the “game” of disruption.

  10. social-media-ethics-automation.github.io
    1. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      The varianceexplained.org analysis of Trump’s tweets is a great example of how data science can uncover patterns behind online personas — it really connects to this chapter’s discussion of authenticity. I found it fascinating how statistical language analysis can expose who’s “really” speaking behind a public account, showing that authenticity online can sometimes be measured rather than just perceived.
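
      As a sketch of the basic idea (the original analysis was in R; this Python version and the example tweets are mine, invented purely for illustration), you can compare how often each word appears in posts from the two sources:

      ```python
      from collections import Counter

      android_tweets = ["the media coverage is a total disaster", "sad and dishonest reporting"]
      iphone_tweets  = ["thank you for your support", "join me tomorrow in ohio"]

      def word_rates(tweets):
          """Fraction of all words in these tweets accounted for by each word."""
          words = [w for t in tweets for w in t.lower().split()]
          total = len(words)
          return {w: c / total for w, c in Counter(words).items()}

      android_rates = word_rates(android_tweets)
      iphone_rates = word_rates(iphone_tweets)

      # Words used noticeably more in one half of the account than the other
      for word in sorted(set(android_rates) | set(iphone_rates)):
          diff = android_rates.get(word, 0) - iphone_rates.get(word, 0)
          if abs(diff) > 0.05:
              print(f"{word}: {diff:+.2f}")
      ```

      The real analysis works from a much larger archive and adds statistical care, but even this toy version shows how word-use differences between two “halves” of one account can be made visible.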

  11. social-media-ethics-automation.github.io
    1. We value authenticity because it has a deep connection to the way humans use social connections to manage our vulnerability and to protect ourselves from things that threaten us. When we form connections, it is like all our respective vulnerabilities get entangled and tied together. We depend on each other, so if you betray me I face a loss in wellbeing. But also, since you did that, now you face a loss in wellbeing, as I no longer have your back. That means that both of us have an incentive not to betray or take advantage of each other, for our mutual protection.

      I was struck by the tension in the chapter between authenticity as honesty and authenticity as the matching of presentation and interaction. The idea that social media personas can “feel” authentic even when they’re partly performed makes me wonder: in platforms driven by algorithms and engagement metrics, do we discourage real authenticity because performative authenticity “wins” more often?

  12. social-media-ethics-automation.github.io
    1. Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.

      It’s cool to see Tom Standage’s Writing on the Wall: Social Media — The First 2,000 Years cited here. His historical framing helps us see that social media isn’t entirely new, only evolved. I’d also suggest adding Shoshana Zuboff’s The Age of Surveillance Capitalism as a counterpoint source: it connects design history with power, data extraction, and economic incentives, deepening the discussion of how “design” choices embed commercial values.

    1. Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down.

      I appreciated the tension you raised around friction vs. frictionless design — it really made me think: on one hand, low friction feels like “good usability,” but as you note, design can deliberately add friction to push users toward more thoughtful behavior. I wonder: is there a risk that too much deliberate friction becomes paternalistic or manipulative (assuming users can’t be trusted)?
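
      For what it’s worth, here is a toy sketch of what deliberately added friction can look like in code: a confirmation prompt before resharing a link the user hasn’t opened. The function and flow are invented for illustration, not taken from any platform’s actual implementation.

      ```python
      def reshare(post, user_has_opened_link):
          """Reshare a post, but add friction when the user hasn't read the linked article."""
          if not user_has_opened_link:
              answer = input("You haven't opened this article. Share anyway? (y/n) ")
              if answer.strip().lower() != "y":
                  print("Share cancelled.")
                  return False
          print(f"Shared: {post}")
          return True

      reshare("Breaking: a headline you haven't read", user_has_opened_link=False)
      ```

      Even this tiny example raises the paternalism question: the prompt assumes the user is about to act carelessly, and it is the designer who decides when that assumption kicks in.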

  13. social-media-ethics-automation.github.io
    1. Sasha Costanza-Chock. Design Justice: Community-Led Practices to Build the Worlds We Need. The MIT Press, 2020. ISBN 978-0-262-35686-2, 978-0-262-04345-8. URL: https://directory.doabooks.org/handle/20.500.12854/78577 (visited on 2023-12-15), doi:10.7551/mitpress/12255.001.0001.

      I’m glad to see Design Justice: Community-Led Practices to Build the Worlds We Need by Sasha Costanza-Chock in the bibliography. This work deeply connects design choices with power, oppression, and equity. It would be interesting to more explicitly surface how “data simplification” (in 4.2) is a design choice that can perpetuate injustice.

    1. As you can see in the apple example, any time we turn something into data, we are making a simplification.[1] If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc.

      I really appreciated how it highlights that any dataset is already a selection — a simplification — and that those choices shape the story the data can tell. And I wonder: in practical terms, how might we audit or compare different simplification choices across platforms to surface which versions produce more (or less) ethical distortions?
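
      One small way to see this concretely (my own illustration, not from the chapter) is to encode the same moment under several different simplification choices and notice what each one can no longer answer:

      ```python
      # The same moment, recorded three ways; each keeps some information and discards the rest.
      event = {
          "speaker": "Ada",
          "utterance": "I guess that's fine...",
          "tone": "reluctant",
          "timestamp": "2024-03-01T14:05:00",
      }

      as_count = 1                                   # just "one comment was made"
      as_text = event["utterance"]                   # the words, but not the tone of voice
      as_label = "positive" if "fine" in event["utterance"] else "negative"  # a crude sentiment label

      print(as_count, as_text, as_label)
      # The label comes out "positive" even though the tone was reluctant:
      # the simplification has already shaped the story the data can tell.
      ```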

  14. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      It’s clever because it both acknowledges that bias is baked into datasets and algorithms, yet still tries to offer practical design strategies to deal with harms. I’d love to see people push further by testing or critiquing Jeong’s prescriptions in real bot designs: do they scale? Do they fail under adversarial inputs?

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      I liked the donkey metaphor — it really drives home how bots (or other tools) can act without “understanding,” which loosens the connection between intention and consequence. It makes me wonder: when a bot does something harmful (even unintentionally), should we treat that more like an accident or a responsibility failure — and how much should we hold the creator, operator, or even the platform accountable?

  15. Sep 2025
    1. It might help to think about ethical frameworks as tools for seeing inside of a situation. In medicine, when doctors need to see what’s going on inside someone’s body, they have many different tools for looking in, depending on what they need to know. An x-ray, an ultra-sound, and an MRI all show different information about what’s happening inside the body. A doctor chooses what tool to use based on what she needs to know. An x-ray is great for seeing what’s happening with bones, but isn’t particularly helpful for seeing how a fetus’s development is progressing.

      I really liked the metaphor of ethical frameworks as “tools” for seeing different aspects of a situation. It emphasizes that no single moral theory will show everything. One question I had: how do you choose which pair of frameworks to apply in a given scenario, and what criteria should guide that choice?

    1. Alternative Ethics

      One additional ethics framework that is worth including is Discourse Ethics. This framework emphasizes the ethics of communication: acting only under norms that all affected can accept in rational discourse. It helps especially in social media settings, because these platforms involve many stakeholders (users, platform designers, advertisers...) engaging via mediated communication. Discourse Ethics adds tools for evaluating whether moderation policies, recommendation algorithms, or community norms are acceptable to all affected rather than simply optimizing outcomes or following abstract rules.