10 Matching Annotations
  1. Last 7 days
    1. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls.

      In §7.4, I argue that the "don't feed the trolls" rule shifts responsibility onto the targets of abuse. On the Discord server I help moderate, ignoring harassers did not stop doxxing threats from escalating; what finally worked was friction plus enforcement: slow mode during spikes, cooldowns for new members, clear thresholds for removal, and robust reporting. I also read §7.3 as drawing an ethical line between protest trolling that punches up (e.g., K-pop fans flooding apps in protest) and cruelty that punches down (e.g., RIP trolling). It would be helpful if the chapter explicitly identified consent and power dynamics as the key dividing lines between ridicule, resistance, and contempt.
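
      Here is a minimal Python sketch of the spike-triggered slow mode I mean. The window, threshold, and the `enable_slowmode` hook are assumptions for illustration, not the real Discord API.

      ```python
      import time
      from collections import deque

      WINDOW_SECONDS = 60        # sliding window for measuring message volume
      SPIKE_THRESHOLD = 120      # messages per window that counts as a spike (assumed value)
      SLOWMODE_DELAY = 30        # seconds between messages once slow mode is on

      recent_messages = deque()  # timestamps of recent messages

      def enable_slowmode(delay: int) -> None:
          """Hypothetical hook; a real bot would call the platform's channel settings here."""
          print(f"slow mode on: {delay}s between messages")

      def on_message(timestamp: float) -> None:
          """Record a message and turn on slow mode if the channel is spiking."""
          recent_messages.append(timestamp)
          # drop timestamps that have fallen out of the window
          while recent_messages and timestamp - recent_messages[0] > WINDOW_SECONDS:
              recent_messages.popleft()
          if len(recent_messages) > SPIKE_THRESHOLD:
              enable_slowmode(SLOWMODE_DELAY)

      # usage: feed message timestamps as they arrive
      on_message(time.time())
      ```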

  2. social-media-ethics-automation.github.io
    1. Film Crit Hulk. Don’t feed the trolls, and other hideous lies. The Verge, July 2018. URL: https://www.theverge.com/2018/7/12/17561768/dont-feed-the-trolls-online-harassment-abuse (visited on 2023-12-05).

      [g32] argues that ignoring rarely stops harassment; abusers escalate to force a reaction. The piece advocates platform-level moderation and removals, shifting responsibility from victims to systems. This directly reinforces §7.4’s critique of “don’t feed the trolls” and grounds the chapter’s recommendation in reported cases rather than abstract principle.

  3. social-media-ethics-automation.github.io
    1. Does this mean that her performance of vulnerability was inauthentic?

      Let’s answer this question: performance itself does not compromise authenticity. As defined in this chapter, a connection is authentic when the relationship being presented matches the relationship actually offered. If the audience can identify with a carefully crafted persona and receive the intimacy it promises, the connection is authentic; if important facts are hidden (for example, through undisclosed sponsorships or deceptive practices), it becomes fake. In my project with a Bluesky bot, labeling posts as coming from a bot maintained trust by keeping the connection offered aligned with reality.
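
      A minimal sketch of the disclosure practice I used, with a hypothetical `post()` stand-in for the platform's posting call (not the actual Bluesky SDK):

      ```python
      BOT_DISCLOSURE = "Automated account: this post was generated by a bot."

      def compose_post(body: str) -> str:
          """Attach the disclosure so every post states that it comes from a bot."""
          return f"{body}\n\n{BOT_DISCLOSURE}"

      def post(text: str) -> None:
          """Hypothetical stand-in for the platform's posting call."""
          print(text)

      post(compose_post("Daily reminder to take a screen break."))
      ```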

  4. social-media-ethics-automation.github.io
    1. Jonah E. Bromwich and Ezra Marcus. The Anonymous Professor Who Wasn’t. The New York Times, August 2020. URL: https://www.nytimes.com/2020/08/04/style/college-coronavirus-hoax.html (visited on 2023-11-24).

      [f10] Regarding the "anonymous professor": this report shows how community skepticism (strange memorial-service phone calls, no Arizona State University records) confirmed that @Sciencing_Bi was a hoax run by McLaughlin, and how the hoax caused multiple harms: identity theft, community manipulation, and the discrediting of a genuine Title IX whistleblower. The named sources and institutional confirmation make this strong evidence. Combined with BuzzFeed [f9] and The Verge [f13], it strengthens the timeline and mitigates single-source bias.

  5. social-media-ethics-automation.github.io
    1. Tom Knowles. I’m so sorry, says inventor of endless online scrolling. The Times, April 2019. URL: https://www.thetimes.co.uk/article/i-m-so-sorry-says-inventor-of-endless-online-scrolling-9lrv59mdk (visited on 2023-11-24).

      Reading [e33], the inventor apologizes for infinite scroll because it removes stopping points and lengthens use. I propose an ethical design rule: build in “natural stops”—page ends, session timers, and a one-tap “take a break” card—enabled by default and easy to keep on. Platforms should have to justify any removal of these stops.
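
      A minimal sketch of what a default-on "natural stop" could look like; the 20-minute limit and the wording of the break card are assumptions for illustration:

      ```python
      import time

      SESSION_LIMIT_SECONDS = 20 * 60   # assumed default: suggest a break after 20 minutes
      BREAK_CARD = "You've been scrolling for a while. Take a break? [Pause] [Keep scrolling]"

      class FeedSession:
          """Tracks one browsing session and inserts a 'natural stop' when the limit is hit."""

          def __init__(self) -> None:
              self.started_at = time.time()
              self.break_shown = False

          def next_item(self, item: str) -> str:
              # show the break card once per session instead of silently refilling the feed
              if not self.break_shown and time.time() - self.started_at > SESSION_LIMIT_SECONDS:
                  self.break_shown = True
                  return BREAK_CARD
              return item

      session = FeedSession()
      print(session.next_item("post #1"))
      ```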

    1. One famous example of reducing friction was the invention of infinite scroll [e31]. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin [e32] invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets [e33] what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I support adding "friendly friction" to the UI. A pop-up before retweeting makes me pause and reconsider. I also suggest offering adjustable friction levels (read timers, cooldowns for late-night posts), as sketched below. This would reduce impulsive sharing while preserving freedom of choice. Do you agree?
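
      A rough sketch of adjustable friction, with assumed values for the late-night window and cooldown; the prompt text is illustrative only:

      ```python
      from datetime import datetime

      LATE_NIGHT_HOURS = range(0, 6)   # assumed "late night" window for the cooldown
      COOLDOWN_SECONDS = 60            # assumed pause before a late-night post goes out

      def reshare(post_id: str, has_read_article: bool, now: datetime) -> str:
          """Add friction before resharing instead of blocking the action outright."""
          if not has_read_article:
              return "Prompt: you haven't opened this link yet. Reshare anyway?"
          if now.hour in LATE_NIGHT_HOURS:
              return f"Queued: will post in {COOLDOWN_SECONDS}s unless you cancel."
          return f"Reshared {post_id} immediately."

      print(reshare("post-42", has_read_article=False, now=datetime.now()))
      ```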

  6. Oct 2025
  7. social-media-ethics-automation.github.io
    1. Anna Lauren Hoffmann. Data Violence and How Bad Engineering Choices Can Damage Society. Medium, April 2018. URL: https://medium.com/@annaeveryday/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4 (visited on 2023-11-24).

      [d28] Hoffmann, "Data Violence" (2018): harm arises not only from biased outcomes but also from categorization and interface choices that force people to "adapt." Key takeaway: if the affected community doesn't shape the architecture, technical fairness improvements can still perpetuate violence. This reframes the design failures in §4.3: a flawed drop-down menu isn't merely a technical bug; it's a governance failure.
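
      A small illustration of the point about categorization choices, using hypothetical schemas (the field names are my own, not from the chapter): a closed enum forces people into predefined categories, while an optional self-description field does not.

      ```python
      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional

      # A rigid schema forces people to "adapt" to categories chosen for them.
      class GenderBinary(Enum):
          MALE = "male"
          FEMALE = "female"

      # A schema shaped with affected communities leaves room for self-description.
      @dataclass
      class Profile:
          name: str
          gender: Optional[str] = None      # optional free-text self-description
          pronouns: Optional[str] = None

      alex = Profile(name="Alex", gender="non-binary", pronouns="they/them")
      print(alex)
      ```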

    1. 4.2.5. Revisiting Twitter Users vs. Bots# Let’s go back to the question of whether less than 5% of Twitter users are spam bots. In this claim are several places where there are simplifications being made, particularly in the definitions of “Twitter users” and “spam bots.”

      While reading §4.2 and the debate over Twitter bots, I was struck by how quietly definitions gain influence. For a class project, I modified a spam heuristic (number of URLs + account age), and my bot estimate went from 3% to 14%: the same data, just a different definition. This convinced me that the "<5%" number isn't a fact but a definitional decision about what counts as a spam bot. Platforms should publish ranges under alternative definitions and disclose the underlying assumptions (a toy illustration follows). Question: why don't we require confidence intervals and alternative definitions for platform metrics, as we do in epidemiology?
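
      A toy Python version of that class-project heuristic, with made-up accounts and thresholds; it only shows how two defensible definitions yield different headline numbers from the same data.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Account:
          urls_per_post: float   # average links per post
          age_days: int          # account age in days

      # Toy sample standing in for real account data (illustrative numbers only).
      accounts = [
          Account(0.1, 900), Account(2.5, 12), Account(0.0, 2000),
          Account(1.8, 45), Account(3.0, 5), Account(0.3, 400),
      ]

      def is_spam_bot(a: Account, max_urls: float, min_age: int) -> bool:
          """Simple heuristic: lots of links from a young account looks bot-like."""
          return a.urls_per_post > max_urls and a.age_days < min_age

      def bot_share(max_urls: float, min_age: int) -> float:
          flagged = sum(is_spam_bot(a, max_urls, min_age) for a in accounts)
          return flagged / len(accounts)

      # Same data, two defensible definitions, very different headline numbers.
      print(bot_share(max_urls=2.0, min_age=30))   # stricter definition
      print(bot_share(max_urls=1.0, min_age=90))   # looser definition
      ```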

    1. Why do you think social media platforms allow bots to operate? Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability?

      I disagree with excluding "click farms" entirely from the "bot" discussion. For platform ecosystems, the key question isn't whether the activity is driven by software, but whether interactions can be manufactured at superhuman scale, at low cost, and in a scripted, repeatable way. If the goal of governance is to protect the information environment, why not add a "functional automation" dimension to the definition and bring manual, batch-operated accounts under the same disclosure and restriction framework? Otherwise, the same manipulative effects are overlooked simply because the implementation method has changed.
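
      A minimal sketch of the "functional automation" dimension I am proposing, with assumed thresholds and field names; it flags behavior rather than implementation:

      ```python
      from dataclasses import dataclass

      # Assumed thresholds for "superhuman" activity; a real system would calibrate these.
      MAX_ACTIONS_PER_HOUR = 200
      MAX_ACCOUNTS_PER_DEVICE = 5

      @dataclass
      class ActivityProfile:
          actions_per_hour: float      # likes/follows/posts per hour
          accounts_on_device: int      # accounts operated from the same device or IP
          scripted: bool               # True if driven by software

      def functionally_automated(p: ActivityProfile) -> bool:
          """Flag by behavior, not implementation: a click farm and a bot look the same here."""
          return (
              p.actions_per_hour > MAX_ACTIONS_PER_HOUR
              or p.accounts_on_device > MAX_ACCOUNTS_PER_DEVICE
              or p.scripted
          )

      click_farm_worker = ActivityProfile(actions_per_hour=450, accounts_on_device=40, scripted=False)
      print(functionally_automated(click_farm_worker))  # True, despite no software automation
      ```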

  8. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      Jeong points out that Tay-style breakdowns can be prevented by design: content filtering, rate limiting, adversarial testing, and narrower learning objectives. I recommend making gated learning the default (no online self-learning without human review) and publishing "model cards" that document data sources and safety boundaries, turning vigilance into enforceable governance.
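
      A minimal sketch of these safeguards wired together (filter, rate limit, gated learning); the blocklist, rate limit, and function names are assumptions for illustration:

      ```python
      import time
      from typing import Optional

      BLOCKLIST = {"badword1", "badword2"}     # placeholder terms; a real filter is far richer
      MIN_SECONDS_BETWEEN_REPLIES = 10         # assumed rate limit
      review_queue = []                        # learning examples held for human review

      _last_reply_time = 0.0

      def passes_content_filter(text: str) -> bool:
          """Reject output that contains blocked terms."""
          return not any(term in text.lower() for term in BLOCKLIST)

      def reply(text: str) -> Optional[str]:
          """Send a reply only if it passes the filter and the rate limit."""
          global _last_reply_time
          if not passes_content_filter(text):
              return None
          if time.time() - _last_reply_time < MIN_SECONDS_BETWEEN_REPLIES:
              return None
          _last_reply_time = time.time()
          return text

      def learn_from(example: str) -> None:
          """Gated learning: nothing enters the training data without human review."""
          review_queue.append(example)

      print(reply("hello there"))
      ```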