20 Matching Annotations
  1. Last 7 days
    1. Finally, social media platforms use algorithms and design layouts which determine what posts people see. There are various rules and designs social media sites can use, and they can amplify human selection (including coordinated efforts like astroturfing) in various ways. They can do this through recommendation algorithms as we saw last chapter, as well as choosing what actions are allowed and what amount of friction is given to those actions, as well as what data is collected and displayed.

      I like how the chapter uses evolution to explain virality, but the “selection” part on social media feels more like artificial selection than natural selection. Platforms breed certain traits on purpose (or at least by design): short, remixable, high-arousal posts travel farther because the UI and metrics reward them. Remove visible like counts or add one extra click to repost, and suddenly the “fitness” of outrage-bait jokes drops. That isn’t nature; it’s a product decision. It ties back to the algorithmic ranking from last week: ranking isn’t a mirror, it’s a selector that shapes what even exists to be copied. So my question is: if platforms act as the main selector, how much responsibility do they bear for which memes win and which basically go extinct?
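
      To make the “friction changes fitness” claim concrete, here is a toy simulation (every number is invented for illustration): two post types spread through reposts, and a single friction multiplier on repost probability decides how far each one gets.

      ```python
      # Toy model: each viewer reposts with some probability; a friction
      # factor multiplies that probability. All values are made up.
      def expected_reach(p_repost, friction, viewers_per_repost=10, generations=5):
          reach, reposters = 0.0, 1.0
          for _ in range(generations):
              viewers = reposters * viewers_per_repost
              reach += viewers
              reposters = viewers * p_repost * friction
          return reach

      for friction in (1.0, 0.5):  # 0.5 ~ "one extra click to repost"
          outrage = expected_reach(p_repost=0.15, friction=friction)
          mild = expected_reach(p_repost=0.08, friction=friction)
          print(f"friction={friction}: outrage ~{outrage:.0f} views, mild ~{mild:.0f}")
      # Friction hits the fast-compounding post hardest, so the
      # "fitness" gap between outrage bait and mild posts collapses.
      ```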

  2. social-media-ethics-automation.github.io
    1. Tanya Chen. A 27-Year-Old Composer Has Inspired One Of The Most Epic And Delightful Duet Chains On TikTok. BuzzFeed News, October 2020. URL: https://www.buzzfeednews.com/article/tanyachen/epic-tiktok-chain-musical-fighting-in-a-grocery-store (visited on 2023-12-08).

      The Waterloo chain-letter example nails classic memetic tricks: authority (“around the world nine times”), urgency (“96 hours”), a fixed replication goal (“send 20 copies”), and fear/hope anecdotes. It’s basically early engagement bait, just offline and slower. The fixed “20” reads like an R0 target; if people actually complied, growth would explode until friction killed it (see the quick arithmetic below). Which element actually matters most in real-life replication: the deadline, the number target, or the vivid stories? My hunch is the deadline did most of the work, since it fights procrastination and nudges action now, which is still how viral prompts get us to click today.
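
      A quick back-of-the-envelope check on that R0 intuition, under the (unrealistic) assumption that every recipient complies and forwards exactly 20 copies:

      ```python
      # Chain-letter growth if every recipient forwards exactly 20 copies:
      # pure geometric growth, generation g produces 20**g letters.
      copies_per_person = 20
      total = 0
      for generation in range(1, 8):
          letters = copies_per_person ** generation
          total += letters
          print(f"generation {generation}: {letters:,} letters ({total:,} cumulative)")
      # Generation 7 alone is 1.28 billion letters, which is why real
      # compliance collapses: postage, effort, and duplication are friction.
      ```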

    1. Recommendations can go poorly when they do something like recommend an ex or an abuser because they share many connections with you.

      I think the example about algorithms recommending painful memories is very real. One time, my social media showed me an old photo with my ex, and it made me feel really bad. It shows that algorithms don’t understand feelings. They only see data, not emotion. I think social media should give users more control to block or turn off certain reminders.

  3. social-media-ethics-automation.github.io
    1. Arvind Narayanan. TikTok’s Secret Sauce. Knight First Amendment Institute, December 2022. URL: http://knightcolumbia.org/blog/tiktoks-secret-sauce (visited on 2023-12-07).

      This article says TikTok’s recommendation system is not really “magic.” It works well because users can skip quickly, and the app learns fast from that. I agree with this idea. I also think it’s dangerous that people believe algorithms “know them,” when in fact it’s just smart design making people stay longer.
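
      Narayanan’s point is that rapid skip feedback, not exotic ML, does the heavy lifting. A minimal sketch of that feedback loop (my own illustration, not TikTok’s actual system): keep a running interest score per topic and nudge it on every watch or skip.

      ```python
      # Minimal illustration of learning from skips (not TikTok's real system).
      # An exponential moving average nudges a topic's score up on watches
      # and down on quick skips; fast feedback means fast convergence.
      ALPHA = 0.3  # learning rate: how quickly new signals override old ones

      def update_interest(score, watched_fraction):
          """watched_fraction: 0.0 = instant skip, 1.0 = watched to the end."""
          signal = 1.0 if watched_fraction > 0.5 else 0.0
          return (1 - ALPHA) * score + ALPHA * signal

      score = 0.5  # neutral prior for a topic, e.g. "cooking videos"
      for watched in (0.1, 0.05, 0.9, 1.0, 0.95):  # two skips, then three watches
          score = update_interest(score, watched)
          print(f"watched={watched:.2f} -> interest score {score:.2f}")
      ```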

  4. Oct 2025
    1. Another strategy for managing disability is to use Universal Design [j17], which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it[2]. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor.

      Universal Design makes sense because it moves the work from the disabled person to the builder. When spaces include ramps, elevators, and clear labels (including braille), many different people benefit at the same time. It treats access as part of the plan, not a special fix. The hard part is balancing needs when space and budgets are tight, but a fair goal is to offer equal, visible paths so no one is sent to a “back door.”

  5. social-media-ethics-automation.github.io
    1. Meg Miller and Ilaria Parogni. The Hidden Image Descriptions Making the Internet Accessible. The New York Times, February 2022. URL: https://www.nytimes.com/interactive/2022/02/18/arts/alt-text-images-descriptions.html (visited on 2023-12-07).

      This source shows why alt-text matters: it should be short, concrete, and task-focused. Useful alt-text names the subject, the action, the setting, and any visible text in the image. A simple policy would help: when someone uploads an image, the tool asks for alt-text with a small prompt. AI can suggest a draft, but a human should review. This makes posts more accessible and also more professional.
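
      As a sketch of that policy (function and field names are hypothetical, not any platform’s real API), an upload tool could refuse to publish until alt-text passes basic checks and a human confirms any AI draft:

      ```python
      # Hypothetical upload gate: block image posts without usable alt-text.
      # All names here are invented for illustration.
      GENERIC_ALT = {"", "image", "photo", "picture", "img"}

      def alt_text_problems(alt: str) -> list[str]:
          problems = []
          if alt.strip().lower() in GENERIC_ALT:
              problems.append("Alt-text is missing or generic; name the subject and action.")
          if len(alt) > 250:
              problems.append("Alt-text is very long; move detail into the post body.")
          return problems

      def publish_image(alt_text: str, human_reviewed: bool) -> None:
          problems = alt_text_problems(alt_text)
          if not human_reviewed:
              problems.append("AI-drafted alt-text still needs human review.")
          if problems:
              raise ValueError("; ".join(problems))
          print("Published with alt-text:", alt_text)

      publish_image("A barista pours latte art into a red mug.", human_reviewed=True)
      ```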

    1. [i12] Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      [i12] shows how “no user notification” after a breach undercuts self-defense: without notice, people rarely rotate credentials or enable 2FA in time. To match the chapter’s shared-responsibility theme, please add a 72-hour post-breach action checklist (email/password rotation, stop reuse, enable 2FA, high-value account resets) plus a disclosure-timeline template that platforms should follow.

  6. social-media-ethics-automation.github.io
    1. For example, the proper security practice for storing user passwords is to use a special individual encryption process [i6] for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time [i7]).

      This section explains password storage well, but it should explicitly separate encryption from hashing: sites should store passwords with a salted, slow hash (e.g., bcrypt or Argon2), not reversible encryption. A reversible scheme means one leaked key exposes every password; a slow hash makes brute-force and credential-stuffing economically painful. Minimal user practice: a password manager, unique long passwords, and TOTP or hardware-key 2FA.
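
      A minimal sketch of salted, slow hashing using only Python’s standard library (scrypt as a stand-in for bcrypt/Argon2; parameters are illustrative, not a vetted production configuration):

      ```python
      import hashlib, hmac, os

      # Salted, slow password hashing with stdlib scrypt.
      # Each user gets a random salt, so identical passwords hash differently.
      def hash_password(password: str) -> tuple[bytes, bytes]:
          salt = os.urandom(16)
          digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
          return salt, digest  # store both; neither reveals the password

      def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
          candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
          return hmac.compare_digest(candidate, stored)  # constant-time compare

      salt, digest = hash_password("correct horse battery staple")
      print(verify_password("correct horse battery staple", salt, digest))  # True
      print(verify_password("password123", salt, digest))                   # False
      ```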

  7. social-media-ethics-automation.github.io
    1. [h14] Tyler Vigen. Spurious correlations. November 2023. URL: http://tylervigen.com/spurious-correlations (visited on 2023-12-05).

      [h14] collects spurious correlations. For example, Maine’s divorce rate and per-capita margarine consumption correlate at 99.26%, yet there is no causal relationship. The implication for social media data mining: formulate hypotheses first, apply multiple-comparison corrections, and validate on a holdout set, so coincidence isn’t mistaken for a pattern. This matches what we discussed in class about “correlation ≠ causation” and p-hacking.
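
      A quick demonstration of why multiple comparisons matter (synthetic data, seeded so it reproduces): among thousands of pairs of pure-noise series, many correlate strongly by chance alone.

      ```python
      import numpy as np

      # Spurious correlations from pure noise: test enough pairs and some
      # random series will correlate strongly by coincidence.
      rng = np.random.default_rng(0)
      n_series, n_points = 200, 10          # short series, like yearly stats
      data = rng.normal(size=(n_series, n_points))

      corr = np.corrcoef(data)              # all pairwise correlations
      upper = corr[np.triu_indices(n_series, k=1)]
      strong = np.abs(upper) > 0.8
      print(f"pairs tested: {upper.size}, |r| > 0.8 by chance: {strong.sum()}")
      # With ~19,900 noise pairs of only 10 points each, roughly a hundred
      # exceed 0.8 -- exactly the trap a holdout set is meant to catch.
      ```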

    1. 8.5.1. Reflection. After looking at your ad profile, ask yourself the following: What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?

      After reading this section, I checked my Google ad profile. “Automotive, 18-34, Male” was accurate, but “Mother and Child” and “Gardening” were not. My gut feeling is that the platform characterizes me from behavioral data (pauses, clicks, private messages) better than from the information I filled in myself. The question is: should platforms hide sensitive inferences like sexual orientation or addiction risk by default, and surface them only with explicit consent? Even Bluesky’s open API hasn’t redressed this power imbalance.

    1. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls.

      In §7.4, I agree that the “don’t feed the trolls” rule shifts responsibility onto the targets. On a Discord server I help moderate, ignoring harassment didn’t stop it; doxxing threats escalated anyway. What finally worked was friction and enforcement: slow mode during spikes, cooldowns for new members, stricter posting thresholds, and robust reporting. I also read §7.3 as drawing an ethical line between “punching up” protest trolling (e.g., K-pop fans flooding hashtags) and “punching down” cruelty (RIP trolling). It would help if the chapter explicitly named consent and power dynamics as the dividing lines between ridicule, resistance, and contempt.

  8. social-media-ethics-automation.github.io
    1. Film Crit Hulk. Don’t feed the trolls, and other hideous lies. The Verge, July 2018. URL: https://www.theverge.com/2018/7/12/17561768/dont-feed-the-trolls-online-harassment-abuse (visited on 2023-12-05).

      [g32] argues that ignoring rarely stops harassment; abusers escalate to force a reaction. The piece advocates platform-level moderation and removals, shifting responsibility from victims to systems. This directly reinforces §7.4’s critique of “don’t feed the trolls” and grounds the chapter’s recommendation in reported cases rather than abstract principle.

  9. social-media-ethics-automation.github.io
    1. Does this mean that her performance of vulnerability was inauthentic?

      Let’s answer this question: performance itself does not compromise authenticity. As this chapter defines it, a connection is authentic when the connection offered matches the connection actually delivered. If the audience engages with a carefully crafted persona and receives the intimacy it promises, the connection is authentic; if material facts are hidden (for example, undisclosed sponsorships or deceptive practices), it becomes fake. In my Bluesky bot project, labeling posts as coming from a bot maintained trust by keeping the connection offered aligned with reality.

  10. social-media-ethics-automation.github.io
    1. Jonah E. Bromwich and Ezra Marcus. The Anonymous Professor Who Wasn’t. The New York Times, August 2020. URL: https://www.nytimes.com/2020/08/04/style/college-coronavirus-hoax.html (visited on 2023-11-24).

      [f10] On the “anonymous professor”: this report shows how community skepticism (odd memorial-service phone calls, no Arizona State University records) confirmed that @Sciencing_Bi was a hoax run by BethAnn McLaughlin, and how the hoax caused multiple harms: identity appropriation, community manipulation, and the discrediting of genuine Title IX complaints. The named sources and institutional confirmation make it strong evidence; read alongside BuzzFeed [f9] and The Verge [f13], it firms up the timeline and mitigates single-source bias.

  11. social-media-ethics-automation.github.io
    1. Tom Knowles. I’m so sorry, says inventor of endless online scrolling. The Times, April 2019. URL: https://www.thetimes.co.uk/article/i-m-so-sorry-says-inventor-of-endless-online-scrolling-9lrv59mdk (visited on 2023-11-24).

      Reading [e33], the inventor apologizes for infinite scroll because it removes stopping points and lengthens use. I propose an ethical design rule: build in “natural stops”—page ends, session timers, and a one-tap “take a break” card—enabled by default and easy to keep on. Platforms should have to justify any removal of these stops.

    1. One famous example of reducing friction was the invention of infinite scroll [e31]. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin [e32] invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets [e33] what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I support adding “friendly friction” to the UI. The confirmation pop-up before retweeting makes me pause, which is the point. I’d also offer adjustable friction levels (read timers, cooldowns for late-night posts); see the sketch below. This would reduce impulsive spreading while preserving freedom of choice. Do you agree?
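
      A minimal sketch of what adjustable friction might look like in code (names and thresholds are invented; a design illustration, not any platform’s implementation):

      ```python
      from datetime import datetime, timedelta

      # Hypothetical "friendly friction" checks before a repost goes out.
      LATE_NIGHT_HOURS = range(0, 6)      # midnight to 6am gets a prompt
      REPOST_COOLDOWN = timedelta(minutes=2)

      def friction_prompts(opened_article: bool, last_repost: datetime | None,
                           now: datetime) -> list[str]:
          prompts = []
          if not opened_article:
              prompts.append("You haven't opened this link. Read it first?")
          if last_repost and now - last_repost < REPOST_COOLDOWN:
              prompts.append("You just reposted. Wait a moment before sharing again?")
          if now.hour in LATE_NIGHT_HOURS:
              prompts.append("It's late. Save this as a draft until morning?")
          return prompts  # the UI shows prompts; the user can still choose to post

      print(friction_prompts(opened_article=False, last_repost=None,
                             now=datetime(2025, 10, 7, 1, 30)))
      ```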

  12. social-media-ethics-automation.github.io
    1. Anna Lauren Hoffmann. Data Violence and How Bad Engineering Choices Can Damage Society. Medium, April 2018. URL: https://medium.com/@annaeveryday/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4 (visited on 2023-11-24).

      [d28] Hoffmann, “Data Violence” (2018): harm arises not only from biased outcomes but also from categorization and interface choices that force people to “fit” the system. Key takeaway: if the affected community doesn’t shape the architecture, technical fairness improvements can still perpetuate violence. This reframes the form-design failure in §4.3: a flawed drop-down menu isn’t just a technical failure; it’s a governance failure.

    1. 4.2.5. Revisiting Twitter Users vs. Bots. Let’s go back to the question of whether less than 5% of Twitter users are spam bots. In this claim are several places where there are simplifications being made, particularly in the definitions of “Twitter users” and “spam bots.”

      While reading §4.2 and the debate over Twitter bots, I was struck by how quietly definitions gain power. For a class project, I tweaked a spam heuristic (URL count + account age), and my bot estimate jumped from 3% to 14%: same data, different simplification. That convinced me the “<5%” number isn’t a fact but a governance decision about what counts as a bot. Platforms should publish ranges under different definitional assumptions and disclose those assumptions. Question: why don’t we require confidence intervals and alternative definitions for platform metrics, the way we do in epidemiology?
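
      To show how definition choices swing the estimate, here is a toy version of that heuristic (synthetic accounts and made-up thresholds, in the spirit of my class project rather than Twitter’s real classifier):

      ```python
      import random

      # Toy bot heuristic on synthetic accounts: the "bot rate" you report
      # depends entirely on where you set the thresholds.
      random.seed(1)
      accounts = [{"urls_per_post": random.random() * 3,
                   "account_age_days": random.randint(1, 2000)}
                  for _ in range(10_000)]

      def bot_rate(max_age_days: int, min_urls: float) -> float:
          flagged = sum(1 for a in accounts
                        if a["account_age_days"] < max_age_days
                        and a["urls_per_post"] > min_urls)
          return flagged / len(accounts)

      print(f"strict definition: {bot_rate(max_age_days=90, min_urls=2.0):.1%}")
      print(f"loose definition:  {bot_rate(max_age_days=365, min_urls=1.0):.1%}")
      # Same synthetic data; the estimate moves from ~1.5% to ~12% just by
      # redefining "too new" and "too link-heavy."
      ```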

    1. Why do you think social media platforms allow bots to operate? Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability?

      I disagree with completely excluding “click farms” from the “bot” discussion. For the platform ecosystem, the key question isn’t whether the account is computerized but whether interactions can be manufactured at superhuman scale, at low cost, through scripted coordination. If the goal of governance is protecting the information environment, why not add a “functional automation” dimension to the definition and bring manual, batched operations under the same disclosure and restriction rules? Otherwise the same manipulative effects get overlooked simply because the implementation changed.

  13. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      Sarah Jeong points out that a Tay-style meltdown is preventable by design: content filtering, rate limiting, adversarial testing, and narrowed learning objectives. I’d recommend bounded learning by default (no online self-learning without human review) and public model cards explaining data sources and safety boundaries, turning ad-hoc vigilance into explicit governance.
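
      A minimal sketch of two of those guardrails, a content filter plus a per-user rate limit, applied before any message reaches a bot’s learning pipeline (invented names; my illustration, not code from the article):

      ```python
      import time
      from collections import defaultdict

      # Two simple bot guardrails: a content filter and a per-user rate limit.
      # The wordlist and limits are placeholders; real deployments need curated
      # lists, classifiers, and adversarial testing on top of this.
      BLOCKED_TERMS = {"slur1", "slur2"}          # stand-ins for a curated list
      MAX_MSGS_PER_MINUTE = 5
      _history: dict[str, list[float]] = defaultdict(list)

      def accept_for_learning(user: str, message: str) -> bool:
          if any(term in message.lower() for term in BLOCKED_TERMS):
              return False                         # content filter
          now = time.time()
          recent = [t for t in _history[user] if now - t < 60]
          _history[user] = recent + [now]
          if len(recent) >= MAX_MSGS_PER_MINUTE:
              return False                         # rate limit blunts coordinated spam
          return True                              # still goes to human review later
      ```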