32 Matching Annotations
  1. Last 7 days
    1. When shareholders buy stocks in a company, they are owed a percentage of the profits. Therefore it is the company leaders’ fiduciary duty [s11] to maximize the profits of the company (called the Friedman Doctrine [s12]). If the leader of the company (the CEO) intentionally makes a decision that they know will reduce the company’s profits, then they are cheating the shareholders out of money the shareholders could have had. CEOs mistakenly do things that lose money all the time, but doing so on purpose is a violation of fiduciary duty.

      When I read section 19.1.3 about fiduciary duty and the Friedman doctrine, it really makes me feel like users have basically no real power on platforms like Meta. Even if a CEO personally wants to care more about user well-being or ethics, the system punishes them if profits go down, so they are pushed to choose shareholders first. It is a bit scary that even “good intentions” from leaders are not enough, because the whole structure of capitalism pushes in the opposite direction. It also makes me question whether telling people “just choose better companies or better CEOs” is actually helpful, since the problem seems to be the rules of the game, not only the people playing it.

  2. social-media-ethics-automation.github.io
    1. Cory Doctorow. The ‘Enshittification’ of TikTok. Wired, 2023. URL: https://www.wired.com/story/tiktok-platforms-cory-doctorow/ (visited on 2023-12-10).

      In source [s15], Doctorow’s idea of “enshittification” feels very accurate for many platforms I have used, not only TikTok. First they act super nice to users so everyone joins, then slowly more and more value is taken away from users and given to advertisers and investors, until the site feels annoying or even hostile to use. For me this connects to the chapter’s discussion of fiduciary duty, because it shows how profit-maximizing logic slowly squeezes both users and business customers over time. It makes me wonder whether any big social media platform that relies on ads and surveillance capitalism can really avoid this pattern in the long run, or whether this “enshittification curve” is basically built in.

  3. Nov 2025
    1. One useful way to think about harassment is that it is often a pattern of behavior that exploits the distinction between things that are legally proscribed and things that are hurtful, but not so harmful as to be explicitly prohibited by law given the protection of freedoms. Let’s use an example to clarify.

      This chapter made me see harassment very differently than before. I used to think it was only about big, obvious things like death threats, but the puddle example shows how many small actions together can still really hurt someone. Online it’s even worse, because people can pretend every single comment is “not that serious” while the target already feels scared and tired. I still don’t fully know where the line should be between free speech and moderation, but now it’s harder to say “it’s just the internet, just ignore it,” because clearly it’s not that simple.

  4. social-media-ethics-automation.github.io
    1. Alice E. Marwick. Morally Motivated Networked Harassment as Normative Reinforcement. Social Media + Society, 7(2):20563051211021378, April 2021. URL: https://doi.org/10.1177/20563051211021378 (visited on 2023-12-10), doi:10.1177/20563051211021378.

      For the bibliography, I was really interested in [q16] by Marwick about “morally motivated networked harassment.” I find this idea scary, because it means harassers actually feel like they are good people, defending the community’s rules. When I look at some Twitter dogpiles, it really feels like that: everyone thinks they are “doing justice,” but the result is one person getting destroyed. It makes me wonder how we can critique somebody’s bad behavior without turning it into a huge mob that pushes them completely out of the conversation.

    1. This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian [p36] and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.

      Reading this chapter about crowdsourcing and “power users vs. lurkers” actually makes me a little uncomfortable, because I suddenly realize I am part of the problem. On platforms like Reddit, StackOverflow, or even course discussion boards, I usually just read other people’s posts and almost never answer or edit anything. I still get so much benefit from the 1% of people who do most of the work, but they don’t really get equal reward for that labor, except maybe some reputation points or social status. It feels unfair that so much “invisible work” is done by a tiny group, and platforms basically depend on their free time and motivation. At the same time, I understand why lurkers exist: sometimes we are shy, or afraid to be wrong in public, or just tired. I wonder if platforms should design gentler “on-ramps” for contribution, so that moving from lurker to low-key contributor is less of a huge jump.

  5. social-media-ethics-automation.github.io
    1. [p31] Kate Starbird, Ahmer Arif, and Tom Wilson. Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations. Proc. ACM Hum.-Comput. Interact., 3(CSCW):127:1–127:26, November 2019. URL: https://dl.acm.org/doi/10.1145/3359229 (visited on 2023-12-08), doi:10.1145/3359229.

      For the bibliography, I was really interested in [p31], “Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations.” Just from the title and description, it already changes how I think about fake news. Before, I imagined disinformation like one bad actor or one troll farm pushing lies. This paper instead frames it as a kind of “collaborative work,” where many different people and tools are involved, sometimes even regular users who don’t realize they are part of the campaign. That idea is kind of scary, because it means disinformation is not only top-down, but also bottom-up and participatory. It also connects nicely with the chapter’s point that crowdsourcing can be used both for good (like Foldit or crisis help) and for harmful goals. It makes me feel we really need better education on how to not accidentally help spread these operations.

    1. Reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods? A post starting a discussion thread on reddit about reddit [o4]

      When the chapter talks about unpaid Reddit moderators, I honestly feel the trade-off is messed up. Reddit is worth billions, but the people actually keeping the communities clean and usable are working for free, and sometimes they even get yelled at by users. I get that mods have some “power” and some of them enjoy shaping the culture of a subreddit, but that power is fragile, and they can also burn out fast. To me it feels like Reddit is outsourcing a huge part of its responsibility to volunteers. I think Reddit should at least offer more concrete support: better tools, mental health resources, maybe some small financial compensation or revenue sharing for big subreddits. If the company is making so much money from user content, saying “thanks” in words only is really not enough.

  6. social-media-ethics-automation.github.io
    1. David Gilbert. Facebook Is Ignoring Moderators’ Trauma: ‘They Suggest Karaoke and Painting’. Vice, May 2021. URL: https://www.vice.com/en/article/m7eva4/traumatized-facebook-moderators-told-to-suck-it-up-and-try-karaoke (visited on 2023-12-08).

      I looked at source [o11], the Vice article “Facebook Is Ignoring Moderators’ Trauma: ‘They Suggest Karaoke and Painting’”. The thing that really stuck in my head is how shallow the company’s response sounds. These moderators are watching horrible content every day, like violence and hate, and the solution offered is basically “do some hobbies, try karaoke.” That treats serious psychological damage as if it were ordinary office stress. For me this connects to the question in 15.3 about what support moderators should have. After reading this source, I think real support has to include proper therapy, higher pay, and maybe limits on how long someone can do this job. If a platform can afford global expansion and fancy offices, it can also afford not to ignore the people cleaning up its worst content.

    1. Some philosophers, like Charles W. Mills, have pointed out that social contracts tend to be shaped by those in power, and agreed to by those in power, but they only work when a less powerful group is taken advantage of to support the power base of the contract deciders. This is a rough way of describing the idea behind Mills’s famous book, The Racial Contract. Mills said that the “we” of American society was actually a subgroup, a “we” within the broader community, and that the “we” of American society which agrees to the implicit social contract is a racialized “we”. That is, the contract is devised by and for, and agreed to by, white people, and it is rational–that is, it makes sense and it works–only because it assumes the subjugation and the exploitation of people of color. Mills argued that a truly just society would need to include ALL subgroups in devising and agreeing to the imagined social contract, instead of some subgroups using their rights and freedoms as a way to impose extra moderation on the rights and freedoms of other groups

      The chapter made me rethink “moderation” as more than just deleting bad posts; it’s also an ethical posture. I like the Rawls bit: behind the veil of ignorance, I wouldn’t know if I’m the small creator getting dog-piled or the mega-account driving engagement, so I’d probably choose rules that slow down pile-ons and brigading (rate limits, friction before replying, default-on muting for first-time posters). I do push back a little on the simple “offense ⇒ users leave” story; some communities (like 4chan/8chan) do thrive on edgy content, at least for a while, which shows “quality” is socially constructed and partly market-shaped. The xkcd about free speech vs. hosting is spot on: people (me too sometimes) confuse “the government can’t arrest me” with “the platform must amplify me,” which is just not how it works. Also, advertiser power skews the “golden mean” toward brand safety; that’s not neutral. If we took Mills’s point seriously, moderation boards would need real power-sharing with racialized groups, not just advisory panels with no teeth. One practical question I still have: if platforms rely less on ads (and more on subscriptions), do the moderation incentives actually shift, or do we just create paywalled civility while the public squares get noisier? I suspect mixed models can work, but the incentives are never perfectly aligned.

  7. social-media-ethics-automation.github.io
    1. Devin Coldewey. Study finds Reddit's controversial ban of its most toxic subreddits actually worked. TechCrunch, September 2017. URL: https://techcrunch.com/2017/09/11/study-finds-reddits-controversial-ban-of-its-most-toxic-subreddits-actually-worked/ (visited on 2023-12-08).

      [n6] (TechCrunch on Reddit’s bans) is encouraging; “80–90%” reductions sound huge, but I worry about measurement drift. Hate speech can hop to coded language or move off-platform, so the metric may undercount the harm. The piece also mentions migration to other subs (and elsewhere). That’s a success for Reddit proper, sure, but did the overall ecosystem get better, or just re-sorted? I’d love to see follow-ups that combine text metrics with network maps (who talks to whom after the ban) and a time-lag check, because norms don’t change overnight. Still, the result does challenge the fatalistic take that “bans never work.” They do something, and sometimes a lot. My takeaway: targeted removals plus strong local mods plus clear, healthy replacement spaces probably beats vague “free speech” absolutism that, in practice, protects the loudest. Small nit: the headline sells the win; the body hints at nuance. That’s fine for news, but for policy I want the raw numbers, methods, and definitions; otherwise it’s easy to cherry-pick what feels good, which we all do a bit, me included.

    1. Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people’s mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset [m5].

      Honestly, this part scares me a bit. It shows the feed isn’t just reflecting our mood; it’s quietly shaping it. If a platform can nudge me to post more negative stuff just by tilting the mix, that’s a lot of soft power, like weather control for emotions. I get that A/B tests are normal in tech, but here the “test” bleeds into mental health and consent. Users didn’t sign up to be mood-tuned guinea pigs, right? Also, the result hints that emotional contagion is real at scale, which means mitigation should be a design goal, not an accident. Why not flip the script: make “well-being impact” a KPI next to engagement? And give me a simple “why am I seeing this vibe?” control, so I can dial down negativity without turning into a toxic-positivity bubble. The ethics bar here feels too low, and we should raise it a lot.

  8. social-media-ethics-automation.github.io
    1. Robinson Meyer. Everything We Know About Facebook’s Secret Mood-Manipulation Experiment. The Atlantic, June 2014. URL: https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/ (visited on 2023-12-08).

      Meyer’s piece lays out the 2014 “emotional contagion” study in plain terms: Facebook tweaked News Feed toward more negative or positive posts and then measured how users’ own posts shifted. The reporting doesn’t just say “this happened”; it surfaces the core tension: huge social experiments run on people who didn’t know they were in one. For me, that’s the big red flag: informed consent wasn’t just messy, it was basically missing. The article also helped me separate two things: (1) the legitimate scientific question (do emotions spread online?) and (2) the governance question (who gets to run mass experiments on public mood?). Even if the effect size is small, the scale is giant, so small × billions is not small. The source convinces me that transparency and opt-outs shouldn’t be optional “nice to haves.” They’re table stakes. Otherwise it feels sneaky, and people will not trust you, because why would they?

    1. Finally, social media platforms use algorithms and design layouts which determine what posts people see. There are various rules and designs social media sites can use, and they can amplify human selection (including coordinated efforts like astroturfing) in various ways. They can do this through recommendation algorithms as we saw last chapter, as well as choosing what actions are allowed and what amount of friction is given to those actions, as well as what data is collected and displayed.

      I like how the chapter uses evolution to explain virality, but the “selection” part on social media feels more like artificial selection than natural selection. Platforms effectively breed certain traits on purpose (or at least by design): short, remixable, high-arousal posts travel farther because the UI and the metrics reward them. Remove visible like counts or add one extra click to repost and suddenly the “fitness” of outrage-bait jokes drops; this isn’t nature, it’s a product decision. That ties back to algorithm ranking from last week: ranking isn’t a mirror, it’s a selector that shapes what even exists to be copied. So my question is: if platforms act as the main selector, how much responsibility do they bear for which memes win and which basically go extinct?
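
      To make “ranking is a selector” concrete, here is a toy sketch of my own (not any platform’s real algorithm; the post fields, scores, and the outrage_weight knob are all invented) showing how a single product decision changes which post “wins”:

      ```python
      # Toy ranking sketch: the weight is a product decision, not a law of nature.
      posts = [
          {"id": "calm_explainer", "relevance": 0.8, "outrage": 0.1},
          {"id": "angry_hot_take", "relevance": 0.5, "outrage": 0.9},
      ]

      def score(post, outrage_weight):
          # Choosing outrage_weight is artificial selection on post traits.
          return post["relevance"] + outrage_weight * post["outrage"]

      for w in (0.0, 1.0):
          top = max(posts, key=lambda p: score(p, w))
          print(f"outrage_weight={w}: top post is {top['id']}")
      # outrage_weight=0.0 surfaces the calm post; 1.0 surfaces the hot take.
      ```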

  9. social-media-ethics-automation.github.io
    1. Tanya Chen. A 27-Year-Old Composer Has Inspired One Of The Most Epic And Delightful Duet Chains On TikTok. BuzzFeed News, October 2020. URL: https://www.buzzfeednews.com/article/tanyachen/epic-tiktok-chain-musical-fighting-in-a-grocery-store (visited on 2023-12-08).

      The Waterloo chain-letter example nails classic memetic tricks: authority (“around the world nine times”), urgency (“96 hours”), a fixed replication goal (“send 20 copies”), and fear/hope anecdotes. It’s basically early engagement bait, just offline and slower. The fixed “20” reads like an R0 target; if people actually did it, growth would explode until friction killed it. Which element matters most for real-life replication: the deadline, the number target, or the vivid stories? My hunch is the deadline does most of the work (it fights procrastination and nudges action now), which is still how viral prompts get us to click today.
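
      A quick back-of-envelope sketch (all the compliance numbers are my own assumptions, just to show how sensitive “send 20 copies” growth is to how many people actually forward it):

      ```python
      # Branching-process sketch for the "send 20 copies" chain letter.
      def recipients_after(generations, copies=20, compliance=0.1):
          effective_r = copies * compliance  # average forwards per recipient
          total, current = 0.0, 1.0          # start from a single sender
          for _ in range(generations):
              current *= effective_r
              total += current
          return total

      for c in (0.04, 0.05, 0.10):
          print(f"compliance={c:.0%}: ~{recipients_after(5, compliance=c):.0f} recipients after 5 rounds")
      # Below 1/20 = 5% compliance the chain dies out; above it, growth compounds.
      ```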

  10. Oct 2025
    1. Recommendations can go poorly when they do something like recommend an ex or an abuser because they share many connections with you.

      I think the example about algorithms recommending painful memories is very real. One time, my social media showed me an old photo with my ex, and it made me feel really bad. It shows that algorithms don’t understand feelings. They only see data, not emotion. I think social media should give users more control to block or turn off certain reminders.

  11. social-media-ethics-automation.github.io
    1. Arvind Narayanan. TikTok’s Secret Sauce. Knight First Amendment Institute, December 2022. URL: http://knightcolumbia.org/blog/tiktoks-secret-sauce (visited on 2023-12-07).

      This article says TikTok’s recommendation system is not really “magic.” It works well because users can skip quickly, and the app learns fast from that. I agree with this idea. I also think it’s dangerous that people believe algorithms “know them,” when in fact it’s just smart design making people stay longer.
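
      My rough mental model of “learning fast from skips” is something like the toy update rule below (purely my guess at the shape of the mechanism, not TikTok’s actual system; the topics, scores, and learning rate are made up):

      ```python
      # Toy interest-score update driven by watch/skip signals (illustrative only).
      interests = {"cooking": 0.5, "gym": 0.5, "politics": 0.5}

      def update(topic, watched_fraction, lr=0.3):
          # A skip (low watched_fraction) pulls the score down quickly; a full
          # watch pulls it up. The speed comes from getting a signal every few
          # seconds, not from anything "magic."
          interests[topic] += lr * (watched_fraction - interests[topic])

      for topic, frac in [("politics", 0.05), ("cooking", 0.95), ("politics", 0.0)]:
          update(topic, frac)
      print(interests)  # politics drops fast after two skips; cooking rises
      ```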

    1. Another strategy for managing disability is to use Universal Design [j17], which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it[2]. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor.

      Universal Design makes sense because it moves the work from the disabled person to the builder. When spaces include ramps, elevators, and clear labels (including braille), many different people benefit at the same time. It treats access as part of the plan, not a special fix. The hard part is balancing needs when space and budgets are tight, but a fair goal is to offer equal, visible paths so no one is sent to a “back door.”

  12. social-media-ethics-automation.github.io
    1. Meg Miller and Ilaria Parogni. The Hidden Image Descriptions Making the Internet Accessible. The New York Times, February 2022. URL: https://www.nytimes.com/interactive/2022/02/18/arts/alt-text-images-descriptions.html (visited on 2023-12-07).

      This source shows why alt-text matters: it should be short, concrete, and task-focused. Useful alt-text names the subject, the action, the setting, and any visible text in the image. A simple policy would help: when someone uploads an image, the tool asks for alt-text with a small prompt. AI can suggest a draft, but a human should review. This makes posts more accessible and also more professional.
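
      A minimal sketch of that policy (the function and the suggest_alt_text helper are hypothetical, not any real platform’s API):

      ```python
      # Sketch: block image posts without alt-text; an AI draft is only a starting
      # point that a human must confirm or edit before posting.
      def prepare_image_post(image_path, alt_text=None, suggest_alt_text=None):
          if alt_text and alt_text.strip():
              return {"image": image_path, "alt": alt_text.strip()}
          draft = suggest_alt_text(image_path) if suggest_alt_text else ""
          confirmed = input(f"Alt-text needed. Edit or confirm the draft: '{draft}'\n> ")
          if not confirmed.strip():
              raise ValueError("Post blocked: images need alt-text.")
          return {"image": image_path, "alt": confirmed.strip()}
      ```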

    1. [i12] Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      [i12] shows how “no user notification” after a breach undercuts self-defense: without notice, people rarely rotate credentials or enable 2FA in time. To match the chapter’s shared-responsibility theme, it would help to add a 72-hour post-breach action checklist (rotate email and passwords, stop password reuse, enable 2FA, reset high-value accounts) plus a disclosure-timeline template that platforms should follow.

  13. social-media-ethics-automation.github.io
    1. For example, the proper security practice for storing user passwords is to use a special individual encryption process [i6] for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time [i7]).

      This section explains password storage well, but it should explicitly separate encryption from hashing: sites should store passwords with a salted, slow hash (e.g., bcrypt or Argon2), not reversible encryption. With a reversible scheme, one leaked key exposes every password; slow hashing makes credential stuffing economically painful. Minimal user practice: a password manager, unique long passwords, and TOTP or hardware-key 2FA.
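
      A minimal sketch of salted slow hashing with the bcrypt library (one reasonable choice among several; Argon2 via argon2-cffi works similarly):

      ```python
      # Store only the salted bcrypt hash; verification re-hashes the guess.
      import bcrypt

      def store_password(plaintext: str) -> bytes:
          # gensalt() embeds a per-password salt and a work factor in the hash,
          # so identical passwords still produce different stored values.
          return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt(rounds=12))

      def check_password(guess: str, stored_hash: bytes) -> bool:
          return bcrypt.checkpw(guess.encode("utf-8"), stored_hash)

      h = store_password("correct horse battery staple")
      print(check_password("hunter2", h), check_password("correct horse battery staple", h))
      # False True -- the database never holds anything reversible to the password.
      ```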

  14. social-media-ethics-automation.github.io
    1. [h14] Tyler Vigen. Spurious correlations. November 2023. URL: http://tylervigen.com/spurious-correlations (visited on 2023-12-05).

      [h14] collects spurious correlations. For example, the “Maine divorce rate” and “per capita margarine consumption” correlate at 99.26%, yet there is no causal relationship. The implication for social media data mining: formulate hypotheses first, apply multiple-comparison corrections, and validate on a holdout set, to avoid mistaking coincidence for a pattern. This is consistent with what we discussed in class about “correlation ≠ causation” and p-hacking.
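
      A quick simulation (my own toy numbers) of why the correction matters: test enough unrelated features and some will look “significant” purely by chance.

      ```python
      # With 200 random features and no correction, roughly 10 pass p < 0.05 by luck.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      target = rng.normal(size=100)            # e.g., a "divorce rate" series
      features = rng.normal(size=(200, 100))   # 200 unrelated random series

      pvals = [stats.pearsonr(f, target)[1] for f in features]
      print("uncorrected 'hits':", sum(p < 0.05 for p in pvals))
      print("Bonferroni 'hits': ", sum(p < 0.05 / len(pvals) for p in pvals))
      # Expect around 10 false hits uncorrected and about 0 after correction.
      ```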

    1. 8.5.1. Reflection. After looking at your ad profile, ask yourself the following: What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?

      After reading this section, I checked my Google Ads profile. “Automotive, 18-34, Male” was accurate, but “Mother and Child, Gardening” was not. My gut feeling is that the platform uses behavioral data (pauses, clicks, private messages) to characterize me better than the information I filled in myself. The question is: should platforms hide sensitive inferences, like sexual orientation or addiction risk, by default, or only surface them with explicit consent? Even Bluesky’s open API hasn’t redressed this power imbalance.

    1. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls.

      Like §7.4 argues, the “don’t feed the trolls” rule shifts responsibility onto the targets, and I recognize that from my own experience. On a Discord server I help moderate, ignoring harassment did not keep the peace; it escalated all the way to doxxing threats. What finally worked was friction and enforcement: slowmode during spikes, cooldowns for new members, clear thresholds for removal, and a reliable reporting channel. I also read §7.3 as drawing an ethical line between trolling as protest (e.g., K-pop fans flooding apps) and trolling as cruelty (e.g., RIP trolling). It would help if the chapter explicitly named consent and power dynamics as the key dividing lines between mockery, resistance, and cruelty.
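
      For what it’s worth, the “slowmode during spikes” step is easy to script; here is a sketch with discord.py (the command name and the 30-second default are my own choices, and it assumes the bot has the Manage Channels permission):

      ```python
      # Sketch of a moderation command that toggles slowmode during a harassment spike.
      import discord
      from discord.ext import commands

      intents = discord.Intents.default()
      intents.message_content = True
      bot = commands.Bot(command_prefix="!", intents=intents)

      @bot.command()
      @commands.has_permissions(manage_channels=True)
      async def spike(ctx, seconds: int = 30):
          """Set slowmode for the current channel (0 turns it back off)."""
          await ctx.channel.edit(slowmode_delay=seconds)
          await ctx.send(f"Slowmode set to {seconds}s in #{ctx.channel.name}.")

      # bot.run("TOKEN")  # token omitted
      ```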

  15. social-media-ethics-automation.github.io
    1. Film Crit Hulk. Don’t feed the trolls, and other hideous lies. The Verge, July 2018. URL: https://www.theverge.com/2018/7/12/17561768/dont-feed-the-trolls-online-harassment-abuse (visited on 2023-12-05).

      [g32] argues that ignoring rarely stops harassment; abusers escalate to force a reaction. The piece advocates platform-level moderation and removals, shifting responsibility from victims to systems. This directly reinforces §7.4’s critique of “don’t feed the trolls” and grounds the chapter’s recommendation in reported cases rather than abstract principle.

  16. social-media-ethics-automation.github.io
    1. Does this mean that her performance of vulnerability was inauthentic?

      Let me answer this question: performance itself does not compromise authenticity. As the chapter defines it, a connection is authentic when the connection presented matches the connection that actually exists. If the audience engages with a carefully crafted persona and receives the intimacy it promises, the connection can still be authentic. It becomes fake when important facts are hidden (for example, undisclosed sponsorships or deceptive practices). In my work on a Bluesky bot, labeling posts as coming from a bot maintained trust and kept the connection presented in line with reality.
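
      In practice the disclosure is as simple as appending a label to every post (a sketch with the atproto Python client; the handle and app password are placeholders, and the wording of the tag is my own):

      ```python
      # Sketch: every automated post carries an explicit bot disclosure.
      from atproto import Client

      BOT_TAG = "[bot] automated post; written by a script, not a person"

      def post_as_bot(client: Client, text: str):
          # Keeps the connection presented in line with reality: readers always
          # know the content was generated automatically.
          return client.send_post(text=f"{text}\n\n{BOT_TAG}")

      # client = Client()
      # client.login("mybot.bsky.social", "app-password")  # placeholders
      # post_as_bot(client, "Daily reminder to drink some water.")
      ```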

  17. social-media-ethics-automation.github.io
    1. Jonah E. Bromwich and Ezra Marcus. The Anonymous Professor Who Wasn’t. The New York Times, August 2020. URL: https://www.nytimes.com/2020/08/04/style/college-coronavirus-hoax.html (visited on 2023-11-24).

      [f10] On “The Anonymous Professor Who Wasn’t”: this report shows how community skepticism (strange memorial-service phone calls, no Arizona State University records) confirmed that @Sciencing_Bi was a hoax run by McLaughlin, and how the hoax caused multiple harms: identity theft, community manipulation, and the discrediting of a genuine Title IX whistleblower. The named sources and institutional confirmation make the reporting strong. Combined with BuzzFeed [f9] and The Verge [f13], it firms up the timeline and mitigates single-source bias.

  18. social-media-ethics-automation.github.io
    1. Tom Knowles. I’m so sorry, says inventor of endless online scrolling. The Times, April 2019. URL: https://www.thetimes.co.uk/article/i-m-so-sorry-says-inventor-of-endless-online-scrolling-9lrv59mdk (visited on 2023-11-24).

      Reading [e33], the inventor apologizes for infinite scroll because it removes stopping points and lengthens use. I propose an ethical design rule: build in “natural stops”—page ends, session timers, and a one-tap “take a break” card—enabled by default and easy to keep on. Platforms should have to justify any removal of these stops.

    1. One famous example of reducing friction was the invention of infinite scroll [e31]. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin [e32] invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets [e33] what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I support adding “friendly friction” to the UI. The pop-up before retweeting makes me hesitate, which seems to be the point. I also suggest offering adjustable friction levels (read timers, cooldowns for late-night posts). This would reduce impulsive spreading while still preserving freedom of choice. Do you agree?
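
      A sketch of what “adjustable friction” could mean in practice (the setting names, defaults, and thresholds are mine, not any platform’s):

      ```python
      # Sketch: user-tunable friction checks that run before a repost goes through.
      from datetime import datetime, timedelta

      class FrictionSettings:
          def __init__(self, repost_cooldown_s=60, quiet_hours=(0, 6), read_first=True):
              self.repost_cooldown = timedelta(seconds=repost_cooldown_s)
              self.quiet_hours = quiet_hours  # (start_hour, end_hour), local time
              self.read_first = read_first    # require opening the link before reposting

      def repost_allowed(settings, now, last_repost_at, opened_link):
          if settings.read_first and not opened_link:
              return False, "Open the article before reposting."
          if now - last_repost_at < settings.repost_cooldown:
              return False, "Cooling down; try again in a minute."
          start, end = settings.quiet_hours
          if start <= now.hour < end:
              return False, "Late-night posting pause is on (you can change this)."
          return True, ""

      ok, why = repost_allowed(FrictionSettings(), datetime(2024, 5, 1, 14, 0),
                               datetime(2024, 5, 1, 13, 58), opened_link=True)
      print(ok, why)  # True -- all three friction checks pass in this example
      ```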

  19. social-media-ethics-automation.github.io
    1. Anna Lauren Hoffmann. Data Violence and How Bad Engineering Choices Can Damage Society. Medium, April 2018. URL: https://medium.com/@annaeveryday/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4 (visited on 2023-11-24).

      [d28] Hoffmann, “Data Violence” (2018): harm arises not only from biased outcomes but also from categorization and interface choices that force people to “adapt.” Key takeaway: if the affected community doesn’t shape the design, technical fairness improvements can still perpetuate violence. This reframes the failures in §4.3: a flawed drop-down menu isn’t just a design bug; it’s a governance failure.

    1. 4.2.5. Revisiting Twitter Users vs. Bots. Let’s go back to the question of whether less than 5% of Twitter users are spam bots. In this claim are several places where there are simplifications being made, particularly in the definitions of “Twitter users” and “spam bots.”

      While reading §4.2 and the debate over Twitter bots, I was struck by how quietly definitions gain influence. For a class project, I modified a spam heuristic (number of URLs plus account age), and my bot estimate went from 3% to 14%: the same data, just a different simplification. This convinced me that the “<5%” number isn’t a fact but a governance decision about what counts as a bot. Platforms should publish ranges under different definitions and disclose the underlying assumptions. Question: why don’t we require confidence intervals and alternative definitions for platform metrics, the way we do in epidemiology?
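
      Here is roughly what my class-project experiment looked like, rebuilt with synthetic accounts and made-up thresholds (so the exact percentages differ, but the sensitivity to the definition is the point):

      ```python
      # Synthetic accounts: the "bot share" depends entirely on the cutoffs chosen.
      import random
      random.seed(0)

      accounts = [{"urls_per_post": random.random() * 3,
                   "age_days": random.randint(1, 2000)} for _ in range(10_000)]

      def is_bot(acct, max_urls, min_age_days):
          return acct["urls_per_post"] > max_urls and acct["age_days"] < min_age_days

      for max_urls, min_age in [(2.5, 30), (2.0, 90), (1.5, 180)]:
          share = sum(is_bot(a, max_urls, min_age) for a in accounts) / len(accounts)
          print(f"urls > {max_urls} and age < {min_age}d  ->  {share:.1%} 'bots'")
      # Same accounts, three definitions, three very different headline numbers.
      ```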

    1. Why do you think social media platforms allow bots to operate? Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability?

      I disagree with completely excluding "click farms" from the "bot" discussion. The key question for platform ecosystems isn't whether they're computerized, but whether interactions can be manipulated at superhuman scale, at low cost, and through scripted automation. If the goal of governance is to maintain the information environment, why not include a dimension of "functional automation" in the definition and include manual, batch operations within the framework of disclosure and restrictions? Otherwise, the same manipulation effects will be overlooked simply because the implementation method has changed.

  20. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      Jeong points out that Tay-style meltdowns can be prevented by design: content filtering, frequency limiting, adversarial testing, and narrower learning objectives. I would recommend that bounded learning be the default (no online self-learning without human review) and that “model cards” be published to explain data sources and safety boundaries, turning vague vigilance into concrete guardrails.
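
      A minimal sketch of those guardrails applied to a reply bot (the blocklist and limits are invented placeholders, not a complete safety system):

      ```python
      # Sketch: content filter + frequency limit that run before any reply is sent.
      import time

      BLOCKED_TERMS = {"placeholder_slur", "placeholder_insult"}  # a real list is curated and reviewed
      MAX_REPLIES_PER_HOUR = 20
      _reply_times = []

      def safe_to_send(reply_text: str) -> bool:
          lowered = reply_text.lower()
          if any(term in lowered for term in BLOCKED_TERMS):
              return False  # content filter
          now = time.time()
          recent = [t for t in _reply_times if now - t < 3600]
          if len(recent) >= MAX_REPLIES_PER_HOUR:
              return False  # frequency limit
          _reply_times[:] = recent + [now]
          return True

      # Online self-learning stays off by default: new training examples go into a
      # human review queue instead of feeding straight back into the model.
      ```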