- Mar 2025
-
social-media-ethics-automation.github.io
-
Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, New York, 2018. ISBN 978-1-4798-3364-1. URL: https://orbiscascade-washington.primo.exlibrisgroup.com/permalink/01ALLIANCE_UW/8iqusu/alma99162068349301452 (visited on 2023-12-10).
Noble’s work is a critical read in our digital age, offering a detailed examination of how search engine algorithms can perpetuate racial biases. One detail that stood out to me was her discussion of how seemingly neutral algorithms are influenced by historical and cultural power dynamics. This source not only challenges the assumption that technology is objective but also encourages a broader conversation about accountability in tech design. I appreciate how it complements the course’s emphasis on understanding the ethical dimensions of digital platforms, and I wonder how these insights might influence policy or design reforms in the future.
-
-
social-media-ethics-automation.github.io
-
We hope that by the end of this book, you have a familiarity with applying different ethics frameworks, and considering the ethical tradeoffs of uses of social media and the design of social media systems. Again, our goal has been not necessarily to come to the “right” answer, but to ask good questions and better understand the tradeoffs, unexpected side-effects, etc.
I find this passage particularly striking because it encapsulates the dual goals of knowledge and critical insight in understanding social media. It’s not just about memorizing terms but also about recognizing how these concepts influence both user behavior and platform design. In my experience, learning this vocabulary has empowered me to see the hidden mechanisms behind viral content and algorithmic curation, prompting me to ask deeper questions about who benefits from these design choices. I’m curious whether future revisions of the course might include more hands-on projects to test these concepts in real-world scenarios.
-
-
social-media-ethics-automation.github.io
-
Capitalism is: “an economic system characterized by private or corporate ownership of capital goods, by investments that are determined by private decision, and by prices, production, and the distribution of goods that are determined mainly by competition in a free market”
This definition really highlights the core of how economic decisions in capitalism are driven by private interests and competitive forces. I find it interesting that while this system fosters innovation and efficiency, it can also lead to decisions—such as those on social media platforms—that prioritize short-term profit over the long-term well-being of users. It makes me wonder if there’s a way to balance profit motives with ethical considerations in these fast-paced digital environments.
-
-
social-media-ethics-automation.github.io
-
Merriam-Webster. Definition of CAPITALISM. December 2023. URL: https://www.merriam-webster.com/dictionary/capitalism (visited on 2023-12-10).
I appreciate the inclusion of Merriam-Webster’s definition because it provides a concise yet comprehensive explanation of capitalism. This source sets a solid foundation for understanding the economic forces that drive many business decisions today. Reflecting on this, it becomes clear how the emphasis on profit and competition can sometimes result in platforms making choices that ultimately harm their users, reinforcing the complex relationship between market forces and ethical practices.
-
-
social-media-ethics-automation.github.io
-
[r1] Trauma and Shame. URL: https://www.oohctoolbox.org.au/trauma-and-shame (visited on 2023-12-10).
This source provides a compelling overview of how trauma and shame interact, particularly in therapeutic contexts. One detail that stood out to me was how it discusses the cyclical nature of shame in individuals who have experienced trauma—where shame can reinforce isolation and hinder recovery. This connects with the chapter’s discussion of shame in childhood, where early negative experiences may lay the groundwork for later difficulties in self-perception. I appreciate that this source extends the conversation by not only defining shame but also exploring its long-term psychological effects, prompting me to reflect on how early interventions might help break this cycle in vulnerable populations.
-
-
social-media-ethics-automation.github.io
-
Shame is the feeling that “I am bad,” and the natural response to shame is for the individual to hide, or the community to ostracize the person.
This sentence resonated with me because it highlights the deeply personal and isolating nature of shame. I’ve noticed in my own experiences and observations that when someone feels fundamentally flawed, it can lead to withdrawal and a sense of disconnection from others. However, I also wonder if this response might sometimes be culturally moderated—do some communities foster resilience by reframing shame as a call for growth rather than a mark of worthlessness? I’m curious to explore how different cultural narratives around shame versus guilt might alter a child’s developmental pathway, especially when contrasted with the idea that guilt can prompt reparative actions.
-
-
social-media-ethics-automation.github.io
-
Roni Jacobson. I’ve Had a Cyberstalker Since I Was 12. Wired, 2016. URL: https://www.wired.com/2016/02/ive-had-a-cyberstalker-since-i-was-12/ (visited on 2023-12-10).
This Wired article by Roni Jacobson offers a personal and unsettling account of long-term cyberstalking, which adds an important human dimension to our understanding of digital harassment. What stands out to me is how this source highlights the lasting impact such behavior can have on a young person's life, well beyond a single incident of online bullying. It serves as a powerful reminder that while academic discussions of harassment often focus on statistics and theory, real-world accounts like this are crucial for understanding its emotional and psychological toll. I’d be interested in exploring further how these personal narratives might influence policies aimed at protecting vulnerable internet users.
-
-
social-media-ethics-automation.github.io
-
Bullying: like sending mean messages through DMs
This sentence caught my attention because it succinctly illustrates how digital communication can facilitate harassment in ways that feel both intimate and invasive. In my experience, receiving a barrage of mean messages privately can be deeply isolating—the victim often feels trapped, as there’s no public forum to share their pain or seek immediate support. It makes me wonder how social media platforms might redesign private messaging features to help detect and mitigate such harmful interactions before they escalate.
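As a purely illustrative toy example (my own, not anything a real platform uses), a private-messaging feature might flag messages that match known abusive phrases before they pile up, giving the recipient a chance to filter or report them; real systems would need context-aware models and appeal processes rather than a word list. The phrases and function names below are invented.

```python
# Toy sketch: flag potentially abusive DMs so the recipient can filter or
# report them. A real system would need much more than a word list
# (context, machine learning, appeal processes); this only illustrates the
# idea of intervening before harassment piles up.
ABUSIVE_PHRASES = ["you are worthless", "nobody likes you"]  # invented examples

def flag_if_abusive(message_text):
    """Return True if the message matches a known abusive phrase."""
    lowered = message_text.lower()
    return any(phrase in lowered for phrase in ABUSIVE_PHRASES)

inbox = ["hey, are we still meeting tomorrow?", "Nobody likes you."]
flagged = [m for m in inbox if flag_if_abusive(m)]
print(flagged)  # -> ['Nobody likes you.']
```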
-
- Feb 2025
-
social-media-ethics-automation.github.io
-
Jim Hollan and Scott Stornetta. Beyond being there. In Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '92, 119–125. Monterey, California, United States, 1992. ACM Press. URL: http://portal.acm.org/citation.cfm?doid=142750.142769 (visited on 2023-12-08), doi:10.1145/142750.142769.
This reference invites us to reconsider the assumption that digital interactions are simply inferior replicas of in-person exchanges. In the cited work, Jim Hollan and Scott Stornetta challenge that notion, arguing that digital platforms can offer distinct advantages, like archiving and asynchronous communication, that are not available in face-to-face settings. This perspective is particularly thought-provoking when considering how digital tools have transformed remote collaboration in modern workplaces.
-
-
social-media-ethics-automation.github.io
-
When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing
I find this sentence engaging because it not only explains the basic mechanism of crowdsourcing but also hints at the power of distributed effort in tackling complex tasks. Reflecting on my own experiences with collaborative projects, I’ve seen how harnessing many small contributions can lead to innovative solutions that no single expert might have devised alone.
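Since the book’s code examples are in Python, here is a small sketch of my own (not from the text) showing the basic mechanism: many people each make a tiny contribution, and a simple aggregation step, majority vote in this case, turns those contributions into a single result. The aggregate_labels function and the example votes are invented for illustration.

```python
# Minimal sketch of crowdsourced aggregation: many tiny contributions,
# combined by majority vote.
from collections import Counter

def aggregate_labels(contributions):
    """Return the most common label among many small individual contributions."""
    counts = Counter(contributions)
    label, _count = counts.most_common(1)[0]
    return label

votes = ["cat", "cat", "dog", "cat", "dog"]   # five people each label one photo
print(aggregate_labels(votes))                # -> cat
```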
-
-
social-media-ethics-automation.github.io
-
Mia Sato. YouTube reveals millions of incorrect copyright claims in six months. The Verge, December 2021. URL: https://www.theverge.com/2021/12/6/22820318/youtube-copyright-claims-transparency-report (visited on 2023-12-08).
This source highlights a crucial aspect of content moderation on large platforms like YouTube—how automated systems can sometimes overreach, leading to erroneous takedowns. I find it particularly compelling because it not only underscores the challenges of balancing intellectual property rights with fair use but also resonates with broader concerns about algorithmic decision-making. It makes me wonder about the potential improvements in transparency and accountability that platforms could adopt to mitigate such issues.
-
-
social-media-ethics-automation.github.io
-
Without quality control moderation, the social media site will likely fill up with content that the target users of the site don’t want, and those users will leave.
This sentence made me reflect on how essential content curation is for keeping online communities vibrant and welcoming. It raises an interesting question about where the line should be drawn between filtering out spam or low-quality material and preserving the diversity of user expression. In my experience, platforms that maintain a clear standard tend to foster more meaningful interactions, but I wonder if sometimes too strict a policy might inadvertently limit creative or unconventional contributions.
-
-
social-media-ethics-automation.github.io
-
Lauren Collee. The Great Offline. Real Life, December 2021. URL: https://reallifemag.com/the-great-offline/ (visited on 2023-12-08).
I’d like to highlight a key point from Lauren Collee’s essay, “The Great Offline” [m7]. Collee writes about how early colonial narratives idealized the wilderness by ignoring its full human context. This observation struck me as particularly powerful because it draws a parallel to modern digital detox conversations. It challenges us to see that dismissing technology as inherently corrupt only limits our ability to reimagine and improve our digital spaces. Collee’s insight encourages a more nuanced approach, one that seeks to reform how we use technology rather than simply shunning it.
-
-
social-media-ethics-automation.github.io
-
For as long as we keep dumping our hopes into the conceptual pit of “the offline world,” those hopes will cease to exist as forces that might generate change in the worlds we actually live in together.
This sentence really resonated with me. I agree that framing the offline world as an idyllic escape oversimplifies our relationship with technology. In my experience, simply disconnecting doesn’t address the deeper issues of how we engage with digital environments. Instead, it’s more productive to think about redesigning these platforms so that they promote healthier interactions rather than isolating us.
-
-
social-media-ethics-automation.github.io
-
Color blindness. December 2023. Page Version ID: 1188749829. URL: https://en.wikipedia.org/w/index.php?title=Color_blindness&oldid=1188749829 (visited on 2023-12-07).
The Wikipedia article provides general information about color blindness, but it could be supplemented with more research on the real-world challenges faced by people with color vision deficiencies. Color blindness can impact career choices, daily navigation (such as interpreting traffic signals), and interaction with digital media. Many companies are now incorporating color-blind-friendly designs, reinforcing the idea that disability is often about whether society accommodates differences rather than the differences themselves being inherently limiting.
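As a concrete, if simplified, illustration of what color-blind-friendly design can involve, here is a Python sketch of my own that computes the WCAG luminance contrast ratio between two colors. Relying on luminance contrast rather than hue alone is one common accessibility heuristic, since hue differences are exactly what many color-vision deficiencies obscure; the specific colors below are just examples I chose.

```python
# Sketch: check whether two colors differ enough in luminance to be
# distinguishable without relying on hue alone (WCAG contrast-ratio formula).

def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 integers."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color1, color2):
    """WCAG contrast ratio, from 1 (identical) up to 21 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(color1), relative_luminance(color2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Pure red and pure green differ mainly in hue, so their luminance contrast is
# low, a risky pairing if hue is all that carries the information.
print(round(contrast_ratio((255, 0, 0), (0, 255, 0)), 2))    # about 2.91
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 2))  # 21.0
```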
-
-
social-media-ethics-automation.github.io
-
Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined [j1]. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.
This line highlights how disability is not just a medical or personal condition but a societal construct. The example of color perception—where trichromats are considered "normal" and tetrachromats' abilities are mostly unrecognized—demonstrates how expectations shape what is seen as a disability. This reinforces the importance of universal design, ensuring that environments accommodate a broader range of abilities rather than conforming to an arbitrary "normal."
-
-
social-media-ethics-automation.github.io
-
Vincent Manancourt. What’s wrong with the GDPR? POLITICO, June 2022. URL: https://www.politico.eu/article/wojciech-wiewiorowski-gdpr-brussels-eu-data-protection-regulation-privacy/ (visited on 2023-12-06).
The POLITICO article highlights concerns about inconsistent enforcement across EU member states. I’ve noticed in practice that major tech companies often exploit these inconsistencies by setting up operations in countries with more lenient interpretations, showing that the GDPR’s effectiveness can vary depending on the local data authority’s resources and willingness to impose penalties.
-
-
social-media-ethics-automation.github.io
-
Then do the following (preferably on paper or in a blank computer document):
I found it helpful to actually do the CIDER steps on paper first; it helped me visualize assumptions about online privacy without getting distracted by open browser tabs. Handwriting also slowed me down enough to reflect more deeply on the real-life consequences of each assumption in the GDPR brochure.
-
- Jan 2025
-
social-media-ethics-automation.github.io
-
Catherine Stinson. The Dark Past of Algorithms That Associate Appearance and Criminality. American Scientist, January 2021. URL: https://www.americanscientist.org/article/the-dark-past-of-algorithms-that-associate-appearance-and-criminality (visited on 2023-12-05).
This article is an important reminder of the dangers of pseudoscientific practices creeping back into modern AI. The idea that facial features can be used to predict criminality was discredited long ago, yet AI-based facial recognition has reintroduced these biases in a more dangerous form. This connects to discussions in class about algorithmic bias, particularly how AI can reinforce systemic discrimination when trained on flawed datasets. It makes me wonder how companies justify using such technology despite its well-documented flaws: are they simply ignoring the ethical concerns, or do they genuinely believe AI can “fix” past mistakes in human judgment?
-
-
social-media-ethics-automation.github.io
-
For example, social media data about who you are friends with might be used to infer your sexual orientation
This statement is both fascinating and concerning. It highlights the power of social media data mining in revealing deeply personal aspects of individuals' lives. While this could be used positively in certain cases, it raises serious privacy concerns. Many people may not even be aware that their online interactions could be analyzed in such a way. It reminds me of how companies like Facebook have been criticized for their data collection practices, sometimes leading to ethical debates about user consent and privacy. Should there be stricter regulations on what personal attributes can be inferred from social media data?
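To make the mechanism concrete, here is a deliberately oversimplified, hypothetical sketch (not any platform’s actual method) of how an undisclosed attribute might be guessed from the declared attributes of a person’s friends, exploiting the tendency of friends to resemble each other (homophily). All of the names, data, and the guess_attribute function are invented.

```python
# Hypothetical sketch: guessing a user's undisclosed attribute from the
# declared attributes of their friends. This is NOT any platform's real
# algorithm; it only illustrates why friendship data alone can be revealing.
from collections import Counter

declared = {          # attributes some users chose to share publicly
    "amir": "A", "bea": "A", "chen": "B", "dana": "A",
}
friends_of = {        # who is friends with whom
    "erin": ["amir", "bea", "chen", "dana"],
}

def guess_attribute(user):
    """Guess the most common declared attribute among a user's friends."""
    labels = [declared[f] for f in friends_of[user] if f in declared]
    return Counter(labels).most_common(1)[0][0] if labels else None

print(guess_attribute("erin"))  # -> "A", inferred without erin sharing anything
```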
-
-
social-media-ethics-automation.github.io
-
Kaitlyn Tiffany. 'My Little Pony' Fans Are Ready to Admit They Have a Nazi Problem. The Atlantic, June 2020. URL: https://www.theatlantic.com/technology/archive/2020/06/my-little-pony-nazi-4chan-black-lives-matter/613348/ (visited on 2023-12-05).
This article highlights an unsettling phenomenon: how extremist or hate-leaning groups can co-opt otherwise innocent fandoms (in this case, My Little Pony). It’s relevant to the chapter’s discussion of trolling and nihilistic communities because it shows that certain online groups use shock value or hateful imagery to disrupt fandom culture. The piece also underscores how even seemingly lighthearted spaces aren’t immune to toxic trolling that escalates to racism or bigotry. It emphasizes that while some troll activity might appear humorous or satirical on the surface, the real-world harms—marginalization, targeted harassment—can be substantial.
-
-
social-media-ethics-automation.github.io
-
One way to begin examining any instance of disruptive behavior is to ask what is being disrupted: a pattern, a habit, a norm, a whole community?
I find this sentence powerful because it illustrates that not all disruption is automatically negative; rather, its ethical standing depends on what system or norm you’re trying to change. If a practice is itself harmful or unjust—like a discriminatory social norm—then disrupting it may be praiseworthy. However, if you’re simply undermining a supportive community or causing chaos for personal amusement, that’s problematic. This approach encourages a deeper look at the underlying reasons for disruption before labeling the act as “trolling” or “harassment.” It reminds me that we need to evaluate the legitimacy of what’s being disrupted and the motives for doing so.
-
-
social-media-ethics-automation.github.io
-
Lindsay Ellis. YouTube: Manufacturing Authenticity (For Fun and Profit!). September 2018. URL: https://www.youtube.com/watch?v=8FJEtCvb2Kw (visited on 2023-11-24).
Lindsay Ellis’s video is a fascinating examination of how YouTubers and influencers often construct a persona that seems intimately authentic but is, in reality, carefully curated and monetized. The video made me think about the lonelygirl15 case again, because it illustrates how easily viewers can become invested in an apparent glimpse into someone’s personal life. Ellis’s analysis clarifies how “authenticity” can be a performance strategy, prompting us to question how much of what we see is genuine connection and how much is shaped by the platform’s profit incentives.
-
-
social-media-ethics-automation.github.io
-
An inauthentic connection could be a good surprise, but usually, when people use the term ‘inauthentic’, they are indicating that the surprise was in some way problematic: someone was duped.
I find it compelling how this sentence addresses the difference between an “inauthentic connection” that causes harm (like tricking viewers into believing a fictional vlogger was a real person, as with lonelygirl15) versus a harmless ruse (like planning a surprise party). It shows that our emotional reaction depends a lot on whether there is harm or a breach of trust—people generally don’t mind ‘white lies’ if they’re part of a positive surprise. But when we realize someone has misled us for entertainment, profit, or manipulation, we feel betrayed. It highlights how crucial trust still is in our digital age, even when we may rarely interact face-to-face.
-
-
social-media-ethics-automation.github.io
-
Caroline Delbert. Some People Think 2+2=5, and They’re Right. Popular Mechanics, October 2023. URL: https://www.popularmechanics.com/science/math/a33547137/why-some-people-think-2-plus-2-equals-5/ (visited on 2023-11-24).
I found Caroline Delbert’s discussion of “2+2=5” both surprising and refreshing. It challenges the notion that something as seemingly absolute as basic arithmetic is beyond debate, showing how context, assumptions, or unconventional definitions can change the answer. This illustrates the broader chapter theme that all data is a simplification of reality. Just as “2+2=5” can make sense under certain interpretations (e.g., rounding, alternative number systems), the data we rely on for real-world decisions can be equally dependent on how we define or interpret it.
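To spell out the rounding interpretation, here is a quick Python check of my own: if “2” and “5” are rounded measurements rather than exact integers, the arithmetic really can come out this way.

```python
# If "2" really means "a measurement that rounds to 2", then two such
# measurements can sum to something that rounds to 5.
a, b = 2.4, 2.3
print(round(a), round(b))            # 2 2   -> each value reads as "2"
print(f"{a + b:.1f}", round(a + b))  # 4.7 5 -> the sum reads as "5"
```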
-
-
social-media-ethics-automation.github.io
-
Any dataset you find might have: missing data, erroneous data (e.g., mislabeled, typos), biased data, manipulated data
I appreciate how the chapter explicitly acknowledges that real-world data is rarely pristine or complete. As someone who has worked with messy datasets in a research setting, I’ve seen how easy it is to take numbers at face value without questioning how they were collected or labeled. This sentence is a great reminder that before taking data-driven conclusions at face value, we need to ask: “What might be missing, mislabeled, or manipulated here?” That mindset can help us avoid building entire projects—or even policies—on shaky foundations.
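In that spirit, here is a small sketch of the kind of quick sanity checks I try to run before trusting a dataset, using pandas and an invented toy table; treat it as an illustration rather than a recipe.

```python
# Sketch: quick sanity checks before trusting a dataset.
import pandas as pd

# Invented example data; in practice this would come from a real file.
df = pd.DataFrame({
    "age":     [34, 29, None, 430, 29],                       # a missing value and a likely typo
    "country": ["USA", "U.S.A.", "usa", "Canada", "U.S.A."],  # inconsistent labels
})

print(df.isna().sum())                          # missing data, per column
print(df[(df["age"] < 0) | (df["age"] > 120)])  # implausible values (possible typos)
print(df["country"].value_counts())             # "USA" vs "U.S.A." vs "usa"
print(df.duplicated().sum())                    # exact duplicate rows can signal double counting
```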
-
-
social-media-ethics-automation.github.io
-
Dale F. Eickelman. Kings and People: Information and Authority in Oman, Qatar, and the Persian Gulf. In Joseph A. Kechichian, editor, Iran, Iraq, and the Arab Gulf States, pages 193–209. Palgrave Macmillan US, New York, 2001. URL: https://doi.org/10.1007/978-1-349-63443-9_12 (visited on 2023-11-17), doi:10.1007/978-1-349-63443-9_12.
Dale F. Eickelman’s work appears to focus on how power and authority function in Gulf states, and how information shapes these relationships. I’m curious about how Eickelman addresses smaller acts of protest—like the donkey demonstration—within the larger context of social and political constraints. It would be fascinating to see whether he discusses intersections between traditional communication methods (like rumors or leaflets) and newer platforms like social media. I also wonder if the book examines cultural factors that affect how such creative, and sometimes indirect, protests are received by both the government and the general public, especially when overt criticism of authority might carry substantial risk.
-
-
social-media-ethics-automation.github.io
-
some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting
I find this passage fascinating because it highlights how responsibility for an action can be shifted when a non-human agent is involved. It reminds me of how bots on social media can operate under a similar premise, where the “donkey” (the program or account) is simply carrying out instructions, but without any actual understanding or intention behind what it does. It raises tricky questions about accountability: if a bot spreads misinformation or harasses someone, who is really to blame—the coder, the user who deployed the bot, or the bot itself (which has no understanding of morality)? I think this parallel with the donkey protest is an excellent way to start conversations about where responsibility lies when intermediaries (human or otherwise) carry out actions on behalf of others.
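To make the parallel concrete, here is a minimal, hypothetical sketch (not tied to any real platform API) of a bot that posts whatever text its operator supplies. Like the donkey, the code performs the act without understanding it, which is exactly why responsibility traces back to the people who wrote and deployed it.

```python
# Hypothetical sketch: a bot is just code carrying out its operator's
# instructions. send_post stands in for a real platform API call.
def send_post(text):
    print(f"[posted]: {text}")   # placeholder for an actual API request

def run_bot(messages):
    """Post every message it is given; the code neither understands nor
    evaluates the content, so moral responsibility lies with the humans
    who wrote the messages and deployed the bot."""
    for text in messages:
        send_post(text)

run_bot(["Message written by the bot's operator."])
```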
-
-
social-media-ethics-automation.github.io
-
“A person is a person through other people.”
This sentence highlights that individuals develop identity and moral values by interacting with and caring for others. It’s not about isolation; rather, it acknowledges the collective nature of humanity. In many Western frameworks, moral reasoning can focus on the individual, but Ubuntu emphasizes interdependence. This perspective can encourage us to think less about maximizing personal gain and more about mutual support. By appreciating our connections to those around us, we might reframe our ethical decisions to better include the well-being of our communities, and learn that we are shaped by close relationships.
-
-
social-media-ethics-automation.github.io
-
1.1. The case of Justine Sacco’s racist joke tweet
What happened to her makes me think about how the design of social media can turn a small thing into a major international issue. In this case, the location feature of Twitter allowed people to track her flight and find her. Maybe if Twitter had limited or turned off the location feature by default, or if it had more aggressively moderated harmful or hateful content, this blow-up wouldn’t have spread so quickly.
-