- Mar 2025
-
social-media-ethics-automation.github.io
-
Luddite. December 2023. Page Version ID: 1189255462. URL: https://en.wikipedia.org/w/index.php?title=Luddite&oldid=1189255462 (visited on 2023-12-10).
The history of the Luddite movement provides an interesting perspective on the ethical dilemmas of technological progress. While Luddites are often mischaracterized as simply anti-technology, their protests were actually about how industrial automation was displacing skilled workers and worsening labor conditions. This connects to modern debates about AI and automation—are we facing a new wave of "digital Luddites" as workers fear job loss due to AI-driven systems?
-
-
But even people who thought they were doing something good regretted the consequences of their creations: Eli Whitney [u9], who hoped his invention of the cotton gin would reduce slavery in the United States but only made it worse; Alfred Nobel [u10], who invented dynamite (which could be used in construction or in war) and, regretting its destructive uses, created the Nobel Prizes; Albert Einstein, who regretted his role in convincing the US government to develop nuclear weapons [u11]; and Aza Raskin, who regrets his invention of infinite scroll.
This passage highlights how even well-intentioned technological advancements can have unintended and often harmful consequences. It raises an important ethical question: should inventors be held responsible for the negative impacts of their creations, or is it impossible to predict how technology will evolve? This dilemma is especially relevant in modern tech, where innovations like AI and social media algorithms are shaping society in ways their creators may not have fully anticipated.
-
-
Merriam-Webster. Definition of CAPITALISM. December 2023. URL: https://www.merriam-webster.com/dictionary/capitalism (visited on 2023-12-10).
The Merriam-Webster definition of capitalism provides a concise explanation, but it doesn’t capture the complexities of how capitalism operates in digital economies. For example, platform capitalism, as discussed by Nick Srnicek in Platform Capitalism, highlights how tech giants like Meta extract value not just from financial transactions but from user data. Should definitions of capitalism evolve to explicitly account for digital labor and data exploitation?
-
-
Let’s take a moment and look at how Meta’s business decisions relate to what its users want. Remember that Meta is a company owned by shareholders in a capitalist system, so decisions are driven by fiduciary duty, that is, maximizing the profits of the shareholders. And among shareholders, those who have invested the most money get the most say in what Meta does. In this system, users of Meta’s social media platforms have very little say in decisions made by the company. Users have few actions they can take that influence the company, but what they can do is use the site less or delete their accounts. Individually, this doesn’t do much, but if they do this in coordination with others (e.g., a boycott), then this can affect Meta. For example, when Facebook would make interface changes, users would all complain together, and Facebook worried people would all leave together. To prevent this, Facebook began slowly rolling out changes, giving them to only some users at a time, which made it harder for users to coordinate leaving together.
Meta’s strategy of rolling out changes gradually to avoid mass user backlash is a fascinating example of how companies manage user dissatisfaction in a capitalist system. It reminds me of how streaming services adjust pricing—rather than increasing prices for all users at once, they introduce higher tiers gradually. This raises an important question: Should companies be required to include users in decision-making processes beyond passive feedback mechanisms? What would a model of 'digital democracy' within a platform like Meta look like?
-
-
18.6. Bibliography
[r1] Trauma and Shame. URL: https://www.oohctoolbox.org.au/trauma-and-shame (visited on 2023-12-10).
This resource offers a compassionate, practical framework for understanding how trauma can lead to feelings of shame—and provides useful strategies for healing.
-
-
Another way of considering public shaming is as schadenfreude, meaning the enjoyment obtained from the troubles of others [r8]. A 2009 article from the parody news site The Onion satirizes public shaming as objectifying celebrities and being entertained by their misfortune: "Media experts have been warning for months that American consumers will face starvation if Hollywood does not provide someone for them to put on a pedestal, worship, envy, download sex tapes of, and then topple and completely destroy."
Public shaming often relies on schadenfreude—a perverse pleasure in witnessing someone’s downfall. The Onion’s 2009 satire brilliantly mocks this tendency, making us question if our outrage is about genuine accountability or just entertainment.
-
- Feb 2025
-
Anya Kamenetz. Facebook's own data is not as conclusive as you think about teens and mental health. NPR, October 2021. URL: https://www.npr.org/2021/10/06/1043138622/facebook-instagram-teens-mental-health
The NPR article by Anya Kamenetz highlights an important issue—how social media companies like Facebook (now Meta) conduct internal research on mental health but don't always share conclusive findings. It’s concerning that while some studies suggest Instagram negatively impacts teen mental health, especially for young girls, the data isn’t always transparent or definitive. This makes me wonder: should social media companies be required to release all their internal research on mental health impacts? If these platforms acknowledge their influence on mental well-being, shouldn't they also be more accountable for addressing the harm?
-
-
One of the ways social media can be beneficial to mental health is in finding community (at least if it is a healthy one, and not toxic like in the last section). For example, if you are bullied at school (and by classmates on some social media platform), you might find a different online community that supports you. Or take the example of Professor Casey Fiesler finding a community that shared her interests.
One point that stood out to me was how social media can be a powerful tool for finding supportive communities, as mentioned with Professor Casey Fiesler’s experience. While social media is often criticized for its negative effects on mental health, it’s important to acknowledge that it can also provide a refuge for those who feel isolated in their offline lives. For example, LGBTQ+ youth who may not have a supportive environment at home or school can find online spaces where they feel accepted and understood. However, this also raises the question of how platforms can foster these positive spaces while mitigating the risks of toxic interactions. Should platforms do more to promote healthy online communities, or is that primarily the responsibility of individual users?
-
-
12.9. Bibliography
[l1] Evolution of cetaceans. November 2023. Page Version ID: 1186568602. URL: https://en.wikipedia.org/w/index.php?title=Evolution_of_cetaceans&oldid=1186568602 (visited on 2023-12-08).
It’s interesting to see the citation of ‘Evolution of Cetaceans’ in a discussion about virality. While it may seem unrelated at first, it actually makes sense: just as species evolve over time through natural selection, viral content evolves through user-driven modifications and social media algorithms. The concept of replication with inheritance (12.3.1) mirrors biological evolution, where small variations in content (like meme edits or remixes) determine which versions spread the furthest. I wonder if this evolutionary perspective could help us predict what kinds of content are more likely to go viral.
-
-
12.3.1. Replication (With Inheritance)
For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a., inheritance). There are ways of duplicating that are built into social media platforms:
- Actions such as liking, reposting, replying, and paid promotion get the original posting to show up for more users.
- Actions like quote tweeting, or the TikTok Duet feature, let people see the original content, but modified with new context.
- Social media sites also provide ways of embedding posts in other places, like in news articles.
There are also ways of replicating social media content that aren’t directly built into the social media platform, such as:
- copying images or text and reposting them yourself
- taking screenshots, and cross-posting to different sites
The concept of replication with inheritance really resonates with how memes evolve over time. A single image format or video can take on countless variations as people remix it with their own cultural references or humor. For example, the ‘Distracted Boyfriend’ meme started as a stock photo but has been continuously modified to represent different jokes and ideas, making it a prime example of inheritance in social media virality.
-
-
Social model of disability. November 2023. Page Version ID: 1184222120. URL: https://en.wikipedia.org/w/index.php?title=Social_model_of_disability&oldid=1184222120#Social_construction_of_disability (visited on 2023-12-07).
The Social Model of Disability article aligns with the chapter’s argument that disability is socially constructed. It highlights how barriers come from societal design rather than individual impairments. The chapter’s color vision example reflects this—trichromacy is "normal" only because society is built around it.
This model also applies to digital accessibility. For example, social media platforms without alt text or captions create barriers for visually impaired users, not because of their impairment but due to poor design. The social model stresses the need for inclusive technology rather than expecting individuals to adapt to an exclusionary system.
-
-
Many of the disabilities we mentioned above were permanent disabilities, that is, disabilities that won’t go away. But disabilities can also be temporary disabilities, like a broken leg in a cast, which may eventually get better. Disabilities can also vary over time (e.g., “Today is a bad day for my back pain”). Disabilities can even be situational disabilities, like the loss of fine motor skills when wearing thick gloves in the cold, or trying to watch a video on your phone in class with the sound off, or trying to type on a computer while holding a baby.
This section of the chapter raises an interesting point about how disability is socially constructed rather than purely biological. The example of color vision is particularly compelling because it challenges the typical binary of "able-bodied" versus "disabled." It makes me wonder how many other "disabilities" exist simply because of societal expectations rather than inherent limitations. For example, if our society were structured around sign language rather than spoken language, hearing impairment might not be considered a disability at all. This also connects to the broader conversation about neurodiversity—many cognitive differences, like ADHD or autism, are often labeled as disabilities mainly because society is designed around neurotypical standards. What if we built environments that accommodated a wider range of abilities? Would fewer people be considered disabled?
-
- Jan 2025
-
[h2] Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-
Wagner’s article highlights the controversial practice of "shadow profiles," where Facebook collects data on individuals who don’t even have an account. This raises serious privacy concerns, as it suggests that opting out of a platform doesn’t necessarily mean escaping its data collection. Even those who consciously avoid social media can still be tracked through their interactions with websites, friends, or devices linked to Facebook’s network. This brings up an important question: Should companies be required to allow individuals to fully opt out of data collection, even if they don’t use their services?
-
-
For example, social media data about who you are friends with might be used to infer your sexual orientation [h4]. Social media data might also be used to infer people’s:
- Race
- Political leanings
- Interests
- Susceptibility to financial scams
- Being prone to addiction (e.g., gambling)
Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence [h5]. Social media data can also be used to infer information about larger social trends like the spread of misinformation [h6]. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many peo
The ability of social media data to infer personal attributes, such as political leanings or susceptibility to addiction, raises serious ethical concerns. While this information can be useful for targeted advertising or public health insights, it can also be exploited in harmful ways. For example, political campaigns have used psychographic profiling to manipulate voters, and financial scammers could exploit people identified as vulnerable. How do we ensure that these inferences are used ethically, rather than to manipulate or exploit users?
-
-
5.8. Bibliography
[e1] Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.
I noticed the reference to Tom Standage’s book, "Writing on the Wall: Social Media - The First 2,000 Years" ([e1]). Standage provides a comprehensive historical perspective on social media, tracing its roots back centuries before the digital age. His exploration of how humans have always sought ways to communicate and share information highlights that social media is not just a modern phenomenon but an extension of our innate desire for connection. This context enriches our understanding of Web 1.0 platforms by framing them as part of a long continuum of social communication tools. It also raises interesting questions about how the fundamental principles of social interaction have remained consistent despite technological advancements.
-
-
In the 1980s and 1990s, bulletin board systems (BBSs) [e6] provided more communal ways of communicating and sharing messages. In these systems, someone would start a “thread” by posting an initial message. Others could reply to the previous set of messages in the thread.
Reading about the early forms of social media in Web 1.0, such as Bulletin Board Systems (BBS) and AOL Instant Messenger (AIM), took me back to my high school days. I vividly remember spending countless hours on AIM, organizing my contacts into groups like "Buddies" and "Family," and eagerly waiting to see who was online. Unlike today's platforms, where interactions are instantaneous and multimedia-rich, Web 1.0 offered a more text-based and slower-paced form of communication. This makes me appreciate how far social media has evolved, providing more dynamic and engaging ways to connect with others. It also makes me wonder about the simplicity of those early interactions and whether some of that simplicity is lost in today's fast-paced digital communication.
-
-
Zero-based numbering.
The concept of zero-based numbering, as described in the Wikipedia article, underscores how fundamental design decisions in programming can have far-reaching consequences. It’s interesting how such conventions affect data organization and user experience, connecting to the chapter’s discussion on how biases and conventions can shape technology.
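Since the citation is about a programming convention, a small hypothetical snippet (not from the chapter) may help ground it. In Python, as in most languages, positions count from zero:

```python
# With zero-based numbering, the first element lives at index 0,
# so a list of n items has valid indices 0 through n - 1.
posts = ["first post", "second post", "third post"]

print(posts[0])    # index 0 is the first element
print(posts[2])    # index 2 is the third (last) element
print(len(posts))  # 3 items, yet the highest valid index is 2
```

This off-by-one gap between everyday counting and index counting is exactly the kind of convention that trips up newcomers while feeling invisible to experienced programmers.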
-
-
The other method of grouping data that we will discuss here is called a “dictionary” (sometimes also called a “map”). You can think of this as like a language dictionary where there is a word and a definition for each word. Then you can look up any name or word and find the value or definition. Example: An English Language Dictionary with definitions of three terms: Social Media: An internet-based platform used for people to form connections to each other and share things. Ethics: Thinking systematically about what makes something morally right or wrong, or using ethical systems to analyze moral concerns in different situations Automation: Making a process or activity that can run on its own without needing a human to guide it. The Dictionary data type allows programmers to combine several pieces of data by naming each piece. When we do this, the dictionary will have a nu
I agree that transparency in data collection is crucial, but I wonder how realistic it is to expect companies to prioritize ethics over profit without stricter regulations. What do you think would incentivize them?
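The excerpt's dictionary example translates directly into Python's built-in dict type. A minimal sketch (the glossary keys come from the excerpt; the user record at the end is a hypothetical illustration):

```python
# A dictionary pairs each name (a "key") with a value,
# like a word and its definition in a language dictionary.
glossary = {
    "Social Media": "An internet-based platform used for people to form "
                    "connections to each other and share things.",
    "Ethics": "Thinking systematically about what makes something morally "
              "right or wrong.",
    "Automation": "Making a process or activity that can run on its own "
                  "without needing a human to guide it.",
}

# Look up any key to find its value, like looking up a word's definition.
print(glossary["Automation"])

# Dictionaries also let programmers combine several pieces of data by
# naming each piece (a hypothetical user record, not from the chapter):
user = {"username": "example_user", "display_name": "Example User", "followers": 42}
print(user["display_name"])
```

The second use, naming the pieces of one record, is what the excerpt is leading into when it says the dictionary lets programmers combine several pieces of data by naming each piece.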
-
-
As we’ve looked through the history of social media platforms, we’ve seen different ways of making them work, such as:
The link to affordances provides a nuanced explanation of how user interfaces encourage specific behaviors. I found it insightful because it shows how even subtle design choices—like a button’s size or color—can shape user actions. For instance, social media platforms often use visual cues to make "liking" a post feel intuitive and rewarding, encouraging engagement. This source could be expanded further by discussing the ethical implications of such affordances—does encouraging engagement align with user well-being, or does it prioritize platform profit at the cost of user experience?
-
-
There are many types of bots in the social media world. Here are some examples of different bots:
The variety of bot types highlighted in this section illustrates how automation can be both a force for good and a tool for harm. I was particularly struck by the example of Rian Johnson receiving coordinated tweets from Russian accounts while directing Star Wars: The Last Jedi. It raises questions about the ethical responsibilities of social media platforms to detect and mitigate such interference. While the study mentioned shows that a large portion of negative tweets were from trolls or bots, it also emphasizes the difficulty in distinguishing between genuine criticism and manufactured dissent. This makes me wonder how we can better identify and counteract antagonistic bots without suppressing legitimate expression.
-