- Mar 2025
-
social-media-ethics-automation.github.io
-
19.2.1. Surveillance Capitalism#

Meta's way of making profits fits in a category called Surveillance Capitalism [s37]. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary ("behavioral surplus") and see what more they could predict about users. Companies could then sell this data about users directly, or (more commonly), they could keep their data hidden, but use it to sell targeted advertisements. So, for example, Meta might let an advertiser say they want an ad to only go to people likely to be pregnant. Or they might let advertisers make ads go only to "Jew Haters" [s38] (which is ethically very bad, and something Meta allowed).

19.2.2. Meta's Business Model#

So, what Meta does to make money (that is, how shareholders get profits), is that they collect data on their users to make predictions about them (e.g., demographics, interests, etc.). Then they sell advertisements, giving advertisers a large list of categories that they can target for their ads. The way that Meta can fulfill their fiduciary duty in maximizing profits is to try to get:

- More users: If Meta has more users, it can offer advertisers more people to advertise to.
- More user time: If Meta's users spend more time on Meta, then it has more opportunities to show ads to each user, so it can sell more ads.
- More personal data: The more personal data Meta collects, the more predictions about users it can make. It can get more data by getting more users and more user time, as well as finding more things to track about users.
- Reduce competition: If Meta can become the only social media company that people use, then they will have cornered the market on access to those users. This means advertisers won't have any alternative to reach those users, and Meta can increase the prices of their ads.

19.2.3. How Meta Tries to Corner the Market of Social Media#

To increase profits, Meta wants to corner the market on social media. This means they want to get the most users possible to use Meta (and only Meta) for social media. Before we discuss their strategy, we need a couple of background concepts:

- Network effect [s39]: Something is more useful the more people use it (e.g., telephones, the metric system). For example, when the Google+ social media network [s40] started, not many people used it, which meant that if you visited it there wasn't much content, so people stopped using it, which meant there was even less content, and it was eventually shut down.
- Network power [s41]: When more people start using something, it becomes harder to use alternatives. For example, Twitter's large user base makes it difficult for people to move to a new social media network, even if they are worried the new owner is going to ruin it, since the people they want to connect with aren't all on some other platform. This means Twitter can get much worse and people still won't benefit from leaving it.
I looked into "Paleomagnetism: Magnetic Domains to Geologic Terranes" by Robert F. Butler, which is often cited in discussions on paleomagnetic methods. This book provides a great explanation of how magnetic minerals in rocks can retain a "fossil" magnetic signature from when they formed, allowing scientists to reconstruct ancient latitudes. One detail I found particularly interesting is how thermoremanent magnetization in igneous rocks differs from detrital remanent magnetization in sedimentary rocks. Since the Chuckanut Formation is primarily sedimentary, the alignment of magnetic minerals depends on how they settled in water rather than cooling from magma, which could introduce some variability in the data.
-
-
social-media-ethics-automation.github.io
-
"an economic system characterized by private or corporate ownership of capital goods, by investments that are determined by private decision, and by prices, production, and the distribution of goods that are determined mainly by competition in a free market" Merriam-Webster Dictionary [s1]

In other words, capitalism is a system where:

- Individuals or corporations own businesses.
- These business owners make what they want and set their own prices.
- They compete with other businesses to convince customers to buy their products.
- These business owners then hire wage laborers [s2] at predetermined rates for their work, while the owners get the excess business profits or losses.

Related Terms#

Here are a few more terms that are relevant to capitalism that we need to understand in order to get to the details of decision-making and strategies employed by social media companies.

- Shares / Stocks: Shares or stocks are ownership of a percentage of a business, normally coming with a percentage of the profits and a percentage of power in making business decisions. Companies then have a board of directors who represent these shareholders. The board is in charge of choosing who runs the company (the CEO), and has the power to hire and fire CEOs. For example: in 1985, the board of directors for Apple Computers denied Steve Jobs [s3] (co-founder of Apple) the position of CEO and then fired him completely. CEOs of companies (like Mark Zuckerberg of Meta) are often both wage-laborers (they get a salary; Zuckerberg gets a tiny symbolic $1/year) and shareholders (they get a share of the profits; Zuckerberg owns 16.8%).
- Free Market [s4]: Businesses set their own prices and customers decide what they are willing to pay, so prices go up or down as each side decides what they are willing to charge/spend (no government intervention). See supply and demand [s5]. What gets made is theoretically determined by what customers want to spend their money on (especially the people with the most money, both business owners and customers), with businesses competing for customers by offering better products and better prices.
- Monopoly [s6]: "a situation where a specific person or enterprise is the only supplier of a particular thing." Monopolies are considered anti-competitive (though not necessarily anti-capitalist): businesses can lower quality and raise prices, and customers will have to accept those prices since there are no alternatives. Cornering a market [s7] is being close enough to a monopoly to mostly set the rules (e.g., Amazon and online shopping).

19.1.2. Socialism#

Let's contrast capitalism with socialism. Socialism [s8], in contrast, is a system where:

- A government owns the businesses (sometimes called "government services").
- A government decides what to make and what the price is (the price might be free, like with public schools, public streets and highways, public playgrounds, etc.).
- A government then may hire wage laborers [s2] at predetermined rates for their work, and the excess business profits or losses are handled by the government. For example, losses are covered by taxes, and excess may pay for other government services or go directly to the people (e.g., Alaska uses its oil profits to pay people to live there [s9]).

As an example, there is one Seattle City Sewer system, which is run by the Seattle government. Having many competing sewer systems could actually make a big mess of the underground pipe system.

19.1.3. Accountability in Capitalism and other systems#

Let's look at who the leaders of businesses (or services) are accountable to in capitalism and other systems.

Democratic Socialism (i.e., "Socialists[1]")#

With socialism in a representative democracy (i.e., "democratic socialism"), the government leaders are chosen by the people through voting. And so, while the governmental leaders are in charge of what gets made, how much it costs, and who gets it, those leaders are accountable to the voters. So, in a democratic socialist government, theoretically, every voter has an equal say in business (or government service) decisions. Note that there are limitations to the government leaders being accountable to the people their decisions affect, such as government leaders ignoring voters' wishes, or people who can't vote (e.g., the young, non-citizens, oppressed minorities) and therefore don't get a say.

Capitalism#

In capitalism, business decisions are accountable to the people who own the business. In a publicly traded [s10] business, that is the shareholders. The more money someone has invested in a company, the more say they have. And generally in a capitalist system, the rich have the most say in what happens (both as business owners and customers), and the poor have very little say in what happens.

When shareholders buy stocks in a company, they are owed a percentage of the profits. Therefore it is the company leaders' fiduciary duty [s11] to maximize the profits of the company (called the Friedman Doctrine [s12]). If the leader of the company (the CEO) intentionally makes a decision that they know will reduce the company's profits, then they are cheating the shareholders out of money the shareholders could have had. CEOs mistakenly do things that lose money all the time, but doing so on purpose is a violation of fiduciary duty. There are many ways a CEO might intentionally lower profits unfairly, such as by having their company pay more than necessary when buying something from the CEO's friend's company. But even if a CEO decides to reduce profits for a good reason (e.g., it may be unethical to overwork the employees), then they are still violating their fiduciary duty, and the board of directors might fire them or pressure them into prioritizing profits above all else. For example, the actor Stellan Skarsgård complained that in the film industry, it didn't matter if a company was making good movies at a decent profit. If there is an opportunity for even more profit by making worse movies, then that is what business leaders are obligated to do:
This method involves measuring the orientation of magnetic minerals within the rock. When sedimentary rocks like those in the Chuckanut Formation were deposited, iron-bearing minerals aligned with Earth’s magnetic field, which can indicate the latitude at the time of formation. If the paleomagnetic data show a different latitude from the present-day location, this would suggest that the formation has moved due to plate tectonics.
-
-
social-media-ethics-automation.github.io
-
While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so. We've seen examples of this before with Justine Sacco and with crowd harassment (particularly dogpiling).

For an example of public shaming, we can look at late-night TV host Jimmy Kimmel's annual Halloween prank, where he has parents film their children as the parents tell the children that they ate all the kids' Halloween candy. Parents post these videos online, where viewers are intended to laugh at the distress, despair, and sense of betrayal the children express. I will not link to these videos, which I find horrible, but instead link you to these articles:

- Jimmy Kimmel's Halloween prank can scar children. Why are we laughing? [r4]
- Jimmy Kimmel's Halloween Candy Prank: Harmful Parenting? [r5]

We can also consider events in the #MeToo movement as at least in part public shaming of sexual harassers (but also, of course, solidarity and organizing of victims of sexual harassment, and pushes for larger political, organizational, and social changes).

18.2.1. Aside on "Cancel Culture"#

The term "cancel culture" can be used for public shaming and criticism, but is used in a variety of ways, and it doesn't refer to just one thing. The offense that someone is being canceled for can range from sexual assault of minors (e.g., R. Kelly, Woody Allen, Kevin Spacey), to minor offenses or even misinterpretations. The consequences for being "canceled" can range from simply the experience of being criticized, to loss of job or criminal charges. Given the huge range of things "cancel culture" can refer to, we'll mostly stick to talking here about "public shaming" and "public criticism."

18.2.2. Learn More#

Twitter, the Intimacy Machine [r6]: "Twitter incentivizes its users to take trust falls, and then it rewards other users for blocking the catch. Twitter is a technology finely tuned to call forth, and then crush, intimacy."
One of the sources cited, "Twitter, the Intimacy Machine", highlights how Twitter fosters a deceptive sense of closeness while simultaneously exposing users to mass scrutiny and public shaming. The idea that social media platforms are designed to encourage vulnerability—only to punish it—aligns with many discussions on digital public shaming. This source made me think about how social media amplifies both connection and cruelty, often without users fully realizing the consequences of engaging in viral criticism.
-
-
social-media-ethics-automation.github.io
-
Before we talk about public criticism and shaming among adults, let's look at the role of shame in childhood. In at least some views about shame and childhood[1], shame and guilt hold different roles in childhood development [r1]:

- Shame is the feeling that "I am bad," and the natural response to shame is for the individual to hide, or the community to ostracize the person.
- Guilt is the feeling that "This specific action I did was bad." The natural response to feeling guilt is for the guilty person to want to repair the harm of their action.

In this view [r1], a good parent might see their child doing something bad or dangerous, and tell them to stop. The child may feel shame (they might not be developmentally able to separate their identity from the momentary rejection). The parent may then comfort the child to let the child know that they are not being rejected as a person, it was just their action that was a problem. The child's relationship with the parent is repaired, and over time the child will learn to feel guilt instead of shame and seek to repair harm instead of hide.

[1] This view of shame/guilt is perhaps more individualistic and perhaps more common in individualistic cultures [r2]. It might work differently in other cultures (e.g., face [r3]).
One interesting aspect of this discussion on shame and guilt is how it intersects with cultural differences in parenting. In more collectivist cultures, shame is often used as a social tool to maintain harmony and reinforce group values, rather than just an individual emotional response. For example, in some East Asian cultures, the concept of "losing face" is closely tied to shame, and rather than encouraging individual repair (as guilt might in individualistic cultures), the emphasis is on restoring social harmony or preserving group reputation.
-
-
social-media-ethics-automation.github.io
-
Harassment can also be done through crowds. Crowd harassment has always been a part of culture, such as riots, mob violence, revolts, revolution, government persecution, etc. Social media then allows new ways for crowd harassment to occur. Crowd harassment includes all the forms of individual harassment we already mentioned (like bullying, stalking, etc.), but done by a group of people. Additionally, we can consider the following forms of crowd harassment:

- Dogpiling [q4]: When a crowd of people targets or harasses the same person.
- Public shaming (this will be our next chapter)
- Cross-platform raids (e.g., a 4chan group planning harassment on another platform [q5])
- Stochastic terrorism [q6]: "The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random." [q7] See also: An atmosphere of violence: Stochastic terror in American politics [q8]

In addition, fake crowds (e.g., bots or people paid to post) can participate in crowd harassment. For example: "The majority of the hate and misinformation about [Meghan Markle and Prince Henry] originated from a small group of accounts whose primary, if not sole, purpose appears to be to tweet negatively about them. […] 83 accounts are responsible for 70% of the negative hate content targeting the couple on Twitter." Twitter Data Has Revealed A Coordinated Campaign Of Hate Against Meghan Markle [q9]
This made me think about the broader implications of fake crowds in online harassment. If public opinion can be skewed by a small number of accounts, it complicates the idea of "public backlash." How much of what we perceive as widespread criticism is actually manufactured? This reminds me of other cases where bot networks have been used for political propaganda, misinformation, and even to incite violence.
-
-
social-media-ethics-automation.github.io
-
Individual harassment (one individual harassing another individual) has always been part of human cultures, but social media provides new methods of doing so. There are many methods of harassment through social media. This can be done privately through things like:

- Bullying: like sending mean messages through DMs.
- Cyberstalking: Continually finding the account of someone, and creating new accounts to continue following them. Or possibly researching the person's physical location.
- Hacking: Hacking into an account or device to discover secrets, or make threats.
- Tracking: An abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim's phone.
- Death threats / rape threats
- Etc.

Individual harassment can also be done publicly before an audience (such as classmates or family). For example:
One thing that stood out to me is the discussion of tracking—it’s often associated with parental monitoring, but in some cases, it can cross the line into control and abuse, especially in relationships. This makes me wonder: where should the line be drawn between protection and invasion of privacy? While some might argue that monitoring children’s online activity is a safety measure, others see it as an overreach that can limit personal autonomy. Similarly, in relationships, tracking can start under the guise of concern but turn into manipulation and control.
-
- Feb 2025
-
social-media-ethics-automation.github.io
-
16.1.1. Different Ways of Collaborating and Communicating#

There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots [p5]. But there are ways that attempts to recreate in-person interactions inevitably fall short and don't feel the same. Instead, though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There [p6] (pdf here [p7]). Some of the different characteristics that means of communication can have include (but are not limited to):
The study on deciphering bad handwriting through Mechanical Turk reminds me of the broader role of crowdsourcing in machine learning. One notable example is the ImageNet dataset, which was built using crowdsourced labeling of images.
-
-
social-media-ethics-automation.github.io
-
16.2.1. Crowdsourcing Platforms#

Some online platforms are specifically created for crowdsourcing. For example:

- Wikipedia [p12]: An online encyclopedia whose content is crowdsourced. Anyone can contribute: just go to an unlocked Wikipedia page and press the edit button. Institutions don't get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages [p13]), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute.
- Quora [p14]: A crowdsourced question-and-answer site.
- Stack Overflow [p15]: A crowdsourced question-and-answer site specifically for programming questions.
- Amazon Mechanical Turk [p16]: A site where you can pay for crowdsourcing small tasks (e.g., pay a small amount for each task, and then let a crowd of people choose to do the tasks and get paid).
- Upwork [p17]: A site that lets people find and contract work with freelancers (generally larger and more specialized tasks than Amazon Mechanical Turk).
The example of Fold-It is fascinating because it highlights how human intuition and problem-solving skills can outperform computers in certain complex tasks. It reminds me of how CAPTCHA tests often rely on human pattern recognition to verify users.
-
-
social-media-ethics-automation.github.io
-
14.1.1. Quality Control#

In order to make social media sites usable and interesting to users, they may ban different types of content such as advertisements, disinformation, or off-topic posts. Almost all social media sites (even the ones that claim "free speech") block spam [n1], mass-produced unsolicited messages, generally advertisements, scams, or trolling. Without quality control moderation, the social media site will likely fill up with content that the target users of the site don't want, and those users will leave. What content is considered "quality" content will vary by site, with 4chan considering a lot of offensive and trolling content to be "quality" but still banning spam (because it would make the site repetitive in a boring way), while most sites would ban some offensive content.

14.1.2. Legal Concerns#

Social media sites also might run into legal concerns with allowing some content to be left up on their sites, such as copyrighted material (like movie clips) or child sexual abuse material (CSAM). So most social media sites will have rules about content moderation, and at least put on the appearance of trying to stop illegal content (though a few will try to move to countries that won't get them in trouble, like 8kun being hosted in Russia). With copyrighted content, the platform YouTube is very aggressive in allowing movie studios to get videos taken down, so many content creators on YouTube have had their videos taken down erroneously [n2].

14.1.3. Safety#

Another concern is for the safety of the users on the social media platform (or at least the users that the platform cares about). Users who don't feel safe will leave the platform, so social media companies are incentivized to help their users feel safe. This often means moderation to stop trolling and harassment.
The mention of YouTube’s aggressive copyright enforcement reminds me of the ongoing debate about the Digital Millennium Copyright Act (DMCA). The system often disproportionately affects small content creators, who may have their videos removed or demonetized due to false claims, even when their use of copyrighted material falls under fair use. One relevant source is the Electronic Frontier Foundation (EFF), which has criticized the DMCA’s takedown system for being overly automated and biased in favor of large media corporations. Their article “Takedown Hall of Shame” provides real-world examples of wrongful takedowns and how they harm online expression.
-
-
social-media-ethics-automation.github.io
-
15.1.1. No Moderators#

Some systems have no moderators. For example, a personal website that can only be edited by the owner of the website doesn't need any moderator set up (besides the person who makes their website). If a website does let others contribute in some way, and is small, no one may be checking and moderating it. But as soon as the wrong people (or spam bots) discover it, it can get flooded with spam, or have illegal content put up (which could put the owner of the site in legal jeopardy).

15.1.2. Untrained Staff#

If you are running your own site and suddenly realize you have a moderation problem, you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret.
The section on Automated Moderators (bots) made me think about the limitations of AI in content moderation. While bots can quickly filter out spam and flag harmful content, they often struggle with nuance, sarcasm, or cultural context. I’ve seen many cases where platforms like YouTube or Instagram mistakenly flag harmless content while allowing harmful material to slip through. This raises the question: should platforms rely more on AI moderation, or should they invest in more human moderators to review flagged content? While AI can help with efficiency, it still seems far from replacing human judgment completely.
-
-
social-media-ethics-automation.github.io
-
13.2.2. Trauma Dumping#

While there are healthy ways of sharing difficult emotions and experiences (see the next section), when these difficult emotions and experiences are thrown at unsuspecting and unwilling audiences, that is called trauma dumping [m11]. Social media can make trauma dumping easier. For example, with parasocial relationships, you might feel like the celebrity is your friend who wants to hear your trauma. And with context collapse, where audiences are combined, how would you share your trauma with an appropriate audience and not an inappropriate one (e.g., if you re-post something and talk about how it reminds you of your trauma, are you dumping it on the original poster?). Trauma dumping can be bad for the mental health of those who have this trauma unexpectedly thrown at them, and it also often isn't helpful for the person doing the trauma dumping either:

"Venting, by contrast, is a healthy form of expressing negative emotion, such as anger and frustration, in order to move past it and find solutions. Venting is done with the permission of the listener and is a one-shot deal, not a recurring retelling or rumination of negativity. A good vent allows the venter to get a new perspective and relieve pent-up stress and emotion. While there are benefits to venting, there are no benefits to trauma dumping. In trauma dumping, the person oversharing doesn't take responsibility or show self-reflection. Trauma dumping is delivered on the unsuspecting. The purpose is to generate sympathy and attention, not to process negative emotion. The dumper doesn't want to overcome their trauma; if they did, they would be deprived of the ability to trauma dump." How to Overcome Social Media Trauma Dumping [m12]

13.2.3. Munchausen by Internet#

Munchausen Syndrome (or Factitious disorder imposed on self [m13]) is when someone pretends to have a disease, like cancer, to get sympathy or attention. People with various illnesses often find support online, and even form online communities. It is often easier to fake an illness in an online community than in an in-person community, so many have done so [m14] (like the fake @Sciencing_Bi account dying of covid in the authenticity chapter). People who fake these illnesses often do so as a result of their own mental illness, so, in fact, "they are sick, albeit […] in a very different way than claimed" [m15].

13.2.4. Digital Self-Harm#

Sometimes people will harm their bodies (called "self-harm" [m16]) as a way of expressing or trying to deal with negative emotions or situations. Self-harm doesn't always have to be physical […]
In the bibliography, the study 'Explainable AI for Mental Disorder Detection via Social Media: A Survey and Outlook' caught my attention. The paper highlights the potential of AI in identifying mental health issues through patterns in social media usage. While this presents promising avenues for early intervention, I'm curious about the ethical implications concerning user privacy and consent. How can we ensure that such technologies are implemented responsibly without infringing on individual rights?
-
-
social-media-ethics-automation.github.io
-
In 2019 the company Facebook (now called Meta) presented an internal study that found that Instagram was bad for the mental health of teenage girls, and yet they still allowed teenage girls to use Instagram. So, what does social media do to the mental health of teenage girls, and to all its other users? The answer is of course complicated and varies. Some have argued that Facebook's own data is not as conclusive as you think about teens and mental health [m1].

Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people's views of their appearance: "People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they're coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat." Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image [m2]

Comedian and director Bo Burnham has his own observations about how social media is influencing mental health: "If [social media] was just bad, I'd just tell all the kids to throw their phone in the ocean, and it'd be really easy. The problem is it - we are hyper-connected, and we're lonely. We're overstimulated, and we're numb. We're expressing our self, and we're objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there's a part of social anxiety I think that feels like you're a little bit disassociated from yourself. And it's sort of like you're in a situation, but you're also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. You know, it forces kids to not just live their experience but be nostalgic for their experience while they're living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I'll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data." Director Bo Burnham On Growing Up With Anxiety — And An Audience [m3] - NPR Fresh Air (10:15-11:20)

It can be difficult to measure the effects of social media on mental health since there are so many types of social media, and it permeates the cultures even of people who don't use it directly. Some researchers have found that people using social media may enter a dissociation state [m4], where they lose track of time (like what happens when someone is reading a good book).

Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people's mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset [m5].
The chapter's discussion on 'trauma dumping' resonated with me. I've noticed an increase in unfiltered sharing of personal traumas on social media platforms. While it's essential to have spaces for open expression, I'm concerned about the potential emotional burden this places on unsuspecting readers and whether such platforms are suitable for processing deep-seated issues. How can we balance authentic sharing with the need to protect the mental well-being of the broader online community?
-
-
social-media-ethics-automation.github.io
-
The book Writing on the Wall: Social Media - The First 2,000 Years [l6] describes how, before the printing press, when someone wanted a book, they had to find someone who had a copy and have a scribe make a copy. So books that were popular spread through people having scribes copy each other's books. And with all this copying, there might be different versions of the book spreading around, because of scribal copying errors, added notes, or even the original author making an updated copy. So we can look at the evolution of these books: which got copied, and how they changed over time.

12.2.2. Chain letters#

When physical mail was dominant in the 1900s, one type of mail that spread around the US was a chain letter [l7]. Chain letters were letters that instructed the recipient to make their own copies of the letter and send them to people they knew. In some letters, the reason to make copies was a pyramid scheme [l8]: you were supposed to send money to the people you got the letter from, and then the people you sent the letter to would send money to you. Other letters claimed that if the recipient made copies, good things would happen to them, and if not, bad things would, like this:

"You will receive good luck within four days of receiving this letter, providing, you in turn send it on. […] An RAF officer received $70,000 […] Gene Walsh lost his wife six days after receiving the letter. He failed to circulate the letter."

Fig. 12.2 An example chain letter from https://cs.uwaterloo.ca/~mli/chain.html [l9]

The spread of these letters meant that people were putting in effort to spread them (presumably believing making copies would make them rich or help them avoid bad luck). To make copies, people had to manually write or type up their own copies of the letters (or later, with photocopiers, find a machine and pay to make copies). Then they had to pay for envelopes and stamps to send them in the mail. As these letters spread, we could consider what factors made some chain letters (and modified versions) spread more than others, and how the letters got modified as they spread.

12.2.3. Sourdough starters#

Sourdough bread is made using something called a "starter," which is a mix of flour, water, and a colony of microorganisms (like yeast).

Fig. 12.3 Sourdough starter. Photo by Janus Sandsgaard [l10]

Fig. 12.4 Sourdough bread. Photo source [l11]

The microorganisms in the starter will continue multiplying if you let them, and you can add flour and water to make it larger, then split it into multiple starters. You can repeat this process again and again, occasionally using some starters to bake bread, and you can share the starters with others. In this way, as people split and share their starters, sourdough starters spread, multiply, and evolve (including the microorganisms evolving biologically). One sourdough starter even dates back to at least 1847 [l12].
The Bible and classical literature have numerous versions due to centuries of hand-copying, which led to slight (or sometimes major) changes in meaning. This connects well with the concept of memes evolving—ideas don't stay static but shift and adapt as they spread. Similarly, in modern times, digital misinformation spreads and mutates just as easily, showing that the process of "cultural evolution" hasn't really changed—just the speed and scale at which it happens.
-
-
social-media-ethics-automation.github.io
-
12.1.1. Evolution#

Biological evolution is how living things change, generation after generation, and how all the different forms of life, from humans to bacteria, came to be. Evolution occurs when three conditions are present:

- Replication (with inheritance): An organism can make a new copy of itself, which inherits its characteristics.
- Variations / mutations: The characteristics of an organism are sometimes changed, in a way that can be inherited by future copies.
- Natural selection: Some characteristics make it more or less likely for an organism to compete for resources, survive, and make copies of itself.

When those three conditions are present, then over time successive generations of organisms will: be more adapted to their environment; divide into different groups and diversify; and stumble upon strategies for competing with or cooperating with other organisms.

12.1.2. Memes#

In the 1976 book The Selfish Gene [l3], evolutionary biologist Richard Dawkins[1] said rather than looking at the evolution of organisms, it made even more sense to look at the evolution of the genes of those organisms (sections of DNA that perform some functions and are inherited). For example, if a bee protects its nest by stinging an attacking animal and dying, then it can't reproduce and it might look like a failure of evolution. But if the gene that told the bee to die protecting the nest was shared by the other bees in the nest, then that one bee dying allows the gene to keep being replicated, so the gene is successful evolutionarily.

Since genes contain information about how organisms grow and live, biological evolution can be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term "meme" (intended to sound like "gene" [l4]). A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, with others overhearing it next. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of: technology (languages, weapons, medicine, writing, math, computers, etc.), religions, philosophies, political ideas (democracy, authoritarianism, etc.), art, organizations, etc. We can even consider the evolutionary forces at play in the spread of true and false information (like the old saying: "A lie is halfway around the world before the truth has got its boots on." [l5])
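The three conditions above can be made concrete with a tiny simulation. Below is a minimal sketch in Python; the mutation rate, the fitness rule, and the starting "memes" are all invented for illustration, so treat it as a cartoon of the idea rather than a model of real cultural evolution.

```python
import random

# Toy simulation of evolution applied to "memes" represented as strings:
# replication (copying), variation (mutation), and selection (fitness).

def mutate(meme):
    # Variation: occasionally change one character when copying.
    if meme and random.random() < 0.1:
        i = random.randrange(len(meme))
        meme = meme[:i] + random.choice("abcdefghij ") + meme[i + 1:]
    return meme

def fitness(meme):
    # Selection pressure: here, arbitrarily, shorter memes spread better
    # (a stand-in for "catchiness"; real selection is far more complex).
    return 1 / (1 + len(meme))

population = ["a catchy tune", "a very long and hard to remember tune"]
for generation in range(10):
    # Replication with inheritance: memes are copied (with mutation)
    # in proportion to their fitness.
    weights = [fitness(m) for m in population]
    population = [mutate(random.choices(population, weights)[0])
                  for _ in range(len(population))]
print(population)
```

Even this cartoon shows the key point of the chapter: whatever the selection pressure rewards is what spreads, whether or not it is true or beneficial.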
One interesting aspect of this chapter is how memes, as described by Dawkins, evolve similarly to biological genes. It makes me wonder—does this mean that the most successful cultural ideas are not necessarily the "best" or "truest" but simply the ones that replicate most effectively? For example, misinformation and conspiracy theories often spread faster than verified facts because they are more sensational or emotionally charged. This seems to align with the idea of natural selection, where the "fittest" memes are the ones that spread the easiest, even if they aren’t the most beneficial.
-
-
social-media-ethics-automation.github.io
-
Friends or Follows:#

Recommendations for friends or people to follow can go well when the algorithm finds you people you want to connect with. Recommendations can go poorly when they do something like recommend an ex or an abuser because they share many connections with you.

Reminders:#

Automated reminders can go well in a situation such as when a user enjoys the nostalgia of seeing something from their past. Automated reminders can go poorly when they give users unwanted or painful reminders, such as for miscarriages [k7], funerals, or break-ups [k8].

Ads:#

Advertisements shown to users can go well for […]
This article emphasizes that the algorithms driving platforms like Facebook and YouTube are central to their operation, shaping the content users see and influencing public discourse. It underscores the importance of understanding these systems to grasp their societal impact.
-
-
social-media-ethics-automation.github.io
-
When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

Some recommendation algorithms can be simple, such as reverse chronological order, meaning they show users the latest posts (like how blogs work, or Twitter's "See latest tweets" option). They can also be very complicated, taking into account many factors, such as:

- Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
- Whether the post was made or liked by my friends or people I'm following
- How much this post has been liked, interacted with, or hovered over
- Which other posts I've been liking, interacting with, or hovering over
- What people connected to me or similar to me have been liking, interacting with, or hovering over
- What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely)
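To illustrate how a platform might combine several such factors into one ranking, here is a minimal, hypothetical scoring sketch in Python. Every factor name, weight, and formula below is invented for this sketch; real platforms' recommendation algorithms are proprietary and vastly more complex.

```python
import math
import time

# Hypothetical sketch: combine a few ranking factors into a single score.

def score_post(post, now=None):
    now = now or time.time()
    hours_old = (now - post["posted_at"]) / 3600

    recency = math.exp(-hours_old / 24)      # newer posts score higher
    engagement = math.log1p(post["likes"])   # diminishing returns on likes
    friend_boost = 2.0 if post["by_friend"] else 0.0

    # The weights are arbitrary; tuning them changes what the feed promotes.
    return 1.5 * recency + 1.0 * engagement + friend_boost

posts = [
    {"posted_at": time.time() - 3600, "likes": 10, "by_friend": True},
    {"posted_at": time.time() - 86400, "likes": 5000, "by_friend": False},
]
feed = sorted(posts, key=score_post, reverse=True)
```

Note that reverse chronological order is just the special case where recency is the only factor considered.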
Reflecting on my personal experience, I've noticed that while recommendation algorithms often suggest content aligned with my interests, they occasionally miss the mark. For instance, after briefly researching a topic, I might be inundated with related content for weeks, even if my interest was fleeting. This persistence can be intrusive and highlights a limitation in the algorithms' adaptability to dynamic user interests.
-
-
social-media-ethics-automation.github.io
-
10.2. Accessible Design#

There are several ways of managing disabilities. All of these ways of managing disabilities might be appropriate at different times for different situations.

10.2.1. Coping Strategies#

Those with disabilities often find ways to cope with their disability, that is, find ways to work around difficulties they encounter and seek out places and strategies that work for them (whether realizing they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking and may take a mental or physical toll on the person masking, which others around them won't realize. For example, kids who are nearsighted and don't realize their ability to see is different from other kids will often seek out seats at the front of classrooms where they can see better. As for us two authors, we both have ADHD and were drawn to PhD programs where our tendency to hyperfocus on following our curiosity was rewarded (though executive dysfunction with finishing projects created challenges)[1]. This way of managing disabilities puts the burden fully on disabled people to manage their disability in a world that was not designed for them, trying to fit in with "normal" people.

10.2.2. Modifying the Person#

Another way of managing disabilities is assistive technology [j13], which is something that helps a disabled person act as though they were not disabled. In other words, it is something that helps a disabled person become more "normal" (according to whatever a society's assumptions are). For example:

- Glasses help people with near-sightedness see in the same way that people with "normal" vision do.
- Walkers and wheelchairs can help some disabled people move around closer to the way "normal" people can (though stairs can still be a problem).
- A spoon might automatically balance itself [j14] when held by someone whose hands shake.
- Stimulants (e.g., caffeine, Adderall) can increase executive function in people with ADHD, so they can plan and complete tasks more like how neurotypical people do.

Assistive technologies give tools to disabled people to help them become more "normal." So the disabled person becomes able to move through a world that was not designed for them. But there is still an expectation that disabled people must become more "normal," and often these assistive technologies are very expensive. Additionally, attempts to make disabled people (or people with other differences) act "normal" can be abusive, such as Applied Behavior Analysis (ABA) therapy for autistic people [j15], or "Gay Conversion Therapy" [j16].

10.2.3. Making an environment work for all#

Another strategy for managing disability is to use Universal Design [j17], which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it[2]. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.

10.2.4. Making a tool adapt to users#

When creating computer programs, programmers can do things that aren't possible with architecture (where Universal Design came out of), that is: programs can change how they work for each individual user. All people (including disabled people) have different abilities, and making a system that can modify how it runs to match the abilities a user has is called Ability based design [j18]. For example, a phone might detect that the user has gone from a dark to a light environment, and might automatically change the phone brightness or color scheme to be easier to read. Or a computer program might detect that a user's hands tremble when they are trying to select something on the screen, and the computer might change the text size, or try to guess the intended selection. In this way of managing disabilities, the burden is put on the computer programmers and designers to detect and adapt to the disabled person.

10.2.5. Are things getting better?#

We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.
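The ability-based design idea in section 10.2.4 above can be made concrete with a toy sketch in Python. The measurements, thresholds, and function names here are all invented for illustration; a real system would be far more careful about when and how it adapts.

```python
# Invented sketch of ability-based design: measure how precisely a user
# selects on-screen targets, and enlarge the interface if selections are
# imprecise (e.g., because the user's hands tremble).

def average_miss_distance(taps, targets):
    # Average distance (in pixels) between where the user tapped and the
    # center of what they were trying to hit.
    dists = [((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
             for (x, y), (tx, ty) in zip(taps, targets)]
    return sum(dists) / len(dists)

def adapted_text_size(base_size, taps, targets):
    # If recent selections are imprecise, enlarge text and tap targets.
    if average_miss_distance(taps, targets) > 20:  # threshold is arbitrary here
        return base_size * 1.5
    return base_size

# Example: two recent taps vs. the targets the user was aiming for.
print(adapted_text_size(14, [(100, 102), (205, 190)], [(98, 100), (200, 200)]))
```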
The comparison between assistive technology and universal design also made me reflect on how differently society perceives accommodations. Glasses, for example, are a widely accepted assistive tool, to the point where people forget that nearsightedness is technically a disability. Meanwhile, other assistive devices, like wheelchairs or ADHD medication, can sometimes carry stigma, even though they serve the same purpose—helping people function in a world that isn't designed for them.
-
-
social-media-ethics-automation.github.io
-
A disability is an ability that a person doesn't have, but that their society expects them to have.[1] For example:

- If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.
- If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.
- If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can't lift their arms up, or who can't stand up, all would have a disability in that situation.
- If an airplane seat was designed with little leg room, assuming people's legs wouldn't be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.

Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined [j1]. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group might just be "normal" in another.

There are many things we might not be able to do that won't be considered disabilities because our social groups don't expect us to be able to do them. For example, none of us have wings that we can fly with, but that is not considered a disability, because our social groups didn't assume we would be able to. Or, for a more practical example, let's look at color vision: Most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats. So people who can't see as many colors are considered to be color blind [j2], a disability. But there are also a small number of people who are tetrachromats [j3] and can see four base colors[2] and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people) lack the ability to see some colors. But our society doesn't build things for tetrachromats, so their extra ability to see color doesn't help them much. And trichromats' relative reduction in seeing color doesn't cause them difficulty, so being a trichromat isn't considered to be a disability.

Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome [j4], contact lenses for a visual disability, or a prosthetic for a missing limb covered by clothing). Sometimes people with invisible disabilities get unfairly accused of "faking" or "making up" their disability (e.g., someone who can walk short distances but needs to use a wheelchair when going long distances).

Disabilities can be accepted as socially normal, like is sometimes the case for wearing glasses or contacts, or they can be stigmatized [j5] as socially unacceptable, inconvenient, or blamed on the disabled person. Some people (like many with chronic pain) would welcome a cure that got rid of their disability.
Others (like many autistic people [j6]) are insulted by the suggestion that there is something wrong with them that needs to be "cured," and think the only reason autism is considered a "disability" at all is because society doesn't make reasonable accommodations for them the way it does for neurotypical [j7] people.

Many of the disabilities we mentioned above were permanent disabilities, that is, disabilities that won't go away. But disabilities can also be temporary disabilities, like a broken leg in a cast, which may eventually get better. Disabilities can also vary over time (e.g., "Today is a bad day for my back pain"). Disabilities can even be situational disabilities, like the loss of fine motor skills when wearing thick gloves in the cold, or trying to watch a video on your phone in class with the sound off, or trying to type on a computer while holding a baby.

As you look through all these types of disabilities, you might discover ways you have experienced disability in your life. Though please keep in mind that different disabilities can be very different, and everyone's experience with their own disability can vary. So having some experience with disability does not make someone an expert in any other experience of disability.

As for our experience with disability, Kyle has been diagnosed with generalized anxiety disorder [j8] and Susan has been diagnosed with depression [j9]. Kyle and Susan also both have:

- near-sightedness [j10]: our eyes cannot focus on things far away (unless we use corrective lenses, like glasses or contacts)
- ADHD [j11]: we have difficulty controlling our focus, sometimes being hyperfocused and sometimes being highly distracted, and also have difficulties with executive dysfunction [j12]
This made me think about how I’ve encountered situational disabilities in my own life. For example, trying to use a smartphone in bright sunlight when the screen becomes unreadable is a form of situational disability. Similarly, being in a loud space where I can't hear a conversation well might resemble the experience of someone with hearing loss, even if it's only temporary. It’s a reminder that disability is fluid and context-dependent, not just a fixed identity that applies to a specific group of people.
-
-
social-media-ethics-automation.github.io
-
While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

For example, the proper security practice for storing user passwords is to use a special individual encryption process [i6] for each individual password. This way the database can only confirm that a password was the right one, but it can't independently look up what the password is or even tell if two people used the same password. Therefore, if someone had access to the database, the only way to figure out the right password is to use "brute force," that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time [i7]). But while that is the proper security practice for storing passwords, companies don't always follow it. For example, Facebook stored millions of Instagram passwords in plain text [i8], meaning the passwords weren't encrypted and anyone with access to the database could simply read everyone's passwords. And Adobe encrypted their passwords improperly, and then hackers leaked their password database of 153 million users [i9].

From a security perspective there are many risks that a company faces, such as:

- Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women [i10]
- Hackers finding a vulnerability and inserting, modifying, or downloading information. For example:
  - hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax [i11]
  - hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users [i12], or about 7% of all people on Earth

Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like:

- Password reuse attacks, where if they find out your password from one site, they try that password on many other sites
- Hackers tricking a computer into thinking the […]
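The "special individual encryption process" described at the start of this excerpt is salted, deliberately slow password hashing. Here is a minimal sketch using PBKDF2 from Python's standard library; the iteration count is illustrative only, not a security recommendation.

```python
import os
import hashlib
import hmac

# Sketch of salted, slow password hashing. The password itself is never
# stored; only the salt and the derived digest are.

def hash_password(password):
    salt = os.urandom(16)  # unique per user, so identical passwords differ
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(password, salt, stored_digest):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("a wrong guess", salt, digest)
```

Because each user gets a unique random salt, the database can confirm a password without being able to look it up or tell that two users chose the same one, which is exactly the property the excerpt describes (and what Facebook's plain-text storage lacked).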
In the bibliography section, I noticed the reference to the General Data Protection Regulation (GDPR), which is a law designed to protect individuals' privacy and data in the European Union. I think it's incredibly important that such a law exists, and it's interesting to see how it works in practice, particularly with its "right to be forgotten." This law empowers individuals to request that companies delete their personal data, which offers a layer of protection for users in the digital age. It would be interesting to look more into how effective this regulation is and whether it should be implemented in more places, such as the U.S., to provide stronger protections for users globally. Privacy laws like the GDPR set a standard for what users should expect in terms of data protection, and learning more about it could help me better navigate how I manage my own online privacy.
-
-
social-media-ethics-automation.github.io
-
- There might be some things that we just feel like aren't for public sharing (like how most people wear clothes in public, hiding portions of their bodies)
- We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly
- We might want a conversation or action that happens in one context not to be shared in another (context collapse)
- We might want to avoid the consequences of something we've done (whether ethically good or bad), so we keep the action or our identity private
- We might have done or said something we want to be forgotten or at least made less prominent
- We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private
- We might want to avoid physical danger from a stalker, so we might keep our location private
- We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad)

When we use social media platforms though, we at least partially give up some […]
This chapter gave a lot of insight into the complexities of privacy in our digital world. I found the section discussing how much of our "private" information is actually not as secure as we might think both eye-opening and alarming. The example about Facebook storing Instagram passwords in plain text stood out to me because it shows how companies often fail at safeguarding our most sensitive data, even when we're trusting them with it. It's easy to forget the risks we take when we freely share personal information on social media or other platforms, assuming that companies will protect us. This chapter serves as a wake-up call for individuals to take their privacy seriously and consider how much control they’re willing to give up in the digital age.
-
- Jan 2025
-
social-media-ethics-automation.github.io
-
- Race
- Political leanings
- Interests
- Susceptibility to financial scams
- Being prone to addiction (e.g., gambling)

Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence [h5]. Social media data can also be used to infer information about larger social trends like the spread of misinformation [h6]. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying "they don't have any scent" (note: COVID-19 can cause a loss of the ability to smell):
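As a sketch of how such an inference might be checked numerically, here is a correlation computation in Python; all the numbers below are invented for illustration (statistics.correlation requires Python 3.10 or newer).

```python
from statistics import correlation  # Python 3.10+

# Invented weekly numbers: COVID-19 cases vs. "no scent" candle reviews.
covid_cases = [120, 340, 800, 1500, 2600, 2100, 1400]
no_scent_reviews = [2, 5, 11, 22, 35, 30, 18]

r = correlation(covid_cases, no_scent_reviews)
print(f"Pearson r = {r:.2f}")  # a value near 1.0 suggests a strong association
# Correlation alone can't establish causation; it is only suggestive.
```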
One of the sources in the bibliography, 'The Ethics of Big Data' by Kord Davis, caught my attention because it provides practical frameworks for businesses to integrate ethical decision-making into data practices. I appreciate how the book emphasizes proactive responsibility instead of reactive apologies after breaches or controversies.
-
-
social-media-ethics-automation.github.io
-
Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like:

- email address
- name
- profile picture
- interests
- friends

Platforms also collect information on how users interact with the site. They might collect information like (they don't necessarily collect all this, but they might):

- when users are logged on and logged off
- who users interact with
- what users click on
- what posts users pause over
- where users are located
- what users send in direct messages to each other
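To make the interaction tracking concrete, here is what a single logged event might look like as a data structure. This is a hypothetical sketch in Python; every field name is invented, since real platforms' logging schemas are proprietary.

```python
import json
import time

# Hypothetical interaction event a platform might record while a user scrolls.
event = {
    "user_id": "u_12345",
    "event_type": "post_hover",        # e.g., pausing over a post
    "post_id": "p_98765",
    "duration_ms": 2300,               # how long the user paused
    "timestamp": time.time(),
    "approx_location": "Seattle, WA",  # often inferred from the IP address
}
print(json.dumps(event, indent=2))
```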
The discussion on the ethical implications of automated data mining stood out to me, particularly the challenge of balancing personalization with user privacy. For example, I’ve noticed that when I browse online stores, I receive targeted ads almost instantly. While it’s convenient, it also makes me question how much of my data is being collected and shared without my explicit consent. This raises important questions about transparency—should companies be more upfront about what data they’re using and how? I think this is an area where stricter regulations or user-friendly tools for managing data permissions could make a significant difference.
-
-
social-media-ethics-automation.github.io
-
The book Writing on the Wall: Social Media - The First 2,000 Years [e1] by Tom Standage outlines some of the history of social media before internet-based social media platforms, such as in times before the printing press:

- Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts
- Books and news write-ups had to be copied by hand, so that only the most desired books went "viral" and spread
Tom Standage's observation about graffiti as a precursor to social media is intriguing. It reminds me of how memes and comments function today—they're quick, public, and often anonymous ways to share thoughts. Standage's connection between historical practices and today's platforms reinforces the idea that technology evolves, but human communication needs remain consistent.
-
As we talked about previously in a section of Chapter 2 (What is Social Media?), pretty much anything can count as social media, and the things we will see in internet-based social media show up in many other places as well. The book Writing on the Wall: Social Media - The First 2,000 Years [e1] by Tom Standage outlines some of the history of social media before internet-based social media platforms, such as in times before the printing press:

- Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts
- Books and news write-ups had to be copied by hand, so that only the most desired books went "viral" and spread

Later, sometime after the printing press, Standage highlights how there was an unusual period in American history that roughly took up the 1900s where, in America, news sources were centralized in certain newspapers and then the big 3 TV networks. In this period of time, these sources were roughly in agreement and broadcast news out to the country, making a more unified, consistent news environment (though, of course, we can point out how they were biased in ways like being run almost exclusively by white men). Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories [e2]. And now, as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.
"Reading about how ancient graffiti and handwritten books served as early forms of social media makes me realize how fundamental the need to share updates and spread ideas has always been. It’s fascinating to think about how even before the internet, people found ways to 'go viral' with limited resources. Today, platforms like TikTok have democratized virality, but the principle seems timeless."
-