- Dec 2024
social-media-ethics-automation.github.io
21.4. Final Reflection Questions

How have your views on social media changed (or been reinforced)? How have your views on ethics changed (or been reinforced)? How have your views on automation and programming changed (or been reinforced)? If you could magically change anything about how people behave on social media, what would it be? If you could magically change anything about how social media sites are designed, what would it be? If you could magically change anything about how social media sites operate as businesses, what would it be? If you could design a new social media site, what would you want to do that other social media sites do? What would you want to do differently than other social media sites?
My views on social media and ethics have evolved, emphasizing the need for transparency and accountability in design and operation. I’d encourage empathetic user behavior, reduce addictive features, and prioritize user privacy over profit. A new platform should foster genuine connections and minimize harmful content, balancing engagement with ethical responsibility.
21.2. Ethics in Tech

In the first chapter of our book we quoted actor Kumail Nanjiani on tech innovators’ lack of consideration of the ethical implications of their work. Of course, concerns about the implications of technological advancement are nothing new. In Plato’s Phaedrus (~370 BCE), Socrates tells (or makes up1) a story from Egypt critical of the invention of writing:

Now in those days the god Thamus was the king of the whole country of Egypt, […] [then] came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; […] [W]hen they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: […] this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

In England in the early 1800s, Luddites were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also sci-fi author Ted Chiang on Luddites and AI.)
Technological advancements often bring ethical dilemmas. Socrates critiqued writing for undermining memory, while Luddites resisted mechanization due to job loss. These concerns parallel debates today on AI and automation. Balancing innovation with ethical foresight is crucial to ensure technology enhances humanity without neglecting societal and individual impacts.
20.3.2. Programming Adoption Through Silicon Valley

The book Coding Places: Software Practice in a South American City by Dr. Yuri Takhteyev explores how programming in Brazil differs from programming in Silicon Valley. Dr. Takhteyev points out that since tech companies are centralized in Silicon Valley, Silicon Valley determines which technologies (like programming languages or coding libraries) get adopted. He compares this to how the art world works: “If you want to show [your art] in Chicago, you must move to New York.” He then rewords this for tech: if you want your software to be used widely in Brazil, you should write it in Silicon Valley. We can see this happening in a study by StackOverflow. They found that some technologies which are gaining popularity in Silicon Valley (Python and R) are not commonly used in poorer countries, whereas programming tech that is considered outdated in Silicon Valley (Android and PHP) is much more popular in poorer countries. In his book, Takhteyev tracks the history of the Lua programming language, which was invented in Brazil but became adopted in Silicon Valley. In order to gain popularity in Silicon Valley (and thus the rest of the world), the developers had to make difficult tradeoffs, no longer customizing it for the needs of their Brazilian users.
Programming’s English-centric nature and Silicon Valley’s influence highlight global inequities in tech adoption. Non-English speakers face barriers due to English-dominated languages, while Silicon Valley’s dominance pressures developers to prioritize global appeal over local needs. Promoting inclusivity in programming languages and decentralizing tech innovation can bridge these gaps and empower diverse communities.
19.1. What is Capitalism?

Why do social media platforms make decisions that harm users? And why do social media platforms sometimes go down paths of self-destruction and alienating their users? Sometimes these questions can be answered by looking at the economic forces that drive decision-making on social media platforms, in particular capitalism. So let’s start by defining capitalism.

19.1.1. Definition of Capitalism

Capitalism is: “an economic system characterized by private or corporate ownership of capital goods, by investments that are determined by private decision, and by prices, production, and the distribution of goods that are determined mainly by competition in a free market” (Merriam-Webster Dictionary). In other words, capitalism is a system where:

- Individuals or corporations own businesses.
- These business owners make what they want and set their own prices, competing with other businesses to convince customers to buy their products.
- These business owners then hire wage laborers at predetermined rates for their work, while the owners get the excess business profits or losses.

Related Terms

Here are a few more terms relevant to capitalism that we need to understand in order to get to the details of decision-making and strategies employed by social media companies.

Shares / Stocks: Shares or stocks are ownership of a percentage of a business, normally coming with a percentage of the profits and a percentage of power in making business decisions. Companies have a board of directors who represent these shareholders. The board is in charge of choosing who runs the company (the CEO), and they have the power to hire and fire CEOs. For example, in 1985 the board of directors of Apple Computers denied Steve Jobs (a co-founder of Apple) the position of CEO and then fired him completely. CEOs of companies (like Mark Zuckerberg of Meta) are often both wage laborers (they get a salary; Zuckerberg gets a tiny symbolic $1/year) and shareholders (they get a share of the profits; Zuckerberg owns 16.8%).

Free Market: Businesses set their own prices and customers decide what they are willing to pay, so prices go up or down as each side decides what they are willing to charge or spend (no government intervention); see supply and demand. What gets made is theoretically determined by what customers want to spend their money on (especially the people with the most money, both business owners and customers), with businesses competing for customers by offering better products and better prices.

Monopoly: “a situation where a specific person or enterprise is the only supplier of a particular thing.” Monopolies are considered anti-competitive (though not necessarily anti-capitalist): businesses can lower quality and raise prices, and customers have to accept those prices since there are no alternatives. Cornering a market is being close enough to a monopoly to mostly set the rules (e.g., Amazon and online shopping).
Capitalism promotes competition, innovation, and consumer choice by allowing private ownership and market-driven decisions. However, it can lead to wealth disparities and monopolies that reduce competition and consumer power. Balancing free markets with regulations is essential to address inequalities, ensure fairness, and prevent monopolistic practices that undermine its benefits.
- Nov 2024
18.4. Repair and Reconciliation

The idea of repair (or reconciliation) has shown up a couple of times already, both in the role of shame in child development, and in the Enforcing Social Norms: The Morality of Public Shaming paper. Let’s look more at what a repair might or might not look like.

18.4.1. Limits of Reconciliation

When we think about repair and reconciliation, many of us might wonder where there are limits. Are there wounds too big to be repaired? Are there evils too great to be forgiven? Is anyone ever totally beyond the pale of possible reconciliation? Is there a point of no return? One way to approach questions of this kind is to start from limit cases. That is, go to the farthest limit and see what we find there by way of a template, then work our way back toward the everyday. Let’s look at two contrasting limit cases: one where philosophers and cultural leaders declared that repairs were possible even after extreme wrongdoing, and one where the wrongdoers were declared unforgivable.1
Repair and reconciliation depend on genuine accountability, introspection, and restitution. While some actions may seem beyond forgiveness, frameworks like South Africa’s Truth and Reconciliation Commission show the potential for healing when perpetrators take meaningful steps toward repentance and change.
18.1. Shame vs. Guilt in childhood development

Before we talk about public criticism and shaming of adults, let’s look at the role of shame in childhood. In at least some views about shame and childhood1, shame and guilt hold different roles in childhood development:

- Shame is the feeling that “I am bad.” The natural response to shame is for the individual to hide, or for the community to ostracize the person.
- Guilt is the feeling that “This specific action I did was bad.” The natural response to feeling guilt is for the guilty person to want to repair the harm of their action.

In this view, a good parent might see their child doing something bad or dangerous, and tell them to stop. The child may feel shame (they might not be developmentally able to separate their identity from the momentary rejection). The parent may then comfort the child to let the child know that they are not being rejected as a person; it was just their action that was a problem. The child’s relationship with the parent is repaired, and over time the child will learn to feel guilt instead of shame and seek to repair harm instead of hide.
Shame and guilt play distinct roles in childhood development, shaping emotional growth and social behavior. Teaching children to differentiate “I am bad” from “this action was bad” fosters healthier self-perceptions. By addressing harmful actions while affirming the child’s worth, parents help cultivate guilt-driven accountability, encouraging repair over withdrawal or isolation.
You might remember from Chapter 14 that social contracts, whether literal or metaphorical, involve groups of people all accepting limits to their freedoms. Because of this, some philosophers say that a state or nation is, fundamentally, violent. Violence in this case refers to the way that individual Natural Rights and freedoms are violated by external social constraints. This kind of violence is considered to be legitimated by the agreement to the social contract. This might be easier to understand if you imagine a medical scenario. Say you have broken a bone and you are in pain. A doctor might say that the bone needs to be set; this will be painful, and kind of a forceful, “violent” action in which someone is interfering with your body in a painful way. So the doctor asks if you agree to let her set the bone. You agree, and so the doctor’s action is construed as being a legitimate interference with your body and your freedom. If someone randomly just walked up to you and started pulling at the injured limb, this unagreed violence would not be considered legitimate. Likewise, when medical practitioners interfere with a patient’s body in a way that is non-consensual or not what the patient agreed to, then the violence is considered illegitimate, or morally bad. We tend to think of violence as being another “normatively loaded” word, like authenticity. But where authenticity is usually loaded with a positive connotation–on the whole, people often value authenticity as a good thing–violence is loaded with a negative connotation. Yes, the doctor setting the bone is violent and invasive, but we don’t usually call this “violence” because it is considered to be a legitimate exercise of violence. Instead, we reserve the term “violence” mostly for describing forms of interference that we consider to be morally bad.
The concept of legitimate violence highlights the role of consent and social contracts in justifying certain actions. While violence is often seen negatively, scenarios like a doctor setting a bone illustrate how consent can transform invasive acts into morally acceptable ones. This distinction emphasizes the ethical importance of consent and societal norms.
Individual harassment (one individual harassing another individual) has always been part of human cultures, but social media provides new methods of doing so. This can be done privately through things like:

- Bullying: sending mean messages through DMs
- Cyberstalking: continually finding someone’s account and creating new accounts to continue following them, or possibly researching the person’s physical location
- Hacking: hacking into an account or device to discover secrets or make threats
- Tracking: an abuser might track the social media use of their partner or child to prevent them from making outside friends; they may even install spy software on their victim’s phone
- Death threats / rape threats
- Etc.

Individual harassment can also be done publicly before an audience (such as classmates or family). For example:

- Bullying: posting public mean messages
- Impersonation: making an account that appears to be from someone and having that account say things to embarrass or endanger the victim
- Doxing: publicly posting identifying information about someone (e.g., full name, address, phone number, etc.)
- Revenge porn / deep-fake porn
- Etc.
Social media amplifies traditional harassment through private and public means. Private actions like cyberstalking, hacking, and tracking invade personal space, while public actions like doxing, impersonation, and revenge porn humiliate victims before an audience. The anonymity and reach of social platforms intensify harm, demanding stricter regulation and awareness to combat abuse.
16.3.2. Well-Intentioned Harm

Sometimes even well-intentioned efforts can do significant harm. For example, in the immediate aftermath of the 2013 Boston Marathon bombing, the FBI released a security photo of one of the bombers and asked for tips. A group of Reddit users decided to try to identify the bomber(s) themselves. They quickly settled on a missing man (Sunil Tripathi) as the culprit (it turned out he had died by suicide and was in no way related to the case), and flooded the Facebook page set up to search for Sunil Tripathi, causing his family unnecessary pain and difficulty. The person who set up the “Find Boston Bomber” Reddit board said “It Was a Disaster” but “Incredible”, and Reddit apologized for the online Boston ‘witch hunt’.

16.3.3. Social and political movements

Some ad hoc crowdsourcing can be part of a social or political movement. For example, social media organizing played a role in the Arab Spring revolutions in the 2010s, and social media platforms were a large part of the #MeToo movement, where victims of sexual abuse/harassment spoke up and stood together.

16.3.4. Crowd harassment

Social media crowdsourcing can also be used for harassment, which we’ll look at more in the next couple chapters. For one example: the case of Justine Sacco involved crowdsourcing to identify and track her flight, and even get a photo of her turning on her phone.
Crowdsourcing, though powerful, can produce unintended harm despite good intentions. For instance, the misidentification of Sunil Tripathi after the Boston bombing reflects the dangers of misinformation, which can cause personal devastation. While crowdsourcing has fueled positive movements like #MeToo and the Arab Spring, it has also enabled harassment, as seen with Justine Sacco’s case. This duality underscores the need for responsibility in online collaboration.
16.1.1. Different Ways of Collaborating and Communicating

There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots. But attempts to recreate in-person interactions inevitably fall short and don’t feel the same. Instead, we can look at the different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There (pdf here). Some of the characteristics that means of communication can have include (but are not limited to):

- Location: Some forms of communication require you to be physically close; some allow you to be located anywhere with an internet signal.
- Time delay: Some forms of communication are almost instantaneous; some have small delays (you might see this on a video chat system), or significant delays (like shipping a package).
- Synchronicity: Some forms of communication require both participants to communicate at the same time (e.g., video chat), while others allow the person to respond when convenient (like a mailed physical letter).
- Archiving: Some forms of communication automatically produce an archive of the communication (like a chat message history), while others do not (like an in-person conversation).
- Anonymity: Some forms of communication make anonymity nearly impossible (like an in-person conversation), while others make it easy to remain anonymous.
- Audience: Communication could be private or public, and it could be one-way (no ability to reply) or two(+)-way, where others can respond.

Because of these (and other) differences, different forms of communication might be preferable for different tasks. For example, you might send an email to the person sitting next to you at work if you want to keep an archive of the communication (which is also conveniently grouped into email threads). Or you might send a text message to the person sitting next to you if you are criticizing the teacher but want to do so discreetly, so the teacher doesn’t notice.
Computer-based communication offers unique characteristics—like location flexibility, time delay management, synchronicity options, archiving, and anonymity—that sometimes outperform in-person interactions. For instance, email provides a convenient archive and thread organization, while texts offer discreetness. Choosing the right communication mode can enhance collaboration, support crowdsourcing, and suit specific tasks better than face-to-face methods.
14.5.1. Origin Story for Moderation

One concept that comes up in a lot of different ethical frameworks is moderation. Famously, Confucian thinkers prized moderation as a sound principle for living, or as a virtue, and taught the value of the ‘golden mean’, or finding a balanced, moderate state between extremes. This golden mean idea got picked up by Aristotle—we might even say ripped off by Aristotle—as he framed each virtue as a medial state between two extremes. You could be cowardly at one extreme, or brash and reckless at the other; in the golden middle is courage. You could be miserly and penny-pinching, or you could be a reckless spender, but the aim is to find a healthy balance between those two. Moderation, or being moderate, is something that is valued in many ethical frameworks, not because it comes naturally to us, per se, but because it is an important part of how we form groups and come to trust each other for our shared survival and flourishing.

Moderation also comes up in deontological theories, including the political philosophy tradition that grew out of Kantian rationalism: the tradition that is often identified with John Rawls, although there are many other variations out there too. In brief, here is the journey of the idea: Kant was influenced by ideas that were trending in his time–the European era we call the “Enlightenment”, which became very interested in the idea of rationality. We could write books about what they meant by the idea of “rationality”, and Kant certainly did so, but you probably already have a decent idea of what rationality is about. Rationalism tries to use reasoning, logical argument, and scientific evidence to figure out what to make of the world. Kant took this idea and ran with it, exploring the question of what if everything, even morality, could be derived from looking at rationality in the abstract.
Many philosophers and, let’s face it, many sensible people since Kant have questioned whether his project could succeed, or whether his question was even a good question to be asking. Can one person really get that kind of “god’s-eye view” of ultimate rationality? People disagree a lot about what would be the most rational way to live. Some philosophers even suggested that it is hard to think about what is rational or reasonable without our take being skewed by our own aims and egos. We instinctively take whatever suits our own goals and frame it in the shape of reasons. Those who do not want their wealth taxed have reasons in the shape of rational arguments for why they should not be taxed. Those who do believe wealth should be taxed have reasons in the shape of rational arguments for why taxes should be imposed. Our motivations can massively affect which of those rationales we find to be most rational. This is what John Rawls wanted to address.
Moderation, as a balance between extremes, is central to many ethical frameworks, from Confucian and Aristotelian “golden mean” ideas to Kantian rationalism. Kant’s attempt to derive morality from pure rationality inspired Rawls to explore fairness beyond personal bias. Rational arguments often reflect self-interest, so true moderation helps foster objective ethical standards.
14.1. What Content Gets Moderated

Social media platforms moderate (that is, ban, delete, or hide) different kinds of content. There are a number of categories of content that they might ban:

14.1.1. Quality Control

In order to make social media sites usable and interesting to users, they may ban different types of content, such as advertisements, disinformation, or off-topic posts. Almost all social media sites (even the ones that claim “free speech”) block spam: mass-produced unsolicited messages, generally advertisements, scams, or trolling. Without quality-control moderation, the social media site will likely fill up with content that the target users of the site don’t want, and those users will leave. What content is considered “quality” content will vary by site, with 4chan considering a lot of offensive and trolling content to be “quality” (but still banning spam, because it would make the site repetitive in a boring way), while most sites would ban some offensive content.
Social media platforms use moderation to maintain quality by banning spam, misinformation, and off-topic posts. This helps retain user engagement, as content quality standards vary across platforms. Sites like 4chan may accept offensive content as “quality” but still block spam, while most others enforce stricter content guidelines to ensure an enjoyable user experience.
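The point that nearly every site blocks spam while "quality" standards differ can be sketched as a toy per-site rule table. This is a minimal illustration in Python; the site names and rule flags are invented, not taken from the reading or any real platform:

```python
# Per-site moderation rules: what counts as "quality" varies by site,
# but almost every site blocks spam. All names here are illustrative.
SITE_RULES = {
    "strict_site": {"block_spam": True, "block_offensive": True},
    "4chan_like":  {"block_spam": True, "block_offensive": False},
}

def is_removed(post, site):
    """Return True if the given site's quality-control rules remove the post."""
    rules = SITE_RULES[site]
    if rules["block_spam"] and post["is_spam"]:
        return True
    if rules["block_offensive"] and post["is_offensive"]:
        return True
    return False

spam = {"is_spam": True, "is_offensive": False}
edgy = {"is_spam": False, "is_offensive": True}

print(is_removed(spam, "4chan_like"))   # True: even lax sites block spam
print(is_removed(edgy, "4chan_like"))   # False: "quality" varies by site
print(is_removed(edgy, "strict_site"))  # True
```

Real moderation systems are vastly more complex (classifiers, human review, appeals), but the config-table shape captures the excerpt's point that the same post survives on one site and is removed on another.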
12.5.2. Cultural appropriation

The online community activity of copying and remixing can be a means of cultural appropriation, which is when one cultural group adopts something from another culture in an unfair or disrespectful way (as opposed to a fair, respectful cultural exchange). For example, many phrases from Black American culture have been appropriated by white Americans and had their meanings changed or altered (like “woke”, “cancel”, “shade”, “sip/spill the tea”, etc.). Additionally, white Americans often use images and gifs of Black people reacting and expressing emotions. This modern practice with gifs has been compared to the earlier (and racist) art form of blackface, where white actors would paint their faces black and then act in exaggerated unintelligent ways.
Cultural appropriation in online spaces raises important issues about respect, power dynamics, and historical context. When dominant cultures adopt elements from marginalized groups without understanding or honoring their origins, it can distort meanings and perpetuate stereotypes. Mindful, respectful cultural exchange is essential to avoid reinforcing harmful practices.
12.1.2. Memes

In the 1976 book The Selfish Gene, evolutionary biologist Richard Dawkins1 said rather than looking at the evolution of organisms, it made even more sense to look at the evolution of the genes of those organisms (sections of DNA that perform some functions and are inherited). For example, if a bee protects its nest by stinging an attacking animal and dying, then it can’t reproduce and it might look like a failure of evolution. But if the gene that told the bee to die protecting the nest was shared by the other bees in the nest, then that one bee dying allows the gene to keep being replicated, so the gene is successful evolutionarily. Since genes contained information about how organisms would grow and live, biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene”). A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, with others then overhearing it in turn. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of:

- technology (languages, weapons, medicine, writing, math, computers, etc.)
- religions
- philosophies
- political ideas (democracy, authoritarianism, etc.)
- art
- organizations
- etc.
Dawkins' concept of "memes" brilliantly extends evolutionary principles to culture, showing how ideas, like genes, spread, adapt, or fade. It highlights how technology, art, beliefs, and ideologies evolve, influenced by social dynamics and adaptability. This perspective underscores that cultural evolution shapes human history similarly to biological evolution.
Biological evolution is how living things change, generation after generation, and how all the different forms of life, from humans to bacteria, came to be. Evolution occurs when three conditions are present:

- Replication (with inheritance): An organism can make a new copy of itself, which inherits its characteristics.
- Variations / mutations: The characteristics of an organism are sometimes changed, in a way that can be inherited by future copies.
- Natural selection: Some characteristics make it more or less likely for an organism to compete for resources, survive, and make copies of itself.

When those three conditions are present, then over time successive generations of organisms will:

- be more adapted to their environment
- divide into different groups and diversify
- stumble upon strategies for competing with or cooperating with other organisms.
Evolution, through replication, variation, and natural selection, is a powerful mechanism that shapes life’s diversity. With each generation, organisms adapt to their environments, either evolving new traits to survive or diverging into unique species. This dynamic process underpins both competition and cooperation in ecosystems, fueling life’s endless forms and resilience.
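The three conditions in the excerpt (replication with inheritance, mutation, and selection) can be sketched in a toy simulation. This is a minimal illustration, not a biologically serious model; the genome length, population size, and mutation rate are all invented parameters:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

GENOME_LEN = 20      # each "organism" is a list of 0/1 genes
POP_SIZE = 30
MUTATION_RATE = 0.02

def fitness(genome):
    """Natural selection pressure: organisms with more 1-genes compete better."""
    return sum(genome)

def replicate(genome):
    """Replication with inheritance: the copy keeps the parent's genes,
    apart from occasional mutations."""
    return [g if random.random() > MUTATION_RATE else 1 - g for g in genome]

# Start from a population of random genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Selection: the fitter half survives and gets to replicate.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [replicate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(fitness(best))  # after 100 generations, close to GENOME_LEN
```

Because the survivors are copied with inheritance and occasionally mutated, and selection keeps the fitter variants, the population adapts over generations, exactly the three-condition loop the excerpt describes.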
12.3.1. Replication (With Inheritance)

For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a. inheritance). There are ways of duplicating that are built into social media platforms:

- Actions such as liking, reposting, replying, and paid promotion get the original posting to show up for users more.
- Actions like quote tweeting, or the TikTok Duet feature, let people see the original content, but modified with new context.
- Social media sites also provide ways of embedding posts in other places, like in news articles.

There are also ways of replicating social media content that aren’t directly built into the social media platform, such as:

- copying images or text and reposting them yourself
- taking screenshots, and cross-posting to different sites
Social media replication spreads content rapidly, with platforms enabling actions like liking, reposting, and quoting to reach wider audiences. Built-in features encourage modification, creating "inherited" versions that evolve in context. Beyond platform tools, users also replicate by screenshotting or sharing across sites, expanding the content’s reach and impact.
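The inheritance point above can be made concrete with a tiny data model: a quote-post's added context is carried along by any later repost of it. This is a hypothetical sketch, not any platform's actual data structure; the `Post`, `repost`, and `quote` names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    parent: "Post | None" = None  # the post this one was copied from

def repost(original):
    """Plain replication: the copy inherits the text unchanged."""
    return Post(text=original.text, parent=original)

def quote(original, added_context):
    """Replication with modification: the new version carries the original
    text plus new context, and future copies inherit the modification."""
    return Post(text=f"{added_context} RT: {original.text}", parent=original)

original = Post("Check out this article")
quoted = quote(original, "This is misleading.")
re_quoted = repost(quoted)  # inherits the added context, not just the original

print(re_quoted.text)  # "This is misleading. RT: Check out this article"
```

The `parent` chain is what makes this "inheritance" in the excerpt's sense: once a modified version spreads, copies descend from the modified version rather than from the original.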
- Oct 2024
11.2.1. Individual vs. Systemic Analysis

Individual analysis focuses on the behavior, bias, and responsibility of an individual, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines for crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)
Individual analysis examines personal actions and intentions, while systemic analysis uncovers biases embedded in policies and institutions. For example, sentencing disparities for crack vs. powder cocaine in the 90s disproportionately impacted Black communities, showing how systemic biases can lead to unfair outcomes even without individual intent to discriminate.
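The systemic-vs-individual distinction can be shown in a few lines of code: a judge function with no individual bias at all still produces disparate outcomes when it follows biased rules. The 5-gram/500-gram thresholds reflect the era's 100:1 guideline ratio, but the sentencing logic here is a heavily simplified sketch, not the actual guidelines:

```python
# Simplified version of the pre-2010 federal guidelines' 100:1 ratio:
# 5 grams of crack triggered the same mandatory minimum as 500 grams
# of powder cocaine.
CRACK_THRESHOLD_GRAMS = 5
POWDER_THRESHOLD_GRAMS = 500
MANDATORY_MINIMUM_YEARS = 5

def sentence(drug, grams):
    """An individually unbiased judge who simply follows the guidelines."""
    threshold = (CRACK_THRESHOLD_GRAMS if drug == "crack"
                 else POWDER_THRESHOLD_GRAMS)
    return MANDATORY_MINIMUM_YEARS if grams >= threshold else 1

# Two defendants with the same quantity of chemically similar drugs:
print(sentence("crack", 10))   # 5 years
print(sentence("powder", 10))  # 1 year
```

The `sentence` function treats every defendant identically given the rules; the bias lives entirely in the thresholds, which is exactly what systemic analysis is designed to surface.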
-
-
social-media-ethics-automation.github.io
-
When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple, such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated, taking into account many factors, such as:
- Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
- Whether the post was made or liked by my friends or people I’m following
- How much this post has been liked, interacted with, or hovered over
- Which other posts I’ve been liking, interacting with, or hovering over
- What people connected to me or similar to me have been liking, interacting with, or hovering over
- What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely). This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that.
- Phone numbers or email addresses (sometimes collected deceptively) can be used to suggest friends or contacts
- And probably many more factors as well!
Recommendation algorithms drive what we see online by analyzing our behavior, connections, and even our location to personalize content. They prioritize recent, popular, or similar posts, shaping user experience but also raising privacy concerns, especially as they leverage personal data. Balancing relevance with ethical transparency is essential.
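The factors listed above can be sketched as a toy scoring function (all weights and field names are invented, not any real platform's formula): each factor contributes to a score, and the feed shows the highest-scoring posts first.

```python
# A minimal sketch of a scoring-style recommendation algorithm.
# Weights and fields are hypothetical, purely for illustration.

def score(post, now_hours=100):
    recency = 1 / (1 + (now_hours - post["posted_at_hours"]))  # newer is better
    return (
        2.0 * recency
        + 1.5 * post["by_friend"]          # made/liked by people I follow
        + 0.1 * post["likes"]              # overall popularity
        + 1.0 * post["matches_interests"]  # similar to what I interact with
    )

posts = [
    {"id": 1, "posted_at_hours": 99, "by_friend": False, "likes": 3,  "matches_interests": False},
    {"id": 2, "posted_at_hours": 50, "by_friend": True,  "likes": 10, "matches_interests": True},
]

ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # [2, 1]: engagement outweighs recency here
```

A reverse-chronological feed is just the degenerate case `sorted(posts, key=lambda p: p["posted_at_hours"], reverse=True)`, which is what makes it the simplest recommendation algorithm.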
-
-
social-media-ethics-automation.github.io
-
10.2.3. Making an environment work for all# Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it2. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.
10.2.4. Making a tool adapt to users# When creating computer programs, programmers can do things that aren’t possible with architecture (where Universal Design came out of), that is: programs can change how they work for each individual user. All people (including disabled people) have different abilities, and making a system that can modify how it runs to match the abilities a user has is called Ability based design. For example, a phone might detect that the user has gone from a dark to a light environment, and might automatically change the phone brightness or color scheme to be easier to read. Or a computer program might detect that a user’s hands tremble when they are trying to select something on the screen, and the computer might change the text size, or try to guess the intended selection. In this way of managing disabilities, the burden is put on the computer programmers and designers to detect and adapt to the disabled person.
Universal Design and Ability-Based Design showcase how thoughtful design can enhance inclusivity. By accommodating diverse needs in physical and digital spaces, these approaches shift responsibility from the individual to the designer. Such proactive adaptations promote accessibility, empowering everyone to navigate spaces and technology more independently and comfortably.
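The ability-based design examples (brightness, tremor) can be sketched as small adaptation rules; the thresholds and function names below are hypothetical, but they show the pattern of the program observing the user and environment and adjusting itself.

```python
# A small sketch of ability-based design (hypothetical thresholds):
# the software adapts to the user, not the other way around.

def choose_theme(ambient_lux):
    # switch color scheme based on measured ambient light
    return "light" if ambient_lux > 50 else "dark"

def choose_text_scale(tremor_px):
    # enlarge text and touch targets if selections jitter a lot
    return 1.5 if tremor_px > 8 else 1.0

print(choose_theme(ambient_lux=200))    # bright room -> "light" theme
print(choose_text_scale(tremor_px=12))  # tremor detected -> 1.5x targets
```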
-
-
social-media-ethics-automation.github.io
-
A disability is an ability that a person doesn’t have, but that their society expects them to have.1 For example: If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation. If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation. If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation. If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs would have a disability in that situation. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.
This perspective highlights how disability is often situational and socially constructed. When society designs spaces assuming universal abilities, it creates barriers for those who differ. By rethinking accessibility to include diverse abilities, we can reduce exclusion and recognize disability as a reflection of society’s assumptions, not individual limitations.
-
-
social-media-ethics-automation.github.io
-
Besides hacking, there are other forms of privacy violations, such as:
- Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example, if you send “private” messages on a work system, your boss might be able to read them. When Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages.
- Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much
- Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, former tech CEO John McAfee, then a suspect in a murder in Belize, hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala.
- Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings datasets, but at least some users’ data could be traced back to them.
- Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles, which are information about the user that the user didn’t provide or consent to.
- Non-User Information: Social media sites might collect information about people who don’t have accounts, like how Facebook does.
Privacy violations extend beyond hacking to include unclear policies, unauthorized sharing, and the misuse of metadata. These breaches can expose personal information without consent, as in the John McAfee location-metadata incident. Even "anonymized" data can be deanonymized, as happened with Netflix's movie ratings release, making privacy protections increasingly challenging in the digital age.
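The Netflix-style deanonymization mentioned above works by linkage: matching an "anonymized" release against an auxiliary public dataset with overlapping entries. A toy sketch (all data invented, and real attacks are far more statistically careful):

```python
# A sketch of deanonymization by linkage: if enough of an anonymous
# user's ratings match a named public profile, the "anonymous" record
# is re-identified. All data here is invented.

anonymized = {"user_417": {"Movie A": 5, "Movie B": 1, "Movie C": 4}}
public_reviews = {"jane_doe": {"Movie A": 5, "Movie B": 1, "Movie C": 4}}

def link(anon, public, min_overlap=3):
    matches = {}
    for anon_id, ratings in anon.items():
        for name, pub in public.items():
            # count movies rated identically in both datasets
            shared = [m for m in ratings if m in pub and ratings[m] == pub[m]]
            if len(shared) >= min_overlap:
                matches[anon_id] = name
    return matches

print(link(anonymized, public_reviews))  # {'user_417': 'jane_doe'}
```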
-
-
social-media-ethics-automation.github.io
-
9.1. Privacy# There are many reasons, both good and bad, that we might want to keep information private:
- There might be some things that we just feel aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies)
- We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly
- We might want a conversation or action that happens in one context not to be shared in another (context collapse)
- We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private
- We might have done or said something we want to be forgotten or at least made less prominent
- We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private
- We might want to avoid physical danger from a stalker, so we might keep our location private
- We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad)
When we use social media platforms, though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we would want to open access to those “private” messages to prove that they were sent.
9.1.1. Privacy Rights# Some governments and laws protect the privacy of individuals (using a Natural Rights ethical framing). These include the European Union’s General Data Protection Regulation (GDPR), which includes a “right to be forgotten,” and the United States Supreme Court has at times inferred a constitutional right to privacy.
Privacy is essential for personal security, autonomy, and maintaining control over one's information. It allows individuals to manage their reputation, avoid harm, and protect themselves from misuse of data by companies, governments, or individuals. However, social media platforms often compromise privacy, raising ethical concerns about surveillance and consent.
-
-
social-media-ethics-automation.github.io
-
People working with data sets always have to deal with problems in their data, stemming from things like mistyped data entries, missing data, and the general problem of all data being a simplification of reality. Sometimes a dataset has so many problems that it is effectively poisoned or not feasible to work with.
Unintentional and intentional data poisoning can severely compromise the usefulness of datasets. Whether caused by viral social media trends or deliberate sabotage, such as spamming job applications, these incidents highlight the vulnerability of data collection processes and the potential for disruption, often undermining research or organizational operations.
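The kinds of problems described above (mistyped entries, missing data) can be sketched as simple data-quality checks; the records and thresholds below are invented, but they show how one might flag a dataset that is effectively poisoned.

```python
# A sketch of basic data-quality checks on invented records: flag
# missing or implausible entries before analysis. If too large a share
# of rows is bad, the dataset may be effectively unusable.

rows = [
    {"age": 34,   "city": "Seattle"},
    {"age": None, "city": "Portland"},  # missing value
    {"age": 999,  "city": "???"},       # implausible / garbage entry
]

def is_bad(row):
    return (row["age"] is None
            or not (0 < row["age"] < 120)   # implausible age
            or "?" in row["city"])          # placeholder junk

bad_share = sum(is_bad(r) for r in rows) / len(rows)
print(f"{bad_share:.0%} of rows flagged")  # prints: 67% of rows flagged
```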
-
-
social-media-ethics-automation.github.io
-
Social media platforms use the data they collect on users, and infer about users, to increase their power and increase their profits. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what they can do to convince those users they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling.
Social media platforms use data to increase user engagement and profits through targeted advertising. While this can be useful for businesses and consumers, it raises ethical concerns when ads are directed at vulnerable groups, like children or addicts, exploiting their weaknesses for financial gain.
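Targeting itself is a simple filter over user attributes; a toy sketch (invented profiles and fields) shows both the mechanism and where an ethical guardrail like an age floor could sit.

```python
# A minimal sketch of targeted ad selection over invented profiles:
# only users whose inferred interests match the ad are shown it.

users = [
    {"name": "ana",  "interests": {"spiders", "toys"}, "age": 25},
    {"name": "ben",  "interests": {"cooking"},         "age": 41},
    {"name": "cara", "interests": {"toys"},            "age": 11},
]

def target(ad_interests, min_age=0):
    # min_age is where a platform could enforce limits for
    # sensitive ad categories (e.g., gambling)
    return [u["name"] for u in users
            if u["interests"] & ad_interests and u["age"] >= min_age]

print(target({"toys"}))              # ['ana', 'cara']
print(target({"toys"}, min_age=18))  # ['ana']  (excludes the child)
```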
-
-
social-media-ethics-automation.github.io
-
In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.
Trolling for newbies highlights the dynamics of online communities, where experienced users assert their status by tricking newcomers. This reinforces group identity through shared knowledge, but it also fosters exclusion and creates barriers for those trying to join, shaping the early culture of internet message boards with elitism.
-
-
social-media-ethics-automation.github.io
-
Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad?
As societies grow, trust becomes harder to establish, making group identity more crucial. People often rely on shared values, cultural markers, or common experiences to determine who belongs. The need to define "we" versus "not-we" becomes a way to manage trust and maintain social cohesion in complex, diverse communities.
-
-
social-media-ethics-automation.github.io
-
Select one of the above assumptions that you think is important to address. Then write a 1-2 sentence scenario where a user could not use Facebook as expected because of the assumption you selected. This represents one way the design could exclude certain users.
Scenario: A domestic abuse survivor trying to rebuild their life wants to connect with friends and family on Facebook, but they cannot use a pseudonym due to Facebook's policy requiring their legal name. By using their real name, they risk being found by their abuser, making the platform unsafe for them to use.
-
What assumptions does Facebook’s name policy make about its users’ identities and their needs that might not be true or might cause problems? List as many as you can think of (bullet points encouraged).
- Assumes everyone uses a single, consistent name: The policy assumes that people use only one name across all contexts, but many people use different names in different settings (e.g., professional vs. personal).
- Assumes legal names reflect authentic identity: Some individuals may not feel that their legal name represents their true identity, such as members of the LGBTQ+ community who haven't legally changed their name yet.
- Overlooks cultural naming conventions: The policy doesn’t fully account for cultures with naming conventions that don’t fit Western norms, such as individuals with mononyms (one name) or complex, multi-part names.
- Ignores privacy and safety concerns: People facing threats (e.g., activists, abuse survivors) may need to use pseudonyms to protect themselves, and the policy could force them to reveal their real identity, putting them at risk.
- Assumes names are stable over time: It doesn't consider that people's names might change frequently due to personal or cultural reasons, making it difficult for users who need flexibility in updating their profiles.
-
-
social-media-ethics-automation.github.io
-
Open two social media sites and choose equivalent views on each (e.g., a list of posts, an individual post, an author page etc.). List what actions are immediately available. Then explore and see what actions are available after one additional action (e.g., opening a menu), then what actions are two steps away. What do you notice about the similarities and differences in these sites?
I compared Twitter (now X) and Instagram post views. Both immediately allow likes, comments, and sharing. One step away, Twitter offers retweets and quoting, while Instagram allows saving posts. Two steps away, Twitter gives options like adding to lists, and Instagram provides reporting. Both focus on easy engagement but differ in extra features like lists vs. post saving.
-
Now it’s your turn to try designing a social media site. Decide a type of social media site (e.g., a video site like youtube or tiktok, or a dating site, etc.), and a particular view of that site (e.g., profile picture, post, comment, etc.). Draw a rough sketch of the view of the site, and then make a list of: What actions would you want available immediately What actions would you want one or two steps away? What actions would you not allow users to do (e.g., there is no button anywhere that will let you delete someone else’s account)?
For a social media site focused on collaborative learning (like a mix of Reddit and Khan Academy), I'd design a profile page showing posts, achievements, and study interests. Immediate actions: create posts, comment, upvote. One or two steps away: start discussions, follow users. Not allowed: deleting others' posts, editing achievements.
-
-
social-media-ethics-automation.github.io
-
Can you think of an example of pernicious ignorance in social media interaction?
Pernicious ignorance in social media often occurs when users spread misinformation, ignoring facts and rejecting correction. For example, during public health crises, false claims about treatments or vaccines persist despite expert advice. This ignorance fuels division, harms public understanding, and undermines efforts to address real issues effectively.
-
-
social-media-ethics-automation.github.io
-
All data is a simplification of reality.
Data reduces vast details into manageable representations, often leaving out nuances. While useful for analysis, interpretation, and decision-making, it’s important to recognize its limitations and avoid oversimplifying or missing context.
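The claim can be made concrete with a tiny invented example: a schema only keeps what it asks for, and everything else about the moment is discarded.

```python
# A tiny illustration that recorded data is a lossy simplification:
# the schema keeps one derived field and discards the rest.
# All values and the city rule are invented for illustration.

reality = {
    "lat": 47.6062, "lon": -122.3321,
    "indoors": True, "mood": "tired", "with_friends": False,
}

def record(observation):
    # a crude toy rule standing in for any schema's simplification
    return {"city": "Seattle" if observation["lat"] > 47 else "Portland"}

print(record(reality))  # {'city': 'Seattle'} -- mood, company, etc. are gone
```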
-
-
social-media-ethics-automation.github.io
-
Why do you think social media platforms allow bots to operate?
Social media platforms allow bots to operate because they can boost user engagement, automate tasks, and drive traffic. Some bots serve useful purposes like scheduling posts or providing updates. However, platforms may struggle to regulate harmful bots, which can spread misinformation or manipulate discussions, due to the scale and complexity of monitoring.
-
-
social-media-ethics-automation.github.io
-
Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.
Bots on social media can mimic regular users, either posting autonomously or acting as tools for humans to post content. These automated systems can seamlessly blend into online communities, making it difficult to distinguish between human and bot interactions, thus influencing discussions or amplifying certain messages without appearing artificial.
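A bot is just a program issuing the same account-level actions a person would; a minimal sketch (the `SocialClient` class is a hypothetical stand-in, not a real platform library) shows the shape.

```python
# A minimal bot sketch: `SocialClient` is a hypothetical stand-in for a
# real platform API client; the bot posts through it like any user would.

import datetime

class SocialClient:
    """Stand-in for a real platform API client."""
    def __init__(self):
        self.posts = []

    def post(self, text):
        self.posts.append(text)

def run_weather_bot(client, temperature_c):
    # a real bot might run on a schedule; date fixed here for the example
    day = datetime.date(2024, 10, 1)
    client.post(f"{day}: it is {temperature_c}°C today")

client = SocialClient()
run_weather_bot(client, temperature_c=12)
print(client.posts[0])  # prints: 2024-10-01: it is 12°C today
```

Nothing in the resulting post reveals whether a human or a program pressed "send", which is exactly why bot and human activity can be hard to tell apart.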
-
-
social-media-ethics-automation.github.io
-
Being and becoming an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful to parents, elders and authorities, taking care of children and the young; generous to family and others). These traits are often performed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), resulting in a harmonious society. Key figures:
Confucianism, founded by Confucius in 6th-century BCE China, emphasizes morality, social harmony, and ethical conduct. Its core values include filial piety, respect for authority, and self-cultivation. Confucianism focuses on the importance of relationships, particularly family, and promotes virtues like benevolence, righteousness, and proper behavior to create a harmonious society.
-
-
social-media-ethics-automation.github.io
-
What do you think is the responsibility of tech workers to think through the ethical implications of what they are making?
Tech workers have a responsibility to consider the ethical implications of their creations, as their work can significantly impact society. They must prioritize user safety, privacy, fairness, and long-term consequences, ensuring that technology is used to benefit rather than harm. Ethical awareness fosters accountability and helps prevent unintended negative outcomes, shaping a more responsible tech industry.
-