40 Matching Annotations
  1. Mar 2026
    1. 21.2. Ethics in Tech# In the first chapter of our book we quoted actor Kumail Nanjiani on tech innovators’ lack of consideration of the ethical implications of their work. Of course, concerns about the implications of technological advancement are nothing new. In Plato’s Phaedrus (~370 BCE), Socrates tells (or makes up) a story from Egypt critical of the invention of writing: Now in those days the god Thamus was the king of the whole country of Egypt, […] [then] came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; […] [W]hen they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: […] this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. In England in the early 1800s, Luddites were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also Sci-Fi author Ted Chiang on Luddites and AI) Fig. 21.1 The start of an xkcd comic compiling a hundred years of complaints about how technology has sped up the pace of life. (full transcript of comic available at explainxkcd)# Inventors ignoring the ethical consequences of their creations is nothing new either, and gets critiqued regularly: Fig. 
21.2 A major theme of the movie Jurassic Park (1993) is scientists not thinking through the implications of their creations.# Fig. 21.3 Tweet parodying how tech innovators often do blatantly unethical things# Many people like to believe (or at least try to convince others) that they are doing something to make the world a better place, as in this parody clip from the Silicon Valley show (the one Kumail Nanjiani was on, though not in this clip): But even people who thought they were doing something good have regretted the consequences of their creations, such as Eli Whitney, who hoped his invention of the cotton gin would reduce slavery in the United States, but it only made it worse; or Alfred Nobel, who invented dynamite (which could be used in construction or in war) and later created the Nobel Prizes; or Albert Einstein, who regretted his role in convincing the US government to develop nuclear weapons; or Aza Raskin, who regretted his invention of infinite scroll.

      One thing that stood out to me is how concerns about technology’s impact have existed for a very long time, even back to ancient Greece and the invention of writing. It shows that people have always worried about how new technologies might change society. This makes me think it is especially important for modern tech developers to consider the ethical consequences of their creations before releasing them.

    1. 21.1. What We Covered# We covered a lot of topics in this book, and we hope you learned something and found it valuable! 21.1.1. Social Media# We covered a number of topics in relation to social media: Bots, Data, History of Social Media, Authenticity, Trolling, Data Mining, Privacy and Security, Accessibility, Recommendation Algorithms, Virality, Mental Health, Content Moderation, Content Moderators, Crowdsourcing, Harassment, Public Shaming, Capitalism, and Colonialism. We hope that by the end of this book you know a lot of social media terminology (e.g., context collapse, parasocial relationships, the network effect, etc.), that you have a good overview of how social media works and is used, what design decisions are made in how it works, and the consequences of those decisions. We also hope you are able to recognize how trends on internet-based social media tie into the whole of human history of being social, and can apply lessons from that history to our current situations. 21.1.2. Ethics# We covered a number of ethics frameworks and you got practice applying them in different situations: Confucianism, Taoism, Virtue Ethics, Aztec Virtue Ethics, Natural Rights, Consequentialism, Deontology, Ethics of Care, Ubuntu, American Indigenous Ethics, Divine Command Theory, Egoism, Existentialism, and Nihilism. We hope that by the end of this book, you are familiar with applying different ethics frameworks and with considering the ethical tradeoffs of uses of social media and the design of social media systems. Again, our goal has not necessarily been to come to the “right” answer, but to ask good questions and better understand the tradeoffs, unexpected side effects, etc. 21.1.3. Automation# We also covered a number of topics in automation, such as: History of Programming, Python Programming Language, JupyterHub and Jupyter Notebooks, Variables, Data types (e.g., numbers, strings), Reddit PRAW library (posting, searching, etc.), 
Other code libraries (e.g., time), For Loops, Conditionals (if/else), Lists, Dictionaries, Functions (calling, and writing our own), Sentiment Analysis, and Recursion (for printing tweets and replies). We hope that by the end of this course, you have a familiarity with what programming is and some of what you can do with it. We particularly hope you are familiar with basic Python programming concepts and able to interact with Reddit using computer programs.
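Since this summary mentions using recursion to print tweets and replies, here is a minimal sketch of that idea. It uses a hypothetical nested-dictionary comment structure (not the actual Reddit PRAW objects used in the course) just to show how a function can call itself on each reply:

```python
# A minimal sketch of recursively printing a comment and its replies.
# The nested-dictionary "thread" below is a made-up example structure,
# not the real PRAW comment objects from the course.

def comment_tree_lines(comment, depth=0):
    # Indent each comment by its depth so the thread structure is visible
    lines = ["  " * depth + comment["text"]]
    for reply in comment.get("replies", []):
        # Recursive call: each reply is itself a comment that may have replies
        lines += comment_tree_lines(reply, depth + 1)
    return lines

thread = {
    "text": "Original post",
    "replies": [
        {"text": "First reply",
         "replies": [{"text": "Reply to the first reply", "replies": []}]},
        {"text": "Second reply", "replies": []},
    ],
}

print("\n".join(comment_tree_lines(thread)))
```

The recursion works because a reply has the same shape as a top-level comment, so the same function handles every level of nesting.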

      One thing I found interesting is how this course connects social media, ethics, and programming together. It made me realize that technologies like algorithms and bots are not just technical tools, but also have ethical and social consequences. Understanding these connections helps us think more critically about how social media platforms are designed and used.

    1. 20.5.1. Subjugation# In colonialism, one group or country subjugates another group, often imposing laws, religion, culture, and languages on that group. In this case, Zuckerberg and Meta are imposing their version of the Internet on people around the world. In particular, when Zuckerberg offers free Internet, it only comes with access to a few sites, such as Wikipedia, and of course Facebook. So Zuckerberg is choosing what part of the Internet people get access to. And while the people might gladly accept this deal, the bargain is being made by two parties in very unequal positions, and Zuckerberg has almost complete freedom to set the terms of the deal. See also: ‘It’s digital colonialism’: how Facebook’s free internet service has failed its users 20.5.2. Taking Resources# In colonialism, the colonialist group also takes resources from the subjugated group. But what resources is Meta getting out of this? Especially if the people they are giving free Internet to don’t have money to make it worth selling ads to show them. In our view, Meta is getting two main benefits out of getting people with no Internet access onto the Internet with Facebook: More Behavioral Data# They get more behavioral data. Even if they can’t sell ads for this group of people yet, they are still accumulating a larger data set with a larger percentage of Earth’s population. Preventing Competition# Most importantly, they can prevent a competitor from taking hold. If these people got Internet access through a non-Facebook option, they might join a new or competing social media network, and through the network effect, that competing network might take off. And that would be a threat to Meta trying to corner the market on social media. A particularly telling example of this is the story of WhatsApp: Though WhatsApp was founded in the US (in 2009), it became very popular outside the US, becoming much more commonly used than Facebook Messenger. 
Facebook was terrified of losing out on the non-US market, since they wanted to control everything, so in 2014 Facebook spent $19 billion to purchase WhatsApp: WhatsApp was just too far ahead in the international mobile messaging race for Facebook to catch up[…] Facebook either had to surrender the linchpin to mobile social networking abroad, or pony up and acquire WhatsApp before it got any bigger. It chose the latter. TechCrunch 20.5.3. Belief in Inferiority of the Subjugated People# Finally, colonialism is justified by belief in the inferiority of the subjugated people (e.g., barbaric, savage, godless, backwards), and the superiority of the group doing the subjugation (e.g., civilized, advanced). So how do we see this here? In the Time Magazine article mentioned in the last section, Zuckerberg focuses on ways he notices that it is worse in rural India: [Zuckerberg said] “There were, like, 40 students sitting on the floor, and then the guy running it was saying that there were 1.4 million schools and this was one of the better ones,” he said later—he can never resist a statistic. “There was no power. There are no toilets in the whole village!” But this might not be the whole picture. Perhaps they have different and valuable community arrangements, or stories, or customs, or any number of things that Zuckerberg didn’t care to notice. Or if one of the people in rural India came and visited Zuckerberg, perhaps they would say: “Oh Mark, I see. You have no real friends or community. That’s sad.” So when Zuckerberg and Meta impose their products (and culture) on people in rural India, those people, and the world, might be losing something.

      This section made me think about how social media might be different if users had more control over how the platforms are run. If users were involved in decision-making, the platforms might focus more on community well-being instead of profit. However, it could also create challenges, such as disagreements between different groups of users or the risk of majority groups making decisions that harm minorities or non-users. The idea of social media being designed in different cultural contexts is also interesting, because different cultures might prioritize values like family connections, community responsibility, or social harmony. This shows that the design of technology is influenced by cultural values, and social media could look very different depending on who creates and governs it.

    1. 20.3. Colonialism in Programming# Colonialism shows up in programming languages as well. 20.3.1. Programming in English# Most programming languages are based in English; there are very few non-English programming languages, and those that exist are rarely used. The reason few non-English programming languages exist is the network effect, which we mentioned last chapter. Once English became the standard language for programming, people who learn programming learn English (or enough of it to program with). Attempts to create a non-English programming language face an uphill battle, since even those who know that language would still have to re-learn all their programming terms in the non-English language. Now, since many people do speak other languages, you can often find comments, variable names, and even sometimes coding libraries which use non-English languages, but the core coding terms (e.g., for, if, etc.) are still almost always in English. See also this academic paper: Non-Native English Speakers Learning Computer Programming: Barriers, Desires, and Design Opportunities 20.3.2. Programming Adoption Through Silicon Valley# The book Coding Places: Software Practice in a South American City by Dr. Yuri Takhteyev explores how programming in Brazil differs from programming in Silicon Valley. Dr. Takhteyev points out that since tech companies are centralized in Silicon Valley, Silicon Valley determines which technologies (like programming languages or coding libraries) get adopted. He then compares this to how the art world works: “If you want to show [your art] in Chicago, you must move to New York.” He then rewords this for tech: if you want your software to be used widely in Brazil, you should write it in Silicon Valley. We can see this happening in a study by StackOverflow. 
They found that some technologies which are gaining in popularity in Silicon Valley (Python and R) are not commonly used in poorer countries, whereas programming tech that is considered outdated in Silicon Valley (Android and PHP) is much more popular in poorer countries. In his book, Takhteyev tracks the history of the Lua programming language, which was invented in Brazil but became adopted in Silicon Valley. In order to gain popularity in Silicon Valley (and thus the rest of the world), the developers had to make difficult tradeoffs, no longer customizing it for the needs of their Brazilian users.
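As a small, hypothetical illustration of the point above about English in programming: even when a programmer writes their comments and variable names in another language (Portuguese in this made-up example), the core keywords (def, for, if, return) are still English:

```python
# A made-up example: Portuguese identifiers and comments,
# but the Python keywords themselves (def, for, if, return) are English.

def soma_dos_pares(numeros):
    # Soma apenas os números pares da lista
    # ("sums only the even numbers in the list")
    total = 0
    for numero in numeros:
        if numero % 2 == 0:
            total += numero
    return total

print(soma_dos_pares([1, 2, 3, 4]))  # prints 6 (2 + 4)
```

A programmer who speaks no English can name everything in their own language, but still has to memorize the English keywords and the English names of library functions.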

      This reading made me think about how technology is often developed from the perspective of wealthy countries without fully considering the needs of people in other parts of the world. The example of One Laptop Per Child shows how good intentions are not always enough if the designers do not involve the communities they are trying to help. It also highlights how power and decision-making in the tech industry are concentrated in a few places, which can lead to solutions that do not actually work well for the people they are meant to serve. I think this shows the importance of listening to local communities and designing technology with their input rather than assuming what they need.

    1. 19.6. Programming, Gender, Status, and Money# While we’ve been talking about capitalism and social media platforms, we also want to look at the world of programming. In particular, we want to highlight how the profession of programming went from being a disrespected, low-paying job for women to being a highly respected and high-paying job for men. 19.6.1. Programming as Women’s Work# As you may have noticed in chapter 2 of the book, the first programmers were almost all women. When computers were being invented, men put themselves in charge of building the physical devices (hardware), and then gave the work of programming the computer (software) to their assistants, often women. So, for example, you can see this at various stages of computer development, such as: 1800s: Charles Babbage describes the first full computer (the Analytical Engine), and Ada Lovelace writes down the first computer program for it. 1945: The first general-purpose electrical computer is created by men and programmed by women. 1950s: Grace Hopper invents the compiler to help with programming the computers (built by men). As women were advancing the field of computer programming, some of them became frustrated with how they were viewed, such as Margaret Hamilton: Fig. 19.2 Photo of Margaret Hamilton next to the computer program source code (which she was in charge of) for the Apollo missions to the moon.# When Margaret Hamilton was in charge of creating the software to run on the Apollo rockets, the men around her considered programming to be easy and less serious than the “engineering” they were doing in building the rocket. So she began calling the programming she was doing “software engineering” to convey the complexity and rigor of the work she and her team were doing. She was able to convince her colleagues, and the term “software engineering” became common. Still, up through the 1960s and 1970s, most computer companies made their money by selling the physical hardware. 
They happened to include some software to go with it, and people who bought the hardware would sometimes hire people to make more software. So software was still considered secondary, and up until the early 1980s, women were getting around 37% of computer science degrees. Fig. 19.3 Graph showing percentage of women receiving degrees in different fields, from NPR.# 19.6.2. Programming for Boys and Men# In the early 1980s, a number of things changed, and programming ended up seen as a male profession, and a highly profitable and respected one. One of the changes was that some men in the computer business figured out how to make money selling software. This was particularly the case for Bill Gates, who convinced companies like IBM to license his software so he could continue making money as more people used it. Another change was that as computers became small enough for people to buy them for their homes, they became seen as toys for boys and not girls. The same transition is seen in video game consoles, which went from being for the whole family to being for boys only (e.g., the Nintendo Game Boy). In the end, computer programming became profitable and male-dominated. As many now try to get women into programming so that they aren’t cut out of profitable and important fields, Amy Nguyen warns that men might just decide that programming is low status again (as has happened before in many fields): “The history of women in the workplace always tells the same story: women enter a male-dominated profession, only to find that it’s no longer a respectable field. Because they’re a part of it, men leave in droves. Because women do it, it must not be important. Because society would rather discredit an entire profession than acknowledge that a female-dominated field might be doing something that actually matters.”

      I found this reading very interesting because it shows how the social perception of programming has changed over time. In the early days, programming was considered less important work and was often done by women, even though many of them made significant contributions to the field. As software became more profitable and respected, the field gradually became more male-dominated. This made me realize that the status of a profession is not always based on the difficulty or importance of the work, but can also be shaped by social and economic factors. It also highlights why it is important to make technology fields more inclusive so that opportunities in influential and well-paid careers are accessible to everyone.

    1. Why do social media platforms make decisions that harm users? And why do social media platforms sometimes go down paths of self-destruction and alienate their users? Sometimes these questions can be answered by looking at the economic forces that drive decision-making on social media platforms, in particular capitalism. So let’s start by defining capitalism. 19.1.1. Definition of Capitalism# Capitalism is: “an economic system characterized by private or corporate ownership of capital goods, by investments that are determined by private decision, and by prices, production, and the distribution of goods that are determined mainly by competition in a free market” Merriam-Webster Dictionary In other words, capitalism is a system where: Individuals or corporations own businesses. These business owners make what they want and set their own prices. They compete with other businesses to convince customers to buy their products. These business owners then hire wage laborers at predetermined rates for their work, while the owners get the excess business profits or losses. Related Terms# Here are a few more terms relevant to capitalism that we need to understand in order to get to the details of decision-making and strategies employed by social media companies. Shares / Stocks: Shares or stocks are ownership of a percentage of a business, normally coming with a percentage of the profits and a percentage of power in making business decisions. Companies then have a board of directors who represent these shareholders. The board is in charge of choosing who runs the company (the CEO). 
They have the power to hire and fire CEOs. For example, in 1985 the board of directors for Apple Computers denied Steve Jobs (a co-founder of Apple) the position of CEO and then fired him completely. CEOs of companies (like Mark Zuckerberg of Meta) are often both wage laborers (they get a salary; Zuckerberg gets a tiny symbolic $1/year) and shareholders (they get a share of the profits; Zuckerberg owns 16.8%). Free Market: Businesses set their own prices and customers decide what they are willing to pay, so prices go up or down as each side decides what they are willing to charge/spend (no government intervention). See supply and demand. What gets made is theoretically determined by what customers want to spend their money on, with businesses competing for customers by offering better products and better prices (though this especially reflects the preferences of the people with the most money, both business owners and customers). Monopoly: “a situation where a specific person or enterprise is the only supplier of a particular thing” Monopolies are considered anti-competitive (though not necessarily anti-capitalist): businesses can lower quality and raise prices, and customers will have to accept those prices since there are no alternatives. Cornering a market is being close enough to a monopoly to mostly set the rules (e.g., Amazon and online shopping). 19.1.2. Socialism# Let’s contrast capitalism with socialism. Socialism, in contrast, is a system where: A government owns the businesses (sometimes called “government services”). A government decides what to make and what the price is; the price might be free, like with public schools, public streets and highways, public playgrounds, etc. 
A government then may hire wage laborers at predetermined rates for their work, and the excess business profits or losses are handled by the government. For example, losses are covered by taxes, and excess may pay for other government services or go directly to the people (e.g., Alaska uses its oil profits to pay people to live there). As an example, there is one Seattle City Sewer system, which is run by the Seattle government. Having many competing sewer systems could actually make a big mess of the underground pipe system. 19.1.3. Accountability in Capitalism and Other Systems# Let’s look at who the leaders of businesses (or services) are accountable to in capitalism and other systems. Democratic Socialism (i.e., “Socialists”)# With socialism in a representative democracy (i.e., “democratic socialism”), the government leaders are chosen by the people through voting. And so, while the governmental leaders are in charge of what gets made, how much it costs, and who gets it, those leaders are accountable to the voters. So, in a democratic socialist government, theoretically, every voter has an equal say in business (or government service) decisions. Note that there are limitations to the government leaders being accountable to the people their decisions affect, such as government leaders ignoring voters’ wishes, or people who can’t vote (e.g., the young, non-citizens, oppressed minorities) and therefore don’t get a say.

      I thought this assignment was interesting because it connected programming with a real-world scenario. It helped me understand how the way we design an algorithm can affect fairness and outcomes for different people. I also liked that it made us think not only about writing correct code, but also about the social impact of algorithms.

    1. 18.4. Repair and Reconciliation# The idea of repair (or reconciliation) has shown up a couple of times already, both in the role of shame in child development, and in the Enforcing Social Norms: The Morality of Public Shaming paper. Let’s look more at what a repair might or might not look like. 18.4.1. Limits of Reconciliation# When we think about repair and reconciliation, many of us might wonder where there are limits. Are there wounds too big to be repaired? Are there evils too great to be forgiven? Is anyone ever totally beyond the pale of possible reconciliation? Is there a point of no return? One way to approach questions of this kind is to start from limit cases. That is, go to the farthest limit and see what we find there by way of a template, then work our way back toward the everyday. Let’s look at two contrasting limit cases: one where philosophers and cultural leaders declared that repairs were possible even after extreme wrongdoing, and one where the wrongdoers were declared unforgivable.1 Nuremberg Trials# After the defeat of Nazi Germany, prominent Nazi figures were put on trial in the Nuremberg Trials. These trials were a way of gathering and presenting evidence of the great evils done by the Nazis, and as a way of publicly punishing them. We could consider this as, in part, a large-scale public shaming of these specific Nazis and the larger Nazi movement. Some argued that there was no type of reconciliation or forgiveness possible given the crimes committed by the Nazis. Hannah Arendt argued that no possible punishment could ever be sufficient: The Nazi crimes, it seems to me, explode the limits of the law; and that is precisely what constitutes their monstrousness. For these crimes, no punishment is severe enough. It may well be essential to hang Göring, but it is totally inadequate. 
Hannah Arendt/Karl Jaspers correspondence, 1926-1969 See also: Eichmann in Jerusalem: A Report on the Banality of Evil by Hannah Arendt Truth and Reconciliation Commission# In South Africa, when the oppressive and violent racist apartheid system ended, Nelson Mandela and Desmond Tutu set up the Truth and Reconciliation Commission. The commission gathered testimony from both victims and perpetrators of the violence and oppression of apartheid. We could also consider this, in part, a large-scale public shaming of apartheid and those who hurt others through it. Unlike the Nuremberg Trials, the Truth and Reconciliation Commission gave a path for forgiveness and amnesty to the perpetrators of violence who provided their testimony. See also: What Archbishop Tutu’s ubuntu credo teaches the world about justice and harmony 18.4.2. Steps for Repentance# When reconciliation is possible, what would it look like? In her article Famous abusers seek easy forgiveness. Rosh Hashanah teaches us repentance is hard., Rabbi Danya Ruttenberg outlines a set of steps for “repentance” needed for someone to have their relationship with others repaired: “The bad actor must own the harm perpetrated, ideally publicly” “They must do the hard internal work to become the kind of person who does not harm in this way — which is a massive undertaking, demanding tremendous introspection and confrontation of unpleasant aspects of the self” “They must make restitution for harm done, in whatever way that might be possible” “Then — and only then — they must apologize sincerely to the victim” “Lastly, the next time they are confronted with the opportunity to commit a similar misdeed, they must make a different, better choice” 18.4.3. Repair Example# On February 6, 2022, Jeremy Schneider became the Twitter “main character of the day” for posting the following Tweet, which was widely condemned as being mean and not understanding other people’s experiences: Fig. 
18.1 Jeremy Schneider’s Tweet# In what was an unusual turn of events for a Twitter “main character of the day,” Jeremy Schneider later made an apology that was mostly accepted by the Twitter users who had criticized his Tweet: Fig. 18.2 Part 1 of Jeremy Schneider’s apology# Fig. 18.3 Part 2 of Jeremy Schneider’s apology# 18.4.4. Reflection questions# Do you think there are situations where reconciliation is not possible? What would reconciliation look like (if possible), when a social media platform is used in a genocide (see: Meta urged to pay reparations for Facebook’s role in Rohingya genocide) Does Jeremy Schneider’s apology cover the five steps of repentance listed by Rabbi Danya Ruttenberg? Pick a situation where someone is being publicly shamed. Who is responsible for accepting or rejecting their apology/repentance? Pick a social media platform and a situation where someone is being publicly shamed. What might that person do to try to repair or reconcile after the public shaming? Pick a social media platform. In what ways does that platform make it difficult to repair or reconcile after public shaming? 1 We give these two examples to illustrate how important it is to appreciate the breadth of views on this incredibly difficult question, not to imply that one view or the other is preferable. The Nuremberg Trials and the Truth and Reconciliation Commission are both attempts at responding to great evils, and we believe it is important to understand different views of people who suffered. So take your time to think through your intuitions about these limit cases, and research different perspectives on these events (and other atrocities), and then work your way back to the everyday context of social media posting. 

      I think reconciliation is possible in some situations, but it requires real effort from the person who caused harm. They need to admit what they did, reflect on why it was wrong, and sincerely apologize. Jeremy Schneider’s apology seems to follow many of these steps because he admitted his tweet was mean, explained how he reflected on it, and promised to think more carefully before posting in the future. However, on social media it is often difficult to repair harm because posts spread quickly and large numbers of people may continue criticizing someone even after they apologize.

    1. 18.3.2. Schadenfreude# Another way of considering public shaming is as schadenfreude, meaning the enjoyment obtained from the troubles of others. A 2009 article from the parody news site The Onion satirizes public shaming as objectifying celebrities and being entertained by their misfortune: Media experts have been warning for months that American consumers will face starvation if Hollywood does not provide someone for them to put on a pedestal, worship, envy, download sex tapes of, and then topple and completely destroy. Nation Demands Fresh Celebrity Meat - The Onion 18.3.3. Normal People# While the example from The Onion above focuses on celebrity, in the time since it was written, social media has taken a larger role in society and democratized celebrity. As comedian Bo Burnham puts it: “[This] celebrity pressure I had experienced on stage has now been democratized and given to everybody [through social media]. And everyone is feeling this pressure of having an audience, of having to perform, of having a sort of, like, proper noun version of your own name and then the self in your heart.” (NPR Fresh Air Interview) Also, Rebecca Jennings worries about how public shaming is used against “normal” people who are plucked out of obscurity to be shamed by huge crowds online: “Millions of people became invested in this (niche! not very interesting!) drama because it gives us something easy to be angry or curious or self-righteous about, something to project our own experiences onto, and thereby contributing even more content to the growing avalanche. Naturally, some decided to go look up the central character’s address, phone number, and workplace and share it on the internet. […] ‘It’s on social media, so it’s public!’ one could argue as a case for people’s right to act like forensic analysts on social media, and that is true. 
But this justification is typically valid when a) the person posting is someone of note, like a celebrity or a politician, and b) when the stakes are even a little bit high. In most cases of normal-person canceling, neither standard is met. Instead, it’s mob justice and vigilante detective work typically reserved for, say, unmasking the Zodiac killer, except weaponized against normal people. […] Platforms like TikTok, where even people with few or no followers often go viral overnight, expedite the shaming process. Stop canceling normal people who go viral 18.3.4. Enforcing Norms# In the philosophy paper Enforcing Social Norms: The Morality of Public Shaming, Paul Billingham and Tom Parr discuss under what conditions public shaming would be morally permissible. They are concerned not with actions primarily intended to induce shame in the target, but rather with actions that may cause a person to feel shame but are motivated by “seeking to draw attention to a social norm violation, and to rally others to their cause.” In this situation, they outline the following constraints that must be considered when publicly shaming someone in this way: Proportionality: The negative consequences of shaming someone should not be worse than the positive consequences. Necessity: There must not be another, more effective method of achieving the goal. Respect for Privacy: There must not be unnecessary violations of privacy. Non-Abusiveness: The shaming must not use abusive tactics. Reintegration: “Public shaming must aim at, and make possible, the reintegration of the norm violator back into the community, rather than permanently stigmatizing them.”

      I think public shaming becomes bad when it targets normal people who have little power and when the punishment is disproportionate to what they did. Social media can quickly turn small mistakes into massive harassment, which can seriously harm someone’s life. However, public shaming might be more justified when it is used to hold powerful individuals or institutions accountable for actions that harm society, especially if there are no other effective ways to enforce those norms.

  2. Feb 2026
1. 17.5. Justifying Harassment# So let’s look at how harassment gets justified. One research paper (Morally Motivated Networked Harassment as Normative Reinforcement) suggests a process that often happens with online harassment, where the harassers feel their actions are justified. They say these play out as follows: A target is identified as breaking the norm of a community (often not their own community, so this is a case of context collapse). This provides a justification for people to harass the target. A key social media account (the amplifier) promotes the accusation in their community (again, often not the one the target is in). The amplifier’s audience then harasses the target. The target experiences negative emotions (stress, depression, etc.), and self-censors and withdraws. The target’s speech (and that of others who might have said something similar) is therefore silenced. The amplifier’s network finds a common enemy and cause, and this reinforces their values and norms. Does this sound bad? Let’s look at some more specific examples and see what you think. 17.5.1. Example Attempts at Justifying Harassment# Doxing Racist Organization Members# We’ll start in a time before the Internet: The Ku Klux Klan (KKK) is an American white-supremacist terrorist organization known to harass and murder Black people and others. Members of the KKK keep their identity secret by wearing white robes and hoods over their faces. Often influential and powerful members of society were part of the KKK, such as police officers and government officials. In the 1920s, a magazine called Tolerance published lists of members of the KKK and their addresses, what we would now call “doxing.” They hoped to end the hateful and violent KKK organization. Fig. 17.1 Tolerance magazine from 1923.# Fig. 17.2 Part of the East St. 
Louis list of KKK members.# As a more recent event on internet-based social media, we find Twitter users trying to identify participants at a white supremacist rally: Fig. 17.3 Results of the modern doxing campaign# Related: Is it ethical to punch a Nazi? The Lion-Killing Dentist# In 2015, a US dentist named Walter Palmer went to Zimbabwe, lured a lion out of a protected area, and killed it. Many people were upset about this, and that there seemed to be no legal consequences for Dr. Palmer. Angry people sent a surge of traffic to Dr. Palmer’s website, which was taken offline. Vitriolic reviews flooded his Yelp page. A Facebook page titled “Shame Lion Killer Dr. Walter Palmer and River Bluff Dental” drew thousands of users. Dr. Palmer’s face was scrubbed from industry websites. Killer of Cecil the Lion Finds Out That He Is a Target Now, of Internet Vigilantism Dr. Palmer later apologized for killing the lion, but then in 2020, he went to Mongolia and killed a protected wild ram. Billionaires# One phrase that became popular on Twitter in 2022, especially as Elon Musk was in the process of buying Twitter, was: “It is always morally correct to bully billionaires.” (Note: We could not find the exact origins of this phrase or its variations). This is related to the concept in comedy of “punching up,” that is, making fun of people in positions of relatively more power. Trolling# We already mentioned this in the trolling chapter, but we thought we’d copy it here again. This is one troll’s justification for trolling: The purpose of the community … I guess is to exchange ideas and techniques, and to plan co-ordinated trolling. The underlying philosophical purpose or shared goal, anyway, would be to disrupt people’s rosy vision of the internet as their own personal emotional safe place that serves as a proxy for real-life interactions they are lacking (i.e. 
going online to demonstrate one’s grief over a public disaster like Japan [2011 Tsunami] with total strangers who have no real connection to the event). From Interview with a troll Gamergate# Gamergate was a harassment campaign in 2014-2015 that targeted non-men in gaming: Zoë Quinn, Brianna Wu, and Anita Sarkeesian. The harassment was justified by various false claims (e.g., journalistic malpractice), but mostly motivated by either outright misogyny or feeling threatened by critiques of games/gaming culture from a not straight-white-male viewpoint. The video below talks about how two factions within gamergate fed off each other (you can watch the whole gamergate series here) 17.5.2. Reflection Questions# When do you think crowd harassment is justified (or do you think it is never justified)? Do you feel differently about crowd harassment if the target is rich, famous, or powerful (e.g., a politician)? Do you feel differently about crowd harassment depending on what the target has been doing or saying?

      This section makes harassment more complicated than simply “good vs. bad.” I found it especially interesting how harassment can be framed as moral enforcement, where people believe they are defending community values. The examples show how power dynamics matter — doxing the KKK feels different from targeting private individuals — but it also raises concerns about mob justice and the risk of escalation. Overall, it highlights how easily moral outrage can turn into collective harm, even when participants believe they are justified.

1. 17.1. Individual harassment# Individual harassment (one individual harassing another individual) has always been part of human cultures, but social media provides new methods of doing so. Harassment through social media can be done privately through things like: Bullying: like sending mean messages through DMs Cyberstalking: Continually finding the account of someone, and creating new accounts to continue following them. Or possibly researching the person’s physical location. Hacking: Hacking into an account or device to discover secrets, or make threats. Tracking: An abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim’s phone. Death threats / rape threats Etc. Individual harassment can also be done publicly before an audience (such as classmates or family). For example: Bullying: like posting public mean messages Impersonation: Making an account that appears to be from someone and having that account say things to embarrass or endanger the victim. Doxing: Publicly posting identifying information about someone (e.g., full name, address, phone number, etc.). Revenge porn / deep-fake porn Etc.

      This section shows how social media doesn’t create harassment, but it significantly amplifies and transforms it. I think the distinction between private and public harassment is especially important, because public forms like doxing or impersonation can multiply harm through audience participation. It also highlights how digital tools lower the barriers to surveillance and abuse, making harassment more persistent and harder to escape than in offline settings.

1. 16.4. Power Users and Lurkers# When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter: Fig. 16.3 Summary of Twitter use by Pew Research Center# This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.
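The power-user/lurker split described above can be sketched with a toy contribution tally. This is a minimal illustration with invented account names and counts, not real Wikipedia, StackOverflow, or Twitter data:

```python
# Toy illustration of the power-user/lurker split: a small fraction of
# accounts produces almost all contributions. All numbers are invented.
contributions = {
    "power_user_1": 500,
    "power_user_2": 350,
    "casual_1": 5,
    "casual_2": 3,
    "casual_3": 2,
    "lurker_1": 0,
    "lurker_2": 0,
    "lurker_3": 0,
    "lurker_4": 0,
    "lurker_5": 0,
}

total = sum(contributions.values())
sorted_counts = sorted(contributions.values(), reverse=True)

# Share of all contributions made by the top 20% of accounts (2 of 10)
top_20_percent = sorted_counts[: len(sorted_counts) // 5]
share = sum(top_20_percent) / total
print(f"Top 20% of users made {share:.0%} of contributions")
```

Real platforms show the same shape at much larger scale: a handful of heavy contributors and a long tail of lurkers.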

      This section highlights an important pattern in online communities: participation is highly unequal. I found it interesting how consistent the “power user vs. lurker” dynamic is across platforms like Wikipedia, StackOverflow, and Twitter. It shows that crowdsourcing does not actually mean equal contribution from everyone. Instead, a small group shapes most of the content, which raises important questions about influence, representation, and whose voices dominate online spaces.

1. 16.1. Crowdsourcing Definition# When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren’t necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves. When a crowd is providing financial contributions, that is called crowdfunding (e.g., patreon, kickstarter, gofundme). Humans have always collaborated on tasks, and crowds have been enlisted in performing tasks long before the internet existed. What social media (and other internet systems) have done is expand the options for how people can collaborate on tasks. 16.1.1. Different Ways of Collaborating and Communicating# There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots. But attempts to recreate in-person interactions inevitably fall short in some ways and don’t feel the same. Instead, we can look at the different characteristics that computer systems can provide, and find places where computer-based communication works better, as argued in Beyond Being There (pdf here). Some of the different characteristics that means of communication can have include (but are not limited to): Location: Some forms of communication require you to be physically close, some allow you to be located anywhere with an internet signal. Time delay: Some forms of communication are almost instantaneous, some have small delays (you might see this on a video chat system), or have significant delays (like shipping a package). Synchronicity: Some forms of communication require both participants to communicate at the same time (e.g., video chat), while others allow the person to respond when convenient (like a mailed physical letter). 
Archiving: Some forms of communication automatically produce an archive of the communication (like a chat message history), while others do not (like an in-person conversation). Anonymity: Some forms of communication make anonymity nearly impossible (like an in-person conversation), while others make it easy to remain anonymous. Audience: Communication could be private or public, and it could be one-way (no ability to reply), or two+-way where others can respond. Because of these (and other) differences, different forms of communication might be preferable for different tasks. For example, you might send an email to the person sitting next to you at work if you want to keep an archive of the communication (which is also conveniently grouped into email threads). Or you might send a text message to the person sitting next to you if you are criticizing the teacher, but want to do so discreetly, so the teacher doesn’t notice. These different forms of communication can then support different methods of crowdsourcing.

      This section does a good job showing that crowdsourcing isn’t just about “using a lot of people,” but about how different communication features — like synchronicity, anonymity, and archiving — shape collaboration. I especially like the idea of “Beyond Being There,” because it challenges the assumption that online communication is just a weaker version of in-person interaction. Instead, digital systems have unique strengths that can actually make certain types of crowdsourcing more effective.

1. 15.2. Example Moderator Set-ups# Let’s look in more detail at some specific examples of moderator set-ups: 15.2.1. Reddit# Reddit is divided into subreddits which are often about a specific topic. Each subreddit is moderated by volunteers who have special permissions, whom Reddit forbids from making any money: Reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods? A post starting a discussion thread on reddit about reddit In addition to the subreddit moderators, all Reddit users can upvote or downvote comments and posts. The reddit recommendation algorithm promotes posts based on the upvotes and downvotes, and comments that get too many downvotes get automatically hidden. Finally, Reddit itself does some moderation as a platform in determining which subreddits can exist, and has on occasion shut down some. Reflection Question:# What is your take on the ethical trade-offs of unpaid Reddit moderators? What do you think Reddit should do? 15.2.2. Wikipedia# Wikipedia is an online encyclopedia that is crowdsourced by volunteer editors. You can go right now and change a Wikipedia page’s content if you want (as long as the page isn’t locked)! You can edit anonymously, or you can create an account. The Wikipedia community gives some editors administrator access, so they can perform more moderation tasks like blocking users or locking pages. Editors and administrators are generally not paid, though they can be paid by other groups if they disclose and fill out forms. Wikipedia exists in multiple languages (each governed somewhat independently). When looking at the demographics of who writes the English Wikipedia articles, we find that editors skew heavily male (around 80% or 90%), and presumably administrators skew heavily male as well. This can produce bias in how things are moderated. 
For example, Donna Strickland had no Wikipedia page before her Nobel. Her male collaborator did: “Articles on Strickland had been drafted on the online encyclopedia before in May 2018 — but the draft was rejected by moderators. ‘This submission’s references do not show that the subject qualifies for a Wikipedia article,’ the moderators wrote, despite the fact that the original author linked to a page that showed Strickland was once president of the Optical Society, a major physics professional organization and publisher of some of the field’s top journals.” Reflection Question:# How should Wikipedia handle their editor/administrator demographics? 15.2.3. Facebook# While Facebook groups and individual pages can be moderated by users, for the platform as a whole, Facebook has paid moderation teams to make moderation decisions (whether on content flagged by bots, or content flagged by users). As Facebook has grown, it has sought users from all over the globe, but as of 2019: Facebook had menus and prompts in 111 different languages, which were deemed to be “officially supported” Facebook’s “Community standards” rules were only translated into 41 of those languages Facebook’s content moderators know about 50 languages (though they say they hire professional translators when needed) Automated tools for identifying hate speech only work in about 30 languages

      This section highlights how moderation structures reflect power, labor, and representation. I found the Wikipedia example especially striking, since the gender imbalance among editors can directly shape whose knowledge is recognized. It shows that moderation is not neutral—it reflects who participates and who has authority. The Facebook language gap also raises concerns about global fairness, suggesting that moderation systems often privilege certain regions and languages over others.

1. 15.1.1. No Moderators# Some systems have no moderators. For example, a personal website that can only be edited by the owner of the website doesn’t need any moderator set up (besides the person who makes their website). If a website does let others contribute in some way, and is small, no one may be checking and moderating it. But as soon as the wrong people (or spam bots) discover it, it can get flooded with spam, or have illegal content put up (which could put the owner of the site in legal jeopardy). 15.1.2. Untrained Staff# If you are running your own site and suddenly realize you have a moderation problem, you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret. 15.1.3. Dedicated Moderation Teams# After a company starts working on moderation, they might decide to invest in teams specifically dedicated to content moderation. These teams of content moderators could be considered human computers hired to evaluate examples against the content moderation policy of the platform they are working for. 15.1.4. Individuals moderating their own spaces# You can also have people moderate their own spaces. For example: When you text on the phone, you are in charge of blocking numbers if you want to (though the phone company might warn you of potential spam or scams) When you make posts on Facebook or upload videos to YouTube, you can delete comments and replies Also, in some of these systems, you can allow friends access to your spaces to let them help you moderate them. 15.1.5. Volunteer Moderation# Letting individuals moderate their own spaces is expecting individuals to put in their own time and labor. You can do the same thing with larger groups and have volunteers moderate them. 
Reddit does something similar where subreddits are moderated by volunteers, and Wikipedia moderators (and editors) are also volunteers. 15.1.6. Automated Moderators (bots)# Another strategy for content moderation is using bots: computer programs that look through posts or other content and try to automatically detect problems. These bots might remove content, or they might flag things for human moderators to review.
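A moderation bot of the kind described could, in its simplest form, check each post against a word blocklist and flag matches for human review rather than removing them. This is a hypothetical sketch, not any real platform’s system; the blocklist entries and example posts are made up:

```python
# Minimal sketch of an automated moderation bot: scan posts for
# blocklisted words and flag matches for a human moderator to review.
BLOCKLIST = {"spamword", "scamlink"}

def flag_for_review(posts):
    """Return the subset of posts a human moderator should look at."""
    flagged = []
    for post in posts:
        # Normalize each word (lowercase, strip basic punctuation)
        words = {w.strip(".,!?").lower() for w in post.split()}
        if words & BLOCKLIST:  # any blocklisted word present?
            flagged.append(post)
    return flagged

posts = ["Nice photo!", "Click this scamlink now", "Hello everyone"]
print(flag_for_review(posts))  # flags only the second post
```

Real moderation bots are far more sophisticated (machine-learning classifiers, image hashing, etc.), but the flag-rather-than-delete pattern is common because it keeps a human in the loop.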

      This section clearly shows that moderation is not just about rules, but about who is doing the moderating and how the system is structured. I found the comparison between volunteer moderators, paid moderation teams, and automated bots especially helpful because it highlights trade-offs in labor, expertise, and fairness. It also made me realize that moderation always involves resource decisions, not just ethical ones.

    1. By

      This section helped me understand moderation not just as a technical policy decision, but as an ethical concept rooted in virtue ethics and social contract theory. I found Rawls’s “veil of ignorance” especially useful for thinking about how fair moderation rules should be created. Mills’s critique also adds an important perspective by showing how power shapes social contracts. Overall, this reading deepens the idea that moderation is not only about limiting speech, but about whose values and interests are being protected.

1. 14.3.1. 4chan/8chan (minimal moderation)# Sites like 4chan and 8chan bill themselves as sites that support free speech, in the sense that they don’t ban trolling and hateful speech, though they may remove some illegal content, like child pornography. One thing these sites do ban, though, is spam. While much spam is certainly legal, and a form of speech, this speech is restricted on these sites. If the chat boards filled up with spam, the users would find it boring and leave, so for practical reasons, these sites still moderate for spam (though they may allow some uses of ironic spam, like copypasta). 14.3.2. Reddit (subreddits with volunteer moderators)# Reddit is composed of many smaller discussion boards, called subreddits. These subreddits range from friendly to very toxic, with different moderators in charge of each subreddit. Reddit as a larger platform decided to ban and remove some of its most toxic and hateful subreddits, including r/c***town (note: I censored out a racial slur for Black people), and r/fatpeoplehate. In a study of what happened after this ban: Post-ban, hate speech by the same users was reduced by as much as 80-90 percent. […] “Members of banned communities left Reddit at significantly higher rates than control groups. […] Migration was common, both to similar subreddits (i.e. overtly racist ones) and tangentially related ones (r/The_Donald). […] However, within those communities, hate speech did not reliably increase, although there were slight bumps as the invaders encountered and tested new rules and moderators. 14.3.3. Facebook (hired moderators)# Facebook uses hired moderators to handle content moderation on the platform at large (though Facebook groups are moderated by users). When users (or computer programs) flag content, the hired moderators will look at it and decide what to do. 
Facebook also discovered in internal research that, “the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.” 14.3.4. Removing the option of feedback# For a period of time, most news organizations allowed comments on their articles, but around 2013 many of these sites simply removed the possibility of leaving comments, as they felt allowing comments did more harm than good. 14.3.5. Public Figure Exception# Twitter, Facebook, and other platforms had an exception to their normal moderation policies for political leaders, where they wouldn’t ban them even if they violated site policies (most notably applied to Donald Trump). After the January 6th insurrection at the US Capitol, Donald Trump was banned first from Twitter, then from Facebook, and Facebook announced an end to special treatment for politicians. 14.4. Government Censorship# Governments might also have rules about content moderation and censorship, such as laws in the US against Child Sexual Abuse Material (CSAM). China additionally censors various news stories in their country, like stories about protests. In addition to banning news on their platforms, in late 2022 China took advantage of Elon Musk having fired almost all Twitter content moderators to hide news of protests by flooding Twitter with spam and porn.

      This section shows that even platforms that promote “free speech” still moderate content in practice, such as banning spam, because complete freedom can harm user experience. I found the Reddit example especially interesting, since banning hateful communities significantly reduced hate speech. It also highlights how platform policies and algorithms shape user behavior. Overall, moderation is not just about censorship, but about how systems influence online communities.

1. 13.5.4. Search through news submissions and only display good news# Now we will make a different version of the code that computes the sentiment of each submission title and only displays the ones with positive sentiment.

    # Look up the subreddit "news", then find the "hot" list, getting up to 10 submissions
    submissions = reddit.subreddit("news").hot(limit=10)

    # Turn the submission results into a Python list
    submissions_list = list(submissions)

    # Go through each reddit submission
    for submission in submissions_list:
        # Calculate the sentiment of the submission title
        title_sentiment = sia.polarity_scores(submission.title)["compound"]
        if title_sentiment > 0:
            print(submission.title)
            print()

Output:

    Fake praw is pretending to select the subreddit: news
    Breaking news: A lovely cat took a nice long nap today!
    Breaking news: Some grandparents made some yummy cookies for all the kids to share!

13.5.5. Try it out on real Reddit# If you want, you can skip the fake_praw step and try it out on real Reddit, from whatever subreddit you want. Did it work like you expected? You can also show only negative sentiment submissions (sentiment < 0) if you want to see only bad news.
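If you don’t have Reddit credentials or the sentiment library set up, the filtering pattern above can still be sketched with stand-ins. Everything here is hypothetical: `fake_titles` replaces the praw results and `toy_sentiment` is a crude word-list stand-in for the VADER compound score used in the chapter:

```python
# Sketch of sentiment-filtered display with a toy scorer standing in
# for sia.polarity_scores(title)["compound"]. Word lists are invented.
POSITIVE_WORDS = {"lovely", "nice", "yummy", "happy"}
NEGATIVE_WORDS = {"terrible", "awful", "sad", "angry"}

def toy_sentiment(title):
    """Crude stand-in: count positive words minus negative words."""
    words = [w.strip(".,!?:").lower() for w in title.split()]
    positives = sum(w in POSITIVE_WORDS for w in words)
    negatives = sum(w in NEGATIVE_WORDS for w in words)
    return positives - negatives

# Stand-in for the submission titles fake_praw would return
fake_titles = [
    "Breaking news: A lovely cat took a nice long nap today!",
    "Terrible storm causes awful damage",
]

# Keep only the "good news" titles, like the praw version does
good_news = [title for title in fake_titles if toy_sentiment(title) > 0]
for title in good_news:
    print(title)
```

Swapping `toy_sentiment` for the real `sia.polarity_scores(...)["compound"]` and `fake_titles` for the praw submission titles recovers the chapter’s version.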

      This demo was interesting because it shows how algorithms can intentionally shape what users see. Filtering for only positive news might seem helpful for improving mental health, but it also raises questions about whether hiding negative information creates a distorted view of reality. I also noticed that sentiment analysis is a simplified way to judge content, since tone and context can be more complex than just positive or negative. Overall, this example shows how small design decisions in algorithms can significantly influence users’ emotional experiences online.

    1. 13.1.1. Digital Detox?# Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox, where people take some form of a break from social media platforms and digital devices. While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter. In her essay “The Great Offline,” Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them). In the 19th century, as wilderness tourism was taking off as an industry, natural landscapes were figured as an antidote to the social pressures of urban living, offering truth in place of artifice, interiority in place of exteriority, solitude in place of small talk. Similarly, advocates for digital detox build an idealized “offline” separate from the complications of modern life: Sherry Turkle, author of Alone Together, characterizes the offline world as a physical place, a kind of Edenic paradise. “Not too long ago,” she writes, “people walked with their heads up, looking at the water, the sky, the sand” — now, “they often walk with their heads down, typing.” […] Gone are the happy days when families would gather around a weekly televised program like our ancestors around the campfire! 
But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed: I’m no stranger to apps that help me curb my screen time, and I’ll admit I’ve often felt better for using them. But on a more communal level, I suspect that cultures of digital detox — in suggesting that the online world is inherently corrupting and cannot be improved — discourage us from seeking alternative models for what the internet could look like. I don’t want to be trapped in cycles of connection and disconnection, deleting my social media profiles for weeks at a time, feeling calmer but isolated, re-downloading them, feeling worse but connected again. For as long as we keep dumping our hopes into the conceptual pit of “the offline world,” those hopes will cease to exist as forces that might generate change in the worlds we actually live in together. So in this chapter, we will not consider internet-based social media as inherently toxic or beneficial for mental health. We will be looking for more nuance and where things go well, where they do not, and why.

      This section does a good job showing that the relationship between social media and mental health is complex rather than purely positive or negative. I found the example of Facebook’s mood experiment especially interesting because it raises ethical concerns about consent and manipulation, not just mental health outcomes. The discussion of digital detox was also thoughtful, particularly the idea that blaming technology itself may prevent us from improving how platforms are designed. Overall, this reading encourages a more nuanced understanding of social media’s impact instead of oversimplifying it as either harmful or beneficial.

1. 12.7. Activity: Value statements in what goes viral# 12.7.1. Choose three scenarios# When content goes viral there may be many people with a stake in its going viral, such as: The person (or people) whose content or actions are going viral, who might want attention, or get financial gain, or might be embarrassed or might get criticism or harassment, etc. Different people involved might have different interests. Some may not have awareness of it happening at all (like a video of an infant). Different audiences might have interests such as curiosity, or desire to bring justice to a situation, or desire to get attention for themselves or their ideas based on engaging the viral content, or desire to troll or harass others. Social networking platforms might have interests such as increased attention to their platform or increased advertising, or increased or decreased reputation (in the views of different audiences). List at least three different scenarios of content going viral and list out the interests of different groups and people in the content going viral. 12.7.2. Create value statements# Social media platforms have some ability to influence what goes viral and how (e.g., recommendation algorithms, what actions are available, what data is displayed, etc.), though they only have partial control, since human interaction and organization also play a large role. Still, regardless of whether we can force any particular outcome, we can still consider what you think would be best for what content should go viral, how much, and in what ways. Create a set of value statements for when and how you ideally would want content to go viral. Try to come up with at least 10 value statements. We encourage you to consider different ethics frameworks as you try to come up with ideas.

This section clearly shows that virality isn’t neutral and always involves tradeoffs between different groups. I liked how the examples highlight that what benefits platforms or audiences can still harm individuals, especially through misinformation or loss of privacy. It also made me think more about how recommendation systems should reflect ethical values, not just engagement metrics.

1. 12.2.1. Books# The book Writing on the Wall: Social Media - The First 2,000 Years describes how, before the printing press, when someone wanted a book, they had to find someone who had a copy and have a scribe make a copy. So books that were popular spread through people having scribes copy each other’s books. And with all this copying, there might be different versions of the book spreading around, because of scribal copying errors, added notes, or even the original author making an updated copy. So we can look at the evolution of these books: which got copied, and how they changed over time. 12.2.2. Chain letters# When physical mail was dominant in the 1900s, one type of mail that spread around the US was a chain letter. Chain letters were letters that instructed the recipient to make their own copies of the letter and send them to people they knew. Some letters told recipients to make copies as part of a pyramid scheme: you were supposed to send money to the people you got the letter from, and then the people you sent the letter to would send money to you. Other letters claimed that if recipients made copies, good things would happen to them, and if not, bad things would, like this: You will receive good luck within four days of receiving this letter, providing, you in turn send it on. […] An RAF officer received $70,000 […] Gene Walsh lost his wife six days after receiving the letter. He failed to circulate the letter. Fig. 12.2 An example chain letter from https://cs.uwaterloo.ca/~mli/chain.html.# The spread of these letters meant that people were putting in effort to spread them (presumably believing making copies would make them rich or help them avoid bad luck). To make copies, people had to manually write or type up their own copies of the letters (or later with photocopiers, find a machine and pay to make copies). Then they had to pay for envelopes and stamps to send it in the mail. 
As these letters spread, we could consider what factors made some chain letters (and modified versions) spread more than others, and how the letters got modified as they spread.
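To make the idea of tracking which letters got copied (and how they changed as they spread) more concrete, here is a minimal simulation sketch in Python. The function name and the probabilities are our own hypothetical choices for illustration, not data from the book:

```python
import random

def spread_letters(start_letter, copy_chance, mutation_chance, generations, seed=0):
    """Simulate chain letters spreading: each circulating copy may itself be
    copied, and each copying step may introduce a small change (a 'mutation')."""
    random.seed(seed)  # fixed seed so the simulation is repeatable
    letters = [start_letter]  # all copies currently circulating
    for _ in range(generations):
        new_copies = []
        for letter in letters:
            # Each recipient decides whether to make and mail a copy
            if random.random() < copy_chance:
                copy = letter
                # Copying by hand sometimes introduces changes
                if random.random() < mutation_chance:
                    copy = letter + "*"  # "*" marks one accumulated change
                new_copies.append(copy)
        letters.extend(new_copies)
    return letters

letters = spread_letters("Good luck letter", copy_chance=0.6,
                         mutation_chance=0.1, generations=5)
# We can now ask how many copies are circulating and which version changed most:
print(len(letters), "copies in circulation")
print(max(letters, key=lambda s: s.count("*")), "(most-changed version)")
```

Varying `copy_chance` (how compelling the letter's promise or threat is) and `mutation_chance` (how error-prone copying is) lets us explore which letters would spread most and how much they would drift.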

      This section is a fun and clear way to show that “going viral” isn’t just an internet phenomenon. The examples of books, chain letters, and sourdough starters nicely illustrate how ideas and practices spread through effort, incentives, and social networks long before digital platforms existed. I especially like the chain letter example because it clearly shows how emotional pressure and fear helped drive sharing, which feels very similar to modern online virality. Overall, this makes the concept of cultural evolution and memes much more concrete and easy to understand.

    1. 11.4.1. Filter Bubbles# One concern with recommendation algorithms is that they can create filter bubbles (or “epistemic bubbles” or “echo chambers”), where people get filtered into groups and the recommendation algorithm only gives people content that reinforces and doesn’t challenge their interests or beliefs. These echo chambers allow people in the groups to freely have conversations among themselves without external challenge. The filter bubbles can be good or bad, such as forming bubbles for: Hate groups, where people’s hate and fear of others gets reinforced and never challenged Fan communities, where people’s appreciation of an artist, work of art, or something else is assumed, and then reinforced and never challenged Marginalized communities, which can find safe spaces where they aren’t constantly challenged or harassed (e.g., a safe space) 11.4.2. Amplifying Polarization and Negativity# There are concerns that echo chambers increase polarization, where groups lose common ground and the ability to communicate with each other. In some ways echo chambers are the opposite of context collapse: contexts are created and prevented from collapsing. Others have argued that people do interact across these echo chambers, but that the contentious nature of those interactions increases polarization. Along those lines, if social media sites simply amplify content that gets strong reactions, they will often amplify the most negative and polarizing content. Recommendation algorithms can make this even worse. For example: At one point, Facebook counted the default “like” reaction less than the “anger” reaction, which amplified negative content.
On Twitter, one study found (full article on archive.org): “Whereas Google gave higher rankings to more reliable sites, we found that Twitter boosted the least reliable sources, regardless of their politics.” According to another study on Twitter: “An analysis […] suggested that when users swarm tweets to denounce them with quote tweets and replies, they might be cueing Twitter’s algorithm to see them as particularly engaging, which in turn might be prompting Twitter to amplify those tweets. The upshot is that when people enthusiastically gather to denounce the latest Bad Tweet of the Day, they may actually be ensuring more people see it than had they never decided to pile on in the first place. That possibility raises serious questions of what constitutes responsible civic behavior on Twitter and whether the platform is in yet another way incentivizing combative behavior.” Though this is a big concern about Internet-based social media, traditional media sources also play into this. See, for example, this study: Cable news has a much bigger effect on America’s polarization than social media, study finds. Note: polarization itself is not necessarily bad (do we want to make everyone believe the exact same thing?), and some argue that in some situations polarization is even a good thing. 11.4.3. Radicalization# Building off of the amplification of polarization and negativity, there are concerns (and real examples) of social media (and their recommendation algorithms) radicalizing people into conspiracy theories and into violence. Rohingya Genocide in Myanmar# A genocide of the Rohingya people in Myanmar started in 2016, and in 2018 Facebook admitted it was used to ‘incite offline violence’ in Myanmar. In 2021, the Rohingya sued Facebook for £150bn over how Facebook amplified hate speech and didn’t take down inflammatory posts.
The Flat Earth Movement# The flat earth movement (an absurd conspiracy theory that the earth is actually flat, and not a globe) gained popularity in the 2010s. As YouTuber Dan Olson explains it in his (rather long) video In Search of a Flat Earth: Modern Flat Earth [movement] was essentially created by content algorithms trying to maximize retention and engagement by serving users suggestions for things that are, effectively, incrementally more concentrated versions of the thing they were already looking at. Bizarre cranks peddling random theories are an aspect of civilization that has always been with us, so it was inevitable that they would end up on YouTube, but the algorithm made sure they found an audience. These systems were accidentally identifying people susceptible to conspiratorial and reactionary thinking and sending them increasingly deeper into Flat Earth evangelism. Dan Olson then explained that by 2020, the flat earth content was getting fewer views: The bottom line is that Flat Earth has been slowly bleeding support for the last several years. Because they’re all going to QAnon. See also: YouTube aids flat earth conspiracy theorists, research suggests 11.4.4. Discussion Questions# What responsibilities do you think social media platforms should have in regards to larger social trends? Consider impact vs. intent. For example, consequentialism only cares about the impact of an action. How do you feel about the importance of impact and intent in the design of recommendation algorithms? What strategies do you think might work to improve how social media platforms use recommendations?
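The Facebook example earlier in this section describes a ranking system weighting some reactions more heavily than others. A minimal Python sketch of how such engagement-weighted scoring could work (the weights, names, and function here are our own hypothetical illustration, not Facebook's actual values):

```python
# Hypothetical weights: reactions signaling stronger emotion count more,
# which can end up amplifying the most negative, polarizing content.
REACTION_WEIGHTS = {"like": 1, "love": 2, "anger": 5}

def engagement_score(post_reactions):
    """Score a post by summing its reaction counts, each multiplied
    by that reaction's weight (unknown reactions default to weight 1)."""
    return sum(REACTION_WEIGHTS.get(reaction, 1) * count
               for reaction, count in post_reactions.items())

calm_post = {"like": 100}
angry_post = {"anger": 30}

# Even with far fewer total reactions, the angry post outranks the calm one
print(engagement_score(calm_post))   # 100
print(engagement_score(angry_post))  # 150
```

This is the systemic mechanism the section describes: no individual user chose to amplify anger, but the weighting rule does it automatically.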

      This section does a great job showing how recommendation algorithms can unintentionally amplify polarization and even contribute to radicalization. The examples (Facebook reactions, Twitter quote-tweet dynamics, and the flat earth → QAnon pipeline) clearly illustrate how engagement-based systems can reward negativity and extreme content. I also appreciate the nuance at the end that polarization itself isn’t always bad, which keeps the discussion balanced rather than alarmist. Overall, this is a clear, well-supported explanation of why algorithmic design choices have serious social consequences beyond individual user intent.

    1. 11.2. Ethical Analysis of Recommendation Algorithms# When we look at ethics and responsibility in regards to recommendation algorithms, it can be helpful to consider the difference between individual analysis and systemic analysis. 11.2.1. Individual vs. Systemic Analysis# Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine and powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act). 11.2.2. Recommendation Algorithms as Systems# Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system. Fig. 11.1 A tweet highlighting the difference between structural problems (systemic analysis) and personal choices (individual analysis).# Sometimes though, individuals are still blamed for systemic problems. For example, Elon Musk, who has the power to change Twitter’s recommendation algorithm, blames the users for the results: Fig.
11.2 A tweet from current Twitter owner Elon Musk blaming users for how the recommendation algorithm interprets their behavior.# Elon Musk’s view expressed in that tweet differs from some of the ideas of the previous owners, who at least tried to figure out how to make Twitter’s algorithm support healthier conversation. Though even modifying a recommendation algorithm has limits in what it can do, as social groups and human behavior may be able to overcome the recommendation algorithm’s influence.

      This section clearly explains the difference between individual and systemic responsibility, and the sentencing example makes the idea of systemic bias very concrete and easy to understand. I especially like how recommendation algorithms are framed as systems that can produce harmful outcomes even without bad intent from individual designers or users. The contrast between blaming users and addressing structural problems is effective, and the tweets help connect theory to real-world discourse. Overall, this is a strong and thought-provoking explanation of why ethical analysis of algorithms needs to go beyond individual behavior.

    1. 10.5. Design Analysis: Accessibility# We want to provide you, the reader, a chance to explore accessibility more. In this activity you will be looking at a social media site on your device (e.g., your phone or computer). We will again follow the five-step CIDER method (Critique, Imagine, Design, Expand, Repeat). So open a social media site on your device (the website or app may have additional accessibility settings, but don’t use those for now; just consider how it works as it is currently). Then do the following (preferably on paper or in a blank computer document): 10.5.1. Critique (3-5 minutes, by yourself):# What assumptions do the site and your device make about individuals or groups using social media, which might not be true or might cause problems? List as many as you can think of (bullet points encouraged). 10.5.2. Imagine (2-3 minutes, by yourself):# Select one of the above assumptions that you think is important to address. Then write a 1-2 sentence scenario where a user faces difficulties because of the assumption you selected. This represents one way the design could exclude certain users. 10.5.3. Design (3-5 minutes, by yourself):# Brainstorm ways to change the site or your device to avoid the scenario you wrote above. List as many different kinds of potential solutions as you can think of – aim for ten or more (bullet points encouraged). 10.5.4. Expand (5-10 minutes, with others):# Combine your list of critiques with someone else’s (or if possible, have a whole class combine theirs). 10.5.5. Repeat the Imagine and Design Tasks:# Select another assumption from the list above that you think is important to address. Make sure to choose a different assumption than you used before. Choose one that you didn’t come up with yourself, if possible. Then repeat the Imagine and Design steps. 10.5.6. Explore accessibility settings# Now, try to find the accessibility settings on the social media site and on your device.
For each setting you see, try to come up with what disabilities that setting would be beneficial for (there may be multiple).

      This activity is a really effective way to make accessibility feel concrete instead of abstract. By starting with critique and assumptions, it highlights how many “default” design choices silently exclude users before accessibility settings are even considered. I especially like how the Imagine and Design steps force you to think through a specific user’s experience and then brainstorm multiple solutions, rather than jumping straight to a single fix. Ending with exploring existing accessibility settings also reinforces that accessibility is often an afterthought in design, even though it should be part of the core system from the beginning.

    1. 10.2. Accessible Design# There are several ways of managing disabilities. All of these ways of managing disabilities might be appropriate at different times for different situations. 10.2.1. Coping Strategies# Those with disabilities often find ways to cope with their disability, that is, to work around difficulties they encounter and seek out places and strategies that work for them (whether they realize they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking. Masking may take a mental or physical toll on the person masking, one that others around them won’t realize. For example, kids who are nearsighted and don’t realize their ability to see is different from other kids will often seek out seats at the front of classrooms where they can see better. As for us two authors, we both have ADHD and were drawn to PhD programs where our tendency to hyperfocus on following our curiosity was rewarded (though executive dysfunction with finishing projects created challenges)1. This way of managing disabilities puts the burden fully on disabled people to manage their disability in a world that was not designed for them, trying to fit in with “normal” people. 10.2.2. Modifying the Person# Another way of managing disabilities is assistive technology, which is something that helps a disabled person act as though they were not disabled. In other words, it is something that helps a disabled person become more “normal” (according to whatever a society’s assumptions are).
For example: Glasses help people with near-sightedness see in the same way that people with “normal” vision do Walkers and wheelchairs can help some disabled people move around closer to the way “normal” people can (though stairs can still be a problem) A spoon might automatically balance itself when held by someone whose hands shake Stimulants (e.g., caffeine, Adderall) can increase executive function in people with ADHD, so they can plan and complete tasks more like how neurotypical people do. Assistive technologies give tools to disabled people to help them become more “normal.” So the disabled person becomes able to move through a world that was not designed for them. But there is still an expectation that disabled people must become more “normal,” and often these assistive technologies are very expensive. Additionally, attempts to make disabled people (or people with other differences) act “normal” can be abusive, such as Applied Behavior Analysis (ABA) therapy for autistic people, or “Gay Conversion Therapy.” 10.2.3. Making an environment work for all# Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it2. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. 
In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment. 10.2.4. Making a tool adapt to users# When creating computer programs, programmers can do things that aren’t possible with architecture (which Universal Design came out of); that is, programs can change how they work for each individual user. All people (including disabled people) have different abilities, and making a system that can modify how it runs to match the abilities a user has is called Ability-Based Design. For example, a phone might detect that the user has gone from a dark to a light environment, and might automatically change the phone brightness or color scheme to be easier to read. Or a computer program might detect that a user’s hands tremble when they are trying to select something on the screen, and the computer might change the text size, or try to guess the intended selection. In this way of managing disabilities, the burden is put on the computer programmers and designers to detect and adapt to the disabled person. 10.2.5. Are things getting better?# We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people. 1 We’ve also noticed many YouTube video essayists have mentioned having ADHD. This is perhaps another job that attracts those who tend to hyperfocus on whatever topic grabbed their attention, and then after releasing their video, move on to something completely different. 2 Universal Design has taken some criticism.
Some have updated it, such as by acknowledging that different people’s needs may be contradictory, and others have replaced it with frameworks like Inclusive Design.

      This section does a great job comparing different ways of managing disability and, more importantly, showing how each approach places responsibility on different people. Coping strategies and modifying the person often shift the burden onto disabled individuals, asking them to adapt or appear “normal” in environments that were not designed for them. In contrast, universal design and ability-based design move that responsibility to designers and programmers, emphasizing systems that work for a wider range of users. I also appreciated the final point that accessibility is not a linear story of progress—new technologies can improve access for some people while creating new barriers for others, making accessibility an ongoing design challenge rather than a solved problem.

    1. 4.1.2. Basic Data Types# First, we’ll look at a few basic data storage types. We’ll also be including some code examples you can look at, though don’t worry yet if you don’t understand the code, since we’ll be covering these in more detail throughout the rest of the book. Booleans (True / False)# Binary consisting of 0s and 1s makes it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values. Fig. 4.4 A blue checkmark is something an account either has or doesn’t, so it can be stored as a binary value.# Booleans are often created when doing some sort of comparison or test, like: Do I have enough money in my wallet to pay for the item? Does this tweet start with “hello” (meaning it is a greeting)? Example Python code:

# Save a boolean value in a variable called does_user_have_blue_checkmark
does_user_have_blue_checkmark = True

# Save a boolean value in a variable based on a comparison.
# The code checks if a wallet has more in it than the cost of the item,
# which will be True or False, and be saved in has_enough_money
has_enough_money = money_in_wallet > cost_of_item

# Save a boolean value in a variable based on a method call.
# The code checks if the text of a tweet (stored in tweet_text) starts
# with "Hello", which will be True or False, and be saved in is_greeting
is_greeting = tweet_text.startswith("Hello")

Numbers# Numbers are normally stored in two different ways: Integer: whole numbers like 5, 37, -10, and 0 Floating point numbers: these can represent decimals like 0.75, -1.333, and 3 x 10^8 Fig. 4.5 The number of replies, retweets, and likes can be represented as integer numbers (197.8K can be stored as a whole number like 197,800).
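The section gives Python code for booleans but describes numbers only in prose. A parallel sketch for numbers, following the tweet example (the variable names are our own illustration):

```python
# Integers store whole numbers exactly, like a count of likes:
# "197.8K" likes can be stored as the whole number 197,800
number_of_likes = 197800

# Floating point numbers can hold decimals and scientific notation,
# but they are approximations and may round very precise values
average_dog_rating = 0.75
speed_of_light = 3e8  # 3 x 10^8, written in scientific notation

print(type(number_of_likes).__name__)  # int
print(type(speed_of_light).__name__)   # float
```

The distinction matters because integers count things exactly (replies, retweets, likes), while floats trade exactness for the ability to represent fractions and very large or small quantities.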

      This section helped me clearly see how different data types represent different kinds of information. Booleans are especially interesting because they force complex situations into true/false decisions, which can oversimplify reality. It also made me realize how choices about numbers and strings affect what computers can accurately store and how much meaning might be lost through rounding or categorization.

    1. Dictionaries# The other method of grouping data that we will discuss here is called a “dictionary” (sometimes also called a “map”). You can think of this as like a language dictionary where there is a word and a definition for each word. Then you can look up any name or word and find the value or definition. Example: An English Language Dictionary with definitions of three terms: Social Media: An internet-based platform used for people to form connections to each other and share things. Ethics: Thinking systematically about what makes something morally right or wrong, or using ethical systems to analyze moral concerns in different situations. Automation: Making a process or activity that can run on its own without needing a human to guide it. The Dictionary data type allows programmers to combine several pieces of data by naming each piece. When we do this, the dictionary will have a number of names, and for each of those names a piece of information (called a “value” in this context). Dictionary: Name 1: Value 1, Name 2: Value 2, Name 3: Value 3. So if we look at the example tweet, we can combine all the data in a dictionary. Fig. 4.9 A tweet with photos of a cute puppy! (source)# Dictionary (with some of the data): user_name: “WeRateDogs®”, user_handle: “@dog_rates”, user_has_blue_checkmark: True, tweet_text: “This is Woods. He’s here to help with the dishes. Specifically the pre-rinse, where he licks every item he can. 12/10”, number_of_replies: 1533, number_of_retweets: 26200, number_of_likes: 197800. Example Python code:

# Save some info about a tweet in a variable called tweet_info
tweet_info = {
    "user_name": "WeRateDogs®",
    "user_handle": "@dog_rates",
    "user_has_blue_checkmark": True,
    "tweet_text": "This is Woods. He’s here to help with the dishes. Specifically the pre-rinse, where he licks every item he can. 12/10",
    "number_of_replies": 1533,
    "number_of_retweets": 26200,
    "number_of_likes": 197800
}

Note: We’ll demonstrate dictionaries later in Chapter 5: History of Social Media, and Chapter 8: Data Mining. Groups within Groups# We can use dictionaries and lists together to make lists of dictionaries, lists of lists, dictionaries of lists, or any other combination. So for example, I could make a list of Twitter users. Each Twitter user could be a dictionary with info about that user, and one piece of information it might have is a list of who that user is following. List of users: User 1: Username: kylethayer (a String), Twitter handle: @kylemthayer (a String), Profile Picture: [TODO picture here] (an image), Follows: @SusanNotess, @UW, @UW_iSchool, @ajlunited, … (a list of Strings). User 2: Username: Dr Susan Notess (a String), Twitter handle: @SusanNotess (a String), Profile Picture: [TODO picture here] (an image), Follows: @kylemthayer, @histoftech, @j_kalla, @dbroockman, @qaxaawut, @shengokai, @laniwhatison (a list of Strings).
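The “List of users” described above can be written out directly in Python as a list of dictionaries, where each user’s “follows” entry is itself a list. This sketch leaves out the profile pictures, matching the [TODO picture here] placeholders:

```python
# A list of Twitter users: each user is a dictionary,
# and each user's "follows" value is itself a list of strings
users = [
    {
        "username": "kylethayer",
        "twitter_handle": "@kylemthayer",
        "follows": ["@SusanNotess", "@UW", "@UW_iSchool", "@ajlunited"],
    },
    {
        "username": "Dr Susan Notess",
        "twitter_handle": "@SusanNotess",
        "follows": ["@kylemthayer", "@histoftech", "@j_kalla",
                    "@dbroockman", "@qaxaawut", "@shengokai", "@laniwhatison"],
    },
]

# We can look up nested pieces of data, e.g., who the first user follows:
print(users[0]["username"], "follows", len(users[0]["follows"]), "accounts")
```

This nesting (lists of dictionaries of lists) is how real social media data is typically structured when downloaded from a platform.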

      I like the dictionary analogy because it makes clear how data gets structured and labeled. By assigning names to values, dictionaries don’t just store information, they also shape how programmers interpret and access it. This made me realize that how data is organized can influence what questions are easy—or hard—to ask later.

    1. 3.4. Bots and Responsibility# As we think about responsibility in ethical scenarios on social media, the existence of bots causes some complications. 3.4.1. A Protesting Donkey?# To get an idea of the type of complications we run into, let’s look at the use of donkeys in protests in Oman: “public expressions of discontent in the form of occasional student demonstrations, anonymous leaflets, and other rather creative forms of public communication. Only in Oman has the occasional donkey…been used as a mobile billboard to express anti-regime sentiments. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.” From Kings and People: Information and Authority in Oman, Qatar, and the Persian Gulf by Dale F. Eickelman1 In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) from the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey to stay unaware of its political mission. 3.4.2. Bots and responsibility# Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected. 3.4.3. Reflection questions# How are people’s expectations different for a bot and a “normal” user?
Choose an example social media bot (find one on your own or look at Examples of Bots (or apps)). What does this bot do that a normal person wouldn’t be able to, or wouldn’t be able to do as easily? Who is in charge of creating and running this bot? Does the fact that it is a bot change how you feel about its actions? Why do you think social media platforms allow bots to operate? Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability? 1 We haven’t been able to get the original chapter to load to see if it indeed says that, but we found it quoted here and here. We also don’t know if this is common or representative of protests in Oman, nor do we fully understand the cultural importance of what is happening in this story. Still, we are using it at least as a thought experiment.

      I found the donkey protest example helpful for understanding how responsibility can be separated from action. Just like the donkey does not understand the protest it carries, bots can perform actions without intention or awareness. This makes it harder to assign responsibility, since the people who design, deploy, or benefit from a bot may all have different roles and intentions.

    1. 3.1. Definition of a bot# There are several ways computer programs are involved with social media. One of them is a “bot”: a computer program that acts through a social media account. There are other ways of programming with social media that we won’t consider bots (and we will cover these at various points as well): The social media platform itself is run with computer programs, such as recommendation algorithms (chapter 12). Various groups want to gather data from social media, such as advertisers and scientists. This data is gathered and analyzed with computer programs, which we will not consider bots, but will cover later, such as in Chapter 8: Data Mining. Bots, on the other hand, will take actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as accounts run by actual humans who are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm: Fig. 3.1 A photo that is likely from a click farm, where a human computer is paid to do actions through multiple accounts, such as liking a post or rating an app. For our purposes here, we consider this a type of automation, but we are not considering this a “bot,” since it is not using (electrical) computer programming.#
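To make the definition concrete, here is a minimal sketch of what a bot program might look like. The `post_to_account` function is a hypothetical stand-in of our own for a real platform's posting API (a real bot would need an API library and authentication credentials):

```python
import datetime

def post_to_account(account_name, text):
    """Hypothetical stand-in for a social media platform's posting API.
    A real bot would call the platform's actual API here instead."""
    message = f"[{account_name}] {text}"
    print(message)
    return message

def run_daily_greeting_bot(account_name):
    """A bot in this chapter's sense: a computer program acting through
    a social media account, posting with no human writing each post."""
    today = datetime.date.today().isoformat()
    return post_to_account(account_name, f"Good morning! Today is {today}.")

run_daily_greeting_bot("@example_bot")
```

A person might schedule this program to run automatically every morning, which is the intention/action split the next section discusses: the program acts, but the intentions belong to whoever wrote and scheduled it.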

      This section helped clarify that not all automation on social media counts as a bot. I found it especially useful that the definition focuses on whether the account is operated by computer code rather than by humans, even if those humans behave mechanically, like in click farms. This distinction makes it easier to think more precisely about responsibility and accountability when automation affects online spaces.

    1. 9.3. Additional Privacy Violations# Besides hacking, there are other forms of privacy violations, such as: Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example: If you send “private” messages on a work system, your boss might be able to read them. When Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages. Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much. Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, when former tech CEO John McAfee was a suspect in a murder in Belize, he hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala. Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings data sets, but at least some users’ data could be traced back to them. Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles, which contain information about a user that the user didn’t provide or consent to. Non-User Information: Social media sites might collect information about people who don’t have accounts, like how Facebook does.
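The Netflix deanonymization mentioned above worked by linking “anonymized” records to public data. A toy sketch of such a linking attack (the datasets, names, and function here are invented purely for illustration):

```python
# "Anonymized" ratings: names removed, but movie titles and dates remain
anonymized_ratings = [
    {"user_id": 8271, "movie": "Movie A", "date": "2006-03-01", "rating": 5},
    {"user_id": 8271, "movie": "Movie B", "date": "2006-03-04", "rating": 1},
]

# Public reviews posted under real names (e.g., on a public review site)
public_reviews = [
    {"name": "Jane Doe", "movie": "Movie A", "date": "2006-03-01"},
    {"name": "Jane Doe", "movie": "Movie B", "date": "2006-03-04"},
]

def deanonymize(anonymized_ratings, public_reviews):
    """Guess which real person an anonymous user_id belongs to by counting
    how many (movie, date) pairs match someone's public reviews."""
    public = {(r["movie"], r["date"]): r["name"] for r in public_reviews}
    matches = {}
    for rating in anonymized_ratings:
        name = public.get((rating["movie"], rating["date"]))
        if name:
            key = (rating["user_id"], name)
            matches[key] = matches.get(key, 0) + 1
    return matches

# user 8271 matches Jane Doe on two distinct (movie, date) pairs -- enough
# overlap to link the "anonymous" ratings (including the private 1-star
# rating) back to a named person
print(deanonymize(anonymized_ratings, public_reviews))
```

The point is that removing names isn't enough: any combination of fields rare enough to be unique (here, what you watched and when) can serve as a fingerprint.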

      This section made me realize that privacy violations don’t always involve hacking or illegal access. Even data that seems harmless—like metadata or anonymized datasets—can still expose people in ways they never agreed to. I was especially surprised by how companies can infer new information or create shadow profiles about both users and non-users, which shows how limited individual control over personal data really is.

    1. 9.2. Security
       While we have concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is, or even tell if two people used the same password. Therefore, if someone gets access to the database, the only way to figure out the right password is “brute force”: keep guessing passwords until they guess the right one (and each guess takes a lot of time). But companies don’t always follow this proper practice. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly, and then hackers leaked their password database of 153 million users. From a security perspective, there are many risks that a company faces, such as:
       - Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women
       - Hackers finding a vulnerability and inserting, modifying, or downloading information. For example:
         - hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax
         - hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users, or about 7% of all people on Earth
       Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target.
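The per-password scheme described above — a unique random salt for each password, plus a deliberately slow hash — can be sketched with Python's standard library. This is an illustrative sketch, not any company's actual code; real systems use vetted implementations of exactly this pattern.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Hash a password with a unique random salt and a deliberately slow KDF."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for every stored password
    # Many PBKDF2 iterations make each brute-force guess take a lot of time
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def check_password(password, salt, stored_digest, iterations=200_000):
    """The database can only confirm a guess; it cannot recover the password."""
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("hunter2", salt, digest))  # False
```

Because each password gets its own salt, two users with the same password end up with different digests — which is why a properly built database can't even tell that two people used the same password.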
Hackers can target individuals with attacks like:
       - Password reuse attacks: if hackers find out your password from one site, they try that password on many other sites
       - Impersonation: tricking a computer into thinking they are another site — for example, the US NSA impersonated Google
       - Social engineering: trying to gain access to information or locations by tricking people. For example:
         - Phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it. Some people have made malicious QR codes to take you to a phishing site.
         - Many of the actions done by the con man Frank Abagnale, which were portrayed in the movie Catch Me If You Can
       One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts. By Kyle Thayer and Susan Notess © Copyright 2022.
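The 2-factor authentication codes mentioned above are typically generated with the TOTP standard (RFC 6238): the service and your phone share a secret, and both compute a short code from the current time. A minimal sketch using only the standard library — real services use vetted libraries, not hand-rolled code like this:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step      # which 30-second window we're in
    msg = struct.pack(">Q", counter)      # counter as 8 big-endian bytes
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this shared secret at time 59 yields "94287082"
print(totp(b"12345678901234567890", timestamp=59, digits=8))
```

Because the code changes every 30 seconds, a stolen or reused password alone is no longer enough to log in — which is what makes 2-factor authentication such an effective individual defense.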

      This section helped me realize that security failures are often not just technical problems, but also human and organizational ones. Even when proper security practices are well known, companies still choose convenience or cost-saving over protecting users’ data. What stood out to me most was how easily individuals can become targets through things like password reuse or phishing, which makes personal security practices like two-factor authentication feel necessary rather than optional.

    1. 8.2. Data From the Reddit API
       We’ve been accessing Reddit through Python and the PRAW code library. The PRAW library works by sending requests across the internet to Reddit, using what is called an “application programming interface,” or API for short. APIs have a set of rules for what requests you can make, what happens when you make the request, and what information you can get back. If you are interested in learning more about what you can do with PRAW and what information you can get back, you can look at the official documentation — though be warned, it is not organized in a newcomer-friendly way and takes some getting used to. The PRAW library documentation tells you what the library can do; by clicking on the PRAW models, you can find a list of the types of data for each of the models, and a list of functions (i.e., actions) you can do with them. You can also look up information on the data that you can get from the Reddit API in the Reddit API Documentation. The Reddit API lets you access just some of the data that Reddit tracks; Reddit and other social media platforms track much more than they let you have access to.
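Under the hood, an API request is just a specially formed URL. Reddit also exposes much of the same listing data as plain JSON by appending `.json` to ordinary reddit.com URLs; the subreddit and parameters below are examples, not anything from the book's exercises. A small sketch of building such a request URL with the standard library:

```python
from urllib.parse import urlencode

def hot_posts_url(subreddit, limit=5):
    """Build the URL for a subreddit's 'hot' listing in JSON form."""
    params = urlencode({"limit": limit})
    return f"https://www.reddit.com/r/{subreddit}/hot.json?{params}"

url = hot_posts_url("cats", limit=5)
print(url)  # https://www.reddit.com/r/cats/hot.json?limit=5

# Actually fetching it (requires network access) would look like:
# import urllib.request
# with urllib.request.urlopen(url) as response:
#     raw_json = response.read()
```

Libraries like PRAW construct and send requests along these lines for you, then turn the JSON that comes back into Python objects (the "models" in the documentation).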

      This section shows how powerful—and dangerous—data mining can be when patterns are taken out of context. The examples make it clear that just because data lines up does not mean it reveals a true cause, especially with spurious correlations. It highlights how easily data can be used to support misleading or biased conclusions, which is especially concerning when these inferences affect real people’s identities and social outcomes.

    1. Media Data
       Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like:
       - email address
       - name
       - profile picture
       - interests
       - friends
       Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all of this, but they might):
       - when users are logged on and logged off
       - who users interact with
       - what users click on
       - what posts users pause over
       - where users are located
       - what users send in direct messages to each other
       Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website. Additionally, social media might collect information about non-users, such as when a user posts a picture of themselves with a friend who doesn’t have an account, or when a user shares their phone contact list with a social media site, some of whom don’t have accounts (Facebook does this). Social media platforms then use “data mining” to search through all this data to try to learn more about their users, find patterns of behavior, and in the end, make more money.

      This section clearly shows how much data social media platforms collect, often beyond what users knowingly provide. I was especially surprised by the idea that platforms can collect data about non-users through photos or contact lists. It makes it clear that participation in social media data systems isn’t always a choice, which raises serious concerns about privacy and consent.

  3. Jan 2026
    1. 2.3.5. Compilers and Programming Languages
       History: In the early 1950s, Grace Hopper proposed a better way of programming a computer. She suggested creating a “programming language” based on English words, with a “compiler” computer program that would turn the programming-language code into binary computer instructions. [Photo of Grace Hopper c. 1960, at that time a Commander in the US Navy.] When Hopper’s ideas were mostly ignored, she proceeded to create her own compiler and later helped design some of the most important and influential early programming languages and compilers.
       The new set-up for programming: So, thanks to Grace Hopper, we now have a new set-up for computer programming, which is what programmers still use today: When someone wants a computer to perform a task (that hasn’t already been programmed), a human programmer acts as a translator, translating that task into a programming language. Next, a compiler (or interpreter) program translates the programming-language code into the binary code that the computer runs. In this set-up, the programming language acts as an intermediate language, the way that French did in my earlier analogy. A programmer’s basic task is to do these three things:
       - Given a problem, break it down into steps for a computer
       - Write those steps down in a programming language
       - Run the compiler or interpreter, so the computer program can run on the computer
       Programming languages: Programming languages (e.g., Python, R, Java) are specially designed languages that attempt to split the difference between how a computer thinks and communicates and how people think and communicate. There are many programming languages, with different specializations and trade-offs. In this book, we will use Python, which is commonly used in data science tasks and has support for writing programs that work with Reddit.
Compilers / Interpreters: Compilers are special programs that translate code written in a programming language into the binary 0s and 1s that a computer runs. There are two varieties:
       - standard compiler: takes a whole computer program and turns it all into binary, so it can be run later
       - interpreter: turns the programming-language code into binary as it is running the program
       Python uses an interpreter, so when you run a Python program, the interpreter translates the Python code into binary while running it.
       Programming in this book: Throughout the rest of this book, we will take ideas for programs written in English and translate them into Python code, and we will look at Python code and translate it back into English descriptions of what the code does. The Python interpreter will then translate this code into binary instructions, which the computer will then run. Next, let’s look at an example computer program that posts one tweet.
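You can actually watch Python's interpreter perform the first stage of this translation: the built-in `compile()` function turns Python source code into an intermediate "bytecode" form, and the `dis` module displays the low-level instructions. A small illustration (the exact instruction names vary between Python versions, and this bytecode is for Python's virtual machine, not the CPU's native binary):

```python
import dis

# Translate one line of Python source into a bytecode "code object"
code = compile("x = 1 + 2", "<example>", "exec")

# List the instructions the interpreter will execute for this line
for instruction in dis.Bytecode(code):
    print(instruction.opname, instruction.argrepr)
```

Running this shows instructions like `LOAD_CONST` and `STORE_NAME` — the steps the interpreter carries out, one by one, while your program runs.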

      Grace Hopper’s work shows how programming languages and compilers make computers more accessible to humans by acting as a bridge between human language and machine code. By introducing higher-level languages and compilers, she shifted programming from thinking only in binary to thinking in structured steps, which made software development more flexible and powerful. This structure also highlights that programmers play a key role in translating human intent into actions computers can execute.

    1. 1.2. Kumail Nanjiani’s Reflections on Ethics in Tech# Image source Kumail Nanjiani was a star of the Silicon Valley TV show, which was about the tech industry. He posted these reflections on ethics in tech on Twitter (@kumailn) on November 1, 2017: As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we’ll see tech that is scary. I don’t mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues. And we’ll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don’t even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguards will develop.” But tech is moving so fast. That there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online. Only “Can we do this?” Never “should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end. Kumail Nanjiani 1.2.1. Reflection questions:# What do you think is the responsibility of tech workers to think through the ethical implications of what they are making? Why do you think the people who Kumail talked with didn’t have answers to his questions?

      I think tech workers have a responsibility to consider the ethical implications of what they create, because technology can shape behavior, privacy, and power in ways that are difficult to reverse. As Kumail Nanjiani points out, once technology is released, it cannot simply be taken back, so ethical thinking should happen before harm occurs.

      I think the people Kumail spoke with lacked answers because ethical reflection is often not prioritized in tech culture. Many developers focus on whether something can be built rather than whether it should be built, and since these questions are rarely asked, they may not be prepared to address them.

    1. 7.6.3. Trolling and Nihilism# While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction. We can see this nihilism show up in one of the versions of the self-contradictory “Rules of the Internet:” 8. There are no real rules about posting … 20. Nothing is to be taken seriously … 42. Nothing is Sacred YouTuber Innuendo Studios talks about the way arguments are made in a community like 4chan: You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up. The Alt-Right Playbook: The Card Says Moops by Innuendo Studios While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character). Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling. 
If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith1. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      This section helped me think about trolling in a much more nuanced way, especially the idea that disruption itself isn’t automatically good or bad. I found the discussion about group formation and norm enforcement really useful, because it explains why trolling can feel threatening—it challenges the patterns and signals that groups rely on to define who belongs. The comparison between trolling, protest, and revolution also stood out to me, since it shows how moral judgment often depends on whether we see the existing social order as legitimate. Overall, this section made it clear that evaluating trolling ethically requires looking beyond intent or humor and examining what is being disrupted and who is harmed or protected by that disruption.

    1. 7.2. Origins of trolling# While the term “trolling” in the sense we are talking about in this chapter comes out of internet culture, the type of actions that we now call trolling have been happening as far back as we have historical records. 7.2.1. Pre-internet trolling# Before the internet, there were many activities that we would probably now call “trolling”, such as: Hazing: Causing difficulty or suffering for people who are new to a group Satire: (e.g., A Modest Proposal) which takes a known form, but does something unexpected or disruptive with it. Practical jokes / pranks The video above is a 1957 April Fool’s Day hoax video broadcast by the BBC claiming to show how spaghetti noodles are harvested from trees. Additionally, the enjoyment of causing others pain or distress (“lulz”) has also been part of the human experience for millennia: “Boys throw stones at frogs in fun, but the frogs do not die in fun, but in earnest.” Bion of Borysthenes (Greece ~300 BCE) Additionally, inauthentic arguments have long been observed, and were memorably explored by Jean-Paul Sartre as “Bad Faith”. “Bad faith” here means pretending to hold views or feelings, while not actually holding them (this may be intentional, or it may be through self-deception). Sartre particularly observed this in arguments made by antisemites while he lived in Nazi-controlled Paris: “Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. 
If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.” Jean-Paul Sartre, 1945 CE, Paris, France 7.2.2. Origins of Internet Trolling# We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from. One set of the early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West) and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS). In these MUDs, players developed activities that we now consider trolling, such as “Griefing” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming” where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005). 
These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined both their trolling philosophy: Rule 43. The more beautiful and pure a thing is - the more satisfying it is to corrupt it and their extreme misogyny: Rule 30. There are no girls on the internet Rule 31. TITS or GTFO - the choice is yours [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out] You can read more at: knowyourmeme and wikipedia

      This section helped me realize that trolling isn’t just an internet-specific problem, but a behavior that has existed long before online spaces. I found the connection to satire, hazing, and especially Sartre’s idea of “bad faith” really interesting, because it shows how trolling often isn’t about genuine disagreement but about disrupting or provoking others. Understanding these historical roots makes it clearer why trolling is so persistent online today, and why simply asking trolls to “argue rationally” often doesn’t work.

    1. 6.5. Parasocial Relationships# Another phenomenon related to authenticity which is common on social media is the parasocial relationship. A parasocial relationship is when a viewer or follower of a public figure (that is, a celebrity) feels like they know the public figure, and may even feel a sort of friendship with them, but the public figure doesn’t know the viewer at all. Parasocial relationships are not a new phenomenon, but social media has increased our ability to form both sides of these bonds. As comedian Bo Burnham put it: “This awful D-list celebrity pressure I had experienced onstage has now been democratized.” Learn more about parasocial relationships: StrucciMovies: Fake Friends YouTube Series Sarah Z: How Fans Treat Creators 33 min

      The example of Mr. Rogers shows that parasocial relationships are not automatically unethical or inauthentic. What seems important here is that he tried to clearly define the limits of the relationship, such as calling viewers “television friends” and explaining that visits were not possible. This transparency helped make the parasocial relationship feel more authentic, even though it was not a real two-way friendship.

    1. 6.1. Authenticity# Early in the days of YouTube, one YouTube channel (lonelygirl15) started to release vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.

      The lonelygirl15 example shows why authenticity matters so much on social media. What upset people was not that the story was fictional, but that the way the connection was presented did not match reality. This makes me think that authenticity is less about whether something is “real” or “fake,” and more about whether audiences clearly understand what kind of relationship or signal they are engaging with.

    1. 5.7. Reflection Activities: Actions on Social Media Designs# 5.7.1. Comparing social media actions# Open two social media sites and choose equivalent views on each (e.g., a list of posts, an individual post, an author page etc.). List what actions are immediately available. Then explore and see what actions are available after one additional action (e.g., opening a menu), then what actions are two steps away. What do you notice about the similarities and differences in these sites? 5.7.2. Design a social media site# Now it’s your turn to try designing a social media site. Decide a type of social media site (e.g., a video site like youtube or tiktok, or a dating site, etc.), and a particular view of that site (e.g., profile picture, post, comment, etc.). Draw a rough sketch of the view of the site, and then make a list of: What actions would you want available immediately What actions would you want one or two steps away? What actions would you not allow users to do (e.g., there is no button anywhere that will let you delete someone else’s account)?

      This activity shows how design choices influence user behavior by making some actions more visible than others. By comparing different platforms, it becomes clear that actions like sharing or liking are often prioritized, while actions like reporting or privacy controls are placed further away.

    1. The first versions of internet-based social media started becoming popular in the late 1900s. The internet of those days is now called “Web 1.0.” The Web 1.0 internet had some features that make it stand out compared to later internet trends: If you wanted to make a profile to talk about yourself, or to show off your work, you had to create your own personal webpage, which others could visit. These pages had limited interaction, so you were more likely to load one thing at a time and look at a separate page for each post or piece of information. Communication platforms were generally separate from these profiles or personal web pages.

      Early Web 1.0 social media required much more technical effort from users, such as creating personal webpages. This likely limited participation to people with more technical knowledge and made online communities smaller and less diverse.