37 Matching Annotations
  1. Mar 2024
    1. In the first chapter of our book we quoted actor Kumail Nanjiani on tech innovators’ lack of consideration of ethical implications of their work. Of course, concerns about the implications of technological advancement are nothing new. In Plato’s Phaedrus (~370BCE), Socrates tells (or makes up1) a story from Egypt critical of the invention of writing: Now in those days the god Thamus was the king of the whole country of Egypt, […] [then] came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; […] [W]hen they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: […] this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. In England in the early 1800s, Luddites were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also Sci-Fi author Ted Chiang on Luddites and AI) Fig. 21.1 The start of an xkcd comic compiling a hundred years of complaints about how technology has sped up the pace of life. (full transcript of comic available at explainxkcd)# Inventors ignoring the ethical consequences of their creations is nothing new as well, and gets critiqued regularly: Fig. 21.2 A major theme of the movie Jurassic Park (1993) is scientists not thinking through the implications of their creations.# Fig. 21.3 Tweet parodying how tech innovators often do blatantly unethical things# Many people like to believe (or at least convince others) that they are doing something to make the world a better place, as in this parody clip from the Silicon Valley show (the one Kumail Nanjiani was on, though not in this clip): But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney who hoped his invention of the cotton gin would reduce slavery in the United States, but only made it worse, or Alfred Nobel who invented dynamite (which could be used in construction or in war) and decided to create the Nobel prizes, or Albert Einstein regretting his role in convincing the US government to develop nuclear weapons, or Aza Raskin regretting his invention of infinite scroll.

      This topic is intriguing because it crosses the divide between technology and humans, forcing us to consider the moral implications of innovation in great detail. It forces us to critically assess our roles as technology producers, consumers, and regulators, encouraging us to take into account not only the potential benefits of our discoveries but also their effects on society, morality, and the environment. We are urged to participate in a more nuanced discussion on how technology impacts our world and our future by considering historical examples and modern criticisms, emphasizing the intricate relationship between technical advancement and human values. It is an interesting and important field of study since this conversation is vital to ensure that technical advancement is in line with moral values and advances society as a whole.

    2. In the first chapter of our book we quoted actor Kumail Nanjiani on tech innovators’ lack of consideration of ethical implications of their work. […]

      Through historical narratives and modern examples, the repeating theme of innovators ignoring the ethical consequences of their technology draws attention to the critical gulf that exists between technical advancement and its influence on society. This pattern emphasizes how crucial ethical responsibility is for driving technological advancement among innovators and the larger community. In order to ensure that innovations not only increase human capacities but also safeguard societal well-being, avert harm, and positively impact the future, it highlights the necessity of a conscientious approach that takes potential consequences into consideration. These lessons serve as a helpful reminder of how crucial it is to incorporate varied stakeholder participation and ethical foresight into the innovation process in order to prevent mistakes from being repeated and to develop technologies that genuinely benefit humanity.

    1. The tech industry is full of colonialist thinking and practices, some more subtle than others. To begin with, much of the tech industry is centralized geographically, specifically in Silicon Valley, San Francisco, California. The leaders and decisions in how tech operates come out of this one wealthy location in a wealthy nation. Then, much of tech is dependent on exploiting cheap labor, often in dangerous conditions, in other countries (thus extracting the resource of cheap labor, from places with “inferior” governments and economies). This labor might be physical labor, or dealing with dangerous chemicals, or the content moderators who deal with viewing horrific online content. Tech industry leaders in Silicon Valley then take what they made with exploited labor, and sell it around the world, feeling good about themselves, believing they are benefitting the world with their “superior” products. 20.2.1. Example: One Laptop Per Child# An example of how this can play out is the failed One Laptop Per Child (OLPC) project. In late 2005, tech visionary and MIT Media Lab founder Nicholas Negroponte [introduced a] $100 laptop [that] would have all the features of an ordinary computer but require so little electricity that a child could power it with a hand crank OLPC’s $100 laptop was going to change the world — then it all went wrong OLPC wanted to give every child in the world a laptop, so they could learn computers, believing this would benefit the world. But this project failed for a number of reasons, such as: The physical device didn’t work well. The hand-powered generator was unreliable, the screen too small to read. OLPC was not actually providing a “superior” product to the rest of the world. When they did hand out some, it didn’t come with good instructions. Kids were just supposed to figure it out on their own. If this failed, it must be the fault of the poor people around the world. It wasn’t designed for what kids around the world would actually want. 
They didn’t take input from actual kids around the world. OLPC thought they had superior knowledge and just assumed they knew what people would want. In the end, this project fell apart, and most of tech moved

      I think this is interesting because it brings to light the significant ethical and sustainability issues raised by the tech industry's colonialist tactics, which include centralizing decision-making in affluent locations like Silicon Valley and exploiting cheap labor in less developed nations. The One Laptop Per Child (OLPC) project is an excellent illustration of these issues, showing how well-meaning projects can fall short because of a lack of awareness of, and consideration for, the needs and circumstances of the communities they are intended to assist. This approach not only sustains inequality but also downplays the significance of inclusive, participatory development. It is imperative to acknowledge and tackle these colonialist inclinations in order to establish a tech sector that genuinely serves all residents of the world, values diversity of opinion, and advances fair development.

    1. Meta’s way of making profits fits in a category called Surveillance Capitalism. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary (“behavioral surplus”) and see what more they could predict about users. Companies could then sell this data about users directly, or (more commonly), they could keep their data hidden, but use it to sell targeted advertisements. So, for example, Meta might let an advertiser say they want an ad to only go to people likely to be pregnant. Or they might let advertisers make ads go only to “Jew Haters” (which is ethically very bad, and something Meta allowed). 19.2.2. Meta’s Business Model# So, what Meta does to make money (that is, how shareholders get profits) is that they collect data on their users to make predictions about them (e.g., demographics, interests, etc.). Then they sell advertisements, giving advertisers a large list of categories that they can target for their ads. The way that Meta can fulfill their fiduciary duty in maximizing profits is to try to get: More users: If Meta has more users, it can offer advertisers more people to advertise to. More user time: If Meta’s users spend more time on Meta, then it has more opportunities to show ads to each user, so it can sell more ads. More personal data: The more personal data Meta collects, the more predictions about users it can make. It can get more data by getting more users, and more user time, as well as finding more things to track about users. Reduce competition: If Meta can become the only social media company that people use, then they will have cornered the market on access to those users. This means advertisers won’t have any alternative to reach those users, and Meta can increase the prices of their ads. 19.2.3. 
How Meta Tries to Corner the Market of Social Media# To increase profits, Meta wants to corner the market on social media. This means they want to get the most users possible to use Meta (and only Meta) for social media. Before we discuss their strategy, we need a couple background concepts: Network effect: Something is more useful the more people use it (e.g., telephones, the metric system). For example, when the Google+ social media network started, not many people used it, which meant that if you visited it there wasn’t much content, so people stopped using it, which meant there was even less content, and it was eventually shut down. Network power: When more people start using something, it becomes harder to use alternatives. For example, Twitter’s large user base makes it difficult for people to move to a new social media network, even if they are worried the new owner is going to ruin it, since the people they want to connect with aren’t all on some other platform. This means Twitter can get much worse and people still won’t benefit from leaving it. Let’s look at a scene from the movie The Social Network (about the origins of Facebook), where Sean Parker (who created the music-sharing app Napster) talks to Facebook founders Mark Zuckerberg and Eduardo Saverin about their strategy to grow Facebook: In that clip, you will notice strategies for trying to use the network effect (though they don’t call it that) by targeting specific users to try to make Facebook more desirable than competitors. They also discuss how they could start running ads now (making them a million dollars). But instead, if they don’t sell ads now (running the company at a loss) then they can maximize their growth. Then, when they have grown much larger and have enough network power, users won’t quit when they start selling ads later (and they’ll make a billion dollars). 
So, looking back at Meta’s goal (getting the most users possible to use Meta, and only Meta for social media), let’s look at some obstacles and how Meta tries to overcome these obstacles: Obstacle: Users don’t want ads on Facebook Solution: No ads until Facebook has attracted enough users (network power) so that users won’t leave when ads are introduced (Facebook introduced ads in 2007) Obstacle: People speak different languages Solution: Increase language support of Facebook so more people can use the site Obstacle: Not everyone has the internet Solution: Give them free internet, but push them to Facebook while doing so (called Free Basics) Obstacle: A competing social media company has a user base (e.g., Instagram, Snapchat) Solution: Try to purchase the company, or copy their features
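The compounding dynamic behind the network effect can be made concrete with a toy simulation. Everything here is illustrative: the Metcalfe-style value formula, the function names, and the adoption constants are assumptions for the sketch, not data about Facebook or any real platform.

```python
# Toy model of the network effect: a platform is more valuable the more
# people already use it, so adoption compounds on itself.

def platform_value(users, base_value=1.0):
    """Metcalfe-style heuristic: value scales with the number of
    possible connections between users."""
    return base_value * users * (users - 1) / 2

def simulate_adoption(population=1000, initial_users=10, steps=5, k=0.0005):
    """Each step, non-users join at a rate proportional to how many
    users there already are (more users -> more reason to join)."""
    users = initial_users
    history = [users]
    for _ in range(steps):
        joining = round(k * users * (population - users))
        users = min(population, users + joining)
        history.append(users)
    return history

print(simulate_adoption())  # user counts grow faster each step
```

Running the sketch shows the self-reinforcing loop the chapter describes for Google+ in reverse: each new user makes the platform more attractive to the next one, which is why an early head start is so hard for competitors to overcome.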

      Meta's strategy to dominate the social media industry sits at the core of the surveillance capitalism notion: build a sizable user base, prolong user engagement, and gather copious amounts of personal data to improve targeted advertising. With the help of network effects and network power, Meta aims to make its platforms essential to users and advertisers alike. Calculated choices, such as postponing the introduction of advertisements, increasing language support, and providing free internet services, let Meta overcome challenges like resistance to ads, language barriers, and restrictions on internet access, ensuring that its platforms remain the top option. This strategy maximizes profit margins, gives advertisers unmatched targeting capabilities, and secures Meta's position atop the digital advertising landscape.

  2. Feb 2024
    1. We previously looked at how shame might play out in childhood development; now let’s look at different views of people being shamed in public. 18.3.1. Weak against strong# Jennifer Jacquet argues that shame can be morally good as a tool the weak can use against the strong: The real power of shame is it can scale. It can work against entire countries and can be used by the weak against the strong. Guilt, on the other hand, because it operates entirely within individual psychology, doesn’t scale. […] We still care about individual rights and protection. Transgressions that have a clear impact on broader society – like environmental pollution – and transgressions for which there is no obvious formal route to punishment are, for instance, more amenable to its use. It should be reserved for bad behaviour that affects most or all of us. […] A good rule of thumb is to go after groups, but I don’t exempt individuals, especially not if they are politically powerful or sizeably impact society. But we must ask ourselves about the way those individuals are shamed and whether the punishment is proportional. Jennifer Jacquet: ‘The power of shame is that it can be used by the weak against the strong’ 18.3.2. Schadenfreude# Another way of considering public shaming is as schadenfreude, meaning the enjoyment obtained from the troubles of others. A 2009 satirical article from the parody news site The Onion satirizes public shaming as objectifying celebrities and being entertained by their misfortune: Media experts have been warning for months that American consumers will face starvation if Hollywood does not provide someone for them to put on a pedestal, worship, envy, download sex tapes of, and then topple and completely destroy. Nation Demands Fresh Celebrity Meat - The Onion 18.3.3. Normal People# While the example from The Onion above focuses on celebrity, in the time since it was written, social media has taken a larger role in society and democratized celebrity. 
As comedian Bo Burnham puts it: “[This] celebrity pressure I had experienced on stage has now been democratized and given to everybody [through social media]. And everyone is feeling this pressure of having an audience, of having to perform, of having a sort of, like, proper noun version of your own name and then the self in your heart.” (NPR Fresh Air Interview) Also, Rebecca Jennings worries about how public shaming is used against “normal” people who are plucked out of obscurity to be shamed by huge crowds online: “Millions of people became invested in this (niche! not very interesting!) drama because it gives us something easy to be angry or curious or self-righteous about, something to project our own experiences onto, and thereby contributing even more content to the growing avalanche. Naturally, some decided to go look up the central character’s address, phone number, and workplace and share it on the internet. […] ‘It’s on social media, so it’s public!’ one could argue as a case for people’s right to act like forensic analysts on social media, and that is true. But this justification is typically valid when a) the person posting is someone of note, like a celebrity or a politician, and b) when the stakes are even a little bit high. In most cases of normal-person canceling, neither standard is met. Instead, it’s mob justice and vigilante detective work typically reserved for, say, unmasking the Zodiac killer, except weaponized against normal people. […] Platforms like TikTok, where even people with few or no followers often go viral overnight, expedite the shaming process.

      I think this is interesting because the debate around public humiliation, its potential for good and bad, and the moral issues it raises touches on important facets of human behavior and society. It raises important questions about how we manage accountability, empathy, and justice in a globalized society, which makes it a fascinating and timely topic.

    1. While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so. We’ve seen examples of this before with Justine Sacco and with crowd harassment (particularly dogpiling). For an example of public shaming, we can look at late-night TV host Jimmy Kimmel’s annual Halloween prank, where he has parents film their children as the parents tell the children that the parents ate all the kids’ Halloween candy. Parents post these videos online, where viewers are intended to laugh at the distress, despair, and sense of betrayal the children express. I will not link to these videos which I find horrible, but instead link you to these articles: Jimmy Kimmel’s Halloween prank can scar children. Why are we laughing? (archived copy) Jimmy Kimmel’s Halloween Candy Prank: Harmful Parenting? We can also consider events in the #MeToo movement as at least in part public shaming of sexual harassers (but also of course solidarity and organizing of victims of sexual harassment, and pushes for larger political, organizational, and social changes). 18.2.1. Aside on “Cancel Culture”# The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing. The offense that someone is being canceled for can range from sexual assault of minors (e.g., R. Kelly, Woody Allen, Kevin Spacey), to minor offenses or even misinterpretations. The consequences for being “canceled” can range from simply the experience of being criticized, to loss of job or criminal charges. Given the huge range of things “cancel culture” can be referring to, we’ll mostly stick to talking here about “public shaming,” and “public criticism.”

      I think this is important because examining "cancel culture" and public shaming is crucial because of the significant effects they have on social justice, moral behavior, individual wellbeing, and the operation of digital communities. Comprehending and managing these occurrences necessitates a careful evaluation of their diverse effects and the creation of considerate, just solutions.

    1. 17.4.1. Moderation and Violence# You might remember from Chapter 14 that social contracts, whether literal or metaphorical, involve groups of people all accepting limits to their freedoms. Because of this, some philosophers say that a state or nation is, fundamentally, violent. Violence in this case refers to the way that individual Natural Rights and freedoms are violated by external social constraints. This kind of violence is considered to be legitimated by the agreement to the social contract. This might be easier to understand if you imagine a medical scenario. Say you have broken a bone and you are in pain. A doctor might say that the bone needs to be set; this will be painful, and kind of a forceful, “violent” action in which someone is interfering with your body in a painful way. So the doctor asks if you agree to let her set the bone. You agree, and so the doctor’s action is construed as being a legitimate interference with your body and your freedom. If someone randomly just walked up to you and started pulling at the injured limb, this unagreed violence would not be considered legitimate. Likewise, when medical practitioners interfere with a patient’s body in a way that is non-consensual or not what the patient agreed to, then the violence is considered illegitimate, or morally bad. We tend to think of violence as being another “normatively loaded” word, like authenticity. But where authenticity is usually loaded with a positive connotation–on the whole, people often value authenticity as a good thing–violence is loaded with a negative connotation. Yes, the doctor setting the bone is violent and invasive, but we don’t usually call this “violence” because it is considered to be a legitimate exercise of violence. Instead, we reserve the term “violence” mostly for describing forms of interference that we consider to be morally bad. 17.4.2. A Bit of History# In muc

      I think this is very interesting because investigating harassment from various perspectives provides a deep, comprehensive understanding of the difficulties and possibilities brought about by our increasingly digital society. It invites thoughtful consideration of the nature of community, the obligations of people and institutions, and the changing parameters of legal safeguards and social standards.

    1. Harassment can also be done through crowds. Crowd harassment has always been a part of culture, such as riots, mob violence, revolts, revolution, government persecution, etc. Social media then allows new ways for crowd harassment to occur. Crowd harassment includes all the forms of individual harassment we already mentioned (like bullying, stalking, etc.), but done by a group of people. Additionally, we can consider the following forms of crowd harassment: Dogpiling: When a crowd of people targets or harasses the same person. Public Shaming (this will be our next chapter) Cross-platform raids (e.g., 4chan group planning harassment on another platform) Stochastic terrorism: The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random. See also: An atmosphere of violence: Stochastic terror in American politics In addition, fake crowds (e.g., bots or people paid to post) can participate in crowd harassment. For example: “The majority of the hate and misinformation about [Meghan Markle and Prince Henry] originated from a small group of accounts whose primary, if not sole, purpose appears to be to tweet negatively about them. […] 83 accounts are responsible for 70% of the negative hate content targeting the couple on Twitter.”

      The phenomenon of “crowd harassment” uses the strength of a large group to threaten, intimidate, or harm a target, whether an individual or a group. When many people participate in such actions, the harassment’s impact is amplified, making it more difficult for the target to confront or escape it. This behavior’s shift to the digital sphere, particularly through social media platforms, has brought forth new dynamics and processes that enable harassment of this kind at a speed and scale never before possible.

    1. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter: Fig. 16.3 Summary of Twitter use by Pew Research Center# This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.
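The power-user/lurker split described above can be quantified as a "contribution share": what fraction of all activity comes from the top few percent of users. The sketch below uses synthetic edit counts invented for illustration, not real Wikipedia or StackOverflow data.

```python
# Illustration of how a small share of "power users" can account for
# most contributions in a long-tailed contribution distribution.

def contribution_share(edit_counts, top_fraction=0.01):
    """Fraction of all contributions made by the top `top_fraction`
    of contributors (by contribution count)."""
    counts = sorted(edit_counts, reverse=True)
    n_top = max(1, int(len(counts) * top_fraction))
    return sum(counts[:n_top]) / sum(counts)

# Synthetic long tail: a few heavy editors, some occasional editors,
# and a large crowd of near-lurkers.
edits = [5000, 3000, 2000] + [10] * 97 + [1] * 900
print(f"Top 1% of editors made {contribution_share(edits):.0%} of edits")
```

With this made-up distribution, ten editors out of a thousand account for roughly 85% of all edits, mirroring the "1 percent of editors" pattern the passage cites.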

      I think comprehending the mechanics of engagement in social media and crowdsourcing platforms is essential and interesting because it can help us create digital ecosystems that are more egalitarian, inclusive, and productive. It pushes us to consider how to make the most of the abilities and commitment of power users while drawing more people into participating, boosting the overall productivity and creativity of these platforms.

    1. Let’s now consider some examples of planned crowdsourcing, meaning a system or task was intentionally created and given to a crowd to work on. 16.2.1. Crowdsourcing Platforms# Some online platforms are specifically created for crowdsourcing. For example: Wikipedia: An online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute. Quora: A crowdsourced question-and-answer site. Stack Overflow: A crowdsourced question-and-answer site specifically for programming questions. Amazon Mechanical Turk: A site where you can pay for crowdsourcing small tasks (e.g., pay a small amount for each task, and then let a crowd of people choose to do the tasks and get paid). Upwork: A site that lets people find and contract work with freelancers (generally larger and more specialized tasks than Amazon Mechanical Turk). Project Sidewalk: Crowdsourcing sidewalk information for mobility needs (e.g., wheelchair users). 16.2.2. Example Crowdsourcing Tasks# You probably already have some ideas of how crowds can work together on things like editing articles on a site like Wikipedia or answering questions on a site like Quora, but let’s look at some other examples of how crowds can work together. Fold-It is a game that lets players attempt to fold proteins. At the time, researchers were having trouble getting computers to do this task for complex proteins, so they made a game for humans to try it. Researchers analyzed the best players’ results for their research and were able to publish scientific discoveries based on the contributions of players. Fig. 
16.1 Screenshot of the fold-it game.# A research study demonstrated the power of crowd work by having Mechanical Turk workers build off of the work done by previous workers. To demonstrate, they wrote a note with intentionally bad and almost undecipherable handwriting: Fig. 16.2 A note written with intentionally bad handwriting.# Turkers (the people who do Mechanical Turk tasks) were then given the handwritten note, and after the first few attempts at deciphering it, Turkers were either given a previous attempt at deciphering the note to improve on, or asked to vote on which interpretations were improvements. They were instructed to leave parentheses around sections they weren’t sure about. Here is a selection of subsequent attempts at interpreting the note (from the paper): version 1: You (?) (?) (?) (work). (?) (?) (?) work (not) (time). I (?) (?) a few grammatical mistakes. Overall your writing style is a bit too (phoney). You do (?) have good (points), but they got lost amidst the (writing). (signature) version 4: You (misspelled) (several) (words). (?) (?) (?) work next (time). I also notice a few grammatical mistakes. … version 5: You (misspelled) (several) (words). (Plan?) (spellcheck) (your) work next time. I also notice a few grammatical mistakes. Overall your writing style is a bit too phoney. You do make some good (points), but they got lost amidst the (writing). (signature) version 6: You (misspelled) (several) (words). Please spellcheck your work next time. I also notice a few grammatical mistakes. Overall your writing style is a bit too phoney . You do make some good (points), but they got lost amidst the (writing). (signature)
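The improve-then-vote loop the study describes can be sketched in a few lines. The "workers" here are stand-in functions invented for the sketch (each improvement resolves one uncertain span, and voting keeps the version with fewer uncertain spans); real Turkers and the study's actual task routing were, of course, far messier.

```python
# Sketch of iterative crowd work: workers alternate between improving
# the current best transcription and voting between candidate versions.

def iterate_crowd(initial, improve, vote, rounds=6):
    best = initial
    for _ in range(rounds):
        candidate = improve(best)     # one worker refines the text
        best = vote(best, candidate)  # other workers pick the better version
    return best

# Toy stand-ins for the human workers:
def improve(text):
    # Resolve one uncertain "(?)" span per pass.
    return text.replace("(?)", "word", 1)

def vote(a, b):
    # Prefer the version with fewer uncertain spans.
    return min(a, b, key=lambda t: t.count("(?)"))

print(iterate_crowd("You (?) (?) (?) work next time.", improve, vote))
# -> "You word word word work next time."
```

The point of the structure, as in the handwriting study, is that no single worker has to produce a perfect result; each round only needs to be a small improvement on the last, with voting guarding against regressions.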

      Using the combined knowledge, abilities, and insights of a sizable population (the public), crowdsourcing is a potent technique that can be used to solve issues, come up with solutions, or finish projects that would be difficult, time-consuming, or unachievable for lone individuals or small groups. This method works especially well for assignments that call for a wide range of opinions, originality, or human contribution.

    1. Reddit is divided into subreddits which are often about a specific topic. Each subreddit is moderated by volunteers who have special permissions, whom Reddit forbids from making any money: Reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods? A post starting a discussion thread on reddit about reddit In addition to the subreddit moderators, all Reddit users can upvote or downvote comments and posts. The Reddit recommendation algorithm promotes posts based on the upvotes and downvotes, and comments that get too many downvotes get automatically hidden. Finally, Reddit itself does some moderation as a platform in determining which subreddits can exist and has on occasion shut down some. Reflection Question:# What is your take on the ethical trade-offs of unpaid Reddit moderators? What do you think Reddit should do? 15.2.2. Wikipedia# Wikipedia is an online encyclopedia that is crowdsourced by volunteer editors. You can go right now and change a Wikipedia page’s content if you want (as long as the page isn’t locked)! You can edit anonymously, or you can create an account. The Wikipedia community gives some editors administrator access, so they can perform more moderation tasks like blocking users or locking pages. Editors and administrators are generally not paid, though they can be paid by other groups if they disclose it and fill out forms. Wikipedia exists in multiple languages (each governed somewhat independently). When looking at the demographics of who writes the English Wikipedia articles, editors of Wikipedia skew heavily male (around 80% or 90%), and presumably administrators skew heavily male as well. This can produce bias in how things are moderated. For example, Donna Strickland had no Wikipedia page before her Nobel.
Her male collaborator did: “Articles on Strickland had been drafted on the online encyclopedia before in May 2018 — but the draft was rejected by moderators. ‘This submission’s references do not show that the subject qualifies for a Wikipedia article,’ the moderators wrote, despite the fact that the original author linked to a page that showed Strickland was once president of the Optical Society, a major physics professional organization and publisher of some of the field’s top journals.” Reflection Question:# How should Wikipedia handle their editor/administrator demographics? 15.2.3. Facebook# While Facebook groups and individual pages can be moderated by users, for the platform as a whole, Facebook has paid moderation teams to make moderation decisions (whether on content flagged by bots, or content flagged by users). As Facebook has grown, it has sought users from all over the globe, but as of 2019: Facebook had menus and prompts in 111 different languages, which were deemed to be “officially supported” Facebook’s “Community standards” rules were only translated into 41 of those languages Facebook’s content moderators know about 50 languages (though they say they hire professional translators when needed) Automated tools for identifying hate speech only work in about 30 languages Reflection Questions:# What dangers are posed with languages that have limited or no content moderation? What do you think Facebook should do about this?

      I think what's interesting is that these platforms serve as examples of how to manage material at scale, ensure equitable and inclusive representation, and leverage community governance in complex ways. The conversations surrounding editor demographics, language support, and unpaid moderators highlight the continuous moral, practical, and technological difficulties that digital platforms face in fostering and preserving thriving online communities.

    1. Before we talk about public criticism and shaming and adults, let’s look at the role of shame in childhood. In at least some views about shame and childhood1, shame and guilt hold different roles in childhood development: Shame is the feeling that “I am bad,” and the natural response to shame is for the individual to hide, or the community to ostracize the person. Guilt is the feeling that “This specific action I did was bad.” The natural response to feeling guilt is for the guilty person to want to repair the harm of their action. In this view, a good parent might see their child doing something bad or dangerous, and tell them to stop. The child may feel shame (they might not be developmentally able to separate their identity from the momentary rejection). The parent may then comfort the child to let the child know that they are not being rejected as a person, it was just their action that was a problem. The child’s relationship with the parent is repaired, and over time the child will learn to feel guilt instead of shame and seek to repair harm instead of hide.

      I think what's important is that supporting children's emotional well-being, social growth, and moral development requires an awareness of, and appropriate responses to, the subtle differences between shame and guilt. This awareness makes it possible for adults to help kids respond constructively to errors and setbacks, establishing the foundation for them to grow into emotionally strong, socially proficient, and morally aware adults.

    1. Moderation Tools# We’ve looked at what type of content is moderated, now let’s look at how it is moderated. Sometimes individuals are given very little control over content moderation or defense from the platform, and then the only advice that is useful is: “don’t read the comments.” But some have argued that this shifts responsibility onto the individual users getting negative comments, when the responsibility should be on the people in charge of creating the platform. So let’s look at the type of content moderation controls that might be given to individuals, and might be used by platforms. 14.2.1. Individual User Controls# Individual users are often given a set of moderation tools they can use themselves, such as: Block an account: a user can block an account from interacting with them or seeing their content Mute an account: a user can allow an account to try interacting with them, but the user will never see what that account did. Mute a phrase or topic: some platforms let users block content by phrases or topics (e.g., they are tired of hearing about cryptocurrencies, or they don’t want spoilers for the latest TV show). Delete: Some social media platforms let users delete content that was directed at them (e.g., replies to their post, posts on their wall, etc.) Report: Most social media sites allow users to report or flag content as needing moderation. And there are other options and nuances as well, depending on the platform.
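The individual user controls listed above can be sketched as a simple filter applied to a feed before it is displayed. This is a hypothetical illustration (all names and posts are invented), not any platform's real implementation; note that blocking and muting differ in what the other account can do, but both result in hiding content from the user.

```python
# Minimal sketch of user-level moderation: blocked and muted accounts,
# plus muted phrases/topics, are filtered out before the feed is shown.

def visible_posts(posts, blocked, muted_accounts, muted_phrases):
    shown = []
    for post in posts:
        if post["author"] in blocked or post["author"] in muted_accounts:
            continue  # hidden: the user blocked or muted this account
        if any(phrase in post["text"].lower() for phrase in muted_phrases):
            continue  # hidden: the post mentions a muted phrase or topic
        shown.append(post)
    return shown

posts = [
    {"author": "friend", "text": "Lovely hike today"},
    {"author": "spammer", "text": "Buy crypto now"},
    {"author": "friend", "text": "No spoilers: crypto crashed again"},
]
feed = visible_posts(posts, blocked={"spammer"}, muted_accounts=set(),
                     muted_phrases={"crypto"})
print([p["text"] for p in feed])  # → ['Lovely hike today']
```

A real platform would also handle the Delete and Report actions, which require server-side state rather than a client-side filter.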

      What's significant about this, in my opinion, is that these moderation safeguards are essential to preserving a civil and safe online community. They act as a baseline against which the various moderation methods function, ensuring that the platforms themselves share some of the accountability for content moderation rather than leaving it all to individual users.

    1. Doomscrolling# Doomscrolling is: “Tendency to continue to surf or scroll through bad news, even though that news is saddening, disheartening, or depressing. Many people are finding themselves reading continuously bad news about COVID-19 without the ability to stop or step back.” Merriam-Webster Dictionary Fig. 13.1 Tweet on doomscrolling the day after insurrectionists stormed the US Capitol (while still in the middle of the COVID pandemic).# The seeking out of bad news, or trying to get news even though it might be bad, has existed as long as people have kept watch to see if a family member will return home safely. But of course, new media can provide more information to sift through and more quickly, such as with the advent of the 24-hour news cycle in the 1990s, or, now, social media. 13.2.2. Trauma Dumping# While there are healthy ways of sharing difficult emotions and experiences (see the next section), when these difficult emotions and experiences are thrown at unsuspecting and unwilling audiences, that is called trauma dumping. Social media can make trauma dumping easier. For example, with parasocial relationships, you might feel like the celebrity is your friend who wants to hear your trauma. And with context collapse, where audiences are combined, how would you share your trauma with an appropriate audience and not an inappropriate one (e.g., if you re-post something and talk about how it reminds you of your trauma, are you dumping it on the original poster?). Trauma dumping can be bad for the mental health of those who have this trauma unexpectedly thrown at them, and it also often isn’t helpful for the person doing the trauma dumping either: Venting, by contrast, is a healthy form of expressing negative emotion, such as anger and frustration, in order to move past it and find solutions. Venting is done with the permission of the listener and is a one-shot deal, not a recurring retelling or rumination of negativity.
A good vent allows the venter to get a new perspective and relieve pent-up stress and emotion. While there are benefits to venting, there are no benefits to trauma dumping. In trauma dumping, the person oversharing doesn’t take responsibility or show self-reflection. Trauma dumping is delivered on the unsuspecting. The purpose is to generate sympathy and attention, not to process negative emotion. The dumper doesn’t want to overcome their trauma; if they did, they would be deprived of the ability to trauma dump. How to Overcome Social Media Trauma Dumping 13.2.3. Munchausen by Internet# Munchausen Syndrome (or Factitious disorder imposed on self) is when someone pretends to have a disease, like cancer, to get sympathy or attention. People with various illnesses often find support online, and even form online communities. It is often easier to fake an illness in an online community than in an in-person community, so many have done so (like the fake @Sciencing_Bi account dying of covid in the authenticity chapter). People who fake these illnesses often do so as a result of their own mental illness, so, in fact, “they are sick, albeit […] in a very different way than claimed.” 13.2.4. Digital Self-Harm# Sometimes people will harm their bodies (called “self-harm”) as a way of expressing or trying to deal with negative emotions or situations. Self-harm doesn’t always have to be physical though, and some people find ways of causing emotional self-harm through the internet. Self-Bullying# One form of digital self-harm is self-bullying, where people set up fake alternate accounts which they then use to post bullying messages at themselves. Negative Communities# Another form of digital self-harm is through joining toxic negative communities built around tearing each other down and reinforcing a hopeless worldview.
(Content warning: sex and self-harm) In 2018, Youtuber ContraPoints (Natalie Wynn) made a video exploring the extremely toxic online Incel community and related it to her own experience with a toxic 4chan community. (Content warning: Sex, violence, self-hatred, and self-harm) Note: The video might not embed right, and if you want to watch it, you might have to click to open it in youtube. Since you might not want to watch a 35-minute video, here are a few key summary points and quotes: “Incel” is short for “involuntarily celibate,” meaning they are men who have centered their identity on wanting to have sex with women, but with no women “giving” them sex. Incels objectify women and sex, claiming they have a right to have women want to have sex with them. Incels believe they are being unfairly denied this sex because of the few sexually attractive men (”Chads”), and because feminism told women they could refuse to have sex. Some incels believe their biology (e.g., skull shape) means no women will “give” them sex. They will be forever alone, without sex, and unhappy. The incel community has produced multiple mass murderers and terrorist attacks. In the video, ContraPoints says that in some forums, incels will post pictures of themselves, knowing and expecting that the community will proceed to criticize everything about their appearance and reinforce hopelessness: The truth about incels is that almost all of them are completely normal looking guys. But of course that’s not the feedback they get from other incels. The feedback they get is that their chins are weak, their hair is thin, their skin is garbage and there’s no hope whatsoever, no woman will ever love them, they are truecels with no option but to lie down and rot. 
ContraPoints then relates this to her experience with a 4chan message board that, unlike other online trans communities, consisted of trans women tearing down each other’s appearances, saying that no one would ever see them as a woman (they would never “pass” as a woman), and saying that no trans woman could ever pass. As a somewhat public trans woman, the community often posted about her: For a while I had some stans on the board who basically viewed me as inspiration […] of course that kind of post is frowned upon. If I’m not looked at as a big-skulled manly freak, if my transition is going well, that means that some of their transitions might go well too, and that is an unacceptable conclusion for a community founded on self-loathing and hopelessness. So it was necessary for the rest of the board to explain why I didn’t pass, why I would never pass, and why anyone who looked less good than me shouldn’t even fucking think about it. They shouldn’t transition at all, they should just repress, they should lie down and rot. ContraPoints says she regularly searched these forums to see what terrible things people said about her: And there would be this thrill of going to TTTT and reading other people saying what my deepest anxieties told me was really true. And that was always painful but there was a kind of pleasure too. There was a rush. It’s exciting to burst out of the politically correct bubble and say what you’re really thinking: that personality doesn’t matter because big-skulled Chads get all the girls, that ContraPoints is a big-skulled hon with a voice like nails on a chalkboard. And at first I justified the habit by telling myself I was just doing research. I have to keep tabs on what the bigots are saying, that’s simply my job. But soon I realized it wasn’t just research, and it was infecting me away from the computer.
She then describes this as a form of digital self-harm, calling it “masochistic epistemology: whatever hurts is true” (note: “masochistic” means seeking pain, and “epistemology” means how you determine what is true). ContraPoints then gives her advice to these incels who have turned inward with self-hatred and digital self-harm: So, incels. I’m not going to respond to your worldview like its an intellectual position worthy of rational debate. Because these ideas and arguments, you’re not using them the way rational people use arguments. You’re using them as razor blades to abuse yourselves. And I know because I’ve done the exact same thing. The incel worldview is catastrophizing. It’s an anxious death spiral. And the solution to that has to be therapeutic, not logical.

      I think the concept of digital self-harm, including self-bullying and involvement in negative online communities, demonstrates a concerning trend of using digital platforms to reinforce self-loathing and despair. These activities reflect a masochistic approach to digital information, in which some people seek out or create negative feedback loops that perpetuate harmful self-perceptions and mental health outcomes. It exposes the dark side of internet interaction, in which anonymity, community dynamics, and a lack of physical presence can all contribute to harmful actions against oneself and others.

    1. 13.1. Social Media Influence on Mental Health# In 2019 the company Facebook (now called Meta) presented an internal study that found that Instagram was bad for the mental health of teenage girls, and yet they still allowed teenage girls to use Instagram. So, what does social media do to the mental health of teenage girls, and to all its other users? The answer is of course complicated and varies. Some have argued that Facebook’s own data is not as conclusive as you think about teens and mental health. Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance: People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat. Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image Comedian and director Bo Burnham has his own observations about how social media is influencing mental health: “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. 
You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.” Director Bo Burnham On Growing Up With Anxiety — And An Audience - NPR Fresh Air (10:15-11:20) It can be difficult to measure the effects of social media on mental health since there are so many types of social media, and it permeates our cultures even of people who don’t use it directly. Some researchers have found that people using social media may enter a dissociation state, where they lose track of time (like what happens when someone is reading a good book). Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people’s mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset. 13.1.1. Digital Detox?# Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox, where people take some form of a break from social media platforms and digital devices. 
While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter. In her essay “The Great Offline,” Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them). In the 19th century, as wilderness tourism was taking off as an industry, natural landscapes were figured as an antidote to the social pressures of urban living, offering truth in place of artifice, interiority in place of exteriority, solitude in place of small talk. Similarly, advocates for digital detox build an idealized “offline” separate from the complications of modern life: Sherry Turkle, author of Alone Together, characterizes the offline world as a physical place, a kind of Edenic paradise. “Not too long ago,” she writes, “people walked with their heads up, looking at the water, the sky, the sand” — now, “they often walk with their heads down, typing.” […] Gone are the happy days when families would gather around a weekly televised program like our ancestors around the campfire! But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed: I’m no stranger to apps that help me curb my screen time, and I’ll admit I’ve often felt better for using them. 
But on a more communal level, I suspect that cultures of digital detox — in suggesting that the online world is inherently corrupting and cannot be improved — discourage us from seeking alternative models for what the internet could look like. I don’t want to be trapped in cycles of connection and disconnection, deleting my social media profiles for weeks at a time, feeling calmer but isolated, re-downloading them, feeling worse but connected again. For as long as we keep dumping our hopes into the conceptual pit of “the offline world,” those hopes will cease to exist as forces that might generate change in the worlds we actually live in together. So in this chapter, we will not consider internet-based social media as inherently toxic or beneficial for mental health. We will be looking for more nuance and where things go well, where they do not, and why.

      I think the discussion concerning social media and mental health is not about choosing between outright rejection and unquestioning acceptance. It is about achieving a balanced relationship with digital technology, in which the advantages are maximized while the hazards are actively addressed through informed use, supportive communities, and responsible platform administration.

    1. In order to understand what we are talking about when we say something goes “viral”, we need to first understand evolution and memes. 12.1.1. Evolution# Biological evolution is how living things change, generation after generation, and how all the different forms of life, from humans to bacteria, came to be. Evolution occurs when three conditions are present: Replication (with Inheritance) An organism can make a new copy of itself, which inherits its characteristics Variations / Mutations The characteristics of an organism are sometimes changed, in a way that can be inherited by future copies Natural Selection Some characteristics make it more or less likely for an organism to compete for resources, survive, and make copies of itself When those three conditions are present, then over time successive generations of organisms will: be more adapted to their environment divide into different groups and diversify stumble upon strategies for competing with or cooperating with other organisms. 12.1.2. Memes# In the 1976 book The Selfish Gene, evolutionary biologist Richard Dawkins1 said rather than looking at the evolution of organisms, it made even more sense to look at the evolution of the genes of those organisms (sections of DNA that perform some functions and are inherited). For example, if a bee protects its nest by stinging an attacking animal and dying, then it can’t reproduce and it might look like a failure of evolution. But if the gene that told the bee to die protecting the nest was shared by the other bees in the nest, then that one bee dying allows the gene to keep being replicated, so the gene is successful evolutionarily. Since genes contained information about how organisms would grow and live, then biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene”). 
A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, with others overhearing it and passing it on in turn. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of: technology (languages, weapons, medicine, writing, math, computers, etc.), religions, philosophies, political ideas (democracy, authoritarianism, etc.), art, organizations, etc.
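The three conditions for evolution described above can be illustrated with a toy simulation. This is a hypothetical sketch with invented numbers: each meme has a "catchiness" score, catchier memes are copied more often (selection), copies keep their parent's score (inheritance), and each copy is slightly perturbed (variation).

```python
import random

# Toy meme evolution: all numbers are invented for illustration.
random.seed(42)

def mutate(catchiness):
    """Variation: each copy inherits its parent's catchiness, slightly perturbed."""
    return max(0.01, catchiness + random.uniform(-0.1, 0.1))

population = [0.5] * 10  # ten memes, equally catchy to start
for generation in range(20):
    # Selection: catchier memes are more likely to be copied
    offspring = random.choices(population, weights=population, k=10)
    # Replication with inheritance, plus mutation
    population = [mutate(c) for c in offspring]

# Selection pressure tends to push average catchiness upward over generations
print(round(sum(population) / len(population), 2))
```

Removing any one of the three ingredients (the copying, the perturbation, or the weighted selection) stops the population from adapting, which mirrors the three conditions the text lists.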

      The content about memes is interesting to me because memes describe how cultural elements spread from one person to another, evolving and adapting along the way. This concept helps explain the rapid spread of particular ideas or trends across societies, providing a framework for analyzing how and why certain cultural phenomena become dominant while others fade away.

    1. 12.3. Evolution in social media# Let’s now turn to social media and look at how evolution happens there. As we said before, evolution occurs when there is: replication (with inheritance), variations or mutations, and natural selection, so let’s look at each of those. 12.3.1. Replication (With Inheritance)# For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a., inheritance). There are ways of duplicating that are built into social media platforms: Actions such as: liking, reposting, replying, and paid promotion get the original posting to show up for users more Actions like quote tweeting, or the TikTok Duet feature let people see the original content, but modified with new context. Social media sites also provide ways of embedding posts in other places, like in news articles There are also ways of replicating social media content that aren’t directly built into the social media platform, such as: copying images or text and reposting them yourself taking screenshots, and cross-posting to different sites 12.3.2. Variations / Mutations# When content is replicated on social media, it may be modified. The social media platform might have built-in ways to do this, like a quote tweet or reply adding some sort of comment to the original post, effectively making a new version of the post that can spread around. Fig. 12.5 Monica Lewinsky posted this quote tweet that answers a question with a side-eye emoji, which her audiences will understand as referring to her affair with then-US-president Bill Clinton.# Through quote tweeting, a modified version of the original tweet (now with Lewinsky’s emoji response) spread as people liked, retweeted, replied, and put it in Buzzfeed lists. Additionally, content can be copied by being screenshotted, or photoshopped.
Text and images can be copied and reposted with modifications (like a poem about plums). And content in one form can be used to make new content in completely new forms, like this “Internet Drama” song whose lyrics are from messages sent back and forth between two people on Facebook Marketplace: 12.3.3. “Natural” Selection# It isn’t clear what should be considered as “nature” in a social media environment (human nature? the nature of the design of the social media platform? are bots unnatural?), so we’ll just instead talk about selection. When content (and modified copies of content) is in a position to be replicated, there are factors that determine whether it gets selected for replication or not. As humans look at the content they see on social media, they decide whether they want to replicate it for some reason, such as: “that’s funny, so I’ll retweet it” “that’s horrible, so I’ll respond with an angry face emoji” “reposting this will make me look smart” “I am inspired to use part of this to make a different thing” Groups and organizations make their own decisions on what social media content to replicate as well (e.g., a news organization might find a social media post newsworthy, so they write articles about it). Additionally, content may be replicated because of: Paid promotion and ads, where someone pays money to have their content replicated Astroturfing: where crowds, often of bots, are paid to replicate social media content (e.g., like, retweet) Finally, social media platforms use algorithms and design layouts which determine what posts people see. There are various rules and designs social media sites can use, and they can amplify natural selection and unnatural selection in various ways. They can do this through recommendation algorithms as we saw last chapter, as well as by choosing what actions are allowed, what amount of friction is given to those actions, and what data is collected and displayed.
Different designs of social media platforms will have different consequences for which content goes viral, just like how different physical environments determine which forms of life thrive and how they adapt and fill ecological niches.
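The effect of platform design on what goes viral can be illustrated with a back-of-the-envelope model. This is a hypothetical sketch with invented numbers, not any platform's real data: the same post spreads under two designs that differ only in how much friction a reshare takes.

```python
# Expected audience growth for a post: each viewer shows the post to 3
# followers, and each follower reshares with a probability that depends
# on the platform's design friction. All numbers are invented.

def expected_audience(reshare_probability, steps=10):
    viewers = 1.0  # start with the original poster
    for _ in range(steps):
        viewers += viewers * 3 * reshare_probability  # expected new viewers
    return viewers

low_friction = expected_audience(0.5)   # one-tap reshare button
high_friction = expected_audience(0.2)  # reshare buried in menus
print(round(low_friction), round(high_friction))  # → 9537 110
```

The same post reaches nearly a hundred times more people in the low-friction design, which is the sense in which platform design, like a physical environment, selects which content thrives.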

      In the world of social media, replication resembles the biological process of reproduction. This replication occurs as a result of different actions taken by users on social networking platforms, such as liking, reposting, commenting, and employing features like paid promotion, quote tweeting, or TikTok Duets. These activities not only propagate the original material, but they can also introduce changes when the content is modified, similar to the concept of genetic inheritance, in which offspring inherit qualities from their parents with minor changes.

    1. How recommendations can go well or poorly# Friends or Follows:# Recommendations for friends or people to follow can go well when the algorithm finds you people you want to connect with. Recommendations can go poorly when they do something like recommend an ex or an abuser because they share many connections with you. Reminders:# Automated reminders can go well in a situation such as when a user enjoys the nostalgia of seeing something from their past. Automated reminders can go poorly when they give users unwanted or painful reminders, such as for miscarriages, funerals, or break-ups. Ads:# Advertisements shown to users can go well for users when the users find products they are genuinely interested in, and for making the social media site free to use (since the site makes its money from ads). Advertisements can go poorly if they become part of discrimination (like only showing housing ads to certain demographics of people), or reveal private information (like revealing to a family that someone is pregnant). Content (posts, photos, articles, etc.)# Content recommendations can go well when users find content they are interested in. Sometimes algorithms do a good job of it and users are appreciative. TikTok has been mentioned in particular as providing surprisingly accurate recommendations, though Professor Arvind Narayanan argues that TikTok’s success with its recommendations relies less on advanced recommendation algorithms, and more on the design of the site making it very easy to skip the bad recommendations and get to the good ones. Content recommendations can go poorly when they send people down problematic chains of content, like by grouping videos of children in a convenient way for pedophiles, or Amazon recommending groups of materials for suicide. 11.3.2. Gaming the recommendation algorithm# Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content.
This is particularly important for people who make their money from social media content. For example, in the case of the simple “show latest posts” algorithm, the best way to get your content seen is to constantly post and repost your content (though if you annoy users too much, it might backfire). Other strategies include things like:

Clickbait: trying to give you a mystery you have to click to find the answer to (e.g., “You won’t believe what happened when this person tried to eat a stapler!”). They do this to boost clicks on their link, which they hope boosts them in the recommendation algorithm, and gets their ads more views.

Trolling: by provoking reactions, they hope to boost their content more.

Coordinated actions: having many accounts (possibly including bots) like a post, or many people use a hashtag, or people trade positive reviews.
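The gaming strategy described above can be sketched in a few lines of Python. This is a toy illustration, not any platform's real code; the post authors and times are invented. With a pure "show latest posts" rule, whoever posts most often crowds everyone else out of the top of the feed:

```python
from datetime import datetime, timedelta

# Hypothetical posts as (author, time posted); all names and times are invented.
now = datetime(2024, 3, 1, 12, 0)
posts = [
    ("alice",   now - timedelta(minutes=30)),
    ("bob",     now - timedelta(minutes=25)),
    ("spammer", now - timedelta(minutes=20)),
    ("spammer", now - timedelta(minutes=10)),
    ("spammer", now - timedelta(minutes=1)),
]

def latest_posts_feed(posts, limit=3):
    """The simple 'show latest posts' rule: newest first, nothing else considered."""
    newest_first = sorted(posts, key=lambda post: post[1], reverse=True)
    return [author for author, time in newest_first[:limit]]

# The account that constantly reposts fills the whole top of the feed:
print(latest_posts_feed(posts))  # ['spammer', 'spammer', 'spammer']
```

Real feed algorithms mix in many more signals, which is exactly why the other strategies (clickbait, trolling, coordinated likes) target those signals instead.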

      These ideas are interesting to me because they present two opposing considerations: fostering social ties versus risking mental suffering. Effective algorithms can help create new, meaningful connections, though they may also unintentionally suggest links that cause mood swings.

    1. Individual analysis focuses on the behavior, bias, and responsibility of an individual, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibilities that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine and powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)

11.2.2. Recommendation Algorithms as Systems

Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.
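The systemic point above can be made concrete with a small sketch: a recommender whose rule looks completely neutral can still produce skewed outcomes if one kind of content reliably earns more engagement. All posts and engagement numbers here are invented for illustration:

```python
# Invented posts with made-up engagement scores:
posts = [
    {"text": "calm news summary", "kind": "neutral", "engagement": 40},
    {"text": "outrage-bait rant", "kind": "outrage", "engagement": 95},
    {"text": "cute animal photo", "kind": "neutral", "engagement": 60},
    {"text": "angry hot take",    "kind": "outrage", "engagement": 90},
]

def recommend(posts, limit=2):
    # The rule itself looks neutral: just show whatever gets the most engagement.
    return sorted(posts, key=lambda post: post["engagement"], reverse=True)[:limit]

# If outrage content reliably gets more engagement, the "neutral" rule
# systematically amplifies it, with no individual intending that outcome:
print([post["kind"] for post in recommend(posts)])  # ['outrage', 'outrage']
```

No one wrote "prefer outrage" anywhere in the rule; the bias emerges from the system as a whole, which is what systemic analysis looks for.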

      The different United States sentencing criteria for crack and powder cocaine in the 1990s are a classic example of systemic bias. Because African Americans were more likely to use crack cocaine, while powder cocaine was more frequently used by white people, this policy resulted in disproportionately punitive sentences for the Black community. Even when individual judges presided over these cases impartially, the system's design produced results that were racially biased.

  3. Jan 2024
    1. We mentioned Design Justice earlier, but it is worth reiterating here that design justice includes considering which groups get to be part of the design process itself. 1 If you can’t see the video, it shows someone with light skin putting their hand under a soap dispenser, and soap comes out. Then a person with dark skin puts their hand under a soap dispenser, and nothing happens. The person with dark skin then puts a white paper towel on their hand and then when they put their hand under the soap dispenser, soap comes out. When the person with dark skin takes off the white paper towel, the soap dispenser won’t work for them anymore.

      I thought it was interesting to see both the image cropping issue on Twitter and the soap dispenser issue; they are a reminder of the importance of diversity, and the consequences of its lack, in technology and design. They are a wake-up call for the IT industry to actively include diverse groups in the design process and to prioritize inclusive design approaches.

    1. A disability is an ability that a person doesn’t have, but that their society expects them to have.1 For example:

If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.

If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.

If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation.

If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs would have a disability in that situation.

Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another.

There are many things we might not be able to do that won’t be considered disabilities because our social groups don’t expect us to be able to do them. For example, none of us have wings that we can fly with, but that is not considered a disability, because our social groups didn’t assume we would be able to. Or, for a more practical example, let’s look at color vision: Most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats. So people who can’t see as many colors are considered to be color blind, a disability.
But there are also a small number of people who are tetrachromats and can see four base colors2 and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people) lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability.

Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome, contact lenses for a visual disability, or a prosthetic for a missing limb covered by clothing). Sometimes people with invisible disabilities get unfairly accused of “faking” or “making up” their disability (e.g., someone who can walk short distances but needs to use a wheelchair when going long distances).

Disabilities can be accepted as socially normal, like is sometimes the case for wearing glasses or contacts, or they can be stigmatized as socially unacceptable, inconvenient, or blamed on the disabled person. Some people (like many with chronic pain) would welcome a cure that got rid of their disability. Others (like many autistic people) are insulted by the suggestion that there is something wrong with them that needs to be “cured,” and think the only reason autism is considered a “disability” at all is because society doesn’t make reasonable accommodations for them the way it does for neurotypical people.

Many of the disabilities we mentioned above were permanent disabilities, that is, disabilities that won’t go away. But disabilities can also be temporary disabilities, like a broken leg in a cast, which may eventually get better. Disabilities can also vary over time (e.g., “Today is a bad day for my back pain”). Disabilities can even be situational disabilities, like the loss of fine motor skills when wearing thick gloves in the cold, or trying to watch a video on your phone in class with the sound off, or trying to type on a computer while holding a baby.

As you look through all these types of disabilities, you might discover ways you have experienced disability in your life. Though please keep in mind that different disabilities can be very different, and everyone’s experience with their own disability can vary. So having some experience with disability does not make someone an expert in any other experience of disability.

As for our experience with disability, Kyle has been diagnosed with generalized anxiety disorder and Susan has been diagnosed with depression. Kyle and Susan also both have:

near sightedness: our eyes cannot focus on things far away (unless we use corrective lenses, like glasses or contacts)

ADHD: we have difficulty controlling our focus, sometimes being hyperfocused and sometimes being highly distracted, and also have difficulties with executive dysfunction.

1 There are many ways to think about disability, such as legal (what legally counts as a disability?), medical (what is a problem to be cured?), identity (who views themselves as “disabled”), etc. We are focused here more on disability as it relates to design and who things in our world are designed for.

2 Trying to name the four base colors seen by tetrachromats is not straightforward since our color names are based on trichromat vision. It seems that for tetrachromats blue would be the same, but they would see three different base colors in the red/green range instead of two.

      This section relates to how technological developments have the potential to redefine disability. For example, the development of cochlear implants and hearing aids has changed the way society views hearing loss. Similarly, Braille displays and screen reading technology have revolutionized the way blind people access information. I think these are all very significant, and increasingly people see disability as a spectrum rather than a binary state (disabled vs. non-disabled). Individuals may have different levels of ability in different areas. For example, a person may have a mild visual impairment that has no effect on most activities but can be a major handicap in specific situations, such as dimly lit areas. I've known people with this problem, so I think the point is important.

    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But while that is the proper security practice for storing passwords, companies don’t always follow it. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users.

From a security perspective there are many risks that a company faces, such as:

Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women

Hackers finding a vulnerability and inserting, modifying, or downloading information. For example: hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax; hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users, or about 7% of all people on Earth

Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target.
Hackers can target individuals with attacks like:

Password reuse attacks, where if they find out your password from one site, they try that password on many other sites

Tricking a computer into thinking they are another site (for example, the US NSA impersonated Google)

Social engineering, where they try to gain access to information or locations by tricking people. For example: phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it (some people have made malicious QR codes to take you to a phishing site), or many of the actions done by the con-man Frank Abagnale, which were portrayed in the movie Catch Me If You Can

One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts.
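The "special individual encryption process" the passage describes is usually implemented as salted, slow password hashing. Here is a minimal sketch in Python using the standard library's PBKDF2 function (the example password and iteration count are illustrative choices, not a production recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a per-user random salt and a slow hash, never the password itself."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt for each user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password, salt, stored_digest):
    # The database can only re-run the slow hash and compare;
    # it cannot reverse the digest back into the password.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("guess1234", salt, digest))                     # False

# Two users with the same password still get different stored digests,
# because each user has a different salt:
salt2, digest2 = hash_password("correct horse battery staple")
print(digest == digest2)  # False
```

The per-user salt is what prevents the database from revealing that two people chose the same password, and the 200,000 iterations are what make each brute-force guess "take a lot of time."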

      These arguments about the protection of personal information on social media sites and in the larger digital ecosystem are both interesting and timely. The text notes that incidents such as the Facebook data breach affected millions of people, demonstrating the massive extent of personal data breaches. With practically everyone utilizing digital services, data security is a common concern, and the complexity of maintaining good digital security in the face of emerging threats poses a significant challenge to organizations.

    1. There are many reasons, both good and bad, that we might want to keep information private:

There might be some things that we just feel aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies)

We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly

We might want a conversation or action that happens in one context not to be shared in another (context collapse)

We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private

We might have done or said something we want to be forgotten or at least made less prominent

We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private

We might want to avoid physical danger from a stalker, so we might keep our location private

We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad)

When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

9.1.1. Privacy Rights

Some governments and laws protect the privacy of individuals (using a Natural Rights ethical framing). These include the European Union’s General Data Protection Regulation (GDPR), which includes a “right to be forgotten,” and the United States Supreme Court has at times inferred a constitutional right to privacy.

      I believe that the importance of privacy in social media and, more broadly, in digital life stems from various factors, including moral, legal, societal, and personal issues. However, I believe that the most crucial point is the safeguarding of personal autonomy. Privacy allows individuals to control their own information, which is an essential component of personal autonomy and dignity. The right to privacy also gives individuals the ability to select what information to share and with whom to share it, without being influenced or coerced. In addition, in the absence of privacy protection, personal information can be utilized for commercial, political, or malevolent purposes. Since nearly everyone in today's society uses social media platforms daily, it is certainly important to protect your personal information.

    1. Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’

      I believe these are also interesting to consider from an economic standpoint: effective targeted advertising allows businesses to reach specific demographics, boosting the efficiency and potential advantages of advertising. There is also a broader economic impact, because it alters advertising techniques across industries and has a significant influence on how businesses deploy their marketing budgets.

    1. For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s:

Race

Political leanings

Interests

Susceptibility to financial scams

Being prone to addiction (e.g., gambling)

Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence. Social media data can also be used to infer information about larger social trends like the spread of misinformation.

One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):

      Social media data is an enormously rich source of information these days, and analyzing it with various methods, such as Artificial Intelligence (AI), can reveal a wide range of personal ideas and social insights. Some of the things highlighted in the article provide insight into how AI systems might infer a user's ethnicity, political opinions, and interests by examining text, photos, and patterns of interaction. These capabilities are, in my opinion, important to understand, since they make it possible to draw meaningful inferences about race and interests from otherwise ordinary data.

    1. We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games.

In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.

One set of the early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West) and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGs). In these MUDs, players developed activities that we now consider trolling, such as “Griefing,” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming,” where a player intentionally starts a hostile or offensive conversation.

In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling, such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005). These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined both their trolling philosophy:

Rule 43. The more beautiful and pure a thing is - the more satisfying it is to corrupt it

and their extreme misogyny:

Rule 30. There are no girls on the internet

Rule 31. TITS or GTFO - the choice is yours [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out]

      I thought this was interesting because trolling has evolved into more damaging forms, such as flaming in online gaming groups. In the 2000s, entire communities evolved around trolling, with a mindset that frequently included contentious and aggressive expectations.

    1. Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously.

      Trolling on the internet entails posting inauthentic content with the purpose of causing disruption or eliciting negative emotions such as anger or sadness. This behavior can emerge in a variety of ways. Concern trolling, for example, seeks to derail conversations by disguising disruption as worry, whereas joke product reviews or absurd fake stories aim to undermine the usefulness or legitimacy of online spaces. The underlying motivation is typically to cause disruption and provoke strong emotional reactions, disturbing the normal flow of discussion and influencing the overall environment of online communities. Trolling can cause anything from slight annoyance to severe emotional anguish, depending on the type and context of the conduct.

    1. The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching. While modified behaviors to present a persona or code switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting. For example:

Speaking in a formal manner when giving a presentation or answering questions in a courtroom may be a way of authentically sharing your experiences and emotions, but tailored to the setting

Sharing those same experiences and emotions with a close friend may look very different, but still can be authentic

Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below:

      In my opinion, public personas and code-switching are intricate yet crucial facets of interpersonal communication. They enable people to successfully negotiate various social environments, showcasing different aspects of their true selves in respectful and situation-appropriate ways. I also really like the example of Obama using different types of greetings with different audiences; it is one way he shows his social intelligence and respect for others.

    1. In 2016, when Donald Trump was running a campaign to be the US President, one twitter user pointed out that you could see which of the Tweets on Donald Trump’s Twitter account were posted from an Android phone and which from an iPhone, and that the tone was very different. A data scientist decided to look into it more and found: “My analysis … concludes that the Android and iPhone tweets are clearly from different people, “posting during different times of day and using hashtags, links, and retweets in distinct ways, “What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. …. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).” (Read more in this article from The Guardian) Note: we can no longer run code to check this ourselves because first, Donald Trump’s account was suspended in January 2021 for inciting violence, then when Elon Musk decided to reinstate Donald Trump’s account (using a Twitter poll as an excuse, but how many of the votes were bots?), Elon Musk also decided to remove the ability to look up a tweet’s source.
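The data scientist's analysis can no longer be reproduced, but the general approach (grouping posts by source and comparing their tone) can be sketched. Everything below is invented for illustration: the tweets are made up, and the tiny word list stands in for real sentiment analysis:

```python
# Hypothetical tweets tagged with their source app; texts are invented.
tweets = [
    {"source": "android", "text": "Sad loser media! Bad!"},
    {"source": "iphone",  "text": "Join us tonight in Ohio!"},
    {"source": "android", "text": "Terrible, dishonest coverage. Worst ever!"},
    {"source": "iphone",  "text": "Thank you Iowa. See photos here."},
]

# A toy stand-in for a real sentiment lexicon:
NEGATIVE_WORDS = {"sad", "bad", "loser", "terrible", "dishonest", "worst"}

def negativity(text):
    """Fraction of words in the text that appear in the negative word list."""
    words = [w.strip("!.,").lower() for w in text.split()]
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

# Group negativity scores by the app the tweet was sent from:
by_source = {}
for t in tweets:
    by_source.setdefault(t["source"], []).append(negativity(t["text"]))

for source, scores in sorted(by_source.items()):
    print(source, round(sum(scores) / len(scores), 2))
```

With this toy data, the Android tweets score far more negative than the iPhone ones, mirroring the pattern the analysis described.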

      I thought it was very interesting how the tweets from an iPhone and an Android could have such a big difference, and how they could tell the campaign's tweets (from an iPhone) apart from Trump's own (from an Android). I never really paid attention to Trump's election tweets, but I found this very interesting.

    1. The user interface of a computer system (like a social media site) is the part that you view and interact with. It’s what you see on your screen and what you press or type or scroll over. Designers of social media sites have to decide how to lay out information for users to navigate and decide how the user performs various actions (like, retweet, post, look up user, etc.). Some information and actions will be made larger and easier to access while others will be smaller or hidden in menus or settings. As we look at these interfaces, there are two key terms we want you to know:

Affordances are what a user interface lets you do. In particular, it’s what a user interface makes feel natural to do. So for example, an interface might have something that looks like it should be pressed, or an interface might open by scrolling a little so it is clear that if you touch it you can make it scroll more (see a more nuanced explanation here)

Friction is anything that gets in the way of a user performing an action. For example, if you have to open and navigate through several menus to find the privacy settings, that is significant friction. Or if one of the buttons has a bug and doesn’t work when you press it, so you have to find another way of performing that action, that is significant friction.

Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press incredibly small and hard to press to make it harder to leave their ad:

Fig. 5.6 An ad on a mobile device, which has an incredibly small, hard to press “x” button. You need to press that button to close the ad. If you miss the “x”, it takes you to more advertising.

Another example of intentionally adding friction was a design change Twitter made in an attempt to reduce misinformation: When you try to retweet an article, if you haven’t clicked on the link to read the article, it stops you to ask if you want to read it first before retweeting.

      I learned that affordances are what a user interface allows you to do. Specifically, it's what a UI makes feel natural to do. For instance, an interface may include a button that looks like it should be pressed, or it may open by scrolling slightly, making it obvious that touching it will cause it to scroll farther. Friction, on the other hand, is anything that prevents a user from carrying out an action. For instance, there would be a lot of friction if you had to open multiple menus and search for the privacy options, or if a bug in one of the buttons prevented it from working when you press it.

    1. The first versions of internet-based social media started becoming popular in the late 1900s. The internet of those days is now called “Web 1.0.” The Web 1.0 internet had some features that make it stand out compared to later internet trends: If you wanted to make a profile to talk about yourself, or to show off your work, you had to create your own personal webpage, which others could visit. These pages had limited interaction, so you were more likely to load one thing at a time and look at a separate page for each post or piece of information. Communication platforms were generally separate from these profiles or personal web pages. Let’s look at some of the history of Web 1.0 Social Media.

      Something I think is interesting is how social media was already popular in the late 1900s, and that the internet of that era had a name, Web 1.0. Also, making your own profile so that others can reach you is the same as what we do nowadays on social media platforms.

    1. The other method of grouping data that we will discuss here is called a “dictionary” (sometimes also called a “map”). You can think of this as like a language dictionary where there is a word and a definition for each word. Then you can look up any name or word and find the value or definition.

Example: An English Language Dictionary with definitions of three terms:

Social Media: An internet-based platform used for people to form connections to each other and share things.

Ethics: Thinking systematically about what makes something morally right or wrong, or using ethical systems to analyze moral concerns in different situations

Automation: Making a process or activity that can run on its own without needing a human to guide it.

The Dictionary data type allows programmers to combine several pieces of data by naming each piece. When we do this, the dictionary will have a number of names, and for each of those names a piece of information (called a “value” in this context).
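The English-dictionary example above maps directly onto a Python dictionary, where each name (the "key") looks up a value. The second dictionary's field values are made up for illustration:

```python
# The English-dictionary example, written as a Python dictionary:
definitions = {
    "Social Media": "An internet-based platform used for people to form "
                    "connections to each other and share things.",
    "Ethics": "Thinking systematically about what makes something morally "
              "right or wrong.",
    "Automation": "Making a process or activity that can run on its own "
                  "without needing a human to guide it.",
}
print(definitions["Ethics"])  # look up a value by its name (the key)

# Dictionaries can also combine several pieces of data about one thing
# by naming each piece (these values are invented):
user = {"username": "example_user", "display_name": "Example User", "followers": 1500}
print(user["display_name"])
```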

      A dictionary's key-value pair arrangement is its most distinguishing feature. Each item in a dictionary consists of a key (like a word in a language dictionary) and a value (the definition). This structure is very efficient for finding, adding, or changing data based on the key.
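The passage's English-dictionary example can be sketched directly as a Python dictionary. This is a minimal illustration of looking up, adding, and changing values by key (the `"Bot"` entry and its wording are my own addition, not from the book):

```python
# The glossary from the passage as a Python dictionary:
# each key (a term) maps to a value (its definition).
glossary = {
    "Social Media": "An internet-based platform used for people to form "
                    "connections to each other and share things.",
    "Ethics": "Thinking systematically about what makes something morally "
              "right or wrong.",
    "Automation": "Making a process or activity that can run on its own "
                  "without needing a human to guide it.",
}

# Look up a value by its key
print(glossary["Ethics"])

# Add a new key-value pair (illustrative entry, not from the book)
glossary["Bot"] = "A computer program that acts through a social media account."

# Change an existing value
glossary["Automation"] = "Running a process without human guidance."

print(len(glossary))  # now holds 4 entries
```

Lookup, insertion, and update all use the same square-bracket syntax, which is what makes working "based on the key" so convenient.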

    1. Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata). For example: If we think of a tweet’s contents (text and photos) as the main data of a tweet, then additional information such as the user, time, and responses would be considered metadata. If we download information about a set of tweets (text, user, time, etc.) to analyze later, we might consider that set of information as the main data, and our metadata might be information about our download process, such as when we collected the tweet information, which search term we used to find it, etc. Now that we’ve looked some at the data in a tweet, let’s look next at how different pieces of this information are saved.

      Metadata is used to classify, organize, label, and understand data, which makes sorting and searching much easier. It is a large part of how people find what they are looking for quickly on the internet, and it is also how companies keep track of their own data.
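The tweet example above can be sketched as two small dictionaries: one for the main data plus per-tweet metadata, and one for metadata about our own download process. The field names here are illustrative, not from any real API:

```python
# Hypothetical record for one downloaded tweet
tweet = {
    "text": "Look at this photo!",          # the main data
    "user": "@example_user",                 # metadata about the tweet
    "time": "2024-03-01T12:00:00Z",
}

# Metadata about our download process, separate from the tweet itself
download_info = {
    "collected_at": "2024-03-02T09:30:00Z",  # when we collected it
    "search_term": "photo",                   # which search found it
}

print(tweet["user"], "posted at", tweet["time"])
print("collected via search:", download_info["search_term"])
```

Note that what counts as "data" versus "metadata" depends on your framing: for analysis, the whole `tweet` dictionary is the data, and `download_info` is the metadata.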

    1. In order to understand how a bot is built and can work, we will now look at the different ways computer programs can be organized. We will cover a bunch of examples quickly here, to hopefully give you an idea of many options for how to write a program. Don’t worry if you don’t follow all of it, as we will go back over these one at a time in more detail throughout the book. In this section, we will not show actual Python computer programs (that will be in the next section). Instead, here we will focus on what programmers call “pseudocode,” which is a human language outline of a program. Pseudocode is intended to be easier to read and write. Pseudocode is often used by programmers to plan how they want their programs to work, and once the programmer is somewhat confident in their pseudocode, they will then try to write it in actual programming language code.

      I find it interesting that the primary goal of pseudocode is to focus on an algorithm's fundamental logic without getting bogged down in the syntax of a specific programming language. It acts as a bridge between the problem-solving and coding phases.
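The pseudocode-to-code workflow the passage describes can be shown with a toy example. The task and all names here are made up for illustration: the pseudocode plan appears as comments, followed by one possible Python translation of it:

```python
# Pseudocode plan (human-language outline, written first):
#
#   for each message in the list of messages:
#       if the message mentions "cat":
#           add a compliment to our list of replies
#   return the list of replies
#
# Once confident in the plan, translate it into actual Python:
def reply_to_cat_posts(messages):
    replies = []
    for message in messages:
        if "cat" in message:
            replies.append("What a wonderful cat!")
    return replies

print(reply_to_cat_posts(["my cat is sleeping", "nice weather today"]))
# → ['What a wonderful cat!']
```

Each line of the pseudocode maps onto one or two lines of Python, which is exactly why planning in pseudocode first makes the coding step easier.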

    1. On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers, or making fake crowds that appear to support a cause (called Astroturfing).

      Harmful bots are everywhere on the internet, and I have encountered them myself on TikTok, YouTube, Instagram, and all sorts of social media. The example the book provides, regarding Star Wars: The Last Jedi director Rian Johnson, exemplifies a particularly frightening facet of this behavior. The fact that many of the hostile tweets directed at Johnson were most likely generated by bots demonstrates how easily online discourse can be manipulated.

    1. There are absolute moral rules and duties to follow (regardless of the consequences). They can be deduced by reasoning about the objective reality. Kantianism: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” Meaning: only follow rules that you are ok with everyone else following. For example, you might conclude that it is wrong to lie no matter what the consequences are. Kant certainly thought so, but many have disagreed with him. Deontological thinking comes out of the same era as Natural Rights thinking, and they are rooted in similar assumptions about the world. Deontology is often associated with Kant, because at that time, he gave us one of the first systematic, or comprehensive, interpretations of those ideas in a fully-fledged ethical framework. But deontological ethics does not need to be based on Kant’s ethics, and many ethicists working in the deontological tradition have suggested that reasoning about the objective reality should lead us to derive different sets of principles. Key figures: Immanuel Kant, 1700’s Germany; Christine Korsgaard, present, USA

      I thought deontology was very interesting, because many deontological views are characterized by moral absolutism, which holds that particular actions are inherently right or wrong regardless of their consequences or other external variables. I learned that the most influential deontologist was Immanuel Kant, an 18th-century German philosopher. Kant felt that moral behavior should be guided by universal moral laws or norms that can be deduced through reason.

    2. Confucianism (another link)# Being and becoming an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful to parents, elders and authorities, taking care of children and the young; generous to family and others). These traits are often performed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), resulting in a harmonious society. Key figures: Confucius, ~500 BCE, China; Mencius, ~350 BCE, China; Xunzi, ~300 BCE, China

      The main thing about Confucianism is taking care of people who are younger and showing respect to your parents, elders, and authorities. But it is also a philosophical approach to accepting ambiguity and comprehending complexity: it can challenge binary thinking and suggests that being confused is a natural and valuable state of mind, one that can lead to more in-depth investigation and knowledge.