32 Matching Annotations
  1. Mar 2026
    1. 7.2.2. Origins of Internet Trolling# We can trace Internet trolling to early social media in the 1980s and 1990s, particularly early online message boards and early online video games. On the early Internet message boards, which were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group, knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from. One set of early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West), and text would tell you where you were next. In these games you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively Multiplayer Online Role-Playing Games (MMORPGs). In these MUDs, players developed activities that we now consider trolling, such as “griefing,” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “flaming,” where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities centered around trolling, such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005). These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined both their trolling philosophy (Rule 43: “The more beautiful and pure a thing is - the more satisfying it is to corrupt it”) and their extreme misogyny (Rule 30: “There are no girls on the internet”; Rule 31: “TITS or GTFO - the choice is yours” [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out]). You can read more at knowyourmeme and wikipedia.

      Internet trolling started in the 1980s and 1990s on early message boards and in early online games.

      On message boards, experienced users would bait new users by posting naive questions. The new users tried to answer earnestly, while the experienced users felt clever for being in on the joke. This practice is where the word "troll" came from.

      In early online games called MUDs, players found ways to torment each other: strong players attacked weak players over and over ("griefing"), and some players started hostile conversations on purpose ("flaming").

      In the 2000s, trolling went from an activity inside some communities to the main focus of entire websites, including 4chan and some Reddit forums.

      These communities wrote their own half-joking rules. One rule said it is satisfying to corrupt beautiful things. Other rules were openly misogynistic: one claimed there are no girls on the internet, and another demanded that women post photos of their bodies or leave.

    1. 7.1.2. Why troll?# If the immediate goal of the action of trolling is to cause disruption or provoke emotional reactions, what is it that makes people want to do this disrupting or provoking of emotional reactions? Some reasons people engage in trolling behavior include: Amusement: Trolls often find the posts amusing, whether due to the disruption or the emotional reaction. If the motivation is amusement at causing others’ pain, that is called doing it for the lulz. Gatekeeping: Some trolling is done in a community to separate out an ingroup from an outgroup (sometimes called newbies or normies). The ingroup knows that a post is just trolling, but the outgroup is not aware and will engage earnestly. This is sometimes known as trolling the newbies. Feeling Smart: Going with the gatekeeping role above, trolling can make a troll or observer feel smarter than others, since they are able to see that it is trolling while others don’t realize it. Feeling Powerful: Trolling sometimes gives trolls a feeling of empowerment when they successfully cause disruption or pain. Advance an argument / make a point: Trolling is sometimes done in order to advance an argument or make a point. For example, proving that supposedly reliable news sources are gullible by getting them to repeat an absurd gross story. Punish or stop: Some trolling is done in service of some view of justice, where a person, group, or organization is viewed as doing something “bad” or “deserving” of punishment, and trolling is a way of fighting back.

      People troll for different reasons. Some do it for amusement, which means they find it funny to cause trouble or get a reaction, especially when they enjoy seeing others upset, and this is sometimes called doing it for the lulz. Others troll to separate insiders from outsiders in a community, so the insiders know it is a joke while outsiders do not, and this is known as trolling the newbies. Some people troll because it makes them feel smart, since they know it is a troll while others do not realize it. Others troll because it gives them a feeling of power, especially when they successfully cause disruption or hurt someone. Finally, some troll to fight back, because they believe a person or group deserves punishment, and they use trolling as a way to respond.

    1. 21.1.1. Social Media# We covered a number of topics in relation to social media: Bots, Data, History of Social Media, Authenticity, Trolling, Data Mining, Privacy and Security, Accessibility, Recommendation Algorithms, Virality, Mental Health, Content Moderation, Content Moderators, Crowdsourcing, Harassment, Public Shaming, Capitalism, and Colonialism. We hope that by the end of this book you know a lot of social media terminology (e.g., context collapse, parasocial relationships, the network effect, etc.), that you have a good overview of how social media works and is used, what design decisions are made in how social media works, and what the consequences of those decisions are. We also hope you are able to recognize how trends on internet-based social media tie into the whole human history of being social, and can apply lessons from that history to our current situations.

      In section 21.1.1 on Social Media, we learned about how social media works and its impact on society. This included topics like data, algorithms, privacy, mental health, and content moderation, as well as issues such as trolling and harassment. Overall, this section helped us understand key concepts and the effects of social media design on users and society.

    2. 21.1.3. Automation# We also covered a number of topics in automation, such as: the History of Programming, the Python Programming Language, JupyterHub and Jupyter Notebooks, Variables, Data Types (e.g., numbers, strings), the Reddit PRAW library (posting, searching, etc.), other code libraries (e.g., time), For Loops, Conditionals (if/else), Lists, Dictionaries, Functions (calling, and writing our own), Sentiment Analysis, and Recursion (for printing tweets and replies). We hope that by the end of this course, you have a familiarity with what programming is and some of what you can do with it. We particularly hope you have a familiarity with basic Python programming concepts, and an ability to interact with Reddit using computer programs.

      In section 21.1.3 on Automation, we learned about the basics of programming and how it can be used to automate tasks. This included key concepts such as variables, data types, loops, conditionals, lists, dictionaries, and functions. We also used Python and tools like Jupyter Notebook, and explored libraries such as PRAW for interacting with Reddit. In addition, we were introduced to more advanced ideas like sentiment analysis and recursion. Overall, this section helped us understand how programming works and how it can be used to build simple automated programs.
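      The summary above mentions recursion for printing tweets and replies. As a minimal illustration of that idea, here is a sketch in plain Python. The nested-dictionary thread structure is invented for this example; a real Reddit program would walk PRAW comment objects instead:

```python
# Hypothetical nested-reply structure, standing in for a real comment tree.
thread = {
    "text": "Original post",
    "replies": [
        {"text": "First reply",
         "replies": [{"text": "Nested reply", "replies": []}]},
        {"text": "Second reply", "replies": []},
    ],
}

def thread_lines(comment, indent=0):
    """Recursively collect a comment and all its replies, indented by depth."""
    lines = [" " * indent + comment["text"]]
    for reply in comment.get("replies", []):
        lines.extend(thread_lines(reply, indent + 2))
    return lines

for line in thread_lines(thread):
    print(line)
```

      Because each reply can itself contain replies, the function calls itself on every reply, which is the same pattern the course used for printing comment trees.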

    1. 18.2.1. Aside on “Cancel Culture”# The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing. The offense that someone is being canceled for can range from sexual assault of minors (e.g., R. Kelly, Woody Allen, Kevin Spacey), to minor offenses or even misinterpretations. The consequences for being “canceled” can range from simply the experience of being criticized, to loss of job or criminal charges. Given the huge range of things “cancel culture” can be referring to, we’ll mostly stick to talking here about “public shaming,” and “public criticism.”

      This section explains that "cancel culture" is a broad term that means different things in different situations. It can refer to people facing consequences for serious crimes like sexual assault, but also for minor mistakes or misunderstandings. The results can range from just getting criticized to losing a job or facing legal trouble. Because the term is so vague, the section chooses to use clearer terms like "public shaming" and "public criticism" instead.

    2. 18.2. Online Criticism and Shaming# While public criticism and shaming have always been a part of human culture, the Internet and social media have created new ways of doing so. We’ve seen examples of this before with Justine Sacco and with crowd harassment (particularly dogpiling). For an example of public shaming, we can look at late-night TV host Jimmy Kimmel’s annual Halloween prank, where he has parents film their children as the parents tell the children that the parents ate all the kids’ Halloween candy. Parents post these videos online, where viewers are intended to laugh at the distress, despair, and sense of betrayal the children express. I will not link to these videos which I find horrible, but instead link you to these articles: Jimmy Kimmel’s Halloween prank can scar children. Why are we laughing? (archived copy) Jimmy Kimmel’s Halloween Candy Prank: Harmful Parenting? We can also consider events in the #MeToo movement as at least in part public shaming of sexual harassers (but also of course solidarity and organizing of victims of sexual harassment, and pushes for larger political, organizational, and social changes).

      This section discusses how the internet has changed public criticism and shaming. It gives two examples: Jimmy Kimmel's Halloween prank, where parents film their upset children for laughs, which some argue is harmful, and the #MeToo movement, which used public shaming to call out sexual harassers but also served as a way for victims to organize and push for change. The section shows that online shaming can be both harmful and useful depending on the situation.

    1. 19.2.3. How Meta Tries to Corner the Market of Social Media# To increase profits, Meta wants to corner the market on social media. This means they want to get the most users possible to use Meta (and only Meta) for social media. Before we discuss their strategy, we need a couple background concepts: Network effect: Something is more useful the more people use it (e.g., telephones, the metric system). For example, when the Google+ social media network started, not many people used it, which meant that if you visited it there wasn’t much content, so people stopped using it, which meant there was even less content, and it was eventually shut down. Network power: When more people start using something, it becomes harder to use alternatives. For example, Twitter’s large user base makes it difficult for people to move to a new social media network, even if they are worried the new owner is going to ruin it, since the people they want to connect with aren’t all on some other platform. This means Twitter can get much worse and people still won’t benefit from leaving it. Let’s look at a scene from the movie The Social Network (about the origins of Facebook), where Sean Parker (who created the music-sharing app Napster) talks to Facebook founders Mark Zuckerberg and Eduardo Saverin about their strategy to grow Facebook:

      This section explains how Meta tries to dominate social media by using two key concepts: network effect and network power. Network effect means a platform becomes more valuable as more people use it, so new networks struggle to compete. Network power means once a platform has a large user base, people find it hard to leave even if they want to, because everyone they connect with is still there. Together, these ideas show how Meta locks users into its platforms and makes it difficult for competitors to succeed.

    2. 19.2.1. Surveillance Capitalism# Meta’s way of making profits fits in a category called Surveillance Capitalism. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary (“behavioral surplus”) and see what more they could predict about users. Companies could then sell this data about users directly, or (more commonly), they could keep their data hidden, but use it to sell targeted advertisements. So, for example, Meta might let an advertiser say they want an ad to only go to people likely to be pregnant. Or they might let advertisers make ads go only to “Jew Haters” (which is ethically very bad, and something Meta allowed).

      This section explains surveillance capitalism, where companies like Meta make money by collecting and using user data. They started by tracking behavior to personalize sites, but then realized this data was valuable and began collecting more than needed. They use this data to sell targeted ads, like showing pregnancy ads to certain users. This becomes a problem when companies allow harmful targeting, like letting advertisers reach groups based on hate, which Meta has done in the past.

    1. 17.3. Who gets harassed?# While anyone is vulnerable to harassment online (and offline as well), some people and groups are much more prone to harassment, particularly marginalized and oppressed people in a society. Historically, of course, different demographic groups have been subject to harassment or violence, such as women, LGBTA+ people, and Black people (e.g., the FBI trying to convince Martin Luther King Jr. to commit suicide). On social media this is true as well. For example, the last section mentioned that the (partially bot-driven) harassment campaign against Meghan Markle and Prince Harry was at least partially driven by Meghan Markle being Black (the same racism shown in the British press). When Amnesty International looked at online harassment, they found that: Women of colour (black, Asian, Latinx and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets. Troll Patrol Findings

      This section explains that while anyone can face online harassment, some groups are targeted much more often. Marginalized groups like women, LGBTA+ people, and Black people face higher rates of abuse. For example, Meghan Markle faced racist harassment online, and studies show that women of color, especially Black women, are much more likely to receive abusive tweets than white women. This shows how harassment often targets people who already face discrimination in society.

    2. 17.3.1. Intersectionality# As we look at the above examples, we can see instances of intersectionality, which means that not only are people treated differently based on their identities (e.g., race, gender, class, disability, weight, height, etc.), but combinations of those identities can compound unfair treatment in complicated ways. For example, you can test a resume filter and find that it isn’t biased against Black people, and it isn’t biased against women. But it might turn out that it is still biased against Black women. This could happen because the filter “fixed” the gender and race bias by over-selecting white women and Black men while under-selecting Black women.

      This section explains intersectionality, the idea that people face unfair treatment not just because of one part of their identity, but because of how different identities combine. For example, a hiring algorithm might seem fair because it treats women and Black people equally overall, but it could still be biased against Black women specifically. This happens when the algorithm over-selects white women and Black men while leaving out Black women. Intersectionality helps us see these kinds of hidden biases.
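      The resume-filter scenario can be made concrete with a few invented numbers. In this sketch (all counts are made up for illustration), the filter's selection rate looks identical across genders and across races, yet Black women are still selected least:

```python
# Invented selection counts: 100 applicants in each of the four groups.
selected = {
    ("white", "man"): 40,
    ("white", "woman"): 60,
    ("Black", "man"): 60,
    ("Black", "woman"): 40,
}
APPLICANTS_PER_GROUP = 100

def rate(predicate):
    """Selection rate across all groups matching the predicate."""
    groups = [g for g in selected if predicate(g)]
    return sum(selected[g] for g in groups) / (APPLICANTS_PER_GROUP * len(groups))

# Checked one identity at a time, the filter looks fair:
print(rate(lambda g: g[1] == "woman"))  # 0.5, same as men
print(rate(lambda g: g[0] == "Black"))  # 0.5, same as white applicants
# But the intersection reveals the bias:
print(rate(lambda g: g == ("Black", "woman")))  # 0.4, vs 0.6 for white women
```

      The marginal rates balance out exactly because white women and Black men are over-selected while Black women are under-selected, which is the hidden pattern the passage describes.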

    1. 16.3.2. Well-Intentioned Harm# Sometimes even well-intentioned efforts can do significant harm. For example, in the immediate aftermath of the 2013 Boston Marathon bombing, the FBI released a security photo of one of the bombers and asked for tips. A group of Reddit users decided to try to identify the bomber(s) themselves. They quickly settled on a missing man (Sunil Tripathi) as the culprit (it turned out he had died by suicide and was in no way related to the case), and flooded the Facebook page set up to search for Sunil Tripathi, causing his family unnecessary pain and difficulty. The person who set up the “Find Boston Bomber” Reddit board said “It Was a Disaster” but “Incredible”, and Reddit apologized for the online Boston ‘witch hunt’.

      This section shows how well-intentioned online efforts can cause real harm. After the Boston Marathon bombing, Reddit users tried to help by identifying the bomber but wrongly accused a missing man named Sunil Tripathi, who was already dead and had nothing to do with the attack. His family faced harassment and pain because of the false accusations. Even though people meant well, their actions became a harmful "witch hunt," and Reddit later apologized for what happened.

    2. 16.3.1. “Solving” a “Problem”# When social media users work together, we can consider what problem they are solving. For example, for some of the TikTok Duet videos from the virality chapter, the “problem” would be something like “how do we create music out of this source video” and the different musicians contribute their own piece to the solution. For some other examples: In the case of a missing hiker rescued after Twitter user tracks him down using his last-sent photo, the “problem” was “Where did the hiker disappear?” and the crowd investigated whatever they could to find the solution of the hiker’s location. In the case of Canucks’ staffer uses social media to find fan who saved his life, the “problem” was “Who is the fan who saved the Canucks’ staffer’s life?” and the solution was basically to try to identify and dox the fan (though hopefully in a positive way). In the case of Twitter tracks down mystery couple in viral proposal photos, the problem was “Who is the couple in the photo?” and the solution was again to basically dox them, though in the article they seemed ok with it.

      This section explains how social media users can work together to solve problems. In each example, users collaborate to find answers, like locating a missing hiker, identifying a fan who saved someone's life, or finding a couple in a viral photo. While the outcomes are positive here, the section also hints that this kind of group effort can sometimes cross into doxxing, even when done with good intentions.

    1. 14.2.2. Platform Controls# Social media platforms themselves have their own options for how they can moderate comments, such as: Delete: Platforms can delete posts or comments. Suspend: Platforms can temporarily lock a user out, either for a set amount of time or until they agree to delete some content and behave differently. Ban: Platforms can permanently ban users, and also try to ban users coming from certain internet connections. Auto-detect: Platforms can also use computer programs to automatically detect potential content violations, and either block them automatically or flag them for follow-up.

      This section lists the main tools platforms use to moderate content. Platforms can delete posts, suspend users for a set time, or permanently ban them. They can also use computer programs to automatically detect and block content that might break the rules. These tools give platforms different ways to respond to harmful content, from quick fixes to long-term bans.
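      To make the auto-detect idea concrete, here is a deliberately tiny sketch. Real platforms use machine-learning classifiers trained on huge datasets, not a hard-coded word list; the flagged terms and thresholds below are invented for illustration:

```python
# Placeholder terms standing in for whatever a real platform detects.
FLAGGED_WORDS = {"spamlink", "badword"}

def moderate(post):
    """Return an action for a post: 'block', 'flag', or 'allow'."""
    hits = set(post.lower().split()) & FLAGGED_WORDS
    if len(hits) >= 2:
        return "block"  # confident match: remove automatically
    elif hits:
        return "flag"   # possible match: queue for human follow-up
    return "allow"

print(moderate("get this spamlink badword deal"))  # block
print(moderate("that spamlink looks suspicious"))  # flag
print(moderate("a perfectly normal comment"))      # allow
```

      The two-tier response mirrors the distinction in the passage: confident detections can be blocked automatically, while borderline ones are flagged for a human moderator.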

    2. 4.2. Moderation Tools# We’ve looked at what type of content is moderated, now let’s look at how it is moderated. Sometimes individuals are given very little control over content moderation or defense from the platform, and then the only advice that is useful is: “don’t read the comments.” But some have argued that this shifts responsibility onto the individual users getting negative comments, when the responsibility should be on the people in charge of creating the platform. So let’s look at the type of content moderation controls that might be given to individuals, and might be used by platforms.

      This section shifts focus from what content gets moderated to how moderation actually works. It points out that users often have little control over protecting themselves from harmful content, leaving them with unhelpful advice like "don't read the comments." Some critics argue this wrongly puts responsibility on victims instead of on the platforms and people who design them. The section then introduces different moderation tools that could be given to users or used by platforms to address this problem.

    1. 13.6. Design Analysis: Mental Health# We want to provide you, the reader, a chance to explore mental health more. We want you to be considering potential benefits and harms to the mental health of different people (benefits like reducing stress, feeling part of a community, finding purpose, etc., and harms like unnecessary anxiety or depression, opportunities and encouragement of self-bullying, etc.). As you do this you might consider personality differences (such as introverts and extroverts), and neurodiversity, the ways people’s brains work and process information differently (e.g., ADHD, Autism, Dyslexia, face blindness, depression, anxiety). But be careful generalizing about different neurotypes (such as Autism), especially if you don’t know them well. Instead try to focus on specific traits (that may or may not be part of a specific group) and the impacts on them (e.g., someone easily distracted by motion might…, or someone sensitive to loud sounds might…, or someone already feeling anxious might…). We will be doing a modified version of the five-step CIDER method (Critique, Imagine, Design, Expand, Repeat). While the CIDER method normally assumes that making a tool accessible to more people is morally good, if that tool is potentially harmful to people (e.g., giving people unnecessary anxiety), then making the tool accessible to more people might be morally bad. So instead of just looking at the assumptions made about people and groups using a social media site, we will also be looking at potential harms to different people and groups using a social media site. So open a social media site on your device. Then do the following (preferably on paper or in a blank computer document):

      This section asks readers to think about how social media affects mental health in both good and bad ways. It suggests considering different types of people, like introverts, extroverts, and people with ADHD or anxiety, instead of making broad generalizations. The goal is to look at specific traits and how social media might help or harm people with those traits, especially when using the CIDER method to evaluate design choices.

    1. 13.2.3. Munchausen by Internet# Munchausen Syndrome (or Factitious disorder imposed on self) is when someone pretends to have a disease, like cancer, to get sympathy or attention. People with various illnesses often find support online, and even form online communities. It is often easier to fake an illness in an online community than in an in-person community, so many have done so (like the fake @Sciencing_Bi account “dying” of COVID in the authenticity chapter). People who fake these illnesses often do so as a result of their own mental illness, so, in fact, “they are sick, albeit […] in a very different way than claimed.”

      This sentence means that people who pretend to be sick online (like faking cancer or COVID) actually have their own problems too. They may have mental illnesses, just in a different way than what they claim. So in a sense, they really are "sick," but not in the physical way they pretend to be.

    1. 12.1.2. Memes# In the 1976 book The Selfish Gene, evolutionary biologist Richard Dawkins said rather than looking at the evolution of organisms, it made even more sense to look at the evolution of the genes of those organisms (sections of DNA that perform some functions and are inherited). For example, if a bee protects its nest by stinging an attacking animal and dying, then it can’t reproduce and it might look like a failure of evolution. But if the gene that told the bee to die protecting the nest was shared by the other bees in the nest, then that one bee dying allows the gene to keep being replicated, so the gene is successful evolutionarily. Since genes contained information about how organisms would grow and live, then biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene”). A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, until others overhear it and pick it up in turn. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of: technology (languages, weapons, medicine, writing, math, computers, etc.), religions, philosophies, political ideas (democracy, authoritarianism, etc.), art, organizations, and so on. We can even consider the evolutionary forces at play in the spread of true and false information (as in the old saying: “A lie is halfway around the world before the truth has got its boots on.”)

      Dawkins' concept of memes highlights the evolutionary nature of cultural transmission, showing how ideas and behaviors can spread and evolve just like genes. In the context of digital culture, memes often evolve rapidly, shifting and adapting across platforms, sometimes distorting their original form, but continuing to replicate due to their appeal or relevance to different audiences.

    1. 11.2.1. Individual vs. Systemic Analysis# Individual analysis focuses on the behavior, bias, and responsibility of an individual, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibilities that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine and powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)

      The comparison between individual and systemic analysis shows how societal structures can perpetuate inequality even without individual malice. By focusing on systemic behaviors, we can better understand how seemingly neutral rules can create disproportionately harmful effects on marginalized communities.

    1. 5.2.5. Texting# Around the same time, phone texting capabilities (SMS) started becoming popular as another way to send messages to your friends, family and acquaintances. Additionally, many news sites and fan pages started adding built-in comment sections on their articles and bulletin boards for community discussion.

      The rise of SMS texting and comment sections shows how online communication gradually became more immediate and participatory. These features allowed users not only to receive information but also to respond and interact with others, which helped lay the foundation for the highly interactive social media environments we see today.

    2. 5.2. Web 1.0 Social Media# The first versions of internet-based social media started becoming popular in the late 1900s. The internet of those days is now called “Web 1.0.” The Web 1.0 internet had some features that make it stand out compared to later internet trends: If you wanted to make a profile to talk about yourself, or to show off your work, you had to create your own personal webpage, which others could visit. These pages had limited interaction, so you were more likely to load one thing at a time and look at a separate page for each post or piece of information. Communication platforms were generally separate from these profiles or personal web pages. Let’s look at some of the history of Web 1.0 Social Media.

      The description of Web 1.0 shows a more decentralized internet where individuals controlled their own webpages. In contrast, modern social media platforms concentrate control over identity, visibility, and interaction, which raises important questions about platform power and governance.

  2. Jan 2026
    1. 7.2. Origins of trolling# While the term “trolling” in the sense we are talking about in this chapter comes out of internet culture, the type of actions that we now call trolling have been happening as far back as we have historical records.

      I thought it was interesting that trolling existed long before the internet. The internet just makes it easier for these behaviors to spread and reach more people than they could in the past.

    1. 7.4. Responding to trolls?# One of the traditional pieces of advice for dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community’s own “Rules of the Internet”: Do not argue with trolls - it means that they win But the essayist Film Crit Hulk argues against this in Don’t feed the trolls, and other hideous lies. That piece argues that the “don’t feed the trolls” strategy doesn’t stop trolls from harassing: Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse”

      This section made me rethink the advice to “not feed the trolls.” Ignoring harassment doesn’t always make it stop, and it can end up putting all the responsibility on the person being targeted. Strong moderation seems more fair because it holds trolls accountable instead of asking victims to deal with it on their own.

    1. 6.3.1. Inauthentic Behaviors# Inauthentic behavior is when the reality doesn’t match what is being presented. Inauthenticity has, of course, existed throughout human history, from Ea-nasir complaining in 1750 BCE that the copper he ordered was not the high quality he had been promised, to 1917 CE in England when Arthur Conan Doyle (the author of the Sherlock Holmes stories) was fooled by photographs that appeared to be of a child next to fairies.

      These historical examples show that inauthentic behavior is not new, but has long been part of human communication. What changes in digital contexts is the scale and speed at which inauthenticity can spread, especially when platforms and automated systems amplify misleading representations.

    2. 6.3. Inauthenticity# In 2016, the Twitter account @Sciencing_Bi was created by an anonymous bisexual Native American Anthropology professor at Arizona State University (ASU). She talked about her experiences of discrimination and about being one of the women who was sexually harassed by a particular Harvard professor. She gained a large Twitter following among academics, including one of the authors of this book, Kyle. Separately, in 2018 during the MeToo movement, one of @Sciencing_Bi’s friends, Dr. BethAnn McLaughlin (a white woman), co-founded the MeTooSTEM non-profit organization, to gather stories of sexual harassment in STEM (Science, Technology, Engineering, Math). Kyle also followed her on Twitter until word later spread of Dr. McLaughlin’s toxic leadership and bullying in the MeTooSTEM organization (Kyle may have unfollowed @Sciencing_Bi at the same time for defending Dr. McLaughlin, but doesn’t remember clearly). Then, in April 2020, in the early days of the COVID pandemic, @Sciencing_Bi complained of being forced to teach in person at ASU when it wasn’t safe, and then began writing about their COVID symptoms.

      This case illustrates how perceptions of authenticity on social media are often built through personal narratives and shared experiences. When accounts gain trust through identity claims and moral authority, questions of inauthenticity can have especially high stakes, affecting not only credibility but also the communities and causes connected to that account.

    1. The 1980s and 1990s also saw an emergence of more instant forms of communication with chat applications. Internet Relay Chat (IRC) lets people create “rooms” for different topics, and people could join those rooms and participate in real-time text conversations with the others in the room.

      IRC illustrates an early shift from one-to-one messaging toward real-time, group-based communication organized around shared topics. The structure of chat rooms anticipates many features of modern online communities, while still relying on relatively minimal automation compared to today’s platform-driven social spaces.

    2. One of the early ways of social communication across the internet was email, which originated in the 1960s and 1970s. Email allowed people to send messages to each other, and to look up whether any new messages had been sent to them.

      Highlighting email as an early form of online social communication helps put modern platforms in historical context. Unlike today’s algorithm-driven feeds, early email systems were largely user-initiated, which shows how automation and visibility have become much more central to social interaction over time.

    1. There are several options for how to save dates and times. Some options include a series of numbers (year, month, day, hour, minute, and second), or a string with all of those pieces of information written out. Sometimes only the date is saved, with no time information, and sometimes the time information will include the timezone. Dates turn out to be one of the trickier data types to work with in practice. One of the main reasons for this is that what time or day it is depends on what time zone you are in. So, for example, when Twitter tells me that a tweet was posted on Feb 10, 2020, does it mean Feb 10 for me? Or for the person who posted it? Those might not be the same. Or if I want to see, for a given account, how much they tweeted “yesterday,” what do I mean by “yesterday?” We might be in different time zones and have different start and end times for what we each call “yesterday.”

      This discussion shows that dates and times are not neutral data, but depend on context such as time zones and perspective. When platforms define concepts like “today” or “yesterday,” they are making design choices that can shape how activity is measured and interpreted, which becomes especially important when analyzing user behavior or automated posting.
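The timezone ambiguity described above can be seen in a few lines of Python. This is a minimal sketch using the standard library's `zoneinfo` module; the timestamp is a made-up example, not real Twitter data:

```python
from datetime import datetime, timezone

# zoneinfo is in the standard library as of Python 3.9
from zoneinfo import ZoneInfo

# A hypothetical post timestamp, stored in UTC (a common platform choice)
posted = datetime(2020, 2, 11, 1, 30, tzinfo=timezone.utc)

# The same instant, as seen by viewers in two different time zones
in_seattle = posted.astimezone(ZoneInfo("America/Los_Angeles"))
in_tokyo = posted.astimezone(ZoneInfo("Asia/Tokyo"))

print(in_seattle.date())  # 2020-02-10: still "Feb 10" in Seattle
print(in_tokyo.date())    # 2020-02-11: already "Feb 11" in Tokyo
```

The same moment in time falls on different calendar days for the two viewers, which is exactly why questions like “how much did this account tweet yesterday?” have no single answer without first choosing a time zone.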

    1. When computers store numbers, there are limits to how much space can be used to save each number. This limits how big (or small) the numbers can be, and causes rounding with floating-point numbers. Additionally, programming languages might include other ways of storing numbers, such as fractions, complex numbers, or limited number sets (like only positive integers).

      This section highlights how numerical limits are built into computer systems rather than being purely abstract. These constraints matter because rounding or restricted number types can affect how automated systems make decisions, especially at scale, where small numerical differences may accumulate into meaningful outcomes.
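The floating-point rounding mentioned above is easy to demonstrate in Python, which also includes one of the alternative number types the passage mentions (exact fractions):

```python
from fractions import Fraction

# Floating-point numbers have limited space, so rounding creeps in:
# 0.1 and 0.2 cannot be stored exactly in binary floating point
print(0.1 + 0.2)         # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)  # False!

# Python's Fraction type stores numbers exactly, avoiding the rounding
print(Fraction(1, 10) + Fraction(2, 10))                      # 3/10
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```

Tiny rounding errors like this are usually harmless, but as the annotation notes, they can accumulate into meaningful differences when automated systems repeat a calculation millions of times.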

    2. Computers typically store text by dividing the text into characters (the individual letters, spaces, numerals, punctuation marks, emojis, and other symbols). These characters are then stored in order and called strings (that is a bunch of characters strung together, like in Fig. 4.6 below).

      This technical explanation is helpful because it shows how human language is reduced to structured data that computers can process. When social media platforms treat text as strings, meaning and emotional context can be lost or oversimplified, which becomes ethically important when automated systems moderate content, detect harassment, or make decisions about visibility and punishment.
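The idea of a string as characters “strung together” can be shown directly in Python; the example text here is made up:

```python
# A string is a sequence of individual characters kept in order:
# letters, punctuation, spaces, and even emojis each count as one character
post = "Hi! 😊"

print(list(post))  # ['H', 'i', '!', ' ', '😊']
print(len(post))   # 5 characters in total
print(post[0])     # 'H' -- characters can be looked up by position
```

Note that the emoji counts as a single character just like a letter does, even though computers need more storage space for it behind the scenes.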

    1. Sometimes in programming, we want to group several steps (i.e., statements) together. When we group these steps together we call it a code “block.” These blocks of code are often used with conditionals (e.g., if this condition is true, do these five steps), and with loops (e.g., for each of these items, do these five steps).

      Explaining code blocks this way helps clarify how automated actions are grouped and repeated in bot behavior. When blocks are combined with conditionals and loops, it becomes clear how a single decision rule can lead to large-scale repeated actions, which is especially relevant when considering how bots can amplify content or behavior across platforms.
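A short sketch can make the combination of blocks, conditionals, and loops concrete. This is a hypothetical bot rule with made-up message data, not code from the book:

```python
# Made-up example data: messages a bot has received
mentions = ["hello bot", "buy my product", "hello again"]
replies = []

for message in mentions:             # loop: repeat the block below for each item
    if "hello" in message:           # conditional: only run the block below if true
        replies.append("Hi there!")  # these indented, grouped steps form a block
        print("replied to:", message)

print(len(replies))  # 2 -- the block ran only for the two "hello" messages
```

In Python, the grouping is shown by indentation: every line indented under the `for` or `if` belongs to that block, so one small rule can trigger the same grouped steps over and over.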

    2. In order to understand how a bot is built and can work, we will now look at the different ways computer programs can be organized. We will cover a bunch of examples quickly here, to hopefully give you an idea of many options for how to write a program. Don’t worry if you don’t follow all of it, as we will go back over these one at a time in more detail throughout the book. In this section, we will not show actual Python computer programs (that will be in the next section). Instead, here we will focus on what programmers call “pseudocode,” which is a human language outline of a program. Pseudocode is intended to be easier to read and write. Pseudocode is often used by programmers to plan how they want their programs to work, and once the programmer is somewhat confident in their pseudocode, they will then try to write it in actual programming language code.

      This explanation of pseudocode is helpful because it lowers the barrier to understanding how bots are structured without requiring prior programming knowledge. Framing pseudocode as a planning and thinking tool emphasizes that building bots is not just a technical process, but also a conceptual one where ethical choices can be made early, before code is even written.
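The pseudocode-first workflow described above might look like this: a plan written in plain language, then translated into code. Both the plan and the example data here are invented for illustration:

```python
# Pseudocode plan (the human-language outline, written as comments):
#   for each message the bot receives:
#       if the message contains a question mark:
#           reply "Good question!"

# The same plan, translated into actual Python code
messages = ["what is a bot?", "trolling is bad", "why?"]  # made-up data
replies = []

for message in messages:
    if "?" in message:
        replies.append("Good question!")

print(replies)  # one reply for each message that contained a "?"
```

Writing the plan first makes the decision rule easy to discuss and revise in plain language, before any programming details get in the way.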

    1. This means that media, which includes painting, movies, books, speech, songs, dance, etc., all communicate in some way, and thus are social. And every social thing humans do is done through various mediums. So, for example, a war is enacted through the mediums of speech (e.g., threats, treaties, battle plans), coordinated movements, clothing (uniforms), and, of course, the mediums of weapons and violence.

      I find this example effective because it expands the idea of media beyond digital platforms and shows that media is embedded in almost all human action. By framing war itself as something enacted through multiple mediums, this passage highlights how communication, symbols, and coordination are inseparable from power and violence, which is especially relevant when thinking about how modern social media can amplify or legitimize conflict.