16 Matching Annotations
  1. Last 7 days
    1. 4.1.2. Basic Data Types# First, we’ll look at a few basic data storage types. We’ll also be including some code examples you can look at, though don’t worry yet if you don’t understand the code, since we’ll be covering these in more detail throughout the rest of the book. Booleans (True / False)# Binary, consisting of 0s and 1s, makes it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values. Fig. 4.4 A blue checkmark is something an account either has or doesn’t, so it can be stored as a binary value.# Booleans are often created when doing some sort of comparison or test, like: Do I have enough money in my wallet to pay for the item? Does this tweet start with “hello” (meaning it is a greeting)? Example Python code:

       ```python
       # Save a boolean value in a variable called does_user_have_blue_checkmark
       does_user_have_blue_checkmark = True

       # Save a boolean value in a variable based on a comparison.
       # The code checks if a wallet has more in it than the cost of the item,
       # which will be True or False, and be saved in has_enough_money
       has_enough_money = money_in_wallet > cost_of_item

       # Save a boolean value in a variable based on a function call.
       # The code checks if the text of a tweet (stored in tweet_text) starts
       # with "Hello", which will be True or False, and be saved in is_greeting
       is_greeting = tweet_text.startswith("Hello")
       ```

       Numbers# Numbers are normally stored in two different ways: Integers: whole numbers like 5, 37, -10, and 0. Floating point numbers: these can represent decimals like 0.75, -1.333, and 3 × 10^8. Fig. 4.5 The number of replies, retweets, and likes can be represented as integer numbers (197.8K can be stored as a whole number like 197,800).
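As a small illustration of the integer/float distinction above, here is a runnable Python sketch (the variable names and values are ours, invented for the example):

```python
# Integers store whole numbers exactly
number_of_likes = 197800            # 197.8K stored as a whole number
print(type(number_of_likes))        # <class 'int'>

# Floating point numbers can hold decimals and very large values,
# but only approximately
speed_of_light = 3e8                # 3 x 10^8, stored as a float
print(0.1 + 0.2)                    # 0.30000000000000004 -- a tiny rounding error
```

The rounding error in the last line is one reason the choice between integers and floats matters: floats trade exactness for range.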

      This section helped me clearly see how different data types represent different kinds of information. Booleans are especially interesting because they force complex situations into true/false decisions, which can oversimplify reality. It also made me realize how choices about numbers and strings affect what computers can accurately store and how much meaning might be lost through rounding or categorization.

    1. Dictionaries# The other method of grouping data that we will discuss here is called a “dictionary” (sometimes also called a “map”). You can think of this as like a language dictionary where there is a word and a definition for each word. Then you can look up any name or word and find the value or definition. Example: An English Language Dictionary with definitions of three terms: Social Media: An internet-based platform used for people to form connections to each other and share things. Ethics: Thinking systematically about what makes something morally right or wrong, or using ethical systems to analyze moral concerns in different situations. Automation: Making a process or activity that can run on its own without needing a human to guide it. The Dictionary data type allows programmers to combine several pieces of data by naming each piece. When we do this, the dictionary will have a number of names, and for each of those names a piece of information (called a “value” in this context). Dictionary: Name 1: Value 1; Name 2: Value 2; Name 3: Value 3. So if we look at the example tweet, we can combine all the data in a dictionary. Fig. 4.9 A tweet with photos of a cute puppy! (source)# Dictionary (with some of the data): user_name: “WeRateDogs®”; user_handle: “@dog_rates”; user_has_blue_checkmark: True; tweet_text: “This is Woods. He’s here to help with the dishes. Specifically the pre-rinse, where he licks every item he can. 12/10”; number_of_replies: 1533; number_of_retweets: 26200; number_of_likes: 197800. Example Python code:

       ```python
       # Save some info about a tweet in a variable called tweet_info
       tweet_info = {
           "user_name": "WeRateDogs®",
           "user_handle": "@dog_rates",
           "user_has_blue_checkmark": True,
           "tweet_text": "This is Woods. He’s here to help with the dishes. Specifically the pre-rinse, where he licks every item he can. 12/10",
           "number_of_replies": 1533,
           "number_of_retweets": 26200,
           "number_of_likes": 197800
       }
       ```

       Note: We’ll demonstrate dictionaries later in Chapter 5: History of Social Media, and Chapter 8: Data Mining. Groups within Groups# We can use dictionaries and lists together to make lists of dictionaries, lists of lists, dictionaries of lists, or any other combination. So for example, I could make a list of Twitter users. Each Twitter user could be a dictionary with info about that user, and one piece of information it might have is a list of who that user is following. List of users: User 1: Username: kylethayer (a String); Twitter handle: @kylemthayer (a String); Profile Picture: [TODO picture here] (an image); Follows: @SusanNotess, @UW, @UW_iSchool, @ajlunited, … (a list of Strings). User 2: Username: Dr Susan Notess (a String); Twitter handle: @SusanNotess (a String); Profile Picture: [TODO picture here] (an image); Follows: @kylemthayer, @histoftech, @j_kalla, @dbroockman, @qaxaawut, @shengokai, @laniwhatison (a list of Strings).
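The list-of-users idea above can be sketched directly in Python as a list of dictionaries. Only data from the example is used (the first follow list is truncated in the original, so it is truncated here too):

```python
# A list of users, where each user is a dictionary,
# and the "follows" entry is itself a list of strings
users = [
    {
        "username": "kylethayer",
        "twitter_handle": "@kylemthayer",
        "follows": ["@SusanNotess", "@UW", "@UW_iSchool", "@ajlunited"],  # truncated
    },
    {
        "username": "Dr Susan Notess",
        "twitter_handle": "@SusanNotess",
        "follows": ["@kylemthayer", "@histoftech", "@j_kalla",
                    "@dbroockman", "@qaxaawut", "@shengokai", "@laniwhatison"],
    },
]

# Indexing works layer by layer: first pick a user (a dictionary),
# then a name inside it, then an item of the inner list
print(users[1]["twitter_handle"])    # @SusanNotess
print(users[1]["follows"][0])        # @kylemthayer
```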

      I like the dictionary analogy because it makes clear how data gets structured and labeled. By assigning names to values, dictionaries don’t just store information, they also shape how programmers interpret and access it. This made me realize that how data is organized can influence what questions are easy—or hard—to ask later.

    1. 3.4. Bots and Responsibility# As we think about responsibility in ethical scenarios on social media, the existence of bots causes some complications. 3.4.1. A Protesting Donkey?# To get an idea of the type of complications we run into, let’s look at the use of donkeys in protests in Oman: “public expressions of discontent in the form of occasional student demonstrations, anonymous leaflets, and other rather creative forms of public communication. Only in Oman has the occasional donkey…been used as a mobile billboard to express anti-regime sentiments. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.” From Kings and People: Information and Authority in Oman, Qatar, and the Persian Gulf by Dale F. Eickelman [1] In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) from the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission. 3.4.2. Bots and responsibility# Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected. 3.4.3. Reflection questions# How are people’s expectations different for a bot and a “normal” user?
Choose an example social media bot (find one on your own or look at Examples of Bots (or apps)). What does this bot do that a normal person wouldn’t be able to, or wouldn’t be able to as easily? Who is in charge of creating and running this bot? Does the fact that it is a bot change how you feel about its actions? Why do you think social media platforms allow bots to operate? Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability? [1] We haven’t been able to get the original chapter to load to see if it indeed says that, but we found it quoted here and here. We also don’t know whether this is common or representative of protests in Oman, nor whether we fully understand the cultural importance of what is happening in this story. Still, we are using it at least as a thought experiment.

      I found the donkey protest example helpful for understanding how responsibility can be separated from action. Just like the donkey does not understand the protest it carries, bots can perform actions without intention or awareness. This makes it harder to assign responsibility, since the people who design, deploy, or benefit from a bot may all have different roles and intentions.

    1. 3.1. Definition of a bot# There are several ways computer programs are involved with social media. One of them is a “bot,” a computer program that acts through a social media account. There are other ways of programming with social media that we won’t consider a bot (and we will cover these at various points as well): The social media platform itself is run with computer programs, such as recommendation algorithms (chapter 12). Various groups want to gather data from social media, such as advertisers and scientists. This data is gathered and analyzed with computer programs, which we will not consider bots, but will cover later, such as in Chapter 8: Data Mining. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as accounts run by actual humans who are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm: Fig. 3.1 A photo that is likely from a click farm, where a human computer is paid to do actions through multiple accounts, such as liking a post or rating an app. For our purposes here, we consider this a type of automation, but we are not considering this a “bot,” since it is not using (electrical) computer programming.#

      This section helped clarify that not all automation on social media counts as a bot. I found it especially useful that the definition focuses on whether the account is operated by computer code rather than by humans, even if those humans behave mechanically, like in click farms. This distinction makes it easier to think more precisely about responsibility and accountability when automation affects online spaces.

    1. 9.3. Additional Privacy Violations# Besides hacking, there are other forms of privacy violations, such as: Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example: If you send “private” messages on a work system, your boss might be able to read them. When Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages. Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much. Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, former tech CEO John McAfee, a suspect in a murder in Belize, hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala. Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings datasets, but at least some users’ data could be traced back to them. Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles, which are information about the user that the user didn’t provide or consent to. Non-User Information: Social media sites might collect information about people who don’t have accounts, like Facebook does.
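To make the deanonymization idea concrete, here is a toy Python sketch. All of the data is invented; it only illustrates how quasi-identifiers (here, zip code plus birth year) left in an “anonymized” dataset can be matched against a public dataset:

```python
# An "anonymized" ratings dataset: names removed, but quasi-identifiers
# (zip code + birth year) remain. All values are invented for illustration.
anonymized_ratings = [
    {"zip": "98105", "birth_year": 1990, "movie": "Some Movie", "rating": 5},
    {"zip": "60616", "birth_year": 1975, "movie": "Other Movie", "rating": 2},
]

# A separate, public dataset that includes names alongside the same identifiers
public_records = [
    {"name": "A. Person", "zip": "98105", "birth_year": 1990},
    {"name": "B. Person", "zip": "60616", "birth_year": 1975},
]

# Re-identify people by joining the two datasets on the quasi-identifiers
reidentified = []
for row in anonymized_ratings:
    for person in public_records:
        if (row["zip"], row["birth_year"]) == (person["zip"], person["birth_year"]):
            reidentified.append((person["name"], row["movie"], row["rating"]))

print(reidentified)
# [('A. Person', 'Some Movie', 5), ('B. Person', 'Other Movie', 2)]
```

Real deanonymization attacks (like the Netflix one mentioned above) use the same join-on-what-remains idea, just with subtler identifiers such as patterns of ratings and dates.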

      This section made me realize that privacy violations don’t always involve hacking or illegal access. Even data that seems harmless—like metadata or anonymized datasets—can still expose people in ways they never agreed to. I was especially surprised by how companies can infer new information or create shadow profiles about both users and non-users, which shows how limited individual control over personal data really is.

    1. 9.2. Security# While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore, if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But companies don’t always follow this proper security practice. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly, and then hackers leaked their password database of 153 million users. From a security perspective there are many risks that a company faces, such as: Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women. Hackers finding a vulnerability and inserting, modifying, or downloading information. For example: hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax; hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users, or about 7% of all people on Earth. Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target.
Hackers can target individuals with attacks like: Password reuse attacks, where if they find out your password from one site, they try that password on many other sites. Hackers tricking a computer into thinking they are another site, for example: the US NSA impersonated Google. Social engineering, where they try to gain access to information or locations by tricking people. For example: Phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it (some people have made malicious QR codes to take you to a phishing site), and many of the actions done by the con man Frank Abagnale, which were portrayed in the movie Catch Me If You Can. One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts. By Kyle Thayer and Susan Notess © Copyright 2022.
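The “special individual encryption process for each individual password” described above is usually implemented as salted password hashing. Here is a minimal sketch using only Python’s standard library; the function names are ours, and a production system would use a dedicated library such as bcrypt or argon2 rather than this hand-rolled version:

```python
import hashlib
import secrets

def hash_password(password):
    """Hash a password with a random per-password salt."""
    salt = secrets.token_bytes(16)               # unique salt for this password
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt,
        100_000,                                 # many iterations slow brute force
    )
    return salt, digest

def check_password(password, salt, digest):
    """Re-hash the guess with the stored salt and compare."""
    guess = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return secrets.compare_digest(guess, digest)

# The database stores only (salt, digest), never the password itself
salt, digest = hash_password("hunter2")
print(check_password("hunter2", salt, digest))   # True
print(check_password("wrong", salt, digest))     # False
```

Because each password gets its own salt, two users with the same password end up with different digests, so the database can only confirm a guess, not look passwords up or spot duplicates, which is exactly the property the section describes.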

      This section helped me realize that security failures are often not just technical problems, but also human and organizational ones. Even when proper security practices are well known, companies still choose convenience or cost-saving over protecting users’ data. What stood out to me most was how easily individuals can become targets through things like password reuse or phishing, which makes personal security practices like two-factor authentication feel necessary rather than optional.

    1. 8.2. Data From the Reddit API# We’ve been accessing Reddit through Python and the “PRAW” code library. The praw code library works by sending requests across the internet to Reddit, using what is called an “application programming interface,” or API for short. APIs have a set of rules for what requests you can make, what happens when you make the request, and what information you can get back. If you are interested in learning more about what you can do with praw and what information you can get back, you can look at the official documentation. But be warned: it is not organized in a friendly way for newcomers, and it takes some getting used to before you can figure out what these documentation pages are talking about. So, if you are interested, you can look at the praw library documentation to find out what the library can do (again, not organized in a beginner-friendly way). You can learn a little more by clicking on the praw models and finding a list of the types of data for each of the models, and a list of functions (i.e., actions) you can do with them. You can also look up information on the data that you can get from the Reddit API by looking at the Reddit API Documentation. The Reddit API lets you access just some of the data that Reddit tracks, but Reddit and other social media platforms track much more than they let you have access to.
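To make “what information you can get back” more concrete, here is a sketch of a few of the fields the Reddit API can return for a single submission, written out as a Python dictionary. The values are invented for illustration; the field names follow the Reddit API documentation:

```python
# A sketch of some of the fields the Reddit API returns for one submission.
# The values here are invented; see the Reddit API docs for the full list.
submission = {
    "title": "My dog helping with the dishes",
    "author": "example_user",       # the account that posted it
    "subreddit": "aww",
    "score": 1523,                  # upvotes minus downvotes
    "num_comments": 47,
    "created_utc": 1651500000.0,    # when it was posted, as a Unix timestamp
}

# Code using the API (for instance, through praw's model objects)
# reads these fields by name:
print(submission["title"], "-", submission["score"], "points")
```

Note that this is just the surface of the API’s response; fields like exact view counts or per-user interaction logs are among the things Reddit tracks but does not expose.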

      This section shows how powerful—and dangerous—data mining can be when patterns are taken out of context. The examples make it clear that just because data lines up does not mean it reveals a true cause, especially with spurious correlations. It highlights how easily data can be used to support misleading or biased conclusions, which is especially concerning when these inferences affect real people’s identities and social outcomes.

    1. Media Data# Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like: email address, name, profile picture, interests, and friends. Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might): when users are logged on and logged off; who users interact with; what users click on; what posts users pause over; where users are located; what users send in direct messages to each other. Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website. Additionally, social media might collect information about non-users, such as when a user posts a picture of themselves with a friend who doesn’t have an account, or a user shares their phone contact list with a social media site, some of whom don’t have accounts (Facebook does this). Social media platforms then use “data mining” to search through all this data to try to learn more about their users, find patterns of behavior, and in the end, make more money.

      This section clearly shows how much data social media platforms collect, often beyond what users knowingly provide. I was especially surprised by the idea that platforms can collect data about non-users through photos or contact lists. It makes it clear that participation in social media data systems isn’t always a choice, which raises serious concerns about privacy and consent.

    1. 2.3.5. Compilers and Programming Languages# History# In the early 1950s, Grace Hopper proposed a better way of programming a computer. She suggested creating a “programming language” based on English words with a “compiler” computer program that would turn the computer language code into binary computer instructions. Photo of Grace Hopper c. 1960, at that time a Commander in the US Navy. When Hopper’s ideas were mostly ignored, she proceeded to create her own compiler and later helped design some of the most important and influential early programming languages and compilers. The new set-up for programming# So, thanks to Grace Hopper, we now have a new set-up for computer programming, which is what programmers still use today: When someone wants a computer to perform a task (that hasn’t already been programmed), a human programmer will act as a translator to translate that task into a programming language. Next, a compiler (or interpreter) program will translate the programming language code into the binary code that the computer runs. In this set-up, the programming language acts as an intermediate language the way that French did in my earlier analogy. In this set-up, a programmer’s basic task is to do these three things: Given a problem, break it down into steps for a computer. Write those steps down in a programming language. Run the compiler or interpreter, so the computer program can run on the computer. Programming languages# Programming languages (e.g., Python, R, Java) are specially designed languages that attempt to split the difference between how a computer thinks and communicates and how people think and communicate. There are many programming languages, with different specializations and trade-offs. In this book, we will use Python, which is commonly used in data science tasks, and has support for writing programs that work with Reddit.
Compilers / Interpreters# Compilers are special programs that translate code written in a programming language into the binary 0s and 1s that a computer runs. There are two varieties of compilers: Standard compiler: takes a whole computer program and turns it all into binary so it can be run later. Interpreter: turns the computer language code into binary as it is running the program. Python uses an interpreter, so when you run a Python program, the interpreter translates the Python code into binary while it’s running it. Programming in this book# Throughout the rest of this book, we will take ideas for programs written in English and translate them into Python code, and we will look at Python code and translate it back into English descriptions of what the code does. The Python interpreter will then translate this code into binary instructions, which the computer will then run. Next, let’s look at an example computer program that posts one tweet.
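As a small illustration of this translate-then-run pipeline, Python can compile a snippet of source code handed to it as a string and then run the result. (Strictly speaking, Python compiles to an intermediate “bytecode” that the interpreter executes, rather than directly to machine binary, but the two-step shape is the same):

```python
# A tiny program, written as a string of Python source code
source = "total = sum(range(1, 11))"

# Step 1: the interpreter translates the source into bytecode...
bytecode = compile(source, "<example>", "exec")

# Step 2: ...and then runs that bytecode
namespace = {}
exec(bytecode, namespace)

print(namespace["total"])   # 55, the sum of the numbers 1 through 10
```

This is roughly what happens, invisibly, every time you run a Python program: translation first, execution second.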

      Grace Hopper’s work shows how programming languages and compilers make computers more accessible to humans by acting as a bridge between human language and machine code. By introducing higher-level languages and compilers, she shifted programming from thinking only in binary to thinking in structured steps, which made software development more flexible and powerful. This structure also highlights that programmers play a key role in translating human intent into actions computers can execute.

    1. 1.2. Kumail Nanjiani’s Reflections on Ethics in Tech# Kumail Nanjiani was a star of the Silicon Valley TV show, which was about the tech industry. He posted these reflections on ethics in tech on Twitter (@kumailn) on November 1, 2017: As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we’ll see tech that is scary. I don’t mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues. And we’ll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don’t even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast. That there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online. Only “Can we do this?” Never “should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end. Kumail Nanjiani 1.2.1. Reflection questions:# What do you think is the responsibility of tech workers to think through the ethical implications of what they are making? Why do you think the people who Kumail talked with didn’t have answers to his questions?

      I think tech workers have a responsibility to consider the ethical implications of what they create, because technology can shape behavior, privacy, and power in ways that are difficult to reverse. As Kumail Nanjiani points out, once technology is released, it cannot simply be taken back, so ethical thinking should happen before harm occurs.

      I think the people Kumail spoke with lacked answers because ethical reflection is often not prioritized in tech culture. Many developers focus on whether something can be built rather than whether it should be built, and since these questions are rarely asked, they may not be prepared to address them.

  2. Jan 2026
    1. 7.6.3. Trolling and Nihilism# While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction. We can see this nihilism show up in one of the versions of the self-contradictory “Rules of the Internet:” 8. There are no real rules about posting … 20. Nothing is to be taken seriously … 42. Nothing is Sacred Youtuber Innuendo Studios talks about the way arguments are made in a community like 4chan: You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up. The Alt-Right Playbook: The Card Says Moops by Innuendo Studios While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character). Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling.
If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith1. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      This section helped me think about trolling in a much more nuanced way, especially the idea that disruption itself isn’t automatically good or bad. I found the discussion about group formation and norm enforcement really useful, because it explains why trolling can feel threatening—it challenges the patterns and signals that groups rely on to define who belongs. The comparison between trolling, protest, and revolution also stood out to me, since it shows how moral judgment often depends on whether we see the existing social order as legitimate. Overall, this section made it clear that evaluating trolling ethically requires looking beyond intent or humor and examining what is being disrupted and who is harmed or protected by that disruption.

    1. 7.2. Origins of trolling# While the term “trolling” in the sense we are talking about in this chapter comes out of internet culture, the type of actions that we now call trolling have been happening as far back as we have historical records. 7.2.1. Pre-internet trolling# Before the internet, there were many activities that we would probably now call “trolling,” such as: Hazing: causing difficulty or suffering for people who are new to a group. Satire (e.g., A Modest Proposal), which takes a known form but does something unexpected or disruptive with it. Practical jokes / pranks: the video above is a 1957 April Fool’s Day hoax video broadcast by the BBC claiming to show how spaghetti noodles are harvested from trees. Additionally, the enjoyment of causing others pain or distress (“lulz”) has also been part of the human experience for millennia: “Boys throw stones at frogs in fun, but the frogs do not die in fun, but in earnest.” Bion of Borysthenes (Greece ~300 BCE) Additionally, inauthentic arguments have long been observed, and were memorably explored by Jean-Paul Sartre as “Bad Faith.” “Bad faith” here means pretending to hold views or feelings, while not actually holding them (this may be intentional, or it may be through self-deception). Sartre particularly observed this in arguments made by antisemites while he lived in Nazi-controlled Paris: “Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert.
If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.” Jean-Paul Sartre, 1945 CE, Paris, France 7.2.2. Origins of Internet Trolling# We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from. One set of early Internet-based video games was Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West) and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively Multiplayer Online Role-Playing Games (MMORPGs). In these MUDs, players developed activities that we now consider trolling, such as “griefing,” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “flaming,” where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling, such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005).
These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that both outlined their trolling philosophy: Rule 43. The more beautiful and pure a thing is - the more satisfying it is to corrupt it and their extreme misogyny: Rule 30. There are no girls on the internet Rule 31. TITS or GTFO - the choice is yours [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out] You can read more at: knowyourmeme and wikipedia { requestKernel: true, binderOptions: { repo: "binder-examples/jupyter-stacks-datascience", ref: "master", }, codeMirrorConfig: { theme: "abcdef", mode: "python" }, kernelOptions: { kernelName: "python3", path: "./ch07_trolling" }, predefinedOutput: true } kernelName = 'python3' previous 7.1. What is trolling

      This section helped me realize that trolling isn’t just an internet-specific problem, but a behavior that has existed long before online spaces. I found the connection to satire, hazing, and especially Sartre’s idea of “bad faith” really interesting, because it shows how trolling often isn’t about genuine disagreement but about disrupting or provoking others. Understanding these historical roots makes it clearer why trolling is so persistent online today, and why simply asking trolls to “argue rationally” often doesn’t work.

    1. 6.5. Parasocial Relationships# Another phenomenon related to authenticity which is common on social media is the parasocial relationship. Parasocial relationships are when a viewer or follower of a public figure (that is, a celebrity) feels like they know the public figure, and may even feel a sort of friendship with them, but the public figure doesn’t know the viewer at all. Parasocial relationships are not a new phenomenon, but social media has increased our ability to form both sides of these bonds. As comedian Bo Burnham put it: “This awful D-list celebrity pressure I had experienced onstage has now been democratized.” Learn more about parasocial relationships: StrucciMovies: Fake Friends YouTube Series; Sarah Z: How Fans Treat Creators

      The example of Mr. Rogers shows that parasocial relationships are not automatically unethical or inauthentic. What seems important here is that he tried to clearly define the limits of the relationship, such as calling viewers “television friends” and explaining that visits were not possible. This transparency helped make the parasocial relationship feel more authentic, even though it was not a real two-way friendship.

    1. 6.1. Authenticity# In the early days of YouTube, one channel (lonelygirl15) started releasing vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question whether the events being told in the vlogs were true stories or fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Even after users discovered it was fictional, though, the channel continued to grow in popularity.

      The lonelygirl15 example shows why authenticity matters so much on social media. What upset people was not that the story was fictional, but that the way the connection was presented did not match reality. This makes me think that authenticity is less about whether something is “real” or “fake,” and more about whether audiences clearly understand what kind of relationship or signal they are engaging with.

    1. 5.7. Reflection Activities: Actions on Social Media Designs# 5.7.1. Comparing social media actions# Open two social media sites and choose equivalent views on each (e.g., a list of posts, an individual post, an author page, etc.). List what actions are immediately available. Then explore and see what actions are available after one additional action (e.g., opening a menu), then what actions are two steps away. What do you notice about the similarities and differences in these sites?

       5.7.2. Design a social media site# Now it’s your turn to try designing a social media site. Decide on a type of social media site (e.g., a video site like YouTube or TikTok, or a dating site, etc.) and a particular view of that site (e.g., profile picture, post, comment, etc.). Draw a rough sketch of the view of the site, and then make a list of:

       - What actions would you want available immediately?
       - What actions would you want one or two steps away?
       - What actions would you not allow users to do (e.g., there is no button anywhere that will let you delete someone else’s account)?

      This activity shows how design choices influence user behavior by making some actions more visible than others. By comparing different platforms, it becomes clear that actions like sharing or liking are often prioritized, while actions like reporting or privacy controls are placed further away.

    1. The first versions of internet-based social media started becoming popular in the late 1900s. The internet of those days is now called “Web 1.0.” The Web 1.0 internet had some features that make it stand out compared to later internet trends: If you wanted to make a profile to talk about yourself, or to show off your work, you had to create your own personal webpage, which others could visit. These pages had limited interaction, so you were more likely to load one thing at a time and look at a separate page for each post or piece of information. Communication platforms were generally separate from these profiles or personal web pages.

      Early Web 1.0 social media required much more technical effort from users, such as creating personal webpages. This likely limited participation to people with more technical knowledge and made online communities smaller and less diverse.