18 Matching Annotations
  1. Last 7 days
    1. Given the complex relationship between internet-based social media and mental health, let’s first look at some social media activities that people may find harmful to their mental health. Here are a few examples: 13.2.1. Doomscrolling: Doomscrolling is: “Tendency to continue to surf or scroll through bad news, even though that news is saddening, disheartening, or depressing. Many people are finding themselves reading continuously bad news about COVID-19 without the ability to stop or step back.” (Merriam-Webster Dictionary) [Fig. 13.1: Tweet on doomscrolling the day after insurrectionists stormed the US Capitol (while still in the middle of the COVID pandemic).] The seeking out of bad news, or trying to get news even though it might be bad, has existed as long as people have kept watch to see if a family member will return home safely. But of course, new media can provide more information to sift through, and more quickly, such as with the advent of the 24-hour news cycle in the 1990s or, now, social media.

      Doomscrolling isn’t a totally new impulse; people have always sought information during scary times, but social media makes it much harder to stop because the stream is endless and constantly updating. That’s why newer platforms can intensify anxiety even when the underlying desire (staying informed) is understandable.

    1. Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox, where people take some form of a break from social media platforms and digital devices. While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter. In her essay “The Great Offline,” Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them).

      This highlights how “digital detox” can be helpful as a personal tool, but treating the internet as inherently toxic turns into the same kind of nostalgia we’ve seen before. Collee’s point is that idealizing a return to “offline” can ignore who is actually included or excluded in that fantasy, similar to how “wilderness” was romanticized while Indigenous people were erased.

  2. Feb 2026
    1. When someone creates content that goes viral, they didn’t necessarily intend it to go viral, or to go viral in the way that it does. If a user posts a joke, and people share it because they think it is funny, then their intention and the way the content goes viral are at least somewhat aligned. If a user tries to say something serious, but it goes viral for being funny, then their intention and the virality are not aligned. Let’s look at some examples of the relationship between virality and intent. 12.4.1. Building on the original intention: Content is sometimes shared without modification in ways that fit the original intention, but let’s look at cases where there is some sort of modification that aligns with the original intention. We’ll include several examples on this page from the TikTok Duet feature, which allows people to build off the original video by recording a video of themselves to play at the same time next to the original. So, for example, this tweet thread of TikTok videos (cross-posted to Twitter) starts with one TikTok user singing a short parody musical of an argument in a grocery store. The subsequent tweets in the thread build on the prior versions: first someone adds themselves singing the other half of the argument, then someone adds themselves singing the part of their child, then someone adds themselves singing the part of an employee working at the store:

      This section shows that virality isn’t just about how many people see something; it’s also about how others reshape it, sometimes in ways that support the creator’s original purpose. Features like TikTok Duets can amplify a post by letting others add on in a way that stays aligned with the initial intent, but that alignment isn’t guaranteed.

    1. Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibilities that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine and powder cocaine in the 90s. The guidelines suggested harsher sentences for the version of cocaine more commonly used by Black people, and lighter sentences for the version more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act). 11.2.2. Recommendation Algorithms as Systems: Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place, though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

      Individual analysis looks at a person’s intentions and choices, while systemic analysis looks at how rules or algorithms can create biased outcomes even when no single individual intends them. Recommendation systems matter ethically because they can systematically amplify harm or inequality at scale, so fixing problems often requires changing the system.
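      A minimal sketch of this idea, with hypothetical posts and a made-up engagement score: a recommender that simply ranks by past engagement looks neutral, yet it systematically routes visibility toward whatever already gets attention, with no individual intending that outcome.

      ```python
      # A popularity-based recommender (all data hypothetical).
      # The ranking rule looks neutral, but it compounds existing gaps:
      # already-popular content gets recommended, gaining even more engagement.

      posts = [
          {"id": "a", "engagement": 950, "community": "majority"},
          {"id": "b", "engagement": 40, "community": "minority"},
          {"id": "c", "engagement": 870, "community": "majority"},
          {"id": "d", "engagement": 35, "community": "minority"},
      ]

      def recommend(posts, k=2):
          # Rank purely by past engagement; no one "decides" to exclude anyone.
          return sorted(posts, key=lambda p: p["engagement"], reverse=True)[:k]

      for post in recommend(posts):
          print(post["id"], post["community"])
      # Both recommendation slots go to "majority" content, and the visibility
      # gap widens each cycle; fixing it means changing the rule, not one person.
      ```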

    1. Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example: if you send “private” messages on a work system, your boss might be able to read them, and when Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages.
       Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much.
       Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, when former tech CEO John McAfee was a suspect in a murder in Belize, he hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala.
       Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie rating datasets, and at least some users’ data could be traced back to them.
       Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles, which contain information about a user that the user didn’t provide or consent to.
       Non-User Information: Social media sites might collect information about people who don’t have accounts, like Facebook does.

      This list shows how privacy risks often come less from a single bad action and more from how data travels and persists across systems. Even when users think they are acting safely or anonymously, metadata, inference, and platform ownership can quietly undermine consent and control, making privacy feel fragile and conditional rather than guaranteed.
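      The metadata example is easy to reproduce: photo files often carry EXIF data, including GPS coordinates, that travels with the image unless someone strips it. A minimal sketch using the Pillow library (the file name photo.jpg is hypothetical):

      ```python
      # Read GPS metadata embedded in a photo with Pillow.
      # "photo.jpg" is a hypothetical file; many phones and cameras write
      # GPS coordinates into EXIF unless that feature is turned off.
      from PIL import Image, ExifTags

      image = Image.open("photo.jpg")
      exif = image.getexif()
      gps = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo EXIF tag

      for tag_id, value in gps.items():
          # Map numeric tag IDs to readable names like GPSLatitude, GPSLongitude
          print(ExifTags.GPSTAGS.get(tag_id, tag_id), value)
      ```

      This is essentially how the Vice photo gave away McAfee’s location: the image was published without the coordinates being stripped first.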

    1. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored on the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be unhelpful), and we might want to open access to those “private” messages to prove that they were sent.

      This highlights the tension between privacy and accountability on social media. Messages are labeled “private,” but that privacy is conditional and mediated by the platform. While access to private messages can be essential for safety and harm prevention, it also requires strong safeguards so that surveillance and misuse don’t become the default rather than the exception.

    1. Data can be poisoned intentionally as well. For example, in 2021, workers at Kellogg’s were upset at their working conditions, so they agreed to go on strike and not work until Kellogg’s agreed to improve those conditions. Kellogg’s announced that they would hire new workers to replace the striking workers: “Kellogg’s proposed pay and benefits cuts while forcing workers to work severe overtime as long as 16-hour days for seven days a week. Some workers stayed on the job for months without a single day off. The company refuses to meet the union’s proposals for better pay, hours, and benefits, so they went on strike. Earlier this week, the company announced it would permanently replace 1,400 striking workers.” (People Are Spamming Kellogg’s Job Applications in Solidarity with Striking Workers – Vice MotherBoard) People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data so Kellogg’s wouldn’t be able to figure out which applications were legitimate (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions. Then Sean Black, a programmer on TikTok, saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:

      This example shows how data poisoning can be used deliberately as a form of collective action, where misleading data is introduced to disrupt a system’s ability to function as intended. It also raises ethical questions about whether manipulating data is justified when it is used to counteract perceived corporate power and support labor rights, rather than for personal gain.
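      The core mechanic is simple to sketch. Below is a minimal, hypothetical illustration of the poisoning idea: generating records that look plausible one at a time but are random, so real and fake entries become indistinguishable (all fields are made up, and nothing is submitted anywhere):

      ```python
      # Data poisoning, reduced to its core: flood a dataset with records
      # that look plausible individually but carry no real information.
      import random
      import string

      def random_string(length=8):
          return "".join(random.choices(string.ascii_lowercase, k=length))

      def fake_application():
          # All fields are hypothetical; each record looks fine in isolation.
          return {
              "first_name": random_string().capitalize(),
              "last_name": random_string().capitalize(),
              "email": f"{random_string()}@example.com",
              "phone": "".join(random.choices(string.digits, k=10)),
          }

      for _ in range(3):
          print(fake_application())
      # Once enough of these are mixed in, the receiver can no longer
      # trust any individual entry, which is the point of the tactic.
      ```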

    1. One thing to note in the above case of candle reviews and COVID is that just because two things appear to be correlated doesn’t mean they are connected in the way they appear to be. In the above, the correlation might be due mostly to people buying and reviewing candles in the fall, and diseases, like COVID, spreading most during the fall. It turns out that if you look at a lot of data, it is easy to discover spurious correlations, where two things look like they are related but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:

      This highlights the classic problem of spurious correlation, where two trends move together without a direct causal link. For example, ice cream sales and drowning incidents are correlated because both increase in the summer, but eating ice cream does not cause drowning; seasonal factors drive both patterns.
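      It is easy to see how searching a lot of data manufactures these correlations: compare one random series against many other random series, and the best match will usually look impressive by chance alone. A minimal sketch (all data simulated):

      ```python
      # Spurious correlation by search: none of these series are related,
      # yet scanning 1,000 of them yields a "strong" correlation by chance.
      import random
      import statistics  # statistics.correlation requires Python 3.10+

      random.seed(0)
      n = 20  # a short series, like monthly data over about two years

      target = [random.gauss(0, 1) for _ in range(n)]
      best_r = max(
          abs(statistics.correlation(target, [random.gauss(0, 1) for _ in range(n)]))
          for _ in range(1000)
      )

      print(round(best_r, 2))  # typically well above 0.5 despite no real link
      ```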

  3. Jan 2026
    1. One of the traditional pieces of advice for dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community’s own “Rules of the Internet”: “Do not argue with trolls - it means that they win.” But the essayist Film Crit Hulk argues against this in “Don’t feed the trolls, and other hideous lies.” That piece argues that the “don’t feed the trolls” strategy doesn’t stop trolls from harassing: “Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored.” Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse.”

      This challenges the idea that ignoring trolls is an effective solution, showing how silence can sometimes enable harassment to escalate rather than stop. It also highlights how “don’t feed the trolls” shifts responsibility onto victims, while stronger moderation places accountability on abusers and platforms instead.

    1. Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad?

      This highlights how authenticity and trust are closely tied to group identity, since deciding who belongs also means deciding who does not. As groups grow larger and more diverse, shared signals, norms, or identities often replace personal relationships as shortcuts for determining who can be trusted.

    1. 4chan has various image-sharing bulletin boards, where users post anonymously. Perhaps the most infamous board is the “/b/” board for “random” topics. This board emphasizes “free speech” and “no rules” (with exceptions for child pornography and some other illegal content). On these message boards, users attempt to troll each other and post the most shocking content they can come up with. They also have a history of collectively choosing a target website or community and doing a “raid,” where they all try to join and troll and offend the people in that community. Many memes, groups, and forms of internet slang come from 4chan, such as: lolcats, Rickroll, ragefaces, “Anonymous” (the hacker group), Bronies (male My Little Pony fans), and much of trolling culture (which we will talk about more in Chapter 7: Trolling). But one 4chan user found 4chan to be too authoritarian and restrictive and set out to create a new “free-speech-friendly” image-sharing bulletin board, which he called 8chan. 5.5.3. 8chan (now 8kun): 8chan (now called 8kun) is an image-sharing bulletin board site that was started in 2013. It has been host to white-supremacist, neo-Nazi, and other hate content. 8chan has had trouble finding companies to host its servers and internet registration due to the presence of child sexual abuse material (CSAM), and for being the place where various mass shooters spread their hateful manifestos. 8chan is also the source and home of the false conspiracy theory QAnon.

      Anonymous features on social media platforms allow people to post whatever they feel like, regardless of how appropriate the content is. Often these platforms turn into a place for bigots to post heinous content unchecked.

    1. Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts

      This being an example of pre-internet social media is so interesting. It reminds me of community-based social media platforms such as Reddit threads, where messages are posted on one dedicated page. Would current graffiti tagging still be considered social media?

    1. One classic example is the tendency to overlook the interests of children and/or people abroad when we post about travels, especially when fundraising for ‘charity tourism’. One could go abroad, and take a picture of a cute kid running through a field, or a selfie with kids one had traveled to help out. It was easy, in such situations, to decide the likely utility of posting the photo on social media based on the interest it would generate for us, without thinking about the ethics of using photos of minors without their consent. This was called out by The Onion in a parody article, titled “6-Day Visit To Rural African Village Completely Changes Woman’s Facebook Profile Picture”.

      This example highlights how ethical blind spots can arise when people focus on personal benefit and social validation. It shows how easily children and marginalized communities can be overlooked when posting content.

    1. Data points often give the appearance of being concrete and reliable, especially if they are numerical. So when Twitter initially came out with a claim that less than 5% of users are spam bots, it may have been accepted by most people who heard it. Elon Musk then questioned that figure and attempted to back out of buying Twitter; Twitter accused Musk of using the complaint as an invented excuse to back out of the deal, and the case is now in court.

      This example shows how numerical data can appear trustworthy, even when the methods behind it are unclear or undisclosed. It also shows how statistics can be used as tools in power struggles, where the same data can be interpreted differently depending on who benefits from the claim.
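      One reason such figures are slippery is that they usually come from small samples plus a contested definition of “bot.” A minimal sketch (simulated population; the true rate and sample size are assumptions chosen for illustration) of how much room a sample-based estimate leaves:

      ```python
      # Estimating "percent bots" from a sample (everything simulated).
      # With a small sample, the 95% confidence interval is wide enough
      # that very different headline claims can each look defensible.
      import math
      import random

      random.seed(1)
      TRUE_BOT_RATE = 0.11  # hypothetical "real" rate, unknown in practice
      population = [random.random() < TRUE_BOT_RATE for _ in range(1_000_000)]

      sample = random.sample(population, 100)  # sample size chosen for illustration
      p_hat = sum(sample) / len(sample)
      margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))

      print(f"estimate: {p_hat:.0%} +/- {margin:.0%}")
      # An estimate like 9% +/- 6% is consistent with both "under 5%" and
      # "well over 10%", before even arguing about what counts as a bot.
      ```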

    1. Fig. 3.1 A photo that is likely from a click-farm, where a human computer is paid to do actions through multiple accounts, such as liking a post or rating an app. For our purposes here, we consider this a type of automation, but we are not considering this a “bot,” since it is not using (electrical) computer programming.

      This is my first time seeing a click-farm, and I usually associate these accounts with bot accounts because they do similar things. Both are used to boost inauthentic engagement for posts, advertisements, or other content.

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t rightly be punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) from the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      This example shows how separating intention from action can be used to avoid responsibility. This can be applied to bots, since people who deploy them can hide behind the automated actions while denying responsibility for what the bot produces.

    1. We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look.

      Social media is highly manipulated to show what we want people to see, whether that means doctoring photos or posting only sensationalized media. This concept shows that online spaces can create distorted perceptions of reality rather than reflecting authentic life.

    1. “Rational Selfishness”: It is rational to seek your own self-interest above all else. Great feats of engineering happen when brilliant people ruthlessly follow their ambition.

      Egoism is an interesting ethical framework because it prioritizes acting based on self-interest and encourages individuals to seek out opportunities that benefit themselves. While this thinking can help drive innovation and ambition, the mindset is flawed because some of the greatest works in human history are the result of multiple minds working together.