34 Matching Annotations
  1. Mar 2026
    1. For example, you can hopefully recognize when someone is intentionally posting something bad or offensive (like the bad cooking videos we mentioned in the Virality chapter, or an intentionally offensive statement) in an attempt to get people to respond and spread their content. Then you can decide how you want to engage (if at all) given how they are trying to spread their content.

      I've noticed that every time something irks me on a platform, I immediately think back to the high-friction/low-friction concept. Perhaps this is a lasting takeaway of this class.

    1. But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney who hoped his invention of the cotton gin would reduce slavery in the United States, but only made it worse, or Alfred Nobel who invented dynamite (which could be used in construction or in war) and decided to create the Nobel prizes, or Albert Einstein regretting his role in convincing the US government to invent nuclear weapons, or Aza Raskin regretting his invention of infinite scroll.

      The takeaway I have is that short-term consequences are a lot different than long-term consequences, and nobody, not even Socrates, can judge them accurately. With time all things are revealed. What was ethical today might not be judged as ethical tomorrow.

    1. The history of women in the workplace always tells the same story: women enter a male-dominated profession, only to find that it’s no longer a respectable field. Because they’re a part of it, so men leave in droves. Because women do it, and therefore it must not be important. Because society would rather discredit an entire profession than acknowledge that a female-dominated field might be doing something that actually matters.

      I think an important distinction to make is that when women enter a male-dominated profession, it's no longer a respectable field to males. Because what is respectable to males has importance to both males and females, whereas what is respectable to females is respectable only to females, the field becomes less respectable overall. Women still have the opportunity to view it as respectable for themselves, regardless of the viewpoints of men.

    1. Network power: When more people start using something, it becomes harder to use alternatives. For example, Twitter’s large user base makes it difficult for people to move to a new social media network, even if they are worried the new owner is going to ruin it, since the people they want to connect with aren’t all on some other platform. This means Twitter can get much worse and people still won’t benefit from leaving it.

      This reminds me of Facebook's recent attempt to create a Twitter alternative, Threads. It pushes users from Instagram toward Threads, trying to tackle this very problem: if people choose a platform by gravitating toward its content, Facebook benefits from making some interesting content viewable only on Threads.

    1. In the article Famous abusers seek easy forgiveness. Rosh Hashanah teaches us repentance is hard. by Rabbi Danya Ruttenberg, she outlines a set of steps for “repentance” needed for someone to have their relationship with others repaired: “The bad actor must own the harm perpetrated, ideally publicly” “They must do the hard internal work to become the kind of person who does not harm in this way — which is a massive undertaking, demanding tremendous introspection and confrontation of unpleasant aspects of the self” “They must make restitution for harm done, in whatever way that might be possible” “Then — and only then — they must apologize sincerely to the victim” “Lastly, the next time they are confronted with the opportunity to commit a similar misdeed, they must make a different, better choice”

      A pity of our generation is that time moves too fast to complete this process. People are shamed, then immediately apologize with no evidence of action. The public either accepts or rejects the apology based on feelings and moves on. A week later, everyone forgets it happened.

    1. The real power of shame is it can scale. It can work against entire countries and can be used by the weak against the strong. Guilt, on the other hand, because it operates entirely within individual psychology, doesn’t scale. […] We still care about individual rights and protection. Transgressions that have a clear impact on broader society – like environmental pollution – and transgressions for which there is no obvious formal route to punishment are, for instance, more amenable to its use. It should be reserved for bad behaviour that affects most or all of us.

      I think that, based on the definition provided, guilt is scalable. Consider Germany's reparations toward the Jewish community: the country feels responsible, though many people today didn't participate in the Holocaust.

    1. Well, individuals can block or mute harassers, but the harassers may be a large group, or they might make new accounts. They might also try to use the legal system, but online harassment is often not taken seriously, and harassers often use tactics that avoid being illegal.

      This relates to the concept that there isn't a definite group of harassers, but rather a collective action resulting in harassment. Each action, taken as one of the parts, doesn't amount to harassment, but taken as a whole it does.

    1. Additionally, we can consider the following forms of crowd harassment: Dogpiling: When a crowd of people targets or harasses the same person. Public Shaming (this will be our next chapter) Cross-platform raids (e.g., 4chan group planning harassment on another platform) Stochastic terrorism The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random. See also: An atmosphere of violence: Stochastic terror in American politics

      I can think of the "harassment" that a lot of popular streamers go through. Through the pay-to-send-a-message feature on many platforms, viewers can chime in on or make fun of the stream. I don't think it's wholly harassment, however, as most streamers get paid and use the feature voluntarily.

  2. Feb 2026
    1. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter:

      This reminds me of the phenomenon where the comments in the comment section are unreasonably biased because a certain group of people, angry people, are far more likely to comment, thereby skewing the perception of a post.

    1. Turkers (the people who do Mechanical Turk tasks) were then given the handwritten note, and after the first few attempts at deciphering it, Turkers were either shown a previous attempt at deciphering the note, or asked to vote on which interpretations were improvements. They were instructed to leave parentheses around sections they weren’t sure about. Here is a selection of subsequent attempts at interpreting the note (from the paper):

      This reminds me of the wisdom of the crowd. Any individual may not know how to do something, but taking the average of everyone's attempts somehow produces a solid answer. My guess is that when a crowd is consulted, any single person's unconscious bias is averaged out. It depends largely on the composition of the crowd, though; a crowd that couldn't read English, for example, would never decipher this note.
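
      The "taking the average" step can be sketched numerically: if each individual's guess is the true value plus an independent error, the crowd average lands near the truth even when individuals are far off. A minimal Python sketch, where the true value and noise level are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

true_value = 100.0   # the quantity the crowd is estimating
noise_scale = 30.0   # each individual's error can be large

# Each person's guess is the truth plus their own independent error.
guesses = [true_value + random.gauss(0, noise_scale) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate, 1))  # close to 100, though individuals often miss by ~30
```

      This only works when the errors are independent, which matches the point about crowd composition: if everyone shares the same blind spot (like not reading English), averaging can't fix it.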

    1. Rawls proposed a famous thought experiment. Imagine we were going to redesign America. A huge lottery was done to gather people from all walks of life into a committee to decide how the society should be structured and how it should function. Naturally, they will all have their own interests in mind, so Rawls proposed that they all be hidden behind a “veil of ignorance”, making it so that while they are on the committee, the people have no idea who they are, or what sort of life they will have once the new design is implemented. (The veil of ignorance is not a real thing, and it is extremely unclear how such an obscuring could be accomplished, although science fiction writers have had fun trying to imagine it.) Rawls’s thought was that if you don’t know whether you will be in one of society’s more powerful roles or more disadvantaged roles, then you will have the motivation to make sure you will be okay, whatever role you get in the end. Therefore, the committee members would design a just and fair society, so that they would be okay no matter where they end up. The design the committee agrees to forms the basis of a new “social contract”, or agreement about how society works.

      I like this idea in theory and somewhat in practice. If it could produce the results it purports to, it would be pretty just. However, I don't think it would last, due to the rot by self-selection that all societies face: in the long run, positions of power are sought more and more by those with ulterior motives.

    1. 14.3.2. Reddit (subreddits with volunteer moderators)# Reddit is composed of many smaller discussion boards, called subreddits. These subreddits range from friendly to very toxic, with different moderators in charge of each subreddit. Reddit as a larger platform decided to ban and remove some of its most toxic and hateful subreddits, including r/c***town (note: I censored out a racial slur for Black people), and r/fatpeoplehate. In a study of what happened after this ban: Post-ban, hate speech by the same users was reduced by as much as 80-90 percent. […] “Members of banned communities left Reddit at significantly higher rates than control groups. […] Migration was common, both to similar subreddits (i.e. overtly racist ones) and tangentially related ones (r/The_Donald). […] However, within those communities, hate speech did not reliably increase, although there were slight bumps as the invaders encountered and tested new rules and moderators. 14.3.3. Facebook (hired moderators)

      I think volunteers are generally worse than hired moderators, given that the most devoted members of a community are the ones inclined to volunteer. This can result in more extremist policies and annoying ego trips that deter growth in some communities. However, this is sporadic.

    1. “Incel” is short for “involuntarily celibate,” meaning they are men who have centered their identity on wanting to have sex with women, but with no women “giving” them sex. Incels objectify women and sex, claiming they have a right to have women want to have sex with them. Incels believe they are being unfairly denied this sex because of the few sexually attractive men (”Chads”), and because feminism told women they could refuse to have sex. Some incels believe their biology (e.g., skull shape) means no women will “give” them sex. They will be forever alone, without sex, and unhappy. The incel community has produced multiple mass murderers and terrorist attacks.

      Often, this is a self-reinforcing cycle. Dwelling on the fact that women aren't interested in you won't make you more appealing. Additionally, with the rise in standards driven by social media, incels won't pursue women similar to themselves. Coupled together, these factors produce nothing good.

    1. Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance:

      As the frequency of social media usage increases, the level of standards rises as well. People who use social media are constantly exposed to some of the most tailored and presentable people, which raises their own perception of what is normal.

    1. Once these algorithms are in place though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

      This oversight is noticeable when someone "bots" their streams or their likes and comments. These are key metadata points that algorithms focus on, so taking advantage of the system can generate a big reward. Social media sites do try to regulate this, but individual content creators have an ethical duty to prevent it.

    1. Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today) Whether the post was made or liked by my friends or people I’m following How much this post has been liked, interacted with, or hovered over Which other posts I’ve been liking, interacting with, or hovering over What people connected to me or similar to me have been liking, interacting with, or hovering over What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely) This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that. Phone numbers or email addresses (sometimes collected deceptively) can be used to suggest friends or contacts.

      These are all instances of metadata. Metadata is extremely useful for pushing engaging content, but there is less control over the quality of that content. I've noticed many reels that are just short clips of movies, or barely-noticeable reaction content, that contribute nothing. Maybe this is due to the greater resource load it takes to filter for quality, or maybe it's harder to decide for someone what types of content they should watch.
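
      The signals listed above can be combined into a single ranking score, and recommendation can be thought of as sorting by that score. A toy Python sketch, where the field names and weights are entirely invented and not any real platform's algorithm:

```python
# Hypothetical engagement signals for a post; the field names and the
# weights below are invented for illustration only.
def rank_score(post: dict) -> float:
    return (
        2.0 * post["friend_interactions"]     # liked by people you follow
        + post["total_likes"] / 100           # overall popularity
        + 0.5 * post["similar_user_likes"]    # likes from users similar to you
        - 0.1 * post["hours_since_posted"]    # newer posts score higher
    )

posts = [
    {"friend_interactions": 3, "total_likes": 50,
     "similar_user_likes": 10, "hours_since_posted": 2},
    {"friend_interactions": 0, "total_likes": 900,
     "similar_user_likes": 1, "hours_since_posted": 30},
]

# The feed is just the posts sorted by descending score.
feed = sorted(posts, key=rank_score, reverse=True)
```

      Note how a post your friends interacted with can outrank a globally popular one, and how quality is not measured anywhere in the score, which is exactly the gap the comment points out.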

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennet’s research talk on this in the following Youtube Video:

      It would be interesting to contrast how effective it is to continually survey disabled people versus hiring a disabled designer. I can see it being beneficial to have a person who intimately knows the issues disabled people face, but at the same time no single designer can represent the many different disabilities, or the experiences that come from intersecting disabilities.

    1. A disability is an ability that a person doesn’t have, but that their society expects them to have.1 For example:

      I think this better encapsulates the meaning of a disability than the traditional connotation does. In this framing, every disability is situational, and the responsibility to mitigate it falls on society rather than on the individual.

    1. Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much

      This reminds me of how teenagers would at times screenshot DMs. A lot of the time it was evidence for an argument, but other times it was malicious. It isn't wholly a bad thing, however; it can also be used to expose creeps.

    1. Phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it

      I remember a story from 2014, where many celebrities' personal photos were leaked because of a simple phishing scam involving an employee. It reminded me that the network is not the only thing that needs protecting; employees need training as well.

    1. Go to your google account (assuming you have one) profile information and go to “Data & Privacy”

      I find this interesting because many people share my sentiment that it is better to get ads tailored to your wants than ads devoid of interesting material. As long as Google uses my data only for ads, I think it's okay. Although, if it were to violate that arrangement, I would have no way of knowing...

    1. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):

      This raises an interesting connection for me, because hedge fund "quants" also use this strategy: finding seemingly useless or irrelevant data to game the market. Somehow, both are effective, even if only for a short period.

  3. Jan 2026
    1. “Griefing” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming” where a player intentionally starts a hostile or offensive conversation.

      I think the anonymity discussed in previous chapters allows people in this case to behave like their true selves. Some people in real life are too scared to bully this way, but online they can do so without repercussion.

    2. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.”

      I think bad faith is harmful to society. It goes directly against Kant's maxim of "always tell the truth." Bad faith is used to disguise poor arguments and lend false legitimacy by creating these in-groups and out-groups.

    1. 6.6.1. Anonymity encouraging inauthentic behavior# Anonymity can encourage inauthentic behavior because, with no way of tracing anything back to you[1], you can get away with pretending you are someone you are not, or behaving in ways that would get your true self in trouble. 6.6.2. Anonymity encouraging authentic behavior# Anonymity can also encourage authentic behavior. If there are aspects of yourself that you don’t feel free to share in your normal life (thus making your normal life inauthentic), then anonymity might help you share them without facing negative consequences from people you know.

      Studies of the internet population have noted that a plurality of those who comment are angry people. These are the same angry people as in real life, and they don't care about anonymity. Those who would stay anonymous still feel the emotions those comments stir up, so they are discouraged from participating. Thus, the online space fills up with both anonymous and not-so-anonymous haters.

  4. social-media-ethics-automation.github.io
    1. We value authenticity because it has a deep connection to the way humans use social connections to manage our vulnerability and to protect ourselves from things that threaten us.

      I think this is best represented in the idea of online trolls. The act of trolling presents an inherent dichotomy between the trolled, who feel betrayed by the dishonesty, and those who side with the troll, who feel a connection with them.

    1. One famous example of reducing friction was the invention of infinite scroll [e31]. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin [e32] invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets [e33] what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I think the progress toward less friction has been good in the sense that less of our time is wasted, but paradoxically more time is wasted too. With one-tap logins like Apple ID and with infinite scroll, becoming addicted to social media is easier, and the more immediate dopamine hit has increased the level of Pavlovian conditioning.

    1. Japanese image-sharing bulletin board called Futaba or 2chan [e19].

      I wonder how this company might seek legal recourse for this action. Does Japanese law have a provision against stealing code? Does the U.S.?

    1. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations)

      It may also be important to note that expected utility is uncertain: if potentially affected persons are surveyed, their responses may not be the truth. A person will never know what a punch feels like until it hits them. Building off the simplification of data, what one person feels may not be what another ends up feeling. Thus, net utility is never one hundred percent certain.
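
      The "utility calculus" described above amounts to a small calculation: weight each outcome's utility by its probability, sum per action, and pick the action with the highest expected value. A minimal Python sketch with invented numbers, which also illustrates the comment's point that the result is only as trustworthy as the guessed probabilities and utilities:

```python
# Each action maps to (probability, utility) outcome pairs.
# All numbers are invented for illustration; in practice they are
# guesses, which is why the calculus is never fully certain.
actions = {
    "publish_warning": [(0.7, 10), (0.3, -5)],  # likely helps, may cause panic
    "stay_silent":     [(0.5, 0),  (0.5, -8)],  # may be fine, may cause harm
}

def expected_utility(outcomes):
    # Sum of utility weighted by probability, i.e. the expected net gain.
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> publish_warning (5.5 expected utility vs -4.0)
```

      Swapping in slightly different probabilities can flip which action wins, which is the uncertainty the comment describes.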

    1. In this example, I decided that each of these would count as “1 apple.”

      It's always important to remember, when looking at data, that it isn't always objective. Data is made up of what its creator chose to include. Data can be missing important distinctions (like small or big apples) or information (an apple just outside the picture frame), or details can be intentionally omitted. It is always useful to look at the parameters of a study.

    1. Or a computer program can repeat an action until a condition is met:

      This reminds me of when YouTubers post videos of followers doing "day x until y" messages. I never considered the possibility that they were fake until now. If you combine a loop with the sleep feature and randomize the timing of the posts, it could look very real. I also wonder if, in the near future, this could be done with AI to create automated videos.
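
      The "repeat until a condition is met" pattern imagined here can be sketched in a few lines of Python. `post_message` is a hypothetical stand-in for a real platform API, and `dry_run=True` skips the day-long sleeps so the sketch runs instantly:

```python
import random
import time

def post_message(text: str) -> None:
    # Hypothetical stand-in for a real social media API call.
    print(text)

def daily_countdown_bot(goal: str, total_days: int, dry_run: bool = True) -> list:
    """Post a 'day N of asking for <goal>' message once per day."""
    messages = []
    for day in range(1, total_days + 1):   # repeat until the condition is met
        text = f"Day {day} of asking for {goal}"
        post_message(text)
        messages.append(text)
        if not dry_run:
            # Sleep roughly one day, with random jitter so the posts don't
            # land at exactly the same time each day and look automated.
            time.sleep(24 * 3600 + random.uniform(-3600, 3600))
    return messages

daily_countdown_bot("a sequel", total_days=3)
```

      The randomized jitter is exactly the trick the comment describes for making an automated account look human.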

    1. ethically justifiable

      I find it problematic in practice for there to be a distinction between ethical and non-ethical uses of antagonistic bots. Everybody has their own worldview and values, and to define some of these values as ethical on social media is to impose them on everyone. Maybe this would be okay if there were a democratic way to decide, but there isn't. These bots are made to "get a rise out of people" or stir emotions; subjecting people to that through automated bots under the guise of ethics is something I disagree with.

    1. a human programmer will act as a translator to translate that task into a programming language.

      It's alien to me how logic gates somehow translate to English words. It must take a lot of ones and zeros to make that happen.
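
      The translation the comment wonders about is layers of encoding: each character is assigned a number (its ASCII/Unicode code point), and each number is stored as a pattern of bits that logic gates can operate on. A short Python illustration:

```python
# Each character has an agreed-on number, and each number is a bit pattern.
word = "hi"
codes = [ord(c) for c in word]             # 'h' -> 104, 'i' -> 105
bits = [format(n, "08b") for n in codes]   # 104 -> '01101000', etc.

print(codes)  # [104, 105]
print(bits)   # ['01101000', '01101001']

# Reversing the encoding recovers the word.
decoded = "".join(chr(int(b, 2)) for b in bits)
print(decoded)  # hi
```

      So "a lot of ones and zeros" is right: even this two-letter word is sixteen bits, and everything above the gates is agreed-on conventions for what those bits mean.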

    1. Being and

      I think the core idea of Confucianism, that those in power have a duty to those they have power over, is really important in today's age. Social media companies are afforded extreme power, and, to be cliche, "with great power comes great responsibility."