30 Matching Annotations
  1. Mar 2026
    1. Jennifer Jacquet argues that shame can be morally good as a tool the weak can use against the strong: The real power of shame is it can scale. It can work against entire countries and can be used by the weak against the strong. Guilt, on the other hand, because it operates entirely within individual psychology, doesn’t scale. […] We still care about individual rights and protection. Transgressions that have a clear impact on broader society – like environmental pollution – and transgressions for which there is no obvious formal route to punishment are, for instance, more amenable to its use. It should be reserved for bad behaviour that affects most or all of us. […] A good rule of thumb is to go after groups, but I don’t exempt individuals, especially not if they are politically powerful or sizeably impact society. But we must ask ourselves about the way those individuals are shamed and whether the punishment is proportional. Jennifer Jacquet: ‘The power of shame is that it can be used by the weak against the strong’

      I would like to add that public shaming can also be related to consequentialism: for someone to be justifiably shamed in public, they must have done something wrong, and the shaming is defended by its effects. But sometimes public shaming is done out of malice, just to make fun of someone, which is most directly tied to the ethics of egoism.

    1. 8.2.1. Aside on “Cancel Culture”# The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing. The offense that someone is being canceled for can range from sexual assault of minors (e.g., R. Kelly, Woody Allen, Kevin Spacey), to minor offenses or even misinterpretations. The consequences for being “canceled” can range from simply the experience of being criticized, to loss of job or criminal charges. Given the huge range of things “cancel culture” can be referring to, we’ll mostly stick to talking here about “public shaming,” and “public criticism.”

      I feel like cancel culture has a lot of gray areas: even when something is just a rumor, not proven to be true, people jump to conclusions very quickly. Because of this, people can simply make things up, and once a claim goes viral it is hard to get the real truth out, leading to that person being canceled anyway.

    1. So how can platforms and individuals stop themselves from being harassed? Well, individuals can block or mute harassers, but the harassers may be a large group, or they might make new accounts. They might also try to use the legal system, but online harassment is often not taken seriously, and harassers often use tactics that avoid being illegal. The platform itself sometimes can be helpful. Reporting harassment might result in the user being banned, or the platform might decide to take out entire problematic sections, such as when Reddit banned its most toxic subreddits, and found it reduced toxic behavior on the site overall. There are also other tools to help individuals that are getting harassment from a crowd. For example, the Twitter app “block-party” supports mass blocking and other advanced features.

      I think the best way to stop harassment from a large number of people is to deactivate your account for a short period. But overall, I think social media harassment should be taken more seriously: harassers knowingly use tactics that stay just short of illegal, yet the harm is still real. Social media companies should therefore adopt stricter policies that limit what harassers can do.

    1. You might remember from Chapter 14 that social contracts, whether literal or metaphorical, involve groups of people

      I would also like to add that the ethical framework behind harassment on social media could also be consequentialism. Perhaps a person said something so bad or outrageous that it made people angry, leading to them being sent death threats and harassed by that group of individuals.

  2. Feb 2026
    1. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter: Fig. 16.3 Summary of Twitter use by Pew Research Center# This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.

      When it comes to lurkers, I think the best approach is simply not to credit them on a project or anything else that requires crediting the people who did the work. It may not be especially effective, but for now I think it is the best way to discourage lurking.

    1. 16.2.1. Crowdsourcing Platforms# Some online platforms are specifically created for crowdsourcing. For example: Wikipedia: Is an online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute. Quora: A crowdsourced question-and-answer site. Stack Overflow: A crowdsourced question-and-answer site specifically for programming questions. Amazon Mechanical Turk: A site where you can pay for crowdsourcing small tasks (e.g., pay a small amount for each task, and then let a crowd of people choose to do the tasks and get paid). Upwork: A site that lets people find and contract work with freelancers (generally larger and more specialized tasks than Amazon Mechanical Turk). Project Sidewalk: Crowdsourcing sidewalk information for mobility needs (e.g., wheelchair users).

      I would like to add that Reddit can also be a platform for crowdsourcing, especially for younger people who are asking for help or recommendations. Additionally, crowdsourcing sometimes attracts people who are there just to cause trouble, which is a gray area in this topic.

    1. When social media companies like Facebook hire moderators, they often hire teams in countries where they can pay workers less. The moderators then are given sets of content to moderate and have to make quick decisions about each item before looking at the next one. They have to get through many posts during their time, and given the nature of the content (e.g., hateful content, child porn, videos of murder, etc.), this can be traumatizing for the moderators: Facebook Is Ignoring Moderators’ Trauma: ‘They Suggest Karaoke and Painting’ In addition to the trauma, by finding places where they can pay workers less and get them to do undesirable work, they are exploiting current inequalities to increase their profits. So, for example, [“Colombia’s Ministry of Labor has launched an investigation into TikTok subcontractor Teleperformance [for content moderators], relating to alleged union-busting, traumatic working conditions and low pay”](https://time.com/6231625/tiktok-teleperformance-colombia-investigation/)

      I feel like this is the biggest problem currently facing many social media companies, and it leads to worse moderation. I think companies should have their own in-house departments of well-trained moderators for better moderation.

    1. 14.3.2. Reddit (subreddits with volunteer moderators)# Reddit is composed of many smaller discussion boards, called subreddits. These subreddits range from friendly to very toxic, with different moderators in charge of each subreddit. Reddit as a larger platform decided to ban and remove some of its most toxic and hateful subreddits, including r/c***town (note: I censored out a racial slur for Black people), and r/fatpeoplehate. In a study of what happened after this ban: Post-ban, hate speech by the same users was reduced by as much as 80-90 percent. […] “Members of banned communities left Reddit at significantly higher rates than control groups. […] Migration was common, both to similar subreddits (i.e. overtly racist ones) and tangentially related ones (r/The_Donald). […] However, within those communities, hate speech did not reliably increase, although there were slight bumps as the invaders encountered and tested new rules and moderators.

      I think Reddit moderation has a lot of gray areas: even though each subreddit has people in charge, there are still posts and subreddits that shouldn't exist at all, ranging from really disturbing to horrific.

    1. For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.”

      I would like to add that Facebook is not the only platform taking noticeable action; TikTok does as well, and seems more transparent about it. TikTok detects harmful words typed into the search bar, and at the top of the results, instead of showing what you searched for, it provides links that let people seek help.

    1. Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance: People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat.

      With people editing themselves to look better, social media creates an environment built on lies, which can discourage other users when they can't tell whether a post is edited or not. This applies not just to appearance but also to lifestyle: some people stage their posts by buying expensive things or going on expensive trips.

    1. 12.4.4. Intentionally bad or offensive content# Users can also create intentionally bad or offensive content in an attempt to make it go viral (which is a form of trolling). So when criticism of this content goes viral, that is in fact aligned with the original purpose. For example, this cooking video contains an unusual recipe (SpaghettiOs as a pie filling) and unusual cooking methods (like using forearms to spread butter).

      There is a saying that when you post something offensive or embarrassing on the internet, the internet will make sure you never forget it. This is especially true for offensive content, as people online will go so far as to try to dig up your private information.

    1. Content (posts, photos, articles, etc.)# Content recommendations can go well when users find content they are interested in. Sometimes algorithms do a good job of it and users are appreciative. TikTok has been mentioned in particular as providing surprisingly accurate recommendations, though Professor Arvind Narayanan argues that TikTok’s success with its recommendations relies less on advanced recommendation algorithms, and more on the design of the site making it very easy to skip the bad recommendations and get to the good ones. Content recommendations can go poorly when they send people down problematic chains of content, like by grouping videos of children in a convenient way for pedophiles, or Amazon recommending groups of materials for suicide.

      I would like to add that recommended content often stems from the content you liked or shared, and while that may seem like a good thing, it also has gray areas. For example, an accidental like or share can throw off your whole algorithm, completely changing the type of content the platform suggests.

    1. Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it2. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of t

      I feel like universal design will be the most effective compared to ability-based design and assistive technology, because it takes into account everyone who will use a design and tries to create something that benefits them all.

    1. When designers and programmers don’t think to take into account different groups of people, then they might make designs that don’t work for everyone. This problem often shows up in how designs do or do not work for people with disabilities. But it also shows up in other areas as well. The following tweet has a video of a soap dispenser that apparently was only designed to work for people with light-colored skin.

      I agree with the statement above: it is common for programmers and designers to consider well-known disabilities and build accessibility features for them. But some disabilities remain unknown to them or get ignored, which leaves people with those specific disabilities unable to use the website.

    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      This has been a massive problem since the creation and widespread use of social media, and there has never been a definitive resolution, as hackers will always find ways to access people's personal information. Additionally, as mentioned, employees can misuse their access and may even sell personal information, especially when it comes to celebrities.

    1. Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example, if you send “private” messages on a work system, your boss might be able to read them; when Elon Musk purchased Twitter, he was also purchasing access to all Twitter Direct Messages. Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much. Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, former tech CEO John McAfee was a suspect in a murder in Belize and hid out in secret; but when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala. Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie rating datasets, but at least some users’ data could be traced back to them. Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles, which are information about the user that the user didn’t provide or consent to. Non-User Information: Social media sites might collect information about people who don’t have accounts, like how Facebook does.

      I would like to say that I suspect our information was never really secure or private, especially from the companies running the social media platforms. The reason is that I feel our activities are always under surveillance, justified as preventing anything bad from happening.
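      The McAfee incident in the quote above hinged on photo metadata: cameras store GPS coordinates in EXIF data as degrees, minutes, and seconds, which readers convert to the decimal degrees used by mapping software. A minimal sketch of that conversion (the coordinate values below are made up for illustration, not McAfee's actual location):

```python
# Convert EXIF-style GPS coordinates (degrees, minutes, seconds)
# into decimal degrees. Values below are hypothetical examples.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """ref is 'N'/'S' for latitude or 'E'/'W' for longitude."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # South and West are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

lat = dms_to_decimal(15, 30, 18.0, "N")   # hypothetical latitude
lon = dms_to_decimal(90, 15, 36.0, "W")   # hypothetical longitude
print(round(lat, 4), round(lon, 4))       # 15.505 -90.26
```

      This is exactly the kind of data a magazine can leak by publishing an unstripped photo: the numbers ride along invisibly inside the file.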

    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      I feel like unintentional data poisoning can occur even without influencers telling their viewers about a survey. For example, if a website offers users benefits in exchange for collecting their data, and that website becomes widely used by one group, the resulting data will likewise become narrow in its demographics.
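      The skew described above can be made concrete with a quick sketch comparing a survey's demographic mix before and after a viral influx (all respondent counts here are invented for illustration):

```python
from collections import Counter

# Hypothetical survey respondents before and after a video goes viral.
before = ["18-24"] * 30 + ["25-34"] * 35 + ["35-54"] * 25 + ["55+"] * 10
viral_influx = ["18-24"] * 400  # one narrow demographic floods in

def shares(respondents):
    """Fraction of respondents in each age bracket."""
    counts = Counter(respondents)
    total = len(respondents)
    return {group: round(n / total, 2) for group, n in counts.items()}

print(shares(before))                 # roughly balanced mix
print(shares(before + viral_influx))  # one group now dominates
```

      Before the influx the largest group is 35% of the sample; after it, the flooded bracket is 86%, so any conclusion drawn from the poisoned dataset mostly describes that one group.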

    1. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what they can do to convince those users they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’

      I also want to add that social media platforms have become more and more efficient at targeting ads to their audiences based on users' likes and feeds. I find this quite efficient, but companies can also abuse the method by showing more ads than actual posts from other people, which annoys users.

  3. Jan 2026
    1. While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction.

      I would like to add that this can also be related to the philosophy of egoism. Trolls are motivated by their own interests; their selfishness means they do not consider the person they are trolling, leading them to harass or harm that person for their own gain.

    1. Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored.

      It is also evident that some trolls are willing to fabricate fake information and rumors about a person as a way to get a reaction out of them: if the rumor becomes widespread enough, the person has no choice but to go online and reply to clarify that it is not true.

    1. And they also talked about the advertising motives behind supporting social causes (even if some employees do indeed support them), and the advertising motivation behind tweeting openly about how they are basing their decisions based on advertising.

      I have never seen this as an issue. Behind the screen of a brand account there is also a human, and being able to break free from professionalism, connect with people online, and express opinions can be a good thing, as well as a brilliant marketing strategy. When brands relate to and speak about today's problems, it shows that the corporation also cares about people, which can win them more supporters and customers. But if the topic is really controversial, or something that shouldn't be said online, some companies will still speak on it, which may hurt their image.

    1. Authenticity is a concept we use to talk about connections and interactions when the way the connection is presented matches the reality of how it functions. An authentic connection can be trusted because we know where we stand. An inauthentic connection offers a surprise because what is offered is not what we get. An inauthentic connection could be a good surprise, but usually, when people use the term ‘inauthentic’, they are indicating that the surprise was in some way problematic: someone was duped.

      I would like to add that with the rise of AI and deepfakes, the term 'authenticity' has begun to lose its meaning, as it becomes increasingly difficult to distinguish real from fake. Also, when people exaggerate their lifestyles, it can leave viewers feeling that they themselves aren't living a happy life.

    1. 4Chan has various image-sharing bulletin boards, where users post anonymously. Perhaps the most infamous board is the “/b/” board for “random” topics. This board emphasizes “free speech” and “no rules” (with exceptions for child pornography and some other illegal content). In these message boards, users attempt to troll each other and post the most shocking content they can come up with. They also have a history of collectively choosing a target website or community and doing a “raid” where they all try to join and troll and offend the people in that community.

      With the ability to remain anonymous, people are more likely to post something borderline horrific, or something that nearly violates policy, without facing any consequences. This also leads to cyberbullying and to exposing other people's personal information without their knowledge.

    1. Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts Books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread

      I want to add that there were also other ways people used to share news and updates. Commonly, people would send mail and letters, but some used carrier pigeons with a message tied to the bird's leg. Additionally, there were bellmen (town criers) who would stand in the middle of the town and call out the current news and updates.

    1. In addition to representing data with different data storage methods, computers can also let you add additional constraints on what can be saved. So, for example, you might limit the length of a tweet to 280 characters, even though the computer can store longer strings.

      Data constraints play a really important role on many social media platforms, as they help set realistic boundaries when a person fills out their personal information. They also help companies keep records of, for example, the typical ages of their users. Moreover, they help prevent trolls from entering random, invalid information just to use the platform.
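      The character-limit constraint from the quote is easy to sketch. A minimal, hypothetical validator (real platforms count characters in more complicated ways, e.g., Twitter weights URLs and some Unicode characters differently):

```python
MAX_TWEET_LENGTH = 280  # constraint enforced by the platform, not the storage

def validate_tweet(text):
    """Reject posts that exceed the platform's length constraint,
    even though the underlying database could store longer strings."""
    if len(text) > MAX_TWEET_LENGTH:
        raise ValueError(
            f"Tweet is {len(text)} characters; limit is {MAX_TWEET_LENGTH}."
        )
    return text

validate_tweet("Hello, world!")   # accepted
try:
    validate_tweet("x" * 281)     # one character over the limit
except ValueError as e:
    print(e)
```

      The point of the sketch is the separation the quote describes: the database could store a longer string, but the application layer adds the constraint on top.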

    1. Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define the color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).

      The fact that pixels have only three color components has always made me curious: there is only red, green, and blue, but once combined in a group they can create whole new colors that sometimes seem impossible. Additionally, one of the most intriguing things about these grids is that with only these three colors, they can create the color white.
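      The grid-of-pixels idea in the quote, and the observation in the comment that red, green, and blue at full intensity combine to white, can be sketched directly as nested lists (a toy 2×2 image, not any real image format):

```python
# A tiny 2x2 image as a list of rows, each row a list of (R, G, B) pixels.
RED   = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE  = (0, 0, 255)
WHITE = (255, 255, 255)  # all three components at full intensity

image = [
    [RED,  GREEN],   # top row
    [BLUE, WHITE],   # bottom row
]

# Additive mixing: summing the full red, green, and blue channels
# channel-by-channel yields white.
mixed = tuple(r + g + b for r, g, b in zip(RED, GREEN, BLUE))
print(mixed == WHITE)   # True
print(image[1][1])      # pixel at row 1, column 1 -> (255, 255, 255)
```

      This mirrors the quote's description exactly: the image is a list of rows, each row a list of columns, and each pixel three numbers.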

    1. Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected.

      I believe it is crucial to analyze the intentions of the various people involved, as they are the root of how a bot behaves online. This ties back to the fact that a bot's actions should be traced to its programmers: bots are not autonomous and do not know the difference between right and wrong; only the people giving them instructions do.

    1. On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers, or making fake crowds that appear to support a cause (called Astroturfing)

      In addition to bots made to spam advertisements or to buy followers, some bots help influencers gain recognition in other ways. Beyond followers, some people buy bots just to comment on or share their posts, making them seem more popular, which pushes them onto more people's feeds and thus earns them more real followers.

    1. We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look.

      I would like to disagree here, because posting only your good news and manipulating your photos are two different things. One reports something that actually happened, while photo manipulation is outright deception. Besides, who would actually post their bad news, given that they might get made fun of by other people, most of whom would face no consequences?

    1. Being and becoming an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful to parents, elders and authorities, taking care of children and the young; generous to family and others). These traits are often performed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), resulting in a harmonious society.

      While these ethics have good intentions, they often do not result in a harmonious society. The reason is that they give certain people a great deal of power and authority, which those people can abuse for their own ends. And since everyone wants to be seen as exemplary and does not want to disrupt society, no one dares to question or go against those people.