22 Matching Annotations
  1. Last 7 days
    1. “Incel” is short for “involuntarily celibate,” meaning they are men who have centered their identity on wanting to have sex with women, but with no women “giving” them sex. Incels objectify women and sex, claiming they have a right to have women want to have sex with them. Incels believe they are being unfairly denied this sex because of the few sexually attractive men (“Chads”), and because feminism told women they could refuse to have sex. Some incels believe their biology (e.g., skull shape) means no women will “give” them sex. They will be forever alone, without sex, and unhappy. The incel community has produced multiple mass murderers and terrorist attacks.

      The internet certainly accelerates dangerous communities, especially when users are lonely and struggling with mental health. The incel community has only continued to grow, developing into today’s looksmaxxing and blackpill communities and even connecting with major right-wing content creators and politicians.

    1. “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.” Director Bo Burnham On Growing Up With Anxiety — And An Audience - NPR Fresh Air (10:15-11:20)

      This quote shows why it’s so hard to address most issues around social media. It’s almost a necessary evil, as it is an essential part of all of our lives. It is difficult to keep up with real life if you’re not connected online too, so solutions to social media addiction or negative effects must be found on the other end — the developers’ end.

  2. Feb 2026
    1. Building off of the amplification of polarization and negativity, there are concerns (and real examples) of social media (and their recommendation algorithms) radicalizing people into conspiracy theories and into violence.

      Echo chambers form at rapid rates, especially on platforms whose designs and recommendation systems accelerate them. This can be very dangerous.

    1. Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

      It seems like most social media platforms nowadays prioritize reactivity and anger when designing their algorithms. A lot of platforms show me discourse or controversial videos in order to maximize engagement, and these structures appear to be affecting real-life behavior, especially children’s. A toy version of how an engagement-first ranking rule produces this is sketched below.
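
      As a minimal sketch only: the posts, weights, and function names below are made up for illustration and are not any platform’s real algorithm. The point is that a rule which only maximizes predicted engagement can surface outrage without anyone writing “promote anger” anywhere.

      ```python
      # Hypothetical feed-ranking sketch: score posts purely by predicted
      # engagement (likes, replies, shares). Posts that provoke strong
      # reactions rise to the top even though no rule mentions anger.

      posts = [
          {"text": "Cute photo of my dog", "likes": 120, "replies": 4, "shares": 2},
          {"text": "Calm explainer on local zoning", "likes": 30, "replies": 6, "shares": 1},
          {"text": "Outrageous hot take designed to anger", "likes": 90, "replies": 300, "shares": 150},
      ]

      def engagement_score(post):
          # Replies and shares weighted above likes because they tend to
          # generate further engagement (a simplified assumption).
          return post["likes"] + 3 * post["replies"] + 5 * post["shares"]

      ranked = sorted(posts, key=engagement_score, reverse=True)
      for post in ranked:
          print(engagement_score(post), post["text"])
      ```

      Running this puts the controversial post first by a wide margin, which is exactly the kind of unintended outcome the quoted passage describes.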

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people whom things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:

      I think this shows the importance of representation in programming and of testing technology before it is launched. Even better, it is important to have representation among the people developing these products. When people with marginalized identities are shut out of programming careers, others who share those identities end up unable to fully use or enjoy the resulting products either.

    1. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another. There are many things we might not be able to do that won’t be considered disabilities because our social groups don’t expect us to be able to do them. For example, none of us have wings that we can fly with, but that is not considered a disability, because our social groups didn’t assume we would be able to. Or, for a more practical example, let’s look at color vision:

      In a world of ever-growing diversity, it is vital to keep in mind all the different types of people who might be using a certain technology. Many websites and apps now have accessibility settings that account for different disabilities and access needs, but these are inconsistently implemented and constantly being revamped. Standardizing and requiring accessibility features would help ensure that everyone can use different platforms and have a good experience.
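
      One small, checkable piece of this, related to the color-vision example above, is text/background contrast. The sketch below uses the published WCAG 2.x relative-luminance and contrast-ratio formulas; the specific colors are arbitrary example values, not taken from the chapter.

      ```python
      # Sketch: checking a color pair against the WCAG contrast-ratio formula.
      # WCAG AA asks for at least 4.5:1 for normal-size text.

      def relative_luminance(rgb):
          def channel(c):
              c = c / 255
              return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
          r, g, b = (channel(c) for c in rgb)
          return 0.2126 * r + 0.7152 * g + 0.0722 * b

      def contrast_ratio(fg, bg):
          lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
          return (lighter + 0.05) / (darker + 0.05)

      ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # gray text on white
      print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA")
      ```

      This particular gray-on-white pair comes out around 4.5:1 and fails, which is the kind of concrete rule a standardized accessibility requirement could enforce automatically.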

    1. Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women

      Internally, there should be stronger safeguards to prevent employees from abusing their access. There are technological and social ways to do this: enforcing strict rules about who may look up what (and auditing that those rules are followed), and using stronger encryption so that sensitive data is not casually readable.
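
      A minimal sketch of the technical side, with made-up roles, records, and function names (not any company’s real access-control system): deny lookups by default and write an audit entry for every attempt, so misuse is at least detectable.

      ```python
      # Sketch: a data-access wrapper that checks an employee's role before
      # returning a user record and logs every attempt, allowed or not.

      import datetime

      ALLOWED_ROLES = {"trust_and_safety", "legal_compliance"}  # hypothetical roles
      audit_log = []

      def lookup_user_record(employee, role, target_user, reason, database):
          entry = {
              "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
              "employee": employee,
              "role": role,
              "target": target_user,
              "reason": reason,
          }
          audit_log.append(entry)  # recorded even if the lookup is refused
          if role not in ALLOWED_ROLES or not reason.strip():
              raise PermissionError(f"{employee} ({role}) may not view {target_user}'s record")
          return database.get(target_user)

      # Toy in-memory "database" for the example:
      db = {"alice": {"email": "alice@example.com"}}
      print(lookup_user_record("emp42", "trust_and_safety", "alice", "reviewing an abuse report", db))
      ```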

    1. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.

      This makes me think of the Tea app data leak, in which almost all users had their information exposed because the developers allegedly left it in an unsecured, publicly accessible cloud storage bucket. Certain security and privacy standards should be mandatory when creating a social platform, even (or especially) if the developer is inexperienced.
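
      A toy way to picture the quoted point (not any platform’s actual code): a “private” message is usually just a row in the company’s own database, which the company’s programs can iterate over unless the design uses end-to-end encryption.

      ```python
      # Toy model of server-side "private" messaging: messages are stored in
      # plain text, so company code (or anyone with database access) can scan them.

      messages = []  # stands in for the company's message database

      def send_private_message(sender, recipient, text):
          messages.append({"from": sender, "to": recipient, "text": text})

      def automated_scan(keyword):
          # e.g., a spam filter, an ad-targeting pipeline, or a legal request
          return [m for m in messages if keyword.lower() in m["text"].lower()]

      send_private_message("sam", "alex", "Let's meet before the show at 5pm")
      print(automated_scan("5pm"))  # the "private" message is fully readable server-side
      ```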

    1. For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s:

      - Race
      - Political leanings
      - Interests
      - Susceptibility to financial scams
      - Being prone to addiction (e.g., gambling)

      Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence.

      With so much information about individuals being publicly available in the age of social media, it is very easy to find information about anyone — whether you are a hacker who can access specific metadata or just a passing user who looks at a profile. It is interesting to observe how people’s behavior has changed over the last decade as our lives become more and more public.
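
      A tiny sketch of how inference from friend lists can work, using the idea that people’s friends tend to resemble them. The friend lists and labels here are invented, and real systems use far more data and statistical machine learning; this only shows the shape of the technique.

      ```python
      # Sketch: guessing an undisclosed attribute of a user from the declared
      # attributes of their friends (simple majority vote).

      from collections import Counter

      friends_of = {"taylor": ["jo", "sam", "ria", "lee"]}
      declared_politics = {"jo": "party_a", "sam": "party_a", "ria": "party_b", "lee": "party_a"}

      def infer_attribute(user):
          votes = Counter(declared_politics[f] for f in friends_of[user] if f in declared_politics)
          guess, count = votes.most_common(1)[0]
          return guess, count / sum(votes.values())

      print(infer_attribute("taylor"))  # ('party_a', 0.75) -- never disclosed by taylor
      ```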

    1. Social media platforms collect various types of data on their users.

      Data collection has a variety of gains and drawbacks for both parties, the platform and the users. But it is important to maintain ethics when collecting data by asking questions such as what are the legal limitations on how much data can be collected and how that data can be used.

  3. Jan 2026
    1. 7.3.4. RIP trolling

      RIP trolling is where trolls find a memorial page and then all work together to mock the dead person and the people mourning them. Here’s one example from 2013:

      A Facebook memorial page dedicated to Matthew Kocher, who drowned July 27 in Lake Michigan, had attracted a group of Internet vandals who mocked the Tinley Park couple’s only child, posting photos of people drowning with taunting comments superimposed over the images. One photo showed a submerged person’s hand breaking through the water with text reading “LOL u drowned you fail at being a fish,” according to a screen grab of the page shared with the Tribune after the post was removed. (“Cruel online posts known as RIP trolling add to Tinley Park family’s grief,” from the Chicago Tribune)

      7.3.5. Flooding Police app with K-pop videos

      To go in a different direction for our last example, let’s look at an example of trolling as a form of protest. In the Black Lives Matter protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could, eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at police surveillance.

      Comparing these two examples (RIP trolling and protest trolling) highlights the stark differences between different kinds of trolling. Trolling, like many other things on the internet, isn’t inherently good or bad; it is driven by intention, and it can be used for good or for ill.

    1. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling, such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005). These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined their trolling philosophy, for example: “Rule 43. The more beautiful and pure a thing is - the more satisfying it is to corrupt it”

      There are lots of examples of widespread, coordinated trolling on 4chan changing entire communities or corners of the internet. An individual troll’s impact usually doesn’t go much further than mild annoyance, but when several people come together to troll collectively, they can cause real-world consequences.

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media? Do you feel like those changes or expressions are authentic to who you are, or do they compromise your authenticity in some way?

      Naturally, it’s impossible for anyone to truly capture their entire selves in a social media profile. Everything is some type of performance to a certain degree, whether that’s informed by social media trends and behaviors or digital interactions.

    1. There are many ways inauthenticity shows up on internet-based social media, such as:

      - Catfishing: Creating a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone
      - Sockpuppet (or a “burner” account): Creating a fake profile in order to argue a position (sometimes intentionally argued poorly to make the position look bad)

      I think inauthenticity is an inevitable part of social media, if not a defining feature of it. Nowadays, with bots rampant across platforms and AI-generated, automated content (see the “Dead Internet Theory”), social media has become characterized by fake stories and lies. Inauthenticity ranges from someone embellishing an anecdote to full-blown fake news.

    1. One famous example of reducing friction was the invention of infinite scroll.

      Ethically, it is important to consider the implications of reducing friction in web design. While it makes the user experience more convenient and comfortable, it also removes natural stopping points, promoting overuse of social media and creating a “doomscrolling trap.” A conceptual sketch of the mechanism follows.
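
      This is only a toy model: real infinite scroll lives in front-end code talking to a paginated feed API, and the function names here are invented. It just contrasts “load the next page automatically” with the old friction of clicking a “next page” button.

      ```python
      # Conceptual sketch of infinite scroll: the client silently requests the
      # next page whenever the user nears the bottom of the feed, so there is
      # never a built-in moment that says "you're done".

      def fetch_page(page_number, page_size=5):
          # Stand-in for a network call to the platform's feed API.
          start = page_number * page_size
          return [f"post {i}" for i in range(start, start + page_size)]

      def simulate_scrolling(pages_viewed):
          feed, page = [], 0
          while page < pages_viewed:          # in a real app, this loop has no natural end
              feed.extend(fetch_page(page))   # the next page loads before the user asks
              page += 1
          return feed

      print(len(simulate_scrolling(pages_viewed=10)), "posts seen without a single click")
      ```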

    1. Open two social media sites and choose equivalent views on each (e.g., a list of posts, an individual post, an author page etc.). List what actions are immediately available. Then explore and see what actions are available after one additional action (e.g., opening a menu), then what actions are two steps away. What do you notice about the similarities and differences in these sites?

      When comparing posts on Twitter/X and Instagram, you can see that an Instagram post must contain an image or video and show the user who posted it, and can include additional info such as a text caption, music, location, etc. A Twitter post shows the text and the user who posted it but has more limitations on what else can be added, although links and images are supported. Both offer similar actions such as likes, comments, shares, and, more recently, reposts.

    1. So all data that you might find is a simplification. There are many seemingly simple questions that, in some situations or for some people, have no simple answers, questions like:

      - What country are you from? What if you were born in one country, but moved to another shortly after? What if you are from a country that no longer exists, like Czechoslovakia? Or from an occupied territory?
      - How many people live in this house? Does a college student returning home for the summer count as living in that house?
      - How many words are in this chapter? Different programs use different rules for what counts as a “word.” E.g., this page has “2 + 2 = 4”, which Microsoft Word counts as 5 words, and Google Docs counts as 3 words.

      Simplifying data may frequently be convenient when creating a widely applicable program, but it usually means leaving some group or perspective out. Because of this, simplified data often contains inherent bias, and developers should be aware of that. The word-count example above is easy to reproduce, as sketched below.
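
      Two equally defensible counting rules give different answers for the same text. These are not the actual rules Microsoft Word or Google Docs use, just two simple rules that happen to reproduce the 5-versus-3 split from the chapter.

      ```python
      import re

      text = "2 + 2 = 4"

      # Rule A: a "word" is any whitespace-separated token (counts '+' and '=').
      rule_a = len(text.split())              # 5

      # Rule B: a "word" is a run of letters/digits (ignores '+' and '=').
      rule_b = len(re.findall(r"\w+", text))  # 3

      print(rule_a, rule_b)  # 5 3 -- same text, two defensible counts
      ```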

    2. The data in question here is what percentage of Twitter users are spam bots, which Twitter claimed was less than 5%, and Elon Musk claimed was higher than 5%

      In the modern age, it is important to understand what truly counts as a “bot,” considering the intricacies of automation and the increasingly blurry line around autonomous AI. As bots become more prevalent, can the activity we measure on a platform still be treated as a valid representation of general public consensus? Even the measured percentage depends on which definition you pick, as in the small example below.
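
      A small illustration of why two sides could look at similar data and report different numbers. The accounts and thresholds are made up, and this is neither Twitter’s nor Musk’s actual methodology; it only shows that the definition drives the statistic.

      ```python
      # Sketch: the measured "percent bots" depends entirely on the definition.
      # Definition A: an account is a bot only if it self-identifies as automated.
      # Definition B: an account is a bot if it posts more than 100 times per day.

      accounts = [
          {"name": "news_archive", "self_identified_bot": True,  "posts_per_day": 400},
          {"name": "kpop_fan",     "self_identified_bot": False, "posts_per_day": 150},
          {"name": "casual_user",  "self_identified_bot": False, "posts_per_day": 3},
          {"name": "lurker",       "self_identified_bot": False, "posts_per_day": 0},
      ]

      def percent(accounts, is_bot):
          return 100 * sum(is_bot(a) for a in accounts) / len(accounts)

      print(percent(accounts, lambda a: a["self_identified_bot"]))   # 25.0
      print(percent(accounts, lambda a: a["posts_per_day"] > 100))   # 50.0
      ```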

    1. How are people’s expectations different for a bot and a “normal” user?

      People typically don’t expect to glean much useful information from bots. In my case, at least, I would typically block or ignore them. Additionally, it’s often easy to identify a bot, but in the age of AI these lines are becoming more blurred.

    1. Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable. For example, the “Gender Pay Gap Bot” on Twitter is connected to a database on gender pay gaps for companies in the UK. Then on International Women’s Day, the bot automatically finds when any of those companies make an official tweet celebrating International Women’s Day, and it quote tweets it with the pay gap at that company:

      I think it’s interesting how bots are frequently used to push a certain message, often a political one. While this can boost positive movements and spread information, it is a powerful capability that can also misrepresent where most people actually stand. A rough outline of how a bot like this works is sketched below.
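
      This is only an outline of the behavior described in the quoted passage. The pay-gap figures, the helper functions find_official_tweets and quote_tweet, and the reply wording are all placeholders, not the real bot’s code or the real Twitter API.

      ```python
      # Rough outline of a pay-gap quote-tweeting bot: watch for companies'
      # International Women's Day tweets and reply with that company's
      # reported gap. Data and helpers below are placeholders.

      median_pay_gap = {"ExampleCorp": 18.5, "SampleBank": 31.0}  # hypothetical % figures

      def find_official_tweets(company, phrase):
          """Placeholder: a real bot would search the platform's API for the
          company's recent tweets containing `phrase`."""
          return [f"<{company}'s '{phrase}' tweet>"]  # stand-in result

      def quote_tweet(tweet, text):
          """Placeholder for posting a quote tweet via the platform's API."""
          print(f"QUOTING {tweet}: {text}")

      for company, gap in median_pay_gap.items():
          for tweet in find_official_tweets(company, "International Women's Day"):
              quote_tweet(tweet, f"In this organisation, women's median hourly pay is {gap}% lower than men's.")
      ```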

    1. We can’t give every example, but here is a range of different things social media platforms do (though this is all an oversimplification).

      A major issue with “social media,” and with trying to apply ethical safeguards to it, is that social media is a vast collection of digital platforms. This vastness makes social media hard to define, and thus it becomes difficult to address issues on a wide scale, for example through law. Additionally, social media varies across regions, peoples, and languages, and so do ethical norms.

    1. Natural Rights

      - Locke: Everyone has a right to life, liberty, and property
      - Jefferson in the Declaration of Independence: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

      Discussions of “human rights” fit in the Natural Rights ethics framework

      Natural rights are certainly the basis for much of modern ethics. Everyone is created equal, and therefore, in designing laws and regulations, we must ensure that everyone is treated equally. Of course, this isn’t always effective, because “equal” or “equitable” treatment isn’t easy to define, and those who create the regulations/laws have their own biases.