22 Matching Annotations
  1. Last 7 days
    1. “Incel” is short for “involuntarily celibate,” meaning they are men who have centered their identity on wanting to have sex with women, but with no women “giving” them sex. Incels objectify women and sex, claiming they have a right to have women want to have sex with them. Incels believe they are being unfairly denied this sex because of the few sexually attractive men (“Chads”), and because feminism told women they could refuse to have sex. Some incels believe their biology (e.g., skull shape) means no women will “give” them sex. They will be forever alone, without sex, and unhappy. The incel community has produced multiple mass murderers and terrorist attacks.

      Often, this is a self-reinforcing cycle. Dwelling on the fact that women aren't interested in you won't make you more appealing. Additionally, with the rise in standards driven by social media, incels also won't go for women similar to them. Coupling these factors together produces nothing good.

    1. Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance:

      As the frequency of social media usage increases, the level of standards rises as well. People who use social media are constantly exposed to some of the most tailored and presentable people, which raises their own perceptions of what is normal.

  2. Feb 2026
    1. Once these algorithms are in place though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

      This weakness is noticeable when someone "bots" their streams or their likes and comments. These are key metadata points that algorithms focus on, so gaming them can generate a big reward. Social media sites do try to regulate this, but individual content creators also have an ethical duty not to engage in it.
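
      A minimal sketch in Python (with made-up numbers and a naive scoring rule that, as far as I know, no platform actually uses) of why botting these metadata points pays off: if the ranking score just adds up likes and comments, a botted post jumps far ahead of an honest one.

      ```python
      # Naive engagement score: just add up likes and comments.
      # Real ranking systems are far more complex; this only shows
      # why inflating these raw counts is rewarded.

      def engagement_score(likes, comments):
          return likes + 2 * comments  # invented weights, purely illustrative

      honest_post = engagement_score(likes=120, comments=15)
      botted_post = engagement_score(likes=120 + 5000, comments=15 + 400)  # bought engagement

      print(honest_post)  # 150
      print(botted_post)  # 5950
      ```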

    1. Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
      - Whether the post was made or liked by my friends or people I’m following
      - How much this post has been liked, interacted with, or hovered over
      - Which other posts I’ve been liking, interacting with, or hovering over
      - What people connected to me or similar to me have been liking, interacting with, or hovering over
      - What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely). This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that.
      - Phone numbers or email addresses (sometimes collected deceptively) can be used to suggest friends or contacts.

      These are all instances of metadata. Metadata is extremely useful for pushing engaging content, but it gives less control over the quality of that content. I've noticed many reels that are just short clips of movies, or barely noticeable reaction content that contributes nothing. Maybe this is due to the greater resource load it takes to filter for quality, or maybe it's harder to decide for someone what types of content they should watch.
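
      To make the metadata idea concrete, here is a toy scoring function (my own invented weights, not any platform's actual algorithm) that ranks a post using only signals like the ones listed above: recency, whether a friend posted it, and how much it has been interacted with. Nothing in it looks at the content itself, which is part of why quality isn't guaranteed.

      ```python
      import time

      def toy_recommendation_score(post, now=None):
          """Rank a post using only metadata signals; the weights are invented."""
          now = now or time.time()
          hours_old = (now - post["posted_at"]) / 3600
          recency = max(0.0, 48 - hours_old)        # newer posts score higher
          friendship = 20.0 if post["from_friend"] else 0.0
          engagement = 0.5 * post["likes"] + 2.0 * post["comments"]
          # Note: the post's actual content never enters the calculation.
          return recency + friendship + engagement

      post = {
          "posted_at": time.time() - 5 * 3600,  # posted 5 hours ago
          "from_friend": True,
          "likes": 40,
          "comments": 6,
      }
      print(toy_recommendation_score(post))  # about 95 for these numbers
      ```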

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennet’s research talk on this in the following Youtube Video:

      It would be interesting to contrast how effective it is to continually survey disabled people versus hiring a disabled designer. I can see it being beneficial to have a person who intimately knows the issues disabled people face, but at the same time there may be many different disabilities, and experiences arising from intersecting disabilities, that no single designer can cover.

    1. A disability is an ability that a person doesn’t have, but that their society expects them to have.1 For example:

      I think this better encapsulates the meaning of a disability than the traditional connotation does. In this framing, every disability is situational, and the responsibility to mitigate it falls on society rather than on the individual.

    1. Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much

      This reminds me of how teenagers would sometimes screenshot DMs. A lot of the time it was evidence for an argument, but other times it was malicious. It isn't wholly a bad thing, however; it could also be used to expose creeps.

    1. Phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it

      I remember a story from 2014, where many celebrities' personal photos were leaked because of a simple phishing scam. It reminded me that the network is not the only thing that needs to be protected; the people using it need training as well.

    1. Go to your google account (assuming you have one) profile information and go to “Data & Privacy”

      I find this interesting because many people share my sentiment that it is better to get ads tailored to your interests than ads devoid of anything interesting. As long as Google uses my data only for ads, I think it's okay. Although, if it were to violate that arrangement, I would have no way of knowing...

    1. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):

      This raises an interesting connection for me, because hedge fund "quants" use a similar strategy: finding seemingly useless or irrelevant data to gain an edge on the market. Somehow, both are effective, even if only for a short period.
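
      The Yankee Candle observation is basically a correlation between two time series. Here is a minimal sketch with made-up weekly numbers (not the actual case or review data) of how such a correlation would be computed:

      ```python
      # Hypothetical weekly counts, invented for illustration only.
      covid_cases      = [100, 250, 400, 900, 1500, 1200, 700]
      no_scent_reviews = [  5,  12,  20,  45,   70,   60,  33]

      def pearson_r(xs, ys):
          """Pearson correlation coefficient between two equal-length lists."""
          n = len(xs)
          mean_x, mean_y = sum(xs) / n, sum(ys) / n
          cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          var_x = sum((x - mean_x) ** 2 for x in xs)
          var_y = sum((y - mean_y) ** 2 for y in ys)
          return cov / (var_x * var_y) ** 0.5

      print(pearson_r(covid_cases, no_scent_reviews))  # close to 1.0 for these made-up numbers
      ```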

  3. Jan 2026
    1. “Griefing” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming” where a player intentionally starts a hostile or offensive conversation.

      I think the anonymity discussed in previous chapters allows people in this case to behave like their true selves. Some people in real life are too scared to bully in this way, but online they can do so without repercussion.

    2. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.”

      I think bad faith is harmful to society. It goes against Kant's maxim of "always tell the truth." Bad faith is used to disguise poor arguments and give them false legitimacy by creating these in-groups and out-groups.

    1. 6.6.1. Anonymity encouraging inauthentic behavior: Anonymity can encourage inauthentic behavior because, with no way of tracing anything back to you[1], you can get away with pretending you are someone you are not, or behaving in ways that would get your true self in trouble. 6.6.2. Anonymity encouraging authentic behavior: Anonymity can also encourage authentic behavior. If there are aspects of yourself that you don’t feel free to share in your normal life (thus making your normal life inauthentic), then anonymity might help you share them without facing negative consequences from people you know.

      It has been noted in studies of the internet population that a plurality of those who comment are angry people. These are the same people who are angry in real life, and they don't care about anonymity. Those who would rather stay anonymous still feel the hostility coming from those commenters, so they are discouraged from participating. Thus, the online space fills up with both anonymous and not-so-anonymous haters.

  4. social-media-ethics-automation.github.io
    1. We value authenticity because it has a deep connection to the way humans use social connections to manage our vulnerability and to protect ourselves from things that threaten us.

      I think this is best represented by the idea of online trolls. The act of trolling creates an inherent dichotomy between the trolled, who feel betrayed by the dishonesty, and those who side with the troll, who feel a connection with them.

    1. One famous example of reducing friction was the invention of infinite scroll [e31]. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin [e32] invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets [e33] what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I think the progress toward less friction has been good in the sense that less of our time is wasted on obstacles, but paradoxically more time is wasted overall. With things like Apple ID sign-in and infinite scroll, getting onto social media is easier and the dopamine hit is more immediate, which has increased the level of Pavlovian conditioning toward addiction.
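
      The difference in friction can be shown with a small Python analogy (my own, not from the book): a paginated feed makes you explicitly ask for the next page, while an "infinite" feed is a generator that just keeps handing you more.

      ```python
      def paginated_feed(posts, page_size=3):
          """With friction: the caller must explicitly request each page."""
          for start in range(0, len(posts), page_size):
              yield posts[start:start + page_size]  # a natural stopping point after each page

      def infinite_feed(posts):
          """Without friction: it never stops; the only way out is to quit."""
          i = 0
          while True:
              yield posts[i % len(posts)]  # recycles content forever for this demo
              i += 1

      posts = [f"post {n}" for n in range(1, 8)]

      for page in paginated_feed(posts):
          print("next page?", page)

      feed = infinite_feed(posts)
      for _ in range(10):  # without this limit, the loop would never end
          print(next(feed))
      ```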

    1. Japanese image-sharing bulletin board called Futaba or 2chan [e19].

      I wonder how this company might seek legal recourse for this action. Does Japanese law have a provision against stealing code? Does the U.S.?

    1. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations)

      It may also be important to note that when expected utility is estimated by surveying potentially affected persons, their responses may not reflect the truth. A person never knows what a punch feels like until it lands. Building off the earlier point about simplifying data, what one person reports they would feel may not be what another ends up feeling. Thus, net utility is never one hundred percent certain.
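
      A worked toy example of this "utility calculus" (the numbers are entirely made up) shows both how the arithmetic works and why uncertain self-reports make the result uncertain:

      ```python
      # Expected net utility = sum over affected people of (probability x reported utility change).
      # The utilities here are invented survey answers; if people misreport how the
      # action would actually feel to them, the whole calculation shifts.

      reported_outcomes = [
          ("person A", 0.9, +5),   # (who, probability they are affected, reported utility change)
          ("person B", 0.5, -3),
          ("person C", 0.8, +1),
      ]

      expected_net_utility = sum(p * u for _, p, u in reported_outcomes)
      print(expected_net_utility)  # 0.9*5 - 0.5*3 + 0.8*1 = 3.8
      ```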

    1. In this example, I decided that each of these would count as “1 apple.”

      It's always important to remember, when looking at data, that it isn't always objective. Data is made up of what its creator chose to include. Data can be missing important distinctions (like small versus big apples) or information (an apple just outside the picture frame), or details can be intentionally omitted. It is always useful to look at the parameters of a study.
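
      A tiny illustration (with hypothetical apples, in the spirit of the book's example but not taken from it) of how the creator's choices change the "objective" count:

      ```python
      # The same scene, counted different ways depending on what the
      # data creator chooses to include.

      apples = [
          {"size": "big",   "in_frame": True},
          {"size": "small", "in_frame": True},
          {"size": "small", "in_frame": True},
          {"size": "big",   "in_frame": False},  # just outside the picture frame
      ]

      count_everything   = len(apples)
      count_in_frame     = sum(1 for a in apples if a["in_frame"])
      count_big_in_frame = sum(1 for a in apples if a["in_frame"] and a["size"] == "big")

      print(count_everything, count_in_frame, count_big_in_frame)  # 4 3 1
      ```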

    1. Or a computer program can repeat an action until a condition is met:

      This reminds me of when YouTubers post videos of followers doing "day X until Y" messages. I never considered the possibility that they were faked until now. If you combine this with the sleep feature and randomize the timeframe of the post, it could look very real. I also wonder if, in the near future, this could be done with AI to create fully automated videos.
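
      Here is a minimal sketch (not tied to any real posting API; "posting" is just printing) of repeating an action until a condition is met, with a randomized delay so the timing looks less mechanical, along the lines of what a faked "day X until Y" account could do:

      ```python
      import random
      import time

      day = 1
      goal_day = 5  # hypothetical stopping condition, i.e. "day X until Y" reaches Y

      while day <= goal_day:
          print(f"Day {day} of posting until the goal is reached")
          # A real daily post would sleep roughly 24 hours (86400 seconds) plus a
          # random offset; 1-3 seconds is used here so the demo finishes quickly.
          time.sleep(random.uniform(1, 3))
          day += 1
      ```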

    1. ethically justifiable

      I find it problematic in practice to draw a distinction between ethical and unethical uses of antagonistic bots. Everybody has their own worldview and values. To define some of these values as ethical on social media is to impose them on everyone. Maybe this would be okay if there were a democratic way to decide, but there isn't. These bots are made to "get a rise out of people" or stir emotions. I disagree with subjecting people to that through automated bots under the guise of ethics.

    1. a human programmer will act as a translator to translate that task into a programming language.

      It's alien to me how logic gates somehow translate into English words. It must take a lot of ones and zeros to make that happen.
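
      This isn't how logic gates themselves work, but a quick look (my own example) at how much binary even a short English word takes: each character becomes a byte, and each byte is eight ones and zeros.

      ```python
      # Show the UTF-8 bytes of a short word as binary strings.
      word = "hello"
      bits = " ".join(format(byte, "08b") for byte in word.encode("utf-8"))
      print(bits)  # 01101000 01100101 01101100 01101100 01101111
      ```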

    1. Being and

      I think the core idea of Confucianism, that those in power have a duty to those they have power over, is really important in today's age. Social media companies are afforded extreme power, and, to use the cliché, "with great power comes great responsibility."