23 Matching Annotations
  1. Nov 2022
    1. Cross-platform raids (e.g., 4chan group planning harassment on another platform)

      Mob behavior, similar to real-world riots and gang violence: people take on the mentality of a single unit, much like ants acting as a group.

    1. Hacking: Hacking into an account or device to discover secrets, or make threats.

      What about hacking for governmental purposes? Or hacking into devices that have already been compromised, in order to give users back access to their own devices?

    1. Another strategy for content moderation is using bots, that is computer programs that look through posts or other content and try to automatically detect problems. These bots might remove content, or they might flag things for human moderators to review.

      This should be integrated alongside a human moderation system. Although bots are efficient and detect problems faster than any human could, they are also subject to bugs, exploitation, and false alarms. Having human moderators review and double-check the reports that bots generate is probably the best way to moderate any digital ecosystem; a rough sketch of that bot-plus-human pipeline is below.
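
      A minimal sketch of what that setup could look like. This is purely illustrative: the banned-word list, the function name, and the review queue are all made up, not any platform's real system.

      ```python
      # Hypothetical moderation bot: it flags posts containing banned words
      # but never removes anything itself; a human reviews the flagged queue.
      BANNED_WORDS = {"slur1", "slur2", "scamlink.example"}

      flagged_for_human_review = []  # posts a human moderator will double-check

      def check_post(post_text):
          """Flag a post if it contains a banned word; otherwise let it through."""
          words = post_text.lower().split()
          if any(word in BANNED_WORDS for word in words):
              flagged_for_human_review.append(post_text)
              return "flagged"
          return "ok"

      print(check_post("totally normal post"))         # ok
      print(check_post("check out scamlink.example"))  # flagged
      print(flagged_for_human_review)                  # what the human will review
      ```

      Keeping the bot to "flag only" is exactly the division of labor argued for above: the bot does the fast, tireless scanning, and the human catches its false alarms.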

    2. If you are running your own site and suddenly realize you have a moderation problem you might have some of your current staff

      This is not a recommended form of moderation; having people who can create and change rules on the spot will ultimately lead to chaos. Multiple "mods" with essentially god-level permissions, free to bend the rules and the "reality" of the space to their liking, will lead to tyranny and power abuse. Ex: Discord servers, Minecraft servers, and other video game servers.

    1. Have you ever reported a post/comment for violating social media platform rules?

      Yes. Although the feature is useful and gives users the ability to help moderate the flow of hate speech on the internet, I still wonder how platforms stop it from being abused. For example, let's say a famous public figure does something mildly offensive on air or in a movie, something completely separate from social media, and their "haters" spam their social media accounts with reports; at some point, an automated algorithm will ban the account. My question is: how do you regulate this as a social media app?

    1. Facebook uses hired moderators to handle content moderation on the platform at large (though Facebook groups are moderated by users). When users (or computer programs) flag content, the hired moderators will look at it and decide what to do.

      When does moderation cross the line into restricting "free speech"? When do you consider something hate speech? When the content is offensive to one person? Ten? A hundred?

    1. “Tendency to continue to surf or scroll through bad news, even though that news is saddening, disheartening, or depressing. Many people are finding themselves reading continuously bad news about COVID-19 without the ability to stop or step back.”

      Humans are attracted to drama; that is why "doomscrolling" is a problem to begin with. It was only amplified by our boredom during COVID, when we could not go out and were stuck with our phones 24/7.

    1. The problem is it - we are hyper-connected,

      Although social media can have a bad influence on both our mental and physical health, it has already been integrated into society. Not partaking in social media would mean falling behind on everything: news, business, friendships, etc.

  2. Oct 2022
    1. to the previous example, social media content can go viral for being perceived as “bad” or “embarrassing.” For example, in 2002, a 15 year old kid made a video of himself swinging a pretend lightsaber, that went viral and was mocked

      This characteristic of virality causes people to perform dangerous and sometimes idiotic antics just to be "noticed by the algorithm". They try to predict what the algorithm likes and wants to push out.

    1. around

      The whole point of virality is for one social media post or thread to keep multiplying. The snowball effect plays a big role here: there only needs to be one single push for a post to blow up, as the rough sketch below shows.
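
      A rough, purely illustrative way to picture that snowball: assume each person who shares a post reaches some fixed number of followers, and some fixed fraction of them reshare it. The numbers below are made up; the point is only that when each share produces more than one new share, the count compounds from a single initial push.

      ```python
      # Toy model of viral spread (a simple branching process).
      FOLLOWERS = 100      # people reached per share (made-up number)
      RESHARE_RATE = 0.02  # fraction of viewers who reshare (made-up number)

      sharers = 1  # the one initial push
      for generation in range(1, 6):
          sharers = sharers * FOLLOWERS * RESHARE_RATE
          print(f"generation {generation}: about {sharers:.0f} new sharers")

      # 100 * 0.02 = 2 new sharers per sharer, so the count doubles each
      # generation: 2, 4, 8, 16, 32, ... (and dies out if the product < 1).
      ```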

    1. Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation

      What about individually catered ads? How do our devices and applications know what items we are looking for? Is scraping our taps and clicks, as well as our search history, an infringement of privacy, and where do we draw the line?

    1. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is.

      Each password acts like a key: instead of storing the password itself, the computer runs it through a one-way function and stores only the resulting "pseudo" value. When a user logs in, the password they type is converted the same way and compared against the value stored in the computer's database; if the two match, the user is granted access to the app or platform. A minimal sketch of that check is below.
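
      A minimal sketch of that check using Python's built-in hashlib. Real systems also add salting and deliberately slow hash functions; this only shows the "store the hash, compare the hashes" step described above.

      ```python
      import hashlib

      def hash_password(password):
          """One-way hash: easy to compute, impractical to reverse."""
          return hashlib.sha256(password.encode("utf-8")).hexdigest()

      # What the database stores (never the password itself).
      stored_hash = hash_password("correct horse battery staple")

      def check_login(attempt):
          """The database can confirm a match without knowing the password."""
          return hash_password(attempt) == stored_hash

      print(check_login("wrong guess"))                   # False
      print(check_login("correct horse battery staple"))  # True
      ```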

    1. But the essayist Film Crit Hulk argues against this in Don’t feed the trolls, and other hideous lies. That piece argues that the “don’t feed the trolls” strategy doesn’t stop trolls from harassing:

      Replying to a troll comment would signal that you were deeply offended or mentally affected, which is the goal of trolling. To avoid making this mistake, it is better to just let things be and not get so worked up about comments on the internet and in digital spaces.

    1. Amusement: Trolls often find the posts amusing, whether due to the disruption or emotional reaction. If the motivation is amusement at causing others’ pain, that is called doing it for the lulz.

      Trolling is a result of the digital freedom that all users feel innately entitled to. Being able to amuse oneself at others' expense is a side effect brought about by that freedom and the rise of the new digital age.

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media?

      For me personally, how I act depends entirely on the mood of the conversation and the identity of the person(s) I'm speaking to. For example, if I were to address a professor about a homework question, I would be more formal and careful with my choice of words and sentence structure. On the other hand, if I were, let's say, in a video game setting, chatting with my friends on an online platform like "Discord", I would be more laid back and relaxed.

    1. Catfishing: Create a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone
       Sockpuppet (related to a burner account): Creating a fake profile in order to argue a position (sometimes intentionally argued poorly to make the position look bad)
       Astroturfing: An artificially created crowd to make something look like it has popular support
       Parody accounts: An account that is intentionally mimicking a person or position, but intended to be understood as fake.
       Schrodinger’s asshole: the guy who says awful shit, and decides if he was “only kidding” depending on your reaction.
       Various types of trolling, which we will cover in the next chapter

      Inauthenticity means providing someone with misleading or untrue information that can be used to influence their perspective on something.

    1. While mainstream social media platforms grew in popularity, there was a parallel growth of social media platforms that were based on having “no rules”, and were sources for many memes and pieces of internet culture, as well as hubs of much anti-social behavior (e.g., trolling, harassment, hate-groups, murders, etc.).

      These social media platforms grew because people are naturally drawn to the feeling of having complete freedom; in this case, the freedom to post whatever they want without having to worry about the consequences of their actions.

    1. And all of these websites became much more interactive, with updates appearing on users’ screens without the user having to request them.

      The technological world is constantly improving; it is an ongoing cycle. This quote from the book illustrates the Taoist ideology that the cycle of the universe will continue indefinitely. In the same vein, tech and our ability to program will keep improving as a collective: sharing new knowledge, more efficient algorithms, and new programming languages. The potential is limitless.

    1. Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms (the images link to the tweets I got them from if you want to see more in the twitter thread):

      How do we determine and regulate what goes on data collection forms in our modern-day apps? Perceptions of gender and new identity categories are constantly emerging; how can we keep a system that regularly updates itself to reflect the world's changing understanding?

    1. used phrases like “fake or spam” accounts and “fake/spam/duplicates,” which might lead to different numbers.

      How does the Twitter algorithm determine whether or not accounts are fake or botted? Are there certain criteria that accounts have to check off before being flagged, and do they get verified by a human moderator before receiving punishment? A purely hypothetical guess at what such criteria might look like is sketched below.
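
      Purely as a guess at the kind of criteria the comment above asks about: these thresholds and fields are invented for illustration and are not Twitter's actual (non-public) detection logic.

      ```python
      # Hypothetical heuristics for flagging a possibly-fake account.
      def looks_suspicious(account):
          score = 0
          if account["account_age_days"] < 7:
              score += 1
          if account["posts_per_day"] > 200:      # inhumanly fast posting
              score += 1
          if not account["has_profile_photo"]:
              score += 1
          # A high score might route the account to a human moderator for
          # review rather than straight to an automatic ban.
          return score >= 2

      print(looks_suspicious({"account_age_days": 2,
                              "posts_per_day": 500,
                              "has_profile_photo": False}))  # True
      ```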

    1. In a computer program, when you save information for later use, instead of putting it in a bowl, you give it a name. The computer then makes a place in its memory with that name, and saves the information you asked it to save. Then you can use that name later in the program to ask the computer what was saved in that spot.

      This characteristic of bots can be utilized for good and bad. Malicious hackers or companies may create bots that keep track of our personal information, either to put out ads that cater to us or for more malicious purposes such as identity theft and hacking. On the other hand, being able to store and retain information is great because it gives programmers easy access to different values that have been assigned a variable name, allowing the coder to call the variable to use the value assigned, as in the small example below.
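
      A tiny example of the "save it under a name, ask for it later" idea the quoted passage describes (the names and values here are arbitrary):

      ```python
      # Save values under names; the computer sets aside memory for each.
      greeting = "Hello, world!"
      follower_count = 1500

      # Later, use the names to ask the computer what was saved there.
      print(greeting)             # Hello, world!
      print(follower_count + 25)  # 1525
      ```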

    1. Bots might have significant limits on how helpful they are, such as tech support bots you might have had frustrating experiences with on various websites.

      A bot is limited by the coding capacity of the programmer; it isn't an artificial intelligence and will only run according to what it has been programmed to do. For this reason, programmers probably like to stick with giving each bot one main function that stands out. Although having a bot that is good at everything sounds like the ideal scenario, I feel it would disperse the focus and effort that the programmers put into each specific function. The toy example below shows how such a single-purpose bot hits its limits.
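
      A toy illustration of that limit: a rule-based "support bot" that can only answer the handful of cases its programmer anticipated. The canned questions and answers are made up.

      ```python
      # A rule-based bot only handles what it was explicitly programmed for.
      CANNED_ANSWERS = {
          "reset password": "Go to Settings > Account > Reset Password.",
          "delete account": "Go to Settings > Account > Delete Account.",
      }

      def support_bot(question):
          for keywords, answer in CANNED_ANSWERS.items():
              if keywords in question.lower():
                  return answer
          # Anything outside its programmed cases falls through to a human.
          return "Sorry, I don't understand. Connecting you to a human agent..."

      print(support_bot("How do I reset password?"))   # canned answer
      print(support_bot("My app crashes on startup"))  # falls through
      ```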

    1. Confucianism (another link): Being and becoming an exemplary person (e.g., respectful, sincere, generous) through ceremonies/rituals. Ceremonies and rituals include things like tea drinking and sacrifice to ancestors. Key figures: Confucius ~500, China; Mencius ~350, China; Xunzi ~300 BCE, China. Taoism: Act with unforced actions in harmony with the natural cycles of the universe. Trying to force something to happen will likely backfire.

      We cannot go against the will of nature; everything will go according to the course of the universe. All we can do is go with the flow of time and the universe; forcing changes will only lead to disaster. These are the core teachings of Taoism. Applying this to modern society would mean that there is really nothing we can do about current societal issues and norms, since everything is predetermined by the "universal cycle." Confucianism, on the other hand, calls us to become an exemplary person; this can also be interpreted as a calling to help solve societal issues with our own strength and effort.