- Nov 2024
-
social-media-ethics-automation.github.io
-
While anyone is vulnerable to harassment online (and offline as well), some people and groups are much more prone to harassment, particularly marginalized and oppressed people in a society. Historically, of course, different demographic groups have been subject to harassment or violence, such as women, LGBTQ+ people, and Black people (e.g., the FBI trying to convince Martin Luther King Jr. to commit suicide). This is true on social media as well. For example, the last section mentioned that the (partially bot-driven) harassment campaign against Meghan Markle and Prince Harry was at least partially driven by Meghan Markle being Black (mirroring the same racism shown in the British press).
I find that women seem to be more vulnerable to attack than men. Take a recent example: an amateur performer named Mai Mai appeared on a divorce-themed variety show. Her behavior on the show was seen as excessively individualistic and drew heavy attacks; almost every video related to her received waves of criticism from netizens (some with nearly 10,000 comments), and she landed on trending searches many times because of her performance. On the other hand, a male rapper did many unethical things, insulting fans, sleeping with fans, and so on. Even though his behavior was far more serious than hers, and he is a professional artist while she is an amateur, the criticism he received was far less than what she got.
-
-
social-media-ethics-automation.github.io
-
Tracking: An abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim’s phone.
The tracking here shocked me. Faithfulness matters in a relationship, but I think it is inappropriate to track a partner's social media to make sure he or she is faithful. Everyone needs privacy. Even as a couple, they are still two separate individuals, each with their own private life, and they should respect each other's privacy. This invasion of another person's privacy is exactly why tracking is classified as a form of harassment.
-
-
social-media-ethics-automation.github.io
-
This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.
Yes, and not only on the Internet but in real life as well. In group projects at school, for example, most of the time a small number of people complete a large part of the task. One reason is that these few understand the material thoroughly, so they can finish the work more efficiently, while the rest do not understand it deeply enough to provide much value to the group. Another reason is that some members are simply unreliable and just want to push the work onto others. I think the school case mostly falls under the first reason.
-
-
social-media-ethics-automation.github.io
-
16.3.2. Well-Intentioned Harm. Sometimes even well-intentioned efforts can do significant harm. For example, in the immediate aftermath of the 2013 Boston Marathon bombing, the FBI released a security photo of one of the bombers and asked for tips. A group of Reddit users decided to try to identify the bomber(s) themselves. They quickly settled on a missing man (Sunil Tripathi) as the culprit (it turned out he had died by suicide and was in no way related to the case), and flooded the Facebook page set up to search for Sunil Tripathi, causing his family unnecessary pain and difficulty. The person who set up the "Find Boston Bomber" Reddit board said "It Was a Disaster" but "Incredible", and Reddit apologized for the online Boston 'witch hunt'.
This gave me a new way of thinking about it. When people work together on something, they often want to contribute to the team in order to make their presence feel valuable, even when that contribution is not actually positive, and the flood of such contributions can create a lot of confusion. On top of that, when people are online they often do not stop to think; they accept everything they see and believe it without reflection, which lets false information convince more and more people and causes chaos.
-
- Oct 2024
-
social-media-ethics-automation.github.io
-
Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign 'target[ing] 3.5m black Americans to deter them from voting'.
From this point of view, I think the disadvantages of data mining outweigh the advantages. Data mining can improve the user experience, but when the platform uses this data to make a profit, it harms the users. It puts the cart before the horse, turning something that should serve users into something that hurts them.
-
-
social-media-ethics-automation.github.io
-
For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people's race, political leanings, interests, susceptibility to financial scams, or proneness to addiction (e.g., gambling).
I think this is wrong. The platform can recommend better content to us after collecting our information, but such behavior can also trouble users. When people do not want their information discovered by others, or when their interests have changed and they are tired of their old interests, seeing content pushed by the platform based on data mining will only irritate them.
-
-
social-media-ethics-automation.github.io
-
Do not argue with trolls - it means that they win
I have heard that it takes two to tango: when one party keeps trying to stir up trouble, ignoring them should make them lose interest in causing it. But does this really work on the Internet? Some people constantly vent the negative emotions of their lives through the Internet and spread negative, pessimistic comments. Ignoring them may only make them more brazen, so there need to be specific regulations to reduce this kind of behavior.
-
-
social-media-ethics-automation.github.io
-
These trolling communities eventually started compiling half-joking sets of "Rules of the Internet" that outlined their trolling philosophy.
What causes women to be treated unfairly in society? We constantly see women being criticized online, from their appearance to their figure to their personality, and in many dirty jokes women's private lives are the main target of ridicule. People on the Internet have magnified this misogyny by taking advantage of the anonymity the Internet provides.
-
-
social-media-ethics-automation.github.io
-
Astroturfing: An artificially created crowd to make something look like it has popular support
This reminds me of K-pop fans who buy bots to like their idols' posts in order to make the idols look more popular. Because people have a herd mentality, when they see that an idol's post has a lot of likes, they may choose to like it as well.
-
-
social-media-ethics-automation.github.io
-
Early in the days of YouTube, one YouTube channel (lonelygirl15) started to release vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.
This reminds me of an internet celebrity I knew of before. She uploaded a video about picking up an elementary school student's homework in Paris, and the video caused a big sensation on the video site. But it was later discovered that the video was staged, self-directed and self-acted. She gained huge traffic from that one video, but in the end her social media account was banned for spreading false information.
-
-
social-media-ethics-automation.github.io
-
One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.
Yes, infinite scrolling really does make people more addicted to social media. TikTok, for example, is also a kind of infinite scroll: you can never finish watching short videos, and when one video ends you are already looking forward to the next, so it keeps you hooked.
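As a toy illustration (my own sketch, not from the book, with made-up helper names like fetch_page), the friction difference can be modeled in Python: a paginated feed stops after each page and waits for a "next page" request, while an infinite feed fetches the next page automatically the moment the current one runs out:

```python
import itertools

def fetch_page(page_number, page_size=5):
    # Stand-in for a server request (a hypothetical helper for this
    # sketch); returns one "page" of post IDs.
    start = page_number * page_size
    return [f"post-{i}" for i in range(start, start + page_size)]

def infinite_feed():
    # Infinite scroll: the moment one page is consumed, the next is
    # fetched automatically -- no "next page" button, no stopping point.
    for page_number in itertools.count():
        yield from fetch_page(page_number)

# The reader never hits friction; the feed just keeps producing posts.
for post in itertools.islice(infinite_feed(), 12):
    print(post)
```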
-
-
social-media-ethics-automation.github.io
-
2003 saw the launch of several popular social networking services: Friendster, Myspace, and LinkedIn. These were websites where the primary purpose was to build personal profiles and create a network of connections with other people, and communicate with them. Facebook was launched in 2004 and soon put most of its competitors out of business, while YouTube, launched in 2005, became a different sort of social networking site built around video.
I noticed that social media users are very sticky. When I read that Facebook drove many other social media platforms out of business, I wondered why Facebook has kept so many users since its founding. I think the stickiness comes from the content users have already posted: when they switch to a new platform, their old posts do not come with them, so they are reluctant to change.
-
-
social-media-ethics-automation.github.io
-
Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define the color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).
Are red, green, and blue the three primary colors that can make up all colors? It is very clever that only three colors can compose every other color, but I am curious how the computer mixes these colors in the correct proportions to produce an exact color. And how is white composed?
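As a rough sketch of the idea (my own Python example, not from the book): each pixel stores red, green, and blue intensities from 0 to 255, the screen lights its tiny red, green, and blue sub-pixels at those levels, and our eyes blend them; white is simply all three channels at full intensity:

```python
# A tiny 2x2 "image" as a list of rows, where each pixel is a
# (red, green, blue) triple with intensities from 0 (off) to 255 (full).
image = [
    [(255, 0, 0), (0, 255, 0)],      # row 0: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # row 1: a blue pixel, a white pixel
]

# "Mixing" is just choosing the three intensity numbers: the screen
# lights its red, green, and blue sub-pixels at those levels and our
# eyes blend them. Equal full intensities look white; equal but lower
# intensities look gray.
white = (255, 255, 255)
gray = (128, 128, 128)
print(image[1][1] == white)  # True: the last pixel is white
```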
-
-
social-media-ethics-automation.github.io
-
Computers typically store text by dividing the text into characters (the individual letters, spaces, numerals, punctuation marks, emojis, and other symbols). These characters are then stored in order and called strings (that is a bunch of characters strung together, like in Fig. 4.6 below).
This reminds me of the Java language I learned in my CSE class. In Java, a sequence of characters is also called a string. But in Java, if you want to include certain special symbols in a string, such as quotation marks or the backslash itself, you need to add a backslash ("\") before them so the machine can recognize them correctly.
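Python (which I believe this book uses for its examples) behaves the same way. A small sketch of my own, not from the book, showing a string as characters stored in order and the backslash escapes needed for special symbols:

```python
# A string is just characters stored in order; indexing pulls them out.
greeting = "Hello"
print(greeting[0])       # H
print(list(greeting))    # ['H', 'e', 'l', 'l', 'o']

# A quotation mark inside the string would otherwise end the string, so
# it is escaped with a backslash; the backslash itself is escaped too.
quote = "She said \"hi\" to me."
backslash = "One literal backslash: \\"
newline = "line one\nline two"   # \n encodes a line break
print(quote)
print(backslash)
print(newline)
```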
-
-
social-media-ethics-automation.github.io
-
As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong in Vice's "How to Make a Bot That Isn't Racist".
This reminded me of a saying: "If you keep company with the good, you will be good; if you keep company with the bad, you will be bad." Since some of the language on social media is very negative, if a bot is allowed to learn from social media users without supervision, it will eventually get out of control. So I think that before deploying a bot, we should tell it what it can learn and what it cannot. This is very interesting, and I look forward to the later content of the book.
-
The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place.
I think people post offensive content through large numbers of bots not only to harass the people being attacked, but also to confuse uninformed onlookers and change their views on a subject. There is a term called herd mentality: when people see that many others hold an opinion different from their own, they may change their own opinion to match the majority.
-