Oct 2024
social-media-ethics-automation.github.io
Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’.
From this point of view, I think the disadvantages of data mining outweigh the advantages. Although data mining can improve the user experience, when the platform uses that data primarily to make profits, it ends up harming the users. It puts the cart before the horse, turning something that should serve users into something that hurts them.
-
For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s:
- Race
- Political leanings
- Interests
- Susceptibility to financial scams
- Being prone to addiction (e.g., gambling)
I think this is wrong. Although the platform can recommend better content to us after collecting our information, such behavior can also trouble users. When people do not want their information discovered by others, or when their interests change and they grow tired of their old ones, seeing content that the platform pushes based on data mining will only irritate them.
-
Do not argue with trolls - it means that they win
I have heard that it takes two to tango. When one party keeps trying to stir up trouble and everyone else ignores him, he will lose interest in causing it. But does this really work on the Internet? Some people constantly vent the negative emotions of their lives online, spreading negative and pessimistic comments. Ignoring them may only make them bolder, so there need to be specific regulations to reduce such behavior.
-
These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined their trolling philosophy.
What causes women to be treated unfairly in society? We can always see women being criticized online, from their appearance to their figure to their personality, and in many dirty jokes women's private matters are the main target of ridicule. People on the Internet have magnified this misogyny by taking advantage of its anonymity.
-
Astroturfing: An artificially created crowd to make something look like it has popular support
This reminds me of K-pop fans who buy bots to like their idols' posts in order to make their idols look more popular. Because people have a herd mentality, when they see that an idol's post has a lot of likes, they may choose to like it as well.
-
Early in the days of YouTube, one YouTube channel (lonelygirl15) started to release vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.
This reminds me of an internet celebrity I knew of before. She uploaded a video about finding an elementary school student's homework in Paris, and the video caused a big sensation on the video site. But it was later discovered that the video was staged and scripted by her. She gained huge traffic from one video, but in the end her social media account was banned for spreading false information.
-
One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.
Yes, infinite scroll really does make people more addicted to social media. TikTok, for example, is a kind of infinite scroll: you can never finish watching short videos, and when one video ends you are already looking forward to the next, which is what makes it so addictive.
-
2003 saw the launch of several popular social networking services: Friendster, Myspace, and LinkedIn. These were websites where the primary purpose was to build personal profiles and create a network of connections with other people, and communicate with them. Facebook was launched in 2004 and soon put most of its competitors out of business, while YouTube, launched in 2005, became a different sort of social networking site built around video.
I found that social media users are very sticky. When I read that Facebook caused many social media platforms to shut down, I wondered why Facebook has kept so many users since its launch. I think this stickiness comes from the posts users have already made on the platform: if they switch to a new social media site, their old posts do not come with them, so they are reluctant to change.
-
Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define the color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).
Are red, green, and blue the three primary colors that can make up all colors? It is very clever that just three colors can combine into any color, but I am curious how the computer mixes them in the correct proportions to produce an exact color. And how is white composed?
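The pixel-grid idea above can be sketched in a few lines of Python. This is a minimal illustration, not how any particular image library stores data: each pixel is a (red, green, blue) triple of intensities from 0 to 255, and white is simply full intensity on all three channels.

```python
# A tiny "image" as a grid: a list of rows, each row a list of pixels.
# Each pixel is an (R, G, B) triple of intensities from 0 to 255.
red    = (255, 0, 0)
green  = (0, 255, 0)
blue   = (0, 0, 255)
white  = (255, 255, 255)  # full intensity of all three channels mixes to white
black  = (0, 0, 0)        # zero intensity of all three channels is black
yellow = (255, 255, 0)    # red and green light together appear yellow

image = [
    [red,   green, blue],    # row 0
    [white, black, yellow],  # row 1
]

# Pixels are addressed by row first, then column:
print(image[1][0])  # (255, 255, 255) -> the white pixel
```

In-between colors come from in-between intensities, e.g. (128, 128, 128) is a medium gray, which is how a computer "mixes proportions" of the three channels.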
-
Computers typically store text by dividing the text into characters (the individual letters, spaces, numerals, punctuation marks, emojis, and other symbols). These characters are then stored in order and called strings (that is a bunch of characters strung together, like in Fig. 4.6 below).
This reminds me of the Java language I learned in my CSE class. In Java, a sequence of characters is also called a string. But in Java, if you want to include certain special symbols in a string, such as quotation marks or a backslash, you need to escape them with a "\" so the compiler can parse them correctly.
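The same two ideas, characters strung together in order and backslash escapes, also appear in Python. A small sketch (the variable names here are just for illustration):

```python
# A string is a sequence of characters stored in order.
greeting = "Hello!"

# Indexing pulls out individual characters:
first = greeting[0]      # 'H'
chars = list(greeting)   # ['H', 'e', 'l', 'l', 'o', '!']

# As in Java, special symbols inside a string literal are escaped with a
# backslash so the parser does not mistake them for the end of the string:
quoted = "She said \"hi\""
print(quoted)            # She said "hi"
print(len(greeting))     # 6 characters in total
```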
-
As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong in the Vice article “How to Make a Bot That Isn’t Racist”.
This reminded me of a saying: "If you keep company with the good, you will be good; if you keep company with the bad, you will be bad." Since some language on social media is very negative, if a bot is allowed to learn from social media users without supervision, it will eventually get out of control. Therefore, I think that before designing a bot, we should tell it what it can learn and what it cannot. This is very interesting, and I look forward to the later chapters of the book.
-
The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place.
I think people post offensive posts through large numbers of bots not only to harass the people being attacked, but also to confuse uninformed onlookers and change their views. There is a term called herd mentality: when people see that many others hold an opinion different from their own, they may change their own opinion to match the majority.