-
social-media-ethics-automation.github.io
-
Filter Bubbles
One concern with how recommendation algorithms work is that they can create filter bubbles (or “epistemic bubbles” or “echo chambers”), where people get filtered into groups and the recommendation algorithm only gives people content that reinforces, and doesn’t challenge, their interests or beliefs. These echo chambers allow people in the groups to freely have conversations among themselves without external challenge.
Filter bubbles are also one of the main causes of information cocoons. Users are repeatedly exposed to content that reinforces their existing views, while alternative perspectives are filtered out. This continual reinforcement creates a self-contained environment where users rarely encounter differing ideas, deepening confirmation bias and narrowing their worldview.
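The feedback loop described above can be sketched as a toy simulation: a recommender shows whatever best matches a user's current interests, and engagement then strengthens those interests further. The topic names, starting weights, and update rule here are all invented for illustration; real recommendation systems are far more complex.

```python
# Toy filter-bubble simulation (illustrative only): recommend the topic
# a user engages with most, then nudge their interests toward it.

def recommend(interests):
    """Pick the topic with the highest current interest weight."""
    return max(interests, key=interests.get)

def simulate(steps=20, boost=0.1):
    # Start with only a slight preference for one topic.
    interests = {"politics_a": 0.28, "politics_b": 0.25,
                 "sports": 0.24, "science": 0.23}
    for _ in range(steps):
        shown = recommend(interests)
        # Engagement with the shown topic strengthens it further...
        interests[shown] += boost
        # ...then renormalize so the shares of attention sum to 1.
        total = sum(interests.values())
        interests = {t: w / total for t, w in interests.items()}
    return interests

final = simulate()
print(final)  # one topic comes to dominate; the others shrink toward zero
```

Even with a nearly uniform start, the small initial edge compounds every round, which is the narrowing dynamic the paragraph above describes.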
-
Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons:
Another reason for social platforms to keep their recommendation algorithms secret may be protecting users' privacy. The algorithms rely heavily on user data to create personalized content, so publishing the details of the algorithms may put users' privacy at risk. There may also be other reasons, like the technical complexity of the algorithms.
-
Design Justice
Traditional design practices often favor the needs of dominant groups, leaving out those with different abilities, cultural backgrounds, and socioeconomic statuses. In addition, technologies may also harm certain groups, and the biases they embed can cause discrimination against those groups. Therefore, design justice is crucial in modern society.
-
We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.
For example, simplified touch-only interfaces are one way technologies are being made inaccessible. Devices with only touchscreen interfaces (such as some ATMs) are hard for users with visual impairments or motor disabilities to navigate effectively. In addition, smart glasses, like Google Glass, rely on visual prompts or voice commands, which can exclude users with vision or hearing impairments.
-
- Oct 2024
-
What incentives do social media companies have to violate privacy?
There is potential profit in violating users' privacy. For example, this information can be sold to ad companies to make targeted ads, or sold to other third parties. And since it is hard for users to find out that social media companies sold their information, the risk involved is low.
-
What incentives do social media companies have to protect privacy?
Social media companies need to follow laws like the CCPA; failing to protect their users' privacy could put them in violation of these laws, incurring the costs of lawsuits or class actions. In addition, there is competition between social media companies, and failing to protect users' information may drive their users to competitors.
-
After looking at your ad profile, ask yourself the following: What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?
I did not use the Google account tied to my UW email very much. However, surprisingly, Google knows a lot of my private information, and a lot of it is pretty accurate. This also made me think about whether I need to improve my awareness of data security.
-
Are you surprised by any of the things that can be done with data mining? Do you think there is information that could be discovered through data mining that social media companies shouldn’t seek out (e.g., social media companies could use it for bad purposes, or they might get hacked and others could find it)?
Yes. For example, health data (especially mental health data) is very risky for users. Language indicating anxiety, depression, or suicidal ideation in posts can be identified using data mining. Social media companies might exploit this to push targeted content or manipulate users during vulnerable times.
-
While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy
I think some of them may not hold a nihilistic philosophy. A lot of trolls are hired for specific reasons, such as defaming their employers' rivals. There are also trolls serving political agendas. So they may be trolling on purpose.
-
Some reasons people engage in trolling behavior include:
In some cases, there may be other reasons that cause people to engage in trolling behavior. People might troll to fit in with a certain online community or group of friends who engage in similar behavior, or they may use trolling to get more subscribers and pursue profit.
-
What are the ways in which a parasocial relationship can be authentic or inauthentic?
I suppose the difference between an authentic and an inauthentic parasocial relationship, in the streamer-follower sense, is whether the celebrity considers a specific follower a "fan". However, this judgment is often based on whether followers spend enough money on the streamer, and streamers don't know their fans' names (or even usernames) in most circumstances. Therefore, I don't see any difference for the follower between an authentic and an inauthentic parasocial relationship.
-
Where do you see parasocial relationships on social media?
The relationship between streamers and their followers has become one of the most common parasocial relationships nowadays. In some sense, this kind of parasocial relationship is built by streamers on purpose: they pick nicknames for their follower groups and respond to chat as if they were talking with friends.
-
Draw a rough sketch of the view of the site, and then make a list of: What actions would you want available immediately? What actions would you want one or two steps away? What actions would you not allow users to do (e.g., there is no button anywhere that will let you delete someone else’s account)?
I would like users to be able to access the extra contents of a post (pictures, comments, number of likes) immediately. Users could also immediately take positive interactions with the post (writing a comment, leaving a like, subscribing). However, I would put posting and deleting posts one or two steps away, to prevent users from accidentally posting private information or deleting their posts.
-
With that in mind, you can look at a social media site and think about what pieces of information could be available and what actions could be possible. Then for these you can consider whether they are: low friction (easy), high friction (possible, but not easy), or disallowed (not possible in any way).
Take Twitter as an example. When you want to delete your own comments or tweets, it generally only takes one or two steps, which can be considered low friction. However, if you want to remove content that has been shared or gone viral, it is much harder: that content effectively exists on the internet permanently, even if you delete it, which can be considered high friction or disallowed.
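The three friction levels above can be sketched as a small classification of actions. The action names and their assignments here are illustrative assumptions about a Twitter-like site, not the platform's actual interface rules.

```python
# A minimal sketch of the low / high / disallowed friction framing.
from enum import Enum

class Friction(Enum):
    LOW = "low friction (easy)"
    HIGH = "high friction (possible, but not easy)"
    DISALLOWED = "disallowed (not possible in any way)"

# Hypothetical classification of actions on a Twitter-like site.
ACTIONS = {
    "like a tweet": Friction.LOW,
    "delete your own tweet": Friction.LOW,
    "remove a viral copy of your tweet from the internet": Friction.HIGH,
    "delete someone else's account": Friction.DISALLOWED,
}

def actions_with(level):
    """List every action assigned a given friction level."""
    return [action for action, f in ACTIONS.items() if f is level]

print(actions_with(Friction.LOW))
```

Walking a real site's interface and filling in a table like `ACTIONS` is one concrete way to do the exercise the quoted passage describes.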
-
Can you think of an example of pernicious ignorance in social media interaction? What’s something that we might often prefer to overlook when deciding what is important?
When charitable organizations raise funds for patients suffering from a specific illness, they may interview the patients and their relatives to evoke empathy. However, they overlook the feelings of the patients and their relatives: making patients recall their pain and speak in front of a camera aggravates that pain.
-
Address
For an address, you may want to use text as the datatype and put it into a string along with age, name, and relationship status; these data should be considered metadata. For constraints: limit the format of the address to street, city, state, country, and zip code; limit the length of the zip code and street number; and create a list of existing countries and their states for validation.
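The constraints above can be sketched as a small data structure with a validation check. The allowed-country list and the 5-digit zip rule are illustrative assumptions, not a complete validation scheme.

```python
# A minimal sketch of an address type with the constraints described above.
import re
from dataclasses import dataclass

# Hypothetical whitelist of countries and their states/provinces.
VALID_REGIONS = {
    "USA": {"WA", "CA", "NY"},
    "Canada": {"BC", "ON"},
}

@dataclass
class Address:
    street: str
    city: str
    state: str
    country: str
    zip_code: str

    def is_valid(self):
        """Check a known country/state pair and a 5-digit zip code."""
        states = VALID_REGIONS.get(self.country)
        if states is None or self.state not in states:
            return False
        return re.fullmatch(r"\d{5}", self.zip_code) is not None

addr = Address("123 Main St", "Seattle", "WA", "USA", "98105")
print(addr.is_valid())  # True
```

A real system would need far richer rules (international zip formats, street-number length limits, a full country/state table), but the shape of the constraints is the same.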
-