24 Matching Annotations
  1. Jun 2025
  2. social-media-ethics-automation.github.io
    1. Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, New York, UNITED STATES, 2018. ISBN 978-1-4798-3364-1. URL: https://orbiscascade-washington.primo.exlibrisgroup.com/permalink/01ALLIANCE_UW/8iqusu/alma99162068349301452 (visited on 2023-12-10).

      Not all information on the internet is fair or unbiased. Some information carries prejudice. For example, certain search engine results may favor white people and discriminate against people of color. Racial bias on social media and the internet can also spill into real life. Search engines are not neutral tools, and people need to recognize the gap between what they see online and reality to avoid being negatively influenced by harmful content.

    1. How have your views on social media changed (or been reinforced)?

      Before taking this course, I thought social media was simply a place to relax where people could get the information they needed, chat, or just waste time. But after going through this course, I realized that social media is far more complex than I had imagined. These platforms use algorithms to analyze user preferences and then use that data for targeted advertising or to slowly influence users’ opinions, including their political views. Social media can also be misused by other users to cause harm, such as through cyberbullying, discrimination, or personal attacks. In short, social media is like another world, and in that world we all have a responsibility both to ourselves and to others. Every action we take on these platforms can lead to serious consequences.

  3. May 2025
  4. social-media-ethics-automation.github.io
    1. C. Thi Nguyen. Twitter, the Intimacy Machine. The Raven Magazine, December 2021. URL: https://ravenmagazine.org/magazine/twitter-the-intimacy-machine/ (visited on 2023-12-10).

      Twitter’s retweet feature often leads to misunderstandings, which can easily escalate into online abuse. For example, a joke might be funny to someone who gets it, but if it's misunderstood, it can quickly cause conflict. The same logic applies to retweets: when a tweet is taken out of context or misinterpreted, it can lead to insults and punishment by the platform. That’s why a retraction feature is important. If someone realizes they misjudged a tweet or misunderstood the humor, they should be able to retract their reaction and apologize, allowing for reconciliation.

    1. How would a user do the retraction? What options would they have (e.g., can they choose to keep or delete the original tweet content)? What additional information would they be able to provide?

      I believe that adding a retraction feature to Twitter would be very useful. It would give people a chance to take back inappropriate content that they may have posted unintentionally, allowing them to retract their post and attach an apology or explanation. This provides an opportunity for people to make up for their mistakes, rather than being immediately attacked by other users or punished by the platform. However, this feature could also lead some users to act less responsibly, knowing they have an easy way to undo their posts.

  5. social-media-ethics-automation.github.io
    1. Alice E. Marwick. Morally Motivated Networked Harassment as Normative Reinforcement. Social Media + Society, 7(2):20563051211021378, April 2021. URL: https://doi.org/10.1177/20563051211021378 (visited on 2023-12-10), doi:10.1177/20563051211021378.

      Some users believe that their harassment can be justified, claiming that they are defending justice by purifying the online environment. However, harassment, by its very nature, is still a deliberate act of harm. It involves personal attacks, threats, and violations of privacy. Such behavior goes beyond the boundaries set by platforms and constitutes a serious form of online violence.

    1. Billionaires

      I believe that harassment is never justified. Harassment involves actions like online insults, cyberstalking, and invasion of personal information to harm a user. Some people may think that harassment is acceptable when directed at extremists such as racists, white supremacists, or sexists. I strongly disagree: there are clearly better ways to address such issues than resorting to harassment. For example, we can use facts and logic to refute their views instead of launching personal attacks, or report their behavior through legal and official channels.

  6. social-media-ethics-automation.github.io
    1. Patreon. URL: https://www.patreon.com/ (visited on 2023-12-08).

      Although Patreon is also a crowdsourcing platform, unlike free platforms such as Wikipedia, much of its content is behind a paywall. Users publish their work on Patreon, and interested audiences can pay to access it. However, the platform still brings together many artists and creators who share their artwork or software projects. Therefore, Patreon can also be considered a form of crowdsourcing, because a large number of users upload useful and meaningful content to the platform.

    1. Wikipedia [p12]: Is an online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages [p13]), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute.

      Wikipedia is a classic example of a crowdsourcing website, where everyone has the right to edit its content. If the site were created and maintained by just a few people, the information it provides would definitely not be as broad or detailed as it is now, simply because the amount of information on Wikipedia is massive and the workload would be overwhelming. Although some content, for example content related to politics, history, or subjective opinions, may be inaccurate due to personal biases, the fact that anyone can edit Wikipedia helps balance this issue, since others can directly correct or improve the information.

  7. social-media-ethics-automation.github.io
    1. Maggie Fick and Paresh Dave. Facebook's flood of languages leaves it struggling to monitor content. Reuters, April 2019. URL: https://www.reuters.com/article/idUSKCN1RZ0DL/ (visited on 2023-12-08).

      Facebook’s AI moderation can handle hate speech in 30 languages. However, this is far from enough to clean up the online environment. For example, hate speech related to ethnic cleansing in Myanmar went unchecked for a long time. Such content has a serious impact on the integrity of the internet. Therefore, human moderation is still necessary. Facebook employs around 15,000 content moderators who collectively speak 50 languages. This highlights Facebook’s reactive approach in multilingual environments.

    1. 15.1.6. Automated Moderators (bots)

      Although automated moderation is widely used by many social media platforms, bots cannot do everything. They often fail to understand everything people say. For example, insults can be disguised using symbols or abbreviations, making it difficult for bots to detect them. If such content isn't taken down, the platform fails in its responsibility to other users. However, automated moderation can significantly reduce the workload for human moderators, since thousands of posts are made on social media every day, and it's simply impossible for humans to review them all.
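      One common way moderation bots try to catch disguised insults is to normalize symbol substitutions before matching against a blocklist. The sketch below is a toy illustration of that idea (the substitution map and blocklist words are made-up examples, not any platform's actual rules):

```python
import re

# Hypothetical symbol-to-letter map; real systems use far larger
# normalization tables plus machine-learned classifiers.
LEET_MAP = str.maketrans({"@": "a", "$": "s", "0": "o", "1": "i", "3": "e"})

# Toy example blocklist, not a real one.
BLOCKLIST = {"stupid", "loser"}

def flags_post(text: str) -> bool:
    """Normalize common symbol substitutions, then check for blocked words."""
    normalized = text.lower().translate(LEET_MAP)
    words = re.findall(r"[a-z]+", normalized)
    return any(w in BLOCKLIST for w in words)
```

      Even this normalization step is easy to evade (spacing, homoglyphs, new slang), which is why, as the passage notes, human moderators remain necessary.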

  8. social-media-ethics-automation.github.io
    1. What is self-harm?

      When people are under extreme pressure, they may turn emotional wounds into physical ones as a way to relieve psychological stress. This becomes a form of self-punishment and a means of escaping mental pain. Unable to express their inner suffering, they choose to transform that invisible pain into something visible, hoping to gain attention and understanding from others. This kind of pathological coping mechanism also occurs online, where some individuals cyberbully themselves in order to feel more pain, attract more attention, and seek more comfort.

    1. 13.2.4. Digital Self-Harm

      More and more teenagers are now using the internet to harm themselves. They use verbal abuse to insult themselves and reinforce their feelings of self-hatred and self-denial. Some even create anonymous accounts to attack themselves, provoking others to join in and further deepen their self-negativity. In response to this situation, social media platforms should take greater responsibility and improve their algorithms. For example, they could use tools like VADER to detect the mood of a post, and if the content is overly negative, it should be removed to protect every user.
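      The VADER suggestion above can be sketched with a toy lexicon-based scorer. The word lists and threshold here are illustrative assumptions; the real `vaderSentiment` library uses a much larger lexicon plus rules for punctuation, negation, and intensifiers:

```python
# Toy sentiment lexicons standing in for VADER's real lexicon.
NEGATIVE = {"hate": -2.0, "worthless": -2.5, "ugly": -1.5}
POSITIVE = {"love": 2.0, "great": 1.5}

def mood_score(post: str) -> float:
    """Average word-level sentiment of a post; 0.0 for an empty post."""
    words = post.lower().split()
    scores = [NEGATIVE.get(w, POSITIVE.get(w, 0.0)) for w in words]
    return sum(scores) / len(scores) if scores else 0.0

def needs_review(post: str, threshold: float = -0.5) -> bool:
    """Flag posts whose mood falls below a (hypothetical) threshold."""
    return mood_score(post) < threshold
```

      A design note: rather than automatically removing low-scoring posts, a platform might route them to human review or to self-harm support resources, since sentiment scores alone cannot distinguish digital self-harm from venting or quotation.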

  9. social-media-ethics-automation.github.io
    1. Cultural appropriation. December 2023. Page Version ID: 1188894586. URL: https://en.wikipedia.org/w/index.php?title=Cultural_appropriation&oldid=1188894586 (visited on 2023-12-08).

      Cultural appropriation refers to one cultural group adopting elements of another culture without permission. This behavior is often accompanied by a sense of devaluation: when one culture doesn't understand another, using its elements casually can lead to offense. Memes are a good example, as many users take parts of another culture and turn them into fashion or entertainment. People need to consider whether this is disrespectful to that culture.

    1. 12.5.2. Cultural appropriation

      Cultural appropriation is very common, but some memes are not ethical, and the spread of these memes on social media is a very unethical thing. For example, language targeting Black people may seem funny to the users who post it, but the background story involved may not be funny at all. Some users even use memes that contain offensive background stories to harass other users. Whether intentional or not, these actions are a form of attack, show no respect for others, and go against the original purpose of memes: to bring joy. Therefore, with every share, users should consider the story behind the meme. This is a responsibility to oneself and to all other users.

  10. Apr 2025
  11. social-media-ethics-automation.github.io
    1. Universal design. December 2023. Page Version ID: 1188054790. URL: https://en.wikipedia.org/w/index.php?title=Universal_design&oldid=1188054790 (visited on 2023-12-07).

      Universal design is a concept in product design that aims to make products usable by as many people as possible, taking into account factors such as age, physical ability, gender, and more, with the goal of promoting equality. Common examples include websites that provide alt text for images, video subtitles, or speech-to-text features, designs that consider both visually impaired and hearing-impaired users. Such designs are usable by everyone. They are not only fair, but also simple, flexible, tolerant of error, and low in physical effort, making them thoughtful and inclusive for all.

    1. 10.3.3. Design Justice

      Many designs fail to consider the existence of people with disabilities. For example, some video games include dialogue but do not provide subtitles, making it difficult for users with hearing impairments to have a good experience. Similarly, a colorblind viewer may struggle to follow a movie properly. Another common example is CAPTCHA verification: some CAPTCHAs require users to recognize distorted characters, which can be difficult even for people with normal vision, let alone those with visual impairments. All of these designs overlook the needs of a wide range of users. All users should be treated equally; they have the right to enjoy the same experience, not to be excluded or ignored.

    1. Shadow profile. November 2023. Page Version ID: 1187676640. URL: https://en.wikipedia.org/w/index.php?title=Shadow_profile&oldid=1187676640 (visited on 2023-12-06).

      The creation of shadow profiles by Facebook is, in itself, already a violation of privacy. Facebook collects users' information without their consent and analyzes the content they interact with to push targeted content and advertisements. After the data breach incident, people realized that shadow profiles actually exist. During a congressional hearing, Mark Zuckerberg himself admitted that he would not want his own personal data exposed. There is almost no true privacy left on the internet today, and users' privacy rights are no longer fully under their own control.

    1. This includes the creation of Shadow Profiles [i25], which are information about the user that the user didn’t provide or consent to

      Facebook’s "shadow profiles" are created by analyzing the content a user interacts with and tagging them in order to better deliver targeted content and advertisements. However, due to Facebook’s poor security, user information has often been leaked. Facebook, for example, once leaked the personal data of 530 million users, an enormous number. Other companies can exploit this leaked data for significant profit. Users have very little control over their own privacy, making it easy for their privacy rights to be violated. Although the modern internet offers great convenience, it often sacrifices users' personal privacy in return.

  12. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      The article points out that even if you are not a Facebook user, Facebook can still collect your information if you interact with websites that use Facebook plugins. In fact, even if you’re not online, Facebook can still gather your data if someone uploads your contact information. The data is continuously recollected, making it extremely difficult to delete, and control over that data is not in the user's hands. From Facebook’s view, every individual carries labels that make it easier to recommend content and targeted ads. I believe that everyone’s data is part of their personal privacy, and the way big data is being harvested so recklessly goes against ethics. Even Mark Zuckerberg admitted during a hearing that he wouldn’t want his own data to be collected and made public, so there is no justification for collecting and analyzing the data of others. People have the right to privacy.

    1. Google

      In order to sell products more effectively, advertisers use data to mine consumers' needs. Once they detect what a customer might want, they fill the user's browser with targeted ads to promote their products, which makes people feel like they are constantly being watched. For example, if your computer breaks down and you just need to get it repaired, data mining might lead advertisers to think you need a new one and start sending ads for discounted laptops. This could trigger impulsive buying, leading to unnecessary waste, since a simple repair would have sufficed. Big data also assigns people different labels and shows different ads to different users. It feels like people are pets in a pet store, each with a bunch of tags hanging on them, and they can do nothing about it. People should have the right to refuse being analyzed by big data.

  13. social-media-ethics-automation.github.io
    1. 4chan

      4chan is a website filled with massive cyberbullying and trolling that hides behind a facade of nihilism. It is saturated with racism, misogyny, anti-LGBTQ rhetoric, and other forms of harassment that target people based on their identities, while its users claim their actions are meaningless. Trolls gain satisfaction from hurting others while denying responsibility by insisting it's all just a joke. These individuals know exactly what they are doing, but use humor as a shield. They exploit the anonymity of the internet, while the victims are often left with emotional wounds that take a long time to heal.

    1. Trolling

      Many trolls believe that their actions fall under nihilism: they claim that what they do is meaningless, that they have no intention of hurting others, and thus that they shouldn't be held accountable. But the irony is that while they claim their harm is meaningless and unintentional, in reality their behavior is well thought out. They target specific vulnerabilities in others such as interests, race, or even gender. What trolls say thoughtlessly and without responsibility can become a lasting wound for others; some people may never forget the harm caused by trolls. Everyone has self-respect, yet these trolls ruthlessly attack people’s weaknesses and hide behind the shield of nihilism. This is, without a doubt, a true form of cyberbullying.

  14. social-media-ethics-automation.github.io
    1. Aza

      Aza Raskin is the inventor of infinite scroll, which is undeniably a brilliant design. It effectively helps apps retain users and increase user engagement. However, Raskin himself deeply regrets creating it, as the invention has had harmful effects on users, causing many people to lose interest in the real world and become overly absorbed in the digital one. Many social media platforms today lack friction, meaning there are no barriers to simply swiping to the next video, making instant gratification extremely easy to obtain. As a result, users become addicted very quickly. Raskin is now involved with the Center for Humane Technology, dedicating himself to changing the addictive nature of digital platforms and encouraging people to focus more on the quality of life in the real world.

    1. Aza Raskin regrets [e33] what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      TikTok is a classic example of infinite scroll and is highly addictive. People open the app and swipe through one short video after another, and before they know it, time has slipped away. What makes it even more dangerous is its powerful recommendation algorithm, which can attract users of all ages by analyzing how long they stay on each video and then sending content based on their preferences. This can be very harmful, as people start to care less about the real world, neglect their real-life situations, and lose sight of healthy values.