33 Matching Annotations
  1. Jun 2025
  2. social-media-ethics-automation.github.io
    1. Steve Krenzel [@stevekrenzel]. With Twitter's change in ownership last week, I'm probably in the clear to talk about the most unethical thing I was asked to build while working at Twitter. 🧵. November 2022. URL: https://twitter.com/stevekrenzel/status/1589700721121058817 (visited on 2023-12-10).

      This is a tweet thread from Steve Krenzel, who was a programmer at Twitter. His thread is an example of how technology can be used unjustly. Social media is a great example of how the consequences and uses of new technology are larger and more expansive than what the original creators might imagine. Many social media platforms were initially created without thinking about who they would be giving their platforms to, or how those platforms would really be used.

    1. If you could magically change anything about how social media sites are designed, what would it be?

      At the end of the day, social media platforms need to make money. However, exploiting a customer base by making a more addictive product feels very counterintuitive to bringing people closer together. I think it is important to allow users free will, and to educate them on what a trained algorithm really does and what a social media platform takes from all the data we give it. Both users and designers need to be more aware of the long-term consequences of social media algorithms.

  3. social-media-ethics-automation.github.io
    1. https://en.wikipedia.org/w/index.php?title=Leopold_II_of_Belgium&oldid=1189115939

      This Wikipedia article is about Leopold II, the King of Belgium, who personally owned colonies in Africa. The Belgian colonies were known for torture and brutality, and King Leopold was known for his brutal reign in the Congo, which contributed to instability in the Democratic Republic of the Congo that persists to this day.

    1. In what ways do you see capitalism, socialism, and other funding models show up in the country you are from or are living in?

      In America there are definitely a lot more capitalist business models, which we can see through social media companies, transnational corporations, etc. However, there are still a few socialist functions in our society, such as Medicare, a government-run program that provides many important health services. There are also non-profits, which operate on a smaller scale. Overall, though, I think the capitalist business model is much more prevalent.

  4. May 2025
  5. social-media-ethics-automation.github.io
    1. Paul Billingham and Tom Parr. Enforcing social norms: The morality of public shaming. European J of Philosophy, 28(4):997–1016, December 2020. URL: https://onlinelibrary.wiley.com/doi/10.1111/ejop.12543 (visited on 2023-12-10).

      This link takes us to a published journal article on the ethics of social norms and public shaming. Billingham and Parr discuss when public shaming is morally justifiable. I think a framework for when public shaming can be ethically used is important within modern-day cancel culture. However, it is very unlikely for these types of frameworks to be significantly implemented, because online anonymity incentivizes trolling and makes other socially unacceptable behaviors seem "okay" online.

    1. Non-Abusiveness: The shaming must not use abusive tactic

      This is a great framework for ethical shaming. However, I wonder whether shaming can be non-abusive if its aim is to stop behaviors through guilt-tripping. I think cancel culture and public shaming have been effective, but only at making individuals more afraid of the public eye, rather than as tools to expose abuses of power or overlooked crimes.

  6. social-media-ethics-automation.github.io
    1. Index on Censorship. Interview with a troll. Index on Censorship, September 2011. URL: https://www.indexoncensorship.org/2011/09/interview-with-a-troll/ (visited on 2

      The interview with a troll is an article in which a troll explains their specific reasons for engaging in trolling. I found their justification fairly hedonistic, but found it interesting that even some trolls have boundaries they do not cross. If trolling is done to incite anger for entertainment, then why keep moral lines? How angry can someone be made before it stops being funny, and why?

    1. Do you believe crowd harassment is ever justified?

      There have been successful uses of crowd shaming to get people to apologize, or to get larger companies to change some of their behaviors. However, I believe crowd shaming is not effective enough to ever justify its use.

  7. social-media-ethics-automation.github.io
    1. Jeremy Gray. Missing hiker rescued after Twitter user tracks him down using his last-sent photo. DPReview, April 2021. URL: https://www.dpreview.com/news/0703531833/missing-hiker-rescued-after-twitter-user-tracks-him-down-using-a-photo (visited on 2023-12-08).

      This article, and the use of crowdsourcing to find this missing person, reminds me of the online communities that crowdsource help to find missing children from their missing posters and pictures. Doxxing someone online may be one of the easier things to do with open-source information.

    1. Disinformation campaigns also make use of crowdsourcing. An academic research paper Disinformation as Collaborative Work [p31] (pdf [p32]) lays out a range of disinformation campaigns: Orchestrated: Entirely fake and astroturfed, no genuine users contributing. Cultivated: Intentionally created misinformation that is planted in a community. It is then spread by real users not aware they are part of a disinformation campaign. Emergent and self-sustaining: Communities creating and spreading their own rumors or own conspiracy narratives.

      I think it is interesting to see how misinformation can be worsened by online crowdsourcing, which ends up exacerbating problems. Emergent and self-sustaining disinformation about the Rohingya Muslims in Myanmar spread through disinformative posts on Facebook, which led crowds to riot together. In this way, social media and crowdsourcing have made action and information more attainable at a global scale, but can easily create misunderstandings.

  8. social-media-ethics-automation.github.io
    1. Rita Liao and Catherine Shu. Great Wall of porn obscures China protest news on Twitter. TechCrunch, November 2022. URL: https://techcrunch.com/2022/11/28/great-wall-of-porn-obscures-china-protest-news-on-twitter/ (visited on 2023-12-08).

      This link discusses how government censorship can often conflict with big social media platforms' moderation policies, and how countries can retaliate against them. Here, China spams pornography to drown out protest news, which in a way reminds me of DDoS attacks on servers. I know that China now works much more closely with social media platforms instead of spamming them. I wonder whether other countries do this too, and why certain states get influence over, and loopholes around, content moderation policies.

    1. Facebook also discovered in internal research that, “the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content [n7].”

      It is very interesting how many social media algorithms still uplift hate speech and inflammatory posts, even though these posts often go against regulation standards. I think Twitter is a great example of how a lack of moderation has allowed hate speech and misinformation to become more popular than posts that align with guidelines. I wonder what ethical grey area this falls into, and who should be held accountable: the original poster, the person who creates the algorithm that pushes hate, or the audiences who enable the algorithm and the post by driving more attention and "virality" toward the situation?

    1. Munchausen Syndrome (or Factitious disorder imposed on self [m13]) is when someone pretends to have a disease, like cancer, to get sympathy or attention. People with various illnesses often find support online, and even form online communities. It is often easier to fake an illness in an online community than in an in-person community, so many have done so [m14] (like the fake @Sciencing_Bi fake dying of covid in the authenticity chapter). People who fake these illnesses often do so as a result of their own mental illness, so, in fact, “they are sick, albeit […] in a very different way than claimed” [m15].

      It's interesting to see how this has now become so common that it's almost created its own community. I wonder why the internet creates these polarizing communities.

  9. social-media-ethics-automation.github.io
    1. Benjamin Goggin. Inside Facebook's suicide algorithm: Here's how the company uses artificial intelligence to predict your mental state from your posts. Business Insider, January 2019. URL: https://www.businessinsider.com/facebook-is-using-ai-to-try-to-predict-if-youre-suicidal-2018-12 (visited on 2023-12-08).

      This article discusses how Facebook used AI algorithms to try to detect when users were suicidal so that Facebook could send this information to crisis responders. The ethical issues are that Facebook is creating a medical profile of its users and sharing it with other people without user consent. In addition, Facebook employees are not equipped to handle medical emergencies.

  10. social-media-ethics-automation.github.io
    1. Morgan Sung. Their children went viral. Now they wish they could wipe them from the internet. NBC News, November 2022. URL: https://www.nbcnews.com/pop-culture/influencers-parents-posting-kids-online-privacy-security-concerns-rcna55318 (visited on 2023-12-08).

      This article describes one mother's experience going viral through a video of her and her child. The unwanted attention and viewership her child received was alarming, and it raises concerns about the morality of children going viral from a consent and privacy perspective, as well as fears of exposing these children to online predators. A lot of families may depend on their children to bring in revenue through views, but does this offset the possibility that those views are coming from a harmful and predatory place?

    1. ince genes contained information about how organisms would grow and live, then biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene” [l4]).

      That is so interesting that the passing down of genetic information is what influenced the name for meme! I wonder what image or text was first associated with this term.

  11. social-media-ethics-automation.github.io
    1. Kashmir Hill. How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did. Forbes, February 2012. URL: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/ (visited on 2023-12-07).

      I thought this article was incredibly interesting! I've had similar experiences with Safari recommendations: if I visit a retail website, even before I give my email or information, they email me first now! I think the targeted ads that Target uses have probably been a long-standing practice, especially with all the "consent to cookies" banners online, but we turn a blind eye to it. I wonder if Target's observation of customer purchases to predict future purchases and create "hidden profiles" of customers could also become racial profiling tied to specific brands or products in the future.

    1. Additionally, because of how YouTube categorizes content, if someone tries to make content that doesn’t fit well in the existing categories, the recommendation algorithm might not boost it, or it might boost it in ill-fitting locations.

      I think it's really interesting how algorithms are not always suited to intersectional categories. I do see a lot of "groups" when it comes to the specific categories that algorithms tend to prefer (e.g., "food content creators," "clothing content creators"), and creators who don't stick to a specific category tend not to get recommended as much compared to those who stick to their niche. I wonder why this is favored over more diversified content creation.

  12. Apr 2025
    1. https://en.wikipedia.org/w/index.php?title=General_Data_Protection_Regulation&oldid=1187294017

      The General Data Protection Regulation (GDPR) is a regulation establishing rights to data privacy within the EU. The GDPR is part of the EU's shift toward stronger information protections, alongside laws such as the Digital Services Act, in an attempt to crack down on illegal information sharing.

    1. Metadata: Sometimes the metadata that comes with content might violate someone’s privacy.

      The use of metadata is a common way that people have used the internet to find open information about another person's or organization's whereabouts without their consent.

  13. social-media-ethics-automation.github.io
    1. a special individual encryption process [i6] for each individual password. This way the database can only con

      I remember learning about encryption, and the difficulty levels of ciphers and of decrypting code back into plain text. I think it is interesting how people used to use Caesar ciphers and have since moved to much harder schemes such as RSA, which requires two separate keys for encryption and decryption of the plain text.
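      As a point of comparison for the ciphers mentioned above, here is a minimal sketch of a Caesar cipher in Python. The function names and the shift value are illustrative, not from the chapter; the point is just how weak a single fixed-shift key is compared to modern two-key schemes like RSA:

      ```python
      def caesar_encrypt(plaintext: str, shift: int) -> str:
          """Shift each letter `shift` places down the alphabet, wrapping at z."""
          result = []
          for ch in plaintext:
              if ch.isalpha():
                  base = ord('A') if ch.isupper() else ord('a')
                  result.append(chr((ord(ch) - base + shift) % 26 + base))
              else:
                  result.append(ch)  # leave spaces and punctuation unchanged
          return "".join(result)

      def caesar_decrypt(ciphertext: str, shift: int) -> str:
          """Decrypting is just encrypting with the opposite shift."""
          return caesar_encrypt(ciphertext, -shift)

      print(caesar_encrypt("attack at dawn", 3))  # dwwdfn dw gdzq
      ```

      Because there are only 25 possible shifts, an attacker can simply try them all, which is why password databases rely on far stronger, key-based cryptography instead.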

  14. social-media-ethics-automation.github.io
    1. Nicole Nguyen. Here's Who Facebook Thinks You Really Are. September 2016. Section: Tech. URL: https://www.buzzfeednews.com/article/nicolenguyen/facebook-ad-preferences-pretty-accurate-tbh (visited on 2024-01-30).

      I remember when the Cambridge Analytica scandal blew up, but now algorithms are everywhere. I wonder why algorithms and custom-tailored ads are now seen as neutral or accepted, and not a privacy issue like before.

    1. ds with might be used to infer your sexual orientation [h9]. Social media data might also be used to infer people’s:

      This reminded me of the topics of algorithmic bias and the dangers of data scraping discussed in INFO 200. We talked about how these inferences are not always accurate, as they are often trained on skewed datasets. I wonder how ethical data mining is for training machine learning and AI.

    1. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.”

      I think this is an interesting quote from Jean-Paul Sartre which highlights an aspect of internet use and trolling that we haven't touched on yet: systemic power imbalances. Often with trolling, and with the use of bots for propaganda or political purposes, there are power imbalances that can be seen in the use of hate speech toward specific groups to generate further "flame wars." It's important to remember, just as Sartre states, that these people do not come to discuss, but only to polarize and instill fear.

  15. social-media-ethics-automation.github.io
    1. https://en.wikipedia.org/w/index.php?title=A_Modest_Proposal&oldid=1186969923

      This link goes to a Wikipedia article on Jonathan Swift's A Modest Proposal, a satirical essay proposing that poor Irish families sell their children as food. I think Swift's essay is satirical and could be seen as trolling. However, modern trolling, unlike satire, does not usually intend to convey a deeper message.

  16. social-media-ethics-automation.github.io
    1. leads us to place value on authenticity.

      I think authenticity is valued in today's culture, but we also accept inauthenticity and duplicates more in capitalist societies. Acquiring and possessing objects or markers of status (e.g., wealth or luxury) at affordable prices used to be seen as wrong, but nowadays dupes have become very popular, signalling a shift away from authenticity.

  17. social-media-ethics-automation.github.io
    1. lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited

      I remember when lonelygirl15 was released and how much misinformation surrounded the show. It wasn't certain whether it was real, staged, or all fake. The misinformation drove the show to become even more viral, leading to many more of these "staged" social media personas.

    1. Age Name Address Relationship status etc.

      Age could be limited to just integers from 1 to 100. Name and address could be given as free-form strings so the user could write anything, since there would be too many possible names and addresses to offer a default set of options. However, relationship status would work well with a default set of string options. I think it is interesting to see how survey formulation influences the type of data we collect, and how that changes our outcomes.
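      The field types discussed above could be enforced with a small validation sketch in Python. The field names, bounds, and preset relationship options here are assumptions for illustration, not from the textbook:

      ```python
      # Preset options for a "choose one" field; free-text fields stay open-ended.
      RELATIONSHIP_OPTIONS = {"single", "married", "partnered", "prefer not to say"}

      def validate_response(age, name, address, relationship_status):
          """Return a list of validation errors (empty if the response is valid)."""
          errors = []
          if not (isinstance(age, int) and 1 <= age <= 100):
              errors.append("age must be an integer from 1 to 100")
          if not (isinstance(name, str) and name.strip()):
              errors.append("name must be a non-empty string")
          if not (isinstance(address, str) and address.strip()):
              errors.append("address must be a non-empty string")
          if relationship_status not in RELATIONSHIP_OPTIONS:
              errors.append("relationship_status must be one of the preset options")
          return errors

      print(validate_response(25, "Ada", "123 Main St", "single"))  # []
      print(validate_response(150, "", "123 Main St", "it's complicated"))
      ```

      Whatever a form rejects here simply never enters the dataset, which is exactly how question design shapes the data collected.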

    1. (e.g., if this condition is true, do these five steps),

      I've had to look at conditional statements before with SQL injection. However, I am still a little confused about what counts as a true condition. Does it change with every language?
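      On the question above: what counts as "true" in a conditional does vary by language. A short Python sketch (the helper function is illustrative) shows Python's rule, where a condition need not be a literal boolean: 0, "", [], and None are treated as false, while most other values are treated as true. Stricter languages such as Java require the condition to be an actual boolean expression.

      ```python
      def describe(value):
          """Return whether Python's `if` would treat this value as true."""
          if value:  # the same test any `if value:` performs
              return "truthy"
          return "falsy"

      for example in [True, 0, "", "hello", [], [1, 2], None]:
          print(repr(example), "is", describe(example))
      ```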

  18. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      I had heard about click farms before, but actually seeing the phones strapped to a desk, and human workers manually clicking on ads and posts, was more unsettling than I had expected. I also thought it was interesting to see how click farms spread misinformation through digital astroturfing, creating distortions in our social environment.

    1. It is rational to seek your own self-interest above all else.

      I think it is interesting that egoism and utilitarianism often end up with similar results, even though the ideologies come from opposing values. Egoism is the justification of selfishness in capitalist society; utilitarianism is the meeting of many people's needs, often through some sacrifice. Although one may look more morally "just" than the other, both usually justify sacrificing some part of the community, or the self, for a higher "goal."

    2. Prefers spontaneity and play

      I wonder what historical, environmental, or societal changes within East Asia at the time caused the break between Confucianism and Taoism. Both were popularized around the same time period, yet Taoism's values appear more lax and less concerned with setting up formal rules for how individuals in society should act.

    1. And more than a few wars have been fought over ethical disagreements that couldn’t be resolved.

      I wonder which wars the author is alluding to. Do religious wars count as ethical wars? To what extent does ethics overlap with belief? Does belief in supernatural beings count as an ethical value, or is that an area of belief separate from morality and values?