10 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Jasper Jackson. Donald Trump 'writes angrier and more negative Twitter posts himself'. The Guardian, August 2016. URL: https://www.theguardian.com/media/2016/aug/10/donald-trump-twitter-republican-candidate-android-iphone (visited on 2023-11-24). X (formerly Twitter). Permanent suspension of @realDonaldTrump. January 2021. URL: https://blog.twitter.com/en_us/topics/company/2020/suspension (visited on 2023-11-24).

      I remember back when this happened, when X was still Twitter. So many people cried out in anger when Trump was banned, claiming that free speech was being infringed upon. But Twitter was a private company that had the right to ban whomever it wanted. The reaction of so many people, including Elon Musk and Trump himself, was, I feel, the writing on the wall for Musk eventually purchasing Twitter. But also, looking into the question of authenticity and seeing that Trump may have had a team specifically hired to post inflammatory rhetoric really speaks to how difficult moderation is on the internet, and how much the platform has changed since Musk took over.

  3. social-media-ethics-automation.github.io
    1. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.

      Why does authenticity bother us when watching something? We sit down and watch fictional movies for hours at a time, and give weekly hour-long episodes of fictional TV shows our full attention, so what is it about watching someone online tell stories that bothers us? I think it's likely associated with mistrust and dishonesty, like the passage says. Influencers now lie all the time, but are much more careful and covert in how they do it; as a result, when they're caught, the consequences are more dramatic compared to inauthenticity with little effort put into covering it up.

  4. Oct 2025
  5. social-media-ethics-automation.github.io
    1. What is user friction? Why you're losing users and how to stop. August 2023. URL: https://www.fullstory.com/user-friction/ (visited on 2023-11-24).

      I actually found this article really interesting, since it spoke to things I feel myself and many other users have experienced online before. Many of us have rage-clicked at old websites that refuse to load, even though there's no logical indication that brute force will somehow force the program to work. And cognitive or emotional friction is a very real issue, as when a website or UI is frustrating enough, it's often easier to just abandon it altogether.

    1. While the Something Awful forums had edgy content, one 15-year-old member of the Something Awful forum called “Anime Death Tentacle Rape Whorehouse” was frustrated by content restrictions on Something Awful, and created his own new site with less restrictions: 4Chan.

      Genuinely mind-blowing name. I thought this story would be a singular instance, but the twist is that it ended up being the massive platform we know as 4chan. Is the point of social media to allow complete and unrestricted socialization, or something else entirely? The point I'm trying to make, and I think the major takeaway we can glean from 4chan now that we're a few years removed, is that a complete lack of restriction on the internet usually serves to enable people to engage in violent or degenerate behaviors with significantly fewer consequences than there would be in the real world.

  6. social-media-ethics-automation.github.io
    1. Matt Binder. The majority of traffic from Elon Musk's X may have been fake during the Super Bowl, report suggests. February 2024. Section: Tech. URL: https://mashable.com/article/x-twitter-elon-musk-bots-fake-traffic (visited on 2024-03-31).

      I notice lots of people responded to this source, because it's really telling and ironic how Elon proclaimed that the bots on Twitter were a big problem discouraging him from purchasing the platform, yet we're seeing reports that his acquisition has only increased bot usage on the platform. And even though I feel that this data is likely accurate, is there a chance that, in line with our discussion of data being a simplification of reality, this bot traffic may be overestimated? Or even underestimated? It's something to think about.

    1. As you can see, TurboTax has a limit on how long last names are allowed to be, and people with too long of names have different strategies for how to deal with not fitting in the system. [Gender] Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms:

      I wonder, why does this happen? Is it some kind of attempt at a shortcut or automation to make the development process smoother for the developers, at the cost of how user-friendly the interface ends up being? Essentially, I feel like these results indicate that developers are using cost-cutting practices to finish development quicker. This ultimately benefits the large majority of people who fall into easy categories, but is to the detriment of people who are outliers.

  7. social-media-ethics-automation.github.io
    1. Brian Whitaker. Oman's Sultan Qaboos: a classy despot. The Guardian, March 2011. URL: https://www.theguardian.com/commentisfree/2011/mar/04/oman-sultan-qaboos-despot (visited on 2023-11-17).

      I found myself interested in the image of a ruler posing as benevolent and cultured while really being ignorant and dismissive towards their people. Specifically, the detail of how difficult it is for the people of Oman to assemble and to speak out makes me understand the connection between the Sultan and social media bots. If social media congregation is the only reasonable way for people to speak out against a neglectful government, it makes the ethical question of automated bots a bit more complicated.

    1. # Go through the tweets to see which ones have curse words
       for mention in mentions.data:
           # check if the tweet has a curse word
           if predict(mention.text)[0] == 1:
               # if it did have a curse word, put it in the cursing mentions list
               cursing_mentions.append(mention)

      I remember learning about some of this stuff in AP Comp Sci Principles. When we were hearing about automated bots that go through social media and take specific actions, and then further provided the steps to run code to make that happen, I started trying to put the steps of the code together in my mind. I figure you need to iterate through a list to look for particular phrases, which you'd set within another list, along with a for loop to detect your desired word in social media. When I start to get lost is when I think about scaling that to be bigger.
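      The steps described above (a word list, a loop over posts, and a membership check) could be sketched roughly as follows. This is a minimal illustration, not the textbook's code; the names `find_flagged_posts`, `posts`, and `target_phrases` are all hypothetical.

```python
# Hypothetical sketch of the loop the annotation describes: scan a list of
# posts for any of a set of target phrases and collect the matches.

def find_flagged_posts(posts, target_phrases):
    """Return the posts that contain any of the target phrases."""
    flagged = []
    for post in posts:
        # lowercase both sides so the match is case-insensitive
        text = post.lower()
        if any(phrase.lower() in text for phrase in target_phrases):
            flagged.append(post)
    return flagged

posts = [
    "I love this new update!",
    "This is the worst feature ever",
    "worst launch I've seen",
]
print(find_flagged_posts(posts, ["worst"]))
```

      On the scaling question: at the size of a real platform you would not hold every post in one list; you would process posts as a stream (for example, page by page from an API) and apply the same per-post check to each batch.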

    1. Bots are computer programs that act through a social media account. We will talk about them more in the next chapter (Chapter 3). There are also various applications that are made to help users interact with social media. For example, there are social media manager programs that help people schedule posts and let multiple people use the same account (particularly useful if you are something like a news organization).

      The kinds of bots we've used so far seem pretty simple. It's telling a computer to send a post to social media. But nowadays, we have an overwhelming amount of bots, to the point that a decent chunk of the content I see online is reposted stuff on a clear bot page that I just have to scroll through. Even though we as a class are a bot farm, it's obviously way less consequential. It gets really crazy when you think about the creators who have their content stolen, and reposted across dozens of different burner accounts, just to amass a following on at least one. I think nowadays it's gone too far.

    1. Actions are judged on the sum total of their consequences (utility calculus). The ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.” That is, what is moral is to do what makes the most people the most happy.

      I'd say of all the different frameworks provided, Consequentialism has the most direct application and parallels to many of the ethical questions and debates we often have in regards to social media. Often the game that gets played when it comes to social media is the data and the numbers, and we see developers measure value, success, and popularity largely through the numbers they get fed. And just as one could argue this mindset is flawed, you could say the same flaws exist in Consequentialism. As much as looking at final outcomes can be a rational way to make decisions, it ultimately strips some of the humanity and nuance away from those decisions in the short term. I found the parallels between these two mindsets very interesting.