8 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Text messaging. November 2023. Page Version ID: 1184681792. URL: https://en.wikipedia.org/w/index.php?title=Text_messaging&oldid=1184681792 (visited on 2023-11-24).

      I found this source interesting; it covers how SMS texting evolved into a dominant communication tool. It mentions that text messaging originally came from the Short Message Service, which was constrained to about 160 characters, leading users to abbreviate words into the shorthand we still use today, like lol or u. Something that started as a technical limitation ended up influencing language and culture globally, because early communication tools like texting weren't just about sending messages but also about shaping how people interact and express themselves. Design constraints in tech can unintentionally create long-term behavioral changes.

    1. In the 1980s and 1990s, Bulletin board system (BBS) [e6] provided more communal ways of communicating and sharing messages.

      This system stood out to me in this chapter because it shows how much effort and intention went into communication during the Web 1.0 era compared to today. For example, having to create your own personal webpage or actively join specific spaces like a BBS meant users had to be far more deliberate about where and how they interacted. It made me think about how different that was from now, when content is constantly pushed to us through algorithms: back then you had to go find conversations, whereas now the conversations find you. I feel like this shift has contributed to things like doomscrolling, because there is far less friction and work involved in accessing so much content. I do wonder whether the web back then created more or less meaningful interactions than today, since there weren't as many features but people actively chose to be a part of something.

  3. social-media-ethics-automation.github.io
    1. Caroline Delbert. Some People Think 2+2=5, and They’re Right. Popular Mechanics, October 2023. URL: https://www.popularmechanics.com/science/math/a33547137/why-some-people-think-2-plus-2-equals-5/ (visited on 2023-11-24).

      From this article, the writer argues that math depends on context, not just the fixed rules we've all known it by. For example, rounding or real-world situations can make something like 2 + 2 equal 5 in practice. This stood out to me because it shows that numbers and metrics aren't always fully objective; they can be subjective. Things like ratings are shaped by how we define and measure them, which makes me more cautious about trusting data at face value, given how easily data can be manipulated.
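      The rounding case the article describes can be sketched in a few lines of Python (my own illustration with made-up measurements, not code from the article): two values that each round to 2 can have an exact sum that rounds to 5, so "2 + 2 = 5" depending on when you round.

```python
# Made-up measurements: each rounds to 2, but their exact sum rounds to 5.
a, b = 2.4, 2.4

rounded_inputs = round(a) + round(b)  # round first, then add
rounded_sum = round(a + b)            # add first, then round

print(rounded_inputs)  # 2 + 2 = 4
print(rounded_sum)     # round(4.8) = 5
```

      The "answer" depends on a measurement choice (when to round), which is exactly the article's point that context, not just fixed rules, shapes the result.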

    1. All data is a simplification of reality. We’ve talked about how we represent data on a computer, but let’s now step back and think about the nature of data itself.

      I thought this statement that all data is a simplification of reality was really interesting, and I was able to relate it to my own experience. It made me rethink how I usually treat numbers and datasets as objective truth. For example, when working with data, like querying a database with SQL, I’ve always focused on getting the “correct” metric. But this reading made me realize that even before analysis begins, subjective decisions are already being made, like what counts as a user, a transaction, or even an active account. That’s similar to the Twitter bot example, where changing the definition of a spam bot can completely change the final percentage. I also think this connects to product and tech decisions: in product management, metrics like engagement or retention seem straightforward, but they’re actually based on how we define user behavior. If those definitions are flawed or overly simplified, then the decisions we make based on them can be misleading too. So it’s not just a technical issue but an ethical one, because simplifications can shape real outcomes.
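      A minimal sketch of the definition problem, using entirely made-up accounts and made-up "bot" criteria (not the actual method from the Twitter example): the same data yields a different bot percentage depending purely on how "spam bot" is defined.

```python
# Hypothetical accounts -- fabricated data for illustration only.
accounts = [
    {"tweets_per_day": 500, "has_profile_photo": False},
    {"tweets_per_day": 300, "has_profile_photo": True},
    {"tweets_per_day": 20,  "has_profile_photo": True},
    {"tweets_per_day": 5,   "has_profile_photo": True},
]

def strict_bot(account):
    # Definition A: very high volume AND no profile photo.
    return account["tweets_per_day"] > 200 and not account["has_profile_photo"]

def loose_bot(account):
    # Definition B: high volume alone is enough.
    return account["tweets_per_day"] > 200

def bot_share(accounts, is_bot):
    # Fraction of accounts the chosen definition labels as bots.
    return sum(is_bot(a) for a in accounts) / len(accounts)

print(bot_share(accounts, strict_bot))  # 0.25 -- "25% bots"
print(bot_share(accounts, loose_bot))   # 0.5  -- "50% bots"
```

      Nothing about the underlying data changed; only the definition did, yet the headline metric doubled.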

  4. Apr 2026
  5. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      This article explains how dystopian-looking click farms use large numbers of phones and accounts to artificially boost likes, follows, and engagement on social media. This tricks the algorithm into promoting content and makes it seem more popular than it really is, which can spread misinformation, influence what people believe and see on the internet, and ultimately cheat the system for online visibility.

    1. [Morten] Bay found that 50.9% of people tweeting negatively about “The Last Jedi” were “politically motivated or not even human,” with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place. https://www.indiewire.com/2018/10/star-wars-last-jedi-backlash-study-russian-trolls-rian-johnson-1202008645/ [c11]

      This specific statistic about the backlash to "The Last Jedi" was really surprising to me, especially that over half the negative tweets weren't even from real people. This anecdote made me rethink how I interpret online reactions, knowing they could be fake. Personally, it also made me realize how easy it is to assume that what we see on social media is definitive, real public opinion, when it can be heavily manipulated by fake data. It makes me question how often I've formed opinions based on something that wasn't representative of real people. I'm thankful this was an eye-opening reminder to myself not to take online backlash at face value, because it might not be real.

  6. Mar 2026
    1. In this class, you will be building up a ‘toolbox’ for thinking about ethics.

      I really like this line in the reading because it expresses that ethics isn't something with just one right answer. I like this framing, but it also made me a little uncomfortable at the same time: if ethics is just a set of tools we choose from, does that mean people can pick whichever framework justifies what they already want to do? For example, a social media company could use a consequence-based approach to justify something like data collection, arguing it benefits people overall while ignoring the violation of privacy. Meanwhile, someone else could use a rights-based framework to argue the opposite. It almost feels like ethics can be bent, a flexibility that could be helpful yet also kind of dangerous. In my real life, like when working on group projects, sometimes people aren't actually disagreeing on the facts but on what "matters more" ethically. The toolbox idea captures this well, but it makes me wonder to what extent we allow people to use ethics to selectively support their own interests.

    1. There are many more ethics frameworks that we haven’t mentioned here. You can look up some more here.

      Out of curiosity, I ended up following the linked site to see other ethics frameworks and came across the Social Networking and Ethics page, which felt really relevant to this course. I found it interesting how it separates ethical impacts into direct, indirect, and structural categories. In practice, though, these categories blur together much more than the framework suggests: misinformation on social media isn't just a direct harm between users; it's also shaped by platform algorithms and structure, and amplified through users' interactions, an indirect harm. I'm curious whether treating these as distinct categories oversimplifies how responsibility is shared, like what we discussed in class about who is responsible for coding the Bluesky bot versus running it. Harm may seem to come from individual users, but in reality the design of the platform and its business models are just as responsible. So instead of thinking of these categories as separate, as the article does, I think it's more useful to see them as responsibilities that entail one another, which makes it easier to see how much power platforms have when certain behaviors show up on their sites.