6 Matching Annotations
  1. Last 7 days
    1. Metadata is information about some data. We often think of a dataset as consisting of the main pieces of data (whatever those are in a specific situation) and whatever other information we have about that data (the metadata). For example, if we think of a tweet’s contents (text and photos) as the main data of a tweet, then additional information such as the user, time, and responses would be considered metadata. If we download information about a set of tweets (text, user, time, etc.) to analyze later, we might consider that set of information the main data, and our metadata might be information about our download process, such as when we collected the tweet information, which search term we used to find it, and so on.

      One question I had in this section is whether metadata can affect people more strongly than the post itself. If someone sees a lot of likes or a blue checkmark first, will they trust the post more before they even read it carefully? This made me wonder how much our opinions on social media are shaped by the content, and how much they are shaped by the extra information around it.
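
      To make the distinction concrete, here is a minimal Python sketch of the two framings described above. Everything in it (field names, values) is hypothetical, not drawn from any real API:

      ```python
      # Framing 1: the tweet's contents are the main data...
      tweet_data = {
          "text": "Just saw a double rainbow!",
          "photos": ["rainbow.jpg"],
      }
      # ...and information *about* the tweet is metadata.
      tweet_metadata = {
          "user": "@example_user",
          "time": "2026-04-01T10:15:00Z",
          "replies": 12,
      }

      # Framing 2: the downloaded records as a whole are the main data...
      dataset = [{**tweet_data, **tweet_metadata}]
      # ...and the metadata describes the download process itself.
      collection_metadata = {
          "collected_at": "2026-04-02T09:00:00Z",
          "search_term": "#rainbow",
      }
      ```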

  2. social-media-ethics-automation.github.io
    1. Caroline Delbert. Some People Think 2+2=5, and They’re Right. Popular Mechanics, October 2023. URL: https://www.popularmechanics.com/science/math/a33547137/why-some-people-think-2-plus-2-equals-5/ (visited on 2023-11-24).

      I read this article about how numbers may seem clear and objective, even though they do not always reflect the real things they are supposed to measure. What stood out to me most was the part about sentiment ratings, IQ, and aggression scales. The article shows that even something as simple as 2+2=4 can become more complicated when the context changes. This made me think about social media too, because numbers like likes, views, and ratings often seem trustworthy even though they only show part of the picture.
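
      One concrete way this can happen (my own illustration, not an example taken from the article) is rounding: if each measurement is reported rounded but the total is computed from the true values, then “2 + 2 = 5” is honest arithmetic:

      ```python
      # Two measurements that each display as 2 after rounding to whole
      # numbers, but whose true sum displays as 5.
      a, b = 2.3, 2.4
      print(round(a), "+", round(b), "=", round(a + b))  # prints: 2 + 2 = 5
      ```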

  3. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL:

      I read C2, which explains how click farms work and how they create fake popularity online. What stood out to me most was that this system relies on real people, not just bots or software, to keep liking and following content across many devices. That connected to the chapter’s idea of “human computers” and made me see false engagement differently: it is not just about technology, it is also about the people doing this repetitive work behind the scenes. I found it a little disturbing to realize how easily likes, follows, and views can be manipulated, since many users, including me, often treat those numbers as signs of trust or popularity.

    1. 3.2.3. Corrupted bots. As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong in the Vice article How to Make a Bot That Isn’t Racist [c14]

      The example of Tay stood out to me because it showed how quickly a bot can learn harmful behavior online. I don't think this happened only because of racist users. It also seems like the bot was created without enough protection or clear limits on what it could learn. What shocked me most was that it only took one day for Tay to start repeating harmful ideas back to people. That makes this example feel less like a random mistake and more like a warning about what can happen when AI is released without enough care. This made me think that AI can easily reflect the worst parts of people when there are no strong limits.
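
      To see why missing limits matter, here is a toy sketch (my own illustration, not Microsoft’s actual design) of the difference between a bot that learns from every message and one that checks messages first. The `is_harmful` check is a hypothetical stand-in for the safeguards Tay lacked:

      ```python
      import random

      class NaiveChatBot:
          """Learns by storing every message users send it, unfiltered."""
          def __init__(self):
              self.corpus = []

          def learn(self, message):
              self.corpus.append(message)  # no checks on what gets learned

          def reply(self):
              # Parrots back whatever it was taught, good or bad.
              return random.choice(self.corpus) if self.corpus else "Hi!"

      class GuardedChatBot(NaiveChatBot):
          """The same bot, but it refuses to learn from flagged messages."""
          def __init__(self, is_harmful):
              super().__init__()
              self.is_harmful = is_harmful  # hypothetical moderation check

          def learn(self, message):
              if not self.is_harmful(message):
                  super().learn(message)

      # After a day of hostile input, the naive bot's vocabulary is mostly
      # the hostile input itself, roughly what happened to Tay.
      bot = NaiveChatBot()
      for msg in ["hello!", "[harmful message]", "[harmful message]"]:
          bot.learn(msg)
      print(bot.reply())  # two-in-three chance of repeating harmful input

      # The guarded bot never learns the flagged messages at all.
      guarded = GuardedChatBot(is_harmful=lambda m: m.startswith("["))
      for msg in ["hello!", "[harmful message]", "[harmful message]"]:
          guarded.learn(msg)
      print(guarded.reply())  # only "hello!" was ever learned
      ```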

  4. Apr 2026
    1. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast. That there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online.

      I don't think it is enough for creators to say that misuse isn't their fault. Even if developers can't control every use of a technology, they still make choices that can make harm more likely or less likely. I also think it matters that laws and society often can't keep up with new technology. When that happens, the people making it have even more responsibility to stop and think about possible harm before releasing it. Deepfakes are a good example, because a tool that can create realistic fake images or videos can easily be used for harassment, spreading false information, or violating someone’s privacy.

    1. resulting in a harmonious society.

      This section made me wonder what Confucian ethics would look like in online communities where people don't know each other personally. Can ideas like respect, sincerity, and social harmony still work in digital spaces that are often anonymous and fast-moving?