15 Matching Annotations
  1. Last 7 days
    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      What feels most concerning in these examples is that users have no reliable way to verify whether companies are following proper security practices. We are asked to trust platforms with highly sensitive information, yet when breaches happen, the consequences fall mostly on users rather than on the companies. This creates a serious imbalance in responsibility and risk.

    1. We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private

      The idea of context collapse really stood out to me here. Even when we don’t feel like we’re hiding anything unethical, we still expect different parts of our lives to stay in different social contexts. Social media often removes that boundary, which can make normal behavior feel risky or embarrassing once it’s exposed to a wider audience.

    1. The Reddit API lets you access just some of the data that Reddit tracks, but Reddit and other social media platforms track much more than they let you have access to.

      I found it interesting that APIs create a filtered version of reality for researchers and developers. When we analyze Reddit data through the API, it’s easy to forget that we are only seeing what the platform allows us to see, not the full picture of user behavior. This makes me wonder how data mining conclusions might change if hidden data—like deleted interactions or internal metrics—were included.
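
      To make this concrete, here is a minimal sketch of pulling posts through the Reddit API with the PRAW library. The credentials are placeholders, and the comments make the annotation's point: the API returns only the public fields Reddit chooses to expose.

      ```python
      # A minimal sketch of accessing Reddit data through its API via PRAW.
      # The credentials are placeholders; real ones come from registering an
      # app at https://www.reddit.com/prefs/apps.
      import praw

      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",          # placeholder
          client_secret="YOUR_CLIENT_SECRET",  # placeholder
          user_agent="annotation-demo by u/your_username",
      )

      # The API exposes public fields like title, score, and comment count.
      # It does not expose internal metrics such as view time, deleted
      # interactions, or per-user impression logs.
      for submission in reddit.subreddit("all").hot(limit=5):
          print(submission.title, submission.score, submission.num_comments)
      ```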

    1. Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might):

      What stood out to me is how much of this data is collected passively rather than intentionally shared. Actions like pausing on a post or scrolling speed feel almost subconscious, yet they can reveal a lot about a user’s interests or emotional state. This raises an ethical question about whether users truly understand how much they are communicating through behavior alone, not just through what they choose to post.
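
      Since "passive collection" can feel abstract, here is a hypothetical sketch of the kind of event record a platform's client app might emit when a user merely pauses on a post. Every field name is invented for illustration; real platforms use their own undocumented schemas.

      ```python
      # Hypothetical sketch of a passively collected interaction event.
      # All field names here are invented for illustration.
      import time
      from dataclasses import dataclass

      @dataclass
      class InteractionEvent:
          user_id: str
          post_id: str
          event_type: str   # e.g. "dwell", "scroll", "hover"
          duration_ms: int  # how long the post stayed on screen
          timestamp: float

      def log_dwell(user_id: str, post_id: str, shown_at: float) -> InteractionEvent:
          """Record how long a post was visible, even if the user never
          clicked, liked, commented, or otherwise chose to share anything."""
          return InteractionEvent(
              user_id=user_id,
              post_id=post_id,
              event_type="dwell",
              duration_ms=int((time.time() - shown_at) * 1000),
              timestamp=time.time(),
          )
      ```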

  2. Jan 2026
    1. One of the traditional pieces of advice for dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling.

      “Don’t feed the trolls” assumes that attention is the only reward trolls seek, but that does not always match how harassment works. In some cases, the goal seems less about reaction and more about control or persistence. This makes platform-level intervention more important than individual restraint.

    1. Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction.

      I find it interesting that trolling often relies on sincerity as camouflage. A troll’s post can look earnest enough to invite real engagement, which is what makes the disruption effective. This suggests that trolling exploits the basic expectation of good faith in online spaces.

    1. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).

      For Trump’s tweets, authenticity depends less on who physically typed the message than on whether the account clearly represents whose voice it is. If users believe they are reading Trump himself when the words actually come from his team, that mismatch weakens authenticity. Emotional tone also shapes what people perceive as “real.”
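
      The device split the quoted passage refers to used the "source" field that older Twitter API responses attached to each tweet, naming the client app that posted it. A toy sketch, with invented sample tweets standing in for real API output:

      ```python
      # Sketch of splitting tweets by the "source" field from older Twitter
      # API responses. The sample tweets below are invented stand-ins.
      tweets = [
          {"text": "Example staff-written tweet", "source": "Twitter for iPhone"},
          {"text": "Example personally written tweet", "source": "Twitter for Android"},
      ]

      campaign = [t for t in tweets if t["source"] == "Twitter for iPhone"]
      personal = [t for t in tweets if t["source"] == "Twitter for Android"]

      print(f"{len(campaign)} campaign tweet(s), {len(personal)} personal tweet(s)")
      ```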

    1. Authenticity is a concept we use to talk about connections and interactions when the way the connection is presented matches the reality of how it functions.

      Authenticity here seems less about being fully “real” and more about whether what is presented matches what is actually happening. As long as users understand the role someone is playing, the interaction can still feel authentic. Confusion, not performance itself, is what really damages trust.

    1. These pages had limited interaction, so you were more likely to load one thing at a time and look at a separate page for each post or piece of information.

      Reading about Web 1.0 made me realize how passive early internet use was compared to today. Back then, having a personal webpage felt more like displaying a digital résumé, while now social media is built around constant interaction and feedback.

      It makes me wonder whether this shift toward nonstop engagement has made communication more meaningful, or just more addictive.

    1. Books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread

      What I found interesting is how the technical limits of early media directly shaped how people communicated. For example, when books had to be copied by hand, only the most “valuable” or popular ideas survived and spread.

      This makes me think that even today, what goes viral is still deeply influenced by the platform’s constraints — except now those limits are algorithms and attention spans rather than physical labor.

    1. Think for a minute about consequentialism. On this view, we should do whatever results in the best outcomes for the most people.

      The idea of pernicious ignorance explains a lot about how social media decisions are made. By focusing only on visible data like engagement, we often ignore harm, privacy, or effects on marginalized groups. This makes ethical choices seem easier, but also less responsible.

    1. Because all data is a simplification of reality, those simplifications work well for some people and some situations but can cause problems for other people and other situations.

      This section made me realize that data systems are not neutral—they are built around assumptions about who users are. The examples of name length and gender options show how people who don’t “fit” the system are forced to adjust or misrepresent themselves. Often, the issue isn’t user error, but the limits of the data design.
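
      A hypothetical signup validator makes this point visible in code. Both limits below are invented for illustration, but they mirror the chapter's examples of name-length caps and binary-only gender fields.

      ```python
      # Hypothetical signup validation showing how design assumptions get
      # baked into data systems. Both limits are invented for illustration.
      ALLOWED_GENDERS = {"male", "female"}  # a binary-only dropdown
      MAX_NAME_LENGTH = 20                  # an arbitrary field-length cap

      def misfit_reasons(name: str, gender: str) -> list[str]:
          """Return the ways a user fails to 'fit' the system's assumptions."""
          problems = []
          if len(name) > MAX_NAME_LENGTH:
              # Forces people with long names to truncate or misrepresent them.
              problems.append(f"name longer than {MAX_NAME_LENGTH} characters")
          if gender not in ALLOWED_GENDERS:
              # Forces nonbinary users to pick an inaccurate option.
              problems.append("gender not among the predefined options")
          return problems

      # Not user error: the same person "fails" only because of the schema.
      print(misfit_reasons("A" * 25, "nonbinary"))
      ```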

    1. Examples of Bots

      The section on antagonistic bots was especially interesting to me. It’s concerning how bots can create the illusion of mass support or backlash, even when most real users don’t feel that strongly. This makes me think that bots don’t just add noise, but can actually change how people interpret public opinion.

    1. Definition of a bot

      I liked how this chapter clearly separates bots from things like recommendation algorithms or data analysis tools. Defining bots as programs that act through accounts helped me better understand why bots feel more “social” and sometimes more misleading than other forms of automation.
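
      Under that definition, a minimal bot is just a script that takes a social action through an account. A sketch using PRAW, with placeholder credentials and an invented subreddit and post:

      ```python
      # Minimal sketch of a bot in the chapter's sense: a program acting
      # through an account. Credentials, subreddit, and text are placeholders.
      import praw

      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",          # placeholder
          client_secret="YOUR_CLIENT_SECRET",  # placeholder
          username="your_bot_account",         # the account the bot acts through
          password="YOUR_PASSWORD",            # placeholder
          user_agent="demo-bot by u/your_bot_account",
      )

      # This posting step is what makes it a bot rather than a recommendation
      # algorithm or an analysis tool: it acts socially, as an account.
      reddit.subreddit("test").submit(
          title="Hello from a demo bot",
          selftext="This post was made by a script.",
      )
      ```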

    1. Ethics Frameworks

      I think the ethics frameworks section could also include justice as fairness, the framework associated with John Rawls. It asks whether rules are fair to everyone, especially to people who have less power or visibility. In the context of social media, it helps explain why platform rules should protect vulnerable users, not just benefit the majority.