25 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. The Know Your Meme source explains that copypasta is copied text spread repeatedly by people, not always by bots. I think this matters for moderation because harmful spam is not only automated. Human users can also flood platforms, annoy others, or spread harmful messages, so moderation tools need to consider behavior and context.

    2. I think moderation is necessary, even on platforms that claim to support free speech. Without rules, spam, harassment, and offensive content can make people leave. But I also think moderation should be transparent, because users deserve to know why something is removed or hidden.

  3. social-media-ethics-automation.github.io
    1. The NPR article made me think more about company responsibility. It says Facebook’s own research found Instagram could harm some teens’ mental health, especially around body image, but the company still publicly defended the platform. That feels unethical because users deserve honesty when a product may affect vulnerable teenagers.

    2. One thing I found interesting is that social media itself is not always “bad” for mental health. The chapter separates unhealthy activities from healthier ones, which feels more realistic. I think the problem is not only screen time, but what we do online: whether we compare ourselves and doomscroll, or seek support and connection.

  4. May 2026
  5. social-media-ethics-automation.github.io
    1. I think the part about going viral is interesting because it shows that fame online is not always positive. Many people want attention, but sudden attention can feel stressful or scary. It also connects to recommendation algorithms because platforms reward posts that get strong reactions. I wonder if social media should give users more control when their posts suddenly go viral.

    2. The source about “a lie traveling faster than truth” connects strongly to social media virality. I think it is interesting that this idea existed long before the internet, but platforms now make it happen much faster. It makes me wonder whether viral sharing rewards speed more than accuracy, which can make misinformation harder to stop.

  6. social-media-ethics-automation.github.io
    1. This TechCrunch article surprised me because Facebook reused phone numbers that users had provided for security in ways that undermined their privacy. I think this is unfair because users may trust a platform more when it asks for two-factor authentication. Security information should not be reused for search, ads, or tracking without clear consent.

    2. I think recommendation algorithms are useful, but also kind of scary. They can help people find content they enjoy, but they can also trap people in filter bubbles where they only see ideas they already agree with. This can make people more extreme or isolated. Platforms should care about impact, not just engagement.

  7. Apr 2026
  8. social-media-ethics-automation.github.io
    1. One source I looked at explains the social model of disability, which says disability is mainly caused by society, not just a person’s body. For example, barriers like stairs or negative attitudes create disability more than the impairment itself.

    2. The chapter made me think accessibility should not be optional. Many apps ignore people with disabilities, which feels unfair. Features like captions help everyone, not just some users. I think designers should include accessibility from the start. Why do companies still treat it as less important if it benefits so many people?

    3. One source from WIRED discusses how companies collect personal data not only from what users share, but also from their behavior across devices. It explains that even people who avoid certain platforms can still have data collected through things like cameras or tracking systems. This made me realize privacy is almost impossible to fully protect today. Even if I try to be careful, my data is still being collected indirectly, which feels concerning and unfair.

  9. social-media-ethics-automation.github.io
    1. This chapter made me realize privacy is more about control than secrecy. Even when I agree to terms, I don’t fully understand how my data is used. It feels like a trade-off between convenience and safety. As in medieval systems, power is unequal, and users today have limited control over large tech companies.

  10. social-media-ethics-automation.github.io
    1. One detail from this Vox article on Facebook shadow profiles that stood out to me is that Facebook can collect data on people who don’t even have an account, using things like browsing history and information from their friends. This makes me question how meaningful “consent” really is online, since even avoiding a platform doesn’t fully protect your data. It connects to our class discussion about data mining being automated and often invisible to users.

    2. One thing that stood out to me is how much data platforms can collect just from small actions like likes or watch time. From my experience with TikTok, the algorithm quickly learns my interests, which feels both convenient and a little invasive. It connects to automation because these systems constantly collect and analyze data in the background. It makes me question whether users really understand or consent to how much information is being gathered.

  11. social-media-ethics-automation.github.io
    1. One source from the bibliography that I found interesting is the Scientific American article about trolling culture. The article explains how trolling is not just random bad behavior, but actually part of a larger online subculture that can influence mainstream media. For example, trolls sometimes create fake or exaggerated content just to see if news outlets will pick it up, which shows how easily misinformation can spread. What surprised me is that trolling is not always meaningless; it can actually shape real-world conversations. This made me realize that trolling is more powerful than I thought. It connects to what we learned in class about attention and engagement, because trolls are able to manipulate both people and media systems. It also made me question how reliable online information really is, since even professional media can sometimes fall for these tactics. Overall, this source made me think that trolling is not just about individuals being annoying, but about how the internet system allows misleading content to spread and gain attention.

    2. One source I found interesting is the Scientific American article about trolling culture. It explains that trolling is not just random behavior, but part of a subculture that can trick mainstream media. Trolls sometimes create fake stories to see if news outlets will believe and share them. This surprised me because it shows trolling can have real impact, not just online but in real life. It made me question how reliable online information is and how easily people can be misled.

    3. One idea that stood out to me is that trolls mainly want attention, not real discussion. I’ve seen this happen a lot online: posts that are clearly meant to provoke people get the most replies. It feels like social media almost rewards trolling because more engagement means more visibility. I also thought about the advice to ignore trolls. It makes sense, but it’s hard to do. Sometimes I feel like I should respond, especially when someone spreads wrong information. So I’m not sure if ignoring is always the best solution. The idea of using bots to reply to trolls is interesting, but also a bit weird. It makes me wonder if using automation to fix online problems might actually create new ones.

  12. social-media-ethics-automation.github.io
    1. One source from the bibliography that stood out to me is the analysis of Trump’s tweets from Variance Explained. The article uses data analysis to show that tweets from Android devices were more negative and aggressive, while iPhone tweets were more neutral and informational. I find this interesting because it suggests that even something as simple as the device used can reveal differences in authorship and tone. It also connects to the idea of authenticity in the chapter: what seems like one “authentic voice” online may actually be produced by multiple people. This makes me question how often we assume a single identity behind an account when that might not be true.
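
To make the device comparison concrete, here is a minimal Python sketch of the idea, not the article’s actual method; every tweet and every word in the list below is invented purely for illustration.

```python
# Toy version of the device comparison: count "negative" words per tweet,
# then compare the average rate for each device. All data here is made up.
from collections import defaultdict

NEGATIVE_WORDS = {"failing", "dishonest", "crooked", "weak", "sad"}  # invented

tweets = [
    {"source": "Android", "text": "The failing press is so dishonest. Sad!"},
    {"source": "Android", "text": "Crooked and weak opponents everywhere."},
    {"source": "iPhone", "text": "Join me tonight in Ohio at 7pm."},
    {"source": "iPhone", "text": "Thank you for your support!"},
]

negative_counts = defaultdict(int)
word_counts = defaultdict(int)
for tweet in tweets:
    words = [w.strip(".,!?").lower() for w in tweet["text"].split()]
    negative_counts[tweet["source"]] += sum(w in NEGATIVE_WORDS for w in words)
    word_counts[tweet["source"]] += len(words)

for source in word_counts:
    rate = negative_counts[source] / word_counts[source]
    print(f"{source}: {rate:.0%} of words are negative")
```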

  13. social-media-ethics-automation.github.io
    1. One idea that stood out is how anonymity can actually support authenticity. At first, that seems contradictory, but it makes sense because people may feel safer expressing their true thoughts or identities when they are not tied to their real name. I think this creates a challenge for platforms, since anonymity can also lead to harmful behavior. It makes me wonder whether platforms can balance authenticity and anonymity at the same time.

    2. One idea from this chapter that stood out to me is the tension between authenticity and anonymity online. I found it interesting that the chapter suggests anonymity can sometimes support authenticity rather than undermine it. At first, that feels counterintuitive because we usually think of anonymous accounts as less trustworthy or even deceptive. But thinking about it more, I agree that anonymity can allow people to express parts of their identity they might hide in real life due to fear of judgment or consequences. For example, people may share honest opinions, personal struggles, or marginalized identities more openly when they are anonymous. At the same time, this creates a difficult balance for platforms, because anonymity can also enable harmful behavior. It makes me wonder: is it even possible for a platform to encourage “authenticity” without limiting anonymity, or are those two goals always in tension?

  14. social-media-ethics-automation.github.io
    1. The Bloomberg article says Twitter claims spam bots are under 5% of users, but some people argue the number is higher. This shows how hard it is to measure data on social media. It connects to Chapter 4 because data is not completely objective—how you define something like a “bot” can change the result. I think this also affects trust. If different groups give very different numbers, it’s hard to know what is true. It makes me feel that social media data is less reliable than it looks.
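
As a minimal sketch of how much the definition matters, here are two invented “bot” definitions applied to the same made-up accounts; both thresholds sound plausible, but they produce very different headline percentages.

```python
# Two "bot" definitions applied to the same accounts. All accounts and
# thresholds are invented for illustration only.
accounts = [
    {"posts_per_day": 300, "followers": 2},
    {"posts_per_day": 40, "followers": 15},
    {"posts_per_day": 5, "followers": 800},
    {"posts_per_day": 90, "followers": 3},
]

def is_bot_strict(account):
    # Strict definition: only extremely high-volume accounts count.
    return account["posts_per_day"] > 200

def is_bot_loose(account):
    # Loose definition: heavy posting with few followers also counts.
    return account["posts_per_day"] > 30 and account["followers"] < 20

for definition in (is_bot_strict, is_bot_loose):
    share = sum(map(definition, accounts)) / len(accounts)
    print(f"{definition.__name__}: {share:.0%} of accounts flagged as bots")
```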

    2. One idea from Chapter 4 that stood out to me is the claim that all data is a simplification of reality. This made me realize how much social media reduces complex human behavior into simple numbers like likes, views, or follower counts. From my own experience using apps like TikTok and Instagram, it feels like people start valuing themselves based on these metrics, even though they don’t fully represent who they are. For example, a post might not get many likes, but that doesn’t mean it has no meaning or value. I think this simplification can be harmful because platforms treat these numbers as if they are objective truth, which can influence how algorithms promote content and how users judge themselves and others. It makes me question whether social media data is actually reflecting reality, or just shaping a distorted version of it.

  15. social-media-ethics-automation.github.io
    1. One source from the bibliography that stood out to me is Sean Cole’s article about click farms. The article explains how companies hire large groups of real people to manually like, follow, and interact with content in order to artificially boost popularity. What surprised me is that this is not even fully automated—many “fake” engagements actually come from humans working in organized systems, which makes it harder to detect than bots. This connects to the chapter’s discussion of bots and influence, because it shows that manipulation online is not only done by algorithms but also by coordinated human labor. In my opinion, this makes the problem even more serious, since it blurs the line between real and fake activity and makes platforms harder to regulate.

    2. Reading this chapter made me realize how powerful and subtle social media bots can be. I used to think bots were just obvious spam accounts, but the examples show that many bots can look very human and influence people without being noticed. This connects to what I learned before about media shaping behavior: bots can amplify certain ideas and make them seem more popular than they actually are. I think this is a little concerning, especially during elections or social movements, because people might unknowingly be influenced by automated systems. At the same time, I don’t think bots are always bad, since they can also provide useful services. A question I still have is: how can platforms realistically detect advanced bots without also wrongly flagging real users?