38 Matching Annotations
  1. Jun 2025
    1. As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate. We hope with this you can be a more informed user of social media, better able to participate, protect yourself, and make it a valuable experience for you and others you interact with. For example, you can hopefully recognize when someone is intentionally posting something bad or offensive (like the bad cooking videos we mentioned in the Virality chapter, or an intentionally offensive statement) in an attempt to get people to respond and spread their content. Then you can decide how you want to engage (if at all) given how they are trying to spread their content.

      I’ve definitely noticed how easy it is to fall for engagement bait, especially on Instagram Reels or TikTok. Sometimes people purposely post stuff that’s weird or just plain wrong because they know the comments will boost it. I used to reply a lot, but now I realize that just helps their content spread more. This chapter made me think more about when to just scroll and ignore instead of giving it attention.

  2. social-media-ethics-automation.github.io
    1. Ted Chiang. Will A.I. Become the New McKinsey? The New Yorker, May 2023. URL: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey (visited on 2023-12-10).

      The article by Ted Chiang brings up how AI might replace consulting firms like McKinsey, which reminded me of how social media algorithms already function like advisors, deciding what we see. It made me think about how platforms have that kind of power over our decisions and emotions, and most people don't even notice it.

    1. What if social media sites were governed by their users instead of by shareholders (e.g., governed by the subjugated instead of the colonialists)? How would users participate in decision-making?

      If social media were governed by users instead of shareholders, I think platforms would feel completely different. People would probably vote on major decisions like algorithm changes, moderation rules, or new features. There could be surveys, forums, or even elected user reps who speak for different communities. It would take more time and effort, but the platform would likely be more fair. Right now most decisions are based on what makes money, not what actually helps users.

  3. social-media-ethics-automation.github.io
    1. John Smith (explorer). December 2023. Page Version ID: 1189283105. URL: https://en.wikipedia.org/w/index.php?title=John_Smith_(explorer)&oldid=1189283105 (visited on 2023-12-10).

      The Wikipedia page about John Smith reminded me how history is usually written from the perspective of people in power. It connects to the chapter because most social media platforms are also controlled by a small group that benefits the most. If users had more say, it would be like shifting the narrative to include voices that usually get ignored. It is not just about ownership, it is about whose experience shapes the rules.

    1. What if government regulations said that social media sites weren’t allowed to make money based on personal data / targeted advertising? What other business models could they use? How would social media sites be different?

      If social media sites were not allowed to make money from personal data or targeted ads, they would probably switch to paid subscriptions or charge for extra features. I think the experience might get better because the platform would not be designed just to keep people scrolling. But it would also mean that not everyone could afford to use it, and smaller creators might not get as much attention. It would make social media feel more like a product instead of something free to everyone.

  4. social-media-ethics-automation.github.io
    1. Alaska Permanent Fund. December 2023. Page Version ID: 1187862782. URL: https://en.wikipedia.org/w/index.php?title=Alaska_Permanent_Fund&oldid=1187862782 (visited on 2023-12-10).

      The Alaska Permanent Fund Wikipedia page made me think about how data could be treated more like a shared resource. If companies are making money off of everyone’s data, maybe users should get something in return. It connects to the chapter because it shows that there are other ways to think about value and who deserves a part of it, not just the platform making money.

  5. May 2025
    1. What do you consider to be the most important factors in making an instance of public shaming good (if you think that is possible)?

      I think public shaming can sometimes be justified, but it depends on a lot of things. It matters who is being called out, what they did, and whether they actually have power or influence. If someone hurts others and never takes accountability, I think it can be fair to call them out publicly. But if it is just someone making a small mistake or a private person being dragged online, that feels unnecessary. I think intent and scale both matter. It should not turn into a mob trying to ruin someone's whole life over one post.

  6. social-media-ethics-automation.github.io
    1. The Onion. Nation Demands Fresh Celebrity Meat. The Onion, September 2009. URL: https://www.theonion.com/nation-demands-fresh-celebrity-meat-1819571041 (visited on 2023-12-10).

      The Onion article was obviously satire, but it still made a point. It joked about how the public is always looking for someone new to cancel or drag, especially celebrities. That connects to the chapter because it shows how public shaming has become entertainment for some people. Even if someone deserves criticism, the way people pile on can go too far and turn it into a trend instead of something meaningful.

    1. Do you think there are ways of changing how quote posts work that would reduce harassment (e.g., changing who can do it, who can view it, whether the quoted post is displayed above the new comment or after)?

      I think quote posts can be useful but also really toxic depending on how people use them. Sometimes they’re just used to call someone out or make fun of them, and the original person ends up getting harassed. One idea could be letting users turn off the option for their posts to be quoted. Or maybe make it so quote posts do not show up to everyone unless both users follow each other. That could help limit how often random people get targeted.

  7. social-media-ethics-automation.github.io
    1. Stochastic terrorism. October 2023. Page Version ID: 76245726. URL: https://en.wiktionary.org/w/index.php?title=stochastic_terrorism&oldid=76245726 (visited on 2023-12-10).

      The Wiktionary page on stochastic terrorism talked about how public comments can be used to encourage violence without directly saying it. That reminded me of how quote posts sometimes work. People can quote something and add a rude comment, and even if they are not directly attacking someone, it can still lead to followers harassing the person. It connects to the chapter because it shows how design choices can lead to real harm, even if that is not the intention.

    1. In what ways do you think you’ve participated in any crowdsourcing online?

      I think I’ve participated in crowdsourcing without really thinking about it. Things like answering polls, tagging photos, or even writing reviews all count. I’ve left reviews on food apps or shared feedback when a website asked for it, and that helps companies improve their services. Even something like using CAPTCHA helps train AI. It’s kind of weird how small actions like that actually add up and support bigger systems.

  8. social-media-ethics-automation.github.io
    1. WIRED. How to Not Embarrass Yourself in Front of the Robot at Work. September 2015. URL: https://www.youtube.com/watch?v=ho1RDiZ5Xew (visited on 2023-12-08).

      The WIRED video was kind of funny but also made a good point. It talked about how people interact with robots in the workplace and how important it is to understand their role. It made me think about how crowdsourcing also trains machines by using human input. The more we interact with tech, the more it learns from us. It connects to the chapter because we are all shaping how automation works just by doing everyday things online.

    1. What dangers are posed with languages that have limited or no content moderation? What do you think Facebook should do about this?

      When there is no content moderation in certain languages, it makes it easy for hate speech, fake news, and harmful posts to spread without being caught. A lot of people rely on social media for information, so if nothing is being checked, it can be dangerous. I think Facebook should hire more people who actually speak those languages and understand the culture. Relying on auto-translation is not enough. If a platform works in a language, it should also be moderated in that language.

  9. social-media-ethics-automation.github.io
    1. Wikipedia:Administrators. November 2023. Page Version ID: 1187624916. URL: https://en.wikipedia.org/w/index.php?title=Wikipedia:Administrators&oldid=1187624916 (visited on 2023-12-08).

      The Wikipedia page about administrators showed how moderation works differently depending on the platform. On Wikipedia, admins are just regular users who help enforce rules. It is based on trust and community input. That made me think that maybe social media platforms could also use a system like that, especially in areas where they do not have enough moderators. It could help fill the gap where content is being missed.

  10. social-media-ethics-automation.github.io
    1. Copypasta. May 2009. URL: https://knowyourmeme.com/memes/copypasta (visited on 2023-12-08).

      The Copypasta page on Know Your Meme was funny to read, but it also made me think about how internet culture works. Something starts as a joke, then gets copied so much that no one remembers where it came from. Some copypastas are harmless, but others can be offensive or used to harass people. It connects to the chapter because even when something is repeated as a joke, it can still break rules or make people uncomfortable depending on how it’s used.

    1. Have you ever reported a post/comment for violating social media platform rules?

      I have reported a post before, but honestly not often. It was a comment on TikTok that was clearly racist and got a lot of likes, which made it worse. I don’t usually report things unless they’re really bad, because most of the time I assume nothing will happen. But that one just felt too much to ignore. Reading this chapter made me think more about how the report systems are kind of hidden or hard to trust. It feels like people don’t know what counts as “bad enough” to report unless it’s super obvious.

    1. In what ways have you found social media bad for your mental health and good for your mental health?

      Social media has definitely affected my mental health in both good and bad ways. It helps me stay connected to people, especially when I feel isolated or just want to check in without actually texting anyone. But it also makes it easy to compare myself to others. Sometimes I scroll through posts and feel like everyone else is doing more or looks better or has their life together, even when I know it's just a highlight reel. It messes with my mood without me realizing it sometimes.

  11. social-media-ethics-automation.github.io
    1. Anya Kamenetz. Facebook's own data is not as conclusive as you think about teens and mental health. NPR, October 2021. URL: https://www.npr.org/2021/10/06/1043138622/facebook-instagram-teens-mental-health (visited on 2023-12-08).

      The NPR article about Facebook's data on teen mental health stood out to me because it showed how complicated this topic actually is. Everyone online makes it seem like social media is either completely terrible or completely fine, but the article says the data is mixed. That made me think more about how people’s experiences are different. Just because something is bad for one person doesn’t mean it affects everyone the same way. It connects back to the chapter because it shows how easy it is to oversimplify big issues when it comes to tech and mental health.

    1. How do you think attribution should work when copying and reusing content on social media (like if you post a meme or gif on social media)? When is it ok to not cite sources for content? When should sources be cited, and how should they be cited? How can you participate in cultural exchange without harmful cultural appropriation?

      I think reposting a meme or gif is usually fine without citing if it’s already gone viral and everyone’s sharing it. But if someone made original art, a specific video, or something that clearly took effort, they should be credited. It doesn’t have to be formal, but tagging them or mentioning their username shows respect. It’s about knowing when your repost could affect the creator, especially if they’re trying to grow or get recognition.

  12. social-media-ethics-automation.github.io
    1. Evolution of cetaceans. November 2023. Page Version ID: 1186568602. URL: https://en.wikipedia.org/w/index.php?title=Evolution_of_cetaceans&oldid=1186568602 (visited on 2023-12-08).

      At first the Wikipedia article about the evolution of cetaceans seemed random, but it reminded me how content online constantly shifts and changes. One meme can go through a hundred versions until no one remembers where it started. That’s what makes internet culture fast and fun, but it also makes it really easy to forget where things come from. It made me think that just because something is popular doesn’t mean the original source doesn’t matter.

    1. What experiences do you have of social media sites making particularly bad recommendations for you?

      The part about how recommendation algorithms can trap you in a filter bubble definitely reminded me of my own experience on YouTube and TikTok. Once I liked a few videos on a topic, it felt like that’s all I ever saw. It’s kind of scary how fast the algorithm shapes what you think about just by showing you the same stuff over and over. It made me think more about how these systems aren’t neutral; they guide what people care about without them realizing it.

  13. social-media-ethics-automation.github.io
    1. Fair Sentencing Act. May 2023. Page Version ID: 1153436887. URL: https://en.wikipedia.org/w/index.php?title=Fair_Sentencing_Act&oldid=1153436887 (visited on 2023-12-07).

      I looked at the Fair Sentencing Act Wikipedia page and was kind of surprised to see it used in this chapter. At first, it didn’t seem connected, but then I realized the link is about fairness and how systems can create or reduce bias. It made sense with the idea that recommendation algorithms can be unfair if they amplify certain voices and ignore others. Just like the justice system needed reform, these algorithms do too.

  14. Apr 2025
    1. We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.

      The question about whether things are actually getting better for disabled people made me pause. I feel like on the surface, it seems like there’s progress because we see more features labeled “accessible” or “inclusive,” but a lot of it still leaves people out. Some new tech helps certain disabilities but ignores others. It also depends on where you live and what support you have. It reminded me that saying things are “better” doesn't mean they’re actually good or fair for everyone.

  15. social-media-ethics-automation.github.io
    1. Ash. Autism is NOT A Disability. July 2022. URL: https://www.autism360.com/autism-is-not-a-disability/ (visited on 2023-12-07).

      I read the article “Autism is NOT a Disability” and thought it gave a really different view. The author talks about autism as a different way of thinking, not something that needs to be fixed. That stood out to me because it shows how our idea of “accessible” is shaped by what we think is normal. It connects back to the chapter because it shows how progress isn’t just about making new tools — it’s also about changing how we see people in the first place.

    1. Right to privacy. November 2023. Page Version ID: 1186826760. URL: https://en.wikipedia.org/w/index.php?title=Right_to_privacy&oldid=1186826760#United_States (visited on 2023-12-05).

      The Wikipedia page about the right to privacy made me think about how privacy is not just about hiding stuff. It is also about having control over what gets shared. That made me think about how a lot of apps take your data even when you do not know it. It connects to the chapter because so many platforms are designed to collect information without making it clear.

    1. 9.5.2. Imagine

      I thought it was interesting how the chapter talked about apps assuming people are fine sharing their personal info. I have noticed that a lot too. Some apps will not even let you use them unless you give them your phone number or location. It made me realize how a lot of designs do not think about people who are more private. It feels unfair when there are no real options to say no without losing access to the app.

    1. What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?

      Looking through my ad profile made me realize how much of my online activity is being tracked even when I forget about it. Some of the topics made sense and lined up with things I actually searched, but other parts felt completely random. What surprised me most was how confident the profile felt, even though some of it was way off. It made me think about how much companies like Google are assuming about me, and how little I really know about what data they collect or how they’re using it. I don’t think I’m comfortable with it, even if I already expected it. It just feels strange to be watched that closely, especially when it’s not always accurate.

  16. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-

      The Vox article about Facebook collecting data on people who don’t even have accounts was honestly unsettling. I knew they collected a lot, but I didn’t realize it could go that far. It made me think about how privacy online doesn’t really exist anymore. Even choosing not to sign up for something doesn’t mean your data is safe. This reminded me of the section in the chapter that talked about how users are not always in control of their own information. It feels like no matter what you do, your actions online are still being watched and used for something.

    1. Have you witnessed different responses to trolling? What happened in those cases? What do you think is the best way to deal with trolling?

      This chapter made me think about how often trolling gets ignored or laughed off. I’ve seen it happen in comment sections and even in group chats, where someone says something clearly meant to mess with people, and others either ignore it or try to play along. But I’ve also seen times where it genuinely upset someone and no one really knew how to handle it. I don’t think ignoring trolls always works. Sometimes it just gives them more control over the situation. It made me wonder if platforms should have stronger tools for this or if that would just lead to trolls getting more creative in harmful ways.

  17. social-media-ethics-automation.github.io
    1. Know Your Meme. Know Your Meme: Three Wolf Moon. May 2011. URL: https://www.youtube.com/watch?v=TbNQ746eLiU (visited on 2023-12-05).

      I watched the “Three Wolf Moon” video from Know Your Meme and it made me realize how quickly the internet can take something random and turn it into a huge joke. The sarcastic reviews on the shirt were funny, but it also showed how trolling can look playful instead of mean. Before this chapter, I would not have thought of that as trolling, but now I can see how it fits. It also made me think about how sometimes harmless jokes turn into something bigger and not everyone finds them funny. The line between fun and harmful is not always clear.

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media? Do you feel like those changes or expressions are authentic to who you are, do they compromise your authenticity in some way?

      I definitely act differently depending on the platform. On Instagram, I try to post when I feel like I look good or when I want to show something that fits a certain vibe. But in close friends stories or private messages, I’m way more unfiltered. I don’t think either version is fake, but sometimes it feels like I’m only showing a small piece of who I am. It makes me wonder if I even know what “authentic” really means online anymore. The way I express myself has changed a lot over time, and I don’t know if that’s just growing up or if I’ve just adapted to what people expect to see.

  18. social-media-ethics-automation.github.io
    1. Pretendian.

      I read the Wikipedia article about Pretendians and honestly it was frustrating. It’s wild how people can just make false claims about being Indigenous online and then benefit from it. What stuck with me was how much harm that causes, not just by spreading misinformation, but by taking opportunities and visibility away from real Indigenous people. It reminded me of how easy it is to perform an identity online and have people believe it, especially when there’s no way to verify who someone actually is. It ties back to what the chapter says about how identity works on social media, and how it can be both powerful and really dangerous depending on how it’s used.

  19. social-media-ethics-automation.github.io
    1. What are Affordances?

      The source about affordances reminded me of when we talked about this in INFO 200. I remember learning about the difference between real and perceived affordances, but this went more in depth. The examples about false affordances stood out to me, especially the one about underlined text that isn’t actually clickable. I’ve run into that before, and it always throws me off. It seems like a small thing, but it affects how easy or frustrating something is to use. This made me think more about how important affordances are in building a smooth and trustworthy interface.

    1. infinite scroll

      The section about infinite scroll made me think about how much time I spend on Instagram Reels without realizing it. I’ll go on the app to check one thing, and then end up scrolling for almost an hour. It surprised me to learn that infinite scroll was invented intentionally to reduce friction, and that the person who created it regrets it now. That part really made me pause. It made me think about how features that seem harmless can change our habits in ways we don’t even notice. I also wonder how much responsibility designers have when something they create ends up making it harder for people to disconnect.
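      The design difference the note describes can be sketched in a few lines of Python. This is purely illustrative (the function names and data are invented, not from any real platform): a paginated feed runs out of pages and gives the user a natural stopping cue, while an infinite-scroll feed never raises the "end of content" signal at all.

      ```python
      # Toy contrast between a paginated feed and infinite scroll.
      # All names and data here are invented for illustration.
      import itertools

      def paginated_feed(posts, page_size=3):
          """Classic pagination: yields pages, then stops -- a natural stopping cue."""
          for i in range(0, len(posts), page_size):
              yield posts[i:i + page_size]

      def infinite_feed(posts):
          """Infinite scroll: recycles content forever, so there is no 'last page'."""
          return itertools.cycle(posts)

      # The paginated feed is finite: 7 posts at 3 per page gives 3 pages, then done.
      pages = list(paginated_feed(list(range(7))))

      # The infinite feed never terminates; the reader has to choose to stop.
      first_five = list(itertools.islice(infinite_feed(["a", "b"]), 5))
      ```

      The friction the inventor removed is exactly the `StopIteration` moment: with pagination the software tells you when you are done; with infinite scroll, stopping becomes entirely the user's job.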

  20. social-media-ethics-automation.github.io
    1. Steven Tweedie. This disturbing image of a Chinese worker with close to 100 iPhones reveals how App Store rankings can be manipulated. February 2015. URL:

      Reading this made me think about how fake engagement really messes with what feels real online. It’s crazy how people spend so much just to make something look popular, even when it’s not. I feel like this connects to bots too, because a lot of people use bots just to make themselves seem more successful online. Things like this make it hard to trust what we see on social media sometimes.

    1. Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers.

      This section made me think about how bots kind of blur responsibility online. When bots spread harmful content, people usually blame the platform first, but it feels like the blame is shared. We can acknowledge the person who made the bot, the one who runs it, and the platform letting it exist. Bots kind of give people an excuse to avoid responsibility by saying it wasn’t fully their doing, but I feel like no matter what, there’s always a human choice behind it.
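      The split between author, runner, and platform that this note describes can be made concrete with a tiny Python sketch. Everything here is hypothetical (invented function names, no real platform API): the point is only that the person who writes the bot fixes *how* it behaves, while the person who runs it supplies *what* it says, so their intentions can differ completely.

      ```python
      # Hypothetical sketch of the author/runner split in bot responsibility.
      # No real platform API is used; all names and messages are invented.

      def make_reply_bot(messages):
          """The AUTHOR's contribution: defines how a reply is chosen,
          but not what the replies actually say."""
          def reply(post_text):
              # Pick a canned reply deterministically from the post's length.
              return messages[len(post_text) % len(messages)]
          return reply

      # The RUNNER's contribution: the same code serves opposite intentions
      # depending on which messages it is handed.
      helpful_bot = make_reply_bot(["Thanks for sharing!", "Interesting point."])
      spam_bot = make_reply_bot(["Buy now!!", "Click this link!"])
      ```

      A platform moderating `spam_bot` would see identical code to `helpful_bot`; only the runner's choice of messages (and scheduling) differs, which is why blame ends up shared across author, runner, and platform.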

    1. How often do you hear phrases like “social media isn’t real life”?

      I hear this phrase a lot, but I don’t think it reflects how deeply social media is tied to real life now. I've noticed that many people feel pressure to appear more successful or wealthier online, even if it’s not true, and that creates unrealistic expectations for others. Even if the content is filtered or exaggerated, the emotional and social effects, like comparison, insecurity, or validation, are still very real. So in many ways, what happens online is real life.

    1. In acting virtuously, you are training yourself to bec

      While virtue ethics emphasizes personal character and building good habits, I think it’s important to consider how someone’s environment affects their ability to act virtuously. For example, if someone grows up in a space where being honest or speaking out is punished, it can be difficult to practice those virtues, even if they want to. I think ethics should account for how much social pressure or context influences our choices, not just individual traits.