12 Matching Annotations
  1. Last 7 days
    1. The part about Amazon Mechanical Turk is kind of eye-opening. It seems super efficient for companies to break down big tasks, but at the same time, it feels a bit weird that these workers are basically treated like human processing units without much protection.

    1. I never really thought about the difference between crowdsourcing and just regular outsourcing before reading this. It’s pretty interesting how the 'open call' part is what makes it unique, but it also makes me wonder how platforms actually filter out bad data when anyone can join in.

    1. It’s interesting how platforms like 4chan have such different ideas of “quality” content compared to mainstream sites. Even though they allow a lot of offensive material, they still draw the line at spam because it ruins the user experience in a boring, repetitive way. This shows that moderation is always tied to what a platform thinks will keep its core users engaged, not to some universal idea of good content.

  2. Feb 2026
    1. It’s really eye-opening to see how social media algorithms are designed to keep us scrolling, even when it’s bad for our mental health. I’ve definitely felt the pressure to keep up with others’ posts, and it’s helpful to understand that this isn’t just my own issue—it’s a feature of the platforms we use.

    1. The distinction between individual and systemic analysis in the context of recommendation algorithms really changed how I think about online bias. It’s easy to blame individual users or content creators for problematic content, but this chapter makes it clear that the systems and rules built into these platforms often play a much larger role in shaping outcomes. The example of Elon Musk blaming users for the algorithm’s behavior perfectly illustrates this issue, as it shifts responsibility away from the systemic design choices that drive content recommendations and onto the people who use the platform.

    1. The part about recommendation algorithms using location data from our IP addresses really stood out to me. It’s unsettling to think that platforms can use this information to suggest content based on what people near me are interacting with, and it makes me more aware of how much personal data is being collected without my explicit consent.

    1. One point that stood out to me is how data mining on social media often happens without users’ explicit consent, even when platforms claim to be transparent. This creates a concerning ethical gap because users may not realize how their casual interactions, like liking a post or following an account, are being aggregated and sold to third parties. It makes me wonder what more could be done to make these practices visible to the average user, so they can make more informed choices about their data.