The part about Amazon Mechanical Turk is kind of eye-opening. It seems super efficient for companies to break down big tasks, but at the same time, it feels a bit weird that these workers are basically treated like human processing units without much protection.
social-media-ethics-automation.github.io
I never really thought about the difference between crowdsourcing and just regular outsourcing before reading this. It’s pretty interesting how the 'open call' part is what makes it unique, but it also makes me wonder how platforms actually filter out bad data when anyone can join in.
The Tumblr porn ban example really drives home how risky it is for platforms to make sudden, sweeping changes to their moderation policies.
It's interesting how platforms like 4chan have such different ideas of "quality" content compared to mainstream sites. Even though they allow a lot of offensive material, they still draw the line at spam because it ruins the user experience in a boring, repetitive way. This shows that moderation is always tied to what a platform thinks will keep its core users engaged, not to some universal idea of good content.
The idea that social media can create these fake, one-sided relationships really resonated with me.
It’s really eye-opening to see how social media algorithms are designed to keep us scrolling, even when it’s bad for our mental health. I’ve definitely felt the pressure to keep up with others’ posts, and it’s helpful to understand that this isn’t just my own issue—it’s a feature of the platforms we use.
The distinction between individual and systemic analysis in the context of recommendation algorithms really changed how I think about online bias. It's easy to blame individual users or content creators for problematic content, but this chapter makes it clear that the systems and rules built into these platforms often play a much larger role in shaping outcomes. The example of Elon Musk blaming users for the algorithm's behavior perfectly illustrates this issue, as it shifts responsibility away from the systemic design choices that drive content recommendations and onto the people who use the platform.
The part about recommendation algorithms using location data from our IP addresses really stood out to me. It’s unsettling to think that platforms can use this information to suggest content based on what people near me are interacting with, and it makes me more aware of how much personal data is being collected without my explicit consent.
The section on how data mining can amplify echo chambers and polarization really resonated with me.
One point that stood out to me is how data mining on social media often happens without users’ explicit consent, even when platforms claim to be transparent. This creates a concerning ethical gap because users may not realize how their casual interactions, like liking a post or following an account, are being aggregated and sold to third parties. It makes me wonder what more could be done to make these practices visible to the average user, so they can make more informed choices about their data.