The Bloomberg article reports that Twitter claims spam bots make up under 5% of its users, while critics argue the real number is higher. This shows how hard it is to measure activity on social media reliably. It connects to Chapter 4 because data is never completely objective—how you define something like a “bot” can change the result. I think this also affects trust: if different groups report very different numbers, it’s hard to know what is true. It makes me feel that social media data is less reliable than it looks.
One idea from Chapter 4 that stood out to me is the claim that all data is a simplification of reality. This made me realize how much social media reduces complex human behavior into simple numbers like likes, views, or follower counts. From my own experience using apps like TikTok and Instagram, it feels like people start valuing themselves based on these metrics, even though they don’t fully represent who they are. For example, a post might not get many likes, but that doesn’t mean it has no meaning or value. I think this simplification can be harmful because platforms treat these numbers as if they are objective truth, which can influence how algorithms promote content and how users judge themselves and others. It makes me question whether social media data is actually reflecting reality, or just shaping a distorted version of it.
One source from the bibliography that stood out to me is Sean Cole’s article about click farms. The article explains how companies hire large groups of real people to manually like, follow, and interact with content in order to artificially boost popularity. What surprised me is that this is not even fully automated—many “fake” engagements actually come from humans working in organized systems, which makes it harder to detect than bots. This connects to the chapter’s discussion of bots and influence, because it shows that manipulation online is not only done by algorithms but also by coordinated human labor. In my opinion, this makes the problem even more serious, since it blurs the line between real and fake activity and makes platforms harder to regulate.
Reading this chapter made me realize how powerful and subtle social media bots can be. I used to think bots were just obvious spam accounts, but the examples show that many bots can look very human and influence people without being noticed. This connects to what I learned before about media shaping behavior—bots can amplify certain ideas and make them seem more popular than they actually are. I think this is a little concerning, especially during elections or social movements, because people might unknowingly be influenced by automated systems. At the same time, I don’t think bots are always bad, since they can also provide useful services. A question I still have is: how can platforms realistically detect advanced bots without also wrongly flagging real users?