8 Matching Annotations
  1. Last 7 days
    1. With that in mind, you can look at a social media site and think about what pieces of information could be available and what actions could be possible. Then for these you can consider whether they are: low friction (easy); high friction (possible, but not easy); or disallowed (not possible in any way).

      I found the discussion of affordances and friction especially thought-provoking, because design choices are not neutral—they actively guide user behavior. Features like infinite scroll reduce friction in a way that benefits engagement metrics, but from an ethical perspective (especially care ethics or virtue ethics), they can undermine users’ ability to rest, reflect, or disengage. This makes me think that “frictionless design” is not always ethically better, and sometimes intentional friction can actually support more responsible and humane use of social media.

    1. 5.5.3. 8Chan (now 8Kun): 8Chan (now called 8Kun) is an image-sharing bulletin board site that was started in 2013. It has been host to white-supremacist, neo-Nazi, and other hate content. 8Chan has had trouble finding companies to host its servers and provide internet registration due to the presence of child sexual abuse material (CSAM), and for being the place where various mass shooters spread their hateful manifestos. 8Chan is also the source and home of the false conspiracy theory QAnon.

      I find these “virtually rule-free” platforms deeply contradictory. On one hand, they undeniably gave rise to much of the early internet culture and memes, yet on the other, they also provide fertile ground for extreme and violent content. When “free speech” is treated as the sole principle, it's easy to overlook the real people who get hurt as a result. This makes me believe that “no rules” itself is not a neutral choice.

    1. 3.2.5. Fake Bots: We also would like to point out that there are fake bots as well, that is, real people pretending their work is the result of a bot. For example, TikTok user Curt Skelton posted a video claiming that he was actually an AI-generated / deepfake character.

      I hadn't fully realized there were so many “unofficial” bots out there—like those that bypass API restrictions by simulating human clicks. This feels riskier than simply registering bots. Add to that fake bots (real people pretending to be bots), and it becomes harder to verify information sources, further eroding trust on the platform.

    2. 3.2.3. Corrupted bots: As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong in Vice’s “How to Make a Bot That Isn’t Racist.”

      I think the Tay example perfectly illustrates that “learning bots” are not neutral—what they learn depends entirely on their environment. If the platform itself is saturated with malicious content and the bot lacks sufficient filtering or constraints, it can quickly become corrupted and even amplify existing problems.
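
      To make the “filtering or constraints” point concrete, here is a toy sketch (entirely my own illustration, not how Tay actually worked) of a bot that learns phrases from users, where a single moderation check on incoming messages decides whether the bot’s corpus stays clean or simply mirrors its environment:

      ```python
      # Toy illustration of why a learning bot needs input constraints.
      # BLOCKLIST is a hypothetical stand-in for a real moderation model.
      BLOCKLIST = {"slur1", "slur2"}  # placeholder terms


      class LearningBot:
          def __init__(self, filter_inputs: bool) -> None:
              self.filter_inputs = filter_inputs
              self.corpus: list[str] = []  # everything the bot will later imitate

          def learn(self, message: str) -> None:
              # Constraint: refuse to learn from messages that fail moderation.
              if self.filter_inputs and any(term in message.lower() for term in BLOCKLIST):
                  return
              self.corpus.append(message)


      naive = LearningBot(filter_inputs=False)   # Tay-like: learns everything it sees
      guarded = LearningBot(filter_inputs=True)  # learns only what passes the filter
      ```

      With filter_inputs=False, the corpus is just a mirror of the environment, which is exactly the failure mode the Tay story describes.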

    1. 2.2.2. The “Golden Rule”: One widespread ethical principle is what English speakers sometimes call the “Golden Rule”:

      “Tsze-kung asked, saying, ‘Is there one word which may serve as a rule of practice for all one’s life?’ The Master said, ‘Is not reciprocity such a word? What you do not want done to yourself, do not do to others.’” (Confucius, Analects 15.23, ~500 BCE China)

      “There is nothing dearer to man than himself; therefore, as it is the same thing that is dear to you and to others, hurt not others with what pains yourself.” (Gautama Buddha, Udānavarga 5:18, ~500 BCE Nepal/India)

      “That which is hateful to you do not do to another; that is the entire Torah, and the rest is its interpretation.” (Hillel the Elder, Talmud Shabbat, folio 33a, ~0 CE Palestine)

      “So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.” (Jesus of Nazareth, Matthew 7:12, ~30 CE Palestine)

      And many more…

      The “Golden Rule” sounds simple enough, but it doesn’t always work well in practice because everyone’s feelings and boundaries are different. Especially on social media, judging behavior by thinking “I don’t mind, so others shouldn’t either” can overlook the feelings of those who are genuinely affected.

    1. Taoism: Act with unforced actions in harmony with the natural cycles of the universe. Trying to force something to happen will likely backfire. Rejects Confucian focus on ceremonies/rituals. Prefers spontaneity and play. Like how water (soft and yielding) can, over time, cut through rock. Key figures: Lao Tzu (~500 BCE, China) and Zhuangzi (~300 BCE, China).

      As a Chinese student, I actually resonate quite deeply with the Taoism mentioned here. Taoism emphasizes “governing through non-action” and following nature's course. This inclines me to question “forceful intervention” and “over-optimization” when considering social media and tech ethics. Sometimes, the more platforms try to control user behavior, the more likely they are to backfire—much like Taoism's idea that “the harder you try, the more unbalanced things become.”

  2. Jan 2026
    1. We’ve now looked at how different ways of storing data and putting constraints on data can make social media systems work better for some people than others, and we’ve looked at how this data also informs decision-making and who is taken into account in ethics analyses. Given all that can be at stake in making decisions on how data will be stored and constrained, choose one type of data a social media site might collect (e.g., name, age, location, gender, posts you liked, etc.), and then choose two different ethics frameworks and consider what each framework would mean for someone choosing how that data will be stored and constrained.

      This section made me realize that storing personal data on social media is not just a technical question, but also an ethical one. For example, age can be stored as a number, but platforms still need to decide how precise it should be and how it might be used or misused. It also made me question whether some data, like exact address, really needs to be stored at all given the privacy risks.
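
      To make that trade-off concrete, here is a minimal sketch (names and thresholds are my own, not from the reading) of how the same fact about a user, their age, could be stored at three different levels of precision, each sufficient for an age check but carrying very different privacy risks:

      ```python
      from dataclasses import dataclass
      from datetime import date
      from typing import Optional


      @dataclass
      class UserAge:
          """Three storage choices for one fact, from most to least invasive."""
          birth_date: Optional[date] = None   # most precise, highest privacy risk
          birth_year: Optional[int] = None    # coarser; enough for rough checks
          is_over_18: Optional[bool] = None   # least data; enough for age gating


      def can_show_adult_content(user: UserAge, today: date) -> bool:
          """Age-gate using whichever precision level was stored."""
          if user.is_over_18 is not None:
              return user.is_over_18
          if user.birth_year is not None:
              # Conservative: a 19-year gap guarantees the 18th birthday has passed.
              return today.year - user.birth_year >= 19
          if user.birth_date is not None:
              bd = user.birth_date
              years = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
              return years >= 18
          return False  # nothing stored: fail closed
      ```

      The point of the sketch is that the least precise option, a single boolean, already serves the age-gating use case, so storing a full birthdate is a deliberate choice rather than a technical necessity.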

    1. If we look at a data field like gender, there are different ways we might try to represent it. We might try to represent it as a binary field, but that would exclude people who don’t fit within a gender binary. So we might try a string that allows any values, but taking whatever text users end up typing might make data that is difficult to work with (what if they make a typo or use a different language?). So we might store gender using strings, but this time use a preset list of options for users to choose from, perhaps with a way of choosing “other,” and only then allow the users to type their own explanation if our categories didn’t work for them. Perhaps you question whether you want to store gender information at all. Now it’s your turn: choose some data that you might want to store on a social media site, and think through the storage types and constraints you might want to use: age, name, address, relationship status, etc.

      I found the discussion about representing gender as data especially thoughtful, because it shows how technical design decisions can have real social consequences. Treating gender as a simple binary might make data easier to process, but it can erase people’s identities and experiences. I also like the idea of combining preset options with an “other” field, since it balances inclusivity with the need for usable and consistent data.
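
      As a rough sketch of the preset-list-plus-“other” design described above (the field names and option list are my own illustration, not a recommendation from the reading), the structured field stays easy to query while the free-text field is constrained to the one case where it is needed:

      ```python
      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional


      class Gender(Enum):
          """Preset options, plus an escape hatch for anyone they don't fit."""
          WOMAN = "woman"
          MAN = "man"
          NONBINARY = "nonbinary"
          PREFER_NOT_TO_SAY = "prefer_not_to_say"
          OTHER = "other"  # unlocks the free-text field below


      @dataclass
      class Profile:
          username: str
          gender: Optional[Gender] = None               # storing this at all is a choice
          gender_self_described: Optional[str] = None   # only used when gender == OTHER

          def __post_init__(self) -> None:
              # Constraint: free text is only valid alongside the OTHER option, so
              # typos and language differences can't creep into the structured field.
              if self.gender_self_described and self.gender is not Gender.OTHER:
                  raise ValueError("self-described gender requires the 'other' option")


      # Example: a user whose identity isn't covered by the preset list.
      sam = Profile("sam", Gender.OTHER, "genderfluid")
      ```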