47 Matching Annotations
  1. Last 7 days
    2. Facebook uses hired moderators to handle content moderation on the platform at large (though Facebook groups are moderated by users). When users (or computer programs) flag content, the hired moderators will look at it and decide what to do.

      I think this passage shows how difficult content moderation is on large social media platforms like Facebook. Even though moderators are hired to review harmful posts, the platform’s algorithms may still promote controversial or inflammatory content because it increases engagement and keeps users active. After reading this, I realized that social media companies are not only influenced by ethics, but also by business goals and user activity. It made me think more critically about the type of content I interact with online and how algorithms shape what people see every day.

    3. Facebook also discovered in internal research that, “the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content [n7].”

      I think this passage shows how difficult content moderation is on large social media platforms like Facebook. Even though moderators are hired to review harmful posts, the platform’s algorithms may still promote controversial or inflammatory content because it increases engagement and keeps users active. After reading this, I realized that social media companies are not only influenced by ethics, but also by business goals and user activity. It made me think more critically about the type of content I interact with online and how algorithms shape what people see every day.

    5. One thing these sites do ban though, is spam. While much of spam is certainly legal, and a form of speech, this speech is restricted on these sites. If the chat boards filled up with spam, the users would find it boring and leave, so for practical reasons, these sites still moderate for spam (though they may allow some uses of ironic spam, copypasta [n5]).

      This passage is interesting because it shows that even platforms that strongly support free speech still place limits on certain types of content like spam. It highlights the idea that content moderation is often done for practical reasons, such as keeping users engaged and maintaining the platform’s usability. I think the passage effectively demonstrates that no online platform is completely unmoderated, since every site must balance freedom of expression with user experience.

    1. While there are healthy ways of sharing difficult emotions and experiences (see the next section), when these difficult emotions and experiences are thrown at unsuspecting and unwilling audiences, that is called trauma dumping [m11]. Social media can make trauma dumping easier. For example, with parasocial relationships, you might feel like the celebrity is your friend who wants to hear your trauma. And with context collapse, where audiences are combined, how would you share your trauma with an appropriate audience and not an inappropriate one (e.g., if you re-post something and talk about how it reminds you of your trauma, are you dumping it on the original poster?).

      This passage gives an important perspective on how social media can blur the boundaries between healthy emotional sharing and “trauma dumping.” It explains how parasocial relationships and large online audiences can make people overshare personal struggles without considering the impact on others. I think the passage is effective because it encourages people to think more carefully about online communication, empathy, and emotional boundaries.

    1. Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people’s mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset [m5].

      This passage is powerful because it shows how social media algorithms can influence people’s emotions and behavior without them even realizing it. The example of Facebook’s experiment raises important ethical concerns about privacy, consent, and manipulation online. It also makes readers think more critically about how technology affects mental health and daily interactions.

  2. May 2026
    1. The microorganisms in the starter will continue multiplying if you let them, and you can add flour and water to make it larger, then split it into multiple starters. You can repeat this process again and again, occasionally using some starters to bake bread, but you can share the starters with others.

      This passage uses sourdough starter as a creative example to explain how microorganisms can continue growing and spreading over time. I think the comparison is effective because it makes the concept of evolution easier to understand through something familiar and practical. The idea that people can keep feeding, dividing, and sharing the starter also connects well to how culture and information spread between humans. Overall, the paragraph is simple but engaging, and it helps readers see how biological and cultural evolution can work in similar ways.

    1. Since genes contained information about how organisms would grow and live, then biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene” [l4]).

      This passage is interesting because it connects biological evolution with cultural evolution in a simple and understandable way. The explanation of how genes survive through shared traits in a bee colony makes the idea of evolution feel more practical and less abstract. I also like how it introduces Dawkins’ idea of the “meme” as cultural information that spreads between people, similar to how genes spread biologically. Overall, the paragraph effectively combines science and culture while encouraging readers to think about how ideas, beliefs, and behaviors evolve in society.

    1. The filter bubbles can be good or bad, such as forming bubbles for:
      - Hate groups, where people’s hate and fear of others gets reinforced and never challenged
      - Fan communities, where people’s appreciation of an artist, work of art, or something is assumed, and then reinforced and never challenged
      - Marginalized communities, which can find safe spaces where they aren’t constantly challenged or harassed (e.g., a safe space [k15])

      This passage does a good job explaining the concept of filter bubbles by showing both their positive and negative impacts. The use of concrete examples—such as hate groups, fan communities, and safe spaces—makes the idea more relatable and easier to understand. It is especially strong in highlighting that filter bubbles are not purely harmful but can also provide benefits, depending on the context. However, the explanation could be slightly more concise and supported with real-world evidence to make it more persuasive.

    1. Advertisements shown to users can go well for users when the users find products they are genuinely interested in, and for making the social media site free to use (since the site makes its money from ads). Advertisements can go poorly if they become part of discrimination (like only showing housing ads to certain demographics of people [k9]), or reveal private information (like revealing to a family that someone is pregnant [k10]).

      This passage clearly explains both the benefits and risks of automated features like reminders and ads on social media. It is effective because it uses concrete, relatable examples (such as breakups or privacy leaks) to illustrate how these systems can impact users emotionally and ethically. However, the tone is somewhat general and could be strengthened by adding more specific evidence or data. Overall, it provides a balanced and thoughtful overview of the topic.

  3. Apr 2026
    1. Reusing code instead of repeating code: When we find ourselves repeating a set of actions in our program, we end up writing (or copying) the same code multiple times. If we put that repeated code in a function, then we only have to write it once and then use that function in all the places we were repeating the code.

      This explanation is clear and effective, directly highlighting the core benefit of functions—eliminating redundant code and simplifying maintenance—with a straightforward, relatable example that makes the concept easy to grasp.
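      The idea in the passage above can be sketched in a few lines of Python. The function name and greetings here are hypothetical, not from the textbook; the point is only that the repeated string-building code lives in one place and every caller reuses it:

```python
# Hypothetical example: instead of writing the same greeting-building
# code in several places, we put it in one function and reuse it.
def greet(name):
    # The repeated code lives here, written only once.
    return "Hello, " + name + "!"

# Every place that needed the repeated code now just calls the function:
print(greet("Kylr"))   # Hello, Kylr!
print(greet("Susan"))  # Hello, Susan!
```

      If the greeting format ever needs to change, it only has to be changed inside `greet`, rather than in every place the code was copied.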

  4. social-media-ethics-automation.github.io
    1. For example, the proper security practice for storing user passwords is to use a special individual encryption process [i6] for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time [i7]).

      This passage clearly explains how proper password storage, using unique encryption for each credential, protects against data breaches by forcing attackers into slow brute-force attempts, and effectively contrasts this best practice with the severe risks of poor security through real-world examples.
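      The practice described in the quoted passage can be sketched with Python's standard library. This is a minimal illustration, not the book's code: it uses PBKDF2 (one common slow key-derivation function) with a random salt per password, so the database can only confirm a guess by re-running the slow hash, and two users with the same password get different stored values:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=100_000):
    """Hash a password with its own random salt (a sketch of the idea)."""
    salt = os.urandom(16)  # unique per password: equal passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Re-run the slow hash and compare; the password itself can't be recovered."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

      Because each verification deliberately takes many hashing iterations, an attacker with a stolen database is forced into exactly the slow brute-force guessing the passage describes.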

    1. Social media data can also be used to infer information about larger social trends like the spread of misinformation [h11].

      This sentence clearly shows that social media data can be used to understand broader social trends, such as the spread of misinformation. I think it is an interesting and important point because it connects individual data to larger societal issues. The sentence is clear, but it could be slightly more specific with an example.

    1. The AT Protocol API lets you access a lot of the data that Bluesky tracks (since Bluesky is a more open social media protocol), but Bluesky probably tracks much more than they let you have access to (like other social media platforms do).

      This sentence is clear and explains that the API provides a lot of data, but the platform likely has access to even more that is not shared. I think this is a realistic point and reminds us that the data we get is limited. However, the sentence is a bit long and could be easier to read if it were shorter.

    1. In 2011, a group on 4chan started spreading a plan for making a “Forever Alone Involuntary Flashmob.” You can see their instructions below:

      image in the form of a sort of flier with meme faces of foreveralone and trolls. The text reads: How to make your very own Forever Alone Involuntary Flashmob. 1. create fake online dating profile as mildly cute woman from NYC - just use somechicks facebook to get several believable pics etc. etc. 2. find forever alone guys from NYC on dating site, get them to believe you're interested. 3. Once forever alone guy takes the bait, suggest you meet for a date at this time and location: Pay phones outside 47th Digital store 46th st * Broadway NEW YORK 7:30PM Friday 13th May 2011. 4. watch angry alone guys flashmob rage at earthcam.com/usa/newyork/timessquare/ (select Camera 2). Also Remember: This will only work if we keep spreading these instructions and actually get involved. There is no limit to how many fake profiles and people you can trick or method used. Take time and prepare - think smart, if they suspect you're pushing the time and date too hard it aint gonna work.

    1. Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion, such as anger or emotional pain. When the goal is disruption, it might be attempting to derail a conversation (e.g., concern trolling [g4]), or make a space no longer useful for its original purpose (e.g., joke product reviews), or try to get people to take absurd fake stories seriously [g5].

      This definition clearly explains trolling by focusing on its purpose rather than just the behavior itself. I think it is helpful because it shows that trolling is not only about posting false information, but about intentionally provoking emotions or disrupting conversations. What stands out to me is how it highlights both emotional harm and the impact on online spaces, which makes the concept more complete.

  5. social-media-ethics-automation.github.io
    1. There are many ways to define and talk about authenticity and why it matters to people, but for the purposes of this book, we will use the following definition:

      I think this passage gives a clear and useful definition of authenticity. It shows that authenticity is not just about being “real,” but about whether what is presented actually matches reality. I find this idea interesting because it explains why people feel uncomfortable when something seems misleading, even if it’s not intentionally harmful. It also makes me think about how often online interactions don’t fully reflect the truth.

    2. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal only to then have it revealed that the signal was untrustworthy is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are even more heightened when we find we have been duped simply for someone else’s amusement at having done so.

      I think this passage clearly explains why authenticity matters so much in social media. People don’t just dislike being deceived—they feel upset because it breaks their trust. I agree with this idea, especially today when it’s hard to tell what is real online. It shows how important it is for platforms and users to be more honest and transparent.

    1. But one 4chan user found 4chan to be too authoritarian and restrictive and set out to create a new “free-speech-friendly” image-sharing bulletin board, which he called 8chan.

      I think this example shows how “free speech” can be interpreted very differently by different people. While the user wanted a less restrictive space, it also led to the creation of a platform that later became associated with harmful and extreme content. This makes me question whether having fewer rules online always leads to better outcomes.

    1. In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.

      I think this part is interesting because it shows how blogs started as very simple personal tools, almost like online diaries. It’s surprising how something so basic later became a foundation for modern social media. Today, platforms like Instagram or Twitter feel very advanced, but they actually come from the same idea of regularly sharing updates and interacting with others.

    1. We can use dictionaries and lists together to make lists of dictionaries, lists of lists, dictionaries of lists, or any other combination.

      This idea of combining dictionaries and lists made me realize how complex data structures can actually be in real life. At first, it sounds confusing, but when I think about social media, it makes sense. For example, a user can have multiple types of information, and some of that information (like followers or posts) is already a list. So organizing data this way feels more natural and realistic. It also shows how programming tries to model real-world relationships between people and information.
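      The combinations mentioned in the quoted passage can be made concrete with a small sketch. The usernames and posts below are made up for illustration: a list of dictionaries, where one of each dictionary's values is itself a list.

```python
# Hypothetical data: a list of user dictionaries, where each "posts"
# value is itself a list of that user's posts.
users = [
    {"username": "kylr", "posts": ["First post!", "Another post"]},
    {"username": "susan", "posts": []},
]

# Indexing works layer by layer: first pick a dictionary from the list,
# then look up a key, then index into the inner list.
second_post = users[0]["posts"][1]
print(second_post)  # Another post
```

      This layered lookup is exactly how real social media data tends to be structured: users contain posts, posts contain comments, and so on.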

    1. Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata).

      This part helped me better understand what metadata actually means. Before, I only thought about the main content like text or photos as “data,” but I didn’t realize that information like time, user, and interactions can also be very important. In real life, I feel like metadata might even reveal more about a person than the content itself. For example, when we use social media, patterns like when we post or who we interact with can say a lot about our habits. This makes me think more about privacy and how much information we are unintentionally sharing.
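      The split the passage describes between data and metadata can be sketched as a dictionary. The post text and field names here are invented for illustration: the main data is the text, and everything else (who posted it, when, and how it was received) is metadata about that data.

```python
# Hypothetical post: "text" is the main piece of data; the "metadata"
# dictionary holds information *about* that data.
post = {
    "text": "Look at my cute dog!",
    "metadata": {
        "username": "kylr",
        "timestamp": "2024-03-01T12:30:00",
        "likes": 42,
    },
}

print(post["metadata"]["likes"])  # 42
```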

    1. As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day.

      This example about Microsoft Tay is kind of scary because it shows how easily AI can be influenced by people online. Tay was supposed to learn from users, but instead it quickly picked up harmful and racist language. This makes me think that AI is not neutral—it reflects the behavior of the people interacting with it. It also raises a question about responsibility: should developers be more careful about how they design these systems to prevent this kind of outcome?

    2. As one example, in 2016, Rian Johnson, who was in the middle of directing Star Wars: The Last Jedi, got bombarded by tweets that all originated in Russia (likely making at least some use of bots).

      I found this example really interesting because it shows how bots can influence people’s opinions without them realizing it. The fact that a large number of tweets came from accounts in another country makes me think about how easy it is to manipulate public discussions online. It also raises a question for me: how can regular users tell the difference between real opinions and coordinated bot activity? I feel like this is especially important today since social media plays such a big role in shaping what people believe.

    1. But when we use the phrase “social media,” we mean something much more specific, something involving computers (or smartphones), the internet, communication, and networks of connected people.

      I think this definition is helpful because it shows that social media is not just about content, but also about connections between people. From my experience, platforms like Instagram or TikTok feel more like networks of interaction rather than just places to consume information.

    1. Only “Can we do this?” Never “Should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news.

      I agree with this idea because I feel like a lot of new technologies focus more on what is possible rather than what is right. For example, with social media and AI, people create things very quickly without fully thinking about the consequences, like misinformation or privacy issues.

  6. Feb 2025
  7. social-media-ethics-automation.github.io
    1. Terry Gross. Director Bo Burnham On Growing Up With Anxiety — And An Audience. NPR, July 2018. URL: https://www.npr.org/2018/07/18/630069876/director-bo-burnham-on-growing-up-with-anxiety-and-an-audience

      Terry Gross’s NPR interview with Bo Burnham, "Director Bo Burnham On Growing Up With Anxiety — And An Audience," is a valuable resource for understanding Burnham’s perspective on anxiety, performance, and filmmaking. As an experienced interviewer, Gross is known for her thoughtful and in-depth conversations, and this interview likely provides meaningful insights into Burnham’s personal experiences and creative process, particularly in relation to his film Eighth Grade.

      Since NPR is a reputable news organization, the link is a reliable source for those interested in Burnham’s work, mental health, or the broader themes of adolescence and social media.

  8. social-media-ethics-automation.github.io
    1. There is also a word, ‘ethic’, but that has different usage. So for example, someone’s ‘work ethic’ is different from the ‘ethics of work’ to which they might subscribe

      The phrase refers to differences in the use of ‘ethic’, particularly in the context of ‘work ethic’, meaning that individuals' attitudes towards work may differ from the theoretical norms to which they subscribe. At the same time, the plural forms of ‘data’ and ‘media’ emphasise the specific use of some words in the language. Overall, the sentences clearly point out these linguistic details.

    2. There is also a word, ‘ethic’, but that has different usage. So for example, someone’s ‘work ethic’ is different from the ‘ethics of work’ to which they might subscribe. On a related note, some people will tell you that ‘data’ and ‘media’ are both plural. This

  9. Jan 2025
    1. Platforms can also be customized for specific populations, such as a social media platform aimed at low-income blind people in India [b6]. Additionally, some websites are primarily used for other purposes but also have social media components, such as the Amazon online store with user reviews and customer Q&A, or news websites with comment sections.

      In order to help low-income blind people in India, a special social networking platform can be designed with the following key features:

      1. Voice control: users can easily listen to messages, post, and chat by speaking instead of clicking.
      2. Data saving: the platform is designed to be simple, uses little data, and works on ordinary mobile phones.
      3. Mutual support: blind users can make friends and chat through the platform, as well as learn new skills and find jobs.
      4. Broader reach: collaborate with organizations for the blind to share users’ stories and build wider support in society.

      The platform is simple and practical, making it easier for blind people to connect to the world and find new opportunities!

  10. social-media-ethics-automation.github.io
    1. Golden Rule. October 2023. Page Version ID: 1182407809. URL: https://en.wikipedia.org/w/index.php?title=Golden_Rule&oldid=1182407809 (accessed 2023-11-15)

      The Golden Rule is centred on the principle of ‘treat others as you would like to be treated’, which promotes empathy and mutual respect. This principle is simple and easy to understand. For example, if you want people to be nice to you, then you should be nice to them. However, it also has limitations, for example, everyone has different ideas and needs, and what you think is good may not be liked by others. Therefore, in practice, in addition to following the Golden Rule, you should also put yourself in other people's shoes to understand what they really need, so that you can get along better and help others.

    1. Amusement: Trolls often find the posts funny, whether because of the disruption or the emotional reactions. If the motivation is to amuse yourself by causing others pain, that is called doing it for fun [g6]

      A large part of the reason for malicious comments nowadays is so-called ‘fun’ or jealousy, or maybe just stereotyping. Many people are bored with their own lives and want some excitement, so they attack others; then different opposing sides show up to argue back, starting discussion topics and increasing the heat around them. In fact, a lot of bloggers use this method to make their own videos more popular, but some innocent people get destroyed because some netizens insult them purely for their own amusement. For example, a female student at a well-known school dyed her hair pink, and she committed suicide because of the harassment that followed. I think the people who go and malign others for fun should reflect on themselves.

    1. now living in the UK), we tend to look for examples from American, British, and other English-language social media. We are also more familiar with social media networks such as Twitter (now branded “X”), Facebook, and YouTube

      I actually think TikTok would be more accessible to students now instead. For example, I swiped through quite a few videos on Instagram and YouTube joking that Americans are now trying to move to China because TikTok is banned in the U.S. I know it’s a joke, but young people may be more inclined to use TikTok nowadays because its coverage is very wide. For example, young people may like to dance or share pets and other things through this software, and since more people their age use it, that increases the probability that they will use it too. The more people use it, the more information it naturally accumulates, and since most users are students, the common topics and directions of discussion are also more unified.

    1. In Python, we can look up variables, operate on them, and then save the result into a new variable. For example, if we consider a tweet’s “total engagement” to be the sum of the number of all its likes, replies, and quote tweets, we can compute that number and store it in a new variable:

      It’s very convenient. I think people nowadays don’t have very good memories, and this saving feature is very helpful for performing further operations. It doesn’t even matter if we misplace a result, because it is saved for us in the variable.
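      The computation the quoted passage describes can be sketched as follows. The counts and variable names here are made up for illustration; the idea is just looking up several variables, combining them, and saving the result into a new one:

```python
# Hypothetical engagement counts for one tweet:
number_of_likes = 120
number_of_replies = 14
number_of_quote_tweets = 6

# Look up the variables, add them, and store the result in a new variable:
total_engagement = number_of_likes + number_of_replies + number_of_quote_tweets
print(total_engagement)  # 140
```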

  11. social-media-ethics-automation.github.io
    1. How does social media affect our world and ourselves, for better or worse? How do the decisions made by social media platform creators affect how platforms are used and how people behave?

      Social media can be both good and bad for us. I think it is not good for some minors to be exposed to social media too early, because it can make them mature too soon; they may even fall in love early, and growing up too fast can be very bad for their development. But there is also a good side to social media, because it makes it easier for students to communicate with their teachers, and it is very easy to use; Canvas is one example. Sometimes people can also use social software to put themselves in a happy mood, for example by posting on TikTok and Instagram to interact with others and lighten their mood, because no one knows them online.

    1. But human computers were eventually replaced by electronic computers, and communicating with electronic computers was not simple.

      I don’t think humans will ever be fully replaced by electronic computers. A simple example is ChatGPT, a website commonly used by college students. It is indeed very intelligent, but you can see at a glance that its answers are AI-generated and not words a normal college student would have written, and that is exactly how some teachers find out that their students are using it to cheat. So it is just a tool that can assist us in our studies, and since electronic computers are made by humans, if humans ever stop improving them, they will stagnate. In other words, if the development of electronic computers really becomes fast and widespread, a large part of the population will lose their jobs and their income, and then people will realize that electronic computers are not as good as imagined, and they will slowly fade away.

    1. What do you think the responsibility of technology workers is? Should they seriously think about the ethical impact of what they do? Why do you think the people Kumail talked to didn’t answer his questions?

      I think the responsibility of technology workers is to protect people’s privacy in videos and to deal with copyright infringement, rooting out the things that raise moral problems. They should make good use of this technology to severely punish unhealthy trends on the internet, so that good videos can reach people more widely while bad things are stopped immediately. They should also strengthen the systems for managing fake news and online violence, because these can pull many people or things into a terrible “vortex,” and some people even commit suicide because of it. Just like the last example in the article: once these things are released, they are out there with no guardian, and the outcome is truly horrible. /// Second question: I think the lack of a positive response is because they are also afraid, because they know what they are doing is a bad thing.