32 Matching Annotations
  1. Mar 2025
  2. social-media-ethics-automation.github.io
    1. Alex Blechman [@AlexBlechman]. Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus. November 2021. URL: https://twitter.com/AlexBlechman/status/1457842724128833538 (visited on 2023-12-10).

      The tweet by Alex Blechman about the Torment Nexus really resonated with me, as it highlights a very interesting and somewhat eerie intersection between science fiction and real-world technology. In Blechman's satirical scenario, a sci-fi author invents the Torment Nexus as a cautionary tale about unchecked technological development, only for a tech company to proudly announce that it has built the Torment Nexus from the very novel warning against it. This blurring of the line between fiction and reality is concerning, as it reflects how science fiction often acts as a mirror to real-world fears about the potential consequences of technological advancements. It raises a question: are we using science fiction as a way to process our anxieties about tech, or is the creation of these technologies merely a matter of time?

    1. As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate. We hope with this you can be a more informed user of social media, better able to participate, protect yourself, and make it a valuable experience for you and others you interact with. For example, you can hopefully recognize when someone is intentionally posting something bad or offensive (like the bad cooking videos we mentioned in the Virality chapter, or an intentionally offensive statement) in an attempt to get people to respond and spread their content. Then you can decide how you want to engage (if at all) given how they are trying to spread their content.

      I find it really important to understand how algorithms and engagement-driven strategies shape our social media experience. The chapter made me reflect on how many times I've mindlessly scrolled through my feed, only to be served sensationalized content that seems to be designed just to provoke an emotional reaction, whether it's anger or excitement. It’s unsettling to realize that much of the content we interact with online is engineered to generate reactions, sometimes to the detriment of our mental health.

  3. social-media-ethics-automation.github.io
    1. Free market. December 2023. Page Version ID: 1189274274. URL: https://en.wikipedia.org/w/index.php?title=Free_market&oldid=1189274274 (visited on 2023-12-10).

      I came across an interesting article on Wikipedia titled "Free Market" (accessed December 2023). It provides a detailed overview of the concept of free markets, including their historical development and various interpretations in different economic systems. The article discusses how free markets are seen as central to capitalist economies, with minimal government intervention.

      One point that particularly stood out to me was the section on the criticisms of free markets, which highlights concerns such as income inequality, exploitation, and the potential for monopolies to form. It made me reflect on the challenges of maintaining a truly competitive and fair market, especially as large corporations often have more power than smaller businesses or individual consumers.

    1. Another source of responses to Meta (and similar social media sites) is concern around privacy (especially in relation to surveillance capitalism). The European Union passed the General Data Protection Regulation (GDPR) [s50] law, which forces companies to protect user information in certain ways and give users a “right to be forgotten” [s51] online.

      The introduction of the General Data Protection Regulation by the European Union is a crucial step in protecting individuals' privacy rights, especially in the context of social media and surveillance capitalism. This regulation requires companies to be more transparent about how they use personal data and gives users greater control over their information.

  4. social-media-ethics-automation.github.io
    1. Meg van Achterberg. Jimmy Kimmel’s Halloween prank can scar children. Why are we laughing? Washington Post, October 2017. URL: https://www.washingtonpost.com/outlook/jimmy-kimmel-wants-to-prank-kids-why-are-we-laughing/2017/10/20/9be17716-aed0-11e7-9e58-e6288544af98_story.html (visited on 2023-12-10).

      This connects to the broader idea of public shaming or public criticism because it shows how something that appears lighthearted can still have a lasting negative impact. The prank is a form of mild public humiliation, which reminds me of how “cancel culture” can sometimes involve actions that are disproportionate to the offense—whether it’s a joke, a misunderstanding, or a genuine mistake. It made me question where we draw the line between humor and harm, especially when the “victims” of these pranks or criticisms are vulnerable, like children or public figures. Would love to see more consideration given to this balance in discussions around both public shaming and humor.

    1. The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing. The offense that someone is being canceled for can range from sexual assault of minors (e.g., R. Kelly, Woody Allen, Kevin Spacey), to minor offenses or even misinterpretations. The consequences for being “canceled” can range from simply the experience of being criticized, to loss of job or criminal charges. Given the huge range of things “cancel culture” can be referring to, we’ll mostly stick to talking here about “public shaming,” and “public criticism.”

      I find it really interesting how the term “cancel culture” has evolved and been used in so many different ways. It’s fascinating how the term can refer to something as serious as criminal offenses like sexual assault, but it can also apply to someone being criticized for something that might not be as severe, or even a misunderstanding. It’s also a bit troubling to think about how a single comment or mistake can lead to someone being “canceled,” especially when it’s based on a misinterpretation or something that wasn’t meant to harm.

  5. Feb 2025
  6. social-media-ethics-automation.github.io
    1. ShiningConcepts. r/TheoryOfReddit: reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods? November 2021. URL: https://www.reddit.com/r/TheoryOfReddit/comments/qrjwjw/reddit_is_valued_at_more_than_ten_billion_dollars/ (visited on 2023-12-08).

      I came across an interesting discussion on Reddit titled “Reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods?” This was posted in November 2021 in the r/TheoryOfReddit subreddit and raised a thought-provoking point about the power dynamics in online communities.

    1. Social media sites also might run into legal concerns with allowing some content to be left up on their sites, such as copyrighted material (like movie clips) or child sexual abuse material (CSAM). So most social media sites will often have rules about content moderation, and at least put on the appearance of trying to stop illegal content (though a few will try to move to countries that won’t get them in trouble, like 8kun is getting hosted in Russia).

      The chapter touches on the complex issue of content moderation on social media platforms and the legal concerns they face with allowing certain content to remain posted. One thing that particularly stands out to me is the balance social media companies must strike between freedom of expression and legal responsibility. For instance, the fact that sites like 8kun are trying to avoid scrutiny by moving to countries with looser regulations (like Russia) raises some deep concerns about accountability and the role of tech companies in preventing harmful content.

  7. social-media-ethics-automation.github.io
    1. Sarah McQuate. 'I don't even remember what I read': People enter a 'dissociative state' when using social media. ScienceDaily, May 2022. URL: https://www.sciencedaily.com/releases/2022/05/220523135018.htm (visited on 2023-12-08).

      The idea that people might not even remember what they read but still experience emotional responses speaks to the potential harm of consuming content passively. It made me reflect on how easily we can become desensitized to the emotions and stories of others, especially when we’re constantly bombarded with new information without fully processing it. This article really deepened my understanding of how social media isn't just a tool for connection, but also a potential source of emotional overload and disconnection.

    1. While there are healthy ways of sharing difficult emotions and experiences (see the next section), when these difficult emotions and experiences are thrown at unsuspecting and unwilling audiences, that is called trauma dumping [m11]. Social media can make trauma dumping easier. For example, with parasocial relationships, you might feel like the celebrity is your friend who wants to hear your trauma. And with context collapse, where audiences are combined, how would you share your trauma with an appropriate audience and not an inappropriate one (e.g., if you re-post something and talk about how it reminds you of your trauma, are you dumping it on the original poster?).

      The concept of trauma dumping really resonated with me. It’s easy to forget that sharing personal struggles, especially in the context of social media, can affect others differently depending on the platform and audience. I’ve seen people feel very comfortable sharing their personal experiences with large, sometimes uninvited, audiences, and that can be overwhelming for those who don’t have the emotional capacity or relationship to process such deep emotions.

  8. social-media-ethics-automation.github.io
    1. Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.

      In Writing on the Wall: Social Media - The First 2,000 Years by Tom Standage, the author discusses the deep historical roots of social media and how human communication, through written words and symbols, has evolved over centuries. One particular point that stood out to me was Standage's comparison of early social media platforms like the Roman forum and medieval manuscript circulation to today's platforms like Facebook and Twitter. He explains that the core purpose of these ancient systems—sharing ideas, opinions, and information—remains remarkably similar to how we use social media today, even if the medium has drastically changed.

    1. Additionally, content can be copied by being screenshotted, or photoshopped. Text and images can be copied and reposted with modifications (like a poem about plums [l17]). And content in one form can be used to make new content in completely new forms, like this “Internet Drama” song whose lyrics are from messages sent back and forth between two people in a Facebook Marketplace:

      This chapter made me think a lot about how content creation and sharing have evolved in the digital age. The example of modifying content—like photoshopping or reposting with changes—reminds me of how so many viral memes or internet trends are built on repurposing existing content. It's fascinating how a simple piece of text or image can take on an entirely new life with minor alterations or context shifts.

  9. social-media-ethics-automation.github.io
    1. Kelsey D. Atherton [@AthertonKD]. Oh, you're experiencing a structural problem? Have you ever considered trying different personal choices instead? April 2019. URL: https://twitter.com/AthertonKD/status/1120376944061583360 (visited on 2023-12-07).

      Kelsey D. Atherton's tweet from April 2019 provides an interesting commentary on the tendency to suggest personal choices or individual responsibility as a solution to structural problems. The tweet highlights a frustrating trend where complex societal issues are often reduced to matters of personal decisions, ignoring the larger systemic factors at play. This perspective resonates with debates around how we address problems like inequality or climate change, where solutions are often framed as a matter of personal action (e.g., "just recycle more" or "eat less meat"), rather than addressing the structural and institutional changes needed to make a real impact.

    1. Content recommendations can go well when users find content they are interested in. Sometimes algorithms do a good job of it and users are appreciative. TikTok has been mentioned in particular as providing surprisingly accurate recommendations, though Professor Arvind Narayanan argues [k11] that TikTok’s success with its recommendations relies less on advanced recommendation algorithms, and more on the design of the site making it very easy to skip the bad recommendations and get to the good ones.

      This chapter raises an interesting point about TikTok's recommendation system. While it's often assumed that the app's algorithm is highly advanced and sophisticated, Professor Arvind Narayanan's perspective that the success lies more in the design of the platform itself is intriguing. From my experience, I've found that TikTok does indeed excel at showing content that resonates with me, but I had never considered that the design element (how easy it is to quickly skip over content that doesn't capture my interest) could play a pivotal role in this. It almost feels like a “self-correcting” mechanism that empowers users to curate their own feed without the algorithm needing to do all the heavy lifting.
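
      As a back-of-the-envelope way to see Narayanan's argument, here is a minimal Python sketch (the hit rates below are invented for illustration, not measurements of TikTok): when skipping costs almost nothing, the number of swipes before reaching an enjoyable video follows a geometric distribution, so even a mediocre recommender can feel perfectly usable.

      ```python
      import random

      def swipes_until_hit(hit_rate, trials=100_000):
          """Average number of videos a user swipes through before landing
          on one they enjoy, if each suggestion is a 'hit' with probability
          hit_rate (a toy model, not TikTok's actual recommender)."""
          total = 0
          for _ in range(trials):
              swipes = 1
              while random.random() > hit_rate:
                  swipes += 1
              total += swipes
          return total / trials

      # A mediocre recommender (25% hit rate) costs only ~4 cheap swipes per
      # enjoyable video; a strong one (60%) costs ~1.7. If each skip takes a
      # second, the difference barely registers for the user.
      for rate in (0.25, 0.60):
          print(f"hit rate {rate:.0%}: about {swipes_until_hit(rate):.1f} swipes per hit")
      ```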

  10. social-media-ethics-automation.github.io
    1. David Robson. The women with superhuman vision. BBC, February 2022. URL: https://www.bbc.com/future/article/20140905-the-women-with-super-human-vision (visited on 2023-12-07).

      David Robson's article "The Women with Superhuman Vision" (BBC, February 2022) presents a fascinating exploration of individuals with extraordinary visual abilities, specifically highlighting women who possess the rare condition of tetrachromacy—where they can perceive a broader spectrum of colors than the average human. Robson discusses both the science behind this phenomenon and the implications it might have for our understanding of perception and cognition.

    1. When creating computer programs, programmers can do things that aren’t possible with architecture (where Universal Design came out of), that is: programs can change how they work for each individual user. All people (including disabled people) have different abilities, and making a system that can modify how it runs to match the abilities a user has is called Ability based design [j18]. For example, a phone might detect that the user has gone from a dark to a light environment, and might automatically change the phone brightness or color scheme to be easier to read. Or a computer program might detect that a user’s hands tremble when they are trying to select something on the screen, and the computer might change the text size, or try to guess the intended selection.

      This concept of Ability-based design really resonates with me. It’s fascinating how technology can be adaptive in such personalized ways. The example about the phone adjusting brightness and color scheme based on lighting conditions makes a lot of sense—those kinds of automatic adjustments are subtle but can have such a big impact on accessibility and user experience.
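
      To make the idea concrete, here is a tiny Python sketch of the two examples in the quoted passage; the sensor values, thresholds, and function names are all invented for illustration, not taken from any real phone API.

      ```python
      def choose_color_scheme(ambient_lux):
          """Pick a display theme from an ambient light reading in lux
          (the 400-lux threshold is an arbitrary example value)."""
          return "light-theme" if ambient_lux > 400 else "dark-theme"

      def choose_text_size(tremor_px, base_size=14):
          """Scale text (and therefore tap targets) up as the measured
          jitter of the user's pointer, in pixels, increases."""
          return base_size + int(tremor_px * 2)

      # Example: a bright room and a hand with a noticeable tremor.
      print(choose_color_scheme(ambient_lux=900))  # light-theme
      print(choose_text_size(tremor_px=6))         # 26
      ```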

  11. social-media-ethics-automation.github.io
    1. Most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats. So people who can’t see as many colors are considered to be color blind [j2], a disability. But there are also a small number of people who are tetrachromats [j3] and can see four base colors[2] and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people), lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability.

      This section on trichromacy versus tetrachromacy really got me thinking about how subjective our experience of reality can be. The fact that tetrachromats can see a wider range of colors, yet this extra ability isn’t necessarily an advantage in society, is fascinating. It makes me wonder how much of human experience is shaped by limitations we don't even question, simply because we all share similar sensory experiences. It's like, we assume that what we see is the “full” version of reality, but there are other ways of perceiving the world that most of us will never fully understand.

    1. Right to privacy. November 2023. Page Version ID: 1186826760. URL: https://en.wikipedia.org/w/index.php?title=Right_to_privacy&oldid=1186826760 (visited on 2023-12-05).

      One thing I found especially interesting was the mention of the 1967 U.S. Supreme Court decision in Katz v. United States, which expanded privacy protections to include expectations of privacy in public spaces, like phone booths. It’s fascinating how our concept of privacy has shifted from physical spaces to digital spaces—today, many of our most private moments are shared on platforms without us even fully understanding how that information might be accessed or used.

    1. Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, former tech CEO John McAfee was a suspect in a murder in Belize [i22], so he hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala [i23].

      This is a great example of how metadata can unintentionally expose private information, even when the content itself doesn’t directly reveal anything sensitive. In John McAfee’s case, the article in Vice inadvertently gave away his location by embedding GPS coordinates in the photos. It’s a stark reminder that we often don’t consider the full range of data that can be hidden within files, even ones that seem benign like images or videos.
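
      As a concrete illustration of where that location data lives, here is a minimal sketch using the Pillow imaging library (assuming a reasonably recent version is installed); the filename is hypothetical, and a photo whose metadata has been stripped will simply return an empty result.

      ```python
      from PIL import Image, ExifTags

      # Open a photo and read its EXIF metadata (filename is made up).
      img = Image.open("vacation_photo.jpg")
      exif = img.getexif()

      # 0x8825 is the standard EXIF tag id for the GPS sub-directory.
      gps_ifd = exif.get_ifd(0x8825)

      # Translate numeric GPS tag ids into readable names such as
      # 'GPSLatitude' and 'GPSLongitude'.
      gps_data = {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
      print(gps_data)
      ```

      Many platforms now strip this metadata on upload, but as the McAfee case shows, a single overlooked file can be enough.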

  12. Jan 2025
  13. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      The article by Kurt Wagner on Facebook’s shadow profiles is eye-opening. It shows how Facebook collects data on non-users through things like phone numbers or interactions with users, even if you don’t have an account. This highlights how pervasive data collection can be, raising questions about how much control we really have over our personal information. It ties back to the chapter’s discussion on privacy concerns—how much should companies be able to collect without our consent?

    1. For example, social media data about who you are friends with might be used to infer your sexual orientation [h4]. Social media data might also be used to infer people’s:
       - Race
       - Political leanings
       - Interests
       - Susceptibility to financial scams
       - Being prone to addiction (e.g., gambling)

      Reading about how social media data can be used to infer personal traits like sexual orientation, political leanings, or susceptibility to scams raises ethical concerns. While it's useful for targeting products or offering assistance, it feels unsettling that this data might be used without full consent.
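
      As a purely illustrative sketch of how such an inference can work, the snippet below guesses a trait from the most common label among a user's friends (simple homophily); the friendship data and labels are invented.

      ```python
      from collections import Counter

      # Invented toy data: a friend list per user, and a trait that some
      # users have disclosed themselves (here, a hobby).
      friends = {"ana": ["ben", "cho", "dev"], "ben": ["ana", "cho"]}
      disclosed = {"ben": "hiking", "cho": "hiking", "dev": "chess"}

      def guess_trait(user):
          """Guess a user's trait as the most common disclosed trait among
          their friends -- a crude 'majority of friends' inference."""
          labels = [disclosed[f] for f in friends.get(user, []) if f in disclosed]
          return Counter(labels).most_common(1)[0][0] if labels else None

      print(guess_trait("ana"))  # 'hiking', inferred without ana disclosing anything
      ```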

  14. social-media-ethics-automation.github.io
    1. Spaghetti-tree hoax. November 2023. Page Version ID: 1187320430. URL: https://en.wikipedia.org/w/index.php?title=Spaghetti-tree_hoax&oldid=1187320430 (visited on 2023-12-05).

      The Wiktionary page on "Concern troll" provides a useful definition and context for understanding this behavior, which involves someone pretending to be genuinely concerned about an issue while subtly undermining or derailing the conversation. This concept is highly relevant to discussions about online discourse, especially in activist or politically charged spaces, where concern trolling can be used as a tactic to distract or weaken movements.

    1. In the Black Lives Matter protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could, eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at police surveillance.

      This example of K-pop fans swarming the Dallas Police app during the Black Lives Matter protests in 2020 is a fascinating display of digital activism and collective resistance. It highlights the power of online communities to subvert surveillance efforts in innovative and unexpected ways. This incident resonated with me because it demonstrates how technology, which is often used as a tool for control, can also be repurposed for resistance by marginalized groups and their allies.

  15. social-media-ethics-automation.github.io
    1. lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited on 2023-11-24).

      Todd Vaziri’s observation about the difference in tone between tweets sent from an iPhone versus those sent from an Android during Donald Trump’s 2016 campaign provides an intriguing glimpse into the interplay between technology and personal branding in modern politics. This detail not only highlights the potential delegation of social media tasks to a team (iPhone tweets) but also suggests how Trump’s personal voice (Android tweets) leaned toward more hyperbolic and inflammatory language.

  16. social-media-ethics-automation.github.io
    1. Separately, in 2018 during the MeToo movement [f7] , one of @Sciencing_Bi’s friends, Dr. BethAnn McLaughlin (a white woman), co-founded the MeTooSTEM non-profit organization, to gather stories of sexual harassment in STEM (Science, Technology, Engineering, Math). Kyle also followed her on Twitter until word later spread of Dr. McLaughlin’s toxic leadership and bullying in the MeTooSTEM organization (Kyle may have unfollowed @Sciencing_Bi at the same time for defending Dr. McLaughlin, but doesn’t remember clearly).

      The story of Dr. BethAnn McLaughlin and the fallout from her leadership in the MeTooSTEM organization brings up an important tension between advocacy and accountability. It’s inspiring to see movements like MeTooSTEM aiming to bring light to the critical issue of sexual harassment in STEM, but it’s disheartening when leadership within such movements becomes a source of harm.

  17. social-media-ethics-automation.github.io
    1. Comedy Central. Drunk History - John Adams and Thomas Jefferson Had Beef. February 2018. URL: https://www.youtube.com/watch?v=l6Ove4_JsCM (visited on 2023-11-24).

      The video dramatizes the strained friendship and rivalry between Adams and Jefferson during the founding of the United States. It emphasizes Adams’ Federalist beliefs and Jefferson’s Democratic-Republican ideals, which contributed to their political and personal clashes. Despite their tensions, the story concludes with their correspondence later in life, showcasing their ability to reconcile.

    1. Around the same time, text messaging (SMS) [e9] on cell phones became popular as another way to send messages to friends, family, and acquaintances.

      Text messaging on cell phones is a great invention. For international students like me who are far from home, being able to reach our families by text makes it very easy to stay in touch.

    1. Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define its color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).

      Before reading this, I had no idea that images were created by defining a grid of points. Then one day, while searching for images on my computer, I inadvertently zoomed in on one and was pleasantly surprised to see that it really was made of countless dots, which became more and more obvious the further I zoomed in.
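
      To make the quoted description concrete, here is a minimal Python sketch of a 2x2 image as a list of rows, where each row is a list of pixels and each pixel is three numbers (red, green, blue); the color values are arbitrary examples.

      ```python
      # A tiny 2x2 image: a list of rows, each row a list of pixels,
      # each pixel three numbers (red, green, blue) from 0 to 255.
      image = [
          [(255, 0, 0), (0, 255, 0)],      # top row: red, green
          [(0, 0, 255), (255, 255, 255)],  # bottom row: blue, white
      ]

      # Accessing a single dot is just indexing by row, then column --
      # these are the dots that appear when you zoom far into an image.
      print(image[0][0])  # (255, 0, 0)
      ```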

    1. But Kurt Skelton was an actual human (in spite of the well done video claiming he was fake). He was just trolling his audience. Professor Casey Fiesler [c16] talked about it on her TikTok channel:

      This one struck me as funny. I hadn't seen a video like this before, because everything I had seen previously was bots pretending to be real people. A real person pretending to be a bot is a novel way to attract attention and get viewers hunting for clues about whether he is a robot or not.

    2. On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers [c8], or making fake crowds that appear to support a cause (called Astroturfing [c9]).

      I have run into this myself: in the comment sections of bloggers who post very strange ideas, I often see many replies with identical text, or comments that have nothing to do with the post. That's when I can guess the blogger has bought bot comments to inflate the post's popularity.
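
      The pattern described here (many accounts posting identical text under one post) is easy to check for in a rough way. Below is a small Python sketch with made-up comments; real astroturfing detection is of course far more involved.

      ```python
      from collections import Counter

      # Made-up comments collected from a single post.
      comments = [
          "Great point, totally agree!",
          "Great point, totally agree!",
          "This changed my mind, thank you.",
          "Great point, totally agree!",
          "Great point, totally agree!",
      ]

      def suspicious_duplicates(comments, threshold=3):
          """Flag any comment text repeated at least `threshold` times,
          a crude signal of purchased or automated replies."""
          counts = Counter(c.strip().lower() for c in comments)
          return {text: n for text, n in counts.items() if n >= threshold}

      print(suspicious_duplicates(comments))
      # {'great point, totally agree!': 4}
      ```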

    1. Often we’ll see tech that is scary. I don’t mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues.

      The point about altering video resonates with me. I have often heard of people using AI technology to swap someone's face into a video so convincingly that it is indistinguishable to the naked eye, then sending the video to that person's family to extort money from them. It is a very scary technology.

    1. Being and becoming an exemplary person (e.g., benevolent, sincere, honoring and sacrificing to ancestors, respectful to parents, elders, and authorities, taking care of children and the young, generous to family and others). These qualities are often expressed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), resulting in a harmonious society.

      Confucianism is the most influential school of thought in ancient China, emerging during the Spring and Autumn and Warring States periods. Confucius was its founder. There are nine core ideas in Confucianism: benevolence, righteousness, propriety, wisdom, faith, forgiveness, loyalty, filial piety, and fraternity. Of these nine, the four most important are loyalty, filial piety, benevolence, and righteousness.