  1. Mar 2025
  2. social-media-ethics-automation.github.io
    1. Olivia Solon. 'It's digital colonialism': how Facebook's free internet service has failed its users. The Guardian, July 2017. URL: https://www.theguardian.com/technology/2017/jul/27/facebook-free-basics-developing-markets (visited on 2023-12-10).

      Solon’s article critically examines Facebook’s Free Basics initiative, arguing that the service—while framed as a benevolent effort to connect underserved populations—can be seen as a form of digital colonialism. By offering limited, controlled internet access, the program not only restricts the information users can access but also serves as a strategic move to entrench Meta’s market dominance, echoing historical patterns of exploitation and power imbalance.

    1. “Mark Zuckerberg is on a crusade to put every single human being online.”

      This bold claim encapsulates Zuckerberg’s narrative of benevolence, yet it also invites us to critically assess the implications of his vision. While connecting everyone appears altruistic, it raises questions about the paternalistic, even colonial, dynamics at play—are we truly being empowered, or are we being subtly steered into a model that prioritizes corporate profit and Western-centric values over local needs and cultural diversity?

  3. social-media-ethics-automation.github.io
    1. Cory Doctorow. The ‘Enshittification’ of TikTok. Wired, 2023. URL: https://www.wired.com/story/tiktok-platforms-cory-doctorow/ (visited on 2023-12-10).

      Doctorow’s article provides a compelling analysis of how platforms like TikTok can gradually degrade user experience in their pursuit of profit—a process he calls “enshittification.” By detailing how monetization pressures lead to design choices that prioritize engagement over quality, it challenges us to rethink the long-term implications of surveillance capitalism on both platform integrity and user well-being.

    1. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users.

      This sentence captures the genesis of a business model that prioritizes profit through data exploitation. While tailoring content can enhance user experience, it also sets the stage for companies like Meta to collect far more information than users may realize, leading to significant ethical concerns regarding privacy and manipulation. This approach not only fuels targeted advertising but also consolidates market power, often at the expense of user autonomy and data protection.

  4. social-media-ethics-automation.github.io
    1. Paul Billingham and Tom Parr. Enforcing social norms: The morality of public shaming. European J of Philosophy, 28(4):997–1016, December 2020. URL: https://onlinelibrary.wiley.com/doi/10.1111/ejop.12543 (visited on 2023-12-10), doi:10.1111/ejop.12543.

      Billingham and Parr develop a framework for evaluating when public shaming might be ethically acceptable. They outline essential constraints—proportionality, necessity, respect for privacy, non-abusiveness, and reintegration—that help ensure shaming is aimed at reinforcing social norms without causing undue harm. This analysis is particularly valuable in our digital era, where public shaming can easily devolve into mob justice, and it invites us to think critically about how we can balance accountability with compassion in online communities.

    1. Public shaming must aim at, and make possible, the reintegration of the norm violator back into the community, rather than permanently stigmatizing them.

      This guideline emphasizes that the goal of public shaming should be corrective rather than punitive. It suggests that while holding individuals accountable is important, the process should ultimately allow for their reintegration into society. This raises crucial questions: How do we balance accountability with compassion, and what mechanisms can ensure that shaming doesn’t lead to irreversible social exclusion?

  5. social-media-ethics-automation.github.io
    1. Ellie Hall. Twitter Data Has Revealed A Coordinated Campaign Of Hate Against Meghan Markle. BuzzFeed News, October 2021. URL: https://www.buzzfeednews.com/article/ellievhall/bot-sentinel-meghan-markle-prince-harry-twitter (visited on 2023-12-10).

      Hall’s investigative report reveals how coordinated efforts on Twitter have been used to target and harass Meghan Markle, highlighting the dark side of algorithm-driven social media dynamics. By exposing how automated behaviors and organized groups amplify hate speech, the article underscores the urgent need for platforms to develop robust moderation tools and ethical guidelines that protect individuals from such targeted abuse.

    1. The amplifier’s network found a common enemy and cause, and this reinforces their values and norms.

      This sentence encapsulates how harassment can become self-perpetuating within a community. By identifying a common enemy, the group not only justifies its actions but also strengthens its own identity and cohesion—often at the expense of the targeted individual’s voice. It raises ethical questions about how online communities can normalize harmful behavior while sidelining dissent and critical dialogue.

  6. Feb 2025
  7. social-media-ethics-automation.github.io
    1. Karen Hao. How Facebook got addicted to spreading misinformation. MIT Technology Review, March 2021. URL: https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/ (visited on 2023-12-08).

      Hao’s article provides a deep dive into how Facebook’s internal incentives and algorithmic tweaks inadvertently fueled the spread of misinformation. By examining the platform’s structural issues, the piece highlights the ethical challenges that arise when engagement metrics drive content amplification, often at the cost of truth and public trust. This analysis is a crucial resource for understanding the broader implications of tech design on societal discourse, prompting us to question how platforms can be re-engineered to prioritize accurate information over sensationalism.

    1. Mills argued that a truly just society would need to include ALL subgroups in devising and agreeing to the imagined social contract, instead of some subgroups using their rights and freedoms as a way to impose extra moderation on the rights and freedoms of other groups.

      This idea is both provocative and timely, as it challenges the conventional power structures behind content moderation on social media. It suggests that if marginalized communities had an equal voice in shaping the rules, moderation practices might better protect against systemic bias and ensure fairer representation of diverse perspectives. This prompts us to reconsider who truly benefits from current moderation policies and how they might evolve to foster a more inclusive digital public sphere.

  8. social-media-ethics-automation.github.io
    1. Kate Crawford. Time to regulate AI that interprets human emotions. Nature, 592(7853):167–167, April 2021. URL: https://www.nature.com/articles/d41586-021-00868-5 (visited on 2023-12-08), doi:10.1038/d41586-021-00868-5.

      In this succinct article, Crawford emphasizes the urgent need to regulate AI systems that analyze human emotions, highlighting the risks of misinterpretation and potential harm, especially when these systems influence mental health assessments. Her argument is a crucial reminder that as technology becomes more integrated into our emotional lives, it must be governed by robust ethical standards to safeguard individual well-being.

    1. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset [m5].

      This sentence highlights a profound ethical dilemma in digital research—manipulating user experience without their consent. It underscores how even well-intentioned experiments can backfire when transparency is compromised, ultimately eroding trust in social media platforms. How can companies balance the need to improve their algorithms with the ethical obligation to respect users’ autonomy and mental well-being?

  9. social-media-ethics-automation.github.io
    1. Meme. December 2023. Page Version ID: 1187840093. URL: https://en.wikipedia.org/w/index.php?title=Meme&oldid=1187840093#Etymology (visited on 2023-12-08).

      This Wikipedia entry on “Meme” provides a concise yet comprehensive overview of how the concept originated—from Richard Dawkins’ introduction in The Selfish Gene to its evolution as a fundamental element of internet culture. It’s particularly interesting to see how memes serve as modern vehicles for cultural transmission, much like chain letters did in earlier times, effectively demonstrating how ideas replicate and mutate in the digital age.

    1. Chain letters were letters that instructed the recipient to make their own copies of the letter and send them to people they knew.

      This sentence highlights the mechanics of chain letters, which exemplify how information spreads in a viral manner, similar to modern-day digital phenomena. While the medium has evolved from physical mail to SMS and social media, the underlying concept, spreading ideas or behaviors through networks, remains the same. Chain letters also raise ethical questions about manipulation and trust, echoing concerns about viral content today. What are the ethical boundaries between harmless fun and exploiting social connections for manipulation?
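
      The replicate-and-forward mechanic described here is essentially a branching process, and a few lines of Python make the dynamic concrete. This is a minimal sketch with made-up numbers: the copies-per-person and compliance figures are assumptions for illustration, not data from the chapter.

      ```python
      # Toy branching-process model of a chain letter.
      # Hypothetical assumptions, for illustration only:
      # - each recipient is asked to send `copies_per_person` copies
      # - `compliance` is the fraction of recipients who actually forward it

      def chain_letter_growth(generations, copies_per_person=5, compliance=0.4):
          """Return the approximate number of letters in each generation."""
          counts = [1]  # generation 0: the original letter
          for _ in range(generations):
              # Each letter spawns copies_per_person * compliance new ones on average.
              counts.append(counts[-1] * copies_per_person * compliance)
          return counts

      for gen, n in enumerate(chain_letter_growth(8)):
          print(f"generation {gen}: ~{n:,.0f} letters")
      ```

      When the average number of forwarded copies per letter is above 1 the chain grows exponentially, and below 1 it dies out; the same threshold logic underlies how we reason about viral content today.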

  10. social-media-ethics-automation.github.io
    1. Folding Ideas. In Search Of A Flat Earth. September 2020. URL: https://www.youtube.com/watch?v=JTfhYyTuT44 (visited on 2023-12-07).

      This video essay by Folding Ideas provides a fascinating analysis of how YouTube’s recommendation algorithms can inadvertently funnel users into increasingly extreme viewpoints, ultimately contributing to the rise of movements like Flat Earth. It details how the system’s emphasis on maximizing retention leads to a gradual narrowing of content, creating echo chambers where fringe ideas gain traction. This raises important questions about the ethical responsibilities of platform designers in balancing engagement with the potential societal impacts of such recommendation strategies.

    1. Modern Flat Earth [movement] was essentially created by content algorithms trying to maximize retention and engagement by serving users suggestions for things that are, effectively, incrementally more concentrated versions of the thing they were already looking at.

      This insight from Dan Olson underscores how recommendation algorithms can accidentally create fertile ground for conspiracy theories to spread. By continually feeding users more extreme versions of the same content, platforms can push people into insular “echo chambers.” It raises ethical questions about whether these algorithms should be optimized purely for engagement, or if there’s a responsibility to mitigate the risk of radicalizing or misleading users.
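
      To make the “incrementally more concentrated” dynamic concrete, here is a deliberately oversimplified sketch. The intensity scores and the scoring rule are my own assumptions for illustration, not how YouTube’s actual system works.

      ```python
      import random

      # Toy model of engagement-driven recommendation drift.
      # Each video gets a hypothetical "intensity" score from 0 (mainstream)
      # to 1 (fringe). As a stand-in for real engagement prediction, we
      # assume users stay longest on content slightly more intense than
      # whatever they watched last.

      def recommend(last_intensity, catalog, step=0.05):
          """Pick the video whose intensity is closest to (last + step)."""
          target = last_intensity + step
          return min(catalog, key=lambda v: abs(v - target))

      random.seed(0)
      catalog = [random.random() for _ in range(1000)]  # 1000 candidate videos

      intensity = 0.1  # the user starts on fairly mainstream content
      for session in range(15):
          intensity = recommend(intensity, catalog)
          print(f"session {session:2d}: intensity {intensity:.2f}")
      ```

      No single recommendation looks like a dramatic jump, yet after a dozen sessions the toy user has drifted far from where they started, which is exactly the gradual narrowing the video describes.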

  11. social-media-ethics-automation.github.io
    1. Alannah Oleson. Beyond “Average” Users: Building Inclusive Design Skills with the CIDER Technique. Bits and Behavior, October 2022. URL: https://medium.com/bits-and-behavior/beyond-average-users-building-inclusive-design-skills-with-the-cider-technique-413969544e6d (visited on 2023-12-07).

      Oleson’s article challenges the notion of designing for an “average” user by introducing the CIDER technique, which encourages designers to build truly inclusive products. It provides practical strategies for identifying and mitigating biases in design processes, reminding us that when we account for diverse user experiences, we create technology that is more accessible and equitable.

    1. When designers and programmers don’t think to take into account different groups of people, then they might make designs that don’t work for everyone.

      This sentence underscores the fundamental importance of inclusive design, reminding us that overlooking diverse user needs can lead to products that unintentionally marginalize entire groups. It highlights why diversity in design teams isn’t just a buzzword—it’s essential to creating technology that truly serves a broad spectrum of society.

    1. General Data Protection Regulation. November 2023. Page Version ID: 1187294017. URL: https://en.wikipedia.org/w/index.php?title=General_Data_Protection_Regulation&oldid=1187294017 (visited on 2023-12-05).

      The GDPR is one of the most comprehensive privacy regulations globally, aiming to give individuals greater control over their personal data. One of its most impactful provisions is the right to be forgotten, which allows users to request the deletion of their personal data. However, its enforcement has been challenging, as companies often struggle with compliance, and critics argue that large tech firms still collect vast amounts of user data. How effective do you think the GDPR has been in truly protecting user privacy in the face of rapidly evolving digital surveillance practices?

  12. social-media-ethics-automation.github.io
    1. When we use social media platforms though, we at least partially give up some of our privacy.

      This statement encapsulates a critical trade-off in our digital lives—while social media connects us, it also requires us to sacrifice some personal privacy. Even as companies collect and analyze our data to enhance user experience or for security, it raises important ethical questions about consent and the extent to which individuals should control their own information. How can we, as users, balance the benefits of connectivity with the need to safeguard our private lives in an era of pervasive data collection?

  13. Jan 2025
  14. social-media-ethics-automation.github.io
    1. Christie Aschwanden. Science Isn’t Broken. FiveThirtyEight, August 2015. URL: https://fivethirtyeight.com/features/science-isnt-broken/ (visited on 2023-12-05).

      Aschwanden’s article is a great reminder that science is often more complex and nuanced than we assume. The interactive feature on FiveThirtyEight, where users can manipulate data to “prove” different political parties are better for the economy, is a striking example of how data can be framed to support various narratives. This reinforces the importance of critical thinking when interpreting statistical claims, especially in the era of social media where misleading correlations and p-hacking can easily shape public opinion.

    1. It turns out that if you look at a lot of data, it is easy to discover spurious correlations [h8] where two things look like they are related, but actually aren’t.

      This is a crucial point in data analysis, as spurious correlations can lead to misleading conclusions if not properly scrutinized. The example of Yankee Candle reviews and COVID-19 cases is a fascinating demonstration of how seemingly unrelated variables can align, but it also highlights the need for careful interpretation. In an era where AI and machine learning are increasingly used to analyze social media trends, how can we ensure that correlations are meaningfully investigated rather than simply accepted at face value?
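
      The “look at enough data and relationships appear” effect is easy to reproduce with nothing but random noise. Here is a minimal sketch; every series below is pure noise, so any correlation it finds is guaranteed to be spurious.

      ```python
      import random
      import statistics

      # Generate many unrelated random series, then hunt for the pair
      # that happens to correlate most strongly. None are actually related.

      def pearson(xs, ys):
          """Pearson correlation coefficient of two equal-length lists."""
          mx, my = statistics.fmean(xs), statistics.fmean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sum((x - mx) ** 2 for x in xs) ** 0.5
          sy = sum((y - my) ** 2 for y in ys) ** 0.5
          return cov / (sx * sy)

      random.seed(42)
      n_series, length = 200, 30
      series = [[random.gauss(0, 1) for _ in range(length)] for _ in range(n_series)]

      best = max(
          ((i, j, pearson(series[i], series[j]))
           for i in range(n_series) for j in range(i + 1, n_series)),
          key=lambda t: abs(t[2]),
      )
      print(f"series {best[0]} vs {best[1]}: r = {best[2]:.2f}, by pure chance")
      ```

      With roughly 20,000 pairs to search, a correlation that would look impressive in isolation turns up by chance alone; this is the same multiple-comparisons trap that drives p-hacking.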

  15. social-media-ethics-automation.github.io
    1. Gregory Pratt. Cruel online posts known as RIP trolling add to Tinley Park family's grief. Chicago Tribune, August 2013. URL: https://www.chicagotribune.com/suburbs/ct-xpm-2013-08-12-ct-met-rip-trolling-20130812-story.html (visited on 2023-12-05).

      This article provides a heartbreaking account of RIP trolling, a form of online harassment where trolls target grieving families by posting cruel comments on memorial pages or other public online spaces dedicated to loved ones. It emphasizes the psychological harm caused by such trolling and highlights the challenges of regulating this behavior on social media platforms. This source underscores the darker side of trolling culture and how anonymity on the internet can amplify malicious behavior, which aligns with the broader discussion of trolling origins and ethical considerations in this chapter.

    1. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with.

      This description highlights how trolling originally relied on exploiting a knowledge gap to create division between "in-groups" and "out-groups." While it might have been relatively harmless in its early form, it set the stage for more malicious forms of trolling that capitalize on power dynamics or vulnerabilities. It’s worth asking: could these early practices have been reshaped into more positive forms of mentorship or community-building instead of exclusion and ridicule?

  16. social-media-ethics-automation.github.io
    1. Lindsay Ellis. YouTube: Manufacturing Authenticity (For Fun and Profit!). September 2018. URL: https://www.youtube.com/watch?v=8FJEtCvb2Kw (visited on 2023-11-24).

      Lindsay Ellis’s video essay provides a nuanced exploration of how content creators on YouTube strategically curate their personas to appear authentic while still aligning with platform algorithms and audience expectations. One key insight is her discussion on the monetization of relatability, where creators profit from fostering parasocial relationships. This perspective is especially relevant to the broader discussion of authenticity in social media, as it highlights the tension between genuine self-expression and performative branding. It raises the question: can authenticity exist in spaces where financial incentives reward calculated personas?

  17. social-media-ethics-automation.github.io
    1. Authenticity is a concept we use to talk about connections and interactions when the way the connection is presented matches the reality of how it functions.

      This definition captures the essence of why authenticity matters—it's about aligning expectations with reality. In social media, where personas can be curated or exaggerated, this alignment becomes complex. For example, influencers often present "authentic" glimpses into their lives, but the curated nature of their content can blur the lines between genuine connection and performative branding. Does this curated authenticity still count as authentic if it fulfills the audience’s expectations, or does it inherently mislead?

  18. social-media-ethics-automation.github.io
    1. Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.

      Standage's Writing on the Wall offers a unique perspective by tracing the history of social media-like communication systems back to ancient Rome and beyond. One striking detail is the comparison between Roman "social networks" of handwritten letters and today’s digital platforms, emphasizing that the need for connection and sharing information transcends technology. This historical framing adds depth to modern discussions of social media design by reminding us that many of today’s ethical and social challenges have historical precedents. It would be intriguing to explore how past solutions might inform present-day platform design.

    1. One famous example of reducing friction was the invention of infinite scroll [e31].

      Infinite scroll is a fascinating example of how reducing friction can significantly impact user behavior. While it creates a seamless and engaging browsing experience, it also raises ethical concerns about its contribution to overuse or addiction. Aza Raskin's regret over its invention underscores the unintended consequences of prioritizing ease of use over user well-being. Could intentionally adding friction, like periodic reminders to take a break, help mitigate these effects while maintaining usability?

  19. social-media-ethics-automation.github.io
    1. Sasha Costanza-Chock. September 2023. Page Version ID: 1176749847. URL: https://en.wikipedia.org/w/index.php?title=Sasha_Costanza-Chock&oldid=1176749847 (visited on 2023-11-24).

      Costanza-Chock’s Design Justice introduces a compelling framework for rethinking how technology is created, emphasizing inclusivity and the active participation of marginalized communities in the design process. One significant detail is the critique of “default” design assumptions that often cater to privileged groups while ignoring diverse user needs. This book’s perspective is especially relevant in discussing the ethical implications of automation and data systems in social media, highlighting the need to address biases embedded in design choices. It would be interesting to apply the principles of design justice to the creation and regulation of social media bots.

    1. We can use dictionaries and lists together to make lists of dictionaries, lists of lists, dictionaries of lists, or any other combination.

      This flexibility in combining data structures like lists and dictionaries is incredibly powerful for modeling real-world scenarios. For example, in social media applications, nested dictionaries and lists are commonly used to represent user profiles, posts, and interactions. However, this complexity can also lead to performance issues or errors if the structures aren’t well-designed. How do developers balance this flexibility with maintaining readability and efficiency in large-scale applications?
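
      As a concrete illustration, here is a minimal sketch of the kind of nested structure the chapter describes; the field names are illustrative, not any real platform’s API.

      ```python
      # A list of dictionaries, where one value is itself a list of
      # dictionaries: a common shape for representing posts and replies.
      posts = [
          {
              "author": "user_a",
              "text": "Hello world!",
              "replies": [  # a list nested inside a dictionary
                  {"author": "user_b", "text": "Hi!"},
              ],
          },
          {"author": "user_c", "text": "Second post", "replies": []},
      ]

      # Looping over the nested structure:
      for post in posts:
          print(post["author"], "posted:", post["text"])
          for reply in post["replies"]:
              print("  reply from", reply["author"] + ":", reply["text"])
      ```

      Keeping the nesting shallow and the key names consistent is one practical answer to the readability question above.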

  20. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      Sarah Jeong’s article provides critical insights into the risks of biases in automated systems and highlights practical steps for avoiding harm. She emphasizes the importance of careful dataset curation and ethical oversight in bot creation, which remains highly relevant as AI continues to shape social media dynamics. It would be interesting to compare this advice with more recent examples of AI-driven bots, such as ChatGPT, to assess how effectively these concerns have been addressed over time.
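
      One of the article’s practical takeaways is to filter a bot’s output before it posts. Below is a minimal sketch of that idea; the blocklist contents and function names are hypothetical, and Jeong points to more complete, community-maintained word filters.

      ```python
      # Minimal sketch of a pre-posting content filter for a bot.
      # BLOCKLIST is a placeholder; a real bot would load a maintained,
      # regularly reviewed list rather than hard-coding a tiny set.

      BLOCKLIST = {"badword1", "badword2"}  # stand-ins for actual blocked terms

      def is_safe_to_post(text):
          """Return True only if no blocked term appears in the text."""
          words = (word.strip(".,!?").lower() for word in text.split())
          return not any(word in BLOCKLIST for word in words)

      def post_if_safe(text, post_function):
          """Gate the bot's output: post only after the filter approves."""
          if is_safe_to_post(text):
              post_function(text)
          else:
              print("Blocked a post that failed the content filter.")

      post_if_safe("Hello everyone!", print)  # stand-in for a real posting API
      ```

      A blocklist alone is a blunt instrument, since it misses context and creative spellings, which is partly why Jeong emphasizes curating what a bot learns from in the first place rather than only filtering what it says.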

    1. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected.

      This highlights the complexity of assigning ethical responsibility when bots are involved. Unlike the protesting donkey, a bot's actions can be precisely designed to fulfill intentions, even if indirectly. Should ethical accountability extend beyond the bot's creator to include those who deploy or schedule the bot, especially when their purposes diverge? This fragmentation of responsibility complicates enforcement and raises concerns about transparency on social media platforms.

    1. It might help to think about ethical frameworks as tools for seeing inside of a situation.

      This analogy of ethical frameworks as diagnostic tools, like x-rays or MRIs, is particularly effective in highlighting their practical application. However, unlike medical tools, ethical frameworks often depend on subjective interpretation, which might lead to conflicting “diagnoses” of a situation. How can we reconcile these differing ethical “readings” when they point towards opposing courses of action?

    1. Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.

      Kant’s categorical imperative grounds morality in consistency and universality: if you are unsure whether an action is right, ask whether the principle behind it could be willed as a universal law. Yet it poses practical difficulties in the realm of social media, where worldwide platforms can clash with diverse cultural norms and legal frameworks. Would it be reasonable to expect a single “universal law” to effectively guide ethical moderation policies in such profoundly different contexts?