26 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Devin Coldewey. Study finds Reddit's controversial ban of its most toxic subreddits actually worked. TechCrunch, September 2017. URL: https://techcrunch.com/2017/09/11/study-finds-reddits-controversial-ban-of-its-most-toxic-subreddits-actually-worked/ (visited on 2023-12-08).

      In a field dominated by philosophy, the Coldewey article on Reddit's subreddit bans is one of relatively few pieces of empirical evidence. That banning toxic communities decreased overall hate speech on the platform (i.e., users did not simply migrate and reestablish at the same levels) was empirically measured, so it offers an objective check that mitigates the motivated-reasoner problem when evaluating moderation actions from a Rawlsian perspective. The consequences-based "escape" has limits, however: while the article quantifies the reduction in hate speech after the ban, it cannot tell us whether being banned changed the banned users' attitudes toward hate speech. That is a far more difficult measurement to make.

    1. 14.5. Moderation and Ethics# In the contexts of social media and public debate, moderation has a meaning that is about creating limits and boundaries about what is posted to keep things working well. But this meaning of “moderation” grew out of a wider, more generic concept of moderation. You might remember seeing moderation coming up in lists of virtues in virtue ethics, back in Chapter 2. So what does moderation (the social practice of limiting what is posted) have to do with moderation (the abstract ethical quality)?

       14.5.1. Origin Story for Moderation# One concept that comes up in a lot of different ethical frameworks is moderation. Famously, Confucian thinkers prized moderation as a sound principle for living, or as a virtue, and taught the value of the ‘golden mean’, or finding a balanced, moderate state between extremes. This golden mean idea got picked up by Aristotle—we might even say ripped off by Aristotle—as he framed each virtue as a medial state between two extremes. You could be cowardly at one extreme, or brash and reckless at the other; in the golden middle is courage. You could be miserly and penny-pinching, or you could be a reckless spender, but the aim is to find a healthy balance between those two. Moderation, or being moderate, is something that is valued in many ethical frameworks, not because it comes naturally to us, per se, but because it is an important part of how we form groups and come to trust each other for our shared survival and flourishing.

       Moderation also comes up in deontological theories, including the political philosophy tradition that grew out of Kantian rationalism: the tradition that is often identified with John Rawls, although there are many other variations out there too. In brief, here is the journey of the idea: Kant was influenced by ideas that were trending in his time–the European era we call the “Enlightenment”, which became very interested in the idea of rationality. We could write books about what they meant by the idea of “rationality”, and Kant certainly did so, but you probably already have a decent idea of what rationality is about. Rationalism tries to use reasoning, logical argument, and scientific evidence to figure out what to make of the world. Kant took this idea and ran with it, exploring the question of what if everything, even morality, could be derived from looking at rationality in the abstract.

       Many philosophers and, let’s face it, many sensible people since Kant have questioned whether his project could succeed, or whether his question was even a good question to be asking. Can one person really get that kind of “god’s-eye view” of ultimate rationality? People disagree a lot about what would be the most rational way to live. Some philosophers even suggested that it is hard to think about what is rational or reasonable without our take being skewed by our own aims and egos. We instinctively take whatever suits our own goals and frame it in the shape of reasons. Those who do not want their wealth taxed have reasons in the shape of rational arguments for why they should not be taxed. Those who do believe wealth should be taxed have reasons in the shape of rational arguments for why taxes should be imposed. Our motivations can massively affect which of those rationales we find to be most rational. This is what John Rawls wanted to address.

      The section title "Origin Story for Moderation" quietly performs the very problem the chapter describes: tracing moderation's ancestry (Confucius→Aristotle→Kant→Rawls) is itself a way of deciding whose rationality gets to define the golden mean. Critics have long argued that Rawls's "veil of ignorance" was designed to screen out exactly the self-interested reasoning (the wealthy man's "rational" argument against paying taxes) that the chapter identifies; at least as many critics counter that the veil still smuggles in assumptions about what a rational person values. The chapter presents this problem as resolved by Rawls, yet the bigger question remains: moderation-as-a-virtue and moderation-as-content-policy suffer from the same fundamental flaw -- each assumes a mediator/moderator who stands apart from all the other participants in the moderation process.

  3. social-media-ethics-automation.github.io
    1. Anya Kamenetz. Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image. Psychology Today, February 2020. URL: https://www.psychologytoday.com/us/articles/202002/selfies-filters-and-snapchat-dysmorphia-how-photo-editing-harms-body-image (visited on 2023-12-08).

      Kamenetz's "Snapchat dysmorphia" -- individuals opting for cosmetic surgery to resemble their filtered self-portraits -- is a concrete demonstration of the harms a CIDER analysis is meant to surface. Filters are built on the premise that users want to see a manipulated, supposedly improved version of themselves, and at no point in the design process does that premise get questioned. Nor are the costs of that premise evenly distributed: those who already have body-image issues or suffer from depression experience the negative consequences far more than others. This relates closely to the chapter's assertion that increasing access to a tool known to produce harm can itself be morally wrong.

    1. 13.6. Design Analysis: Mental Health# We want to provide you, the reader, a chance to explore mental health more. We want you to be considering potential benefits and harms to the mental health of different people (benefits like reducing stress, feeling part of a community, finding purpose, etc. and harms like unnecessary anxiety or depression, opportunities and encouragement of self-bullying, etc.).

       As you do this you might consider personality differences (such as introverts and extroverts), and neurodiversity [m37], the ways people’s brains work and process information differently (e.g., ADHD, Autism, Dyslexia, Face blindness, depression, anxiety). But be careful generalizing about different neurotypes (such as Autism [m38]), especially if you don’t know them well. Instead try to focus on specific traits (that may or may not be part of a specific group) and the impacts on them (e.g., someone easily distracted by motion might…, or someone sensitive to loud sounds might…, or someone already feeling anxious might…).

       We will be doing a modified version of the five-step CIDER method [m39] (Critique, Imagine, Design, Expand, Repeat). While the CIDER method normally assumes that making a tool accessible to more people is morally good, if that tool is potentially harmful to people (e.g., give people unnecessary anxiety), then making the tool accessible to more people might be morally bad. So instead of just looking at the assumptions made about people and groups using a social media site, we will be also looking at potential harms to different people and groups using a social media site.

       So open a social media site on your device. Then do the following (preferably on paper or in a blank computer document):

      The chapter author's emphasis on avoiding generalizations about neurotypes, and the encouragement to "focus on specific traits," resembles what Socrates said about writing in the Phaedrus: writing addresses all people and nobody in particular, so it can never be responsive to the individual reader. Category labels such as "Autism" or "ADHD" work the way writing does; they reduce a spectrum to a single target audience. Using the CIDER approach to break a design down trait by trait can therefore be read as asking designers to design dialogically rather than categorically: instead of "what needs do autistic users have?", the question becomes "how does somebody with a specific sensitivity to motion experience this?". The chapter quietly makes an epistemological claim (that is, a claim about knowledge): good design, like good rhetorical communication, depends on understanding specifics rather than relying on generalizations.

  4. May 2026
  5. social-media-ethics-automation.github.io
    1. The Selfish Gene. December 2023. Page Version ID: 1188207750. URL: https://en.wikipedia.org/w/index.php?title=The_Selfish_Gene&oldid=1188207750 (visited on 2023-12-08).

      The chapter's use of "meme" to describe viral inheritance traces directly back to Dawkins's book The Selfish Gene (1976), where he coined the term to explain cultural units that spread, mutate, and compete for attention, essentially applying evolutionary logic to ideas. Worth noting is how far the word has drifted from its original meaning. Dawkins meant "meme" as a serious theoretical concept parallel to the gene, an explanation of why some ideas persist across generations while others die out; the internet reduced it to the image-macro format. That drift perfectly demonstrates the inheritance with modification the chapter describes: the word "meme" got replicated, got mutated, and the mutation won out over the original meaning.

    1. 12.3.1. Replication (With Inheritance)# For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a., inheritance).

       There are ways of duplicating that are built into social media platforms:
       - Actions such as liking, reposting, replying, and paid promotion get the original posting to show up for users more
       - Actions like quote tweeting, or the TikTok Duet feature, let people see the original content, but modified with new context
       - Social media sites also provide ways of embedding posts in other places, like in news articles

       There are also ways of replicating social media content that aren’t directly built into the social media platform, such as:
       - copying images or text and reposting them yourself
       - taking screenshots, and cross-posting to different sites

      The author's use of "inheritance" for content replication is striking because the biological analogy (mutation) carries a telling duality: mutations that replicate successfully do not have to be good, only able to reproduce faster than others. A screenshot stripped of its original context, or a quote tweet repurposing someone's words, takes on the viral ability of the original while losing whatever accuracy it had. The modifications that let content go viral and get copied are thus often exactly the ones that strip away its nuance.
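      The replication-with-inheritance mechanism, and the nuance-stripping mutations discussed here, can be sketched as a toy simulation. Everything below (clause-based posts, the drop-a-clause mutation rule, the probabilities) is invented purely for illustration, not drawn from the book:

```python
import random

def replicate(post, n_copies, mutate_prob=0.3, rng=None):
    """Make copies of a post; each copy sometimes carries a modification
    (here: dropping one comma-separated clause, standing in for lost
    context or nuance). Copies inherit whatever text their parent has."""
    rng = rng or random.Random(0)
    copies = []
    for _ in range(n_copies):
        clauses = post.split(", ")
        if len(clauses) > 1 and rng.random() < mutate_prob:
            clauses.pop(rng.randrange(len(clauses)))  # mutation: lose a clause
        copies.append(", ".join(clauses))
    return copies

def spread(post, generations, copies_per_gen=2, mutate_prob=0.3, rng=None):
    """Replicate across generations: each generation copies from the
    previous one, so any modification is inherited downstream."""
    rng = rng or random.Random(0)
    current = [post]
    for _ in range(generations):
        current = [c for parent in current
                   for c in replicate(parent, copies_per_gen, mutate_prob, rng)]
    return current
```

Run over a few generations, mutated copies keep propagating their losses: a clause dropped in generation one never reappears in that lineage, which is the "inheritance" part of inheritance with modification.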

  6. social-media-ethics-automation.github.io
    1. Systemic bias. November 2023. Page Version ID: 1185361788. URL: https://en.wikipedia.org/w/index.php?title=Systemic_bias&oldid=1185361788 (visited on 2023-12-07).

      The systemic bias article defines "bias" in terms of structural patterns that develop in a system over time, rather than individual prejudice. Applied to recommendation algorithms, this means an algorithm can be designed in a way that systematically down-prioritizes certain communities or types of content without any one designer intending it. What I find interesting is that the chapter lists everything an algorithm weighs when making a decision for a user (location, what users have engaged with, etc.) but never asks whose behavior those variables are based on. That omission is the definition of systemic bias.

    1. 11.1. What Recommendation Algorithms Do# When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

       Some recommendation algorithms can be simple such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated, taking into account many factors, such as:
       - Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
       - Whether the post was made or liked by my friends or people I’m following
       - How much this post has been liked, interacted with, or hovered over
       - Which other posts I’ve been liking, interacting with, or hovering over
       - What people connected to me or similar to me have been liking, interacting with, or hovering over
       - What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely). This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that.
       - Phone numbers or email addresses (sometimes collected deceptively [k1]) can be used to suggest friends or contacts
       - And probably many more factors as well!

       Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons:
       - They don’t want another social media site copying their hard work in coming up with an algorithm
       - They don’t want users to see the algorithm and then be able to complain about specific details
       - They don’t want malicious users to see the algorithm and figure out how to best make their content go viral
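      The factor list above can be made concrete with a toy scoring function. All the field names, weights, and formulas below are invented for illustration (real ranking systems are far more complex and, as the chapter notes, secret); the sketch only shows how several of the listed signals could be combined into a single sortable score:

```python
import math
import time

def score_post(post, viewer, now=None):
    """Toy relevance score combining a few of the factors the chapter lists.
    All weights are arbitrary illustrations, not any platform's real values."""
    now = time.time() if now is None else now
    age_hours = (now - post["created_at"]) / 3600
    recency = math.exp(-age_hours / 24)            # time since posting: decays over a day
    friend_boost = 1.5 if post["author"] in viewer["following"] else 1.0
    popularity = math.log1p(post["likes"])         # diminishing returns on like counts
    affinity = post.get("topic_affinity", 0.0)     # viewer's past engagement with this topic
    return friend_boost * (2.0 * recency + popularity + 3.0 * affinity)

def rank_feed(posts, viewer, now=None):
    """Sort candidate posts by score, highest first."""
    return sorted(posts, key=lambda p: score_post(p, viewer, now), reverse=True)
```

Note how the "reverse chronological" feed the chapter mentions falls out as the special case where the score is just the post's timestamp.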

      The chapter's list of recommendation factors reads like a confession by the platform: it has your "where" (location), your "how long" (hover time), your "who" (contacts), and possibly your "what" (out-loud speech), yet keeps its own algorithm secret. The asymmetry of this relationship is worth dwelling on: users generate the behavioral data the algorithms are trained on, yet cannot access or inspect the algorithm itself. It is a kind of one-way mirror.

      This mirrors the issues of robot accountability discussed in previous chapters: the opacity is a design choice, not a technical necessity. And the very existence of the secrecy creates an environment in which users have no opportunity for informed consent.

  7. Apr 2026
  8. social-media-ethics-automation.github.io
    1. Social model of disability. November 2023. Page Version ID: 1184222120. URL: https://en.wikipedia.org/w/index.php?title=Social_model_of_disability&oldid=1184222120#Social_construction_of_disability (visited on 2023-12-07).

      This article supports the chapter's definition of disability as a social construction. It lays out the difference between the medical model (the problem belongs to the individual with a disability) and the social model (disability is produced by the way society is designed). I find the social model more convincing because it makes the designers of society responsible for removing barriers, rather than holding individuals with disabilities accountable for them. Examples like the staircase-only building or the out-of-reach grocery shelves support this belief: if we designed things differently, many disabilities would cease to exist.

  9. social-media-ethics-automation.github.io
    1. 10.1. Disability# A disability is an ability that a person doesn’t have, but that their society expects them to have.[1] For example:
       - If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation.
       - If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation.
       - If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation.
       - If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.

       Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined [j1]. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another. There are many things we might not be able to do that won’t be considered disabilities because our social groups don’t expect us to be able to do them. For example, none of us have wings that we can fly with, but that is not considered a disability, because our social groups didn’t assume we would be able to.

       Or, for a more practical example, let’s look at color vision: Most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats. So people who can’t see as many colors are considered to be color blind [j2], a disability. But there are also a small number of people who are tetrachromats [j3] and can see four base colors[2] and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people), lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability.

       Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome [j4]

      This way of thinking about disability has totally changed my perspective. I had never considered that what we define as a "disability" depends in large part on what a society believes people should be able to do. The tetrachromat example shows this clearly: tetrachromats can literally see more colors than everyone else, but since our society does not build around tetrachromats' abilities, the extra ability provides no advantage. So the question becomes: if our society were built around tetrachromats, would the rest of us be considered "color impaired" or disabled? This relates to something I've noticed lately: much of the structure and design of the physical world (and many other aspects of society) defaults to a fairly narrow idea of what constitutes a "normal" human body. As the staircase and grocery-shelf examples show, many of the limitations associated with disability are socially constructed through design rather than inherent in the individual. There is an ethically relevant point here too: since definitions of disability are socially constructed, it seems reasonable to conclude that society has some obligation to avoid creating disabling barriers, and to eliminate existing ones, through its design decisions.

    1. Right to privacy. November 2023. Page Version ID: 1186826760. URL: https://en.wikipedia.org/w/index.php?title=Right_to_privacy&oldid=1186826760 (visited on 2023-12-05).

      The Wikipedia article on the right to privacy traces the history of the right far back in time, but what struck me most was the lack of global parity in protecting it. The U.S. remains nearly unique among developed countries in its absence of comprehensive national privacy legislation; instead the country relies on an array of piecemeal laws addressing specific sectors or industries, whereas the European Union adopted a broad regulatory framework for digital privacy (the GDPR). This became even more striking given how much of this chapter concerns data breaches and other corporate misuse of private citizens' data: such a major economy has yet to develop an overarching legal standard to protect its citizens' digital privacy rights.

  10. social-media-ethics-automation.github.io
    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

       For example, the proper security practice for storing user passwords is to use a special individual encryption process [i6] for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time [i7]). But while that is the proper security practice for storing passwords, it is not always followed. For example, Facebook stored millions of Instagram passwords in plain text [i8], meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users [i9].

       From a security perspective there are many risks that a company faces, such as:
       - Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women [i10]
       - Hackers finding a vulnerability and inserting, modifying, or downloading information. For example:
         - hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax [i11]
         - hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users [i12], or about 7% of all people on Earth

       Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like:
       - Password reuse attacks, where if they find out your password from one site, they try that password on many other sites
       - Hackers tricking a computer into thinking they are another site, for example: the US NSA impersonated Google [i13]
       - Social engineering [i14], where they try to gain access to information or locations by tricking people. For example:
         - Phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it. Some people have made malicious QR codes to take you to a phishing site [i15].
         - Many of the actions done by the con-man Frank Abagnale [i16], which were portrayed in the movie Catch Me If You Can [i17]

       One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication [i18] on your accounts.
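      The "special individual encryption process" the passage describes is salted password hashing with a slow key-derivation function. A minimal sketch using Python's standard library (the iteration count and salt size are illustrative choices, not a security recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash. A fresh random salt per user means two
    people with the same password get different stored values."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive from the stored salt and compare in constant time; the
    stored digest never reveals the password itself."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

Because each guess costs 200,000 hash iterations, brute-forcing a leaked database is slow, which is exactly the property the passage describes; storing plain text, as in the Instagram example, skips all of this.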

      The author's suggestion to enable two-factor authentication is practical, but the section glosses over a significant nuance between the two main types of two-factor authentication. Two-factor authentication is presented as a single protective tool, whereas SMS-based two-factor authentication, currently one of the most common forms, has real limitations: codes sent by text can be intercepted (for example, through SIM-swapping or other attacks on the phone network). Codes from an authenticator app such as Google Authenticator are much harder to intercept, because they are generated locally on the user's smartphone and never travel over the network.

      And while the author provides an abundance of information on social engineering and phishing attacks in this section, he offers no guidance on how those attacks interact with SMS-based two-factor authentication: a phishing site can trick a user into entering their SMS code on a fake page and relay it to the attacker immediately. I find myself wondering whether the author should have been more explicit in recommending specific forms of two-factor authentication rather than just saying "two-factor" in general terms.
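      The "time-sensitive codes generated locally" that authenticator apps produce are TOTP (RFC 6238): an HMAC over the current 30-second interval, computed from a secret shared once at setup, so nothing travels over the network afterward. A minimal sketch using only Python's standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current 30-second counter,
    dynamically truncated to a short numeric code (RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation offset
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)
```

Both the phone and the server run this same computation from the shared secret, so a valid code proves possession of the phone without any SMS in transit. TOTP resists interception, though a live phishing page can still relay a freshly typed code, so it narrows rather than closes the phishing gap.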

  11. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      Wagner describes how Facebook collects information about non-users mainly in two ways: through its tracking pixel placed on third-party websites and via contact lists uploaded by current users. What connects this to the chapter's coverage of targeted ads is the consent gap that shadow profiles expose: the chapter's ethical concern about advertiser targeting assumes users have at least chosen to use the platform, yet the targeting machinery appears to draw on the same data for non-users as for users.

    1. 8.4. How is this data used# Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’ [h18].

      The Trump campaign's use of targeted advertisements to suppress Black voters demonstrates a darker structural affordance of data-driven services: the same infrastructure built to match consumers with spider stuffed animals can be weaponized to disenfranchise voters. The chapter treats targeted advertising as potentially unethical yet still frames the user as an end-beneficiary ("those users might be happy to find out about your stuffed animal toys"); the 2016 example removes that framing completely. The user is no longer being sold something but being targeted, and the campaign's ultimate objective was to reduce the user's political agency.

  12. social-media-ethics-automation.github.io
    1. Assassination of Martin Luther King Jr. November 2023. Page Version ID: 1186577416. URL: https://en.wikipedia.org/w/index.php?title=Assassination_of_Martin_Luther_King_Jr.&oldid=1186577416#Alleged_government_involvement (visited on 2023-12-05).

      The Wikipedia article on the assassination of Martin Luther King Jr. lists decades-long conspiracy theories alleging U.S. government involvement, allegations that persist even though the government has officially denied them. This relates directly to Chapter Sixteen's discussion of trust heuristics: when institutional trust collapses for many people (COINTELPRO, the Vietnam War, the King assassination), they do not simply abandon their heuristics; they switch them. Once, the government was an in-group signal; now, for many, it is an out-group marker. What looks like "conspiracy thinking" can thus be the same kind of pattern recognition discussed throughout Chapter Sixteen, operating in a context where official sources have proven themselves untrustworthy. Chapter Sixteen treats failures of heuristics as bugs; history shows some failures may be adaptive responses.

    1. Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad? One answer to this challenge is that we use various heuristics (that is, shortcuts for thinking) like stereotypes and signaling to quickly guess where a person stands in relation to us. Sometimes wearing items of a certain brand signals to people with similar commitments that you might be on the same page. Sometimes features that are strongly associated with certain social groups—stereotypes—are assumed to tell us whether or not we can trust someone. Have you ever tried to change or mask your accent, to avoid being marked as from a certain region? Have you ever felt the need to conceal something about yourself that is often stereotyped, or to use an ingroup signal to deflect people’s attention from a stereotyped feature? There is a reason why stereotypes are so tenacious: they work… sort of. Humans are brilliant at finding patterns, and we use pattern recognition to increase the efficiency of our cognitive processing. We also respond to patterns and absorb patterns of speech production and style of dress from the people around us. We do have a tendency to display elements of our history and identity, even if we have never thought about it before. 
This creates an issue, however, when the stereotype is not apt in some way. This might be because we diverge in some way from the categories that mark us, so the stereotype is inaccurate. Or this might be because the stereotype also encodes value judgments that are unwarranted, and which lead to problems with implicit bias. Some people do not need to think loads about how they present in order to come across to people in ways that are accurate and supportive of who they really are. Some people think very carefully about how they curate a set of signals that enable them to accurately let people know who they are or to conceal who they are from people outside their squad.

      The final section distinguishes people whose presentation of "who they are" comes naturally from people who deliberately curate signals to control how they present themselves. However, this implicit distinction falls apart: whether or not signals are deliberately curated, they (like all signals) can be misinterpreted, imitated, or decontextualized, just as Socrates argues about the written word in Plato's Phaedrus.

      Writing removes the ability of the person delivering a message to explain or correct it -- and the curation of your identity is no exception. Ultimately, someone else can wear the same brand of clothing, use the same accent, mimic the same markers, and the decision-making heuristics will have no way to distinguish between the two.

  13. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Todd Vaziri [@tvaziri]. Every non-hyperbolic tweet is from iPhone (his staff). Every hyperbolic tweet is from Android (from him). August 2016. URL: https://twitter.com/tvaziri/status/762005541388378112 (visited on 2023-11-24).

      The significance of Vaziri's observation lies in the fact that Trump's authenticity signal emerged by accident: his use of an Android versus an iPhone functioned as a "readability" tool for distinguishing the candidate from his staff. If the chapter frames authenticity in terms of performed behavior, this case is evidence that authenticity may also leak out unintentionally through technical traces rather than intentional presentation.

    1. The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona [f20]. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching [f21]. While modified behaviors to present a persona or code switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting. For example:
       - Speaking in a formal manner when giving a presentation or answering questions in a courtroom may be a way of authentically sharing your experiences and emotions, but tailored to the setting
       - Sharing those same experiences and emotions with a close friend may look very different, but still can be authentic
       Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below: Fig. 6.6 President Obama giving very different handshakes [f22] to a white man and a Black man (Kevin Durant [f23]). See also this Key & Peele comedy sketch on greeting differences [f24] with Jordan Peele [f25] playing Obama, and also Key & Peele’s Obama’s Anger Translator sketch [f26]. Read/watch more about code-switching here: How Code-Switching Explains The World [f27]; ‘Key & Peele’ Is Ending. Here Are A Few Of Its Code Switch-iest Moments [f28]. Still, modifications of behavior can also be inauthentic. In the YouTube Video Essay: YouTube: Manufacturing Authenticity (For Fun and Profit!) [f29] by Lindsay Ellis, Ellis explores nuances in authenticity as a YouTuber. She highlights the emotional labor [f30] of keeping emotional expressions consistent with their public persona, even when they are having different or conflicted feelings.
She also highlights how various “calls to action” (e.g., “subscribe to my channel”) may be necessary for business and can be (and appear) authentic or inauthentic.

      The chapter appears to use "authenticity" as a positive term for social roles without defining what differentiates an authentic role from one constructed as a performance. If a role is carried out consistently and deliberately, at what point does the performance become the person? The question is hinted at through Ellis, but it leaves room for philosophical inquiry into when a "genuine self" appears (Sartre's notion of bad faith, or perhaps the question of whether there is any true self at all).

  14. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Mark R. Cheathem. Conspiracy Theories Abounded in 19th-Century American Politics. URL: https://www.smithsonianmag.com/history/conspiracy-theories-abounded-19th-century-american-politics-180971940/ (visited on 2023-11-24).

      In today’s era of misinformation, I had never thought of conspiracy theories as an everyday occurrence, but according to Cheathem’s article, they were a common tool used by 19th-century American political parties to win voter support. Parties used conspiracy theories to gain popularity, but those theories also contributed to an erosion of faith in democracy. What really stands out to me is that all of this was accomplished with nothing more than newspapers and word of mouth (social networks). This suggests that the mechanism behind spreading conspiracies has little to do with the medium; the driver is political advantage. If anything, social media has only made it easier and less expensive.

    1. As we talked about previously in a section of Chapter 2 (What is Social Media?), pretty much anything can count as social media, and the things we will see in internet-based social media show up in many other places as well. The book Writing on the Wall: Social Media - The First 2,000 Years [e1] by Tom Standage outlines some of the history of social media before internet-based social media platforms such as in times before the printing press: Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts Books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread Later, sometime after the printing press, Standage highlights how there was an unusual period in American history that roughly took up the 1900s where, in America, news sources were centralized in certain newspapers and then the big 3 TV networks. In this period of time, these sources were roughly in agreement and broadcast news out to the country, making a more unified, consistent news environment (though, of course, we can point out how they were biased in ways like being almost exclusively white men).

      The way Standage reframes this point is significant. What many regard as "the ideal model for reliable news" (the centralized media landscape of the mid-twentieth century) was a one-time event, not the norm. For most of history, humans have communicated through decentralized and therefore often noisy channels (graffiti, hand-copied texts, and, more recently, blogs). This raises a difficult question: are people who lament that social media "destroyed" unified public discourse grieving for something that was never truly beneficial, or simply for the short period during which a few gatekeepers agreed on things?

  15. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Julia Evans. Examples of floating point problems. January 2023. URL: https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/ (visited on 2023-11-24).

      The author, Julia Evans, describes how numerical computations suffer from precision issues ("computers cannot calculate most decimal values exactly") and how these lead to "small errors with large consequences." These concerns also bear on our earlier discussion of utility calculus: how much confidence should we have in machines performing even simple arithmetic if they are subject to precision loss? If computers cannot be fully trusted with the basic mathematical operations that underpin data-driven ethics, then quantifying something as subjective as human well-being appears to be an even greater challenge than previously thought.
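      Evans's point is easy to reproduce. A minimal sketch in Python (the specific values are standard textbook illustrations of binary floating point, not taken from her article):

```python
# Most decimal fractions cannot be represented exactly in binary
# floating point, so even trivial sums carry rounding error.
print(0.1 + 0.2)         # prints 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # prints False

# Small errors accumulate: summing 0.1 ten thousand times does not
# give exactly 1000.0, only something very close to it.
total = sum(0.1 for _ in range(10_000))
print(total)

# For exact base-10 arithmetic, Python's decimal module avoids the issue
# when values are constructed from strings.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # prints 0.3
```

      The same behavior appears in essentially every language using IEEE 754 doubles; Python just makes it easy to see.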

    1. Think for a minute about consequentialism. On this view, we should do whatever results in the best outcomes for the most people. One of the classic forms of this approach is utilitarianism, which says we should do whatever maximizes ‘utility’ for most people. Confusingly, ‘utility’ in this case does not refer to usefulness, but to a sort of combo of happiness and wellbeing. When a utilitarian tries to decide how to act, they take stock of all the probable outcomes, and what sort of ‘utility’ or happiness will be brought about for all parties involved. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations).
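      The "utility calculus" the passage describes can be sketched mechanically, which also makes its limits visible. A toy sketch in Python: the actions, parties, probabilities, and utility scores below are all invented for illustration, and assigning such numbers is precisely the contested step.

```python
# Each action maps to a list of possible outcomes: a probability plus
# a (made-up) utility score for each affected party.
actions = {
    "post_warning_label": [
        (0.7, {"readers": +5, "poster": -1}),
        (0.3, {"readers": 0, "poster": -1}),
    ],
    "remove_post": [
        (0.9, {"readers": +3, "poster": -6}),
        (0.1, {"readers": -2, "poster": -6}),
    ],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utility over all parties and outcomes."""
    return sum(p * sum(utils.values()) for p, utils in outcomes)

for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))

# The utilitarian recommendation is whichever action maximizes the sum.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print("choose:", best)
```

      The arithmetic is trivial; everything contentious is hidden in the input numbers, which is exactly the point raised in the annotation below the passage.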

      The passage describes utility calculus as if it were a straightforward procedure. However, utility cannot be quantitatively measured: the textbook itself defines it only as a "sort of combo of happiness and wellbeing." Because there is no common unit for measuring utility, you cannot measure one person's grief and another person's satisfaction and sum them into a meaningful quantity. In a section on data-informed ethical decisions, the inability to measure the approach's basic quantity is a serious issue: it produces the appearance of rigorous morality rather than the substance of it.

  16. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Plato. Phaedrus: Translated by Benjamin Jowett. January 2013. Page Version ID: 1189255462.

      In the Phaedrus, Plato foresaw this issue: he warned that writing is dangerous because it leaves the written word permanently disconnected from its author. The donkey is an old-fashioned instance of that same separation, and bots are its modern extreme. Plato's objection was not to technology itself but to the loss of accountability in its use.

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      This was an unsettling example for me. Realizing that a message can travel around the globe without any connection to its creator, and that the carrier (whether a donkey or a bot) does not understand the message, leads me to believe the chances of holding someone accountable for a message they have sent out into the world are vanishingly small. I also started to think about how often I have probably interacted with bot-generated information online without realizing it. At what point should I feel deceived when interacting with a bot, even when the content itself may be true?

  17. Mar 2026
    1. Kumail Nanjiani was a star of the Silicon Valley [a6] TV Show, which was about the tech industry. He posted these reflections on ethics in tech on Twitter (@kumailn) on November 1, 2017: As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we’ll see tech that is scary. I don’t mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues. And we’ll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don’t even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast. That there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online. Only “Can we do this?” Never “should we do this? We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end. Kumail Nanjiani

      It is certainly true that technological advancement happens much faster than society and the law can adapt. Technologies such as deepfakes, bots, and recommendation algorithms can affect millions of individuals before regulatory bodies create guidelines to govern their use. This has led me to believe that companies should develop new technology with an eye toward its ethical implications for users, rather than relying on subsequent laws or regulations to govern how it is used.

    1. Consequentialism# Sources [b46] [b47] Actions are judged on the sum total of their consequences (utility calculus) The ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.” That is, What is moral is to do what makes the most people the most happy. Key figures: Jeremy Bentham [b48] 1700’s England John Stuart Mill [b49], 1800’s England

      Utilitarianism is a version of consequentialism focused on maximizing overall happiness or general well-being. It is commonly invoked around social media and technology because many companies defend their choices with claims like "my platform supports/benefits millions." The problem with this form of reasoning, however, is that it can justify harming a small group of people while benefiting a large one: for example, features designed to maximize user engagement can also increase harassment and misinformation.