36 Matching Annotations
  1. Mar 2024
    1. 21.3.3. As a Potential Tech Worker# As a potential worker in the tech industry, you might someday find yourself in a position where you have influence over how social media platforms are designed, programmed, or operated (e.g., you could be a programmer, designer, or content moderator). We hope that if you find yourself in one of these positions, you will consider the ethics of what you are doing and bring those concerns into how you design and implement automated systems for social media sites.
An Example of Action# As an example of what someone in this position might do, let’s consider this story from Steve Krenzel, who was a software engineer at Twitter from 2015 to 2017:
“With Twitter’s change in ownership last week, I’m probably in the clear to talk about the most unethical thing I was asked to build while working at Twitter. […] Twitter was on its death bed and was desperate for money. A large telco wanted to pay us to log signal strength data in N. America and send it to them. My plan was to aggregate signal strength by carrier / by location. I worked with Data Science to find a granularity – minimum area size and minimum distinct users per area – that would preserve anonymity even when combined with other sources of data (differential privacy). When we sent this data to the telco they said the data was useless. They switched their request and said they want to be able to tell how many of our users are entering their competitors’ stores. A bit sketchier, but maybe workable in a privacy respecting way? We ran an alternative by the telco. They didn’t like it and were frustrated. So was Sales. I was asked to go to the telco’s HQ and figure out exactly what they want. The subsequent request was absurd. I wound up meeting with a Director who came in huffing and puffing. The Director said ‘We should know when users leave their house, their commute to work, and everywhere they go throughout the day. Anything less is useless. We get a lot more than that from other tech companies.’ I responded with some variant of ‘No fucking way’. There was no universe where I was going to help sell granular identifiable user location data. This led to more internal meetings. Legal said the request was fine – none of it violated the user ToS [Terms of Service]. Normally they might find another engineer to do this work, but my whole team was aligned with the privacy concerns. Twitter had also just done layoffs (aside: time is a flat circle), so there were no spare engineers around. […] My last email written at Twitter was to Jack [Twitter CEO]. To his credit, he responded quickly with something to the effect of ‘Let me look into that and make sure there isn’t a misunderstanding. It doesn’t seem right. We wouldn’t want to do that.’ It was in his hands now. As far as I know, the project actually got canned. Jack genuinely didn’t like it. I don’t know if this mindset will hold true with the new owner of Twitter though. I would assume Elon will do far worse things with the data. And, for any employees still at Twitter, don’t underestimate the power of a pocket veto. Sometimes it doesn’t work out, or you have to escalate and risk it backfiring, but a good pocket veto is a tool to learn to wield well.” (Sources: Twitter Thread, Business Insider Article)
You aren’t likely to end up in a situation as dramatic as this.
If you find yourself making a stand for ethical tech work, it would probably look more like arguing about what restrictions to put on a name field (e.g., minimum length), prioritizing accessibility, or arguing that a small piece of data about users is not really needed and shouldn’t be tracked. But regardless, if you end up in a position to have influence in tech, we want you to be able to think through the ethical implications of what you are asked to do and how you choose to respond. You can also look at how you can organize with other workers, through things like the Alphabet Workers Union (Alphabet is the parent company of Google).

      I think this section serves as a powerful call to action for individuals entering the tech industry, emphasizing the importance of ethical consideration and personal agency in technological development. It illustrates, through a real-world example, how individual decisions can significantly impact privacy and ethics in technology, underscoring the potential for positive influence within the industry.

    1. In England in the early 1800s, Luddites were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also Sci-Fi author Ted Chiang on Luddites and AI.)
Fig. 21.1 The start of an xkcd comic compiling a hundred years of complaints about how technology has sped up the pace of life. (full transcript of comic available at explainxkcd)#
Inventors ignoring the ethical consequences of their creations is nothing new either, and gets critiqued regularly:
Fig. 21.2 A major theme of the movie Jurassic Park (1993) is scientists not thinking through the implications of their creations.#
Fig. 21.3 Tweet parodying how tech innovators often do blatantly unethical things#
Many people like to believe (or at least convince others) that they are doing something to make the world a better place, as in this parody clip from the Silicon Valley show (the one Kumail Nanjiani was on, though not in this clip). But even people who thought they were doing something good came to regret the consequences of their creations: Eli Whitney hoped his invention of the cotton gin would reduce slavery in the United States, but it only made slavery worse; Alfred Nobel invented dynamite (which could be used in construction or in war) and later decided to create the Nobel Prizes; Albert Einstein regretted his role in convincing the US government to develop nuclear weapons; and Aza Raskin regrets his invention of infinite scroll.

      I think this section greatly underscores the perennial nature of human concerns regarding technological advancements and their unforeseen consequences, echoing historical skepticism from ancient philosophies to modern critiques. It serves as a critical reminder that while innovation is celebrated, its ethical implications and societal impacts must be meticulously considered.

    1. 20.1.2. Decolonization / Postcolonialism / Liberation / Landback# Counter to colonialism, decolonization, postcolonialism, liberation, and landback are movements where the colonized/oppressed take back power from the colonialists/oppressors, and grapple with the consequences of having been colonized. This may be a response to colonization by:
- Government occupation (e.g., England ruling India/Pakistan, USA removing Indigenous Americans from their land, USA ruling Cuba and the Philippines). Key figures: Mahatma Gandhi (1800s-1900s, India); Toussaint Louverture (1700s-1800s, Haiti); Patrice Lumumba (1900s, Congo)
- An oppressed group in a country with restricted rights or ability to make their voices heard (e.g., women’s rights and civil rights movements in USA). Key figures: Susan B. Anthony (1800s-1900s, USA); Martin Luther King Jr. (1900s, USA); Nelson Mandela (1900s-2000s, South Africa)
- Cultural and economic dominance (e.g., the global power centers of Silicon Valley, Hollywood, Wall Street, etc.). Key figures: Gayatri Chakravorty Spivak (present, India, USA); Edward Said (1900s-2000s, Palestine, USA)

      I think this section provides a crucial understanding of how colonialism's historical patterns of dominance and subjugation continue to influence modern structures. Moreover, this section extends that analysis to the control and distribution of social media platforms, which helps me understand the concept more clearly.

    1. 19.1.3. Accountability in Capitalism and other systems# Let’s look at who the leaders of businesses (or services) are accountable to in capitalism and other systems.
Democratic Socialism (i.e., “Socialists”)# With socialism in a representative democracy (i.e., “democratic socialism”), the government leaders are chosen by the people through voting. And so, while the governmental leaders are in charge of what gets made, how much it costs, and who gets it, those leaders are accountable to the voters. So, in a democratic socialist government, theoretically, every voter has an equal say in business (or government service) decisions. Note that there are limitations to the government leaders being accountable to the people their decisions affect, such as government leaders ignoring voters’ wishes, or people who can’t vote (e.g., the young, non-citizens, oppressed minorities) and therefore don’t get a say.
Capitalism# In capitalism, business decisions are accountable to the people who own the business. In a publicly traded business, that is the shareholders. The more money someone has invested in a company, the more say they have. And generally in a capitalist system, the rich have the most say in what happens (both as business owners and customers), and the poor have very little say in what happens.
When shareholders buy stocks in a company, they are owed a percentage of the profits. Therefore it is the company leaders’ fiduciary duty to maximize the profits of the company (called the Friedman Doctrine). If the leader of the company (the CEO) intentionally makes a decision that they know will reduce the company’s profits, then they are cheating the shareholders out of money the shareholders could have had. CEOs mistakenly do things that lose money all the time, but doing so on purpose is a violation of fiduciary duty. There are many ways a CEO might intentionally lower profits unfairly, such as by having their company pay more than necessary when buying something from the CEO’s friend’s company. But even if a CEO decides to reduce profits for a good reason (e.g., it may be unethical to overwork the employees), they are still violating their fiduciary duty, and the board of directors might fire them or pressure them into prioritizing profits above all else.
For example, the actor Stellan Skarsgård complained that in the film industry, it didn’t matter if a company was making good movies at a decent profit. If there is an opportunity for even more profit by making worse movies, then that is what business leaders are obligated to do: “When raw market forces come in, [movie] studios start being run by companies that don’t care if they’re dealing in films or toothpaste so long as they get their 10% [return]. When AT&T took over Time Warner, it immediately told HBO to become lighter and more commercial. They were always making money. But not enough for an investor.” Stellan Skarsgård
Or as another example, if the richest man in the world offers to buy out a social media site for more than it’s worth, then it is the fiduciary duty of the leaders of the social media site to accept that offer. It doesn’t matter if it is clear that this rich man doesn’t know what he is doing and is likely to destroy the social media site, and potentially cause harm to society at large; the fiduciary duty of the company leaders is to get as much money as possible to their shareholders, and they can’t beat being overpaid by the richest man in the world. Rejecting that deal would be cheating the stockholders out of money. CEOs of social media companies, under pressure from the board of directors, might also make decisions that prioritize short-term profits for the shareholders over long-term benefits, leading to what author Cory Doctorow calls the “Enshittification” of platforms (see his article: The ‘Enshittification’ of TikTok: Or how, exactly, platforms die., also archived here).
Privately owned businesses or organizations are a little different in that the owner (or owners) have full say on what happens, and are free to make the business as unprofitable or profitable as they want. Though, if the private ownership of the business was purchased with loans, then they have some responsibilities to the lenders.
Other Accountability Models# Besides the privately owned and publicly traded businesses in capitalism, and government services in socialism, there are other accountability models as well. For example:
- In a publicly funded organization, non-profit organization, or crowd-funded project (e.g., Wikipedia, NPR, Kickstarter projects, Patreon creators, charities), the investors (or donors) are not investing in profits from the organization, but instead are investing in the product or work the organization does. Therefore the responsibility to investors is not to make profits but to do the work investors are paying for. In this model, the more money someone invests or donates, the more say they have over what the organization does (like capitalism and unlike democratic socialism). For example, when buying groceries, you might let the grocery store take an extra $5 from you to give to a charity that gives food to the needy. Then the grocery store corporation will give $5 to the charity, but the corporation also gets $5 more say in how the charity operates (and it can pressure the charity to not do anything that hurts the corporation’s profits, and thus look charitable without violating its fiduciary duty).
- In consumer co-operative businesses and organizations, the customers of the business have a say in how the business is run, and therefore the leaders are accountable to the customers. So if the customers want the business to do something that can only be done by treating the employees poorly, then the business leaders are obligated to follow the customers’ demands. If the company makes excess profits, that money is sent out to the customers. An example of a consumer co-operative is the outdoor recreation gear store REI.
- In worker co-operative businesses and organizations, the employees of the company are the people who have a say in how the business is run, and therefore the leaders are accountable to the employees (rather than vice-versa). Since the business leaders are controlled by the workers, this is a system where the workers control the means of production (e.g., they control the factories, offices, or other business resources). If the business makes excess profits, that money is sent out to the employees.

      I think this section effectively highlights the intricate balance between capitalism's drive for profit and the ethical considerations that can sometimes be sidelined. In my opinion, it emphasizes the importance of understanding these dynamics to navigate the complexities of social media decision-making.

  2. Feb 2024
    1. Truth and Reconciliation Commission# In South Africa, when the oppressive and violent racist apartheid system ended, Nelson Mandela and Desmond Tutu set up the Truth and Reconciliation Commission. The commission gathered testimony from both victims and perpetrators of the violence and oppression of apartheid. We could also consider this, in part, a large-scale public shaming of apartheid and those who hurt others through it. Unlike the Nuremberg Trials, the Truth and Reconciliation Commission gave a path for forgiveness and amnesty to the perpetrators of violence who provided their testimony.

      I think this section thoughtfully examines the complex and nuanced nature of repair and reconciliation. It prompts me to think about the delicate balance between acknowledging profound wrongs and fostering a pathway toward healing and forgiveness, even in the face of egregious acts.

    1. In the philosophy paper Enforcing Social Norms: The Morality of Public Shaming, Paul Billingham and Tom Parr discuss under what conditions public shaming would be morally permissible. They are concerned not with actions primarily intended to induce shame in the target, but rather actions that may cause a person shame but are motivated by “seeking to draw attention to a social norm violation, and to rally others to their cause.” In this situation, they outline the following constraints that must be considered when publicly shaming someone in this way:
- Proportionality: The negative consequences of shaming someone should not be worse than the positive consequences
- Necessity: There must not be another more effective method of achieving the goal
- Respect for Privacy: There must not be unnecessary violations of privacy
- Non-Abusiveness: The shaming must not use abusive tactics
- Reintegration: “Public shaming must aim at, and make possible, the reintegration of the norm violator back into the community, rather than permanently stigmatizing them.”

      I think this section insightfully navigates the ethical complexities of public shaming. It balances its potential as a tool for the powerless against the imperative for proportionality and respect for individual dignity, urging a thoughtful consideration of its impacts and limitations.

    1. Harassment in social media contexts can be difficult to define, especially when the harassment pattern is created by a collective of seemingly unconnected people. Maybe each individual action can be read as unpleasant but technically okay. But taken together, all the instances of the pattern add up to a level of harm that can do real damage to the victim. Because social media spaces are to some extent private spaces, the moderators of those spaces can ask someone to leave if they wish. A Facebook group may have a ‘policy’ listed in the group info, which spells out the conditions under which a person might be blocked from the group. As a Facebook user, I could decide that I don’t like the way someone is posting on my wall; I could block them, with or without warning, much as if I were asking a guest to leave my house. In the next section, we will look in more detail at when harassment tactics get used, how they get justified, and what all this means in the context of social media.

      In my opinion, this detailed exploration of the nuanced nature of violence and harassment underscores the complexity of defining and addressing these issues within the frameworks of law and social norms. It leads me to think about the fine line between permissible actions and those that cause harm, which broadens my view.

    1. When Amnesty International looked at online harassment, they found that: Women of colour, (black, Asian, Latinx and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets. Troll Patrol Findings

      I think this section powerfully highlights the disproportionate impact of online harassment on marginalized communities. It emphasizes the urgent need for inclusive and effective solutions to combat this issue, as those targeted can suffer sustained abuse with real consequences for their mental health.

    1. 16.1.1. Different Ways of Collaborating and Communicating# There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots. But there are ways that attempts to recreate in-person interactions inevitably fall short and don’t feel the same. Instead though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There (pdf here). Some of the different characteristics that means of communication can have include (but are not limited to):
- Location: Some forms of communication require you to be physically close, some allow you to be located anywhere with an internet signal.
- Time delay: Some forms of communication are almost instantaneous, some have small delays (you might see this on a video chat system), or have significant delays (like shipping a package).
- Synchronicity: Some forms of communication require both participants to communicate at the same time (e.g., video chat), while others allow the person to respond when convenient (like a mailed physical letter).
- Archiving: Some forms of communication automatically produce an archive of the communication (like a chat message history), while others do not (like an in-person conversation).
- Anonymity: Some forms of communication make anonymity nearly impossible (like an in-person conversation), while others make it easy to remain anonymous.
- Audience: Communication could be private or public, and it could be one-way (no ability to reply), or two+-way where others can respond.

      I think this section outlines in detail the foundational principles of crowdsourcing, highlighting its historical roots and the innovative ways technology enhances our ability to collaborate and communicate across various contexts. I learned a lot about how technology improves our lives.

    1. 16.2.1. Crowdsourcing Platforms# Some online platforms are specifically created for crowdsourcing. For example:
- Wikipedia: An online encyclopedia whose content is crowdsourced. Anyone can contribute: just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute.
- Quora: A crowdsourced question-and-answer site.
- Stack Overflow: A crowdsourced question-and-answer site specifically for programming questions.
- Amazon Mechanical Turk: A site where you can pay for crowdsourcing small tasks (e.g., pay a small amount for each task, and then let a crowd of people choose to do the tasks and get paid).
- Upwork: A site that lets people find and contract work with freelancers (generally larger and more specialized tasks than Amazon Mechanical Turk).
- Project Sidewalk: Crowdsourcing sidewalk information for mobility needs (e.g., wheelchair users).

      I think these diverse examples, ranging from academic research to practical tasks, vividly illustrate the versatile nature of crowdsourcing and showcase its ability to harness collective intelligence across various domains.

    1. 15.2.3. Facebook# While Facebook groups and individual pages can be moderated by users, for the platform as a whole, Facebook has paid moderation teams to make moderation decisions (whether on content flagged by bots, or content flagged by users). As Facebook has grown, it has sought users from all over the globe, but as of 2019:
- Facebook had menus and prompts in 111 different languages, which were deemed to be “officially supported”
- Facebook’s “Community Standards” rules were only translated into 41 of those languages
- Facebook’s content moderators know about 50 languages (though they say they hire professional translators when needed)
- Automated tools for identifying hate speech only work in about 30 languages

      I think these examples critically examine the reliance on volunteer and paid moderation within major online platforms, highlighting the ethical and operational dilemmas faced by Reddit, Wikipedia, and Facebook. They bring to the forefront the challenges of compensating moderators, ensuring diversity among contributors, and addressing language barriers in content moderation, thereby questioning the sustainability and fairness of current practices.

    1. Some philosophers, like Charles W. Mills, have pointed out that social contracts tend to be shaped by those in power, and agreed to by those in power, but they only work when a less powerful group is taken advantage of to support the power base of the contract deciders. This is a rough way of describing the idea behind Mills’s famous book, The Racial Contract. Mills said that the “we” of American society was actually a subgroup, a “we” within the broader community, and that the “we” of American society which agrees to the implicit social contract is a racialized “we”. That is, the contract is devised by and for, and agreed to by, white people, and it is rational–that is, it makes sense and it works–only because it assumes the subjugation and the exploitation of people of color. Mills argued that a truly just society would need to include ALL subgroups in devising and agreeing to the imagined social contract, instead of some subgroups using their rights and freedoms as a way to impose extra moderation on the rights and freedoms of other groups.

      I think this part effectively bridges historical ethical concepts of moderation with contemporary challenges in digital content moderation, emphasizing the nuanced balance between individual freedoms and community well-being. The exploration of moderation from virtue ethics to social contract theories underscores its pivotal role in fostering trust and cooperation within societies, both offline and online.

    1. She then describes this as a form of digital self-harm, calling it “masochistic epistemology: whatever hurts is true” (note: “masochistic” means seeking pain, and “epistemology” means how you determine what is true). ContraPoints then gives her advice to these incels who have turned inward with self-hatred and digital self-harm: So, incels. I’m not going to respond to your worldview like it’s an intellectual position worthy of rational debate. Because these ideas and arguments, you’re not using them the way rational people use arguments. You’re using them as razor blades to abuse yourselves. And I know because I’ve done the exact same thing. The incel worldview is catastrophizing. It’s an anxious death spiral. And the solution to that has to be therapeutic, not logical.

      I think this section's exploration of digital self-harm and negative online communities shows us the darker aspects of social media, where the quest for connection and validation can paradoxically lead to self-destructive behavior. It demonstrates the critical need for mental health awareness and intervention in the digital age, as the boundaries between online harm and real-world psychological distress are increasingly blurred. Recognizing these patterns as manifestations of deeper psychological issues rather than mere online activity can pave the way for more effective support and healing strategies.

    1. “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.”

      I think Bo Burnham's reflections highlight the paradox of social media: a tool that simultaneously connects and isolates, empowers and objectifies. This shows the need for a nuanced understanding of its impact, since the line between beneficial and harmful effects is often blurred. It becomes crucial to examine not just the immediate effects on individual mental health, but also the broader societal implications of our online interactions.

    1. When someone creates content that goes viral, they didn’t necessarily intend it to go viral, or to go viral in the way that it does. If a user posts a joke, and people share it because they think it is funny, then their intention and the way the content goes viral are at least somewhat aligned. If a user tries to say something serious, but it goes viral for being funny, then their intention and the virality are not aligned. Let’s look at some examples of the relationship between virality and intent. 12.4.1. Building on the original intention# Content is sometimes shared without modification, fitting the original intention, but let’s look at cases where there is some sort of modification that aligns with the original intention. We’ll include several examples on this page from the TikTok Duet feature, which allows people to build off the original video by recording a video of themselves to play at the same time next to the original. So for example, this tweet thread of TikTok videos (cross-posted to Twitter) starts with one TikTok user singing a short parody musical of an argument in a grocery store. The subsequent tweets in the thread build on the prior versions: first someone adds themselves singing the other half of the argument, then someone adds themselves singing the part of their child, then someone adds themselves singing the part of an employee working at the store. As another example, this tweet gives instructions for how to interact with it (add a picture), and people keep copying the instructions with their replies.

      I think the discussion on virality and intention highlights the unpredictable nature of digital content dissemination, where the original purpose of content can diverge significantly from its eventual reception and impact. As a result, this unpredictability underscores the complex dynamics between creators' intentions and audience interpretations, illustrating how the digital age amplifies the potential for misalignment between the two.

    1. When physical mail was dominant in the 1900s, one type of mail that spread around the US was a chain letter. Chain letters were letters that instructed the recipient to make their own copies of the letter and send them to people they knew. Some letters gave as the reason for making copies a pyramid scheme: you were supposed to send money to the people you got the letter from, and then the people you sent the letter to would send money to you. Other letters claimed that if the recipient made copies, good things would happen to them, and if not, bad things would, like this:

      I think the historical spread of chain letters serves as a compelling example of how human behavior and belief systems can drive the virality of content, even in the absence of digital platforms. These letters leverage the promise of good fortune or the fear of misfortune and illustrate the powerful role of psychological motivators in the dissemination of ideas and practices across communities.

    1. Content (posts, photos, articles, etc.)# Content recommendations can go well when users find content they are interested in. Sometimes algorithms do a good job of it and users are appreciative. TikTok has been mentioned in particular as providing surprisingly accurate recommendations, though Professor Arvind Narayanan argues that TikTok’s success with its recommendations relies less on advanced recommendation algorithms, and more on the design of the site making it very easy to skip the bad recommendations and get to the good ones. Content recommendations can go poorly when they send people down problematic chains of content, like grouping videos of children in a way convenient for pedophiles, or Amazon recommending groups of materials for suicide.

      I think we need to understand the nuances of recommendation algorithms, which is critical to addressing their influence on individual experiences. Although these systems are designed to enhance user interaction, they can inadvertently perpetuate biases and present content that may not align with the best interests or intentions of the users.

    1. Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.
Fig. 11.1 A tweet highlighting the difference between structural problems (systemic analysis) and personal choices (individual analysis).#
Sometimes though, individuals are still blamed for systemic problems. For example, Elon Musk, who has the power to change Twitter’s recommendation algorithm, blames the users for the results:
Fig. 11.2 A tweet from current Twitter owner Elon Musk blaming users for how the recommendation algorithm interprets their behavior.#
Elon Musk’s view expressed in that tweet is different from some of the ideas of the previous owners, who at least tried to figure out how to make Twitter’s algorithm support healthier conversation. Though even modifying a recommendation algorithm has limits in what it can do, as social groups and human behavior may be able to overcome the recommendation algorithm’s influence.

      I think it is critical to acknowledge that while individual actions within a system are influential, they often cannot rectify inherent biases built into recommendation algorithms. Such a distinction is particularly important in understanding that systemic issues require systemic solutions, which can help us build more ethical algorithms.
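
To make the systemic point concrete, here is a minimal toy sketch in Python (all post titles and engagement numbers are invented assumptions, not data from the book): a neutral-seeming rule like “rank by predicted engagement” can structurally favor certain content no matter how any individual user behaves.

```python
# A toy model (all titles and numbers invented): the ranking rule never
# mentions content or values, yet it produces a systematic outcome that
# no individual user's choices can undo.
posts = [
    {"title": "local news update", "predicted_engagement": 0.02},
    {"title": "nuanced explainer", "predicted_engagement": 0.03},
    {"title": "outrage bait",      "predicted_engagement": 0.09},
]

def rank(posts):
    # The "neutral" rule: show whatever people are most likely to engage with.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for p in rank(posts):
    print(p["title"])
# "outrage bait" lands on top every time the rule runs: a structural result
# of the system's design, not of any one user's behavior.
```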

    1. We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.

      I think this part thoughtfully presents the diverse strategies for managing disabilities. It highlights the shift from individual coping strategies to broader, more inclusive approaches like Universal Design and Ability-based Design. Then it draws attention to the importance of designing environments and tools that adapt to the varied needs of individuals, rather than expecting people to conform to a 'normal' standard.

    1. Some people (like many with chronic pain) would welcome a cure that got rid of their disability. Others (like many autistic people) are insulted by the suggestion that there is something wrong with them that needs to be “cured,” and think the only reason autism is considered a “disability” at all is because society doesn’t make reasonable accommodations for them the way it does for neurotypical people.

      I think this section insightfully highlights the fluidity and social construction of disability, emphasizing that what is considered a disability can vary greatly depending on societal norms and expectations. I also learned about the importance of inclusive design in public spaces and products, considering the diverse range of human abilities and experiences. Such design not only accommodates those currently categorized as having disabilities but also acknowledges the spectrum of human functionality, leading to a more inclusive and empathetic society.

    1. Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation.

      I think the complexities of data privacy extend beyond what is explicitly shared. The ability to infer sensitive information from seemingly harmless data points illustrates the pervasive nature of privacy risks in the digital era. It is a reminder that privacy protection is not just about guarding obvious personal details but also about understanding the potential implications of all the data we interact with.
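
As a concrete illustration of the excerpt’s point, here is a small hypothetical Python sketch (the users, pages, and “signal” mapping are all invented for illustration): information a user never shared can be generated from data they did share.

```python
# A toy illustration with invented users, pages, and signals: data mining
# can create new information (here, an age-group guess) that the users
# never chose to share. Real systems use statistical models, but the
# privacy concern is the same.
likes = {
    "user_a": {"knitting_weekly", "retirement_planning", "gardening_tips"},
    "user_b": {"campus_events", "student_discounts", "exam_prep"},
}

# Hypothetical mapping of pages to the groups their audiences skew toward.
audience_signals = {
    "retirement_planning": "older adult",
    "student_discounts": "student",
    "exam_prep": "student",
}

for user, pages in likes.items():
    inferred = {audience_signals[p] for p in pages if p in audience_signals}
    print(user, "->", inferred or "no inference")
# Neither user ever stated an age, but the system has now generated
# (possibly wrong, possibly sensitive) new data about both of them.
```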

    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But companies don’t always follow that proper security practice. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users.

      I think these incidents highlight a fundamental challenge in digital security: even well-established companies can make critical errors in protecting user data. This shows the importance of ongoing vigilance and improvement in security practices, as the consequences of such breaches extend far beyond the immediate exposure of passwords or personal information.
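
A minimal sketch of the per-password protection the excerpt describes, using only Python’s standard library (PBKDF2 with a random salt; the iteration count is an illustrative assumption, not a figure from the book). Because each password gets its own random salt, identical passwords produce different stored hashes, and an attacker with the database can only guess and check slowly.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; real systems tune this to their hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random per-password salt; store only (salt, hash)."""
    salt = os.urandom(16)  # unique salt: the same password yields different hashes
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, pw_hash

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    """Re-hash the guess with the stored salt and compare; the database never
    stores, and cannot recover, the original password."""
    guess = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(guess, stored_hash)

salt, stored = hash_password("hunter2")
print(verify_password("hunter2", salt, stored))   # True
print(verify_password("password", salt, stored))  # False
```

Storing passwords in plain text, as in the Instagram example, skips all of this: anyone reading the database sees the passwords directly.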

  3. Jan 2024
    1. 8.7.2. Intentional Data Poisoning# Data can be poisoned intentionally as well. For example, in 2021, workers at Kellogg’s were upset at their working conditions, so they agreed to go on strike, and not work until Kellogg’s agreed to improve their work conditions. Kellogg’s announced that they would hire new workers to replace the striking workers: Kellogg’s proposed pay and benefits cuts while forcing workers to work severe overtime as long as 16-hour-days for seven days a week. Some workers stayed on the job for months without a single day off. The company refuses to meet the union’s proposals for better pay, hours, and benefits, so they went on strike. Earlier this week, the company announced it would permanently replace 1,400 striking workers. People Are Spamming Kellogg’s Job Applications in Solidarity with Striking Workers – Vice MotherBoard People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions.

      I think this section effectively illustrates the concept of data poisoning, both unintentional and intentional, and its real-world impacts. The examples, like the TikToker influencing survey demographics and the Kellogg's strike, demonstrate the diverse ways data can be compromised. This section also highlights a critical aspect of data ethics and integrity in the digital age, emphasizing the need for robust data validation methods and awareness of potential manipulation risks.

    1. 8.3.1. Spurious Correlations# One thing to note in the above case of candle reviews and COVID is that just because two things appear to be correlated doesn’t mean they are connected in the way it looks like they are. In the above, the correlation might be due mostly to people buying and reviewing candles in the fall, and diseases, like COVID, spreading most during the fall. It turns out that if you look at a lot of data, it is easy to discover spurious correlations, where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:
Fig. 8.3 An example spurious correlation from Tyler Vigen’s collection of Spurious Correlations#
By looking at enough data in enough different ways, you can find evidence for pretty much any conclusion you want. This is because sometimes different pieces of data line up coincidentally (coincidences happen), and if you try enough combinations, you can find the coincidence that lines up with your conclusion. If you want to explore the difficulty of inferring trends from data, the website fivethirtyeight.com has an interactive feature called “Hack Your Way To Scientific Glory” where, by changing how you measure the US economy and how you measure which political party is in power in the US, you can “prove” that either Democrats or Republicans are better for the economy. Fivethirtyeight has a longer article on this called “Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for.”

      The observation about spurious correlations is crucial in the context of data mining, especially with social media data. It highlights the importance of critical analysis and the need to distinguish between correlation and causation. This serves as a reminder that not all patterns or trends derived from data mining are necessarily meaningful or indicative of underlying causal relationships.
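
The “look at enough data and you’ll find coincidences” point can be demonstrated in a few lines of Python. This is a toy simulation using only the standard library (statistics.correlation requires Python 3.10+): generate many unrelated random series and search every pair for the strongest correlation.

```python
import random
import statistics

random.seed(0)

def random_walk(n=30):
    """An aimless 'trend line': the cumulative sum of random steps."""
    total, walk = 0.0, []
    for _ in range(n):
        total += random.gauss(0, 1)
        walk.append(total)
    return walk

# 100 unrelated series give 4,950 pairs to compare.
series = [random_walk() for _ in range(100)]

best = max(
    ((i, j, statistics.correlation(series[i], series[j]))
     for i in range(len(series)) for j in range(i + 1, len(series))),
    key=lambda t: abs(t[2]),
)
print(f"series {best[0]} and {best[1]} correlate at r = {best[2]:+.2f}")
# None of these series have anything to do with each other, yet searching
# enough pairs reliably turns up a "convincing" correlation by chance.
```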

    1. More on Ethics# There are many more ethics frameworks that we haven’t mentioned here. You can look up some more here. Also, many of these ethics frameworks overlap, and different ones can be considered versions of another. So the Confucianist definition of an exemplary person could be considered as virtues in virtue ethics. Existentialism can be considered a form of Nihilism. Moral Relativism (saying that what is good or bad is totally subjective and depends on who you ask) can also be considered a form of Nihilism, etc. You can also follow any of the other links in this page, read books like this, or watch the TV show The Good Place (currently streaming on nbc.com and Netflix).

      Given the comprehensiveness and diversity of the ethical frameworks outlined in the text, it is clear that each framework provides a unique lens through which to analyze and understand the ethical complexities of social media interactions and programming decisions. From the virtue-centered perspective of Confucianism to the results-oriented approach of consequentialism, and from the relationship focus of Ubuntu to the egoistic individualism perspective, these frameworks offer a wealth of ethical considerations. The importance lies in their ability to provide different perspectives and insights into the ethical dilemmas presented by the digital age. Whether addressing issues of privacy, authenticity, or the broader impacts of artificial intelligence and automation, these ethical frameworks give us the tools to critically assess and navigate the ethical landscape of technology and its intersection with human values.

    1. 1.1. The case of Justine Sacco’s racist joke tweet# In 2013, Justine Sacco, a PR director at IAC, was boarding a flight to South Africa, and posted the following racist and insensitive joke tweet, which went viral while she was in-flight and unable to check Twitter:
1.1.1. Timeline of events:#
- Justine Sacco, a PR director at IAC with only 170 followers, posted a racist joke tweet right before getting on an 11-hour flight to South Africa.
- Someone emailed the tweet to valleywag.gawker.com. Valleywag wrote a post on it and tweeted the post.
- Word spread, and Justine’s tweet went viral. Twitter users found other recent offensive tweets by Justine about countries she was traveling in.
- IAC (Justine’s employer) called the tweet “outrageous, offensive” but said “Unfortunately, the employee in question is unreachable on an international flight.”
- Twitter users, now knowing that Justine was on a flight, started the hashtag #hasjustinelandedyet, which started trending on Twitter (including some celebrities tweeting about it).
- Twitter users were able to deduce which flight Justine was on. One Twitter user got a photo of Justine turning on her phone after getting off the plane. That user also talked to her father at the airport and tweeted about the photo and their responses.
- Justine lost her job at IAC, apologized, and was later rehired by IAC.
Sources: Buzzfeed, IBTimes, later Vox, later New York Times profile
1.1.2. What our focus will be# Rather than talk about whether any or all of the responses to Justine’s racist joke tweet were deserved, let’s instead talk about why it played out as it did: Why did so many people see it? How did it spread? What enabled someone to be able to get a photo of her checking her phone at the airport?
1.1.3. Reflection questions#
- What motivated Twitter users to put time and energy into this?
- What things about the design of Twitter enabled these events to happen? For example, you might notice that the interface shows where Sacco was located when tweeting, Hillingdon, London, which is where Heathrow Airport is located, helping people deduce which flight she was on.
- What financial motivations does Twitter have? How does that influence Twitter’s design?
- What changes to Twitter could have changed how this story went?

      The case of Justine Sacco’s tweet highlights the far-reaching impact of social media’s immediacy and reach. The incident underscored not only individual responsibility in digital communications, but also the collective power dynamics of social media platforms. This case is a stark reminder of the complex interplay between individual behavior, technological structures, and collective social responses in the digital age.

    1. 7.6. Ethics and Trolling# 7.6.1. Background: Forming Groups# Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad?
One answer to this challenge is that we use various heuristics (that is, shortcuts for thinking) like stereotypes and signaling to quickly guess where a person stands in relation to us. Sometimes wearing items of a certain brand signals to people with similar commitments that you might be on the same page. Sometimes features that are strongly associated with certain social groups—stereotypes—are assumed to tell us whether or not we can trust someone. Have you ever tried to change or mask your accent, to avoid being marked as from a certain region? Have you ever felt the need to conceal something about yourself that is often stereotyped, or to use an ingroup signal to deflect people’s attention from a stereotyped feature?
There is a reason why stereotypes are so tenacious: they work… sort of. Humans are brilliant at finding patterns, and we use pattern recognition to increase the efficiency of our cognitive processing. We also respond to patterns and absorb patterns of speech production and style of dress from the people around us. We do have a tendency to display elements of our history and identity, even if we have never thought about it before. This creates an issue, however, when the stereotype is not apt in some way. This might be because we diverge in some way from the categories that mark us, so the stereotype is inaccurate. Or this might be because the stereotype also encodes value judgments that are unwarranted, and which lead to problems with implicit bias. Some people do not need to think loads about how they present in order to come across to people in ways that are accurate and supportive of who they really are. Some people think very carefully about how they curate a set of signals that enable them to accurately let people know who they are, or to conceal who they are from people outside their squad.
Because patterns are so central to how our brains process information, patterns become extremely important to how societies change or stay the same. (TV Tropes is a website that tracks patterns in media, such as the jump scare or The Seven Basic Plots.) Patterns build habits. Habits build norms. Norms build our reality. To create a social group and have it be sustainable, we depend on stable patterns, habits, and norms to create the reality of the grouping. In a diverse community, there are many subsets of patterns, habits, and norms which go into creating the overall social reality. Part of how people manage their social reality is by enforcing the patterns, habits, and norms which identify us; another way we do this is by enforcing, or policing, which subsets of patterns, habits, and norms get to be recognized as valid parts of the broader social reality. Both of these tactics can be done in appropriate, just, and responsible ways, or in highly unjust ways.
7.6.2. Ethics of Disruption (Trolling)# Trolling is a method of disrupting the way things are, including group structure and practices. Like these group-forming practices, disruptive trolling can be deployed in just or unjust ways. (We will come back to that.) These disruptive tactics can also be engaged with different moods, ranging from playful (like some flashmobs), to demonstrative (like activism and protests), to hostile, to warring, to genocidal. You may have heard people say that the difference between a coup and a revolution is whether it succeeds and gets to later tell the story, or gets quashed. You may have also heard that the difference between a traitor and a hero depends on who is telling the story. As this class discusses trolling, as well as many of the other topics of social media behavior coming up in the weeks ahead, you are encouraged to bear this duality of value in mind. Trolling is a term given to describe behavior that aims to disrupt (among other things). To make value judgments or ethical judgments about instances of disruptive behavior, we will need to be thoughtful and nuanced about how we decide to pass judgments. One way to begin examining any instance of disruptive behavior is to ask what is being disrupted: a pattern, a habit, a norm, a whole community? And how do we judge the value of the thing being disrupted? Returning to the difference between a coup and a revolution, we might say that a national-level disruption is a coup if it fails, and a revolution if it succeeds. Or we might say that such a disruption is a coup if it intends to disrupt a legitimate instance of political domination/statehood, but a revolution if the instance of political domination is illegitimate. If you take a close look at English-language headlines in the news about uprisings occurring near to or far from here, it should become quickly apparent that both of these reasons can drive an author’s choice to style an event as a coup. To understand what the author is trying to say, we need to look inside the situation and see what assumptions are driving their choice to characterize the disruption in the way that they do. Trolling is disruptive behavior, and whether we class it as problematic or okay depends in part on how we judge the legitimacy of the social reality which is being disrupted. Trolling can be used, in principle, for good or bad ends.
7.6.3. Trolling and Nihilism# While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction. We can see this nihilism show up in one of the versions of the self-contradictory “Rules of the Internet”:
8. There are no real rules about posting
…
20. Nothing is to be taken seriously
…
42. Nothing is Sacred
Youtuber Innuendo Studios talks about the way arguments are made in a community like 4chan: “You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up.” (The Alt-Right Playbook: The Card Says Moops by Innuendo Studios)
While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character). Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain why it is wrong depends on the specific framework).
7.6.4. Reflection Exercise# Revisit the K-Pop protest trolling example in section 7.3. Take your list of ethical frameworks from Chapter 2 and work through them one by one, applying each tool to the K-Pop trolling. For each theory, think of how many different ways the theory could hook up with the example. For example, when using a virtue ethics type of tool, consider how many different people’s character and flourishing could be developed through this. When using a tool based on outcomes, like consequentialism, how many different elements of the outcome can you think of? The goal here is to come up with as many variations as you can, to see how the tools of ethical analysis can help us see into different aspects of the situation. Once you have made your big list of considerations, choose 2-3 items that, in your view, feel most important. Based on those 2-3 items, do you evaluate this trolling event as having been morally good? Why? What changes to this example would change your overall decision on whether the action is ethical?

      The section provides a profound exploration of the complexities involved in understanding and evaluating disruptive behaviors in social media contexts. It compellingly illustrates how the formation of groups, the use of stereotypes, and the enforcement of norms are all deeply intertwined with our cognitive processes and societal structures. The examination of trolling as a form of disruption that can be deployed for both just and unjust ends invites readers to reflect on the multifaceted nature of these actions and their ethical implications.

    1. 7.2.2. Origins of Internet Trolling# We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from. One set of the early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West), and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively Multiplayer Online Role-Playing Games (MMORPGs). In these MUDs, players developed activities that we now consider trolling, such as “griefing,” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “flaming,” where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities centered around trolling, such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005). These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined both their trolling philosophy:
Rule 43. The more beautiful and pure a thing is - the more satisfying it is to corrupt it
and their extreme misogyny:
Rule 30. There are no girls on the internet
Rule 31. TITS or GTFO - the choice is yours [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out]
You can read more at: knowyourmeme and wikipedia.

      The section on the origins of trolling provides a comprehensive historical context for the evolution of this behavior, tracing it from pre-internet activities to its current manifestations in online communities. It's intriguing to see how actions we now recognize as trolling have deep roots in human behavior, from the mischief of practical jokes to the more malicious pleasure some find in the distress of others. The transition from traditional forms of these behaviors to their modern counterparts in digital spaces illustrates how technology amplifies and transforms human interaction. Particularly interesting is the examination of the cultural and ethical implications of trolling, as reflected in the early online communities and the troubling philosophies and rules that emerged in trolling-centric forums.

    1. 5.6.2. User Interfaces# The user interface of a computer system (like a social media site) is the part that you view and interact with. It’s what you see on your screen and what you press or type or scroll over. Designers of social media sites have to decide how to lay out information for users to navigate, and decide how the user performs various actions (like, retweet, post, look up user, etc.). Some information and actions will be made larger and easier to access, while others will be smaller or hidden in menus or settings. As we look at these interfaces, there are two key terms we want you to know: Affordances are what a user interface lets you do. In particular, it’s what a user interface makes feel natural to do. So for example, an interface might have something that looks like it should be pressed, or an interface might open by scrolling a little so it is clear that if you touch it you can make it scroll more (see a more nuanced explanation here). Friction is anything that gets in the way of a user performing an action. For example, if you have to open and navigate through several menus to find the privacy settings, that is significant friction. Or if one of the buttons has a bug and doesn’t work when you press it, forcing you to find another way of performing that action, that is also significant friction. Designers sometimes talk about trying to make their user interfaces frictionless, meaning the user can use the site without feeling anything slowing them down. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press to close the ad incredibly small and hard to hit, making it harder to leave the ad: Fig. 5.6 An ad on a mobile device, which has an incredibly small, hard-to-press “x” button. You need to press that button to close the ad. If you miss the “x”, it takes you to more advertising.# Another example of intentionally adding friction was a design change Twitter made in an attempt to reduce misinformation: when you try to retweet an article, if you haven’t clicked on the link to read the article, it stops you to ask if you want to read it first before retweeting. Fig. 5.7 When Kyle attempted to retweet this article, Twitter stopped him to ask if he wanted to read the article first.# One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time of writing. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results and new results will get automatically filled in below. Most social media sites now use this, so you can scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites. With that in mind, you can look at a social media site and think about what pieces of information could be available and what actions could be possible. Then for these you can consider whether they are: low friction (easy), high friction (possible, but not easy), or disallowed (not possible in any way).

      This insightful section on social media design delves into the complexity and importance of how social media platforms shape user interactions through design choices. Discussions of connection types, user interfaces, affordances, and friction provide a nuanced understanding of user experience, highlighting how subtle design elements can significantly impact online behavior and interaction patterns. Ethical reflections on connection formation invite readers to consider the broader implications of these design choices, emphasizing the responsibilities of designers and users in the ethical landscape of digital social spaces.
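To make the friction idea concrete, here is a minimal Python sketch (not from the book; `fetch_page` and every other name is an invented stand-in for a real platform API). It contrasts a paged feed, where the user must explicitly ask for the next page, with an infinite-scroll feed that refills itself with no user action at all:

```python
def fetch_page(page_number, page_size=3):
    """Pretend server call: returns one page of post IDs."""
    start = page_number * page_size
    return [f"post_{i}" for i in range(start, start + page_size)]

def paged_feed():
    """Higher-friction version: stops after every page."""
    page = 0
    while True:
        for post in fetch_page(page):
            print("viewing", post)
        answer = input("Load next page? (y/n) ")  # the friction point
        if answer.strip().lower() != "y":
            break
        page += 1

def infinite_feed():
    """Low-friction version: yields posts forever, with no stopping point."""
    page = 0
    while True:
        yield from fetch_page(page)
        page += 1  # the next page loads with no user action at all

# With infinite_feed(), nothing ever prompts the user to pause:
# for post in infinite_feed():
#     print("viewing", post)
```

The only difference that matters here is the `input()` call: removing that single stopping point is exactly the kind of friction reduction that, as the section notes, makes it harder for users to break away.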

    1. 5.3.2. Social Networking Services# 2003 saw the launch of several popular social networking services: Friendster, Myspace, and LinkedIn. These were websites where the primary purpose was to build personal profiles, create a network of connections with other people, and communicate with them. Facebook was launched in 2004 and soon put most of its competitors out of business, while YouTube, launched in 2005, became a different sort of social networking site, built around video.

      The section effectively captures the changes in online interaction and content creation that occurred in the early 2000s. The evolution from static web pages to dynamic interactive platforms marks a significant leap in the way people connect, share and interact with content on the internet. The emergence of blogging and social networking services not only democratized content creation, allowing individuals to express themselves and build communities around shared interests, but also set the stage for the profound social, cultural, and economic impact of these platforms in the coming years. The transition to Web 2.0 marked the beginning of a new era of digital communication, laying the foundation for today's ever-evolving and complex online social interactions.

    1. 4.4. How Data Informs Ethics# Think for a minute about consequentialism. On this view, we should do whatever results in the best outcomes for the most people. One of the classic forms of this approach is utilitarianism, which says we should do whatever maximizes ‘utility’ for most people. Confusingly, ‘utility’ in this case does not refer to usefulness, but to a sort of combo of happiness and wellbeing. When a utilitarian tries to decide how to act, they take stock of all the probable outcomes, and what sort of ‘utility’ or happiness will be brought about for all parties involved. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations). Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but it will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness. When we think about how data is used online, the idea of a utility calculus can help remind us to check whether we’ve really got enough data about how all parties might be impacted by some actions. Even if you are not a utilitarian, it is good to remind ourselves to check that we’ve got all the data before doing our calculus. This can be especially important when there is a strong social trend to overlook certain data. Such trends, which philosophers call ‘pernicious ignorance’, enable us to overlook inconvenient bits of data to make our utility calculus easier or more likely to turn out in favor of a preferred course of action. Can you think of an example of pernicious ignorance in social media interaction? What’s something that we might often prefer to overlook when deciding what is important? One classic example is the tendency to overlook the interests of children and/or people abroad when we post about travels, especially when fundraising for ‘charity tourism’. One could go abroad and take a picture of a cute kid running through a field, or a selfie with kids one had traveled to help out. It was easy, in such situations, to judge the likely utility of posting the photo on social media by the interest it would generate for us, without thinking about the ethics of using photos of minors without their consent. This was called out by The Onion in a parody article, titled “6-Day Visit To Rural African Village Completely Changes Woman’s Facebook Profile Picture”. The reckoning over how pernicious ignorance had allowed many people to feel comfortable leaving others’ interests out of the utility calculus for images used online has continued. You can read an article about it here, or see a similar reckoning discussed by National Geographic: “For Decades, Our Coverage Was Racist. To Rise Above Our Past, We Must Acknowledge It”.

      This section, particularly its exploration of utilitarianism in the context of social media, provides a thought-provoking perspective on ethical decision-making. The concept of the utility calculus as a method of predicting the outcomes and moral implications of our actions highlights the importance of comprehensive data collection and the potential pitfalls of biased or incomplete data. The discussion cleverly highlights the challenges of navigating social media ethically, where the interests of all affected parties, including those we find it convenient to overlook, must be considered.
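The skew that partial data introduces is easy to show with a toy calculation. The Python sketch below is purely illustrative (the parties and utility numbers are invented, not from the book): summing the same ‘utility calculus’ with and without the parties that pernicious ignorance tends to omit flips the result.

```python
def net_utility(impacts):
    """Sum the (invented) utility change for every affected party."""
    return sum(impacts.values())

# Partial data: only the parties the poster happens to think about.
partial = {
    "poster (likes, attention)": +8,
    "poster's friends (enjoy the photo)": +3,
}

# Fuller data: adding the parties that pernicious ignorance omits.
fuller = dict(partial)
fuller.update({
    "photographed children (no consent given)": -6,
    "local community (stereotyped portrayal)": -7,
})

print(net_utility(partial))  # +11: posting looks clearly good
print(net_utility(fuller))   # -2: the same calculus now comes out negative
```

The arithmetic is trivial by design; the ethical work is entirely in deciding whose impacts get a row in the dictionary.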

    1. What changes with these measures?# While we don’t have direct access to all the data ourselves, we can imagine that different definitions would lead to different results. And there isn’t a “best” or “unbiased” definition we should be using, since all definitions are simplifications that will help with some tasks and hurt with others. We have to be aware that we are always making these simplifications, try to be clear about what simplifications we are making, and think through the ethical implications of the simplifications we are making. There is one exception where you can have data that isn’t a simplification, and that is when the data source is symbolic (e.g., numbers) and you are applying unambiguous rules (e.g., math). Since it starts out as a symbol, it doesn’t need to be simplified to be represented with symbols. For example, data that can be made without simplification includes: a list of the first 10 prime numbers, or the number of times the letter ‘a’ (capital or lowercase) appears in this sentence.

      This section treats data as a simplification of reality, providing important insights into the interpretation and utilization of data, particularly in the context of social media and its complexities. Twitter users’ arguments with bots illustrate the practical challenges and ethical considerations that arise when using data in real-world scenarios, highlighting the need for critical engagement and nuanced understanding when interpreting data.
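Both of the quoted examples can be computed directly with unambiguous rules, which is exactly why no simplification is involved. A quick Python sketch (mine, not the book’s):

```python
def first_n_primes(n):
    """Return the first n prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

sentence = ("The number of times the letter 'a' (capital or lowercase) "
            "appears in this sentence.")

print(first_n_primes(10))           # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(sentence.lower().count("a"))  # no judgment calls, just counting
```

Contrast this with counting, say, “negative tweets about a movie,” where the definition of “negative” is itself a simplification.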

    1. 3.2. Examples of Bots (or apps)# There are many types of bots in the social media world. Here are some examples of different bots: 3.2.1. Friendly bots:# Some bots are intended to be helpful, using automation to make tasks easier for others or to provide information, such as: Auto caption: https://twitter.com/headlinerclip Vaccine progress: https://twitter.com/vax_progress Blocking groups of people: https://twitter.com/blockpartyapp_ Social media managing programs that help people schedule and coordinate posts Delete old tweets: https://tweetdelete.net/ See a new photo of a red panda every hour: https://twitter.com/RedPandaEveryHr Bots might have significant limits on how helpful they are, such as the tech support bots you might have had frustrating experiences with on various websites. 3.2.2. Antagonistic bots:# On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam others with advertisements. Bots can be used to buy fake followers, or to make fake crowds that appear to support a cause (called Astroturfing). As one example, in 2016, Rian Johnson, who was in the middle of directing Star Wars: The Last Jedi, got bombarded by tweets that all originated in Russia (likely making at least some use of bots). “I’ve gotten a rush of tweets – coordinated tweets. Like, somewhere else on the internet there’s like a group on the internet saying, ‘Okay, everyone tweet Rian Johnson.’ All from Russian accounts, and all begging me not to kill Admiral Hux in this movie.” From: https://www.imdb.com/video/vi3962091545 (start at 7:49) After Star Wars: The Last Jedi was released, there was a significant online backlash. When a researcher looked into it: [Morten] Bay found that 50.9% of people tweeting negatively about “The Last Jedi” were “politically motivated or not even human,” with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place. https://www.indiewire.com/2018/10/star-wars-last-jedi-backlash-study-russian-trolls-rian-johnson-1202008645/ Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable. For example, the “Gender Pay Gap Bot” on Twitter is connected to a database of gender pay gaps at companies in the UK. On International Women’s Day, the bot automatically finds when any of those companies makes an official tweet celebrating International Women’s Day, and it quote-tweets it with the pay gap at that company.

      This section reflects the multifaceted and often ambiguous nature of social media automation. It’s fascinating to see the range from friendly bots designed to simplify and enrich our online experiences to hostile bots that can manipulate narratives and skew public opinion. This diversity underscores the profound impact automation has on digital communication, highlighting both its potential benefits and its potential for abuse. Discussions about registered versus unregistered bots and the phenomenon of fake bots add another layer, showing the complexity of discerning true automation from human imitation or deception. Overall, this section effectively outlines the double-edged nature of bots in the digital realm, prompting us to navigate this space with a critical and informed approach.
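As a sketch of how simple a friendly bot can be, here is a minimal Python version of an hourly-photo bot in the spirit of the red panda account (not its actual code; `post_update` is a hypothetical stand-in for a real platform API, and a real bot would also need authentication, error handling, and the platform’s permission for automation):

```python
import time

photo_queue = ["red_panda_001.jpg", "red_panda_002.jpg", "red_panda_003.jpg"]

def post_update(text, image):
    """Hypothetical API call; here we just print what would be posted."""
    print(f"POST: {text} [attached: {image}]")

def run_bot(queue, interval_seconds=3600):
    """Post the next queued photo once per interval, forever."""
    while True:
        image = queue.pop(0)
        queue.append(image)  # cycle the queue so the bot never runs dry
        post_update("Your hourly red panda!", image)
        time.sleep(interval_seconds)

# run_bot(photo_queue)  # uncomment to run: one post per hour
```

Notice that the same loop with a different payload, e.g., replies pushing a political message from many accounts, is all an astroturfing operation needs, which is part of why the ethics of bots depend so heavily on intent rather than on the technology itself.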

    1. How are people’s expectations different for a bot and a “normal” user? Choose an example social media bot (find one on your own, or look at Examples of Bots (or apps)). What does this bot do that a normal person wouldn’t be able to, or wouldn’t be able to do as easily? Who is in charge of creating and running this bot? Does the fact that it is a bot change how you feel about its actions? Why do you think social media platforms allow bots to operate? Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability?

      The section on bots and responsibility interestingly compares the use of donkeys in protests in Oman to the deployment of bots in the digital realm, highlighting the complex moral landscape of agency in socio-political expression. These reflective questions thoughtfully prompt me to consider the multifaceted role and impact of bots, encouraging deeper thinking about ethical considerations, the responsibilities of creators and users, and the broader impact on social media platforms and their user communities.

    1. Many people were upset at being deceived, and at the many levels of inauthenticity of Dr. McLaughlin’s actions, such as: Dr. McLaughlin pretended to be a person (@Sciencing_Bi) who didn’t exist. Dr. McLaughlin, as a white woman, created an account where she pretended to be a Native American (see more on “pretendians”). Dr. McLaughlin put herself at the center of the MeToo movement as it related to STEM, but then Dr. McLaughlin turned out to be a bully herself. Dr. McLaughlin used the fake @Sciencing_Bi to shield herself from criticism. From the NYTimes article: “‘The fact that @Sci-Bi was saying all these things about BethAnn, saying that BethAnn had helped her, it didn’t make me trust BethAnn — but it made me less willing to publicly criticize her because I thought that public criticism would be felt by the people she was helping,’ he said. ‘Who turned out to be fake.’” Though Dr. McLaughlin claimed a personal experience as a witness in a Title IX sexual harassment case, through the fake @Sciencing_Bi, she invented an experience of sexual harassment from a Harvard professor. This professor was being accused of sexual harassment by multiple real women, and these real women were very upset to find out that @Sciencing_Bi, who was trying to join them, was not a real person. Dr. McLaughlin, through the @Sciencing_Bi account, pretended to have an illness she didn’t have (COVID). She made false accusations about Arizona State University’s role in the (fake) person getting sick, and she was able to get attention and sympathy through the fake illness and fake death of the fake @Sciencing_Bi.

      @Sciencing_Bi’s case and Dr. McLaughlin’s actions provide a deeply disturbing example of inauthenticity that raises significant ethical concerns about online identity integrity and the consequences of deception. This incident not only exploits sensitive issues such as discrimination, sexual harassment, and the COVID-19 pandemic for personal gain, but also undermines the credibility of authentic voices and actions. The deceptions involved – from impersonating marginalized individuals to fabricating experiences of illness and harassment – not only expose a profound lack of respect for the individuals and communities they purportedly represent, but also contribute to a broader erosion of trust in digital spaces. This case is a stark reminder of the potential harm that inauthentic behavior can have on the credibility of individuals, communities, and important social movements, emphasizing the need for vigilance, critical thinking, and ethical standards in our online interactions.

    1. When someone presents themselves as open and as sharing their vulnerabilities with us, it makes the connection feel authentic. We feel like they have entangled their wellbeing with ours by sharing their vulnerabilities with us. Think about how this works with celebrity personalities. Jennifer Lawrence became a favorite of many when she tripped at the Oscars, and turned the moment into her persona as someone with a cool-girl, unpolished, unfiltered way about her. She came across as relatable and as sharing her vulnerabilities with us, which let many people feel that they had a closer, more authentic connection with her. Over time, that persona has come to be read differently, with some suggesting that this open-styled persona is in itself also a performance. Does this mean that her performance of vulnerability was inauthentic?

      The section on the multifaceted nature of authenticity, particularly how it intertwines with social connection and vulnerability, offers deep insights into the human psyche. The concept that authenticity in relationships is not just about truthfulness but also about the entanglement of vulnerabilities and mutual protection is profound. It’s interesting to consider how this dynamic plays out in various spheres, from personal relationships to public figures like celebrities. The example of Jennifer Lawrence’s public persona, and the discussion of whether her perceived authenticity is genuine or a performance, adds another layer to the complexity of authenticity. It raises intriguing questions about the nature of authenticity itself. The text does an excellent job of highlighting the nuances and the inherent subjectivity in understanding and valuing authenticity.