36 Matching Annotations
  1. Nov 2023
    1. As a member of society, we hope you are informed about the role social media plays in shaping society, such as how design decisions and bots can influence social movements (polarizing, spreading, or stifling different ones), and the different economic, social, and governmental pressures that social media platforms operate under. We hope you are then able to advocate for ways of improving how social media operates in society. That might be through voting, or pressuring government officials, or spreading ideas and information, or organizing coordinated actions or protests.

      I agree it's important we don't see social media platforms as inevitable or beyond question. Through voting, contacting representatives, spreading awareness, and other civic participation, we can push for positive changes. Perhaps the first step is increasing public consciousness of the real-world impacts algorithms, bots, and policies have.

    1. Many people like to believe (or at least convince others) that they are doing something to make the world a better place, as in this parody clip from the Silicon Valley show (the one Kumail Nanjiani was on, though not in this clip): But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney who hoped his invention of the cotton gin would reduce slavery in the United States, but only made it worse, or Alfred Nobel who invented dynamite (which could be used in construction or in war) and decided to create the Nobel prizes, or Albert Einstein regretting his role in convincing the US government to invent nuclear weapons, or Aza Raskin regretting his invention of infinite scroll.

      This brings up an excellent point about unintended consequences. Even innovations created with good intentions can end up causing harm. Therefore, we need to carefully consider potential implications before releasing new technologies. Going forward, it's important that inventors, policymakers, and others try to mitigate ethical risks.

    1. An example of how this can play out is the failed One Laptop Per Child (OLPC) project. In late 2005, tech visionary and MIT Media Lab founder Nicholas Negroponte [introduced a] $100 laptop [that] would have all the features of an ordinary computer but require so little electricity that a child could power it with a hand crank (from “OLPC’s $100 laptop was going to change the world — then it all went wrong”). OLPC wanted to give every child in the world a laptop, so they could learn computers, believing this would benefit the world. But this project failed for a number of reasons, such as: The physical device didn’t work well: the hand-powered generator was unreliable and the screen was too small to read, so OLPC was not actually providing a “superior” product to the rest of the world. When they did hand out some, the laptops didn’t come with good instructions; kids were just supposed to figure it out on their own (and if this failed, it must be the fault of the poor people around the world). It wasn’t designed for what kids around the world would actually want; they didn’t take input from actual kids around the world, and OLPC thought they had superior knowledge and just assumed they knew what people would want.

      The failure of OLPC highlights the importance of co-design and community involvement when creating technologies intended to help marginalized groups. Even with good intentions, tech visionaries like Negroponte can suffer from a "white savior" complex, assuming they know what is best for people rather than asking them. Moreover, OLPC did not involve its target users, children in developing countries, to understand their needs and gather their feedback. This led to inappropriate and unreliable hardware, dooming the project when it hit the ground.

    1. 9.6.1. Programming as Women’s Work As you may have noticed in chapter 2 of the book, the first programmers were almost all women. When computers were being invented, men put themselves in charge of building the physical devices (hardware), and then gave the work of programming the computer (software) to their assistants, often women. So, for example, you can see this at various stages of computer development, such as: 1800s: Charles Babbage describes the first full computer (Analytical Engine), and Ada Lovelace writes down the first computer program for it. 1945: The first general-purpose electrical computer was created by men and programmed by women. 1950s: Grace Hopper invents the compiler to help with programming the computers (built by men).

      It's fascinating to learn how instrumental women were in the early days of computer programming. As the hardware was being developed primarily by men, programming fell to their female assistants. This trend spans from Babbage and Lovelace in the 1800s, to the ENIAC programmers in the 1940s, to Grace Hopper's groundbreaking work on compilers in the 1950s. It's a shame these women's contributions aren't more widely celebrated. Their innovations paved the way for modern computing.

    1. 18.3.1. Weak against strong# Jennifer Jacquet argues that shame can be morally good as a tool the weak can use against the strong: The real power of shame is it can scale. It can work against entire countries and can be used by the weak against the strong. Guilt, on the other hand, because it operates entirely within individual psychology, doesn’t scale. […] We still care about individual rights and protection. Transgressions that have a clear impact on broader society – like environmental pollution – and transgressions for which there is no obvious formal route to punishment are, for instance, more amenable to its use. It should be reserved for bad behaviour that affects most or all of us. […] A good rule of thumb is to go after groups, but I don’t exempt individuals, especially not if they are politically powerful or sizeably impact society. But we must ask ourselves about the way those individuals are shamed and whether the punishment is proportional.

      While shame can empower the weak, it can also further marginalize vulnerable groups if used carelessly. Campaigns of public shaming should consider potential negative consequences. Jacquet is right that shame works best against groups rather than individuals. Even then, we must be cautious in wielding shame and should combine it with education and dialogue.

    1. In at least some views about shame and childhood, shame and guilt hold different roles in childhood development: Shame is the feeling that “I am bad,” and the natural response to shame is for the individual to hide, or the community to ostracize the person. Guilt is the feeling that “This specific action I did was bad.” The natural response to feeling guilt is for the guilty person to want to repair the harm of their action. In this view, a good parent might see their child doing something bad or dangerous, and tell them to stop. The child may feel shame (they might not be developmentally able to separate their identity from the momentary rejection). The parent may then comfort the child to let the child know that they are not being rejected as a person, it was just their action that was a problem. The child’s relationship with the parent is repaired, and over time the child will learn to feel guilt instead of shame and seek to repair harm instead of hide.

      The distinction between shame and guilt is an important one in childhood development. Shame can be damaging to a child's self-esteem and sense of self, while guilt allows the child to separate their actions from their worth as a person. Parents and teachers should aim to cultivate guilt over shame by criticizing the action, not the child. Comforting a shamed child is crucial to rebuilding their trust and security in the relationship.

    1. Billionaires One phrase that became popular on Twitter in 2022, especially as Elon Musk was in the process of buying Twitter, was: “It is always morally correct to bully billionaires.” (Note: We could not find the exact origins of this phrase or its variations). This is related to the concept in comedy of “punching up,” that is, making fun of people in positions of relatively more power.

      I understand the sentiment behind this phrase since billionaires often seem disconnected from ordinary people's struggles. However, I don't think more hostility or bullying is the answer. If we want billionaires to use their wealth more responsibly, we need open and constructive dialogue, not just insults.

    1. Crowd harassment includes all the forms of individual harassment we already mentioned (like bullying, stalking, etc.), but done by a group of people. Additionally, we can consider the following forms of crowd harassment: Dogpiling: When a crowd of people targets or harasses the same person. Public Shaming (this will be our next chapter). Cross-platform raids (e.g., 4chan group planning harassment on another platform). Stochastic terrorism: The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random. See also: An atmosphere of violence: Stochastic terror in American politics

      Crowd harassment represents a dangerous shift in power dynamics online. Anonymity online can lead people to act in cruel ways they likely wouldn't in person. Platforms need to consider how their algorithms and design may enable these harmful behaviors.

    1. 16.3.5. Spreading rumors and disinformation Crowds on social media can also share rumors, and can be an essential (if unreliable) way of spreading information during a crisis. Disinformation campaigns also make use of crowdsourcing. An academic research paper Disinformation as Collaborative Work (pdf) lays out a range of disinformation campaigns: Orchestrated: Entirely fake and astroturfed, no genuine users contributing. Cultivated: Intentionally created misinformation that is planted in a community. It is then spread by real users not aware they are part of a disinformation campaign. Emergent and self-sustaining: Communities creating and spreading their own rumors or own conspiracy narratives.

      This section highlights both the potential benefits and risks of crowdsourcing information through social media. On one hand, crowds can rapidly share updates and reports during crises when official information is lacking. This can provide life-saving awareness. However, the same viral sharing also enables rumors and propaganda to spread widely.

    2. In the case of a missing hiker rescued after Twitter user tracks him down using his last-sent photo, the “problem” was “Where did the hiker disappear?” and the crowd investigated whatever they could to find the solution of the hiker’s location.

      The missing hiker example illustrates how social media crowds can mobilize to solve real-world problems. By leveraging the abilities and local knowledge of an online network, the crowd could search for clues about the hiker's location. This case and others like it highlight the power of crowdsourcing and mass collaboration.

    1. When social media companies like Facebook hire moderators, they often hire teams in countries where they can pay workers less. The moderators then are given sets of content to moderate and have to make quick decisions about each item before looking at the next one. They have to get through many posts during their time, and given the nature of the content (e.g., hateful content, child porn, videos of murder, etc.), this can be traumatizing for the moderators: Facebook Is Ignoring Moderators’ Trauma: ‘They Suggest Karaoke and Painting’ In addition to the trauma, by finding places where they can pay workers less and get them to do undesirable work, they are exploiting current inequalities to increase their profits. So, for example, [“Colombia’s Ministry of Labor has launched an investigation into TikTok subcontractor Teleperformance [for content moderators], relating to alleged union-busting, traumatic working conditions and low pay”](https://time.com/6231625/tiktok-teleperformance-colombia-investigation/).

      The nature of content moderation highlights the human costs of keeping online platforms "safe". While automation may help, human judgment is still needed, which exposes workers to potentially traumatizing content. Thus, platforms that benefit from this labor have an ethical obligation to provide support and fair wages.

    1. 14.3.5. Public Figure Exception Twitter, Facebook, and other platforms had an exception to their normal moderation policies for political leaders, where they wouldn’t ban them even if they violated site policies (most notably applied to Donald Trump). After the January 6th insurrection at the US Capitol, Donald Trump was banned first from Twitter, then from Facebook, and Facebook announced an end to special treatment for politicians.

      The public figure exception highlights the challenges platforms face in balancing free speech and safety. On one hand, banning elected officials could prevent important political discourse and be seen as censorship. On the other hand, dangerous rhetoric from influential figures has real risks. It's a difficult balance between preserving free speech and limiting harmful content.

    1. So you might find a safe space online to explore part of yourself that isn’t safe in public (e.g., Trans Twitter and the beauty of online anonymity). Or you might find places to share or learn about mental health (in fact, from seeing social media posts, Kyle realized that ADHD was causing many more problems in his life than just having trouble sitting still, and he sought diagnosis and treatment). There are also support groups for various issues people might be struggling with, like ADHD, or having been raised by narcissistic parents.

      I relate to the examples of online communities providing spaces for sharing experiences many might not feel comfortable discussing openly offline. The connections formed in these groups have aided my mental health journey tremendously. While precautions are needed, the internet has opened up new possibilities for providing mutual support around challenges that often isolate us. These online spaces can help us know that we are not alone.

    1. “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.”

      I think Burnham makes an important point about how social media can exacerbate feelings of loneliness and disconnection. By constantly curating our online personas and viewing other people's lives, we get caught in a cycle of performative connection that can feel isolating. As someone who struggles with social anxiety, I relate to that sense of detachment Burnham describes. Social media promises community but often delivers the opposite.

  2. Oct 2023
    1. 10.3.2. Who gets to be designers In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:

      As Dr. Bennett's research highlights, disabled individuals are often excluded from the design process or assigned to secondary roles when technologies meant to assist them are developed. This is incredibly problematic as disabled users have invaluable lived expertise when it comes to understanding their own needs and challenges. To create truly accessible designs, disabled voices must be centered from the very start as the primary designers and decision-makers. They should lead the process, not just offer feedback.

    1. 10.2.5. Are things getting better? We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.

      Technology companies should think about accessibility from the very start, not as an afterthought. They should talk to disabled people when designing new products to understand how to make them truly usable for all. Accessibility helps everyone in some way. But we need a shift in perspectives, not just new devices. The whole system must improve accessibility together.

    1. For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification (a.k.a., inheritance). There are ways of duplicating that are built into social media platforms: Actions such as: liking, reposting, replying, and paid promotion get the original posting to show up for users more Actions like quote tweeting, or the TikTok Duet feature let people see the original content, but modified with new context. Social media sites also provide ways of embedding posts in other places, like in news articles There are also ways of replicating social media content that aren’t directly built into the social media platform, such as: copying images or text and reposting them yourself taking screenshots, and cross-posting to different sites

      The built-in sharing and engagement features on social platforms like likes, reposts, etc. act as replication mechanisms for memes and viral content. They allow certain posts to spread rapidly through the site. This shows how the very infrastructure of social media is designed to amplify content in an evolutionary way, based on user engagement.

    1. A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, and then others overhearing next. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of: technology (languages, weapons, medicine, writing, math, computers, etc.), religions, philosophies, political ideas (democracy, authoritarianism, etc.), art, organizations, etc. We can even consider the evolutionary forces that play in the spread of true and false information (like an old saying: “A lie is halfway around the world before the truth has got its boots on.”)

      The saying about lies spreading faster than truth is relevant today in the age of misinformation and social media. It highlights the need for critical thinking skills when consuming content, especially online. Moreover, it seems that false or misleading memes often exploit people's emotions and spread rapidly by piggybacking on people's biases.

    1. Content recommendations can go well when users find content they are interested in. Sometimes algorithms do a good job of it and users are appreciative. TikTok has been mentioned in particular as providing surprisingly accurate recommendations, though Professor Arvind Narayanan argues that TikTok’s success with its recommendations relies less on advanced recommendation algorithms, and more on the design of the site making it very easy to skip the bad recommendations and get to the good ones.

      The example of how TikTok's design allows users to easily skip past unappealing content shows that recommendation algorithms are not solely responsible for issues like filter bubbles. The overall user experience and interface design play a big role too. Companies should focus not just on building the "perfect" recommendation algorithm, but also on empowering users to easily navigate and control what they see.

    1. Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences on the version of cocaine more commonly used by Black people, and lighter sentences on the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act).

      I found the distinction between individual analysis and systemic analysis to be insightful. It's easy to blame individuals for biased outcomes, but often there are systemic issues at play that cause biased results regardless of individual intent. The example with the differences in crack vs powder cocaine sentencing guidelines was eye-opening. It shows how seemingly neutral rules can build racial bias into a system.

    1. Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings data sets, but at least some users’ data could be traced back to them.

      This is concerning to me as someone who values privacy. I want to believe that anonymized data really is anonymous but examples like the Netflix dataset show it may not be as anonymous as claimed. Even without names attached, combinations of data like movie ratings could point back to specific users. It makes me wary of how much anonymous data is out there that could potentially be traced back to real identities.
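
      To make this concern concrete, below is a minimal sketch of the idea behind a "linking attack," the kind of technique used to re-identify people in the Netflix ratings release (which matched ratings and dates against public IMDb reviews). All records, names, and field formats here are made-up illustrations, not the actual method or data.

      ```python
      # Toy illustration of a linking attack on "anonymized" ratings data.
      # All records and field names here are hypothetical.

      # "Anonymized" release: names removed, but (movie, rating, date) kept.
      anonymized = {
          "user_83421": [("Movie A", 5, "2005-03-02"), ("Movie B", 1, "2005-03-09")],
          "user_10977": [("Movie A", 3, "2005-06-20"), ("Movie C", 4, "2005-07-01")],
      }

      # Public side information: a person's reviews posted publicly elsewhere.
      public_reviews = {
          "Alice": [("Movie A", 5, "2005-03-02"), ("Movie B", 1, "2005-03-09")],
      }

      def candidate_matches(public_items, released, min_overlap=2):
          """Return released user IDs whose records overlap a public profile enough to suggest a match."""
          matches = []
          for user_id, records in released.items():
              overlap = set(public_items) & set(records)
              if len(overlap) >= min_overlap:
                  matches.append(user_id)
          return matches

      for person, items in public_reviews.items():
          print(person, "likely corresponds to", candidate_matches(items, anonymized))
      # With enough distinctive (movie, rating, date) combinations,
      # "anonymous" rows can be tied back to a named individual.
      ```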

    1. When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      I think there are ways we can try to regain some privacy when using social media. For example, being careful about what personal info we share publicly and using privacy settings wisely. In addition, it's important to be mindful of what we say in private messages on social platforms since nothing is ever really deleted. I believe we have to accept that complete privacy is hard on platforms designed to connect us.

    1. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeting advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’

      The potential for social media sites to allow unethical targeted advertising highlights why we need thoughtful regulations around digital advertising. While targeted ads based on interests may sometimes benefit users, the ability to specifically target vulnerable groups like children is extremely concerning. Furthermore, the example of the Trump campaign targeting Black voters in order to deter their turnout demonstrates a profoundly undemocratic use of social media data. Weaponizing voter data to disenfranchise citizens should be unacceptable in a free society.

    1. Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence. Social media data can also be used to infer information about larger social trends like the spread of misinformation. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):

      The example about COVID-19 cases correlating with Yankee Candle reviews mentioning a lack of scent is fascinating. It shows how an unexpected data source like online reviews can shed light on a public health issue. The person who noticed this pattern found a creative, if rough, way to track COVID cases. This demonstrates how analyzing data in creative ways can reveal insights that would otherwise remain hidden.
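
      As a rough sketch of how such an inference might be checked, here is a small Python example that computes a simple correlation between weekly COVID-19 case counts and weekly counts of "no scent" candle reviews. The numbers are placeholder values for illustration only, not the real review or case data.

      ```python
      # Toy sketch: correlating two weekly time series.
      # The numbers below are illustrative placeholders, not real data.
      from statistics import correlation  # available in Python 3.10+

      weekly_covid_cases = [120, 340, 560, 900, 1500, 2100]   # hypothetical counts
      weekly_no_scent_reviews = [3, 5, 9, 14, 22, 30]         # hypothetical counts

      r = correlation(weekly_covid_cases, weekly_no_scent_reviews)
      print(f"Pearson correlation: {r:.2f}")
      # A strong positive correlation is suggestive, but correlation alone
      # does not establish causation or rule out confounding factors.
      ```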

    1. One of the traditional pieces of advice for dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community’s own “Rules of the Internet”: Do not argue with trolls - it means that they win But the essayist Film Crit Hulk argues against this in Don’t feed the trolls, and other hideous lies. That piece argues that the “don’t feed the trolls” strategy doesn’t stop trolls from harassing: Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored.

      The idea of "not feeding the trolls" is a common suggestion to deal with online harassment. However, as Film Crit Hulk points out, in many cases, ignoring trolls doesn't make them go away, it can make them even more persistent and aggressive. This is especially true when it comes to ongoing online harassment. This debate reminds us that addressing trolling isn't always straightforward and it's important to find a balance between not rewarding bad behavior and effectively dealing with online harassment.

    1. One set of the early Internet-based video games were Multi-User Dungeons (MUDs), where you were given a text description of where you were and could say where to go (North, South, East, West) and text would tell you where you were next. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS). In these MUDs, players developed activities that we now consider trolling, such as “Griefing” where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming” where a player intentionally starts a hostile or offensive conversation.

      It's fascinating to learn about the origins of trolling behavior, particularly how it evolved from early internet message boards and multiplayer games like MUDs. The concept of experienced users trolling newbies to assert their superiority sheds light on the psychology of online interactions. It's a reminder of how online culture and behavior have transformed over time. Moreover, it highlights the need for a more thoughtful and empathetic approach to online communication.

    1. This trend brought complicated issues of authenticity because presumably there was some human employee that got charged with running the company’s social media account. We are simultaneously aware that, on the one hand, that human employee may be expressing themselves authentically (whether playfully or about serious issues), but also that human is at the mercy of the corporation and the corporation can at any moment tell that human to stop or replace that human with another. One particularly noteworthy corporate brand account was the Twitter account of Steak-Umms, which both delved into serious human topics and into the larger trend of corporate brand account authenticity: why are so many young people flocking to brands on social media for love, guidance, and attention? I’ll tell you why. they’re isolated from real communities, working service jobs they hate while barely making ends meat, and are living w/ unchecked personal/mental health problems (see whole thread here)

      The rise of authentic and sometimes socially conscious posts from corporate social media accounts has sparked questions about their genuineness. While real people manage these accounts and may express themselves sincerely, they must also adhere to the company's guidelines and goals. For instance, the Twitter account of Steak-Umms engaged in discussions about serious issues affecting young people, which sheds light on the tension between a humanized corporate approach and the company's interests. I believe this situation reflects the changing nature of social media interactions, where it's becoming more challenging to distinguish personal expression from corporate messaging.

    1. 6.5.1. Example: Mr. Rogers As an example of the ethically complicated nature of parasocial relationships, let’s consider the case of Fred Rogers, who hosted a children’s television program from 1968 to 2001. In his television program, Mr. Rogers wanted all children to feel cared for and loved. To do this, he intentionally fostered a parasocial relationship with the children in his audience (he called them his “television friends”): I give an expression of care every day to each child, to help him realize that he is unique. I end the program by saying, “You’ve made this day a special day, by just your being you. There’s no person in the whole world like you, and I like you, just the way you are.”

      The case of Mr. Rogers exemplifies the intricacies of parasocial relationships, specifically in the context of children's television. While his intention was to provide comfort and support to young viewers, it raises ethical questions about the boundaries and implications of such connections. On one hand, his messages of care and acceptance were heartwarming and likely had a positive impact on countless children. However, it's crucial to think about whether creating these special bonds with viewers might blur what's real and what's on the screen, potentially leading to unrealistic expectations or attachments.

    1. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites. With that in mind, you can look at a social media site and think about what pieces of information could be available and what actions could be possible. Then for these you can consider whether they are: low friction (easy), high friction (possible, but not easy), or disallowed (not possible in any way).

      The introduction of infinite scroll was definitely a groundbreaking innovation that transformed the way we navigate digital content. It's fascinating to see how it revolutionized the user experience by removing the need to click through multiple pages, allowing for uninterrupted scrolling through search results or social media posts. However, Aza Raskin's regret regarding the unintended consequences of infinite scroll on user behavior is an interesting point. It's a reminder that while reducing friction and enhancing ease of use are essential goals in user interface design, they must be balanced against how such design choices can impact user engagement and well-being.

    1. 2003 saw the launch of several popular social networking services: Friendster, Myspace, and LinkedIn. These were websites where the primary purpose was to build personal profiles and create a network of connections with other people, and communicate with them. Facebook was launched in 2004 and soon put most of its competitors out of business, while YouTube, launched in 2005 became a different sort of social networking site built around video.

      It's interesting how Facebook became preeminent so quickly, showing how fast social media can change. At the same time, YouTube offered something new by focusing on videos, giving individuals more ways to connect online. Looking back at the history, it's clear that technology keeps evolving and shaping how we interact online. I believe this raises important questions about how digital platforms will change in the future and how they affect our online relationships.

    1. Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness.

      This passage provides an interesting perspective on utilitarianism and the role of data in the context of making moral decisions. It emphasizes the importance of having all the necessary information when using utilitarianism. Moreover, the text also raises a point about considering the interests of people we may not know personally. In our society, the consequences of our actions extend beyond our immediate circles and failing to account for these broader implications can lead to skewed moral judgments. It serves as a reminder that the moral choices we make based on utilitarianism are only as good as the data we have access to.
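
      To make the point about partial data concrete, here is a small numeric sketch with entirely made-up utility values: the same action looks like a net gain when only people we happen to know are counted, but is a net loss once everyone affected is included.

      ```python
      # Toy utility calculus: same action, different data coverage.
      # All utility numbers are hypothetical, for illustration only.

      # Projected change in happiness ("utility") for each person affected by an action.
      effects = {
          "friend_1": +3,
          "friend_2": +2,
          "colleague": +1,
          "stranger_1": -4,  # people we don't happen to know are easy to leave out
          "stranger_2": -5,
      }

      known_circle = {"friend_1", "friend_2", "colleague"}

      partial_total = sum(v for k, v in effects.items() if k in known_circle)
      full_total = sum(effects.values())

      print("Utility counting only people we know:", partial_total)  # +6 -> looks like a clear gain
      print("Utility counting everyone affected:  ", full_total)     # -3 -> actually a net harm
      ```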

    1. 4.3. Who does data fit? Because all data is a simplification of reality, those simplifications work well for some people and some situations but can cause problems for other people and other situations. Thus, when designers of social media systems make decisions about how data will be saved and what constraints will be put on the data, they are making decisions about who will get a better experience. Based on these decisions, some people will fit naturally into the data system, while others will have to put in extra work to make themselves fit, and others will have to modify themselves or misrepresent themselves to fit into the system. So, for example, if we made a form that someone needed to enter their address, we could assume everyone is in the United States and not have any country selection.

      I believe this text brings up an important ethical consideration in the world of data and technology. It highlights the fact that data systems, including social media platforms, are not neutral but shaped by design choices. These choices can have significant implications for user experiences and inclusivity. The example of assuming everyone is in the United States for address entry is a clear illustration of how design decisions can inadvertently require individuals to adapt, potentially leading to misrepresentation.
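
      As a minimal sketch of the quoted form example (with hypothetical field names and validation rules), here is a schema that quietly assumes every user lives in the United States; anyone with a non-US address either cannot submit it or has to misrepresent where they live.

      ```python
      # Toy address form that assumes everyone lives in the United States.
      # Field names and rules are hypothetical, for illustration only.
      from dataclasses import dataclass

      US_STATES = {"WA", "OR", "CA"}  # abbreviated; a real form would list all US states

      @dataclass
      class AddressForm:
          street: str
          city: str
          state: str       # required two-letter US state; no "country" field at all
          zip_code: str    # required 5-digit US ZIP

          def validate(self) -> list[str]:
              errors = []
              if self.state not in US_STATES:
                  errors.append("state must be a US state abbreviation")
              if not (self.zip_code.isdigit() and len(self.zip_code) == 5):
                  errors.append("zip_code must be exactly 5 digits")
              return errors

      # A user in Vancouver, Canada simply does not fit this data model:
      form = AddressForm("800 Robson St", "Vancouver", "BC", "V6Z 2E7")
      print(form.validate())
      # ['state must be a US state abbreviation', 'zip_code must be exactly 5 digits']
      ```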

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      I found the comparison between protesting donkeys in Oman and bots to be intriguing. It highlights the complex nature of assigning responsibility when there is a disconnect between intentions and actions. Similar to the donkey in the protest, bots act without understanding the broader context or ethical implications.

    1. As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day.

      The mention of Microsoft's Tay bot getting corrupted in 2016 is a clear reminder of the challenges surrounding bots in the online world. It's surprising how quickly a well-intentioned project can take a negative turn when exposed to harmful input from users. I believe this raises important questions about the responsibility of both bot creators and online communities in ensuring the ethical use of such technology.

  3. Sep 2023
    1. We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look. We’re curious to see how this phrase continues to be used, and how these sentiments are continuing, being rejected, or evolving.

      The discussion on the view of social media as distinct from "real life" is thought-provoking. It's interesting how the notion that "the internet isn't real life" has been used historically to downplay the significance of online interactions and their consequences. Today, this view is expressed in multiple ways, as shown by Nate Silver's tweet. It raises important questions about the authenticity and representativeness of what we encounter on social media platforms.

    1. Kantianism: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”

      The Kantian ethical framework is intriguing because it challenges us to think about whether the moral rules we believe in can apply universally. Furthermore, this notion helps us to consider the implications of our actions and whether they align with a principle that could work for everyone, not just ourselves. It's a thought-provoking concept that adds depth to discussions about ethical frameworks.