37 Matching Annotations
  1. Dec 2023
    1. In England in the early 1800s, Luddites were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also Sci-Fi author Ted Chiang on Luddites and AI)

      This shows how far back this fear of innovation really stems, stretching all the way back to the textile factories of the 1800s. Makes me wonder if all of these innovations are really that bad, or if it is human nature to shy away from innovation.

    2. But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney who hoped his invention of the cotton gin would reduce slavery in the United States, but only made it worse, or Alfred Nobel who invented dynamite (which could be used in construction or in war) and decided to create the Nobel prizes, or Albert Einstein regretting his role in convincing the US government to invent nuclear weapons, or Aza Raskin regretting his invention of infinite scroll.

      Interesting how so many inventions with good intentions were used for evil purposes, the cotton gin being one of these examples. I think social media as a whole fits into this category. Initially, the intention was positive and wholesome, but as time passed it started to become a beast of its own.

    1. Tech industry leaders in Silicon Valley then take what they made with exploited labor, and sell it around the world, feeling good about themselves, believing they are benefitting the world with their “superior” products.

      Interesting how the tech industry evolved into this. I always thought of colonialism as an old idea that doesn't apply to our world today. In fact, I was wrong: colonialism is still present, just in very different forms. Silicon Valley is a perfect example of this.

    1. Surveillance capitalism began when internet companies started tracking user behavior data to make their sites more personally tailored to users. These companies realized that this data was something that they could profit from, so they began to collect more data than strictly necessary (“behavioral surplus”) and see what more they could predict about users. Companies could then sell this data about users directly, or (more commonly), they could keep their data hidden, but use it to sell targeted advertisements. So, for example, Meta might let an advertiser say they want an ad to only go to people likely to be pregnant. Or they might let advertisers make ads go only to “Jew Haters” (which is ethically very bad, and something Meta allowed).

      These days, it looks like social media platforms have become a beast of their own, no longer the fun and innocent content-sharing websites they once were. This idea of taking users' data and selling it for profit without the users knowing is exactly what I'm talking about.

  2. Nov 2023
    1. On February 6, 2022, Jeremy Schneider became the Twitter “main character of the day” for posting the following Tweet, which was widely condemned as being mean and not understanding other people’s experiences:

      I thought this was a really interesting series of tweets. Basically, this person made a tweet that offended some people, and as a result he got a lot of unwanted heat. His response is quite extraordinary given the lengths he goes to in order to apologize.

    1. Another way of considering public shaming is as schadenfreude, meaning the enjoyment obtained from the troubles of others.

      That's an interesting idea, and I think it is true. A lot of the general public does find it entertaining to witness and read about the misfortunes of people with celebrity status. As such, schadenfreude seems like an apt concept here.

    1. In addition, fake crowds (e.g., bots or people paid to post) can participate in crowd harassment. For example: “The majority of the hate and misinformation about [Meghan Markle and Prince Henry] originated from a small group of accounts whose primary, if not sole, purpose appears to be to tweet negatively about them. […] 83 accounts are responsible for 70% of the negative hate content targeting the couple on Twitter.” Twitter Data Has Revealed A Coordinated Campaign Of Hate Against Meghan Markle

      I didn't know this was what actually happened. I thought people in Britain had a genuine issue with that marriage. Technology can really influence the general public's outlook on issues without the public even knowing. Quite fascinating.

    2. Stochastic terrorism The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random. See also: An atmosphere of violence: Stochastic terror in American politics

      Stochastic terrorism is a term I've never heard before. Basically, someone "orchestrates an act of terror" without directly ordering it, through the influence of their large following. I think the January 6 attack on the US Capitol, back during the last election, would be a good example of this.

    1. Fold-It is a game that lets players attempt to fold proteins. At the time, researchers were having trouble getting computers to do this task for complex proteins, so they made a game for humans to try it. Researchers analyzed the best players’ results for their research and were able to publish scientific discoveries based on the contributions of players.

      That is actually really interesting. I could never have imagined that scientific discoveries would stem from something as silly as crowdsourcing. Fold-It truly is a phenomenon that must be studied further.

    1. When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren’t necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves

      I've never heard of this term "crowdsourcing" before, although I have seen it all around me. Like the example given, Wikipedia is the product of a crowd of people making small contributions (i.e., crowdsourcing). I just found it interesting that we give this behavior a word.

    1. Individual users are often given a set of moderation tools they can use themselves, such as:
       - Block an account: a user can block an account from interacting with them or seeing their content
       - Mute an account: a user can allow an account to try interacting with them, but the user will never see what that account did.
       - Mute a phrase or topic: some platforms let users block content by phrases or topics (e.g., they are tired of hearing about cryptocurrencies, or they don’t want spoilers for the latest TV show).
       - Delete: Some social media platforms let users delete content that was directed at them (e.g., replies to their post, posts on their wall, etc.)
       - Report: Most social media sites allow users to report or flag content as needing moderation.

      Interesting that some of this moderation power is given to the users as well. I think this is also an effective means of quality control, since there is such an abundance of content and comments out on the internet.
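
      To make the idea concrete, here is a toy sketch in Python (my own hypothetical example, not from the book) of how these user-side moderation rules might be applied when building a feed. The account names, phrases, and posts are all made up:

      ```python
      # Hedged sketch: a toy model of user-side moderation settings.
      # Real platforms implement this very differently.
      blocked = {"spam_account"}
      muted_accounts = {"loud_account"}
      muted_phrases = {"crypto", "spoiler"}

      posts = [
          {"author": "friend", "text": "Lunch was great!"},
          {"author": "spam_account", "text": "Buy crypto now!!!"},
          {"author": "loud_account", "text": "Hot take incoming..."},
          {"author": "friend", "text": "Huge spoiler for the finale"},
      ]

      def visible(post):
          """Apply block, mute-account, and mute-phrase rules in order."""
          if post["author"] in blocked:
              return False
          if post["author"] in muted_accounts:
              return False
          # Hide posts containing any muted phrase (case-insensitive).
          return not any(phrase in post["text"].lower() for phrase in muted_phrases)

      feed = [p for p in posts if visible(p)]
      print([p["text"] for p in feed])  # → ['Lunch was great!']
      ```

      One takeaway from sketching it this way: blocking and muting are just filters applied on the viewer's side, which is why the blocked or muted account can keep posting without ever knowing.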

    1. In order to make social media sites usable and interesting to users, they may ban different types of content such as advertisements, disinformation, or off-topic posts. Almost all social media sites (even the ones that claim “free speech”) block spam, mass-produced unsolicited messages, generally advertisements, scams, or trolling.

      I never thought that there would be people working behind the scenes to monitor what content is put out on certain websites. Now that it is mentioned, this idea does make a lot of sense. You don't want users to continuously post offensive or inappropriate content to these websites because it can ruin the site's reputation.

    1. In her essay “The Great Offline,” Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them).

      In other words, times are changing and people should learn to adapt. There may be a time in the future when people will idealize the current day, when we only had smartphones and laptops compared to whatever exists then. The solution is not to revert back in time, but to learn to adapt to the modern day.

    2. Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people’s mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset.

      That is really interesting. I've heard of scientific experiments where adopted newborn twins were separated at birth in order to tackle the question of nature vs. nurture. Of course this is extremely unethical because there wasn't any consent given to the research subjects and because you are messing with helpless human beings. So how is this Facebook experiment any different? I am surprised to hear that Facebook would commit an unethical action like this.

    1. There are ways of duplicating that are built into social media platforms:
       - Actions such as liking, reposting, replying, and paid promotion get the original posting to show up for users more
       - Actions like quote tweeting, or the TikTok Duet feature, let people see the original content, but modified with new context.
       - Social media sites also provide ways of embedding posts in other places, like in news articles

      Based on this portion of text, I get the sense that social media platforms are built for spreading information and content. There is an abundance of methods at your disposal to share content with other people. Of course, some content will become viral if enough people share it.

    1. A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, and then others overhearing next.

      This is an interesting explanation of what a meme is. Basically, it states that a meme spreading is analogous to a gene spreading through a population, changing slightly as it goes. I just thought this was a very interesting comparison.

    1. Fig. 11.2 A tweet from current Twitter owner Elon Musk blaming users for how the recommendation algorithm interprets their behavior.

      I guess this tweet sums up one of the behaviors of the algorithm. If something is gaining popularity for the wrong reason, the algorithm doesn't know that it is for the wrong reason. The algorithm only knows that it is gaining popularity. As such, that specific content will keep getting pushed by the algorithm.

    1. Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons: They don’t want another social media site copying their hard work in coming up with an algorithm They don’t want users to see the algorithm and then be able to complain about specific details They don’t want malicious users to see the algorithm and figure out how to best make their content go viral

      I didn't know that these algorithms vary from company to company. I thought the algorithm that recommends videos to my feed would be standardized across all social media companies, because of how similarly all of these algorithms seem to work. I feel like they all recommend videos based on the same criteria.

  3. Oct 2023
    1. The following tweet has a video of a soap dispenser that apparently was only designed to work for people with light-colored skin

      I would assume this was an honest mistake. Of course, I don't think the designers behind the soap dispenser really meant to exclude people of a certain skin color. It just goes to show how many little details need to be taken into account when designing products if the goal is not to exclude any one group.

    1. Most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats. So people who can’t see as many colors are considered to be color blind, a disability. But there are also a small number of people who are tetrachromats and can see four base colors2 and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people), lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability.

      Interesting point regarding disabilities. I always assumed a disability would be something that inhibits someone's day-to-day activities in some shape or form. However, the book basically defines a disability as something that goes against the norm. The example given is how some people (referred to as tetrachromats) can see more combinations of colors than most people. As such, they diverge from the norm in this sense, yet are not impaired by this "disability".

    1. Non-User Information: Social Media sites might collect information about people who don’t have accounts, like how Facebook does

      Just another interesting point I noticed. I didn't expect that non-users would have their data/information tracked as well; it seems rather odd to me. I guess this just goes to show how widespread the practice of collecting user data has grown. It makes sense considering how these companies profit off of this data.

    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      I found this comment to be interesting. Of course, when I sign up to use a social media app/website, I know that by agreeing to the terms and conditions I am giving up some privacy. However, I would be under the impression that the companies would be held to a higher standard in keeping my private information secure. I feel uneasy now, knowing about these past hacks into major tech companies where a lot of private information was accessed and leaked.

    1. It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause.

      I found this idea to be interesting. Oftentimes, when I see real data and statistics backing up a claim, I almost entirely credit the claim; it's almost as if I don't actually need to read the data to believe it. I think many other people fall into this trap as well: giving too much respect to data. This sentence makes a great point that data can often be coincidental. The example in the graph shows data backing the claim that the divorce rate and margarine consumption are correlated, but this is an absurd idea. The data is simply coincidental.
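
      The book's point about spurious correlations can be demonstrated directly. As a minimal sketch (my own illustrative example, not from the book), the following Python snippet generates 200 completely unrelated random series and then searches for the pair that happens to look most correlated. With this many pairs to compare, chance alone produces a strikingly high correlation:

      ```python
      import random

      random.seed(0)  # fixed seed so the demo is reproducible

      def correlation(xs, ys):
          """Pearson correlation coefficient, computed from scratch."""
          n = len(xs)
          mean_x, mean_y = sum(xs) / n, sum(ys) / n
          cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          var_x = sum((x - mean_x) ** 2 for x in xs)
          var_y = sum((y - mean_y) ** 2 for y in ys)
          return cov / (var_x * var_y) ** 0.5

      # 200 series of 10 random values each: no real relationship anywhere.
      series = [[random.random() for _ in range(10)] for _ in range(200)]

      # Scan all ~20,000 pairs for the strongest apparent correlation.
      best = max(
          ((i, j, correlation(series[i], series[j]))
           for i in range(len(series)) for j in range(i + 1, len(series))),
          key=lambda t: abs(t[2]),
      )
      print(f"Series {best[0]} and {best[1]} correlate at r = {best[2]:.2f}")
      ```

      The "best" pair typically shows a very strong correlation even though every series is pure noise, which is exactly how divorce rates can appear to track margarine consumption: look at enough variable pairs and some will match by accident.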

    1. Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might):
       - when users are logged on and logged off
       - who users interact with
       - what users click on
       - what posts users pause over
       - where users are located
       - what users send in direct messages to each other

      I knew that platforms can collect data such as where users are located and when they are logged on and logged off. However, I found it surprising that information on what posts users pause over and what specific things users click on is collected too. I didn't expect that even minuscule things, such as how much time I spent looking at something, could be recorded and used to tailor results to my "liking".

    1. Here are some examples of parody reviews of the banana slicer:

      I found this example interesting because it is probably the only example I read in this chapter that is representative of harmless trolling. I find the trolls to be motivated more by comedy; there doesn't seem to be any internal motive to elicit a negative reaction from the recipient. Overall, I enjoyed reading about this example of the banana slicer.

    1. While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction.

      I thought this sentence perfectly sums up what I have read about trolling thus far in this chapter. From the examples and descriptions given, I definitely agree that all these extreme and sometimes vulgar actions taken through trolling are simply meant to elicit a reaction from the viewer. Whether the reactions are good or bad entirely depends on the situation, but overall I get the sense that trolling is not a positive thing.

    1. Anonymity can also encourage authentic behavior. If there are aspects of yourself that you don’t feel free to share in your normal life (thus making your normal life inauthentic), then anonymity might help you share them without facing negative consequences from people you know.

      I thought this was an interesting point relating anonymity to authenticity. Whenever I think about someone being anonymous on the internet, I always think of them getting the freedom to express toxic behavior that may be considered "inauthentic". I never thought about it from the perspective that anonymity could encourage truly authentic behavior. Just an interesting point I ran into when reading this page.

    1. “What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. …. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).”

      I never followed Trump's tweets during the election season, so I didn't know about this. I guess in hindsight, this does make sense. I knew Trump had a reputation for tweeting aggressively and conveying emotion through his tweets, so it makes sense that campaign announcements are tweeted in a different tone. I also see the connection to authenticity, and how viewers might lose faith in his tweets if some of them were not actually written by Trump.

    1. Fig. 5.4 AIM let you organize your contacts and see who was currently online.

      Looking at the graphic, I see a stark contrast between the social media of today and the social media of the past (specifically in layout and design). I feel like social media has become much more simplistic in its design as well as more user friendly. The image displayed looks really cluttered and confusing. Just goes to show how far social media and technology have evolved.

    1. Graffiti and other notes left on walls were used for sharing updates, spreading rumors, and tracking accounts. Books and news write-ups had to be copied by hand, so that only the most desired books went “viral” and spread.

      I never thought about what social media was before the era of social media that we know today. The textbook makes a really good point that social media is simply the ways that people communicate and spread rumors, updates, and news. In that sense, graffiti and news write-ups were basically the social media of another time period; they fulfill all the basic requirements.

    1. As you can see in the apple example, any time we turn something into data, we are making a simplification.1 If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc.

      I never thought about data in this sense, but it does make sense. Putting a number to an experience or event is a massive oversimplification of it, and I don't think it does any justice to how grand or spectacular (or perhaps not) the event was. How can you capture the emotions, the sounds, the sights, the smells, etc. by labelling it with a number? You simply can't. I guess this is what the authors are trying to get at when they say you are losing characteristics when turning something into data.

    2. When looking at real-life data claims and datasets, you will likely run into many different problems and pitfalls in using that data. Any dataset you find might have:
       - missing data
       - erroneous data (e.g., mislabeled, typos)
       - biased data
       - manipulated data

      Of course, what is being said makes sense. Oftentimes, datasets published to the public do err in one sense or another, whether it be missing, erroneous, biased, or manipulated data. I found this interesting because when I read statistics in news articles or textbooks, I often don't question the numbers. I think this is because when I see a very precise number (such as 52.8%), I give it high regard and credibility, when in reality I should question it and pay more attention to flaws such as the ones listed here.

    1. With all languages (including programming languages), you combine pieces of the language together according to specific rules in order to create meaning. For example: Consider this sentence in English:

      I have never coded before (except briefly using scratch.mit in elementary school). As such, I find this idea of treating coding as a language (like English) very interesting. Essentially, I can structure code through various rules to give it proper meaning, just like how English has certain rules regarding things like punctuation, verb tenses, etc. It looks like programming languages have their own sets of rules that will govern how I write code.
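
      To illustrate the analogy, here is a tiny Python sketch of my own (not from the book): the function name, arguments, and sentence are all made up, but the example shows how Python's grammar rules combine pieces into meaning, much like word order does in an English sentence:

      ```python
      # Like English, Python has strict rules about how pieces combine.
      # "The dog ate the treat" has a subject, verb, and object;
      # a function call has a name, parentheses, and comma-separated arguments.

      def eat(animal, food):
          # An f-string composes the words according to Python's own grammar.
          return f"The {animal} ate the {food}."

      print(eat("dog", "treat"))   # valid: follows Python's call syntax
      # eat "dog" "treat"          # invalid: breaks the grammar (SyntaxError)
      ```

      Just as rearranging words can turn an English sentence into nonsense, rearranging these pieces (say, dropping the parentheses) makes the code meaningless to Python.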


    1. Fig. 3.1 A photo that is likely from a click-farm, where a human computer is paid to do actions through multiple accounts, such as like a post or rate an app. For our purposes here, we consider this a type of automation, but we are not considering this a “bot,” since it is not using (electrical) computer programming.

      I found this idea of a click farm really interesting. I've never heard of such a concept before: essentially a "human bot". I almost see a parallel between the mass production of goods during the industrial revolution and what is nowadays a mass production of internet likes, also through human labor. Just an interesting idea I thought I should comment on.

  4. Sep 2023
    1. Ancient Ethics

      Another framework that was not mentioned in 2.2.3, but would fit into the category of "Ancient Ethics", would be stoicism. A brief rundown of a key point of stoicism: a central belief is that all people are equals. Things such as our wealth, status, power, possessions, and stature are neither good nor bad, nor should they be important to social relationships.

    1. It might help to think about ethical frameworks as tools for seeing inside of a situation. In medicine, when doctors need to see what’s going on inside someone’s body, they have many different tools for looking in, depending on what they need to know.

      I thought this was an interesting comparison between ethics and medicine. In medicine, doctors often prescribe various different tests in order to find the root of a patient's problem. Similarly, when presented with an ethical dilemma, people often take into account various different views and perspectives before reaching a conclusion. I never viewed ethics in this sense.