6 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Julia Evans. Examples of floating point problems. January 2023. URL: https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/ (visited on 2023-11-24).

      The author, Julia Evans, describes how numerical computations suffer from precision issues: "computers cannot calculate most decimal values exactly," and these inaccuracies can produce "small errors with large consequences." These concerns also bear on our prior discussion of utility calculus. How much confidence should we have in machines performing even simple arithmetic if they are subject to precision loss? Given how difficult it is to rely on computers for the simplest mathematical operations (the building blocks of data-driven ethics), quantifying something as subjective as human well-being appears to be an even greater challenge than previously thought.
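      The precision issues Evans describes are easy to reproduce. A minimal sketch in Python (the specific values are standard illustrations of the problem, not taken from the annotation):

      ```python
      # Most decimal fractions have no exact binary representation,
      # so even a simple sum picks up a tiny rounding error.
      a = 0.1 + 0.2
      print(a)         # 0.30000000000000004, not 0.3
      print(a == 0.3)  # False

      # Repeated addition accumulates the error.
      total = sum(0.1 for _ in range(10))
      print(total == 1.0)  # False

      # For exact base-10 arithmetic, Python's decimal module avoids the issue.
      from decimal import Decimal
      print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
      ```

      Even the workaround requires knowing the problem exists, which supports the annotation's skepticism about treating computed quantities as exact.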

    1. Think for a minute about consequentialism. On this view, we should do whatever results in the best outcomes for the most people. One of the classic forms of this approach is utilitarianism, which says we should do whatever maximizes ‘utility’ for most people. Confusingly, ‘utility’ in this case does not refer to usefulness, but to a sort of combo of happiness and wellbeing. When a utilitarian tries to decide how to act, they take stock of all the probable outcomes, and what sort of ‘utility’ or happiness will be brought about for all parties involved. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations).

      The passage describes how utility calculus is supposed to work. However, utility cannot be quantitatively measured: the textbook itself defines it only as a "sort of combination of happiness and well-being." Because there is no common unit for utility, one person's grief and another person's satisfaction cannot be measured and summed into a meaningful quantity. In a section on data-informed ethical decisions, this is a serious problem: if the approach's basic premise cannot be measured, the result is the appearance of rigorous morality rather than the real thing.
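      To make the critique concrete, here is what a "utility calculus" would look like if utility could be quantified. Every outcome, probability, and utility figure below is invented for illustration, which is precisely the annotation's objection: nothing grounds these numbers.

      ```python
      # Hypothetical expected-utility ("utility calculus") sketch.
      # All values are invented; there is no principled unit of utility.
      outcomes = [
          # (probability, net utility summed across all affected parties)
          (0.50,  10.0),  # feature delights most users
          (0.25,  -5.0),  # feature enables some harassment
          (0.25, -40.0),  # feature seriously harms a small group
      ]

      # A utilitarian would choose the action with the highest expected utility.
      expected_utility = sum(p * u for p, u in outcomes)
      print(expected_utility)  # 5.0 - 1.25 - 10.0 = -6.25
      ```

      The arithmetic is trivial; the hard part, as the annotation argues, is that the utility column cannot actually be measured.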

  3. social-media-ethics-automation.github.io
    1. Plato. Phaedrus: Translated by Benjamin Jowett. January 2013. Page Version ID: 1189255462.

      In Phaedrus, Plato foresaw this issue: he warned that writing was dangerous because it leaves the written word permanently disconnected from its author. The donkey is an old-fashioned version of that same separation, and bots are the modern extreme of this ancient technique. Plato's objection was not to the technology itself but to the loss of accountability that comes with its use.

    1. In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      This was an unsettling example for me. To realize that a message can travel around the globe without any connection to its creator, and that the carrier (whether a donkey or a bot) does not know the message it carries, leads me to believe the chances of holding someone accountable for a message they have sent out into the world are negligible. I also started to think about how much I have probably interacted with bot-generated information online without ever realizing it. At what point do I begin to feel deceived when interacting with a bot, even when the content itself may be true?

  4. Mar 2026
    1. Kumail Nanjiani was a star of the Silicon Valley [a6] TV Show, which was about the tech industry. He posted these reflections on ethics in tech on Twitter (@kumailn) on November 1, 2017: As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we’ll see tech that is scary. I don’t mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues. And we’ll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don’t even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast. That there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online. Only “Can we do this?” Never “should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end. Kumail Nanjiani

      It is certainly true that technological advancement is happening much faster than society and the law can adapt. Technologies such as deepfakes, bots, and recommendation algorithms can affect millions of people before regulators create guidelines to govern their use. This has led me to believe that companies should develop new technology with an eye toward its ethical implications for users, rather than relying on subsequent laws or regulations to govern how it is used.

    1. Consequentialism (Sources: [b46], [b47]). Actions are judged on the sum total of their consequences (utility calculus); the ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.” That is, what is moral is to do what makes the most people the most happy. Key figures: Jeremy Bentham [b48], 1700s England; John Stuart Mill [b49], 1800s England.

      Utilitarianism is a version of Consequentialism focused on maximizing overall happiness and general well-being. Social media and technology are common areas where utilitarian reasoning is applied, because many companies defend their choices with claims such as "my platform supports/benefits millions." The problem with this reasoning is that it can justify harming a small group of people while benefiting a large one. For example, features designed to drive user engagement could also increase harassment and misinformation.