4 Matching Annotations
  1. Last 7 days
    1. gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm

      The author moved from mitigating the harm of algorithmic systems to the moral standpoint that actively resisting, sabotaging, and ending AI and its attached political projects are valid reactions to harm. So he's moving from monster adaptation / cultural category adaptation to monster slaying, cf. [[Monstertheorie 20030725114320]]. I empathise, but because of the mention of the attached political projects / structures, I also wonder about polarisation in response to monster embracers (there are plenty) shifting the [[Overton window 20201024155353]] towards them.

  2. Jun 2023
      Overview of how technological change brings about moral change. Seems to me a detailed treatment of a specific part of [[Monstertheorie 20030725114320]], where cultural categories are adapted to fit new tech in. #openvraag do the sources contain references to either Monster theory by Smits or the anthropological work of Mary Douglas? Checked: they don't, but they do cite work by PP Verbeek and Marianne Boenink, so no wonder there's a parallel here.

      The first example mentioned points in this direction too: the 1970s redefinition of death as brain death, where it used to be defined by the heart stopping (heart failure is now a cause of death, rather than death itself), was a redefinition of cultural concepts to assimilate technological change. The third example is a direct parallel to my [[Empathie verschuift door Infrastructuur 20080627201224]] and [[Hyperconnected individuen en empathie 20100420223511]]

      Where Monster theory is a tool to understand and diagnose discussions of new tech, in which the assimilation response (both cultural categories and tech get adapted) is the pragmatic route (where the mediation theory of PP Verbeek is located), it doesn't as such provide ways to act or intervene. Does this taxonomy provide agency?

      Or is this another way to locate where moral effects might take place, while the various types of responses to Monsters may still determine the moral effect?

      Zotero antilib Mechanisms of Techno-moral Change

      Via Stephen Downes https://www.downes.ca/post/75320

  3. May 2023
    1. This disconnect between its superhuman intelligence and incompetence is one of the hardest things to reconcile.

      Generative AI as very smart and super incompetent at the same time, which is hard to reconcile. Is this a [[Monstertheorie 20030725114320]]-style challenge to cultural categories? Or is the basic challenged category that of human cognition being replaced?

  4. Jan 2023
    1. In this age of AI, where tech and hype try to steer how we think about “AI” (and by implication, about ourselves and ethics), for monetary gain and hegemonic power (e.g. Dingemanse, 2020; McQuillan, 2022), I believe it is our academic responsibility to resist.

      When hype is used to influence public opinion, there's an obligation to resist. (Cf. [[Crap detection is civic duty 2018010073052]] and [[Progress is civic duty of reflection 20190912114244]]) Also, which realm of [[Monstertheorie 20030725114320]] are we dealing with in this type of response? In the comments on Masto it's partly positioned as monster slaying, but that certainly isn't it. It's a warning against monster embracing. I think the responses fall more under monster adaptation than assimilation, as they aim to retain existing cultural categories while recognising the challenges issued against them. I'm not even sure the actual LLM is the perceived monster, rather than its origins and the intentions and values of the company behind it, which would place it outside the Monster realm entirely.