5 Matching Annotations
  1. Oct 2025
attributing resulting problems to “hallucinations” of the models may allow creators to “blame the AI model for faulty outputs instead of taking responsibility for the outputs themselves”

      I believe it would be beneficial for discourse around AI to move away from terms like "hallucinations," because doing so would force the designers of these LLMs to address the actual problems with their product instead of bullshitting to consumers about the nature of AI.

To lie = df. to make a believed-false statement to another person with the intention that the other person believe that statement to be true

      I find it interesting that people would call AI or LLMs liars. Based on this definition, a person would have to concede that an LLM knew the information it had was false but still presented it as true anyway, which attributes to the LLM a higher level of intelligence and agency than it actually has.

Frankfurt understands bullshit to be characterized not by an intent to deceive but instead by a reckless disregard for the truth.

      Lies acknowledge the truth and aim to obscure it, whereas bullshit throws out the truth altogether.

    4. bullshitting, in the Frankfurtian sense

      This framing challenges how AI misrepresentation is talked about in public discourse. The shift from saying AI “hallucinates” to calling its output “bullshit” is significant because it affects how responsibility and agency are attributed to AI.