attributing resulting problems to “hallucinations” of the models may allow creators to “blame the AI model for faulty outputs instead of taking responsibility for the outputs themselves”
I believe discourse around AI would benefit from moving away from terms like "hallucinations": dropping the term would force the designers of these LLMs to address the actual problems with their products instead of bullshitting consumers about the nature of AI.