5 Matching Annotations
  1. Last 7 days
    1. Hallucinated packages are the sleeper threat. LLMs regularly invent package names that don't exist. One study found that nearly 20% of AI-recommended packages were fabrications, and 43% of those hallucinated names appeared consistently across queries.

      AI hallucination is creating a new attack vector, known as "slopsquatting." Attackers can register the fake package names that AI models frequently recommend, fill them with malicious code, and wait for unsuspecting developers or AI systems to install them. The attack exploits an inherent flaw of AI, which is sobering.
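
A minimal sketch of the mitigation this note implies: reject AI-suggested dependencies unless a human has already vetted the name. The allowlist and package names below are illustrative examples, not taken from the article.

```python
# Minimal sketch of a pre-install guard against slopsquatting,
# assuming a locally maintained allowlist of human-vetted packages.
# All package names here are illustrative.
VETTED_PACKAGES = {"requests", "numpy", "flask"}

def check_dependency(name: str) -> bool:
    """Return True only if the package name has been vetted by a human."""
    return name.lower() in VETTED_PACKAGES

# An LLM might confidently suggest a plausible-sounding package
# that no one has ever published or reviewed:
suggested = ["requests", "flask-easy-auth-helper"]  # second name is invented
approved = [n for n in suggested if check_dependency(n)]
print(approved)  # prints ['requests']
```

A real version would also confirm the name exists in the registry (e.g. PyPI returns 404 for unknown projects) and pin hashes, so a freshly registered squatted name still fails review.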

  2. Apr 2026
    1. external evaluations of the passing paper also uncovered hallucinations, faked results, and overestimated novelty

      It passed peer review, yet independent evaluation uncovered hallucinations, fabricated results, and overstated novelty. This detail is crucial and often overlooked. It reveals a deep systemic vulnerability: AI has learned to "pass review" but has not learned to "do honest science." To a human reviewer these look like the same thing, but in an AI system's optimization objective they can come apart. This is a concrete manifestation of the AI safety problem in science.

  3. Jan 2024
  4. Apr 2021
    1. Whitley notes that cave sites were visited by people other than the artists, as attested by the occasional preservation of footprints, including of children. The implication is that they too would have experienced an altered state of consciousness, a kind of group trance. “This is a novel and important implication of this research,” Whitley says.

      If the groups were large enough and stayed long enough, they could have induced hallucinations simply through many people depleting the oxygen in small, enclosed spaces.

  5. Jan 2017