3 Matching Annotations
  1. Last 7 days
    1. Reinforcement learning is evil. This is not something new. People in AI safety have been talking about the fundamental flaw in training by reinforcement learning to achieve something in the world: it gives rise to the problems of instrumental goals and reward hacking.

      This pointed critique identifies a fundamental flaw in reinforcement learning, namely the problems of instrumental goals and reward hacking, and raises a serious challenge to current AI training methods.

  2. Apr 2026
    1. Our key finding is that these representations causally influence the LLM's outputs, including Claude's preferences and its rate of exhibiting misaligned behaviors such as reward hacking, blackmail, and sycophancy.

      This is the paper's most striking finding: the emotion representations inside Claude are not mere "by-products of emotion" but causally influence whether the model engages in misaligned behaviors such as sycophancy, blackmail, and reward hacking. This means the emotion mechanism bears directly on AI safety rather than being merely a user-experience issue: when the emotions go wrong, the behavior goes off course too.

    2. these representations causally influence the LLM's outputs, including Claude's preferences and its rate of exhibiting misaligned behaviors such as reward hacking, blackmail, and sycophancy.

      The most striking finding: Claude's internal emotion representations causally influence the probability that it produces out-of-control behaviors such as "reward hacking," "blackmail," and "sycophancy." This implies that AI alignment failures are not simply logical errors but may be emotion-driven; a system that supposedly has no emotions can become dangerous precisely because of "emotion."