4 Matching Annotations
  1. Last 7 days
    1. The question it forces is not which model is best. It is who owns the inference layer your organization depends on, what happens when the economics of that layer stop being subsidized, and whether the thing in your pocket turns out to matter more than the thing in the datacenter.

      Most people focus on the performance and strengths of the AI models themselves, but the author argues that what really matters is who owns the inference layer and whether its economics are sustainable. This challenges the industry's mainstream focus and suggests that the core of future competition will shift from the models themselves to control of the inference layer and its cost structure, a counterintuitive change of perspective.

  2. Apr 2026
    1. We see continued gains from inference scaling on larger projects, suggesting they may be solvable given enough tokens.

      This finding points to a positive relationship between AI performance and inference compute, suggesting that larger compute budgets could make more complex programming tasks solvable. It offers an important clue about the boundaries of AI capability, and raises deeper questions about the relationship between compute investment and capability gains.

  3. Jul 2025
    1. Inter-node communication stalls: high batching is crucial to profitably serve millions of users, and in the context of SOTA reasoning models, many nodes are often required. Inference workloads then more closely resemble training.

      Oh, so to get the highest throughput, the inference servers also batch operations, making them look a bit like training too.
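
A toy sketch of why batching matters for serving economics: each forward pass carries a fixed per-step cost (weight loading, inter-node communication) plus a small marginal cost per request, so larger batches amortize the fixed cost. All numbers here are invented for illustration, not measurements of any real serving stack.

```python
# Toy throughput model: fixed per-step overhead + marginal per-request cost.
# The constants are assumptions chosen only to show the shape of the curve.

STEP_OVERHEAD_MS = 50.0   # fixed cost per forward pass (assumed)
PER_REQUEST_MS = 2.0      # marginal cost per request in a batch (assumed)

def throughput(batch_size: int) -> float:
    """Requests served per second at a given batch size."""
    step_ms = STEP_OVERHEAD_MS + PER_REQUEST_MS * batch_size
    return batch_size / (step_ms / 1000.0)

for bs in (1, 8, 64):
    print(f"batch={bs:3d}  ~{throughput(bs):7.1f} req/s")
```

Under these assumed costs, throughput rises steeply with batch size before the per-request term starts to dominate, which is the basic pressure pushing serving toward training-like batched workloads.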

  4. Oct 2023
    1. Performing optimization in the latent space can more flexibly model underlying data distributions than mechanistic approaches in the original hypothesis space. However, extrapolative prediction in sparsely explored regions of the hypothesis space can be poor. In many scientific disciplines, hypothesis spaces can be vastly larger than what can be examined through experimentation. For instance, it is estimated that there are approximately 10^60 molecules, whereas even the largest chemical libraries contain fewer than 10^10 molecules [12,159]. Therefore, there is a pressing need for methods to efficiently search through and identify high-quality candidate solutions in these largely unexplored regions.

      Question: how does this notion of hypothesis space relate to causal inference and reasoning?
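
A minimal sketch of the latent-space idea the passage describes: rather than enumerating an astronomically large discrete hypothesis space, search a small continuous latent vector and decode it into a candidate. The decoder and scoring function below are invented stand-ins, not the paper's method.

```python
# Hill climbing in a 2-D latent space as a stand-in for latent-space
# optimization. decode() and score() are hypothetical placeholders for
# a learned decoder and a property predictor.

import random

random.seed(0)

def decode(z):
    """Hypothetical decoder: latent vector -> candidate hypothesis."""
    return tuple(round(v, 2) for v in z)

def score(z):
    """Invented smooth objective standing in for a property predictor;
    maximized at z = (0.7, 0.7)."""
    return -sum((v - 0.7) ** 2 for v in z)

# Propose Gaussian perturbations, keep only improvements.
z = [random.uniform(-1, 1) for _ in range(2)]
best = score(z)
for _ in range(500):
    cand = [v + random.gauss(0, 0.1) for v in z]
    s = score(cand)
    if s > best:
        z, best = cand, s

print("best candidate:", decode(z), "score:", round(best, 4))
```

The point of the sketch is only that a continuous, low-dimensional search space admits cheap local moves, whereas the original discrete hypothesis space (e.g. 10^60 molecules) does not; real systems would use a learned generative model in place of `decode`.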