3 Matching Annotations
  1. Last 7 days
    1. LLMs aren’t capable of learning on-the-job, so no matter how much we scale, we’ll need some new architecture to enable continual learning. And once we have it, we won’t need a special training phase — the agent will just learn on-the-fly, like all humans, and indeed, like all animals. This new paradigm will render our current approach with LLMs obsolete.

      Richard Sutton on LLM development: (a) the core problem is that LLMs can't learn from use; a different architecture is necessary for continual learning. (b) Once you have continual learning, the current big-bang training phase is no longer useful. Conclusion: the LLM approach is not sustainable — a dead end.