6 Matching Annotations
  1. Jul 2023
    1. Yann LeCun released his vision for the future of Artificial Intelligence research in 2022, and it sounds a lot like Reinforcement Learning.

  2. Jun 2023
    1. the Transformers are not there yet. They will not come up with something that hasn't been there before; they will come up with the best of everything and generatively build a little bit on top of that. But very soon they'll come up with things we've never found out, we've never known
      • difference between
        • ChatGPT (AI)
        • AGI
  3. May 2023
    1. agents learn their behavior,

      Behavior here is experience: information stored in memory and retrieved so that reflection and learning can happen. Does that mean Believable Agents or Generative Agents can essentially become aware of their own existence and potentially begin to question and compare the virtual/internal environment with the external environment?

    2. must have an alignment property

      It is unclear what form the "alignment property" would take, and, more importantly, how such a property would be evaluated, especially if there is an arbitrary divide between "dangerous" and "pre-dangerous" levels of capability and the alignment of the "dangerous" levels cannot actually be measured.

  4. May 2022
  5. Mar 2019
    1. “Meditations on Moloch,”

      Clicked through to the essay. It appears to be mainly an argument for a super-powerful benevolent general artificial intelligence, of the sort proposed by AGI-maximalist Nick Bostrom.

      The money quote:

      The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.

      🔗 This is a great New Yorker profile of Bostrom, where I learned about his views.

      🔗 Here is a good newsy profile from the Economist's magazine on the Google unit DeepMind and its attempt to create artificial general intelligence.