14 Matching Annotations
  1. Jun 2024
    1. it is strikingly plausible that by 2027 models will be able to do the work of an AI researcher/engineer. That doesn't require believing in sci-fi; it just requires believing in straight lines on a graph

      for - quote - AI prediction for 2027 - Leopold Aschenbrenner

      quote - AI prediction for 2027 - Leopold Aschenbrenner - (see quote below) - it is strikingly plausible that by 2027 - models will be able to do the work of an AI researcher/engineer - that doesn't require believing in sci-fi - it just requires believing in straight lines on a graph

    1. I think that Noam Chomsky said, around a year ago in the New York Times, that generative AI is not any intelligence; it's just plagiarism software that learned to steal human work, transform it, and sell it as much as possible, as cheap as possible

      for - AI music theft - citation - Noam Chomsky - quote - Noam Chomsky - AI as plagiarism on a grand scale

      to - P2P Foundation - commons transition plan - Michel Bauwens - netarchical capitalism - predatory capitalism - https://wiki.p2pfoundation.net/Commons_Transition_Plan#Solving_the_value_crisis_through_a_social_knowledge_economy

  2. Dec 2023
    1. it's extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control and start making decisions and creating new things which will harm us instead of benefit us. Now, this is not a doomsday prophecy; this is not inevitable. We can find ways to regulate and control the development and deployment of AI
      • for: quote - Yuval Noah Harari - AI progress trap, progress trap - AI, quote - progress trap

      • quote: it is extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control and start making decisions and creating new things which will harm us instead of benefit us

      • author: Yuval Noah Harari
      • date: 2023
    1. I think the most dangerous thing about AI is not super smart AI, it's stupid AI: artificial intelligence that is good enough to be put in charge of certain processes in our societies but not good enough to not make really bad mistakes
      • for: quote - Thomas Homer-Dixon, quote - danger of AI, AI progress trap

      • quote: danger of AI

        • I think the most dangerous thing about AI is not super smart AI, it's stupid AI that is good enough to be put in charge of certain processes but not good enough to not make really bad mistakes
      • author: Thomas Homer-Dixon
      • date: 2021
  3. Sep 2023
    1. the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist. To see that, it is useful to consider what it might be like to have the freedom to control what thought one had next.
      • for: quote, quote - Michael Levin, quote - self as control agent, self - control agent, example, example - control agent - imperfection, spontaneous thought, spontaneous action, creativity - spontaneity
      • quote: Michael Levin

        • the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist.
      • comment

        • adjacency between
          • nondual awareness
          • self-construct
          • self is illusion
          • singular, solid, enduring control agent
        • adjacency statement
          • nondual awareness is the deep insight that there is no solid, singular, enduring control agent.
          • creativity is unpredictable and spontaneous and would not be possible if there were perfect control
      • example - control agent - imperfection: start - the unpredictability of the real-time emergence of our next exact thought or action is a good example of this
      • example - control agent - imperfection: end

      • triggered insight: not only are thoughts and actions random, but dreams as well

        • I dreamt the night after this about something related to this paper (cannot remember what it is now!)
        • Obviously, I had no clue the idea in this paper would end up exactly as it did in the next night's dream!
  4. Jul 2023
    1. BY 2029, ARTIFICIALLY INTELLIGENT MACHINES WILL SURPASS HUMAN INTELLIGENCE. BY 2049, AI IS PREDICTED TO BE A BILLION TIMES MORE INTELLIGENT THAN US
      • quote
        • 2029 - AI will surpass human intelligence
        • 2049 - AI will be one billion X more intelligent than us
  5. Jun 2023
    1. Scary Smart is saying: the problem with our world today is not that humanity is bad; the problem with our world today is a negativity bias, where the worst of us are on mainstream media, and we show the worst of us on social media
      • "if we reverse this

        • if we have the best of us take charge
        • the best of us will tell AI
          • don't try to kill the enemy,
            • try to reconcile with the enemy
          • don't try to create a competitive product
            • that allows me to lead with electric cars,
              • create something that helps all of us overcome global climate change
          • that's the interesting bit
            • the actual threat ahead of us is
              • not the machines at all
                • the machines are pure potential
              • the threat is how we're going to use them"
      • comment

        • again, see Ronald Wright's quote above
        • it's very salient to this context
    2. the biggest threat facing humanity today is humanity in the age of the machines. We were abused; we will abuse this
    3. if we give up on human connection, we've given up on the remainder of humanity
      • quote
        • "If we give up on human connection, we give up on the remainder of humanity"
    4. with great power comes great responsibility. We have disconnected power and responsibility
      • quote
        • "with great power comes great responsibility. We have disconnected power and responsibility."
          • "With great power comes great responsibility
          • We have disconnected power and responsibility
          • so today a 15 year old,
            • emotional, without a fully developed prefrontal cortex to make the right decisions yet
            • (this is science: we only develop our prefrontal cortex fully at age 25 or so)
            • with all of that limbic system emotion and passion
            • would buy a CRISPR kit and modify a rabbit to become a little more muscular and
            • let it loose in the wild
          • or an influencer who doesn't really know how far the impact of what they're posting online can go
            • it can hurt and cause depression or
            • cause people to feel bad
        • There is a disconnect between the power and the responsibility and
        • the problem we have today is that
          • there is a disconnect between those who are writing the code of AI and
          • the responsibility of what's about to happen because of that code and
          • I feel compassion for the rest of the world
          • I feel that this is wrong
          • I feel that for someone's life to be affected by the actions of others
            • without having a say"
    5. the biggest challenge, if you ask me what went wrong in the 20th century, interestingly, is that we have given too much power to people that didn't assume the responsibility
      • quote
        • "what went wrong in the 20th century is that we have given too much power to people that didn't assume the responsbility"
    6. this is an arms race that has no interest in what the average human gets out of it
      • quote
        • "this is an arms race"
  6. Apr 2023
    1. Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

      Quote - AI Gedanken - AI risk - The Paperclip Maximizer

  7. Sep 2020