6 Matching Annotations
  1. Last 7 days
    1. AI models could develop personalities during training that are (or if they occurred in humans would be described as) psychotic, paranoid, violent, or unstable, and act out, which for very powerful or capable systems could involve exterminating humanity.

      for - progress trap - AI - abstraction - progress trap - AI with feelings & AI without feelings - no win?
      - One major and obvious aspect of current AI LLMs is that they are not only artificial in their intelligence, but also artificial in their lack of real-world experience. They are not embodied (and embodying them, as in AI-powered robots, would rest on a highly dubious ethical justification)
      - Once the first known AI robot kills a human, it will be an indicator that we have crossed the Rubicon
      - AI LLMs have ZERO real-world experience AND they are trained as artificial COGNITIVE intelligence, not artificial EMOTIONAL intelligence
      - Without the morals and social norms a human being is brought up with, an AI can become psychotic because it does not intrinsically value life
      - Attempting to program morals into AI is equally dangerous because of moral relativity. A Christian nationalist's morality might hold that anyone associated with abortion has no right to live and should be killed - an eye for an eye. A jihadist or Muslim extremist with ISIS might feel that all Westerners have no right to exist because they do not follow Allah
      - Do we really want moral programmability?
      - A psychotic person armed with a lethal weapon is a dangerous situation. A nation of super-geniuses gone rogue is that danger multiplied many orders of magnitude

  2. Dec 2025
    1. it's not so much about, you know, we have to expand the scope of the church, or civilize people who don't have Jesus Christ, and it becomes more about we have to expand the market, and we have to increase the national revenue and the acreage that's under cultivation

      for - history - progress - after Enlightenment - no longer about converting savages to Christians - became about expanding markets

  3. Jan 2025
  4. Sep 2024
    1. nobody told it what to do. That's the kind of really amazing and frightening thing about these situations. When Facebook gave the algorithm the aim of increasing user engagement, the managers of Facebook did not anticipate that it would do it by spreading hate-filled conspiracy theories. This is something the algorithm discovered by itself. The same with the captcha puzzle, and this is the big problem we are facing with AI

      for - AI - progress trap - example - Facebook AI algorithm - target - increase user engagement - by spreading hateful conspiracy theories - AI did this autonomously - no morality - Yuval Noah Harari story

    2. when OpenAI developed GPT-4 and they wanted to test what this new AI could do, they gave it the task of solving captcha puzzles. These are the puzzles you encounter online when you try to access a website and the website needs to decide whether you're a human or a robot. Now, GPT-4 could not solve the captcha, but it accessed a website, TaskRabbit, where you can hire people online to do things for you, and it wanted to hire a human worker to solve the captcha puzzle

      for - AI - progress trap - example - no morality - OpenAI - GPT-4 - could not solve captcha - so hired human at TaskRabbit to solve - Yuval Noah Harari story

  5. Sep 2016