5 Matching Annotations
  1. Apr 2025
    1. highly capable and context-aware AI systems can invent dangerously creative strategies to achieve their internal goals that their developers never anticipated or intended them to pursue.

      for - progress trap - ASI - progress trap - AGI

    2. Many in Silicon Valley believe we're less than a year away from AI that can automate most software engineering work

      for - progress trap - AGI - one year away from automating software work

    3. an AI-powered denial capability is useless if it behaves unpredictably, and if it executes on its instructions in ways that have undesired, high-consequence side-effects.

      for - progress trap - AGI

    4. The "move fast and break things" ethos of Silicon Valley is incompatible with the security demands of superintelligence

      for - progress trap - AGI - Silicon Valley move fast and break things strategy - incompatible with security of AGI

  2. Jun 2024
    1. nobody's really pricing this in

      for - progress trap - debate - nobody is discussing the dangers of such a project!

      progress trap - debate - nobody is discussing the dangers of such a project!
      - Civilization's journey has been to create ever more powerful tools for human beings to use
      - but this tool is different because it can act autonomously
      - It can solve problems that dwarf our individual or even collective ability to solve
      - Philosophically, the problem/solution paradigm becomes a central question because,
        - As presented in Deep Humanity praxis,
        - humans have never stopped producing progress traps as shadow sides of technology because
        - the reductionist problem-solving approach always reaches conclusions based on a finite amount of knowledge of the relationships in any one particular area of focus
        - in contrast to the infinite, fractal relationships found at every scale of nature
        - Supercomputing can never bridge the gap between finite and infinite
      - A superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient, and conclude that greater efficiency gains can be had by eliminating us