- Jun 2024
-
-
you're going to have like 100 million more AI researchers and they're going to be working at 100 times what you are
for - stats - comparison of cognitive powers - AGI AI agents vs human researcher
stats - comparison of cognitive powers - AGI AI agents vs human researcher
- 100 million AGI AI researchers
- each AGI AI researcher is 100x more efficient than its equivalent human AI researcher
- total productivity increase = 100 million x 100 = 10 billion human AI researchers! Wow! (see the arithmetic sketch below)
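A quick check of that arithmetic, as a minimal sketch; both inputs are the speaker's hypothetical figures, not measurements:

```python
# Toy arithmetic behind the AGI-vs-human researcher comparison.
# Both figures are the speaker's hypotheticals, not measured values.
agi_researchers = 100_000_000   # claimed count of AGI AI researchers
speedup = 100                   # claimed efficiency multiple vs. one human

human_equivalents = agi_researchers * speedup
print(f"{human_equivalents:,} human-researcher equivalents")
# prints: 10,000,000,000 human-researcher equivalents, i.e. 10 billion
```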
-
nobody's really pricing this in
for - progress trap - debate - nobody is discussing the dangers of such a project!
progress trap - debate - nobody is discussing the dangers of such a project!
- Civilization's journey has been to create more and more powerful tools for human beings to use
- but this tool is different, because it can act autonomously
- it can solve problems that dwarf our individual or even group ability to solve
- philosophically, the problem/solution paradigm becomes a central question because, as presented in Deep Humanity praxis, humans have never stopped producing progress traps as shadow sides of technology:
  - the reductionist problem-solving approach always reaches conclusions based on a finite amount of knowledge of the relationships within any one particular area of focus
  - in contrast to the infinite, fractal relationships found at every scale of nature
  - supercomputing can never bridge the gap between finite and infinite
- a superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient, and in fact greater efficiency gains can be had by eliminating us
-
Sam Altman has said that's his entire goal; that's what OpenAI is trying to build. They're not really trying to build superintelligence, but they define AGI as a system that can do automated AI research, and once that does occur
for - key insight - AGI as automated AI researchers to create superintelligence
key insight - AGI as automated AI researchers to create superintelligence
- We will reach a period of explosive, exponential AI research growth once AGI has been produced
- The key is to deploy AGIs as AI researchers that can do AI research 24/7
- 5,000 such AGI research agents could produce superintelligence in a very short time period (years), because every time any one of them makes a breakthrough, it is immediately sent to all 4,999 other AGI researchers (see the toy model below)
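A deterministic toy model of why that broadcast step matters; a sketch only, where the per-agent breakthrough probability and the productivity boost per shared breakthrough are my illustrative assumptions, not figures from the talk:

```python
# Toy model: N AGI research agents working in parallel, 24/7.
# Every breakthrough is instantly shared with all other agents,
# compounding everyone's daily breakthrough rate.
N = 5_000        # AGI research agents (from the note above)
p = 1e-4         # assumed per-agent daily breakthrough probability
boost = 1.002    # assumed rate multiplier per shared breakthrough

rate, total = p, 0.0
for day in range(365):
    expected_hits = N * rate          # breakthroughs expected today
    total += expected_hits
    rate *= boost ** expected_hits    # broadcast: every hit lifts all agents

print(f"expected breakthroughs in year one: {total:.0f}")
print(f"per-agent daily rate grew {rate / p:.2f}x")
```

Without the broadcast line, `total` would stay near N * p * 365; the compounding term is what produces the superlinear growth the note describes.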
-
if this scale-up doesn't get us to AGI in the next 5 to 10 years, it might be a long way out
for - key insight - AGI in next 5 to 10 years or bust
key insight - AGI in next 5 to 10 years or bust
- As we start approaching billion-, hundred-billion- and trillion-dollar clusters (see the toy cost projection below), hardware improvements will slow down due to
  - cost
  - ecological impact
  - Moore's Law limits
- If AGI doesn't emerge by then, we will need a major breakthrough in
  - architecture, or
  - algorithms
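A back-of-the-envelope projection of that cluster-cost trajectory; a sketch under assumed numbers, since the $1B starting point, the 10x jump per generation, and the two-year cadence are illustrative choices, not figures from the talk:

```python
# Toy projection: training-cluster cost per generation, assuming a
# 10x cost jump every ~2 years. All constants are illustrative.
cost, year = 1e9, 2024
while cost <= 1e12:
    print(f"{year}: ${cost:,.0f} cluster")
    cost *= 10
    year += 2
# Beyond the trillion-dollar scale, further 10x jumps collide with the
# cost, ecological, and Moore's-Law limits listed above -- the "or bust".
```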
Tags
- key insight - AGI as automated AI researchers to create superintelligence
- key insight - AGI in next 5 to 10 years or bust
- progress trap - debate - nobody is discussing the dangers of such a project!
- stats - comparison of cognitive powers - AGI AI agents vs human researcher
-
- Jul 2023
-
openreview.net
-
Yann LeCun released his vision for the future of Artificial Intelligence research in 2022, and it sounds a lot like Reinforcement Learning.
-
- Jun 2023
-
docdrop.org
-
the Transformers are not there yet; they will not come up with something that hasn't been there before. They will come up with the best of everything and generatively build a little bit on top of that, but very soon they'll come up with things we've never found out, we've never known
- difference between
  - ChatGPT (AI)
  - AGI
-
- May 2023
-
arxiv.org
-
agents learn their behavior,
Behavior here is experience: information that is stored in memory and retrieved so that reflection and learning can happen. Does that mean Believable Agents or Generative Agents can essentially become aware of their own existence and potentially begin to question and compare the virtual/internal environment with the external environment?
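A minimal sketch of the memory-stream idea behind such agents: experiences are stored with a timestamp and an importance rating, then retrieved by a combined recency-plus-importance score for reflection. The paper also uses embedding-based relevance, omitted here; all names, weights, and the decay rate below are my illustrative choices:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float                   # e.g. 1-10, rated at storage time
    t: float = field(default_factory=time.time)

def retrieve(memories, k=3, decay=0.995):
    """Return the k memories with the best recency + importance score."""
    now = time.time()
    def score(m):
        recency = decay ** ((now - m.t) / 60)   # decays per elapsed minute
        return recency + m.importance / 10
    return sorted(memories, key=score, reverse=True)[:k]

stream = [
    Memory("saw the kitchen was on fire", importance=10),
    Memory("ate breakfast", importance=2),
    Memory("chatted with a neighbour", importance=4),
]
for m in retrieve(stream, k=2):
    print(m.text)   # high-importance, recent experiences surface first
```

Reflection, in this framing, would be a step that reads the retrieved memories back and writes new, higher-level memories into the same stream; whether that loop amounts to "awareness" is exactly the question the note raises.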
-
-
www.lesswrong.com
-
must have an alignment property
It is unclear what form the "alignment property" would take and, most importantly, how such a property would be evaluated, especially if there's an arbitrary divide between "dangerous" and "pre-dangerous" levels of capability, and alignment at the "dangerous" levels cannot actually be measured.
-
- May 2022
-
link.springer.com
-
Interesting-sounding, high-level paper about the limits and constraints on general intelligence, and how these might relate to the struggles AI/ML research has had historically.
-
- Mar 2019
-
decryptmedia.com
-
“Meditations on Moloch,”
Clicked through to the essay. It appears to be mainly an argument for a super-powerful benevolent general artificial intelligence, of the sort proposed by AGI-maximalist Nick Bostrom.
The money quote:
The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.
🔗 This is a great New Yorker profile of Bostrom, where I learned about his views.
🔗 Here is a good newsy profile from the Economist's magazine on the Google unit DeepMind and its attempt to create artificial general intelligence.
-