21 Matching Annotations
  1. Dec 2025
    1. We can expect that AI will lead to improvements in technologies that slow or prevent climate change, from atmospheric carbon-removal and clean energy technology to lab-grown meat that reduces our reliance on carbon-intensive factory farming.

      It is ironic that AI is being used to mitigate climate change while we are poisoning the air and water and creating such a devastating environmental impact by using AI for very mundane tasks. I feel like this argument reveals the author's true motivation for writing the article.

    2. I think of the issue as having two parts: international conflict, and the internal structure of nations. On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.

      Using AI to govern will be extremely difficult and will require a lot of thought to make sure everything goes correctly.

    3. Nevertheless, it is a thing of transcendent beauty. We have the opportunity to play some small role in making it real.

      I found this to be an interesting take on what AI can do, but at the end of the day it was still a sales pitch to investors about what AI can do for the future and why they should invest in it.

    4. Both AI companies and developed world policymakers will need to do their part to ensure that the developing world is not left out; the moral imperative is too great.

      Developing countries can be left out of the benefits of AI because they simply don't have the resources to utilize AI.

    5. Advanced computational neuroscience. As noted above, both the specific insights and the gestalt of modern AI can probably be applied fruitfully to questions in systems neuroscience, including perhaps uncovering the real causes and dynamics of complex diseases like psychosis or mood disorders

      AI can give a unique perspective to questions we have been dealing with for a while. This could also be used elsewhere and benefit different industries.

    6. and resisting the temptation to rely on natural resource wealth); it’s plausible that “AI finance ministers and central bankers” could replicate or exceed this 10% accomplishmen

      He claims here that AI could replicate this, but I don't see how AI could do the same, or whether people would trust AI to make such massive decisions. It seems almost as if he's promoting AI as the solution to everything.

    7. Given all this, many biologists have long been skeptical of the value of AI and “big data” more generally in biology

      AI may not be so useful in the fields of biology yet, but its best uses may simply not have been found, or the technology may not be good enough to be used just yet.

    8. data is often lacking—not so much in quantity, but quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on, or that intervenes causally in a given process, or that directly measures some effect (as opposed to inferring its consequences in some indirect or noisy way)

      Modern biology has massive datasets, but AI progress is limited by noisy, ambiguous, or confounded data. This shows that adding more data is not the solution; you need data that is well understood and can produce a positive effect when ingested by AI.

    9. Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute)10

      Over time, these systems may innovate new methods that reduce current bottlenecks. This may be done through new experiments, new jurisdictions, or new data-gathering paradigms.

    10. Physical laws. This is a starker version of the first point. There are certain physical laws that appear to be unbreakable

      AI can't solve things that are unsolvable (obviously), which means we can't create something out of nothing. It also means that AI is still restricted and held to the same rules we are.

    11. Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help.

      Intelligence cannot substitute for missing empirical evidence—a crucial limitation for subjects like particle physics, biology, etc.

    12. Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn

      Physical processes impose unavoidable latency; no amount of intelligence can culture cells or grow animals faster than biology allows.

    13. I believe that in the AI age, we should be talking about the marginal returns to intelligence77 The closest economics work that I’m aware of to tackling this question is work on “general purpose technologies” and “intangible investments” that serve as complements to general purpose technologies., and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high.

      The author introduces a powerful economic lens: intelligence as a production factor whose returns can be quantified, helping predict where AI will accelerate progress and where it won’t.

    14. Second, and conversely, you might believe that technological progress is saturated or rate-limited by

      This frames a core debate: whether intelligence alone can accelerate progress or whether external constraints—data, society, physical time—will always bottleneck innovation.

    15. In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on

      We are quite close to this capability, if we have not already achieved it.

    16. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

      This is what I feel current AI cannot do and won't be able to do.

    17. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that gro

      It seems almost like we should create a committee or standard to guide the development of AI.

    18. as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation.

      I feel like this is something that is frequently overlooked when we hear from technology leaders like Bill Gates, Zuckerberg, Altman, etc.

    19. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.

      Take everything with a grain of salt and research who is supporting these articles and papers.

    20. The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces.

      Benefits come naturally and are not controlled by the creators. There may also be benefits that couldn't have been predicted.

    21. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

      People need to understand that all new technology has inherent risks, and if we want to benefit from it positively we need to adequately prepare ourselves.