29 Matching Annotations
  1. Apr 2026
    1. From the time I began work on AI in 2010 to now, the amount of training data that goes into frontier AI models has grown by a staggering 1 trillion times—from roughly 10¹⁴ flops for early systems to over 10²⁶ flops for today's largest models.

      Striking: the growth of AI training compute is hard to believe. From 2010 to 2026, the compute used to train AI models grew by a factor of 1 trillion, an astronomical increase far beyond what most people imagine. This exponential growth is the core driver of AI development and the reason AI progress has been so rapid.

    2. From the time I began work on AI in 2010 to now, the amount of training data that goes into frontier AI models has grown by a staggering 1 trillion times—from roughly 10¹⁴ flops for early systems to over 10²⁶ flops for today's largest models.

      Striking: AI training compute grew 1 trillion-fold in just 16 years, an almost unimaginable exponential increase. This explosion in computing power defies human intuition, which explains why AI progress is so rapid and so hard to predict. Most people cannot truly grasp what exponential growth of this kind means, and that is also why so many experts' forecasts of AI's pace have failed.

    1. the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read.

      [Insight] A "compounding knowledge asset": this concept changes the economics of knowledge work. The value of a traditional note system grows linearly with the number of entries, while the value of an LLM Wiki compounds with every ingest, because each new piece of content updates all related pages, flags contradictions, and strengthens the synthesis. The lesson for personal knowledge management: the real knowledge moat is not how much you have read but how deeply the pieces of knowledge are connected, and maintaining those connections is exactly what AI is good at.

    1. GPT-3.5 — the model that powered the original ChatGPT — could complete tasks that took a human programmer about 30 seconds.

      From GPT-3.5's 30 seconds to Claude Opus 4.6's 12 hours is a 1,440-fold increase in two years. From GPT-2 to GPT-5, task difficulty grew 5,400-fold. This pace of progress has almost no precedent in the history of human technology: the Industrial Revolution took a century to raise labor efficiency by factors of tens, while AI achieved a thousands-fold gain in something like "cognitive efficiency" within five years. What is unsettling is that the curve currently shows no sign of slowing.
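
      A quick arithmetic check of the factor quoted above (my calculation; the 12-hour figure is taken from the annotation, not independently verified):

```python
# Ratio of task lengths: 30-second tasks (GPT-3.5) vs 12-hour tasks.
seconds_gpt35 = 30
seconds_now = 12 * 3600  # 12 hours in seconds
growth = seconds_now / seconds_gpt35
print(growth)  # 1440.0
```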

    1. a logistic curve is a poor fit because we haven't seen any evidence of the exponential growth in time horizon slowing down.

      METR is explicit: as of early 2026, the exponential growth in time horizon shows no sign of slowing, which means the "saturation phase" of the S-curve has not yet arrived. Skeptics of AI progress often invoke the argument that "progress will decelerate," but this data point directly challenges that narrative. Continued exponential growth means that, every fixed interval, the complexity of tasks AI can complete independently doubles; based on historical data, that doubling period is roughly 6-7 months.

    1. Global AI computing capacity is doubling every 7 months

      Related research from Epoch AI shows global AI compute doubling every 7 months, more than 3 times faster than Moore's law (18-24 months). At that pace, Google's 25% market share today implies: if competitors cannot keep up with this rate of expansion, the compute gap will not shrink but widen exponentially. The compute race is approaching a winner-take-all tipping point.
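
      To make the gap concrete, a minimal sketch comparing a 7-month doubling time with a Moore's-law doubling over a 5-year window (the 5-year window and the 21-month Moore's-law midpoint are my assumptions, not from the source):

```python
# Growth factor after `months` under a fixed doubling time.
def growth_factor(months, doubling_months):
    return 2 ** (months / doubling_months)

ai_growth = growth_factor(60, 7)      # AI compute, 7-month doubling
moore_growth = growth_factor(60, 21)  # Moore's law, ~21-month doubling
# AI compute grows roughly 380x in 5 years vs roughly 7x under Moore's law.
print(round(ai_growth), round(moore_growth))
```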

  2. Dec 2025
  3. Sep 2024
    1. This trend has been borne out historically: before the deep learning era, the amount of compute used by AI models doubled in about 21.3 months; since deep learning as a paradigm took hold around 2010, the amount of compute used by models started doubling in only 5.7 months. Since 2015, however, trends in compute growth have split into two: the amount of compute used in large-scale models has been doubling in roughly 9.9 months, while the amount of compute used in regular-scale models has been doubling in only about 5.7 months

      If something is doubling faster in small models, how long before they overtake the larger models? I can’t do the maths in my head.
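
      The annotator's question has a closed-form answer: if large-scale models start with a head start of R times more compute, and the doubling times are 9.9 and 5.7 months, the crossover time t solves 2^(t/5.7) = R * 2^(t/9.9). A sketch (the head start R = 1000 is purely illustrative):

```python
import math

def crossover_months(head_start_ratio, fast=5.7, slow=9.9):
    # Solve 2**(t/fast) = head_start_ratio * 2**(t/slow) for t.
    return math.log2(head_start_ratio) / (1 / fast - 1 / slow)

t = crossover_months(1000)
print(round(t))  # 134 months, i.e. roughly 11 years
```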

  4. Jun 2024
    1. nobody's really pricing this in

      for - progress trap - debate - nobody is discussing the dangers of such a project!

      progress trap - debate - nobody is discussing the dangers of such a project!
      - Civilization's journey has been to create more and more powerful tools for human beings to use
      - but this tool is different because it can act autonomously
      - It can solve problems that will dwarf our individual or even group ability to solve
      - Philosophically, the problem / solution paradigm becomes a central question because,
        - as presented in Deep Humanity praxis,
        - humans have never stopped producing progress traps as shadow sides of technology because
        - the reductionist problem-solving approach always reaches conclusions based on a finite amount of knowledge of the relationships of any one particular area of focus
        - in contrast to the infinite, fractal relationships found at every scale of nature
      - Supercomputing can never bridge the gap between finite and infinite
      - A superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient and that, in fact, greater efficiency gains can be had by eliminating us

  5. Feb 2024
  6. Aug 2023
  7. Jun 2023
  8. Mar 2023
  9. Sep 2022
    1. The frequency factor, which determines the maximum rate of collisions, is a function of particle size, concentration, and the rate of diffusion. The steric factor accounts for orientation, in that not all collisions have the correct orientation to result in a reaction (see Video 14.6.2). The rule of thumb is that the more symmetric a molecule, the larger the steric factor (a value of 1 means there is no effect, and the pre-exponential is determined by the collision frequency), and the more complicated a molecule, the smaller the steric factor (which is less than one), because only a fraction of the collisions have the correct orientation.
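
      The passage describes the modified Arrhenius form k = pZ·exp(-Ea/RT), with Z the collision frequency and p the steric factor. A minimal sketch (the numeric values below are illustrative, not from the source):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(p, Z, Ea, T):
    # k = p * Z * exp(-Ea / (R*T)): the steric factor scales the
    # collision-frequency pre-exponential linearly.
    return p * Z * math.exp(-Ea / (R * T))

k_symmetric = rate_constant(p=1.0, Z=1e11, Ea=50_000, T=298)
k_complex = rate_constant(p=0.5, Z=1e11, Ea=50_000, T=298)
print(k_complex / k_symmetric)  # 0.5: halving p halves k
```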
  10. Feb 2022
    1. Trevor Bedford. (2022, January 28). Omicron viruses can be divided into two major groups, referred to as PANGO lineages BA.1 and BA.2 or @nextstrain clades 21K and 21L. The vast majority of globally sequenced Omicron have been 21K (~630k) compared a small minority of 21L (~18k), but 21L is gaining ground. 1/15 [Tweet]. @trvrb. https://twitter.com/trvrb/status/1487105396879679488

  11. Oct 2021
  12. Sep 2021
    1. Since about 70% of water delivered from the Colorado River goes to growing crops, not to people in cities, the next step will likely be to demand large-scale reductions for farmers and ranchers across millions of acres of land, forcing wrenching choices about which crops to grow and for whom — an omen that many of America’s food-generating regions might ultimately have to shift someplace else as the climate warms.

      Deep Concept: In the 1960s and '70s, the US Government provided a crystal-ball glimpse into the future by defining climate change (man-made global warming) as a national security concern. Various reports warned of "exponential" growth (population) and related man-made factors (technology, etc.) that would contribute to climate change, and specifically discussed the possibility of irreconcilable damage to "finite" natural resources.

  13. Jul 2021
  14. Jun 2021
    1. John Burn-Murdoch. (2021, January 7). Doctors & nurses do amazing, stressful work reallocating beds to squeeze Covid patients into, but a) those beds are taken away from other patients who risk losing treatment for other illness & injury, and b) when numbers get high enough, there simply aren’t any more beds or staff [Tweet]. @jburnmurdoch. https://twitter.com/jburnmurdoch/status/1347200868014297093

  15. Oct 2020
  16. Aug 2020
  17. Jun 2020
  18. May 2020
  19. Dec 2019
    1. So if you create one backup per night, for example with a cronjob, then this retention policy gives you 512 days of retention. This is useful, but it can require too much disk space, which is why we have included a non-linear distribution policy. In short, we keep only the oldest backup in the range 257-512, and also in the range 129-256, and so on. This exponential distribution in time of the backups retains more backups in the short term and fewer in the long term; it keeps only 10 or 11 backups but spans a retention of 257-512 days.
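
      The retention scheme described above can be sketched as: keep the oldest backup in each power-of-two age range (0,1], (1,2], (2,4], ..., (256,512]. The function name and the ages-in-days representation are my own choices, not the tool's actual interface:

```python
def retention_plan(ages, max_age=512):
    # Keep only the oldest backup in each power-of-two age range.
    keep = []
    lo, hi = 0, 1
    while lo < max_age:
        in_range = [a for a in ages if lo < a <= hi]
        if in_range:
            keep.append(max(in_range))  # oldest backup in this range
        lo, hi = hi, hi * 2
    return sorted(keep)

# With one nightly backup for 512 days, only 10 backups are retained:
plan = retention_plan(range(1, 513))
print(plan)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```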