- Feb 2025
-
-
Detailed explanation of what the DeepSeek model is doing differently to improve performance and training time over ChatGPT.
-
- Jan 2025
-
www.robotscooking.com www.robotscooking.com
-
This monetization strategy highlights a potential risk of the PRC model being co-opted by commercial publishers to prioritize profit over the principles of openness and accessibility. By charging for submissions, even those that do not proceed to full curation or publication, journals could exploit the publish-first ethos of PRC for financial gain, further complicating the already high cost of academic publishing for researchers and causing even more inequities.
Good to see this focus on the way money flows in the scholarly communications system... I wonder if it might be brought up earlier as a concern, to be more fully addressed in its proper place as it is here...
-
- Dec 2024
-
-
There still seems to be a gap in the data that doesn't account for roughly 0.2 degrees Celsius of extra warming that is present. For several years now, scientists have not been able to comfortably explain this bit of extra global warming; it remains a major gap.
for - stats - climate crisis - global mean temperature gap in models vs measurement of - 0.2 Deg C - from The Print - YouTube - Low clouds disappearing over Earth, rapidly accelerating heating - 2024, Dec
-
-
www.youtube.com www.youtube.com
-
when you want to use Google, you go into Google search, and you type in English, and it matches the English with the English. What if we could do this in FreeSpeech instead? I have a suspicion that if we did this, we'd find that algorithms like searching, like retrieval, all of these things, are much simpler and also more effective, because they don't process the data structure of speech. Instead they're processing the data structure of thought
for - indyweb dev - question - alternative to AI Large Language Models? - Is indyweb functionality the same as Freespeech functionality? - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan - data structure of thought - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan
Tags
- indyweb dev - question - alternative to AI Large Language Models? - Is indyweb functionality the same as Freespeech functionality? - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan
- data structure of thought - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan
Annotators
URL
-
-
www.pnas.org www.pnas.org
-
Across all global land area, models underestimate positive trends exceeding 0.5 °C per decade in widening of the upper tail of extreme surface temperature distributions by a factor of four compared to reanalysis data and exhibit a lower fraction of significantly increasing trends overall.
for - question - climate crisis - climate models underestimate warming in some areas up to 4x - what is the REAL carbon budget if adjusted to the real situation?
question - climate crisis - climate models underestimate warming in some areas up to 4x - What is the REAL carbon budget if adjusted to the real situation? - If we have even less than 5 years remaining in our carbon budget, then how many years do we actually have to stay within 1.5 Deg C?
-
for - climate model - gaps - from - post - LinkedIn - Reality vs Climate Models - Kasper Benjamin Reimer Bjørkskov
from - post - LinkedIn - Reality vs Climate Models - Kasper Benjamin Reimer Bjørkskov - https://hyp.is/Dc_w8rM2Ee-I0VO9JwZKNg/www.linkedin.com/feed/update/urn:li:activity:7270232384455720960/?commentUrn=urn:li:comment:(activity:7270232384455720960,7270500962702655489)&dashCommentUrn=urn:li:fsd_comment:(7270500962702655489,urn:li:activity:7270232384455720960)
Tags
- climate model - gaps
- question - climate crisis - climate models underestimate warming in some areas up to 4x - what is the REAL carbon budget if adjusted to the real situation?
- from - post - LinkedIn - Reality vs Climate Models - Kasper Benjamin Reimer Bjørkskov
Annotators
URL
-
-
www.linkedin.com www.linkedin.com
-
for - climate crisis - paper - models are underestimating by up to 4x - to - paper - Global emergence of regional heatwave hotspots outpaces climate model simulations Kornhuber et al, 2024 - https://hyp.is/9cS36LMtEe-2oL8C4AgQOQ/www.pnas.org/doi/10.1073/pnas.2411258121
-
If we are underestimating, then does that mean our carbon budget figures are too high and we don't have 5 years of carbon budget remaining at the BAU rate, but significantly less?
for - climate crisis - models are underestimating as much as 4x - question - does our current remaining carbon budget of 5 years BAU need to be reduced?
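A back-of-envelope sketch of the arithmetic behind this question. The only figure taken from the notes above is the "5 years at BAU" framing; the ~40 GtCO2/yr emission rate and the fractional budget reductions are illustrative assumptions, not values from the cited papers.

```python
# Rough arithmetic only; the 40 GtCO2/yr rate and the shrink factors are
# illustrative assumptions, not figures from the cited papers.
annual_emissions_gt = 40                  # assumed approximate global CO2 emissions per year
years_remaining_bau = 5                   # "5 years at business-as-usual" from the note above
budget_gt = annual_emissions_gt * years_remaining_bau   # implied budget of ~200 GtCO2

for shrink in (0.75, 0.5, 0.25):          # hypothetical fractions of the budget that remain
    years = budget_gt * shrink / annual_emissions_gt
    print(f"if only {shrink:.0%} of the budget is really usable: ~{years:.1f} years at BAU")
```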
Tags
- to - paper - Global emergence of regional heatwave hotspots outpaces climate model simulations Kornhuber et al, 2024
- climate crisis - models are underestimating as much as 4x - question - does our current remaining carbon budget of 5 years BAU need to be reduced?
- climate crisis - paper - models are underestimating by up to 4x
Annotators
URL
-
- Nov 2024
-
www.columbia.edu www.columbia.edu
-
GCM-dominated approach allows censorship of alternative perspectives, when the models have a common, or at least widespread, problem: lack of realistic sensitivity to injection of freshwater into the upper layers of the ocean.
for - climate crisis - Global Climate Models (GCM) limitation - do not allow alternative perspectives - lack of realistic sensitivity to injection of fresh water into upper layers of the ocean - Jim Hansen
Tags
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
The majority of Working Group Three has been dominated by the integrated assessment models: big models that are basically economic models, with a bit of technology (or a bit of mythical technology) and a bit of social science bolted on the side, and a small climate model. But they are basically just economic models, business-as-usual models, and these models have dominated what we have to do about climate change.
for - climate crisis - IPCC - warning - working group 3 - integrated assessment models - are basically economic models - with a bit of mythical technology - a bit of social science - Kevin Anderson
-
I've said for a long time that I don't think Working Group Three should be part of the IPCC; reducing emissions is innately political.
for - climate crisis - IPCC - warning - working group 3 - integrated assessment models - is just about reducing emissions - inherently political - Kevin Anderson
-
Working Group Three is just Exxon in disguise. There are good people in Working Group Three and in integrated assessment modelling, but they are working within deeply subjective boundaries that have been set up by a "we mustn't rock the political boat" mindset.
for - climate crisis - IPCC - warning - working group 3 - Integrated Assessment Models - Some good people here but - It's just Exxon in disguise - Kevin Anderson
-
The climate is coming back and killing people already. It's not killing enough of the high emitters, it's not killing our children, but it's killing poor people's children, typically people of colour a long way away. We've never cared about them and we continue not to care about them, and we embed that complete disregard and colonialism in our models as well, our so-called objective models on what we should do about climate change. They're deeply colonial models, and that feeds into the IPCC.
for - quote - IPCC - climate models - are deeply colonial - Kevin Anderson
Tags
- quote - IPCC - climate models - are deeply colonial - Kevin Anderson
- climate crisis - IPCC - warning - working group 3 - integrated assessment models - are basically economic models - with a bit of mythical technology - a bit of social science - Kevin Anderson
- climate crisis - IPCC - warning - working group 3 - integrated assessment models - is just about reducing emissions - inherently political - Kevin Anderson
- climate crisis - IPCC - warning - working group 3 - Integrated Assessment Models - Some good people here but - It's just Exxon in disguise - Kevin Anderson
Annotators
URL
-
-
-
TensorFlow Lite is best suited for deploying trained AI models on Android and iOS, providing on-device machine learning through mobile-optimized pre-trained models. It is efficient, has low latency, and supports multiple languages, which makes it versatile. By implementing TensorFlow Lite in mobile apps, developers can leverage its lightweight, mobile-optimized models to deliver on-device AI functionality with minimal latency.
Implementing trained AI models in mobile app development is transforming app experiences by integrating machine learning into iOS and Android platforms. From AI-powered personalization to advanced analytics, trained models empower intelligent decision-making and enhanced functionality.
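Below is a minimal sketch of the workflow described above: convert a trained Keras model to the .tflite format and run it with the TFLite interpreter (the same runtime a mobile app would embed). The tiny model and input shape are placeholders, not a recommended architecture.

```python
# Sketch: convert a trained Keras model to TensorFlow Lite and run inference
# with the lightweight interpreter. The model below is a toy stand-in.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(4,)),  # placeholder model
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()                  # mobile-friendly flatbuffer

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)   # (1, 10)
```

On Android or iOS the same .tflite bytes would be loaded by the platform's TFLite runtime rather than the Python interpreter.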
-
-
-
AI models collapse when trained on recursively generated data by Ilia Shumailov et al.
ᔥ[[Mathew Lowry]] in AI4Communities post - MyHub Experiments Wiki (accessed:: 2024-11-06 09:43:23)
-
- Sep 2024
-
metagov.org metagov.org
-
https://metagov.org/projects/koi-pond
Metagov's KOI (Knowledge Organization Infrastructure) is a graph database that supports relationships between knowledge objects, users, and groups within Metagov. via JM
-
- Jul 2024
-
- Apr 2024
-
snarfed.org snarfed.org
- Mar 2024
-
research.ibm.com research.ibm.com
-
https://research.ibm.com/blog/retrieval-augmented-generation-RAG
PK indicates that folks using footnotes in AI are using RAG methods.
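A minimal sketch of the RAG pattern the linked post describes: embed the documents, retrieve the closest ones to the query, and stuff them into the prompt so the answer (and its footnotes) can point back to sources. Here embed() and generate() are hypothetical stand-ins for a real embedding model and LLM call, used only to keep the sketch self-contained.

```python
# Minimal RAG sketch. embed() and generate() are placeholders, not a real API.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding (character histogram) just to make retrieval runnable.
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: -float(embed(d) @ q))[:k]   # cosine-style ranking

def generate(prompt: str) -> str:
    return f"[LLM answer conditioned on a {len(prompt)}-character prompt]"

docs = [
    "RAG grounds answers in retrieved text.",
    "Footnotes can cite the retrieved sources.",
    "Transformers predict the next token.",
]
context = "\n".join(retrieve("How do AI footnotes work?", docs))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: ..."))
```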
-
-
-
Abstract
Conclusions: predictions are better than MOST (the MO estimates systematically underestimate the magnitude of turbulent fluxes; the model improves agreement with observations and reduces the overall deviation from observed fluxes), and it generalizes across different sites. Shortcomings: does not include material fluxes; predictions still need improvement; results are anomalous depending on stability; generalization across seasons is limited; it relies on variables that are not easy to obtain (a minimal observation set needs to be identified).
Tags
Annotators
-
- Feb 2024
-
docdrop.org docdrop.org
-
for - climate crisis - interview - Neil deGrasse Tyson - Gavin Schmidt - 2023 record heat - NASA explanation
podcast details - title: How 2023 broke our climate models - hosts: Neil deGrasse Tyson & Paul Mercurio - guest: NASA GISS director Gavin Schmidt - date: Jan 2024
summary - Neil deGrasse Tyson and his co-host Paul Mercurio interview NASA GISS director Gavin Schmidt to discuss the record-breaking global heating in 2023 and 2024. - Neil and Paul cover a lot in this short interview, including: - NASA models can't explain the large jump in temperature in 2023/2024. Yes, they predicted incremental increases, but not such large jumps. Gavin finds this worrying. - The PACE satellite launches this month to gather important data on the state of aerosols around the planet. This information can help characterize more precisely the role aerosols are playing in global heating. - Geoengineering with aerosols is not considered a good idea by Gavin, as it essentially means that once started, and if it works to cool the planet, we would be dependent on it for centuries. - Gavin stresses the need for a cohesive collective solution, but says it's beyond him how we achieve that given all the denialism and misinformation that influences policy out there.
-
- Jan 2024
-
arxiv.org arxiv.org
-
Hubinger et al. "SLEEPER AGENTS: TRAINING DECEPTIVE LLMS THAT PERSIST THROUGH SAFETY TRAINING". arXiv: 2401.05566v3. Jan 17, 2024.
Very disturbing and interesting results from team of researchers from Anthropic and elsewhere.
-
-
cdn.openai.com cdn.openai.com
-
GPT-4 System Card. OpenAI. March 23, 2023.
-
-
www.technologyreview.com www.technologyreview.com
-
- for: progress trap - AI, carbon footprint - AI, progress trap - AI - bias, progress trap - AI - situatedness
-
- Nov 2023
-
www.youtube.com www.youtube.com
-
haha, china and russia and friends are shitting all over your "scientific models".
the ONLY problem is "too many humans", aka overpopulation, caused by pacifism.
these "save the world" policies are collective suicide for the 95% useless eaters. byee!
-
- Oct 2023
-
-
Introduction of RoBERTa, an improved analysis and training approach for BERT NLP models.
-
-
arxiv.org arxiv.org
-
(Chen, NeurIPS, 2021) Chen, Lu, Rajeswaran, Lee, Grover, Laskin, Abbeel, Srinivas, and Mordatch. "Decision Transformer: Reinforcement Learning via Sequence Modeling". arXiv preprint arXiv:2106.01345v2, June 2021.
This quickly became a very influential paper, with a new idea for learning generative models of action prediction by supervised sequence modeling over (return, state, action) trajectories from demonstrations. There is no optimization of actions or rewards; the target return is simply given as an input.
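One concrete piece of the method that is easy to show: each timestep is encoded as a (return-to-go, state, action) triple, where the return-to-go is the sum of rewards from that step to the end of the trajectory. A small sketch with made-up rewards:

```python
# Returns-to-go as used for conditioning in Decision Transformer; rewards are illustrative.
import numpy as np

rewards = np.array([0.0, 1.0, 0.0, 2.0])        # toy trajectory rewards
returns_to_go = np.cumsum(rewards[::-1])[::-1]  # [3., 3., 2., 2.]
print(returns_to_go)

# At inference time the "target reward is an input": seed the first return-to-go with the
# return you want, then subtract observed rewards as the episode unfolds.
target = 3.0
for r in rewards:
    # (the model would be conditioned on `target` here before choosing an action)
    target -= r
```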
-
-
arxiv.org arxiv.org
-
Wu, Prabhumoye, Yeon Min, Bisk, Salakhutdinov, Azaria, Mitchell and Li. "SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning". arXiv preprint arXiv:2305.15486v2, May 2023.
-
-
arxiv.org arxiv.org
-
Zecevic, Willig, Singh Dhami and Kersting. "Causal Parrots: Large Language Models May Talk Causality But Are Not Causal". In Transactions on Machine Learning Research, Aug, 2023.
-
-
www.gatesnotes.com www.gatesnotes.com
-
"The Age of AI has begun : Artificial intelligence is as revolutionary as mobile phones and the Internet." Bill Gates, March 21, 2023. GatesNotes
-
-
www.inc.com www.inc.com
-
Minda Zetlin. "Bill Gates Says We're Witnessing a 'Stunning' New Technology Age. 5 Ways You Must Prepare Now". Inc.com, March 2023.
-
-
arxiv.org arxiv.org
-
Feng, 2022. "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis"
Shared and found via Gowthami Somepalli (@gowthami@sigmoid.social) on Mastodon: "StructureDiffusion: Improve the compositional generation capabilities of text-to-image #diffusion models by modifying the text guidance using a constituency tree or a scene graph."
-
-
arxiv.org arxiv.org
-
Training language models to follow instructions with human feedback
Original paper discussing the Reinforcement Learning from Human Feedback (RLHF) algorithm.
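A minimal sketch of the pairwise reward-model objective described in the paper: the reward model should score the human-preferred completion above the rejected one, trained with a -log(sigmoid(r_chosen - r_rejected)) loss. The tiny linear "reward model" and random features below are stand-ins for a transformer backbone over real completions.

```python
# Pairwise reward-model loss sketch (PyTorch); the model and data are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)          # real reward models use a transformer backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)        # scalar reward per completion

rm = RewardModel()
chosen = torch.randn(8, 16)                     # features of human-preferred completions
rejected = torch.randn(8, 16)                   # features of rejected completions
loss = -F.logsigmoid(rm(chosen) - rm(rejected)).mean()   # -log(sigmoid(r_chosen - r_rejected))
loss.backward()
```

The full pipeline then uses this reward model as the objective for a reinforcement-learning fine-tuning stage, which is not shown here.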
-
-
cdn.openai.com cdn.openai.com
-
GPT-2 Introduction paper
Language Models are Unsupervised Multitask Learners A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, (2019).
-
-
arxiv.org arxiv.org
-
"Attention is All You Need" Foundational paper introducing the Transformer Architecture.
-
-
-
GPT-3 introduction paper
-
-
arxiv.org arxiv.org
-
"Are Pre-trained Convolutions Better than Pre-trained Transformers?"
-
-
arxiv.org arxiv.org
-
LaMDA: Language Models for Dialog Applications
"LaMDA: Language Models for Dialog Applications" - Google's introduction of the LaMDA v1 large language model.
-
-
-
Benyamin Ghojogh and Ali Ghodsi. "Attention Mechanism, Transformers, BERT, and GPT: Tutorial and Survey"
-
- Sep 2023
-
-
In 2018, around four percent of papers were based on foundation models; in 2020, 90 percent were, and that number has continued to shoot up into 2023. At the same time, in the non-human domain it's essentially been zero, and actually it went up in 2022 because we've published the first one. The goal here is: if we can make these kinds of large-scale models for the rest of nature, then we should expect a kind of broad-scale acceleration.
-
for: accelerating foundation models in non-human communication, non-human communication - anthropogenic impacts, species extinction - AI communication tools, conservation - AI communication tools
-
comment
- imagine the empathy we can realize to help slow down climate change and species extinction by communicating and listening to the feedback from other species about what they think of our species' impacts on their world!
-
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Recent work has revealed several new and significant aspects of the dynamics of theory change. First, statistical information, information about the probabilistic contingencies between events, plays a particularly important role in theory-formation both in science and in childhood. In the last fifteen years we’ve discovered the power of early statistical learning.
The data of the past is congruent with the current psychological trends that face the education system of today. Developmentalists have charted how children construct and revise intuitive theories. In turn, a variety of theories have developed because of the greater use of statistical information that supports probabilistic contingencies that help to better inform us of causal models and their distinctive cognitive functions. These studies investigate the physical, psychological, and social domains. In the case of intuitive psychology, or "theory of mind," developmentalism has traced a progression from an early understanding of emotion and action to an understanding of intentions and simple aspects of perception, to an understanding of knowledge vs. ignorance, and finally to a representational and then an interpretive theory of mind.
The mechanisms by which life evolved—from chemical beginnings to cognizing human beings—are central to understanding the psychological basis of learning. We are the product of an evolutionary process and it is the mechanisms inherent in this process that offer the most probable explanations to how we think and learn.
Bada, S. O. (2015). Constructivism Learning Theory: A Paradigm for Teaching and Learning.
Tags
Annotators
URL
-
- Aug 2023
-
arxiv.org arxiv.org
-
Title: Delays, Detours, and Forks in the Road: Latent State Models of Training Dynamics. Authors: Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho. Note: This paper seems cool; it uses older interpretable machine learning models (graphical models) to understand what is going on inside a deep neural network.
-
- Jul 2023
-
arxiv.org arxiv.org
-
Llama 2 release paper
-
-
arxiv.org arxiv.org
-
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le. "Towards a Human-like Open-Domain Chatbot". Google Research, Brain Team.
Defines the SSI metric for chatbots that is used in the LaMDA paper by Google.
Tags
Annotators
URL
-
-
arxiv.org arxiv.org
-
Bowen Baker et al. (OpenAI). "Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos". arXiv, June 2022.
Introduction of VPT: a new semi-supervised pre-trained model for sequential decision making in Minecraft. Data are from human video playthroughs but are unlabelled.
-
- Jun 2023
-
docdrop.org docdrop.org
-
These seven models of harmonic realization get progressively more advanced, but even the initial ones—provided that they are performed in time and with a good rhythmic feel—can convincingly express the majority of jazz progressions. As you get more comfortable at realizing harmonic progressions using these models, experiment with different metric placements and variations of the Charleston rhythm
-
The ability to realize harmonic progressions on the keyboard is an essential skill for the contemporary jazz musician, regardless of her/his primary instrument. The forthcoming models of keyboard playing will help to accomplish this objective
-
Figure 12.2 illustrates the use of Model II. The R.H. distributes the Charleston rhythm at different locations within the measure
-
Chapter 21 introduces 13 phrase models that illustrate the essential harmonic, contrapuntal, and structural properties of the different eight-bar phrases commonly found in standard tunes.
-
The terms “turnaround” and “tag ending” are generic labels that do not indicate a particular chord sequence; rather, they suggest the specific formal function of these progressions. In jazz, there is a certain subset of harmonic progressions whose names suggest specific chord successions. When jazz musicians use the term “Lady Bird” progression, for instance, it connotes a particular chromatic turnaround from Tadd Dameron’s tune of the same title recorded in 1947. Figure 13.9 illustrates the chord structure of that progression using Model VI of harmonic realization
Tags
- jse-har-kbd-models
- harmonic-realisation-models
- invertible-counterpoint
- harmony
- voice-leading
- harmony-keyboard-models
- tag-endings
- lady-bird-progression
- turnarounds
- keyboard-models
- jse-har-realisation
- phrase-models
- rhythm-charleston
- source:terefenko
- jse-charleston-rhythm
- phrase-prototypes
- jse-phr-phrase-prototypes
Annotators
URL
-
- May 2023
- Apr 2023
-
-
Raworth, Kate. Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist. White River Junction, Vermont: Chelsea Green Publishing, 2017.
-
-
srush.github.io srush.github.io
-
The Annotated S4: Efficiently Modeling Long Sequences with Structured State Spaces. Albert Gu, Karan Goel, and Christopher Ré.
A state-space alternative to transformers for long-sequence modeling.
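The recurrence at the heart of the approach is a linear state-space model, x_k = A x_{k-1} + B u_k, y_k = C x_k. The sketch below shows only that recurrence with random matrices; the actual S4 contribution (the HiPPO initialization and computing the whole sequence as a long convolution) is omitted.

```python
# Toy discrete state-space recurrence; matrices are random, not a HiPPO/S4 parameterization.
import numpy as np

N, L = 4, 10                        # state size, sequence length
A = 0.1 * np.random.randn(N, N)     # illustrative state matrix
B = np.random.randn(N, 1)
C = np.random.randn(1, N)

u = np.random.randn(L)              # input sequence
x = np.zeros((N, 1))
y = []
for k in range(L):
    x = A @ x + B * u[k]            # state update
    y.append((C @ x).item())        # readout
print(y)
```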
-
-
-
Efficiently Modeling Long Sequences with Structured State Spaces. Albert Gu, Karan Goel, and Christopher Ré. Department of Computer Science, Stanford University.
-
-
-
Bowman, Samuel R. "Eight Things to Know about Large Language Models." arXiv, 2023. https://arxiv.org/abs/2304.00612v1.
Abstract
The widespread public deployment of large language models (LLMs) in recent months has prompted a wave of new attention and engagement from advocates, policymakers, and scholars from many fields. This attention is a timely response to the many urgent questions that this technology raises, but it can sometimes miss important considerations. This paper surveys the evidence for eight potentially surprising such points: 1. LLMs predictably get more capable with increasing investment, even without targeted innovation. 2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment. 3. LLMs often appear to learn and use representations of the outside world. 4. There are no reliable techniques for steering the behavior of LLMs. 5. Experts are not yet able to interpret the inner workings of LLMs. 6. Human performance on a task isn't an upper bound on LLM performance. 7. LLMs need not express the values of their creators nor the values encoded in web text. 8. Brief interactions with LLMs are often misleading.
Found via: Taiwan's Gold Card draws startup founders, tech workers | Semafor
Tags
Annotators
URL
-
-
arxiv.org arxiv.org
-
Bowen Baker et al. (OpenAI). "Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos". arXiv, June 2022.
New semi-supervised pre-trained model for sequential decision making in Minecraft. Data are from human video playthroughs but are unlabelled.
reinforcement-learning foundation-models pretrained-models proj-minerl minecraft
-
-
-
It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.
This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like the GPTs.
-
- Mar 2023
-
arxiv.org arxiv.org
-
Ganguli, Deep, Askell, Amanda, Schiefer, Nicholas, Liao, Thomas I., Lukošiūtė, Kamilė, Chen, Anna, Goldie, Anna et al. "The Capacity for Moral Self-Correction in Large Language Models." arXiv, 2023. https://arxiv.org/abs/2302.07459v2.
Abstract
We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing harmful outputs -- if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.
-
-
web.archive.org web.archive.org
-
Dass das ägyptische Wort p.t (sprich: pet) "Himmel" bedeutet, lernt jeder Ägyptologiestudent im ersten Semester. Die Belegsammlung im Archiv des Wörterbuches umfaßt ca. 6.000 Belegzettel. In der Ordnung dieses Materials erfährt man nun, dass der ägyptische Himmel Tore und Wege hat, Gewässer und Ufer, Seiten, Stützen und Kapellen. Damit wird greifbar, dass der Ägypter bei dem Wort "Himmel" an etwas vollkommen anderes dachte als der moderne westliche Mensch, an einen mythischen Raum nämlich, in dem Götter und Totengeister weilen. In der lexikographischen Auswertung eines so umfassenden Materials geht es also um weit mehr als darum, die Grundbedeutung eines banalen Wortes zu ermitteln. Hier entfaltet sich ein Ausschnitt des ägyptischen Weltbildes in seinem Reichtum und in seiner Fremdheit; und naturgemäß sind es gerade die häufigen Wörter, die Schlüsselbegriffe der pharaonischen Kultur bezeichnen. Das verbreitete Mißverständnis, das Häufige sei uninteressant, stellt die Dinge also gerade auf den Kopf.
Google translation:
Every Egyptology student learns in their first semester that the Egyptian word pt (pronounced pet) means "heaven". The collection of documents in the dictionary archive comprises around 6,000 document slips. In the order of this material one learns that the Egyptian heaven has gates and ways, waters and banks, sides, pillars and chapels. This makes it tangible that the Egyptians had something completely different in mind when they heard the word "heaven" than modern Westerners do, namely a mythical space in which gods and spirits of the dead dwell. The lexicographic analysis of such comprehensive material is therefore about far more than determining the basic meaning of a banal word. Here a slice of the Egyptian world view unfolds in its richness and its strangeness; and naturally it is precisely the frequent words that denote the key concepts of pharaonic culture. The widespread misconception that what is frequent is uninteresting thus turns things exactly on their head.
This is a fantastic example of context creation for a dead language as well as for creating proper historical context.
-
In looking at the uses of and similarities between Wb and TLL, I can't help but think that these two zettelkasten represented the state of the art for Large Language Models and some of the ideas behind ChatGPT
-
-
www.inc.com www.inc.com
-
"There is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all."
Is there? By whom? Why industry only and not government, academia and civil society?
-
-
dl.acm.org dl.acm.org
-
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.
Would the argument here for stochastic parrots also potentially apply to or could it be abstracted to Markov monkeys?
-
-
www.nytimes.com www.nytimes.com
-
L.L.M.s have a disturbing propensity to just make things up out of nowhere. (The technical term for this, among deep-learning experts, is ‘‘hallucinating.’’)
-
‘‘I think it lets us be more thoughtful and more deliberate about safety issues,’’ Altman says. ‘‘Part of our strategy is: Gradual change in the world is better than sudden change.’’
What are the long term effects of fast breaking changes and gradual changes for evolved entities?
-
OpenAI had a novel structure, which the organization called a ‘‘capped profit’’ model.
-
-
insightmaker.com insightmaker.com
-
Insight Maker is used to model system dynamics and create agent-based models by creating causal loop diagrams and allowing users to run simulations on them.
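For a sense of what "running a simulation" on such a diagram means, here is a minimal stock-and-flow sketch: one stock, a constant inflow, and a proportional outflow, integrated with Euler steps. The parameters are illustrative and this is not Insight Maker's own engine.

```python
# Euler-integrated single stock with inflow and proportional outflow (illustrative values).
dt, steps = 0.25, 200
stock, inflow, outflow_rate = 100.0, 10.0, 0.08

history = []
for _ in range(steps):
    stock += dt * (inflow - outflow_rate * stock)   # d(stock)/dt = inflow - rate * stock
    history.append(stock)
print(round(history[-1], 1))   # approaches the equilibrium inflow / outflow_rate = 125
```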
-
- Feb 2023
-
www.washingtonpost.com www.washingtonpost.com
-
Could it be the shift from person-to-person communication (known in both directions) to massive broadcast that is driving issues with content moderation? When it's person to person, one can simply choose not to interact and put the person beyond their individual pale. This sort of shunning is much harder to do with larger mass publics at scale in broadcast mode.
How can bringing content moderation back down to the neighborhood scale help in the broadcast model?
-
-
www.klimareporter.de www.klimareporter.de
-
Hartmut Grassl and others on the difference between climate neutrality and greenhouse gas neutrality, and on sufficiency and climate modelling.
-
-
wordcraft-writers-workshop.appspot.com wordcraft-writers-workshop.appspot.com
-
One of the most well-documented shortcomings of large language models is that they can hallucinate. Because these models have no direct knowledge of the physical world, they're prone to conjuring up facts out of thin air. They often completely invent details about a subject, even when provided a great deal of context.
-
language models are incredible "yes, and" machines, allowing writers to quickly explore seemingly unlimited variations on their ideas.
-
The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.
Is LaMDA really able to "learn higher-level concepts" or is it just a large, straightforward information-theoretic prediction engine?
-
-
docdrop.org docdrop.org
-
What signals are available to participants, and how are they compiled into estimates of rank? Their model assumes that knowledge of rank is noisy, but not (statistically) biased. While we can build more-sophisticated models of the biases in our judgments, however, Kawakatsu et al.’s (1) success highlights the virtues of simplicity. It is possible, for example, that, even if the signals are not accurate at first, we might act to make them so.
In the fraternity and other social spaces, how does one correct for a "bad first date", a botched meeting, or a lone bad day? Does statistical thermodynamics as a model provide clues? How would rank be determined here in an unbiased way? What about individual chemical affinities and how chemical interactions change and/or bias the samples?
-
- Jan 2023
-
ncase.me ncase.me
-
An interesting interactive model for segregation here. See also https://www.bloomberg.com/news/articles/2014-12-10/an-immersive-game-shows-how-easily-segregation-arises-and-how-we-might-fix-it for press coverage.
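The interactive is built on a Schelling-style segregation dynamic: agents with only a mild preference for similar neighbours still end up in strongly segregated patterns. A minimal sketch of that dynamic, with made-up grid size, threshold, and step count:

```python
# Schelling-style segregation sketch; all parameters are illustrative.
import random

SIZE, THRESHOLD, STEPS = 20, 0.34, 20000
cells = [0] * 180 + [1] * 180 + [None] * 40        # two groups plus empty cells (20x20 grid)
random.shuffle(cells)
grid = [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def unhappy(r, c):
    me = grid[r][c]
    if me is None:
        return False
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            n = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if n is not None:
                total += 1
                same += (n == me)
    return total > 0 and same / total < THRESHOLD   # wants at least ~1/3 similar neighbours

for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(r, c):
        er, ec = random.randrange(SIZE), random.randrange(SIZE)
        if grid[er][ec] is None:                    # move to a random empty cell
            grid[er][ec], grid[r][c] = grid[r][c], None

print("unhappy agents remaining:",
      sum(unhappy(r, c) for r in range(SIZE) for c in range(SIZE)))
```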
-
-
en.wikipedia.org en.wikipedia.org
-
A term recommended by Eve regarding an interdisciplinary approach that accounts for multiple feedback loops within complex systems. Need to consult complex systems science to see if ADHD is already addressed in that domain.
-
-
inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
-
"Talking About Large Language Models" by Murray Shanahan
-
-
tedgioia.substack.com tedgioia.substack.com
-
creative fields like music and writing live and die based on creativity, not financial statements and branding deals.
-
- Dec 2022
-
jarche.com jarche.com
-
If my interpretation of the Retrieval quadrant is correct, it will become much more difficult to be an average, or even above average, writer. Only the best will flourish. Perhaps we will see a rise in neo-generalists.
This is probably true of average or poor software engineers given that GPT-3 can produce pretty reasonable code snippets
-
-
digitalcredentials.mit.edu digitalcredentials.mit.edu
-
Develop Credential Quality Guidelines and Processes
Noteworthy that the recommendations for quality prioritize 1) The granularity of documenting learning outcomes; and 2) that credentials use standards that can be independently verified and validated.
-
Standardization of these concepts would allow for validators to sift through credential wallets and distinguish which credentials are most relevant in a specific use case. Critical to linking up such trust information is a more prominent role for dedicated trust providers in the credential ecosystem. These organizations include accreditation boards and regulators of professions, as well as others such as ranking boards and private quality assurance agencies who publish quality standards for educational organizations and maintain lists of which organizations match the criteria
What constitutes TRUST?
-
Multiple initiatives have tried to make various kinds of social recommendations by issuing credentials. However, up to this point they have worked better in closed social networks rather than as open credentials due to the ability of social networks to tie a recommendation with the profile (and identity) of the recommender. There are also several nascent initiatives to create open linked data around which skills, credentials and issuers are valued by employers.
Clearly, the LinkedIn recommendations use case is an example of one of these initiatives. It has not succeeded in creating strong social signals anchored in trust models. We are wise to consider what's missing from efforts like this. An even greater concern, however, and one that I believe is essential if we are to realize the transformative potential of digital credentials, is how to design social signals built on trust models that help all people. In a world long governed by "it's not what you know, it's who you know," the social signals and trust models are overweighted in favor of people with connections to other people, organizations and brands that are all to some degree legacies of exclusionary and inequitable systems. We are likely to build new systems that perpetuate the same problems if we do not intentionally design them to function otherwise. For people (especially those from historically underserved populations) worthy of the recommendations but lacking in social connections, how do they access social recommendations built on trust models?
-
-
-
One of the clear signs that the bottleneck to low-income adults working more results from their lack of opportunities is provided by looking at their hours of work over the business cycle. When the economy is strong and jobs are plentiful, low-income workers are more likely to find work, find work with higher pay, and be able to secure more hours of work than when the economy is weak. In 2000, when the economy was close to genuine full employment, the unemployment rate averaged 4.0 percent and the poverty rate was 11.3 percent; but in 2010, in the aftermath of the Great Recession, the unemployment rate averaged 9.6 percent and the poverty rate was almost 15.1 percent. What changed in those years was not poor families’ attitudes toward work but simply the availability of jobs. Among the bottom one-fifth of nonelderly households, hours worked per household were about 40 percent higher in the tight labor market of 2000 than in recession-plagued 2010. Given the opportunity for work or additional work hours, low-income Americans work more. A full-employment agenda that increases opportunities in the labor market, alongside stronger labor standards such as a higher minimum wage, reduces poverty.
How can we frame the science of poverty with respect to the model of statistical mechanics?
Unemployment numbers have very little to do with levels of poverty. They definitely don't seem to be correlated with poverty levels, in fact perhaps inversely so. Many would say that people are lazy and don't want to work when the general reality is that they do want to work (for a variety of reasons including identity and self-esteem), but the amount of work they can find and the pay they receive for it are the bigger problems.
-
-
www.theatlantic.com www.theatlantic.com
-
natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
-
-
jack-clark.net jack-clark.net
-
Houston, we have a Capability Overhang problem: Because language models have a large capability surface, these cases of emergent capabilities are an indicator that we have a ‘capabilities overhang’ – today’s models are far more capable than we think, and our techniques available for exploring the models are very juvenile. We only know about these cases of emergence because people built benchmark datasets and tested models on them. What about all the capabilities we don’t know about because we haven’t thought to test for them? There are rich questions here about the science of evaluating the capabilities (and safety issues) of contemporary models.
-
- Nov 2022
-
community.interledger.org community.interledger.org
-
11/30 Youth Collaborative
I went through some of the pieces in the collection. It is important to give a platform to the voices that are usually missing from the conversation.
Just a few similar initiatives that you might want to check out:
Storycorps - people can record their stories via an app
Project Voice - spoken word poetry
Living Library - sharing one's story
Freedom Writers - book and curriculum based on real-life stories
-
-
aclanthology.org aclanthology.org
-
Misleading Templates: There is no consistent relation between the performance of models trained with templates that are moderately misleading (e.g. {premise} Can that be paraphrased as "{hypothesis}"?) vs. templates that are extremely misleading (e.g., {premise} Is this a sports news? {hypothesis}). T0 (both 3B and 11B) perform better given misleading-moderate (Figure 3), ALBERT and T5 3B perform better given misleading-extreme (Appendices E and G.4), whereas T5 11B and GPT-3 perform comparably on both sets (Figure 2; also see Table 2 for a summary of statistical significances). Despite a lack of pattern between
Their misleading templates really are misleading
{premise} Can that be paraphrased as "{hypothesis}"
{premise} Is this a sports news? {hypothesis}
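For concreteness, this is how such templates are instantiated before being fed to the model; the premise/hypothesis pair is the NLI example quoted later in these notes.

```python
# Filling the paper's prompt templates with an NLI premise/hypothesis pair.
templates = [
    '{premise} Can that be paraphrased as "{hypothesis}"?',   # moderately misleading
    '{premise} Is this a sports news? {hypothesis}',          # extremely misleading
]
premise = "No weapons of mass destruction found in Iraq yet."
hypothesis = "Weapons of mass destruction found in Iraq."
for t in templates:
    print(t.format(premise=premise, hypothesis=hypothesis))
```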
-
In sum, notwithstanding prompt-based models’ impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans’ use of task instructions.
Although prompts seem to help NLP models improve their performance, the authors find that this improvement is still present even when prompts are deliberately misleading, which is a bit weird.
-
Suppose a human is given two sentences: “No weapons of mass destruction found in Iraq yet.” and “Weapons of mass destruction found in Iraq.” They are then asked to respond 0 or 1 and receive a reward if they are correct. In this setup, they would likely need a large number of trials and errors before figuring out what they are really being rewarded to do. This setup is akin to the pretrain-and-fine-tune setup which has dominated NLP in recent years, in which models are asked to classify a sentence representation (e.g., a CLS token) into some
This is a really excellent illustration of the difference in paradigm between "normal" text model fine tuning and prompt-based modelling
-
-
aclanthology.org aclanthology.org
-
Antibiotic resistance has become a growing worldwide concern as new resistance mechanisms are emerging and spreading globally, and thus detecting and collecting the cause – Antibiotic Resistance Genes (ARGs) – have been more critical than ever. In this work, we aim to automate the curation of ARGs by extracting ARG-related assertive statements from scientific papers. To support the research towards this direction, we build SCIARG, a new benchmark dataset containing 2,000 manually annotated statements as the evaluation set and 12,516 silver-standard training statements that are automatically created from scientific papers by a set of rules. To set up the baseline performance on SCIARG, we exploit three state-of-the-art neural architectures based on pre-trained language models and prompt tuning, and further ensemble them to attain the highest 77.0% F-score. To the best of our knowledge, we are the first to leverage natural language processing techniques to curate all validated ARGs from scientific papers. Both the code and data are publicly available at https://github.com/VT-NLP/SciARG.
The authors use prompt training on LLMs to build a classifier that can identify statements that describe whether or not micro-organisms have antibiotic resistant genes in scientific papers.
Tags
Annotators
URL
-
-
www.exponentialview.co www.exponentialview.co
-
“The metaphor is that the machine understands what I’m saying and so I’m going to interpret the machine’s responses in that context.”
Interesting metaphor for why humans are happy to trust outputs from generative models
-
-
threadreaderapp.com threadreaderapp.com
-
https://threadreaderapp.com/thread/1590111416014409728.html
I'm slowly getting the feeling that Musk is a system one thinker who relies on others to do his system two thinking.
-
-
arxiv.org arxiv.org
-
"On the Opportunities and Risks of Foundation Models" This is a large report by the Center for Research on Foundation Models at Stanford. They are creating and promoting the use of these models and trying to coin this name for them. They are also simply called large pre-trained models. So take it with a grain of salt, but also it has a lot of information about what they are, why they work so well in some domains and how they are changing the nature of ML research and application.
-
-
www.scotthyoung.com www.scotthyoung.com
-
understanding mental models of learning will make it easier to think about learning problems.
-
- Oct 2022
-
glasp.notion.site glasp.notion.site
-
Business Model: Will I get charged at some point? How do you make money to run this product? TBD
"TBD 🚀🚀🚀" is such a bad indication for the future of a product
-
- Aug 2022
-
app.participate.com app.participate.com
-
In the end, universities live off of being exclusive.
-
-
hechingerreport.org hechingerreport.org
-
many of their parents are still paying back their student loans.
-
-
www.insidehighered.com www.insidehighered.com
-
Harris said this model is often better for the textbook authors OpenStax works with, whom Harris called "the long tail" behind the minority of financially successful academic authors -- those who wouldn't necessarily sell enough units to make a lot in royalties, but who are committed to their work nonetheless.
-
"We are fully committed to providing affordable, high-quality learning solutions for students," Joyner said. "We are excited to think openly and collaboratively with key partners like OpenStax to ensure that we, and our authors, are able to reach as many students as possible in new and highly accessible ways."
-
- Jul 2022
-
www.youtube.com www.youtube.com
-
https://www.youtube.com/watch?v=7s4xx_muNcs
Don't recommend unless you have 100 hours to follow up on everything here that goes beyond the surface.
Be aware that this is a gateway for what I'm sure is a relatively sophisticated sales funnel.
Motivational and a great start, but I wonder how many followed up on these techniques and methods, internalized them and used them every day? I've not read his book, but I suspect it's got the usual mnemonic methods that go back millennia. And yet, these things are still not commonplace. People just don't seem to want to put in the work.
As a result, they become a sales tool with a get rich quick (get smart quick) hook/scheme. Great for Kwik's pocketbook, but what about actual outcomes for the hundreds who attended or the 34.6k people who've watched this video so far?
These methods need to be instilled in youth as it's rare for adults to bother.
Acronyms for remembering things are alright, but not incredibly effective as most people will have issues remembering the acronym itself much less what the letters stand for.
There seems to be an over-fondness for acronyms for people selling systems like this. (See also Tiago Forte as another example.)
-
- Jun 2022
-
Local file Local file
-
Ernest Hemingway was one of the most recognized and influentialnovelists of the twentieth century. He wrote in an economical,understated style that profoundly influenced a generation of writersand led to his winning the Nobel Prize in Literature in 1954.
Forte is fairly good at contextualizing people and proving ethos for what he's about to present. Essentially saying, "these people are the smart, well-known geniuses, so let's imitate them".
Humans are already good at imitating. Are they even better at it or more motivated if the subject of imitation is famous?
See also his sections on Twyla Tharp and Taylor Swift...
link to : - lone genius myth: how can there be a lone genius when the majority of human history is littered with imitation?
-
-
www.theatlantic.com www.theatlantic.com
-
It was as if Silicon Valley had made a secret pact to subsidize the lifestyles of urban Millennials. As I pointed out three years ago, if you woke up on a Casper mattress, worked out with a Peloton, Ubered to a WeWork, ordered on DoorDash for lunch, took a Lyft home, and ordered dinner through Postmates only to realize your partner had already started on a Blue Apron meal, your household had, in one day, interacted with eight unprofitable companies that collectively lost about $15 billion in one year.
...but we'll make up for it in volume.
-
-
-
Free public projects; private projects starting at $9/month per project
For many tools and apps payment for privacy is becoming the norm.
Examples: - Kumu.io - Github for private repos - ...
pros: - helps to encourage putting things into the commons
cons: - Normalizes the idea of payment for privacy which can be a toxic tool.
discuss...
-
- May 2022
-
www.latimes.com www.latimes.com
-
- Apr 2022
-
twitter.com twitter.com
-
Adam Kucharski [@adamjkucharski]. (2021, October 26). Lots of useful international examples in the comments to this post 👇 And more from @cmmid_lshtm here: Https://github.com/cmmid [Tweet]. Twitter. https://twitter.com/adamjkucharski/status/1452905501684011008
-
-
twitter.com twitter.com
-
Kai Kupferschmidt. (2021, December 1). @DirkBrockmann But these kinds of models do help put into context what it means when certain countries do or do not find the the variant. You can find a full explanation and a break-down of import risk in Europe by airport (and the people who did the work) here: Https://covid-19-mobility.org/reports/importrisk_omicron/ https://t.co/JXsYdmTnNP [Tweet]. @kakape. https://twitter.com/kakape/status/1466109304423993348
-
-
twitter.com twitter.com
-
NetScience on Twitter. (n.d.). Twitter. Retrieved 15 February 2021, from https://twitter.com/net_science/status/1360990028168503297
-
-
twitter.com twitter.com
-
ReconfigBehSci. (2022, January 26). RT @chrischirp: One consequence of the Omicron epidemic moving from.older people into children is dropping hospital admissions. Fewer adm… [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1486618430182731776
-
- Feb 2022
-
-
Learnings: - It's easy to assume people in the past didn't care or were stupid. But people do things for a reason. Not understanding the reason for how things are is a missed learning opportunity, and very likely leads to unintended consequences. - Similar to having a valid strong opinion, one must understand why things are as they are before changing them (except if the goal is only signaling).
-
-
www.businessinsider.com www.businessinsider.com
-
In her 2021 book "Bet on Yourself," which features a foreword by Schmidt, Hiatt lays out the two key ways she "up-leveled" her career."First I have prioritized finding a manager who is modeling the career path I want to take and embodies the leadership qualities I want to possess," she wrote. "Second, I have chosen roles that surround me with top quality people and a depth of opportunities to grow with them."
Look at their life and how it can bring opportunities, and then whether you will be exposed and stretched.
-
-
Local file Local file
-
Our brains work not that differently in terms of interconnectedness. Psychologists used to think of the brain as a limited storage space that slowly fills up and makes it more difficult to learn late in life. But we know today that the more connected information we already have, the easier it is to learn, because new information can dock to that information. Yes, our ability to learn isolated facts is indeed limited and probably decreases with age. But if facts are not kept isolated nor learned in an isolated fashion, but hang together in a network of ideas, or “latticework of mental models” (Munger, 1994), it becomes easier to make sense of new information. That makes it easier not only to learn and remember, but also to retrieve the information later in the moment and context it is needed.
Our natural memories are limited in their capacities, but it becomes easier to remember facts when they've got an association to other things in our minds. The building of mental models makes it easier to acquire and remember new information. The down side is that it may make it harder to dramatically change those mental models and re-associate knowledge to them without additional amounts of work.
The mental work involved here may be one of the reasons for some cognitive biases and the reason why people are more apt to stay stuck in their mental ruts. An example would be not changing their minds about ideas of racism and inequality, both because it's easier to keep their pre-existing ideas and biases than to do the necessary work to change their minds. Similar things come into play with respect to tribalism and political party identifications as well.
This could be an interesting area to explore more deeply. Connect with George Lakoff.
-
-
every.to every.to
-
Most writing is chasing clout, rather than insight
As a result of online business models and SEO, most writing becomes about chasing clout and audience eyeballs rather than providing thought-provoking insight and razor-sharp analysis. Audience reactions have also been cheapened by anger-reaction machines like Twitter.
We need better business models that aren't built on hype.
Tags
Annotators
URL
-
-
-
Founded in partnership with a team of entrepreneurial journalists who believe in a better model to create excellent content while narrowing the synapse between elite creators and their audiences.
http://puck.news/who-is-puck/
Another platform play of journalists banding together to find a niche space of readers.
-
-
therebooting.substack.com therebooting.substack.com
-
Aligning editorial mission and business model is critical.
One of the most complex questions in journalism in the past decade or more is how can one best align editorial mission with the business model? This is particularly difficult because the traditional business model(s) have been shifting in the move to online.
-
Axios Pro is bundling newsletters together in a high-priced subscription product ($2,500 for the bundle; $599 each) aimed squarely at deep-pocketed investors.
Old business advice: find the rich and charge them a pretty penny for something they either think they need or fear they can't live without.
-
- Dec 2021
-
certificates.creativecommons.org certificates.creativecommons.org
-
Legal Cases: Open Education
-
- Nov 2021
-
careframework.org careframework.org
-
also into business models that can better serve the interests of both students and educators.
We are at a point in time where we need to reflect on our business practices. The question should be: who are we serving, and how do we show we are serving them?
-
- Oct 2021
-
-
“Speed kills.” If you are able to be nimble, assess the ever-changing environment, and adapt quickly, you’ll always carry the advantage over any opponents. Start applying the OODA Loop to your day-to-day decisions and watch what happens. You’ll start to notice things that you would have been oblivious to before. Before jumping to your first conclusion, you’ll pause to consider your biases, take in additional information, and be more thoughtful of consequences.
How can the OODA Loop model be applied in everyday life?
Simply by applying the model's phases to every decision we make; by turning this process into a habit we will become ever faster at executing it, and that will give us the speed needed to survive and win.
-
When you act fast enough, other people view you as unpredictable. They can’t figure out the logic behind your decisions.
What is the role of speed and of the predictability of our actions in the OODA Loop model?
Operating faster than others makes us unpredictable, and this gives us a competitive advantage, well suited to the OODA Loop, which is by definition a fluid model.
-
Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized. Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system.
How does the [[Second law of thermodynamics]] apply to #uncertainty, and how can we use it as part of the OODA Loop model?
The principle states that within a closed system everything always tends towards entropy. That is why we must be open systems, continually taking in information from the context, so that the situation does not become chaotic.
-
The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes the limit of the precision with which pairs of physical properties can be understood. We cannot know the position and the velocity of a body at the same time. We can know either its location or its speed, but not both.
How does [[Heisenberg's uncertainty principle]] apply in the [[OODA Loop]] model, and how does it help us deal with #uncertainty?
The principle states that it is impossible to determine two physical properties precisely at the same time.
Boyd extends this concept to the management of information as well: trying to handle two different informational variables optimally is too difficult and, in practice, leads to greater uncertainty.
-
Boyd referred to three key principles to support his ideas: Gödel’s theorems, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics. Of course, we’re using these principles in a different way from their initial purpose and in a simplified, non-literal form.
What are the three principles that can help us manage uncertainty and that form an integral part of the OODA Loop model?
- Gödel's theorem (#Godel)
- [[Heisenberg's uncertainty principle]]
- [[Second law of thermodynamics]]
-
Gödel’s theorems indicate any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. For fighter pilots, their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.
What does #Godel's theorem consist of, and where does it fit in the [[OODA Loop]] model?
This theorem states that every mental model will inevitably lack some information; that is why the #Bayes method must be applied to update our information and bring it in line with reality.
Simply being aware of this inevitable uncertainty already makes us stronger against it and able to manage it.
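Since the passage leans on Bayesian updating, here is a tiny worked update with made-up numbers, just to show the mechanics of revising a belief when new evidence arrives:

```python
# Bayes' rule with illustrative numbers: P(H|E) = P(E|H) P(H) / P(E).
prior_h = 0.30            # P(H): prior belief that the opponent is using a new tactic
p_e_given_h = 0.80        # P(E|H): chance of seeing this manoeuvre if they are
p_e_given_not_h = 0.10    # P(E|not H): chance of seeing it anyway

p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
posterior_h = prior_h * p_e_given_h / p_e
print(round(posterior_h, 2))   # ~0.77: one observation shifts the belief sharply
```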
-
If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational way, the pilot must accept the accompanying uncertainty. However, Boyd belabored the point that uncertainty is irrelevant if we have the right filters in place.
According to the [[OODA Loop]] model, what is important to remember about the uncertainty that comes from a context in which information is constantly being updated and constantly changing?
The most important thing to remember is that the uncertainty of the context is irrelevant if the right decision filters are in place.
-
If we can’t cope with uncertainty, we end up stuck in the observation stage. This sometimes happens when we know we need to make a decision, but we’re scared of getting it wrong. So we keep on reading books and articles, asking people for advice, listening to podcasts, and so on.
What situation do we risk ending up in if we are not able to manage uncertainty?
We risk ending up in a situation where the fear of making a decision paralyzes us, and we keep observing, studying and analyzing without ever acting.
-
Speed is a crucial element of military decision-making. Using the OODA Loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.
What is the first major benefit of applying the [[OODA Loop]] model in one's own life?
It is the speed with which decisions can be made: the more this method is used, the easier it becomes to move through contexts full of varied information, because the decision pattern stays the same.
-