338 Matching Annotations
  1. Apr 2024
  2. Mar 2024
  3. Feb 2024
    1. for - climate crisis - interview - Neil deGrasse Tyson - Gavin Schmidt - 2023 record heat - NASA explanation

      podcast details - title: How 2023 broke our climate models - host: Neil deGrasse Tyson & Paul Mercurio - guest: NASA director, Gavin Schmidt - date: Jan 2024

      summary
      - Neil deGrasse Tyson and his co-host Paul Mercurio interview NASA director Gavin Schmidt to discuss the record-breaking global heating in 2023 and 2024. They cover a lot in this short interview, including:
      - NASA models can't explain the large jump in temperature in 2023/2024. Yes, they predicted incremental increases, but not such large jumps. Gavin finds this worrying.
      - The PACE satellite launches this month to gather important data on the state of aerosols around the planet. This information can help characterize more precisely the role aerosols are playing in global heating.
      - Gavin does not consider geoengineering with aerosols a good idea, as it essentially means that once started, if it works to cool the planet, we would be dependent on it for centuries.
      - Gavin stresses the need for a cohesive collective solution, but it's beyond him how we achieve that given all the denialism and misinformation that influences policy.

  4. Jan 2024
    1. Hubinger et al. "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". arXiv:2401.05566v3. Jan 17, 2024.

      Very disturbing and interesting results from a team of researchers from Anthropic and elsewhere.

  5. Nov 2023
    1. haha, china and russia and friends are shitting all over your "scientific models".<br /> the ONLY problem is "too many humans", aka overpopulation, caused by pacifism.<br /> these "save the world" policies are collective suicide for the 95% useless eaters. byee!

  6. Oct 2023
    1. (Chen, NeurIPS, 2021) Chen, Lu, Rajeswaran, Lee, Grover, Laskin, Abbeel, Srinivas, and Mordatch. "Decision Transformer: Reinforcement Learning via Sequence Modeling". arXiv preprint arXiv:2106.01345v2, June 2021.

      Quickly became a very influential paper, with a new idea of how to learn generative models of action prediction by sequence modeling over (return, state, action) demonstration trajectories. No optimization of actions or rewards; instead, the target return is given as an input.
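
      The trajectory layout at the heart of the paper can be sketched in a few lines; a minimal illustration of return-to-go conditioning (function names are mine, not from the paper's code):

```python
def returns_to_go(rewards):
    """Undiscounted cumulative future reward at each timestep."""
    rtg, total = [], 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return list(reversed(rtg))

def to_sequence(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples, the input
    layout used for decision-transformer-style sequence modeling."""
    seq = []
    for g, s, a in zip(returns_to_go(rewards), states, actions):
        seq.extend([("rtg", g), ("state", s), ("action", a)])
    return seq
```

      At inference time you prepend the return you *want*, and the model generates actions conditioned on it.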

    1. Wu, Prabhumoye, So Yeon Min, Bisk, Salakhutdinov, Azaria, Mitchell and Li. "SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning". arXiv preprint arXiv:2305.15486v2, May 2023.

    1. Zecevic, Willig, Singh Dhami and Kersting. "Causal Parrots: Large Language Models May Talk Causality But Are Not Causal". In Transactions on Machine Learning Research, Aug, 2023.

    1. "The Age of AI has begun: Artificial intelligence is as revolutionary as mobile phones and the Internet." Bill Gates, March 21, 2023. GatesNotes

    1. Feng, 2022. "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis"

      Shared and found via: Gowthami Somepalli @gowthami@sigmoid.social Mastodon > Gowthami Somepalli @gowthami StructureDiffusion: Improve the compositional generation capabilities of text-to-image #diffusion models by modifying the text guidance by using a constituency tree or a scene graph.

    1. Training language models to follow instructions with human feedback

      Original paper discussing the Reinforcement Learning from Human Feedback (RLHF) algorithm.

    1. LaMDA: Language Models for Dialog Applications

      "LaMDA: Language Models for Dialog Applications" Google's introduction of the LaMDA v1 Large Language Model.

  7. Sep 2023
    1. in 2018, you know, it was around four percent of papers that were based on foundation models; in 2020, 90% were, and 00:27:13 that number has continued to shoot up into 2023. At the same time, in the non-human domain it's essentially been zero, and actually it went up in 2022 because we've 00:27:25 published the first one. And the goal here is: hey, if we can make these kinds of large-scale models for the rest of nature, then we should expect a kind of broad-scale 00:27:38 acceleration
      • for: accelerating foundation models in non-human communication, non-human communication - anthropogenic impacts, species extinction - AI communication tools, conservation - AI communication tools

      • comment

        • imagine the empathy we can realize to help slow down climate change and species extinction by communicating and listening to the feedback from other species about what they think of our species impacts on their world!
    1. Recent work has revealed several new and significant aspects of the dynamics of theory change. First, statistical information, information about the probabilistic contingencies between events, plays a particularly important role in theory-formation both in science and in childhood. In the last fifteen years we’ve discovered the power of early statistical learning.

      The data of the past is congruent with the current psychological trends that face the education system of today. Developmentalists have charted how children construct and revise intuitive theories. In turn, a variety of theories have developed because of the greater use of statistical information that supports probabilistic contingencies that help to better inform us of causal models and their distinctive cognitive functions. These studies investigate the physical, psychological, and social domains. In the case of intuitive psychology, or "theory of mind," developmentalism has traced a progression from an early understanding of emotion and action to an understanding of intentions and simple aspects of perception, to an understanding of knowledge vs. ignorance, and finally to a representational and then an interpretive theory of mind.

      The mechanisms by which life evolved—from chemical beginnings to cognizing human beings—are central to understanding the psychological basis of learning. We are the product of an evolutionary process and it is the mechanisms inherent in this process that offer the most probable explanations to how we think and learn.

      Bada, & Olusegun, S. (2015). Constructivism Learning Theory: A Paradigm for Teaching and Learning.

  8. Aug 2023
    1. Title: Delays, Detours, and Forks in the Road: Latent State Models of Training Dynamics Authors: Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho Note: This paper seems cool, using older interpretable machine learning models (graphical models) to understand what is going on inside a deep neural network

      Link: https://arxiv.org/pdf/2308.09543.pdf

  9. Jul 2023
    1. Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le. "Towards a Human-like Open-Domain Chatbot". Google Research, Brain Team.

      Defined the SSI metric for chatbots used in the LaMDA paper by Google.

    1. Bowen Baker et al. (OpenAI). "Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos". arXiv, June 2022.

      Introduction of VPT: a new semi-supervised pre-trained model for sequential decision making in Minecraft. Data are from human video playthroughs but are unlabelled.

  10. Jun 2023
    1. These seven models of harmonic realization get progressively more advanced, but even the initial ones—provided that they are performed in time and with a good rhythmic feel—can convincingly express the majority of jazz progressions. As you get more comfortable at realizing harmonic progressions using these models, experiment with different metric placements and variations of the Charleston rhythm
    2. The ability to realize harmonic progressions on the keyboard is an essential skill for the contemporary jazz musician, regardless of her/his primary instrument. The forthcoming models of keyboard playing will help to accomplish this objective
    3. Figure 12.2 illustrates the use of Model II. The R.H. distributes the Charleston rhythm at different locations within the measure
    4. Chapter 21 introduces 13 phrase models that illustrate the essential harmonic, contrapuntal, and structural properties of the different eight-bar phrases commonly found in standard tunes.
    5. The terms “turnaround” and “tag ending” are generic labels that do not indicate a particular chord sequence; rather, they suggest the specific formal function of these progressions. In jazz, there is a certain subset of harmonic progressions whose names suggest specific chord successions. When jazz musicians use the term “Lady Bird” progression, for instance, it connotes a particular chromatic turnaround from Tadd Dameron’s tune of the same title recorded in 1947. Figure 13.9 illustrates the chord structure of that progression using Model VI of harmonic realization
  11. May 2023
  12. Apr 2023
    1. The Annotated S4 Efficiently Modeling Long Sequences with Structured State Spaces Albert Gu, Karan Goel, and Christopher Ré.

      A new alternative to transformers for modeling long sequences

    1. Efficiently Modeling Long Sequences with Structured State Spaces. Albert Gu, Karan Goel, and Christopher Ré. Department of Computer Science, Stanford University

    1. Bowman, Samuel R. "Eight Things to Know about Large Language Models." arXiv (2023). https://arxiv.org/abs/2304.00612v1.

      Abstract

      The widespread public deployment of large language models (LLMs) in recent months has prompted a wave of new attention and engagement from advocates, policymakers, and scholars from many fields. This attention is a timely response to the many urgent questions that this technology raises, but it can sometimes miss important considerations. This paper surveys the evidence for eight potentially surprising such points: 1. LLMs predictably get more capable with increasing investment, even without targeted innovation. 2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment. 3. LLMs often appear to learn and use representations of the outside world. 4. There are no reliable techniques for steering the behavior of LLMs. 5. Experts are not yet able to interpret the inner workings of LLMs. 6. Human performance on a task isn't an upper bound on LLM performance. 7. LLMs need not express the values of their creators nor the values encoded in web text. 8. Brief interactions with LLMs are often misleading.

      Found via: Taiwan's Gold Card draws startup founders, tech workers | Semafor

    1. Bowen Baker et al. (OpenAI). "Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos". arXiv, June 2022.

      New semi-supervised pre-trained model for sequential decision making in Minecraft. Data are from human video playthroughs but are unlabelled.

      reinforcement-learning foundation-models pretrained-models proj-minerl minecraft

    1. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

      This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like GPT.

  13. Mar 2023
    1. Ganguli, Deep, Askell, Amanda, Schiefer, Nicholas, Liao, Thomas I., Lukošiūtė, Kamilė, Chen, Anna, Goldie, Anna et al. "The Capacity for Moral Self-Correction in Large Language Models." arXiv (2023). https://arxiv.org/abs/2302.07459v2.

      Abstract

      We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing harmful outputs -- if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.

    1. Dass das ägyptische Wort p.t (sprich: pet) "Himmel" bedeutet, lernt jeder Ägyptologiestudent im ersten Semester. Die Belegsammlung im Archiv des Wörterbuches umfaßt ca. 6.000 Belegzettel. In der Ordnung dieses Materials erfährt man nun, dass der ägyptische Himmel Tore und Wege hat, Gewässer und Ufer, Seiten, Stützen und Kapellen. Damit wird greifbar, dass der Ägypter bei dem Wort "Himmel" an etwas vollkommen anderes dachte als der moderne westliche Mensch, an einen mythischen Raum nämlich, in dem Götter und Totengeister weilen. In der lexikographischen Auswertung eines so umfassenden Materials geht es also um weit mehr als darum, die Grundbedeutung eines banalen Wortes zu ermitteln. Hier entfaltet sich ein Ausschnitt des ägyptischen Weltbildes in seinem Reichtum und in seiner Fremdheit; und naturgemäß sind es gerade die häufigen Wörter, die Schlüsselbegriffe der pharaonischen Kultur bezeichnen. Das verbreitete Mißverständnis, das Häufige sei uninteressant, stellt die Dinge also gerade auf den Kopf.

      Google translation:

      Every Egyptology student learns in their first semester that the Egyptian word p.t (pronounced pet) means "heaven". The collection of documents in the dictionary archive comprises around 6,000 document slips. In the order of this material one learns that the Egyptian heaven has gates and ways, waters and banks, sides, pillars and chapels. This makes it tangible that the Egyptians had something completely different in mind when they heard the word "heaven" than modern Westerners do, namely a mythical space in which gods and spirits of the dead dwell. In the lexicographic evaluation of such comprehensive material, then, far more is at stake than determining the basic meaning of a banal word. Here a portion of the Egyptian worldview unfolds in its richness and its strangeness; and naturally it is precisely the frequent words that designate the key concepts of pharaonic culture. The widespread misconception that what is frequent is uninteresting thus turns things exactly on their head.

      This is a fantastic example of context creation for a dead language as well as for creating proper historical context.

    2. In looking at the uses of and similarities between Wb and TLL, I can't help but think that these two zettelkasten represented the state of the art for Large Language Models and some of the ideas behind ChatGPT

    1. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.

      Would the argument here for stochastic parrots also potentially apply to or could it be abstracted to Markov monkeys?

    1. L.L.M.s have a disturbing propensity to just make things up out of nowhere. (The technical term for this, among deep-learning experts, is ‘‘hallucinating.’’)
    2. ‘‘I think it lets us be more thoughtful and more deliberate about safety issues,’’ Altman says. ‘‘Part of our strategy is: Gradual change in the world is better than sudden change.’’

      What are the long term effects of fast breaking changes and gradual changes for evolved entities?

    3. OpenAI had a novel structure, which the organization called a ‘‘capped profit’’ model.
    1. // Insight Maker is used to model system dynamics and create agent-based models by creating causal loop diagrams and allowing users to run simulations on those
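
      The simulations such tools run on causal loop diagrams boil down to numerically integrating stocks and flows; a minimal single-stock Euler sketch (assumed names, not Insight Maker's actual API):

```python
def simulate_stock(initial, inflow_rate, outflow_rate, steps, dt=1.0):
    """Euler-integrate one stock with proportional inflow and outflow,
    the basic computation behind stock-and-flow simulation."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        # net flow is proportional to the current stock level
        stock += (inflow_rate - outflow_rate) * stock * dt
        history.append(stock)
    return history
```

      For example, a stock of 100 with 10% inflow and 5% outflow grows by 5% per step; real tools add multiple coupled stocks and smaller integration steps, but the core loop is the same.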

  14. Feb 2023
    1. Could it be the shift from person-to-person communication (known in both directions) to massive broadcast that is driving issues with content moderation? When it's person to person, one can simply choose not to interact and put the person beyond their individual pale. This sort of shunning is much harder to do with larger mass publics at scale in broadcast mode.

      How can bringing content moderation back down to the neighborhood scale help in the broadcast model?

    1. One of the most well-documented shortcomings of large language models is that they can hallucinate. Because these models have no direct knowledge of the physical world, they're prone to conjuring up facts out of thin air. They often completely invent details about a subject, even when provided a great deal of context.
    2. language models are incredible "yes, and" machines, allowing writers to quickly explore seemingly unlimited variations on their ideas.
    3. The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.

      Is LaMDA really able to "learn higher-level concepts" or is it just a large, straight-forward information theoretic-based prediction engine?
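
      For intuition about "predict the most likely next word", here is a toy bigram predictor; it is nothing like LaMDA's scale or architecture, just the bare prediction objective:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word -> next-word frequencies over a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Greedy next-word prediction: the highest-frequency continuation."""
    if word not in counts or not counts[word]:
        return None
    return counts[word].most_common(1)[0][0]
```

      The open question in the annotation is whether scaling this same objective up by many orders of magnitude yields genuine higher-level concepts or just much better statistics.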

    1. What signals are available to participants, and how are they compiled into estimates of rank? Their model assumes that knowledge of rank is noisy, but not (statistically) biased. While we can build more-sophisticated models of the biases in our judgments, however, Kawakatsu et al.’s (1) success highlights the virtues of simplicity. It is possible, for example, that, even if the signals are not accurate at first, we might act to make them so.

      In the fraternity and other social spaces, how does one correct for a "bad first date", a botched meeting, or a lone bad day? Does statistical thermodynamics as a model provide clues? How would rank be determined here in an unbiased way? What about individual chemical affinities and how chemical interactions change and/or bias the samples?

  15. Jan 2023
    1. A term recommended by Eve regarding an interdisciplinary approach that accounts for multiple feedback loops within complex systems. Need to consult complex systems science to see if ADHD is already addressed in that domain.

  16. Dec 2022
    1. If my interpretation of the Retrieval quadrant is correct, it will become much more difficult to be an average, or even above average, writer. Only the best will flourish. Perhaps we will see a rise in neo-generalists.

      This is probably true of average or poor software engineers given that GPT-3 can produce pretty reasonable code snippets

    1. Develop Credential Quality Guidelines and Processes

      Noteworthy that the recommendations for quality prioritize 1) The granularity of documenting learning outcomes; and 2) that credentials use standards that can be independently verified and validated.

    2. Standardization of these concepts would allow for validators to sift through credential wallets and distinguish which credentials are most relevant in a specific use case. Critical to linking up such trust information is a more prominent role for dedicated trust providers in the credential ecosystem. These organizations include accreditation boards and regulators of professions, as well as others such as ranking boards and private quality assurance agencies who publish quality standards for educational organizations and maintain lists of which organizations match the criteria

      What constitutes TRUST?

    3. Multiple initiatives have tried to make various kinds of social recommendations by issuing credentials. However, up to this point they have worked better in closed social networks rather than as open credentials due to the ability of social networks to tie a recommendation with the profile (and identity) of the recommender. There are also several nascent initiatives to create open linked data around which skills, credentials and issuers are valued by employers.

      Clearly, the LinkedIn recommendations use case is an example of one of these initiatives. It has not succeeded in creating strong social signals anchored in trust models. We are wise to consider what's missing from efforts like this. An even greater concern however, and one that I believe is essential if we are to realize the transformative potential of digital credentials, is how to design social signals built on trust models that help all people. In a world long-governed by "it's not what you know, it's who you know," the social signals and trust models are overweighted in favor of people with connections to other people, organizations and brands that are all to some degree legacies of exclusionary and inequitable systems. We are likely to build new systems that perpetuate the same problems if we do not intentionally design them to function otherwise. For people (especially those from historically underserved populations) worthy of the recommendations but lacking in social connections, how do they access social recommendations built on trust models?

    1. One of the clear signs that the bottleneck to low-income adults working more results from their lack of opportunities is provided by looking at their hours of work over the business cycle. When the economy is strong and jobs are plentiful, low-income workers are more likely to find work, find work with higher pay, and be able to secure more hours of work than when the economy is weak. In 2000, when the economy was close to genuine full employment, the unemployment rate averaged 4.0 percent and the poverty rate was 11.3 percent; but in 2010, in the aftermath of the Great Recession, the unemployment rate averaged 9.6 percent and the poverty rate was almost 15.1 percent. What changed in those years was not poor families’ attitudes toward work but simply the availability of jobs. Among the bottom one-fifth of nonelderly households, hours worked per household were about 40 percent higher in the tight labor market of 2000 than in recession-plagued 2010. Given the opportunity for work or additional work hours, low-income Americans work more. A full-employment agenda that increases opportunities in the labor market, alongside stronger labor standards such as a higher minimum wage, reduces poverty.

      How can we frame the science of poverty with respect to the model of statistical mechanics?

      Unemployment numbers have very little to do with levels of poverty. They definitely don't seem to be correlated with poverty levels, in fact perhaps inversely so. Many would say that people are lazy and don't want to work when the general reality is that they do want to work (for a variety of reasons including identity and self-esteem), but the amount of work they can find and the pay they receive for it are the bigger problems.

    1. natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
    1. Houston, we have a Capability Overhang problem: Because language models have a large capability surface, these cases of emergent capabilities are an indicator that we have a ‘capabilities overhang’ – today’s models are far more capable than we think, and our techniques available for exploring the models are very juvenile. We only know about these cases of emergence because people built benchmark datasets and tested models on them. What about all the capabilities we don’t know about because we haven’t thought to test for them? There are rich questions here about the science of evaluating the capabilities (and safety issues) of contemporary models. 
  17. Nov 2022
    1. 11/30 Youth Collaborative

      I went through some of the pieces in the collection. It is important to give a platform to the voices that are usually missing from the conversation.

      Just a few similar initiatives that you might want to check out:

      Storycorps - people can record their stories via an app

      Project Voice - spoken word poetry

      Living Library - sharing one's story

      Freedom Writers - book and curriculum based on real-life stories

    1. Misleading Templates There is no consistent relation between the performance of models trained with templates that are moderately misleading (e.g. {premise} Can that be paraphrased as "{hypothesis}"?) vs. templates that are extremely misleading (e.g., {premise} Is this a sports news? {hypothesis}). T0 (both 3B and 11B) perform better given misleading-moderate (Figure 3), ALBERT and T5 3B perform better given misleading-extreme (Appendices E and G.4), whereas T5 11B and GPT-3 perform comparably on both sets (Figure 2; also see Table 2 for a summary of statistical significances.) Despite a lack of pattern between

      Their misleading templates really are misleading

      {premise} Can that be paraphrased as "{hypothesis}"

      {premise} Is this a sports news? {hypothesis}
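
      These templates are just slot-filling over (premise, hypothesis) pairs; a minimal sketch using the two template strings quoted above (the helper name is mine, not the paper's code):

```python
def fill_template(template, premise, hypothesis):
    """Instantiate a prompt template with one NLI example's fields."""
    return template.format(premise=premise, hypothesis=hypothesis)

# The two template strings quoted in the excerpt above.
MODERATE = '{premise} Can that be paraphrased as "{hypothesis}"?'
EXTREME = "{premise} Is this a sports news? {hypothesis}"
```

      The surprise in the paper is that filling the "wrong" question into the slot often barely hurts model performance.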

    2. In sum, notwithstanding prompt-based models’ impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans’ use of task instructions.

      although prompts seem to help NLP models improve their performance, the authors find that this improvement persists even when the prompts are deliberately misleading, which is a bit weird

    3. Suppose a human is given two sentences: “No weapons of mass destruction found in Iraq yet.” and “Weapons of mass destruction found in Iraq.” They are then asked to respond 0 or 1 and receive a reward if they are correct. In this setup, they would likely need a large number of trials and errors before figuring out what they are really being rewarded to do. This setup is akin to the pretrain-and-fine-tune setup which has dominated NLP in recent years, in which models are asked to classify a sentence representation (e.g., a CLS token) into some

      This is a really excellent illustration of the difference in paradigm between "normal" text model fine tuning and prompt-based modelling
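
      The contrast can be made concrete by writing out the two input formats for the same example; this is a schematic sketch (names and token layout are illustrative stand-ins, not any specific model's actual format):

```python
def finetune_input(premise, hypothesis):
    """Pretrain-and-fine-tune: the model sees only the sentence pair;
    what the task *is* must be inferred from the training labels."""
    return f"[CLS] {premise} [SEP] {hypothesis} [SEP]"

def prompt_input(premise, hypothesis):
    """Prompt-based: the task instruction is spelled out in natural
    language alongside the example."""
    return f'{premise} Question: does this imply "{hypothesis}"? Yes or No?'
```

      The first format corresponds to the paper's respond-0-or-1-and-guess-the-reward scenario; the second states the question directly.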

    1. Antibiotic resistance has become a growing worldwide concern as new resistance mechanisms are emerging and spreading globally, and thus detecting and collecting the cause – Antibiotic Resistance Genes (ARGs) – have been more critical than ever. In this work, we aim to automate the curation of ARGs by extracting ARG-related assertive statements from scientific papers. To support the research towards this direction, we build SCIARG, a new benchmark dataset containing 2,000 manually annotated statements as the evaluation set and 12,516 silver-standard training statements that are automatically created from scientific papers by a set of rules. To set up the baseline performance on SCIARG, we exploit three state-of-the-art neural architectures based on pre-trained language models and prompt tuning, and further ensemble them to attain the highest 77.0% F-score. To the best of our knowledge, we are the first to leverage natural language processing techniques to curate all validated ARGs from scientific papers. Both the code and data are publicly available at https://github.com/VT-NLP/SciARG.

      The authors use prompt training on LLMs to build a classifier that can identify statements that describe whether or not micro-organisms have antibiotic resistant genes in scientific papers.

    1. “The metaphor is that the machine understands what I’m saying and so I’m going to interpret the machine’s responses in that context.”

      Interesting metaphor for why humans are happy to trust outputs from generative models

    1. "On the Opportunities and Risks of Foundation Models" This is a large report by the Center for Research on Foundation Models at Stanford. They are creating and promoting the use of these models and trying to coin this name for them. They are also simply called large pre-trained models. So take it with a grain of salt, but also it has a lot of information about what they are, why they work so well in some domains and how they are changing the nature of ML research and application.

  18. Oct 2022
    1. Business Model: Will I get charged at some point? How do you make money to run this product? TBD

      "TBD 🚀🚀🚀" is such a bad indication for the future of a product

  19. Aug 2022
    1. Harris said this model is often better for the textbook authors OpenStax works with, whom Harris called "the long tail" behind the minority of financially successful academic authors -- those who wouldn't necessarily sell enough units to make a lot in royalties, but who are committed to their work nonetheless.
    2. "We are fully committed to providing affordable, high-quality learning solutions for students," Joyner said. "We are excited to think openly and collaboratively with key partners like OpenStax to ensure that we, and our authors, are able to reach as many students as possible in new and highly accessible ways."
  20. Jul 2022
    1. https://www.youtube.com/watch?v=7s4xx_muNcs

      Don't recommend unless you have 100 hours to follow up on everything here that goes beyond the surface.

      Be aware that this is a gateway for what I'm sure is a relatively sophisticated sales funnel.


      Motivational and a great start, but I wonder how many followed up on these techniques and methods, internalized them and used them every day? I've not read his book, but I suspect it's got the usual mnemonic methods that go back millennia. And yet, these things are still not commonplace. People just don't seem to want to put in the work.

      As a result, they become a sales tool with a get rich quick (get smart quick) hook/scheme. Great for Kwik's pocketbook, but what about actual outcomes for the hundreds who attended or the 34.6k people who've watched this video so far?

      These methods need to be instilled in youth as it's rare for adults to bother.


      Acronyms for remembering things are alright, but not incredibly effective as most people will have issues remembering the acronym itself much less what the letters stand for.


      There seems to be an over-fondness for acronyms for people selling systems like this. (See also Tiago Forte as another example.)

  21. Jun 2022
    1. Ernest Hemingway was one of the most recognized and influential novelists of the twentieth century. He wrote in an economical, understated style that profoundly influenced a generation of writers and led to his winning the Nobel Prize in Literature in 1954.

      Forte is fairly good at contextualizing people and proving ethos for what he's about to present. Essentially saying, "these people are the smart, well-known geniuses, so let's imitate them".

      Humans are already good at imitating. Are they even better at it or more motivated if the subject of imitation is famous?

      See also his sections on Twyla Tharp and Taylor Swift...

      link to : - lone genius myth: how can there be a lone genius when the majority of human history is littered with imitation?

    1. It was as if Silicon Valley had made a secret pact to subsidize the lifestyles of urban Millennials. As I pointed out three years ago, if you woke up on a Casper mattress, worked out with a Peloton, Ubered to a WeWork, ordered on DoorDash for lunch, took a Lyft home, and ordered dinner through Postmates only to realize your partner had already started on a Blue Apron meal, your household had, in one day, interacted with eight unprofitable companies that collectively lost about $15 billion in one year.

      ...but we'll make up for it in volume.

    1. Free public projects; private projects starting at $9/month per project

      For many tools and apps payment for privacy is becoming the norm.

      Examples: - Kumu.io - GitHub for private repos - ...

      pros:
      - helps to encourage putting things into the commons

      cons:
      - normalizes the idea of payment for privacy, which can be a toxic tool

      discuss...

  22. May 2022
  23. Apr 2022
    1. Kai Kupferschmidt. (2021, December 1). @DirkBrockmann But these kinds of models do help put into context what it means when certain countries do or do not find the variant. You can find a full explanation and a break-down of import risk in Europe by airport (and the people who did the work) here: Https://covid-19-mobility.org/reports/importrisk_omicron/ https://t.co/JXsYdmTnNP [Tweet]. @kakape. https://twitter.com/kakape/status/1466109304423993348

  24. Feb 2022
    1. Learnings:

      - It's easy to assume people in the past didn't care or were stupid. But people do things for a reason. Not understanding the reason for how things are is a missed learning opportunity, and very likely leads to unintended consequences.
      - Similar to having a valid strong opinion, one must understand why things are as they are before changing them (except if the goal is only signaling).

    1. In her 2021 book "Bet on Yourself," which features a foreword by Schmidt, Hiatt lays out the two key ways she "up-leveled" her career. "First I have prioritized finding a manager who is modeling the career path I want to take and embodies the leadership qualities I want to possess," she wrote. "Second, I have chosen roles that surround me with top quality people and a depth of opportunities to grow with them."

      Look at their life and how it can bring opportunities, and then whether you will be exposed and stretched.

    1. Our brains work not that differently in terms of interconnectedness. Psychologists used to think of the brain as a limited storage space that slowly fills up and makes it more difficult to learn late in life. But we know today that the more connected information we already have, the easier it is to learn, because new information can dock to that information. Yes, our ability to learn isolated facts is indeed limited and probably decreases with age. But if facts are not kept isolated nor learned in an isolated fashion, but hang together in a network of ideas, or “latticework of mental models” (Munger, 1994), it becomes easier to make sense of new information. That makes it easier not only to learn and remember, but also to retrieve the information later in the moment and context it is needed.

      Our natural memories are limited in their capacities, but it becomes easier to remember facts when they've got an association to other things in our minds. The building of mental models makes it easier to acquire and remember new information. The downside is that it may make it harder to dramatically change those mental models and re-associate knowledge to them without additional work.


      The mental work involved here may be one of the reasons for some cognitive biases, and it may explain why people are apt to stay stuck in their mental ruts. An example would be not changing one's mind about racism and inequality, because it's easier to keep pre-existing ideas and biases than to do the work necessary to change them. Similar dynamics come into play with tribalism and political party identification.

      This could be an interesting area to explore more deeply. Connect with George Lakoff.

    1. Most writing is chasing clout, rather than insight

      As a result of online business models and SEO, most writing becomes about chasing clout and audience eyeballs rather than providing thought-provoking insight and razor-sharp analysis. Audience reaction has only weakened further on anger-reaction machines like Twitter.

      We need better business models that aren't built on hype.

    1. Founded in partnership with a team of entrepreneurial journalists who believe in a better model to create excellent content while narrowing the synapse between elite creators and their audiences.

      http://puck.news/who-is-puck/

      Another platform play: journalists banding together to find a niche space of readers.

    1. Aligning editorial mission and business model is critical.

      One of the most complex questions in journalism in the past decade or more is how can one best align editorial mission with the business model? This is particularly difficult because the traditional business model(s) have been shifting in the move to online.

    2. Axios Pro is bundling newsletters together in a high-priced subscription product ($2,500 for the bundle; $599 each) aimed squarely at deep-pocketed investors.

      Old business advice: find the rich and charge them a pretty penny for something they either think they need or fear they can't live without.

  25. Dec 2021
  26. Nov 2021
    1. also into business models that can better serve the interests of both students and educators.

      We are at a point in time where we need to reflect on our business practices. The question should be: whom are we serving, and how do we show we are serving them?

  27. Oct 2021
    1. “Speed kills.” If you are able to be nimble, assess the ever-changing environment, and adapt quickly, you’ll always carry the advantage over any opponents. Start applying the OODA Loop to your day-to-day decisions and watch what happens. You’ll start to notice things that you would have been oblivious to before. Before jumping to your first conclusion, you’ll pause to consider your biases, take in additional information, and be more thoughtful of consequences.

      How can the OODA Loop model be applied in everyday life?

      Simply by applying the model's phases to each of our decisions. By making this process a habit, we will execute it faster and faster, and that speed gives us what we need to survive and win.

    2. When you act fast enough, other people view you as unpredictable. They can’t figure out the logic behind your decisions.

      What role do speed and the predictability of our actions play in the OODA Loop model?

      Operating faster than others makes us unpredictable, and this gives us a competitive advantage well suited to the OODA Loop, which is by definition a fluid model.

    3. Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized. Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system.

      How does the [[Second Law of Thermodynamics]] apply to #uncertainty, and how can we use it as part of the OODA Loop model?

      The law states that within a closed system everything tends toward entropy. For this reason we must be open systems, continually acquiring information from the context, so that the situation does not become chaotic.

    4. The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes the limit of the precision with which pairs of physical properties can be understood. We cannot know the position and the velocity of a body at the same time. We can know either its location or its speed, but not both.

      How does [[Heisenberg's uncertainty principle]] apply in the [[OODA Loop]] model, and how does it help us deal with #uncertainty?

      The principle states that it is impossible to precisely determine two physical properties at the same time.

      Boyd extends this concept to information management as well: trying to manage two different informational variables at once is too difficult and, in practice, induces greater uncertainty.

    5. Boyd referred to three key principles to support his ideas: Gödel’s theorems, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics. Of course, we’re using these principles in a different way from their initial purpose and in a simplified, non-literal form.

      What are the three principles that can help us manage uncertainty and that are an integral part of the OODA Loop model?

      • Gödel's theorems (#Godel)
      • [[Heisenberg's uncertainty principle]]
      • [[Second Law of Thermodynamics]]
    6. Gödel’s theorems indicate any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. For fighter pilots, their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.

      What do #Godel's theorems consist of, and how do they fit into the [[OODA Loop]] model?

      These theorems imply that every mental model will inevitably lack some information; for this reason we must apply #Bayes's method to update our information and align it with reality.

      Simply being aware of this inevitable uncertainty makes us stronger against it and better able to manage it.
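
      The note above invokes Bayesian updating without showing what an update looks like. As a minimal sketch (my own illustration, not from the source), here is Bayes' rule applied repeatedly to a belief; the hypothesis, prior, and likelihoods are all made-up numbers:

```python
# Hypothetical example of Bayesian updating: revising a prior belief
# as new observations arrive. All numbers here are illustrative.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | evidence) via Bayes' rule."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1 - prior)
    return numerator / marginal

# Start undecided (P(H) = 0.5), then fold in three observations that are
# three times as likely if the hypothesis is true (0.6 vs 0.2).
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.6, 0.2)

print(round(belief, 3))  # each consistent observation strengthens the belief
```

      In this spirit a mental model is never final: each observation nudges the estimate, which is the "alignment with reality" the note describes.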

    7. If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational way, the pilot must accept the accompanying uncertainty. However, Boyd belabored the point that uncertainty is irrelevant if we have the right filters in place.

      According to the [[OODA Loop]] model, what is important to remember about the uncertainty that comes from a context in which information is constantly updating and changing?

      The most important thing to remember is that the context's uncertainty is irrelevant if the right decision filters are in place.

    8. If we can’t cope with uncertainty, we end up stuck in the observation stage. This sometimes happens when we know we need to make a decision, but we’re scared of getting it wrong. So we keep on reading books and articles, asking people for advice, listening to podcasts, and so on.

      What situation do we risk ending up in if we cannot cope with uncertainty?

      We risk a situation in which the fear of making a decision paralyzes us, and we keep observing, studying, and analyzing without ever acting.

    9. Speed is a crucial element of military decision-making. Using the OODA Loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.

      What is the first major benefit of applying the [[OODA Loop]] model in one's own life?

      It is the speed with which decisions can be made: the more this method is used, the easier it becomes to move through contexts full of varied information, because the decision pattern stays the same.

    10. There’s a difference between making decisions and enacting decisions. Once you make up your mind, it’s time to take action. By taking action, you test your decision out.

      How does the action phase of the [[OODA Loop]] model connect to the decision phase?

      They connect because the previous phase sets the mindset, while this one carries out the actual action.

    11. This part of the loop needs to be flexible and open to Bayesian updating. In some of his notes, Boyd described this step as the hypothesis stage. The implication is that we should test the decisions we make at this point in the loop, spotting their flaws and including any issues in future observation stages

      In what terms is it important to reason when discussing the decision phase of the [[OODA Loop]] model?

      It is important not to reason in rigid terms; take a testing approach and stay flexible. Every decision is simply a hypothesis to be tested against reality, and the results it produces are information we can use for the next decision.
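
      The decision-as-hypothesis cycle in the note above can be sketched as a loop in which each decision is tested by acting and the outcome feeds the next observation. This is a toy illustration of my own (the function names and the numeric example are invented, not Boyd's formalization):

```python
# Toy sketch of the OODA cycle: observe -> orient -> decide (hypothesis)
# -> act (test) -> feed the outcome back in as the next observation.

def ooda_step(observation, orient, decide, act):
    """Run one pass of the loop; the outcome becomes the next observation."""
    model = orient(observation)   # orient: filter the observation through our mental model
    hypothesis = decide(model)    # decide: a decision is a testable hypothesis
    outcome = act(hypothesis)     # act: test the hypothesis against reality
    return outcome

# Illustrative use: steer a value toward a target, halving the gap each cycle.
target = 10.0
obs = 0.0
for _ in range(5):
    obs = ooda_step(
        obs,
        orient=lambda o: target - o,    # how far off are we?
        decide=lambda gap: gap / 2,     # hypothesis: half the gap closes it
        act=lambda step: obs + step,    # apply the step and observe the result
    )
print(obs)  # converges toward the target, one tested hypothesis at a time
```

      Each pass refines the estimate, mirroring the note's point that a decision is only a hypothesis whose result informs the next cycle.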

    12. He recommended a process of “deductive destruction”: paying attention to your own assumptions and biases, then finding fundamental mental models to replace them.

      What process could serve as an alternative to the one Munger created for collecting mental models?

    13. He identified the following four main barriers that impede our view of objective information: Our cultural traditions – we don’t realize how much of what we consider universal behavior is actually culturally prescribed Our genetic heritage – we all have certain constraints Our ability to analyze and synthesize – if we haven’t practiced and developed our thinking skills, we tend to fall back on old habits The influx of new information – it is hard to make sense of observations when the situation keeps changing

      What are the main barriers that prevent a proper application of the orientation phase in the [[OODA Loop]]?

      • Cultural traditions: much of what we consider objective or obvious is actually mere convention;
      • Genetic limits;
      • Rational limits: thinking and reasoning skills are the product of training, and can therefore be stronger or weaker;
      • The rate at which information in the context is updated.
    14. To orient yourself is to recognize any barriers that might interfere with the other parts of the OODA Loop. Orientation means connecting yourself with reality and seeing the world as it really is, as free as possible from the influence of cognitive biases and shortcuts.

      What does the orientation phase of the [[OODA Loop]] consist of in practice?

      It consists of recognizing the barriers that could interfere with executing the other phases of the OODA process.

    15. If you want to make good decisions, you need to master the art of observing your environment.

      What is one of the fundamental prerequisites for applying the [[OODA Loop]] model?

  28. Sep 2021
    1. Competent scientists do not believe their own models or theories, but rather treat them as convenient fictions. ...The issue to a scientist is not whether a model is true, but rather whether there is another whose predictive power is enough better to justify movement from today's fiction to a new one. Steve Vardeman, 1987. Comment. Journal of the American Statistical Association 82: 130-131. [kw]

      easier said than done

    1. A series of studies conducted by Frédéric Vallée-Tourangeau, a professor of psychology at Kingston University in Britain; Gaëlle Vallée-Tourangeau, a professor of behavioral science at Kingston; and their colleagues, has explored the benefits of such interactivity. In these studies, experimenters pose a problem; one group of problem solvers is permitted to interact physically with the properties of the problem, while a second group must only think through the problem. Interactivity “inevitably benefits performance,” they report.

      Physical interactivity with a problem may help improve results.

    2. Moving mental contents out of our heads and onto the space of a sketch pad or whiteboard allows us to inspect it with our senses, a cognitive bonus that the psychologist Daniel Reisberg calls “the detachment gain.”

      Moving ideas from our heads into the real world, whether written or potentially using other modalities, can provide a detachment gain, by which we're able to extend those ideas by drawing, sketching, or otherwise using them.

      How might we use the idea of detachment gain to better effect in our pedagogy? I've heard anecdotal evidence of the benefit of modality shifts in many spaces including creating sketchnotes.

      While some sketchnotes don't make sense to those who weren't present for the original talk, perhaps they're incredibly useful methods for those who are doing the modality shifts from hearing/seeing into writing/drawing.

  29. Aug 2021
    1. “The real economics of college have shifted so much during the last 70 years, and we have not made adjustments to all those changes. Students are in an equation that has not adapted to the circumstances.”
  30. Jul 2021
    1. How a memory palace works

      When we’re learning something new, it requires less effort if we connect it to something we already know, such as a physical place. This is known as elaborative encoding. Once we need to remember the information, we can “walk” around the palace and “see” the various pieces. The idea is to give your memories something to hang on to. We are pretty terrible at remembering things, especially when these memories float freely in our heads. But our spatial memory is actually pretty decent, and when we give our memories some needed structure, we provide that missing order and context.

      For example, if you struggle to remember names, it can be helpful to link people you meet to names you already know. If you meet someone called Fred and your grandmother had a cat called Fred, you could connect the two. Creating a multisensory experience in your head is the other part of the trick. In this case, you could imagine the sound of Fred meowing loudly.

      To further aid in recall, the method of loci is most effective if we take advantage of the fact that it’s easiest to remember memorable things. Memory specialists typically recommend mentally placing information within a physical space in ways that are weird and unusual. The stranger the image, the better.

      This notion of using spatial memory to encode other concepts (or the P-A-O system, in which a two-digit number encodes a person performing an action) is an interesting idea for someone like me who forgets quite a bit.

  31. Jun 2021
    1. The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.

      We do better with algorithms where the utility function can be expressed mathematically. When we try to design for utility/goals that include human values, it's much more difficult.

    2. many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs

      And the problem is that even human beings are not very good at making these trade-offs well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.

  32. May 2021
    1. In a way, the essential premise of the collab-house business model is not far from that of pornographic entertainment. (Where else do talent and crew and cadres of management congregate in furnished mansions to produce intimate content?) Interestingly, but maybe not surprisingly, many TikTok influencers, including some here at the Clubhouse, have made the crossover from social media to pornography, using apps such as OnlyFans to post nude pics for their legions of subscribers.
    2. Later in my visit, Chase Zwernemann, the twenty-one-year-old VP of talent management, will tell me that “we really see ourselves like influencing professors.” And if this weren’t enough, under the Clubhouse aegis is a trio of TikTok houses, each of which corresponds, apparently, to a different level of academe. There’s Clubhouse BH—the grad school—which is meant for “our more seasoned influencers.” (If this phrase conjures for you images of geriatrics taking selfies in suggestive postures, please know that by “seasoned influencers” they simply mean people who have been in the business a while and have thus reached the ripe old age of twenty-two or twenty-three.) Beneath that is Clubhouse FTB, which apparently serves as the undergraduate program. And finally, there’s Not a Content House, the high school of the Clubhouse venture, one meant to appeal to an even younger demographic.

      He uses academe, but I might liken it to a studio system of sorts.

    1. In viewing academia as a business, you should always give customers what they want, and this applies on two levels. First, always consider the demand for the research product. This is much easier said than done. Anyone can acknowledge that the customers are always right, but truly listening to them and extracting what they need is difficult, especially if you have your own personal desires with respect to the product (in this case, the research). Talk to the funding customer constantly. Second, most students are, in effect, employees, and the adviser is a boss who doubles as a customer. In some respects, your adviser will provide your pay cheque, or at least govern it. Thus, do what the customer requires. In addition, always consider your audience when writing and presenting. In the case of a thesis, the audience is your adviser and committee. Again, talk to the customers constantly.
    2. Anyone who treats research as a business tends not to be well received in academia, but they likely have the funding necessary to drive advances, and they may eventually be wealthy.

      There can be clear benefits to treating academia like a business.