21 Matching Annotations
  1. Sep 2023
    1. Spiral Dynamics (SD) is a model of the evolutionary development of individuals, organizations, and societies. It was initially developed by Don Edward Beck and Christopher Cowan based on the emergent cyclical theory of Clare W. Graves, combined with memetics as proposed by Richard Dawkins and further developed by Mihaly Csikszentmihalyi.

      https://en.wikipedia.org/wiki/Spiral_Dynamics

      related to ideas I've had with respect to Werner R. Loewenstein?

  2. Feb 2022
    1. You may remember from school the difference between an exergonic and an endergonic reaction. In the first case, you constantly need to add energy to keep the process going. In the second case, the reaction, once triggered, continues by itself and even releases energy.

      The buildup of complexity that results in increasingly complex life must certainly be endergonic if the process is to last for any extensive length of time. Once the process becomes exergonic or reaches homeostasis, the building of complexity, and even life itself, will cease.

      Must this always be true? Proof? Counterexamples?

  3. May 2021
  4. Oct 2020
    1. they found that the glyoxylate and pyruvate reacted to make a range of compounds that included chemical analogues to all the intermediary products in the TCA cycle except for citric acid. Moreover, these products all formed in water within a single reaction vessel, at temperatures and pH conditions mild enough to be compatible with conditions on Earth.
    1. The situation changed in the late 1990s, when the physicists Gavin Crooks and Chris Jarzynski derived “fluctuation theorems” that can be used to quantify how much more often certain physical processes happen than reverse processes. These theorems allow researchers to study how systems evolve — even far from equilibrium.

      look at these papers
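For reference, the two results usually meant by "fluctuation theorems" here are the Crooks relation and the Jarzynski equality it implies, where $W$ is the work done on the system, $\Delta F$ the free-energy difference, and $\beta = 1/k_B T$:

```latex
% Crooks fluctuation theorem: ratio of forward and reverse work distributions
\frac{P_F(W)}{P_R(-W)} = e^{\beta (W - \Delta F)}
% Integrating over W yields the Jarzynski equality
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
```

The second relation holds arbitrarily far from equilibrium, which is what makes these theorems useful for the driven systems discussed in the article.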

    2. maybe there’s more that you can get for free

      Most of what's here in this article (and likely in the underlying papers) sounds to me to have been heavily influenced by the writings of W. Loewenstein and S. Kauffman. They've laid out some models/ideas that need more rigorous testing and work, and this seems like a reasonable start to the process.

      The "get for free" phrase itself is very S. Kauffman in my mind. I'm curious how many times it appears in his work.

    3. Any claims that it has to do with biology or the origins of life, he added, are “pure and shameless speculations.”

      Some truly harsh words from his former supervisor? Wow!

    1. Ideas on how to analyze and predict network behavior have been informed by concepts arising from the computational and social sciences, which are themselves increasingly concerned with understanding networks. The interesting thing about these ideas is that they work at scales ranging from the molecular to the population level.

      scale free networks perhaps?
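If scale-free structure is indeed what's meant, the standard toy model is Barabási–Albert preferential attachment (my assumption, not something the article states): new nodes link to existing nodes with probability proportional to degree, producing heavy-tailed hubs. A minimal stdlib sketch:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a preferential-attachment network: each new node links to m
    distinct existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # seed network: complete graph on m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # "urn" of endpoints: a node appears once per unit of degree,
    # so uniform draws from the urn implement preferential attachment
    urn = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(urn))
        for t in targets:
            edges.append((new, t))
            urn.extend((new, t))
    return edges

edges = barabasi_albert(1000, 3)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# hubs emerge: the maximum degree far exceeds the mean (about 2m = 6)
```

The early nodes accumulate connections at every scale, which is the signature behavior the "molecular to population level" claim would predict.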

    2. I had bookmarked this article back in 2008 by tearing out a paper copy and keeping it in my to-read pile. I'm finally getting around to reading it today. It's still an interesting introduction to the broader area, which has moved forward since, though not so significantly as to date the piece.

    1. Criticality may be everywhere.

      This seems very similar to S. Kauffman's thesis in At Home in the Universe.

    2. “DNA as Information” Theme issue compiled and edited by Cartwright, J.H.E., Giannerini, S., & Gonzalez, D.L. Philosophical Transactions of the Royal Society A 374 (2016).

      Dig this up and read it

    1. Found reference to this in a review of Henry Quastler's book Information Theory in Biology.

      A more serious thing, in the reviewer's opinion, is the complete absence of contributions dealing with information theory and the central nervous system, which may be the field par excellence for the use of such a theory. Although no explicit reference to information theory is made in the well-known paper of W. McCulloch and W. Pitts (1943), the connection is quite obvious. This is made explicit in the systematic elaboration of the McCulloch-Pitts' approach by J. von Neumann (1952). In his interesting book J. T. Culbertson (1950) discussed possible neural mechanisms for recognition of visual patterns, and particularly investigated the problems of how greatly a pattern may be deformed without ceasing to be recognizable. The connection between this problem and the problem of distortion in the theory of information is obvious. The work of Anatol Rapoport and his associates on random nets, and especially on their applications to rumor spread (see the series of papers which appeared in this Journal during the past four years), is also closely connected with problems of information theory.

      Electronic copy available at: http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf
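The McCulloch–Pitts unit from that 1943 paper is simple enough to sketch in a few lines (a minimal illustration of the model, not code from any of the cited works): a neuron fires iff its weighted input sum reaches a threshold, and single units already realize basic logic gates.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) iff the weighted input sum
    reaches the threshold, otherwise stays silent (0)."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# basic logic gates fall out of single threshold units
AND = lambda a, b: mp_neuron((a, b), (1, 1), 2)
OR = lambda a, b: mp_neuron((a, b), (1, 1), 1)
NOT = lambda a: mp_neuron((a,), (-1,), 0)
```

Networks of such gates compute any Boolean function, which is the bridge to von Neumann's later elaboration.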

    1. When geneticists finally gained the power to cost-efficiently analyze entire genomes, they realized that most disorders and diseases are influenced by thousands of genes, each of which has a tiny effect. To reliably detect these miniscule effects, you need to compare hundreds of thousands of volunteers. By contrast, the candidate-gene studies of the 2000s looked at an average of 345 people!

      I'm hoping that more researchers are contemplating this as they stroll merrily along their way this week.
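The sample-size point can be made quantitative with a back-of-the-envelope power calculation (the effect size r = 0.02 and the cohort sizes are illustrative assumptions; 5e-8 is the conventional genome-wide significance threshold):

```python
from math import sqrt, erf

def power(n, r=0.02, alpha=5e-8):
    """Approximate power to detect an effect of size r in a sample of n,
    via the normal approximation z = r * sqrt(n), two-sided test."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1 + erf(x / sqrt(2)))
    # invert phi by bisection to get the critical value for alpha/2
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 1 - phi(mid) > alpha / 2:
            lo = mid
        else:
            hi = mid
    z_crit = (lo + hi) / 2
    return 1 - phi(z_crit - r * sqrt(n))

# a tiny effect is essentially undetectable at n = 345,
# but near-certain with hundreds of thousands of volunteers
small = power(345)
large = power(500_000)
```

At n = 345 the power is effectively zero, so the candidate-gene "hits" of that era were overwhelmingly noise.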

    1. In the meantime, the classification of viruses remains unclear. Tupanviruses seem to be dependent on their hosts for very little, and other viruses, according to one preprint, even encode ribosomal proteins. “The gap between cellular organisms and viruses is starting to close,” Deeg said.

      Is there a graph of known viruses categorized by the machinery that they do or don't have? Can they be classified and sub-classified so that emergent patterns come forward, thus allowing us to trace back their ancestry?

    2. “It’s remarkable that viruses seem to mingle into the translational domain so extensively,” said Matthias Fischer, a virologist at the Max Planck Institute for Medical Research in Germany who was not involved with either study.
    1. As it happens, he’d already done some work on coding theory—in the area of biology. The digital nature of DNA had been discovered by Jim Watson and Francis Crick in 1953, but it wasn’t yet clear just how sequences of the four possible base pairs encoded the 20 amino acids. In 1956, Max Delbrück—Jim Watson’s former postdoc advisor at Caltech—asked around at JPL if anyone could figure it out. Sol and two colleagues analyzed an idea of Francis Crick’s and came up with “comma-free codes” in which overlapping triples of base pairs could encode amino acids. The analysis showed that exactly 20 amino acids could be encoded this way. It seemed like an amazing explanation of what was seen—but unfortunately it isn’t how biology actually works (biology uses a more straightforward encoding, where some of the 64 possible triples just don’t represent anything).

      I recall talking to Sol about this very thing when I sat in on a course he taught at USC on combinatorics. He gave me his paper on it and a few related issues as I was very interested at the time about the applications of information theory and biology.

      I'm glad I managed to sit in on the class and still have the audio recordings and notes. While I can't say that Newton taught me calculus, I can say I learned combinatorics from Golomb.
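The counting argument behind the comma-free result is small enough to check directly: of the 64 codon triples, drop the 4 periodic ones (AAA, CCC, ...), and a comma-free code can use at most one word per cyclic class of the remaining 60, giving exactly 20 — the number of amino acids. (A sketch of the count only; showing that 20 is actually achievable was the harder part of the Golomb–Gordon–Welch analysis.)

```python
from itertools import product

bases = "ACGT"
triples = ["".join(t) for t in product(bases, repeat=3)]

def cyclic_class(w):
    # canonical representative of a word's cyclic rotations
    return min(w[i:] + w[:i] for i in range(len(w)))

# periodic words (AAA, CCC, ...) can never appear in a comma-free code;
# for length 3 the aperiodic words each have 3 distinct rotations
aperiodic = [w for w in triples if len({w[i:] + w[:i] for i in range(3)}) == 3]
classes = {cyclic_class(w) for w in aperiodic}
print(len(classes))  # 20
```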

    1. Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. The mutation that caused a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information—meaning better prediction—you are likely to use that information to have an edge in survival.

      There are some plants with huge amounts of DNA compared to their "peers"; perhaps these would be interesting test cases for experiments on this.
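The argument in the quote can be caricatured in a toy simulation (all parameters hypothetical): genomes are bit strings, "information" is the number of bits matching a fixed environment, and selection with mutation drives the best genome's information upward on average.

```python
import random

rng = random.Random(42)
TARGET = [1] * 32          # the environmental regularity to be "predicted"
POP, MU, GENS = 100, 0.01, 200

def info(genome):
    # bits matching the target act as information about the environment
    return sum(1 for a, b in zip(genome, TARGET) if a == b)

pop = [[rng.randint(0, 1) for _ in range(32)] for _ in range(POP)]
start = max(info(g) for g in pop)
for _ in range(GENS):
    # truncation selection: the better-informed half reproduces,
    # offspring copy a parent with per-bit mutation rate MU
    parents = sorted(pop, key=info, reverse=True)[:POP // 2]
    pop = [[(b ^ 1) if rng.random() < MU else b
            for b in rng.choice(parents)]
           for _ in range(POP)]
end = max(info(g) for g in pop)
```

Genome *size* and information content are distinct, though, which is exactly why the huge-genome plants would make interesting test cases.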

    1. Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.
    1. ITBio – Chris Aldrich (feed)

      Hey, wait! He's not only following me, but a very distinct subset of my posts! :)

  5. Jul 2019
    1. That mingling has sparked contentious debate among scientists about when and how giant viruses evolved. All of viral evolution is murky: Different groups of viruses likely had very different origins. Some may have been degenerate “escapees” from cellular genomes, while others descended directly from the primordial soup. “Still others have recombined and exchanged genes so many times in the course of evolution that we will never know where they originally came from,” Fischer said.