13 Matching Annotations
  1. Jul 2019
    1. In the meantime, the classification of viruses remains unclear. Tupanviruses seem to be dependent on their hosts for very little, and other viruses, according to one preprint, even encode ribosomal proteins. “The gap between cellular organisms and viruses is starting to close,” Deeg said.

      Is there a graph of known viruses categorized by the machinery that they do or don't have? Can they be classified and sub-classified so that emergent patterns come forward, thus allowing us to trace back their ancestry?

    2. That mingling has sparked contentious debate among scientists about when and how giant viruses evolved. All of viral evolution is murky: Different groups of viruses likely had very different origins. Some may have been degenerate “escapees” from cellular genomes, while others descended directly from the primordial soup. “Still others have recombined and exchanged genes so many times in the course of evolution that we will never know where they originally came from,” Fischer said.
    3. “It’s remarkable that viruses seem to mingle into the translational domain so extensively,” said Matthias Fischer, a virologist at the Max Planck Institute for Medical Research in Germany who was not involved with either study.
  2. May 2019
    1. When geneticists finally gained the power to cost-efficiently analyze entire genomes, they realized that most disorders and diseases are influenced by thousands of genes, each of which has a tiny effect. To reliably detect these minuscule effects, you need to compare hundreds of thousands of volunteers. By contrast, the candidate-gene studies of the 2000s looked at an average of 345 people!

      I'm hoping that more researchers are contemplating this as they stroll merrily along their way this week.
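
      As a back-of-the-envelope check of those numbers (my own sketch, not from the article): the standard Fisher-z power approximation, assuming the usual genome-wide significance threshold of 5×10⁻⁸, gives sample sizes in exactly that "hundreds of thousands" range for tiny effects.

      ```python
      from statistics import NormalDist
      from math import atanh

      def sample_size(r, alpha=5e-8, power=0.80):
          """Approximate N needed to detect a correlation r at two-sided
          significance alpha with the given power (Fisher z approximation)."""
          z = NormalDist()
          z_a = z.inv_cdf(1 - alpha / 2)
          z_b = z.inv_cdf(power)
          return ((z_a + z_b) / atanh(r)) ** 2 + 3

      # A variant explaining 0.01% of trait variance has r = sqrt(0.0001) = 0.01.
      print(round(sample_size(0.01)))  # on the order of 400,000 volunteers
      ```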

    2. As it happens, he’d already done some work on coding theory—in the area of biology. The digital nature of DNA had been discovered by Jim Watson and Francis Crick in 1953, but it wasn’t yet clear just how sequences of the four possible base pairs encoded the 20 amino acids. In 1956, Max Delbrück—Jim Watson’s former postdoc advisor at Caltech—asked around at JPL if anyone could figure it out. Sol and two colleagues analyzed an idea of Francis Crick’s and came up with “comma-free codes” in which overlapping triples of base pairs could encode amino acids. The analysis showed that exactly 20 amino acids could be encoded this way. It seemed like an amazing explanation of what was seen—but unfortunately it isn’t how biology actually works (biology uses a more straightforward encoding, where some of the 64 possible triples just don’t represent anything).

      I recall talking to Sol about this very thing when I sat in on a course he taught at USC on combinatorics. He gave me his paper on it and a few related issues as I was very interested at the time about the applications of information theory and biology.

      I'm glad I managed to sit in on the class and still have the audio recordings and notes. While I can't say that Newton taught me calculus, I can say I learned combinatorics from Golomb.
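
      The counting behind that "exactly 20" is easy to reproduce. A quick sketch (my own, over the alphabet ACGU): a comma-free code can use at most one word per cyclic class of aperiodic triples, and there are (4³ − 4)/3 = 20 such classes. The nine-word code below just illustrates the comma-free property itself; it is not Golomb's maximal 20-word construction.

      ```python
      from itertools import product

      ALPHABET = "ACGU"

      def is_comma_free(code):
          """A block code is comma-free if no concatenation of two codewords
          (including a word with itself) contains a codeword at an internal
          offset -- i.e. a misframed read is never a valid codeword."""
          n = len(next(iter(code)))
          for u, v in product(code, repeat=2):
              w = u + v
              if any(w[i:i + n] in code for i in range(1, n)):
                  return False
          return True

      # Upper bound: one codeword per cyclic class of aperiodic triples.
      triples = {"".join(t) for t in product(ALPHABET, repeat=3)}
      aperiodic = {t for t in triples if len(set(t)) > 1}  # drop AAA, CCC, ...
      classes = {min(t[i:] + t[:i] for i in range(3)) for t in aperiodic}
      print(len(classes))  # 20

      # A small comma-free code (not maximal): 'A' marks the reading frame.
      code = {"A" + y + z for y in "CGU" for z in "CGU"}
      print(len(code), is_comma_free(code))  # 9 True
      ```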

  3. Mar 2019
    1. Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.
    2. Found reference to this in a review of Henry Quastler's book Information Theory in Biology.

      A more serious thing, in the reviewer's opinion, is the complete absence of contributions dealing with information theory and the central nervous system, which may be the field par excellence for the use of such a theory. Although no explicit reference to information theory is made in the well-known paper of W. McCulloch and W. Pitts (1943), the connection is quite obvious. This is made explicit in the systematic elaboration of the McCulloch-Pitts' approach by J. von Neumann (1952). In his interesting book J. T. Culbertson (1950) discussed possible neural mechanisms for recognition of visual patterns, and particularly investigated the problems of how greatly a pattern may be deformed without ceasing to be recognizable. The connection between this problem and the problem of distortion in the theory of information is obvious. The work of Anatol Rapoport and his associates on random nets, and especially on their applications to rumor spread (see the series of papers which appeared in this Journal during the past four years), is also closely connected with problems of information theory.

      Electronic copy available at: http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf

  4. Mar 2018
    1. Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. The mutation that caused a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information—meaning better prediction—you are likely to use that information to have an edge in survival.

      There are some plants with huge amounts of DNA compared to their "peers"; perhaps these would make interesting test cases for experimenting with this idea.
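
      The averaging argument can be seen in a toy selection model (my own sketch, not Adami's actual information measure): score each genome by how many bits match an arbitrary "environment" string, a crude proxy for information about the environment, and watch mutation plus selection drive the score up.

      ```python
      import random

      random.seed(1)

      TARGET = [random.randint(0, 1) for _ in range(50)]  # the "environment"

      def fitness(genome):
          # Information proxy: bits that correctly "predict" the environment.
          return sum(g == t for g, t in zip(genome, TARGET))

      def mutate(genome, rate=0.02):
          return [1 - g if random.random() < rate else g for g in genome]

      # Random initial population, then repeated mutation + truncation selection.
      pop = [[random.randint(0, 1) for _ in range(50)] for _ in range(100)]
      for generation in range(200):
          pop.sort(key=fitness, reverse=True)
          survivors = pop[:50]
          pop = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

      print(fitness(max(pop, key=fitness)))  # well above the random baseline of ~25
      ```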

  5. Jul 2017
    1. maybe there’s more that you can get for free

      Most of what's here in this article (and likely in the underlying papers) sounds to me to have been heavily influenced by the writings of W. Loewenstein and S. Kauffman. They've laid out some models/ideas that need more rigorous testing and work, and this seems like a reasonable start to the process.

      The "get for free" phrase itself is very S. Kauffman in my mind. I'm curious how many times it appears in his work.

    2. Any claims that it has to do with biology or the origins of life, he added, are “pure and shameless speculations.”

      Some truly harsh words from his former supervisor? Wow!

    3. The situation changed in the late 1990s, when the physicists Gavin Crooks and Chris Jarzynski derived “fluctuation theorems” that can be used to quantify how much more often certain physical processes happen than reverse processes. These theorems allow researchers to study how systems evolve — even far from equilibrium.

      look at these papers
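
      For reference, the two results mentioned (Jarzynski 1997; Crooks 1999), with $W$ the work done on the system, $\Delta F$ the free-energy difference, and $\beta = 1/k_B T$:

      ```latex
      \frac{P_F(+W)}{P_R(-W)} = e^{\beta (W - \Delta F)} \quad \text{(Crooks)}
      \qquad
      \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F} \quad \text{(Jarzynski)}
      ```

      The ratio in the Crooks relation is exactly the "how much more often forward than reverse" quantity described above, and averaging it recovers the Jarzynski equality.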

  6. May 2016
    1. “DNA as Information” Theme issue compiled and edited by Cartwright, J.H.E., Giannerini, S., & Gonzalez, D.L. Philosophical Transactions of the Royal Society A 374 (2016).

      Dig this up and read it

    2. Criticality may be everywhere.

      This seems very similar to S. Kauffman's thesis in At Home in the Universe.