76 Matching Annotations
  1. Jul 2022
  2. bafybeicho2xrqouoq4cvqev3l2p44rapi6vtmngfdt42emek5lyygbp3sy.ipfs.dweb.link
    1. The aim of the present paper is to propose a radical resolution to this controversy: we assume that mind is a ubiquitous property of all minimally active matter (Heylighen, 2011). It is in no way restricted to the human brain—although that is the place where we know it in its most concentrated form. Therefore, the extended mind hypothesis is in fact misguided, because it assumes that the mind originates in the brain, and merely “extends” itself a little bit outside in order to increase its reach, the way one’s arm extends itself by grasping a stick. While ancient mystical traditions and idealist philosophies have formulated similar panpsychist ideas (Seager, 2006), the approach we propose is rooted in contemporary science—in particular cybernetics, cognitive science, and complex systems theory. As such, it strives to formulate its assumptions as precisely and concretely as possible, if possible in a mathematical or computational form (Heylighen, Busseniers, Veitas, Vidal, & Weinbaum, 2012), so that they can be tested and applied in real-world situations—and not just in the thought experiments beloved by philosophers

      The proposal is for a more general definition of the word mind, which includes the traditional usage when applied to the human mind, but extends far beyond that into a general property of nature herself.

      So in Heylighen's definition, mind is a property of matter, but of all MINIMALLY ACTIVE matter, not just brains. In this respect, Heylighen's approach has early elements of the Integrated Information Theory (IIT) of Koch & Tononi.

    1. there was an interesting paper that came out, which I cited in my paper number one, that was looking at this question of what is an individual, and they were looking at it from an information theory standpoint. They came up with a theory (they call it the information theory of individuality; it's at the bottom of the slide there) and they say, basically, that an individual is a process, just what we've been talking about before, that propagates information from the past into the future. That implies information flow, it implies a cognitive process, it implies anticipation of the future, and it probably implies action. And this thing that is an individual is a layered, hierarchical individual: you can draw a circle around anything, in a certain sense, and call it an individual under certain definitions, if you want to define what its Markov blanket is. Our cells are individuals, our tissues are individuals (the liver, say, is an individual), a human is an individual, a family is an individual, and it just keeps expanding outward from there: a society is an individual. None of those levels has any kind of inherent preference; everything is an individual, layered, interacting, overlapping individuals. The idea of an individual is really just where you want to draw your circle, and then you can talk about an individual at whatever level you want. So that's all about information; it's all about processing information.

      The "individual" is therefore scale and dimension dependent. There are so many ways to define an individual depending on the scale you are looking at and your perspective.

      Information theory of individuality addresses this aspect.

    1. Take extreme care how you may conflate and differentiate (or don't) the ideas of "information" and "knowledge". Also keep in mind that the mathematical/physics definition of information is wholly divorced from any semantic meanings it may have for a variety of receivers which can have dramatically different contexts which compound things. I suspect that your meaning is an

      Take extreme care how you may conflate and differentiate (or don't) the ideas of "information" and "knowledge". Also keep in mind that the mathematical/physics definition of information is wholly divorced from any semantic meanings it may have for a variety of receivers which can have dramatically different contexts which compound things.

      It's very possible that the meaning you draw from it is an eisegetical one to the meaning which Eco assigns it.

  3. Jun 2022
    1. William James’s self-assessment: “I am no lover of disorder, but fear to lose truth by the pretension to possess it entirely.”
  4. May 2022
    1. Brine, Kevin R., Ellen Gruber Garvey, Lisa M. Gitelman, Steven J. Jackson, Virginia Jackson, Markus Krajewski, Mary Poovey, et al. “Raw Data” Is an Oxymoron. Edited by Lisa M. Gitelman. Infrastructures. MIT Press, 2013. https://mitpress.mit.edu/books/raw-data-oxymoron.

    1. Scott, I'll spend some more in-depth time with it shortly, but in a quick skim of topics I pleasantly notice a few citations of my own work. Perhaps I've done a poor job communicating about wikis, but from what I've seen from your work thus far I take much the same view of zettelkasten as you do. Somehow though I find that you're quoting me in opposition to your views? While you're broadly distinguishing against the well-known Wikipedia, and rightly so, I also broadly consider (unpublished) and include examples of small personal wikis and those within Ward Cunningham's FedWiki space, though I don't focus on them in that particular piece. In broad generalities most of these smaller wikis are closer to the commonplace and zettelkasten traditions, though as you point out they have some structural functional differences. You also quote me as someone in "information theory" in a way that indicates context collapse. Note that my distinctions and work in information theory relate primarily to theoretical areas in electrical engineering, physics, complexity theory, and mathematics as it relates to Claude Shannon's work. It very specifically does not relate to my more humanities focused work within intellectual history, note taking, commonplaces, rhetoric, orality, or memory. In these areas, I'm better read than most, but have no professional title(s). Can't wait to read the entire piece more thoroughly...

  5. Apr 2022
    1. The book was reviewed in all major magazines and newspapers, sparking what historian Ronald Kline has termed a “cybernetics craze,” becoming “a staple of science fiction and a fad among artists, musicians, and intellectuals in the 1950s and 1960s.”

      This same sort of craze also happened with Claude Shannon's The Mathematical Theory of Communication, which helped to bolster Wiener's take.

  6. Mar 2022
    1. Melvin Vopson has proposed an experiment involving particle annihilation that could prove that information has mass, and by Einstein's mass-energy equivalence, information is also energy. If true, the experiment would also show that information is one of the states of matter.

      The experiment doesn't need a particle accelerator, but instead uses slow positrons at thermal velocities.

      Melvin Vopson is an information theory researcher at the University of Portsmouth in the United Kingdom.

      A proof that information has mass (or is energy) may explain the idea of dark matter. Vopson's rough calculations indicate that 10^93 bits of information would explain all of the “missing” dark matter.

      Vopson's 2022 AIP Advances paper would indicate that the smallest theoretical digital bits, presuming they are stable and exist on their own, would become the smallest known building blocks of matter.

      The width of digital bits today is between 10 and 30 nanometers. Smaller physical bits could mean more densely packed storage devices.


      Vopson proposes that a positron-electron annihilation should produce energy equivalent to the masses of the two particles. It should also produce an extra dash of energy: two infrared, low-energy photons of a specific wavelength (predicted to be about 50 microns), as a direct result of erasing the information content of the particles.

      The mass-energy-information equivalence principle Vopson proposed in his 2019 AIP Advances paper assumes that a digital information bit is not just physical, but has a “finite and quantifiable mass while it stores information.” This very small mass is 3.19 × 10^-38 kilograms at room temperature.

      For example, if you erase one terabyte of data from a storage device, it would decrease in mass by 2.5 × 10^-25 kilograms, a mass so small that it can only be compared to the mass of a proton, which is about 1.67 × 10^-27 kilograms.

      In 1961, Rolf Landauer first proposed the idea that a bit is physical and has a well-defined energy. When one bit of information is erased, the bit dissipates a measurable amount of energy.
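
      A quick back-of-the-envelope check of these numbers, using Landauer's bound for the energy of one bit and Einstein's mass-energy equivalence to convert it to mass (my own sketch, not code from Vopson's papers):

      ```python
      import math

      K_B = 1.380649e-23   # Boltzmann constant, J/K
      C = 2.99792458e8     # speed of light, m/s

      def bit_mass(temperature_k: float) -> float:
          """Mass of one stored bit under the mass-energy-information
          equivalence: m = k_B * T * ln(2) / c^2 (Landauer energy / c^2)."""
          return K_B * temperature_k * math.log(2) / C**2

      m = bit_mass(300)                    # room temperature, ~300 K
      print(f"{m:.3g} kg per bit")         # ~3.19e-38 kg, matching Vopson
      print(f"{m * 8e12:.3g} kg per TB")   # 1 TB = 8e12 bits, ~2.6e-25 kg
      ```

      The numbers reproduce the figures quoted above to within rounding.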

    1. This hierarchical system ensures accuracy, rigour and competency of information.

      Hierarchical systems of knowledge in Indigenous cultures help to ensure rigor, competency, and most importantly accuracy of knowledge passed down from generation to generation.

    1. https://www.linkedin.com/pulse/incorrect-use-information-theory-rafael-garc%C3%ADa/

      A fascinating little problem. The bigger question is how can one abstract this problem into a more general theory?

      How many questions can one ask? How many groups could things be broken up into? What is the effect on the number of objects?
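
      These questions have a clean information-theoretic core: identifying one object among n, when every question splits the remaining candidates into k equal groups, takes ceil(log_k(n)) questions. A minimal sketch of that relationship (my own framing, not from the linked post):

      ```python
      from math import ceil, log2

      def questions_needed(n_objects: int, n_groups: int = 2) -> int:
          """Minimum number of questions to single out one of n_objects,
          when each question splits the candidates into n_groups equal
          parts: ceil(log(n_objects) / log(n_groups))."""
          return ceil(log2(n_objects) / log2(n_groups))

      questions_needed(64)      # 6 binary (yes/no) questions
      questions_needed(64, 4)   # 3 four-way questions
      questions_needed(100)     # 7: doubling the objects costs one question
      ```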

  7. Feb 2022
    1. Together: responsive, inline “autocomplete” pow­ered by an RNN trained on a cor­pus of old sci-fi stories.

      I can't help but think: what if one used their own collected corpus of ideas, based on their ever-growing commonplace book, to create a text generator? Then by taking notes, highlighting other work, and doing your own work, you're creating a corpus of material that's eminently interesting to you. This also means that by subsuming text over time in making your own notes, the artificial intelligence will more likely also be using your own prior thought patterns to make something that, from an information theoretic standpoint, looks and sounds more like you. It would have your "hand" so to speak.
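
      A toy version of this idea: even a word-level Markov chain over one's own notes already produces text with one's own "hand"; an RNN would just elaborate on this skeleton (all names here are my own, purely illustrative):

      ```python
      import random
      from collections import defaultdict

      def build_model(corpus: str, order: int = 2):
          """Map each n-gram of `order` words to the words that follow it."""
          words = corpus.split()
          model = defaultdict(list)
          for i in range(len(words) - order):
              model[tuple(words[i:i + order])].append(words[i + order])
          return model

      def generate(model, seed: tuple, length: int = 20) -> str:
          """Walk the chain from a seed n-gram, sampling observed successors."""
          out = list(seed)
          for _ in range(length):
              successors = model.get(tuple(out[-len(seed):]))
              if not successors:
                  break
              out.append(random.choice(successors))
          return " ".join(out)

      # In practice `notes` would be the full text of one's commonplace book.
      notes = "the cat sat on the mat and the cat ran off the mat"
      model = build_model(notes)
      ```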

    1. And the best ideas are usually the ones we haven’t anticipatedanyway.

      If the best ideas are the ones we haven't anticipated, how are we defining "best"? Most surprising from an information theoretic perspective? One which creates new frontiers of change? One which subsumes or abstracts prior ideas within it? Others?

  8. Jan 2022
    1. https://english.elpais.com/science-tech/2022-01-14/a-spanish-data-scientists-strategy-to-win-99-of-the-time-at-wordle.html

      Story of a scientist trying to optimize for solutions of Wordle.

      Nothing brilliant here. Depressing that the story creates a mythology around algorithms as the solution rather than delving a bit into the math and science of information theory to explain why this solution is the correct one.

      Desperately missing from the discussion are second and third order words that would make useful guesses to further reduce the solution space for actual readers.

    2. The letters of “aeros” include the five most frequent letters used in English (as Edgar Allan Poe pointed out in the cryptographic challenge included in his famous short story The Gold-Bug)

      "Orate" and "aeros" are respectively the best words to start with when playing Wordle.

    3. “It makes perfect sense,” says Moro from his home in Boston. “For the game to be a success, it needs to be simple and playable, and picking the most common terms means that in the end, we all get it right in just a few tries.”

      Esteban Moro

      For games to be a success they need to meet a set of Goldilocks conditions: they should be simple enough to learn to play and win, but complex enough to still be challenging.

      How many other things in life need this sort of balance between simplicity and complexity to be successful?

      Is there an information theoretic statement that bounds this mathematically? What would it look like for various games?

    4. Cross-referencing the correct answers from previous Wordles with a body of the most commonly used English terms, Moro confirmed that Wardle chooses frequently used words in English, something the game’s inventor also pointed out in his interview with The New York Times, which mentioned that he avoided rare words.

      Wordle specifically chooses more common words which cuts back drastically on the complexity of the game.

    1. https://www.youtube.com/watch?v=z3Tvjf0buc8

      graph thinking

      • intuitive
      • speed, agility
      • adaptability

      ; graph thinking : focuses on relationships to turn data into information and uses patterns to find meaning

      property graph data model

      • relationships (connectors with verbs which can have properties)
      • nodes (have names and can have properties)

      Examples:

      • Purchase recommendations for products in real time
      • Fraud detection

      Use for dependency analysis
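
      The property graph data model above can be sketched in a few lines: nodes and relationships both carry properties, and relationship types read as verbs (all names here are illustrative, not from the talk):

      ```python
      # Nodes have names/labels and can have properties.
      nodes = {
          "alice": {"label": "Customer", "name": "Alice"},
          "bob":   {"label": "Customer", "name": "Bob"},
          "book":  {"label": "Product", "title": "Graph Databases"},
      }

      # Relationships are connectors with verbs, and can have properties too.
      edges = [
          ("alice", "PURCHASED", "book", {"date": "2022-01-15"}),
          ("bob",   "VIEWED",    "book", {"date": "2022-01-20"}),
      ]

      def neighbors(node_id: str, verb: str) -> list[str]:
          """Follow relationships of a given type outward from a node."""
          return [dst for src, v, dst, _ in edges
                  if src == node_id and v == verb]
      ```

      Traversals like `neighbors` are the primitive behind the listed examples: recommendations, fraud detection, and dependency analysis are all pattern-matching over these relationships.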

  9. Dec 2021
    1. One of the most basic presuppositions of communication is that the partners can mutually surprise each other.

      A reasonably succinct summary of Claude Shannon's 1948 paper A Mathematical Theory of Communication. By 1981 the idea had firmly ensconced itself in the vernacular, and it would have been familiar to Luhmann, as much of systems theory grew out of the prior generation's communication theory.

  10. Oct 2021
  11. Sep 2021
    1. “With whistling, it was more like, let’s see what people did naturally to simplify the signal. What did they keep?” she says.
    2. In practice, almost every whistled tonal language chooses to use pitch to encode the tones.

      Why is pitch encoding of tones more prevalent in tonal languages? What is the efficiency and outcome of the speech and the information that can be encoded?

    3. Whistlers of tonal languages thus face a dilemma: Should they whistle the tones, or the vowels and consonants? “In whistling, you can produce only one of the two. They have to choose,” says Meyer.

      Non-tonal speech is easy to transfer into whistled language, but tonal languages have to choose between whistling the tones or the vowels and consonants, as one can only produce one of the two with whistling.

      What effect does this tell us about the information content and density of languages, particularly tonal languages and whistling?

  12. Aug 2021
    1. Normally, thousands of rabbits and guinea pigs are used and killed, in scientific laboratories, for experiments which yield great and tangible benefits to humanity. This war butchered millions of people and ruined the health and lives of tens of millions. Is this climax of the pre-war civilization to be passed unnoticed, except for the poetry and the manuring of the battle fields, that the “poppies blow” stronger and better fed? Or is the death of ten men on the battle field to be of as much worth in knowledge gained as is the life of one rabbit killed for experiment? Is the great sacrifice worth analysing? There can be only one answer—yes. But, if truth be desired, the analysis must be scientific.

      Idea: Neural net parameter analysis but with society as the 'neural net' and the 'training examples' things like industrial accidents, etc. How many 'training examples' does it take to 'learn' a lesson, and what can we infer about the rate of learning from these statistics?

  13. Jul 2021
    1. Here by "learning" is meant understanding more, not remembering more information that has the same degree of intelligibility as other information you already possess.

      A definition of learning here. Is this the thing that's missing from my note above?

    2. The first sense is the one in which we speak of ourselves as reading newspapers, magazines, or anything else that, according to our skill and talents, is at once thoroughly intelligible to us. Such things may increase our store of information, but they cannot improve our understanding, for our understanding was equal to them before we started. Otherwise, we would have felt the shock of puzzlement and perplexity that comes from getting in over our depth—that is, if we were both alert and honest.

      Here they're comparing reading for information and reading for understanding.

      How do these two modes relate to Claude Shannon's notions of information (surprise) and of semantics (the meaning of the communication itself)? Are there other pieces we're not tacitly including here? It feels like there's another piece we're overlooking.

    1. These criteria – surprise, serendipity, information and inner complexity

      These criteria – surprise, serendipity, information and inner complexity – are the criteria any communication has to meet.

      An interesting thesis about communication. Note that Luhmann worked in general systems theory. I'm curious if he was working in cybernetics as well?

    2. Irritation: basically, without surprise or disappointment there’s no information. Both partners have to be surprised in some way to say communication takes place.

      This is a basic tenet of information theory. Interesting to see it appear in a work on writing.
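
      That tenet has a precise form in Shannon's framework: the information content of an event is its surprisal, -log2(p), so an event that is certain carries no information at all. A minimal illustration:

      ```python
      from math import log2

      def surprisal(p: float) -> float:
          """Shannon information content (in bits) of an event with
          probability p: I(p) = -log2(p)."""
          return -log2(p)

      surprisal(1.0)    # 0.0 bits: no surprise, no information
      surprisal(0.5)    # 1.0 bit: a fair coin flip
      surprisal(1 / 32) # 5.0 bits: rarer events are more informative
      ```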

  14. May 2021
    1. These “Songline” stories are ancient, exhibit little variation over long periods of time, and are carefully learned and guarded by the Elders who are its custodians [7].

      What is the best way we could test and explore error correction and overwriting in such a system from an information theoretic standpoint?

  15. Apr 2021
    1. A reproduction of Carroll’s notes on his number alphabet will be found in Warren Weaver’s article “Lewis Carroll: Mathematician,” in Scientific American for April 1956.)

      I need to track down this reference and would love to see what Weaver has to say about the matter.

      Certainly Weaver would have spoken of this with Claude Shannon (or he'd have read it).

  16. Mar 2021
    1. He introduces the idea of the apophatic: what we can't put into words, but is important and vaguely understood. This term comes from Orthodox theology, where people defined god by saying what it was not.

      Too often as humans we're focused on what is immediately in front of us and not what is missing.

      This same thing plagues our science in that we're only publishing positive results and not negative results.

      From an information theoretic perspective, we're throwing away half (or more?) of the information we're generating. We might be able to go much farther much faster if we were keeping and publishing all of our results in better fashion.

      Is there a better word for this negative information? #openquestions

  17. Feb 2021
    1. The main purpose of this book is to go one step forward, not only to use the principle of maximum entropy in predicting probability distributions, but to replace altogether the concept of entropy with the more suitable concept of information, or better yet, the missing information (MI).

      The purpose of this textbook

    2. There are also a few books on statistical thermodynamics that use information theory such as those by Jaynes, Katz, and Tribus.

      Books on statistical thermodynamics that use information theory.

      Which textbook of Jaynes is he referring to?

    3. Levine, R. D. and Tribus, M. (eds) (1979), The Maximum Entropy Principle, MIT Press, Cambridge, MA.

      Book on statistical thermodynamics that uses information theory, mentioned in Chapter 1.

    4. Katz, A. (1967), Principles of Statistical Mechanics: The Informational Theory Approach, W. H. Freeman, London.

      Book on statistical thermodynamics that uses information theory.

  18. Jan 2021
  19. Nov 2020
  20. Oct 2020
    1. The notion that counting more shapes in the sky will reveal more details of the Big Bang is implied in a central principle of quantum physics known as “unitarity.” Unitarity dictates that the probabilities of all possible quantum states of the universe must add up to one, now and forever; thus, information, which is stored in quantum states, can never be lost — only scrambled. This means that all information about the birth of the cosmos remains encoded in its present state, and the more precisely cosmologists know the latter, the more they can learn about the former.
    1. Social scientists, on the other hand, have focused on what ties are more likely to bring in new information, which are primarily weak ties (Granovetter 1973), and on why weak ties bring new information (because they bridge structural holes (Burt 2001), (Burt 2005)).
    1. Found reference to this in a review of Henry Quastler's book Information Theory in Biology.

      A more serious thing, in the reviewer's opinion, is the complete absence of contributions dealing with information theory and the central nervous system, which may be the field par excellence for the use of such a theory. Although no explicit reference to information theory is made in the well-known paper of W. McCulloch and W. Pitts (1943), the connection is quite obvious. This is made explicit in the systematic elaboration of the McCulloch-Pitts' approach by J. von Neumann (1952). In his interesting book J. T. Culbertson (1950) discussed possible neural mechanisms for recognition of visual patterns, and particularly investigated the problems of how greatly a pattern may be deformed without ceasing to be recognizable. The connection between this problem and the problem of distortion in the theory of information is obvious. The work of Anatol Rapoport and his associates on random nets, and especially on their applications to rumor spread (see the series of papers which appeared in this Journal during the past four years), is also closely connected with problems of information theory.

      Electronic copy available at: http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf

    1. Similar to my recent musings (link coming soon) on the dualism of matter vs information, I find that the real beauty may lie precisely in the complexity of their combination.

      There's a kernel of an idea hiding in here that I want to come back and revisit at a future date.

  21. Aug 2020
  22. Jul 2020
  23. Jun 2020
  24. May 2020
  25. Apr 2020
  26. Nov 2019
    1. The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.

      Circle back around and read this when it comes out.

      Similarly, these other references should be an interesting read as well.

  27. Sep 2019
    1. He is now intending to collaborate with Bourne on a series of articles about the find. “Having these annotations might allow us to identify further books that have been annotated by Milton,” he said. “This is evidence of how digital technology and the opening up of libraries [could] transform our knowledge of this period.”
    2. “Not only does this hand look like Milton’s, but it behaves like Milton’s writing elsewhere does, doing exactly the things Milton does when he annotates books, and using exactly the same marks,” said Dr Will Poole at New College Oxford.

      The discussion of the information theoretic idea of "hand" is interesting here, particularly as it relates to the "hand" of annotation and how it was done in other settings by the same person.

  28. Apr 2019
    1. Digital sociology needs more big theory as well as testable theory.

      Here I might posit that Cesar Hidalgo's book Why Information Grows (MIT, 2015) has some interesting theses about links between people and companies which could be extrapolated up to "societies of linked companies". What could we predict about how those will interact based on the underlying pieces? Is it possible that we see other emergent complex behaviors?

  29. Mar 2019
    1. Engelbart insisted that effective intellectual augmentation was always realized within a system, and that any intervention intended to accelerate intellectual augmentation must be understood as an intervention in a system. And while at many points the 1962 report emphasizes the individual knowledge worker, there is also the idea of sharing the context of one’s work (an idea Vannevar Bush had also described in “As We May Think”), the foundation of Engelbart’s lifelong view that a crucial way to accelerate intellectual augmentation was to think together more comprehensively and effectively. One might even rewrite Engelbart’s words above to say, “We do not speak of isolated clever individuals with knowledge of particular domains. We refer to a way of life in an integrated society where poets, musicians, dreamers, and visionaries usefully co-exist with engineers, scientists, executives, and governmental leaders.” Make your own list.
  30. Jan 2019
    1. By examining information as a product of people’s contingent choices, rather than as an impartial recording of unchanging truths, the critically information-literate student develops an outlook toward information characterized by a robust sense of agency and a heightened concern for justice.

      It seems like there's still a transfer problem here, though. There seems to be an assertion that criticality will be inherently cross-domain, but I'm not clear why that should be true. Why would the critical outlook not remain domain-specific? (To say "if it does, then it isn't critical" seems like a tautology.)

  31. Nov 2018
    1. I had begun to think of social movements’ abilities in terms of “capacities”—like the muscles one develops while exercising but could be used for other purposes like carrying groceries or walking long distances—and their repertoire of pro-test, like marches, rallies, and occupations as “signals” of those capacities.

      I find it interesting that she's using words from information theory like "capacities" and "signals" here. It reminds me of the thesis of Cesar Hidalgo's Why Information Grows and his ideas about links. While within the social milieu, links may be easier to break with new modes of communication, what most protesters won't grasp or have the time and patience for is the recreation of new links to create new institutions for rule. As seen in many war-torn countries, this is the most difficult part. Similarly, campaigning is easy; governing is much harder.

      As an example: The US government's breaking of the links of military and police forces in post-war Iraq made their recovery process far more difficult because all those links within the social hierarchy and political landscape proved harder to reconstruct.

    1. Understanding Individual Neuron Importance Using Information Theory

      I also came across this paper a few days ago and immediately saved and downloaded it!

      Using information theory, it discusses the influence of mutual information, classification efficiency, and related quantities within the network.

    2. Understanding Convolutional Neural Network Training with Information Theory

      A paper to read carefully: understanding CNN training from an information theory point of view.

  32. Sep 2018