1,007 Matching Annotations
  1. Last 7 days
    1. If this fits your style and you don’t get any value out of having cards with locators like 3a4b/65m1, then don’t do that useless (for you) make-work. Make sure your system is working for you and you’re not working for your system.

      Risks of replicating physical attributes in digital systems

      This article makes so much sense, but this sentence more than any other. As librarians will know, a physical book can only be put in one place on a shelf...you can't realistically replicate a book and put it in groupings with all like-minded books. The call number was invented to bring organization to the physical space, and the card catalog was invented to have a way for representations of the books—cards!—to be interfiled in many places to help with finding the book. Luhmann's card numbering sequence was the first thing I dropped when reading about Zettelkasten, and those who insist on that mechanism for their digital slip boxes are artificially constraining their electronic systems with a physical-world limitation.

    1. "PDF is where documents go to die. Once something is in PDF, it's like a roach motel for data."

      —Chris Pratley, Microsoft Office's general manager (in TechRadar, 2012)

      obvious bias here on the part of Pratley...

      Oddly, even if this were true, I'm not seeing patterns in the wild by which Microsoft products are helping to dramatically accelerate the distribution and easy re-use of data within documents. Perhaps it's happening within companies or organizations to some extent, but it's not happening within the broader commons of the internet.


      If .pdfs are where information goes to die, then perhaps tools like Hypothes.is are meant to help resurrect that information?

  2. Aug 2022
    1. https://www.napkin.one/

      Yet another collection app that belies the work of taking, making, and connecting notes.

      Looks pretty and makes a promise, but how does it actually deliver? How much work and curation is involved? What are the outputs at the other end?

    1. I like to think of thoughts as streaming information, so I don’t need to tag and categorize them as we do with batched data. Instead, using time as an index and sticky notes to mark slices of info solves most of my use cases. Graph notebooks like Obsidian think of information as batched data. So you have a set of notes (samples) that you try to aggregate, categorize, and connect. Sure there’s a use case for that: I can’t imagine a company wiki presented as streaming info! But I don’t think it aids me in how I usually think. When thinking with pen and paper, I prefer managing streamed information first, then converting it into batched information later— a blog post, documentation, etc.

      There's an interesting dichotomy between streaming information and batched data here, but it isn't well delineated and doesn't add much to the discussion as a result. Perhaps distilling it down may help? There's a kernel of something useful here, but it isn't immediately apparent.

      Relation to stock and flow or the idea of the garden and the stream?

    1. Come back and read these particular texts; they look interesting with respect to my work on orality, early "religion", secrecy, and information spread:<br /> - Ancient practices removed from their lineage lose their meaning - In spiritual practice, secrecy can be helpful but is not always necessary

      timestamp

    1. Politique documentaire (documentary policy): the set of objectives and processes governing the management of information, including the acquisition policy, the preservation policy, and the policy for mediating the collections. The documentary policy is an integral and essential part of an institution's plan, allowing it to fulfill the organization's missions and meet users' expectations.
    1. History and Foundations of Information Science

      This series of books focuses on the historical approach or theoretical approach to information science and seeks a broader interpretation of what we consider as information (i.e., information is in the eye of the beholder, be it sets of data, scholarly publications, works of art, material objects, or DNA samples), and an emphasis upon how people access and interact with this information.

      https://mitpress.mit.edu/books/series/history-and-foundations-information-science

    1. http://cluster.cis.drexel.edu/~cchen/talks/2011/ICSTI_Chen.pdf

      The Nature of Creativity: Mechanism, Measurement, and Analysis<br /> Chaomei Chen, Ph.D.<br /> Editor in Chief, Information Visualization<br /> College of Information Science and Technology, Drexel University<br /> June 7‐8, 2011

      Randomly ran across while attempting to source Randall Collins quote from https://hypothes.is/a/8e9hThZ4Ee2hWAcV1j5B9w

    1. OCLC began automated catalog card production in 1971, when the shared cataloging system first went online. Card production increased to its peak in 1985, when OCLC printed 131 million. At peak production, OCLC routinely shipped 8 tons of cards each week, or some 4,000 packages. Card production steadily decreased since then as more and more libraries began replacing their printed cards with electronic catalogs. OCLC has printed more than 1.9 billion catalog cards since 1971.
    1. There's also a good chance the DNP encourages people to spend non-significant amounts of time journaling and writing notes they never look back on.

      While writing notes into a daily note page may be useful to give them a quick place to live, a note that isn't revisited is likely one that shouldn't have been made at all.

      Tools for thought need to do a better job of staging ideas for follow-up and additional work. Leaving notes orphaned on a daily notes page may help in the quick capture process, but one needs reminders about them, means of finding them, and potential means of improving them.

      If they're swept away continuously, then they only serve the sort of functionality of cleaning out of ideas that morning pages do. It's bad enough to have a massive scrap heap that looks and feels like work, but it's even worse to have it spread out among hundreds or thousands of separate files.

      Does digital search fix this issue entirely? Or does it just push off the work to later, when it won't be done either?

    1. I like to imagine all the thoughts and ideas I’ve collected in my system of notes as a forest. I imagine it as three-dimensional, because the trains of thought I’ve been working on for some time look like trees, with branches of argument, point, and counterpoint and leaves of source-based evidence. Actually, the forest is four-dimensional, because it changes over time, growing as I add more to it. A piece of output I make using this forest of thoughts is like a path through the woods. It’s a one-dimensional narrative or interpretation that starts at one point, moves in a line or an arc (sometimes a zig-zag) through the woods, touching some but not all of the trees and leaves. I like this imagery, because it suggests there are many ways to move through the forest.
  3. Jul 2022
    1. including history, science, and other content that could build the knowledge and vocabulary they need to understand both written texts and the world around them?

      I have created a first-year student module to think about the information needs students have themselves, e.g. knitters, genealogists, gamers, foodies, etc.

      I'm more interested in students understanding "how" they evaluate information. Are there tools built into websites, e.g. comment sections, thumbs up/down, # of subscribers, etc.?

    1. Citing Pliny’s “no book so bad,” Gesner made a point of accumulating information about all the texts he could learn about, barbarian and Christian, in manuscript and in print, extant and not, without separating the good from the bad: “We only wanted to list them, and we have left to others free selection and judgment.”202
    1. I think it’s often an issue for people when they first become note-makers: an anxiety about getting the “right” stuff out of a book, or even “all the stuff”. I don’t think this is completely possible, and I think it’s increasingly less possible, the better the book.

      In the 1400s-1600s it was a common desire to excerpt all the value of books, and attempts were made, though they were ultimately futile. This seems to be a recurring desire.


      Often having a simple synopsis and notes isn't as useful, as it may not spark the same sort of creativity and juxtaposition of ideas a particular reader might have had within their own context.


      Some have said that "content is king". I've previously thought that "context is king". Perhaps content and context end up ruling as joint monarchs.

    1. WYSIWYG assumes there is only one useful representation of the information: that of the final printed report. Although we are not arguing against a print preview function, even when the goal is to produce a printed document, it may be useful to have a different representation when preparing the document. For example, we may want to see formatting symbols or margin outlines, or it may be useful to see index terms assembled in the margin while we are composing.
    1. The current state of web browsers is particularly damning from this perspective. Web browsers have access to such a treasure trove of valuable, often well-structured information about what we learn and how we think, what interests we have, and who we talk to. Rather than trying to take that information and let us build workflows out of them, browsers remain a strictly utilitarian tool – a rectangular window into documents and apps that play dumb, ignorant of the valuable information that transits through them every day.
    2. The vision of the web browser that excites me the most is one where the browser is a medium for creativity, learning, and thinking deeply that spans personal and public spheres of knowledge. This browser will be fast and private, of course, but more than that, this browser will let me explore the Web from the comfort of my own garden of information. It’ll break the barriers between different apps that silo our information to help us search and remember across all of them. It’ll use a deeper machine understanding of language and images to summarize articles, highlight important ideas, and remind me what I should remember. It’ll let me do it all together with other people in a way that feels like real presence, rather than just avatars on screen.
    3. Most existing tools and browsers treat web pages and pieces of notes like complete black boxes of information. These tools know how to scan for keywords, and they have access to the metadata we use to tag our information like hashtags and timestamps, but unlike a human, most current tools don’t try to peer into the contents of our notes or reading materials and operate with an understanding of our information. With ratcheting progress in machine understanding of language, I think we have good high-quality building blocks to start building thinking mediums and information systems that operate with some understanding of our ideas themselves, rather than simply “this is some text”.
    4. So, what are the building blocks of a powerful thinking medium that can actually help us think, more than just recall? For a tool that has such broad access to information like a web browser, I think a critical piece of the puzzle is better machine understanding of language.
    5. If we want to organize information that flows through our lives, we simply can’t restrict our design space to be a single product or app. No matter how great a note-taking app is, my emails are going to live outside of it. No matter how seamless the experience in my contacts app, my text conversations are going to live outside of it. We should acknowledge this fundamental limitation of the “note-taking app” approach to building tools for thought, and shift our focus away from building such siloed apps to designing something that lives on top of these smaller alcoves of personal knowledge to help us organize it regardless of its provenance. If we want to build a software system that can organize information across apps, what better place to start than the one piece of software that has access to it all, where most of us live and work nearly all the time? I think the browser is a rich place to build experiments in this space, and my personal experience building Monocle and Revery support this idea so far.
    6. In the browser of the future, the boundary between my personal information and the wider Web’s information landscape will blur, and a smarter, more literate browser will help me navigate both worlds with a deeper understanding of what I’m thinking about and what I want to discover. It’ll remind me of relevant bookmarks when I’m taking lecture notes; it’ll summarize and pick out interesting details from long news articles for me; it’ll let me search across the Web and my personal data to remember more and learn faster.
    1. I think we’ve barely begun to tap the potential of designing the Web as a built environment and a work of architecture around our digital living spaces. When we design the One Hypertext for people, not just for information, the Web becomes something more than a resource. It becomes the Metaverse.
    2. Today, we find a different set of metaphors for the Web. We don’t go on the Internet as much, or log in and log out anymore. Instead, we’re online or offline, connected or disconnected. “Online” is a state of being, not a place to be. (When was the last time you closed your web browser?) We spend most of our time on the Web not browsing or exploring, but subjecting ourselves to the flow of information that the Internet now levies at our attention.
    1. Participation inequality plagues the internet. Only 1% of people on any given platform create new content. 99% only consume. Many think that's just what happens when human communities scale. But maybe it's just what happens in an internet built for advertising. Consider that: all of the internet's interfaces—social feeds, search bars, news sites—are optimized for consumption. Interfaces for creating new content, particularly knowledge, are antiquated. Word-processors look like they did forty years ago, disconnected from the internet and any content you might write about. Which means: writing requires hours of searching and sorting. Knowledge creation is painful for the people best at it, and inaccessible to most others. What would it take to make writing accessible? Maybe: a totally new kind of interface. Ideally: a word-processor that pulls in the information you need as you type. And what would that take? Unprecedented NLP to make connections as you type, a word-processor redesigned around links, and a highly technical team focused on a non-technical market. If achieved, it would: save writers hours, make knowledge production accessible to anyone who knows how to type, and lay the groundwork for a mainstream knowledge economy.
    1. So we end up with the problem usually referred to as ‘information overload’ but I prefer to call notification literacy. As I say in the linked post, there are preventative measures and mitigating actions you can take as an individual to help ‘increase your notification literacy’. There are also ways of facilitating communities that can help; for example, if the platform you’re using has threaded comments, insisting that people use them instead of a confusing, undifferentiated stream of messages. You can also ensure you have a separate chat or channel just for important announcements.
    2. I was particularly interested in Chris Aldrich’s observation that knowledge workers tend to talk in spatial terms about their work, especially if distracted. Following interruptions by colleagues or phone calls at work, people may frequently ask themselves “where was I?” more frequently than “what was I doing?” This colloquialism isn’t surprising as our memories for visual items and location are much stronger than actions. Knowledge workers will look around at their environments for contextual clues for what they were doing and find them in piles of paper on their desks, tabs in their computer browser, or even documents (physical or virtual) on their desktops.
    1. During the seventeenth century, this associative view vanished and was replaced by more literally descriptive views simply of the thing as it exists in itself.

      The associative emblematic worldview prevalent prior to the seventeenth century began to disappear within Western culture as the rise of the early modern period and the beginning of the scientific revolution began to focus on more descriptive modes of thought and representation.


      Have any researchers done specific work on this shift from emblematic to the descriptive? What examples do they show which support this shift? Any particular heavy influences?

      This section cites:<br /> William B. Ashworth, Jr. “Natural History and the Emblematic World View,” in Reappraisals of the Scientific Revolution, David C. Lindberg and Robert S. Westfall, eds #books/wanttoread<br /> which could be a place to start.


      Note that this same shift from associative and emblematic to descriptive and pedantic coincides not only with the rise of the scientific revolution but also with the effects of rising information overload in a post-Gutenberg world as well as the education reforms of Ramus (late 1500s) et al. as well as the beginning of the move away from scholasticism.


      Is there any evidence to support claims that this worldview stemmed from pagan traditions and cultures and not solely the art of memory traditions from ancient Greece? Could it have been pagan traditions which held onto these and they were supplemented and reinforced by ecclesiastical forces which used the Greek traditions?


      Examples of emblematic worldview: - particular colors of flowers meant specific things (red = love, yellow = friendship, etc.). We still have these or remnants of them. - Saints had their associative animals and objects. - Anniversary gifts had associative meanings (paper, silver, gold, etc.). We still have remnants of these things, though most are associated with wealth (gold, silver, platinum anniversaries). When did this tradition actually start? - What were the associative meanings of rabbits, turtles, and other animals which appear frequently in manuscript marginalia? (We have the example of the bee (Latin: apes), which was frequently used this way as being associated with the idea of imitation.) - Other broad categories?

  4. bafybeicho2xrqouoq4cvqev3l2p44rapi6vtmngfdt42emek5lyygbp3sy.ipfs.dweb.link
    1. The aim of the present paper is to propose a radical resolution to this controversy: we assume that mind is a ubiquitous property of all minimally active matter (Heylighen, 2011). It is in no way restricted to the human brain—although that is the place where we know it in its most concentrated form. Therefore, the extended mind hypothesis is in fact misguided, because it assumes that the mind originates in the brain, and merely “extends” itself a little bit outside in order to increase its reach, the way one’s arm extends itself by grasping a stick. While ancient mystical traditions and idealist philosophies have formulated similar panpsychist ideas (Seager, 2006), the approach we propose is rooted in contemporary science—in particular cybernetics, cognitive science, and complex systems theory. As such, it strives to formulate its assumptions as precisely and concretely as possible, if possible in a mathematical or computational form (Heylighen, Busseniers, Veitas, Vidal, & Weinbaum, 2012), so that they can be tested and applied in real-world situations—and not just in the thought experiments beloved by philosophers.

      The proposal is for a more general definition of the word mind, which includes the traditional usage when applied to the human mind, but extends far beyond that into a general property of nature herself.

      So in Heylighen's definition, mind is a property of matter, but of all MINIMALLY ACTIVE matter, not just brains. In this respect, Heylighen's approach has early elements of the Integrated Information Theory (IIT) of Koch & Tononi.

  5. bafybeibbaxootewsjtggkv7vpuu5yluatzsk6l7x5yzmko6rivxzh6qna4.ipfs.dweb.link
    1. Information overload, a form of mental bombardment that (Shenk, 1998) aptly characterized as “data smog”, since it obscures rather than enlightens, while damaging health by increasing stress levels. It is typically accompanied by a barrage of interruptions or distractions caused by incoming emails, phone calls, text messages, tweets or “status updates”.

      Information overload can also increasingly be characterized by bad actors flooding information spaces with false, distracting, or provocative information, creating conflict and abandonment and diluting the efficacy of the space for learning and collaboration.

    1. there was an interesting paper that came out i cited in the in my in my in paper number one that uh was 01:15:53 looking at this question of what is an individual and they were looking at it from an information theory standpoint you know so they came up with this they came up with this uh uh theory uh and i think do they have a name for 01:16:09 it yeah uh information theory of individuality and they say base it's done at the bottom of the slide there and they say basically that uh you know an individual is a process just what's 01:16:20 what we've been talking about before that propagates information from the past into the future so that you know implies uh information flow and implies a cognitive process uh it implies anticipation of 01:16:33 the future uh and it probably implies action and this thing that is an individual it is not like it is a layered hierarchical individual it's like you can draw a circle around 01:16:45 anything you know in a certain sense and call it an individual under you know with certain uh definitions you know if you want to define what its markov blanket is 01:16:57 but uh but you know we are we are we are our cells are individuals our tissues liver say is an individual um a human is an individual a family is an 01:17:12 individual you know and it just keeps expanding outward from there the society is an individual so it really it's none of those are have you know any kind of inherent preference 01:17:24 levels there's no preference to any of those levels everything's an individual layered interacting overlapping individuals and it's just it's just a it's really just a the idea of an individual is just where 01:17:36 do you want to draw your circle and then you can you know then you can talk about an individual at whatever level you want so so that's all about information so it's all about processing information right

      The "individual" is therefore scale and dimension dependent. There are so many ways to define an individual depending on the scale you are looking at and your perspective.

      Information theory of individuality addresses this aspect.

    1. In our current “information age,” or so the story goes, we suffer in new and unique ways. 
    1. Take extreme care how you may conflate and differentiate (or don't) the ideas of "information" and "knowledge". Also keep in mind that the mathematical/physics definition of information is wholly divorced from any semantic meanings it may have for a variety of receivers, which can have dramatically different contexts which compound things.

      It's very possible that the meaning you draw from it is an eisegetical one to the meaning which Eco assigns it.

  6. Jun 2022
    1. Knowledge is the only resource that gets better and more valuable the more it multiplies.

      He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.<br /> —Thomas Jefferson

    2. We’ve been conditioned to view information through a consumerist lens: that more is better, without limit.
    3. The ability to intentionally and strategically allocate our attention is a competitive advantage in a distracted world. We have to jealously guard it like a valuable treasure.

      It would seem that the word treasure here is being used to modify one's attention. Historically in books about "knowledge work" or commonplacing, the word was used with respect to one's storehouse of knowledge itself and not one's attention. Some of the effect is the result of the break in historical tradition being passed down from one generation to another. It's also an indication that the shift in value has moved not from what one knows or has, but that the attention itself is more valued now, even in a book about excerpting, thinking, and keeping knowledge!

      Oh how far we have fallen!

      It's also an indication of the extremes of information overload we're facing that the treasure is attention and not the small tidbits of knowledge and understanding we're able to glean from the massive volumes we face on a daily basis.

    4. Marianne Freiberger, “Information is surprise,” Plus Magazine, March 24, 2015, https://plus.maths.org/content/information-surprise

      What a god-awful reference for Claude Shannon. Obviously he found it in his reading through serendipity and didn't bother chasing down the original quote for publication...
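      As an aside on the idea itself, since "information is surprise" is Shannon's core insight: the information carried by an event is its self-information, I(x) = −log₂ p(x) bits, so rarer events carry more information. A minimal sketch (the function name is my own, not from the article):

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information ("surprisal") in bits: I(x) = -log2 p(x).
    Rare events (small p) are more surprising and carry more information."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

print(self_information(1.0))     # a certain event: 0.0 bits of surprise
print(self_information(0.5))     # a fair coin flip: 1.0 bit
print(self_information(1 / 1024))  # a 1-in-1024 event: 10.0 bits
```

      Note that this is the information of a single outcome; Shannon entropy is the average of this quantity over a whole distribution.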

    1. For Jerome Bruner, the place to begin is clear: “One starts somewhere—where the learner is.”

      One starts education with where the student is. But mustn't we also inventory what tools and attitudes the student brings? What tools beyond basic literacy do they have? (Usually we presume literacy, but rarely go beyond this, and the lack of literacy is too often viewed as failure, particularly as students get older.) Do they have motion, orality, song, visualization, memory? How can we focus on also utilizing these tools and modalities for learning?

      Link to the idea that Donald Trump, a person who managed to function as a business owner and president of the United States, was less than literate, yet still managed to function in modern life as an example. In fact, perhaps his focus on oral modes of communication, and the blurrable lines in oral communicative meaning (see [[technobabble]]) was a major strength in his communication style as a means of rising to power?

      Just as the populace has lost non-literacy based learning and teaching techniques so that we now consider the illiterate dumb, stupid, or lesser than, Western culture has done this en masse for entire populations and cultures.

      Even well-meaning educators in the edtech space that are trying to now center care and well-being are completely missing this piece of the picture. There are much older and specifically non-literate teaching methods that we have lost in our educational toolbelts that would seem wholly odd and out of place in a modern college classroom. How can we center these "missing tools" as educational technology in a modern age? How might we frame Indigenous pedagogical methods as part of the emerging third archive?

      Link to: - educational article by Tyson Yunkaporta about medical school songlines - Scott Young article "You should pay for Tutors"


      aside on serendipity

      As I was writing this note I had a toast notification pop up in my email client with the arrival of an email by Scott Young with the title "You should pay for Tutors", which prompted me to add a link to this note. It reminds me of a related idea that Indigenous cultures likely used information and knowledge transfer as a means of payment (Lynne Kelly, Knowledge and Power). I have commented previously on the serendipity of things like autocorrect or sparks of ideas while reading as a means of interlinking knowledge, but I don't recall experiencing this sort of serendipity leading to combinatorial creativity as a means of linking ideas.

    1. Imagine that this is the back of your eye, okay? And these are two projections from the world. 00:03:22 They're identical in every single way. Identical in shape, size, spectral content. They are the same, as far as your eye is concerned. And yet they come from completely different sources. The one on the right comes from a yellow surface, in shadow, oriented facing the left, viewed through a pinkish medium. 00:03:48 The one on the left comes from an orange surface, under direct light, facing to the right, viewed through sort of a bluish medium. Completely different meanings, giving rise to the exact same retinal information. And yet it's only the retinal information that we get. So how on Earth do we even see? So if you remember anything in this next 18 minutes, remember this: 00:04:13 that the light that falls onto your eye, sensory information, is meaningless, because it could mean literally anything. And what's true for sensory information is true for information generally.

      "...sensory information is meaningless because it could mean literally anything. And what's true for sensory information is true for information generally."

      This is a profound statement and needs to be fully unpacked to understand the ramifications for Deep Humanity.

    1. Encyclopedia of Library and Information Science, Volume 29: Stanford University Libraries to System Analysis. By Allen Kent, Harold Lancour, Jay E. Daily

      Contains significant section on SYNTOL.

    1. William James’s self-assessment: “I am no lover of disorder, but fear to lose truth by the pretension to possess it entirely.”
    1. the man's eight videos posted to TikTok last Thursday and Friday generated much attention. Combined, the posts garnered more than 2 million views and were recirculated on YouTube and Instagram by large-scale content creators reaching exponentially more people

      When parody is consumed as news, and the fake news spreads.

    1. Most of what I've seen on information overload frames it in a negative, but the book Thriving on Overload by Ross Dawson seems to flip the script to frame it as a positive thing.

      <small><cite class='h-cite via'> <span class='p-author h-card'> Marshall Kirkpatrick </span> in Marshall Kirkpatrick on Twitter: "@jerrymichalski @rossdawson Jerry, Ross, re collective intelligence have you seen https://t.co/iOM908iCCt from @ggiacomelli? In listening to your episode of Thriving on Overload podcast, it comes to mind. Especially the 300pg practitioners guide https://t.co/rziczNsXxt cc @vgr" / Twitter (<time class='dt-published'>06/03/2022 22:25:26</time>)</cite></small>

  7. May 2022
    1. I explore how moves towards ‘objective’ data as the basis for decision-making orientated teachers’ judgements towards data in ways that worked to standardise judgement and exclude more multifaceted, situated and values-driven modes of professional knowledge that were characterised as ‘human’ and therefore inevitably biased.

      But, aren't these multifaceted, situated, and values-driven modes also constituted of data? Isn't everything represented by data? Even 'subjective' understanding of the world is articulated as data.

      Is there some 'standard' definition of data that I'm not aware of in the context of this domain?

    1. Brine, Kevin R., Ellen Gruber Garvey, Lisa M. Gitelman, Steven J. Jackson, Virginia Jackson, Markus Krajewski, Mary Poovey, et al. “Raw Data” Is an Oxymoron. Edited by Lisa M. Gitelman. Infrastructures. MIT Press, 2013. https://mitpress.mit.edu/books/raw-data-oxymoron.

Information becomes knowledge—personal, embodied, verified—only when we put it to use. You gain confidence in what you know only when you know that it works. Until you do, it’s just a theory.

      motivational...

    2. Adopting the habit of knowledge capture has immediate benefits for our mental health and peace of mind. We can let go of the fear that our memory will fail us at a crucial moment. Instead of jumping at every new headline and notification, we can choose to consume information that adds value to our lives and consciously let go of the rest.

      Immediate knowledge capture by highlighting, annotating, or other means when taking notes can help to decrease cognitive load. This is similar to other productivity methods like quick logging within a bullet journal system, writing morning pages, or Getting Things Done (GTD). By putting everything down in one place, you can free your mind of the constant need to remember dozens of things. This frees up your working memory to decrease stress as you know you've captured the basic idea for future filtering, sorting, and work at a later date.

    3. Content tends to pile up all around us even without our involvement. There are probably emails filling your inbox, updates popping up in your social media feeds, and notifications proliferating on your smartphone as you’re reading this

      Sources of information overload:

      • emails
      • social media updates/notifications
      • proliferation of browser tabs

      Forte misses this last one

    4. the lessons you will find within these pages are built on timeless and unchanging principles

      The ideas behind knowledge management are largely timeless, but they are far from unchanging. They have evolved slowly over 2000+ years until we broadly threw many of them away in the early 20th century.

      One only need read a few pages of Ann M. Blair's Too Much to Know: Managing Scholarly Information before the Modern Age to see some of the changes and shifts within the space from the 1400s on.

    5. For the first time in history, we have instantaneous access to the world’s knowledge.

      While we may have the impression of instant access to the world's knowledge, this is really far from the truth. It's all there, but being able to search through it for what we want or being able to find or generate insight from it involves a massive mountain of hidden work that no one really wants to do in practice.

    1. Scott, I'll spend some more in-depth time with it shortly, but in a quick skim of topics I was pleased to notice a few citations of my own work. Perhaps I've done a poor job communicating about wikis, but from what I've seen of your work thus far, I take much the same view of zettelkasten as you do. Somehow, though, I find that you're quoting me in opposition to your views?

      While you're broadly distinguishing against the well-known Wikipedia, and rightly so, I also broadly consider (unpublished) and include examples of small personal wikis and those within Ward Cunningham's FedWiki space, though I don't focus on them in that particular piece. In broad generalities, most of these smaller wikis are closer to the commonplace and zettelkasten traditions, though as you point out they have some structural and functional differences.

      You also quote me as someone in "information theory" in a way that indicates context collapse. Note that my distinctions and work in information theory relate primarily to theoretical areas in electrical engineering, physics, complexity theory, and mathematics as they relate to Claude Shannon's work. They very specifically do not relate to my more humanities-focused work within intellectual history, note taking, commonplaces, rhetoric, orality, or memory. In those areas I'm better read than most, but hold no professional title(s).

      Can't wait to read the entire piece more thoroughly...

    1. Ideally, skilled readers organized notes into personal “arks of study,” or data chests. Vincent Placcius’s De arte excerpendi contains an engraving of a note cabinet, or scrinia literaria, in which notes are attached to hooks and hung on bars according to thematic organization, as well as various drawers for the storage of note paper, hooks, and possibly writing supplies. Both Placcius and later Leibniz built such contraptions, though none survives today. While these organizational tools cannot be directly linked to modern computers, it is difficult not to compare them. Placcius’s design looks strikingly like the old punch-card computation machines that date from the 1880s, and the first mainframes, such as the 1962 IBM 7090.

      "arks of study" being used as early data chests or stores is a fascinating conceptualization

    1. In part in order to heighten his praise of Aldus as the ideal printer, Erasmus noted by contrast that most printers, given the absence of regulations, “fill the world with pamphlets and books [that are] . . . foolish, ignorant, malignant, libellous, mad, impious and subversive; and such is the flood that even things that might have done some good lose all their goodness.”198 The overabundance of bad books drowned out even any good bits that might be present among them.

      And we now say these same sorts of things about the internet and social media.

    1. Everyone is overloaded with information thanks to the digital revolution, so—the PKM people tell us—we need new software and systems to survive and thrive.

      Information overload goes back much further in history than the digital revolution. I might argue that information managers have tamed large portions of the beast already and we've forgotten many of the methods and as a result we're now either reinventing or rediscovering them as we transfer them to the digital space.

    1. Information would get lost in the game of telephone between the client, the designer, and the person managing the project.


    1. Pathogenic germline variants in DICER1 underlie an autosomal dominant, pleiotropic tumor-predisposition disorder.

      gene name: DICER1
      PMID (PubMed ID): 33570641
      HGNCID: n/a
      Inheritance Pattern: autosomal dominant
      Disease Entity: benign and malignant tumor mutation
      Mutation: somatic
      Zygosity: heterozygous
      Variant: n/a
      Family Information: n/a
      Case: people of all sexes, ages, ethnicities and races participated
      CasePresentingHPOs: individuals with DICER1-associated tumors or pathogenic germline DICER1 variants were recruited to participate
      CasePreviousTesting: n/a
      gnomAD: n/a

    1. The biggest mistake—and one I’ve made myself—is linking with categories. In other words, it’s adding links like we would with tags. When we link this way we’re more focused on grouping rather than connecting. As a result, we have notes that contain many connections with little to no relevance. Additionally, we add clutter to our links which makes it difficult to find useful links when adding links. That being said, there are times when we might want to group some things. In these cases, use tags or folders.

      Most people born since the advent of the filing cabinet and the computer have spent a lifetime using a hierarchical folder-based mental model for their knowledge. For greater value and efficiency one needs to get away from this model and move toward linking individual ideas together in ways that they can more easily be re-used.

      To accomplish this many people use an index-based method that uses topical or subject headings which can be useful. However after even a few years of utilizing a generic tag (science for example) it may become overwhelmed and generally useless in a broad search. Even switching to narrower sub-headings (physics, biology, chemistry) may show the same effect. As a result one will increasingly need to spend time and effort to maintain and work at this sort of taxonomical system.

      The better option is to directly link related ideas to each other. Each atomic idea will have a much more limited set of links to other ideas, which will create a much more valuable set of interlinks for later use. Limiting your links at this level will prove far more useful over time.

      One of the biggest benefits of the physical system used by Niklas Luhmann was that each card was required to be placed next to at least one card in a branching tree of knowledge (or a whole new branch had to be created.) Though he often noted links to other atomic ideas there was at least a minimum link of one on every idea in the system.

      For those who have difficulty deciding where to place a new idea within their system, it can certainly be helpful to add a few broad keywords of the type one might put into an index. This may help you in linking your individual ideas as you can do a search of one or more of your keywords to narrow down the existing ones within your collection. This may help you link your new idea to one or more of those already in your system. This method may be even more useful and helpful for those who are starting out and have fewer than 500-1000 notes in their system and have even less to link their new atomic ideas to.

      For those who have graphical systems, it may be helpful to look for one or two individual "tags" in a graph structure to visually see the number of first degree notes that link to them as a means of creating links between atomic ideas.

      To have a better idea of a hierarchy of value within these ideas, it may help to have some names to delineate this hierarchy of potential links. Perhaps we might borrow some well-established ideas from library and information science to guide us? There's a system in library science that uses a hierarchical set up using the phrases "broader terms", "narrower terms", "related terms", and "used for" (think alias or also known as) for cataloging books and related materials.

      We might try using tags or index-like links in each of these levels to become more specific, but let's append "connected atomic ideas" to the bottom of the list.

      Here's an example:

      • broader terms (BT): [[physics]]
      • narrower terms (NT): [[mechanics]], [[dynamics]]
      • related terms (RT): [[acceleration]], [[velocity]]
      • used for (UF) or aliases:
      • connected atomic ideas: [[force = mass * acceleration]], [[$$v^2=v_0^2+2aΔx$$]]

      Chances are that within a particular text, one's notes may connect and interrelate to each other quite easily, but it's important to also link those ideas to other ideas that are already in your pre-existing body of knowledge.


      See also: Thesaurus for Graphic Materials I: Subject Terms (TGM I) https://www.loc.gov/rr/print/tgm1/ic.html
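      The BT/NT/RT/UF hierarchy described above can be pictured as a simple data structure. Here's a minimal sketch in Python; the class and field names are hypothetical illustrations, not taken from any particular note taking tool:

      ```python
      # A minimal sketch of typed links between notes, modeling the
      # broader/narrower/related/used-for hierarchy described above.
      # All class and field names here are hypothetical.
      from dataclasses import dataclass, field

      @dataclass
      class Note:
          title: str
          broader: list = field(default_factory=list)    # BT
          narrower: list = field(default_factory=list)   # NT
          related: list = field(default_factory=list)    # RT
          aliases: list = field(default_factory=list)    # UF
          connected: list = field(default_factory=list)  # connected atomic ideas

      physics = Note("physics",
                     narrower=["mechanics", "dynamics"],
                     related=["acceleration", "velocity"],
                     connected=["force = mass * acceleration"])
      print(physics.narrower)  # ['mechanics', 'dynamics']
      ```

      The point of separate link types is that a search over `connected` yields directly reusable ideas, while `broader`/`narrower` only narrow down where to look.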

    1. ReconfigBehSci [@SciBeh]. (2021, December 20). This thread is sobering and informative with respect to what overloading health services means in terms of individual experience...worth popping into google translate fir non-German speakers [Tweet]. Twitter. https://twitter.com/SciBeh/status/1472983739890348045

  8. Apr 2022
    1. DICER1 syndrome is an autosomal-dominant, pleiotropic tumor-predisposition disorder

      Gene Name: DICER1
      PMID: 30715996
      HGNCID: Not in document
      Inheritance Pattern: Autosomal Dominant
      Disease Entity: Pleiotropic Tumor-Predisposition Disorder
      Mutation: Pathogenic Germline Variants
      Mutation Type: Missense
      Zygosity: Not in document
      Variant: Not in document
      Family Information: An individual was found who had family members who were also affected by this mutation. Because of this, those family members were also chosen to participate in this study.
      Case: The study was done on more than one individual; more than half of the individuals were female.

    1. (2) ReconfigBehSci on Twitter: “@alexdefig ‘reveal myself’? Really? This is the account of https://t.co/pIBRAjcOpt, the human being typing this right now is Ulrike Hahn. What is the relevance of that to the very specific piece of information I sought to inject in your thread for the benefit of readers?” / Twitter. (n.d.). Retrieved April 29, 2022, from https://twitter.com/SciBeh/status/1444359889313153024

    1. ReconfigBehSci [@SciBeh]. (2021, October 2). @alexdefig I literally came to respond to your subtweet because I retweeted your thread with this information account, which also means it gets indexed in the http://SciBeh.org database. Along with high quality pro arguments- because this is what that account is for. [Tweet]. Twitter. https://twitter.com/SciBeh/status/1444363041710096386

    1. doi: https://doi.org/10.1038/d41586-021-02346-4

      https://www.nature.com/articles/d41586-021-02346-4

      Oddly this article doesn't cover academia.edu but includes ResearchGate which has a content-sharing partnership with the publisher SpringerNature.

      Matthews, D. (2021). Drowning in the literature? These smart software tools can help. Nature, 597(7874), 141–142. https://doi.org/10.1038/d41586-021-02346-4

    2. In 2019, Smolyansky co-founded Connected Papers, one of a new generation of visual literature-mapping and recommendation tools.

      https://www.connectedpapers.com/

      https://twitter.com/ConnectedPapers


      Something about the name Connected Papers reminds me of the same sort of linking name that Manfred Kuehn gave to his note taking software ConnectedText.

    3. Every time Eddie Smolyansky had a few moments to himself, he tried to stay abreast of new publications in his field. But by 2016, the computer-vision researcher, who is based in Tel Aviv, Israel, was receiving hundreds of automated literature recommendations per day. “At some point the bathroom breaks weren’t enough,” he says. The recommendations were “way too much, and impossible to keep up with”.Smolyansky’s ‘feed fatigue’ will be familiar to many academics. Academic alert tools, originally designed to focus attention on relevant papers, have themselves become a hindrance, flooding the inboxes of scientists worldwide.

      An example in the literature of information overload in 2016. Here it's called "feed fatigue" and relates to an automated feed of literature recommendations.

    1. SmartDevelopmentFund [@SmartDevFund]. (2021, November 2). A kit that enables users to disable misinformation: The #DigitalEnquirerKit empowers #journalists, civil society #activists and human rights defenders at the #COVID19 information front-line. Find out more: Http://sdf.d4dhub.eu #smartdevelopmentfund #innovation #Infopowered https://t.co/YZVooirtU9 [Tweet]. Twitter. https://twitter.com/SmartDevFund/status/1455549507949801472

    1. The DICER1 syndrome is an autosomal dominant tumor-predisposition disorder associated with pleuropulmonary blastoma, a rare pediatric lung cancer

      GeneName: DICER1
      PMID (PubMed ID): 30672147 (PMCID: PMC6418698)
      HGNCID: Not listed
      Inheritance Pattern: Autosomal Dominant
      Disease Entity: Cancer; benign and malignant tumors including pleuropulmonary blastoma, cystic nephroma, Sertoli-Leydig cell tumors, multinodular goiter, thyroid cancer, rhabdomyosarcoma, and pineoblastoma.
      Mutation: Somatic missense variation
      Mutation type: missense
      Zygosity: None stated
      Variant: unregistered
      Family Information: Characterize germline variants in familial early-onset colorectal cancer patients; the observation of germline DICER1 variation with uterine corpus endometrial carcinoma merits additional investigation.
      CasePresentingHPOs: uterine and rectal cancers in germline mutation

    1. Prof Francois Balloux [@BallouxFrancois]. (2021, December 9). This may have sounded somewhat naïve in early 2020, but by now, I would have expected that anyone with an interest in covid-19 might have acquired some basic notions in infectious disease epidemiology. 1/ [Tweet]. Twitter. https://twitter.com/BallouxFrancois/status/1469063480334561285

    1. Nick Sawyer, MD, MBA, FACEP [@NickSawyerMD]. (2022, January 3). The anti-vaccine community created a manipulated version of VARES that misrepresents the VAERS data. #disinformationdoctors use this data to falsely claim that vaccines CAUSE bad outcomes, when the relationship is only CORRELATED. Watch this explainer: Https://youtu.be/VMUQSMFGBDo https://t.co/ruRY6E6blB [Tweet]. Twitter. https://twitter.com/NickSawyerMD/status/1477806470192197633

    1. (20) James 💙 Neill—😷 🇪🇺🇮🇪🇬🇧🔶 on Twitter: “The domain sending that fake NHS vaccine consent hoax form to schools has been suspended. Excellent work by @martincampbell2 and fast co-operation by @kualo 👍 FYI @fascinatorfun @Kit_Yates_Maths @dgurdasani1 @AThankless https://t.co/pbAgNfkbEs” / Twitter. (n.d.). Retrieved November 22, 2021, from https://twitter.com/jneill/status/1442784873014566913

    1. Book review

      Cook, Trevor. “Review: Blair, Ann M. Too Much to Know: Managing Scholarly Information Before the Modern Age. New Haven: Yale University Press, 2010. Pp. Xv, 397. ISBN 978-0-300-11251-1 (Hardcover) $45.” Renaissance and Reformation 33, no. 4 (December 12, 2011): 109–11. https://doi.org/10.33137/rr.v33i4.15975.

      Note that they've accidentally used the word "in" instead of "Before" in the title of the book.

    1. In the information age, filtering systems, driven by algorithms and artificial intelligence (AI), have become increasingly prominent, to such an extent that most of the information you encounter on the internet is now rearranged, ranked and filtered in some way. The trend towards a more customised information landscape is a result of multiple factors. But advances in technology and the fact that the body of information available online grows exponentially are important contributors.

      And, in fact, the filtering systems are driven by signals from the searcher, not signals from the content. Past behavior (and user profiling), current location (IP address recognition), device type (a signal of user intent and/or socioeconomic status), and other user-specific attributes are being used to attempt to offer users the information that the provider thinks the user is looking for.

    1. In these sessions, students didn’t listen to a description of computer science concepts, or engage in a discussion about the work performed by computer scientists; they actually did the work themselves, under the tutors’ close supervision.

      The process seen in cognitive apprenticeships seems more akin to the sorts of knowledge transfer done in primary oral indigenous cultures by passing down stories and performing (song, dance, art, etc.) knowledge.

      It shouldn't be surprising that cognitive apprenticeships work well given their general use by oral cultures over millennia.

      link to:
      • Writing out answers will show gaps in knowledge
      • Performing actions will show gaps in knowledge

    1. One of his last works, the Aurifodina, “The Mine of All Arts and Sciences, or the Habit of Excerpting,” was printed in 1638 (in 2,000 copies) and in another fourteen editions down to 1695 and spawned abridgments in Latin (1658), German (1684), and English.

      Simply the word abridgement here leads me to wonder:

      Was the continual abridgement of texts and excerpting of small pieces for later use a partial cause of the loss of the arts of memory? Ars excerpendi ad infinitum? It's possible that this, along with the growth of note taking practices, continual information overload, and other pressures for educational reform, swamped the prior practices.

      For evidence, take a look at William Engel's work following the arts of memory in England and Europe to see if we can track the slow demise by attrition of the descriptions and practices. What would such a study show? How might we assign values to the various pressures at play? Which was the most responsible?

      Could it have also been the slow, inexorable death of many of these classical means of taking notes as well? How did we lose the practices of excerpting for creating new ideas? Where did the commonplace books go? Where did the zettelkasten disappear to?

      One author, with a carefully honed practice and the extant context of their life writes some brief notes which get passed along to their students or which are put into a new book that misses a lot of their existing context with respect to the new readers. These readers then don't know about the attrition happening and slowly, but surely the knowledge goes missing amidst a wash of information overload. Over time the ideas and practices slowly erode and are replaced with newer techniques which may not have been well tested or stood the test of time. One day the world wakes up and the common practices are no longer of use.

      This is potentially all the more likely because of the extremely basic ideas underpinning some of memory and note taking. They seem like such basic knowledge we're also prone to take them for granted and not teach them as thoroughly as we ought.

      How does one juxtapose this with the idea of humanist scholars excerpting, copying, and using classical texts with a specific eye toward preventing the loss of these very classical texts?

      Is this potentially the idea of having one's eye on a particular target and losing sight of other creeping effects?

      It's also difficult to remember what it was like when we ourselves didn't know something and once that is lost, it can be harder and harder to teach newcomers.

    2. On William Webster, An essay on book-keeping (1719) and on Lichtenberg’s com-parison, see Te Heesen (2005). Te Heesen also notes a case of influence in the oppo-site direction, in a cabinet of commercial samples modeled on cabinets of curiosities;see Te Heesen, (2002), 147. Zedelmaier argues that scholarly methods of informa-tion management inspired bureaucratic information management; see Zedelmaier(2004), 203. On Lichtenberg, see von Arburg (2003)

      references worth peeling apart here!! :)

    3. Note-Taking as Information Management

      cross reference her paper:

      Blair, Ann. “Note Taking as an Art of Transmission.” Critical Inquiry 31, no. 1 (September 2004): 85–107. https://doi.org/10.1086/427303.

    4. In the first vernacular bibliography Anton Francesco Doni lauded the happiness of the illiterate who were spared the “malediction of books”

      I love "malediction of books"!illi

    5. Humanist concerns about printing motivated one early appearance of the theme, in Erasmus’s famous digressive commentary on the adage festina lente (make haste slowly), first published in 1525: “Is there anywhere on earth exempt from these swarms of new books? Even if, taken out one at a time, they offered something worth knowing, the very mass of them would be a serious impediment to learning from satiety if nothing else, which can do far more damage where good things are concerned or simply from the fact that men’s minds are easily glutted and hungry for something new, and so these distractions call them away from the reading of ancient authors.” Erasmus complained here about a flood of new books because these were of lesser value than ancient texts and distracted readers from true learning. Erasmus blamed the flood of bad new books on printing.

      I'm reminded here of a similar conversation I had circa 1996 with cinematographer Caleb Deschanel who lamented with me about the increase in the number of new low quality movies available on VHS and DVD and how we both spent time watching a lot of crap instead of focusing on the auteurs and better quality cineaste experiences that were available in remastered formats and collections like the Criterion Collection.

    6. the multitudo librorum was treated as a matter of general experience
    7. A number of ancient compilations, like those of Pliny, Diogenes Laertius, and Stobaeus, were indeed valued as both sources and models in the Renaissance, and authors of miscellaneously arranged compilations invoked Aulus Gellius as the founder of that genre.

      While there are ancient compilations by writers including Pliny, Diogenes Laertius, and Stobaeus, many authors in the Renaissance credited Aulus Gellius as the founder of the genre.

    1. The book was reviewed in all major magazines and newspapers, sparking what historian Ronald Kline has termed a “cybernetics craze,” becoming “a staple of science fiction and a fad among artists, musicians, and intellectuals in the 1950s and 1960s.”

      This same sort of craze also happened with Claude Shannon's The Mathematical Theory of Communication, which helped to bolster Wiener's take.

    1. The reason for this has been discussed earlier in the context of the minimum information principle: you should always try to make sure your brain works in the exactly same way at each repetition.

      There is research that one's first guess or intuition is often correct. In a similar mode, one's first associative thought will likely be the strongest and easiest to remember. It's also more likely that the thought path will occur again and thereby make that association easier to remember in the future.

      What does this research indicate? Has anyone tested for this effect? Does it have a name? The TK effect? (And if it doesn't have one, the TK Effect is actually quite an apt name.)


      This doesn't seem to be the same definition of the minimum information principle as before.

  9. Mar 2022
    1. Melvin Vopson has proposed an experiment involving particle annihilation that could prove that information has mass, and by Einstein's mass-energy equivalence, information is also energy. If true, the experiment would also show that information is one of the states of matter.

      The experiment doesn't need a particle accelerator, but instead uses slow positrons at thermal velocities.

      Melvin Vopson is an information theory researcher at the University of Portsmouth in the United Kingdom.

      A proof that information has mass (or is energy) may explain the idea of dark matter. Vopson's rough calculations indicate that 10^93 bits of information would explain all of the “missing” dark matter.

      Vopson's 2022 AIP Advances paper would indicate that the smallest theoretical size of digital bits, presuming they are stable and exist on their own, would make them the smallest known building blocks of matter.

      The width of digital bits today is between ten and 30 nanometers. Smaller physical bits could mean more densely packed storage devices.


      Vopson proposes that a positron-electron annihilation should produce energy equivalent to the masses of the two particles. It should also produce an extra dash of energy: two infrared, low-energy photons of a specific wavelength (predicted to be about 50 microns), as a direct result of erasing the information content of the particles.

      The mass-energy-information equivalence principle Vopson proposed in his 2019 AIP Advances paper assumes that a digital information bit is not just physical, but has a “finite and quantifiable mass while it stores information.” This very small mass is 3.19 × 10^-38 kilograms at room temperature.

      For example, if you erase one terabyte of data from a storage device, it would decrease in mass by 2.5 × 10^-25 kilograms, a mass so small that it can only be compared to the mass of a proton, which is about 1.67 × 10^-27 kilograms.

      In 1961, Rolf Landauer first proposed the idea that a bit is physical and has a well-defined energy. When one bit of information is erased, the bit dissipates a measurable amount of energy.
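      As a rough sanity check on the figures above (a sketch, not taken from Vopson's paper itself): the per-bit mass follows from the Landauer energy k_B·T·ln 2 divided by c², per mass-energy equivalence.

      ```python
      import math

      # Standard physical constants (SI units)
      k_B = 1.380649e-23   # Boltzmann constant, J/K
      c = 2.99792458e8     # speed of light, m/s
      T = 300.0            # room temperature, K

      # Mass equivalent of one stored bit: E = k_B * T * ln(2), m = E / c^2
      m_bit = k_B * T * math.log(2) / c**2
      print(f"mass per bit: {m_bit:.3g} kg")   # ≈ 3.2e-38 kg

      # Mass equivalent of one terabyte (8e12 bits) of stored data
      m_tb = m_bit * 8e12
      print(f"mass per TB:  {m_tb:.3g} kg")    # ≈ 2.6e-25 kg
      ```

      The results reproduce the 3.19 × 10^-38 kg per-bit and roughly 2.5 × 10^-25 kg per-terabyte figures quoted above.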

    1. Human minds are made of memories, and today those memories have competition. Biological memory capacities are being supplanted, or at least supplemented, by digital ones, as we rely on recording—phone cameras, digital video, speech-to-text—to capture information we’ll need in the future and then rely on those stored recordings to know what happened in the past. Search engines have taken over not only traditional reference materials but also the knowledge base that used to be encoded in our own brains. Google remembers, so we don’t have to. And when we don’t have to, we no longer can. Or can we? Remembering and Forgetting in the Age of Technology offers concise, nontechnical explanations of major principles of memory and attention—concepts that all teachers should know and that can inform how technology is used in their classes. Teachers will come away with a new appreciation of the importance of memory for learning, useful ideas for handling and discussing technology with their students, and an understanding of how memory is changing in our technology-saturated world.

      How much history is covered here?

      Will mnemotechniques be covered here? Spaced repetition? Note taking methods in the commonplace book or zettelkasten traditions?