25 Matching Annotations
  1. Mar 2023
    1. Fig. 9: Normally, however, a detailed internal fine-sorting of the slip material for frequent words was worked out. Naturally, each dimension of the analysis (chronological, grammatical, semantic, graphic) could have formed the basis of its own sort order.

      Alternate sort orders for the slips for the Wb include chronological, grammatical, semantic, and graphic, but for teasing out the meanings the original sort order was sufficient. Certainly other sort orders may reveal additional subtleties.

  2. Feb 2023
    1. Folgezettel

      Do folgezettel in combination with an index help to prevent over-indexing behaviors? Or the scaling problem of categorization in a personal knowledge management space?

      Where do subject headings within a zettelkasten dovetail with the index? Where do they help relieve the idea of heavy indexing or tagging? How are the neighborhoods of ideas involved in keeping a sense of closeness while still allowing density of ideas and information?

      Digital search views into small portions of neighborhoods, as gxabbo suggested, can be a fantastic affordance. See: https://hypothes.is/a/W2vqGLYxEe2qredYNyNu1A

      For example, consider an anthropology student who intends to spend a lifetime in the subject and its many sub-areas. If they begin by tagging things with "anthropology" from the start, the category, its tags, and the ideas within their index will eventually grow without bound, to the point that their value as a search affordance within the zettelkasten (digital or analog) becomes utterly useless. Say they mitigate part of the issue by sub-categorizing pieces into cultural anthropology, biological anthropology, linguistic anthropology, archaeology, etc. This works for a while during undergraduate and graduate school, but as they specialize, these areas too will become overwhelming in terms of search and search results. The problem can continue ad infinitum through areas and sub-areas. So how can one solve it?

      Is a living, concatenating index the solution? Such an index could list anthropology with its sub-areas, each with pointers to the beginnings of threads of thought that will eventually create neighborhoods of related ideas.

      The solution is far easier when the classification is done top-down, after the fact, as in the Dewey Decimal System, where the broad areas are known and delineated in advance. But in a Luhmann-esque zettelkasten, things grow from the bottom up, which presents different difficulties from a scaling-up perspective.

      How do we classify first-, second-, and third-order effects which emerge out of the complexity of a zettelkasten?

      - Storage is a first-order affordance.
      - Memory is a first-order affordance (related to storage).
      - Using a zettelkasten for writing is a second-order affordance.
      - Productivity is a second-or-higher-order affordance (solely spending the time to save and store ideas is a drag at the first order and doesn't show value until retrieval at a later date).
      - Sparse indexing can be a useful long-term affordance in the second- or third-order space.
      - Combinatorial creativity and ideas of serendipity emerge out of at least the third order.
      - Poor organization can be a non-affordance or deterrent which results in a scrap heap.
      - Lack of a reason why can be a non-affordance or deterrent as well.
      - Cross-reference this list and continue on with other pieces and affordances.

  3. Jan 2023
  4. Dec 2022
    1. Isaac and I, with another colleague, did a little bit of work trying to look at what the Swedish policy, or indeed the UK policy, would look like if it was carried out globally, and it would look like something like two and a half degrees Centigrade of warming, if not more. [00:31:58]

      !- key point : Sweden's net-zero plan, scaled globally, would result in a world 2.5°C warmer or more

  5. Jul 2022
    1. If one looks around on the web and elsewhere, there are numerous experiments going on in a variety of things: new forms of representative democracy, new forms of decision making, new forms of economies in the sense of local digital currencies and things like that. I think all of those are an excellent resource to draw from. The task, then, is to take these ideas that are springing up all over and integrate them in a way that is functional, that can serve a community. Initially maybe the community is small, just a few thousand, but the idea is that it would grow over time, exponentially, to who knows, hundreds of thousands, millions, I don't know. So how can you take all those ideas and actually make them work? Sometimes I liken it to tinkering in the garage with an airplane: you might build a two-seater in the garage, and that's totally cool; that's maybe what the Wright brothers did, and that's fantastic. But I'm really interested in building a jumbo jet that takes 500 passengers at a time, in an hour and a half, from New York to London or wherever, and doesn't fall in the ocean. So how do you do that? How do you build an integrated system that is safe, that is resilient, that has metrics so that you can monitor progress, that has good anticipation, so that you know where this is going, what's going to happen tomorrow, and sort of what's going to happen to me? That's part of the question. So I think the challenge is to take all these ideas that are popping up all over, some of them really great ideas, and to integrate them into a coherent whole that spans every one of the (I think maybe it's six) systems that I talk about, so that they're not designed in silos. We're not just building a new economic system; we're not just building a new educational system; we're building a cognitive architecture that includes all of those. [00:40:13-00:42:36]

      John describes the synthesis he imagines: a system to curate all the existing ideas emerging everywhere into a coherent whole, and to scale the ones that show promise into the areas that can really benefit from them.

  6. Sep 2021
  7. Apr 2021
  8. Mar 2021
  9. Jan 2021
    1. Scale inevitably leads to power-law-distributed outcomes, leading to the inevitable concentration of talent and resources among a few investigators pursuing a few lines of inquiry, and their pale second-rate imitators. Through this mechanism, science at scale reinforces (and in fact, under sufficient political capture, imposes) consensus, further annihilating the possibility of the necessary revolutionary synthesis of ideas.

      The solution to any sort of global leaderboard (like Mendeley's most-read or most-cited) is to break it up into local communities: most read among your friends, or most cited within only the outer leaves of the topic tree.
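      The idea above can be sketched in a few lines: rank only by the activity of a chosen local community rather than by global counts. All names and numbers here are invented for illustration.

```python
# Hypothetical sketch of a "local leaderboard": restrict a most-cited
# ranking to a chosen community instead of ranking globally.
from collections import Counter

# Global view: one blockbuster paper dominates every global ranking.
citation_counts = {"paper_a": 900, "paper_b": 40, "paper_c": 35, "paper_d": 12}

# Who cited what, by reader (invented data).
citations_by_reader = {
    "alice": ["paper_b", "paper_c"],
    "bob":   ["paper_c", "paper_d"],
    "carol": ["paper_a"],
    "dave":  ["paper_b", "paper_c"],
}

def local_leaderboard(friends, citations_by_reader, top_n=3):
    """Rank papers only by citations from a given set of readers."""
    counts = Counter()
    for reader in friends:
        counts.update(citations_by_reader.get(reader, []))
    return counts.most_common(top_n)

# Globally paper_a dominates; among alice/bob/dave, paper_c surfaces instead.
print(local_leaderboard({"alice", "bob", "dave"}, citations_by_reader))
# → [('paper_c', 3), ('paper_b', 2), ('paper_d', 1)]
```

      The design choice is simply to parameterize the ranking by a community, so the same data yields different, locally meaningful leaderboards.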

  10. Dec 2020
  11. Aug 2020
  12. Jun 2020
  13. May 2020
    1. Why? General infrastructure simply takes time to build. You have to carefully design interfaces, write documentation and tests, and make sure that your systems will handle load. All of that is rival with experimentation, and not just because it takes time to build: it also makes the system much more rigid. Once you have lots of users with lots of use cases, it's more difficult to change anything or to pursue radical experiments. You've got to make sure you don't break things for people, or else carefully communicate and manage change. Those same varied users simply consume a great deal of time day-to-day: a fault which occurs for 1% of people will present no real problem in a small prototype, but it'll be high-priority when you have 100k users. Once this playbook becomes the primary goal, your incentives change: your goal will naturally become making the graphs go up, rather than answering fundamental questions about your system.

      The reason the conceptual architecture tends to freeze is because there is a tradeoff between a large user base and the ability to run radical experiments. If you've got a lot of users, there will always be a critical mass of complaints when the experiment blows up.
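      The quoted 1% fault-rate point can be made concrete with a little arithmetic (the prototype user count here is an invented assumption):

```python
# Illustrative arithmetic: the same 1% fault rate that is invisible in a
# prototype becomes a steady support burden at scale.
fault_rate = 0.01          # fault affects 1% of users (from the quote)
prototype_users = 50       # assumed prototype size
scaled_users = 100_000     # user count from the quote

affected_prototype = int(prototype_users * fault_rate)  # rounds down to 0
affected_scaled = int(scaled_users * fault_rate)        # 1000 users

print(affected_prototype, affected_scaled)  # → 0 1000
```

      A thousand affected users generate complaints every day; zero affected users generate none, which is why the cost only appears after scaling.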

      Secondly, it takes a lot of time to scale up. This is time that you cannot spend experimenting.

      Andy here is basically advocating remaining in Explore mode a little bit longer than is usually recommended. Doing so will increase your chances of climbing the highest peak during the Exploit mode.

    2. This is obviously a powerful playbook, but it should be deployed with careful timing because it tends to freeze the conceptual architecture of the system.

      Once a prototype gains some traction, conventional Silicon Valley wisdom says to scale it up. This, according to Andy Matuschak, has certain disadvantages. The main drawback is that it tends to freeze the conceptual architecture of the system.

  14. Apr 2020
    1. “scaling out is the only cost-effective thing”, but plenty of successful companies managed to scale up with a handful of large machines or VMs
    2. Scaling is hard if you try to do it yourself, so absolutely don't try to do it yourself. Use vendor-provided cloud abstractions like Google App Engine, Azure Web Apps, or AWS Lambda with autoscaling support enabled, if you can possibly avoid doing otherwise.

      Scaling should be delegated to vendor-provided cloud abstractions.

  15. Mar 2020
    1. I would like to make an appeal to core developers: all design decisions involving involuntary session creation MUST be made with great caution. In the case of a high-load project, avoiding creating a session for non-authenticated users is a vital strategy with a critical influence on application performance. It doesn't really make a big difference whether you use a database backend, or Redis, or whatever else; eventually, your load will be high enough that scaling further will not help anymore, and either network access to the session backend or its "INSERT" performance will become a bottleneck. In my case, it's an application with 20-25 ms response time under a 20,000-30,000 RPM load. Having to create a session for each session-less request would be critical enough to decide not to upgrade Django, or to fork and rewrite the corresponding components.
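      The load figures in the quote translate into a concrete write rate on the session backend. A rough sketch (the anonymous-traffic share is an invented assumption; the request rate is from the annotation):

```python
# Rough load arithmetic for the quoted Django scenario: if every
# unauthenticated request forces a session INSERT, the session backend
# absorbs hundreds of extra writes per second on top of normal query load.
requests_per_minute = 30_000   # upper figure from the annotation
anonymous_share = 0.9          # assumption: most traffic is unauthenticated

requests_per_second = requests_per_minute / 60            # 500 RPS
extra_session_writes = requests_per_second * anonymous_share  # ~450 writes/s

print(requests_per_second, extra_session_writes)  # → 500.0 450.0
```

      At a 20-25 ms response-time budget, hundreds of avoidable writes per second is exactly the kind of load that makes lazy session creation (only persisting a session once it is actually modified) a vital strategy.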
  16. Jul 2019
  17. Jun 2019
    1. However, this doesn't mean that min-max scaling is not useful at all! A popular application is image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB color range). Also, typical neural network algorithms require data on a 0-1 scale.

      Use min-max scaling for image processing & neural networks.
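      A minimal sketch of min-max scaling as described above, in plain Python to stay self-contained (real pipelines would typically use NumPy or scikit-learn's `MinMaxScaler`):

```python
# Min-max scaling: linearly map values so the smallest input lands at
# new_min and the largest at new_max (0-1 by default).
def min_max_scale(values, new_min=0.0, new_max=1.0):
    lo, hi = min(values), max(values)
    if hi == lo:  # constant input: avoid division by zero
        return [new_min for _ in values]
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo)
            for v in values]

# Pixel intensities on the 0-255 RGB range squeezed into the 0-1 range
# a neural network expects:
pixels = [0, 64, 128, 255]
print(min_max_scale(pixels))  # endpoints map exactly to 0.0 and 1.0
```

      Note that min-max scaling is sensitive to outliers: a single extreme pixel stretches the range and compresses everything else, which is one reason standardization is often preferred outside image processing.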

  18. Mar 2019
    1. To solve the problem of ‘scaling up’ requires ‘scaling in’ –by this we mean developing the designs and infrastructure needed to support effective use of an innovation.

      On "scaling-in" rather than "scaling-up".

  19. Sep 2018
    1. As student numbers have increased, teaching has regressed for a variety of reasons to a greater focus on information transmission and less focus on questioning, exploration of ideas, presentation of alternative viewpoints, and the development of critical or original thinking. Yet these are the very skills needed by students in a knowledge-based society.

      Related to Vijay Kumar's iron triangle. You can't increase the number of students without sacrificing quality or increasing costs.

  20. Sep 2017
  21. Aug 2017
  22. Feb 2017