425 Matching Annotations
  1. Last 7 days
    1. Furthermore, multiple coexisting or alternate mechanisms of action likely explain the clinical effects observed, such as the competitive binding of ivermectin with the host receptor-binding region of SARS-CoV-2 spike protein, as proposed in 6 molecular modeling studies.21–26

      The mechanism through which ivermectin works on SARS-CoV-2 may be competitive binding with the receptor-binding region of the SARS-CoV-2 spike protein, as proposed in 6 molecular modelling studies.

  2. Jun 2021
    1. DID infrastructure can be thought of as a global key-value database in which the database is all DID-compatible blockchains, distributed ledgers, or decentralized networks. In this virtual database, the key is a DID, and the value is a DID document. The purpose of the DID document is to describe the public keys, authentication protocols, and service endpoints necessary to bootstrap cryptographically-verifiable interactions with the identified entity.

      DID infrastructure can be thought of as a key-value database.

      The database is a virtual one, consisting of the various DID-compatible blockchains, distributed ledgers, and decentralized networks.

      The key is the DID and the value is the DID document.

      The purpose of the DID document is to hold public keys, authentication protocols and service endpoints necessary to bootstrap cryptographically-verifiable interactions with the identified entity.
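
      To make the key-value analogy concrete, here is a minimal Python sketch of DID resolution. The dict stands in for the ledgers and networks a real DID method would resolve against; the field names follow the W3C DID Core layout, but every identifier, key, and endpoint is a made-up illustration.

```python
# A minimal sketch of the DID key-value analogy. The dict below stands in for
# the blockchains/ledgers a real DID method would resolve against; the field
# names follow the W3C DID Core layout, but every identifier, key, and
# endpoint here is a made-up illustration.
DID_DATABASE = {
    "did:example:123456789abcdefghi": {            # key: the DID
        "id": "did:example:123456789abcdefghi",    # value: the DID document
        "verificationMethod": [{                   # public keys
            "id": "did:example:123456789abcdefghi#keys-1",
            "type": "Ed25519VerificationKey2020",
            "publicKeyMultibase": "z6Mk...",       # placeholder key material
        }],
        "authentication": [                        # authentication methods
            "did:example:123456789abcdefghi#keys-1"
        ],
        "service": [{                              # service endpoints
            "id": "did:example:123456789abcdefghi#agent",
            "type": "DIDCommMessaging",
            "serviceEndpoint": "https://agent.example.com/didcomm",
        }],
    }
}

def resolve(did: str) -> dict:
    """Look up a DID (the key) and return its DID document (the value)."""
    return DID_DATABASE[did]

# Bootstrapping a verifiable interaction: fetch the document, then use its
# keys and endpoints to authenticate the entity and open a channel.
doc = resolve("did:example:123456789abcdefghi")
print(doc["verificationMethod"][0]["type"], doc["service"][0]["serviceEndpoint"])
```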

    1. DigiNotar was a Dutch certificate authority owned by VASCO Data Security International, Inc.[1][2] On September 3, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar's systems.[3]

      Dutch Certificate Authority gets hacked.

    1. New Trusted Third Parties Can Be Tempting

      Many are the reasons why organizations may come to favor costly TTP based security over more efficient and effective security that minimizes the use of TTPs:

      Limitations of imagination, effort, knowledge, or time amongst protocol designers – it is far easier to design security protocols that rely on TTPs than those that do not (i.e. to fob off the problem rather than solve it). Naturally design costs are an important factor limiting progress towards minimizing TTPs in security protocols. A bigger factor is lack of awareness of the importance of the problem among many security architects, especially the corporate architects who draft Internet and wireless security standards.

      The temptation to claim the "high ground" as a TTP of choice is great. The ambition to become the next Visa or Verisign is a power trip that's hard to refuse. The barriers to actually building a successful TTP business are, however, often severe – the startup costs are substantial, ongoing costs remain high, liability risks are great, and unless there is a substantial "first mover" advantage barriers to entry for competitors are few. Still, if nobody solves the TTP problems in the protocol this can be a lucrative business, and it's easy to envy big winners like Verisign rather than remembering all the now obscure companies that tried but lost. It's also easy to imagine oneself as the successful TTP, and come to advocate the security protocol that requires the TTP, rather than trying harder to actually solve the security problem.

      Entrenched interests. Large numbers of articulate professionals make their living using the skills necessary in TTP organizations. For example, the legions of auditors and lawyers who create and operate traditional control structures and legal protections. They naturally favor security models that assume they must step in and implement the real security. In new areas like e-commerce they favor new business models based on TTPs (e.g. Application Service Providers) rather than taking the time to learn new practices that may threaten their old skills.

      Mental transaction costs. Trust, like taste, is a subjective judgment. Making such judgments requires mental effort. A third party with a good reputation, and that is actually trustworthy, can save its customers from having to do so much research or bear other costs associated with making these judgments. However, entities that claim to be trusted but end up not being trustworthy impose costs not only of a direct nature, when they breach the trust, but increase the general cost of trying to choose between trustworthy and treacherous trusted third parties.

      There are strong incentives to stick with trusted third parties

      1. It's more difficult to design protocols that work without a TTP
      2. It's tempting to imagine oneself as a successful TTP
      3. Entrenched interests — many professions depend on the TTP status quo (e.g. lawyers, auditors)
      4. Mental transaction costs — It can be mentally easier to trust a third party, rather than figuring out who to trust.
    2. The high costs of implementing a TTP come about mainly because traditional security solutions, which must be invoked where the protocol itself leaves off, involve high personnel costs. For more information on the necessity and security benefits of these traditional security solutions, especially personnel controls, when implementing TTP organizations, see this author's essay on group controls. The risks and costs borne by protocol users also come to be dominated by the unreliability of the TTP – the DNS and certificate authorities being two quite common sources of unreliability and frustration with the Internet and PKIs respectively.

      The high costs of TTPs stem mainly from the personnel costs of the traditional, centralized security solutions they require.

    3. The certificate authority has proved to be by far the most expensive component of this centralized public key infrastructure (PKI). This is exacerbated when the necessity for a TTP deemed by protocol designers is translated, in PKI standards such as SSL and S/MIME, into a requirement for a TTP. A TTP that must be trusted by all users of a protocol becomes an arbiter of who may and may not use the protocol. So that, for example, to run a secure SSL web server, or to participate in S/MIME, one must obtain a certificate from a mutually trusted certificate authority. The earliest and most popular of these has been Verisign. It has been able to charge several hundred dollars for end user certificates – far outstripping the few dollars charged (implicitly in the cost of end user software) for the security protocol code itself. The bureaucratic process of applying for and renewing certificates takes up far more time than configuring the SSL options, and the CA's identification process is subject to far greater exposure than the SSL protocol itself. Verisign amassed a stock market valuation in the 10's of billions of U.S. dollars (even before it went into another TTP business, the Internet Domain Name System (DNS), by acquiring Network Solutions). How? By coming up with a solution – any solution, almost, as its security is quite crude and costly compared to the cryptographic components of a PKI – to the seemingly innocuous assumption of a "trusted third party" made by the designers of public key protocols for e-mail and the Web.

      The most expensive (and wasteful) part of a centralized public key infrastructure (PKI) is the Certificate Authority (the Trusted Third Party).

      Verisign became a billion-dollar company by charging hundreds of dollars in subscription fees for issuing certificates, even though its security wasn't anything out of the ordinary. It also takes far longer to request a certificate than it does to configure one for actual use.

      Meanwhile the cost paid for the protocol code itself, captured implicitly in the software's price, is a mere few bucks.

    4. Personal Property Has Not and Should Not Depend On TTPs

      For most of human history the dominant form of property has been personal property. The functionality of personal property has not under normal conditions ever depended on trusted third parties. Security properties of simple goods could be verified at sale or first use, and there was no need for continued interaction with the manufacturer or other third parties (other than on occasion repair personnel after exceptional use and on a voluntary and temporary basis). Property rights for many kinds of chattel (portable property) were only minimally dependent on third parties – the only problem where TTPs were needed was to defend against the depredations of other third parties. The main security property of personal chattel was often not other TTPs as protectors but rather its portability and intimacy.

      Here are some examples of the ubiquity of personal property in which there was a reality or at least a strong desire on the part of owners to be free of dependence on TTPs for functionality or security:

      Jewelry (far more often used for money in traditional cultures than coins, e.g. Northern Europe up to 1000 AD, and worn on the body for better property protection as well as decoration)

      Automobiles operated by and house doors opened by personal keys.

      Personal computers – in the original visions of many personal computing pioneers (e.g. many members of the Homebrew Computer Club), the PC was intended as personal property – the owner would have total control (and understanding) of the software running on the PC, including the ability to copy bits on the PC at will. Software complexity, Internet connectivity, and unresolved incentive mismatches between software publishers and users (PC owners) have substantially eroded the reality of the personal computer as personal property.

      This desire is instinctive and remains today. It manifests in consumer resistance when they discover unexpected dependence on and vulnerability to third parties in the devices they use. Suggestions that the functionality of personal property be dependent on third parties, even agreed to ones under strict conditions such as creditors until a chattel loan is paid off (a smart lien) are met with strong resistance. Making personal property functionality dependent on trusted third parties (i.e. trusted rather than forced by the protocol to keep to the agreement governing the security protocol and property) is in most cases quite unacceptable.

      Personal property did not depend on trusted third parties

      For most of human history personal property did not depend on Trusted Third Parties (TTPs). To the extent that TTPs were needed at all, it was to defend property from the depredations of other third parties.

      Jewelry, automobile keys, house keys — these all show that humans had a preference for having sovereign access to their property, without relying on third parties.

      This preference remains with us today; you can see it manifest in people's anger when they discover that part of a product they bought is not actually under their control.

    5. The main security property of personal chattel was often not other TTPs as protectors but rather its portability and intimacy.

      The main security property of personal chattel was often not a Trusted Third Party (TTP), but its portability and intimacy.

    1. So, what problem is blockchain solving for identity if PII is not being stored on the ledger? The short answer is that blockchain provides a transparent, immutable, reliable and auditable way to address the seamless and secure exchange of cryptographic keys. To better understand this position, let us explore some foundational concepts.

      What problem is blockchain solving in the SSI stack?

      It is an immutable (often permissionless) and auditable way to address the seamless and secure exchange of cryptographic keys.

    1. But, as I have said many times here at AVC, I believe that business model innovation is more disruptive than technological innovation. Incumbents can adapt to and adopt new technological changes (web to mobile) way easier than they can adapt to and adopt new business models (selling software to free ad-supported software). So this new protocol-based business model feels like one of these “changes of venue” as my partner Brad likes to call them. And that smells like a big investable macro trend to me.

      Business model innovation is more disruptive than technological innovation.

    2. This is super important because the more open protocols we have, the more open systems we will have.

      Societal benefits of cryptocurrencies

      The more open protocols we have, the more open systems we have.

    1. From a comment by Muneeb Ali:

      The original Internet protocols defined how data is delivered, but not how it's stored. This led to centralization of data.

      The original Internet protocols also didn't provide end-to-end security. This led to massive security breaches. (There were other reasons for security breaches as well, but everything was based on a very weak security model to begin with.)

    2. Because we didn’t know how to maintain state in a decentralized fashion it was the data layer that was driving the centralization of the web that we have observed.

      We didn't know how to maintain state in a decentralized fashion, and this is what drove centralization.

    3. I can’t emphasize enough how radical a change this is to the past. Historically the only way to make money from a protocol was to create software that implemented it and then try to sell this software (or more recently to host it). Since the creation of this software (e.g. web server/browser) is a separate act many of the researchers who have created some of the most successful protocols in use today have had little direct financial gain. With tokens, however, the creators of a protocol can “monetize” it directly and will in fact benefit more as others build businesses on top of that protocol.

      Tokens allow protocol creators to profit from their creation, whereas in the past they would need to create an app that implemented the protocol to do so.

    4. Organizationally decentralized but logically centralized state will allow for the creation of protocols that can undermine the power of the centralized incumbents.

      Organizationally decentralized but logically centralized

    1. The important innovation provided by the blockchain is that it makes the top right quadrant possible. We already had the top left. Paypal for instance maintains a logically centralized database for its payments infrastructure. When I pay someone on Paypal their account is credited and mine is debited. But up until now all such systems had to be controlled by a single organization.

      The top right quadrant is the innovation that blockchain represents.

    2.                            organizationally centralized    organizationally decentralized
       logically centralized      e.g. Paypal                     *new* (e.g. Bitcoin)
       logically decentralized    e.g. Excel                      e.g. e-mail

      Organizationally decentralized, logically centralized

      Organizationally centralized are systems that are controlled by a single organization. Organizationally decentralized are systems that are not under control of any one entity.

      Logically decentralized are systems that have multiple databases, where participants control their own database entirely. Excel is logically decentralized. Logically centralized are systems that appear as if they have a single global database (irrespective of how it's implemented).

  3. May 2021
    1. It’s estimated there will be over 20 billion connected devices by 2020, all of which will require management, storage, and retrieval of data. However, today’s blockchains are ineffective data receptacles, because every node on a typical network must process every transaction and maintain a copy of the entire state. The result is that the number of transactions cannot exceed the limit of any single node. And blockchains get less responsive as more nodes are added, due to latency issues.

      There's a limit on how much data blockchain can handle because the nodes need to process every transaction and maintain a copy of the entire state.

    2. Another concern was the requirement for a dedicated network. The logic of blockchain is that information is shared, which requires cooperation between companies and heavy lifting to standardize data and systems. The coopetition paradox applied; few companies had the appetite to lead development of a utility that would benefit the entire industry. In addition, many banks have been distracted by broader IT transformations, leaving little headspace to champion a blockchain revolution.

      The coopetition paradox occurred in blockchain development. Companies didn't want to lead investment in technology that would benefit the entire industry.

    3. By late 2017, many people working at financial companies felt blockchain technology was either too immature, not ready for enterprise level application, or was unnecessary. Many POCs added little benefit, for example beyond cloud solutions, and in some cases led to more questions than answers. There were also doubts about commercial viability, with little sign of material cost savings or incremental revenues.

      By late 2017 many blockchain proof of concepts did not add much and the technology seemed unnecessary or too immature.

    1. The Internet was built without a way to know who and what you are connecting to. This limits what we can do with it and exposes us to growing dangers. If we do nothing, we will face rapidly proliferating episodes of theft and deception that will cumulatively erode public trust in the Internet.

      Kim Cameron posits that the internet was built without an identity layer. You have no way of knowing who and what you are connecting to.

    1. Today, the sector of the economy with the lowest IT intensity is farming, where IT accounts for just 1 percent of all capital spending. Here, the potential impact of the IoT is enormous. Farming is capital- and technology-intensive, but it is not yet information-intensive. Advanced harvesting technology, genetically modified seeds, pesticide combinations, and global storage and distribution show how complex modern agriculture has become, even without applying IT to that mix

      The sector with the lowest IT intensity is farming, where IT accounts for just 1 percent of all capital spending.

    2. The IoT creates the ability to digitize, sell and deliver physical assets as easily as with virtual goods today. Using everything from Bluetooth beacons to Wi-Fi-connected door locks, physical assets stuck in an analog era will become digital services. In a device driven democracy, conference rooms, hotel rooms, cars and warehouse bays can themselves report capacity, utilization and availability in real-time. By taking raw capacity and making it easy to be utilized commercially, the IoT can remove barriers to fractionalization of industries that would otherwise be impossible. Assets that were simply too complex to monitor and manage will present business opportunities in the new digital economy.

      IoT ushers in a device driven democracy where conference rooms, hotel rooms and cars can self-report capacity, utilization and availability in real-time.

      IoT can remove barriers to fractionalizing industries where fractionalization would otherwise be impossible.

    3. In this model, users control their own privacy and rather than being controlled by a centralized authority, devices are the master. The role of the cloud changes from a controller to that of a peer service provider. In this new and flat democracy, power in the network shifts from the center to the edge. Devices and the cloud become equal citizens.

      In a blockchain IoT the power in the network shifts from the center to the edge.

    4. Challenge five: Broken business models

      Most IoT business models also hinge on the use of analytics to sell user data or targeted advertising. These expectations are also unrealistic. Both advertising and marketing data are affected by the unique quality of markets in information: the marginal cost of additional capacity (advertising) or incremental supply (user data) is zero. So wherever there is competition, market-clearing prices trend toward zero, with the real revenue opportunity going to aggregators and integrators. A further impediment to extracting value from user data is that while consumers may be open to sharing data, enterprises are not.

      Another problem is overly optimistic forecasts about revenue from apps. Products like toasters and door locks worked without apps and service contracts before the digital era. Unlike PCs or smartphones, they are not substantially interactive, which makes such revenue expectations unrealistic. Finally, many smart device manufacturers have improbable expectations of ecosystem opportunities. While it makes interesting conversation for a smart TV to speak to the toaster, such solutions get cumbersome quickly and nobody has emerged successful in controlling and monetizing the entire IoT ecosystem.

      So while technology propels the IoT forward, the lack of compelling and sustainably profitable business models is, at the same time, holding it back. If the business models of the future don’t follow the current business of hardware and software platforms, what will they resemble?

      Challenge 5 for IoT: Broken business models

      Conventional IoT business models relied on using and selling user data with targeted advertising. This won't work. Enterprises aren't willing to share data.

      Doorknobs and toasters worked without apps before, and whatever smartness is added, they won't be very interactive. Capturing sufficient value from this will be difficult.

      Having your toaster talk to your fridge sounds interesting, but it doesn't improve the user's life.

    5. Challenge four: A lack of functional value

      Many IoT solutions today suffer from a lack of meaningful value creation. The value proposition of many connected devices has been that they are connected – but simply enabling connectivity does not make a device smarter or better. Connectivity and intelligence are a means to a better product and experience, not an end. It is wishful thinking for manufacturers that some features they value, such as warranty tracking, are worth the extra cost and complexity from a user’s perspective. A smart, connected toaster is of no value unless it produces better toast. The few successes in the market have kept the value proposition compelling and simple. They improve the core functionality and user experience, and do not require subscriptions or apps.

      Challenge 4 for IoT: A lack of functional value

      Making a device smart doesn't necessarily improve the experience. A smart toaster is of no value unless it produces better toast.

    6. Challenge three: Not future-proof

      While many companies are quick to enter the market for smart, connected devices, they have yet to discover that it is very hard to exit. While consumers replace smartphones and PCs every 18 to 36 months, the expectation is for door locks, LED bulbs and other basic pieces of infrastructure to last for years, even decades, without needing replacement. An average car, for example, stays on the road for 10 years, the average U.S. home is 39 years old and the expected lifecycles of road, rail and air transport systems is over 50 years.10 A door lock with a security bug would be a catastrophe for a warehousing company and the reputation of the manufacturer. In the IoT world, the cost of software updates and fixes in products long obsolete and discontinued will weigh on the balance sheets of corporations for decades, often even beyond manufacturer obsolescence.

      Challenge 3 for IoT: Not future proof

      (1) Consumers have different expectations for the longevity of smartphones and PCs (1.5-3 years) than they do for door locks, LED bulbs, etc.

      (2) A door lock might have a security bug, requiring an update, and impacting the manufacturer's reputation.

      (3) Software updates might need to be shipped for discontinued, obsolete products.

    7. Challenge two: The Internet after trust

      The Internet was originally built on trust. In the post-Snowden era, it is evident that trust in the Internet is over. The notion of IoT solutions built as centralized systems with trusted partners is now something of a fantasy. Most solutions today provide the ability for centralized authorities, whether governments, manufacturers or service providers to gain unauthorized access to and control devices by collecting and analyzing user data. In a network of the scale of the IoT, trust can be very hard to engineer and expensive, if not impossible, to guarantee. For widespread adoption of the ever-expanding IoT, however, privacy and anonymity must be integrated into its design by giving users control of their own privacy. Current security models based on closed source approaches (often described as “security through obscurity”) are obsolete and must be replaced by a newer approach – security through transparency. For this, a shift to open source is required. And while open source systems may still be vulnerable to accidents and exploitable weaknesses, they are less susceptible to government and other targeted intrusion, for which home automation, connected cars and the plethora of other connected devices present plenty of opportunities.

      Challenge 2 of IoT: The internet after trust

      In the post-Snowden era, it is not realistic or wise to expect the world of IoT to be based on a centralized trust model.

      Most solutions today provide the ability for centralized authorities, whether governments, manufacturers or service providers, to gain unauthorized access to and control over devices.

      Because of the scale of IoT, a centralized trust architecture would not be scalable or affordable.

      Privacy and anonymity must be integrated into its design by giving users control over their own privacy.

      A shift from closed source to open source is required. Open source systems are less susceptible to targeted intrusion.

    8. Challenge one: The cost of connectivity

      Even as revenues fail to meet expectations, costs are prohibitively high. Many existing IoT solutions are expensive because of the high infrastructure and maintenance costs associated with centralized clouds and large server farms, in addition to the service costs of middlemen. There is also a mismatch in supplier and customer expectations. Historically, costs and revenues in the IT industry have been nicely aligned. Though mainframes lasted for many years, they were sold with enterprise support agreements. PCs and smartphones have not

      Challenge 1 of IoT: The cost of connectivity

      There are high infrastructure and maintenance costs associated with running IoT communication through centralized clouds, on top of the service costs of middlemen.

    1. One way the blockchain could change online security dynamics is the opportunity to replace the flawed “shared-secret model” for protecting information with a new “device identity model.” Under the existing paradigm, a service provider and a customer agree on a secret password and perhaps certain mnemonics—“your pet’s name”—to manage access. But that still leaves all the vital data, potentially worth billions of dollars, sitting in a hackable repository on the company’s servers. With the right design, a blockchain-based system would leave control over the data with customers, which means the point of vulnerability would lie with their devices. The onus is now on the customer to protect that device, so we must, of course, develop far more sophisticated methods for storing, managing, and using our own private encryption keys. But the more important point is that the potential payoff for the hacker is so much smaller for each attack. Rather than accessing millions of accounts at once, he or she has to pick off each device one by one for comparatively tiny amounts. Think of it as an incentives-weighted concept of security.

      Using blockchain we could shift from a shared-secret model to a device identity model.

      This would mean that the customer's data is stored with the customer, not on a central database.

      The onus is then on the customer to protect that data and the device it's on.

      The important point is that you're replacing a single attractive attack vector with many distributed vectors, reducing the potential payoff of each attack.

      You achieve security through incentives.
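
      A rough sketch of the device identity model, assuming an Ed25519 keypair held on the customer's device (this uses the pyca/cryptography package; the flow is illustrative, not any particular product's protocol). The server stores only a public key, so a breach of its database yields nothing a thief can log in with:

```python
# Sketch of the "device identity model": the server keeps only the device's
# public key, so there is no shared secret to steal in bulk from a server.
# Requires the pyca/cryptography package (pip install cryptography).
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the customer's device: generate a keypair; the private key never leaves it.
device_key = Ed25519PrivateKey.generate()

# On the server: enroll the device by storing only its public key.
stored_public_key = device_key.public_key()

# Login, step 1: the server sends a fresh random challenge.
challenge = os.urandom(32)

# Login, step 2: the device signs the challenge with its private key.
signature = device_key.sign(challenge)

# Login, step 3: the server verifies the signature against the stored key.
try:
    stored_public_key.verify(signature, challenge)
    print("device authenticated")
except InvalidSignature:
    print("authentication failed")
```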

    2. So much of what’s foreseen won’t be viable without distributed trust, whether it’s smart parking systems transacting with driverless cars, decentralized solar microgrids that let neighbors automatically pay each other for power, or public Wi-Fi networks accessed with digital-money micropayments. If those peer-to-peer applications were steered through a centralized institution, it would have to “KYC” each device and its owner—to use an acronym commonly used to describe banks’ regulatory obligation to conduct “know your customer” due diligence. Those same gatekeepers could also curtail competitors, quashing innovation. Processing costs and security risks would rise. In short, a “permissioned” system like this would suck all the seamless, creative fluidity out of our brave new IoT world.

      Permissioned vs. Permissionless

      Many solutions will not be viable without distributed trust because routing all transactions through a central authority comes with too much friction.

      (1) KYC requirements for each node (2) Processing costs rise

      At the same time centralizing these transactions has other adverse effects:

      (1) It gives the centralized entity gatekeeper power, giving them the ability to curtail competitors, thereby quashing innovation. (2) Security risks rise, because data passes through a centralized location.

      Permissioned systems stifle innovation.

    3. Bitcoin has survived because it leaves hackers nothing to hack. The public ledger contains no personal identifying information about the system’s users, at least none of any value to a thief. And since no one controls it, there’s no central vector of attack. If one node on the bitcoin network is compromised and someone tries to undo or rewrite transactions, the breach will be immediately contradicted by the hundreds of other accepted versions of the ledger.

      Bitcoin has not been hacked, in part, because it leaves hackers nothing to hack.

      (1) The public ledger contains no personal identifying information (2) No one controls it, so there's no central vector of attack
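
      A toy hash chain makes the point concrete. This is not Bitcoin's actual data structure or consensus, just the core idea: each entry commits to everything before it, so a rewritten copy of the ledger immediately disagrees with every honest copy:

```python
# Toy hash chain, illustrating why a rewritten ledger is immediately spotted.
# This is not Bitcoin's real data structure or consensus; it only shows the
# core idea that each entry commits to all of the history before it.
import hashlib

def chain_tip(transactions: list[str]) -> str:
    """Hash each transaction together with the running hash of its history."""
    tip = ""
    for tx in transactions:
        tip = hashlib.sha256((tip + tx).encode()).hexdigest()
    return tip

honest = ["alice->bob:1", "bob->carol:2", "carol->dave:3"]
tampered = ["alice->bob:1", "bob->mallory:2", "carol->dave:3"]  # rewritten history

# Every honest node computes the same tip; the tampered copy disagrees with
# all of them, so the rewrite is contradicted rather than accepted.
print(chain_tip(honest))
print(chain_tip(tampered))
assert chain_tip(honest) != chain_tip(tampered)
```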

    4. Ever since its launch in 2009, there has been no successful cyberattack on bitcoin’s core ledger—despite the tempting bounty that the digital currency’s $9 billion market cap offers to hackers.

      There has been no successful cyberattack on Bitcoin despite the tempting bounty.

    5. Thirty years later, we finally have the conceptual framework for such a system, one in which trust need no longer be invested in a third-party intermediary but managed in a distributed manner across a community of users incentivized to protect a public good. Blockchain technology and the spinoff ideas it has spawned provide us with the best chance yet to solve the age-old problem of the Tragedy of the Commons.

      Blockchain technology allows us to distribute trust across a community of users incentivized to protect a public good.

    6. The problem was that in those early years, Silicon Valley had no distributed trust management system to match the new distributed communications architecture.

      Just as we initially lacked the network architecture to support peer-to-peer communication, even once we had it we still lacked the trust architecture to support distributed trust management.

    7. The single most important driver of decentralization has been the fact that human communication—without which societies, let alone economies, can’t exist—now happens over an entirely distributed system: the Internet. The packet-switching technology that paved the way for the all-important TCP/IP protocol pair meant that data could travel to its destination via the least congested route, obviating the need for the centralized switchboard hubs that had dominated telecommunications. Thus, the Internet gave human beings the freedom to talk to each other directly, to publish information to anyone, anywhere. And because communication was no longer handled via a hub-and-spokes model, commerce changed, too. People could submit orders to an online store or enter into a peer-to-peer negotiation over eBay.

      Human communication and economic transactions are by their very nature peer-to-peer. Early telecommunication technology was able to scale these interactions over larger groups of participants and over larger distances, but they did so through a hub-and-spoke model relying on centralized switchboards.

      The internet, with its distributed architecture, let nodes communicate across the least congested route between them, a better match for the distributed nature of human communication and commerce.

    8. For IT-savvy thieves, it’s the best of both worlds: more and more locations from which to launch surreptitious attacks and a set of ever-growing, centralized pools of valuable information to go after.

      Cybercriminals have the best of both worlds:

      (1) More access points from which to launch attacks, due to increased decentralization (e.g. IoT) (2) More and larger centralized pools of valuable information to go after

    9. Decentralization, meanwhile, is pushing the power to execute contracts and manage assets to the edges of the network, creating a proliferation of new access points.

      Decentralization, like the proliferation of IoT, is pushing the power to execute contracts and manage assets to the individual nodes of the network.

    10. Yet, we continue to depend upon something we might call the centralized trust paradigm, by which middlemen entities coordinate our monetary transactions and other exchanges of value. We trust banks to track and verify everyone’s account balances so that strangers can make payments to each other. We entrust our most sensitive health records to insurance firms, hospitals, and laboratories. We rely on public utilities to read our electricity meters, monitor our usage, and invoice us accordingly. Even our new, Internet-driven industries are led by a handful of centralized behemoths to which we’ve entrusted our most valuable personal data: Google, Facebook, Amazon, Uber, etc.

      Despite aggregators driving more decentralized economic exchanges, we continue to rely on a centralized trust paradigm.

      We trust our banks to verify everyone's account balances, we trust our health records to insurance firms, we rely on public utilities to read our electricity meters.

    11. Startups of all kinds are constantly pitching ideas for e-marketplaces and online platforms that would unlock new network effects by bypassing incumbent middlemen and letting people interact directly with each other. Although these companies are themselves centralized entities, the services they provide satisfy an increasing demand for more decentralized exchanges. This shift underpins social media, ride-sharing, crowdfunding, Wikipedia, localized solar microgrids, personal health monitoring, and everything else in the Internet of Things (IoT).

      Aggregators (ride-sharing, social media) have been driving an increase in decentralized economic exchanges, while being built on top of centralized network infrastructure.

      This has allowed them to capture a sizeable portion of the value that is generated by their platforms, but it has also burdened them with custody over large amounts of user data.

    12. At the heart of this failure lies the fact that the ongoing decentralization of our communication and business exchanges is in direct contradiction with the outdated centralized systems we use to secure them. Given that the decentralization trend is fueled by the distributed communications system of the Internet—one in which no central hub acts as information gatekeeper—what’s needed is a new approach to security that’s also based on a distributed network architecture.

      Michael Casey posits that at the heart of the colossal failure of securing the world's online commerce is the contradiction between two things:

      (1) The ongoing decentralization of our communication and business exchanges driven by the distributed communications system of the internet. (2) The centralized systems we use to secure them.

      Communication and economic exchanges are becoming increasingly decentralized, fueled by the distributed infrastructure of the internet. This requires a similar approach to security that is based on a distributed network architecture.

    13. Lloyd’s of London knows a thing or two about business losses—for three centuries, the world’s oldest insurance market has been paying out damages triggered by wars, natural disasters, and countless acts of human error and fraud. So, it’s worth paying attention when Lloyds estimates that cybercrime caused businesses to lose $400 billion between stolen funds and disruption to their operations in 2015. If that number seems weighty—and it ought to—try this one for size: $2.1 trillion. That’s Juniper Research’s total cybercrime loss forecast for the even more digitally interconnected world projected for 2019. To put that figure in perspective, at current economic growth rates, it would represent more than 2% of total world GDP. We are witnessing a colossal failure to protect the world’s online commerce.

      Estimates of cybercrime losses run from $400 billion in 2015 to a forecast $2.1 trillion for 2019. This points to a "colossal failure to protect the world's online commerce."

    1. Storing any type of PII on a public blockchain, even encrypted or hashed, is dangerous for two reasons: 1) the encrypted or hashed data is a global correlation point when the data is shared with multiple parties, and 2) if the encryption is eventually broken (e.g., quantum computing), the data will be forever accessible on an immutable public ledger. So the best practice is to store all private data off-chain and exchange it only over encrypted, private, peer-to-peer connections.

      Storing sensitive information on a blockchain, whether encrypted or hashed, is a risk: the data forms a global correlation point when shared with multiple parties, and if the encryption is ever broken it remains forever accessible on an immutable public ledger.
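
      A small sketch of the correlation-point risk, using plain SHA-256 and made-up records: two parties holding the same hashed attribute can link their records without ever seeing the plaintext, and a low-entropy value can be recovered by enumeration:

```python
# Sketch of why a hash of PII is a global correlation point. Two services
# that each receive the same hashed attribute can join their records on it
# without ever seeing the plaintext, and a low-entropy value (like a phone
# number) can simply be brute-forced. All records here are made up.
import hashlib

def h(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

# The same user presents a hashed phone number to two unrelated services.
record_at_service_a = {"user_hash": h("+15551234567"), "purchases": ["..."]}
record_at_service_b = {"user_hash": h("+15551234567"), "location": "..."}

# Anyone who obtains both databases can now link the two records to one user.
assert record_at_service_a["user_hash"] == record_at_service_b["user_hash"]

# And because phone numbers have little entropy, the "protected" value can be
# recovered by enumerating candidates and comparing hashes.
for candidate in ("+15551234566", "+15551234567"):
    if h(candidate) == record_at_service_a["user_hash"]:
        print("recovered:", candidate)
```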

    2. For self-sovereign identity, which can be defined as a lifetime portable digital identity that does not depend on any centralized authority, we need a new class of identifier that fulfills all four requirements: persistence, global resolvability, cryptographic verifiability, and decentralization.

      The four requirements that constitute self-sovereign identity:

      1. Persistence
      2. Global Resolvability
      3. Cryptographic Verifiability
      4. Decentralization
    1. The models for online identity have advanced through four broad stages since the advent of the Internet: centralized identity, federated identity, user-centric identity, and self-sovereign identity.

      Online identity advanced through 4 stages:

      1. Centralized identity
      2. Federated identity
      3. User-centric identity
      4. Self-sovereign identity

    2. Identity in the digital world is even trickier. It suffers from the same problem of centralized control, but it’s simultaneously very balkanized: identities are piecemeal, differing from one Internet domain to another.

      Identity in the digital world suffers from the same muddling, but it is also balkanized: different internet domains have different identities.

    3. However, modern society has muddled this concept of identity. Today, nations and corporations conflate driver’s licenses, social security cards, and other state-issued credentials with identity; this is problematic because it suggests a person can lose his very identity if a state revokes his credentials or even if he just crosses state borders. I think, but I am not.

      Christopher Allen posits that modern society has muddled the concept of identity by equating it with a driver's license or national ID card, thereby implying that it is something that can be taken away.

      I would say that it is not society but the modern state that has not merely muddied but corrupted the concept of identity.

      This also reminds me of the idea of how to draw the definitional boundary of a component out of which greater complexity is built up.

  4. Apr 2021
    1. Acquiring viral drift sufficient to produce new influenza strains capable of escaping population immunity is believed to take years of global circulation, not weeks of local circulation.

      Experiencing enough viral drift to produce an influenza variant capable of escaping population immunity is believed to take years of global circulation (not weeks of local circulation).

  5. Mar 2021
    1. Private property rights are not absolute. The rule against the "dead hand" or the rule against perpetuities is an example. I cannot specify how resources that I own will be used in the indefinitely distant future. Under our legal system, I can only specify the use for a limited number of years after my death or the deaths of currently living people.

      Property rights are not absolute; our legal system does not support the ability to specify how resources should be used indefinitely into the future.

    2. Similarly, the set of resources over which property rights may be held is not well defined and demarcated. Ideas, melodies, and procedures, for example, are almost costless to replicate explicitly (near-zero cost of production) and implicitly (no forsaken other uses of the inputs). As a result, they typically are not protected as private property except for a fixed term of years under a patent or copyright.

      The set of resources over which property rights may be held is not well demarcated. Melodies and ideas, for instance, are virtually costless to replicate, and such resources tend not to be protected as private property.

    3. Depending upon circumstances certain actions may be considered invasions of privacy, trespass, or torts. If I seek refuge and safety for my boat at your dock during a sudden severe storm on a lake, have I invaded "your" property rights, or do your rights not include the right to prevent that use? The complexities and varieties of circumstances render impossible a bright-line definition of a person's set of property rights with respect to resources.

      In real-life property rights there are also many gray areas. In programmatic property rights, there are none.

    4. The cost of establishing private property rights—so that I could pay you a mutually agreeable price to pollute your air—may be too expensive. Air, underground water, and electromagnetic radiations, for example, are expensive to monitor and control. Therefore, a person does not effectively have enforceable private property rights to the quality and condition of some parcel of air. The inability to cost-effectively monitor and police uses of your resources means "your" property rights over "your" land are not as extensive and strong as they are over some other resources, like furniture, shoes, or automobiles. When private property rights are unavailable or too costly to establish and enforce, substitute means of control are sought. Government authority, expressed by government agents, is one very common such means. Hence the creation of environmental laws.

      For some types of property, and for some uses of that property, the costs of monitoring are too high. It is too expensive to monitor parcels of air for pollution or radiation.

      When a resource cannot be cost-effectively monitored and/or policed, your property rights over this resource become less strong.

      When property rights become too weak, alternative means of control are sought, e.g. government agents and environmental laws.

    5. The two extremes in weakened private property rights are socialism and "commonly owned" resources. Under socialism, government agents—those whom the government assigns—exercise control over resources. The rights of these agents to make decisions about the property they control are highly restricted. People who think they can put the resources to more valuable uses cannot do so by purchasing the rights because the rights are not for sale at any price. Because socialist managers do not gain when the values of the resources they manage increase, and do not lose when the values fall, they have little incentive to heed changes in market-revealed values. The uses of resources are therefore more influenced by the personal characteristics and features of the officials who control them. Consider, in this case, the socialist manager of a collective farm. By working every night for one week, he could make 1 million rubles of additional profit for the farm by arranging to transport the farm's wheat to Moscow before it rots. But if neither the manager nor those who work on the farm are entitled to keep even a portion of this additional profit, the manager is more likely than the manager of a capitalist farm to go home early and let the crops rot.

      Weakened property rights come in two forms (1) socialism and (2) the commons.

      If the socialist manager of a farm isn't entitled to any of the extra profit from working extra hours to transport the farm's wheat to Moscow before it rots, he likely won't put in those hours.

    6. The fundamental purpose of property rights, and their fundamental accomplishment, is that they eliminate destructive competition for control of economic resources. Well-defined and well-protected property rights replace competition by violence with competition by peaceful means.

      Well-defined property rights replace competition by violence with competition by peaceful means.

    7. Finally, a private property right includes the right to delegate, rent, or sell any portion of the rights by exchange or gift at whatever price the owner determines (provided someone is willing to pay that price). If I am not allowed to buy some rights from you and you therefore are not allowed to sell rights to me, private property rights are reduced. Thus, the three basic elements of private property are (1) exclusivity of rights to the choice of use of a resource, (2) exclusivity of rights to the services of a resource, and (3) rights to exchange the resource at mutually agreeable terms.

      There are three elements that constitute private property:

      (1) Exclusive rights to determine how the property gets used (2) Exclusive rights to the services of a resource (3) Rights to sell the resource

    1. A normal household has to pay rent or make mortgage payments. To arbitrarily exclude the biggest expense to consumers from CPI is pretty misleading.

      When you create new money prices don't rise evenly. At the moment we have new money being created by central banks and given to privileged institutions who get access to free money. They use that to buy investments: real estate, stocks, etc. These are precisely the things getting really expensive. The last things to get more expensive during big cycles of inflation are employee wages.

      The world used gold/silver for its currency for most of human history until 1970 when we entered this period of worldwide fiat currencies. Our current situation is pretty remarkable.

      The whole argument for printing money being OK is dumb. If it's OK to print money to pay for some things why are you not doing it more? Why not make everyone a millionaire?

      I think that another deception is that we should ordinarily be experiencing price deflation. Every day our society is getting more efficient at making things. If prices for goods are staying the same then it may not be that their value has not changed, they may be less valuable goods, but they cost the same because you're also buying them with less valuable currency.

      If you have gone through years of moving everything to China to make it cheaper to manufacture, improved technology to make processes more efficient, etc. and I'm still paying the same amount for all of the stuff in my life, then again, maybe all these things are cheaper, but I'm also buying them with currency that's less valuable.

      Ultimately, printing money doesn't make anyone more productive or produce anything. All it does is redistribute wealth from those that were first to get the new free money away from those that were last to contact it.

      Solid HN comment on inflation

    1. For the evolutionary psychologists an explanation that humans do something for "the sheer enjoyment of it" is not an explanation at all – but the posing of a problem. Why do so many people find the collection and wearing of jewelry enjoyable? For the evolutionary psychologist, this question becomes – what caused this pleasure to evolve?

      For evolutionary psychologists an explanation that humans do something for their enjoyment is not an explanation at all. The question becomes: Why did this pleasure evolve?

    2. Collecting and making necklaces must have had an important selection benefit, since it was costly – manufacture of these shells took a great deal of both skill and time during an era when humans lived constantly on the brink of starvation[C94].

      Because evolution is ruthlessly energy-preserving and because our African ancestors lived continuously on the brink of starvation, the costly manufacture of ornamental shells must have conferred a large selection benefit on those doing it.

    3. It continued to be used as a medium of exchange, in some cases into the 20th century – but its value had been inflated one hundred fold by Western harvesting and manufacturing techniques, and it gradually went the route that gold and silver jewelry had gone in the West after the invention of coinage – from well crafted money to decoration.

      The value of wampum became inflated one hundred fold by Western harvesting and manufacturing techniques.

    4. The beginning of the end of wampum came when the British started shipping more coin to the Americas, and Europeans started applying their mass-manufacturing techniques. By 1661, British authorities had thrown in the towel, and decided it would pay in coin of the realm – which being real gold and silver, and its minting audited and branded by the Crown, had even better monetary qualities than shells. In that year wampum ceased to be legal tender in New England.

      Wampum stopped being considered legal tender in New England in 1661, when the British started shipping more gold and silver coin from Europe.

    5. Once they got over their hangup about what constitutes real money, the colonists went wild trading for and with wampum. Clams entered the American vernacular as another way to say "money". The Dutch governor of New Amsterdam (now New York) took out a large loan from an English-American bank – in wampum. After a while the British authorities were forced to go along. So between 1637 and 1661, wampum became legal tender in New England. Colonists now had a liquid medium of exchange, and trade in the colonies flourished.[D94]

      The colonists of New England took to trading in wampum and using it as money. It was accepted as legal tender from 1637 to 1661.

    6. Clams were found only at the ocean, but wampum traded far inland. Sea-shell money of a variety of types could be found in tribes across the American continent. The Iroquois managed to collect the largest wampum treasure of any tribe, without venturing anywhere near the clam's habitat.[D94] Only a handful of tribes, such as the Narragansetts, specialized in manufacturing wampum, while hundreds of other tribes, many of them hunter-gatherers, used it. Wampum pendants came in a variety of lengths, with the number of beads proportional to the length. Pendants could be cut or joined to form a pendant of length equal to the price paid.

      Wampum was traded by hundreds of tribes, but it was only "mined" by a handful living close to the shore.

    7. The colonists' solution was at hand, but it took a few years for them to recognize it. The natives had money, but it was very different from the money Europeans were used to. American Indians had been using money for millennia, and quite useful money it turned out to be for the newly arrived Europeans – despite the prejudice among some that only metal with the faces of their political leaders stamped on it constituted real money. Worse, the New England natives used neither silver nor gold. Instead, they used the most appropriate money to be found in their environment – durable skeleton parts of their prey. Specifically, they used wampum, shells of the clam venus mercenaria and its relatives, strung onto pendants.

      Native American Indians used the shells of clams as money, strung onto pendants. It was the best form of money in their environment.

    1. So when I’m searching for information in this space, I’m much less interested in asking “what is this thing?” than I am in asking “what do the people who know a lot about this thing think about it?” I want to read what Vitalik Buterin has recently proposed regarding Ethereum scalability, not rote definitions of Layer 2 scaling solutions. Google is extraordinarily good at answering the “what is this thing?” question. It’s less good at answering the “what do the people who know about the thing think about it?” question. Why? 

      According to Devin, Google is good at answering a question such as "what is this thing?", but not good at answering questions like "what do people who know a lot about this thing say about it?"

      This reminds me of social search

  6. Feb 2021
    1. But in credibly neutral mechanism design, the goal is that these desired outcomes are not written into the mechanism; instead, they are emergently discovered from the participants’ actions. In a free market, the fact that Charlie’s widgets are not useful but David’s widgets are useful is emergently discovered through the price mechanism: eventually, people stop buying Charlie’s widgets, so he goes bankrupt, while David earns a profit and can expand and make even more widgets. Most bits of information in the output should come from the participants’ inputs, not from hard-coded rules inside of the mechanism itself.

      This reminds me of Hayek worrying that the components/primitives of capitalism (e.g. property rights) were being corrupted by socialists.

      You could view the proper "pure" component of capitalism as credibly neutral property rights; it becomes corrupted if you make it no longer credibly neutral, e.g. by introducing preferences over outcomes.

    2. This is why private property is as effective as it is: not because it is a god-given right, but because it’s a credibly neutral mechanism that solves a lot of problems in society - far from all problems, but still a lot.

      Property rights are credibly neutral

    3. We are entering a hyper-networked, hyper-intermediated and rapidly evolving information age, in which centralized institutions are losing public trust and people are searching for alternatives. As such, different forms of mechanisms – as a way of intelligently aggregating the wisdom of the crowds (and sifting it apart from the also ever-present non-wisdom of the crowds) – are likely to only grow more and more relevant to how we interact.

      This is Jordan Hall's blue church vs. red church.

      Losing trust in institutions perhaps has more emphasis here.

    1. Finding clients

      Finally, we were at the moment of truth. Luckily, from our user interviews we knew that companies were posting on forums like Reddit and Craigslist to find participants. So for 3 weeks we scoured the “Volunteers” and “Gigs” sections of Craigslist and emailed people who were looking for participants saying we could do it for them.

      Success! We were able to find 4 paying clients!

      UserInterviews found their first clients by replying to ads on Craigslist and Reddit for user interview volunteers with the pitch that they could help the companies find them.

  7. Jan 2021
    1. Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.

      Cognitive Fusion serves an important role

      When you are driving a car, you want to be fused with its internal logic, because it will allow you to respond in the quickest possible way to threats. (I'm not sure if this is the same thing though)

    2. Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too). In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic. Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.

      Cognitive Fusion

      Cognitive Fusion is a term that comes from Acceptance and Commitment Therapy (ACT).

      CF happens when you identify so strongly with a thought or an emotion that its content is experienced as the objective way the world is.

      "She is the one" for example is a cognitive fusion.

      The cognitive fusion prevents you from stepping back and examining the construct.

      You experience everything in terms of the belief's internal logic.

    1. This brings me to the fourth pattern of oscillating tension: Shadow values. The pattern goes something like this: We have two values that (without proper planning) tend to be in tension with each other. One of them, we acknowledge, as right and good and ok. One of them we repress, because we think it's bad or weak or evil. Safety vs. Adventure. Independence vs. Love. Revenge vs. Acceptance. All common examples of value tensions, where one of the values is often in shadow (which one depends on the person). So we end up optimizing for the value we acknowledge. We see adventure as "good", so we optimize for it, hiding from ourselves the fact we care about safety. And man, do we get a lot of adventure. Our adventure meter goes up to 11. But all the while, there's that little safety voice, the one we try ignore. Telling us that there's something we value that we're ignoring. And the more we ignore it, the louder it gets. And meanwhile, because we've gotten so much of it, our adventure voice is getting quieter. It's already up to 11, not a worry right now. Until suddenly, things shift. And where we were going on many adventures, now we just want to stay home, safe. Oscillating tension.

      Shadow Values

      Shadow Values are a pattern of Oscillating Tension.

      When we have two values, one which we make explicit and acknowledge, one which we don't, we might optimize for the one we made explicit.

      This results in our behavior pursuing the maximization of that value, all the while ignoring the implicit one (the shadow value).

      Because this value is getting trampled on, the voice that corresponds to it will start to speak up. The more it gets ignored, the more it speaks up.

      At the same time, the voice corresponding to the value that is getting maximized becomes quiet. It's satisfied where it is.

      We find ourselves in a place where all we want to do is tend to the value that is not being met.

    1. Volkswagen, the world’s largest car maker, has outspent all rivals in a global bid by auto incumbents to beat Tesla. For years, industry leaders and analysts pointed to the German company as evidence that, once unleashed, the old guard’s raw financial power paired with decades of engineering excellence would make short work of Elon Musk’s scrappy startup. What they didn’t consider: Electric vehicles are more about software than hardware. And producing exquisitely engineered gas-powered cars doesn’t translate into coding savvy.

      Many thought Volkswagen would crush Tesla as soon as they put their weight behind an electric car initiative. What they didn't consider was that an electric car is more about software than it is about hardware.

    1. Note that I have defined privacy in terms of the condition of others' lack of access to you. Some philosophers, for example Charles Fried, have claimed that it is your control over who has access to you that is essential to privacy. According to Fried, it would be ironic to say that a person alone on an island had privacy.10 I don't find this ironic at all. But more importantly, including control as part of privacy leads to anomalies. For example, Fried writes that "in our culture the excretory functions are shielded by more or less absolute privacy, so much so that situations in which this privacy is violated are experienced as extremely distressing."11 But, in our culture one does not have control over who gets to observe one's performance of the excretory functions, since it is generally prohibited to execute them in public.12 Since prying on someone in the privy is surely a violation of

      Reiman defines privacy in terms of others' lack of access to you, and not, as Charles Fried does for instance, in terms of your control over who has access to you.

      He argues this point by saying that since watching someone go to the toilet is certainly a violation of privacy, and since we don't have control over the law dictating that we cannot go to the toilet in public, privacy cannot be about control.

      I think this argument is redundant. Full control would imply that you can deny anyone access to you at their discretion.

    2. It might seem unfair to IVHS to consider it in light of all this other accumulated information - but I think, on the contrary, that it is the only way to see the threat accurately. The reason is this: We have privacy when we can keep personal things out of the public view. Information-gathering in any particular realm may not seem to pose a very grave threat precisely because it is generally possible to preserve one's privacy by escaping into other realms. Consequently, as we look at each kind of information-gathering in isolation from the others, each may seem relatively benign.2 However, as each is put into practice, its effect is to close off yet another escape route from public access, so that when the whole complex is in place, its overall effect on privacy will be greater than the sum of the effects of the parts. What we need to know is IVHS's role in bringing about this overall effect, and it plays that role by contributing to the establishment of the whole complex of information-gathering modalities.

      Reiman argues that we can typically achieve privacy by escaping into a different realm. We can avoid public eyes by retreating into our private houses. It seems we could avoid Facebook by, well, avoiding Facebook.

      If we treat the information-gathering in each realm as separate, each may seem relatively benign.

      When these realms are connected, they close off our escape routes and the effect on privacy becomes greater than the sum of its parts.

    3. But notice here that the sort of privacy we want in the bedroom presupposes the sort we want in the bathroom. We cannot have discretion over who has access to us in the bedroom unless others lack access at their discretion. In the bathroom, that is all we want. In the bedroom, we want additionally the power to decide at our discretion who does have access. What is common to both sorts of privacy interests, then, is that others not have access to you at their discretion. If we are to find the value of privacy generally, then it will have to be the value of this restriction on others.

      The sort of privacy we want in the bedroom (control over who accesses us) presupposes the sort of privacy we want in the bathroom (others lack access to us at their discretion).

    4. In our bedrooms, we want to have power over who has access to us; in our bathrooms, we just want others deprived of that access.

      Reiman highlights two types of privacy.

      The privacy we want in the bathroom: others are simply deprived of access to us.

      The privacy we want in the bedroom: additionally, the power to decide at our discretion who does have access to us.

    5. By privacy, I understand the condition in which other people are deprived of access to either some information about you or some experience of you. For the sake of economy, I will shorten this and say that privacy is the condition in which others are deprived of access to you.

      Reiman defines privacy as the condition in which others are deprived of access to you (information (e.g. location) or experience (e.g. watching you shower))

    6. No doubt privacy is valuable to people who have mischief to hide, but that is not enough to make it generally worth protecting. However, it is enough to remind us that whatever value privacy has, it also has costs. The more privacy we have, the more difficult it is to get the information that

      Privacy is valuable to people who have mischief to hide. This is not enough to make it worth protecting, but it tells us that there is also a cost.

    7. As Bentham realized and Foucault emphasized, the system works even if there is no one in the guard house. The very fact of general visibility - being seeable more than being seen - will be enough to produce effective social control.4 Indeed, awareness of being visible makes people the agents of their own subjection. Writes Foucault, He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection.

      The panopticon works as a system of social control even without someone in the guardhouse. It is being seeable, rather than being seen, which makes it effective.

      I don't understand what Foucault says here.

  8. Dec 2020
    1. The other complication is that the organizational techniques I described aren’t distinct. Hierarchies and links are a kind of relation; attributes can be seen as a type of hierarchy (just like songs can be “in” playlists, even though the implementation is a sort on a list) or a relation. All of these, in fact, can be coded using the same mathematical formalisms. What matters is how they differ when encountering each user’s cognitive peculiarities and workflow needs.

      Hierarchies, links and attributes are mathematically identical

      Hierarchies and links are a kind of relation. Attributes can be seen as a type of hierarchy (songs can be "in" a playlist). These things can be coded with the same mathematical formalisms. What's important is how they differ when seen through the mental model of the user.

    2. First, nearly every application uses some mix of these techniques. The Finder, for instance, has hierarchies but can display them spatially in columns while using metaphors and (soon) attributes as labels.

      Applications tend to use a mix of these structuring techniques.

    3. You could almost think of these as parts of a larger language: roughly verbs, nouns, and adjectives for the first three. Poetry for the last.

      These structuring techniques form a language.

      Links — verbs
      Relationships — nouns
      Attributes — adjectives
      Metaphors — poetry

    4. Types of Structure Outliners take advantage of what may be the most primitive of relationships, probably the first one you learned as an infant: in. Things can be in or contained by other things; alternatively, things can be superior to other things in a pecking order. Whatever the cognitive mechanics, trees/hierarchies are a preferred way of structuring things. But it is not the only way. Computer users also encounter: links, relationships, attributes, spatial/tabular arrangements, and metaphoric content. Links are what we know from the Web, but they can be so much more. The simplest ones are a sort of ad hoc spaghetti connecting pieces of text to text containers (like Web pages), but we will see many interesting kinds that have names, programs attached, and even work two-way. Relationships are what databases do, most easily imagined as “is-a” statements which are simple types of rules: Ted is a supervisor, supervisors are employees, all employees have employee numbers. Attributes are adjectives or tags that help characterize or locate things. Finder labels and playlists are good examples of these. Spatial/tabular arrangements are obvious: the very existence of the personal computer sprang from the power of the spreadsheet. Metaphors are a complex and powerful technique of inheriting structure from something familiar. The Mac desktop is a good example. Photoshop is another, where all the common tools had a darkroom tool or technique as their predecessor.

      Structuring Information

      Ted Goranson holds that there are only a handful of ways to structure information.

      In — Possibly the most primitive of relationships. Things can be in other things and things can be superior to other things.

      Links — Links are what we know from the web, but these types of links are only one implementation. There are others, like bi-directional linking.

      Relationships — This is what we typically use databases for and is most easily conceived as "is-a" statements.

      Attributes — Adjectives or tags that help characterize or locate things.

      Metaphors — A technique for inheriting structure from something familiar.

    5. Both of these products were abandoned as supported commercial products when the outlining paradigm was incorporated into other products, notably word processors, presentation applications, and personal information managers.

      MORE and Acta were abandoned as commercial pursuits once outliners were incorporated into other products such as word processors and presentation applications.

    6. Outlining is bred in the blood of Mac users. The Finder has an outliner. Nearly every mail client employs an outliner, as do many word processors and Web development tools.

      The outliner paradigm has seeped into many different applications such as the Mac Finder, development tools and word processors.

    1. Michael Jordan says that current hype around AI has focused mostly on human-imitative capabilities. This focus has hidden certain challenges and, according to Jordan, risks distracting us from major unsolved problems in AI that relate to our ability to make society-scale inference-and-decision-making systems.

      If we want such systems that actually work, we need to craft nothing short of a new engineering discipline (centered on the idea of "provenance") which builds on the building blocks of the past century (information, algorithms, data, uncertainty, computing, inference, optimization) but which also incorporates the human side and insights from the social sciences, cognitive sciences and the humanities. A true human-centric engineering discipline.

    2. In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.

      Michael Jordan refers to this as a human-centric engineering discipline.

    3. On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline.

      We want to build systems that actually work; for that, we need to figure out the provenance aspects. This field is still so nascent that it cannot yet be viewed as an engineering discipline.

    4. While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities.

      Michael Jordan says that academia may serve to help bring together researchers from fields that are needed to solve these challenges, such as social sciences, cognitive sciences and the humanities.

      Reminds me of that book on social sciences.

    5. It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.

      There are many challenges in Intelligent Infrastructure (II) which are not central themes in AI research.

      They need to deal with coordinating distributed, incoherent repositories of information.

      This involves things like:

      making decisions about where to host data to ensure fast delivery (edge computing)

      Dealing with long-tail phenomena, where there's a lot of data on a few individuals and little about most.

      And they need to deal with the human and civil aspects of data such as sharing across administrative and competitive boundaries.

      Lastly, they need to incorporate economic ideas such as incentives and pricing into the realm of computational infrastructures. This amounts to creating markets (blockchain).

      Michael Jordan holds that fields such as music, literature and journalism are crying out for the emergence of such markets.

    6. We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges?

      Having defined some terms to divide up the field of AI, Michael Jordan asks if focusing on human-imitative AI is the most productive way to advance these other fields that have been "hidden" under the same label.

    7. Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits.

      Intelligent Infrastructure (II)

      Michael Jordan here coins the term Intelligent Infrastructure to refer to the discipline whereby a web of data, computation and physical entities exists that makes human environments more supportive, interesting and safe.

      We can already see this infrastructure in the fields of transportation, medicine, commerce and finance.

      This isn't captured by the Internet of Things, because IoT doesn't involve interactions with humans at higher levels of abstraction than mere bits.

    8. The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of.

      Intelligence Augmentation (IA)

      Computation and data are used to create services that augment human intelligence (e.g. a search engine augmenting human memory and factual knowledge).

    9. Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean).

      The phrase AI emerged to refer to the aspiration of creating a computer system which possessed human-level intelligence.

    10. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.

      This emerging field is often hidden under the label AI, which makes it difficult to reason about.

    11. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

      Analogous to the collapse of early bridges and buildings, before the maturation of civil engineering, our early society-scale inference-and-decision-making systems break down, exposing serious conceptual flaws.

    12. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.

      Michael Jordan draws an analogy with the emergence of civil and chemical engineering, which built on the building blocks of the century prior: physics and chemistry. In this case the building blocks are ideas such as information, algorithms, data, uncertainty, computing, inference and optimization.

    13. I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.

      This is the key point of the article.

      There is an emerging field, relying heavily on the skill one might refer to as "provenance", which is necessary to build planetary-scale inference-and-decision-making systems.

    14. “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

      Data Provenance

      The discipline of thinking about:

      1. Where did the data arise?
      2. What inferences were drawn from the data?
      3. How relevant are those inferences to the present situation?
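
      These three questions translate naturally into metadata carried alongside every datum. A hypothetical sketch of such a record (the class and field names are my own invention, not Jordan's):

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class Provenance:
          origin: str                                      # (1) where did the data arise?
          inferences: list = field(default_factory=list)   # (2) what was inferred from it?
          context: dict = field(default_factory=dict)      # (3) relevance to the present case

      # The ultrasound story below, recorded as provenance:
      uk_study = Provenance(
          origin="UK ultrasound study, done a decade earlier",
          inferences=["white spots around the heart predict Down syndrome"],
          context={"imaging_resolution": "lower than the newer machines"},
      )
      ```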

    15. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”

      Example of where a global system for inference on healthcare data fails due to a lack of data provenance.

    1. these systems exist in isolation, cut off from the other systems of the city. The different systems overlap one another, and they overlap many other systems besides. The units, the physical places recognized as play places, must do the same. In a natural city this is what happens. Play takes place in a thousand places; it fills the interstices of adult life. As they play, children become full of their surroundings. How can children become filled with their surroundings in a fenced enclosure! They cannot.

      Tree thinking leads to treating recreation as a separate concept, for example by designating a separate area for children's play.

      This ignores the living reality that play crosses boundaries, changes contexts and that it is a mechanism through which children acquaint themselves with the world.

      Putting play inside a designated area goes against the spirit of play.

    2. It must be emphasized, lest the orderly mind shrink in horror from anything that is not clearly articulated and categorized in tree form, that the idea of overlap, ambiguity, multiplicity of aspect and the semilattice are not less orderly than the rigid tree, but more so. They represent a thicker, tougher, more subtle and more complex view of structure.

      Semilattices are not less ordered than a tree. They simply provide more degrees of order than a tree does.

    3. In a traditional society, if we ask a man to name his best friends and then ask each of these in turn to name their best friends, they will all name each other so that they form a closed group. A village is made up of a number of separate closed groups of this kind. But today's social structure is utterly different. If we ask a man to name his friends and then ask them in turn to name their friends, they will all name different people, very likely unknown to the first person; these people would again name others, and so on outwards. There are virtually no closed groups of people in modern society. The reality of today's social structure is thick with overlap - the systems of friends and acquaintances form a semilattice, not a tree

      Relationships in modern society, unlike traditional society, form overlapping, open structures

    4. However, in every city there are thousands, even millions, of times as many more systems at work whose physical residue does not appear as a unit in these tree structures. In the worst cases, the units which do appear fail to correspond to any living reality; and the real systems, whose existence actually makes the city live, have been provided with no physical receptacle.

      The problem with the tree model is that (in the worst case) the units that appear do not correspond to any living reality and many of the actual systems do not have a physical receptacle.

    5. Each unit in each tree that I have described, moreover, is the fixed, unchanging residue of some system in the living city (just as a house is the residue of the interactions between the members of a family, their emotions and their belongings; and a freeway is the residue of movement and commercial exchange).

      Residue of human activity

      When a city is conceived of as a tree, each unit represents the fixed residue of some system in the living city. Similarly, a house is the residue of the interactions between members of a family, their emotions, their belongings. A freeway is the residue of movement and commercial exchange.

    6. The enormity of this restriction is difficult to grasp. It is a little as though the members of a family were not free to make friends outside the family, except when the family as a whole made a friendship.

      The limitation of a tree structure is as if you limited members of a family to only make friends when the family as a whole made a friendship.

    7. So that we get a really clear understanding of what this means, and shall better see its implications, let us define a tree once again. Whenever we have a tree structure, it means that within this structure no piece of any unit is ever connected to other units, except through the medium of that unit as a whole.

      Another definition of a tree: no piece of any unit is ever connected to other units except through the medium of that unit as a whole.

    8. Still more important is the fact that the semilattice is potentially a much more complex and subtle structure than a tree. We may see just how much more complex a semilattice can be than a tree in the following fact: a tree based on 20 elements can contain at most 19 further subsets of the 20, while a semilattice based on the same 20 elements can contain more than 1,000,000 different subsets.

      The semilattice is potentially a much more complex structure than the tree because it admits orders of magnitude more possible subsets.
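
      Alexander's numbers are easy to verify (my arithmetic, not his): a strict hierarchy over 20 elements admits at most 19 further subsets (roughly one per internal branching of the tree), while an overlapping collection can in principle draw on every nonempty subset:

      $$ 2^{20} - 1 = 1{,}048{,}575 > 1{,}000{,}000 \gg 19 $$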

    9. Since this axiom excludes the possibility of overlapping sets, there is no way in which the semilattice axiom can be violated, so that every tree is a trivially simple semilattice.

      Every tree is also a (simple) semilattice.

    10. The tree axiom states: A collection of sets forms a tree if and only if, for any two sets that belong to the collection either one is wholly contained in the other, or else they are wholly disjoint.

      The tree axiom

    11. The semilattice axiom goes like this: A collection of sets forms a semilattice if and only if, when two overlapping sets belong to the collection, the set of elements common to both also belongs to the collection.

      The semilattice axiom

      A collection of sets forms a semilattice if and only if, when two overlapping sets belong to the collection, the set of elements common to both also belongs to the collection.
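
      Both axioms, and the claim above that every tree is trivially a semilattice, can be checked mechanically. A minimal sketch in Python (my code; a collection is modelled as a set of frozensets):

      ```python
      from itertools import combinations

      def is_tree(collection):
          """Tree axiom: any two sets are nested or wholly disjoint."""
          return all(a <= b or b <= a or not (a & b)
                     for a, b in combinations(collection, 2))

      def is_semilattice(collection):
          """Semilattice axiom: any two overlapping sets have their
          intersection in the collection as well."""
          return all(not (a & b) or (a & b) in collection
                     for a, b in combinations(collection, 2))

      tree = {frozenset("abc"), frozenset("ab"), frozenset("c")}
      semi = {frozenset("ab"), frozenset("bc"), frozenset("b")}
      assert is_tree(tree) and is_semilattice(tree)      # every tree passes both
      assert is_semilattice(semi) and not is_tree(semi)  # overlap allowed here
      ```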

    12. As we see from these two representations, the choice of subsets alone endows the collection of subsets as a whole with an overall structure. This is the structure which we are concerned with here.

      When we draw a collection of subsets we can see that the choice of subsets endows the collection with a structure.

    13. Now, a collection of subsets which goes to make up such a picture is not merely an amorphous collection. Automatically, merely because relationships are established among the subsets once the subsets are chosen, the collection has a definite structure.

      A collection of subsets, seen as units, conveys a structure through the relationships between them

      A collection of subsets, as seen by the viewer of a city, does not constitute an amorphous collection. By virtue of the relationships between the subsets, the collection has a definite structure.

    14. Of the many, many fixed concrete subsets of the city which are the receptacles for its systems and can therefore be thought of as significant physical units, we usually single out a few for special consideration. In fact, I claim that whatever picture of the city someone has is defined precisely by the subsets he sees as units.

      We think of cities as distinguished by the subsets that we see as units.

    15. When the elements of a set belong together because they co-operate or work together somehow, we call the set of elements a system.

      Definition of a System

      When a given set of elements co-operate or work together we call it a system.

    1. As the complexity of the topology underlying a hypermedia system increases, users have more ways to move from one information node to another, and thus can potentially find shorter paths to desired information. This very richness quickly leads to the problem of users becoming “lost in hyperspace,” reported as early as the ZOG work

      The Lost in Hyperspace Problem

      As hypermedia grows more complex, there are more ways to navigate from one information node to another (and potentially shorter paths to desired information). This richness leads to the problem of a user becoming "lost in hyperspace".

    2. Hypermedia is a set of nodes of information (the “hyperbase”) and a mechanism for moving among them.

      Hypermedia consists of a set of nodes of information (hyperbase) and a mechanism for navigating between them.

      A book is the degenerate case where the nodes are the paragraphs and the topology is a linear chain.
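
      A toy model of these two notes (my sketch): the hyperbase as an adjacency map, where a richer topology shortens paths at the cost of orientation.

      ```python
      from collections import deque

      def shortest_path(links, start, goal):
          """Breadth-first search over a hyperbase (node -> linked nodes)."""
          frontier, seen = deque([[start]]), {start}
          while frontier:
              path = frontier.popleft()
              if path[-1] == goal:
                  return path
              for nxt in links.get(path[-1], ()):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append(path + [nxt])

      book = {i: {i + 1} for i in range(9)}  # a linear chain of 10 paragraphs
      hyper = {**book, 0: {1, 7}}            # one extra cross-link
      print(shortest_path(book, 0, 9))       # traverses all 10 nodes
      print(shortest_path(hyper, 0, 9))      # [0, 7, 8, 9]
      ```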

    1. Overview diagrams are one of the best tools for orientation and navigation in hypermedia documents [17]. By presenting a map of the underlying information space, they allow the users to see where they are, what other information is available and how to access the other information. However, for any real-world hypermedia system with many nodes and links, the overview diagrams represent large complex network structures. They are generally shown as 2D or 3D graphs and comprehending such large complex graphs is extremely difficult. The layout of graphs is also a very difficult problem [1].

      Overview diagrams are one of the best tools for orientation and navigation in hypermedia documents.

      For real-world hypermedia documents with many nodes, an overview diagram becomes cluttered and unusable.

    1. Treemaps are a visualization method for hierarchies based on enclosure rather than connection [JS91]. Treemaps make it easy to spot outliers (for example, the few large files that are using up most of the space on a disk) as opposed to parent-child structure.

      Treemaps visualize enclosure rather than connection. This makes them good visualizations to spot outliers (e.g. large files on a disk) but not for understanding parent-child relationships.
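
      A one-level sketch of the idea (my code, a simplification of the slice-and-dice layout): sizes become areas, so the outlier dominates the picture at a glance.

      ```python
      def treemap(sizes, x=0.0, y=0.0, w=1.0, h=1.0):
          """Slice a rectangle into vertical strips with areas proportional to size."""
          total, offset, rects = sum(sizes.values()), 0.0, {}
          for name, size in sizes.items():
              frac = size / total
              rects[name] = (x + offset * w, y, frac * w, h)  # (x, y, width, height)
              offset += frac
          return rects

      for name, rect in treemap({"video.mov": 900, "notes.txt": 5, "draft.doc": 95}).items():
          print(name, rect)  # video.mov gets 90% of the area
      ```

      Nesting (a directory within a directory) is the same split applied recursively, typically alternating the slicing direction per level.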

    1. Folding This is the one function whose name is confusing because many products use the term for what we called “collapsing” above. For this article, collapsing is the process of making whole headers and paragraphs invisible, tucking them up under a “parent.” Folding is a different kind of tucking under; it works on paragraphs and blocks to reduce them to a single line, hiding the rest. A simple case of folding might involve a long paragraph that is reduced to just the first line—plus some indication that it is folded; this shows that a paragraph is there and something about its content without showing the whole thing. Folding is most common in single-pane outline displays, and a common use is to fold everything so that every header and paragraph is reduced to a single line. This can show the overall structure of a huge document, including paragraph leaves in a single view. You can use folding and collapsing independently. At one time, folding was one of the basics of text editors, but it has faded somewhat. Now only about half of the full-featured editors employ folding. One of the most interesting of these is jEdit. It has a very strong implementation of folding, so strong in fact it subsumes outlining. Though intended as a full editor, it can easily be used as an outliner front end to TeX-based systems. jEdit is shown in the example screenshot in both modes. The view on the right shows an outline folded like MORE and NoteBook do it, where the folds correspond to the outline structure. But see on the left we have shifted to “explicit folding” mode where blocks are marked with triple brackets. Then these entire blocks can be folded independent of the outline. Alas, folding is one area where the Mac is weak, but NoteBook has an implementation that is handy. It is like MORE’s was, and is bound to the outline structure, meaning you can only fold headers, not arbitrary blocks. But it has a nice touch: just entering a folded header temporarily expands it.

      Folding is the affordance of being able to limit the space a block of text (e.g. a paragraph) takes up to one line.

      This is different from collapsing, which hides nested subordinate elements under a parent element.
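
      The distinction is easy to pin down in code (a toy sketch, names mine):

      ```python
      def fold(block):
          """Folding: show only the first line of a multi-line block."""
          first, _, rest = block.partition("\n")
          return first + " [...]" if rest else first

      def collapse(node):
          """Collapsing: hide a header's children; the header stays visible."""
          return {"text": node["text"], "children": []}

      print(fold("A long paragraph\nthat goes on\nfor several lines."))
      # -> 'A long paragraph [...]'
      print(collapse({"text": "Header", "children": [{"text": "a child note"}]}))
      # -> {'text': 'Header', 'children': []}
      ```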

    1. The only piece of note taking software on the market that currently supports this feature (that I’m aware of) is Microsoft OneNote.

      OneNote supported on-the-fly interlinking with Double Square Bracket Linking

    2. Cunningham first developed the ability to automatically create internal links (read: new notes) when typing text in CamelCase. This meant you could easily be typing a sentence while describing a piece of information and simply type a word (or series of words) in CamelCase which would create a link to another piece of information (even if its page hadn't already been created). This was quickly superseded by the double square bracket links most wiki's use today to achieve the same results, and its the staple creation method in both wiki's and other premier information systems today.

      History of the Wiki-style linking or [[Double Square Bracket Linking]]
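
      Both generations of the syntax amount to a one-line regular expression each (my sketch, not Cunningham's original code):

      ```python
      import re

      CAMEL = re.compile(r"\b(?:[A-Z][a-z0-9]+){2,}\b")  # original CamelCase WikiWords
      BRACKET = re.compile(r"\[\[([^\]]+)\]\]")          # the [[double bracket]] successor

      text = "TedNelson coined hypertext; see [[Vannevar Bush]] and [[memex]]."
      print(CAMEL.findall(text))    # ['TedNelson']
      print(BRACKET.findall(text))  # ['Vannevar Bush', 'memex']
      ```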

    3. Evernote had long been the gold standard of note taking, flexible, functional and best of all affordable. While its user interface was a little odd at times, the features were excellent, but they made the simple mistake of not enabling wiki style internal links. Instead, they required a user to copy a note link from one note and paste it into another.

      Evernote made the mistake of not allowing on-the-fly wiki-style internal linking.

    4. It needs wiki-like superpowers. If there is one feature that excels above all others in information software of the past two decades that deserves its place in the note taking pantheon, its the humble double bracketed internal link. We all recognise power to store and retrieve information at will, but when you combine this power with the ability to successfully create new knowledge trees from existing documents, to follow thoughts in a 'stream of consciousness' non-linear fashion then individual notes transform from multiple static word-silos into a living information system. Sadly, this is the one major feature that is always neglected, or is piecemeal at best… and one time note taking king Evernote is to blame.

      Tim Kling posits that one of the most important features for a note taking app to have (which most lack at the time of writing) is the ability to link to other notes with the wiki-standard double bracket command.

    5. The hardest part for anyone remotely interested in a solution among this immense array of software is that each and every note taking app developer to date has decided to reinvent the wheel every time they’ve turned on their compiler. It gets even worse once you open the door on purpose-specific note taking applications.

      There seems to be a tendency among developers of note taking apps to reinvent the wheel.

    1. Among its many other features, Ecco Pro installed an icon (the "Shooter") into other programs so that you can add text highlighted in the other program to your Ecco Pro outline. And better yet, the information stored in Ecco Pro could be synchronized with the then nearly ubiquitous PalmPilot hardware PIMs.

      Ecco Pro had a clipper tool (the "Shooter") which allowed you to add text highlighted in other programs to your outline.

      It also synced with the PalmPilot.

    2. The demise of Ecco Pro was blamed by many (including the publishers of Ecco Pro themselves) on Microsoft's decision to bundle Outlook with Office at no extra charge. And while that was undoubtedly part of the problem, Ecco Pro also failed by marketing itself as merely a fancy PIM to lawyers and others then lacking technological sophistication sufficient to permit them to appreciate that the value and functionality of the product went so far beyond that of supposedly "free" Outlook that the two might as well have originated on different planets.

      Ecco Pro's demise was attributed to Microsoft's decision to bundle Outlook with Office at no extra charge.

      This, even though the two products could hardly have been more different.

    1. Guard fields proved invaluable for breaking cycles[5], a central anxiety of early hypertext [18][11].

      Storyspace used a scripting language to create what they called "Guard Fields" — boolean conditions that make a link clickable or not based on the pages the reader has already visited up to that point.

      What is interesting is that guard fields proved effective at breaking cycles (one of the risks of disorientation in hypertext).
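
      In modern terms a guard field is a predicate over the reader's visit history. A minimal sketch of the idea (not Storyspace's actual syntax):

      ```python
      def link_active(guard, visited):
          """A link is traversable only if its guard accepts the reading history."""
          return guard(set(visited))

      # Guard: require both clues first, and never offer the link twice.
      guard = lambda seen: {"clue1", "clue2"} <= seen and "ending" not in seen

      print(link_active(guard, ["intro", "clue1", "clue2"]))   # True
      print(link_active(guard, ["clue1", "clue2", "ending"]))  # False: the cycle is broken
      ```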

    1. Jeff Sonnabend in the Ecco Yahoo forum: "I remember first trying to learn Ecco 1.0. It was tough until the proverbial light went on. Then it all made sense. For me, it was simply understanding that Ecco is just a data base. So called folders are nothing more than fields in a flat-file table (like a spreadsheet). The rest is interface and implementation of various users' work or management systems in Ecco. That learning curve, to me, is the primary Ecco "weakness", at least as far as new users go."

      There was a steep learning curve involved with using ECCO Pro. Reminds me of Roam, which also has a steep learning curve, but then it feels like it's worth it.

    2. Chris Thompson: "If your goals in using a PIM are mostly calendaring, todos, and a phonebook, then Maximizer, Outlook, and Time and Chaos all do a reasonable job. On an enterprise-level, Lotus Notes would be another good choice. If you're more interested in keeping track of notes or research, Lotus Agenda, Zoot, or InfoHandler are better choices. For keeping track of miscellaneous files, InfoSelect is pretty good. On the other hand, if you want to do a little of everything, and do it well, Ecco really has no rivals."

      ECCO Pro was loved for its ability to do a lot of different things versus being good at one narrow thing. Reminds me of Roam Research.

    1. Those were a few of the problems that could have brought down Ecco Pro. In the end, however, it was one massive problem: There was a company down the street that was developing a product that would make Ecco Pro obsolete. Microsoft would release Office ’97 on November 19th, 1996. Among the many components of the new suite of products was a personal information manager called Outlook. Eight months later, NetManage would release its last update of Ecco Pro, version 4.01. Development of the software effectively ceased after that.

      Price claims ECCO Pro was terminated because it couldn't compete with Microsoft Outlook, which shipped with Office '97 in November 1996.

    2. One fundamental issue with Ecco Pro I gleaned from the many phone calls I answered from customers was that people didn’t really know who the product was for. Sales people wanted to use it as a contact manager. Small business owners wanted to use it as a database. Home users wanted to use it to make to-do lists and track appointments. The problem was that it tried to be all those things at once. As a consequence, it did none of them very well. The product was bloated with features and extremely difficult to use. Even seasoned users did not understand its advanced functionality very well. After a year and a half as a phone rep, I still couldn’t offer a good explanation as to who the product was for.

      According to Price, ECCO Pro's problem was that it had so many features, users couldn't figure out who it was for.

  9. Nov 2020
    1. At the same time, use of the web is now ubiquitous, and “Google” is a verb. With the advent of search engines, users have learned to find data by describing what they want (e.g., various characteristics of a photo) instead of where it lives (i.e., the full pathname of the photo in the filesystem). This can be seen in the popularity of search as a modern desktop paradigm in such products as Windows Desktop Search (WDS) [26]; MacOS X Spotlight [21], which fully integrates search with the Macintosh journaled HFS+ file system [7]; and the various desktop search engines for Linux [4, 27]. Indeed, MacOS X in particular goes one step further and exports APIs to developers allowing applications to directly access the meta-data store and content index.

      With the advent of search engines, search as a paradigm for retrieving files has become ubiquitous.

    2. Interaction with stable storage in the modern world isgenerally mediated by systems that fall roughly into oneof two categories: a filesystem or a database. Databasesassume as much as they can about the structure of thedata they store. The type of any given piece of datais known (e.g., an integer, an identifier, text, etc.), andthe relationships between data are well defined. Thedatabase is the all-knowing and exclusive arbiter of ac-cess to data.Unfortunately, if the user of the data wants more di-rect control over the data, a database is ill-suited. At thesame time, it is unwieldy to interact directly with stablestorage, so something light-weight in between a databaseand raw storage is needed. Filesystems have traditionallyplayed this role. They present a simple container abstrac-tion for data (a file) that is opaque to the system, and theyallow a simple organizational structure for those contain-ers (a hierarchical directory structure)

      Databases and filesystems are both systems which mediate the interaction between user and stable storage.

      A database aims to capture as much as it can about the structure of the data it stores. The database is the all-knowing and exclusive arbiter of access to the data.

      If a user wants direct access to the data, a database isn't the right choice, but interacting directly with stable storage is too involved.

      A filesystem is a lightweight (container) abstraction in between a database and raw storage. Files are containers whose contents are opaque to the system (meaningful only to the user), organized in a simple hierarchical structure of directories.

    1. I've spent the last 3.5 years building a platform for "information applications". The key observation which prompted this was that hierarchical file systems didn't work well for organising information within an organisation.However, hierarchy itself is still incredibly valuable. People think in terms of hierarchies - it's just that they think in terms of multiple hierarchies and an item will almost always belong in more than one place in those hierarchies.If you allow users to describe items in the way which makes sense to them, and then search and browse by any of the terms they've used, then you've eliminated almost all the frustrations of a file system. In my experience of working with people building complex information applications, you need: * deep hierarchy for classifying things * shallow hierarchy for noting relationships (eg "parent company") * multi-values for every single field * controlled values (in our case by linking to other items wherever possible) Unfortunately, none of this stuff is done well by existing database systems. Which was annoying, because I had to write an object store.

      Impressed by this comment. It foreshadows what Roam would become:

      • People think in terms of items belonging to multiple hierarchies
      • If you allow users to describe items in a way that makes sense to them and allow them to search and browse by any of the terms they've used, you've solved many of the problems of existing file systems

      What you need to build a complex information system is:

      • Deep hierarchies for classifying things (overlapping hierarchies should be possible)
      • Shallow hierarchies for noting relationships (Roam does this with a flat structure)
      • Multi-values for every single field
      • Controlled values (e.g. linking to other items when possible)
    1. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing.

      What Bush called "associative indexing" is the key idea behind the memex. Any item can immediately select others to which it has been previously linked.

    2. Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space.

      Once two items are linked, tapping a button would take you from one to the other.
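
      In modern terms, this "essential feature" is a symmetric link store. A minimal sketch:

      ```python
      links = {}

      def tie(a, b):
          """Bush's elementary act: tying two items together, recallable both ways."""
          links.setdefault(a, set()).add(b)
          links.setdefault(b, set()).add(a)

      tie("Turkish bow", "English long bow")
      tie("Turkish bow", "resistance to innovation")
      print(links["Turkish bow"])       # both tied items, one tap away
      print(links["English long bow"])  # and the link works in reverse
      ```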

    3. It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails.

      Although Bush envisioned associative trails to be navigable sequences of original content and notes interspersed, what seems to make more sense when viewed through today's technology, is a rich document of notes where the relevant pieces from external documents are transcluded.

    4. And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

      I find this idea of saved associative trails very interesting. In Roam the equivalent would be that you can save a sequence of opened Pages.

    5. Selection by association, rather than indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.

      It should be easy to surpass the mind's performance in terms of storage capacity as well as lossiness. It might be more difficult to surpass it in terms of the speed and flexibility with which it "follows an associative trail"

    6. The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

      The human mind doesn't work according to the file-cabinet metaphor — it operates by association.

      "With one items in its gras, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain."

    7. The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.

      Bush emphasises the importance of retrieval in the storage of information. He talks about technical limitations, but in this paragraph he stresses that retrieval is made more difficult by the "artificiality of systems of indexing", in other words, our default file-cabinet metaphor for storing information.

      Information in such a hierarchical architecture is found by descending from subclass to subclass. Moreover, the information we're looking for can only be in one place at a time (unless we introduce duplicates).

      Having found our item of interest, we need to ascend back up the hierarchy to make our next descent.

    8. So much for the manipulation of ideas and their insertion into the record. Thus far we seem to be worse off than before—for we can enormously extend the record; yet even in its present bulk we can hardly consult it. This is a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge. The prime action of use is selection, and here we are halting indeed. There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within stone walls of acceptable architectural form; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene.

      Retrieval is the key activity we're interested in; storage only matters insofar as we can retrieve effectively. At the time of writing (1945) large amounts of information could be stored ("extend the record"), but consulting that record was still difficult.

    9. There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.

      As scientific progress extends into increased specializations, efforts at integrating across disciplines are increasingly superficial.

    10. A record if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted.

      Bush emphasises the need for notes to not only be stored, but also to be queried (consulted).

    11. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.

      The rate at which we're generating new knowledge is increasing like never before (and this was written in 1945), but our ability to deal with that information has remained largely unimproved.

    12. Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month's efforts could be produced on call. Mendel's concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

      Specialization, although necessary, has rendered it impossible to stay up to date with the advances of a field.

    1. Semantically Annotated Content Opens Up Cost-Effective Opportunities: Search beyond keywords; Content aggregation beyond manual sifting through; Relationships discovery beyond human research.

      Benefits of semantic annotation

      1. Search beyond keywords
      2. Content aggregation
      3. Discovering relationships
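
      To make the first benefit concrete, here is a minimal TypeScript sketch (all names and data are hypothetical, and the entity id stands in for a real URI in a knowledge base) of the difference between keyword search and search over semantic annotations:

      ```typescript
      // Hypothetical shapes: content annotated with typed entities, not just text.
      interface EntityAnnotation {
        id: string;          // e.g. a URI identifying the entity in a knowledge base
        type: "Person" | "Organization" | "Location";
        surfaceForm: string; // the text span that was annotated
      }

      interface Doc {
        title: string;
        body: string;
        annotations: EntityAnnotation[];
      }

      // Keyword search matches strings, so it misses synonyms and nicknames.
      const keywordSearch = (docs: Doc[], term: string) =>
        docs.filter((d) => d.body.toLowerCase().includes(term.toLowerCase()));

      // Semantic search matches the annotated entity, however it was written.
      const semanticSearch = (docs: Doc[], entityId: string) =>
        docs.filter((d) => d.annotations.some((a) => a.id === entityId));

      const docs: Doc[] = [
        {
          title: "Q3 report",
          body: "Big Blue posted strong results.",
          annotations: [
            { id: "http://example.org/IBM", type: "Organization", surfaceForm: "Big Blue" },
          ],
        },
      ];

      console.log(keywordSearch(docs, "IBM").length);                     // 0
      console.log(semanticSearch(docs, "http://example.org/IBM").length); // 1
      ```
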
    1. Knowledge graphs combine characteristics of several data management paradigms: Database, because the data can be explored via structured queries; Graph, because they can be analyzed as any other network data structure; Knowledge base, because they bear formal semantics, which can be used to interpret the data and infer new facts.

      Characteristics / benefits of a knowledge graph

    1. The ontology data model can be applied to a set of individual facts to create a knowledge graph – a collection of entities, where the types and the relationships between them are expressed by nodes and edges between these nodes, By describing the structure of the knowledge in a domain, the ontology sets the stage for the knowledge graph to capture the data in it.

      How ontologies and knowledge graphs relate.
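
      A minimal sketch of how one data structure can exhibit all three characteristics from the previous annotation, and of how an ontology statement sets the stage for inference over the graph. This is an illustration, not any particular triple store; only rdfs:domain and rdf:type are standard vocabulary, everything else is invented:

      ```typescript
      // A knowledge graph as a bare set of (subject, predicate, object) triples.
      type Triple = [string, string, string];

      const graph: Triple[] = [
        ["alice", "worksFor", "acme"],          // individual facts...
        ["acme", "locatedIn", "berlin"],
        ["worksFor", "rdfs:domain", "Person"],  // ...and a piece of ontology
      ];

      // Database-like: structured queries via pattern matching (null = wildcard).
      const query = (s: string | null, p: string | null, o: string | null): Triple[] =>
        graph.filter(
          ([ts, tp, to]) =>
            (s === null || ts === s) && (p === null || tp === p) && (o === null || to === o)
        );

      // Graph-like: treat triples as edges and walk the network.
      const neighbours = (node: string) =>
        query(node, null, null).map(([, p, o]) => ({ p, o }));

      // Knowledge-base-like: the ontology triple lets us infer a new fact.
      // rdfs:domain says the subject of worksFor is a Person, so alice is a Person.
      const domain = query("worksFor", "rdfs:domain", null)[0]?.[2];
      const inferred: Triple[] =
        domain === undefined
          ? []
          : query(null, "worksFor", null).map(([s]): Triple => [s, "rdf:type", domain]);

      console.log(query("alice", null, null)); // [["alice", "worksFor", "acme"]]
      console.log(neighbours("acme"));         // [{ p: "locatedIn", o: "berlin" }]
      console.log(inferred);                   // [["alice", "rdf:type", "Person"]]
      ```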

    1. You need to have a habit of tagging something as a to-do to synthesize the idea further, and then periodically go back and review those and write them in a more crisp language, or build up your evergreen notes so that you have this library of thoughts that you are able to get that compound interest on.

      You need a system inside Roam which helps you review notes that are not yet refined.

    2. We encourage people to use the daily notes and to brainstorm and brain dump, and just write all the things they’re thinking. I think that the first thing that we’re interested in is, how do you build systems so that it’s easy for you to take those and gradually refine them?

      Conor is asking himself how you get people to take (daily) notes, and how you then get them to gradually refine those notes.

    3. I think that you need to be able to get compound interest on your thoughts. Good ideas come from when ideas have sex: the intersection of different things that you’ve been reading or different things you’ve been seeing. So you can have better ideas faster if you are actually reviewing the old things and you are building up. You’re not throwing away work.

      Good ideas come from ideas meeting ("having sex"), so it is beneficial to engineer those encounters by reviewing old notes rather than throwing work away.

    4. We’ve always wanted to build a layer on top of the web where every person can have their mental model of how the whole world works, and they can start to share ideas across everything.

      Conor's idea of Roam was a layer on top of the web where everyone can have their mental model of how the world works.

    5. I was originally interested in figuring out how you could figure out what’s actually true online.

      Conor was trying to figure out how to find out what's true online with Roam.

    1. With most mind mapping software something at the bottom of one branch cannot be elegantly linked to something that is categorized in a distant branch unless your mind map is really small. So “mind maps” essentially have the same linear limitation that your computer filing system does.

      Mind mapping runs into the same problem because it is also a hierarchy.

    2. Almost all interfaces today, with the exception of TheBrain visual user interface, are limited to organizing information into hierarchies, where a piece of information can only be categorized into one place. For simple applications this is fine, but for users engaging in more complex business processes, it is simply inadequate. A document will have a variety of different issues or people associated with it – with hierarchies one cannot show all these relationships without multiple copies of the information.

      Shelley Hayduk also identifies the issue that most information management software uses a file cabinet metaphor (i.e. hierarchy). This has the limitation that a piece of information can only be categorized in one place. For more complex things, this is inadequate.

    1. One major advantage of Lotus Notes is that it allows all the major information organization techniques to be used in one information space: outlines, graphics, hypertext links, relational databases, free (rich) text, expanding/collapsing reports, collapsing rich text sections, tabbed notebooks (like wizards) and tables. In other words, Lotus Notes is a hodgepodge of every information organization technique Lotus could think of, all thrown into one quirky product. As such, it is phenomenally satisfying and phenomenally frustrating at the same time.

      John Redmood claims that the advantage of Lotus Notes was that it brought together a wide range of information organization techniques: outlines, graphics, hypertext links, relational database, expanding/collapsing reports, collapsing rich text sections, tabbed notebooks and tables.

    2. Three panes: A three-pane outliner uses one pane for the table of contents, one pane for items in that "section" or "chapter", and a final pane for the currently highlighted document. I use three-pane outliners for shared projects, where there are many documents in a category that should be isolated from other items.

      A three pane interface introduces a third level of hierarchy that can be used in the organization. It can separate, for instance, the high level chapters in the first pane, the sections of those chapters in the second, and the content in the third.

    3. Two panes: When designing user interfaces for web-based software programs, or for designing web sites, I prefer two pane outliners. The category pane mimics a site map, or a navigation tree. For example, see my web interface for MailEngine: the left hand side lists all the possible interface pages, and clicking on a category page brings it up. This is the way two-pane outliners work, and so they work well for this kind of project. Steve Cohen writes: Re: Maple, Jot+, etc. tree-based (= 2-pane) PIMs. Yes, they're better for info storage & organizing, rather than composing/writing.

      In a two pane interface the left pane mimics a sitemap or navigation tree. The left hand side lists all possible parents to navigate to, and when clicked, the main pane will bring up the child content.

      This separates the content work from the organization work.

    4. One pane: With one pane outliners, the content is displayed immediately below the category. A printed legal document is an example of a one-pane document. A web site with a table-of-contents "frame" on the left hand side is similar to a two-pane outline. A Usenet news group is similar to a three pane outline. When writing documents, or organizing ideas for a project (such as a speech, or for software design) I much prefer one pane outlines. I find they are more conducive to collapsing ideas, because you can mix text with categories, rather than radically splitting the organizational technique from the content (as the two and three pane outlines do).

      In one pane outliners the text is displayed under its parent.

      This can be more conducive to writing because you're not splitting work on the organization from work on the content. In writing this separation is fuzzy anyway.
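
      The three layouts can be read as different projections of one underlying outline tree. A small TypeScript sketch of that idea (illustrative names, not modeled on any particular outliner):

      ```typescript
      // One outline tree can back all three interfaces; the panes differ only in
      // how many levels of the hierarchy get their own view.
      interface OutlineNode {
        title: string;
        body?: string;
        children: OutlineNode[];
      }

      const sec: OutlineNode = { title: "Section 1.1", body: "Content...", children: [] };
      const ch1: OutlineNode = { title: "Chapter 1", children: [sec] };
      const book: OutlineNode = { title: "Book", children: [ch1] };

      // One pane: titles and content interleaved in a single indented view.
      const onePane = (n: OutlineNode, depth = 0): string[] => [
        "  ".repeat(depth) + n.title + (n.body ? ": " + n.body : ""),
        ...n.children.flatMap((c) => onePane(c, depth + 1)),
      ];

      // Two panes: a navigation tree of titles, plus the selected document.
      const titles = (n: OutlineNode, depth = 0): string[] => [
        "  ".repeat(depth) + n.title,
        ...n.children.flatMap((c) => titles(c, depth + 1)),
      ];
      const twoPane = (root: OutlineNode, selected: OutlineNode) => ({
        navigation: titles(root),
        document: selected.body ?? "",
      });

      // Three panes: table of contents, items in the selected section, open document.
      const threePane = (root: OutlineNode, section: OutlineNode, item: OutlineNode) => ({
        toc: root.children.map((c) => c.title),
        items: section.children.map((c) => c.title),
        document: item.body ?? "",
      });

      console.log(onePane(book).join("\n"));
      console.log(twoPane(book, sec));
      console.log(threePane(book, ch1, sec)); // { toc: ["Chapter 1"], items: ["Section 1.1"], ... }
      ```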

    5. With Lotus Notes, I can combine a hierarchically organized outline view of the documents, with full text searching, hypertext links and traditiona l relational database like reports (for example, a sorted view of items to do).

      What Lotus Notes allowed you to do is combine a hierarchically organized overview, achieved through an outliner, with search, hyperlinks and relational-database-like reports. Lotus Notes also allowed you to organize different document formats (Word, emails, etc.)

    1. An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity [9]. Ontological representations allow semantic modeling of knowledge, and are therefore commonly used as knowledge bases in artificial intelligence (AI) applications, for example, in the context of knowledge-based systems. Application of an ontology as knowledge base facilitates validation of semantic relationships and derivation of conclusions from known facts for inference (i.e., reasoning) [9]

      Definition of an ontology

    2. A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge.

      Definition of a Knowledge Graph
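
      A toy illustration of that definition: known facts plus one ontology-style rule, with a naive forward-chaining loop standing in for the reasoner. In a real system the transitivity of locatedIn would be declared in the ontology (e.g. as an owl:TransitiveProperty) rather than hard-coded; all data here is invented:

      ```typescript
      // Facts are stored as JSON strings so a Set can deduplicate them.
      type Fact = [string, string, string];

      const facts = new Set<string>([
        JSON.stringify(["office", "locatedIn", "berlin"]),
        JSON.stringify(["berlin", "locatedIn", "germany"]),
      ]);

      // Forward chaining: apply the rule until no new facts appear (a fixpoint).
      const reason = (): void => {
        let changed = true;
        while (changed) {
          changed = false;
          const current = [...facts].map((f) => JSON.parse(f) as Fact);
          for (const [a, p1, b] of current) {
            for (const [c, p2, d] of current) {
              // Rule: locatedIn(a, b) and locatedIn(b, d) imply locatedIn(a, d)
              if (p1 === "locatedIn" && p2 === "locatedIn" && b === c) {
                const derived = JSON.stringify([a, "locatedIn", d]);
                if (!facts.has(derived)) {
                  facts.add(derived);
                  changed = true;
                }
              }
            }
          }
        }
      };

      reason();
      console.log([...facts].map((f) => JSON.parse(f) as Fact));
      // Now includes the derived fact ["office", "locatedIn", "germany"]
      ```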

    1. If every site that linked to yours was visible on your page, and you had no control over who could and couldn't link to you, it is not hard to imagine the Trollish implications...

      This is an important point. This is why the internet doesn't have contextual backlinks.

    1. Generally it takes a week or two after a person has been infected before they start to produce IgG, and with covid, you’re generally only infectious for about a week after you start to have symptoms, so antibody tests are not designed to find active infections. Instead the purpose is to see if you have had an infection in the past.

      It takes a week or two for an infected person to start producing IgG, the type of antibody that antibody tests typically look for.

      [[Z: Antibody tests are only useful to see if you had an infection in the past]]

    2. In most clinical settings (including the one I work in), all the doctor is provided with is a positive or negative result. No mention is made of the number of cycles used to produce the positive result.

      [[Z: The number of PCR cycles has not been standardized, and is usually not even mentioned to the doctor]]

    3. If you get a positive PCR test and you want to be sure that what you’re finding is a true positive, then you have to perform a viral culture. What this means is that you take the sample, add it to respiratory cells in a petri dish, and see if you can get those cells to start producing new virus particles. If they do, then you know you have a true positive result. For this reason, viral culture is considered the “gold standard” method for diagnosis of viral infections. However, this method is rarely used in clinical practice, which means that in reality, a diagnosis is often made based entirely on the PCR test.

      [[Z: A positive PCR should be followed by a viral culture test to see if you're dealing with a live infection]]

      After a positive PCR test, you don't know whether the virus is alive or not. To find out, you can add the sample to respiratory cells (in the case of a respiratory virus) and see if they start producing virus particles.

      [[Z: Viral culture tests are rarely used in clinical practice]]

      Positive diagnoses of COVID-19 are made based on the PCR test alone.

    4. One thing that’s important to understand at this point is that PCR is only detecting sequences of the viral genome, it is not able to detect whole viral particles, so it is not able to tell you whether what you are finding is live virus, or just non-infectious fragments of viral genome.

      PCR only tells you if you're detecting sequences of the viral genome. It doesn't tell you that what you're finding is live virus or not.

    1. Finally, you gain the ability to reuse previously built packets for new projects. Maybe some research you did for an online marketing campaign becomes useful for a new campaign. Or some sketches that didn’t quite make it into an old design give you inspiration for a new one. Or some book notes you wrote down casually turn out to be very useful for an unforeseen challenge a year later.

      The Intermediate Packet approach allows you to reuse previously built packets for new projects

      By incorporating existing packets in new projects, you gain the ability to deliver new projects much faster.

    2. Fourth, big projects become less intimidating. Big, ambitious projects feel risky, because all the time you spend on it will feel like a waste if you don’t succeed. But if your only goal is to create an intermediate packet and show it to someone — good notes on a book, a Pinterest board of design inspirations, just one module of code — then you can trick yourself into getting started. And even if that particular Big Project doesn’t pan out, you’ll still have the value of the packets at your disposal!

      The Intermediate Packet approach makes big projects less intimidating.

      Big projects feel risky because the time you spend on them will feel like a waste if you don't succeed. Intermediate Packets allow you to finish smaller chunks. You can use this to trick yourself into getting started on bigger things.

    3. By always having a range of packets ready to work on, each one pre-prepared to work on at any time, you can be productive under any circumstances – waiting in the airport before a flight, the doctor’s waiting room, 15 minutes in between meetings.

      If you have a range of packet sizes available to work on, you can use any time block size to deliver value.

    4. Third, you can create value in any span of time. If we see our work as creating these intermediate packets, we can find ways to create value in any span of time, no matter how short. Productivity becomes a game of matching each available block of time (or state of mind, or mood, or energy level) with a corresponding packet that is perfectly suited to it.

      The Intermediate Packet approach ensures you are delivering value after every iteration, regardless of size

      You no longer need to rely on large blocks of uninterrupted time if you focus on delivering something of value at the end of each block of time.

    5. Second, you have more frequent opportunities to get feedback. Instead of spending weeks hammering away in isolation, only to discover that you made some mistaken assumptions, you can get feedback at each intermediate stage. You become more adaptable and more accountable, because you are performing your work in public.

      Intermediate Packets give you more opportunities to get feedback

    6. The first benefit of working this way is that you become interruption-proof. Because you rarely even attempt to load the entire project into your mind all at once, there’s not much to “unload” if someone interrupts you. It’s much easier to pick up where you left off, because you’re not trying to juggle all the work-in-process in your head.

      The Intermediate Packet approach makes you more resilient to interruptions

      Because you're not loading an entire project into your mind at once, you lose less context when you get interrupted.

    1. Bringing this back to filtering, not only am I saving time and preserving focus by batch processing both the collection and the consumption of new content, I’m time-shifting the curation process to a time better suited for reading, and (most critically) removed from the temptations, stresses, and biopsychosocial hooks that first lured me in.I am always amazed by what happens: no matter how stringent I was in the original collecting, no matter how certain I was that this thing was worthwhile, I regularly eliminate 1/3 of my list before reading. The post that looked SO INTERESTING when compared to that one task I’d been procrastinating on, in retrospect isn’t even something I care about.What I’m essentially doing is creating a buffer. Instead of pushing a new piece of info through from intake to processing to consumption without any scrutiny, I’m creating a pool of options drawn from a longer time period, which allows me to make decisions from a higher perspective, where those decisions are much better aligned with what truly matters to me.

      Using read-it later apps helps you separate collection from filtering.

      By time-shifting the filtering process to a time better suited for reading, removed from the temptations that first lured you in, you will find yourself eliminating as much as a third of the content you saved before even reading it.

      This allows you to "make decisions from a higher perspective"

    1. There are different schools of thought in the realm of productivity.

      The energy school focuses on optimizing your energy levels. The focus school is all about getting into and staying in flow. The efficiency school is obsessed with the logistics of work.

      Tiago positions his philosophy as the value school: making sure you deliver value after every block of work, in the form of what Tiago calls Intermediate Packets.

      He draws parallels to Just In Time production from Toyota and Continuous Integration in software development.

      Intermediate Packets are continuous integration for knowledge work.

    1. Alexander proposes homes and offices be designed and built by their eventual occupants. These people, he reasons, know best their requirements for a particular structure. We agree, and make the same argument for computer programs. Computer users should write their own programs. Kent Beck & Ward Cunningham, 1987 [7]

      Users should program their own programs because they know their requirements the best.

      [7]: Beck, K. and Cunningham, W. Using pattern languages for object-oriented programs. Tektronix, Inc. Technical Report No. CR-87-43 (September 17, 1987), presented at OOPSLA-87 workshop on Specification and Design for Object-Oriented Programming. Available online at http://c2.com/doc/oopsla87.html (accessed 17 September 2009)

    2. Before the publication of the ‘Gang of Four’ book that popularised software patterns [4], Richard Gabriel described Christopher Alexander’s patterns in 1993 as a basis for reusable object-oriented software in the following way: Habitability is the characteristic of source code that enables programmers, coders, bug-fixers, and people coming to the code later in its life to understand its construction and intentions and to change it comfortably and confidently.

      An interesting concept for how easy a piece of software is to maintain.

    1. Connected to this are Andy Matuschak’s comments about contextual backlinks bootstrapping new concepts before explicit definitions come into play.

      What Joel says here about Contextual Backlinks is that they allow you to "bootstrap" a concept (i.e. start working with it) without explicit definitions coming into play (or, as Andy would say, before the node has any content).

    2. Easily updated pages: don’t worry about precisely naming something at first. Let the meaning emerge over time and easily change it (propagating through all references).

      Joel highlights a feature here of Roam and ties it to incremental formalisms.

      In Roam you can update a page name and it propagates across all references.
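
      One plausible implementation of that behaviour, as a sketch (hypothetical data model, not Roam's actual internals): references point at a stable page id and the title lives in exactly one place, so a rename is a single write that every reference picks up at render time.

      ```typescript
      interface Page {
        id: string;
        title: string;
        blocks: string[]; // block text with embedded references like "((page:p1))"
      }

      const pages = new Map<string, Page>([
        ["p1", { id: "p1", title: "Working title", blocks: [] }],
        ["p2", { id: "p2", title: "Notes", blocks: ["See ((page:p1)) for details."] }],
      ]);

      // Rendering resolves ids to the *current* title at read time...
      const render = (text: string): string =>
        text.replace(/\(\(page:(\w+)\)\)/g, (_m, id: string) => `[[${pages.get(id)?.title ?? id}]]`);

      // ...so renaming is one write, and all references follow automatically.
      pages.get("p1")!.title = "Emergent, better name";
      console.log(render(pages.get("p2")!.blocks[0]));
      // "See [[Emergent, better name]] for details."
      ```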

    3. Cognitive Overhead (aka Cognitive Load): often the task of specifying formalism is extraneous to the primary task, or is just plain annoying to do.

      This is the task that you're required to do when you want to save a note in Evernote or Notion. You need to choose where it goes.

    4. The basic intuition is described well by the Shipman & Marshall paper: users enter information in a mostly informal fashion, and then formalize only later in the task when appropriate formalisms become clear and also (more) immediately useful.

      Incremental formalism

      Users enter information in an informal fashion. They only formalize later when the appropriate formalism becomes clear and/or immediately useful.
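
      A small sketch of what incremental formalism can look like in data terms (all shapes and field names are hypothetical): a note begins as free text, and typed structure is layered on only once the right formalism has become clear and useful.

      ```typescript
      interface Note {
        text: string;
        // Optional, added later: formal attributes extracted from the text.
        attributes?: Record<string, string>;
      }

      // Capture is informal: no schema decision is forced at entry time.
      let note: Note = { text: "Met Dana about the Q3 budget, follow up Friday." };

      // Later, once a "meeting" formalism proves useful, the same note is
      // formalized without having committed to a schema up front.
      note = {
        ...note,
        attributes: { type: "meeting", person: "Dana", topic: "Q3 budget", followUp: "Friday" },
      };

      // Formal queries only become possible (and necessary) at this point.
      const isMeeting = (n: Note) => n.attributes?.type === "meeting";
      console.log(isMeeting(note)); // true
      ```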

    5. It’s important to notice something about these examples of synthesis representations: they go quite a bit further than simply grouping or associating things (though that is an important start). They have some kind of formal semantic structure (otherwise known as formality) that specifies what entities exist, and what kinds of relations exist between the entities. This formal structure isn’t just for show: it’s what enables the kind of synthesis that really powers significant knowledge work! Formal structures unlock powerful forms of reasoning like conceptual combination, analogy, and causal reasoning.

      Formalisms enable synthesis to happen.

    6. I understand synthesis to be fundamentally about creating a new whole out of components (Strike & Posner, 1983).

      A definition for synthesis.

    1. Systems which display backlinks to a node permit a new behavior: you can define a new node extensionally (rather than intensionally) by simply linking to it from many other nodes—even before it has any content.

      Nodes in a knowledge management system can be defined extensionally, rather than intensionally, through their backlinks and their respective context.

    2. This effect requires Contextual backlinks: a simple list of backlinks won’t implicitly define a node very effectively. You need to be able to see the context around the backlink to understand what’s being implied.

      Bi-Directional links, or backlinks, only help define the node being linked to if the context in which the links occur is also provided.
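
      In code, the distinction is easy to see. A sketch with a hypothetical note format (links written as [[Page Title]]): the target page need not exist at all, and each backlink carries the block of text it occurs in, which is what supplies the implicit definition.

      ```typescript
      interface Note {
        title: string;
        blocks: string[]; // blocks may contain links written as [[Page Title]]
      }

      const notes: Note[] = [
        { title: "Meeting 2021-06-01", blocks: ["[[Project X]] needs a new timeline."] },
        { title: "Reading log", blocks: ["This paper is relevant to [[Project X]]."] },
      ];

      // "Project X" has no page of its own yet; its contextual backlinks
      // define it extensionally.
      const contextualBacklinks = (target: string) =>
        notes.flatMap((n) =>
          n.blocks
            .filter((b) => b.includes(`[[${target}]]`))
            .map((context) => ({ from: n.title, context }))
        );

      console.log(contextualBacklinks("Project X"));
      // [{ from: "Meeting 2021-06-01", context: "[[Project X]] needs a new timeline." },
      //  { from: "Reading log", context: "This paper is relevant to [[Project X]]." }]
      ```

      A bare list of the linking page titles would tell you almost nothing; the surrounding block is what makes the backlink definitional.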

    1. Using Next's special getStaticProps hook and glorious dynamic imports, it's trivial to import a Markdown file and pass its contents into your React components as a prop. This achieves the holy grail I was searching for: the ability to easily mix React and Markdown.

      Colin has perhaps found an alternative to embedding JS in Markdown files: importing the Markdown content into React (JSX) components instead.
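
      The pattern, as a short sketch. getStaticProps and dynamic import are real Next.js features; the rest is assumption: the .md import presumes a loader (e.g. raw-loader, plus a `declare module "*.md"` declaration) is configured so that importing a Markdown file yields a string, react-markdown is just one way to render that string, and the file paths are hypothetical.

      ```tsx
      // pages/post.tsx
      import { GetStaticProps } from "next";
      import ReactMarkdown from "react-markdown"; // one common renderer choice

      interface Props {
        content: string;
      }

      // getStaticProps runs at build time, so the Markdown import never ships
      // to the client.
      export const getStaticProps: GetStaticProps<Props> = async () => {
        const content = (await import("../content/post.md")).default as string;
        return { props: { content } };
      };

      // The Markdown arrives as an ordinary prop and can be mixed freely with React.
      export default function Post({ content }: Props) {
        return (
          <article>
            <h1>From Markdown, via getStaticProps</h1>
            <ReactMarkdown>{content}</ReactMarkdown>
          </article>
        );
      }
      ```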

    1. In such cases it is important to capture the connections radially, as it were, but at the same time also by right away recording back links in the slips that are being linked to. In this working procedure, the content that we take note of is usually also enriched

      By adding a backlink for every link we make, we are also enriching the content we are linking to.

    2. 2. Possibility of linking (Verweisungsmöglichkeiten). Since all papers have fixed numbers, you can add as many references to them as you may want. Central concepts can have many links which show on which other contexts we can find materials relevant for them. Through references, we can, without too much work or paper, solve the problem of multiple storage. Given this technique, it is less important where we place a new note. If there are several possibilities, we can solve the problem as we wish and just record the connection by a link [or reference].

      Since a note has a unique identifier, you can link to it.

      Since we can link to notes, it doesn't matter where we place a note.
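
      A sketch of the two mechanics together (ids and texts are invented): fixed ids make slips addressable no matter where they are filed, and creating a link records the backlink on the target slip right away, enriching it as Luhmann describes.

      ```typescript
      interface Slip {
        id: string;          // fixed number; never changes once assigned
        text: string;
        links: string[];     // outgoing references
        backlinks: string[]; // recorded "right away", at link-creation time
      }

      const slips = new Map<string, Slip>();

      const addSlip = (id: string, text: string): void => {
        slips.set(id, { id, text, links: [], backlinks: [] });
      };

      // Linking writes in both directions: the source gets a reference,
      // the target is enriched with a backlink.
      const link = (fromId: string, toId: string): void => {
        slips.get(fromId)!.links.push(toId);
        slips.get(toId)!.backlinks.push(fromId);
      };

      addSlip("21/3d", "Functional differentiation...");
      addSlip("57/12", "Systems theory...");
      link("57/12", "21/3d");

      // Because ids are fixed, it doesn't matter where a slip is physically
      // filed: the reference finds it regardless.
      console.log(slips.get("21/3d")!.backlinks); // ["57/12"]
      ```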

    1. The future increasingly looks like one where companies use very specific apps to solve their jobs to be done. And collaboration is right where we work. And that makes sense, of course. Collaboration *should* be where you work.

      Collaboration, increasingly, happens where we work.

    2. As it becomes more clear what are specific functional jobs to be done, we see more specialized apps closely aligned with solving for that specific loop. And increasingly collaboration is built in natively to them. In fact, for many reasons collaboration being natively built into them may be one of the main driving forces behind the venture interest and success in these spaces.

      As it becomes more clear what the functional job to be done is, we see more specialized apps aligned with solving that specific loop. Collaboration is increasingly built natively into them.

    3. To understand this is to understand that there is no distinction between productivity and collaboration. But we’re only now fully appreciating it.

      This is perhaps Kwok's central claim in the article. We used to think of productivity and collaboration as separate things when in reality they are inseparable.