328 Matching Annotations
  1. Jul 2020
    1. Big companies have enjoyed big profits, fattened by widening margins as wages stagnate. That’s allowed them to sustain a huge debt load. But drilling down shows that credit quality, as viewed by ratings companies, has tumbled. According to S&P Global Ratings, the companies rated BBB+, BBB, or BBB- (the three lowest investment grades before they would hit “junk” status and face much higher interest payments) now outnumber all of the companies with some level of A-rated debt. It looks as though companies are “gaming” the ratings companies, borrowing as much as they can get away with.

      This is precisely what happened with consumers' credit scores as a function of their borrowing habits: you see a large number of consumers clustered right at the 600 threshold. These arbitrary cutoffs create problematic tipping points.

    1. Spaced repetition is a technique for spacing out of reviews of previously learned material according to an algorithm designed to optimize your limited time for review. Each time you review a piece of information, you supply feedback to that algorithm which estimates the optimal time to show you that information again.
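
      A minimal sketch of such a scheduling algorithm, loosely in the style of the SM-2 family of spaced-repetition schedulers; the interval multipliers and ease adjustments are illustrative assumptions, not the parameters of any particular tool.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Card:
          interval_days: float = 1.0   # days until the next review
          ease: float = 2.5            # growth factor for the interval

      def review(card: Card, quality: int) -> Card:
          """Update a card's schedule after a review.

          quality: 0 (forgot) .. 5 (perfect recall), as in SM-2-style systems.
          """
          if quality < 3:
              # Failed recall: restart the schedule and lower the ease a little.
              card.interval_days = 1.0
              card.ease = max(1.3, card.ease - 0.2)
          else:
              # Successful recall: grow the interval and nudge the ease.
              card.interval_days *= card.ease
              card.ease += 0.1 * (quality - 3)
          return card

      # Each review feeds back into the schedule: easy items drift far apart,
      # hard items stay close together.
      card = Card()
      for quality in (5, 4, 3):
          card = review(card, quality)
          print(round(card.interval_days, 1))   # 2.5, 6.8, 18.9
      ```
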
    1. You'll also want to review your original reaction to those passages. You can capture these reactions, of course, by taking notes.

      Note taking requires a little more effort than I would expend during the initial capture process. There's a wide variance in the thoroughness and time that one takes to write a note. As a consequence, note taking, when done most thoroughly, may disrupt the flow of reading. What's more, you do not know ahead of time the relative significance of each passage, and much of the note-taking effort can go to waste if too much focus is paid to those passages that are relatively less substantial.

    2. Mortimer Adler, the author of the classic manual on reading How To Read a Book
    3. With reading, this means highlighting especially salient passages.

      Yes, we need to highlight salient passages, but more importantly we need to begin our capture of important information, as Mortimer Adler suggests, by "coming to terms" with the author.

    1. There’s a natural tension between the two, compression and context.

      This is a false dichotomy and tradeoff. You can compress information based on its context.

  2. Jun 2020
    1. My somewhat pious belief was that if people focused more on remembering the basics, and worried less about the “difficult” high-level issues, they'd find the high-level issues took care of themselves. But while I held this as a strong conviction about other people, I never realized it also applied to me. And I had no idea at all how strongly it applied to me. Using Anki to read papers in new fields disabused me of this illusion. I found it almost unsettling how much easier Anki made learning such subjects. I now believe memory of the basics is often the single largest barrier to understanding. If you have a system such as Anki for overcoming that barrier, then you will find it much, much easier to read into new fields.
  3. May 2020
    1. The northern end of the park has typically seen less affluent neighbors and significantly less attention, but Central Park Conservancy is about to change that. Earlier this fall, the non-profit group announced a $150 million renovation that would improve the parkland, add a new boardwalk along the man-made lake known as Harlem Meer, and build a new recreation facility to replace the Lasker pool and skating rink, both of which date back to the 1960’s. (Side note: The Trump Organization has the concession to run the skating rink through 2021, by which time there may be someone else in the White House.) Construction is set to begin in 2021, and completion is estimated for 2024.
    1. “paying for the regular delivery of well-defined value” — are so important. I defined every part of that phrase: Paying: A subscription is an ongoing commitment to the production of content, not a one-off payment for one piece of content that catches the eye. Regular Delivery: A subscriber does not need to depend on the random discovery of content; said content can be delivered to the subscriber directly, whether that be email, a bookmark, or an app. Well-defined Value: A subscriber needs to know what they are paying for, and it needs to be worth it.
    2. It is very important to clearly define what a subscriptions means. First, it’s not a donation: it is asking a customer to pay money for a product. What, then, is the product? It is not, in fact, any one article (a point that is missed by the misguided focus on micro-transactions). Rather, a subscriber is paying for the regular delivery of well-defined value. The importance of this distinction stems directly from the economics involved: the marginal cost of any one Stratechery article is $0. After all, it is simply text on a screen, a few bits flipped in a costless arrangement. It makes about as much sense to sell those bit-flipping configurations as it does to sell, say, an MP3, costlessly copied. So you need to sell something different. In the case of MP3s, what the music industry finally learned — after years of kicking and screaming about how terribly unfair it was that people “stole” their music, which didn’t actually make sense because digital goods are non-rivalrous — is that they should sell convenience. If streaming music is free on a marginal cost basis, why not deliver all of the music to all of the customers for a monthly fee? This is the same idea behind nearly every large consumer-facing web service: Netflix, YouTube, Facebook, Google, etc. are all predicated on the idea that content is free to deliver, and consumers should have access to as much as possible. Of course how they monetize that convenience differs: Netflix has subscriptions, while Google, YouTube, and Facebook deliver ads (the latter two also leverage the fact that content is free to create). None of them, though, sell discrete digital goods. It just doesn’t make sense.
    1. Cloze deletion is, of course, just a fancy way of saying fill in the blank. This might sound trivial, but the simple act forces you to consider the surrounding context and search your mind for an answer. This, in turn, is scientifically proven to form stronger memories enabling you to remember profoundly more of what you've read.
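
      As a rough illustration of the mechanic (the sentence and helper function are made up for the example), a cloze card is just the original text with one span blanked out:

      ```python
      def make_cloze(sentence: str, answer: str, blank: str = "[...]") -> tuple:
          """Return (prompt, answer), with the answer span blanked out of the sentence."""
          if answer not in sentence:
              raise ValueError("answer must appear in the sentence")
          return sentence.replace(answer, blank, 1), answer

      prompt, answer = make_cloze(
          "Cloze deletion forces you to consider the surrounding context.",
          "surrounding context",
      )
      print(prompt)   # Cloze deletion forces you to consider the [...].
      print(answer)   # surrounding context
      ```
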
    1. The music industry, meanwhile, has, at least relative to newspapers, come out of the shift to the Internet in relatively good shape; while piracy drove the music labels into the arms of Apple, which unbundled the album into the song, streaming has rewarded the integration of back catalogs and new music with bundle economics: more and more users are willing to pay $10/month for access to everything, significantly increasing the average revenue per customer. The result is an industry that looks remarkably similar to the pre-Internet era: Notice how little power Spotify and Apple Music have; neither has a sufficient user base to attract suppliers (artists) based on pure economics, in part because they don’t have access to back catalogs. Unlike newspapers, music labels built an integration that transcends distribution.
  4. Apr 2020
    1. The team behind Hypothesis, an open-source software tool that allows people to annotate web pages, announced in March that its users had collectively posted more than 5 million comments across the scholarly web since the tool was launched in 2011. That’s up from about 220,000 total comments in 2015 (see ‘Comment counts’). The company has grown from 26,000 registered users to 215,000 over the same period.
  5. Jan 2020
    1. "Apple research transferred more stuff into product than any other lab I can think of, including Hewlett-Packard and IBM," the source said, but Jobs wasn't aware enough of the role ARL played in developing current Apple technology before deciding to cut the group's funding, he noted.
  6. Dec 2019
    1. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy.[79] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS).[80] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
    1. Imagine that every car maker save for Toyota insisted on using the infamous East German Trabant as a standard of quality - yet blindly imitated random elements of Toyota's visual design.  How long would it take for the whiners to appear on the scene and start making noises about monopolistic tyranny?  How long would it take for Toyota to start living up to these accusations in earnest?  And why should it not do so?  What is to be gained from corporate sainthood?  From a refusal to fleece eagerly willing suckers for all they're worth?  Idle threats of defection by outraged iPhone developers [4] are laughable nonsense simply because - in the two categories listed - Apple has no competition. Every commercial product which competes directly with an Apple product (particularly the iPhone) gives me (and many others) the distinct impression that "where it is original, it is not good, and where it is good, it is not original."
    1. He then showed you how he could make a few strokes on the keyset to designate the type of link he wanted established, and pick the two symbol structures that were to be linked by means of the light pen. He said that most links possessed a direction, i.e., they were like an arrow pointing from one substructure to another, so that in setting up a link he must specify the two substructures in a given order.
    2. "Most of the structuring forms I'll show you stem from the simple capability of being able to establish arbitrary linkages between different substructures, and of directing the computer subsequently to display a set of linked substructures with any relative positioning we might designate among the different substructures. You can designate as many different kinds of links as you wish, so that you can specify different display or manipulative treatment for the different types."
    3. "You usually think of an argument as a serial sequence of steps of reason, beginning with known facts, assumptions, etc., and progressing toward a conclusion. Well, we do have to think through these steps serially, and we usually do list the steps serially when we write them out because that is pretty much the way our papers and books have to present them—they are pretty limiting in the symbol structuring they enable us to use. Have you even seen a 'scrambled-text' programmed instruction book? That is an interesting example of a deviation from straight serial presentation of steps.3b6b "Conceptually speaking, however, an argument is not a serial affair. It is sequential, I grant you, because some statements have to follow others, but this doesn't imply that its nature is necessarily serial. We usually string Statement B after Statement A, with Statements C, D, E, F, and so on following in that order—this is a serial structuring of our symbols. Perhaps each statement logically followed from all those which preceded it on the serial list, and if so, then the conceptual structuring would also be serial in nature, and it would be nicely matched for us by the symbol structuring.3b6c "But a more typical case might find A to be an independent statement, B dependent upon A, C and D independent, E depending upon D and B, E dependent upon C, and F dependent upon A, D, and E. See, sequential but not serial? A conceptual network but not a conceptual chain. The old paper and pencil methods of manipulating symbols just weren't very adaptable to making and using symbol structures to match the ways we make and use conceptual structures. With the new symbol-manipulating methods here, we have terrific flexibility for matching the two, and boy, it really pays off in the way you can tie into your work.3b6d This makes you recall dimly the generalizations you had heard previously about process structuring limiting symbol structuring, symbol structuring limiting concept structuring, and concept structuring limiting mental structuring.
    4. Suppose that one wants to link Card B to Card A, to make a trail from A to B.

      we should also be able to go from B to A

    5. One need arose quite commonly as trains of thought would develop on a growing series of note cards. There was no convenient way to link these cards together so that the train of thought could later be recalled by extracting the ordered series of notecards. An associative-trail scheme similar to that out lined by Bush for his Memex could conceivably be implemented with these cards to meet this need and add a valuable new symbol-structuring process to the system.
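
      A minimal sketch of what such an associative-trail scheme over note cards might look like; the class and method names are invented for illustration. Storing each link once but indexing it in both directions also covers the note above: a trail laid from A to B can be followed back from B to A.

      ```python
      from collections import defaultdict

      class CardTrails:
          """Note cards with directed associative links, indexed in both directions."""

          def __init__(self):
              self.cards = {}                       # card id -> text
              self.links_from = defaultdict(list)   # A -> [B, ...] (forward trail)
              self.links_to = defaultdict(list)     # B -> [A, ...] (reverse lookup)

          def add_card(self, card_id, text):
              self.cards[card_id] = text

          def link(self, src, dst):
              """Lay a trail from src to dst (direction matters, as in Bush and Engelbart)."""
              self.links_from[src].append(dst)
              self.links_to[dst].append(src)

          def trail(self, start):
              """Extract an ordered series of cards by following first links forward.
              (No cycle check; fine for a sketch.)"""
              path, current = [start], start
              while self.links_from[current]:
                  current = self.links_from[current][0]
                  path.append(current)
              return path

      notes = CardTrails()
      for cid in ("A", "B", "C"):
          notes.add_card(cid, "card " + cid)
      notes.link("A", "B")
      notes.link("B", "C")
      print(notes.trail("A"))      # ['A', 'B', 'C']
      print(notes.links_to["B"])   # ['A']  -- going from B back to A
      ```
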
    6. Note, too, the implications extending from Bush's mention of one user duplicating a trail (a portion of his structure) and giving it to a friend who can put it into his Memex and integrate it into his own trail (structure).
    7. An example of this general sort of thing was given by Bush where he points out that the file index can be called to view at the push of a button, which implicitly provides greater capability to work within more sophisticated and complex indexing systems
    8. The associative trails whose establishment and use within the files he describes at some length provide a beautiful example of a new capability in symbol structuring that derives from new artifact-process capability, and that provides new ways to develop and portray concept structures. Any file is a symbol structure whose purpose is to represent a variety of concepts and concept structures in a way that makes them maximally available and useful to the needs of the human's mental-structure development—within the limits imposed by the capability of the artifacts and human for jointly executing processes of symbol-structure manipulation.
    9. As we are currently using it, the term includes the organization, study, modification, and execution of processes and process structures. Whereas concept structuring and symbol structuring together represent the language component of our augmentation means, process structuring represents the methodology component (plus a little more, actually). There has been enough previous discussion of process structures that we need not describe the notion here, beyond perhaps an example or two. The individual processes (or actions) of my hands and fingers have to be cooperatively organized if the typewriter is to do my bidding. My successive actions throughout my working day are meant to cooperate toward a certain over-all professional goal.
    10. With a computer manipulating our symbols and generating their portrayals to us on a display, we no longer need think of our looking at the symbol structure which is stored—as we think of looking at the symbol structures stored in notebooks, memos, and books. What the computer actually stores need be none of our concern, assuming that it can portray symbol structures to us that are consistent with the form in which we think our information is structured.

      Separation of model and view

    11. view generation
    12. But another kind of view might be obtained by extracting and ordering all statements in the local text that bear upon consideration A of the argument—or by replacing all occurrences of specified esoteric words by one's own definitions.
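
      A small sketch of the "separation of model and view" reading of these passages: one stored symbol structure, several portrayals generated from it on demand. The statement records and the topic tag used to select them are invented for illustration.

      ```python
      # The stored model: statements with explicit structure (ids, topics).
      statements = [
          {"id": "A", "text": "Define the problem.", "topics": {"scope"}},
          {"id": "B", "text": "List known constraints.", "topics": {"scope", "constraints"}},
          {"id": "C", "text": "Propose a design.", "topics": {"design"}},
      ]

      # View 1: the familiar serial listing of every statement.
      def serial_view(stmts):
          return [s["id"] + ". " + s["text"] for s in stmts]

      # View 2: only the statements bearing on one consideration.
      def topic_view(stmts, topic):
          return [s["text"] for s in stmts if topic in s["topics"]]

      print(serial_view(statements))
      print(topic_view(statements, "scope"))
      ```
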
    13. A natural language provides its user with a ready-made structure of concepts that establishes a basic mental structure, and that allows relatively flexible, general-purpose concept structuring. Our concept of language as one of the basic means for augmenting the human intellect embraces all of the concept structuring which the human may make use of.
    14. Before we pursue further direct discussion of the H-LAM/T system, let us examine some background material. Consider the following historical progression in the development of our intellectual capabilities:2c4a 2c4b (1) Concept Manipulation—Humans rose above the lower forms of life by evolving the biological capability for developing abstractions and concepts. They could manipulate these concepts within their minds to a certain extent, and think about situations in the abstract. Their mental capabilities allowed them to develop general concepts from specific instances, predict specific instances from general concepts, associate concepts, remember them, etc. We speak here of concepts in their raw, unverbalized form. For example, a person letting a door swing shut behind him suddenly visualizes the person who follows him carrying a cup of hot coffee and some sticky pastries. Of all the aspects of the pending event, the spilling of the coffee and the squashing of the pastry somehow are abstracted immediately, and associated with a concept of personal responsibility and a dislike for these consequences. But a solution comes to mind immediately as an image of a quick stop and an arm stab back toward the door, with motion and timing that could prevent the collision, and the solution is accepted and enacted. With only non-symbolic concept manipulation, we could probably build primitive shelter, evolve strategies of war and hunt, play games, and make practical jokes. But further powers of intellectual effectiveness are implicit in this stage of biological evolution (the same stage we are in today).2c4b1 (2) Symbol Manipulation—Humans made another great step forward when they learned to represent particular concepts in their minds with specific symbols. Here we temporarily disregard communicative speech and writing, and consider only the direct value to the individual of being able to do his heavy thinking by mentally manipulating symbols instead of the more unwieldly concepts which they represent. Consider, for instance, the mental difficulty involved in herding twenty-seven sheep if, instead of remembering one cardinal number and occasionally counting, we had to remember what each sheep looked like, so that if the flock seemed too small we could visualize each one and check whether or not it was there.2c4b2 (3) Manual, External, Symbol Manipulation—Another significant step toward harnessing the biologically evolved mental capabilities in pursuit of comprehension and problem solutions came with the development of the means for externalizing some of the symbol-manipulation activity, particularly in graphical representation. This supplemented the individual's memory and ability to visualize. (We are not concerned here with the value derived from human cooperation made possible by speech and writing, both forms of external symbol manipulation. We speak of the manual means of making graphical representations of symbols—a stick and sand, pencil and paper and eraser, straight edge or compass, and so on.) It is principally this kind of means for external symbol manipulation that has been associated with the evolution of the individual's present way of doing his concept manipulation (thinking).
    15. It has been jokingly suggested several times during the course of this study that what we are seeking is an "intelligence amplifier." (The term is attributed originally to W. Ross Ashby[2,3]. At first this term was rejected on the grounds that in our view one's only hope was to make a better match between existing human intelligence and the problems to be tackled, rather than in making man more intelligent. But deriving the concepts brought out in the preceding section has shown us that indeed this term does seem applicable to our objective. 2c2a Accepting the term "intelligence amplification" does not imply any attempt to increase native human intelligence. The term "intelligence amplification" seems applicable to our goal of augmenting the human intellect in that the entity to be produced will exhibit more of what can be called intelligence than an unaided human could; we will have amplified the intelligence of the human by organizing his intellectual capabilities into higher levels of synergistic structuring. What possesses the amplified intelligence is the resulting H-LAM/T system, in which the LAM/T augmentation means represent the amplifier of the human's intelligence.2c2b In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries.
    1. His answer is that our creative minds are being strengthened rather than atrophied by the ability to interact easily with the Web and Wikipedia. “Not only has transactive memory not hurt us,” he writes, “it’s allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.”

      This is where I disagree with Thompson. The potential for IA is there, but we have retrogressed with the advent of the web.

    2. Socrates and his prediction that writing would destroy the Greek tradition of dialectic. Socrates’ primary concern was that people would write things down instead of remembering them. “This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories,” Plato quotes him as saying. “They will trust to the external written characters and not remember of themselves.”

      The dialectic process is important, particularly in the context of human-to-computer communication and synthesis. Here Socrates articulates the importance of memory to this process and how writing undermines it. If there is an asymmetry between the mind of the writer and that of the reader, the written work provides a method of diffusing information from one mind to the other. This balance of minds holds for human-to-computer interaction as well. We need to expand our memory capacity if we are to expand the reasoning capacity of computers. But instead we are using computers as a substitute for our memories. We neglect memory, so we can't reason; humans and computers alike.

    3. This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay “As We May Think,” which conjured up a “memex” machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled “Man-Computer Symbiosis,” and the computer designer Douglas Engelbart, who wrote “Augmenting Human Intellect.” They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop.

      Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights into the creation of AI.

    4. Thompson’s point is that “artificial intelligence” — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as “intelligence amplification,” the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers.

      Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.

    5. Like a centaur, the hybrid would have the strength of each of its components: the processing power of a large logic circuit and the intuition of a human brain’s wetware. The result: human-machine teams, even when they didn’t include the best grandmasters or most powerful computers, consistently beat teams composed solely of human grandmasters or superfast machines.

      This is what is most needed: the spark of intuition coupled with the indefatigable pursuit of its implications. We handle the former and computers the latter.

    1. During 1995, a decision was made to (officially) start licensing the Mac OS and Macintosh ROMs to 3rd party manufacturers who started producing Macintosh "clones". This was done in order to achieve deeper market penetration and extra revenue for the company. This decision led to Apple having over a 10% market share until 1997 when Steve Jobs was re-hired as interim CEO to replace Gil Amelio. Jobs promptly found a loophole in the licensing contracts Apple had with the clone manufacturers and terminated the Macintosh OS licensing program, ending the Macintosh clone era. The result of this action was that Macintosh computer market share quickly fell from 10% to around 3%.
  7. Nov 2019
    1. In languages, as in so many things, there's not much correlation between popularity and quality. Why does John Grisham (King of Torts sales rank, 44) outsell Jane Austen (Pride and Prejudice sales rank, 6191)? Would even Grisham claim that it's because he's a better writer?
    1. Which makes them exactly the kind of programmers companies should want to hire. Hence what, for lack of a better name, I'll call the Python paradox: if a company chooses to write its software in a comparatively esoteric language, they'll be able to hire better programmers, because they'll attract only those who cared enough to learn it. And for programmers the paradox is even more pronounced: the language to learn, if you want to get a good job, is a language that people don't learn merely to get a job.
    1. It would be great if more Americans were trained as programmers, but no amount of training can flip a ratio as overwhelming as 95 to 5. Especially since programmers are being trained in other countries too. Barring some cataclysm, it will always be true that most great programmers are born outside the US. It will always be true that most people who are great at anything are born outside the US.

      No amount of training in the current development paradigm can flip this ratio, but if we were to make dev tools simpler and more ubiquitous, then it just might.

    1. In his Discourse on the Origins of Inequality, Rousseau, anticipating the language of Darwin, states that as the animal-like human species increased there arose a "formidable struggle for existence" between it and other species for food.[34] It was then, under the pressure of necessity, that le caractère spécifique de l'espèce humaine—the specific quality that distinguished man from the beasts—emerged—intelligence, a power, meager at first but yet capable of an "almost unlimited development". Rousseau calls this power the faculté de se perfectionner—perfectibility.[35] Man invented tools, discovered fire, and in short, began to emerge from the state of nature. Yet at this stage, men also began to compare himself to others: "It is easy to see. ... that all our labors are directed upon two objects only, namely, for oneself, the commodities of life, and consideration on the part of others."
    1. This brings me to the crucial issue. Unlike the position that exists in the physical sciences, in economics and other disciplines that deal with essentially complex phenomena, the aspects of the events to be accounted for about which we can get quantitative data are necessarily limited and may not include the important ones. While in the physical sciences it is generally assumed, probably with good reason, that any important factor which determines the observed events will itself be directly observable and measurable, in the study of such complex phenomena as the market, which depend on the actions of many individuals, all the circumstances which will determine the outcome of a process, for reasons which I shall explain later, will hardly ever be fully known or measurable. And while in the physical sciences the investigator will be able to measure what, on the basis of a prima facie theory, he thinks important, in the social sciences often that is treated as important which happens to be accessible to measurement. This is sometimes carried to the point where it is demanded that our theories must be formulated in such terms that they refer only to measurable magnitudes.
    2. The particular occasion of this lecture, combined with the chief practical problem which economists have to face today, have made the choice of its topic almost inevitable. On the one hand the still recent establishment of the Nobel Memorial Prize in Economic Science marks a significant step in the process by which, in the opinion of the general public, economics has been conceded some of the dignity and prestige of the physical sciences. On the other hand, the economists are at this moment called upon to say how to extricate the free world from the serious threat of accelerating inflation which, it must be admitted, has been brought about by policies which the majority of economists recommended and even urged governments to pursue. We have indeed at the moment little cause for pride: as a profession we have made a mess of things.
    3. It seems to me that this failure of the economists to guide policy more successfully is closely connected with their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences – an attempt which in our field may lead to outright error. It is an approach which has come to be described as the “scientistic” attitude – an attitude which, as I defined it some thirty years ago, “is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.”1
    1. Early in the life of the Audius network, the Audius DAO will control governance. During this bootstrapping phase, the Audius DAO will also have the ability to intervene in catastrophic circumstances to fix critical issues in the Audius blockchain code, such as issues enabling fraud or resulting in unintended loss of Audius or Loud tokens.
    2. There will be two groups created at the time of main network launch: Audius DAO (Decentralized Autonomous Organization) and Artist Advisory DAO.
    3. To make governance more accessible to users, voting can be delegated by anyone to other users or groups of users, such that if a user places no vote on a specific proposal, their designated delegate's vote will be used in place of their own.
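
      A sketch of how that delegation fallback could be resolved at tally time; the data layout and function are assumptions for illustration, not the Audius implementation.

      ```python
      from typing import Optional

      def effective_vote(user: str, votes: dict, delegates: dict) -> Optional[str]:
          """Return the user's own vote if present, otherwise their delegate's vote."""
          if user in votes:
              return votes[user]
          delegate = delegates.get(user)
          if delegate is not None:
              return votes.get(delegate)
          return None

      votes = {"alice": "yes"}        # alice voted directly
      delegates = {"bob": "alice"}    # bob delegated to alice and did not vote
      print(effective_vote("alice", votes, delegates))  # yes
      print(effective_vote("bob", votes, delegates))    # yes (falls back to alice's vote)
      print(effective_vote("carol", votes, delegates))  # None (no vote, no delegate)
      ```
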
    4. These user classes are not mutually exclusive. Therefore, if a user has earnings and/or holdings that fall into multiple classes, their vote can be counted in multiple classes.
    5. To submit a proposal, a user must bond a set number of Audius tokens (denoted B_GP) in the governance system, which remain bonded for the duration of their proposal. Before a proposal's effective date, the original submitter can also choose to withdraw the proposal, returning their bonded tokens. This bond is required as an anti-spam measure and to ensure that proposers have a sufficient stake in the Audius protocol to make changes to it. At the proposal's resolution (successful, failed, or withdrawn), the bond is returned to the proposal submitter.
    6. Proposals also include a block count at which point they go into effect; this effectiveness date must be at least 1 week in the future at time of proposal submission to give users ample time to review and vote on the proposal.
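
      A toy sketch of the proposal lifecycle described in these two passages (bond locked on submission, returned at resolution or withdrawal, effective block at least a week out); the constants and class are illustrative assumptions, not Audius code.

      ```python
      BLOCKS_PER_WEEK = 7 * 24 * 60 * 60 // 15   # assuming ~15-second blocks
      PROPOSAL_BOND = 100                        # stand-in for the governance bond amount

      class Proposal:
          def __init__(self, submitter, effective_block, current_block):
              if effective_block < current_block + BLOCKS_PER_WEEK:
                  raise ValueError("effective block must be at least one week out")
              self.submitter = submitter
              self.effective_block = effective_block
              self.bond = PROPOSAL_BOND          # locked for the proposal's duration
              self.status = "open"

          def resolve(self, outcome):
              """Resolve as successful, failed, or withdrawn; the bond is returned either way."""
              assert outcome in ("successful", "failed", "withdrawn")
              self.status = outcome
              returned, self.bond = self.bond, 0
              return returned                    # goes back to the submitter

      p = Proposal("alice", effective_block=100_000 + BLOCKS_PER_WEEK, current_block=100_000)
      print(p.resolve("successful"))             # 100 tokens returned
      ```
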
    7. Participation in governance creates value in Audius,and should be rewarded

      Voting should not be rewarded. Apathy should be penalized.

    8. A copy of these guidelines will be included in a contract on the network, and updates to these guidelines flow through the Audius governance protocol. A full fee and bond schedule for arbitration will be published closer to the time of the Audius main network launch, and these fees and bonds can be modified in the Audius governance protocol.

      should be done already...

    9. On a recurring basis, subscription listens would be tallied and payouts would be made to artists by a transparent, auditable subscription system running on the Audius blockchain.

      What are the mechanisms of the system?

    1. In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[167] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[168] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[169] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[170] There were many other explanations and for each there was a corresponding research program underway.
    2. The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.
    3. Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts
    4. The neats: logic and symbolic reasoning. Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[100] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[101] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[102] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[103] Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[104] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do.[105] The scruffies: frames and scripts. Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[106] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[107] In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[108] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
    1. In 1988 Apple sued Microsoft and Hewlett-Packard on the grounds that they infringed Apple's copyrighted GUI, citing (among other things) the use of rectangular, overlapping, and resizable windows. After four years, the case was decided against Apple, as were later appeals. Apple's actions were criticized by some in the software community, including the Free Software Foundation (FSF), who felt Apple was trying to monopolize on GUIs in general, and boycotted GNU software for the Macintosh platform for seven years.
    1. Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho,[7] which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and were hired mostly by Xerox. So, Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed their own Lisp machines which were designed to run InterLisp (and later Common Lisp). The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system.
    2. In 1979, Russell Noftsker, being convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology.[citation needed] In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers. The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.
  8. Oct 2019
    1. We see a number of specific challenges faced by creators and listeners today: 1. There is little to no transparency around the origins of creator payouts (e.g. number of plays, location, original gross payment before fees) 2. Incomplete rights ownership data often prevents content creators from getting paid; instead, earnings accumulate in digital service providers (DSPs) and rights societies 3. There are layers of middlemen and significant time delay involved in payments to creators 4. Publishing rights are complicated and opaque, with no incentives for the industry to make rights data public and accurate 5. Remixes, covers, and other derivative content are largely censored due to rights management issues 6. Licensing issues prevent DSPs and content from being accessible worldwide
    1. I do not see him in this light. I do not think that any one who has pored over the contents of that box which he packed up when he finally left Cambridge in 1696 and which, though partly dispersed, have come down to us, can see him like that. Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago. Isaac Newton, a posthumous child born with no father on Christmas Day, 1642, was the last wonderchild to whom the Magi could do sincere and appropriate homage.
  9. Sep 2019
    1. One widely circulated report this summer—which appears to have caught Mr. Trump’s attention—estimates that China shed five million industrial jobs, 1.9 million of them directly because of U.S. tariffs, between the beginning of the trade conflict and the end of May this year.
    2. That isn’t insubstantial. But it is still small compared with China’s urban labor force of 570 million. It also represents a slower pace than the 23 million manufacturing jobs shed in China between 2015 and 2017, according to the report, published by China International Capital Corp., an investment bank with Chinese state ownership.
    1. The Fed offered $30 billion of reserves maturing Oct. 8, receiving $62 billion in bids from banks offering collateral in the form of Treasury and mortgage securities. Banks bid for $32 billion more than the amount offered by the Fed. In a second offering, the Fed added $75 billion in overnight reserves, with banks bidding for $80.2 billion, or $5.2 billion more than was available.
    1. A prolonged walkout can quickly take a financial toll on car companies because they book revenue only when a vehicle is shipped to a dealership. An assembly-plant shutdown can cost an auto maker an estimated $1.3 million every hour, according to the Center for Automotive Research in Ann Arbor, Mich.
    2. The GM strike would surpass in size the work stoppage by more than 30,000 employees at Stop & Shop groceries in New England earlier this year. But it would be far smaller than one involving 73,000 GM workers in 2007, when the company’s workforce was much larger.
    1. But sustained Saudi outage of several million daily barrels would rattle markets, because of the lack of other players big enough to step in and provide enough supply to cover the shortfall longer term. Even if Saudi officials were successful in restoring all or most of the lost production, the attack demonstrates a new vulnerability to supply lines across the oil-rich Gulf. Tankers have been paying sharply higher insurance premiums, while shipping rates have soared in the region after a series of maritime attacks on oil-laden vessels, which the U.S. has blamed on Iran.
    1. Reflecting those divisions, officials decided not to enlarge significantly the pool of assets the bank can buy—though it did expand the kinds of corporate and mortgage bonds it can purchase. Without changing rules that prohibit the bank from buying more than a third of any government’s debt, Mr. Ducrozet estimated that the ECB can continue its bond purchases for only 9-12 months.
    1. The Executive [Lincoln] is frequently compelled to affix his signature to bills of the highest importance, much of which he regards as wholly at war with the national interests.
  10. Aug 2019
    1. The USATF Legend Coach Award is in its sixth year and is selected by the USATF Coaches Advisory Committee. The inaugural award was presented to Hall of Fame Tigerbelle Coach Ed Temple in 2014, followed by Dr. Joe Vigil in 2015, Tom Tellez in 2016, Clyde Hart in 2017 and Brooks Johnson last year.
    1. "But in moving towards flat design we are losing much of the wisdom that was embedded in the old 3D style of UI, for example: a user must be able to glance at a screen and know what is an interactive element (e.g., a button or link) and what is not (e.g., a label or motto); a user must be able to tell at a glance what an interactive element does (does it initiate a process, link to another page, download a document, etc.?); the UI should be explorable, discoverable and self-explanatory. But many apps and websites, in the interest of a clean, spartan visual appearance, leave important UI controls hidden until the mouse hovers over just the right area or the app is in just the right state. This leaves the user in the dark, often frustrated and disempowered."
    1. “Democrats think it's not progressive enough because it doesn’t put extra burdens on higher-income people, like an income tax does,” Hines says. “And Republicans worry that it's too easy for the government to raise money with one.”
    1. No available HMDs support VirtualLink at this writing, nor are we aware of any, but it's something to keep in mind if you're waffling between a GeForce RTX card and a last-generation GeForce GTX or a Radeon card for VR. Nothing is certain, but it's possible a future headset may debut with this as the optional or mandatory interface.
    1. When the resort refinanced its debt in 2017 in a $469 million deal, bankers picked DBRS as one of two firms to rate the debt. DBRS had just loosened its standards for such “single-asset” commercial-mortgage deals. DBRS issued grades as much as three rungs higher on comparable slices rated by Morningstar in 2014.
    2. Investor reliance on credit ratings has gone from “high to higher,” says Swedish economist Bo Becker, who co-wrote a study finding that in the $4.4 trillion U.S. bond-mutual-fund industry, 94% of rules governing investments made direct or indirect references to ratings in 2017, versus 90% in 2010.
    1. Negative rates in theory mean the German government can borrow money from investors and get paid for doing so. But Berlin runs a budget surplus and has no desire to increase spending as other slower-growing European countries would like it to do. Olaf Scholz, Germany’s finance minister, has said recently the government doesn’t need to act as if it is in a crisis
    1. At the same time, US companies are deleveraging, which has shrunk the supply of new corporate debt, leading to a dearth of investment-grade issuance. Net supply from municipal borrowers, another vital source of new issuance, has also turned negative so there is not enough available for pension funds and insurers to buy.
    2. “Pension funds can’t match their liabilities with where rates are today so they have to hope that equity markets will continue to rally,” he says.
  11. Jul 2019
    1. The Apple of Steve Jobs needed HyperCard-like products like the Monsanto Company needs a $100 home genetic-engineering set.
    2. The Lisp Machine (which could just as easily have been, say, a Smalltalk machine) was a computing environment with a coherent, logical design, where the “turtles go all the way down.” An environment which enabled stopping, examining the state of, editing, and resuming a running program, including the kernel. An environment which could actually be fully understood by an experienced developer.  One where nearly all source code was not only available but usefully so, at all times, in real time.  An environment to which we owe so many of the innovations we take for granted. It is easy for us now to say that such power could not have existed, or is unnecessary. Yet our favorite digital toys (and who knows what other artifacts of civilization) only exist because it was once possible to buy a computer designed specifically for exploring complex ideas.  Certainly no such beast exists today – but that is not what saddens me most.  Rather, it is the fact that so few are aware that anything has been lost.
    3. The reason for this is that HyperCard is an echo of a different world. One where the distinction between the “use” and “programming” of a computer has been weakened and awaits near-total erasure.  A world where the personal computer is a mind-amplifier, and not merely an expensive video telephone.  A world in which Apple’s walled garden aesthetic has no place. What you may not know is that Steve Jobs killed far greater things than HyperCard.  He was almost certainly behind the death of SK8. And the Lisp Machine version of the Newton. And we may never learn what else. And Mr. Jobs had a perfectly logical reason to prune the Apple tree thus. He returned the company to its original vision: the personal computer as a consumer appliance, a black box enforcing a very traditional relationship between the vendor and the purchaser. Jobs supposedly claimed that he intended his personal computer to be a “bicycle for the mind.” But what he really sold us was a (fairly comfortable) train for the mind. A train which goes only where rails have been laid down, like any train, and can travel elsewhere only after rivers of sweat pour forth from armies of laborers. (Preferably in Cupertino.) The Apple of Steve Jobs needed HyperCard-like products like the Monsanto Company needs a $100 home genetic-engineering set. The Apple of today, lacking Steve Jobs — probably needs a stake through the heart.
    1. Kahle has been critical of Google's book digitization, especially of Google's exclusivity in restricting other search engines' digital access to the books they archive. In a 2011 talk Kahle described Google's 'snippet' feature as a means of tip-toeing around copyright issues, and expressed his frustration with the lack of a decent loaning system for digital materials. He said the digital transition has moved from local control to central control, non-profit to for-profit, diverse to homogeneous, and from "ruled by law" to "ruled by contract". Kahle stated that even public-domain material published before 1923, and not bound by copyright law, is still bound by Google's contracts and requires permission to be distributed or copied. Kahle reasoned that this trend has emerged for a number of reasons: distribution of information favoring centralization, the economic cost of digitizing books, the issue of library staff without the technical knowledge to build these services, and the decision of the administrators to outsource information services
    1. It is this combination of features that also makes HyperCard a powerful hypermedia system. Users can build backgrounds to suit the needs of some system, say a rolodex, and use simple HyperTalk commands to provide buttons to move from place to place within the stack, or provide the same navigation system within the data elements of the UI, like text fields. Using these features, it is easy to build linked systems similar to hypertext links on the Web.[5] Unlike the Web, programming, placement, and browsing were all the same tool. Similar systems have been created for HTML but traditional Web services are considerably more heavyweight.
    1. Such are great historical men—whose own particular aims involve those large issues which are the will of the World-Spirit.
    1. One way to look at this is that when a new powerful medium of expression comes along that was not enough in our genes to be part of traditional cultures, it is something we need to learn how to get fluent with and use. Without the special learning, the new media will be mostly used to automate the old forms of thought. This will also have effects, especially if the new media is more efficient at what the old did: this can result in gluts, that act like legal drugs (as indeed are the industrial revolution’s ability to create sugar and fat, it can also overproduce stories, news, status, and new ways for oral discourse.
    2. To understand what has happened, we only need to look at the history of writing and printing to note two very different consequences (a) the first, a vast change over the last 450 years in how the physical and social worlds are dealt with via the inventions of modern science and governance, and (b) that most people who read at all still mostly read fiction, self-help and religion books, and cookbooks, etc.* (all topics that would be familiar to any cave-person).
    1. A practical example of service design thinking can be found at the Myyrmanni shopping mall in Vantaa, Finland. The management attempted to improve the customer flow to the second floor as there were queues at the landscape lifts and the KONE steel car lifts were ignored. To improve customer flow to the second floor of the mall (2010) Kone Lifts implemented their 'People Flow' Service Design Thinking by turning the Elevators into a Hall of Fame for the 'Incredibles' comic strip characters. Making their Elevators more attractive to the public solved the people flow problem. This case of service design thinking by Kone Elevator Company is used in literature as an example of extending products into services.
    1. In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s.

      It took half a century for these disciplines to discern their complementarity!

    1. I do not actually know of a real findability index, but tools in the field of information retrieval could be applied to develop one. One of the unsolved problems in the field is how to help the searcher to determine if the information simply is not available.
    2. Although some have written about information overload, data smog, and the like, my view has always been the more information online, the better, so long as good search tools are available. Sometimes this information is found by directed search using a web search engine, sometimes by serendipty by following links, and sometimes by asking hundreds of people in our social network or hundreds of thousands of people on a question answering website such as Answers.com, Quora, or Yahoo Answer
    1. Unfortunately, misguided views about usability still cause significant damage in today's world. In the 2000 U.S. elections, poor ballot design led thousands of voters in Palm Beach, Florida to vote for the wrong candidate, thus turning the tide of the entire presidential election. At the time, some observers made the ignorant claim that voters who could not understand the Palm Beach butterfly ballot were not bright enough to vote. I wonder if people who made such claims have never made the frustrating "mistake" of trying to pull open a door that requires pushing. Usability experts see this kind of problem as an error in the design of the door, rather than a problem with the person trying to leave the room.
    2. The web, in yet another example of its leveling effect, allows nearly everyone to see nearly every interface. Thus designers can learn rapidly from what others have done, and users can see if one web site's experience is substandard compared to others.
    1. At the start of the 1970s, The New Communes author Ron E. Roberts classified communes as a subclass of a larger category of Utopias.[5] He listed three main characteristics. Communes of this period tended to develop their own characteristics of theory though, so while many strived for variously expressed forms of egalitarianism, Roberts' list should never be read as typical. Roberts' three listed items were: first, egalitarianism – that communes specifically rejected hierarchy or graduations of social status as being necessary to social order. Second, human scale – that members of some communes saw the scale of society as it was then organized as being too industrialized (or factory sized) and therefore unsympathetic to human dimensions. And third, that communes were consciously anti-bureaucratic.
    1. Another prominent conclusion is that joint asset ownership is suboptimal if investments are in human capital.

      Does that have to be the case?

    1. Other examples of complex adaptive systems are: stock markets: Many traders make decisions on the information known to them and their individual expectations about future movements of the market. They may start selling when they see the prices are going down (because other traders are selling). Such herding behavior can lead to high volatility on stock markets. immune systems: Immune systems consist of various mechanisms, including a large population of lymphocytes that detect and destroy pathogens and other intruders in the body. The immune system needs to be able to detect new pathogens for the host to survive and therefore needs to be able to adapt. brains: The neural system in the brain consists of many neurons that are exchanging information. The interactions of many neurons make it possible for me to write this sentence and ponder the meaning of life. ecosystems: Ecosystems consist of many species that interact by eating other species, distributing nutrients, and pollinating plants. Ecosystems can be seen as complex food webs that are able to cope with changes in the number of certain species, and adapt – to a certain extent – to changes in climate. human societies: When you buy this new iPhone that is manufactured in China, with materials derived from African soils, and with software developed by programmers from India, you need to realize that those actions are made by autonomous organizations, firms and individuals. These many individual actions are guided by rules and agreements we have developed, but there is no ruler who can control these interactions.
    2. Path Formation: Paved paths are not always the most desirable routes going from point A to point B. This may lead pedestrians to take short-cuts. Initially pedestrians walk over green grass. Subsequent people tend to use the stamped grass path instead of the pristine grass, and after many pedestrians an unpaved path is formed without any top-down design.
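
      A reduced-form sketch of the herding mechanism from the stock-market example above; every number is an assumption for illustration, not from the source. Each period's aggregate return is a common news shock plus a feedback term from traders imitating last period's net buying or selling, so stronger imitation turns the same news flow into larger swings.

```python
# Herding as positive feedback in aggregate returns (assumed toy model):
# r_t = news_t + feedback * r_{t-1}, where `feedback` is the share of demand
# that simply imitates the previous period's net order flow.
import numpy as np

def return_volatility(feedback, n_steps=100_000, news_vol=0.01, seed=0):
    rng = np.random.default_rng(seed)
    news = rng.normal(0.0, news_vol, n_steps)   # exogenous information shocks
    r = np.zeros(n_steps)
    for t in range(1, n_steps):
        r[t] = news[t] + feedback * r[t - 1]    # imitation amplifies the shock
    return r.std()

for feedback in (0.0, 0.5, 0.9):
    print(f"herding feedback {feedback:.1f} -> return volatility "
          f"{return_volatility(feedback):.4f}")
# With no herding, volatility equals the news volatility (~0.010); at a
# feedback of 0.9 the same news produces swings more than twice as large.
```
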
  12. Jun 2019
    1. However, indexes in the modern sense, giving exact locations of names and subjects in a book, were not compiled in antiquity, and only very few seem to have been made before the age of printing. There are several reasons for this. First, as long as books were written in the form of scrolls, there were neither page nor leaf numbers nor line counts (as we have them now for classical texts). Also, even had there been such numerical indicators, it would have been impractical to append an index giving exact references, because in order for a reader to consult the index, the scroll would have to be unrolled to the very end and then rolled back to the relevant page. (Whoever has had to read a book available only on microfilm, the modern successor of the papyrus scroll, will have experienced how difficult and inconvenient it is to go from the index to the text.) Second, even though popular works were written in many copies (sometimes up to several hundreds), no two of them would be exactly the same, so that an index could at best have been made to chapters or paragraphs, but not to exact pages. Yet such a division of texts was rarely done (the one we have now for classical texts is mostly the work of medieval and Renaissance scholars). Only the invention of printing around 1450 made it possible to produce identical copies of books in large numbers, so that soon afterwards the first indexes began to be compiled, especially those to books of reference, such as herbals. (pages 164-166)

       Index entries were not always alphabetized by considering every letter in a word from beginning to end, as people are wont to do today. Most early indexes were arranged only by the first letter of the first word, the rest being left in no particular order at all. Gradually, alphabetization advanced to an arrangement by the first syllable, that is, the first two or three letters, the rest of an entry still being left unordered. Only very few indexes compiled in the 16th and early 17th centuries had fully alphabetized entries, but by the 18th century full alphabetization became the rule... (p. 136)

       (For more information on the subject of indexes, please see Professor Wellisch's Indexing from A to Z, which contains an account of an indexer being punished by having his ears lopped off, a history of narrative indexing, an essay on the zen of indexing, and much more.)

       Indexes go way back beyond the 17th century. The Gerardes Herbal from the 1590s had several fascinating indexes according to Hilary Calvert. Barbara Cohen writes that the alphabetical listing in the earliest ones only went as far as the first letter of the entry... no one thought at first to index each entry in either letter-by-letter or word-by-word order. Maja-Lisa writes that Peter Heylyn's 1652 Cosmographie in Four Bookes includes a series of tables at the end. They are alphabetical indexes and he prefaces them with "Short Tables may not seeme proportionalble to so long a Work, expecially in an Age wherein there are so many that pretend to learning, who study more the Index then they do the Book."
    2. Pliny the Elder (died 79 A.D.) wrote a massive work called The Natural History in 37 Books. It was a kind of encyclopedia that comprised information on a wide range of subjects. In order to make it a bit more user friendly, the entire first book of the work is nothing more than a gigantic table of contents in which he lists, book by book, the various subjects discussed. He even appended to each list of items for each book his list of Greek and Roman authors used in compiling the information for that book. He indicates in the very end of his preface to the entire work that this practice was first employed in Latin literature by Valerius Soranus, who lived during the last part of the second century B.C. and the first part of the first century B.C. Pliny's statement that Soranus was the first in Latin literature to do this indicates that it must have already been practiced by Greek writers.
    1. Smil notes that as of 2018, coal, oil, and natural gas still supply 90% of the world's primary energy. Despite decades of growth in renewable energy, the world used more fossil fuels in 2018 than in 2000, even in percentage terms.
    1. Jevons received public recognition for his work on The Coal Question (1865), in which he called attention to the gradual exhaustion of Britain's coal supplies and also put forth the view that increases in energy production efficiency lead to more, not less, consumption.[5]:7f, 161f This view is known today as the Jevons paradox, named after him. Due to this particular work, Jevons is regarded today as the first economist of some standing to develop an 'ecological' perspective on the economy.
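
      A toy arithmetic sketch of the paradox (the efficiency gain and elasticity below are assumptions, not figures from Jevons): when demand for the energy service is sufficiently price-elastic, an efficiency gain lowers the effective price of the service enough that total fuel consumption rises.

```python
# Jevons paradox under constant-elasticity demand (all numbers assumed).
efficiency_gain = 2.0   # service delivered per unit of fuel doubles
elasticity = 1.4        # assumed price elasticity of demand for the service

# The cost per unit of service falls by the efficiency factor, so demand for
# the service scales as efficiency_gain ** elasticity.
service_demand_ratio = efficiency_gain ** elasticity
fuel_use_ratio = service_demand_ratio / efficiency_gain

print(f"service demand multiplier: {service_demand_ratio:.2f}")  # ~2.64x
print(f"fuel consumption multiplier: {fuel_use_ratio:.2f}")      # ~1.32x, i.e. more fuel
# With elasticity below 1 the fuel multiplier drops below 1, so the paradox
# only bites when demand responds strongly to the lower effective price.
```
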
    1. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
    1. volatility and leverage are co-determined and are pro-cyclical; that is, together, they amplify the impact of shocks. The mechanism, to be specific, is that declining volatility reduces the cost of taking on more leverage and furthers a buildup of risk. The lesson: Risk managers must resist the temptation to sell volatility when it is low and falling. The AMH implicitly embraces modeling such behavior with heterogeneous agents that use heuristics.
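
      A rough sketch of the pro-cyclical loop described above, using a simple volatility-targeting rule with made-up numbers: as measured volatility falls, the rule permits more leverage, so an identical shock to the underlying asset produces a larger portfolio loss.

```python
# Volatility targeting as a source of pro-cyclicality (all numbers assumed).
target_vol = 0.10                      # desired portfolio volatility
shock = -0.05                          # same one-off shock to the underlying

for asset_vol in (0.20, 0.10, 0.05):   # calmer markets -> lower measured vol
    leverage = target_vol / asset_vol  # vol-targeting rule raises leverage
    portfolio_loss = leverage * shock
    print(f"measured vol {asset_vol:.2f} -> leverage {leverage:.1f}x -> "
          f"loss on shock {portfolio_loss:.1%}")
# Each halving of measured volatility doubles leverage, so the same shock
# does twice the damage: falling volatility has quietly built up risk.
```
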
    1. Throughout the past two decades, he has been conducting research in the fields of psychology of learning and hybrid neural network (in particular, applying these models to research on human skill acquisition). Specifically, he has worked on the integrated effect of "top-down" and "bottom-up" learning in human skill acquisition,[1][2] in a variety of task domains, for example, navigation tasks,[3] reasoning tasks, and implicit learning tasks.[4] This inclusion of bottom-up learning processes has been revolutionary in cognitive psychology, because most previous models of learning had focused exclusively on top-down learning (whereas human learning clearly happens in both directions). This research has culminated with the development of an integrated cognitive architecture that can be used to provide a qualitative and quantitative explanation of empirical psychological learning data. The model, CLARION, is a hybrid neural network that can be used to simulate problem solving and social interactions as well. More importantly, CLARION was the first psychological model that proposed an explanation for the "bottom-up learning" mechanisms present in human skill acquisition: His numerous papers on the subject have brought attention to this neglected area in cognitive psychology.
    1. Bob Barton [said] "The basic principle of recursive design is to make the parts have the same power as the whole." For the first time I thought of the whole as the entire computer, and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers... Why not thousands of them, each simulating a useful structure?
    1. To keep recession away, the Federal Reserve lowered the Federal funds rate 11 times - from 6.5% in May 2000 to 1.75% in December 2001 - creating a flood of liquidity in the economy. Cheap money, once out of the bottle, always looks to be taken for a ride. It found easy prey in restless bankers—and even more restless borrowers who had no income, no job and no assets. These subprime borrowers wanted to realize their life's dream of acquiring a home. For them, holding the hands of a willing banker was a new ray of hope. More home loans, more home buyers, more appreciation in home prices. It wasn't long before things started to move just as the cheap money wanted them to.
  13. May 2019
    1. Virtually all BPMs have utilities for creating simple, data-gathering forms. And in many types of workflows, these simple forms may be adequate. However, in any workflow that includes complex document assembly (such as loan origination workflows), BPM forms are not likely to get the job done. Automating the assembly of complex documents requires ultra-sophisticated data-gathering forms, which can only be designed and created after the documents themselves have been automated. Put another way, you won't know which questions need to be asked to generate the document(s) until you've merged variables and business logic into the documents themselves. The variables you merge into the document serve as question fields in the data gathering forms. And here's the key point - since you have to use the document assembly platform to create interviews that are sophisticated enough to gather data for your complex documents, you might as well use the document assembly platform to generate all data-gathering forms in all of your workflows.
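
      A minimal sketch of the dependency described above; the template syntax and field names are hypothetical. The merge variables embedded in the automated document are what define the questions the data-gathering form must ask, which is why the form can only be designed after the document itself has been automated.

```python
# Derive data-gathering form questions from a document template (illustrative
# template syntax and field names, not any particular BPM or assembly product).
import re

template = (
    "This loan agreement is made on {{closing_date}} between {{lender_name}} "
    "and {{borrower_name}} for the principal amount of {{loan_amount}}."
)

# The variables merged into the document...
variables = re.findall(r"\{\{(\w+)\}\}", template)

# ...become the question fields of the data-gathering form.
for name in variables:
    print(f"Please provide: {name.replace('_', ' ')}")
```
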
  14. Mar 2019
  15. Feb 2019
    1. In a 2011 Reddit IAmA, Jennings recalled how in 2004 the Democratic politicians Chuck Schumer and Harry Reid unsuccessfully asked Jennings to run for the United States Senate from Utah. Jennings commented, "That was when I realized the Democratic Party was f@#$ed in '04."[19]
  16. Jan 2019
    1. You don't need complex sentences to express complex ideas. When specialists in some abstruse topic talk to one another about ideas in their field, they don't use sentences any more complex than they do when talking about what to have for lunch. They use different words, certainly. But even those they use no more than necessary. And in my experience, the harder the subject, the more informally experts speak. Partly, I think, because they have less to prove, and partly because the harder the ideas you're talking about, the less you can afford to let language get in the way.
    2. It seems to be hard for most people to write in spoken language. So perhaps the best solution is to write your first draft the way you usually would, then afterward look at each sentence and ask "Is this the way I'd say this if I were talking to a friend?" If it isn't, imagine what you would say, and use that instead. After a while this filter will start to operate as you write. When you write something you wouldn't say, you'll hear the clank as it hits the page. Before I publish a new essay, I read it out loud and fix everything that doesn't sound like conversation. I even fix bits that are phonetically awkward; I don't know if that's necessary, but it doesn't cost much.
    3. If you simply manage to write in spoken language, you'll be ahead of 95% of writers. And it's so easy to do: just don't let a sentence through unless it's the way you'd say it to a friend.
  17. Dec 2018
    1. “They’re actively, actively recruiting,” said Cheddar’s Alex Heath. “They’re also trying to scoop up crypto start-ups that are at the white-paper level, which means they don’t really even have a product yet.”
  18. Sep 2018
    1. The selloff partly reflects a broader malaise in emerging markets. U.S. interest rate increases and a stronger dollar have lured cash back to America, often at the expense of developing economies. Some countries have come under additional pressure because of U.S. tariffs or sanctions, while economic turmoil in Turkey and Argentina have further fueled investors’ concerns.
  19. Aug 2018
    1. Bakkt will provide access to a new Bitcoin trading platform on the ICE Futures U.S. exchange. And it will also offer full warehousing services, a business that ICE doesn’t have. “Bakkt’s revenue will come from two sources,” says Loeffler, “the trading fees on the ICE Futures U.S. exchange, and warehouse fees paid by the customers that buy Bitcoin and store with Bakkt.”
    2. Bakkt plans to offer a full package combining a major CFTC-regulated exchange with CFTC-regulated clearing and custody, pending the approval from the commission and other regulators.

      still pending regulatory approval

    3. At a recent meeting with the couple in the plush Bond Room at the NYSE, Sprecher stressed that Loeffler has been a collaborator in charting ICE’s next big move. “Kelly and I brainstormed for five years to find a strategy for digital currencies,” says Sprecher.

      bakkt is 5 years in the making

    4. Cracking the 401(k) and IRA market for cryptocurrency would be a huge win for Bakkt. But the startup’s plans raise the prospect of an even more ambitious goal: Using Bitcoin to streamline and disrupt the world of retail payments by moving consumers from swiping credit cards to scanning their Bitcoin apps. The market opportunity is gigantic: Consumers worldwide are paying lofty credit card or online-shopping fees on $25 trillion a year in annual purchases.

      Allowing money from 401(k)s and IRAs would allow for a huge influx of passive capital.

      Retail component would actually cause selling pressure as was seen in 2015 when more and more retailers started accepting bitcoin.

    1. The idea of a gold exchange-traded fund was first conceptualized by Benchmark Asset Management Company Private Ltd in India when they filed a proposal with the SEBI in May 2002. However it did not receive regulatory approval at first and was only launched later in March 2007

      Took 5 years to get approval for gold etf in India

    1. However, most ETCs implement a futures trading strategy, which may produce quite different results from owning the commodity.
    2. However, generally commodity ETFs are index funds tracking non-security indices. Because they do not invest in securities, commodity ETFs are not regulated as investment companies under the Investment Company Act of 1940 in the United States, although their public offering is subject to SEC review and they need an SEC no-action letter under the Securities Exchange Act of 1934. They may, however, be subject to regulation by the Commodity Futures Trading Commission.

      Commodity etfs are regulated by CFTC but need a no action letter from the SEC to be approved.

    3. The idea of a Gold ETF was first officially conceptualised by Benchmark Asset Management Company Private Ltd in India when they filed a proposal with the SEBI in May 2002.[32] The first gold exchange-traded fund was Gold Bullion Securities launched on the ASX in 2003, and the first silver exchange-traded fund was iShares Silver Trust launched on the NYSE in 2006. As of November 2010 a commodity ETF, namely SPDR Gold Shares, was the second-largest ETF by market capitalization.[33]

      In 8 years gold etf became the second largest by market cap

  20. Jul 2018
    1. Mayor de Blasio and his administration have made progress in meeting their goal of building 200,000 affordable units over the span of a decade, as 21,963 new units were added in 2016, the most in 27 years. However, there continues to be a shortage in East Harlem. Out of the nearly 20,000 affordable units the city brought to all five boroughs, just 249 units have been built in East Harlem, according to a new report by the Department of Housing Preservation and Development (HPD). To better accommodate these residents, the city plans on expediting the construction of 2,400 units of affordable housing over the next few years, as DNA Info reported.
    1. However, price time-series have some drawbacks. Prices are usually only positive, which makes it harder to use models and approaches that require or produce negative numbers. In addition, price time-series are usually non-stationary, that is, their statistical properties are not stable over time.
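
      A small sketch of the usual workaround (the prices below are made up): modeling log returns instead of price levels yields a series that can take negative values and whose statistical properties are generally treated as far more stable over time.

```python
# Convert a price series to log returns, the standard fix for both drawbacks.
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3, 103.0, 101.1])

log_returns = np.diff(np.log(prices))   # r_t = ln(P_t / P_{t-1})
print(log_returns)                      # mixes positive and negative values

# The price level trends, so its mean and variance drift; the return series is
# what most models treat as (approximately) stationary.
```
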
    1. Denote N as the number of instances of evidence we possess. As we gather an infinite amount of evidence, say as N → ∞, our Bayesian results (often) align with frequentist results. Hence for large N, statistical inference is more or less objective. On the other hand, for small N, inference is much more unstable: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we preserve the uncertainty that reflects the instability of statistical inference of a small-N dataset.

      The law of large numbers gets us to the frequentist result, but the Bayesian perspective reflects the instability of inferential statistics when the number of observed instances is small.
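
      A small sketch of this point with a conjugate Beta-Binomial model (the true bias, prior, and sample sizes are assumptions for illustration): with few observations the posterior credible interval is wide, reflecting unstable inference at small N; with many observations it narrows and effectively coincides with the frequentist estimate.

```python
# Posterior uncertainty vs. sample size for a coin with assumed bias 0.6.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_p = 0.6

for n in (10, 100, 10_000):
    heads = rng.binomial(n, true_p)
    posterior = stats.beta(1 + heads, 1 + n - heads)   # uniform Beta(1,1) prior
    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"N={n:>6}: 95% credible interval ({lo:.3f}, {hi:.3f}), "
          f"width {hi - lo:.3f}, frequentist MLE {heads / n:.3f}")
# As N grows the interval collapses onto the MLE; at small N the preserved
# width is exactly the uncertainty the passage says Bayesian analysis keeps.
```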

    1. A core tenet of the Y Combinator playbook for startups is to talk to your users. If you’re interested in building a third party app on top of a fat protocol, the lesson might be to also talk to competing apps’ users to figure out what needs aren’t being served. In a similar vein, protocol developers should talk to app developers and learn what they think end users want.

      This isn't happening nearly enough, which is why protocols don't provide tech components for viable end-user apps.

    1. The reason Mr. Wonderful loves royalty based funding is that it is a big win for both businesses and investors. Investors see a return on helping businesses succeed. Experienced investors will even offer guidance to help business owners avoid the pitfalls that many entrepreneurs stumble into. On the business side, entrepreneurs get the financing they need without debt or sacrificing ownership of their companies in any way. Additionally, since repayment of royalty based financing is structured around revenue, there is no rigid payment schedule. Royalty based funding provides financing and flexibility, which gives businesses the freedom to reach their potential, while simultaneously providing healthy returns to investors.
    1. Here are the definitions to make sure we’re on the same page:

       • Subscription model — a periodic (monthly, yearly, or seasonal) payment to gain access to products or services.
       • Transactional model — you pay as you use the products and services.
    1. Second, recall that the impetus for moving from proof-of-work to proof-of-stake is to reduce the amount of computational resource and energy required to maintain the network by a couple orders of magnitude. That’s good for scalability and potential adoption, but also means a commensurate reduction in the PQ of the network.

      The impetus is the reduction of technical debt and the increased efficiency of network resource provision. The computational resources used for mining rather than for processing transactions get repurposed to increase the number of transactions that can be processed.

      The assumption that Q is constant is bizarre. Looking just at transaction throughput, the goal is to process several hundred thousand transactions per second, if not millions. P and Q are clearly inversely correlated.

    2. Is that added value enough to offset its inefficiency compared to the incumbent centralised Twitter? Would Token Twitter offer compellingly higher utility compared to centralised Twitter, including enough surplus utility to offset the cost of operating the consensus mechanism? I’m not so sure.

      Considering that the majority of the cost of operating Twitter comes from human capital, marketing, legal, and accounting rather than from IT, which continues to fall on a per-unit basis while the aforementioned costs continue to increase: yes. If the assumption is that legal and accounting are no longer needed, and that developers and other employees are overpaid relative to an entirely crowdsourced labor force, then you might see the redundancy costs in IT operations offset by cost reductions in other operational expenses.

    3. The combined effect of low and falling PQ and potentially very high V is that the utility value of utility cryptoassets at equilibrium should in fact be relatively low.

      The utility value of utility cryptoassets is not entirely a function of the cost of network resources. These assets also confer influence, which can't be measured financially: it captures, for each participant, the expected future value of the network and what having influence over its direction can afford that particular participant.

      Also, I would think that PQ is artificially high right now because of how inefficient blockchains are, but as P falls due to further scalability gains, Q should increase, not only offsetting the decline in P but overcompensating for it as more services can be built on this infrastructure. A premium will be placed on the network effects of a protocol that has a successful application that other applications will want to interoperate with for its data and microservices (e.g. identity, accounts, financial services, etc.).

    1. Additionally, there is work available in most countries for people living outside the US, but only workers in the US and India can withdraw cash. Workers from other countries can only redeem their earnings through Amazon gift cards.
  21. Jun 2018
    1. there have always been far more users/consumers than suppliers, which means that in a world where transactions are costly owning the supplier relationship provides significantly more leverage.
    2. The value chain for any given consumer market is divided into three parts: suppliers, distributors, and consumers/users. The best way to make outsize profits in any of these markets is to either gain a horizontal monopoly in one of the three parts or to integrate two of the parts such that you have a competitive advantage in delivering a vertical solution. In the pre-Internet era the latter depended on controlling distribution.
    1. Since you use a cryptoasset once, and then it’s in someone else’s hands, this discounting methodology is not accumulative over each year the way it is with a DCF.

      Why does a token have to be used once and change hands? A token can be taken out of circulation.

    1. Basically, all token pitches include a line that goes something like this: "There is a fixed supply of tokens. As demand for the token increases, so must the price." This logic fails to take into account the velocity problem
    1. This, of course, leaves us none the wiser as to how to model velocity, as the equation of exchange is nothing more than an identity. MV=PQ just says that the money flow of expenditures is equal to the market value of what those expenditures buy, which is true by definition. The left and right sides are two ways of saying the same thing; it’s a form of double-entry accounting where each transaction is simultaneously recorded on both sides of the equation. Whether an effect should be recorded in M, V, P, or Q is, ultimately, arbitrary. To transform the identity into a tool with predictive potency, we need to make a series of assumptions about each of the variables. For example, monetarists assume M is determined exogenously, V is constant, and Q is independent of M and use the equation to demonstrate how increases in the money supply increase P (i.e. cause inflation).
    2. The first practical problem with velocity is that it’s frequently employed as a catch-all to make the two sides of the equation of exchange balance. It often simply captures the error in our estimation of the other variables in the model.
    3. The core thesis of current valuation frameworks is that utility value can be derived by (a) forecasting demand for the underlying resource that a network provisions (the network’s ‘GDP’) and (b) dividing this figure by the monetary base available for its fulfillment to obtain per-unit utility value. Present values can be derived from future expected utility values using conventional discounting. The theoretical framework that nearly all these valuation models employ is the equation of exchange, MV=PQ.
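
      A toy sketch of the valuation recipe described above; every number is an assumption for illustration, not from the source. Forecast the network's resource spend (PQ), divide by an assumed velocity to get the required monetary base, spread that over the token float, and discount back to the present. It also makes explicit the velocity sensitivity flagged in the surrounding annotations.

```python
# INET-style utility valuation via the equation of exchange (assumed inputs).
def utility_value_per_token(network_gdp_usd, velocity, tokens_outstanding):
    # MV = PQ  =>  required monetary base M (in USD) = PQ / V
    monetary_base_usd = network_gdp_usd / velocity
    return monetary_base_usd / tokens_outstanding

future_gdp = 5_000_000_000    # forecast on-chain resource spend in year 10 (USD)
velocity = 20                 # assumed: each token changes hands 20x per year
supply = 100_000_000          # assumed token float at that time
discount_rate = 0.30          # assumed required return for a venture-risk asset
years = 10

future_value = utility_value_per_token(future_gdp, velocity, supply)
present_value = future_value / (1 + discount_rate) ** years

print(f"expected utility value per token in year {years}: ${future_value:.2f}")
print(f"discounted to today at {discount_rate:.0%}: ${present_value:.2f}")
# Doubling the assumed velocity halves the answer, which is precisely the
# "velocity problem" the preceding annotations point to.
```
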
    1. Mechanism design studies solution concepts for a class of private-information games. Leonid Hurwicz explains that 'in a design problem, the goal function is the main "given", while the mechanism is the unknown. Therefore, the design problem is the "inverse" of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism.'[1] So, two distinguishing features of these games are: that a game "designer" chooses the game structure rather than inheriting one; and that the designer is interested in the game's outcome.

      Advantages over traditional game theory for token economics:

      • a game "designer" chooses the game structure rather than inheriting one
      • the designer is interested in the game's outcome
    1. 1. Thesis: Open Standards, Market Cycles and Investment Returns. Information technology evolves in multi-decade cycles of expansion, consolidation and decentralization.

      Open standards reduce production costs, which bring down prices for consumers and increase the potential size of the market.

      New entrants, realizing that costs are now low, competition is scarce, and the potential reward is high, attempt to disrupt incumbents with more efficient and scalable business models.

      Market consolidates around the platforms of the companies that realize and implement these business models first.

      Demand then builds for a low cost, open source alternative to the incumbent platforms.

    2. We favor spreading price and risk by building up and averaging out of positions over time rather than speculating on speculation. A committed capital structure with significant capital reserves for staged follow-ons gives us the flexibility to build up our investments independent of market sentiment. We are shielded from having to dump assets on the market to honor redemption requests, avoiding the dreaded “death spiral” which can plague more liquid fund structures.
    3. We fund the development of decentralized information networks coordinated by a scarce cryptoasset – or token – native to the protocol. Our thesis is that decentralization and standardization at the data layer of the internet is collapsing the production costs of information networks, eliminating data monopolies and creating a new wave of innovation.
    4. Crypto provides a new mechanism for organizing human activity on a global basis using programmable financial incentives. It’s an opportunity to design information networks which can achieve unprecedented levels of scale by decentralizing the infrastructure, open sourcing the data, and distributing value more broadly. What we’ve discovered is the native business model of networks – which, as it turns out, encompass the entire economy.
    5. Most of the use cases today involve compensating machine work (transaction processing, file storage, etc.) with tokens: the building blocks of decentralized applications. But the greatest long-term opportunity is in networks where tokens are earned by end-users themselves.
    6. We’ve also realized how inefficient the joint-stock equity industry model is at accounting for and distributing the real value created by online networks. The value of a share of stock is necessarily a function of profits; the price of Twitter’s stock only reflects Twitter Inc’s ability to monetize the data – and not the actual worth of the service. Tokens solve this inefficiency by deriving financial value directly from user demand as opposed to “taxing” by extracting profits.
    7. Following the history of information technology and the massive trend towards open source, we can see that democratizing information is the natural next step in the incessant trend to open source, and thus the next big opportunity for innovation.
    8. The way to play a consolidating market is to invest heavily into the consolidating incumbents (which are likely to continue growing strongly for a long period of time) and to invest progressively in the insurgent platforms that will grow to commoditize the incumbent business models and create a new wave of innovation. We are focused on the latter.
    9. Those who succeed the most and establish successful platforms "on top" of the open standard later tend to consolidate the industry by leveraging their scale (in assets and distribution) to integrate vertically and expand horizontally at the expense of smaller companies. Competing in this new environment suddenly becomes expensive and startups struggle to create value in the shadow of incumbents, compressing venture returns. Demand then builds for a low cost, open source alternative to the incumbent platforms, and the cycle repeats itself: the new open standard emerges and gets adopted, the market decentralizes as new firms leverage the cost savings to compete with the old on price, value creation shifts upwards (once more), and so on.