12 Matching Annotations
  1. Aug 2022
    1. In short, Smith believed that slavery is not as profitable as free labor and that forced servitude takes away economic agents’ motivation to succeed. It takes away their ability to better their own condition. Instead, they are motivated to make life easier for themselves in the short term, which transfers costs that would normally be borne by the worker to the master. 

      This is closely related to current employment models. Many jobs simply pay by the hour and don't promote or give raises, leaving workers who do better with no motivation.

    2. Smith offered an economic argument against slavery because he did not believe that monarchy (or freedom from it), wealth, or religion can be trusted to convince people to free their slaves

      Looking at American history, this is absolutely true.

    3. “we are not to imagine the temper of the Christian religion is necessarily contrary to slavery” (LJ(B) iii.128)—many Christian countries, again including Scotland, allowed slavery

      Adam Smith thought that Christians were not against slavery because of the way that professing Christians acted.

    4. and lamented, to them, that he doubted economic motivation would be sufficient for masters to liberate those under their yoke

      Even Adam Smith didn't think the free market was sufficient to accomplish every moral good, the way libertarians do.

  2. Jul 2022
    1. Such locomotor circuits have been documented in a number of species, including cats (1), rodents (2), crayfish (3), stick insects (4), and cockroaches (5).

      It's crazy that any part of the brain is so general across so many species.

    1. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client's interest. The physician, puzzled by a patient's reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds, and side trails to their physical and chemical behavior.

      Very interesting the way it describes different professions. Although there have been some approaches to the memex, none of them have been this universally usable.

    2. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

      Even with current Zettelkasten technology like Logseq, there is no way to create a trail and send a particular trail off to a friend. I wonder what the copyright laws would look like when it comes to sharing excerpts as part of annotated trails like this. Would it be covered under Fair Use? What would a file format or a renderer for this look like?

    3. He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him.

      We have gotten away from written annotations for digital work and I'm not entirely sure it's a good thing. I want to think through the trade-offs of this.

  3. Jun 2022
    1. Who models the models that model models? An exploration of GPT-3's in-context model fitting ability

      Very interesting article exploring the abilities of GPT-3 to perform on non-text problems. Very relevant to AI Learning to Learn and General Intelligence.

    1. The Scaling Hypothesis

      Really fascinating article on the benefits of scale. Essentially it hypothesizes that as AI models get larger and more complex, they simply become better at solving more complex problems. The theorized reason is that they have more room to trivialize more and more sub-problems (think of the way that neural networks can approximate Bayesian functions).