13 Matching Annotations
  1. Jun 2017
    1. Various secular trends make everything more expensive and worse, which means government has to spend more money and regulation to get the same level of services

      This is exactly Joseph Tainter’s argument—complexity yields diminishing returns in every society, so more and more must be spent just to maintain the status quo. That inevitably leads to societal collapse.

    2. These were the men who lived through the centuries of Roman fall and Barbarian triumph, and who by virtue of their elevated position, their learning, and talents, should have seen, if not foretold, the course of events.

      I am thankful for Terry Jones’ “Barbarians”, which pokes comedic fun at exactly this sort of pomposity.

    3. while this pleasant country house and senior common room life was going calmly on, what do we find happening in the history books?

      While Lewis is writing about kids and lions and witches and Narnia, does he mention the war?

    4. To cure this sickness of population the Roman rulers knew no other way than to dose it with barbarian vigour. Just a small injection to begin with and then more and more till in the end the blood that flowed in its veins was not Roman but barbarian.

      Definitely can tell this was written before the Holocaust. … OH! This first chapter was written during the rise of the Third Reich, and that’s why it’s so anti-German!

    5. True to its own ethos it was impartial as between Barbarian and Roman, or between the Romans who prospered and ruled and those outside the pale.

      Christianity, that is.

    6. The Roman world was a world of schools and universities, writers, and builders. The barbarian world was a world in which mind was in its infancy and its infancy was long.

      SO cringe-worthy.

  2. Oct 2016
    1. At some point, methods can become so complex that the only results that you can believe are those that match your expectations. At that point, the research has lost something very important: research should be able to change your mind.

      This is a very powerful message, one that I hadn't heard before. It immediately reminds me of Richard Feynman’s “Cargo Cult Science” piece where he talks about physicists’ shameful behavior after Millikan’s oil drop experiment:

      When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. … this long history of learning how not to fool ourselves—of having utter scientific integrity—is, I'm sorry to say, something that we haven't specifically included in any particular course that I know of. We just hope you've caught on by osmosis.

      And now you, Dr Salganik, tell us that this is absolutely imperative!

      (source)

    2. simple research is not the same as easy research

      Wow, are you channelling Rich Hickey’s famous-in-some-circles talk, Simple Made Easy?! That talk, and this dichotomy, are legendary in the Clojure/ClojureScript communities.

    3. Physical stores already collect extremely detailed purchase data, and they are developing infrastructure to monitor customers’ shopping behavior and mix experimentation into routine business practice.

      Source? This sounds fascinating (tracking customers’ movements via cameras, etc.), and I’d like to know more about it.

    4. They used the survey data to train a machine learning model to predict someone’s wealth from their call data, and then they used this model to estimate the wealth of all 1.5 million customers. Next, they estimated the place of residence of all 1.5 million customers by using the geographic information embedded in the call logs.

      I have some background in machine learning—which may be an aid or an obstacle in understanding this description of Blumenstock’s work—but it’s not obvious to me what “call logs” and “call data” mean here.

      The researchers collected survey data on a small sample of the entire population—ok.

      They trained a model that consumes “call data” and estimates wealth—but what is “call data” here? The survey data without the wealth data? Or is it cell phone usage data from the cell phone company?

      Then they ran the model on data for all customers. This is what makes me think the model was trained on some kind of cell phone data, because that’s the only data available for people outside the survey.

      Thinking about it a bit more, and without reading the actual paper (this is an open review, right? 😛), I think I understand what they did: they used the survey data to train a model that predicts wealth from cell-phone usage data (with ground truth = the wealth reported in the surveys). Then they ran the resulting model on the cell-phone usage data of all customers to estimate wealth across the country. Is this accurate?
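      If my reading is right, the pipeline has a simple shape that can be sketched in a few lines. Everything below is illustrative only: the single feature (monthly call volume), the linear model, the subscriber IDs, and all numbers are stand-ins I made up, not anything from the actual paper.

```python
# Hypothetical sketch of the pipeline as I understand it:
#   1) survey a small sample to get ground-truth wealth,
#   2) fit a model mapping call-record features to wealth on that sample,
#   3) apply the fitted model to every subscriber's call features.

def fit_linear(xs, ys):
    """Ordinary least squares for one feature: wealth ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Step 1: the surveyed subsample: (monthly call volume, surveyed wealth).
surveyed = [(10, 120.0), (40, 300.0), (25, 210.0), (60, 420.0)]

# Step 2: train only on the surveyed subsample.
a, b = fit_linear([x for x, _ in surveyed], [y for _, y in surveyed])

# Step 3: estimate wealth for ALL subscribers from their call features,
# including the vast majority who were never surveyed.
all_subscribers = {"sub_001": 15, "sub_002": 55, "sub_003": 33}
estimated_wealth = {sid: a * x + b for sid, x in all_subscribers.items()}
```

      In the real setting the features would presumably be far richer (call frequency, recharge amounts, network structure) and the model more sophisticated, but the train-on-the-surveyed, predict-on-everyone shape is the same.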