9 Matching Annotations
  1. May 2023
    1. From Jim Keller on Lex, there are three fundamental types of compute:

      CPU: add, multiply, load, store, compare, branch (nothing can be known about anything)
      GPU: add, multiply, load, store (when things happen is known, but the addresses aren't)
      DSP: add, multiply (everything is known except the data)

      Neural networks are DSPs. All the loads and stores can be statically computed, which isn't even possible for GPU workloads, never mind CPU ones.
    1. Emphasizing lifetime-polymorphism can also make type inference untenable, a design choice that wouldn’t fit OCaml.

      References or sources? Why? Presumably there's some research into this?

    1. Personal emotional filters could filter content and translate interactions to provide only a particular emotional response, depending on how we want to feel at a given moment, perhaps delegating control to an AI that we trust to craft our emotional experience. (And here, another step: trust gives way to control, naturally.)

      this particularly reminds me of the premise of an Alastair Reynolds short story, Zima Blue. (It was adapted in a Netflix special, although I don't think the short did the story much justice …)

    1. More usefully, this mechanism allows us to make just a few things ambiently available. For example, we don't want to have to plumb stderr through to a function every time we want to do some printf debugging, so it makes sense to provide a tracing function this way (and Eio does this by default). Tracing allows all components to write debug messages, but it doesn't let them read them. Therefore, it doesn't provide a way for components to communicate with each other.

      Yet another "session-associated-storage / thread-local-storage is a good idea for logging" datapoint, omfg
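
      The pattern in the quote can be sketched outside of Eio, too. Here is a minimal Python analogue (the names are mine, not Eio's) using `contextvars` to make a tracing function ambiently available: any component can write debug messages through it, but none can read them back, so it can't be abused as a communication channel.

```python
import contextvars

# Context-local slot holding the current trace function. From a component's
# point of view it is write-only: they emit messages, never read them.
_tracer = contextvars.ContextVar("tracer", default=lambda msg: None)

def trace(msg):
    """Write a debug message to whatever tracer is ambiently installed."""
    _tracer.get()(msg)

def with_tracer(tracer, fn, *args):
    """Run fn with `tracer` ambiently available, restoring the old one after."""
    token = _tracer.set(tracer)
    try:
        return fn(*args)
    finally:
        _tracer.reset(token)

# A component never receives the tracer as an argument...
def component(x):
    trace(f"component called with {x}")
    return x * 2

# ...yet its messages reach whatever collector the caller installed.
messages = []
result = with_tracer(messages.append, component, 21)
```

      This mirrors the "ambient but asymmetric" design: plumbing stays out of every signature, while the flow of information remains one-way.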

    2. ambient authority

      This is a really evocative item of terminology.

    1. Available as a monolithic file, by chapters, and in PDF — git repository.

      What a cool documentation design; I love the all-in-one layout.

      Very reminiscent of the old CoffeeScript docs, to me.

    1. However, we have one more generalization opportunity. The semicolon is a sequencing operator whose semantics are usually defined by the programming language: typically, in regular deterministic languages, x := f a; y := f b means first compute f a and assign the result to x, then compute f b and assign the result to y. But what if we would like to parametrize our algorithm with the behavior of the semicolon and operators:

      What a fantastic explanation of a monad; by far the best one I've seen.
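
      The "parametrized semicolon" reading can be made concrete. This is my own small Python sketch of the idea (not the article's OCaml code): `bind` plays the role of `;`, and choosing a different `bind` changes what sequencing means.

```python
# Maybe-style sequencing: None short-circuits the rest of the "program".
def bind_maybe(value, next_step):
    return None if value is None else next_step(value)

def safe_div(a, b):
    return None if b == 0 else a / b

# "x := f a; y := f b" with the semicolon interpreted by bind_maybe:
def program(a, b, c):
    return bind_maybe(safe_div(a, b),
           lambda x: bind_maybe(safe_div(x, c),
           lambda y: y))

ok = program(8, 2, 2)      # both divisions succeed: (8 / 2) / 2 = 2.0
failed = program(8, 0, 2)  # first step fails, so the second never runs
```

      Swapping `bind_maybe` for a bind that threads a log, a state, or a list of results reinterprets the same "program" under a different semicolon, which is exactly the monadic generalization the quote is building toward.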

  2. Jun 2020
    1. Next, open the haskell-ide-engine directory (the one which you cloned above) and open the stack.yaml file. There may be multiple files like stack-8.2.1.yml or stack-8.2.2.yaml. Ignore those extra files. You need the stack.yaml file. Note down the resolver (very first line of this file).

      This is no longer up-to-date, as of mid-2020; the installation method has been updated to use Shake, and looks like this:

      stack ./install.hs hie
      

      Unfortunately, this means the instructions to check the stack.yaml are no longer relevant ... it now contains something like resolver: nightly-2020-05-01, rather than one of the lts- snapshots.

      I'm not sure what to do from here; I'm just gonna hope that the resultant hie will still work with the latest lts-* resolver for my beginner projects ...

    1. DO NOT start a stack vs cabal debate. Use that energy to instead convince the authors of stack & cabal to merge their work together and build a kickass unified tool for the ecosystem.

      I've been getting on my 'Haskell in 2020' feet for maybe 24 hours, and I've already seen, no kidding, three Cabal vs. Stack debates.

      As a newbie to the ecosystem, wtactualf, fam.