5 Matching Annotations
  1. Oct 2019
    1. MDX is a superset of Markdown. It allows you to write JSX inside Markdown. This includes importing and rendering React components!
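
      For a feel of what that enables, a minimal sketch (hypothetical file and component names) of an ordinary React component and the .mdx that would import and render it:

      ```tsx
      // Callout.tsx: an ordinary React component (hypothetical example).
      import React from "react";

      export default function Callout({ children }: { children: React.ReactNode }) {
        return <aside className="callout">{children}</aside>;
      }

      // An .mdx file could then import and render it right between paragraphs:
      //
      //   import Callout from "./Callout";
      //
      //   # A regular Markdown heading
      //
      //   <Callout>And a React component rendered from Markdown!</Callout>
      ```
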
  2. Sep 2019
    1. Text embedding models convert any input text into an output vector of numbers, and in the process map semantically similar words near each other in the embedding space. [Figure 2: Text embeddings convert any text into a vector of numbers (left). Semantically similar pieces of text are mapped nearby each other in the embedding space (right).] Given a trained text embedding model, we can directly measure the associations the model has between words or phrases. Many of these associations are expected and are helpful for natural language tasks. However, some associations may be problematic or hurtful. For example, the ground-breaking paper by Bolukbasi et al. [4] found that the vector-relationship between "man" and "woman" was similar to the relationship between "physician" and "registered nurse" or "shopkeeper" and "housewife".

      love that Big Lebowski reference
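
      To make the "directly measure the associations" part concrete, a toy sketch of the difference-vector comparison with made-up 3-d vectors (a trained model would give hundreds of dimensions):

      ```ts
      // Cosine similarity between two embedding vectors.
      function cosine(a: number[], b: number[]): number {
        const dot = a.reduce((s, ai, i) => s + ai * b[i], 0);
        const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
        return dot / (norm(a) * norm(b));
      }

      const diff = (a: number[], b: number[]) => a.map((x, i) => x - b[i]);

      // Toy vectors, invented for illustration only.
      const emb: Record<string, number[]> = {
        man:       [0.9, 0.1, 0.3],
        woman:     [0.1, 0.9, 0.3],
        physician: [0.8, 0.2, 0.7],
        nurse:     [0.2, 0.8, 0.7],
      };

      // Bolukbasi et al. compare difference vectors: if the model encodes the
      // association, ("man" - "woman") points the same way as ("physician" - "nurse").
      console.log(cosine(diff(emb.man, emb.woman), diff(emb.physician, emb.nurse))); // ~1.0
      ```
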

  3. Jan 2019
    1. Grid devices can be nested or layered along with other devices and your plug-ins,

      Thanks to training for Cycling ’74 Max a year or so ago, I had a kind of micro-epiphany about encapsulation. Nesting devices in one another sounds like a mere convenience, but it has a rather deep effect on workflow once you start arranging things this way: you don’t have to worry about the internals of a box/patcher/module/device if you know what you can expect out of it. Some may take this for granted (after all, other modular systems have had it for quite a while), but there’s something profound about getting modules that can include other modules, especially when some of them are third-party plugins.

  4. Aug 2018
    1. Publishers and other sites can include a simple line of JavaScript to enable annotation by default across their content.

      Publishers and platform hosts who want to embed annotations can learn more about best practices here.
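
      The script in question is Hypothesis's embed.js. A sketch of loading it programmatically instead of pasting the tag into the page's HTML (the URL is the one Hypothesis documents; the function name is mine):

      ```ts
      // Equivalent to adding <script src="https://hypothes.is/embed.js" async></script>.
      function enableAnnotations(): void {
        const s = document.createElement("script");
        s.src = "https://hypothes.is/embed.js";
        s.async = true;
        document.head.appendChild(s);
      }

      enableAnnotations();
      ```
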

  5. Apr 2017
    1. Word2vec is a particularly computationally efficient predictive model for learning word embeddings from raw text. It comes in two flavors, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (Sections 3.1 and 3.2 in Mikolov et al.).
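
      As a reminder of how the two flavors differ, a toy sketch of just the training-pair extraction (not the model itself): CBOW predicts the center word from its surrounding context, while skip-gram predicts each context word from the center word.

      ```ts
      type Pair = { input: string[]; target: string };

      // Build the (input -> target) examples each flavor trains on.
      function trainingPairs(tokens: string[], windowSize: number, mode: "cbow" | "skipgram"): Pair[] {
        const pairs: Pair[] = [];
        tokens.forEach((center, i) => {
          // Words within `windowSize` positions of the center word, excluding it.
          const context = tokens.filter((_, j) => j !== i && Math.abs(j - i) <= windowSize);
          if (mode === "cbow") {
            pairs.push({ input: context, target: center }); // context -> center
          } else {
            for (const c of context) pairs.push({ input: [center], target: c }); // center -> each context word
          }
        });
        return pairs;
      }

      console.log(trainingPairs(["the", "quick", "brown", "fox"], 1, "cbow"));
      // e.g. { input: ["the", "brown"], target: "quick" }
      console.log(trainingPairs(["the", "quick", "brown", "fox"], 1, "skipgram"));
      // e.g. { input: ["quick"], target: "the" }, { input: ["quick"], target: "brown" }
      ```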