57 Matching Annotations
  web.stanford.edu
  Jun 2016
    1. Thus, we basically need to re-train

      ... in order to achieve what? The statement doesn't seem complete.

      Perhaps "when we need lower-dimensional embeddings with d = D', we can't obtain them from higher-dimensional embeddings with d = D"?

      However, it is possible, to a certain extent, to obtain lower-dimensional embeddings from higher-dimensional ones, e.g. via PCA; see the sketch below.
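
      A minimal sketch of that idea in Python, assuming the higher-dimensional embeddings sit in a NumPy array (all shapes and names here are illustrative, not from the paper):

        import numpy as np
        from sklearn.decomposition import PCA

        # Hypothetical setup: 10,000 words embedded with d = 300.
        rng = np.random.default_rng(0)
        E_high = rng.normal(size=(10_000, 300))

        # Project down to d = 100: PCA keeps the 100 directions of
        # largest variance in the embedding space.
        pca = PCA(n_components=100)
        E_low = pca.fit_transform(E_high)

        print(E_low.shape)                          # (10000, 100)
        print(pca.explained_variance_ratio_.sum())  # fraction of variance kept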

    1. (XᵀX)v̂ᵢ = λᵢv̂ᵢ

      This means that transforming the vector v̂ᵢ with the matrix XᵀX gives back a vector with the same direction, only scaled by λᵢ: the direction does not change under the transformation. That is exactly the defining property of an eigenvector, with λᵢ the corresponding eigenvalue.
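
      A quick numerical check of this property (toy numbers, not from the paper):

        import numpy as np

        X = np.array([[2.0, 0.0],
                      [1.0, 3.0],
                      [0.0, 1.0]])
        A = X.T @ X                  # the symmetric matrix XᵀX

        eigvals, eigvecs = np.linalg.eigh(A)
        lam, v = eigvals[0], eigvecs[:, 0]

        # A @ v points in the same direction as v, only scaled by λ.
        print(np.allclose(A @ v, lam * v))   # True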

    2. It is evident that the choice of P diagonalizes C_Y

      That is, we have found that by selecting P = Eᵀ (so that the rows of P are the eigenvectors of C_X), we get what we wanted: the matrix C_Y becomes diagonal.
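
      A sketch that checks this numerically, assuming the paper's convention that the rows of P are the eigenvectors of C_X (the data below is made up):

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5, 1000))          # 5 variables, 1000 samples
        X -= X.mean(axis=1, keepdims=True)      # center each variable

        C_X = X @ X.T / X.shape[1]              # covariance of the data
        eigvals, E = np.linalg.eigh(C_X)        # C_X = E diag(eigvals) Eᵀ
        P = E.T                                 # rows of P = eigenvectors of C_X

        Y = P @ X                               # change of basis
        C_Y = Y @ Y.T / Y.shape[1]

        # Off-diagonal entries of C_Y vanish (up to floating-point error).
        print(np.allclose(C_Y, np.diag(np.diag(C_Y))))   # True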

    3. Figure 3

      The example for redundancy is not (or at least does not seem to be) set in the context of the spring-and-ball example. Since there is no clear separation between the two examples, this might confuse readers.

    4. directions with largest variances in our measurement space contain the dynamics of interest

      We seek new features (new directions) that best capture the information (variance) of interest.

      Amount of variance -> amount of information.
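
      One way to make "amount of variance -> amount of information" concrete: rank the variances along each principal direction and look at the fraction of the total that each one carries (the eigenvalues below are made-up numbers):

        import numpy as np

        # Hypothetical eigenvalues of a covariance matrix, i.e. the
        # variances along the principal directions, sorted descending.
        eigvals = np.array([9.1, 2.3, 0.4, 0.1, 0.05])

        ratios = eigvals / eigvals.sum()
        print(ratios)           # first direction carries ~76% of the variance
        print(ratios.cumsum())  # first two directions already explain ~95%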

    5. ball’s position in a three-dimensional space

      ball's position = a data sample

      three-dimensional space = the physical space the ball moves in; the feature (measurement) space has 3 × 2 = 6 dimensions, because each of the three cameras records a 2-D image. The time dimension is not recorded as a feature, since it is, in effect, the index of the data sample.

      Some of these features (dimensions) are not necessary (they are redundant), as the toy simulation below illustrates.
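
      A toy version of the spring-and-ball setup (shapes and noise level are assumptions, not the paper's): the true dynamics are one-dimensional, but three 2-D cameras yield 6 partly redundant features, and the eigenvalue spectrum of the covariance exposes the single underlying direction.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 500)
        x = np.cos(2 * np.pi * t)           # true 1-D motion along the spring

        # Each of the 6 features (3 cameras × 2 image axes) is a noisy
        # linear readout of the same 1-D motion.
        readout = rng.normal(size=(6, 1))
        data = readout @ x[None, :] + 0.05 * rng.normal(size=(6, len(t)))

        data -= data.mean(axis=1, keepdims=True)
        C = data @ data.T / len(t)
        eigvals = np.linalg.eigvalsh(C)[::-1]   # largest variance first

        # One eigenvalue dominates: 6 measured dimensions, 1-D dynamics.
        print(eigvals / eigvals.sum())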