21 Matching Annotations
  1. Jan 2021
    1. [21, 39] directly use conventional CNN or deep belief networks (DBN)

      interesting, read!

  2. Dec 2020
    1. $\alpha\, G_t\, \frac{\nabla \pi(A_t \mid S_t, \theta_t)}{\pi(A_t \mid S_t, \theta_t)}$

      notice that the multiplier of the gradient here, $G_t / \pi(A_t \mid S_t, \theta_t)$, has the sign of $G_t$, so with non-negative returns we always move in the direction of the gradient. Using a baseline, $G_t - \hat{v}(S_t)$, allows this direction to reverse when $G_t$ is lower than the baseline
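
      To make the comparison explicit, here are both update rules written out (my own restatement in the notation of the highlights, not a quote from the book):

        $\theta_{t+1} = \theta_t + \alpha\, G_t\, \frac{\nabla \pi(A_t \mid S_t, \theta_t)}{\pi(A_t \mid S_t, \theta_t)}$   (no baseline: the step always has the sign of $G_t$)

        $\theta_{t+1} = \theta_t + \alpha\, \bigl(G_t - \hat{v}(S_t, w)\bigr)\, \frac{\nabla \pi(A_t \mid S_t, \theta_t)}{\pi(A_t \mid S_t, \theta_t)}$   (with baseline: the step reverses direction whenever $G_t < \hat{v}(S_t, w)$)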

    2. Actor–Critic with Eligibility Traces (continuing), for estimating $\pi_\theta \approx \pi_*$

      actor-critic algorithm, one-step TD (a rough sketch follows below)
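
      A minimal Python sketch of that one-step (episodic) actor-critic loop, following the book's pseudocode; env, policy_sample, v_hat, grad_v_hat, and grad_ln_pi are hypothetical helpers assumed here, not names from the book or any library.

        def one_step_actor_critic(env, theta, w, alpha_theta, alpha_w, gamma, n_episodes):
            # theta: policy parameters, w: state-value weights
            for _ in range(n_episodes):
                s = env.reset()
                I = 1.0                              # accumulates gamma^t for the actor update
                done = False
                while not done:
                    a = policy_sample(s, theta)                  # A ~ pi(.|S, theta)
                    s_next, r, done = env.step(a)
                    # one-step TD error; bootstrap only from a non-terminal S'
                    target = r + (0.0 if done else gamma * v_hat(s_next, w))
                    delta = target - v_hat(s, w)
                    # critic, TD(0):  w <- w + alpha_w * delta * grad v_hat(S, w)
                    w = w + alpha_w * delta * grad_v_hat(s, w)
                    # actor:  theta <- theta + alpha_theta * I * delta * grad ln pi(A|S, theta)
                    theta = theta + alpha_theta * I * delta * grad_ln_pi(s, a, theta)
                    I = gamma * I
                    s = s_next
            return theta, w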

    3. (13.16)

      very similar to box 199 but without h(s)

    4. $\theta \leftarrow \theta + \alpha^{\theta}\, I\, \delta\, \nabla \ln \pi(A \mid S, \theta)$

      actor-critic policy update with a state-value baseline, with discounting!

    5. $w \leftarrow w + \alpha^{w}\, \delta\, \nabla \hat{v}(S, w)$

      TD(0) update

    6. $G_{t:t+1} - \hat{v}(S_t, w)$

      same as REINFORCE MC baseline, but with the sampled G replaced with a bootstrapped G
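
      For reference, the one-step bootstrapped return used here is $G_{t:t+1} \doteq R_{t+1} + \gamma\, \hat{v}(S_{t+1}, w)$, so the critic's own estimate of the next state stands in for the rest of the sampled return $G_t$.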

    7. That is, w is a single component, w.

      constant baseline?

    8. If there is discounting ($\gamma < 1$) it should be treated as a form of termination, which can be done simply by including a factor of $\gamma^t$ in the second term of

      termination, because discounting by gamma is equivalent to an undiscounted case in which each step terminates with probability 1 − gamma (i.e. continues with probability gamma); see the restatement below
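
      One way to see this equivalence (my own hedged restatement): treat each step as continuing with probability $\gamma$ and otherwise terminating with no further reward. The reward $k$ steps ahead is then only collected if the episode survives $k$ continuation events, so its contribution to the expected undiscounted return is $\gamma^{k}\, \mathbb{E}[R_{t+k+1}]$, and summing over $k$ recovers exactly the discounted return.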

  3. Oct 2020
    1. Cheng et al. [92] design a multi-channel parts-aggregated deep convolutional network by integrating the local body part features and the global full-body features in a triplet training framework

      TODO: read this and find out what the philosophy behind parts-based models is

    2. adaptive average pooling

      what is this?
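
      For the question above, a small PyTorch illustration of what adaptive average pooling does: it averages each channel's feature map down to a fixed output size, whatever the input's spatial size. nn.AdaptiveAvgPool2d is the real torch layer; the tensor shapes are just an assumed example.

        import torch
        import torch.nn as nn

        pool = nn.AdaptiveAvgPool2d((1, 1))     # always produces a 1x1 map per channel

        x_small = torch.randn(8, 512, 7, 7)     # e.g. square feature maps
        x_large = torch.randn(8, 512, 24, 12)   # e.g. taller person-crop feature maps

        print(pool(x_small).shape)  # torch.Size([8, 512, 1, 1])
        print(pool(x_large).shape)  # torch.Size([8, 512, 1, 1]) -- same output size

      The pooling window is derived from the input size, so the same head works for variable input resolutions.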

    3. Generation/Augmentation

      TODO: read

    4. Using the annotated source data in the training process of the target domain is beneficial for cross-dataset learning

      What? Clarify

    5. Dynamic graph matching (DGM)

      super interesting, but hardly applicable. Do read it though!

    6. Sample Rate Learning

      what

    7. Singular Vector Decomposition (SVDNet)

      seems interesting, "iteratively integrate the orthogonality constraint in CNN training"

    8. Omni-Scale Network (OSNet)

      read the paper again to see if there are any good ideas for the architecture

    9. bottleneck layer

      Bottleneck layers do a 1x1 convolution to reduce the dimensionality before a 3x3 convolution, to save computation (see the sketch below)

      https://medium.com/@erikgaas/resnet-torchvision-bottlenecks-and-layers-not-as-they-seem-145620f93096
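
      A hedged sketch of the idea in the note above, loosely modelled on the ResNet bottleneck (channel counts are illustrative and batch norm is omitted; this is not the exact torchvision implementation):

        import torch
        import torch.nn as nn

        class Bottleneck(nn.Module):
            """1x1 reduce -> 3x3 conv -> 1x1 expand, with a residual connection."""
            def __init__(self, channels=256, mid_channels=64):
                super().__init__()
                self.reduce = nn.Conv2d(channels, mid_channels, kernel_size=1, bias=False)
                self.conv3x3 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3,
                                         padding=1, bias=False)
                self.expand = nn.Conv2d(mid_channels, channels, kernel_size=1, bias=False)
                self.relu = nn.ReLU(inplace=True)

            def forward(self, x):
                out = self.relu(self.reduce(x))     # cheap 1x1: 256 -> 64 channels
                out = self.relu(self.conv3x3(out))  # the costly 3x3 runs on only 64 channels
                out = self.expand(out)              # 1x1 back up: 64 -> 256 channels
                return self.relu(out + x)           # residual add

        x = torch.randn(1, 256, 56, 56)
        print(Bottleneck()(x).shape)  # torch.Size([1, 256, 56, 56])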

    10. Global Feature Representation Learning

      something that came up whilst looking through papers on attention: https://arxiv.org/pdf/1709.01507.pdf squeeze-and-excitation
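
      A hedged sketch of the squeeze-and-excitation block from the linked paper: a global-average "squeeze" to one value per channel, a small bottleneck MLP "excitation" producing per-channel gates, then channel-wise rescaling. The class name and reduction ratio are illustrative, not the paper's reference code.

        import torch
        import torch.nn as nn

        class SEBlock(nn.Module):
            def __init__(self, channels=256, reduction=16):
                super().__init__()
                self.squeeze = nn.AdaptiveAvgPool2d(1)          # global average pool per channel
                self.excite = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                    nn.Sigmoid(),                               # gates in (0, 1)
                )

            def forward(self, x):
                b, c, _, _ = x.shape
                s = self.squeeze(x).view(b, c)        # "squeeze": (B, C)
                g = self.excite(s).view(b, c, 1, 1)   # "excitation": channel weights
                return x * g                          # recalibrate feature maps channel-wise

        x = torch.randn(2, 256, 14, 14)
        print(SEBlock()(x).shape)  # torch.Size([2, 256, 14, 14])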

    11. [68]

      Parts-based paper, interesting approach
