4 Matching Annotations
  1. Mar 2021
  2. Oct 2019
    1. the generator and discriminator losses derive from a single measure of distance between probability distributions. In both of these schemes, however, the generator can only affect one term in the distance measure: the term that reflects the distribution of the fake data. So during generator training we drop the other term, which reflects the distribution of the real data.

      GAN loss: how the two loss functions work during GAN training. (A sketch of the two-term loss follows below.)
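
      A minimal sketch, in plain NumPy, of the point above: the distance measure has two terms, the discriminator's loss keeps both, and the generator's loss keeps only the fake-data term because the real-data term is constant with respect to the generator. The arrays d_real and d_fake are hypothetical discriminator outputs, not part of the annotated source.

      ```python
      import numpy as np

      def discriminator_loss(d_real, d_fake, eps=1e-7):
          # Minimax loss: -(E[log D(x)] + E[log(1 - D(G(z)))]).
          # Both terms depend on the discriminator, so both are kept.
          d_real = np.clip(d_real, eps, 1 - eps)
          d_fake = np.clip(d_fake, eps, 1 - eps)
          return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

      def generator_loss(d_fake, eps=1e-7):
          # The real-data term E[log D(x)] is constant w.r.t. the generator,
          # so it is dropped; only the fake-data term remains (minimized by G).
          d_fake = np.clip(d_fake, eps, 1 - eps)
          return np.mean(np.log(1.0 - d_fake))

      # Hypothetical discriminator outputs for a real batch and a fake batch.
      d_real, d_fake = np.array([0.9, 0.8]), np.array([0.2, 0.3])
      print(discriminator_loss(d_real, d_fake), generator_loss(d_fake))
      ```

      In practice many implementations swap the generator term for the non-saturating -E[log D(G(z))] to get stronger gradients early in training, but either way the generator can only move the fake-data term.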

  3. Jul 2017
    1. Partial loss-of-function alleles cause the preferential loss of ventral structures and the expansion of remaining lateral and dorsal structures (Figure 1c) (Anderson and Nüsslein-Volhard, 1988). These loss-of-function mutations in spz produce the same phenotypes as maternal effect mutations in the 10 other genes of the dorsal group.

      This paper has been curated by FlyBase.

  4. Feb 2017
    1. SVM only cares that the difference is at least 10

      The margin seems to be set manually by the author in the loss function. In the sample code, the margin is 1, so each incorrect class has to score at least 1 lower than the correct class.

      How is this margin determined? It seems like one would have to know the magnitude of the scores beforehand.

      Diving deeper: is the score magnitude always the same if the parameters are normalized by their average and scaled to lie between 0 and 1? (Or between -1 and 1; I'm not sure of the correct scaling implementation.)

      Coming back to the topic: is this "minimum margin", or delta, a tunable parameter? (The sketch below exposes it as one.)

      What effects do we see on the model by adjusting this parameter?

      What are the best- and worst-case scenarios of adjusting this parameter?
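
      A minimal sketch of the multiclass hinge loss with the margin exposed as a parameter, to make the questions above concrete. The function name and the example scores are hypothetical, not taken from the course code.

      ```python
      import numpy as np

      def multiclass_hinge_loss(scores, correct_class, delta=1.0):
          # Sum over incorrect classes j of max(0, s_j - s_correct + delta).
          # The loss hits zero once every incorrect class scores at least
          # `delta` below the correct class.
          margins = np.maximum(0.0, scores - scores[correct_class] + delta)
          margins[correct_class] = 0.0  # the correct class contributes nothing
          return margins.sum()

      scores = np.array([3.2, 5.1, -1.7])  # hypothetical class scores
      print(multiclass_hinge_loss(scores, correct_class=0))  # 2.9
      ```

      If I'm reading the CS231n notes right, delta can safely be fixed at 1.0 in all cases: scaling the weights scales all score differences with it, so the absolute size of the margin is somewhat meaningless, and the real tradeoff is controlled by the regularization strength on the weights.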