16 Matching Annotations
  1. Aug 2017
    1.   { 1703, 1.1 }   { 4306, 2.3 }   { 4306, 2.7 }   { 1703, 4.0 }

      Do 1.1 and 4.0 belong to ad type 1703, while 2.3 and 2.7 belong to ad type 4306?

    2. Bootstrapped aggregate data with header

      I.e., resample the raw data and then aggregate each resample.

    3. use the first form to stream over the data
      { 2.7, 2.7}
      { 2.3, 2.7, 4.0, 4.0 }
      { 2.3, 4.0, 4.0, 4.0 }
      

      Once each resample's elements are in this form, do they become a count-based representation? (See the sketch under annotation 7 below.)

    4. maintaining B running aggregates, one for each resample
      { 0, 0, 2, 0 }  # Counts for Resample 0
      { 0, 1, 1, 2 }  # Counts for Resample 1
      { 0, 1, 0, 3 }  # Counts for Resample 2
      
    5. { 0, 0, 0 }

      With B = 3, the value 1.1 appears 0 times in all three resamples (the first column of the count table above).

    6. { 0, 0, 2, 0 }

      The counts for Resample 0, i.e., the first row of the count table above: the value 2.7 appears twice.

    7. lim_{n→∞} Binomial(n, 1/n) = Poisson(1)

      Why does this limit hold?
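
      This limit is what justifies the counting trick in annotations 3–6: in a size-n resample drawn with replacement, each element's count is Binomial(n, 1/n), which tends to Poisson(1), so per-element counts can be drawn without knowing n up front. A minimal sketch (variable names and toy values are mine, not from the post):

      import numpy as np
      from scipy.stats import binom, poisson

      rng = np.random.default_rng(0)

      B = 3                          # number of resamples
      values = [1.1, 2.3, 2.7, 4.0]  # the distinct raw values, in order
      counts = np.zeros((B, len(values)), dtype=int)

      # Stream over the raw data once; for each element, draw how many
      # times it appears in each of the B resamples via the Poisson(1)
      # approximation, keeping B running count aggregates.
      for j, _ in enumerate(values):
          counts[:, j] += rng.poisson(1.0, size=B)

      # Each row is one resample in count form; each column is one value's
      # counts across the B resamples (a { 0, 0, 0 } column means that
      # value never appears in any resample).
      print(counts)

      # Numeric check of the limit: Binomial(n, 1/n) ≈ Poisson(1) for large n.
      n = 10**6
      for k in range(4):
          print(k, binom.pmf(k, n, 1 / n), poisson.pmf(k, 1))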

  2. Jun 2017
    1. updates_collections: An optional list of collections that update_op should be added to.

      How can we update the learning rate, or trigger early stopping, based on these metrics?
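
      Each tf.metrics function returns a (value, update_op) pair: update_op accumulates the metric's internal counters batch by batch, and the value op reads the aggregate. Learning-rate schedules and early stopping have to live in the outer Python loop; here is a minimal TF 1.x sketch with made-up toy batches (the harness is an assumption, not documented API):

      import tensorflow as tf

      labels = tf.placeholder(tf.int64, [None])
      preds = tf.placeholder(tf.int64, [None])
      acc, acc_update = tf.metrics.accuracy(labels=labels, predictions=preds)

      val_batches = [([0, 1, 1], [0, 1, 0]), ([1, 0, 1], [1, 0, 1])]  # toy data

      best_acc, bad_epochs, patience = 0.0, 0, 2
      with tf.Session() as sess:
          sess.run(tf.global_variables_initializer())
          for epoch in range(10):
              # ... one epoch of training would go here ...
              sess.run(tf.local_variables_initializer())  # reset metric counters
              for y, y_hat in val_batches:
                  sess.run(acc_update, {labels: y, preds: y_hat})
              val_acc = sess.run(acc)  # metric aggregated over all batches
              if val_acc > best_acc:
                  best_acc, bad_epochs = val_acc, 0
              else:
                  bad_epochs += 1
                  if bad_epochs >= patience:  # stop when validation plateaus
                      break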

  3. May 2017
    1. update_op performs a corresponding operation to update internal model state.

      So here we update the model parameters to match the validation metric_fn? Is that the reason we might overfit the validation dataset? What if we use the validation dataset multiple times but don't tweak the model parameters? (As this post suggests, if we don't tweak any hyperparameters or parameters of the model when using the validation dataset, it can also serve as a test dataset.)

    2. Then later import it and extend it to a training graph.

      "When you create a tf.train.Saver with no arguments, it will implicitly use the current set of variables at the time of Saver construction when it saves and restores." https://github.com/tensorflow/tensorflow/issues/2489#issuecomment-221282483

  4. Apr 2017
    1. not burdening families with the costs of care

      different expression

    2. We found that what is most important to people at the end depends on where they live

      conclusion

  5. May 2016
    1. prefers many medium disagreements to one big one

      How should I understand this sentence? Should it be "prefers many medium disagreements over one big one"? (See the numeric check at the end of this list.)

    2. Evaluate on the test set only a single time, at the very end.

      Are there any conditions under which we don't need to tune hyperparameters on the validation set? In that case, could the accuracy on the validation data be the same as the accuracy on the final test data mentioned in this sentence?
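
    A numeric check for annotation 1 above (the vectors are made up): for the same total disagreement, the L2 distance scores many medium disagreements lower than one big one, which is the sense in which it "prefers" them.

      import numpy as np

      one_big = np.array([4.0, 0.0, 0.0, 0.0])  # one big disagreement
      medium = np.array([1.0, 1.0, 1.0, 1.0])   # many medium disagreements

      print(np.abs(one_big).sum(), np.abs(medium).sum())      # L1: 4.0 vs 4.0
      print(np.linalg.norm(one_big), np.linalg.norm(medium))  # L2: 4.0 vs 2.0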