- Aug 2017
-
www.unofficialgoogledatascience.com
-
{ 1703, 1.1 } { 4306, 2.3 } { 4306, 2.7 } { 1703, 4.0 }
Do 1.1 and 4.0 belong to ad type 1703, while 2.3 and 2.7 belong to ad type 4306?
-
Bootstrapped aggregate data with header
resampling the raw data and then aggregating it
-
use the first form to stream over the data
{ 2.7, 2.7} { 2.3, 2.7, 4.0, 4.0 } { 2.3, 4.0, 4.0, 4.0 }
Once we get a resampled element in this form, does it turn into a count-based representation?
-
maintaining B running aggregates, one for each resample
{ 0, 0, 2, 0 }  # Counts for Resample 0
{ 0, 1, 1, 2 }  # Counts for Resample 1
{ 0, 1, 0, 3 }  # Counts for Resample 2
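The count representation above can be reproduced mechanically: each resample becomes one row of per-value counts over the distinct values. A minimal sketch in pure Python, using the example's values and resamples:

```python
from collections import Counter

# Values and resamples taken from the example above.
values = [1.1, 2.3, 2.7, 4.0]
resamples = [
    [2.7, 2.7],
    [2.3, 2.7, 4.0, 4.0],
    [2.3, 4.0, 4.0, 4.0],
]

def to_counts(resample, values):
    """Represent a resample as per-value counts, one entry per distinct value."""
    c = Counter(resample)
    return [c[v] for v in values]

count_rows = [to_counts(r, values) for r in resamples]
# count_rows == [[0, 0, 2, 0], [0, 1, 1, 2], [0, 1, 0, 3]]
```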
-
{ 0, 0, 0 }
With B = 3: the value 1.1 appears 0 times in all 3 resamples
-
{ 0, 0, 2, 0 }
the counts for Resample 0 from above (one row per resample, one entry per distinct value)
-
lim_{n→∞} Binomial(n, 1/n) = Poisson(1)
?
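As I understand it, this limit is what makes the streaming form possible: each element's count in a resample can be drawn as Poisson(1) without knowing n in advance, so B running aggregates can be updated in a single pass. A minimal sketch in pure Python (the data and B are illustrative; the Poisson(1) sampler uses Knuth's multiplication method for self-containment):

```python
import math
import random

def poisson1(rng):
    """Draw from Poisson(1) via Knuth's multiplication method."""
    limit = math.exp(-1.0)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def streaming_poisson_bootstrap_means(stream, B=1000, seed=0):
    """One pass over the data, maintaining B running (sum, count) aggregates."""
    rng = random.Random(seed)
    sums = [0.0] * B
    counts = [0] * B
    for x in stream:
        for b in range(B):
            c = poisson1(rng)   # how many times resample b sees this element
            sums[b] += c * x
            counts[b] += c
    return [s / n for s, n in zip(sums, counts) if n > 0]

means = streaming_poisson_bootstrap_means([1.1, 2.3, 2.7, 4.0] * 25, B=200)
# `means` approximates the bootstrap distribution of the sample mean.
```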
-
- Jun 2017
-
www.tensorflow.org
-
updates_collections: An optional list of collections that update_op should be added to.
How do we update the learning rate or do early stopping according to metrics?
-
- May 2017
-
www.tensorflow.org
-
update_op performs a corresponding operation to update internal model state.
So here do we update the model parameters to match the validation metric_fn? Is that why the validation dataset probably gets overfit? What if we use the validation dataset multiple times but don't tweak the model parameters? (As this post suggests, if we don't tweak any hyperparameters or parameters of the model when we use the validation dataset, it can also serve as a test dataset.)
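One way to see what update_op does: it folds a new batch into the metric's internal state (e.g. running totals), while the metric read-out reports the accumulated value; the model's weights are not touched. A pure-Python sketch of that split (the class and method names are illustrative, not the tf.metrics API):

```python
class StreamingAccuracy:
    """Running accuracy: internal state (correct, total), read + update ops."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update_op(self, labels, predictions):
        """Fold one batch into the internal state; model weights are untouched."""
        for y, p in zip(labels, predictions):
            self.correct += int(y == p)
            self.total += 1
        return self.value()  # tf's update_op also returns the updated metric

    def value(self):
        """Read the accumulated metric without changing any state."""
        return self.correct / self.total if self.total else 0.0

acc = StreamingAccuracy()
acc.update_op([1, 0, 1], [1, 1, 1])  # batch 1: 2 of 3 correct
acc.update_op([0, 0], [0, 1])        # batch 2: 1 of 2 correct
# acc.value() == 3 / 5
```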
-
-
www.tensorflow.org
-
Then later import it and extend it to a training graph.
"When you create a tf.train.Saver with no arguments, it will implicitly use the current set of variables at the time of Saver construction when it saves and restores." https://github.com/tensorflow/tensorflow/issues/2489#issuecomment-221282483
-
- Apr 2017
-
www.economist.com
-
not burdening families with the costs of care
different expression
-
We found that what is most important to people at the end depends on where they live
conclusion
-
- May 2016
-
koaning.io
-
beta distribution would be much better here
?
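If I read the claim correctly, the point is that a Beta posterior expresses uncertainty about a rate that a raw ratio hides: 1 success in 2 trials and 100 in 200 give the same point estimate but very different posteriors. A minimal sketch in pure Python (the uniform Beta(1, 1) prior is my assumption, not from the post):

```python
def beta_posterior(successes, trials, a=1.0, b=1.0):
    """Posterior Beta(a + s, b + f) for a rate, starting from a Beta(a, b) prior."""
    a_post = a + successes
    b_post = b + (trials - successes)
    mean = a_post / (a_post + b_post)
    # Variance of a Beta(a, b) distribution: ab / ((a+b)^2 (a+b+1)).
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return mean, var

m1, v1 = beta_posterior(1, 2)      # 1 success in 2 trials
m2, v2 = beta_posterior(100, 200)  # same ratio, far more evidence
# m1 == m2 == 0.5, but v1 >> v2: the small sample is far less certain
```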
-
-
koaning.io
-
P(T)=1/100?
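If P(T) = 1/100 is the prior here, its effect shows up through Bayes' rule: even an accurate test yields a modest posterior when the prior is small. A sketch with hypothetical sensitivity and specificity (these numbers are my assumption, not from the post):

```python
def posterior(prior, sensitivity, specificity):
    """P(T | positive) via Bayes' rule."""
    p_pos_given_t = sensitivity
    p_pos_given_not_t = 1.0 - specificity
    p_pos = prior * p_pos_given_t + (1.0 - prior) * p_pos_given_not_t
    return prior * p_pos_given_t / p_pos

p = posterior(prior=1 / 100, sensitivity=0.99, specificity=0.95)
# p ≈ 0.167: a positive result is still far from certain with a 1/100 prior
```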
-
-
cs231n.github.io
-
prefers many medium disagreements to one big one
How should I understand this sentence? Should it be "prefers many medium disagreements to a big one"?
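The sentence seems correct as written: squaring makes one large coordinate difference cost more than several medium ones with the same L1 total, so L2 prefers the many-medium case. A small numeric sketch:

```python
def l1(a, b):
    """Sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def l2_sq(a, b):
    """Sum of squared coordinate differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

origin = [0, 0, 0]
one_big = [3, 0, 0]      # one big disagreement
many_medium = [1, 1, 1]  # many medium disagreements, same L1 total

# Same L1 distance, but L2 penalizes the single big disagreement much more:
# l1 == 3 in both cases; l2_sq(one_big) == 9 vs l2_sq(many_medium) == 3
```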
-
Evaluate on the test set only a single time, at the very end.
Are there any conditions under which we don't need to tune hyperparameters using the validation set? In that case, could the accuracy on the validation data be the same as the accuracy on the final test data mentioned in this sentence?
-