219 Matching Annotations
  1. Last 7 days
    1. Aleta, A., Martín-Corral, D., Pastore y Piontti, A., Ajelli, M., Litvinova, M., Chinazzi, M., Dean, N. E., Halloran, M. E., Longini Jr, I. M., Merler, S., Pentland, A., Vespignani, A., Moro, E., & Moreno, Y. (2020). Modelling the impact of testing, contact tracing and household quarantine on second waves of COVID-19. Nature Human Behaviour, 1–8. https://doi.org/10.1038/s41562-020-0931-9

    1. Malani, A., Soman, S., Asher, S., Novosad, P., Imbert, C., Tandel, V., Agarwal, A., Alomar, A., Sarker, A., Shah, D., Shen, D., Gruber, J., Sachdeva, S., Kaiser, D., & Bettencourt, L. M. A. (2020). Adaptive Control of COVID-19 Outbreaks in India: Local, Gradual, and Trigger-based Exit Paths from Lockdown (Working Paper No. 27532; Working Paper Series). National Bureau of Economic Research. https://doi.org/10.3386/w27532

  2. Aug 2020
  3. Jul 2020
    1. Baker, C. M., Campbell, P. T., Chades, I., Dean, A. J., Hester, S. M., Holden, M. H., McCaw, J. M., McVernon, J., Moss, R., Shearer, F. M., & Possingham, H. P. (2020). From climate change to pandemics: Decision science can help scientists have impact. ArXiv:2007.13261 [Physics]. http://arxiv.org/abs/2007.13261

    1. Candido, D. S., Claro, I. M., Jesus, J. G. de, Souza, W. M., Moreira, F. R. R., Dellicour, S., Mellan, T. A., Plessis, L. du, Pereira, R. H. M., Sales, F. C. S., Manuli, E. R., Thézé, J., Almeida, L., Menezes, M. T., Voloch, C. M., Fumagalli, M. J., Coletti, T. M., Silva, C. A. M. da, Ramundo, M. S., … Faria, N. R. (2020). Evolution and epidemic spread of SARS-CoV-2 in Brazil. Science. https://doi.org/10.1126/science.abd2161

    1. At the substitution level, you are substituting a cup of coffee that we could make at home or school with a cup of coffee from Starbucks. It’s still coffee: there’s no real change.

      Love this example with one of my favorite things: coffee! Having these examples is very helpful to me. The article not only provides examples; it also explains why they are examples of each level.

    2. The SAMR model allows you the opportunity to evaluate why you are using a specific technology, design tasks that enable higher-order thinking skills, and engage students in rich learning experiences.

      Clearly stated purpose of the SAMR model!

    1. complete analysis of Grubhub’s business plan. Know what its key activities, resources, value propositions, partners, and revenue streams are.

      To know the details of the Grubhub business model, you ought to know how Grubhub works.

  4. Jun 2020
    1. Larremore, D. B., Wilder, B., Lester, E., Shehata, S., Burke, J. M., Hay, J. A., Tambe, M., Mina, M. J., & Parker, R. (2020). Test sensitivity is secondary to frequency and turnaround time for COVID-19 surveillance. MedRxiv, 2020.06.22.20136309. https://doi.org/10.1101/2020.06.22.20136309

    1. A Complete Guide on Food Delivery Business Model – Its Types and Challenges

      Building an online food delivery business will help you in the present as well as the future. Given the current COVID-19 crisis, going digital is important for businesses to survive the pandemic.

    1. Facebook already harvests some data from WhatsApp. Without Koum at the helm, it’s possible that could increase—a move that wouldn’t be out of character for the social network, considering that the company’s entire business model hinges on targeted advertising around personal data.
    1. epsilon is a very small number to prevent any division by zero in the implementation (e.g. 10E-8). Further, learning rate decay can also be used with Adam. The paper uses a decay rate alpha = alpha/sqrt(t), updated each epoch (t), for the logistic regression demonstration. The Adam paper suggests: "Good default settings for the tested machine learning problems are alpha=0.001, beta1=0.9, beta2=0.999 and epsilon=10^-8." The TensorFlow documentation suggests some tuning of epsilon: "The default value of 1e-8 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1." We can see that the popular deep learning libraries generally use the default parameters recommended by the paper:

      - TensorFlow: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
      - Keras: lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0
      - Blocks: learning_rate=0.002, beta1=0.9, beta2=0.999, epsilon=1e-08, decay_factor=1
      - Lasagne: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
      - Caffe: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
      - MxNet: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8
      - Torch: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8

      Should we expose EPS as one of the experiment parameters? I think that we shouldn't since it is a rather technical parameter.
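
      For concreteness, a minimal sketch of wiring in these defaults with PyTorch's torch.optim.Adam (the toy model and the choice to hard-code eps instead of exposing it are assumptions for illustration):

          import torch
          import torch.nn as nn

          model = nn.Linear(10, 1)  # any toy model

          # Paper defaults: alpha=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8.
          # eps is hard-coded rather than exposed as an experiment parameter,
          # matching the suggestion in the note above.
          optimizer = torch.optim.Adam(
              model.parameters(),
              lr=1e-3,
              betas=(0.9, 0.999),
              eps=1e-8,
          )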

  5. May 2020
    1. Given the disjoint vocabularies (Section 2) and the magnitude of improvement over BERT-Base (Section 4), we suspect that while an in-domain vocabulary is helpful, SciBERT benefits most from the scientific corpus pretraining.

      The specific vocabulary only slightly increases the model accuracy. Most of the benefit comes from domain-specific corpus pre-training.

    2. We construct SciVocab, a new WordPiece vocabulary on our scientific corpus using the SentencePiece library. We produce both cased and uncased vocabularies and set the vocabulary size to 30K to match the size of BaseVocab. The resulting token overlap between BaseVocab and SciVocab is 42%, illustrating a substantial difference in frequently used words between scientific and general domain texts.

      For SciBERT they created a new vocabulary of the same size as BERT's. The overlap was 42%. We could check what the overlap is in our case.
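
      A rough sketch of how to check that (the file names are hypothetical; assumes plain-text vocab files with one token per line):

          # Measure token overlap between two WordPiece vocabularies.
          def load_vocab(path):
              with open(path, encoding="utf-8") as f:
                  return {line.strip() for line in f if line.strip()}

          base_vocab = load_vocab("bert-base-uncased-vocab.txt")  # BaseVocab
          our_vocab = load_vocab("our-domain-vocab.txt")          # our candidate vocabulary

          overlap = len(base_vocab & our_vocab) / len(base_vocab)
          print(f"Token overlap: {overlap:.1%}")  # SciBERT reports 42%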

    1. Although we could have constructed a new WordPiece vocabulary based on biomedical corpora, we used the original vocabulary of BERTBASE for the following reasons: (i) compatibility of BioBERT with BERT, which allows BERT pre-trained on general domain corpora to be re-used, and makes it easier to interchangeably use existing models based on BERT and BioBERT and (ii) any new words may still be represented and fine-tuned for the biomedical domain using the original WordPiece vocabulary of BERT.

      BioBERT does not change the BERT vocabulary.

    1. def _tokenize(self, text):
           split_tokens = []
           if self.do_basic_tokenize:
               # Basic tokenization first: split on whitespace and punctuation,
               # keeping special tokens (e.g. [CLS], [SEP]) intact, then run
               # WordPiece on each resulting token.
               for token in self.basic_tokenizer.tokenize(text, never_split=self.all_special_tokens):
                   for sub_token in self.wordpiece_tokenizer.tokenize(token):
                       split_tokens.append(sub_token)
           else:
               # Otherwise run WordPiece directly on the raw text.
               split_tokens = self.wordpiece_tokenizer.tokenize(text)
           return split_tokens

      How BERT tokenization works
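
      To see this in practice, a quick sketch using the HuggingFace transformers package (the exact subword splits depend on the vocabulary):

          from transformers import BertTokenizer

          tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

          # Out-of-vocabulary words are split into WordPiece subwords
          # marked with "##", e.g. "GPU" may come out as ['gp', '##u'].
          print(tokenizer.tokenize("I have a new GPU!"))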

    1. My initial experiments indicated that adding custom words to the vocab-file had some effect. However, at least on my corpus, which can be described as "medical tweets", this effect just disappears after running the domain-specific pretraining for a while. After spending quite some time on this, I have ended up dropping the custom vocab-files totally. Bert seems to be able to learn these specialised words by tokenizing them.

      sbs's experience with extending the vocabulary for medical data

    2. Since Bert does an excellent job in tokenising and learning these combinations, do not expect dramatic improvements by adding words to the vocab. In my experience, adding very specific terms, like common long medical Latin words, has some effect. Adding words like "footballs" will likely just have negative effects, since the current vector is already pretty good.

      Expected improvement from extending the BERT vocabulary
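
      A minimal sketch of this kind of vocabulary extension with the HuggingFace transformers package (the domain terms are hypothetical examples):

          from transformers import BertModel, BertTokenizer

          tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
          model = BertModel.from_pretrained("bert-base-uncased")

          # Very specific domain terms may help a little; common words
          # like "footballs" likely will not (see the note above).
          tokenizer.add_tokens(["myocarditis", "thrombocytopenia"])

          # New embedding rows are randomly initialized and still need
          # domain-specific pretraining or fine-tuning to be useful.
          model.resize_token_embeddings(len(tokenizer))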

    1. As is the case in NLP applications in general, we begin by turning each input word into a vector using an embedding algorithm.

      What is the embedding algorithm for BERT?
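
      For reference, BERT's input representation is the sum of learned token, segment, and position embeddings. A minimal sketch of that lookup (class and parameter names are illustrative; sizes follow BERT-Base):

          import torch
          import torch.nn as nn

          class BertEmbeddings(nn.Module):
              # Illustrative re-implementation; real BERT also applies
              # LayerNorm and dropout to the summed embeddings.
              def __init__(self, vocab_size=30522, hidden=768, max_len=512):
                  super().__init__()
                  self.token = nn.Embedding(vocab_size, hidden)
                  self.position = nn.Embedding(max_len, hidden)
                  self.segment = nn.Embedding(2, hidden)

              def forward(self, token_ids, segment_ids):
                  positions = torch.arange(token_ids.size(1), device=token_ids.device)
                  return (self.token(token_ids)
                          + self.position(positions)   # broadcasts over the batch
                          + self.segment(segment_ids))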

    1. It is fairly expensive (four days on 4 to 16 Cloud TPUs), but is a one-time procedure for each language.

      Estimates for pre-training the model from scratch

    1. I do not understand what the threat model is for not allowing the root user to configure Firefox, since malware could just replace the entire Firefox binary.
  6. Apr 2020