47 Matching Annotations
  1. Jun 2020
  2. May 2020
    1. Chu, H. Y., Englund, J. A., Starita, L. M., Famulare, M., Brandstetter, E., Nickerson, D. A., Rieder, M. J., Adler, A., Lacombe, K., Kim, A. E., Graham, C., Logue, J., Wolf, C. R., Heimonen, J., McCulloch, D. J., Han, P. D., Sibley, T. R., Lee, J., Ilcisin, M., … Bedford, T. (2020). Early Detection of Covid-19 through a Citywide Pandemic Surveillance Platform. New England Journal of Medicine, NEJMc2008646. https://doi.org/10.1056/NEJMc2008646

  3. Apr 2020
    1. Adams, E. R., Anand, R., Andersson, M. I., Auckland, K., Baillie, J. K., Barnes, E., Bell, J., Berry, T., Bibi, S., Carroll, M., Chinnakannan, S., Clutterbuck, E., Cornall, R. J., Crook, D. W., Silva, T. D., Dejnirattisai, W., Dingle, K. E., Dold, C., Eyre, D. W., … Sanchez, V. (2020). Evaluation of antibody testing for SARS-CoV-2 using ELISA and lateral flow immunoassays. medRxiv, 2020.04.15.20066407. https://doi.org/10.1101/2020.04.15.20066407

  4. Feb 2020
  5. Sep 2019
  6. Mar 2018
    1. J.M. Berger Former Brookings Expert

      Paying attention to the qualifications of the author(s)/composer(s) is another crucial part of crap detection, as it helps you discern whether or not to take the piece seriously or to use it for further research.

    2. Markaz

      In his text, Rheingold explains the importance of paying attention to a website's layout as well as its content. In doing so, however, you must tune your crap detection and remember that not everything with a fancy layout is reliable, and vice versa.

    3. I took a detailed look at how ISIS functions online, breaking it down into a five-part template, which can be implemented in different ways depending on the target’s disposition:

      Rather than simply stating information, the author (Berger) explains his sources and the way in which he broke his research down into smaller categories. This citation is also a part of crap detection, as it points to a reliable source.

    4. detected through social media analysis,

      The inclusion of this specific link provides important attribution and increases the source's reliability. The text makes a statement and backs it up with an external, secure source.

    5. there are practical and ethical limits to how much we can interdict discovery.

      Though Rheingold stresses the importance of crap detection and researching your sources, he accepts that there are limits to how far we can go in discerning validity. This is shown here as those countering ISIS reach the ethical and practical limits of their search. The point is that one mustn't get overwhelmed trying to find the true origin of a source, because you can only dig so far.

    6. stripping away the mystique and focusing on the mechanics.

      Rheingold stresses the importance of looking at the base of things. Rather than stopping at the surface and what you see initially, it is important to dig deeper and examine sources from a skeptical yet structured angle.

    7. Monday, November 9, 2015

      The article's domain ends in '.edu', which, as Rheingold states, raises the estimation of its credibility.

    8. This post originally appeared on VOX-Pol.

      The fact that this post originated on a non-secure site that appears a bit amateur invites some speculation. It is a blog, and with that in mind I take what is posted there with a grain of salt.

    9. How does ISIS acquire new recruits online and convince them to take action? J.M. Berger explains, arguing that efforts to counter terrorists’ online activity can be more effective if the mechanics are clearly understood.

      I begin critiquing this article based on Rheingold's initial conversation with his daughter. In the text, Rheingold suggests using a free internet service, Whois, to check the validity of a source. After plugging this domain name into the site, I find that the registered owner is 'EDUCAUSE', a nonprofit that provides core data services for research and analysis.

    10. How terrorists recruit online (and how to stop it)

      I will be connecting this text to Howard Rheingold's "Crap Detection 101" from chapter 2 of his book Net Smart: How to Thrive Online, which allows for a further critique of this article in terms of that theme.

  7. Feb 2017
    1. how it uses zones

      Does anyone have an authoritative link for this concept of zones and how they work? It'd be much appreciated.

  8. Jan 2017
    1. Early event detection problems can go here. Two example cases that come to mind: (1) in emergency response, detecting a disaster quickly is important; (2) in computational journalism, many locals suddenly starting to talk about an event suggests that something newsworthy is going on. A rough sketch of the second case appears below.
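
      To make the second case concrete, here is a minimal, hypothetical burst-detection sketch: flag any time window whose mention count sits several standard deviations above the trailing mean. The window size, threshold, and data are illustrative assumptions, not from any annotated source.

      ```python
      import statistics

      def detect_bursts(counts, window=24, z_threshold=3.0):
          """Flag indices where the count spikes far above the trailing mean.

          counts: per-interval mention counts (e.g., posts per hour from one locale).
          window: number of trailing intervals used as the baseline.
          z_threshold: standard deviations above the mean that count as a burst.
          """
          bursts = []
          for i in range(window, len(counts)):
              baseline = counts[i - window:i]
              mean = statistics.mean(baseline)
              stdev = statistics.pstdev(baseline) or 1.0  # guard against zero variance
              if (counts[i] - mean) / stdev > z_threshold:
                  bursts.append(i)
          return bursts

      # 24 quiet hours, then a sudden spike of local chatter.
      hourly_mentions = [5, 4, 6, 5, 5, 7, 4, 6, 5, 5, 6, 4,
                         5, 6, 5, 4, 7, 5, 6, 5, 4, 6, 5, 5, 80]
      print(detect_bursts(hourly_mentions))  # -> [24]
      ```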

  9. Nov 2016
    1. Finally, by assuming the non-detection of a species to indicate absence from a given grid cell, we introduced an extra level of error into our models. This error depends on the probability of false absence given imperfect detection (i.e., the probability that a species was present but remained undetected in a given grid cell [73]): the higher this probability, the higher the risk of incorrectly quantifying species-climate relationships [73].

      This will be an ongoing challenge for species distribution modeling, because most of the data appropriate for these purposes is not collected in such a way as to allow the straightforward application of standard detection probability/occupancy models. This could potentially be addressed by developing models for detection probability based on species and habitat type. These models could be built on smaller/different datasets that include the required data for estimating detectability.
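
      To make the quoted error concrete: if a species is present in a cell and each survey visit independently detects it with probability p, the probability of a false absence after k visits is (1 - p)^k. A minimal sketch with purely illustrative numbers (not from the paper):

      ```python
      def false_absence_prob(p_detect: float, n_visits: int) -> float:
          """Probability that a species present in a grid cell goes undetected
          on every visit, assuming independent visits with per-visit
          detection probability p_detect."""
          return (1.0 - p_detect) ** n_visits

      # Illustrative values only: a hard-to-detect species surveyed 1-4 times.
      for n in range(1, 5):
          print(n, round(false_absence_prob(0.4, n), 3))
      # -> 0.6, 0.36, 0.216, 0.13: more visits shrink the false-absence risk
      ```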

  10. Nov 2015
    1. Presentation summarizing an approach to duplicate web page detection that was developed by a researcher whilst at Google in the early 2000s
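
      No slides are linked here, but Google's early-2000s near-duplicate detection work is commonly associated with simhash-style fingerprints, so the sketch below is an assumed, generic illustration of that idea rather than the presenter's actual method: hash each token, accumulate signed bit contributions, and keep the sign vector as the fingerprint.

      ```python
      import hashlib

      def simhash(tokens, bits=64):
          """Generic simhash-style fingerprint: near-identical token streams
          yield fingerprints at a small Hamming distance."""
          v = [0] * bits
          for tok in tokens:
              h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
              for i in range(bits):
                  v[i] += 1 if (h >> i) & 1 else -1
          return sum(1 << i for i in range(bits) if v[i] > 0)

      def hamming(a, b):
          return bin(a ^ b).count("1")

      page1 = "the quick brown fox jumps over the lazy dog".split()
      page2 = "the quick brown fox leaps over the lazy dog".split()
      print(hamming(simhash(page1), simhash(page2)))  # small => near-duplicates
      ```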

  11. Sep 2015
  12. arxiv.org
    1. Given an LSH family H, the LSH scheme amplifies the gap between the high probability P1 and the low probability P2 by concatenating several functions

      Useful recap of LSH
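
      To spell out the amplification: AND-ing r functions drops a base collision probability p to p^r, and OR-ing over b such bands lifts it back to 1 - (1 - p^r)^b, which widens the gap between P1 and P2. A quick numeric sketch (the r, b, P1, P2 values are arbitrary assumptions):

      ```python
      def amplified(p, r, b):
          """Collision probability after AND over r functions, OR over b bands."""
          return 1.0 - (1.0 - p ** r) ** b

      p1, p2 = 0.8, 0.4   # assumed high/low base collision probabilities
      r, b = 6, 20        # assumed band shape
      print(round(amplified(p1, r, b), 3))  # -> 0.998: similar pairs almost always collide
      print(round(amplified(p2, r, b), 3))  # -> 0.079: the 0.8 vs 0.4 gap is now much wider
      ```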

    2. Recent survey paper for hashing-based approaches to similarity search

    1. This paper has a very useful overview of previous work in section 9 that is worth reading.

    2. We used the following publicly available real datasets in the experiment

      Datasets used are DBLP, ENRON, and UNIREF-4GRAM. All are small (<1M records) in web terms and, I would guess, all have small document sizes.

      Given a lengthy paper, one could potentially divide it into smaller documents (1 doc === 1 page) and do the signature calculation on a per-page basis. This could have the benefit of bounding search time by limiting the number of pages that need to be rendered to text before the lookup process can start. A rough sketch of this idea appears below.
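
      A minimal, hypothetical sketch of that per-page idea, using a toy MinHash-style signature (the page split, token hashing, and parameters are all assumptions, not from the paper):

      ```python
      import hashlib

      def minhash_signature(tokens, num_hashes=64):
          """Toy MinHash: for each of num_hashes salted hash functions,
          keep the minimum hash value seen over the page's tokens."""
          return tuple(
              min(int(hashlib.md5(f"{salt}:{tok}".encode()).hexdigest(), 16)
                  for tok in tokens)
              for salt in range(num_hashes)
          )

      def per_page_signatures(paper_text, page_size=300):
          """Split a paper into fixed-size 'pages' of tokens and sign each page,
          so a lookup can stop as soon as any one page matches the index."""
          tokens = paper_text.split()
          pages = [tokens[i:i + page_size] for i in range(0, len(tokens), page_size)]
          return [minhash_signature(p) for p in pages if p]

      sigs = per_page_signatures("some long paper text " * 200)
      print(len(sigs), "page signatures")  # -> 3 page signatures for 800 tokens
      ```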