24 Matching Annotations
  1. Oct 2019
    1. Abstract

      Looking forward to seeing the abstract as that helps frame reader expectations.

      Also a figure or table would greatly help with connecting to the reader.

      Oftentimes a reader will want to look at a table or figure first, since that is quick, before deciding whether to invest the time to understand the prose.

    2. we survey approaches to learning models

      Could the introduction provide a more definitive assessment of when incorporating external information helps models?

      One purpose of the review is to frame how scholars think about the topic... not just to describe all relevant work.

      Take a stand on:

      1. the types of ways external info can be incorporated
      2. when it is useful to do so.
    3. .

      Paragraphs help readers tremendously. More paragraphs = easier to read and get back on track if lost.

      Should this be a paragraph break?

    4. Models that take gene expression data as input

      In order to understand a learned classifier, the reader must know three things:

      1. what the observations are
      2. what the features/predictors are
      3. what the outcome variable is

      Make sure that these 3 aspects are clearly established before digging deeper into a specific model.

      My understanding is that this review is about using biological knowledge to create bullet 2 (the features/predictors).

      Would a table with this information be a good addition to the review?

    5. hand-engineered

      It may help the reader to see an example of a hand-engineered feature (unless it's too complicated).

    6. For example, for images

      The lack of comma after "for images" is confusing. Consider rephrasing like "With images for example,".

  2. Aug 2019
    1. excellent discrimination and calibration performance

      I think we need to quantify more strongly that the edge prior can outperform many existing edge predictions, to show there is a major chance of misrepresenting performance if you do not account for it. I will think of phrasings for the abstract after reading the whole manuscript.

    2. link

      edge for consistency

  3. Jul 2019
  4. May 2016
    1. the median review time at journals has grown from 85 days to >150 days during the past decade (5)

      This statement is a misunderstanding of Powell 2016, which states:

      At Nature, the median review time has grown from 85 days to just above 150 days over the past decade, according to Himmelstein's analysis.

      However,

      the median review time — the time between submission and acceptance of a paper — has hovered at around 100 days for more than 30 years.

      So while the median review time at Nature has gone from 85 to 150 days, this is not the case for all journals. See also the related Tweet.

    2. However, no nonsense or frameshift mutations were identified, leading to the inference that these variants may be gain of function and that this gain results in higher levels of low-density lipoprotein cholesterol (LDL-C)

      Just to be clear, the follow-up studies contradicted this inference: loss of function reduces LDL levels? If so, I'd be more explicit that the inference turned out to be mistaken.

    3. An recent

      Typo: a recent

    4. Himmelstein et al. (19)

      "Project Rephetio" may be a more precise way to refer to our study

    5. suggest druggable target

      Typo: missing "a"

    6. network-based strategy may seek to target interacting partners to achieve the desired outcome

      A good example here is the Network-based in silico drug efficacy screening study. I'd consider adding a paragraph on this study. This study falls into the comprehensive category you mention.

    7. 10.15363/THINKLAB.D107

      Any idea where this ALL CAPS DOI came from? I've reported this issue to DataCite, and it's disheartening to see the issue potentially elsewhere. I'd love to track down the source of the issue.

  5. Apr 2016
  6. Feb 2016