  1. Jul 2018
    1. On 2017 Feb 15, Vojtech Huser commented:

This is a very interesting study. Since it uses the OMOP CDM, it would be interesting to execute it on additional datasets. The appendix provides some guidance. Are there plans to release (possibly with some delay) additional analysis code details? (The study possibly uses existing software packages, and case studies in their application are very valuable.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 22, KEVIN BLACK commented:

      The version of record is available via the DOI shown above. The peer-reviewed manuscript version appears by agreement with the publisher at http://works.bepress.com/kjb/66/ .



    1. On 2017 Jan 28, Thomas Jeanne commented:

      There is ambiguity in the wording that the authors use in the body text and the abstract to describe the HbA1c reductions that were observed. A 1.1% mean reduction implies a relative reduction; e.g., a reduction from 7.6% A1c to 7.52% A1c. It is only after looking at Table 2 that it becomes clear that the observed change was an absolute reduction in the percentage of hemoglobin that was glycated. To avoid such confusion, use of the term "percentage point" is well-established (e.g., Hayward RA, 1997, Vijan S, 2014, Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Research Group., 2016).

      The mean reduction in HbA1c level from baseline was 1.1 percentage points in the CGM group at 12 weeks, not 1.1%.
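For readers who want the arithmetic spelled out, here is a minimal sketch of the two readings (the 7.6% baseline is the illustrative figure used in the comment above):

```python
baseline = 7.6  # HbA1c: percent of hemoglobin that is glycated

# Reading 1: a *relative* 1.1% reduction (what the wording implies)
relative = baseline * (1 - 0.011)   # about 7.52% HbA1c

# Reading 2: a 1.1 *percentage-point* reduction (what Table 2 shows)
absolute = baseline - 1.1           # 6.5% HbA1c
```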



    1. On 2017 Feb 04, Carl V Phillips commented:

      None



    2. On 2017 Jan 25, Peter Hajek commented:

The press release claimed that ‘E-Cigarettes are Expanding Tobacco Product Use Among Youth’, but this study showed no such thing. It detected no increase in youth smoking; on the contrary, the continuous decline in smoking shows that e-cigarettes are not expanding smoking.

In fact, the data in the paper suggest that, if anything, the increase in vaping has been associated with an accelerated decline in smoking. The cut-off point of 2009 seems to have been selected to show no acceleration, but very few young people tried vaping in 2009. By 2011, only 1.5% of middle and high school students had vaped within the past 30 days, and the figures went up after that. If the decline in smoking over 2004-2011 is compared with the decline over the following years, it may well have significantly accelerated.

The final conclusion that ‘E-cigarette–only users would be unlikely to have initiated tobacco product use with cigarettes’ makes no sense, because e-cigarette-only users have not initiated any tobacco product use!

If the authors mean by this that they initiated nicotine use, this is unlikely. In this, as in other similar reports, smokers were asked on how many days they smoked in the past 30 days, and it is most likely that the same question was asked of vapers, but these results are not reported. Studies that assessed frequency of use report that, as with non-smokers who try nicotine replacement products such as nicotine chewing gum, it is extremely rare for non-smokers who try vaping to progress to regular use. While some smokers find e-cigarettes satisfactory and switch to vaping, the majority of non-smokers who experiment with e-cigarettes only try them once or twice, and virtually none progress to daily use.



    1. On 2017 Feb 08, Lydia Maniatis commented:

It is worth making clear what Adamian and Cavanagh (2017) do, and what they don’t do, in this publication. What they don’t do is test a hypothesis. What they do is present a casual, ad hoc explanation of the Fröhlich effect based on the results of past experiments, which they replicate here. The proposal remains untested. Even the ad hoc, untested assumptions (“we assume that the critical delay in producing the Fröhlich effect is not just the delay of attention in arriving at the target but also the time a saccade would then need to land on the target, if one were executed;”) can’t explain the results of their experiments, requiring more ad hoc proposals about complex processes: “The results suggest that the simultaneous onsets may be held in iconic memory and the cued motion trajectory can be retrieved if the cue arrives soon enough;” “A late SOA implies a longer memory retention period, and that means that the reported shifts could arise from working memory limitations and might not be perceptual in nature.”

      Is Adamian and Cavanagh’s assumption that “the critical delay is not just the delay of attention….but also the time a saccade would then need to land on the target…” testable?

      How would one go about testing it, as well as the additional assumptions the authors feel obliged to make with respect to memory?

      Why didn’t the authors attempt to test their proposal to begin with, rather than simply performing replications that, even if successful, could do no more than leave the issue unresolved? They have not even proposed possible tests.

Obviously, replication was the safer choice, but one, again, that is essentially uninformative vis-à-vis an ad hoc proposal. It should be clear that the subject of eye movements and their role in perception is extremely complex, and that casual speculations are unlikely to be borne out if properly tested.

      I think Adamian and Cavanagh’s proposal is so vague, the confounds so many, and (least of all, at present) the technical demands so great, that it cannot be tested. If all of the main and subsidiary assumptions, and their implications, were clarified enough to allow them to be critically assessed for logical coherence and consistency with other known facts, it might well fail at this stage, obviating the need for experimental tests.

      Of course, I could be wrong in the present case; the authors may intend, post-replication, to attempt to concretize and subject their proposal to a genuine test; that would be genuinely refreshing.

      I would note, as an afterthought, the uninformative nature of the title of the article, which is typical of many vision science articles and reflects the essentially uninformative nature of the work itself. The title tells us what the article is about, but not what it concluded or implied.



    1. On 2017 Jan 24, Jim Johnson commented:

      This paper is missing some highly relevant references from the Kieffer lab, including recent studies that establish the requirement for insulin in the anti-diabetic actions of leptin.



    1. On 2017 Mar 08, Atanas G. Atanasov commented:

Very important and nicely summarized information of very high relevance to the general public. I have featured this review at: http://healthandscienceportal.blogspot.com/2017/03/potential-benefits-and-harms-of-fasting.html



    1. On 2017 Aug 08, Christopher Tench commented:

Can you possibly provide the coordinates used? Without them, it is not possible to understand exactly what analysis was performed.



    1. On 2017 Jan 31, Sin Hang Lee commented:

I assume you have read the reference under the Comment.

Perhaps you, or someone on behalf of Dr. Mark Schiffman and colleagues, would like to respond to the comment on PubMed Commons below the abstract of the following article. https://www.ncbi.nlm.nih.gov/pubmed/27905473

      I would like to initiate a forum of open discussion, not one-sided proclamations.



    2. On 2017 Jan 31, Stuart RAY commented:

      The comment above does not provide any evidence for the dissent stated. What is "biased and dangerous" about the HPV vaccination recommendation?



    3. On 2017 Jan 24, Sin Hang Lee commented:

The Editorial “Trump’s vaccine-commission idea is biased and dangerous” in Nature 2017 Jan 17;541(7637):259 is debatable. At least one article published by the Nature Publishing Group, in Nature Reviews Disease Primers 2016;2:16086 [1], which promotes mass human papillomavirus (HPV) vaccination of girls 9-13 years of age and of teenage boys, at a cost of >$50 million for every 100,000 adolescents, in the name of cervical cancer prevention, is equally biased and dangerous. Medical journal censorship of dissenting data and opinions has suppressed the facts that the benefits of mass HPV vaccination are uncertain and that the risks are substantial, at great cost to society.

      References:

      1. https://www.ncbi.nlm.nih.gov/pubmed/27905473

Sin Hang Lee, shlee01@snet.net, Milford Molecular Diagnostics Laboratory, Milford, CT



    1. On 2017 Jan 23, Andy Collings commented:

      (Original comment found at: https://elifesciences.org/content/6/e17044#disqus_thread)

      Response to “Replication Study: Discovery and preclinical validation of drug indications using compendia of public gene expression data”

      Atul J Butte, Marina Sirota, Joel T Dudley

      We represent three of the key authors of the original work.

In October 2013, we were pleased to see that our original 2011 publication (Sirota et al., 2011) had been chosen as one of the top 50 influential cancer studies selected for reproducibility. Our initial impression, probably like that of most investigators reading this letter, was that such recognition would be a mixed blessing for us. Most of our work for this paper was conducted in 2009, 4 years before we were approached. We can see now that this reproducibility effort is one of the first 10 to be completed, and one of the first 5 to be published, more than 3 years later. The reproducibility team should be commended on their diligence in repeating experimental details as much as possible.

      The goal of the original study was to evaluate a prediction from a novel systematic computational technique that used open-access gene-expression data to identify potential off-indication therapeutic effects of several hundred FDA approved drugs. We chose to evaluate cimetidine based on the biological novelty of its predicted connection to lung cancer and availability of local collaborators in this disease area.

      The key experiment replicated here involved 18 mice treated with three varying doses of cimetidine (ranging from 25 to 100 mg/kg) administered via intraperitoneal injection daily to SCID mice after implantation of A549 human adenocarcinoma cells, along with 6 mice treated with doxorubicin as a positive control, and 6 mice treated only with vehicle as a negative control. The reproducibility team used many more mice in their experiment, but tested only the highest dose of cimetidine.

First, it is very important to clearly note that we are truly impressed with how well Figure 1 in the reproducibility paper matches Figure 4c in our original paper, and this is the key finding: cimetidine has a biological effect intermediate between PBS/saline (the negative control) and doxorubicin (the positive control). We commend the authors for even using the same colors as we did, to better highlight the match between their figure and ours.

      While several valid analytic methods were used on the new tumor volume data, the analysis most similar to the original was the t-test we conducted on the measurements from day 11, with 100 mg/kg cimetidine compared to vehicle control. The new measurements were evaluated with a Welch t-test yielding t(53) = 2.16, with p=0.035. We are extremely pleased to see this raw p-value come out from their experiment.

      However, the reproducibility team then decided to apply a Bonferroni adjustment, resulting in a corrected p=0.105. While this Bonferroni adjustment was decided a priori and documented (Kandela et al., 2015), we fundamentally do not agree with their approach.

      The reproducibility team took on this validation effort by starting with our finding that cimetidine demonstrated some efficacy in the pre-clinical experiments. However, our study did not start with that prediction. We started our experiments with open data and a novel computational effort. Readers of our original paper (Sirota et al., 2011) will see that we started our study much earlier in the process, with publicly-available gene expression data on drugs and diseases, and computationally made predictions that certain drugs could be useful to treat certain conditions. We then chose cimetidine and lung adenocarcinoma from among the list of significant drug-disease pairs for validation. This drug-disease pairing was statistically significant in our computational analysis, which included the formal evaluation of multiple-hypothesis testing using random shuffled data and the calculation of q-values and false discovery rates. These are commonly used methods for controlling for the testing of multiple hypotheses. Aside from the statistical significance, local expertise in lung cancer and the availability of reagents and A549 cells and mouse models in our core facilities guided the selection. We then chose an additional pairing that we explicitly predicted (by the computational methodology) would fail. We again used cimetidine and found we had ACHN cells that could represent a model of renal cancer. Scientists will recognize this as a negative control.
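As a side note for readers unfamiliar with this style of correction, a false-discovery-rate estimate from randomly shuffled data can be sketched roughly as follows. This is a toy illustration of the general permutation idea, not the authors' actual pipeline; the function and labels are hypothetical:

```python
def empirical_fdr(observed, null, threshold):
    """Estimate the false discovery rate at a score threshold by comparing
    real scores against scores computed from randomly shuffled (null) data.
    `null` may pool several shuffles; counts are rescaled accordingly."""
    n_called = sum(s >= threshold for s in observed)
    if n_called == 0:
        return 0.0
    # Expected false positives: null exceedances, rescaled to the size
    # of one observed-sized list.
    expected_false = sum(s >= threshold for s in null) * len(observed) / len(null)
    return min(1.0, expected_false / n_called)

# Toy example: strong real scores, weak shuffled scores.
obs = [0.9, 0.8, 0.7, 0.1]
null = [0.3, 0.2, 0.1, 0.05, 0.3, 0.15, 0.1, 0.02]  # two shuffles pooled
fdr_at_half = empirical_fdr(obs, null, 0.5)  # no null score reaches 0.5
```

Predictions whose threshold gives a low empirical FDR are the ones treated as significant before any downstream validation experiment is run.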

At no point did we feel the comparison of cimetidine against A549 cells had anything to do with the effect of cimetidine in ACHN cells; these were independently run experiments. The ACHN experiment was meant to test the specificity of the computational process upstream of all of this; it had nothing to do with our belief in cimetidine in A549 cells. Thus, we would not agree with the replication team’s characterization that these were all multiple hypotheses being validated equally and thus merited a common adjustment of p-values. As described above, we corrected for multiple hypothesis testing earlier in our process, at the computational stage. We never expected the cimetidine/ACHN experiment to succeed when we ran it. Similarly, our test of doxorubicin in A549 cells was performed as a positive-control experiment; we fully expected that experiment to succeed.

      In email discussion, we learned the replication team feels these three hypotheses were tested equally, and thus adjusted the p-values by multiplying them by 3. We are going to have to respectfully “agree to disagree” here.

      We note some interesting results of their adjustments, such as the reproducibility team also not finding doxorubicin to have a statistically significant effect compared to vehicle treated mice. Again, the Welch’s t-test on this comparison yielded p=0.0325, but with their Bonferroni correction, this would no longer be deemed a significant association. Doxorubicin has been used as a known drug against A549 cells for nearly 30 years (Nishimura et al, 1989), and our use of this drug was only as a positive-control agent.
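The disputed arithmetic is simple enough to sketch (the dictionary labels below are illustrative, not names from either paper):

```python
# Raw two-sided Welch t-test p-values reported in the replication study.
raw_p = {
    "cimetidine_100mg_vs_vehicle": 0.035,
    "doxorubicin_vs_vehicle": 0.0325,
}
m = 3  # number of comparisons in the registered protocol

# Bonferroni: multiply each raw p-value by m, capping at 1.0.
adjusted_p = {name: min(1.0, p * m) for name, p in raw_p.items()}
# Both raw p-values fall below 0.05; both adjusted values exceed it.
```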

Figure 3 was also very encouraging: we see a significant effect in the original study, in the reproduced study, and in the meta-analysis of the two together.

In the end, we want to applaud replication efforts like this. We do believe it is important for the public to have trust in scientists and belief in the veracity of our published findings. However, we do recommend that future replication teams choose papers in a more impactful manner. While it is an honor for our paper to be selected, we were never going to run a clinical trial of cimetidine in lung adenocarcinoma, and we cannot see any such protocol being listed in clinicaltrials.gov. Our publication was aimed more at demonstrating the value of open data, through the validation of a specific computational prediction. We suggest that future replication studies of pre-clinical findings should really be tailored towards those most likely to actually be heading into clinical trials.

      References

      Sirota M, Dudley JT, Kim J, Chiang AP, Morgan AA, Sweet-Cordero A, Sage J, Butte AJ. Discovery and preclinical validation of drug indications using compendia of public gene expression data. Sci Transl Med. 2011 Aug 17;3(96):96ra77. doi: 10.1126/scitranslmed.3001318.

      Kandela I, Zervantonakis I; Reproducibility Project: Cancer Biology. Registered report: Discovery and preclinical validation of drug indications using compendia of public gene expression data. Elife. 2015 May 5;4:e06847. doi: 10.7554/eLife.06847.

      Nishimura M, Nakada H, Kawamura I, Mizota T, Shimomura K, Nakahara K, Goto T, Yamaguchi I, Okuhara M. A new antitumor antibiotic, FR900840. III. Antitumor activity against experimental tumors. J Antibiot (Tokyo). 1989 Apr;42(4):553-7.



    2. On 2017 Jan 20, Robert Tibshirani commented:

The Replication Study by Kandela et al of the Sirota et al paper “Discovery and Preclinical Validation of Drug Indications Using Compendia of Public Gene Expression Data” reports a non-significant p-value of 0.105 for the test of the main finding for cimetidine in lung adenocarcinoma. They obtained this from a Bonferroni adjustment of the raw p-value of 0.035, multiplying this by three because the authors had also tested a negative and a positive control.

      This seems to me to be an inappropriate use of a multiple comparison adjustment. These adjustments are designed to protect the analyst against errors in making false discoveries. However if Sirota et al had found that the negative control was significant, they would not have reported it as a "discovery". Instead, it would have pointed to a problem with the experiment. Similarly, the significant result in the positive control was not considered a "discovery" but rather was a check of the experiment's quality.

      Now it is true that Kandela et al specified in their protocol that they would use a (conservative) Bonferroni adjustment in their analysis, and used this fact to choose a sample size of 28. This yielded an estimated power of 80%. If they had chosen to use the unadjusted test, the estimated power for n=28 would have been a little higher—about 90%. I think that the unadjusted test is appropriate here.
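The power figures quoted here can be checked with a back-of-the-envelope normal approximation (a sketch assuming 28 mice per group and two-sided tests; this is my illustration, not the replication team's actual calculation):

```python
from statistics import NormalDist

nd = NormalDist()           # standard normal
n = 28                      # assumed mice per group
alpha_adjusted = 0.05 / 3   # Bonferroni-adjusted two-sided alpha
alpha_raw = 0.05

# Effect size implied by 80% power at the adjusted alpha
# (two-sample z approximation to the t-test).
d = (nd.inv_cdf(1 - alpha_adjusted / 2) + nd.inv_cdf(0.80)) / (n / 2) ** 0.5

# Power of the unadjusted test for that same effect size: roughly 90%.
power_unadjusted = nd.cdf(d * (n / 2) ** 0.5 - nd.inv_cdf(1 - alpha_raw / 2))
```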



    1. On 2017 Jan 23, Andy Collings commented:

      (Original comment found at: https://elifesciences.org/content/6/e21634#disqus_thread)

      Response to: “Replication Study: Melanoma genome sequencing reveals frequent PREX2 mutations"

      Lynda Chin and Levi Garraway

      We applaud the Reproducibility Project and support its goal to reproduce published scientific results. We also thank Horrigan et al for a carefully executed study, for which we provided reagents and extensive consultation throughout. Their work illustrates the inherent challenges in attempting to reproduce scientific results.

      We summarize below the results of Horrigan et al., first in lay terms and then in more scientific detail.

      Description for Lay Readers

Briefly, our 2012 paper reported that human melanoma patients often carry mutations in the PREX2 gene. To study the effect of mutations in PREX2, we made modified versions of a commonly used immortalized human melanocyte cell line (called p’mels) and injected them into mice. When mice were injected with cells carrying an irrelevant gene or a normal copy of PREX2 (“control mice”), tumors started to form in about 9 weeks. When mice were injected with cells carrying the mutated PREX2 genes (“experimental mice”), tumors began to form after around 4-5 weeks—indicating that mutated PREX2 accelerated tumor formation.

When Horrigan et al. tried to reproduce our experiment, they found that tumors began to form in their control mice after about 1 week—not 9-10 weeks. Because their control mice developed tumors so rapidly, Horrigan et al. recognized that they could not meaningfully test our finding that mutant PREX2 accelerated tumor formation.

      Why did the human melanocyte cells grow tumors in the control mice so much faster in Horrigan et al.’s experiment? The likely explanation is that human cells engineered in this way are known to undergo dramatic changes when they are grown for extended periods in culture. Therefore, Horrigan et al.’s study underscores how important it is to have appropriate control cells, before attempting to reproduce experimental findings.

      Finally, we emphasize that Horrigan et al.’s results do not call into question our results about PREX2 because their experiment was not informative. Moreover, we have recently validated the findings about PREX2 in an independent way—by creating genetically engineered mice that carry mutated PREX2 in their own genomes. These PREX2 mutant mice showed accelerated tumor growth compared to controls.

      Description for Scientific Readers

The authors repeated a xenograft experiment (Figure 3b) in our 2012 report. In our experiment, we overexpressed GFP (negative control), wild-type PREX2 (normal control) and two PREX2 mutants (G844D and Q1430*) (experimental arm) in a TERT-immortalized human melanocyte line engineered with RB and p53 inactivation (p’mel). To further sensitize these melanocytes for tumorigenicity, they were also engineered to overexpress oncogenic NRASG12D. We showed that mutant PREX2 expression in p’mel cells significantly accelerated tumor formation in vivo. However, Horrigan et al found that the control and the PREX2 WT- or mutant-expressing p’mels all behaved identically, forming tumors rapidly in vivo (within 1 week of implantation). This finding differed from our study, in which the control cells (both GFP and PREX2) did not form tumors until >10 weeks after implantation.

      The fact that Horrigan et al observed rapid tumor formation in all settings means that their findings are uninformative with regard to the reproducibility of a central conclusion of our 2012 report, namely that mutant PREX2 can accelerate tumor formation in vivo. Testing this hypothesis requires a control arm in which tumor formation is sufficiently latent so that a discernible effect on the rate of tumorigenesis by the mutants can be observed. In the Horrigan et al study, tumorigenesis in the control arms was so rapid that it essentially became impossible to detect any additional effect of mutant PREX2.

Why were the controls so much more tumorigenic in the hands of Horrigan et al.? We note that although the investigators were provided with clones from the same p’mels used in the 2012 study, by the time Horrigan et al received the cells, more than two years had passed since the original p’mel cells were engineered. This is a crucial point, because, as with many other cell lines, these “primed” human primary melanocytes are known to readily undergo adaptation during extended cultivation in vitro. In particular, these p’mels can spontaneously acquire a more transformed phenotype over time (we have seen this happen on multiple occasions). Thus, although a clone of the same engineered cells was provided to Horrigan et al, the fact that this clone of p’mel cells exhibited a very different phenotype suggests that the additional passages, a major geographic relocation, and subsequent freeze-thaw manipulations rendered them unsuitable as an experimental frame of reference.

When we notice such “drifts” in engineered cell culture models, we often have to re-derive the relevant lines starting from even earlier stages in order to have controls with suitable tumorigenic latency. For example, in this case, we would have re-introduced NRASG12D into a clone of non-transformed melanocytes harboring TERT immortalization and RB/P53 inactivation to re-engineer a p’mel cell line. Had Horrigan et al used less tumorigenic controls, they would have had a much better chance of reproducing an accelerating effect of mutant PREX2.

      To validate our initial observations regarding the oncogenic role of mutant PREX2, we have since taken an orthogonal approach: we created a genetically engineered mouse (GEM) model targeting both a truncating PREX2 mutation (E824*) and oncogenic NRASG12D expression to melanocytes under a tet-regulated promoter. In this GEM model, we observed significantly increased penetrance and decreased latency of melanoma formation (Lissanu Deribe et al PNAS, 2016, E1296-305; see Figure 3b in Lissanu Deribe et al PNAS, 2016), thus confirming the xenograft findings of our 2012 report showing that mutant PREX2 is oncogenic.

      In summary, we support rigorous assessments of reproducibility such as this. Equally, we consider it crucial to recognize and account for salient underlying properties of the model systems and experimental controls in order to minimize the risk of misleading conclusions regarding the reproducibility of any given experiment. Indeed, Horrigan et al. nicely articulated the importance of these considerations when discussing their results.



    1. On 2017 Jan 23, Andy Collings commented:

      (Original comment in full found at: https://elifesciences.org/content/6/e18173#disqus_thread)

      Response to: “Replication Study: The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors”

      Irving Weissman for the authors of "The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors"

Our original paper by Willingham and Volkmer et al in PNAS reported the results of experiments testing the hypothesis that CD47 might be expressed, and demonstrate dominant ‘don’t eat me’ functions, on human solid cancers, as in our previously described studies of human leukemias and lymphomas and mouse leukemias. The study included primarily experiments on primary patient solid cancers with minimal passage as xenografts in immune-deficient mice, tested in vitro and as xenografts in mice lacking adaptive immune system T, B, and NK cells but possessing all other bone marrow-derived innate immune system cells such as macrophages. We included one experiment on a long-passaged mouse breast cancer line transplanted into syngeneic immunocompetent FVB mice. The Replication Study by Horrigan et al in eLife reports the results of efforts to repeat the experiments on the passaged mouse breast cancer line, but none of the experiments on human primary and minimally passaged cancers of several different solid tumor types, either in vitro or as xenografts in mice in which the CD47 ‘don’t eat me’ signal was blocked with monoclonal antibodies.

When we were requested to participate in a replication study of our paper entitled “The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors” we agreed, but we were worried: we had spent years developing the infrastructure to obtain human cancers from de-identified patients, had found ways to transplant them into immune-deficient mice, and had limited our studies to human cancers within 1 to fewer than 10 transplant passages in these mice. Our major objective in the study was to test whether the CD47 molecule was present on these human solid tumors, whether it acted as a ‘don’t eat me’ signal for mouse and human macrophages, and whether these tumors in immune-deficient mice were susceptible to blocking anti-CD47 antibodies. This was a scientific paper to answer these questions, not a preclinical study preparatory to human clinical trials.

To our surprise, our study verified that all tested human cancers express CD47, perhaps the first cancer gene commonly expressed on all cancers; that it is a molecule which provides a ‘don’t eat me’ function; and that blocking that function led to tumor attack by macrophages.

Unfortunately, the independent group who accepted the task of replicating our studies did not do a single study with human cancers, or study the effect of our blocking antibodies to the CD47 tumor cell surface molecule on the phagocytic removal of human cancers.

Horrigan et al did begin, with our help, to replicate the one study we did as a pilot to see whether anti-CD47 antibodies that also bind mouse CD47 would have an effect on a long-transplanted mouse breast cancer line. We and others have found that the exact way you transplant these mouse cancers is critical to achieving engraftment of the cancers in appropriate immunocompetent mice. As we learned from Dr Sean Morrison, UT Southwestern Children's Hospital, many cancers won’t grow in mice unless a special type of matrigel is used to support the cells in vitro and in transplant. Without it, transplantation may be sporadic and/or absent. The replication team found their own matrigel and, for reasons unknown to us, could not get reproducible transplantation in their testing. This was picked up in reviews of the paper by eLife referees, including a request to repeat the studies in a number of ways, but that did not happen.

      There is therefore no study that addresses the title of the paper and its major conclusions: human cancers express CD47 and our studies show that it is a target for therapeutic studies.

Several independent papers since ours have replicated not only our findings but have extended them to many other human cancers (see below). So replication of our major points has occurred with independent groups.

But we agree that everything we publish, major or minor, central or peripheral, must be replicable. Even in our human tumor studies there were a few outlier cancers whose growth was not diminished in the presence of blocking anti-CD47 antibodies.

The beginning of replication is to show experience and competence in the transplantability of the cancer. There are many possible reasons that the basic transplantation of MT1A2 breast cancer cells into syngeneic FVB mice was not replicated in the experiments carried out by Horrigan et al, who got only a fraction of the mice transplanted. These could include the particular matrigel used, or a problem with using long-passaged cell lines (which may be heterogeneous and altered by the passaging in vitro and in vivo) rather than primary or recent mouse or human cancers. It could be inherent in how Horrigan et al did the experiments. Oddly, the control antibodies did diminish the growth of the MT1A2 cancers in their single experiment. Amongst the reasons concerning the heterogeneity of long-passaged cell lines, we might cite that we have discovered two more ‘don’t eat me’ molecules on cancers that interact with other receptors on macrophages. Although those papers are submitted, they are not yet published, and we cannot specify the details lest we endanger their publishability. (Readers who send us a request will receive copies of the papers when published.) Laboratories that study tumors at different transplant passages have often found that variant subsets of cells within the cancer can rapidly outgrow the major population of cells transplanted, and it is common that the successive transplants grow more aggressively in the same strain of mice, even though the name of the tumor is retained. For that reason it is clear that studies on long-passaged tumors may be studying some properties of the passaged cell rather than the original cancer in the individual. There are other possibilities. When the replication study lab interacted with us early on, we offered to do the experiments side by side with them to facilitate technology transfer. Horrigan et al declined. The offer still stands.

      Before this paper was published we published other papers demonstrating that CD47 was expressed on all samples of human AML and human NHL tested, usually at a higher level than on normal human cells of the same stage or type. Further, we showed, both by genetic manipulation of CD47 expression on human cells and by treatment of those cells with blocking antibodies to CD47 that interrupt its interaction with the macrophage receptor Sirpα, that this leads mouse or human macrophages to phagocytose and kill the tumor target cells. We used anti-human antibodies that did not trigger phagocytosis by ‘opsonization’, as the isotype of the antibodies used for blocking was not an isotype that is highly efficient at triggering complement activation or ADCC (activation, via Fc receptors, of NK-lineage killer cells), and we demonstrated this on human lymphomas. [...]

      (The comment in full can be found at: https://elifesciences.org/content/6/e18173#disqus_thread)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 29, Laura M Cox commented:

      Thank you for catching this typo. The journal will issue a corrigendum soon.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Aug 24, Seán Turner commented:

      In the title, "Faecalibacterium (sic) rodentium" should be "Faecalibaculum rodentium."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 22, Tom Yates commented:

      Shah and colleagues are to be congratulated for an important study (Shah NS, 2017), emphasising the major role of transmitted resistance in the epidemiology of extensively drug-resistant tuberculosis (XDR-TB). However, methodological issues will have impacted the results.

      As the authors acknowledge, in such studies, missing data bias estimates, with linked isolates wrongly designated unique. Their decision to look at a convenience sample of 51% of cases from throughout KwaZulu-Natal rather than attempt complete enrolment in a smaller area will have accentuated this bias.

      A growing body of research suggests transmission between members of the same household only explains a small proportion of all Mycobacterium tuberculosis (MTB) transmission in Sub Saharan Africa (Verver S, 2004, Andrews JR, 2014, Middelkoop K, 2015, Glynn JR, 2015). In the present study, recall bias plus disease prompting contacts to test for XDR-TB will likely have resulted in household, workplace and hospital contacts being captured more consistently than more casual community contacts.

      Determining the proportion of total MTB transmission occurring in specific locations would allow disease control programmes to be better targeted. I agree with Shah et al that this should be a research priority.

      Dr Tom A. Yates, Institute for Global Health, University College London, t.yates@ucl.ac.uk


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 24, Pushkar Malakar commented:

      This study has the potential to open a new field to explore, such as viral communication or signaling in viruses. Some way down the line, there is the possibility that we will learn about the evolution of communication or signaling systems between two organisms. This study also has therapeutic potential, as viruses are responsible for many fatal human diseases such as cancer and AIDS. If one understands the communication or signaling system between viruses, then therapeutics can be developed to block it, which will help in preventing viral diseases. A deeper understanding of viral communication or signaling systems may also be used in the biotech industry to produce cheaper and more useful bioproducts, as viruses replicate very fast. Further, viruses might use different signaling molecules to communicate with each other for different activities. Signaling systems might also help in the classification of viruses. Anyway, this study is just the beginning, and many more mysteries about viruses may be solved in the near future.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 06, Andrea Giaccari commented:

      PubMed states this article is Free Full Text, but then links to http://www.bmj.com/content/356/bmj.i6505 asking for "Article access for 1 day: Purchase this article for £23 $37 €30 + VAT"


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 05, Jon-Patrick Allem commented:

      There are at least five problems with this paper: First, the authors simply assume that the pro-e-cigarette tweets are wrong and need their corrective input. What if users are right to be positive? The authors have not demonstrated any material risk from vapour aerosol. To the extent that there is evidence of exposure, the levels are so low as to be very unlikely to be a health concern. The presence of a hazardous agent does not in itself imply a risk to health; there has to be sufficient exposure to be toxicologically relevant.

      This critique is misguided. The goal of this paper was to characterize public perception of e-cigarette aerosol by using a novel data source (tweets) and not to demonstrate any material risk from e-cigarette aerosol.

      Second, they have also not considered what harmful effects their potentially misleading 'health education messages' may have. For example, by exaggerating a negligible risk they may be discouraging people from e-cigarette use, and potentially causing relapse to smoking and reducing the incentive to switch - thus doing more harm than had they not intervened. We already know the vast majority of smokers think e-cigarettes are much more dangerous than the toxicological profile of the aerosol suggests - see National Cancer Institute HINTS data. The authors' ideas would aggravate these already highly damaging misperceptions of risk.

      This critique is misguided. This study did not design educational messages. It described people’s perceptions about e-cigarette aerosol.

      Third, as so often happens with tobacco control research, the authors make a policy proposal for which their paper comes nowhere close to providing an adequate justification: "Public health and regulatory agencies could use social media and traditional media to disseminate the message that e-cigarette aerosol contains potentially harmful chemicals and could be perceived as offensive." They have not even studied the effects of the messages they are recommending on the target audience or tested such messages through social media. If they did, they would discover that users are not passive or compliant recipients of health messages, especially if they suspect they are wrong or ill-intentioned. Social media creates two-way conversations in which often very well-informed users will respond persuasively to what they find to be poorly informed or judgemental health messages. Until the authors have tested a campaign of the type they have in mind, they have no basis for recommending that agencies spend public money in this way.

      This critique is misguided. There was no policy proposal made in the passage highlighted here. The suggestion that social media platforms can be used as a communication channel is not a policy; it is a communication strategy. The idea that social media can be used to obtain information and later communicate messages is completely in line with the work presented in this paper. The expectation that every paper answer every research question pertaining to a topic is unreasonable.

      Fourth, the authors suggest that users should be warned by public health agencies that "e-cigarette aerosol ... could be perceived as offensive". If there were warnings from public health and regulatory agencies about everything that could be perceived as offensive by someone, then we would be inundated with warnings. This is not a reliable basis or priority for public health messaging. Given the absence of any demonstrable material risk from e-cigarette aerosol, the issue is one of etiquette and nuisance. This does not require government intervention of any sort. Vaping policy in any public or private place should be a matter for the owners or managers, who may not find it offensive nor wish to offend their clientele. It is not a matter for legislators, regulators or health agencies.

      This critique here is based on one’s own opinion about the role of government and could be debated with no clear stopping point.

      Fifth (and with thanks to Will Moy's tweet), the work is pointless and wasteful. Who cares what people are saying on twitter about e-cigarettes and secondhand aerosol exposure? Why is this even a subject worthy of study and what difference could it make to any outcomes that are important for health or any other policy? What is the rationale for spending research funds on this form of vaguely creepy social media surveillance?

      Big social media data (Twitter, Instagram, Google Web Search) can be used to fill certain knowledge gaps quickly. While one study using one data source is by no means definitive, one study based on timely data can provide an important starting point for addressing an issue of great import to public health. This paper describes why understanding public sentiment toward e-cigarette aerosol is relevant and utilizes a data source that allowed people to organically report on their sentiment toward e-cigarette aerosol unprimed by a researcher, without instrument bias, and at low cost. Also, policy development and communication campaigns are two distinct areas of research. The goal of this study was to inform the latter.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 25, Erica Melief commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jan 21, Clive Bates commented:

      There are at least five problems with this paper:

      First, the authors simply assume that the pro-e-cigarette tweets are wrong and need their corrective input. What if users are right to be positive? The authors have not demonstrated any material risk from vapour aerosol. To the extent that there is evidence of exposure, the levels are so low as to be very unlikely to be a health concern. The presence of a hazardous agent does not in itself imply a risk to health; there has to be sufficient exposure to be toxicologically relevant.

      Second, they have also not considered what harmful effects their potentially misleading 'health education messages' may have. For example, by exaggerating a negligible risk they may be discouraging people from e-cigarette use, and potentially causing relapse to smoking and reducing the incentive to switch - thus doing more harm than had they not intervened. We already know the vast majority of smokers think e-cigarettes are much more dangerous than the toxicological profile of the aerosol suggests - see National Cancer Institute HINTS data. The authors' ideas would aggravate these already highly damaging misperceptions of risk.

      Third, as so often happens with tobacco control research, the authors make a policy proposal for which their paper comes nowhere close to providing an adequate justification.

      Public health and regulatory agencies could use social media and traditional media to disseminate the message that e-cigarette aerosol contains potentially harmful chemicals and could be perceived as offensive.

      They have not even studied the effects of the messages they are recommending on the target audience or tested such messages through social media. If they did, they would discover that users are not passive or compliant recipients of health messages, especially if they suspect they are wrong or ill-intentioned. Social media creates two-way conversations in which often very well-informed users will respond persuasively to what they find to be poorly informed or judgemental health messages. Until the authors have tested a campaign of the type they have in mind, they have no basis for recommending that agencies spend public money in this way.

      Fourth, the authors suggest that users should be warned by public health agencies that "e-cigarette aerosol ... could be perceived as offensive". If there were warnings from public health and regulatory agencies about everything that could be perceived as offensive by someone, then we would be inundated with warnings. This is not a reliable basis or priority for public health messaging. Given the absence of any demonstrable material risk from e-cigarette aerosol, the issue is one of etiquette and nuisance. This does not require government intervention of any sort. Vaping policy in any public or private place should be a matter for the owners or managers, who may not find it offensive nor wish to offend their clientele. It is not a matter for legislators, regulators or health agencies.

      Fifth (and with thanks to Will Moy's tweet), the work is pointless and wasteful. Who cares what people are saying on twitter about e-cigarettes and secondhand aerosol exposure? Why is this even a subject worthy of study and what difference could it make to any outcomes that are important for health or any other policy? What is the rationale for spending research funds on this form of vaguely creepy social media surveillance?

      Updated 21-Jan-17 with fifth point.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 21, Clive Bates commented:

      How did the author manage to publish a paper with the title "E-cigarettes: Are they as safe as the public thinks?", without citing any data on what the public actually does think? There is data in the National Cancer Institute's HINTS survey 2015. This is what it says:

      Compared to smoking cigarettes, would you say that electronic cigarettes are…

      • 5.3% say much less harmful
      • 20.6% say less harmful
      • 32.8% say just as harmful
      • 2.7% say more harmful
      • 2.0% say much more harmful
      • 1.2% have never heard of e-cigarettes
      • 33.9% don’t know enough about these products

      Which brings me to the main issue with the paper. The author claims that there is insufficient knowledge to determine if these products are safer than cigarettes. This is an extraordinary and dangerous claim given what is known about e-cigarettes and cigarettes. It is known with certainty that there are no products of combustion of organic material (i.e. tobacco leaf) in e-cigarette vapour - this is a function of the physical and chemical processes involved. We also know that products of combustion cause almost all of the harm associated with smoking. There is also extensive measurement of harmful and potentially harmful constituents of cigarette smoke and e-cigarette aerosol showing many are not detectable or are present at levels two orders of magnitude lower in the vapour aerosol (e.g. see Farsalinos KE, 2014, Burstyn I, 2014). So the emissions are dramatically less toxic and exposures much lower.

      The author provides a familiar non-sequitur: "There are no current studies that prove that e-cigarettes are safe". There never will be. Firstly because it is impossible to prove something to be completely safe, and almost nothing is. Secondly, no serious commentators claim they are completely safe, just very much safer than smoking. Hence the term 'harm reduction' to describe the benefits of switching to these products.

      This view commands support in the expert medical profession. The Royal College of Physicians (London) assessed the toxicology evidence in its 2016 report Nicotine without smoke: tobacco harm reduction and concluded:

      Although it is not possible to precisely quantify the long-term health risks associated with e-cigarettes, the available data suggest that they are unlikely to exceed 5% of those associated with smoked tobacco products, and may well be substantially lower than this figure. (Section 5.5 page 87)

      This is a carefully measured statement that aims to provide useful information to both users of the products and health and medical professionals while reflecting residual uncertainty. It contrasts with the author's information leaflet for patients, which even suggests there is no basis for believing e-cigarettes to be safer than smoking:

      If you are smoking and not planning to quit, we don't know if e-cigarettes are safer. Talk to your health care provider.

      But we do know beyond any reasonable doubt that e-cigarettes are very much safer - the debate is whether they are 90% safer or 99.9% safer than smoking. Regrettably, only 5.3% of American adults correctly believe that e-cigarettes are very much less harmful than smoking, while 37% incorrectly think they are as harmful or more harmful (see above). The danger with these misperceptions of risk is that they affect behaviour, causing people to continue to smoke when they might otherwise switch to much safer vaping. The danger with a paper like this and its patient-facing leaflet is that it nurtures these harmful risk misperceptions and becomes, therefore, a vector for harm.

      To return to the author's title question: E-Cigarettes: Are They as Safe as the Public Thinks?. The answer is: "No, they are very much safer than the public thinks".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 22, Eric Fauman commented:

      I know nothing about cow genetics, but I have done some work on the genetics of metabolites in humans, so I was interested to see how the authors derived biological insights from this genetic study. In particular, I was intrigued by the suggestion in the abstract that they found evidence that genes involved in the synthesis of “milk components” are important for lactation persistence.

      Unfortunately, the more I studied the paper the more problems I found that call this claim into question.

      First off, the Q-Q plot is currently unavailable, but the text mentions there’s only a “slight deviation in the upper right tail”, which could mean there are no true significant signals.

      To account for multiple testing, the authors decided to use a genome-wide association p-value cutoff of 0.95/44100 = 2.15e-5 instead of a more defensible 0.05/44100 = 1.1e-6.

      Since their initial p-value cutoff yielded a relatively small number of significant SNPs, the authors used a much more lenient p-value cutoff of 5e-4 which presumably is well within the linear portion of the Q-Q plot.

      The biggest problem with the enrichment analysis, however, is that they’ve neglected to account for genes drawn from a common locus. Often, paralogs of similar function are proximal in the genome. But typically we assume that a single SNP is affecting the function of only a single gene at a locus. So, for example, a SNP near the APOA4/APOA1/APOC3/APOA5 locus can tag all 4 genes, but it’s unfair to consider that 4 independent indications that “phospholipid efflux”, “reverse cholesterol transport”, “triglyceride homeostasis” and other pathways are “enriched” in this GWAS.

      This issue, of overcounting pathways due to gene duplication, affects all their top findings, presumably rendering them non-significant. Besides lipid pathways, this issue also pertains to the “lactation” GO term, which was selected based on the genes GC, HK2, CSN2 and CSN3. GC, CSN2 and CSN3 are all co-located on Chromosome 6.

      A perplexing claim in the paper is for the enrichment of the term “lipid metabolic process” (GO:0006629). According to the Ensembl Biomart, 912 Bos taurus genes fall into this category, or about 4% of the bovine protein coding genes (24616 according to Ensembl). So out of their set of 536 genes (flanking SNPs with P < 5e-4) we’d expect about 20 “lipid metabolic process” genes. And yet, this paper reports only 7. This might be significant, but for depletion, not enrichment.
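      The threshold and expected-count arithmetic above can be checked with a short script (a sketch using only the counts cited in this comment; the exact hypergeometric depletion test at the end is my own illustration, not a reanalysis of the paper's data):

```python
from math import comb

# Counts quoted in this comment (assumed, for illustration only)
n_snps = 44100   # SNPs tested genome-wide
N = 24616        # Bos taurus protein-coding genes (Ensembl)
K = 912          # genes annotated "lipid metabolic process" (GO:0006629)
n = 536          # genes flanking SNPs with P < 5e-4
k = 7            # "lipid metabolic process" genes reported in that set

# The paper's lenient cutoff vs the conventional Bonferroni cutoff
lenient = 0.95 / n_snps    # ~2.15e-5
standard = 0.05 / n_snps   # ~1.13e-6

# Expected number of GO:0006629 genes in a random draw of 536 genes
expected = n * K / N       # ~19.9, far above the 7 observed

# Exact hypergeometric P(X <= 7): probability of seeing this FEW such
# genes by chance, i.e. evidence of depletion rather than enrichment
p_depletion = sum(
    comb(K, i) * comb(N - K, n - i) for i in range(k + 1)
) / comb(N, n)
```

Under these counts the depletion tail probability comes out well below 0.05, consistent with the comment's point that 7 observed vs ~20 expected suggests depletion, not enrichment.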

      Sample size is of course a huge issue in GWAS. While 3,800 cows is a large number, it appears this trait may require a substantially larger number of animals before it can yield biologically meaningful results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 16, Stuart RAY commented:

      This is a scientifically interesting report, but the use of "mutation rate" in the title, abstract, and in some portions of the text is unfortunate, because the process being observed and measured in this report is the evolutionary rate of substitution (as noted in the authors' Tables 1 and 2). The evolutionary rate of substitution results from a variety of processes that are affected by the mutation rate (determined in particular by the polymerase), positive and negative selection, and stochastic events at multiple levels (from individual cell to population). Thus, the term "mutation rate" is confusing and potentially misleading. With every RNA genome replication, there is a nonzero rate of mutation; what we estimate when we sequence virus obtained from infected individuals, sampled over a period of years, is the evolutionary rate of substitution.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 24, Shafic Sraj commented:

      Cubital tunnel score in the presence of carpal tunnel syndrome


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 21, Thomas Heston commented:

      This appears to be a classic example of the Hawthorne Effect, i.e. what gets examined tends to improve (http://www.economist.com/node/12510632). The conclusion of this research seems to be that focusing on a problem by providing feedback tends to improve that problem, compared to doing nothing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 31, Stuart RAY commented:

      This is a very interesting and well-done study of a model system. That said, the unqualified use of the terms "antiviral effects of IFN-lambda" and "norovirus" in the title of this article might be misleading without context. Readers should be alert: (a) the noroviruses are very diverse biologically and phylogenetically; (b) murine norovirus is distinct from human noroviruses in apparent tropism, binding (sialic vs blood group antigens), and pH dependence of viral entry (Kim Y. Green, Fields Virology 2013, chapter 20); (c) there are significant biological differences between human and mouse responses to lambda interferon (Hermant P, 2014); and (d) B6 mice lack functional MX1 (Pillai PS, 2016, Moritoh K, 2009). Given differences in virus and host, whether the findings presented by Baldridge et al. can be extrapolated to other systems (e.g. natural human norovirus infection) is highly uncertain; therefore, I suggest that the title should end with "in mice".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 14, Sin Hang Lee commented:

      In a research paper titled “Specific microbiota direct the differentiation of IL-17-producing T-helper cells in the mucosa of the small intestine” published by Ivanov et al. in Cell Host Microbe. 2008 Oct 16;4(4):337-49, antibiotic treatment of the specific microbiota has been shown to inhibit TH17 cell differentiation. Perhaps, Strle and colleagues may consider developing microbiological tests for accurate diagnosis of the early infection of Lyme borreliosis in patients with or without skin lesions for timely appropriate antibiotic treatment to prevent excessive TH17 responses and the subsequent autoimmune disorders.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 05, Kevin Hall commented:

      The theoretical basis of the carbohydrate-insulin model (CIM) relies on generally accepted physiology about endocrine regulation of adipose tissue – data that were all collected on short time scales. Ludwig appears to suggest that this long debate has been about a “straw man” short-term version of the CIM. This apparently explains why the purported metabolic advantages have been elusive when assessed by inpatient controlled feeding studies that were simply too short to unveil the metabolic advantages of the CIM. Indeed, Ludwig believes he has scored a win in this debate by acknowledging that these metabolic advantages of low carbohydrate diets on energy expenditure and body fat predicted by the CIM must operate on longer time scales, conveniently where no inpatient data have been generated either supporting or negating those predictions. This was accurately described in my review as an ad hoc modification of the CIM – a possibility currently unsupported by data but obviously supported by sincere belief.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 05, DAVID LUDWIG commented:

      With Hall’s comment of 4 Feb 2017, this long debate nears resolution. He acknowledges it’s “possible” that the very short metabolic studies do not reflect the long-term effects of macronutrients on body weight. We disagree on how likely that possibility is, and now must await further research to resolve the scientific uncertainties.

      Finally, on an issue of academic interest only, Hall creates a straw man in claiming to have “falsified” the Carbohydrate-Insulin Model (CIM). Versions of CIM were originally proposed more than a century ago, as detailed by Taubes G, 2013, before short term studies of substrate oxidation would have been possible. Furthermore, in the second paragraph of his review, Hall cites an article I coauthored Ludwig DS, 2014 and three by others Lustig RH, 2006, Taubes G, 2013, Wells JC, 2011 as recent iterations of CIM. Each of these articles focuses on long-term effects, and none asserts that 1 week should be adequate to prove or falsify CIM. In view of the failure of conventional approaches to address the massive public health challenge of obesity, let’s now refocus our energies into the design and execution of more definitive research.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Feb 05, Kevin Hall commented:

      Ludwig suggests that demonstration of any metabolic adaptations occurring on a time scale of > 1 week after introduction of an isocaloric low carbohydrate diet somehow invalidates all of the inpatient controlled feeding studies with results that violate carbohydrate-insulin model (CIM) predictions. This presents a false dilemma and is a red herring.

      There are indeed metabolic adaptations that take place on longer time scales, but many of these changes actually support the conclusion that the purported metabolic advantages for body fat loss predicted by the CIM are inconsistent with the data. For example, as evidence for a prolonged period of fat adaptation, Ludwig notes modest additional increases in blood and urine ketones observed after 1 week of either starvation Owen OE, 1983 or consuming a hypocaloric ketogenic diet Yang MU, 1976. The implication is that daily fat and ketone oxidation presumably increase along with their blood concentrations over extended time periods to eventually result in an acceleration of body fat loss with low carbohydrate high fat diets as predicted by the CIM. But since acceleration of fat loss during prolonged starvation would be counterproductive to survival, might there be data supporting a more physiological interpretation of the prolonged increase in blood and urine ketones?

      Both adipose lipolysis Bortz WM, 1972 and hepatic ketone production Balasse EO, 1989 reach a maximum within 1 week as demonstrated by isotopic tracer data. Therefore, rising blood ketone concentrations after 1 week must be explained by a reduced rate of removal from the blood. Indeed, muscle ketone oxidation decreases after 1 week of starvation and, along with decreased overall energy expenditure, the reduction in ketone oxidation results in rising blood concentrations and increased urinary excretion (page 144-152 of Burstztein S, et al. ‘Energy Metabolism, Indirect Calorimetry, and Nutrition.’ Williams & Wilkins 1989). Therefore, rather than being indicative of progressive mobilization of body fat to increase oxidation and accelerate fat loss, rising concentrations of blood ketones and fatty acids occurring after 1 week arise from reductions in ketone and fat oxidation concomitant with decreased energy expenditure.

      The deleterious effects of a 600 kcal/d low carbohydrate ketogenic diet on body protein and lean mass were demonstrated in Vazquez JA, 1992 and were found to last about 1 month. Since weight loss was not significantly different compared to an isocaloric higher carbohydrate diet, body fat loss was likely attenuated during the ketogenic diet and therefore in direct opposition to the CIM predictions. Subsequent normalization of nitrogen balance would tend to result in an equivalent rate of body fat loss between the isocaloric diets over longer time periods. In Hall KD, 2016, urinary nitrogen excretion increased for 11 days after introducing a 2700 kcal/d ketogenic diet and coincided with attenuated body fat loss measured during the first 2 weeks of the diet. The rate of body fat loss appeared to normalize in the final 2 weeks, but did not exceed the fat loss observed during the isocaloric high carbohydrate run-in diet. Mere normalization of body fat and lean tissue loss over long time periods cannot compensate for early deficiencies. Therefore, these data run against CIM predictions of augmented fat loss with lower carbohydrate diets.

      While I believe that outpatient weight loss trials demonstrate that low carbohydrate diets often outperform low fat diets over the short term, there is little difference in body weight over the long term Freedhoff Y, 2016. However, outpatient studies cannot ensure or adequately measure diet adherence, and therefore it is unclear whether the greater short-term weight losses with low carbohydrate diets were due to reduced diet calories or to the purported “metabolic advantages” of increased energy expenditure and augmented fat loss predicted by the CIM. The inpatient controlled feeding studies demonstrate that the observed short-term energy expenditure and body fat changes often violate CIM predictions.

      Ludwig conveniently suggests that all existing inpatient controlled feeding studies have been too short and that longer duration studies might produce results more favorable to the CIM. But even if this were true, the current data demonstrating repeated violations of CIM predictions constitute experimental falsifications of the CIM, requiring an ad hoc modification of the model such that the metabolic advantages only begin after a time lag lasting many weeks. This is possible, but unlikely.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Feb 04, DAVID LUDWIG commented:

      Boiling down his comment of 3 Feb 2017, Hall disputes that the metabolic process of adapting to a high-fat/low-carbohydrate diet confounds interpretation of his and other short term feeding studies. If we can provide evidence that this process could take ≥ 1 week, the last leg of his attack on the Carbohydrate-Insulin Model collapses. Well, a picture is worth a thousand words, and here are 4:

      For convenience, these figures can be viewed at this link:

      Owen OE, 1983 Figure 1. Ketones are, of course, the hallmark of adaptation to a low-carbohydrate ketogenic diet. Generally speaking, the most potent stimulus of ketosis is fasting, since the consumption of all gluconeogenic precursors (carbohydrate and protein) is zero. As this figure shows, the blood levels of each of the three ketone species (BOHB, AcAc and acetone) continues to rise for ≥3 weeks. Indeed, the prolonged nature of adaptation to complete fasting has been known since the classic starvation studies of Cahill GF Jr, 1971. It stands to reason that this process might take even longer on standard low-carbohydrate diets, which inevitably provide ≥ 20 g carbohydrate/d and substantial protein.

      Yang MU, 1976 Figure 3A. Among men with obesity on an 800 kcal/d ketogenic diet (10 g/d carbohydrate, 50 g/d protein), urinary ketones continued to rise for 10 days through the end of the experiment, and by that point had achieved levels equivalent only to those on day 4 of complete fasting. Presumably, this process would be even slower with a non-calorie restricted ketogenic diet (because of inevitably higher carbohydrate and protein content).

      Vazquez JA, 1992 Figure 5B. On a conventional high-carbohydrate diet, the brain is critically dependent on glucose. With acute restriction of dietary carbohydrate (by fasting or a ketogenic diet), the body obtains gluconeogenic precursors by breaking down muscle. However, with rising ketone concentrations, the brain becomes adapted, sparing glucose. In this way, the body shifts away from protein to fat metabolism, sparing lean tissue. This phenomenon is clearly depicted among women with obesity given a calorie-restricted ketogenic diet (10 g carbohydrate/d) vs a nonketogenic diet (76 g carbohydrate/d), both providing 50 g protein/d. For 3 weeks, nitrogen balance was substantially more negative on the ketogenic diet than on the non-ketogenic diet, but this difference was completely abolished by week 4. What would subsequently happen? We simply can’t know from the short-term studies.

      Hall KD, 2016 Figure 2B. Hall’s own study shows that the rate of fat loss, which transiently decreases upon initiation of the ketogenic diet, accelerates after 2 weeks.

      The existence of this prolonged adaptive process explains why metabolic advantages for the low-fat diet are consistently seen in very short metabolic studies. But after 2 to 4 weeks, advantages for low-carbohydrate diets begin to emerge, as summarized in my comment of 3 Feb 2017, below.

      Fat adaptation on low-carbohydrate diets has admittedly not been thoroughly studied, and its duration may differ among individuals and between experimental conditions. Nevertheless, there is strong reason to think that short feeding studies (i.e., < 3 to 4 weeks) have no relevance to the long-term effects of macronutrients on metabolism and body composition.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Feb 04, Kevin Hall commented:

      Is it really an “extreme argument” to conclude that important aspects of the carbohydrate-insulin model (CIM) have been falsified based on data from 20 highly controlled inpatient human feeding studies that failed to support key CIM model predictions? While previously ignoring this conforming body of data from other research groups, Ludwig now conveniently concludes that all of these studies were flawed in some way and are therefore irrelevant and incapable of testing any aspect of the carbohydrate-insulin model.

      To Ludwig, more relevant for assessing the energy expenditure and body fat predictions of the CIM are rodent studies and outpatient human studies where diet adherence cannot be adequately controlled or assessed Winkler JT, 2005. One such study Ludwig uses to bolster the CIM Ebbeling CB, 2012 did not measure body fat during the test diets and showed no significant energy expenditure differences between diets with the same amount of protein but varying in carbohydrate vs. fat. Ludwig claims that the study supports the CIM because energy expenditure was observed to increase with a very low carbohydrate diet. But the concomitant 50% increase in protein vs. the comparator diets makes it impossible to definitively conclude that any observed effect was due to carbohydrate reduction alone. Ludwig’s arguments about the possibly minimal effects of dietary protein changes on energy expenditure cannot eliminate this important confound.

      Ludwig argues that “adaptation to a higher-fat diet can take at least a week and perhaps considerably longer”. One of Ludwig’s citations in this regard describes the role of diet composition on fuel utilization and exercise performance Hawley JA, 2011. This review paper reported that adaptation to a high fat diet for <1 week was sufficient to alter fuel utilization, but 4-7 days of fat adaptation was required to maintain subsequent exercise performance. Interestingly, the review concluded that longer periods of fat adaptation during training (7 weeks) limited exercise capacity and impaired exercise performance. The other two studies Ludwig cited to support the necessity for long term fat adaptation fail to support the CIM. An inpatient controlled feeding study Vazquez JA, 1992 showed that a very low carbohydrate, high fat diet led to significantly greater loss of body protein and lean tissue mass despite no significant difference in weight loss compared to an isocaloric higher carbohydrate, lower fat diet. The second study was an outpatient feeding trial Veum VL, 2017 that failed to demonstrate a significant difference in body weight or fat loss despite prescribing diets substantially varying in carbohydrate vs. fat for 3 months.

      I agree with Ludwig that it likely takes a long time to equilibrate to added dietary fat without simultaneously reducing carbohydrate because, unlike carbohydrate and protein, dietary fat does not directly promote its own oxidation and does not significantly increase daily energy expenditure Schutz Y, 1989 and Horton TJ, 1995. Unfortunately, these observations also run counter to CIM predictions because they imply that added dietary fat results in a particularly efficient means to accumulate body fat compared to added carbohydrate or protein Bray GA, 2012. If such an added fat diet is sustained, adipose tissue will continue to expand until lipolysis is increased to sufficiently elevate circulating fatty acids and thereby increase daily fat oxidation to reestablish balance with fat intake Flatt JP, 1988.

      In contrast, when added fat is accompanied by an isocaloric reduction in carbohydrate, daily fat oxidation plateaus within the first week as indicated by the rapid and sustained drop in daily respiratory quotient in Hall KD, 2016 and Schrauwen P, 1997. Similarly, Hall KD, 2015 observed a decrease and plateau in daily respiratory quotient with the reduced carbohydrate diet, whereas the reduced fat diet resulted in no significant changes indicating that daily fat oxidation was unaffected. As further evidence that adaptations to carbohydrate restriction occur relatively quickly, adipose tissue lipolysis is known to reach a maximum within the first week of a prolonged fast Bortz WM, 1972 as does hepatic ketone production Balasse EO, 1989.

      While there is no evidence that carbohydrate restricted diets lead to an acceleration of daily fat oxidation on time scales longer than 1 week, and there is no known physiological mechanism for such an effect, this possibility cannot be ruled out. Such speculative long term effects constitute an ad hoc modification of the carbohydrate-insulin model whereby repeated violations of model predictions on time scales of 1 month or less are somehow reversed.

      As I have repeatedly acknowledged, prescribing lower carbohydrate diets in free-living subjects generally leads to greater loss of weight and body fat over the short-term when people are likely adhering most closely to the diet prescriptions. The CIM suggests that such diets offer a “metabolic advantage” that substantially increases energy expenditure and body fat loss even if diet calories are equal. However, inpatient controlled feeding studies do not support this contention as they have repeatedly failed to show significant differences in energy expenditure and body fat. Furthermore, such studies have occasionally measured significant differences in diametrically opposite directions than were predicted on the basis of carbohydrate intake and insulin secretion. These apparent falsifications of the CIM do not imply that dietary carbohydrates and insulin are unimportant for energy expenditure and body fat regulation. Rather, their role is more complicated than the CIM suggests and the model requires thoughtful modification.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Feb 03, DAVID LUDWIG commented:

      In his comment of January 31, 2017, Hall presses an extreme argument, that he successfully "falsified" major aspects of the Carbohydrate-Insulin Model (CIM) of obesity, and complains that opponents won't embrace their error. His argument boils down to 3 points:

      First, Hall’s small 6-day study and his small, “observational,” “pilot” study are fundamentally correct. Regarding the 6-day study Hall KD, 2015, he continues to insist that results of a very short intervention have relevance to understanding the long-term effects of macronutrients on body composition, despite evidence that adaptation to a higher-fat diet can take at least a week and perhaps considerably longer Hawley JA, 2011 Vazquez JA, 1992 Veum VL, 2017. (We need look no further than his observational study Hall KD, 2016, to see in Figure 2B that the transient decrease in rate of fat loss upon initiation of the low-carbohydrate diet accelerates after 2 weeks.) Of note, the 36 g/d greater predicted body fat loss on his low-fat diet would, if persistent, translate into a massive advantage in adiposity after just one year. If anything, the meta-analyses of long-term clinical trials suggest the opposite Tobias DK, 2015 Mansoor N, 2016 Mancini JG, 2016 Sackner-Bernstein J, 2015 Bueno NB, 2013. Furthermore, Hall’s two studies are mutually inconsistent: The 6-day study implies a major increase in energy expenditure from fat oxidation on the low-fat diet, whereas the observational study shows an increase in energy expenditure after 2 weeks (by doubly-labeled water) on the low-carbohydrate diet. Other limitations of his observational study have been considered elsewhere.
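      As a quick sanity check on the parenthetical arithmetic above (my own illustrative calculation, not from either author): a persistent 36 g/d difference in body fat loss, naively extrapolated, compounds to roughly 13 kg of fat over one year.

      ```python
      # Illustrative arithmetic only: extrapolating a persistent 36 g/d
      # difference in body fat loss over one year (assumes, unrealistically,
      # that the short-term rate would persist unchanged).
      GRAMS_PER_DAY = 36
      DAYS_PER_YEAR = 365

      fat_kg_per_year = GRAMS_PER_DAY * DAYS_PER_YEAR / 1000  # kg of fat
      fat_lb_per_year = fat_kg_per_year * 2.20462             # pounds

      print(f"{fat_kg_per_year:.1f} kg (~{fat_lb_per_year:.0f} lb) per year")
      ```

      That roughly 13 kg (~29 lb) per year is the “massive advantage” being referred to, and is why persistence of the short-term rate is the crux of the disagreement.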

      Second, our randomized 3-arm cross-over study Ebbeling CB, 2012 is fundamentally wrong. I’ve addressed Hall’s concerns elsewhere. Here, he reiterates that the 10% difference in protein content (intended by design to reflect the Atkins diet) could account for our observed 325 kcal/d difference in energy expenditure. However, there is no basis in the literature for this belief. Among 10 studies published at the time of our feeding trial in which protein intake was compared within the physiological range (10 to 35% of total energy), energy expenditure on the higher vs. lower protein diets ranged from +95 kcal/d to -97 kcal/d, with a mean difference near zero Dulloo AG, 1999 Hochstenbach-Waelen A, 2009 Lejeune MP, 2006 Luscombe ND, 2003 Mikkelsen PB, 2000 Veldhorst MA, 2009 Veldhorst MA, 2010 Westerterp KR, 1999 Westerterp-Plantenga MS, 2009 Whitehead JM, 1996. Though these studies have methodological limitations themselves, the finding is consistent with thermodynamic considerations that indicate a very minor increment in the "thermic effect of food" from a 10% increase in protein.

      Third, 18 other studies provide definitive support for his position. This facile contention disregards that these studies are riddled with the same inherent limitations as his studies, including a combination of short duration, highly limited power, indirect measurements of body composition, reliance on metabolic chambers (which have been shown to underestimate adaptive thermogenesis compared to doubly-labeled water Rosenbaum M, 1996), quality control concerns and other issues. Of the cited studies, six were 1 to 4 days Astrup A, 1994 Dirlewanger M, 2000 Davy KP, 2001 Smith SR, 2000 Thearle MS, 2013 Verboeket-van de Venne WP, 1996, seven were 7 to 15 days Horton TJ, 1995 Shepard TY, 2001 Eckel RH, 2006 Hill JO, 1991 Schrauwen P, 1997 Treuth MS, 2003 Yang MU, 1976, and just five were 4 to 6 weeks. One of these longer studies was based on recovered data from about 30 years prior to publication, with no direct measurements of body composition or energy expenditure Leibel RL, 1992. The other four longer studies employed severe calorie restriction, which would plausibly obscure macronutrient effects over this short duration. Two of these studies had just 4 subjects per diet group Rumpler WV, 1991 Bogardus C, 1981. The remaining two showed either a non-significant (2 kg lower total body fat) Golay A, 1996 or significant (30 cc lower visceral fat) Miyashita Y, 2004 advantage for the lower-carbohydrate diet. We’ve been down this road before, with the launch of the 40-year low-fat diet era based on over-interpretation of methodologically limited research. Let’s not make the same mistake again.

      Even as he over-interprets the short-term feeding studies, Hall disregards extensive animal research, high quality observational studies, mechanistic studies, and clinical trials in support of CIM, as summarized here and elsewhere Ludwig DS, 2014 Lucan SC, 2015 Templeman NM, 2017.

      Finally, Hall claims that I misunderstand the notion of “energy gap.” As both Hall and I Katan MB, 2010 have considered elsewhere, a decrease in energy intake produces a compensatory decrease in energy expenditure, resulting in less weight loss than would be predicted from the simple observation that a pound of fat contains 3500 kcal. However, here we consider the opposite phenomenon – an increase in energy expenditure resulting from changing dietary quality, not quantity. There is no reason to believe that compensatory increases in energy intake would occur as a result of faster metabolic rate over a similar time frame as that observed with compensatory changes to energy restriction. (Indeed, Hall himself acknowledges the possibility that low-carbohydrate diets might also lower energy intake.) Of course, progressive weight loss regardless of cause would eventually reduce energy expenditure, but we cannot infer from current data when that energy gap would reach zero. Even with conventional assumptions, NIDDK’s Body Weight Planner indicates the 150 kcal/d change in energy balance Hall found on the low-carbohydrate diet by doubly-labeled water would produce more than a 15 lb weight loss for a typical individual over several years – amounting to half the mean change in weight that occurred during the obesity epidemic in the U.S. Why would we dismiss findings with such major potential public health significance?
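      For readers wanting to reproduce the back-of-envelope numbers in this paragraph, here is my own illustrative sketch (the NIDDK Body Weight Planner itself runs a full dynamic model, not these shortcuts): the static “3500 kcal per pound of fat” rule and the rule of thumb derived from Hall’s dynamic modeling (roughly 10 kcal/d of sustained energy-balance change per pound of eventual weight change, approached over about 3 years) give comparable figures for a 150 kcal/d shift.

      ```python
      # Illustrative arithmetic only (not the actual Body Weight Planner model).
      DELTA_KCAL_PER_DAY = 150

      # Static rule: 3500 kcal per pound of fat, accumulated over one year.
      static_lb_first_year = DELTA_KCAL_PER_DAY * 365 / 3500

      # Dynamic rule of thumb from Hall's modeling (Hall KD, 2011): roughly
      # 10 kcal/d of sustained energy-balance change per pound of *eventual*
      # weight change, approached over several years as expenditure adapts.
      dynamic_lb_eventual = DELTA_KCAL_PER_DAY / 10

      print(f"static rule, year 1: {static_lb_first_year:.1f} lb")
      print(f"dynamic steady state: {dynamic_lb_eventual:.0f} lb")
      ```

      Both approximations land near the "more than a 15 lb" figure quoted above; the difference is that the dynamic rule treats it as an eventual plateau rather than a per-year loss.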

      Hall's premature claims of (at least partial) victory and calls for curtailment of funding for more research Freedhoff Y, 2016 do not do justice to a complicated scientific question. In view of the failure of conventional obesity treatment and the massive public health challenges, all participants in this debate would do well to acknowledge the limitations of existing evidence and join in the design of more definitive research.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 Jan 31, Kevin Hall commented:

      Science progresses through an iterative process of formulating models to explain our observations and subjecting those models to experimental interrogation. A single valid experimental result that runs counter to a model prediction falsifies the model and thereby requires its reformulation. Alternatively, refutation of an apparent model falsification requires demonstrating that the experimental observation was invalid.

      My review of the carbohydrate-insulin model (CIM) presented a synthesis of the evidence from 20 inpatient controlled feeding studies strongly suggesting that at least some important aspects of the model are in need of modification. In particular, our recent studies Hall KD, 2015, Hall KD, 2016 employing carefully controlled inpatient isocaloric diets with constant protein, but differing in carbohydrate and fat, resulted in statistically significant differences between the diets regarding body fat and energy expenditure that were in directions opposite to predictions of the CIM.

      Rather than using our experimental results as the basis for clarifying and reformulating the CIM, Ludwig challenges their validity and simply ignores the 18 other inpatient controlled feeding studies with their conforming body of results failing to support the energy expenditure or body fat predictions of the CIM.

      Ludwig’s comments on the diets used in Hall KD, 2015 are irrelevant to whether they resulted in a valid test of the CIM predictions. We fed people diets that selectively reduced 30% of baseline calories solely by restricting either carbohydrate or fat. These diets achieved substantial differences in daily insulin secretion as measured by ~20% lower 24hr urinary C-peptide excretion with the reduced carbohydrate diet as compared with the reduced fat diet (p = 0.001), the latter remaining unchanged from baseline. Whereas the reduced fat diet resulted in no significant energy expenditure changes from baseline, carbohydrate restriction resulted in a ~100 kcal/d decrease in both daily energy expenditure and sleeping metabolic rate. These results were in direct opposition to the CIM predictions, but in accord with the previous studies described in the review as well as a subsequent study demonstrating that lower insulin secretion was associated with a greater reduction of metabolic rate during weight loss Muller MJ, 2015.

      Ludwig erroneously claims that the study suffered from an “inability to directly document change in fat mass by DXA”, but DXA measurements indicated statistically significant reductions in body fat with both diets. While DXA was not sufficiently precise to detect significant differences between the diets, even this null result runs counter to the predicted greater body fat loss with the reduced carbohydrate diet. Importantly, the highly sensitive fat balance technique demonstrated small but statistically significant differences in cumulative body fat loss (p<0.0001) in the direction opposite to the CIM predictions. Ludwig claims that our results are invalid because “rates of fat oxidation, the primary endpoint, are exquisitely sensitive to energy balance. A miscalculation of available energy for each diet of 5% in opposite directions could explain the study’s findings.” However, it is highly implausible that small uncertainties in the metabolizable energy content of the diet amounting to <100 kcal/d could explain the >400 kcal/d (p<0.0001) measured difference in daily fat oxidation rate. Furthermore, our results were robust to the study errors and exclusions fully reported in Hall KD, 2015 and clearly falsified important aspects of the CIM.

      We previously responded Hall KD, 2016b to Ludwig’s comments Ludwig DS, 2016 on our ketogenic diet study. Ludwig now argues that we set the bar too high regarding the energy expenditure predictions of the CIM based on “speculative claims by non-scientists like Robert Atkins”. But scientists well-known for promoting low carb diets have claimed that “very low carbohydrate diets, in their early phases, also must supply substantial glucose to the brain from gluconeogenesis…the energy cost, at 4–5 kcal/gram could amount to as much as 400–600 kcal/day” Fine EJ, 2004. Ludwig also sets the energy expenditure bar quite high in his New York Times opinion article, JAMA commentary, and book “Always Hungry” where he claims to have demonstrated a 325 kcal/d increase in expenditure in accordance with the CIM predictions Ebbeling CB, 2012. What Ludwig fails to mention is that such an interpretation is confounded by the low-carbohydrate diet having 50% greater dietary protein which is well-known to increase expenditure. Ludwig also doesn’t mention that his study failed to demonstrate a significant effect on either resting or daily energy expenditure when comparing diets with the same protein content, but varying in carbohydrate and fat.

      What was the energy expenditure bar set by our ketogenic diet study Hall KD, 2016? The clinical protocol specified that the primary daily energy expenditure outcome (measured by room calorimetry) must increase by >150 kcal/d to be considered physiologically meaningful. With the agreement of funders at the Nutrition Science Initiative, notable proponents of the CIM, the pre-specified 150 kcal/d threshold was used to calculate the number of study subjects required to estimate the energy expenditure effect size in a homogeneous population of men consuming an extremely low carbohydrate diet. If the measured effect size exceeded 150 kcal/d then the results could be reasonably interpreted as a physiologically important increase in energy expenditure worthy of future study in a wider population using more realistic and sustainable diets. Unfortunately, the primary energy expenditure outcome was substantially less than 150 kcal/d and it would have been unethical to retrospectively “move the goal posts” or emphasize exploratory outcomes that could possibly be interpreted as more favorable to the CIM.

      Ludwig sets the bar far too low when he claims that a ~100 kcal/d effect size “would be of major scientific and clinical significance” for treatment of obesity. Ludwig bases this claim on a misunderstanding of the tiny “energy imbalance gap” between calorie intake and expenditure corresponding with the rise of population obesity prevalence Hall KD, 2011. This is especially puzzling since Ludwig himself used the same mathematical model calculations to conclude that development of obesity in adults requires an increased energy intake (or decreased expenditure) amounting to ~400-700 kcal/d Katan MB, 2010.

      As described in my review, the carbohydrate-insulin model is clearly in need of reformulation regarding the predicted effects of isocaloric variations in dietary carbohydrate and fat on energy expenditure and body fat. However, other aspects of the model remain to be adequately investigated and reasonable ad hoc modifications of the model have been proposed. Finally, it is important to emphasize that regardless of whether the carbohydrate-insulin model is true or false, dietary carbohydrates and insulin may promote obesity and low carbohydrate diets may offer benefits for weight loss and metabolic health.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2017 Jan 17, DAVID LUDWIG commented:

      In this review, Hall claims to have “falsified” the Carbohydrate-Insulin Model (CIM) of obesity as iterated by Mark Friedman and me in 2014 Ludwig DS, 2014. Hall describes this achievement as “rare” in nutritional science and analogous to the refutation of the “luminiferous ether” hypothesis of the 19th century. Elsewhere, he argues that the published data are so definitive as to warrant curtailment of further funding for macronutrient-focused obesity research Freedhoff Y, 2016.

      To loosely paraphrase Mark Twain, rumors of CIM’s demise have been greatly exaggerated.

      Hall bases his case mainly on his two feeding studies, one small and short (6 days), the other small, non-randomized (i.e., observational) and designated a pilot.

      In the discussion section of the 6-day study Hall KD, 2015, Hall and colleagues write: “Our relatively short-term experimental study has obvious limitations in its ability to translate to fat mass changes over prolonged durations” (NB: it can take the body weeks to fully adapt to a high fat diet Hawley JA, 2011 Vazquez JA, 1992 Veum VL, 2017). This appropriately cautious interpretation was evidently abandoned in the current review. Indeed, the study has numerous limitations beyond short duration, as reviewed elsewhere, including: 1) inability to directly document change in fat mass by DXA; 2) use of an exceptionally low fat content for the low-fat diet (< 8% of total energy), arguably without precedent in any population consuming natural diets; 3) use of a relatively mild restriction of carbohydrate (30% of total energy), well short of typical very-low-carbohydrate diets; and 4) experimental errors and exclusions of data that could confound findings. In addition, the investigators failed to verify biologically available energy of the diet (e.g., by analysis of the diets and stools for energy content). Rates of fat oxidation, the primary endpoint, are exquisitely sensitive to energy balance. A miscalculation of available energy for each diet of 5% in opposite directions could explain the study’s findings – and this possibility can’t be ruled out in studies of such short duration.

      Hall’s non-randomized pilot Hall KD, 2016 potentially suffers from all the well-recognized limitations of small observational studies, importantly including confounding by any time-varying covariate. One such factor is miscalculation of energy requirements, leading to progressive weight loss that would have introduced bias against the very-low-carbohydrate diet. Other major design and interpretive limitations have been considered elsewhere.

      Furthermore, Hall sets the bar for the CIM unrealistically high (i.e., 400 to 600 kcal/d greater total energy expenditure), citing speculative claims by non-scientists like Robert Atkins. In fact, effect estimates of 100 to 300 kcal/day – as demonstrated by Hall himself Hall KD, 2016 and by us Ebbeling CB, 2012 using doubly-labeled water – would be of major scientific and clinical significance if real, and do not represent "ad hoc modifications" to evade "falsification." (For comparison, Hall previously argued that the actual energy imbalance underlying the entire obesity epidemic is < 10 kcal/d Hall KD, 2011.)

      To test the CIM, we need high-quality studies of adequate duration to eliminate transient biological processes (ideally ≥ 1 month); using a randomized-controlled design; with definitive measurements of body composition (e.g. DXA or MRI); and including appropriate process measures to assure that the diets are properly controlled for biologically available energy content. No such studies have yet been published. Thus, the CIM is neither proven nor “falsified” by existing data. In view of the complexity of diet, many high-quality studies will likely be needed to provide a complete answer to this question, versions of which have been debated for a century.

      The CIM aims to explain a paradox: Body weight is controlled (“defended”) by biological factors affecting fat storage, hunger and energy expenditure Leibel RL, 1995. However, the average defended body weight has increased rapidly throughout the world among genetically stable populations. Lacking a definitive explanation for the ongoing obesity epidemic, or effective non-surgical treatment, we should not casually dismiss CIM, especially in light of many studies suggesting benefits of carbohydrate-modified/higher-fat diets for obesity Tobias DK, 2015 Mansoor N, 2016 Mancini JG, 2016 Sackner-Bernstein J, 2015 Bueno NB, 2013, cardiovascular disease Estruch R, 2013 and possibly longevity Wang DD, 2016.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 05, Alex Vasquez commented:

      Apparently the publisher is not connecting these related publications for appropriate context; see: Vasquez A. Correspondence regarding Cutshall, Bergstrom, Kalish's "Evaluation of a functional medicine approach to treating fatigue, stress, and digestive issues in women.” Complement Ther Clin Pract. 2016 Oct 19 https://doi.org/10.1016/j.ctcp.2016.10.001


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 27, Robin W.M. Vernooij commented:

      We have developed a fillable, user-friendly PDF version of CheckUp, which can be found at the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Library (http://www.equator-network.org/reporting-guidelines/reporting-items-for-updated-clinical-guidelines-checkup/).

      CheckUp has recently been translated into Spanish and Dutch (Chinese and Czech versions are being prepared); these translated versions can also be found at the EQUATOR library. Researchers are invited to translate CheckUp into other languages.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 01, Lydia Maniatis commented:

      "Could these null findings result simply from poor data quality in infants?"

      That a study even warrants such a statement implies a lack of theoretical and methodological rigor. Such questions cannot be resolved post hoc - experiments need to be planned so as to avoid them altogether. The authors feel that "Several observations argue against this [poor quality data] interpretation," but such special pleading by the authors doesn't make me feel any better.

      This is a study in which the conceptual categories are crude - e.g. "scenes" is a category - calling into question its replicability (given the broad latitude in selecting stimuli we could label "scenes.") Post hoc evaluations of data - model-fitting, etc - are also poor practice. All they can do is describe a particular dataset, confounds and all. Because the authors make no predictions, emphasizing instead the relative novelty of their technique, one might overlook the fact that data generated without a clear theoretical premise guiding control of variables/potential confounds is of very limited theoretical value. Basically, they're just playing with their toys.

      Despite what I see as poor scientific practice, I don't think we needed an fMRI study to to "suggest" to us that, by 4–6 months, babies can distinguish "faces" from "scenes."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 04, Randi Pechacek commented:

      Aaron Weimann, the 1st author on this paper, wrote a blog post on microBEnet briefly discussing this new software. Read about it here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 07, Lydia Maniatis commented:

      “In conclusion, using a psychophysical method, for the first time we showed that the timescale of adaption mechanisms for the mid-level visual areas were substantially slower than those for the early visual areas.”

      Psychophysical methods are amazing. They let you tap into specific levels of the brain, just by requiring observers to press one of two buttons. As you can imagine, some heavy-duty theoretical and empirical preparation has gone into laying the ground for such a simple but penetrating method.

      One example of this preparation is the assertion by Graham (1992) that under certain “simple” conditions, the brain becomes transparent, so that the percept is a direct reflection of, e.g. the activity of V1 neurons. (Like bright students, the higher levels can’t be bothered to respond to very boring stimulation). She concluded this after a subset of a vast number of experiments performed in the very active field had proven “consistent” with the “classical” view of V1 behavior, at a time when V1 was thought to be pretty much all there was (for vision). (The “classical” view was later shown to be premature and inadequate, making the achievement of consistency in this body of work even more impressive). If one wanted to be ornery, one might compare Graham’s position to saying that we can drop an object into a Rube Goldberg contraption and trigger only the first event in the series, while the other events simply disengage, due to the simplicity of the object – perhaps a simple, sinusoidal surface. To be fair, though, the visual system is not as integrated or complex as those darned contraptions.

      The incorporation of this type of syllogism into the interpretation of psychophysical data was duly noted by Teller (1984), who, impressed, dubbed it the “nothing mucks it up proviso.” It has obviously remained a pillar of psychophysical research, then and now.

      The other important proviso is the assumption that the visual system performs a Fourier analysis, or a system of little Fourier analyses, or something, on the image. There is no evidence for or logic to this proviso (e.g. no imaginable functional reason or even remotely adequate practical account), but, in conjunction with the transparency assumption, it becomes a very powerful tool: little sinusoidal patches tap directly into particular neural populations, or “spatial filters,” whose activity may be observed via a perceiving subject’s button tap (and a few dozen other “linking propositions,” methodological choices and number-crunching/modeling choices for which we have to consult each study individually). (There are also certain minor logical problems with the notion of “detectors,” a concept invoked in the present paper; interested readers should consult Teller (1984))

      The basic theoretical ground has been so thoroughly packed that there is little reason for authors to explain their rationale before launching into their methods and results. The gist of the matter, as indicated in the brief introduction, is that Hancock and Pierce (2008) “proposed that the exposure to the compound [grating] pattern gave rise to more adaptation in the mid-level visual areas (e.g., V4) than the exposure to the component gratings.” Hancock and Pierce (2008) doubtless had a good reason for so proposing. Mei et al (2017) extend these proposals, via more gratings, and button presses, to generate even more penetrating proposals. These may become practically testable at some point in the distant future; the rationale, as mentioned, is already well-developed.

      n.b. Due to transparency considerations, results apply to gratings only, either individual or overlapping.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, Lydia Maniatis commented:

      Comment 2: Below are some of the assumptions entailed by the sixty-year-old "signal detection theory," as described by Nevin (1969) in a review of Green and Swets (1966), the founding text of SDT.

      "Signal detection theory [has proposed] an indirectly derived measure of sensitivity...This measure is defined as the separation...between a pair of hypothesized normal density functions representing the internally observed effects of signal plus noise, and noise alone."

      In other words, for any image an investigator might present, the nervous system of the observer generates a pair of probability functions related to the presence or absence of a feature of that image that the investigator has in mind and which he/she has instructed the observer to watch for. The observer perceives this feature on the basis of some form of knowledge of these functions. These functions have no perceptual correlate, nor is the observer aware of them, nor is there any explanation of how or why they would be represented at the neural level.

      "The subject's pre-experimental biases, his expectations based on instructions and the a priori probability of signal, and the effects of the consequences of responding, are all subsumed under the parameter beta. The subject is assumed to transform his observations into a likelihood ratio, which is the ratio of the probability density of an observation if a signal is present to the probability density of that observation in the absence of signal. He is assumed, further, to partition the likelihood ratio continuum so that one response occurs if the likelihood ratio exceeds beta, and the other if it is less than beta."

      Wow. None of these assumptions have any relationship to perceptual experience. Are they in the least plausible, or in any conceivable way testable? They underlie much of the data collection in contemporary vision science. They are dutifully taught by instructors; learning such material clearly requires that students set aside any critical thinking instincts.

      The chief impetus behind SDT seems to have been a desire for mathematical neatness, rather than for the achievement of insight and discovery.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 23, Lydia Maniatis commented:

      "For over 60 years, signal detection theory has been used to analyze detection and discrimination tasks. Typically, sensory data are assumed to be Gaussian with equal variances but different means for signal-absent and signal-present trials. To decide, the observer compares the noisy sensory data to a fixed decision criterion. Performance is summarized by d′ (discriminability) and c (decision criterion) based on measured hit and false-alarm rates."
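
      For readers unfamiliar with the quantities the excerpt names, the standard equal-variance computation of d′ and c from hit and false-alarm rates can be sketched as follows. This is a minimal illustration of the textbook formulas, not code from the paper under discussion:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: return (d_prime, criterion_c)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF (the "z-transform")
    d_prime = z(hit_rate) - z(fa_rate)               # separation of the two assumed densities
    criterion_c = -0.5 * (z(hit_rate) + z(fa_rate))  # placement of the decision criterion
    return d_prime, criterion_c

# Symmetric hit and false-alarm rates imply an unbiased observer (c = 0).
d, c = sdt_measures(0.84, 0.16)
```

      The point of the sketch is only to make concrete what the "fixed decision criterion" and "discriminability" summaries amount to arithmetically.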

      What should be noted about the above excerpt is the way in which a statement of historical fact is offered as a substitute for a rationale. What scientist could argue with a 60-year-old practice?

      The "typical" assumptions are not credible, but if the authors believe in them, it would be great if they could propose a way to test them, as well as work out arguments against the seemingly insurmountable objections to treating neurons as detectors, objections raised by, for example, Teller (1984).

      While they are at it, they might explain what they mean by "sensory data." Are they referring to the reaction of a single photoreceptor when struck by a photon of a particular wavelength and intensity? Or to one of the infinitely variable combinations of photon intensities/wavelengths hitting the entire retina at any given moment - combinations which mediate what is perceived at any local point in the visual field? How do we get a Gaussian distribution when every passing state of the whole retina, and even parts of it, is more than likely unique? When, with eyes open, is the visual system in a "signal-absent" state?

      There is clearly a perfect confusion here between the "decision" by the visual process that produces the conscious percept, and the decision by the conscious observer trying to recall and compare percepts presented under suboptimal conditions (very brief presentation times) and decide whether they conform to an extrinsic criterion. (What is the logic of the brief presentation? And why muddy the waters with forced choices? I suspect it's to ensure the necessary "noisiness" of the results.)

      "For 5 out of 10 observers in the covert-criterion task, the exponentially weighted moving-average model fit the best. Of the remaining five observers, one was fit equally well by the exponentially weighted moving-average and the limited-memory models, one was fit best by the Bayesian selection, exponentially weighted moving-average, and the reinforcement learning models, one was fit best by the Bayesian selection and the reinforcement learning models, one was fit best by the exponentially weighted moving-average and reinforcement learning models, and one was best fit by the reinforcement learning model. At the group level, the exceedance probability for the exponentially weighted moving-average is very high (ϕexponential = .95) suggesting that given the group data, it is a more likely model than the alternatives (Table 1). In the overt-criterion task, the exponentially weighted moving-average model fit best for 5 out of 10 observers. Of the remaining five observers, one was fit equally well by the exponentially weighted moving-average and the reinforcement learning models, two were fit best by the reinforcement-learning model, and two were fit best by the limited-memory model. At the group level, the exceedance probability for the exponentially weighted moving-average model (ϕexponential = .78) is higher than the alternatives suggesting that it is more likely given the group data (Table 2)."

      Note how, in the modern conception of vision science practice, failure is not an option; the criterion is simply which of a number of arbitrary models "fits best," overall. Inconsistency with experiment is not cause to reject a "model," as long as other models did worse, or did as well but in fewer cases.

      What is the aim here?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 04, Henri de la Salle commented:

      This review privileges the view that mRNAs are translated in platelets. However, the biological function of mRNAs in platelets is not so clear. Our work does not agree with the conclusions drawn from other studies quoted in this review. We have demonstrated (Angénieux et al., PLoS One. 2016 Jan 25;11(1):e0148064. doi: 10.1371/journal.pone.0148064) that the lifespan of mRNAs and rRNAs in platelets is short, only a few hours. Accordingly, translation activity in platelets decays rapidly, within a few hours. Thus, in vivo, translation of non-mitochondrial mRNAs occurs only in young platelets, which represent a few percent of blood platelets under physiologic conditions. Most of the works reporting translation in platelets should be revisited by quantifying the number of transcripts of interest actually present in platelets. The RNAscope technique is a powerful way to investigate this problem; our work indicated that the most frequent transcript (e.g. beta-actin mRNA) can be detected in most if not all young platelets, but in only a few percent of total blood platelets under homeostatic conditions. Finally, the biological role of translation in young platelets needs to be established using accurate quantification methods, which is not easy with these cells.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 26, Elena Gavrilova commented:

      The initial article with the experimental study can be found here: https://www.ncbi.nlm.nih.gov/pubmed/27769099. The current article provides a detailed response to E.V. Dueva’s concerns regarding the experimental study; moreover, it already contains a response to the concerns raised in E.V. Dueva’s comment. Both of our articles are fully transparent, so readers have the opportunity to familiarize themselves with the study results and the detailed response to the concerns, and to form their own opinion.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 21, Hans Bisgaard commented:

      Thank you for your interest in our study. We will be pleased to address any questions or comments in the proper scientific manner, where you submit these to the journal as a Letter to the Editor.

      Sincerely

      Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke COPSAC, Copenhagen, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 29, Martijn Katan commented:

      My colleague Paul Brand and I published a Dutch-language comment on this paper in the Dutch Medical Journal. The English abstract is below and on www.ncbi.nlm.nih.gov/pubmed/28635579.

      Abstract: Taking fish oil supplements in the third trimester of pregnancy was associated with significantly less wheezing or asthma in the child at the age of 3-5 years, according to a randomized clinical trial by Bisgaard et al., NEJM 2017. However, the results of this study should be interpreted with caution. The primary end points were modified at a late stage in the study, and two primary end points, eczema in the first 3 years of life and allergic sensitization at 18 months of age, were demoted to secondary end points, and showed no significant effect of treatment. Furthermore, the age range for the published primary end point, persistent wheeze, differed from that in the protocol. Additional concerns include the emphasis on outcomes by omega-3 fatty acid levels in the blood, a post hoc subgroup analysis not included in the protocol. In our opinion, this study does not justify advising routine fish oil supplements in pregnancy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jul 25, Konstantinos Fountoulakis commented:

      Nice way to reply without replying to exact and specific questions. You already know that the NEJM editor rejected a letter by me, and as I can see here he has also rejected other similar letters that raised the same questions. These specific questions remain burning, and I mention them again here:

      1. Did you or did you not change the primary outcome after registering the trial and during the study, after the results of some of the subjects were available? (Not in my comments, but it needs a definite answer, which I have not seen so far.)
      2. Did you or did you not include in the paper a primary outcome (3-5 years) different from the one you had registered in the protocol (0-3 years), while specifically stating in the paper that this was the primary outcome of the study? Is 0-3 identical to 3-5?

      I have no way of publishing this as a letter to the editor; I have already tried. To make things worse, the reply letter says (verbatim) that 'As clearly stated in the article the primary outcome was extended to distinguish wheezing children from asthmatic children'. I hope you will respond to the above issues and clarify the problem once and for all.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Jul 21, Hans Bisgaard commented:

      Thank you for your interest in our study. We will be pleased to address any questions or comments in the proper scientific manner, where you submit these to the journal as a Letter to the Editor.

      Sincerely

      Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke COPSAC, Copenhagen, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Apr 20, Konstantinos Fountoulakis commented:

      You do not like my criticism, and your reply is insulting. However, https://clinicaltrials.gov/ct2/show/NCT00798226 says verbatim: 'Primary Outcome Measures: Persistent wheeze 0 to 3 years of age [ Time Frame: 3 years ]'

      In the paper you say (again verbatim): 'Primary End Point: During the prespecified, double-blind follow-up period, which covered children from birth to between 3 and 5 years of age, 136 of 695 children (19.6%) received a diagnosis of persistent wheeze or asthma, and this condition was associated with reduced lung function by 5 years of age, with parental asthma, and with a genetic risk of asthma'

      '0 to 3' and '0 to between 3 and 5' are quite different.

      I would appreciate an answer on this. By the way, Facebook is also good at disseminating scientific findings.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 Apr 16, Hans Bisgaard commented:

      Reply to comment from Konstantinos Fountoulakis:

      We think that the tone and scientific level of this correspondence are inappropriate for a scientific discussion and rather resemble a Facebook discussion. The primary outcome reported in the paper is identical to the registered primary outcome of asthmatic symptoms during the prespecified, double-blind follow-up period until the youngest child turned 3 years of age. This primary outcome does not include any “unblinded” observation period. The definition of the primary outcome was predefined based upon a previously published algorithm using diary-registration of asthma symptoms and a predefined treatment algorithm, and the statistical model (survival analysis by Cox regression) was also predefined. As evident from the paper, the analyses related to further follow-up until the youngest child turned 5 years of age, as requested by NEJM, are clearly reported separately as the results of a "continued follow-up period”.

      Sincerely

      Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke

      COPSAC, Copenhagen, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2017 Apr 12, Konstantinos Fountoulakis commented:

      The paper reports that high-dose supplementation with n−3 LCPUFA in the third trimester of pregnancy reduces the incidence of wheezing in the offspring [1]. However, the primary outcome as registered [2] is incidence at 3 years, while in the paper it is erroneously reported as incidence between 3 and 5 years. This is highly problematic and raises a number of issues. Any changes to the protocol or to the way results are presented should have been made clear in the manuscript; any other way of presenting the results and conclusions is problematic. It is not acceptable that the NEJM asked for an extension of the primary outcome; this could have been added as an additional post-hoc analysis. The results concerning the real primary outcome are not reported, but they are probably negative, taking into consideration Figure 1 and the marginal significance (p=0.03) at year 5. Furthermore, the trial became gradually single-blinded after year 3, which makes conclusions problematic. In conclusion, the paper clearly violates the CONSORT statement [3], is probably negative concerning the primary outcome (in accord with the negative secondary outcomes), and is written in a misleading way.

      1. Bisgaard H, Stokholm J, Chawes B, Vissing N, Bjarnadóttir E, Schoos A et al. Fish Oil–Derived Fatty Acids in Pregnancy and Wheeze and Asthma in Offspring. New England Journal of Medicine. 2016;375(26):2530-2539.
      2. ClinicalTrials.gov [Internet]. Bethesda (MD): National Library of Medicine (US). 2000 Feb 29 - . Identifier NCT00798226, Fish Oil Supplementation During Pregnancy for Prevention of Asthma, Eczema and Allergies in Childhood; 2008, Nov 25 [cited 2017 Jan 8]; Available from: https://clinicaltrials.gov/ct2/show/record/NCT00798226
      3. Schulz K, Altman D, Moher D. CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomised Trials. PLoS Medicine. 2010;7(3):e1000251.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    9. On 2017 Jul 21, Hans Bisgaard commented:

      Thank you for your interest in our study. We will be pleased to address any questions or comments in the proper scientific manner, where you submit these to the journal as a Letter to the Editor.

      Sincerely

      Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke COPSAC, Copenhagen, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    10. On 2017 Apr 06, Robert Goulden commented:

      Hi Hans,

      Many thanks for the reply. I really don't mean to sound incriminating or pedantic, but the switching of primary and secondary outcomes is a widespread problem in clinical trials. Anyone looking at the history of changes to the COPSAC registration would, I think, be keen to find out whether that had occurred here.

      You say 'Before unblinding of the trial we became aware that ranking of outcomes in this registration was not clear and we therefore changed this'. By that, do you mean that 'Development of eczema from 0 to 3 years of age' and 'Sensitization at 18 months of age' were mistakenly listed as primary outcomes in the original registration and subsequent revisions (until correction in Feb 2014), when your original intent was for them to be secondary outcomes from the outset? I of course understand how such an error can be made, but I hope you feel this is a reasonable question given the importance of this issue for determining the appropriate statistical significance threshold.

      Rob


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    11. On 2017 Mar 26, Hans Bisgaard commented:

      Reply to question from Robert Goulden

      We must admit that we find this comment very incriminating and with no contribution to a scientific discussion. The primary outcome of our study was always 'wheeze’ (early asthmatic symptoms). Otherwise, we would not have reported it as such, and we doubt that the New England Journal of Medicine would have published it. Similarly, the diagnostic algorithm based upon episodes of 'troublesome lung symptoms' was pre-specified as was the analysis method (risk of developing wheeze analyzed by cox regression) in line with previous studies from our COPSAC birth cohorts. It is correct that wheeze, eczema and allergic sensitization (in that order) were all listed as ‘primary outcomes’ in the initial ClinicalTrials.gov registration. Before unblinding of the trial we became aware that ranking of outcomes in this registration was not clear and we therefore changed this (still unaware of the results of the trial). The only change after unblinding of the trial in relation to the primary outcome was the change in nomenclature to ‘Persistent wheeze or asthma’. This was due to a request from the New England Journal of Medicine of an additional 2 years follow-up from 3 to 5 years of age thereby including an age where we would normally use the term ‘asthma’.

      Sincerely

      Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke

      COPSAC, Copenhagen, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    12. On 2017 Feb 09, Robert Goulden commented:

      Here's a letter I sent to NEJM which they declined to publish. Hopefully the authors can respond here:

      A review of the history of changes on the ClinicalTrials.gov entry (NCT00798226) for Bisgaard et al.’s study raises questions about the selection of their primary outcome and the statistical significance of their positive result.

      When first registered in 2008, the trial had three primary outcomes: development of wheeze, development of eczema, and sensitization. In February 2014, two months before the study completion date, the entry was edited to just have persistent wheeze as the primary outcome, with eczema and sensitisation switched to secondary outcomes. The published study in NEJM shows that persistent wheeze – presented as the sole primary outcome – was the only one of the three original primary outcomes to be statistically significant (P = 0.035).

      Given multiple primary outcomes, an adjustment such as Bonferroni should have been made to the significance threshold: 0.05/3 = 0.017. Accordingly, the effect on wheeze was not statistically significant. Would the authors comment on their selection of the only ‘significant’ primary outcome as their final primary outcome? Were they aware of the study results at this point and did this influence their decision?
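
      The arithmetic behind this objection is simple enough to state explicitly. The following sketch uses only the numbers quoted in the comment above (alpha = 0.05, three registered primary outcomes, reported P = 0.035 for persistent wheeze):

```python
# Bonferroni correction for multiple primary outcomes.
# All numbers are taken from the comment above; this is illustrative only.
alpha = 0.05
n_primary_outcomes = 3
bonferroni_threshold = alpha / n_primary_outcomes  # 0.05 / 3 ≈ 0.0167

p_wheeze = 0.035  # reported P value for persistent wheeze
significant_after_correction = p_wheeze < bonferroni_threshold  # False
```

      Since 0.035 exceeds the corrected threshold of roughly 0.0167, the reported result would not be statistically significant under this adjustment, which is the comment's point.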


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 06, Cicely Saunders Institute Journal Club commented:

      This paper was reviewed in the Cicely Saunders Institute Journal Club on 1st March 2017.

      The paper reports on the independent associations of income, education and multimorbidity with aggressiveness of end of life care, using rich data from the Health and Retirement Study (HRS) linked to the National Death Index (NDI) and Medicare data. We enjoyed discussing this paper and agree with the authors about the importance of understanding social determinants alongside clinical determinants of care at the end of life. We liked the measure of multimorbidity used, comprising items related to comorbidity, functional limitations and geriatric syndromes, and thought this comprehensive approach was useful in this population. We were not sure why the sample was limited to fee-for-service patients and whether this may have disproportionately excluded some socio-economic groups. As a non-US audience we would have welcomed some further justification for restricting the sample in this way and discussion of potential limitations, perhaps using a CONSORT diagram to explain the steps. We enjoyed the presentation of the bivariate associations in the bar charts, helping us to understand the U- and J-shaped relationships between some of the variables. More information about what exactly the income variable was capturing (i.e. including pensions or not, and whether the household income was total or averaged across the number of people in the household) would have been useful. We also felt the race variable was broad and interpretation of the results would have benefited from more refined categories. Overall the paper sparked a good discussion about the importance of measuring social determinants and illness- and function-related factors in end of life populations and how best to capture these.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 29, Rashmi Das commented:

      We thank Harri for his PERSONAL (NON PEER REVIEWED) OPINION, which is available at the above HANDLE (http://hdl.handle.net/10138/153180) and CONTAINS A DIRECT COPY AND PASTE OF THREE FIGURES/IMAGES FROM OUR PREVIOUS PUBLICATIONS (JAMA 2014 and Cochrane 2013). We are happy to reply to the above comments made by Harri. First, regarding the Cochrane review which was withdrawn in 2015: the detailed report is already available at the following link (http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001364.pub5/abstract). This report is the collaborative observation and conclusion of the Cochrane editors (UNLIKE THE HANDLE, WHICH CONTAINS MORE OF A PERSONAL OPINION AND HAD ALREADY BEEN EXAMINED BY THE COCHRANE EDITORS BEFORE THEY REACHED THEIR CONCLUSION). The same HANDLE WAS SENT TO THE JAMA EDITORS REGARDING THE JAMA CLINICAL SYNOPSIS (PUBLISHED IN 2014), AND HARRI REQUESTED THE EDITORS TO CARRY OUT AN INVESTIGATION AND VERIFY. THE EDITORS ASKED US FOR A REPLY, WHICH WE PROVIDED IN A POINT-TO-POINT MANNER (BOTH THE COMMENT BY HARRI AND OUR REPLY WERE PUBLISHED; SEE BELOW). HAD THE COMMENT/REPORT BY HARRI BEEN ENTIRELY CORRECT, THE JAMA EDITORS COULD HAVE STRAIGHTAWAY RETRACTED/WITHDRAWN THE SYNOPSIS WITHOUT PUBLISHING THE COMMENT AND REPLY (both are available at the following: https://www.ncbi.nlm.nih.gov/pubmed/26284729; https://www.ncbi.nlm.nih.gov/pubmed/26284728). IT HAS TO BE MADE CLEAR THAT THE JAMA SYNOPSIS (DAS 2014) WAS WITHDRAWN BECAUSE THE SOURCE DOCUMENT ON WHICH IT WAS BASED (THE COCHRANE 2013 REVIEW) WAS WITHDRAWN (NOT BECAUSE OF THE REPORT IN THE HANDLE, WHICH IS A PERSONAL, NON PEER REVIEWED OPINION). The irony is that although HARRI'S COMMENT was published as a LETTER TO THE EDITOR in JAMA together with OUR REPLY, the NON PEER REVIEWED HANDLE THAT CONTAINS A DIRECT COPY OF THREE FIGURES/IMAGES FROM OUR PUBLICATIONS IS STILL BEING PROPAGATED.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 24, Harri Hemila commented:

      Background of the retraction

      Concerns were expressed about unattributed copying of text and data, and about numerous other problems in the Cochrane review “Zinc for the Common Cold” by Singh M, 2013. Details of the concerns are available at: http://hdl.handle.net/10138/153180.

      The Cochrane review was withdrawn, see Singh M, 2015.

      The JAMA summary of the Cochrane review by Das RR, 2014 had numerous additional problems of its own.

      Detailed description of problems in Das RR, 2014 are available at http://hdl.handle.net/10138/153617.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 15, Farrel Buchinsky commented:

      How can I find out more about "well-newborn care in the inpatient setting"? It was $12,000 and almost exclusively associated with uncomplicated delivery. In other words, preterm birth complications, neonatal encephalopathy, and other neonatal disorders were not the major contributors. I do not understand that. What is being charged? Is the labor and delivery being charged to the mother or the child, or being split equally between them? Surely it cannot be the "room" charge for hanging out in the newborn nursery for 2 days?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 03, Peter Hajek commented:

      The concerns and warnings about 'dual use' are not justified. There is no evidence that dual use of cigarettes and e-cigarettes poses any additional risks, on the contrary. The available evidence suggests that dual use of cigarettes and e-cigarettes has the same or better effect than dual use of cigarettes and nicotine replacement treatments (NRT) that is the basis for licensing NRT for 'cut down to quit' use. It reduces smoke intake (and therefore toxin intake - even of chemicals present in both products, such as acrolein); and increases the chance of quitting smoking later on. The evidence we have up to now leaves no doubt that smokers should be informed truthfully about the risk differential and encouraged to switch to vaping.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 03, Donald Forsdyke commented:

      ASSUME A SPHERICAL COW?

      Following a multidisciplinary study of milk production at a dairy farm, a physicist returned to explain the result to the farmer. Drawing a circle she began: "Assume the cow is a sphere … ." (1) This insider math joke may explain Koonin’s puzzlement that "most biologists do not pay much attention to population genetic theory" (2).

      The bold statement that "nothing in evolution makes sense except in the light of population genetics," cannot be accepted by biologists when evolution is portrayed in terms of just two variables, "an interplay of selection and random drift," constituting a "core theory." While mathematical biologists might find it "counterintuitive" that "the last common eukaryotic ancestor had an intron density close to that in extant animals," this is not necessarily so for their less mathematical counterparts. They are not so readily inclined to believe that an intron "is apparently there just because it can be" (3).

      While expediently adopting "null models" to make the maths easier, population geneticists are not "refuted by a new theoretical development." They have long been refuted by old theoretical developments, as illustrated by the early twentieth century clash between the Mendelians and the Biometricians (4). It is true that by adjusting "selection coefficient values" and accepting that "streamlining is still likely to efficiently purge true functionless sequences," the null models can more closely approximate reality. But a host of further variables – obvious to many biologists – still await the acknowledgement of our modern Biometricians.

      1.Krauss LM (1994) Fear of Physics: A Guide for the Perplexed. Jonathan Cape, London.

      2.Koonin EV (2016) Splendor and misery of adaptation, or the importance of neutral null for understanding evolution. BMC Biology 14:114 Koonin EV, 2016

      3.Forsdyke DR (2013) Introns First. Biological Theory 7, 196-203.

      4.Cock AG, Forsdyke DR (2008) "Treasure Your Exceptions." The Science and Life of William Bateson. Springer, New York.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 29, Christopher Southan commented:

      BIA 10-2474 is now available from vendors (see https://cdsouthan.blogspot.se/2016/01/molecular-details-related-to-bia-10-2474.html). Experimental verification of the predictions in this paper is thus awaited with interest.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 29, Lydia Maniatis commented:

      The title of this article indicates that the authors may have something to say about “lightness perception for surfaces moving through different illumination levels”, but leaves us in the dark as to what that might be.

      The abstract isn’t much more illuminating. The somewhat vague message seems to be that the perceived lightness of a patch in the visual field depends on the structure of the “light field,” the “choice of fixation positions,” and whether the scene is viewed freely or not, and that “eye movements in [dynamic scenes and nonuniform light fields] are chosen to improve lightness constancy.”

      Unfortunately and fatally absent from the terms of the discussion is any reference to shape. Yet shape (i.e. the organization (segregation/unification, 3D interpretation), via the visual process, of the points in the retinal projection into perceived forms) is the only available means to the goal of creating percepts of lightness, as well as of the relative illumination of surfaces. This is obvious with respect to the authors' stimuli, which are images on a computer screen. The luminance structure of the light-emitting points on that screen is the only information the visual system has to work with, and unless those points are grouped and boundaries and depth relations inferred, there is no basis for designating continuous surfaces, their lightness, or their relative illumination. Whether areas of the visual field are interpreted as changing in reflectance or illumination is contingent on which parts of the field are eligible to be grouped into perceived physical units with a homogeneous surface.

      In other words: When the luminance of a surface in a part of the visual field changes, (e.g. from lighter to darker), the change may be interpreted as being due to a change in illumination of a surface in that location, a change in the color of the surface at that location, the presence of a fog overlying the surface at that location, etc., or a combination of these possibilities. How is the solution (the percept) arrived at? For example, at the lower left side of Toscani et al’s (2016) Figure 1, an edge between a dark area (the “wall)” and a lighter area (the “side of a cube”) to its right is perceived as a lightening in terms of both perceived illumination and perceived reflectance) while a change from same lighter area to a darker area to its right is seen as a change in illumination only. The reason is structural, based on the very principles of organization not mentioned by the authors.

      The consequence of the failure to consider principles of organization in any study of lightness perception is that ANY resulting claims can be immediately falsified. It is impossible to predict how a surface will look when placed in any given location in the visual field by referring only to the distribution of incident illumination, since this information doesn’t in the least allow us to predict luminance structure. And a description of luminance structure doesn’t help us if we don’t consider visual principles of organization. The former fact should be particularly obvious to people using uniformly illuminated pictorial stimuli, whether on a page or on a screen, which produce impressions of non-uniform illumination. Like reflectance, the perception of illumination is constructed, it isn’t an independent variable for vision; so it makes no sense, in the context of perception experiments, to refer to it as though it is – as the authors do in the phrase “moving through different illumination levels” - especially if we aren’t even talking about actual illumination levels, but only visually-constructed ones! The perception of changing illumination levels is the flip side to the perception of unchanging surfaces, and vice versa. Like lightness, perceived illumination is dependent on principles of organization, starting with figure/ground segregation.

      So, for example, when the authors say that the brightest parts of a (perceived) surface’s luminance distribution are “an efficient…heuristic for the visual system to achieve accurate…judgments of lightness,” we can counter (falsify) with the glare illusion (http://www.opticalillusion.net/optical-illusions/grey-glow-illusion-the-glare-effect/), in which the brightest area is not perceived as the plain-view color of the surface, which appears black and obscured by a glare or bright fog.

      With respect to eye movements and fixation: It seems to be the case that fixations are the product, not the cause, of perceptual solutions. For example, it has been shown that while viewing the Muller-Lyer illusion, eye movements trace a longer path when we’re looking at the apparently longer figure and vice versa. Another problem with the claim that eye movements have a causal role by sampling “more relevant” parts of the field is that all parts of the field are taken into account in the generation of a percept, e.g. in order for the visual system to conclude that a particular patch is the lightest part of a homogeneously-colored but differently-illuminated physical unit, rather than a differently colored patch on a different unit. Since the perceived relative lightness/illumination of that particular patch is related to the perceived lightness/illumination of the whole visual field, isolating that patch by fixation can’t be uniquely informative. As we know, reduction conditions can transform the perception of surfaces.

      I would note that the emphasis on “lightness constancy” rather than “principles of lightness perception” is common but ill-conceived. With respect to understanding perception, understanding lightness constancy is no more informative than understanding lightness inconstancy. (For a great example, complete with movement, of lightness INconstancy, see https://www.youtube.com/watch?v=z9Sen1HTu5o). In either case, what is constant are the underlying perceptual principles; to understand one effect is to understand the other. This is another reason the claim that eye movements are chosen “to improve lightness constancy” is ill-conceived. Only an all-knowing homunculus can know, a priori, which areas of the visual field represent stimulation from physical surfaces with constant reflectance x, which represent physical surfaces obstructed by fog or in shadow, which areas represent physical surfaces that are actually changing in their light reflecting properties (a squid, for example - do we want to improve his or her constancy?), etc. The visual system has to go where the evidence goes, as interpreted via the evolved process. This process achieves veridicality – e.g. seeing surface properties as unchanging when they’re unchanging, and as changing when they’re changing - in typical conditions.

      Ironically, observers in Toscani et al’s (2016) experiments are not perceiving surfaces veridically, since, for example, parts of the screen surface that are actually varying in their color are perceived as unchanging in that respect, while, correspondingly, they are seen (incorrectly) as experiencing changing illumination. So we’re actually talking about the converse of lightness constancy; the authors are equating a physical surface that is unchanging in its light-reflecting properties, but experiencing changing illumination, with a surface (a section of the screen) that is actually changing in its light reflecting/emitting properties independently of incident illumination, which is constant. In the former case, seeing the surface as unchanging parallels the physical situation, while in the latter case it is opposite to the physical situation. Calling both situations examples of “lightness constancy” only confuses the issue, which is: “Why does a retinal projection with a particular luminance structure result in patch x looking the way it does.” The question, again, cannot be answered reliably without invoking principles of organization, i.e. the consequences of that luminance structure for perceived shape.

      Short version: Shape.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 04, Lydia Maniatis commented:

      This article belongs to the popular "transparent brain" school of thought. (The label is inspired by Graham (1992); see comment https://pubpeer.com/publications/8F9314481736594E8D58E237D3C0D0).

      That is, certain visual scenes selectively tap neurons at particular levels of the visual system, such that by analyzing the percept we can draw conclusions about the behavior of groups of neurons at that level.

      Teller (1984) called this view the "nothing mucks it up proviso," referring to the fact that it assumes all other parts of the hierarchically-organized, complicatedly interconnected visual system play no role in the particular effect of interest.

      The untenable transparent brain fiction is compromised even further by the "simple" stimuli that are supposed to enable the transparent view into V1 etc., as they actually elicit highly sophisticated 3D percepts, including effects such as the perception of light and shadow and fog/transparency. Of course, these perceptual features are mediated by the activity of V1 (etc.) neurons. But the factors the investigators reference - orientation here, often contrast - are somehow supposed to retain their power to reflect only the behavior of V1 (or whatever level particular investigators are claiming to isolate and "model").


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 23, Scott D Slotnick commented:

      There has been a call for peer commentary on the Editorial/Discussion Paper (Slotnick SD, 2017) in the journal Cognitive Neuroscience (due February 13th, 2017). The Editorial/Discussion Paper, Commentaries, and an Author Response will be published in an issue of Cognitive Neuroscience later this year.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 04, Gerard Ridgway commented:

      This editorial suggests that the problems identified by Eklund A, 2016 arise solely from the use of resting-state data in place of null data. It seems to overlook the fact that Eklund A, 2016 use randomly timed designs (two blocked designs and two event-related), meaning the unknown timecourse of default mode network activity cannot consistently give rise to design-synchronised activation (except perhaps in the special case of the initial transients, which is mentioned briefly in an article by Flandin and Friston, 2016, but probably warrants further investigation). On the other hand, the activity of the DMN could perhaps be contributing to non-Gaussianity of the residuals and/or a more complex spatial autocorrelation function (ACF) than is typically modelled, but these aspects seem not to be mentioned, or to be addressed in the author's recommended simulation approach (which seems to be very similar to the original AlphaSim from AFNI).

      Regarding the spatial ACF in particular, but also the issue of the cluster-defining threshold (CDT), this article should be contrasted with Cox et al., 2016, which recommends a new long-tailed non-Gaussian ACF available in newer versions of AlphaSim, together with a CDT of 0.001 or below.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 08, Chris Del Mar commented:

      Has this trial report hidden the results -- that symptomatic outcomes are clinically almost identical -- in plain sight? See this blog that plots the results more transparently:-

      http://blogs.bmj.com/bmj/2017/02/08/how-to-hide-trial-results-in-plain-sight/?utm_campaign=shareaholic&utm_medium=twitter&utm_source=socialnetwork

      Chris Del Mar cdelmar@bond.edu.au Paul Glasziou pglaszio@bond.edu.au


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 24, Dorothy V M Bishop commented:

      It is a pleasure to see this paper, which has the potential to transform the field of ERP research by setting new standards for reproducibility.

      I have one suggestion to add to those already given for reducing the false discovery rate in this field, and that is to include dummy conditions where no effect is anticipated. This is exactly what the authors did in their demonstration example, but it can also be incorporated in an experiment. We started to do this in our research on mismatch negativity (MMN), inspired by a study by McGee et al 1997; they worked at a time when it was not unusual for the MMN to be identified by 'experts' – and what they showed was that experts were prone to identify MMNs when the standard and deviant stimuli were identical. We found this approach – inclusion of a 'dummy' mismatch – invaluable when attempting to study MMN in individuals (Hardiman and Bishop, 2010). It was particularly helpful, for instance, when validating an approach for identifying time periods of significant mismatch in the waveform.

      Another suggestion is that the field could start to work more collaboratively to address these issues. As the authors note, replication is the best way to confirm that one has a real effect. Sometimes it may be possible to use an existing dataset to replicate a result, but data-sharing is not yet the norm for the field – journals could change that by requiring deposition of the data for published papers. But, more generally, if journals and/or funders started to require replications before work could be published, then one might see more reciprocal arrangements, whereby groups would agree to replicate each other's findings. Years ago, when I suggested this, I remember some people said you could not expect findings to replicate because everyone had different systems for data acquisition and processing. But if our data are specific to the lab that collected them, then surely we have a problem.

      Finally, I have one request, which is that the authors make their simulation script available. My own experience is that working with simulations is the best way to persuade people that the problems you have highlighted are real and not just statistical quibbles, and we need to encourage researchers in this area to become familiar with this approach.

      Bishop, D. V. M., & Hardiman, M. J. (2010). Measurement of mismatch negativity in individuals: a study using single-trial analysis. Psychophysiology, 47, 697-705 doi:10.1111/j.1469-8986.2009.00970.x

      McGee, T., Kraus, N., & Nicol, T. (1997). Is it really a mismatch negativity? An assessment of methods for determining response validity in individual subjects. Electroencephalography and Clinical Neurophysiology, 104, 359-368.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 03, Daniel Schwartz commented:

      The BC Cardiac Surgical Intensive Care Score is available online or via the Calculate mobile app for iOS, Android and Windows 10 at https://www.qxmd.com/calculate/calculator_36/bc-cardiac-surgical-intensive-care-score

      Conflict of interest: Medical Director, QxMD


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 23, Alessandro Rasman commented:

      Bernhard HJ. Juurlink MD, Giovanni Battista Agus MD, Dario Alpini MD, Maria Amitrano MD, Giampiero Avruscio MD, Pietro Maria Bavera MD, Aldo Bruno MD, Pietro Cecconi MD, Elcio da Silveira Machado MD, Miro Denislic MD, Massimiliano Farina MD, Hector Ferral MD, Claude Franceschi MD, Massimo Lanza MD, Marcello Mancini MD, Donato Oreste MD, Raffaello Pagani MD, Fabio Pozzi Mucelli MD, Franz Schelling MD, Salvatore JA Sclafani MD, Adnan Siddiqui MD, PierluigI Stimamiglio MD, Arnaldo Toffon MD, Antonio Tori MD, Gianfranco Vettorello MD, Ivan Zuran MD and Pierfrancesco Veroux MD

      We read with interest the study titled "Free serum haemoglobin is associated with brain atrophy in secondary progressive multiple sclerosis" (1). Dr. Zamboni first outlined the similarities between impaired venous drainage in the lower extremities and MS in 2006 in his "Big Idea" paper (2). Chronic venous insufficiency can cause a breakdown of red blood cells, leading to increased levels of free hemoglobin. Exactly what the London researchers saw 11 years later.

      References:
      1) Lewin, Alex, et al. "Free serum haemoglobin is associated with brain atrophy in secondary progressive multiple sclerosis." Wellcome Open Research 1-10 (2016).
      2) Zamboni, Paolo. "The big idea: iron-dependent inflammation in venous disease and proposed parallels in multiple sclerosis." Journal of the Royal Society of Medicine 99.11 (2006): 589-593.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 27, Zvi Herzig commented:

      Prolonged exposures of oropharyngeal tissue submerged in refill liquids are a poor comparison to brief exposures from accidents.

      The constituents of EC liquids other than nicotine (glycerol, propylene glycol and food flavorings) are GRAS approved in relation to oral consumption. It's therefore unlikely that these would pose a particular hazard in relation to oral cancer.

      Likewise, with regards to nicotine, epidemiology of prolonged oral exposure in relation to snus is not linked to oral cancer either Lee PN, 2011.

      Thus none of the known e-liquid constituents are plausibly related to oral cancer, which supports the above conclusion that the study's results are unrelated to normal exposures.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 03, Christian Welz commented:

      We discussed that issue in the publication: "Because most EC users refill their cartridges by themselves, and incidental or accidental contact is logical and described (Varelt et al., 2015; Vakkalanka et al., 2014) we intentionally used unvapored liquids for our experiments...."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Dec 20, Zvi Herzig commented:

      This study directly exposes cells to liquids, despite the fact that users are exposed to the vapor, not the liquid. This issue has been noted previously Hajek P, 2014, Farsalinos KE, 2014. The ~3 ml of liquid which EC users consume daily Farsalinos KE, 2014 is diluted by much air over hundreds of puffs. This is incomparable to the direct exposures to liquids in the present study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 09, M Mangan commented:

      There has now been an EFSA review of the paper, with an eye towards regulatory aspects in the EU. They describe the work as incomplete and with "severe shortcomings".

      http://onlinelibrary.wiley.com/doi/10.2903/sp.efsa.2017.EN-1249/abstract


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 03, M Mangan commented:

      There have now been some really good summaries of issues with this work.

      Is GM corn really different to non-GM corn? http://sciblogs.co.nz/code-for-life/2016/12/31/gm-corn-really-different-non-gm-corn/

      What are isogenic lines and why should they be used to study GE traits? http://themadvirologist.blogspot.com/2017/01/what-is-isogenic-line-and-why-should-it.html

      Another: http://biobeef.faculty.ucdavis.edu/2017/01/03/i_would_appreciate_your_comments/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Dec 22, M Mangan commented:

      The top fold change shown in this paper turns out to be a plant pathogen protein that might have been affecting the maize. http://www.uniprot.org/uniprot/W7LNM5

      If that is the case, it may demonstrate the power of -omics in revealing that the claims of differences have to be evaluated very carefully. They may not be what the authors claim they are.

      We await some explanation of this large difference between samples from the authors.

      Edit to add: The author Mesnage asked me to post questions at the journal site, but is not coming over to answer them. Maybe the authors will find them here, so I'll also add them here as well.

      There's a lot of nonsense drama below now, but I want to hear from the authors (Robin Mesnage asked me to post here, but I can't see if he's responding):

      1. What is your explanation for the fact that top fold-change proteins in your data set are fungal proteins (and it's a known maize pathogen)?

      2. Are you aware that fungal contamination could result in similar changes in regards to the pathway changes that you describe? Did you consider this at all? Why didn't you address this in your paper?

      3. If you wish to dismiss your own top reported proteins, how can you stand by the importance of the fold-change claims you are making about other proteins?

      Thanks for your guidance on this. It's very perplexing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 10, Jesper M Kivelä commented:

      The URL in the reference number 2 is invalid (i.e. html is missing in the end). This error slipped through my (evidently not so) pedantic eyes at proof stage.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 15, Sin Hang Lee commented:

      Correspondence submitted to Nat.Rev.Dis.Primers

      In their recent Primer [Lyme borreliosis. Nat. Rev. Dis. Primers 2, 16091 (2016)] Allen Steere and colleagues described Lyme borreliosis as an important emerging infectious disease [1]. The authors assert that the natural history of untreated Lyme borreliosis can be divided into stages 1, 2 and 3, and that the early stage 1 infections can be treated successfully with a 10–14 day course of antibiotics. However, the authors also stated that demonstration of borrelial infection by laboratory testing is required for reliable diagnosis of Lyme borreliosis, with the exception of erythema migrans and that serodiagnostic tests are insensitive during the first several weeks of infection. If not treated early, “within days to weeks, the strains of B. burgdorferi in the United States commonly disseminate from the site of the tick bite to other regions of the body”. In other words, the authors have affirmed that if reliably diagnosed at the early stage of the infection, Lyme borreliosis can be cured with timely, appropriate antibiotics to prevent deep tissue damage along with its associated clinical manifestations resulting from host immune response to various spirochetal products or components. In the Outlook Diagnostic tests section of the article, the authors failed to mention the fact that currently the diagnosis of emerging infectious diseases largely depends on finding evidence of the causative agent in the host by nucleic acid-based tests [2], not serodiagnostic tests which usually turn positive only during convalescence. The authors seem to advise the medical practitioners to not treat Lyme disease patients until the proliferating spirochetes in the host have elicited certain immune responses which can be confirmed by serologic tests. Such practice should not be accepted or continued for obvious reasons.

      The authors stated “After being deposited in the skin, B. burgdorferi usually multiplies locally before spreading through tissues and into the blood or lymphatic system, which facilitates migration to distant sites.” This statement acknowledges that spirochetemia is an early phase in the pathogenesis of Lyme borreliosis. But under the Diagnostic tests section, the polymerase chain reaction (PCR) test was mentioned only for synovial fluid of patients with late Lyme arthritis and for cerebrospinal fluid (CSF) of patients with late neuroborreliosis. To refute the usefulness of DNA testing for Lyme disease diagnosis, the authors cited a study which showed that the borrelial DNA detected in synovial fluid of Lyme arthritis patients came from moribund or dead spirochetes [3]. However, the authors failed to discuss the significance of detection of borrelial DNA in the diagnosis of spirochetemia. The authors failed to acknowledge that even the finding of moribund or dead borrelial cells circulating in the blood is diagnostic of an active infection. Free foreign DNA is degraded and eliminated from the mammalian host’s blood within 48 hours [4]. Detection of any borrelial DNA validated by DNA sequencing is indicative of a recent presence of spirochetes, dead or alive, in the circulating blood, which is evidence of an active infection beyond a reasonable doubt.

      It seems unfortunate for many current Lyme disease patients that Lyme arthritis was described before the era of Sanger sequencing and PCR [5]. If Lyme borreliosis were discovered as an emerging infectious disease today, Lyme disease would probably be routinely diagnosed using a highly accurate nucleic acid amplification test, as reiterated by Dr. Tom Frieden, director of the Centers for Disease Control and Prevention (CDC) for Zika virus infection [6], or by the European Centre for Disease Prevention and Control for the case definition of Ebola virus infection [7]. Now there is evidence that clinical “Lyme disease” in the United States may be caused by B. miyamotoi [8-10], co-infection of B. burgdorferi and B. miyamotoi [9], a novel CDC strain (GenBank ID# KM052618) of unnamed borrelia [10], and a novel strain of B. burgdorferi with two homeologous 16S rRNA genes [11]. The Lyme disease patients infected with these less common strains of borreliae may have negative or non-diagnostic two-tiered serology test results. Neither erythema migrans nor serologic test is reliable for the diagnosis of Lyme disease. In one summer, the emergency room of a small hospital in Connecticut saw 7 DNA sequencing-proven B. burgdorferi spirochetemic patients. Only three of them (3/7) had a skin lesion and only one (1/7) had a positive two-tiered serologic Lyme test [12].

      After a 40-year delay, the medical establishment should begin to diagnose “Lyme disease” as an emerging infectious disease by implementing nucleic acid-based diagnostic tests in the Lyme disease-endemic areas. A national proficiency test program to survey the competency of diagnostic laboratories in detecting various pathogenic borrelia species is urgently needed for stimulating diagnostic innovation. We should treat the borrelial infection of “Lyme disease” to reduce its autoimmune consequences, just like treating streptococcal infection early to reduce the incidence of rheumatic heart disease in the past.

      Allen Steere and colleagues have written a prescription to treat Lyme borreliosis in their lengthy article raising numerous questions [1], but paid little attention to the issue of how to select the patients at the right time for the most effective treatment. For the physicians managing current and future Lyme disease patients, a sensitive and no-false positive molecular diagnostic test is a priority, also the most important issue for the patients that Allen Steere and his colleagues have simply glossed over.

      Conflict of Interest: Sin Hang Lee is the director of Milford Molecular Diagnostics Laboratory specialized in developing DNA sequencing-based diagnostic tests for community hospital laboratories.

      References
      1. Steere, A.C. et al. Lyme borreliosis. Nat. Rev. Dis. Primers 2, 16091 (2016).
      2. Olano, J.P. & Walker, D.H. Diagnosing emerging and reemerging infectious diseases: the pivotal role of the pathologist. Arch. Pathol. Lab. Med. 135, 83-91 (2011).
      3. Li, X. et al. Burden and viability of Borrelia burgdorferi in skin and joints of patients with erythema migrans or Lyme arthritis. Arthritis Rheum. 63, 2238–2247 (2011).
      4. Schubbert, R. et al. Foreign (M13) DNA ingested by mice reaches peripheral leukocytes, spleen, and liver via the intestinal wall mucosa and can be covalently linked to mouse DNA. Proc. Natl. Acad. Sci. U. S. A. 94, 961-966 (1997).
      5. Steere, A. C. et al. Lyme arthritis: an epidemic of oligoarticular arthritis in children and adults in three Connecticut communities. Arthritis Rheum. 20, 7–17 (1977).
      6. Frieden T. Transcript for CDC Telebriefing: Zika Update. https://www.cdc.gov/media/releases/2016/t0617-zika.html (2016).
      7. ECDC. Ebola virus disease case definition for reporting in EU. http://ecdc.europa.eu/en/healthtopics/ebola_marburg_fevers/EVDcasedefinition/Pages/default.aspx#sthash.LvKojQGu.wf5kwZDT.dpuf (last accessed 2016).
      8. Jobe, D.A. et al. Borrelia miyamotoi Infection in Patients from Upper Midwestern United States, 2014-2015. Emerg. Infect. Dis. 22, 1471-1473 (2016).
      9. Lee, S.H. et al. Detection of borreliae in archived sera from patients with clinically suspect Lyme disease. Int. J. Mol. Sci. 15, 4284-4298 (2014).
      10. Lee, S.H. et al. DNA sequencing diagnosis of off-season spirochetemia with low bacterial density in Borrelia burgdorferi and Borrelia miyamotoi infections. Int. J. Mol. Sci. 15, 11364-11386 (2014).
      11. Lee, S.H. Lyme disease caused by Borrelia burgdorferi with two homeologous 16S rRNA genes: a case report. Int. Med. Case Rep. J. 9, 101-106 (2016).
      12. Lee, S.H. et al. Early Lyme disease with spirochetemia - diagnosed by DNA sequencing. BMC Res. Notes. 3, 273 (2010).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 28, Marcia Herman-giddens commented:

      While there are many aspects of this review paper by Steere, et al, which beg for comment, I focus on the erythema migrans rash (EM). Steere, et al, state that “erythema migrans is the presenting manifestation of Lyme borreliosis in ~80% of patients in the United States” based on their 2003 paper. It is unclear from that paper exactly how this figure was obtained. As far as I know, there has never been a well-designed study to examine this issue.

      I was pleased to see Figure 5 showing photographs of EM rashes with their more accurate solid red appearance. Research has shown that, contrary to popular belief (likely because of the promotion of the so-called ‘target or bull’s-eye’ type of lesion), most EMs are solid red. As stated by Shapiro in 2014 in the NEJM, “Although reputed to have a bull’s-eye appearance, approximately two thirds of single erythema migrans lesions either are uniformly erythematous or have enhanced central erythema without clearing around it.” Later, some may have central clearing. The CDC estimates “70-80%” of Lyme disease patients have an EM rash and calls the picture on its webpage “classic” even though it shows a bull’s-eye or target type lesion.

      One outcome of this misrepresentation as a bull’s-eye or target lesion is that patients with the more common solid EM rash may not present to their medical provider in a timely manner, thinking that it does not represent possible Lyme disease. I know of several cases where this happened and the patients went on to develop late Lyme disease. Aucott et al, in their 2012 paper, “Bull’s-Eye and Nontarget Skin Lesions of Lyme Disease: An Internet Survey of Identification of Erythema Migrans,” found that many of the general public participants were familiar with the classic target-type erythema migrans lesion but only 20.5% could correctly identify the nonclassic erythema migrans. In addition, many health care providers are not well trained in the recognition of EM rashes. In a case series by Aucott et al. in 2009, among Lyme disease patients presenting with a rash, the diagnosis of EM was initially missed by providers in 23%.

      The well-known lack of sensitivity in the recommended two-tier test for diagnosis of Lyme disease in early infections and the probability that many EM rashes are misdiagnosed or missed, especially among people living alone or when the rash occurs in the hairline, etc. contribute to the lack of accurate data on the incidence of EM rashes following infection with B. burgdorferi. These factors and others affect the collection of accurate data on the proportion of patients newly infected with B. burgdorferi who do develop erythema migrans and suggest that the true incidence is likely lower than 70-80%.

      Steere et al. Lyme borreliosis. Nat Rev Dis Primers. 2016 Dec 15;2:16090. doi: 10.1038/nrdp.2016.90.
      Steere AC, Sikand VK. The presenting manifestations of Lyme disease and the outcomes of treatment. New England Journal of Medicine. 2003 Jun 12;348(24):2472-4.
      Shapiro ED. Lyme disease. New England Journal of Medicine. 2014 May 1;370(18):1724-31.
      www.cdc.gov/lyme/signs_symptoms/
      Aucott JN, Crowder LA, Yedlin V, Kortte KB. Bull’s-Eye and Nontarget Skin Lesions of Lyme Disease: An Internet Survey of Identification of Erythema Migrans. Dermatology Research and Practice. 2012 Oct 24;2012.
      Aucott J, Morrison C, Munoz B, Rowe PC, Schwarzwalder A, West SK. Diagnostic challenges of early Lyme disease: lessons from a community case series. BMC Infectious Diseases. 2009 Jun 1;9(1):1.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Dec 25, Raphael Stricker commented:

      Lyme Primer is Obsolete (Part 1)

      Raphael B. Stricker, Union Square Medical Associates, San Francisco, CA; Lorraine Johnson, LymeDisease.org, Chico, CA. rstricker@usmamed.com; lbjohnson@lymedisease.org

      The Lyme primer by Steere and colleagues presents an overview of the epidemiology, pathogenesis, diagnosis and treatment of Lyme disease. The authors adhere to the dogma and opinions of the Infectious Diseases Society of America (IDSA), and as a result the primer showcases the schizoid nature of the IDSA view of Lyme disease: while the pathogenesis of the disease is highly complex and worthy of a formidable infectious agent, the epidemiology, diagnosis and treatment of the disease is ridiculously simple and rather banal ("hard to catch and easy to cure"). As a result, the primer propagates the myths and misinformation about Lyme disease that have made the IDSA view obsolete and contributed to the unchecked spread of the tickborne disease epidemic around the world. The following points address significant flaws and deficiencies in the primer, with appropriate references.

      There are two standards of care for Lyme disease. One is based on the guidelines of IDSA (Reference 101 in the primer) and the other is based on the guidelines of the International Lyme and Associated Diseases Society (ILADS) (1). The primer adheres to the IDSA guidelines, which are based largely on "expert opinion" (2,3) and were recently delisted by the National Guideline Clearinghouse (NGC) because they are obsolete and fail to meet methodological quality standards for guideline development set forth by the Institute of Medicine (IOM) (1). The NGC recognizes the ILADS guidelines, which were developed using the GRADE methodology endorsed by the IOM (1). Much of the clinical information in the primer is refuted by the ILADS guidelines, as outlined below.

      In the Abstract, the primer states that "All manifestations of the infection can usually be treated successfully with appropriate antibiotic regimens, but the disease may be followed by post-infectious sequelae in some patients." Current evidence from "big data" analysis indicates that 36-63% of patients treated with IDSA-recommended short-course antibiotics may fail this therapy (4-6). The concept of "post-infectious sequelae" ignores the extensive literature on persistent Borrelia burgdorferi (Bb) infection despite antibiotic treatment (1,7).

      The primer states that "Infection through alternate modes of transmission, including transfusion, sexual contact, semen, urine, or breast milk, has not been demonstrated." This is a very strong statement that ignores growing evidence of other modes of Bb transmission, especially via pregnancy and sexual contact (8-10). The primer states that Borrelia does not produce its own matrix degrading proteases. This statement ignores the description of a Bb aggrecanase that plays a role in tissue invasion by the spirochete and probably facilitates chronic infection as well (11,12).

      Neurological syndromes associated with Bb infection are considered "controversial" by IDSA proponents because only hard neurological signs (Bell's palsy, meningoencephalitis) are accepted as significant by that group. In contrast, many Lyme patients have only soft neurological signs (cognitive and memory problems, severe fatigue, neuropathy), and these features of chronic Lyme disease are ignored by the primer authors despite supportive literature (13,14). The concept that neurological and cardiac involvement in Lyme disease resolves spontaneously, even without treatment, promotes a pet IDSA theme that Lyme disease is a trivial illness. This concept is not supported by recent literature that has documented cardiac deaths in untreated patients (15).

      The primer repeats the discredited view that Lyme testing is "virtually 100%" positive after 4-8 weeks of untreated infection. This unreferenced statement ignores the fact that two-tier testing for persistent Bb infection has poor sensitivity (46%) despite excellent specificity (99%) (16,17). The studies that allegedly show high sensitivity of two-tier testing used circular reasoning to arrive at this conclusion: patients were chosen because they had positive Lyme tests, and then they had positive Lyme tests (18). Thus the primer propagates one of the biggest myths about Lyme disease diagnosis instead of acknowledging the dreadful state of 30-year-old Lyme serology and the need for better testing, such as companion and molecular diagnostics.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Dec 25, Raphael Stricker commented:

      Lyme Primer is Obsolete (Part 2)

      Raphael B. Stricker, Union Square Medical Associates, San Francisco, CA; Lorraine Johnson, LymeDisease.org, Chico, CA. rstricker@usmamed.com; lbjohnson@lymedisease.org

      The primer states that a tick must usually be attached for more than 24 hours to transmit Bb, and that a single 200 mg dose of doxycycline can prevent transmission of Bb. The former statement is not supported by recent literature, especially when coinfecting agents are transmitted along with Bb (19). The latter statement is based on a flimsy study that has been attacked repeatedly for its many flaws (1).

      The statement that there has been no evidence of Bb drug resistance ignores studies showing that resistance may occur (20-22). The issue of cyst forms that evade the immune system and antibiotic therapy is also ignored (6,23), and the primer disregards recent literature on antibiotic-tolerant persister organisms in Lyme disease (24-26). Once again the primer propagates the IDSA theme that Lyme disease is a trivial infection, with statements about quality of life after short-course treatment such as this: "Regardless of the disease manifestation, most patients with Lyme borreliosis respond well to antibiotic therapy and experience complete recovery." This statement whitewashes the significant morbidity associated with chronic Lyme disease symptoms (4,5). Approximately 42% of respondents in a survey of over 3,000 patients reported that they stopped working as a result of Lyme disease (with 24% reporting that they received disability as a result of chronic Lyme disease), while 25% reported having to reduce their work hours or change the nature of their work due to Lyme disease (4,5).

      The unreferenced statement that two weeks of antibiotics cures Lyme carditis is not supported by the literature (1). The primer has limited the discussion of longer antibiotic treatment for post-treatment Lyme disease to studies by Klempner et al. and Berende et al. The authors ignore the positive studies of Krupp et al. and Fallon et al. showing benefit of longer antibiotic treatment, and they avoid discussion of the deep flaws in the negative Lyme treatment trials that lacked the size and power to yield meaningful results (27,28).

      The primer calls the LYMErix(R) Lyme vaccine that was withdrawn from the market "safe and efficacious" and the authors blame Lyme advocacy groups for the vaccine failure. This mantra of "blaming the victims" has become a familiar excuse for the failed vaccine, which generated a class action lawsuit based on its lack of safety (29,30). Until vaccine developers come to grips with the very real potential hazards of Lyme vaccine constructs, a successful Lyme vaccine will remain out of reach.

      Under "competing interests", there is no disclaimer by Paul Mead, who is an employee of the Centers for Disease Control and Prevention (CDC). Does this mean that Mead represents the CDC in endorsing this slanted and obsolete view of Lyme disease? If that is the case, it is disturbing that a government agency is shirking its responsibility to lead the battle against tickborne disease and instead endorses a regressive viewpoint that stunts science and harms patients.

      References
      1. Cameron et al, Expert Rev Anti Infect Ther. 2014;12:1103-35.
      2. Johnson & Stricker, Philos Ethics Humanit Med. 2010;5:9.
      3. Johnson & Stricker, Clin Infect Dis. 2010;51:1108-9.
      4. Johnson et al, Health Policy. 2011;102:64-71.
      5. Johnson et al, PeerJ. 2014;2:e322; doi: 10.7717/peerj.322.
      6. Adrion et al, PLoS ONE. 2015;10:e0116767.
      7. Stricker & Johnson, Infect Drug Resist. 2011;4:1-9.
      8. Wright & Nielsen, Am J Vet Res. 1990;51:1980-7.
      9. MacDonald, Rheum Dis Clin North Am. 1989;15:657-77.
      10. Stricker & Middelveen, Expert Rev Anti Infect Ther. 2015;13:1303-6.
      11. Russell & Johnson, Mol Microbiol. 2013;90:228-40.
      12. Stricker & Johnson, Front Cell Infect Microbiol. 2013;3:40.
      13. Cairns & Godwin, Int J Epidemiol. 2005;34:1340-5.
      14. Fallon et al, Neurobiol Dis. 2010;37:534-41.
      15. Muehlenbachs et al, Am J Pathol. 2016;186:1195-205.
      16. Ang et al, Eur J Clin Microbiol Infect Dis. 2011;30:1027-32.
      17. Stricker & Johnson, Minerva Med. 2010;101:419-25.
      18. Stricker/PMC, Comment on Cook & Puri, Int J Gen Med. 2016;9:427-40.
      19. Cook, Int J Gen Med. 2014;8:1-8.
      20. Terekhova et al, Antimicrob Agents Chemother. 2002;46:3637-40.
      21. Galbraith et al, Antimicrob Agents Chemother. 2005;49:4354-7.
      22. Hunfeld & Brade, Wien Klin Wochenschr. 2006;118:659-68.
      23. Merilainen et al, Microbiology. 2015;161:516-27.
      24. Feng et al, Emerg Microbes Infect. 2014;3:e49.
      25. Sharma et al, Antimicrob Agents Chemother. 2015;59:4616-24.
      26. Hodzic, Bosn J Basic Med Sci. 2015;15:1-13.
      27. Delong et al, Contemp Clin Trials. 2012;33:1132-42.
      28. Stricker/PMC, Comment on Berende et al, N Engl J Med. 2016;374:1209-20.
      29. Marks, Int J Risk Saf Med. 2011;23:89-96.
      30. Stricker & Johnson, Lancet Infect Dis. 2014;14:12.

      Disclosure: RBS and LJ are members of the International Lyme and Associated Diseases Society (ILADS) and directors of LymeDisease.org. They have no financial or other conflicts to declare.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Dec 20, Sin Hang Lee commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 17, Lydia Maniatis commented:

      While the title of this article refers to perception, the term is used loosely. The authors are not examining the process or principles via which a percept is formed, but, rather, how certain already-formed features of that percept are used to identify and label various substances whose general appearance is only one, and sometimes not even the most important, characteristic normally used to identify the substance. The question seems trivial and essentially non-perceptual.

      An excerpt from the concluding paragraph of the paper may help convey the level of the conceptual discussion:

      “…the name “chocolate” is assigned to all viscosities as long as the optical material is chocolate. This presumably reflects the fact that different concentrations and temperatures of chocolate yield a wide range of viscosities, but changes to the surface color and optical appearance are less common… The term “water” specifies a specific colorless transparent appearance and a specific (runny) viscosity. “

      What does it mean to say that “the optical material is chocolate”? It seems like just a jargony way of saying “the name chocolate is assigned to anything that looks like chocolate,” which begs the (trivial) question, and which isn’t even necessarily true, since the dispositive feature of chocolate is the flavor. Similarly, how do the authors distinguish between the label "water" and the labels "alcohol," "white vinegar," "salt solution" etc?

      The distinction the authors are making between “optical” and “mechanical” properties indicates they haven’t understood the problem of perception. It’s not clear what distinction they are making between “optical appearance” and simply "appearance." In the category of “optical” characteristics they place color, transparency, gloss, etc. But color, transparency, gloss as experienced by observers are perceptual characteristics, and as such are in exactly the same category as perceived “mechanical” characteristics, among which the authors place perceived viscosity.

      That the distinction they are making is a perceptual one is in no doubt as they are using images as stimuli, i.e. stimuli whose perceived "optical and mechanical" properties differ greatly from their physical properties. Even if they were using actual objects, the objection would be the same, as the actual stimulus would be the retinal projection, which contains no color, viscosity, etc.

      It is also inexplicable to me why the authors would refer to “optical appearance” as equivalent to “low-level image correlates.” Converting a retinal projection into a percept containing features such as transparency or color or gloss requires the very highest level of visual processes.

      All in all, the article conveys conceptual confusion about the basic problem of perception, let alone how it is achieved, while the problem chosen for study doesn’t touch on any of these problems or solutions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 22, William Davies commented:

      I would like to congratulate the authors on their interesting study. We have recently shown that acute inhibition of the steroid sulfatase enzyme in new mouse mothers results in increased brain expression of the Nov/Ccn3 gene; as there is some evidence for the extracellular CCN3 protein interacting with integrin B1, I wondered whether the authors had looked, or had considered looking, at CCN3 levels in their cellular model.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 19, Harri Hemila commented:

      Problems in study inclusions, in data extraction, and in the scales may bias the meta-analysis on vitamin C and post-operative atrial fibrillation (POAF)

      Hu X, 2017 state in their methods that “studies that met the following criteria were included: (1) randomized controlled trials (RCTs) of adult patients who underwent cardiac surgery; (2) patients randomly assigned to receive vitamin C or placebo …”. However, Hu X, 2017 included the study by Carnes CA, 2001 although that was not an RCT, instead “an age- and gender-matched control group (not receiving ascorbic acid) was retrospectively selected”. In addition, the Hu X, 2017 meta-analysis did not include the data of 2 rather large US trials that found no effect of vitamin C against POAF and thus remained unpublished leading to publication bias, see Hemilä H, 2017. Furthermore, Hu X, 2017 claimed that “funnel plots showed no evidence of publication bias”, but the existence of the 2 unpublished US studies refutes that statement.

      Furthermore, Altman DG, 1998 pointed out that “the odds ratio [OR] should not be interpreted as an approximate relative risk [RR] unless the events are rare in both groups (say, less than 20-30%)”. However, in the Fig. 2 of Hu X, 2017, the lowest incidence of POAF in the placebo groups was 19%, and 6 out of 8 studies had an incidence of POAF over 30% in their placebo groups. In such a case the OR does not properly approximate the RR, and the authors should have calculated the effect on the RR scale instead.
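      Altman’s caveat can be checked with a quick numeric sketch. The 2×2 counts below are hypothetical (not taken from any of the trials discussed); they are chosen only to show how the OR drifts away from the RR once the outcome is common, as POAF is in most placebo groups.

```python
# Hypothetical 2x2 counts (events / non-events), chosen only to illustrate
# Altman's point: with common outcomes the odds ratio (OR) exaggerates the
# effect relative to the relative risk (RR); with rare outcomes they agree.

def relative_risk(a, b, c, d):
    """RR = risk(treated) / risk(control), counts a/b (treated), c/d (control)."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """OR = odds(treated) / odds(control)."""
    return (a / b) / (c / d)

# Common outcome (control incidence 40%, as in many POAF placebo groups):
# treated 15/50 events, control 40/100 events.
print(round(relative_risk(15, 35, 40, 60), 2))  # 0.75
print(round(odds_ratio(15, 35, 40, 60), 2))     # 0.64 -- overstates the benefit

# Rare outcome: treated 3/100 events, control 4/100 events.
print(round(relative_risk(3, 97, 4, 96), 2))    # 0.75
print(round(odds_ratio(3, 97, 4, 96), 2))       # 0.74 -- close to the RR
```

      Both scenarios have the same RR (0.75), yet only in the rare-outcome case does the OR approximate it; this is why pooling ORs across trials with >30% control-group incidence can mislead.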

      In their Fig. 4, Hu X, 2017 state that the mean duration of intensive care unit (ICU) stay in the vitamin C group was 24.9 hours in Colby JA, 2011. However, Colby JA, 2011 reported in their Table 1 that the duration of ICU stay was 249.9 hours, i.e. 10 times greater. Evidently, such an error leads to bias in the pooled estimate of effect, but also leads to exaggeration of the heterogeneity between the included trials.

      Hu X, 2017 calculated the effect of vitamin C on the duration of ICU stay and of hospital stay on the absolute scale, i.e. in days, although there were substantial variations in the placebo groups, and thus the relative scale would have been much more informative, Hemilä H, 2016. As an illustration of this difference between the scales, Hemilä H, 2017 calculated that the effect of vitamin C on hospital stay in days was significantly heterogeneous with I<sup>2</sup> = 60% (P = 0.02). In contrast, the effect of vitamin C on hospital stay on the relative scale was not significantly heterogeneous with I<sup>2</sup> = 39% (P = 0.09). The lower heterogeneity on the relative scale is explained by the adjustment for the baseline variations in the studies.

      Hu X, 2017 write “compared with placebo group, vitamin C administration was not associated with any length of stay, including in the ICU”. However, Hemilä H, 2017 calculated that there was strong evidence from 10 RCTs that vitamin C shortened ICU stay in the POAF trials by 7% (P = 0.002).

      Hu X, 2017 also concluded that vitamin C did not shorten the duration of hospital stay, whereas Hemilä H, 2017 calculated that vitamin C shortened hospital stay in 11 POAF trials by 10% (P = 10<sup>-7</sup> ).

      Although the general conclusion of Hu X, 2017 that vitamin C has effects against POAF seems reasonable, there is very strong evidence of heterogeneity in the effect. Five trials in the USA found no benefit, discouraging further research in the USA. However, positive findings in less wealthy countries suggest that the effect of vitamin C should be further studied in such countries, Hemilä H, 2017.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 14, Matthew Wargo commented:

      While this report increases our understanding of SphR, many of the findings, including SphR direct transcriptional control of the neutral ceramidase (gene designation cerN (PA0845)), promoter mapping, and binding site determination, were previously reported in LaBauve and Wargo 2014 (PMID 24465209).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 12, Lydia Maniatis commented:

      It’s amazing how shy Agaoglu and Chung (2017) are in making their legitimate point, and how gently and casually they make it when they finally do get there (even giving credit where credit is not due). We can’t be sure, until we get to the very end (of the article, not the abstract), what the authors think about the question posed, somewhat awkwardly, in their title: “Can (should) theories of crowding be unified?” The title begs the question: if they can be unified, why shouldn’t they be? The explanation for the awkwardness, also saved for the end, is that the term “crowding” is conceptually vague, and has been used as a catch-all term for all manner of demonstrations of the fact that peripheral vision is not as good as central vision. “All in all, although we applaud the attempts at unifying various types of response errors in crowding studies, we think that without a better taxonomy of crowding—instead of calling everything crowding, perhaps introducing types of crowding (as in the masking literature)—unifying attempts will remain unsuccessful.” In other words, the issue of a unifying “theory of crowding” is moot given that we’re talking about a hodge-podge of poorly-understood phenomena.

      What is also moot are the experiments reported in the article. While I think they have a lot of problems common to the field (layers of untested/untestable or even false, arbitrarily chosen assumptions), that doesn’t matter. As should be clear from the authors’ own discussion, no new experiments were needed to make the necessary theoretical point, which is that a clear conceptual understanding of phenomena needs to precede any attempt at a technical, causal explanation. Clearly, the experiments added by the authors do not make more acute the stated need for “a better taxonomy of crowding.” The redundancy of the study is reflected in the statement (caps mine) that “Our empirical data and modeling results ALSO [i.e. in addition to previous existing evidence] suggest that crowded percepts cannot be fully accounted for by a single mechanism (or model).” They continue to say that “The part of the problem is that many seemingly similar but mechanistically different phenomena tend to be categorized under the same umbrella in an effort to organize the knowledge in the field. Therefore, constraints for theoretical models become inflated.”

      Agaoglu and Chung’s (2016) message, in short, is that the heterogeneous class of phenomena/conditions referred to as “crowding” are clearly not candidates for a common explanation or “model” as they routinely produce mutually conflicting experimental outcomes (one model works for this one but not that one, etc) and that there is a need for investigators to clarify more precisely what they are talking about when they use the term crowding. While, as mentioned, they could surely have made their argument without new experiments, they couldn’t have published it, as the rational argument in science has unfortunately been demoted and degraded in favor of uninterpretable, un-unifyable, premature p-values.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 13, Lydia Maniatis commented:

      Witzel et al’s (2016) degree of ignorance of the fundamentals of their chosen topic is capable of disconcerting even the most jaded observer of the vision literature (me). Intentionally or not, it reflects the tenacious, subterranean grip of the behaviorist tradition (consideration only of simple signal (stimulus)/response paradigms) despite overwhelming evidence, both logical and empirical, of its inadequacy. That editors allow such products to pass into the literature is, I guess, also par for the course.

      The problem here is that the authors simply ignore one of the most basic facts about color perception, though it is highly relevant to their problem of interest. It is not clear from the text whether they are even aware of this fact, i.e. that the color perceived at any given location in the visual field is contingent on the light reflected both from the local segment of the scene and the areas in the scene as a whole (both adjacent and non-adjacent), and more specifically, on the structural relationships (and their implications) of the light-reflecting areas in question. These empirical facts are not in the least in question, yet for Witzel et al (2016) they might as well not exist. They acknowledge only local contrast and adaptation as possible reasons for why similar local “color signals” can produce different color experiences:

      “To clarify the role of metamer mismatching in color constancy, we investigated whether metamer mismatching predicts the variation of performance in color constancy that cannot be attributed to adaptation and local contrast.”

      “In the present study, we tested different high-level (color categories) and sensory factors (metamer mismatching, sensory singularities, and cone ratios) that are likely to affect performance in color constancy beyond what is predicted by adaptation [for some reason “contrast” has been dropped].”

      “However, it is known that color appearance and color naming can be influenced by context, such as local contrast and adaptation (Hansen et al., 2007).”

      Their conclusion that “a considerable degree of uncertainty (about 50%) in judging colors across illuminations is explained by the size of metamer mismatch volumes” is meaningless since results are condition-sensitive and the authors have not considered the relevant confounds (e.g. figure-ground structure of the visual field).

      Bizarrely, they speculate that the unexplained “50%” of the failures of color constancy “beyond what is predicted by adaptation…may be rather the result of linguistic categorization.” Anything but consider the alternative that is well-known and rather well-understood. (They also considered the bizarre notion of the role of “color singularities,” i.e., that the visual system “maps the sensory signal that results from looking directly at the light of the illumination (illuminant signal).” Given that their stimuli are pictorial, I don’t even know how to interpret their use of the term color singularity. At any rate there is no such illumination signal, as has been proven both logically and empirically (it would, for example, eliminate the possibility of pictorial lighting effects).

      The presence of such articles in the vision science literature is mind-boggling.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 13, Lydia Maniatis commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 17, Andrey Khlystov commented:

      Gentlemen,

      The problem with your arguments is that Brand II, though being the “cleaner” of the three we tested, still produces very high aldehyde emissions. Three out of five V2 liquids that we tested exceeded the one time (one time!) exposure limits in a single puff. They also produced higher emissions than non-flavored liquids used in more powerful Brand I and III e-cigarettes. Surely you can check that. Please see our reply to the Farsalinos et al. letter to ES&T – there are other studies that found even higher aldehyde concentrations than we did. High aldehyde emissions are not limited to a single study or a single liquid. We also demonstrate that “dry puff” arguments that Farsalinos et al. use to dismiss all high aldehyde studies have absolutely no factual basis. I doubt I can add anything else to this discussion except for reminding you that the strength of science is not only in reproducing results, but also in not cherry-picking studies and data that fit one’s theories or expectations. I do appreciate your efforts in clarifying the benefits and risks of e-cigarette use and wish you success in this endeavor.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 13, Peter Hajek commented:

      Dr Khlystov, I understand that your laboratory has a good track record. Your findings are potentially important, but they do need to be replicated so a possibility that they were an artefact of some part of your procedure is ruled out. The reason for wanting to test Brand 1 liquids is as follows. If other liquids show no alarming toxicant levels (which is possible because in previous studies where dry puffs were excluded, no such levels were found), the next level of explanation will be that the levels may be low in other liquids, but they were high in the liquid you tested. It is of course possible that your results will be replicated, but if not, this would necessitate another round of testing of the liquids you used. Testing your Brand 1 liquids straight away would remove this potential expense and delay in clarifying the issue. Providing information on which liquids you used, if this is available, should be simple and uncontroversial.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Feb 11, Konstantinos Farsalinos commented:

      Dr Khlystov mentions that there is information about 5 samples tested in their study. Unfortunately he refers to the samples with the lowest (by far) levels of carbonyl emissions. In fact, only one of these samples had high carbonyl emissions, unlike Brand I samples which showed very high levels of toxic emissions (especially for 3 of the samples, the levels found were extreme).

      Liquids from different batches may not be the same, but finding almost 7000 ug/g of formaldehyde compared to < 0.65 ug/g from an unflavored sample can be easily reproduced with reasonable accuracy even with different batches. A replication study finding levels of carbonyl emissions lower by orders of magnitude cannot be attributed to different batches.

      As I mentioned in my ES&T letter to the editor, the levels found by Khlystov and Samburova could only be explained by dry puffs, but this has been excluded because of the findings in unflavored liquids. Also, previous studies under verified realistic conditions (i.e. no dry puffs) have found aldehyde levels orders of magnitude lower compared to their study. This creates a crucial need to replicate the samples with the highest levels of carbonyl emissions, despite the reassurance about the laboratory quality. Replication is the epitome of science. But the authors are not providing the necessary information for these liquids.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Feb 11, Andrey Khlystov commented:

      Dr. Hajek,

      You have not read the paper carefully. As I said earlier, our paper has information on 5 liquids that should be enough to get anybody started in replicating the data. They are from Brand II, which is V2 Standard (see Table 1). The brand is very easy to find (www.v2.com). The other liquids were from local vape shops, but this is of little consequence for the study, see below.

      You seem to miss the point of our paper. Please let me briefly summarize its message: flavors, especially at higher concentrations, appear to dominate aldehyde production. To check generality of our observations, all one needs to do is take any flavored liquid and test it at different concentrations and/or against unflavored liquids and see what happens.

      Contrary to what you suggest, there is little value in testing exactly the same liquids. Testing specific liquids or flavors was not the point of our study. We observed that a fairly wide variety of randomly selected flavored liquids produce significant aldehyde emissions, with aldehyde profiles varying among different flavors. If only PG or VG were responsible for the majority of aldehyde emissions, there would be no differences among liquids that have the same PG/VG composition. Yet, we observed significant differences among such liquids.

      I also doubt that liquids from different batches are exactly the same, especially from small-time operations. At the time of writing the paper, we did not measure concentrations of liquid constituents. Having understood their role, we are controlling for liquid composition in our on-going study. As we mentioned in the letter to ES&T, we see appreciable aldehyde concentrations in both mainstream and secondary aerosols for a wide variety of e-cigarettes and liquids that users bring to our study. High aldehyde emissions are not limited to the 15 liquids and 3 e-cigarette brands we tested in our original study, it appears; the problem seems to be quite universal.

      As we stated in our letter to ES&T, we are calling for checking findings of ANY e-cigarette study. I would like to note, however, that if one doubts our measurements, he or she needs to come up with a plausible mechanism, other than the effect of flavors, that explains why unflavored liquids produced significantly lower emissions than flavored ones or why a diluted flavored liquid was producing less than a more concentrated one. Please note, this was observed for the same e-cigarette, the same power output, and the same experimental setup. As of now and as far as I know, nobody came up with a single credible reason to doubt our results. I would also like to stress that aldehyde measurements are not trivial. We have over 20 years of experience in these measurements with a solid track record of QA/QC. Please rest assured - we stand by the quality of our data.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Feb 09, Peter Hajek commented:

      Your paper does not identify the actual products. Can you let others know the product name and the online address where it was purchased? There is no point trying to clarify your finding with different e-liquids.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Feb 08, Andrey Khlystov commented:

      Please read our paper carefully. There is information on 5 liquids that can be easily ordered online. Good luck with your experiments.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 Feb 08, Peter Hajek commented:

      In their just published response to Farsalinos et al. comment on these unexpected results, the authors acknowledge that replications are needed. Could they please respond to repeated requests to specify which e-liquids they used so a replication can be performed?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2016 Dec 12, Peter Hajek commented:

      The authors are correct that other studies are needed to check this phenomenon. Can they specify which e-liquids they tested so a replication is possible?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 06, Joe G. Gitchell commented:

      In the brief report at the link, we have provided an update to include the 2015 NHIS data.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 14, Lydia Maniatis commented:

      In this article, Felin et al (2016) mix up two different issues, which, like oil and water, intermingle but don’t mix. Put another way, the authors alternate between the two poles of a false dichotomy. On the one side is a truism, on the other an epistemological fallacy.

      The latter involves the view expressed by Koenderink (2014) in an editorial called “The All-Seeing-Eye?” Here, he essentially makes the old argument that because all knowledge, and all perception, is indirect, inferential and selective, it makes no sense to posit a unique, real world. As I have pointed out in Maniatis (2015), the anti-realist positions of Koenderink (2014), Rogers (2014), Hoffman (2009) and Purves et al (2014) (all of whom are cited in this respect by Felin et al (2016)) are paradoxical and inconsistently asserted. Rogers (2014), for example, describes the concept of “illusion” as invalid while at the same time saying that “illusions” are useful. Inconsistency is inevitable if we want to make references to an objective world on the one hand (e.g. in making scientific statements), and at the same time claim that there is no unique, objective world.

      It is clear from the text that Felin et al (2016) are not, in fact, adopting an anti-realist view. It has simply been inappropriately mixed into a critique of poor practices in the social sciences. These practices involve setting up situations in which participants make incorrect inferences, and treating this as an example of bias or irrationality. As the authors correctly discuss, this is pointless and inappropriate; the relevant question is how do organisms manage, most of the time, to make correct or useful inferences on the basis of information that is always inadequate and partial? What implicit assumptions allow them to fill in the gaps?

      These are basic, fundamental points, but they do not license a leap to irrationality. Arguing that it is not useful to treat a visual illusion, for example, as simply a mistake is not a reason to reject the distinction between veridical and non-veridical solutions, and this applies also to cognitive inferences about the world.

      As the authors themselves note, their points are not new - despite the existence of researchers who have ignored or failed to understand them. So they seem to be giving themselves a little too much credit when they describe their views as “provocative” and a prescription for a “radically different, organism-specific understanding of nature, perception, and rationality.”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 14, Gregory Francis commented:

      An analysis of the statistics reported in this paper suggests that the findings appear too good to be true. Details are published as an eLetter to the original article at

      http://www.jneurosci.org/content/36/49/12448/tab-e-letters


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 15, Ashraf Nabhan commented:

      I thank the authors for this elegant work. The results are reassuring regarding the risk of congenital malformations after ACE inhibitor exposure during the first trimester. The authors made a wise note to caregivers that women on ACE inhibitors during their reproductive years should be transitioned off these medications early in pregnancy to avoid the known adverse fetal effects associated with late-pregnancy exposure. This wise note should have appeared in the abstract, since we know that most readers read only the abstract, as many do not have access to the full text and some focus only on the authors' conclusion.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 13, Stuart RAY commented:

      This finding is intriguing, but the reported findings seem inconclusive, for the following reasons: (1) figure 2a - the tree of sequences from Core/E1 region - has weak clustering with no significant bootstrap value to show confident clustering with subtype 3a, and no significant bootstrap value in genotype 1 to exclude the query sequence; (2) the nearly full-length sequence was cobbled together from 12 overlapping PCR amplifications, raising the possibility that this was a mixed infection with different regions amplified from different variants in the blood, rather than a single recombinant genome; and (3) the title says "not uncommon" but it appears that detailed study was only done for one specimen. It would be more convincing if a single longer amplicon, with phylogenetically-informative sequences on both sides of the breakpoint and showing a reproducible breakpoint, were recovered from separate blood aliquots (i.e. fully independent amplifications). Submission of the resulting sequence(s) to GenBank is a reasonable and important expectation of such studies. In addition, the title of the paper should not say "not uncommon" unless the prevalence can be estimated with some modicum of confidence.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 14, Randi Pechacek commented:

      Russell Neches, first author of this paper, wrote a blog post on microBEnet explaining the process and background of this experiment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 07, Raphael Stricker commented:

      Serological Test Sensitivity in Late Lyme Disease.

      Raphael B. Stricker, MD<br> Union Square Medical Associates San Francisco, CA rstricker@usmamed.com

      Cook and Puri have written an excellent review of the sorry state of commercial two-tier testing for Lyme disease (1). Unfortunately the authors failed to address the myth of high serological test sensitivity in late Lyme disease.

      In the review, Figure 4 and Table 7 show a mean two-tier serological test sensitivity of 87.3-95.8% for late Lyme arthritis, neuroborreliosis and Lyme carditis. However, this apparently high sensitivity is based on circular reasoning: in order for patients to be diagnosed with these late conditions, they were required to have clinical symptoms AND POSITIVE SEROLOGICAL TESTING. Then guess what, they had positive serological testing! This spurious circular reasoning invalidates the high sensitivity rate and should have been pointed out by the authors of the review.

      As an example, the study by Bacon et al. (2) contains the following language: "For late disease, the case definition requires at least one late manifestation AND LABORATORY CONFIRMATION OF INFECTION, and therefore the possibility of selection bias toward reactive samples cannot be discounted" (emphasis added). Other studies of late Lyme disease using spurious circular reasoning to prove high sensitivity of two-tier serological testing have been discussed elsewhere (3-5).

      In the ongoing controversy over Lyme disease, it is important to avoid propagation of myths about the tickborne illness, and insightful analysis of flawed reasoning is the best way to accomplish this goal.

      References

      1. Cook MJ, Puri BK. Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy. Int J Gen Med 2016;9:427–40.
      2. Bacon RM, Biggerstaff BJ, Schriefer ME, et al. Serodiagnosis of Lyme disease by kinetic enzyme-linked immunosorbent assay using recombinant VlsE1 or peptide antigens of Borrelia burgdorferi compared with 2-tiered testing using whole-cell lysates. J Infect Dis. 2003;187:1187–99.
      3. Stricker RB, Johnson L. Serologic tests for Lyme disease: More smoke and mirrors. Clin Infect Dis. 2008;47:1111–2.
      4. Stricker RB, Johnson L. Lyme disease: the next decade. Infect Drug Resist. 2011;4:1–9.
      5. Stricker RB, Johnson L. Circular reasoning in CDC Lyme disease test review. PubMed Commons comment on: Moore A, Nelson C, Molins C, Mead P, Schriefer M. Current guidelines, common clinical pitfalls, and future directions for laboratory diagnosis of Lyme disease, United States. Emerg Infect Dis. 2016;22:1169–77.

      Disclosure: RBS is a member of the International Lyme and Associated Diseases Society (ILADS) and a director of LymeDisease.org. He has no financial or other conflicts to declare.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 30, Alejandro Montenegro-Montero commented:

      I read this very interesting review summarizing recent global studies aimed at characterizing circadian gene expression in mammals. It presents a nice summary of the different ways in which the clock can impact gene expression and additionally, briefly discusses various statistical methods currently used for the identification of rhythmic genes from global data sets. This is a very welcome review on the subject.

      Readers might also be interested in our discussion of the relative contributions of the different stages of gene expression in determining rhythmic profiles in eukaryotes. In our commentary, "In the Driver's Seat: The Case for Transcriptional Regulation and Coupling as Relevant Determinants of the Circadian Transcriptome and Proteome in Eukaryotes", we discuss several scenarios in which gene transcription (even when apparently arrhythmic) might play a much more relevant role than currently estimated in determining oscillations in gene expression, regulating rhythms at downstream steps. Further, we argue that, for both biological and technical reasons, the jury is still out on the relative contributions of each of the different stages of gene expression in regulating output molecular rhythms.

      We hope that reviews like the one by Mermet et al., and commentaries like the one presented in this post, stimulate further discussions on this exciting topic: there are still many important challenges ahead in the field of circadian gene regulation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 08, Lydia Maniatis commented:

      Part 2: 2. The terms “adaptation” and “aftereffects.” There is no way to discern the meaning of these terms in the context of this study. The authors seem to be using the term very loosely: “….many aspects of visual perception are adaptive, such that the appearance of a given stimulus may be affected by what has been seen before.” The example they give involves motion aftereffects, the cause of which, as far as I can discover, is still unknown. Continuing, they declare that “This perceptual bias is known as an aftereffect [last term italicized].”

      This exposition conflates all possible effects of experience on subsequent perceptual impressions with the color aftereffects based on color opponency at the retinal level, and with the less well-understood motion aftereffects. In other words, we’re lumping together the phenomenon known as “perceptual set,” for example, with color aftereffects, as well as with motion aftereffects. In these latter cases, at least, it is intelligible to talk about aftereffects as being “opposite” to the original effects. In the case of motion, we are fairly straightforwardly talking about a percept of motion in the opposite direction. In the case of color, the situation is not based on percepts having perceptually opposing characteristics; what makes green the opposite of red is more physiological than perceptual. So even with respect to what the authors refer to as “low-level” effects, ‘opposite’ means rather different things.

      The vagueness of the ‘opposite’ concept as used by Burton et al (2016) is expressed in their placement of quotation marks around the term: “In the facial expression aftereffect, adaptation to a face with a particular expression will bias participants’ judgments of subsequent faces towards the “opposite” expression: The expression with visual characteristics opposite those of the adaptor, relative to the central tendency of expressions.”

      All of the unexamined theoretical assumptions implicit in the terms ‘visual characteristics,’ ‘adaptor’ ‘central tendency’ and, therefore, ‘opposite’ are embedded in the uncritically-adopted procedure of Tiddeman et al (2001). While the example the authors give – “Where fear has raised eyebrows and an open mouth, anti-fear has lowered eyebrows and a closed mouth, and so on” may seem straightforward, neither it nor the procedure is as straightforward as we might assume. The devil is in the “and so on.” First, “lowered eyebrows” is a relative term; lowered in relation to what? Different faces have different relationships between eyebrows, eyes, nose, hairline, etc. And a closed mouth is a very general state. Second, this discrete, if vague, description doesn’t directly reference the technical procedure developed by Tiddeman et al (2001). When we are told by Burton et al (2016) that “anti-expressions were created by morphing along a trajectory that ran from one of the identity-neutral faces [they look like nothing?] through the average expression and beyond it to a point that differed from the average to the same extent as the original expression,” we have absolutely no way to interpret this without examining the assumptions and mathematics utilized by Tiddeman et al (2001). On a conceptual and practical level, readers and authors are blind as to the theoretical significance of this manipulation and the description of its products in terms of “opposites.”

      In addition, there is no way to distinguish the authors’ description of adaptation from any possible effect of previous experience, e.g. to distinguish it from the previously-mentioned concept of “perceptual set,” or from the fact that we perceive things in relative terms; a baby tiger cub, for example, evokes an impression of smallness, while a smaller but fully-grown cat that is large for its size might evoke an impression of largeness. Someone used to being around short people might find average-height people tall, and vice versa. Should we lump this with color and motion aftereffects and perceptual set effects? Maybe we should, but we need to make the case; we need a rationale.

      3. Implications. The authors say that their results indicate that “expression aftereffects” may have a significant impact on day-to-day expression perception, but given that they needed to train their observers to deliver adequate results, and given the very particular conditions that they chose (without explanation), this is not particularly convincing. Questions about the specifics of the design are always relevant in studies of this type, where stimuli are very briefly presented. Why, for example, did Burton et al (2016) use a 150 millisecond ISI, versus the 500 millisecond ISI used by Skinner and Benton (2010)? With such tight conditions, such decisions can obviously influence results, so it’s important to rationalize them in the context of theory.

      4. It should be obvious already, but the following statement, taken from the General discussion, is an apt demonstration of the general intellectual vagueness of the article: “Face aftereffects are often used to examine the visual representation of faces, with the assumption that these aftereffects tap the mechanisms of visual perception in the same way as lower level visual aftereffects.”

      The phrase “in the same way…” is wholly without a referent; we don’t even know what level of analysis is being referred to. In the same way as (retinally-mediated, as far as we understand) color aftereffects? In the same (physiologically not well understood) way as motion aftereffects?” In the same way as the effects of perceptual set? In the same way as seeing size, or shape, or color, etc, in relative terms? What way is “the same way”?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 08, Lydia Maniatis commented:

      Part 1: There seems to be a widespread misunderstanding in the vision sciences about the role of experiment in science. The value of experiment rests solely on the clarity, coherence, internal consistency, and respect for known facts of the underlying rationale, certain of whose predictions it is designed to test. The methodological choices made in conducting the experimental test are directly linked to this rationale. (For example, if I did a study on the heritability of a single (assumed) gene underlying nose size (plus “noise”), my results would be interpretable in terms of my assumption (especially if “size” were not clearly defined), even if my assumption were false. This is why it’s important to take care in constructing our hypotheses and tests.) If the underlying concepts are vague, or if the rationale lacks internal consistency, or if it lacks coherence, then the results of the experiment cannot be interpreted (in a scientific sense).

      The problems of rationale and method are overwhelmingly on display here. Below, I enumerate and expand on various issues.

      1. The stimuli. The theory/method behind the (implicitly) theory-laden stimuli (averaged and morphed photos of faces having various expressions) is briefly described as having been adapted from those used by Skinner and Benton (2010). (As the corresponding author has clarified to me, the stimuli are, in fact, the same ones used in that study.) The pre-morphed versions of those stimuli came from a database of 70 individual faces exhibiting various expressions collected by Lundqvist, Flykt & Ohman (1998). This reference is not cited by Burton et al (2016), which I feel is an oversight, and neither Burton et al (2016) nor Skinner and Benton (2010) feel the need to address the sampling criteria used by Lundqvist et al (1998). Sampling methods are a science in themselves, and are always informed by the purpose and assumptions of a study.

      However, we’ll assume the investigators evaluated Lundqvist et al’s (1998) sampling technique and found it acceptable, as it’s arguably less problematic than the theoretical problems glossed over in the morphing procedure. The only (non)description of this procedure provided by either Burton et al (2016) or Skinner and Benton (2010) is a single reference, to Tiddeman, Burt, & Perrett,( 2001). A cursory examination of the work reported on by those researchers reveals it to be a wholly inadequate basis for the use Burton et al (2016) make of it.

      Tiddeman et al (2001) were interested in the problem of using morphing techniques to age faces. They were trying to improve the technique in comparison to previous attempts, with regard to this specific problem. They didn’t merely assume their computational solutions achieved the aim, but evaluated the results in empirical tests with observers. (However, they, too, fail to describe the source of the images on which they are basing their procedures; the reference to 70 original faces seems the only clue that we are dealing with the sample collected by Lundqvist et al (1998).) The study is clearly a preliminary step in the development of morphing techniques for a specific purpose: “We plan to use the new prototyping and transformation methods to investigate psychological theories of facial attraction related to aging….We’ll also investigate technical extensions to the texture processing algorithms. Our results show that previous statistical models of facial images in terms of shape and color are incomplete.”

      The use of the computational methods being tentatively proposed by Tiddeman et al (2001) by Skinner and Benton (2010) and Burton et al (2016) for a very different purpose has been neither analyzed, rationalized nor validated by either group. Rather, the procedure is casually and thoughtlessly adopted to produce stimuli that the authors refer to as exhibiting “anti-expressions.” What this label means or implies at a theoretical level is completely opaque. I suspect the authors may not even know what the Tiddeman et al algorithm actually does. (Earlier work by Rhodes clearly shows the pitfalls of blind application of computational procedures to stimuli and labeling them on the basis of the pre-manipulation perception. I remember seeing pictures of morphed or averaged“faces” in studies on the perception of beauty that appeared grossly deformed and non-human.)

      Averaging of things in general is dicey, as the average may be a completely unrealistic or meaningless value. If we mix all the colors of the rainbow, do we get an “average” color? All the more so when it comes to complex shapes. If we combined images of a pine tree and a palm tree, would the result be the average of the two? What would this mean? Complex structures are composed of multiple internal relationships and the significance of the products of a crude averaging process is difficult to evaluate and should be used with caution. Or not.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 12, David Mage commented:

      Carlin and Moon have provided a very thorough review of risk factors for SIDS with one major flaw – a complete lack of recognition that SIDS has a male excess of 50%, corresponding to a male fraction of 0.60, or 3 males for every 2 females. CDC (http://wonder.cdc.gov) reports for the years 1968-2014 that, for deaths attributed to SIDS, Unknown cause (UNK) and Accidental Suffocation and Strangulation in Bed (ASSB), there were 119,201 male and 79,629 female post-neonatal cases (28-364 days), for a male fraction of 0.600. Naeye et al. (PMID 5129451) were, to our knowledge, the first to claim that this 50% male excess in infant mortality must be X-linked. We agreed, and have proposed that an X-linked recessive allele with frequency q = 2/3 that is not protective against acute anoxic encephalopathy would place XY males at risk with frequency q = 2/3 and XX females at risk with frequency q*q = 4/9 (PMID 9076695, 15384886).
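
      The arithmetic behind this model is easy to verify: with q = 2/3, and assuming equal numbers of male and female births, the predicted male fraction of deaths is q / (q + q*q) = 0.6, which matches the CDC counts quoted above. A minimal, standard-library-only Python sketch (variable names are illustrative):

      ```python
      # Arithmetic check of the proposed X-linked recessive model.
      # Assumptions (stated in the comment above): equal numbers of male and
      # female births, and deaths proportional to the at-risk fractions.
      q = 2 / 3                 # proposed allele frequency
      male_risk = q             # XY males need one copy of the allele
      female_risk = q * q       # XX females need two copies (= 4/9)

      predicted_male_fraction = male_risk / (male_risk + female_risk)  # = 0.6

      # Observed CDC counts for 1968-2014, as quoted in the comment
      observed_male_fraction = 119201 / (119201 + 79629)

      print(predicted_male_fraction, round(observed_male_fraction, 3))
      ```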


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 22, Atanas G. Atanasov commented:

      Dear Colleagues, thank you so much for the excellent collaborative work! Consumer Health Digest Scientific Abstract of this article is now available at: https://www.consumerhealthdigest.com/brain-health/protection-against-neurodegenerative-diseases.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 13, Konstantinos Fountoulakis commented:

      This paper compares CBT with short-term psychoanalytical therapy and with a brief psychosocial intervention. The results suggest no difference between the three types of interventions, and the conclusion is as follows: ‘Short-term psychoanalytical psychotherapy is as effective as CBT and, together with brief psychosocial intervention, offers an additional patient choice for psychological therapy, alongside CBT, for adolescents with moderate to severe depression who are attending routine specialist child and adolescent mental health service clinics’. Essentially, this conclusion is misleading. The description of the ‘brief psychosocial intervention’ suggests it was something between a general psychoeducational approach and supportive psychotherapy, and it was delivered by the usual general staff of the setting, without any specialized training, in a treatment-as-usual approach. Therefore the interpretation of the results is either that the brief psychosocial intervention was a kind of placebo, in which case the ‘active’ psychotherapies did not differ from a placebo condition (a negative trial), or, if the brief psychosocial intervention is indeed efficacious, that the results do not support the added value of the more complex, demanding, expensive and time-consuming CBT and psychoanalytical treatments applied by highly trained therapists over simpler techniques applied by nurses. In my opinion, the results of this study do not support the applicability of cognitive and psychoanalytical theories in the treatment of adolescent depression, but this is not entirely clear and is debatable. What is clear is that CBT and psychoanalytic therapy are not better than simpler and cheaper treatment-as-usual psychosocial interventions which are already routinely applied in many clinical settings.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 31, Krishnan Raghunathan commented:

      "We would like to acknowledge the work of Dr. Ingela Parmyrd and colleagues (1) who have shown that in living cells, crosslinking of for instance CTxB induces ordered domain formation in an actin-dependent manner.

      (1) Dinic, J., Ashrafzadeh, P., Parmryd, I. (2013) "Actin filaments attachment at the plasma membrane in live cells cause the formation of ordered lipid domains" Biochimica et Biophysica Acta - Biomembranes, 1828(3): 1102-1111"


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 12, Bjarke M Klein commented:

      Bjarke Mirner Klein, PhD, Director, Biometrics, Ferring Pharmaceuticals, Copenhagen, Denmark.

      We thank the authors for their interest in our trial. This publication is the first reporting data from the ESTHER-1 trial and therefore presents the overall results at a high level. More publications presenting full details on different aspects of the data are in progress. In their comment, they raise the following three concerns: 1) the presentation of the number of oocytes retrieved, 2) the decision not to present the so-called ‘inverted’ responders, and 3) the possibility that the individualised dosing regimen may increase excessive response in low AMH patients. Each of these will be addressed in the following.

      Concerning the presentation of the number of oocytes retrieved: Strict criteria for cycle cancellation due to poor or excessive response were specified in the trial protocol. Since a priori both scenarios were considered a possibility, and since there is no consensus on how the number of oocytes retrieved should be imputed in each scenario, it was decided to present the data as in Table 3. This is a transparent way of presenting the results, since the reader can derive the numbers for all subjects using his/her own assumptions on how to impute the cycle cancellations. This is exactly what the two authors have done, assuming that cycle cancellations due to poor response should be included as zero oocytes retrieved in the calculations. It should be noted that the treatment difference remains the same irrespective of the method of display.

      Concerning the ‘inverted’ responders: The terminology of ‘inverted’ ovarian response may be misunderstood, since it may suggest that e.g. a subject who would have <4 oocytes retrieved using a standard starting dose of follitropin alfa (GONAL-F) would have ≥20 oocytes retrieved with the individualised follitropin delta (REKOVELLE®) dose. This would indeed be a dramatic and surprising impact considering that the maximum daily dose of follitropin delta is 12 mcg and the starting dose of follitropin alfa is 150 IU (11 mcg). However, as illustrated in Figures 1A and 1B, the consequences of the individualised follitropin delta (REKOVELLE®) dosing regimen are not that drastic. From these figures, it can be observed that the ovarian response in terms of the number of oocytes retrieved is comparable in the mid-range of AMH, while the treatment differences are seen at the lower and higher AMH levels.

      Since the individualised dosing algorithm assigns the same daily dose of individualised follitropin delta (REKOVELLE®) to subjects with an AMH <15 pmol/L and gradually decreases the dose as a function of AMH for subjects with AMH ≥15 pmol/L it seems relevant to present the data by these subgroups. As can be seen from Table 3, the individualised follitropin delta (REKOVELLE®) dosing regimen shifts the distribution of oocytes retrieved upwards for subjects with AMH <15 pmol/L while it shifts the distribution downwards for subjects with AMH ≥15 pmol/L. Such a shift in the distribution obviously also affects the tails of the distribution, i.e. in this case the probability of either too low or too high number of oocytes retrieved. For the publication, it was considered relevant to focus on the risk of poor response in the subjects at risk of hypo-response and the risk of excessive response for the subjects at risk of hyper-response.

      Concerning the possibility that the individualised dosing regimen may increase excessive response in low AMH patients: Relevant data on OHSS and preventive interventions are presented in Table 3 and the relationship to AMH is illustrated in Figure 1C. The authors are concerned that since excessive response is not presented for the potential hypo-responders (AMH <15 pmol/L) the overall safety of the individualised dosing is unclear. We take the opportunity to present the data for excessive response in the subjects at risk of hypo-response: the observed incidence of having ≥15 oocytes retrieved among subjects with AMH <15 pmol/L was 6% in the follitropin delta group and 5% in the follitropin alfa group.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 29, Jack Wilkinson commented:

      Jack Wilkinson, Centre for Biostatistics,Institute of Population Health, Manchester Academic Health Science Centre (MAHSC), University of Manchester, Manchester, UK.

      Sarah Lensen, Department of Obstetrics and Gynaecology, University of Auckland, New Zealand.

      Nyboe Andersen and colleagues (2017) present a large randomised study comparing follitropin delta, with dosing based on AMH and body weight, against follitropin alfa, with dose selection by conventional means. Stated key findings of the study include the fact that more women achieved a target response of 8-14 oocytes (43.3% vs 38.4%) in the follitropin delta group, and that fewer women had poor (< 4 oocytes) or excessive (>= 15 or >= 20 oocytes) responses compared to the follitropin alfa group. However, on close inspection of the published study, it is not clear that follitropin delta, administered using this dose selection algorithm, has been shown to be superior to the standard dose of follitropin alfa in relation to response to stimulation.

      We note that calculations regarding number of oocytes obtained do not include women who were randomised but subsequently had their stimulation cycle cancelled for anticipated poor response (prior to hCG trigger). Indeed, there were more women with cycles cancelled for anticipated poor response in the follitropin delta group than the follitropin alfa group. The claim that poor response was lower in the follitropin delta arm then appears to be technically correct but potentially highly misleading. It would be preferable to include women with cancelled cycles in the numerator and denominator (they appeared in neither numerator nor denominator in the analysis presented by Nyboe Andersen and colleagues) by setting the number of oocytes recovered for women with cancelled stimulation cycles to be zero. This would rectify any bias resulting from differential cancellation rates between the study arms. It would also preserve the balance over confounding factors produced by randomisation, which is otherwise violated. When we include all patients in this way, the mean number of oocytes retrieved in the follitropin delta and follitropin alfa arms is 9.6 and 10.1, respectively, and the numbers achieving the target response are 275 (41%) and 247 (37%), which gives p=0.15 from a chi-squared test.

      A second concern is the fact that the authors present the rate of poor response in the low AMH group and the rate of hyper response in the high AMH group, but do not present the rate of poor response in the high AMH group or the rate of hyper response in the low AMH group. If we refer to hyper responses in low AMH patients and poor responses in high AMH patients as ‘inverted’, then we can calculate the number of inverted responses in each group. In fact, the number of inverted responses are greater in the follitropin delta group (5% vs 4% using 15 oocytes as the threshold for hyper response, or 4% vs 2% using 20 oocytes):

      Number inverted (<4 or >=15): 34/640 (5%) in the experimental arm vs 25/643 (4%) in the control arm; p=0.233 from Fisher's exact test.

      Number inverted (<4 or >=20): 23/640 (4%) in the experimental arm vs 11/643 (2%) in the control arm; p=0.038 from Fisher's exact test.
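      These tallies can be checked directly. Below is a minimal pure-Python sketch of a two-sided Fisher's exact test (standard hypergeometric formulation; `scipy.stats.fisher_exact` would serve equally well), applied to the cell counts quoted above:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table
    (the standard minimum-likelihood definition)."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x):
        # Probability of a table with x in the top-left cell.
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(c1, r1)
    probs = [p_table(x) for x in range(lo, hi + 1)]
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

# Inverted responses (<4 or >=15): 34/640 experimental vs 25/643 control
p1 = fisher_exact_two_sided(34, 640 - 34, 25, 643 - 25)
# Inverted responses (<4 or >=20): 23/640 experimental vs 11/643 control
p2 = fisher_exact_two_sided(23, 640 - 23, 11, 643 - 11)
print(round(p1, 3), round(p2, 3))  # reported in the letter as 0.233 and 0.038
```

Only the second comparison (using 20 oocytes as the hyper-response threshold) falls below the conventional 0.05 significance level, consistent with the letter's argument.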

      These calculations are conducted in patients achieving hCG trigger, as it does not appear to be possible to identify whether the women with cycles cancelled for poor response were in the high or low AMH stratum from the information presented. The number of hyper responses in the low AMH group is of particular interest, since this represents increased risk to the treated woman and any resulting offspring. Unfortunately, this information is obfuscated by the presentation in the manuscript. We also note that the decision to report the rates of hypo and hyper response only in these subgroups appears to be a departure from the clinicaltrials.gov record for the trial (https://clinicaltrials.gov/ct2/show/NCT01956110).

      Given that the claim of the authors in relation to the effectiveness of treatment was one of non-inferiority, the matter of the safety of the individualised dosing regimen compared to standard dosing would appear to be of paramount importance. On the basis of the considerations outlined above, it is unclear that an advantage of the individualised follitropin delta regimen in relation to achieving target ovarian responses has been demonstrated. Moreover, the data leave open the possibility that the individualised regimen may increase excessive responses in low AMH patients. We would like to invite the authors to clarify this point.

      A version of this comment has been posted on the journal's website.

      References

      Nyboe Andersen, A., et al. (2017). "Individualized versus conventional ovarian stimulation for in vitro fertilization: a multicenter, randomized, controlled, assessor-blinded, phase 3 noninferiority trial." Fertil Steril 107(2): 387-396.e4.

      Conflict of interest statement

      JW is funded by a Doctoral Research Fellowship from the National Institute for Health Research (DRF-2014-07-050). The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health. JW also declares that publishing research is beneficial to his career. JW is a statistical editor for the Cochrane Gynaecology and Fertility Group, although the views expressed here are not necessarily those of the group.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 09, Gerhard Holt commented:

      Intimate Cohabitants' Shared Microbiomes and Alpha-Synuclein Pathology

      Sampson et al.'s findings on the role of gut microbiota in alpha-synucleinopathies [1] raise considerable potential concern regarding the microbiome of intimate cohabitants of patients with alpha-synuclein pathology, since such cohabitants are likely to have substantial exposure to a shared microbiome.

      Do these intimate cohabitants develop more early alpha-synuclein pathology than non-cohabitant controls?

      Are they at greater risk for Parkinson's disease and other alpha-synucleinopathies?

      From an Infectious Disease perspective, this might be an important clinical study.

      Perhaps some of the intimate cohabitants are more resistant to alpha-synuclein pathology, SCFAs, or the underlying gut microbes than others, despite similar microbiome exposure?

      Perhaps differences in the interactions between their immune systems and their microbiome alter outcomes?

      This might be a fertile area for differential proteome studies.

      Given the importance of Sampson et al.'s findings, hopefully a clinical trial of antibiotics followed by (healthy-donor) fecal transplant for patients with Parkinson's disease will soon follow, perhaps with a cross-over design.

      Given the low risks of a potential treatment which uses well-known medications and a well-established procedure (fecal transplant), and given the grossly disproportionate potential benefit, hopefully this study will be expedited in humans.

      It may be useful however to also study Alpha-Synuclein pathology in their intimate cohabitants relative to controls.

      References:

      [1] Sampson TR, et al. Gut Microbiota Regulate Motor Deficits and Neuroinflammation in a Model of Parkinson's Disease. Cell. 2016;167(6):1469-1480.e12. <www.cell.com/cell/pdf/S0092-8674(16)31590-2.pdf>


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 20, Claudiu Bandea commented:

      The etiology of Parkinson's disease is not an enigma

      In a recent expert review article entitled “How close are we to revealing the etiology of Parkinson's disease?” (1), Kurt Jellinger concluded: “Although major advances have been made in our understanding of the etiology (and pathogenesis) of PD (and related synucleinopathies) over the last 20 years, it remains as much an enigma as when James Parkinson first described its clinical features” (italics added).

      It remains to be seen if the study by Sampson et al. (2) will join the growing list of “major advances” in the field of Parkinson’s, which have revealed numerous contributing factors, including environmental (e.g. toxins), microbial (e.g. various bacteria and viruses), physiological (e.g. oxidative stress) and genetic (e. g. mutations) in addition to the main risk factor - aging.

      But what if the etiology of Parkinson’s has remained an enigma primarily because of these “major advances”? What if, by promoting a multitude of different causes and pathogenic mechanisms, each supported by strong data and observations, these “major advances” have led to confusion and have constrained progress? What if Parkinson’s enigma is a classic case of ‘not seeing the forest because of the trees’?

      Contrary to the conventional thinking in the field, I recently proposed that the existing data and observations, including those in Sampson et al. (2) and other “major advances”, when integrated in a comprehensive conceptual framework that makes biological and evolutionary sense, point towards a sensible solution to Parkinson’s enigma (3, 4; for recent additional evidence see Ref. 5):

      (i) α-synuclein (aSyn), the primary protein implicated in Parkinson’s and related synucleinopathies, is a member of the innate immune system;

      (ii) The assembly of αSyn into various oligomers and fibers is not a protein misfolding event as currently defined, nor is it a prion-like replication/propagation activity, but it is an integral part of its biological function in innate immunity;

      (iii) The activities associated with the immune function of αSyn lead to Parkinson’s and other synucleinopathies, which are innate autoimmune disorders.

      Sampson et al. (2) performed a series of ingenious experiments evaluating the impact of gut microbiota and some of their metabolites on the aggregation of αSyn, activation of microglia, and motor and gut motility deficits in a transgenic mouse model for Parkinson’s that expresses high levels of human αSyn. Similar to other studies showing a strong influence of enteric microbiota on the immune and nervous systems, the results showed that, compared to germ-free transgenic mice, mice carrying complex gut microbiota had increased αSyn associated pathology and microglia activation in the brain and displayed progressive deficit in motor functions and gut motility.

      The study would have benefited from parallel experiments in mice with disrupted vagal nerve, likely the main portal for αSyn aggregates (as well as for neurotropic microbial/viral pathogens) to the brain, particularly in the light of the finding that the administration of a mixture of short-chain fatty acids, which can reach the brain via the blood circulatory system, simulates the effects of gut microbiota.

      Perhaps the most intriguing finding reported by Sampson et al. was that the gut microbiota transferred from patients with Parkinson’s enhanced motor impairments, whereas the microbiota from healthy human donors did not, which suggested that the effects might be induced by specific Parkinson’s associated microbial taxa; however, I suggest an additional putative mechanism: the impairment was initiated by ‘αSyn oligomeric seeds’ that were transferred from the patients with Parkinson’s.

      As interesting and valuable as the results reported by Sampson et al. (2) are, the authors have failed to make sense of them in the context of the other “major advances” in the field, thereby adding to the enigma (i.e. confusion) regarding the etiology of Parkinson’s and related synucleinopathies.

      For example, given that the ‘prion’ concept has been one of the major emerging paradigms in the field of Parkinson’s and other neurodegenerative diseases, including Alzheimer’s, Huntington’s, and ALS (e.g. 5-10), it is highly surprising that Sampson et al. have only mentioned it in the following statement: “Braak’s hypothesis posits that aberrant αSyn accumulation initiates in the gut and propagates via the vagus nerve to the brain in a prion-like fashion (Del Tredici and Braak, 2008)” (italics added). It appears that the “Del Tredici and Braak, 2008” reference is a technical error, as it doesn’t present a “Braak’s hypothesis”; possibly, Sampson et al. intended to refer to the ‘dual-hit hypothesis’, first presented in 2007 and re-published by the same authors two years later (11). Nevertheless, as I have previously discussed (3,4,12), the ‘prion hypothesis’ is probably flawed, so I commend Sampson et al. for omitting it, but I doubt that the authors were motivated by the same perspective.

      I end this brief essay on Sampson et al. (2) by outlining its major weakness, which is shared with most studies addressing the role of αSyn: the failure to consider the physiological function of αSyn when exploring its pathogenic mechanisms and the etiology of Parkinson’s and related synucleinopathies. Unfortunately, the prion hypothesis and the associated ‘protein misfolding’ paradigm have conceptually uncoupled the pathogenic mechanisms associated with αSyn, as well as with the other main proteins implicated in neurodegeneration (e.g. APP/amyloid-β, tau, huntingtin, TDP-43, prion protein), from their evolutionarily selected biological function (3,4). As recently suggested, the paths toward understanding neurodegeneration must be re-evaluated (13), and this should start with assessing the scientific foundation of the prion hypothesis and protein misfolding paradigm, which are questionable.

      References:

      (1) Jellinger KA. 2015. How close are we to revealing the etiology of Parkinson's disease? Expert Rev Neurother. 15(10):1105-7. Jellinger KA, 2015

      (2) Sampson et al. 2016. Gut Microbiota Regulate Motor Deficits and Neuroinflammation in a Model of Parkinson's Disease. Cell. 167(6):1469-80. Sampson TR, 2016

      (3) Bandea CI. 2013. Aβ, tau, α-synuclein, huntingtin, TDP-43, PrP and AA are members of the innate immune system: a unifying hypothesis on the etiology of AD, PD, HD, ALS, CJD and RSA as innate immunity disorders. bioRxiv. doi: 10.1101/000604; http://biorxiv.org/content/early/2013/11/18/000604

      (4) Bandea CI. 2009. Endogenous viral etiology of prion diseases. Nature Precedings. http://precedings.nature.com/documents/3887/version/1/files/npre20093887-1.pdf

      (5) Beatman et al. 2015. Alpha-Synuclein Expression Restricts RNA Viral Infections in the Brain. J Virol. 90(6):2767-82; Beatman EL, 2015

      (6) Miller G. 2009. Neurodegeneration. Could they all be prion diseases? Science. 326(5958):1337-9. Miller G, 2009

      (7) Goedert M, Clavaguera F, Tolnay M. 2010. The propagation of prion-like protein inclusions in neurodegenerative diseases. Trends Neurosci. 33(7):317-25. Goedert M, 2010

      (8) Angot et al. 2010. Are synucleinopathies prion-like disorders? Lancet Neurol. 9(11):1128-38. Angot E, 2010

      (9) Jucker M, Walker LC. 2013. Self-propagation of pathogenic protein aggregates in neurodegenerative diseases. Nature. 501(7465):45-51. Jucker M, 2013

      (10) Prusiner SB. 2013. Biology and genetics of prions causing neurodegeneration. Annu Rev Genet. 47:601-23. Prusiner SB, 2013

      (11) Hawkes CH, Del Tredici K, Braak H. 2009. Parkinson's disease: the dual hit theory revisited. Ann N Y Acad Sci. 1170:615-22. Hawkes CH, 2009

      (12) Bandea CI. 1986. From prions to prionic viruses. Med Hypotheses. 20(2):139-42. Bândea CI, 1986

      (13) Kosik et al. 2016. A path toward understanding neurodegeneration. Science. 353(6302):872-3. Kosik KS, 2016


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 31, George McNamara commented:

      Western (blot) misconduct paper: not perfect, but worth reading and thinking about. I'll also add that:

      * In this era of NGS (next-generation sequencing), any RNA-seq or single-cell RNA-seq result that does not pass all the filters/gates is going to be ignored ... enabling the authors to "prove their hypothesis (again and again)".

      * More NGS = fewer Northern blots. Of course, Northern blots are just as susceptible to cropping (= misconduct) as Western blots.

      * The article mentions insulin, but came out before this paper: https://www.ncbi.nlm.nih.gov/pubmed/28263308 Kracht et al. 2017, "Autoimmunity against a defective ribosomal insulin gene product in type 1 diabetes," Nature Medicine ... insulin alternative-reading-frame mRNA results in "a highly immunogenic polypeptide" targeted by cytolytic CD8 T-cells.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 16, Kevin Kavanagh commented:

      We have ongoing concerns regarding several of the questions posed in our previous letter (1) along with the authors’ response (2). The major concern is that we are still not able to reconcile the data presented in the authors’ reply letter with those presented in the manuscript. Thus, we feel there may be an overstatement of the efficacy of the intervention. In addition, we have ongoing concerns regarding the reporting of conflicts of interest.

      1. The most important stated outcome of this study is the 42% decrease in hospital-onset MRSA infections.

      This outcome has been widely disseminated in the media and even appeared in the headline of a major infectious disease news outlet, Infection Control Today: “Hospital Reduces MRSA Rates by 42% with electronic hand hygiene measurement.”(3) However, the pre- and post-intervention rates (baseline rate of 0.381 infections per 1000 days, reduction of 0.114 infections per 1000 days, and post-intervention rate of 0.267 infections per 1000 days) of MRSA that Kelly, et al. (2) gave in their letter showed only a 30% reduction:

      0.114 / 0.381 = 0.299 or 29.9%

      In their letter, Kelly, et al.(2) questioned our calculation of the baseline rate. Our calculation was based upon the data given in their manuscript of a 42% reduction which corresponded to a reduction in MRSA infections of 0.114 per 1000 patient days. Using algebra, the baseline and post-intervention rates can then be calculated:

      If the reduction is 0.114 and corresponds to 42%, then the baseline rate equals:

      0.114 / 0.42 = 0.271

      If the baseline rate is 0.271 and the reduction 0.114, then the post-intervention rate equals:

      0.271 - 0.114 = 0.157

      We feel the authors should explain or correct this discrepancy in their study’s outcome. As we stated, our calculated post-intervention rate (0.157) appeared to be even better than that reported by Jain, et al.(4) We agree that the authors’ reported post-intervention rate in their letter (0.267) is in accordance with that reported by Jain, et al, but appears to be different from the results reported in their manuscript.
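      The arithmetic above can be checked with a few lines of Python (a sketch; the rate values are those quoted in the letter from the manuscript and the authors' reply):

```python
# Rates are hospital-onset MRSA infections per 1000 patient-days,
# as quoted in the letter above.
reported_baseline = 0.381   # baseline rate from the authors' reply letter
reduction = 0.114           # absolute reduction reported in the manuscript

# Relative reduction implied by the reply letter's figures:
relative_reduction = reduction / reported_baseline
print(f"{relative_reduction:.1%}")  # prints 29.9%, not the claimed 42%

# Baseline and post-intervention rates implied by a true 42% reduction:
implied_baseline = reduction / 0.42
implied_post = implied_baseline - reduction
print(round(implied_baseline, 3), round(implied_post, 3))  # prints 0.271 0.157
```

The two sets of figures cannot both be right: a 0.114 reduction from a 0.381 baseline is a 30% relative reduction, while a 42% relative reduction would imply a 0.271 baseline and a 0.157 post-intervention rate.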

      2. The authors’ explanation of the conflict-of-interest (COI)

      The authors’ statement regarding the original statement of conflicts-of-interest was given as follows: “The conflict of interest statement was inadvertently left off the prepublication galley proof, but was included in the final publication.” Since the publisher is the one which initially creates the galley poof, we feel this may give the impression it was a publisher’s error.

      According to PubMed, the original date of online publication (Epub) for Kelly, et al.(5), was June 23, 2016. The Journal has a website designation for articles in this stage as “In Press Corrected Proof”. As of July 25, 2016, the article, which we received from the University of Kentucky Library, had a COI statement of “None to report.” The August 2016 print publication of the article and article’s current PDF both have the same DOI number as the June 23, 2016 e-published “In Press Corrected Proof”. These latter manuscripts have the revised COI statement.

      In addition, the final publication is often considered to be the e-publication (Epub), which is assigned a Digital Object Identifier (DOI) when the article “is published” (6), and is available to libraries and/or PubMed. Some journals do not even publish a printed version of an article. At this stage, author corrections are often time-stamped or, if major, accomplished by a letter or erratum.

      Finally, the COI issue is not only with potential industrial funding but also with potential COIs involving the authors. According to Infection Control Today: “Connie Steed, MSN, RN, CIC, director of infection prevention at GHS and a MRSA study co-author, has been working with DebMed for the past seven years.”(3) The start of this relationship appears to have preceded the study start date by several years and we feel should have been either declared or explained. We also feel a COI statement from all authors should also accompany the publication of this and every article.

      Summary: It is not the purpose of this communication to establish the efficacy of a device which monitors hand hygiene compliance but to express our concern that the Kelly, et al. study(5) should be viewed with caution when entering it in to a body of evidence to establish standards for patient care.

      References

      (1) Kavanagh KT, Saman DM. Comment Regarding: Electronic hand hygiene monitoring as a tool for reducing health care–associated methicillin-resistant Staphylococcus aureus infection. American Journal of infection Control. December 01 2016 http://www.ajicjournal.org/article/S0196-6553(16)30904-X/fulltext Kavanagh KT, 2016

      (2) Kelly WJ, Blackhurst D, McAtee W, Steen C., Response to Letter Regarding Manuscript “Electronic Hand Hygiene Monitoring as a Tool for Reducing Nosocomial Methicillin-resistant Staphylococcus aureus Infection” American Journal of infection Control. December 01 2016 http://www.ajicjournal.org/article/S0196-6553(16)30812-4/fulltext

      (3) Hospital Reduces MRSA Rates by 42% with electronic hand hygiene measurement. Infection Control Today. July 8, 2016. http://www.infectioncontroltoday.com/news/2016/07/hospital-reduces-mrsa-rates-by-42-with-electronic-hand-hygiene-measurement.aspx

      (4) Jain R, Kralovic SM, Evans NE, Ambrose M, Simbartl LA, Obrosky DS, Render ML, Freyberg RW, Jernigan JA, Muder RR , Miller LJ, Roselle GA. Veterans Affairs Initiative to Prevent Methicillin-resistant Staphyloccus aureus Infections . NEJM Apr 2011:364:1419-1430 Retrieved From: http://www.nejm.org/doi/full/10.1056/NEJMoa1007474

      (5) Kelly JW, Blackhurst D, McAtee W, Steed C. Electronic hand hygiene monitoring as a tool for reducing health care-associated methicillin-resistant Staphylococcus aureus infection. Am J Infect Control. 2016 Jun 23. pii: S0196-6553(16)30340-6. doi: 10.1016/j.ajic.2016.04.215. [Epub ahead of print] Kelly JW, 2016

      (6) What is a digital object identifier, or DOI? APA Style. American Psychological Association. Last accessed on Dec. 3, 2016 from http://www.apastyle.org/learn/faqs/what-is-doi.aspx

      Kevin T. Kavanagh, MD, MS

      Daniel M. Saman, DrPH, MPH


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 04, Ralph Brinks commented:

      One of the key methods used in this paper is not appropriate. Note the comment about a similar paper authored from the same group: Hoyer A, 2017


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Feb 04, Sin Hang Lee commented:

      The medical profession, including medical schools and hospitals, is now a part of the health care industry, and implementation of editorial policies of medical journals is commonly biased in favor of business interests. PubMed Commons has offered the only, albeit constrained, open forum to air dissenting research and opinions in science-based language. Discontinuation of PubMed Commons will silence any questioning of the industry-sponsored promotional publications indexed in PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 17, Sin Hang Lee commented:

      "In science you don't need to be polite, you merely have to be right."- Winston Churchill.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Feb 22, Marcia Herman-giddens commented:

      Please explain why this comment was removed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Feb 19, Sin Hang Lee commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Feb 19, Sin Hang Lee commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Feb 14, Mark Schiffman commented:

      Prophylactic HPV vaccines consist of the major coat protein L1 assembled into macromolecular structures, virus-like particles (VLPs), that mimic the geometry and morphology of the wild-type virus coat or capsid but do not contain full-length HPV DNA genomes. VLPs, with their repeat crystalline array of L1 pentamers as in the wild-type virus, are intrinsically immunogenic (1), eliciting high antibody titres with or without adjuvant (2). The safety profile of the licensed vaccines was assessed extensively in randomized clinical trials (RCTs) (3-5). In the 10 years since the first two commercial vaccines, Gardasil and Cervarix, were licensed, the safety profile has been intensively monitored in the post-licensure setting by robust pharmacovigilance using both passive and active surveillance (3, 6). These studies, which collectively have included millions of subjects, provide no evidence whatsoever to support the speculation that HPV vaccines, by virtue of their protein content, adjuvants or any other element within the formulation, could induce, trigger or exacerbate auto-immune disorders, thromboembolic events, demyelinating diseases or other chronic conditions.

      The Global Advisory Committee on Vaccine Safety (GACVS) of the World Health Organization has reviewed the safety data for HPV vaccines on several occasions (http://www.who.int/vaccine_safety/committee/topics/hpv/en/). GACVS stated in 2014: “In summary, the GACVS continues to closely monitor the safety of HPV vaccines and, based on a careful examination of the available evidence, continues to affirm that its benefit-risk profile remains favorable. The Committee is concerned, however, by the claims of harm that are being raised on the basis of anecdotal observations and reports in the absence of biological or epidemiological substantiation. While the reporting of adverse events following immunization by the public and health care providers should be encouraged and remains the cornerstone of safety surveillance, their interpretation requires due diligence and great care. As stated before, allegations of harm from vaccination based on weak evidence can lead to real harm when, as a result, safe and effective vaccines cease to be used. To date, there is no scientific evidence that aluminium-containing vaccines cause harm, that the presence of aluminium at the injection site (the MMF “tattoo”) is related to any autoimmune syndrome, and that HPV DNA fragments are responsible for inflammation, cerebral vasculitis or other immune-mediated phenomena.” (http://www.who.int/vaccine_safety/committee/topics/hpv/GACVS_Statement_HPV_12_Mar_2014.pdf)

      Efficacy against vaccine-type high-grade cervical intraepithelial neoplasia (CIN3, the well-established precursor of cervical cancer (7)) has been demonstrated for the vaccines in the relevant RCTs (8-10). Cervical cancer cannot be used as an end point in clinical trials for ethical reasons (7). With regard to screening, contrary to the misleading comments by Dr. Lee, there is a large and authoritative body of evidence (including many RCTs) showing that any of the approved HPV tests is substantially more sensitive for detection of CIN2, CIN3, or cancer than cytology (11). The figure he quotes of 58% sensitivity is the result of a controversial application of “verification bias adjustment” in detection of CIN2 or worse, in a single trial. Large systematic reviews have consistently reported much higher sensitivity of HPV testing compared to cytology (12). The sensitivity of HPV testing is not at issue; rather, specificity is a concern. As we emphasized in the article, HPV testing does require a secondary triage method to identify persistent infection and cancer precursors that require treatment, because HPV is very common and most infections “clear”. There are several choices of triage strategy prior to treatment; HPV typing and cytology or its analogues are most often proposed. Automated methods will soon be available.

      Carcinogenic human papillomavirus infections are a global public health problem: >80% of the annual ≥530,000 cervical cancer cases occur in resource-poor countries in which the disease is often incurable (13). Whatever preventive measures are adopted, evaluating the impact of interventions to control infection and disease requires a global perspective; from this perspective, the promise of HPV vaccination and HPV testing is overwhelmingly supported by highly credible data.

      Mark Schiffman, John Doorbar, Nicolas Wentzensen, Silvia de Sanjosé, Carole Fakhry, Bradley J. Monk, Margaret A. Stanley & Silvia Franceschi

      1. Bachmann MF, Zinkernagel RM. The influence of virus structure on antibody responses and virus serotype formation. Immunology today. 1996;17(12):553-8.
      2. Harro CD, Pang YY, Roden RB, Hildesheim A, Wang Z, Reynolds MJ, et al. Safety and immunogenicity trial in adult volunteers of a human papillomavirus 16 L1 virus-like particle vaccine. J Natl Cancer Inst. 2001;93(4):284-92.
      3. Vichnin M, Bonanni P, Klein NP, Garland SM, Block SL, Kjaer SK, et al. An Overview of Quadrivalent Human Papillomavirus Vaccine Safety: 2006 to 2015. Pediatr Infect Dis J. 2015;34(9):983-91.
      4. Moreira ED, Jr., Block SL, Ferris D, Giuliano AR, Iversen OE, Joura EA, et al. Safety Profile of the 9-Valent HPV Vaccine: A Combined Analysis of 7 Phase III Clinical Trials. Pediatrics. 2016.
      5. Descamps D, Hardt K, Spiessens B, Izurieta P, Verstraeten T, Breuer T, et al. Safety of human papillomavirus (HPV)-16/18 AS04-adjuvanted vaccine for cervical cancer prevention: A pooled analysis of 11 clinical trials. Hum Vaccin. 2009;5(5).
      6. Angelo MG, Zima J, Tavares Da Silva F, Baril L, Arellano F. Post-licensure safety surveillance for human papillomavirus-16/18-AS04-adjuvanted vaccine: more than 4 years of experience. Pharmacoepidemiol Drug Saf. 2014;23(5):456-65.
      7. Pagliusi SR, Teresa Aguado M. Efficacy and other milestones for human papillomavirus vaccine introduction. Vaccine. 2004;23(5):569-78.
      8. Future II Study Group. Quadrivalent vaccine against human papillomavirus to prevent high-grade cervical lesions. The New England journal of medicine. 2007;356(19):1915-27.
      9. Lehtinen M, Paavonen J, Wheeler CM, Jaisamrarn U, Garland SM, Castellsague X, et al. Overall efficacy of HPV-16/18 AS04-adjuvanted vaccine against grade 3 or greater cervical intraepithelial neoplasia: 4-year end-of-study analysis of the randomised, double-blind PATRICIA trial. Lancet Oncol. 2012;13(1):89-99.
      10. Joura EA, Giuliano AR, Iversen OE, Bouchard C, Mao C, Mehlsen J, et al. A 9-valent HPV vaccine against infection and intraepithelial neoplasia in women. The New England journal of medicine. 2015;372(8):711-23.
      11. Ronco G, Dillner J, Elfström KM, Tunesi S, Snijders PJ, Arbyn M, Kitchener H, Segnan N, Gilham C, Giorgi-Rossi P, Berkhof J, Peto J, Meijer CJ; International HPV screening working group. Efficacy of HPV-based screening for prevention of invasive cervical cancer: follow-up of four European randomised controlled trials. Lancet. 2014 Feb 8;383(9916):524-32. Erratum in: Lancet. 2015 Oct 10;386(10002):1446.
      12. Arbyn M, Ronco G, Anttila A, Meijer CJLM, Poljak M, Ogilvie G et al. Evidence Regarding Human Papillomavirus Testing in Secondary Prevention of Cervical Cancer. Vaccine. 2012; 30 Suppl 5:F88-99.
      13. Plummer M, de Martel C, Vignat J, Ferlay J, Bray F, Franceschi S. Global burden of cancers attributable to infections in 2012: a synthetic analysis. Lancet Glob Health. 2016 Sep;4(9):e609-16. doi: 10.1016/S2214-109X(16)30143-7.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 Jan 19, Sin Hang Lee commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 06, Randi Pechacek commented:

      Holly Bik, a new faculty member at UC Riverside and first author of this paper, wrote a blog post on microBEnet describing the background for this research. Read about it here


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 08, NephJC - Nephrology Journal Club commented:

      This commentary on CKD staging and Precision Medicine was discussed on December 6th and 7th in the open online nephrology journal club, #NephJC, on twitter. Introductory comments written by Tom Oates and Kevin Fowler are available at the NephJC website here and here. The journal also kindly made the commentary free to access for this month. The discussion was quite detailed, with over 100 participants, including nephrologists, fellows and patients as well as author Jonathan Himmelfarb. The highlights of the tweetchat were:

      • The authors have written a richly referenced, thoughtful and thought-provoking commentary, which is a must-read for anyone interested in gaining a perspective in this area.

      • The advent of eGFR reporting and CKD staging have resulted in many advances, including improved recognition and diagnosis, planning therapy, epidemiological estimates and public messaging. Nonetheless, the categorical staging system is not perfect (groups together diverse diseases) though opinion was sharply divided on the issue of CKD in the elderly being a true phenomenon versus an ageing effect.

      • The section on precision medicine in kidney disease was also quite nuanced, with a lot of optimism and ideas discussed, such as new trial designs and personalized care.

      Transcripts of the tweetchats, and curated versions as storify are available from the NephJC website.

      Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visit the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 07, Katherine S Button commented:

      Thanks, Erick H Turner; this reference and the others you provide are very helpful.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 03, Erick H Turner commented:

      Please see... Turner EH. Publication bias, with a focus on psychiatry: causes and solutions. CNS Drugs 2013;27:457–68. doi:10.1007/s40263-013-0067-9 ...which cited earlier proposals along this line. My article proposed a related approach, but it differs in that the subject of the review is the study protocol, which is written before--not after--the study results are known.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 03, Lydia Maniatis commented:

      Could the authors please provide a citation(s) for the following introductory comments?

      "Over much of the dynamic range of human cone-mediated vision, light adaptation obeys Weber's law. Raw light intensity is transformed into a neural response that is proportional to contrast...where ϕW is the physiological response to a flash of intensity ΔI, and I is the light level to which the system is preadapted. Put another way, the cone visual system takes the physical flash intensity ΔI as input and applies to this input the multiplicative Weber gain factor to produce the neural response (Equation 1). This transformation begins in the cones themselves and is well suited to support color constancy when the illumination level varies."

      Does this statement, assuming relevant though missing citations, apply in general, or is it a description of results collected under very narrow and special conditions, and if so, what are they?

      As in many psychophysical studies, a very small number of subjects included an author. In experiments 1 and 2, one of two observers was an author. Why isn't this considered a problem with respect to bias?

      Also similar to many other psychophysical papers, the "hypothesis" being tested is the tip of a bundle of casually-made, rather complex, rather vague, undefended assumptions which the experiments do not, in fact, test. For example:

      1. "As our working hypothesis, we assume that the observer’s signal-to-noise ratio for discriminating trials in which an adapting field is presented alone from trials with a superimposed small, brief flash is [equation]."

      2. "The assumption that visual sensitivity is limited by such multiplied Poisson noise has been previously proposed (Reeves, Wu, & Schirillo, 1998) as an explanation of why visual sensitivity is less than would be expected if threshold was limited by the photon fluctuations from the adapting field (Denton & Pirenne, 1954; Graham & Hood, 1992)."

      I note that the mere fact that Reeves, Wu and Schirillo proposed an assumption does not amount to an argument.

      Roughly, what researchers are doing is similar to this:

      Let's assume that how quickly a substance burns is a function of the amount of (assumed) phlogiston (possessing a number of assumed characteristics) it contains. So I burn substance "a", and I burn substance "b", and conclude that, since the former burns faster than the latter, it also contains more assumed phlogiston having the assumed characteristics. The phlogiston assumptions (and the authors here bundle together layers of assumptions) get a free ride, and they shouldn't. The title of this paper is tantamount to "Substance "a" contains more phlogiston than substance "b." It can only be valid if all of the underlying assumptions based on which the data was interpreted are valid, and that's unknown at best. We can even make the predictions a little more specific, and thus appear to test among competing models (which I think is actually what is going on here). For example, one model might predict a faster burn function than another, allowing us to "decide" between two different phlogiston models neither of which will actually have been tested. (Helping to avoid this type of fruitless diversion is what Popper's epistemology was designed to accomplish.)

      Also, it seems odd for the authors to be testing a tentative theory from the 1940's, which was clearly premature and inadequate, and apparently choosing to test a less-informed version of it:

      "In presenting the theory in this way, we have adhered more closely to the original presentation of Rose—an engineer who was interested in both biological and machine vision—than to that of de Vries, who was a physiologist and who introduced supplementary assumptions about the spatiotemporal summation parameters in human rod vision. We adopt Rose’s approach because the relevant neural parameters are still not well understood, and we wish to clearly distinguish between the absolute limits on threshold set by physics and the still incompletely understood neural mechanisms."

      In addition, the authors seem to have adopted the attitude that various selected contents of perception can be directly correlated with the activity of cells at any chosen level of the visual system (even when the neural parameters are still not well understood!), and that the rest of the activity leading to the conscious percept can be ignored, and that percepts that can't be directly correlated with the activity of the chosen cells can be ignored, via casual assumptions such as N. Graham's "the brain becomes transparent" under certain conditions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 10, Jose M. Moran commented:

      I think that the authors have not correctly addressed the analysis of their results. They have correctly performed intragroup comparisons, but fail to analyze the between-groups results. At the final time point, there are no statistically significant differences (P=0.366 for LSS and P=0.641 for IDATE-state) between G1 (massage + rest) and G2 (massage + reiki), so no effect of the reiki intervention was detected at all in this study. Likewise, the measured effect sizes do not differ between G1 and G2 for either LSS or IDATE-state; the authors have failed to analyze the 95% CIs for the calculated Cohen's d, which overlap completely. Obviously both G1 and G2 differ significantly from G3 (no intervention), but both by the same amount.
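      The overlap argument can be illustrated with a minimal sketch; all group means, SDs, and sample sizes below are hypothetical placeholders, not values taken from the paper:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def d_ci95(d, n1, n2):
    """Approximate 95% CI for Cohen's d (large-sample normal approximation)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return (d - 1.96 * se, d + 1.96 * se)

def cis_overlap(ci_a, ci_b):
    """True if the two confidence intervals overlap."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Hypothetical effect sizes of G1 and G2, each relative to the control group G3.
d_g1 = cohens_d(20.0, 5.0, 30, 26.0, 5.0, 30)   # massage + rest vs control
d_g2 = cohens_d(21.0, 5.0, 30, 26.0, 5.0, 30)   # massage + reiki vs control

# If the two CIs overlap this heavily, the data give no basis for claiming
# that G2's effect differs from G1's.
print(cis_overlap(d_ci95(d_g1, 30, 30), d_ci95(d_g2, 30, 30)))  # True
```

      When two effect-size CIs overlap this completely, concluding that one intervention outperforms the other is unwarranted, which is the crux of the criticism above.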


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 12, Nikhil Meena commented:

      In our experience, patients who do not appear to be candidates for pleuroscopy may also be poor candidates for sclerosant therapy. http://journals.sagepub.com/doi/10.1177/1753465817721146

      Abstract BACKGROUND: Indwelling tunneled pleural catheters (TPCs) are increasingly being used to treat recurrent pleural effusions. There is also an increased interest in early pleurodesis in order to prevent infectious complications. We studied the time to removal and other outcomes for all the TPCs placed at our institution. METHODS: After institutional review board approval, records of patients who had had a TPC placed between July 2009 and June 2016 were reviewed; the catheters were placed in an endoscopy suite or during pleuroscopy with or without a sclerosant. The catheters were drained daily or less frequently and were removed after three drainages of less than 50 ml. RESULTS: During the study period 193 TPCs were placed. Of these 45 (23%) were placed for benign diseases. The commonest malignancy was lung cancer 70 (36%). Drainage 2-3 times a week without a sclerosant ( n = 100) lead to pleurodesis at 57 ± 78 days, while daily drainage after TPC + pleuroscopy + talc ( n = 41) achieved the same result in 14 ± 8 days ( p < 0.001). TPC + talc + daily protocol achieved pleurodesis in 19 ± 7 days, TPC + rapid protocol achieved the same result in 28 ± 19 days ( p = 0.013). The TPCs + sclerosant had an odds ratio of 6.01 (95% confidence interval: 2.1-17.2) of having a complication versus TPC without sclerosant. CONCLUSIONS: It is clear that TPCs when placed with a sclerosant had a significantly shorter dwell time; However, they were associated with higher odds of complications. One must be aware of these possibilities when offering what is essentially a palliative therapy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 08, Koene Van Dijk commented:

      CAUTION: Something went terribly wrong during the peer-review process of this manuscript

      The study by Molfino et al., (2017, 10.1002/jcsm.12156) included extremely small samples (n=9 and n=4 for patient groups and n=2 for the control group).

      According to the text of the manuscript, the BOLD fMRI data collected did not undergo any pre-processing to remove noise; instead, raw values from a hand-drawn region of interest were exported from the Siemens scanner console and imported into Microsoft Excel.

      No statistical analysis was applied to measure contrast between different conditions; instead, raw BOLD values before (time frames 0-50), during (time frames 51-261), and after (single time frame 262) nutritional ingestion were calculated/extracted.

      I recommend the authors and readers who want to learn more about BOLD fMRI data collection and analyses to read "Handbook of Functional MRI Data Analysis" by Poldrack, Mumford, and Nichols.

      DISCLAIMER: I am an employee of Pfizer. The statements or opinions expressed on this site are my own and do not necessarily represent those of Pfizer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 01, Ricardo Pujol-Borrell commented:

      This confirms the rare but interesting nature of autoimmune hypophysitis


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 26, Janet Kern commented:

      Bonferroni is a 'multiple comparisons adjustment' for reducing the risk of false-positive findings when engaging in statistical 'fishing expeditions' among many unrelated associations. It is appropriate only when all of the following are true: 1. the associations are equally important, equally likely, and expected to be zero (absent) based on external (a priori) considerations; 2. the cost of any false negative is minor compared to the cost of any false positive; and 3. the associations are independent of (unrelated to) one another. In return for the reduced risk of false positives, multiple-comparison adjustments like Bonferroni dramatically increase the risk of missing real associations (false negatives). So, even if there were no other objections, Bonferroni as used by the authors (with N = 8) is simply erroneous.

      Using Bonferroni in this study was wrong for several other reasons. First, the authors specifically wanted to test whether influenza vaccination during pregnancy was a risk factor for ASD—this was not a 'fishing expedition' as assumed by Bonferroni (violating '1' above). Second, the overall association of influenza vaccination anytime during pregnancy depends completely on the associations within each trimester, which violates the Bonferroni assumption of independence (violating '3' above). Third, the first trimester is expected to be the period of greatest vulnerability for the developing fetus, and so is a pre-specified hypothesis. (In other words, before the study, the stakeholders expected an association a priori, which also violates '1'.) Finally, we need to be confident that vaccines are safe: the cost of wrongly concluding that the influenza vaccine is safe rivals the cost of wrongly concluding that it causes harm, which violates the Bonferroni assumption ('2') that wrongly concluding harm is more costly than wrongly concluding safety.
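      The trade-off described in this comment can be sketched in a few lines; the N = 8 matches the comment, but the p-value used below is purely hypothetical:

```python
def bonferroni_threshold(alpha_family, n_tests):
    """Per-test significance threshold under a Bonferroni correction."""
    return alpha_family / n_tests

# With a family-wise alpha of 0.05 and N = 8 comparisons, each individual
# test must clear p < 0.00625 instead of p < 0.05.
per_test_alpha = bonferroni_threshold(0.05, 8)
print(per_test_alpha)  # 0.00625

# A pre-specified first-trimester association with (hypothetical) p = 0.01
# would count as significant on its own terms but be discarded after the
# correction -- the false-negative risk the comment describes.
p_hypothetical = 0.01
print(p_hypothetical < 0.05)             # True
print(p_hypothetical < per_test_alpha)   # False
```

      This is why Bonferroni is reserved for exploratory screens of many unrelated hypotheses rather than for a single pre-specified one.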


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 30, Lydia Maniatis commented:

      Do you know what "p-hacking" means? I think that's what you're labelling as "data-driven." Are you aware that there have been six news reports amplifying the uncorroborated claims in your title? Sorry, but your title should have been, "We collected a lot of data."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 30, Antoine Coutrot commented:

      Dear Lydia,

      thank you for highlighting our limitations section; it is indeed quite important. You are absolutely right: our method is too confounded to allow us to draw any general conclusion. As are all experiments in cognitive science: they are all limited by the sample size, by the participant profile, by the task... But we try and do our best. For instance, we collected 400+ participants from 58 nationalities, more than in any eye-tracking experiment ever published. The main points of the paper are 1- gaze contains a wealth of information about the observer; 2- with a big and diverse eye database it is possible to capture in a data-driven fashion which demographics explain different gaze patterns. Here, it happens to be gender, hence the title.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 29, Lydia Maniatis commented:

      Anyone familiar with the vision science literature should know by now that the best place to start reading a published paper is the section at the tail end titled “Limitations of the study.” This is where we can see whether the titular claims have any connection to what the study is actually entitled to report in terms of findings.

      Compare, for example, the title of this study with its “limitations” section, quoted below in full (caps mine):

      “The authors would like to make clear that THIS STUDY DOES NOT DEMONSTRATE THAT GENDER IS THE VARIABLE THAT MOST INFLUENCES GAZE PATTERNS DURING FACE EXPLORATION in general. [i.e. our method is too confounded to allow us to draw any such conclusion in principle].Many aspects of the experimental design might have influenced the results presented in this paper. The actors we used were all Caucasian between 20 and 40 years old with a neutral expression and did not speak—all factors that could have influenced observers' strategies (Coutrot & Guyader, 2014; Schurgin et al., 2014; Wheeler et al., 2011). Even the initial gaze position has been shown to have a significant impact on the following scanpaths (Arizpe et al., 2012; Arizpe et al., 2015). In particular, the task given to the participants—rating the level of comfort they felt with the actor's duration of direct gaze—would certainly bias participants' attention toward actors' eyes. One of the first eye-tracking experiments in history suggested that gaze patterns are strongly modulated by different task demands (Yarbus, 1965). This result has since been replicated and extended: More recent studies showed that the task at hand can even be inferred using gaze-based classifiers (Boisvert & Bruce, 2016; Borji & Itti, 2014; Haji-Abolhassani & Clark, 2014; Kanan et al., 2015). Here, gender appears to be the variable that produces the strongest differences between participants. But one could legitimately hypothesize that if the task had been to determine the emotion displayed by the actors' face, the culture of the observer could have played a more important role as it has been shown that the way we perceive facial expression is not universal (Jack, Blais, Scheepers, Schyns, & Caldara, 2009). Considering the above, the key message of this paper is that our method allows capturing systematic differences between groups of observers in a data-driven fashion.”

      In other words, this is not a research paper, but a preliminary application of a method that might be useful for a research study. As far as the reported results go, it is an exercise in p-hacking. The title is purely cosmetic. As such it seems to have been rather effective, insofar as the article has already been the subject of six news stories, including reports in the Daily Mail and Le Monde.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 11, Christopher Southan commented:

      Although the study is cogently reported and published as Open Access, the journal has allowed the publication of an irreproducible study because of the non-disclosure of the key inhibitor structure, DCC-3014 from Deciphera (it might be exemplified in WO2014145023, 025, or 029).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 05, Yuwei Fan commented:

      To analyze Zr using EDS, the specimen should not be sputter-coated with gold. The result in Fig. 7 therefore seems questionable.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 02, Alessandro Rasman commented:

      The problem in this study comes from using a percentage rather than an absolute area as a measure of physiological flow problems in veins. Please read this article (http://www.pagepressjournals.org/index.php/vl/article/view/5012) and Figure 3.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 30, Monica Green commented:

      It is useful for Pařízek and colleagues to have presented this hypothetical scenario of an alleged Caesarean section. It was surprising, however, that the study did not engage with the other published literature on the medieval history of C-section (a bibliography is available here: https://www.academia.edu/30089387/Bibliography_on_Caesarean_Section_in_the_Middle_Ages). There is also considerable literature on the history of surgery in medieval Europe and the history of anesthesia.

      What is puzzling about this study is that it is nothing but a hypothetical scenario. The authors have found no testimony contemporary with Beatrice herself to confirm that she had any complications at all with the birth, let alone that it ended in a C-section. The only hint they have found that anything was amiss is her (or her scribe's) use of the phrase salva incolumitate in referring to herself after the birth. Incolumis (and derivative forms) is not a common word in medieval medical texts, but it is not at all rare in diplomatic documents. In my searches (DuCange's Glossarium, http://www.uni-mannheim.de/mateo/camenaref/ducange.html; the Epistolae collection of medieval women's letters: https://epistolae.ccnmtl.columbia.edu/), the phrase comes up commonly simply to confirm one's general health and fitness for office. In other words, there is nothing at all unusual here. Given that obstetrical mishaps were common in the Middle Ages (Green MH, 2008), the principle of lex parsimoniae would have asked that analysis be given first to other complications. Given Beatrice's age at the time of the birth (19), obstetric fistula would likely be high on that list.

      It is a separate question why a legend surrounding Wenceslaus' birth arose, which this study has traced back no further than the 15th century, nearly 100 years after the birth itself. Stories of Caesar were very popular in royal circles at that time, and his birth (by C-section, allegedly, because of a medieval misunderstanding of classical sources) was often depicted in quite elaborately decorated manuscripts. A more interesting question, therefore, is why the legend arose, and why the vernacular histories of the Caesars might have been so influential in this imaginary.

      Finally, it may be important for readers of this post to note that most work in the history of medicine is never registered in the PubMed database. Most historians publish in Humanities venues, and those are not indexed here. So please remember to look beyond PubMed if you are researching historical questions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 19, Stuart RAY commented:

      Not noted on this PubMed entry (yet, perhaps), this paper has been retracted. Of particular note, the retraction statement makes an excellent case for the authors' dedication to data sharing - it was reanalysis of the raw data by another group of scientists that revealed an unexpected finding. Without data sharing it would have been very difficult to discover the problem of mixed species in the sample. Kudos to the authors, and data sharing for reliable science!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 08, NephJC - Nephrology Journal Club commented:

      This trial on early steroid tapering was discussed on November 29th and 30th 2016 in the open online nephrology journal club, #NephJC, on twitter. Introductory comments written by Hector Madariaga and Kevin Fowler are available at the NephJC website here and here.

      The discussion was quite detailed, with over 60 participants, including general and transplant nephrologists, fellows and patients. The highlights of the tweetchat were:

      • The authors should be commended for designing and conducting this important trial, with funding received from the industry.

      • The trial results generated a lot of discussion, though the trial was thought to be underpowered for the outcome of biopsy-proven acute rejection, given the small difference observed (9.9% in the ATG arm versus 10.6-11.2% in the basiliximab arms) relative to the generous sample size assumptions (6.7% and 17%, respectively). The high rate of new-onset diabetes observed overall (compared to lower rates in other trials such as SYMPHONY) was explained by the explicit evaluation with glucose tolerance tests.

      • Overall, the trial did not change any opinions amongst the discussants: practitioners favoring steroid-free regimens were comforted with these results, but others would like to see stronger data with long term graft outcomes before embracing steroid-free regimens.
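      The underpowering point raised in the chat can be illustrated with a rough two-proportion power calculation (normal approximation; the 200-patients-per-arm figure below is an assumption for illustration only, not the trial's actual arm size):

```python
import math

def power_two_proportions(p1, p2, n_per_arm):
    """Approximate power of a two-sided two-proportion z-test at alpha = 0.05
    (normal approximation, equal arm sizes)."""
    z_alpha = 1.96  # critical value for two-sided alpha = 0.05
    p_bar = (p1 + p2) / 2
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z = (abs(p1 - p2) - z_alpha * se_null) / se_alt
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Power under the design assumptions (6.7% vs 17% rejection rates):
print(round(power_two_proportions(0.067, 0.17, 200), 2))   # ~0.89

# Power to detect the difference actually observed (9.9% vs ~11%):
print(round(power_two_proportions(0.099, 0.11, 200), 2))   # ~0.05
```

      In other words, a trial sized to detect the assumed 6.7% vs 17% gap has almost no chance of declaring the much smaller observed difference significant, which is what "underpowered for the observed effect" means here.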

      Transcripts of the tweetchats, and curated versions as storify are available from the NephJC website.

      Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visit the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 31, Robin P Clarke commented:

      That nutrient deficiencies are factors in autism causation was predicted by the antiinnatia theory of autism (Clarke, 1993; Clarke, 2016), which stated that autism is caused by a high level of "antiinnatia factors" (factors tending to cause a sort of "general" reduction of gene-expression).

      It was stated therein that "Gene-expression depends on processes that have many possibilities for malfunction, with many common factors underlying (for instance) all transcription from DNA, all being dependent on, for example, supply of nutrients....", and that thus the autism-causing antiinnatia would tend to result from nutrient deficiencies (though of course nutrient deficits would also produce their own specific symptoms, such as bone problems in respect of vitamin D).

      The extent to which supplementation later in life can reverse the effects of deficiency in earlier developmental periods would depend on to what extent irreversible effects have been caused, such as perhaps neurons not migrating in neurotypical ways, or learning processes delayed too long.

      Future studies should perhaps look for the possibility that the supplementation has more effect on younger children and less (or no) effect on older ones. It would further be expected that the improvements would be relatively permanent rather than ceasing on discontinuation of the supplementation.

      Clarke RP (1993) A theory of general impairment of gene-expression manifesting as autism. Personality and Individual Differences 14,465-482.

      Clarke RP (2016) (Updated presentation of preceding) - (PDF-file:) A theory of evolution-biased reduction of gene-expression manifesting as autism.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 21, Donald Forsdyke commented:

      THE RNA WORLD AND DARRYL REANNEY

      The title of historian Neeraja Sankaran's paper in a "special historical issue" of the Journal of Molecular Evolution implies that the RNA world idea was formulated 30 years ago (i.e. 1986) by a single author, Walter Gilbert (1). Yet the paper traces the story to authors who wrote at earlier times. Missing from the author list is Darryl Reanney who, like Gilbert, documented a "genes in pieces" hypothesis in February 1978 and went on to explore the RNA world idea with the imperative that error-correcting mechanisms must have evolved at a very early stage (2). Much of Reanney's work is now supported (3).

      However, Sankaran cites the video of a US National Library of Medicine meeting organized by historian Nathaniel Comfort on 17th March 2016 (4). Here W. F. Doolittle, who had consistently cited Reanney, discusses the evolutionary speculation triggered by the discovery of introns in 1977, declaring that "several things came together at that time," things that "a guy named Darryl Reanney had been articulating before that." Furthermore, "it occurred to several of us simultaneously and to Darryl Reanney a bit before – before me anyway – that you could just recast the whole theory in terms of the RNA world."

      Gilbert himself thought that "most molecular biologists did not seriously read the evolution literature; probably still don’t." Indeed, contemporary molecular biologists writing on "the origin of the RNA world," do not mention Reanney (5). Thus, we look to historians to put the record straight.

      1.Sankaran N (2016) The RNA world at thirty: a look back with its author. J Mol Evol DOI 10.1007/s00239-016-9767-3 Sankaran N, 2016

      2.Reanney DC (1987) Genetic error and genome design. Cold Spring Harb Symp Quant Biol 52:751-757

      3.Forsdyke DR (2013) Introns first. Biological Theory 7:196-203 Paper here

      4.Comfort N (2016) The origins of the RNA world. Library of Congress Webcast. NLM Webcast

      5.Robertson MP, Joyce GF (2012) The origins of the RNA world. Cold Spring Harb Perspect Biol 4:a003608. Robertson MP, 2012


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 21, Erik Shapiro commented:

      Interesting work! It may be a subtle detail, but did you use iron pentacarbonyl or iron acac as the starting material? In the list of chemicals, you say iron pentacarbonyl, and in the methods you say iron acac. It is somewhat important because the synthesis of iron oxide using the two different starting materials is often different; one being a hot-injection method (pentacarbonyl) the other being what you described (iron acac).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 11, Jean-Pierre Bayley commented:

      Should we screen carriers of maternally inherited SDHD mutations?

      Jean-Pierre Bayley (1), Jeroen C Jansen (2), Eleonora P M Corssmit (3) and Frederik J Hes (4) 1. Department of Human Genetics, 2. Department of Otorhinolaryngology, 3. Department of Endocrinology and Metabolic Diseases, 4. Department of Clinical Genetics, Leiden University Medical Center, Leiden, the Netherlands

      We wish to comment on the above paper by Burnichon and colleagues: Burnichon N, et al. Risk assessment of maternally inherited SDHD paraganglioma and phaeochromocytoma. J Med Genet. 2017;54:125-133.

      In this paper a prospective study is presented that identified and described development of pheochromocytoma in a carrier of an SDHD mutation. Although at first sight not an uncommon occurrence in carriers of these mutations, this case is unusual because the mutation was inherited via the maternal line. This is now only the third reported case of confirmed phaeochromocytoma development following maternal transmission of an SDHD mutation. (1-3) The patient in question was identified amongst a cohort of 20 maternal mutation carriers who underwent imaging surveillance. Based on the identification of one patient in this cohort (5%), the authors make recommendations for the clinical care of carriers of a maternally inherited SDHD mutation. They advise targeted familial genetic testing from the age of 18 in families with SDHD mutations, and that identified carriers undergo imaging and biochemical workup to detect asymptomatic tumours. If the first workup is negative, the authors suggest that patients be informed about paraganglioma-phaeochromocytoma (PPGL) symptoms and recommend an annual clinical examination and blood pressure measurement, with a new workup indicated in case of symptoms suggestive of PPGL. Although this paper is a meaningful contribution to the literature, we are concerned that the authors base their subsequent clinical recommendations on a relatively small cohort. In a recent study, we described one confirmed case of maternal transmission and concluded that “we consider the increase in risk represented by these reports to be negligible.” (2)

      Two reasons underlie this statement. Firstly, the somatic rearrangements underlying the maternal cases identified to date are far more complex (loss of the paternal wild-type SDHD allele by mitotic recombination, followed by loss of the recombined paternal chromosome containing the paternal 11q23 region and the maternal 11p15 region) than the molecular events seen in paternal cases (loss of whole chromosome 11). Secondly, our conclusions were based, implicitly, on many previous studies at our centre over the past three decades in which we described various aspects of the large SDHD cohort collected by us over that period. Genetic aspects of this cohort, and 601 patients with paternally transmitted SDHD mutations, were described by Hensen and co-workers in 2012. (4) As all previous studies suggest that mutations are equally transmissible via the paternal or maternal line, our identification of a single maternal case amongst this cohort suggests that the penetrance of maternally transmitted mutations is very low. Using the calculation employed by Burnichon and colleagues and assuming that at least 600 maternal mutation carriers are alive in the Netherlands, we arrive at an estimate of 0.17% (1/601 = 0.17%), rather than their figure of 5%. In addition to our own cohort, thousands of SDHD mutation carriers have been identified world-wide. Assuming that 1 in 20 maternally transmitted mutations result in tumours, many more maternally inherited cases would have come to our attention, even without surveillance.

      In our opinion the question of management of maternally inherited SDHD mutations comes down to a risk-benefit analysis. The most obvious implication of the recommendations made by Burnichon and colleagues in our patient population would be the institution of surveillance, with all the attendant practical, financial and psychological burdens for 600 carriers of maternally inherited SDHD mutations in order to identify a single case. Furthermore, SDHD-associated PPGL mortality rates and survival in a Dutch cohort of SDHD variant carriers was not substantially increased compared with the general population. (5) In practice, carriers of maternally inherited SDHD mutations at our centre are not advised to undergo surveillance. Instead, we reassure them that their risk of developing PPGL is exceptionally low (described three times worldwide), but that they should be aware, more so than the general population, of symptoms that are suggestive of paraganglioma or phaeochromocytoma. Many families have been in our care for over 25 years and in that time we have found no evidence to suggest that this policy should be revised.

      NB. A version of this comment has been posted on the Journal of Medical Genetics website and has been commented on in turn by Burnichon and colleagues.

      References

      1.Yeap PM, Tobias ES, Mavraki E, Fletcher A, Bradshaw N, Freel EM, Cooke A, Murday VA, Davidson HR, Perry CG, Lindsay RS. Molecular analysis of pheochromocytoma after maternal transmission of SDHD mutation elucidates mechanism of parent-of-origin effect. J Clin Endocrinol Metab 2011;96:E2009-E2013.

      2.Bayley JP, Oldenburg RA, Nuk J, Hoekstra AS, van der Meer CA, Korpershoek E, McGillivray B, Corssmit EP, Dinjens WN, de Krijger RR, Devilee P, Jansen JC, Hes FJ. Paraganglioma and pheochromocytoma upon maternal transmission of SDHD mutations. BMC Med Genet 2014;15:111.

      3.Burnichon N, Mazzella JM, Drui D, Amar L, Bertherat J, Coupier I, Delemer B, Guilhem I, Herman P, Kerlan V, Tabarin A, Wion N, Lahlou-Laforet K, Favier J, Gimenez-Roqueplo AP. Risk assessment of maternally inherited SDHD paraganglioma and phaeochromocytoma. J Med Genet 2017;54:125-33.

      4.Hensen EF, van DN, Jansen JC, Corssmit EP, Tops CM, Romijn JA, Vriends AH, Van Der Mey AG, Cornelisse CJ, Devilee P, Bayley JP. High prevalence of founder mutations of the succinate dehydrogenase genes in the Netherlands. Clin Genet 2012;81:284-8.

      5.van Hulsteijn LT, Heesterman B, Jansen JC, Bayley JP, Hes FJ, Corssmit EP, Dekkers OM. No evidence for increased mortality in SDHD variant carriers compared with the general population. Eur J Hum Genet 2015;23:1713-6.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 18, Darko Lavrencic commented:

      I believe that implantable systems for continuous liquorpheresis and CSF replacement could be successfully used also for intracranial hypotension-hypovolemia syndrome as it could be caused by decreased CSF formation. See: http://www.med-lavrencic.si/research/correspondence/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.


    2. On 2017 Aug 01, Daniel Quintana commented:

      We thank Dr. Grossman for his comments on our manuscript. We are cognisant of the study’s limitations, which we highlighted in our original manuscript. Despite these limitations, we believe our conclusions are still valid but also understand that Dr. Grossman may not agree with our interpretation.

      We will now address Dr. Grossman’s two comments in turn, which we have reprinted for clarity:

      • Comment 1 from Dr. Grossman: The authors report respiration frequency as the peak frequency in a band range between 0.15-0.40 Hz. Resting respiration rate (i.e. frequency), however, is rarely a constant phenomenon for most people within a resting period of several minutes: some breaths are longer, some are shorter, and the peak frequency does not necessarily reflect average breathing rate; in fact, there are very likely to be different peaks, and only the highest peak would have been used to estimate (or misestimate) average respiratory frequency. Spectral frequency analysis, therefore, is a highly imprecise method to calculate mean breathing frequency (perhaps, the difference in relations found between mentally ill vs. healthy people was merely due to increased variability of respiratory frequency among the ill individuals; see Fig. 1F). In any case, this may be sufficient to disqualify the main conclusions of the study.

      Response: We recognize that spectral frequency analysis may not be an optimal method to calculate mean respiration frequency given the intraindividual variation of respiration rate. However, Levene's test of equality of variances shows that the variances in the clinical and healthy groups are not significantly different [F(1,202) = 1.6, p = 0.21], which suggests that we cannot reject the null hypothesis that the group variances are equal. Thus, variability in mean respiration rates between the groups is unlikely to have confounded our results.
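      For readers who wish to reproduce this kind of equality-of-variances check on their own data, the mean-centred Levene statistic can be computed directly. The sketch below uses made-up respiration-rate samples, not the study data; under the null hypothesis W follows an F(k−1, N−k) distribution, which is how an F(1,202) figure like the one above would be obtained:

      ```python
      from statistics import mean

      def levene_W(*groups):
          """Levene's test statistic (mean-centred variant) for equality of variances."""
          k = len(groups)
          N = sum(len(g) for g in groups)
          # Absolute deviations of each observation from its group mean.
          Z = [[abs(y - mean(g)) for y in g] for g in groups]
          z_i = [mean(z) for z in Z]                 # per-group mean deviation
          z_all = sum(sum(z) for z in Z) / N         # grand mean deviation
          between = sum(len(z) * (zi - z_all) ** 2 for z, zi in zip(Z, z_i))
          within = sum((zij - zi) ** 2 for z, zi in zip(Z, z_i) for zij in z)
          return (N - k) / (k - 1) * between / within

      # Hypothetical respiration rates (breaths/min) -- illustrative only.
      clinical = [14.2, 16.8, 12.1, 18.5, 15.0, 13.3]
      healthy = [15.1, 14.7, 16.0, 15.4, 14.9, 15.6]
      print(levene_W(clinical, healthy))
      ```

      The p value is then the upper tail of the F(1, N−2) distribution at W (e.g. via `scipy.stats.f.sf`).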

      • Comment 2 from Dr. Grossman: However, there may be an even more serious problem that invalidates the conclusions of this investigation. As already mentioned, the authors examined respiration frequencies only between 0.15-0.40 Hz; this corresponds to a range between 9 and 24 breaths/minute. Already in 1992, we demonstrated among a group of healthy participants that a sizable proportion of participants manifest substantial proportions of resting breathing cycles below 9 cycles per minute: among 16 healthy individuals carefully assessed for respiration rate during a 10-minute resting period, we found that half of the participants showed 1/5 of their total breathing cycles to be slower than 9 cycles/minute (cpm); over 60% of participants showed >10% of their cycles to be slower than 9 cpm (also very likely that a substantial proportion of breaths occurred beyond 24 cpm). Thus, accurate estimation of mean resting respiration frequency is also seriously compromised by the insufficient range of frequencies included in the analysis. See Grossman (1992, Fig. 5): Grossman, P. Biological Psychology 34 (1992) 131-161

      Response: To confirm that the mean respiration frequency was not missed or misattributed to non-respiratory frequencies, we re-analysed the data including participants that we originally excluded as they fell outside the 0.15-0.4 Hz range. In total, 2 participants (both from the patient group) had a mean respiratory frequency < 0.15 Hz and 4 participants (3 from the patient group and 1 from the healthy control group) had a mean respiratory frequency > 0.4 Hz. We also re-analysed absolute high frequency HRV and adjusted the frequency bands accordingly (0.1-0.4 Hz for the 2 participants with a lower than average respiratory rate and 0.15-0.5 Hz for the 4 participants with a higher than average respiratory rate).

      We found that including these participants did not change the overall conclusions of the study. While we reported an estimated correlation (ρ) of −0.29 between HF-HRV and respiration in the patient group [95% CI (−0.53, −0.03)], our updated analysis demonstrated a slightly stronger estimated association of −0.47 [95% CI (−0.66, −0.26)]. For the healthy controls, we originally reported an estimated correlation (ρ) of −0.04 between HF-HRV and respiration [95% CI (−0.21, 0.12)]. Our updated analyses demonstrated a close to equivalent estimated association of −0.04 [95% CI (−0.20, 0.12)]. We also originally reported that computing the posterior difference of ρ between these two tests revealed a 94.1% probability that ρ was more negative in the clinical group compared to the control group. Re-running this analysis increased this probability to 99.7%.

      Daniel S. Quintana (on behalf of study co-authors)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Apr 25, Paul Grossman commented:

      Quintana et al. (2016) suggest that individual differences in respiration rate are only correlated with high-frequency heart-rate variability (HF-HRV), i.e. respiratory sinus arrhythmia (RSA), among seriously mentally ill people, but not among healthy individuals. The data presented have several methodological problems that seem very likely to severely compromise the authors' conclusions:

      1. The authors report respiration frequency as the peak frequency in a band range between 0.15-0.40 Hz. Resting respiration rate (i.e. frequency), however, is rarely a constant phenomenon for most people within a resting period of several minutes: some breaths are longer, some are shorter, and the peak frequency does not necessarily reflect average breathing rate; in fact, there are very likely to be different peaks, and only the highest peak would have been used to estimate (or misestimate) average respiratory frequency. Spectral frequency analysis, therefore, is a highly imprecise method to calculate mean breathing frequency (perhaps, the difference in relations found between mentally ill vs. healthy people was merely due to increased variability of respiratory frequency among the ill individuals; see Fig. 1F). In any case, this may be sufficient to disqualify the main conclusions of the study.

      2. However, there may be an even more serious problem that invalidates the conclusions of this investigation. As already mentioned, the authors examined respiration frequencies only between 0.15-0.40 Hz; this corresponds to a range between 9 and 24 breaths/minute. Already in 1992, we demonstrated among a group of healthy participants that a sizable proportion of participants manifest substantial proportions of resting breathing cycles below 9 cycles per minute: among 16 healthy individuals carefully assessed for respiration rate during a 10-minute resting period, we found that half of the participants showed 1/5 of their total breathing cycles to be slower than 9 cycles/minute (cpm); over 60% of participants showed >10% of their cycles to be slower than 9 cpm (also very likely that a substantial proportion of breaths occurred beyond 24 cpm). Thus, accurate estimation of mean resting respiration frequency is also seriously compromised by the insufficient range of frequencies included in the analysis. See Grossman (1992, Fig. 5): Grossman, P. Biological Psychology 34 (1992) 131-161

      https://www.researchgate.net/profile/Paul_Grossman2/publication/21689110_Respiratory_and_cardiac_rhythms_as_windows_to_central_and_autonomic_biobehavioral_regulation_Selection_of_window_frames_keeping_the_panes_clean_and_viewing_the_neural_topography/links/5731a22708ae6cca19a2d221/Respiratory-and-cardiac-rhythms-as-windows-to-central-and-autonomic-biobehavioral-regulation-Selection-of-window-frames-keeping-the-panes-clean-and-viewing-the-neural-topography.pdf?origin=publication_detail&ev=pub_int_prw_xdl&msrp=vu97U8y7CNd-ip3iK-qeQgkeqfmS6EwOYfT0BMazIWb4K9Weys1ta4uRS9rdGDRYEbtODvNOG_dr7MWpJIsjJrRkt_z8sTfSS4XmxvaEPMo.DabVyZLtsNb0XPkl_aRXgRYPgmzZVGFb4rchSD_o4vKn98sRTVYBXvo7RQOTYFxDbL7VMx9qNlfuFZvJNy8-9g.kd_GECHVk8wJ18QwWTmSdS3htJncx0qJ0Okn_km-wIHEkyXmPXbXIO-Rb_KUvz_72b5WrLKh7otlmZ6awszetQ.c3eR_WnqJ55XOex_Q4-EHpow-8RGg-Oi87AAPSljLLDtjYimkEgJ99Lu9lmclW4kkI11Jzzp2mkQ4pKenDt6BA

      It is also unfortunate that the authors merely cited a single investigation that unusually showed no relation between individual differences in respiration frequency and RSA magnitude (i.e. Denver et al., 2007), but none of the many studies that have found correlations in the range of r's= 0.3-0.5; e.g. https://www.researchgate.net/publication/279615441_Respiratory_Sinus_Arrhythmia_and_Parasympathetic_Cardiac_Control_Some_Basic_Issues_Concerning_Quantification_Applications_and_Implications

      http://journal.frontiersin.org/article/10.3389/fphys.2016.00356/full

      https://pdfs.semanticscholar.org/6e44/e75dd2061a43cc69a4354171540e8a98e6a5.pdf

      The Denver et al. study, additionally, used the same inaccurate method to calculate respiration rate.

      Paul Grossman Pgrossman0@gmail.com


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Apr 24, Paul Grossman commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 15, KEVIN BLACK commented:

      We proposed that "the most likely cause for the" excess prevalence of depression in PD was "that both syndromes arise from similar causes, with either appearing first in a given individual. ... [O]ne may reasonably search for such shared causative factors among the known risk factors for" either disease, "such as genes (probably plural), aging, chemical toxins, or psychologically stressful life events" (Black KJ, Pandya A. Depression in Parkinson disease. Pp. 199-237 in Gilliam F, Kanner AM, Sheline YI: Depression and Brain Dysfunction. New York: Taylor & Francis, 2006, at pp. 216-217).

      Arabia and colleagues (2007) found that depressive and anxious disorders were much more likely in first-degree relatives of PD patients than of controls (doi: 10.1001/archpsyc.64.12.1385). One gene that may contribute to that finding is the serotonin transporter, discussed in the review cited above. Cagni et al here identify an additional gene that may also be a shared risk factor: the G/G genotype of the Val66Met polymorphism of the BDNF gene.

      Studies such as these may contribute useful information not only to the etiology of depression and anxiety but also to the etiology of Parkinson disease.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 04, Lydia Maniatis commented:

      Let's say we ask the proverbial “man in the street” the following question: Do you think chickens will be better at discriminating between the colors of tiled food containers if the tiles are many or large, or if they are few or small?

      I think that most, without too much thought, would answer the former. “More and bigger” of anything is generally more salient than “Fewer and smaller.” Would those who made correct guesses be licensed to claim that their pet theory about chicken vision had been corroborated? It should be obvious that predictable predictions do not constitute rigorous tests of any hypothesis. This is the type of hypothesis-testing Olsson et al (2017) engage in in this study.

      Furthermore, the hypothesis that the authors are supposed to be testing doesn’t consist of a straightforward, coherent, intelligible set of assumptions, but of a hodgepodge of uncorroborated assumptions and models spanning over fifty years. The “success” of the authors’ simple experiment implies corroboration of all of these subsidiary models and assumptions. Obviously, the experiments are being tasked with far too much heavy-lifting, and the conclusions that hinge on them are not credible.

      Here is a sampling of the assumptions and models that chickens’ greater sensitivity to “more and bigger” is presumed to corroborate:

      The main hypothesis: “Chickens use spatial summation to maintain color discrimination in low light intensities.”

      Supporting models/assumptions: “Color differences delta S in the unit of just-noticeable differences (JND) were calculated using the receptor noise limited (RNL) model (Vorobyev and Osorio, 1998) as….” I.e. the RNL model is assumed to be valid.

      “Spectral sensitivities, R, were derived by fitting a template (Govardovskii, Fyhrquist, Reuter, Kuzmin, & Donner, 2000)…” I.e. the model template is assumed to be valid.

      “We assumed the same standard deviation of noise for all cone types such that the Weber fraction for the L channel was 0.06, based on the color discrimination thresholds measured in a previous study (Olsson et al 2015)” Note that, according to a Pubmed Commons comment by the lead author, Olsson et al (2015) “figure out the equivalent Weber fraction which describe these limits. Whether that is actually caused by noise or not we can not say from our experiment..." Yet the main hypothesis of Olsson et al (2017) uncritically assumes a “noisy” process.

      “The same simple model (SM) of calculating the absolute quantum catch as in a previous study (Olsson et al 2015)” Again, the authors cannot say that the results of that previous study were “actually caused by noise or not”, i.e that the “simple model” is actually modeling what they are claiming.

      “We modeled increasing levels of spatial summation, assuming that absolute quantum catches….are summed linearly…” Should we even ask why?

      “From ….cone densities in the dorso-temporal retina of chickens (Kram et al, 2010) we estimated the number of cones that viewed a single color tile of a stimulus.” This last assumption obviously doesn’t consider the fact of chicken eye movements, which would make the number of cones involved much larger. The idea of simple pooling is also problematic from the point of view that chickens do exhibit constancy under varying illumination, so in the context of sunshine and shadow, pooling across an illumination boundary would arguably produce unreliable estimates that would undermine constancy.

      “We derived intensity thresholds by fitting a logistic psychometric function to the choice data of each experimental group of chickens and individual chickens using the Matlab toolbox Palamedes (Prins & Kingdom, 2009).” We assume that Prins and Kingdom’s hypothesized quantitative link between choice and thresholds, as well as all of those authors’ underlying assumptions, e.g. that signal detection theory is an appropriate model for vision, are valid.

      I would note, finally, that the authors’ current description of the findings of Olsson, Lind and Kelber (2015) differs significantly from those implied by the title of the latter publication (“Bird color vision: behavioral thresholds reveal receptor noise”). As mentioned above, the lead author of that study has acknowledged that the title goes further than was licensed by experiment. Here, the Olsson et al (2015) study is described as having shown that “the intensity threshold for color discrimination in chickens depends on the chromatic contrast between the stimuli and on stimulus brightness.” This result, i.e. that “higher contrast, brighter = more salient”, is, if anything, even more predictable than the prediction of Olsson et al (2017).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 01, Trevor Bell commented:

      The source code of the pipeline described in this paper is now available online at the following address:

      https://github.com/DrTrevorBell/CuratedGenBank


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 25, Trevor Bell commented:

      Multiple sequence alignments containing only full-length sequences, for each genotype, are now also available for download from the alignments page. These sequences are a subset of the alignments already available.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jan 26, Trevor Bell commented:

      A comma character has inadvertently been added to the query provided under the "GenBank download" section during post-production of this article. The comma in the number "99,999" should be deleted, so that the number reads "99999".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jan 26, Trevor Bell commented:

      The authors would like to clarify a point made in the abstract. Although multiple sequence alignments of HBV are publicly available, as far as we are aware, ours is the first to include both full length and subgenomic fragments of HBV in the same alignment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 08, Raha Pazoki commented:

      Supplementary Table 6 of this article is supposed to provide GERA results for previously identified blood pressure loci. This table is, however, exactly the same as Supplementary Table 3 and does not include the SNPs flagged as "P" (previously identified) in Supplementary Table 4.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 14, David Mage commented:

      The Sudden Infant Death Syndrome (SIDS) and other causes of human respiratory failure appear to be X-linked (PMID 27188625). OMIM shows human TASK-1 to be autosomal, which would imply that, if TASK-1 is involved in SIDS, an interaction with an X-linkage might also be considered.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 14, Pavel Nesmiyanov commented:

      What strain was used, and what was the exact procedure for the microbiological assessment? The disc diffusion method is not the most accurate method. The authors should have used standard strains and the dilution method for MIC determination.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 20, Alessandro Rasman commented:

      Bernhard HJ. Juurlink MD, Dario Alpini MD, Giampiero Avruscio MD, Miro Denislic MD, Attilio Guazzoni MD, Laura Mendozzi MD, Raffaello Pagani MD, Adnan Siddiqui MD, Pierluigi Stimamiglio MD, Pierfrancesco Veroux MD and Pietro Maria Bavera MD

      We read with interest the consensus statement titled "The central vein sign and its clinical evaluation for the diagnosis of multiple sclerosis: a consensus statement from the North American Imaging in Multiple Sclerosis Cooperative" (1). We wonder why the authors have not cited in the notes any paper by Dr. Paolo Zamboni of the University of Ferrara, Italy; in particular, his earliest paper, titled "The big idea: iron-dependent inflammation in venous disease and proposed parallels in multiple sclerosis", published in November 2006 (2). In that paper he readily showed the histology of the CVS and explicitly reported the possibility of imaging it by means of MR as well.

      References:

      1) Sati, Pascal, et al. "The central vein sign and its clinical evaluation for the diagnosis of multiple sclerosis: a consensus statement from the North American Imaging in Multiple Sclerosis Cooperative." Nature Reviews Neurology (2016).

      2) Zamboni, Paolo. "The big idea: iron-dependent inflammation in venous disease and proposed parallels in multiple sclerosis." Journal of the Royal Society of Medicine 99.11 (2006): 589-593.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 13, Kiyoshi Ezawa commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 24, Kiyoshi Ezawa commented:

      [Alert by the author]

      The web-page (or XML) version of this Erratum (at the BMC Bioinformatics web-site: https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-016-1282-4) contained errors during two periods: one from its initial publication on Nov 10th, 2016 till around Nov 18th, 2016, and the other from around March 3rd, 2017 till the release of the latest version on April 7th, 2017. The latest version does not contain these errors.

      In consequence, the Erratum at the PubMed Central web-page (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105235/) also contained the same errors, since its initial release till around April 13th, 2017, when it was updated.

      (It should be noted that, since its initial publication, the PDF version of the Erratum has been nearly error-free, containing at most one relatively harmless error in Eq.(R4.6) before the correction.)

      Therefore, if you visited the Erratum only around any of the aforementioned periods but did not download its PDF, I strongly urge you to re-visit the Erratum, and hopefully to download the PDF.

      And I would be grateful if you could inform your colleagues of the release of the new version of this Erratum, so that these errors in its previous versions will be eradicated eventually.

      Incidentally, most of the errors discussed in this Erratum apply only to the web-page (or XML) version of the original article (PMID: 27638547; DOI: 10.1186/s12859-016-1105-7).

      There are only two exceptions: one is the error in Eq.(R5.4), and the other is the update on the reference information (reference [2] in the Erratum, or PMID: 27677569); they apply to both the XML and the PDF.

      (In the proofreading process, I was allowed to proofread only the PDF but not the web-page version. Therefore, I had no control over those errors in the web-page that were not in the PDF.)

      Kiyoshi Ezawa, Ph.D, the author of the Erratum (PMID: 27832741).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 24, Lydia Maniatis commented:

      As should be evident from the corresponding PubPeer discussion, I have to disagree with all of Guy’s claims. I think logic and evidence are on my side. Not only is there not “considerable evidence that humans acquire knowledge of how depth cues work from experience,” the evidence and logic are all on the opposite side. The naïve use of the term “object” and reference to how objects change “as we approach or touch them and learn about how they change in size, aerial perspective, linear perspective etc” indicates a failure to understand the fundamental problem of perception, i.e. how the proximal stimulus, which does not consist of objects of any size or shape, is metamorphosed into a shaped 3D percept. Perceiving 3D shape presupposes depth perception. As Gilchrist (2003) points out in a critical Nature review of Purves and Lotto’s book, “Why we see what we do:” “Infant habituation studies show that size and shape are perceived correctly on the first day of life. The baby regards a small nearby object and a distant larger object as different even when they make the same retinal image. But newborns can recognize an object placed at two different distances as the same object, despite the different retinal size, or the same rectangle placed at different slants. How can the newborn learn something so sophisticated in a matter of hours?” Gilchrist also addresses the logical problems of the “learning” perspective (caps mine): “In the 18th C, George Berkeley argued that touch educates vision. However, this merely displaces the problem. Tactile stimulation is even more ambiguous than retinal stimulation, and the weight of the evidence show that vision educates touch, not vice versa. Purves and Lotto speak of what the ambiguous stimulus “turned out to signify in past experience.” But exactly how did it turn out thus? WHAT IS THE SOURCE OF FEEDBACK THAT RESOLVES THE AMBIGUITY?” “Learning” proponents consistently fail to acknowledge, let alone attempt to answer, this last question. As I point out on PubPeer, if touch helps us to learn to see, then the wide use of touchscreens by children should presumably compromise 3D perception, since the tactile feedback is presumably indicative of flatness at all times.

      The confusion is evident in Guy’s reference to the “trusted cue – occlusion implying depth.” Again, there is a naïve use of the term “occlusion.” Obviously, the image observers see on the screen isn’t occluded, it’s just a pattern of colored points. With respect to both the screen and the retinal stimulation, there is no occlusion because there are no objects. Occlusion is a perceptual, not a physical, fact as far as the proximal stimulus is concerned. So the cue itself is an inferred construct intimately linked to object perception. So we’re forced to ask, what cued the cue…and so on, ad infinitum. Ultimately, we’re forced to go back to brass tacks, to tackle figure ground organization via general principles of organization. Even if we accepted that there could (somehow) be unambiguous cues, we would still have the problem that each retinal image is unique, so we would need a different cue - and thus an infinite number of cues- to handle all of the ambiguity. Which makes the use of “cues” redundant.

      So the notion that “one might not need much to allow a self-organising system of cues to rapidly ‘boot-strap’ itself into a robust system in which myriad sensory cues are integrated optimally” is clearly untenable if we try to actually work through what it implies. The concept of ‘cue recruitment’ throws up a lot of concerns only because even its provisional acceptance requires that we accept unacceptable assumptions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 21, Guy M Wallis commented:

      Lydia raises an important question. Surely we can't learn everything! We need something to hang our perceptual hat on to get the ball rolling. After all, in the experiments described in our paper, Ben and I relied on the presence of a trusted cue - occlusion implying depth - to allow the observer to harness the new cue which we imposed - arm movement. But where did knowledge of the trusted depth cue come from? Did we have to learn that too? Well, there is considerable empirical evidence that humans do acquire knowledge of how depth cues work from experience. We observe objects as we approach or touch them and learn about how they change in size, aerial perspective, linear perspective etc. But it also seems likely that some cues have been acquired in phylogenetic time due to their reliability and utility. The apparently in-built assumption that lighting in a scene comes from above and the left may be an example of this. In the end though, one might not need much to allow a self-organising system of cues to rapidly 'boot-strap' itself into a robust system in which myriad sensory cues are integrated optimally.

      Lydia and my co-author, Benjamin Backus, have been engaged in a lively and informative exchange on PubPeer which I recommend to those interested in this debate. The concept of cue recruitment throws up a lot of concerns and queries.

      https://pubpeer.com/publications/2622B45C885243AFCB5C604CB0638B


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 12, Lydia Maniatis commented:

      It occurs to me that the "cue recruitment theory" is susceptible to the problem of infinite regress. If percepts are by their nature ambiguous, and require "cues" to disambiguate, then aren't the cues, which are also perceptual articles, also in need of disambiguation? Don't we need to cue the cue? And so on....


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Nov 12, Lydia Maniatis commented:

      Two (probably) final points regarding the authors' conclusion quoted below:

      "In conclusion, the present study presents evidence that a voluntary action (arm movement) can influence visual perceptual processes. We suggest that this relationship may develop through an already functional link between motor behavior and the visual system (Cisek & Kalaska, 2010; Fagioli et al., 2007; Wohlschläger & Wohlschläger, 1998). Through the associative learning paradigm used here, this relationship can be modified to enable arbitrary relationships between limb movement and perceived motion of a perceptually ambiguous stimulus. "

      First, most stimuli are not perceptually ambiguous (i.e. they are not bistable or multistable), so the relevance of this putative finding is questionable in practice, and would require much more development in theory.

      Second, the claim that it is possible to construct "arbitrary relationships between limb movement and perceived motion of a perceptually ambiguous stimulus" is a radical behaviorist claim, of a type that has consistently been falsified both logically and empirically.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Nov 12, Lydia Maniatis commented:

      The degree of uncertainty incorporated into this study in the form of confounds means that the claims at the front end carry no weight.

      Essentially, the authors apparently are employing a forced choice paradigm. (They don’t refer to it as such, but rather as a “dichotomous perceptual decision.”) Their stimulus is bistable, unstable, briefly presented, and temporally decaying, and the response relies on memory as it occurs after the image has left the screen. Their training procedure likely produces expectations that may bias outcomes.

      The highly unstable nature of the Necker cube, even in static form, is self-evident. I don’t know if this is mitigated by motion, but I doubt it. I would expect the uncertainty to be even greater when the square face of the figure isn’t in a vertical/horizontal orientation.

      In their discussion, the authors address the possibility of response bias in their study: “Firestone and Scholl (in press)…include a section on action-based influences on perception. The authors argue that much of this literature is polluted with response bias and that suitable control studies have undermined many of the earlier findings.”

      Wallis and Backus counter this possibility with a straw man. “If participants were trying to respond in a manner they thought we might expect, there is no reason why they would not have done so in the passive conditions…”

      However, the question isn’t only whether participants were trying to meet investigator expectations, but whether they had developed expectations of their own based on the “training” procedures.

      In the so-called passive training condition, an arrow, either congruent or incongruent, was associated with the rotation of a disambiguated Necker cube. However, in this condition observers have no incentive to pay attention to this peripheral form and its connection with the area of interest. In the active condition, in contrast, it is necessary to attend to the arrows and to act on them. This obligation to act on the arrows while observing the figure ensures that attention is paid to their connection with cube rotation.

      The conceptual and methodological uncertainty is compounded by the fact that the authors themselves can’t explain (though they presumably expected it) the failure of the arrows alone to produce a perceptual bias. As with the previous issue, they dispense too casually with the problem:

      “So why did the participants in the passive conditions show little or no cue recruitment? As mentioned in the Introduction, Orhan et al. (2010) have argued that there must be a mechanism for determining which cues can be combined to create a meaningful interpretation of the sensory array. In the context of this study it would appear that passive viewing of the rotating object and the contingent arrows, does not satisfy this mechanism's requirements. This is perhaps because the arrows are regarded as extrinsic to the stimulus and hence unfavored for recruitment (Jain et al., 2014).”

      This is as weak and evasive an argument as could possibly be made in a scientific paper. The authors ask why the arrow “cue” itself didn’t have an effect. They answer that it didn’t have an effect because it doesn’t satisfy the unknown requirements of an unknown mechanism that is nevertheless presumed to exist. So if a putative cue “works,” it proves the mechanism exists, and if a putative cue doesn’t work, it shows the mechanism is uninterested in it. Thus the cue theory is a classic case of an unfalsifiable, untestable proposition. It is merely assumed, and the data are uncritically interpreted in that light.

      The bottom line here is that the failure of the arrows to act as “cues” contradicts the investigators’ predictions, and they don’t know why. This raises the question of why they planned an experiment containing what, at the outset, they must have considered a serious confound. The failure of the arrows to cue the percept constitutes a serious challenge to their underlying assumptions, and needs to be addressed.

      The authors’ further rationalization, that “This is perhaps because the arrows are regarded as extrinsic to the stimulus and hence unfavored for recruitment,” begs the question: regarded as extrinsic by whom? The conscious observer? This leads, again, to the possibility of response bias.

      But Wallis and Backus have their own response bias to the suggestion of response bias in their subjects: “We regard cue-recruitment as a cognitively impenetrable, bottom-up process….”

      Thinking this is one thing, corroborating it another. The use of perceptually unstable stimuli producing temporally limited effects reliant on memory and forced choice responses isn’t a method designed to guard against potential response bias, but rather one that offers fertile ground for it. The convenience of dichotomous responses for data analysis can’t offset these disadvantages.

      Short version: The possibility of response bias has in no way been excluded.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 01, Amy Donahue commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 01, Amy Donahue commented:

      I used the information in this article for a presentation on assisted reproduction technology, and it's very helpful. I appreciate that it's open access, too! But as a medical librarian, I couldn't help but note that the search strategy could be improved. Even excluding articles published after 8/1/16, the following strategy retrieves more articles than the authors reported finding:

      ((("Oocytes"[Majr] OR oocyte*[tiab])) AND ("Cryopreservation"[Majr] OR freez*[tiab] OR "Vitrification"[Majr] OR vitrif*[tiab])) AND ("Pregnancy"[Mesh] OR pregnan*[tiab] OR survival[tiab] OR birth[tiab] OR "quality embryo"[tiab] OR "quality embryos"[tiab] OR "embryo quality"[tiab] OR "viable embryo"[tiab] OR "viable embryos"[tiab])
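      As a hedged sketch (not part of the original comment), one way to sanity-check the hit counts for such a strategy programmatically is NCBI's E-utilities `esearch` endpoint. The code below only builds the request URL and does not contact NCBI; the date cap mirrors the 8/1/16 cutoff mentioned above, and the `human_only` toggle and parameter choices are illustrative assumptions.

```python
# Sketch: expressing the commenter's PubMed strategy as an NCBI
# E-utilities esearch request. The URL is constructed but not sent.
from urllib.parse import urlencode

QUERY = (
    '((("Oocytes"[Majr] OR oocyte*[tiab])) AND '
    '("Cryopreservation"[Majr] OR freez*[tiab] OR "Vitrification"[Majr] OR vitrif*[tiab])) AND '
    '("Pregnancy"[Mesh] OR pregnan*[tiab] OR survival[tiab] OR birth[tiab] OR '
    '"quality embryo"[tiab] OR "quality embryos"[tiab] OR "embryo quality"[tiab] OR '
    '"viable embryo"[tiab] OR "viable embryos"[tiab])'
)

def esearch_url(term, human_only=False, max_date="2016/08/01"):
    """Build (but do not send) an E-utilities esearch URL for the strategy."""
    if human_only:
        # The Humans MeSH filter only matches MeSH-indexed (Medline) records.
        term += ' AND "Humans"[Mesh]'
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date
        "maxdate": max_date,
        "retmax": 0,          # request only the hit count, no record IDs
        "retmode": "json",
    }
    return "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urlencode(params)

print(esearch_url(QUERY))
```

      Fetching the resulting URL returns a JSON payload whose `esearchresult.count` field gives the number of matching records, which makes a strategy like this easy to re-run and compare across date cutoffs.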

      Not limiting to humans yields roughly 1,500; limiting to humans (which is helpful, but restricts results to only Medline-indexed articles and therefore excludes some human studies that just aren't indexed as such; maybe that was the authors' intent) brings it down to almost 1,000.

      Additionally, searches should probably be done in other databases, not just PubMed (and note that Medline is the subset of articles in PubMed that are indexed with MeSH terms), for the sake of being comprehensive, although that certainly adds time and effort to screening and deduplicating the results (but librarians can also help with that). There should be librarians at some of the authors' institutions, if not all - getting some search help next time would make your work even stronger.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

      Could you possibly provide the coordinates analysed otherwise it is difficult to interpret the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 18, Jack Gilbert commented:

      We have been following some of the comments about this paper and accept that the wording of parts of our paper could be interpreted in ways we did not intend and that do not reflect the work performed. We want to make it clear that for this paper we made predictions about nitrate, etc. based on analysis of rRNA amplicon sequences and matching them to known genomes. We did not directly measure the genes involved in nitrate metabolism (nitrate reductase, nitrite reductase, etc.), or know for certain that the strains present in the samples have such functions (although they are widely distributed in the matching phylogenetic groups). Some of the wording (e.g., in the title and abstract) did not come across as we intended, and could be interpreted as implying that we made direct measurements. We want to note that we believe the predictions we made are useful, but acknowledge that they have limitations. We also want to stress that to test these hypotheses and advance clinical practice, we would need to perform extensive validation through intervention studies in carefully controlled clinical populations, which is obviously considered beyond the scope of the Observation format. However, we are currently performing ongoing studies that we believe will advance this research, including some work based on public comments made about the lack of validation of the specific claims of the paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 28, Lydia Maniatis commented:

      Cherniawsky and Mullen’s (2016) article lies well within the perimeter of a school of thought that, despite its obvious intellectual and empirical absurdity, is popular within the vision science community.

      The school persists, and is relentlessly prolific, because it has insulated itself from the possibility of falsification, mainly by ignoring both fact and reason.

      Explanatory schemes are concocted with respect to a narrow set of stimuli and conditions. Data generated under this narrow set of conditions are always interpreted in terms of the narrow scheme of assumptions, via permissive post hoc modeling. When, as here, results contradict expectation, additional ad hoc assumptions are made with reference to the specific, narrow type of stimuli used, which then, of course, may subsequently be corroborated, more or less, using those same stimuli or mild variants thereof.

      The process continues ad infinitum via the same ad hoc route. This is the reason that, as Kingdom (2011) has noted, the study of lightness, brightness and transparency (and I would add, vision science in general) is divided into camps “each with its own preferred stimuli and methodology” and characterized by “ideological divides.“ The term “ideological” is highly appropriate here, as it indicates a refusal to face facts and arguments that contradict or challenge the preferred view. It is obviously antithetical to the scientific attitude and, unfortunately, very typical of virtually all of contemporary vision science.

      The title of this paper, “The whole is other than the sum...”, indicates that a prediction of “summation” failed even under the gentle treatment it received. The authors don’t quite know what to make of their results, but a conclusion of “other” is enough by today’s standards.

      The ideological camp to which this article belongs is a scandal on many counts. First, it adopts the view that there are certain figures whose retinal projections trigger visual processes such that the ultimate percept directly reflects local “low-level” processes. More specifically, it reflects “low-level” processes as they are currently (and crudely) understood. The figures supposed to have this quality are those for which the appropriate “low-level” story du jour has been concocted.

      The success of the method is well-described by Graham (1997, discussed in PubPeer), who notes that countless experiments were "consistent" with the behavior of V1 neurons at a time when V1 had only begun to be explored and when researchers were unaware not only of the complexities of V1 but also of the many hierarchically higher-level areas and processes that intervene between retina and percept. This amazing success is rationalized (if we may use the term loosely) by Graham, who with magical thinking reckons that under certain conditions the brain becomes “transparent” down to the initial processing levels. Teller (1984) had earlier (to no apparent effect) described such a view as “the nothing mucks it up proviso,” and pointed out the obvious logical problems.

      Cherniawsky and Mullen premise their article on this view with their opening sentence: “Two-dimensional orthogonal gratings (plaids) are a useful tool in the study of complex form perception, as early spatial vision is well described by responses to simple one-dimensional sinusoidal gratings…” In fact, the “one-dimensional sinusoidal gratings” in question typically produce 3D percepts of light and shadow, and the authors’ plaids in Figure 1 appear curved and partially obscured by a foggy overlay. So as illogical as the transparent brain hypothesis is to begin with, the stimuli supposed to tap into lower level processes aren’t even consistent with a strictly “low-level” interpretive process.

      The uninitiated might wonder why the authors use the term “spatial vision.” It is because they have uncritically adopted the partner of the transparent brain hypothesis, the view that the early visual processes perform a Fourier analysis on the retinal projection. It is not clear that this is at all realistic at the physiological level, but there is also no apparent functional reason for such a challenging process, as it would in no way further the achievement of the goal of organizing the incoming light into figures and grounds as the basis for further interpretation leading to a (usually) veridical representation of the environment. The Fourier conceit is, of course, maintained by employing sinusoidal gratings while ignoring their actual perceptual effects. That is, the sinusoidal gratings and combinations thereof are said to tap into the low-level frequency channels, which then determine contrast via summation, inhibition, etc, (whatever post hoc interpretation the data of any particular experiment seem to require). These contrast impressions, though experienced in the context of, e.g. impressions of partially-shadowed tubes, are never considered with respect to these complex 3D percepts. Lacking necessary interpretive assumptions, investigators are reduced to describing their results in terms of “other,” precisely described, but theoretically unintelligible and tangled effects.

      The idea that “summation” of local neural activities can explain perception is contradicted by a million cases, and counting, including the much-loved sinusoidal gratings and their shape-from-shading effects. But ideology is stronger and, apparently, good enough for vision science today.

      Finally, the notion of “detectors” is a staple of this school and the authors’ discussion; for a discussion of why this concept is untenable, please see Teller (1984).

      P.S. As usual, I’ll ask why it’s OK for an author to be one of a small number of subjects, the rest of whom are described as “naïve.” If it’s important to be naïve, then…

      Also, why use forced choices, and thus inject more uncertainty than necessary into the results? It’s theoretically possible that observers never see what you think they’re seeing…Obviously, if you’re committed to interpreting results a certain way, it’s convenient to force the data to look a certain way…

      Also, no explanation is given for methodological choices, e.g. the (very brief) presentation times.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, Christopher Miller commented:

      The clonEvol package has changed slightly, requiring an update to the "run.R" example script contained in Additional File 2. The updated script can be found here: https://gist.github.com/chrisamiller/f4eae5618ec2985e105d05e3032ae674


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 11, Martin Mayer commented:

      Cut the fat: Putting the risks of hypertriglyceridemia into context

      A brief response to “Nonfasting mild-to-moderate hypertriglyceridemia and risk of acute pancreatitis”

      In their article, Pedersen and colleagues present findings from their prospective cohort study on hypertriglyceridemia and its association with both acute pancreatitis and myocardial infarction.<sup>1</sup> With a median follow-up of 6.7 years (interquartile range, 4.0 to 9.4 years) among 116,550 "white individuals of Danish descent from the Danish general population"<sup>1(p1835)</sup> selected randomly from two similar prospective studies (the Copenhagen City Heart Study and the Copenhagen General Population Study), this is a sizable study with respectable follow-up, even if generalizability of the findings might be at least somewhat limited. They rightly note “there is no consensus on a clear threshold above which triglycerides are associated with acute pancreatitis,”<sup>1(p1835)</sup> and others have highlighted important issues with the evidence base.<sup>2</sup> Pedersen and colleagues also cite a review<sup>3</sup> on triglycerides and cardiovascular disease, but here too the evidence is not entirely clear; the review only concludes evidence “is increasing”<sup>3(p633)</sup> and recommends high-intensity statin therapy. The review also considers the future potential of add-on triglyceride-lowering therapy for those already on a statin, pointing to two ongoing trials of ω-3 fatty acids (REDUCE-IT and STRENGTH). However, the currently-available evidence - particularly that with patient-relevant outcomes - does not support such a strategy for ω-3 fatty acids or other agents that can substantially lower triglycerides (such as fibrates and niacin).<sup>2,4,5</sup>

      Even if their study reflects an underlying truth, Pedersen and colleagues unfortunately demonstrate a relative inattention to absolute risks and the implications thereof. They devote a small amount of text to absolute risks and report absolute numbers in the figures, but they repeatedly state their findings show “high risk” for acute pancreatitis, a perspective seemingly driven by the magnitude of the hazard ratios (HRs). In their concluding statements, they even remark: “Mild-to-moderate hypertriglyceridemia at 177 mg/dL (2 mmol/L) and above is associated with high risk of acute pancreatitis in the general population, with HRs higher than for myocardial infarction.”<sup>1(p1841)</sup>

      When caring for individual patients, relative metrics such as HRs are most useful when appropriately applied to corresponding baseline absolute risks. Conversely, disproportionate focus on relative metrics or failure to adequately contextualize relative metrics with corresponding absolute risks is considerably less informative and can contribute to a distorted sense of reality. Even if one accepts research findings as being likely reflective of an underlying truth, one must always carefully appraise absolute risks to gain a finer appreciation of the quantitative implications of the research findings. This practice is still useful even if one finds weaknesses in methodology, as one can simply consider the estimates increasingly uncertain in a manner qualitatively proportional to the weaknesses in methodology. A tool customized for this study is available here (TinyURL: http://tinyurl.com/JAMAIMhypertrigcalctool).

      According to their own data, comparing the lowest triglyceride level group (<89 mg/dL or <1 mmol/L) to the highest triglyceride level group (≥443 mg/dL or ≥5 mmol/L), one finds an absolute risk difference (ARD) for acute pancreatitis of 0.93% over 10 years if using the absolute numbers reported in Figure 1 to estimate absolute risks, and an ARD of 2.05% over 10 years (95% confidence interval [CI], 0.73% to 4.99%) if using the absolute risk in the lowest triglyceride level group and the multivariable-adjusted HR estimate for the highest triglyceride level group (HR 8.7; 95% CI, 3.7 to 20). Repeating this for myocardial infarction, one finds an ARD of 5.6% over 10 years or an ARD of 5.08% (95% CI, 3.00% to 7.73%) over 10 years. This demonstrates at least one reason why it is important to put relative metrics into context: Although the HRs for acute pancreatitis may be “higher than for myocardial infarction”,<sup>1(p1841)</sup> the absolute risks and absolute risk differences are higher for myocardial infarction. Additionally, it is more informative to provide risk estimates in absolute terms than in relative terms. Indeed, as aforementioned, absolute risks give better insight into what research might mean for a patient if one accepts the findings as being reflective of an underlying truth. Unfortunately, The New York Times' coverage of the study exacerbates the issue, with the only attempt to contextualize the relative metrics being a quote from one of the study’s authors. (Such mishandling of evidence is not uncommon in the media, but that is not the focus of this commentary. Including The New York Times’ coverage is not meant to single them out as uniquely bad or good in this regard; it simply serves as an example.) 
It is ultimately a disservice to say the risk of pancreatitis was 770% higher in patients with triglycerides ≥443 mg/dL (≥5 mmol/L) compared to patients with triglycerides <89 mg/dL (<1 mmol/L) without contextualizing such a metric with absolute risks. More technically, and as discussed in the tool, HRs are also not quite the same as relative risks.
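As a hedged illustration of the arithmetic behind these figures (a sketch, not the commenter's actual tool), the conversion from a hazard ratio plus a baseline cumulative risk to an absolute risk difference follows from the proportional-hazards relation S1 = S0 ** HR. The baseline 10-year risk used below (0.27%) is an illustrative assumption back-derived so that the paper's HR estimates reproduce roughly the ARDs quoted above; it is not a value stated in the paper.

```python
# Sketch: converting a hazard ratio and a baseline cumulative risk into
# an absolute risk difference (ARD), assuming proportional hazards.
# HR values (8.7; 95% CI 3.7-20) are the paper's estimates for acute
# pancreatitis, highest vs lowest triglyceride group; the baseline risk
# is an illustrative assumption.

def risk_from_hr(baseline_risk, hr):
    """Cumulative risk in the exposed group under proportional hazards:
    survival S1 = S0 ** HR, hence risk1 = 1 - (1 - risk0) ** HR."""
    return 1 - (1 - baseline_risk) ** hr

def ard(baseline_risk, hr):
    """Absolute risk difference: exposed-group risk minus baseline risk."""
    return risk_from_hr(baseline_risk, hr) - baseline_risk

baseline = 0.0027                 # assumed 10-year baseline risk (0.27%)
for hr in (3.7, 8.7, 20.0):       # CI lower bound, point estimate, CI upper bound
    print(f"HR {hr:4.1f}: ARD = {ard(baseline, hr):.2%} over 10 years")
```

With this assumed baseline, the HR point estimate of 8.7 yields an ARD of about 2.05% over 10 years, and the CI bounds yield roughly 0.73% and 4.99%, matching the figures quoted in the comment; the point is that the same HR implies very different absolute consequences depending on the baseline risk.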

      Lastly, while management was not a focus of Pedersen and colleagues’ article, sensible lifestyle changes should be emphasized wherever poor lifestyle factors exist. As for interventions beyond lifestyle changes, a medication that can reduce cardiovascular risk – such as a statin – might be instituted after shared decision-making concerning a person’s cardiovascular risk estimate; importantly, however, a person’s cardiovascular risk estimate is not dependent on triglyceride levels, and pharmaceutical intervention targeted at lowering triglycerides per se is not clearly supported by currently-available evidence examining cardiovascular, pancreatic, or other patient-relevant outcomes.

      References

      (1) Pedersen SB, Langsted A, Nordestgaard BG. Nonfasting mild-to-moderate hypertriglyceridemia and risk of acute pancreatitis. JAMA Intern Med. 2016 Dec 1;176(12):1834-1842. doi: 10.1001/jamainternmed.2016.6875.

      (2) Lederle FA, Bloomfield HE. Drug treatment of asymptomatic hypertriglyceridemia to prevent pancreatitis: where is the evidence? Ann Intern Med. 2012 Nov 6;157(9):662-664. doi: 10.7326/0003-4819-157-9-201211060-00011.

      (3) Nordestgaard BG, Varbo A. Triglycerides and cardiovascular disease. Lancet. 2014;384(9943):626-635.

      (4) Rizos EC, Ntzani EE, Bika E, Kostapanos MS, Elisaf MS. Association between omega-3 fatty acid supplementation and risk of major cardiovascular disease events: a systematic review and meta-analysis. JAMA. 2012 Sep 12;308(10):1024-1033. doi: 10.1001/2012.jama.11374.

      (5) Keene D, Price C, Shun-Shin MJ, Francis DP. Effect on cardiovascular risk of high density lipoprotein targeted drug treatments niacin, fibrates, and CETP inhibitors: meta-analysis of randomised controlled trials including 117,411 patients. BMJ. 2014 Jul 18;349:g4379. doi: 10.1136/bmj.g4379. (Note about this reference: Although the title implies focus on HDL as a therapeutic target, this study nevertheless provides meaningful insight into whether there is any cardiovascular or mortality benefit from adding either niacin or a fibrate to statin therapy, and both these agents can substantially lower triglycerides.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 26, Su-Fang Lin commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 26, Su-Fang Lin commented:

      Now the link in Oncotarget is back.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jan 18, Stephen Maher commented:

      Comments for this article on PubPeer (https://pubpeer.com/publications/27816970) suggest that some of the statistical data and figures in this article are duplicated and therefore unsubstantiated. As of late November 2016, the article can no longer be found on the Oncotarget website. No retraction has been reported.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 08, Peter Hajek commented:

      One problem with interpretation is that in these studies, very few if any people actually stopped smoking. The provision of stop smoking treatments (as opposed to actually stopping smoking) does not seem to undermine concurrent substance use treatments, but the question of whether actually stopping smoking helps with or undermines concurrent efforts to stop using other drugs, and whether sequential treatments yield better results than doing this concurrently, have not been well answered so far.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 23, Harri Hemila commented:

      Vitamin E may increase and decrease all-cause mortality in subgroups of males

      Galli F, 2017 claimed that supplementation with vitamin E may have no effect on all-cause mortality even at supra-nutritional doses. They did not consider the strong evidence from the ATBC Study, which indicates that the effects of vitamin E on all-cause mortality appear to be heterogeneous.

      The ATBC Study investigated 29 133 male smokers, and Hemilä H, 2009 showed that the effect of vitamin E on all-cause mortality was simultaneously modified by age and dietary vitamin C intake with P = 0.0005 for the test of heterogeneity. Vitamin E had no influence on mortality in males who had a low dietary intake of vitamin C. However, among males who had a high intake of vitamin C, supplementation with vitamin E increased mortality by 19% among those who were 50-62 years at the baseline of the trial, whereas it decreased mortality by 41% among those who were 66 years and older. The decrease in mortality amongst the oldest participants suggested that vitamin E might increase life span, and indeed, men that were administered vitamin E lived for half a year longer at the upper end of the follow-up age range, see Hemilä H, 2011.

      Galli F, 2017 further claimed that vitamin E intake is unlikely to affect mortality regardless of dose, and they referred to the Bayesian meta-analysis on vitamin E by Berry D, 2009. However, Galli et al. overlooked that the Bayesian meta-analysis was based on between-trial analysis, whereas the evidence for heterogeneity in the vitamin E effect in the ATBC Study was based on individual participant level analysis, a much more reliable approach, Hemilä H, 2009. Between-study analysis may suffer from ecological fallacy. Galli et al. also disregarded other detailed criticisms of the Berry et al. meta-analysis on vitamin E by Greenland S, 2009 and Miller ER 3rd, 2009.

      Galli F, 2017 concluded that since indiscriminate vitamin E supplementation is not supported by the available evidence, future efforts are necessary to establish biomarkers and selection criteria to predict who is likely to benefit from vitamin E supplementation. However, the ATBC Study analyses indicate that age and responses to lifestyle questionnaires may characterize people who benefit from vitamin E administration. It therefore seems illogical that the variables already identified in the ATBC Study analyses were not considered in the review by Galli et al.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 18, Mohamed Fahmy commented:

      not commonly recognised, but significantly important anomalies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 09, Yang K Xiang commented:

      In their commentary, Dr. Santulli has noted the differences in glucose tolerance tests between the two studies (1, 2). In our study, 5-6 week old WT and β2AR -/- mice were fed a high fat diet (60% fat) for 6 months; both strains developed diabetes and glucose intolerance when compared to animals of the same genotypes fed a control chow (10% fat). We did not observe differences in glucose homeostasis between WT and β2AR -/- mice fed the control chow. This contrasts with the Santulli study, in which the β2AR -/- strain developed diabetes and glucose intolerance at 6 months of age when fed a chow diet. Several factors may contribute to the differences in diabetic phenotypes observed.

      1. In the Santulli study, β2AR -/- mice were backcrossed to the C57Bl6/N strain. In our study, the β2AR -/- is backcrossed into the C57Bl6/J strain.
      2. Our study used a defined control chow with 10% fat whose composition, with the exception of fat and sucrose content, matched that of the high-fat diet. The possibility therefore exists that the “chow” diet in the Santulli study, whose composition is not described in detail, could contribute in part to some of the metabolic changes observed. In addition, our study does not exclude the possibility that β2AR -/- mice may have metabolic issues relative to WT after feeding with the defined control chow.

      The primary focus of our work was to understand the cardiac response to obesity and long-term hyperinsulinemia. In this regard, the β2AR -/- mice on a high fat diet developed hyperglycemia and hyperinsulinemia, which therefore enabled us to determine whether the absence of β2ARs in the heart could modulate the cardiac maladaptation that develops in wild type animals. We reported fasted insulin concentrations to demonstrate the existence of hyperinsulinemia in response to high fat feeding. However, we did observe, in data not presented in the manuscript, that insulin concentrations in β2AR -/- mice after intraperitoneal administration of glucose were statistically lower than those in high fat fed WT, suggesting reduced insulin release from islets, consistent with the conclusions of the Santulli study. The Muzzin study mentioned in the commentary used an animal with complete absence of all three β adrenergic receptors, and as such caution is advised in comparing that model to mice with selective loss of the β2AR.

      A study published by Jiang and colleagues was also discussed, which reported that β2AR -/- mice display a diabetic retinopathy phenotype. Although the authors of this study did not provide background information on glucose and insulin levels, they suggest that β adrenergic signaling is essential for maintaining retinal Müller cell viability. Thus the observed retinopathy might not be related to diabetes per se. Taken together, these data suggest that β2AR signaling is associated with glucose metabolism and with complications that may be modulated in a tissue-specific manner in diabetes. Ultimately, transgenic approaches with tissue-specific deletion of β2AR may offer more insight into the underlying mechanisms of these tissue-specific phenotypes.

      Reference

      1) Inhibiting Insulin-Mediated β2-Adrenergic Receptor Activation Prevents Diabetes-Associated Cardiac Dysfunction. Circulation. 2017;135:73-88.

      2) Age-related impairment in insulin release: the essential role of β2-adrenergic receptor. Diabetes. 2012;61:692-701.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 28, Gaetano Santulli commented:

      In the present article, Wang, Liu, Fu and colleagues report that β2-adrenergic receptor (β2AR) plays a key role in hyperinsulinemia-induced cardiac dysfunction (1). Overall, the data are very interesting and compelling. However, we noticed that in this paper β2AR-/- mice do not exhibit glucose intolerance; in fact, they seem to have a response to intraperitoneal glucose that is even better than wild-type mice (though a statistical analysis comparing these two groups is not provided). Although surprisingly not reported by the Authors, mounting evidence indicates that the deletion of β2AR has detrimental effects on glucose metabolism (2-4). Indeed, we have demonstrated that β2AR-/- mice display impaired insulin release and significant glucose intolerance (2). Muzzin and colleagues found that the ablation of βARs mechanistically underlies impaired glucose homeostasis (3). Other groups have confirmed these results, also showing that β2AR-/- mice develop diabetic-related microvascular complications (i.e. retinopathy)(4). Nonetheless, the Authors fail to at least discuss previous relevant literature describing the alterations in glucose metabolism observed in β2AR-/- mice and do not accurately circumstantiate their findings. Furthermore, the Authors do not provide any measurement (not in vivo nor in isolated islets) of insulin levels following glucose challenge, showing just baseline serum levels. We believe that for the sake of scientific appropriateness the Readers of Circulation will appreciate a clarification, in particular regarding the fact that pertinent literature in the field has been overlooked.

      A formal e-Letter has been published by Circulation.

      Competing Interests: None.

      References 1) Inhibiting Insulin-Mediated β2-Adrenergic Receptor Activation Prevents Diabetes-Associated Cardiac Dysfunction. Circulation. 2017;135:73-88.

      2) Age-related impairment in insulin release: the essential role of β2-adrenergic receptor. Diabetes. 2012;61:692-701.

      3) The lack of beta-adrenoceptors results in enhanced insulin sensitivity in mice exhibiting increased adiposity and glucose intolerance. Diabetes. 2005;54:3490-5.

      4) Beta2-adrenergic receptor knockout mice exhibit a diabetic retinopathy phenotype. PLoS One. 2013;8:e70555.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 09, Serina Stretton commented:

      Prasad and Rajkumar’s editorial on conflicts of interest (COI), published in the Blood Cancer Journal [1], argues that financial COI in academic oncology steer treatment decisions away from best patient care. We share Prasad and Rajkumar’s concerns about the potential negative influence of COI, irrespective of its source, but disagree that banning industry-funded professional medical writers (PMWs) is a reasonable or practical solution.

      This year (2017), three leading professional organizations, the International Society for Medical Publication Professionals (ISMPP), the American Medical Writers Association (AMWA), and the European Medical Writers Association (EMWA), released a joint position statement reaffirming PMWs’ obligations to be transparent about their contributions and sources of funding, and to clearly delineate the respective roles of authors and PMWs [2]. Prasad and Rajkumar claim that publications written with assistance from industry-funded PMWs may not reflect authors’ views and that authors may feel unable to challenge inappropriate sponsor influence. These statements undermine the clear responsibilities and accountability that authors should uphold when publishing clinical data [3,4]. For example, as required by the International Committee of Medical Journal Editors [3] and upheld by the AMWA-EMWA-ISMPP joint position statement, authors must provide early intellectual input to a publication, be involved in drafting it, approve the final version for publication, and agree to be accountable for all aspects of the work. It is the latter two requirements that counter Prasad and Rajkumar’s premise that authors have little opportunity to control the content of the manuscript. In contrast, PMWs, who often do not meet authorship criteria, assist authors to disclose findings from clinical studies in a timely, ethical, and accurate manner; ensure that authors and sponsors are aware of their obligations; and document author contributions to the development of a publication [2,4]. To contribute value in these roles, PMWs regularly receive mandatory training on ethical publication practices from their employers and industry funders [5-7].

      Of concern, Prasad and Rajkumar present misleading data to support their arguments for banning industry-funded PMWs. First, they state that “writing assistance” is common, citing prevalence data from a survey of honorary and ghost authorship by Wislar et al [8]. Ghost authorship occurs when an individual who merits authorship is omitted from the author byline; honorary authorship, when an individual who does not merit authorship is included. Both are quite distinct from medical writers who do not meet authorship criteria and who either (i) declare their involvement in the acknowledgements (PMWs) [3] or (ii) keep their involvement hidden (ghostwriters) [9]. Indeed, the prevalence of ghostwriting in the Wislar et al survey was 0.2% of articles, far lower than the 21% cited by Prasad and Rajkumar for ghost authorship. Second, Prasad and Rajkumar state that ghost authorship in industry-funded trials is far worse, citing a study by Gøtzsche et al [10]. However, Gøtzsche et al used a nonstandard definition of ghost authorship, extending it to cover undeclared contributions (either as authors or in the acknowledgments) from individuals who wrote the trial protocol or who conducted the statistical analyses.

      As acknowledged by Prasad and Rajkumar, there are multiple benefits to engaging a PMW in terms of time and readability [1]. More importantly, publications involving PMWs are of higher quality: they have a shorter time to acceptance [11], are more compliant with international reporting guidelines [12,13], contain significantly fewer non-prespecified outcomes [14], and have a lower rate of retraction for misconduct [15] than publications without PMWs or with writers not funded by industry. As such, it is entirely unreasonable to exclude PMWs as an option on the basis of their funding. We strongly advocate that PMWs be selected on the basis of a proven track record and a commitment to ethical and transparent publication practices. In addition, we strongly recommend that authors become familiar with reporting guidelines and be aware of, and fully comply with, their obligations and roles as authors.

      The Global Alliance of Publication Professionals (www.gappteam.org)

      Serina Stretton, ProScribe – Envision Pharma Group, Sydney, NSW, Australia; Jackie Marchington, Caudex – McCann Complete Medical Ltd, Oxford, UK; Cindy W. Hamilton, Virginia Commonwealth University School of Pharmacy, Richmond, and Hamilton House Medical and Scientific Communications, Virginia Beach, VA, USA; Art Gertel, MedSciCom, LLC, Lebanon, NJ, USA

      GAPP is a group of independent individuals who volunteer their time and receive no funding (other than website hosting fees from the International Society for Medical Publication Professionals). All GAPP members have held, or currently hold, leadership positions at associations representing professional medical writers (eg, AMWA, EMWA, DIA, ISMPP, ARCS) but do not speak on behalf of those organisations. GAPP members have provided, or do provide, professional medical writing services to not-for-profit and for-profit clients.

      REFERENCES

      [1] Prasad V, Rajkumar SV. Blood Cancer J 2016;6:e489
ateral
      [2] www.ismpp.org/assets/docs/Inititives/amwa-emwa-ismpp joint position statement on the role of professional medical writersjanuary 2017.pdf 2017 [accessed 08.06.17]

      [3] www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html; 2016 [accessed 08.06.17]

      [4] Battisti WP et al. International Society for Medical Publication Professionals. Good Publication Practice for communicating company-sponsored medical research: GPP3. Ann Intern Med. 2015;163(6):461-4

      [5] www.ismpp.org/ismpp-code-of-ethics [accessed 08.06.17]

      [6] www.amwa.org/page/Codeof_Ethics [accessed 08.06.17]

      [7] Wager E et al. BMJ Open. 2014;4(4):e004780

      [8] Wislar JS et al. BMJ 2011;343:d6128.4-7

      [9] Stretton S. BMJ Open 2014;4(7):e004777

      [10] Gøtzsche PC et al. PLoS Med 2007;4:0047-52

      [11] Bailey M. AMWA J 2011;26(4):147-152

      [12] Gattrell W et al. BMJ Open. 2016;6:e010329

      [13] Jacobs A. Write Stuff 2010;19(3):196-200

      [14] Gattrell W et al. ISMPP EU Annual Meeting 2017

      [15] Woolley KL et al. Curr Med Res Opin 2011;27(6):1175-82


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.