  1. Jul 2018
    1. On 2016 Sep 13, Josh Bittker commented:

      @Christopher Southan - Thanks, we're going to try to resolve the PubChem links by adding aliases in PubChem for the short names used in the paper (BRD####); the compounds are registered in PubChem with their full IDs but not the shortened IDs.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 10, Christopher Southan commented:

      This post resolves the PubChem links https://cdsouthan.blogspot.se/2016/09/structures-from-latest-antimalarial.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 09, Cristiane N Soares commented:

      The question regarding CHIKV testing raised by Thomas Jeanne is really relevant in this case. In fact, we were concerned about co-infections, and after the paper's acceptance we performed IgM and IgG CHIKV tests in serum and CSF. All samples were negative for CHIKV.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 08, Thomas Jeanne commented:

      In their case report, Soares et al. do not mention testing for chikungunya virus (CHIKV), which has considerable overlap with Zika virus (ZIKV) in both epidemiologic characteristics and clinical presentation. Brazil experienced a large increase in chikungunya cases in early 2016 (Collucci C, 2016), around the time of this patient's illness, and recent case series in Ecuador (Zambrano H, 2016) and Brazil (Sardi SI, 2016) have demonstrated coinfection with ZIKV and CHIKV. Moreover, a recently published study of Nicaraguan patients found that 27% of those who tested positive for any of ZIKV, CHIKV, or DENV (dengue virus) with multiplex RT-PCR also tested positive for one or both of the other viruses (Waggoner JJ, 2016). CHIKV itself has previously been linked to encephalitis, including fatal encephalitis (Gérardin P, 2016), and some have speculated that adverse interactions could result from coinfection with two or more arboviruses (Singer M, 2017). Coinfection with chikungunya as a contributing factor in this case cannot be ruled out without appropriate testing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 08, Clive Bates commented:

      This paper does not actually address the question posed in the title: Should electronic cigarette use be covered by clean indoor air laws?

      The question it does address is more like "how much discomfort do vapers say they experience when the same law is applied to them as to smokers?".

      The authors do not show that this is the foundation on which a justification for applying clean indoor air laws to vaping should rest. There is no basis to assume that it is.

      Addressing the question set in the title is ultimately a matter of property rights. The appropriate question is: "what is the rationale for the state to intervene using the law to override the preferred vaping policy of the owners or managers of properties?".

      The authors cannot simply assume that everyone, at all times, shares their preference for 'clean indoor air'. Vapers may prefer a convivial vape and a venue owner may be pleased to offer them a space to do it. Unless this is creating some material hazard to other people, why should the law stop this mutually agreed arrangement? Simply arguing that it doesn't cause that much discomfort among that many vapers isn't a rationale. If the law stops them doing what they would like to do, there is a welfare or utility loss to consider.

      It is likely that many places will not allow vaping - sometimes for good reasons. But consider the following cases:

      1. A bar wants to have a vape night every Thursday

      2. A bar wants to dedicate one room where vaping is permitted

      3. In a town with three bars, one decides it will cater for vapers, two decide they will not allow vaping

      4. A bar manager decides on balance that his vaping customers prefer it and his other clientele are not that bothered – he’d do better allowing it

      5. A hotel wants to allow vaping in its rooms and in its bar, but not in its restaurant, spa, and lobby

      6. An office workplace decides to allow vaping breaks near the coffee machine to save on wasted smoking break time and encourage smokers to quit by switching

      7. A care home wants to allow an indoor vaping area to encourage its smoking elderly residents to switch during the coming winter instead of going out in the cold

      8. A vape shop is trying to help people switch from smoking and wants to demo products in the shop…

      9. A shelter for homeless people allows it to make its clients welcome

      10. A day centre for refugees allows it instead of smoking

      These are all reasonable accommodations of vaping for good reasons. But the law is much too crude to manage millions of micro-judgments of this nature. It can only justify overruling them with a blanket prohibition if it is preventing harm to bystanders or workers who are exposed to hazardous agents at a level likely to cause a material risk.

      A much better role for the state is to advise owners and managers on how to make these decisions in an informed way. This is what Public Health England has done [1], and that, in my view, is a more enlightened and liberal philosophy. Further, I suspect it is more likely to help convert more smokers to vaping, giving a public health dividend too.

      [1] Public Health England, Use of e-cigarettes in public places and workplaces, July 2016.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 07, Anita Bandrowski commented:

      This paper is the basis of part of an example Authentication of Key Biological Resources document that we and the UCSD library have put together.

      Please find it here: http://doi.org/10.6075/J0RB72JC


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 20, Huang Nan commented:

      This paper reports a very peculiar observation: that 18% and 42% of rural and urban Beijing women, respectively, with an average age in the early 60s, are sunbed users (Table 1). This is highly counter-intuitive, as fewer than 1% of any Chinese population would be expected to use sunbeds. Despite this observation, the authors claimed in the text that: "Only a few individuals had a sunburn history or used sunbeds."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

      Could you possibly provide the coordinates analysed? Otherwise it is difficult to interpret the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 26, Atanas G. Atanasov commented:

      Excellent work, many thanks to the authors for the great overview.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 03, Thomas Hünig commented:

      Thank you for bringing this up in the Commons. Yes, it is unfortunate that somewhere in the production process the "µ" symbols were converted to "m", which sometimes happens when fonts are changed. Fortunately, the mistake becomes obvious by its sheer magnitude (1000x off), and the corresponding paper in Eur. J. Immunol. with the original, correct data is referenced. My apologies that we did not spot this mistake before publication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Dec 31, Mark Milton commented:

      The article includes several unfortunate typos in a vital piece of information. The article states that "These encouraging findings led to the design of a new healthy volunteer trial, which started at 0.1 mg/kg, i.e. a 1000- fold lower dose than the one applied in the ill-fated trial of 2006 (Clinical trials identifier: NCT01885624). After careful monitoring of each patient, the dose was gradually increased to a maximum of 7 mg/kg, still well below what had been applied in the first HV trial." The units listed for the dose are mg/kg but should have been µg/kg. The starting dose was 0.1 µg/kg and the highest dose evaluated was 7 µg/kg (Tabares et al 2014). The dose administered in the TGN1412 FIH study was 100 µg/kg. Although this typo does not detract from the overall conclusions from the study, it is sad to see that this error was not noticed by the authors or reviewers given the near tragic circumstances of the FIH clinical trial for TGN1412.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 02, Pierre Fontana commented:

      Tailored antiplatelet therapy in patients at high cardiovascular risk: don’t prematurely throw the baby out with the bathwater

      The clinical impact of a strategy based on a platelet function assay to adjust antiplatelet therapy has been intensively investigated. However, large prospective interventional studies failed to demonstrate the benefit of personalizing antiplatelet therapy. One of the concerns was that the interventions were delayed and only partially effective, contrary to earlier smaller trials that employed incremental clopidogrel loading doses prior to PCI (Tantry US, 2013; Bonello L, 2009).

      Cayla and co-workers should be commended for their efforts in the ANTARCTIC trial. Although the trial is pragmatic, important limitations may account for the neutral effect of the intervention, including an antiplatelet adjustment performed between D14 and D28 after randomization. Early personalization is also supported by data from the TRITON-TIMI 38 trial, where half of the ischemic events (4.7/9.9%) in the prasugrel-treated arm occurred within the first 3 days after randomization. Stratifying the analysis on the timing of events before and after D28 may provide some insight, though it would be underpowered for a definitive conclusion.

      The prognostic value of the platelet function assay and cut-off used would also be of great interest in the control group. If the assay and cut-off values were not prognostic in this elderly population, personalization would be bound to fail.

      Finally, the results of ANTARCTIC restricted to the subgroup of patients with hypertension (73% of patients), thus accumulating 3 of the risk factors related to the clinical relevance of high platelet reactivity (Reny JL, 2016), would also be very interesting. Further research should not only evaluate other pharmacological approaches but also early personalization and measurement of platelet reactivity in the control group.

      J.-L. Reny, MD, PhD and P. Fontana, MD, PhD


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 16, Leon van Kempen commented:

      RNA in FFPE tissue is commonly degraded. NanoString profiling will still yield reliable results when RNA is degraded to 200 nt fragments.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 03, Adrian Barnett commented:

      I should have cited this paper, which shows how random funding can de-centralise funding away from ingrained ideas and hence increase overall efficiency: Shahar Avin, "Funding Science by Lottery", volume 1, European Studies in Philosophy of Science.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 19, G L Francis commented:

      I have read your publication in PNAS titled ‘Metabolic features of chronic fatigue syndrome’ with much interest. This significant contribution has at last provided a definitive publication of a realistic, evidence-based diagnostic test based on a panel of blood metabolites, which could provide a more robust diagnostic base for future rational treatment studies in ‘CFS’.

      Although there are many more complex and critical questions to be asked, I will keep mine simple. I took particular note of the authors' comment, “When MTHFD2L is turned down in differentiated cells, less mitochondrial formate is produced and one-carbon units are directed through Methylene-THF toward increased SAM synthesis and increased DNA methylation” (from the legend of Figure S6, Mitochondrial Control of Redox, NADPH, Nucleotide, and Methylation Pathways). I recently read the paper 'Association of Vitamin B12 Deficiency with Homozygosity of the TT MTHFR C677T Genotype, Hyperhomocysteinemia, and Endothelial Cell Dysfunction' (Shiran A et al. IMAJ 2015; 17: 288–292) and wondered whether the gene variations in the individuals described in that publication could be over-represented in your subjects; mind you, the size of your study population probably answers my own question, and no doubt many mechanisms that lead to a perturbation of this pathway exist, of which this could conceivably be just one, even if a minor contributor. Moreover, there does seem to be a difference between the two papers in terms of how these particular perturbations affect the incidence of cardiovascular disease and outcomes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 19, james brophy commented:

      The authors conclude prophylactic ICD implantation "was not associated with a significantly lower long-term rate of death from any cause than was usual clinical care”. Given that the observed hazard ratio for death was 0.87 (95% confidence interval [CI], 0.68 to 1.12; P=0.28), this conclusion is quite simply wrong, unless a potential 32% reduction in death is considered clinically unimportant. The aphorism "Absence of proof is not proof of absence" is worth recalling.
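
      A minimal Python sketch of the arithmetic behind the "potential 32% reduction" figure, using only the hazard ratio and 95% CI quoted above:

      ```python
      # Hazard ratio and 95% CI for death, as quoted in the comment.
      hr, ci_low, ci_high = 0.87, 0.68, 1.12

      point_reduction = (1 - hr) * 100   # 13% lower hazard (point estimate)
      best_case = (1 - ci_low) * 100     # the CI is compatible with up to a 32% reduction ...
      worst_case = (ci_high - 1) * 100   # ... and with up to a 12% increase in mortality

      print(f"Point estimate: {point_reduction:.0f}% relative reduction")
      print(f"95% CI compatible with up to {best_case:.0f}% reduction "
            f"or up to {worst_case:.0f}% increase")
      ```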


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 29, David Keller commented:

      If the investigators were not blinded, how was bias excluded from their scoring of the balance, stability and TUG tests?

      The placebo, nocebo, Pygmalion and other expectation effects can be substantial in Parkinson's disease. Unblinded investigators can transmit cues regarding their own expectations to patients, thereby affecting the patients' response to therapy. In addition, unblinded investigators are affected by bias in their subjective evaluations of patient response to therapy, and even in their measurement of patient performance on relatively objective tests. What was done to minimize these sources of bias from contaminating the results of this single-blinded study? Were the clinicians who scored the BBS, TUG and LOS tests aware of the randomization status of each patient they tested?

      In addition, I question whether the results reported for the LOS test in the Results section of the abstract are statistically significant. The patients assigned to exergaming scored 78.9 +/- 7.65%, which corresponds to a confidence interval of [71.25, 86.55]%, while the control patients' scores of 70.6 +/- 9.37% correspond to a confidence interval of [61.23, 79.97]%. These two confidence intervals overlap from 71.25% to 79.97%, a range which includes the average score of the exergaming patients.
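
      A minimal Python sketch of that interval arithmetic, treating the reported +/- values as interval half-widths exactly as the comment does:

      ```python
      # Interval arithmetic on the LOS figures quoted above; each reported
      # "mean +/- value" is treated as an interval half-width.
      def interval(mean, half_width):
          return (mean - half_width, mean + half_width)

      exergaming = interval(78.9, 7.65)   # -> (71.25, 86.55)
      control = interval(70.6, 9.37)      # -> (61.23, 79.97)

      overlap = (max(exergaming[0], control[0]), min(exergaming[1], control[1]))
      print(overlap)                           # (71.25, 79.97)
      print(overlap[0] <= 78.9 <= overlap[1])  # True: the exergaming mean lies inside the overlap
      ```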

      If a follow-up study is planned, the blinding of investigators, especially those who score the patients' test performances, would reduce bias and expectation effects. Increasing the number of subjects assigned to active treatment and to control treatment would improve the statistical significance of the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 13, Craig Brown commented:

      I would welcome more research in this matter, also. As a clinician, for many years I have found this to yield significant visual and cognitive functional improvement in many patients with mild to moderate cerebral atrophy and vascular dementia, particularly those with MTHFR polymorphisms.

      Over several years, the benefit has been reliable. Patients who quit it often come back and restart it, because they notice a loss of the functional improvements, which return upon resumption.

      Because folic acid does not cross the blood-brain barrier, it can build up and block active l-methylfolate transport into the brain and retina, while also impairing DHFR (dihydrofolate reductase), which impairs methylation and BH4 recycling, essential for serotonin, dopamine, and norepinephrine production, which in turn are essential for mood, attention, sleep and memory.

      I find it works optimally when combined with folic acid avoidance (to reduce BBB blockade), riboflavin (to enhance MTHFR methylation), and vitamin D (to enhance folate absorption). It has a long record of safety, with few serious side effects, for a condition that has few effective treatments. All this is to say, more research is surely a good thing here, but excessive skepticism deprives patients of a chance to try a low-risk, frequently helpful, but not magic option.

      It is classified as a Medical Food, in part, because our FDA does not encourage formulating drug products that have multiple active ingredients, particularly ingredients that occur naturally in foods and in human metabolism. Medical Foods were implemented by the FDA specifically for higher concentrations of natural, food-based substances important to address genetic metabolic impairments, in this case impairment of DHFR and MTHFR, which may contribute to cerebral ischemia, atrophy, and dementia.

      A final thought: double-blind experiments are the ideal gold standard; however, the elderly and the demented are considered high-risk populations and have such strong protections in place at the NIH that placebo studies are difficult to justify to, or get approval from, any Institutional Review Board (IRB) when previous benefit has been shown, because that amounts to knowingly withholding treatment. We may have to content ourselves with non-placebo trials.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 09, Gayle Scott commented:

      This company-sponsored study (Nestle Health Science - Pamlab, Inc) was neither randomized nor blinded. Tests of cognitive function and QOL were not included. The study will certainly be cited in advertising for CerefolinNAC.

      It is important to note CerefolinNAC is a "medical food," an FDA designation for products designed to meet the nutritional needs of patients whose needs cannot be met through foods, such as those with inborn errors of metabolism, e.g., PKU (patients must avoid phenylalanine) and maple syrup urine disease (patients must avoid branched-chain amino acids).

      Another medical food for Alzheimer's disease is Axona, caprylic triglyceride, a medium chain triglyceride found in coconut oil. Unlike dietary supplements, medical foods can be labeled for medical conditions such as Alzheimer’s disease. Dietary supplements must be labeled for so-called “structure and function claims” and cannot make claims to treat or prevent disease. For example, ginkgo may be labeled “supports memory function,” but not “for treatment of dementia.” A drug or medical food could be labeled “for treatment of dementia associated with Alzheimer’s disease.”

      Think of medical foods as hybrids of prescription drugs and dietary supplements, more closely resembling dietary supplements in terms of regulation. Packaging for medical foods is similar to prescription products, with package inserts, NDC numbers, and usually “Rx only” on the labels. But like dietary supplements, medical foods are not required to be evaluated for safety or efficacy, and the FDA does not require approval before marketing. "Caution: Federal law prohibits dispensing without prescription" is not required on product labeling. The FDA specifies only that these products are for use with medical supervision; however, a medical food manufacturer may market a product to be dispensed only on physician request.

      Message to patients regarding CerefolinNAC: much more research is needed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 16, Michael Stillman commented:

      I read this article with great interest. And with significant concern.

      A sweeping review by the Department of Health and Human Services' Office for Human Research Protections of Dr. Harkema's spinal cord injury research program (https://www.hhs.gov/ohrp/compliance-and-reporting/determination-letters/2016/october-17-2016-university-louisville/index.html accessed May 16, 2017) documented numerous instances of sloppy methodologies and potential frank scientific misconduct. This report included evidence of: a) missing source documents, leading to an inability to verify whether protocols had been followed or captured data was valid; b) multiple instances of unapproved deviations from experimental protocols; c) participants having been injured while participating in translational research experiments; d) a failure to document and adjudicate adverse events and to report unanticipated problems to the IRB; and e) subjects being misled about the cost of participating in research protocols. Dr. Harkema's conduct was so concerning that the National Institute of Disability, Independent Living, and Rehabilitation Research (NIDILRR) prematurely halted and defunded one of her major research projects (http://kycir.org/2016/07/11/top-u-of-l-researcher-loses-federal-funding-for-paralysis-study/ accessed May 26, 2017).

      I approached the editors of "Journal of Neurotrauma" with reports from both Health and Human Services (above) and University of Louisville's IRB and asked them three questions: a) were they adequately concerned with this study's integrity to consider a retraction; b) were they adequately concerned to consider publishing a "concerned" letter to the editor questioning the study's integrity and reliability; and c) were they interested in reviewing adverse events associated with the experiments. Their response: "no," "no," and "no."

      I call on the editorial board of "Journal of Neurotrauma" to carefully inspect all documents and data sets related to this work. I would further expect them to review all adverse event reports, and to demand evidence that they've been reviewed and adjudicated by an independent medical monitor or study physician. Short of this, the work remains specious.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 07, Donald Forsdyke commented:

      PATERNITY OF INNATE IMMUNITY?

      The accolades cast on scientists we admire include that of paternity. Few will dispute that Gregor Mendel was the father of the science we now call genetics. At the outset, this paper (1) hails Metchnikoff (1845-1916) as “the father of innate immunity.” However, an obituary of US immunologist Charles Janeway (1943-2003) hails him similarly (2). Can a science have two fathers? Well, yes. But not if an alternative of Mendelian stature is around. While paternity is not directly ascribed, a review of the pioneering studies on innate immunity of Almroth Wright (1861-1947) will perhaps suggest to some that he is more deserving of that accolade (3).

      1. Gordon S (2016) Phagocytosis: the legacy of Metchnikoff. Cell 166:1065-1068. Gordon S, 2016

      2. Oransky I (2003) Charles A Janeway Jr. Lancet 362:409.

      3. Forsdyke DR (2016) Almroth Wright, opsonins, innate immunity and the lectin pathway of complement activation: a historical perspective. Microbes & Infection 18:450-459. Forsdyke DR, 2016


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 10, Shawn McGlynn commented:

      With the trees from this phylogeny paper now available, we can resolve the discussion between myself and the authors (below) and conclude that there is no evidence that nitrogenase was present in the LUCA as the authors claimed in their publication.

      In their data set, the authors identified two clusters of proteins which they refer to as NifD: clusters 3058 and 3899. NifD binds the metal cluster of nitrogenase and is required for catalysis. In the authors' protein groups, cluster 3058 comprises 30 sequences and cluster 3899 comprises 10 sequences. Inspection of these sequences reveals that neither cluster contains any actual NifD sequences. This can be said with certainty since biochemistry has demonstrated that the metal-cofactor-coordinating residues Cys275 and His442 (using the numbering scheme from the Azotobacter vinelandii NifD sequence) are absolutely required for activity. NONE of the 40 sequences analyzed by the authors contain these residues. Therefore, NONE of these sequences can have the capability to bind the nitrogenase metal cluster, and it follows that none of them would have the capacity to reduce dinitrogen. The authors have not analyzed a single nitrogenase sequence in their analysis and are therefore disqualified from making claims about the evolution of the protein; the claims made in this paper about nitrogenase cannot be substantiated with the data which have been analyzed. The sequences contained in the authors' "NifD" protein clusters are closely related homologs involved in nitrogenase cofactor biosynthesis and fall within a large family of related proteins (which includes real NifD proteins, but also proteins involved in bacteriochlorophyll and Ni porphyrin F430 biosynthesis). While the authors' analyzed proteins are more closely related to nitrogen metabolism than to F430 or bacteriochlorophyll biosynthesis, they are not nitrogenase, but are nitrogenase homologs that complete assembly reactions.
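
      The residue check described above is straightforward to reproduce from a multiple sequence alignment. A minimal Python sketch, assuming a hypothetical aligned FASTA file (nifD_clusters_aligned.fasta) that contains the cluster sequences together with the A. vinelandii NifD reference under the hypothetical id Avin_NifD:

      ```python
      # Map A. vinelandii NifD positions 275 (Cys) and 442 (His) onto alignment
      # columns, then check whether each aligned sequence conserves those residues.

      def read_fasta(path):
          """Parse an aligned FASTA file into {id: aligned_sequence}."""
          seqs, name = {}, None
          with open(path) as fh:
              for line in fh:
                  line = line.strip()
                  if line.startswith(">"):
                      name = line[1:].split()[0]
                      seqs[name] = []
                  elif name:
                      seqs[name].append(line)
          return {k: "".join(v).upper() for k, v in seqs.items()}

      def ref_pos_to_column(ref_seq, pos):
          """Convert a 1-based ungapped reference position to a 0-based alignment column."""
          count = 0
          for col, aa in enumerate(ref_seq):
              if aa != "-":
                  count += 1
                  if count == pos:
                      return col
          raise ValueError("position lies beyond the reference length")

      aln = read_fasta("nifD_clusters_aligned.fasta")   # hypothetical input file
      ref = aln["Avin_NifD"]                            # assumed reference sequence id

      cys_col = ref_pos_to_column(ref, 275)             # Cys275, A. vinelandii numbering
      his_col = ref_pos_to_column(ref, 442)             # His442, A. vinelandii numbering

      for name, seq in aln.items():
          has_both = seq[cys_col] == "C" and seq[his_col] == "H"
          print(f"{name}\tpos275={seq[cys_col]}\tpos442={seq[his_col]}\tcandidate NifD: {has_both}")
      ```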

      Beyond not having looked at any sequences that would be capable of catalyzing nitrogen reduction, the presentation of two "NifD" clusters highlights important problems with the methods used in this paper, which affect the entire analysis and conclusions. First, two clusters were formed for one homologous group, which should not have occurred if the goal was to investigate ancestry. Second, by selecting small clusters from whole trees, the authors were able to prune the full tree until they recovered small subtrees which show monophyly of archaea and bacteria. However, it was incorrect to ignore the entire tree of homologs and present only two small clusters from a large family. This is cherry picking to the extreme - in this case it is "nitrogenase" picking, but it is very likely that this problem of pruning until the desired result is obtained sullies many, if not all, of the protein families and conclusions in the paper; for example, the radical SAM tree was likely pruned in this same way, with the incorrect conclusion being reached (like nitrogenase, a full tree of radical SAM does not recover the archaea-bacteria split in protein phylogenies either). Until someone does a complete analysis with full trees, the claims of this paper will remain unproven and misleading, since they are based on selective sampling of information. It would seem that the authors have missed the full trees whilst being lost in mere branches of their phylogenetic forest of 286,514 protein clusters.

      In a forthcoming publication, I will discuss in detail the branching position of the NifD homologs identified by the authors, as well as the possible evolutionary trajectory of the whole protein family with respect to the evolution of life and the nitrogen cycle on this planet, including bona fide NifD proteins, on which I have already commented below in this PubMed Commons thread.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 20, Madeline C Weiss commented:

      The trees and also the alignments for all 355 proteins are available on our resources website:

      http://www.molevol.de/resources/index.html?id=007weiss/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Dec 25, Shawn McGlynn commented:

      Unfortunately, the points raised by Professor Martin do not address the problem I raised in my original comment, which I quote from below: "nitrogenase protein phylogeny does not recover the monophyly of the archaea and bacteria." As I wrote, the nitrogenase protein is an excellent example of a violation of the authors' criterion for judging a protein to be present in the LUCA, namely that "its tree should recover bacterial and archaeal monophyly" (quoted from Weiss et alia). Therefore it should not be included in this paper's conclusions.

      Let's be more specific about this and look at a phylogenetic tree of the nitrogenase D peptide (sometimes referred to as the alpha subunit). This peptide binds the catalytic metal-sulfur cluster and its phylogeny can be viewed on my google site https://sites.google.com/site/simplyshawn/home.

      I colored archaeal versions red and bacterial versions black. You can see that this tree does not recover the monophyly of the archaea and bacteria, and the protein therefore should not be included in the authors' set of LUCA proteins.

      Is what I display the result of a tree construction error? Probably not; this tree looks pretty much the same as every other tree published by various methods, so it seems to correctly reflect sequence evolution as we understand it today. The tree I made just has more sequences; it can be compared directly with Figure 1 in Leigh 2000, Figure 2 in Raymond et alia 2004, and Figure 2 in Boyd et alia 2011. Unfortunately, Weiss and others do not include any trees in their paper, so it is impossible to know what they are deriving their conclusions from, but it would be very difficult to imagine that they have constructed a tree different from all of these.

      Could it be that all these archaea obtained nitrogenase by horizontal gene transfer after the enzyme emerged in bacteria? Possibly, although this would imply that it was not in the LUCA as the authors claim.

      Could it be that the protein developed in methanogens and was then transferred into the bacterial domain? Yes, and Boyd and others suggested just this in their 2011 Geobiology paper. This would also mean that the protein was not in the LUCA.

      Could it be that the protein was present in the LUCA as Weiss and co-authors assert? Based on phylogenetic analysis, no.

      As Prof. Martin writes - there certainly is more debate to be had about nitrogenase age than was visible in my first comment. However, we can be sure that the protein does not recover the archaea-bacteria monophyly, and it should not have been included in the authors' paper.

      Prof. Martin might likely counter my arguments here by saying something about metal dependence and treating different sequences separately (for example Anf, Vnf, and MoFe types). However, let us remember that the sequences are all homologous. Metal binding is one component of the nitrogenase phenotype, but all nitrogenases are homologous and descend from a common ancestor.

      Now that we can be sure that nitrogenase does not conform to the authors' second criterion for judging presence in the LUCA, let us examine whether the protein conforms to the first criterion: "the protein should be present in at least two higher taxa of bacteria and archaea". In fact, all archaeal nitrogenases found in the NCBI and JGI databases occur only within the methanogenic euryarchaeota. Unfortunately, Weiss and coauthors do not define what "higher taxa" means to them in their article, but it should be questioned whether having a gene represented by members of a single phylum actually constitutes being present within "two higher taxa". Archaea are significantly more diverse than what is observed in the methanogenic euryarchaeota. Surely, if a protein was present in the LUCA, it would be a bit more widely distributed, and it would be easy to argue that the presence of nitrogenase in only one phylum provides evidence that it does not conform to the authors' criterion number one. Thus, the picture that emerges from a closer look at nitrogenase phylogeny and distribution is that the protein violates both of the authors' criteria for inclusion in the LUCA protein set.

      Let me summarize:

      1) Nitrogenase does not recover the bacterial and archaeal monophyly and therefore violates the authors' criterion number 2.

      2) Nitrogenase in archaea is only found within the methanogenic euryarchaeota and is not broadly distributed, and therefore also seems to violate the authors' criterion number 1.

      3) From a phylogenetic perspective, the nitrogenase protein should not be included as a candidate to be present in the LUCA.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Oct 12, William F Martin commented:

      There is an ongoing debate in the literature about the age of nitrogenase.

      In his comment, McGlynn favours published interpretations that molybdenum nitrogenase arose some time after the Great Oxidation Event 2.5 billion years ago (1). A different perspective on the issue is provided by Stüeken et al. (2), who found evidence for Mo-nitrogenase before 3.2 billion years ago. Our recent paper (3) traced nitrogenase to LUCA, but also suggested that methanogens are the ancestral forms of archaea, in line both with phylogenetic (4) and isotope (5) evidence for the antiquity of methanogens, and with a methanogen origin of nitrogenase (6).

      Clearly, there had to be a source of reduced nitrogen at life’s origin before the origin of nitrogenase or any other enzyme. Our data (3) are consistent with the view that life arose in hydrothermal vents and independent laboratory studies show that dinitrogen can be reduced to ammonium under simulated vent conditions (7,8). There is more to the debate about nitrogenase age, methanogen age, and early sources of fixed nitrogen than McGlynn’s comment would suggest.

      1. Boyd, E. S., Hamilton, T. L., and Peters, J. W. (2011). An alternative path for the evolution of biological nitrogen fixation. Front. Microbiol. 2:205. doi:10.3389/fmicb.2011.00205

      2. Stüeken EE, Buick R, Guy BM, Koehler MC. Isotopic evidence for biological nitrogen fixation by molybdenum-nitrogenase from 3.2 Gyr. Nature 520, 666–669 (2015)

      3. Weiss MC, Sousa FL, Mrnjavac N, Neukirchen S, Roettger M, Nelson-Sathi S, Martin WF: The physiology and habitat of the last universal common ancestor. Nat Microbiol (2016) 1(9):16116 doi:10.1038/nmicrobiol.2016.116

      4. Raymann, K., Brochier-Armanet, C. & Gribaldo, S. The two-domain tree of life is linked to a new root for the Archaea. Proc. Natl Acad. Sci. USA 112, 6670–6675 (2015).

      5. Ueno, Y., K. Yamada, N. Yoshida, S. Maruyama, and Y. Isozaki. 2006. Evidence from fluid inclusions for microbial methanogenesis in the early archaean era. Nature 440:516-519.

      6. Boyd, E. S., Anbar, A. D., Miller, S., Hamilton, T. L., Lavin, M., and Peters, J. W. (2011). A late methanogen origin for molybdenum-dependent nitrogenase. Geobiology 9, 221–232.

      7. Smirnov A, Hausner D, Laffers R, Strongin DR, Schoonen MAA. Abiotic ammonium formation in the presence of Ni-Fe metals and alloys and its implications for the Hadean nitrogen cycle. Geochemical Transactions 9:5 (2008) doi:10.1186/1467-4866-9-5

      8. Dörr M, Kassbohrer J, Grunert R, Kreisel G, Brand WA, Werner RA, Geilmann H, Apfel C, Robl C, Weigand W: A possible prebiotic formation of ammonia from dinitrogen on iron sulfide surfaces. Angew Chem Int Ed Engl 2003, 42(13):1540-1543.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Mar 09, Tanai Cardona commented:

      I agree with Shawn regarding the fact that "Nitrogenase does not recover the bacterial and archaeal monophyly and therefore violates the author's criterion number 2."

      I have a different explanation for why nitrogenase was recovered in LUCA. And this has to do with the tetrapyrrole biosynthesis enzymes related to nitrogenases that, in fact, do recover monophyly for Archaea and Bacteria. Namely, the enzyme involved in the synthesis of the Ni-tetrapyrrole cofactor, Cofactor F430, required for methanogenesis in archaea; and the enzymes involved in the synthesis of Mg-tetrapyrroles in photosynthetic bacteria. Still to this date, the subunits of the nitrogenase-like enzyme required for Cofactor F430 synthesis are annotated as nitrogenase subunits.

      So, what Weiss et al. interpreted as a nitrogenase in LUCA might actually include proteins of the tetrapyrrole biosynthesis enzymes.

      Bill, I think you should make all the trees for each one of the 355 proteins available online. That would be really useful for all of us interested in early evolution! Thank you.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2016 Oct 08, Shawn McGlynn commented:

      This paper uses a phylogenetic approach to "illuminate the biology of LUCA" and uses two criteria to assess if a given protein encoding gene was in the LUCA:

      "the protein should be present in at least two higher taxa of bacteria and archaea, respectively, and (2) its tree should recover bacterial and archaeal monophyly"

      The authors later conclude that "LUCA accessed nitrogen via nitrogenase"; however, the nitrogenase protein is an excellent example of a violation of the authors' criterion (2) above, and therefore cannot be included in the LUCA protein set based on the authors' own criterion.

      Upon phylogenetic analysis, the nitrogenase alpha subunit protein - which ligates the active site - branches into five clusters. One of these clusters is not well resolved, yet four of the five clusters contain both archaea and bacteria; therefore, a nitrogenase protein phylogeny does not recover the monophyly of the archaea and bacteria.

      Other claims in this paper may deserve scrutiny as well.

      Suggested reading below; if there are others to add, please feel free:

      Raymond, J., Siefert, J. L., Staples, C. R., and Blankenship, R. E. (2004). The natural history of nitrogen fixation. Mol. Biol. Evol. 21, 541–554

      Boyd, E. S., Anbar, A. D., Miller, S., Hamilton, T. L., Lavin, M., and Peters, J. W. (2011a). A late methanogen origin for molybdenum-dependent nitrogenase. Geobiology 9, 221–232.

      Boyd, E. S., Hamilton, T. L., and Peters, J. W. (2011b). An alternative path for the evolution of biological nitrogen fixation. Front. Microbiol. 2:205. doi: 10.3389/fmicb.2011.00205


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 19, DP Zhou commented:

      China's fragile health insurance system cannot serve the health system. Patients are forced to spend all their savings to buy medicines in cash, with no insurance coverage. For most families, one cancer patient means the bankruptcy of the whole family. In such despair, many patients choose to extort the doctors and the hospitals as a last option to recover some of the cost of medicines.

      The health insurance system in China is a sensitive issue. The state-provided insurance does not cover major illnesses. Private insurance is of poor quality and mostly abused by financial institutions for real estate investment and other speculation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 30, Peter Hajek commented:

      The length of exposure would be relevant if the dosing was comparable, but the damage to mouse lungs was caused by doses of nicotine many times above anything a human vaper could possibly get. It is the dose that makes the poison. Many chemicals produce damage at large enough doses, while lifetime exposure to small enough doses is innocuous.

      To justify the conclusions about toxicity of vaping, the toxic effect would need to be documented with realistic dosing, and then shown to actually apply to humans (who have much better nicotine tolerance than mice).

      I agree that mice studies with realistic dosing could be useful, though data on changes in lung function in human vapers would be much more informative; and I do appreciate that the warnings of risks in the paper were phrased with caution.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 30, Robert Foronjy commented:

      The study is NOT reassuring to e-cigarette consumers. On the contrary, it shows that nicotine exposure reproduced the lung structural and physiologic changes present in COPD. These changes occurred after only four months of exposure. Even adjusting for the differences in lifespans, this exposure in mice is much briefer than that of a lifelong e-cigarette consumer. I do agree, however, that carefully conducted studies are needed to determine whether there is a threshold effect of nicotine exposure.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Aug 29, Peter Hajek commented:

      Thank you for the explanation. The exposure, however, was not equivalent. Mice have much faster nicotine metabolism than humans, which means that nicotine exposure in mice must be many times higher than in humans to produce the same blood cotinine levels. See the reference below, which calculated that mice with comparable cotinine levels were exposed to an equivalent of at least 200 cigarettes per day. In addition to this, mice also have much lower tolerance to nicotine than humans, which means that their organs would be much more severely affected even if the levels were comparable.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Aug 29, Robert Foronjy commented:

      Mortality was not reported in the manuscript since no deaths occurred. The exposure was well tolerated by the mice and no abnormal behavior or physiologic stress was noted. At the time of euthanasia, all the internal organs were grossly normal on exam. Cotinine levels in the mice were provided in the study and they are similar to what has been documented in humans who vape electronic cigarettes. We agree that both the mice and human consumers are exposing their lungs to toxic concentrations of nicotine. This is one of the essential points that is expressed by the data presented in the manuscript.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Aug 26, Peter Hajek commented:

      The authors propose a hypothesis that deserves attention, but the study findings need to be interpreted with caution.

      The mice were severely overdosed with nicotine, up to the lethal levels for mice, and a huge amount above what any human vaper would get - see this comment on a previous such study:

      http://journals.plos.org/plosone/article/comment?id=info:doi/10.1371/annotation/5dfe1e98-3100-4102-a425-a647b9459456

      The report does not say how many mice were involved, whether any died during the experiment, or whether effects of nicotine poisoning were detected in other organ systems. This could perhaps be clarified.

      Regarding the relevance to human health, nicotine poisoning normally poses no risk to vapers or smokers, because if nicotine concentrations start to rise above their usual moderate levels, there is an advance warning in the form of nausea which makes people stop nicotine intake long before any dangerous levels can accrue. (Mice in these types of experiments do not have that option.)

      The study actually provides a reassurance for vapers, to the extent that mice outcomes have any relevance for humans, in that in the absence of nicotine overdose, chronic dosing with the standard ingredients of e-cigarette aerosol (PG and VG) had no adverse effects on mice lungs.

      Peter Hajek


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 04, Egon Willighagen commented:

      This paper raises a number of interesting points about contemporary research. The choice of word "selfies" is a bit misleading, IMHO, particularly because the article also discusses the internet.

      The problem of selfies stems partly from the liberal idea that research is a free market where researchers have to sell their research and compete for funding. Indeed, I was trained to do so by the generation of researchers above me, and I learned what role conferences (talks, posters) and publication lists (amount, where, etc.) have in this. Using the Internet is just an extension of this, and nothing special; this idea of selfies was introduced before the internet, not after.

      Unfortunately, the Internet is used more for these selfies (publication lists, CVs, announcements) than for actual research: exchange of research data is still very limited. That is indeed a shame and must change. But I guess it can only really change after the current way research is funded has changed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 13, Andrew R Kniss commented:

      There is a correction to this article that includes corrected yield data for haylage, and an updated overall estimate for the organic yield gap (updated figure is 67%, rather than the originally reported 80%). Correction is here: https://www.ncbi.nlm.nih.gov/pubmed/27824908

      A pdf of the article with corrections made in-line (in blue font) can be downloaded here: https://figshare.com/articles/journal_pone_0161673-CORRECTED_PDF/4234037


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 28, Jaime A. Teixeira da Silva commented:

      There are at least two extremely serious - and possibly purposefully misleading - errors in the terms used in this paper. Or perhaps, as I argue, they are not errors, but reflect a seismic shift in the "predatory" publishing ideology espoused by Jeffrey Beall.

      Beall refers to such "deceitful" publishers as "predatory" publishers. He even refers to his original paper incorrectly [1]. The original term that Beall coined in 2010 was "predatory" open-access scholarly publishers, referring specifically to open access (OA) publishers. His blog, named “scholarlyoa”, also reflects this exclusive focus on OA.

      His purposeful omission of the term OA in this paper published by J Korean Med Sci is reflected not only in the absence of the term from the entire title and text, and even from the original definition; it also reflects very lax editorial oversight in the review of this paper. For the past 6 years, Beall has focused exclusively on OA, and has indicated, on multiple occasions on his blog, that he does not consider traditional (i.e., print or non-OA) journals or publishers.

      Why then has Beall purposefully omitted the term OA?

      Why has there been an apparent seismic shift in this paper, and in Beall’s apparent position in 2016, in the definition of "predatory"? By purposefully (because it is inconceivable that such an omission by Beall, a widely praised scholar, could have been accidental) removing the OA limit, and allowing any journal or publisher to be considered "predatory", Beall is no longer excluding the large publishers. Such publishers include Elsevier, SpringerNature, Nature Publishing Group, Taylor & Francis / Informa, or Wiley, which include the largest oligopolic publishers that dominate publishing today [2].

      Does this shift in definition also reflect a shift in Beall's stance regarding traditional publishers? Or does it mean that several of these publishers, who now publish large fleets of OA journals, can no longer be excluded from equal criticism if there is evidence of their “predatory” practices, as listed by Beall [3]?

      The second misleading aspect is that Beall no longer refers to such OA journals as simply "predatory". His definition evolved (the precise date is unclear) to characterize such publishers as "Potential, possible, or probable predatory scholarly open-access publishers" [4] and journals as "Potential, possible, or probable predatory scholarly open-access journals" [5]. Careful examination of this list of words reveals that almost any journal or publisher could be classified as “predatory”, provided that it fulfilled at least one of the criteria on the Beall list of “predatory” practices.

      So, is Beall referring exclusively to the lists in [4] and [5] in his latest attack on select members of the OA industry, or does his definition also include other publishers that also publish print journals, i.e., non-OA journals?

      Beall needs to explain himself carefully to scientists and to the public, because his warnings and radical recommendations [6] have to be carefully considered in the light of his flexible definitions and swaying lists.

      The issue of deceitful publishers and journals affects all scientists, and all of us are concerned. But we should also be extremely concerned about the inconsistency in Beall's lists and definitions, and the lack of clear definitions assigned to them, because many are starting to call for the use of those lists as "blacklists" to block or ban the publication of papers in such publishers and journals. I stand firmly against this level of discriminatory action until crystal-clear definitions for each entry are provided.

      We should also view with caution the journals that have approved these Beall publications, and ask what criteria were used to approve the publication of these papers with faulty definitions.

      Until then, these "warnings" by Beall may in fact represent a danger to freedom of speech and to academics' choice to publish wherever they please, with or without the explicit permission or approval of their research institutes, even though the Beall blog provides some entertainment value and serves as a crude “warning system”.

      [1] Beall J. "Predatory" open-access scholarly publishers. Charleston Advis 2010;11:10–17.

      [2] Larivière V, Haustein S, Mongeon P (2015) The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE 10(6): e0127502. doi:10.1371/journal.pone.0127502

      [3] https://scholarlyoa.files.wordpress.com/2015/01/criteria-2015.pdf

      [4] https://scholarlyoa.com/publishers/

      [5] https://scholarlyoa.com/individual-journals/

      [6] Beall J. Predatory journals: Ban predators from the scientific record. Nature 534, 326. doi: 10.1038/534326a (also read some pertinent criticism in the comments section of that paper)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 01, Lydia Maniatis commented:

      The logic of this study hinges on the following statement:

      "The perceptual distance between colors was calculated using the receptor-noise limited model of Vorobyev and Osorio (1998; see also Table 1; Supplementary Figure S1), which has recently been validated experimentally (Olsson, Lind, & Kelber, 2015)."

      In no sense is it legitimate to say that Olsson, Lind & Kelber have validated any models, as their own conclusions rest on unvalidated and implausible assumptions, specifically the assumption that the relevant discrimination thresholds "are set by photoreceptor noise, which is propagated into higher order processing."

      This idea (versions of which Teller, 1984, described as the "nothing mucks it up" proviso) is not only untested, it is bizarre, as it leaves open the questions of (a) how and why this "noise" is propagated, directly and unchanged, by a highly complex feedback and feedforward system whose outcomes (e.g. lightness constancy, first demonstrated by W. Kohler to exist in chicks) resemble logical inference (and which are not noisy in experience), and (b) even if we wanted to concede that the visual system is "noisy" (which is a bad idea), on what basis do we decide, using behavioral data, that this noise originates at the photoreceptor level, and only at that level? Many psychophysicists (equally illegitimately) prefer to cite V1 in describing their results.

      The concept of "noise" is related to the also-illegitimate ideas that neurons act as "detectors" of specific stimuli and that complex percepts are formed by summing up simpler ones.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 31, Thomas Ferguson commented:

      Other than VEGF, there are no solid data to support the idea that the other cytokines tested in this study are involved in human AMD. When the levels of the cytokines were lowered by ranibizumab plus dexamethasone treatment, there was no effect on disease course. It seems that the conclusion of this study should be the opposite: that inflammatory proteins (other than VEGF) are not involved in the pathogenesis of chronic macular edema due to AMD.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 31, Lydia Maniatis commented:

      I think the authors have overlooked a major confound in their stimuli: the structure of the collection of items. If we have three items, for example, they will always form a triangular structure, except if they’re in a line. If they’re in a line, they still have a structure, with a middle and two flanking items. Our visual system is sensitive to structure, including that of a collection of items; Gestalt experiments have shown this is also clearly the case with much “lower” animals, such as birds. I don’t think the authors can discuss this issue meaningfully without taking this factor into account.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 13, Andy Collings commented:

      Jeffrey Friedman and colleagues' response to Markus Meister's paper, Physical limits to magnetogenetics, is available here, https://elifesciences.org/content/5/e17210#comment-2948691685, and is reproduced below:

      On the Physical Limits of Magnetogenetics

      In a recent paper, Markus Meister comments on data published by our groups (and a third employing a different approach) [1] showing that cells can be engineered to respond to an electromagnetic field [2-4]. Based on a set of theoretical calculations, Meister asserts that neither the heat transfer nor mechanical force created by an electromagnetic field interacting with a ferritin particle would be of sufficient magnitude to gate an ion channel and then goes on to question our groups’ findings altogether.

      One series of papers (from the Friedman and Dordick laboratories) employed four different experimental approaches in cultured cells, tissue slices and animals in vivo to show that an electromagnetic field can induce ion flow in cells expressing ferritin tethered to the TRPV1 ion channel [2,3]. This experimental approach was validated in vitro by measuring calcium entry, reporter expression and electrophysiological changes in response to a magnetic field. The method was validated in vivo by assaying magnetically induced changes in reporter expression, blood glucose and plasma hormones levels, and alterations in feeding behavior in mice.

      These results are wholly consistent with those in an independent publication (from the Guler and Deppmann laboratories) in which the investigators fused ferritin in frame to the TRPV4 ion channel [4]. In this report, magnetic sensitivity was validated in vitro using calcium entry and electrophysiological responses as outputs. Additionally, in vivo validation was demonstrated by analyzing magnetically induced behaviors in zebrafish and mice, and through single unit electrophysiological recordings.

      In his paper, Meister incorrectly states our collective view on the operative mechanism [1]. While we are considering several hypotheses, we agree that the precise mechanism is undetermined. Lastly, although mathematical calculations can often be used to model biologic phenomena when enough of the relevant attributes of the system are known, the intrinsic complexity of biologic processes can in other instances limit the applicability of purely theoretical calculations [5]. It is our view that mathematical theory needs to accommodate the available data, not the other way around. We are thus surprised that Meister would stridently question the validity of an extensive data set published by two independent groups (and a third using a different method) without performing any experiments. However, we too are interested in defining the operative mechanism(s) and welcome further discussion and experimentation to bring data and theory into alignment.

      Jeffrey Friedman, Sarah Stanley, Leah Kelly, Alex Nectow, Xiaofei Yu, Sarah F Schmidt, Kaamashri Latcha

      Department of Molecular Genetics, Rockefeller University

      Jonathan S Dordick, Jeremy Sauer

      Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute

      Ali D Güler, Aarti M Purohit, Ryan M Grippo

      Christopher D Deppmann, Michael A Wheeler

      Sarah Kucenas, Cody J Smith

      Department of Biology, University of Virginia

      Manoj K Patel, Matteo Ottolini, Bryan S Barker, Ronald P Gaykema

      Department of Anesthesiology, University of Virginia

      (Laboratory Heads in Bold Lettering)

      References

      1) Meister, M, Physical limits to magnetogenetics. eLife, 2016. 5. http://dx.doi.org/10.7554/eLife.17210

      2) Stanley, SA, J Sauer, RS Kane, JS Dordick, and JM Friedman, Corrigendum: Remote regulation of glucose homeostasis in mice using genetically encoded nanoparticles. Nat Med, 2015. 21(5): p. 537. http://dx.doi.org/10.1038/nm0515-537b

      3) Stanley, SA, L Kelly, KN Latcha, SF Schmidt, X Yu, AR Nectow, J Sauer, JP Dyke, JS Dordick, and JM Friedman, Bidirectional electromagnetic control of the hypothalamus regulates feeding and metabolism. Nature, 2016. 531(7596): p. 647-50. http://dx.doi.org/10.1038/nature17183

      4) Wheeler, MA, CJ Smith, M Ottolini, BS Barker, AM Purohit, RM Grippo, RP Gaykema, AJ Spano, MP Beenhakker, S Kucenas, MK Patel, CD Deppmann, and AD Guler, Genetically targeted magnetic control of the nervous system. Nat Neurosci, 2016. 19(5): p. 756-61. http://dx.doi.org/10.1038/nn.4265

      5) Laughlin, RB and D Pines, The theory of everything. Proc Natl Acad Sci U S A, 2000. 97(1): p. 28-31. http://dx.doi.org/10.1073/pnas.97.1.28


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 05, Alan Roger Santos-Silva commented:

      The spectrum of oral squamous cell carcinoma in young patients

We read with interest the current narrative review published by Liu et al [1] in Oncotarget. The article itself is interesting; however, the authors appear to have misunderstood our article [2], because they seem to believe that there was a cause-effect relationship between orthodontic treatment and tongue squamous cell carcinoma (SCC) at a young age. This idea might spread anecdotal information suggesting that orthodontic treatment can cause persistent irritation of the oral mucosa and lead to oral SCC. Thus, we believe it is relevant to clarify that the current understanding of the spectrum of oral SCC in young patients points to three well-known groups according to demographic and clinicopathologic features: (1) patients aged 40-45 years, highly exposed to alcohol and tobacco, diagnosed with keratinizing oral cavity SCC; (2) patients younger than 45 years, predominantly non-smoking males, diagnosed with HPV-related non-keratinizing oropharyngeal SCC; and (3) patients younger than 40 years, mainly non-smoking and non-drinking females, diagnosed with keratinizing oral tongue SCC (HPV does not seem to be a risk factor in this group) [3-5]. Therefore, chronic inflammation triggered by persistent trauma of the oral mucosa should not be considered an important risk factor in young patients with oral cancer.

References:

1. Liu X, Gao XL, Liang XH, Tang YL. The etiologic spectrum of head and neck squamous cell carcinoma in young patients. Oncotarget. 2016 Aug 12. doi: 10.18632/oncotarget.11265. [Epub ahead of print].

2. Santos-Silva AR, Carvalho Andrade MA, Jorge J, Almeida OP, Vargas PA, Lopes MA. Tongue squamous cell carcinoma in young nonsmoking and nondrinking patients: 3 clinical cases of orthodontic interest. Am J Orthod Dentofacial Orthop. 2014; 145: 103-7.

3. Toner M, O'Regan EM. Head and neck squamous cell carcinoma in the young: a spectrum or a distinct group? Part 1. Head Neck Pathol. 2009; 3: 246-248.

4. de Castro Junior G. Curr Opin Oncol. 2016; 28: 193-194.

5. Santos-Silva AR, Ribeiro AC, Soubhia AM, Miyahara GI, Carlos R, Speight PM, Hunter KD, Torres-Rendon A, Vargas PA, Lopes MA. High incidences of DNA ploidy abnormalities in tongue squamous cell carcinoma of young patients: an international collaborative study. Histopathology. 2011; 58: 1127-1135.

      Authors: Alan Roger Santos-Silva [1,2]; Ana Carolina Prado Ribeiro [1,2]; Thais Bianca Brandão [1,2]; Marcio Ajudarte Lopes [1]

      [1] Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil. [2] Dental Oncology Service, Instituto do Câncer do Estado de São Paulo (ICESP), Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil.

Correspondence to: Alan Roger Santos-Silva, Department of Oral Diagnosis, Piracicaba Dental School, UNICAMP, Av. Limeira, 901, Areão, Piracicaba, São Paulo, Brazil, CEP: 13414-903. Telephone: +55 19 2106 5320. E-mail: alanroger@fop.unicamp.br


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 30, Olga Krizanova commented:

We are aware that there are some papers showing that Xestospongin C is ineffective on IP3 receptors. Nevertheless, Xest is a widely accepted inhibitor of IP3 receptors (IP3R), as documented by the majority of IP3R papers and also by the companies selling this product (e.g. Sigma-Aldrich, Cayman Chemical, Abcam, etc.). Because Xest also inhibits voltage-dependent Ca2+ and K+ currents at concentrations similar to those that inhibit the IP3R, it can be regarded as a selective blocker of the IP3R only in permeabilized cells. The cell type used in experiments might be of special importance. In our paper we observed the effect of Xest on IP3R1 in four different cell lines - A2780, SKOV3, Bowes and MDA-MB-231. Moreover, we verified the results obtained with Xest using another IP3R blocker, 2-APB, and also by IP3R1 silencing. All these results imply that Xest acts as an IP3R inhibitor. Recently, a paper using the more specific Xestospongin B was published, but unfortunately this compound is not yet commercially available.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 22, Darren Boehning commented:

Xestospongin C (Xest) does not inhibit IP3R channels. See PMID: 24628114, PMCID: PMC4080982, DOI: 10.1111/bph.12685. There are other well-documented examples in the literature.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 27, Atanas G. Atanasov commented:

      This is indeed a very promising research area… thanks to the authors for the good work


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 16, Ellen M Goudsmit commented:

      It should be noted that the PACE trial did not assess pacing as recommended by virtually all patient groups. This behavioural strategy is based on the observation that minimal exertion tends to exacerbate symptoms, plus the evidence that many with ME and CFS cannot gradually increase activity levels for more than a few days because of clinically significant adverse reactions [1]. It does not make any assumptions about aetiology.

      The authors state that “It should be remembered that the moderate success of behavioural approaches does not imply that CFS/ME is a psychological or psychiatric disorder.” I submit that this relates to CBT and GET and not to strategies such as pacing. It might be helpful here to remind readers that the GET protocol for CFS/ME (as tested in most RCTs) is partly based on an operant conditioning theory, which is generally regarded as psychological [2]. The rehabilitative approaches promoted in the UK, i.e. CBT and GET, tend to focus on fatigue and sleep disorders, both of which may be a result of stress and psychiatric disorders e.g. depression. A review of the literature from the 'medical authorities' in the UK shows that almost without exception, they tend to limit the role of non-psychiatric aetiological factors to the acute phase and that somatic symptoms are usually attributed to fear of activity and the physiological effects of stress.

I informed the editor that, as it read, the paper suggests that (1) patients have no sound medical source to support their preference for pacing and that (2) the data from the PACE trial provide good evidence against this strategy. I clarified that the trial actually evaluated adaptive pacing therapy (a programme including advice on stress management and a version of pacing that permits patients to operate at 70% of their estimated capability). The editor chose not to investigate this issue in the manner one expects from an editor of a reputable journal. In light of the above issues, the information about pacing in this paper may mislead readers.

      Interested scientists may find an alternative analysis of the differing views highly illuminating [3].

[1]. Goudsmit, EM., Jason, LA, Nijs, J and Wallman, KE. Pacing as a strategy to improve energy management in myalgic encephalomyelitis/chronic fatigue syndrome: A consensus document. Disability and Rehabilitation, 2012, 34, 13, 1140-1147. doi: 10.3109/09638288.2011.635746.

      [2]. Goudsmit, E. The PACE trial. Are graded activity and cognitive-behavioural therapy really effective treatments for ME? Online 18th March 2016. http://www.axfordsabode.org.uk/me/ME-PDF/PACE trial the flaws.pdf

      [3]. Friedberg, F. Cognitive-behavior therapy: why is it so vilified in the chronic fatigue syndrome community? Fatigue: Biomedicine, Health & Behavior, 2016, 4, 3, 127-131. http://www.tandfonline.com/doi/full/10.1080/21641846.2016.1200884


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 15, Lily Chu commented:

As a member of the Institute of Medicine Committee, I talked to multiple patients, caregivers, clinicians, and researchers. The problem they have with the name "CFS" goes beyond psychological stigma. For one, fatigue is only one symptom of the disease but not even the most disabling one for patients. Post-exertional malaise and cognitive issues are. Secondly, most patients and families are concerned about psychological implications not because of stigmatization but simply because CFS is NOT a psychological or psychiatric condition. Some patients experience co-morbid depression, acknowledge its presence, and receive treatment for it. In support groups, patients discuss depression and anxiety without fear of stigma. The problem comes when clinicians or researchers conflate patients' depression with their CFS and conclude that they can treat the latter condition with cognitive behavioral therapy or with SSRIs. An analogy would be if, tomorrow, patients experiencing myocardial infarcts and major depression were told aspirin, beta-blockers, cholesterol medication, etc. would no longer be the treatments for myocardial infarcts but instead SSRIs would be. Could you imagine how patients would feel in that circumstance? That is why they are concerned.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 10, ROBERT COMBES commented:

      Robert Combes and Michael Balls

In a recent exchange of views, in PubMed Commons, with Simon Chapman on the effectiveness and safety of vaping for achieving the cessation of tobacco smoking, provoked by a paper published by Martin McKee [and comments therein], Clive Bates has criticised one of our publications. The paper in question urges caution concerning any further official endorsement of electronic cigarettes (ECs), at least until more safety data (including results from long-term tests) have become available. Bates questions why we should write on such issues, given our long-standing focus on ‘animal rights’, as he puts it, and from this mistaken assumption he makes the remarkably illogical deduction that our paper is without merit. Bates also implies that our views should not be taken seriously, because we published in Alternatives to Laboratory Animals (ATLA), a journal owned by FRAME (Fund for the Replacement of Animals in Medical Experiments), an organisation with which we have been closely associated in the past.

We have written a document to correct Bates' misconceptions about who we are, what our experience is, why we decided to write about this topic in the first place, what we actually said, and why we said it. In addition, we have elaborated on our views concerning the regulatory control of e-cigarettes, in which we explain in detail why we believe the current policy being implemented by PHE lacks a credible scientific basis. We make several suggestions to rectify the situation, based on our careers specialising in cellular toxicology:

a) the safety of electronic cigarettes should be seen as a problem to be addressed, primarily by applying toxicological principles and methods, to derive relevant risk assessments, based on experimental observations and not opinions and guesswork;

b) such assessments should not be confused with arguments in favour of vaping based on how harmful smoking is, and on the results of chemical analysis;

c) it would be grossly negligent if the relevant national regulatory authorities were to continue to ignore the increasingly convincing evidence suggesting that exposure to nicotine can lead to serious long-term, as distinct from acute, effects, related to carcinogenicity, mutagenicity (manifested as DNA and chromosomal damage) and reproductive toxicity; and

d) only once such information has been analysed, together with the results of other testing, should risks from vaping be weighed against risks from not vaping, to enable properly informed choice.

Due to space limitations, the pre-publication version of the complete document has to be downloaded from: https://www.researchgate.net/publication/307958871_Draft_Response_regarding_comments_made_by_Clive_Bates_about_one_of_our_publications_on_the_safety_of_electronic_cigarettes_and_vaping and our original publication is available from: https://www.researchgate.net/publication/289674033_On_the_Safety_of_E-cigarettes_I_can_resist_anything_except_temptation1

      We hope that anyone wishing to respond will carefully read these two documents before doing so.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 24, Clive Bates commented:

      In response to Professor Daube, I am pleased to have the opportunity to explain a different and less authoritarian approach to the public health challenges of smoking.

      1. But let me start with a misunderstanding. Professor Daube accuses me of a personal attack on Professor McKee. In fact, I made five specific substantive comments on Professor McKee's short letter, to which Professor Stimson added a further two. These are corrections of fact and understanding, not a 'personal attack'. It is important that academics understand and recognise this distinction.

      2. Professor Daube draws the reader's attention to a link to an investor presentation by Imperial Tobacco. I am unsure what point he is trying to make. Nevertheless, the presentation paints a rosy picture of life in Australia for this tobacco company: it is "on track" (p6); it has "continued strong performance in Australia" (p15); in Australia it is "continuing to perform strongly - JPS equity driving share, revenue and profit growth" (p31). It may be a hard pill to swallow, but tobacco companies in Australia are very profitable indeed, in part because the tax regime allows them to raise underlying pre-tax prices easily.

3. It's a common error of activists to believe that harm to tobacco companies is a proxy for success in tobacco control (an idea sometimes known as 'the scream test'). If that were the case, the burgeoning profitability of tobacco companies would be a sign of utter failure in tobacco control [1]. We should instead focus on what it takes to eliminate smoking-related disease. If that means companies selling products that don't kill the user instead of products that do, then so be it - I consider that progress. If your alternative is to use coercive policies to stop people using nicotine at all, then you may make progress... but it will be slow and laborious, smoking will persist for longer and many more people will be harmed as a result. These are the unintended consequences of taking more dogmatic positions that seem tougher, but are less effective.

      4. In any event, my concerns are not about the welfare of the tobacco industry in Australia or anywhere else. My concern, as I hope I made clear in my response to Professor Chapman, is the welfare of the 2.8 million Australians (16% adults) who continue to smoke despite Australia's tobacco control efforts. For them, the serious health risks of smoking are compounded by some Australian tobacco control policies that are punitive (Australia is not alone in this) while being denied low-risk alternatives. All the harms caused by both smoking and anti-smoking policies can be mitigated and the benefits realised by making very low-risk alternatives to combustible cigarettes (for example, e-cigarettes or smokeless tobacco) available to smokers to purchase with their own money and of their own volition. Professor Daube apparently opposes this simple liberal idea - that the state should not intervene to prevent people improving their own health in a way that works for them and harms no-one else.

5. Professor Daube finishes his contribution with what I can only assume is an attempted smear in pointing out that I sometimes speak at conferences where the tobacco industry is present, as if this is somehow, a priori, an immoral act. I speak at these events because I have an ambitious advocacy agenda about how these firms should evolve from being 'merchants of death' into supplying a competitive low-risk recreational nicotine market, based on products that do not involve combustion of tobacco leaf, which is the source of the disease burden. So I, and many others, have a public health agenda - the formation of a market for nicotine that will not kill one billion users in the 21st Century, and that will perhaps avoid hundreds of millions of premature deaths [2]. There is a dispute about how to do this, and no doubt Professor Daube has ideas. However, the policy proposals for the so-called 'tobacco endgame' advanced by tobacco control activists do not withstand even cursory scrutiny [3]. The preferred approach of advocates of 'tobacco harm reduction', among which I include myself, involves a fundamental technology transformation, a disruptive process that has started and is synergistic with well-founded tobacco control policies [4]. If, like me, you wish to see a market change fundamentally, then it makes sense to talk to and understand every significant actor in the market, rather than only those whose convictions you already share.

      References & further reading

      [1] Bates C. Who or what is the World Health Organisation at war with? The Counterfactual, May 2016 [link].

      [2] Bates C. A billion lives? The Counterfactual, November 2015 [link] and Bates C. Are we in the endgame for smoking? The Counterfactual, February 2015 [link]

      [3] Bates C. The tobacco endgame: a critique of the policy ideas. The Counterfactual, March 2015 [link]

      [4] Bates C. A more credible endgame - creative destruction. The Counterfactual, March 2015 [link].


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Aug 25, Clive Bates commented:

      As I think Professor Daube's comment contains inappropriate innuendo about my motives, let me repeat the disclosure statement from my initial posting:

      Competing interests: I am a longstanding advocate for 'harm reduction' approaches to public health. I was director of Action on Smoking and Health UK from 1997-2003. I have no competing interests with respect to any of the relevant industries.

      My hope is that prominent academics and veterans of the struggles of the past will adopt an open mind towards the right strategy for reducing the burden of death and disease caused by smoking as we go forward. While he may not like the idea, Professor Daube can surely see that 'tobacco harm reduction' is a concept supported by many of the top scientists and policy thinkers in the field, including the Tobacco Advisory Group of the Royal College of Physicians. It is not the work of the tobacco industry and cannot be dismissed just by claiming it is in their interests.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Aug 24, Mike Daube commented:

      As part of his lengthy and personalised attacks on Martin McKee, Clive Bates argues that “we certainly should not” look to Australia for policy inspiration.

      This view, and some of his other comments, would have strong support from the global tobacco industry, which has ferociously opposed the evidence-based action to reduce smoking taken by successive Australian governments, and reports that we are “the darkest market in the world”. (1)

No doubt Mr Bates will be able to discuss these issues further with tobacco industry leaders at the Global Tobacco & Nicotine Forum (“the annual industry summit”) in Brussels later this year, where, as in previous years, he is listed as a speaker (2).

References

1. Brisby D, Pramanik A, Matthews P, Kutz O, Kamaras A. Imperial Brands PLC Investor Day: Jun 8 2016. Transcript – Quality Growth: Returns and Growth – Markets that Matter [p.6] & Presentation Slides – Quality Growth: Returns and Growth – Markets that Matter [slide 16]. http://www.imperialbrandsplc.com/Investors/Results-centre.

2. http://gtnf-2016.com/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Aug 22, Clive Bates commented:

      Some responses to Professor Simon Chapman:

      1. Professor Chapman criticises the Public Health England and Royal College of Physicians consensus on the relative risk of smoking and e-cigarette use by referring to a comment piece Combes RD, 2015 in the journal Alternatives to Laboratory Animals. The piece is written by a commentator whose affiliation is an animal welfare rights campaign (FRAME), for which ATLA is the house journal, and an independent consultant. How these two came to be writing about e-cigarettes at all is not stated, but this is less important than the fact that their commentary provides little of substance to challenge the robust expert-based PHE and RCP analysis, and it provides even less to justify the colourful dismissive pull-out quotes chosen by Professor Chapman. Even though the work can be dismissed on its merits, surely the authors should have disclosed that FRAME has pharmaceutical funders [Our supporters], including companies who make and sell medical smoking cessation products.

      2. Professor Chapman confirms my view that the appropriate statistic to use for comparing Australian prevalence of current smoking is 16.0 percent based on the Australian Bureau of Statistics, National Health Survey: First Results, 2014-15 (see table 9.3). This is the latest data on the prevalence of current adult smoking.

      3. Unless it's to make the numbers look as low as possible, I am unsure why Professors Chapman and McKee choose to refer to a survey from 2013 or why Professor Chapman didn't disclose in his response that he is citing a survey of drug use, including illicit drug use: [see AIHW, National Drug Strategy Household Survey detailed report 2013]. Surely a neutral investigator would be concerned that a state-run survey asking about illicit drug use might have a low response rate? And further, that non-responders would be more likely to be drug users, and hence also more likely to be smokers - so distorting the prevalence systematically downwards? In fact, the response rate in this survey is just 49.1% [Explanatory notes]. While this might be the best that can be done to understand illicit drug use, it is an unnecessarily unreliable way to gauge legal activity like smoking, especially as a more recent and more reliable survey is available.

      4. The figure of 11% given for smoking in Sweden is not 'daily smoking' as asserted by Professor Chapman. With just a little more research before rushing out his reply, Professor Chapman could have checked the source and link I provided. The question used is: "Regarding smoking cigarettes, cigarettes, cigars, cigarillos or a pipe, which of the following applies to you?" 11% of Swedes answer affirmatively to the response: "You currently smoke".

5. If we are comparing national statistics, it is true that measured smoking prevalence in Britain is a little higher than in Australia - the latest Office for National Statistics data suggests 17.5 percent of adults aged 16 and over were current smokers in 2015 (derived from its special survey of e-cigarette use: E-cigarette use in Great Britain 2015). So what? The two countries are very different, both today and in where they have come from, and many factors explain smoking prevalence - not just tobacco control policy. But if one is to insist on such comparisons, official data from the (until now) vape-friendly United States suggests that American current adult smoking prevalence, at 15.1 percent, is now below that of Australia [source: National Center for Health Statistics, National Health Interview Survey, 1997–2015, Sample Adult Core component. Figure 8.1. Prevalence of current cigarette smoking among adults aged 18 and over: United States, 1997–2015]

      6. Regressive taxes are harmful and so is stigmatisation - I shouldn't need to reference that for anyone working in public health. Any thoughtful policy maker will not only try to design policies that achieve a primary objective (reduce the disease attributable to smoking) but also be mindful that the policies themselves can be a source of harm or damaging in some other way. Ignoring the consequences of tobacco policies on wider measures of wellbeing is something best left to fanatics. In public health terms, these consequences may be considered 'a price worth paying' to reduce smoking, but they create real harms for those who continue to smoke, and in my view, those promoting them have an ethical obligation to mitigate these wider harms to the extent possible.

      7. The approach, favoured by me and many others, of supporting (or in Australia's case of not actively obstructing) ways in which smokers can more easily move from the most dangerous products to those likely to cause minimal risk has twin advantages:

      • (1) it helps to achieve the ultimate goal of reducing cancer, cardiovascular disease, and respiratory illnesses by improving the responsiveness of smokers to conventional tobacco control policy. It does this by removing the significant barrier of having to quit nicotine completely, something many cannot do easily or choose not to do.

      • (2) It does this in a way that goes with the grain of consumer preferences and meets people where they are. This is something for public health to rediscover - public health should be about 'enabling', not bullying or nannying, and go about its business with humility and empathy towards those it is trying to help.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2016 Aug 22, Clive Bates commented:

      As an aside, it's disappointing to see Professor Chapman spreading doubt about e-cigarettes with reference to the filters and 'light and mild' cigarette fiasco (see the 1999 report by Martin Jarvis and me on this fiasco). This 'science-by-analogy' fails because it misunderstands the nicotine-seeking behaviour that underpins both smoking and vaping.

With light and mild cigarettes, health activists were fooled into believing that these cigarettes would be much less risky, even though they are no less risky. It would be wrong to compound this error by implying that e-cigarettes are not much less risky, even though they are sure to be.

The underlying reason for both errors is the same - nicotine users seek a roughly fixed dose of nicotine (a well-understood process, known as titration). If a vaper can obtain their desired nicotine dose without exposure to cigarette smoke toxins, then they will not suffer the smoking-related harms. With light and mild cigarettes, both nicotine and toxins were diluted equally with air to fool smoking machines. However, human smokers adjusted their behaviour to get the desired dose of nicotine and so got almost the same exposures to toxins. This is another well-understood process known as 'compensation'. I am sure a global authority of Professor Chapman's stature would be aware of these mechanisms, so it is all the more perplexing that he should draw on this analogy in his campaign against e-cigarettes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2016 Aug 22, Simon Chapman commented:

Clive Bates' efforts to correct points made in Martin McKee’s letter in turn require correction and comment. Bates disputes that there was a single source for the claim that e-cigarettes are “95% safer” than smoking (in fact Public Health England stated “95% less harmful” [1], a critical difference). Bates cites two references in support of his claim, but both of these are nothing but secondary references, with both citing the same Nutt et al [2] 95% less harmful estimate as their primary source.

      Two toxicologists have written an excoriating critique of the provenance of the “95% less harmful” statement, describing its endorsement as “reckless”[3] and nothing but the consensus of the opinions of a carefully hand-picked group. The 95% estimate remains little more than a factoid – a piece of questionable information that is reported and repeated so often that it becomes accepted as fact.

We will not have an evidence-based comparison of harm until we have cohort data in the decades to come comparing mortality and morbidity outcomes from exclusive smokers versus exclusive vapers and dual users. This was how our knowledge eventually emerged of the failure of other mass efforts at tobacco harm reduction: cigarette filters and the misleading lights and milds fiasco.

      Bates challenges McKee’s statement that Australian smoking prevalence is “below 13%” and cites Australian Bureau of Statistics (ABS) data from 2014-15 derived from a household survey of 14,700 dwellings that shows 16% of those aged 18+ were “current” smokers (14.5% smoking daily). McKee was almost certainly referring to 2013 data from the Australian Institute of Health and Welfare’s (AIHW) ongoing national surveys based on interviews with some 28,000 respondents which showed 12.8% of age 14+ Australians smoked daily, with another 2.2% smoking less than daily[4]. The next AIHW survey will report in 2017 and with the impact of plain packaging, several 12.5% tobacco tax increases, on-going tobacco control campaigning and a downward historical trend away from smoking, there are strong expectations that the 2017 prevalence will be even lower.

      Bates cites a 2015 report saying that Sweden has 11% smoking prevalence. This figure is almost certainly daily smoking prevalence data, not total smoking prevalence that Bates insists is the relevant figure that should be cited for Australia. If so, the comparable figure for Sweden should also be used. In 2012 the Swedish Ministry of Health reported to the WHO that 22% of Swedish people aged 16-84 currently smoked (11% daily and 11% less than daily) [5]. It is not credible that Sweden could have halved its smoking prevalence in three years.

Meanwhile, England, with a current smoking prevalence of 18.2% as of July 2016 [6 – slide 1], trails Australia, regardless of whether the ABS or AIHW data are used. Also, the proportion of English smokers who smoked in the last year and who tried to stop smoking is currently the lowest recorded in England since 2007 [6, slide 4].

Bates says that the UK and the USA, where e-cigarette use is widespread, have seen “recent sharp falls” in smoking prevalence. In fact, smoking prevalence has been falling in both nations for many years prior to the advent of e-cigarettes, as it has in Australia, where e-cigarettes are seldom seen. Disturbingly, in the USA the decline in youth smoking came to a halt after 2014 [7], following continuous falls for at least a decade – well before e-cigarette use became popular. The spectacular increase in e-cigarette use among youth, particularly between 2013 and 2015 (see Figure 1 in reference 7), was either coincident with, or possibly partly responsible for, that halting.

Finally, Bates makes gratuitous, unreferenced remarks about “harms” arising from Australia’s tobacco tax policy and “campaigns to denormalise smoking”. There are no policies or campaigns to denormalise smoking in Australia that are not also in place in the UK or the USA, as well as many other nations. When Bates was director at ASH he vigorously campaigned for tobacco taxes to be high and to keep on increasing [8]. His current views make an interesting contrast with even the CEO of British American Tobacco Australia who agrees that tax has had a major impact on reducing smoking, telling an Australian parliamentary committee in 2011 “We understand that the price going up when the excise goes up reduces consumption. We saw that last year very effectively with the increase in excise. There was a 25 per cent increase in the excise and we saw the volumes go down by about 10.2 per cent; there was about a 10.2 per cent reduction in the industry last year in Australia.” [9].

      References

      1 Public Health England. E-cigarettes: a new foundation for evidence-based policy and practice. Aug 2015. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/454517/Ecigarettes_a_firm_foundation_for_evidence_based_policy_and_practice.pdf

      2 Nutt DJ et al. Estimating the harms of nicotine-containing products using the MCDA approach. Eur Addict Res 2014;20:218-25.

3 Combes RD, Balls M. On the safety of e-cigarettes: “I can resist anything except temptation”. ATLA 2015;42:417-25. https://www.researchgate.net/publication/289674033_On_the_Safety_of_E-cigarettes_I_can_resist_anything_except_temptation1

      4 Australian Institute of Health and Welfare. National Drug Household Survey. 2014 data and references. http://www.aihw.gov.au/WorkArea/DownloadAsset.aspx?id=60129548784

      5 Swedish Ministry for Health and Social Affairs. Reporting instrument of the WHO Framework Convention on Tobacco Control 2012 (13 April) http://www.who.int/fctc/reporting/party_reports/sweden_2012_report_final_rev.pdf

      6 Smoking in England. Top line findings STS140721 5 Aug 2016 http://www.smokinginengland.info/downloadfile/?type=latest-stats&src=13 (slide 1)

      7 Singh T et al. Tobacco use among middle and high school students — United States, 2011–2015. http://www.cdc.gov/mmwr/volumes/65/wr/mm6514a1.htm MMWR April 15, 2016 / 65(14);361–367

      8 Bates C Why tobacco taxes should be high and continue to increase. 1999 (February) http://www.ash.org.uk/files/documents/ASH_218.pdf

      9 The Treasury. Post-implementation review: 25 per cent tobacco excise increase. Commonwealth of Australia 2013; Feb. http://ris.dpmc.gov.au/files/2013/05/02-25-per-cent-Excise-for-Tobacco.doc p15


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2016 Aug 21, Gerry Stimson commented:

Clive Bates (below) identifies five assertions by Martin McKee that need correction: there are two more, making seven in McKee's eleven-line letter.

      First, McKee states that ‘It is misleading to suggest that there is a consensus on e-cigarettes in England, given that many members of the health community have continuing reservations’ and quotes one short BMA statement that calls for medical regulation of e-cigarettes.

      He ignores the ‘public health consensus statement’ from English public health, medical, cancer and tobacco control organisations that supports e-cigarettes for quitting smoking. The consensus statement says that ‘We all agree that e-cigarettes are significantly less harmful than smoking.’ [1, 2]. The first edition of this statement [1] explicitly challenges McKee’s position on the evidence. The consensus statement is endorsed by Public Health England, Action on Smoking and Health, the Association of Directors of Public Health, the British Lung Foundation, Cancer Research UK, the Faculty of Public Health, Fresh North East, Healthier Futures, Public Health Action, the Royal College of Physicians, the Royal Society for Public Health, the UK Centre for Tobacco and Alcohol Studies and the UK Health Forum. McKee and the BMA are minority outliers in England and the UK.

The PHE report on e-cigarettes faced a backlash, but this came from a few public health leaders, including McKee, who organised a behind-the-scenes campaign against the report, including a critical editorial and comment in the Lancet and an editorial in the BMJ, backed up by a media campaign hostile to PHE. Emails revealed as a result of a Freedom of Information request show that this backlash was orchestrated by McKee and a handful of public health experts [3, 4].

Second, McKee misrepresents and misunderstands drugs harm reduction. He cites Australia, and it was indeed in Australia (as in the UK) that the public health successes in preventing the spread of HIV infection and other adverse aspects of drug use were driven by harm reduction – including engaging with drug users, outreach to drug users, destigmatisation, provision of sterile needles and syringes, and methadone [5, 6, 7]. Drugs harm reduction was a public health success [4, 6]. The UK and other countries that implemented harm reduction avoided a major epidemic of drug-related HIV infection of the sort that has been experienced in many countries. Drugs harm reduction was implemented despite drug demand and supply reduction measures, not, as McKee asserts, because it was part of a combined strategy including demand and supply reduction. McKee’s position is out of step with the Open Society Institute, of which he chairs the Global Health Advisory Committee; OSI has resourced drugs harm reduction and campaigns against the criminalisation of drugs, i.e. those demand and supply reduction measures that maximise harm.

      1 Public health England (2015) E-cigarettes: a developing public health consensus. https://www.gov.uk/government/news/e-cigarettes-an-emerging-public-health-consensus

      2 Public health England (2016) E-cigarettes: a developing public health consensus. https://www.gov.uk/government/publications/e-cigarettes-a-developing-public-health-consensus

3 Puddlecote D (2016) Correspondence between McKee and Davies Aug 15 to Oct 15. https://www.scribd.com/doc/296112057/Correspondence-Between-McKee-and-Davies-Aug-15-to-Oct-15. Accessed 07 03 2016

      4 Stimson G V (2016) A tale of two epidemics: drugs harm reduction and tobacco harm reduction, Drugs and Alcohol Today, 16, 3 2016, 1-9.

      5 Berridge V (1996) AIDS in the UK: The Making of Policy, 1981-1994. Oxford University Press.

      6 Stimson G V (1995) AIDS and injecting drug use in the United Kingdom, 1988-1993: the policy response and the prevention of the epidemic. Social Science and Medicine, 41,5, 699-716

      7 Wodak A, (2016) Hysteria about drugs and harm minimisation. It's always the same old story. https://www.theguardian.com/commentisfree/2016/aug/11/hysteria-about-drugs-and-harm-minimisation-its-always-the-same-old-story


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    9. On 2016 Aug 20, Clive Bates commented:

      The author, Martin McKee, makes no less than five assertions in this short letter that demand correction:

      First, that there was only one source for the claim that e-cigarettes are "95% safer" than smoking. In fact, this claim does not rely on a single source but is the consensus view of Public Health England's expert reviewers [1] and a close variation on this claim is the consensus view of the Tobacco Advisory Group of the Royal College of Physicians and is endorsed by the College [2]:

      Although it is not possible to precisely quantify the long-term health risks associated with e-cigarettes, the available data suggest that they are unlikely to exceed 5% of those associated with smoked tobacco products, and may well be substantially lower than this figure. (Section 5.5 page 87)

Second, that PHE's work was in some way compromised by McKee's "concerns about conflicts of interest". To support this largely self-referential claim, he cites a piece of very poor journalism in which every accusation was denied or refuted by all involved. Please see Gornall J, 2015 including my PubMed Commons critique of this article and a more detailed critique on my blog [3].

      Third, that "other evidence, some not quoted in the review, raised serious questions about the safety of these products". The citation for this assertion is Pisinger C, 2014. This review does not, in fact, raise any credible questions about the safety of these products, and suffered numerous basic methodological failings. For this reason, it was reviewed but then ignored in the Royal College of Physicians' assessment of e-cigarette risk [2 - page 79]. Please see the PubMed Commons critiques of this paper [4].

      Fourth, that adult smoking prevalence in Australia is "below 13%, without e-cigarettes". Both parts of this claim are wrong. The latest official data shows an adult smoking prevalence of 16.0% in Australia [5]. No citation was provided by the author for his claim. E-cigarettes are widely used in Australia, despite a ban on sales of nicotine liquids. Australians purchase nicotine-based liquids internationally over the internet or buy on a thriving black market that has been created by Australia's wholly unjustified de facto prohibition.

      Fifth, that we "should look to Australia" for tobacco policy inspiration. We certainly should not. Australia has a disturbingly unethical policy of allowing cigarettes to be widely available for sale but tries to deny its 2.8 million smokers access to much safer products by banning nicotine-based e-cigarettes. These options have proved extremely popular and beneficial for millions of smokers in Europe and the United States trying to manage their own risks and health outcomes. Further, the author should consider the harms that arise from Australia's anti-smoking policies in their own right, such as high and regressive taxation and stigma that arises from its campaigns to denormalise smoking.

If the author wishes to find a model country, he need not travel as far as Australia. Sweden had a smoking prevalence of 11% in 2015 - an extreme outlier in the European Union, which averages 26% prevalence on the measure used in the only consistent pan-European survey [6]. The primary reason for Sweden's very low smoking prevalence is the use of alternative forms of nicotine (primarily snus, a smokeless tobacco) which pose minimal risks to health and have over time substituted for smoking. This is exactly what we might expect from e-cigarettes and, given the recent sharp falls in adult and youth smoking in both the UK and the US, this does seem likely. Going with the grain of consumers' preferences represents a more humane way to address the risks of smoking than the battery of punitive and coercive policies favoured in Australia.

      Though not specialised in nicotine policy or science, the author is a prolific commentator on the e-cigarette controversy. If he wishes to contribute more effectively, he could start by reading an extensive critique of his own article in the BMJ (McKee M, 2015), which is at once devastating, educational, and entertaining [7].

      References

      [1] McNeill A. Hajek P. Underpinning evidence for the estimate that e-cigarette use is around 95% safer than smoking: authors’ note, 27 August 2015 [link]

      [2] Royal College of Physicians (London) Nicotine without smoke: tobacco harm reduction 28 April 2016 [link]

      [3] Bates C. Smears or science? The BMJ attack on Public Health England and its e-cigarettes evidence review, November 2015 [link]

      [4] Pisinger C, 2014 Bates C. comment [here] and Zvi Herzig [here]

      [5] Australian Bureau of Statistics, National Health Survey: First Results, 2014-15. Table 9.3, 8 December 2015 [link to data]

      [6] European Commission, Special Eurobarometer 429, Attitudes of Europeans towards tobacco, May 2015 [link] - see page 11.

      [7] Herzig Z. Response to McKee and Capewell, 9 February 2016 [link]

      Competing interests: I am a longstanding advocate for 'harm reduction' approaches to public health. I was director of Action on Smoking and Health UK from 1997-2003. I have no competing interests with respect to any of the relevant industries.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 21, Daniel Himmelstein commented:

Thanks, Dr. Seulbe Lee, for your response. My apologies for the unit mistake. For the record, I had incorrectly used milliliters rather than liters in the denominator of stream concentrations.

I updated my notebook to fix the error. To avoid confusion, I changed the notebook link in my first comment to be version specific. I also performed another analysis which speculated on potential sewage concentrations of AMPH under the following assumptions:

      • 1 in 4 people orally consume 30 mg of AMPH daily
      • 40% of the consumed AMPH is excreted into the sewage
      • Each person creates 80 gallons of sewage per day

      Under these assumptions, fresh sewage was estimated to contain 9.91 ug/L of AMPH, which is ~10 times higher than the artificial streams. Granted there is likely additional dilution and degradation I'm not accounting for, but nonetheless this calculation shows it's possible that sewage streams from avid amphetamine communities could result in the doses reported by this study.
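For anyone who wants to check that back-of-envelope figure, here is a minimal Python sketch of the same arithmetic under the bulleted assumptions above (the variable names are purely illustrative and not taken from the study or the notebook):

```python
# Back-of-envelope check of the speculative sewage AMPH concentration above.
# All inputs restate the bulleted assumptions; none come from the study itself.

GALLON_TO_L = 3.78541            # liters per US gallon

dose_mg = 30.0                   # assumed daily oral AMPH dose per user
user_fraction = 0.25             # assumed 1 in 4 people consume AMPH daily
excreted_fraction = 0.40         # assumed fraction excreted into sewage
sewage_gallons_per_person = 80   # assumed daily sewage volume per person

# Average AMPH excreted per person per day, diluted into that day's sewage.
excreted_ug = dose_mg * user_fraction * excreted_fraction * 1000   # mg -> ug
sewage_liters = sewage_gallons_per_person * GALLON_TO_L

concentration_ug_per_l = excreted_ug / sewage_liters
print(f"{concentration_ug_per_l:.2f} ug/L")   # ~9.91 ug/L, roughly 10x the 1 ug/L artificial streams
```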

      Our research group is continuing work on the ecological effects of multiple contaminants found in these streams.

      Glad to hear. As someone who's swam in both Cresheim Creek and the Mississippi River just this summer, I can appreciate the need to study and reduce the contamination of America's waterways.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Sep 17, Sylvia Seulbe Lee commented:

      Daniel,

      Thank you for your comments on the paper. We appreciate your skepticism and critical observations. Although the CNN story mentions that the source of amphetamine in Baltimore streams could be linked to the excrement of illicit drug users, we clarify that our study made no claims about the major source of the amphetamine in the streams we sampled. Illicit or recreational drug use is one potential source of amphetamine. We are unable to distinguish between recreational and prescription drug use in Baltimore, but prescription use of amphetamine (e.g., for the treatment of ADHD, illicit use by college students prior to exams) may be the primary cause for increased loading, especially given the increasing trend in number of diagnoses and prescription of medication for treatment of ADHD and similar conditions. Another source of amphetamine is improper disposal of prescription medication (flushing down the toilet).

      We have to point out that your reading of the amphetamine concentrations is incorrect. We measured 0.630 ug/L amphetamine in Gwynns Falls, which is equivalent to 630 ng/L or 0.630 ng/mL. Additionally, we added 1 ng/mL (equivalent to 1 ug/L reported in the paper) amphetamine into the artificial streams, not 1000 ng/mL. Thus, the actual concentrations of amphetamine measured in the field and used in the experiment were 1000 times less than the concentrations you reported.
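To spell out the unit arithmetic described in the paragraph above, a minimal Python sketch follows (variable names are illustrative only, not from the paper):

```python
# Unit check: 1 ug/L = 1000 ng per 1000 mL = 1 ng/mL, so ug/L and ng/mL are numerically equal.

field_ug_per_l = 0.630      # Gwynns Falls amphetamine measurement reported in the paper (ug/L)
dosed_ng_per_ml = 1.0       # amphetamine added to the artificial streams (ng/mL)

field_ng_per_l = field_ug_per_l * 1000   # 630 ng/L
field_ng_per_ml = field_ug_per_l         # 0.630 ng/mL
dosed_ug_per_l = dosed_ng_per_ml         # 1 ug/L

# Reading ug/L values as if they were ng/mL overstates the concentrations by a factor of 1000.
print(field_ng_per_l, field_ng_per_ml, dosed_ug_per_l)   # 630.0 0.63 1.0
```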

      With respect to dilution of pharmaceutical products from sewage to the watershed, we would like to note that the stream we sampled is small (http://www.beslter.org/virtual_tour/Watershed.html) and the wastewater entering these streams is mostly raw, untreated sewage leaking from failing infrastructure. Baltimore has a population of more than 600,000 people and the large number of people feeding waste into that river could create quite a load. In addition, we note that amphetamine degraded by over 80% in the artificial streams. Thus, we noted in the discussion section that the high concentrations found in the field may indicate that the loading of amphetamine into the Baltimore streams is actually higher than the concentrations we measured, or that there is pseudo-persistence of amphetamine because of continuous input into the streams. Our finding that there were ecological effects even with 80% degradation of the parent amphetamine compound in the artificial streams is noteworthy.

Furthermore, we acknowledge that the concentrations of drugs in streams are spatially and temporally variable. As shown in our paper, the concentrations of drugs differed quite a bit between our sampling in 2013 and in 2014. The differences were likely due to high flow events prior to our sampling date in 2013. However, the environmental relevance of a 1 ug/L amphetamine concentration was clearly supported in the paper by higher concentrations found in streams and rivers in other locations (e.g., Spain, India, etc.).

      Finally, we agree completely that there are many pressing and detrimental contaminants in urban streams in Baltimore and elsewhere. Our research group is continuing work on the ecological effects of multiple contaminants found in these streams.

      Regards, Sylvia - on behalf of my co-authors.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Aug 26, Daniel Himmelstein commented:

Preamble: I'm far from an expert on environmental science, just a critical observer skeptical of claims that excrement from recreational drug users harms aquatic environments. Given the ongoing war on drugs, these topics are bound to be political. For example, CNN covered this study with the title "Your drain on drugs: Amphetamines seep into Baltimore's streams." The CNN story concludes that excrement of illicit meth users is of environmental concern.

      Premise: By the time pharmaceutical products in excrement reach the watershed, they will be extremely diluted. Humans safely tolerate the undiluted dosage, so in general I don't envision the extremely diluted dose harming aquatic life. In cases where the watershed contains high concentrations of pharmaceuticals, I suspect the contamination vector was not the excrement of users, but rather runoff from manufacturing or distribution processes.

      Specifics:

      This study observed the following six concentrations of amphetamine in Baltimore's streams: 3, 8, 13, 28, 101, 630 ng/ml (Table 1). They constructed four artificial streams where they introduced 1000 ng/ml of AMPH (D-amphetamine). Note that the controlled experiment evaluated an AMPH concentration 49 times that of the median concentration in Baltimore streams.

      Furthermore, the Cmax (max concentration in plasma) of D-amphetamine resulting from prescription Adderall is 33.8 ng/ml (McGough JJ, 2003). Accordingly, the artificial streams used an AMPH concentration 30 times that of the blood of an active user. Note that AMPH has a high bioavailability: 75% of the consumed dose enters the blood according to DrugBank. It's unreasonable that runoff from excrement of users could result in a higher concentration than in the blood of the active user.

      However, the study frames the contamination as a result of excrement. The introduction states:

      Unfortunately, many of the same chemicals are also used illicitly as narcotics. After ingestion of AMPH approximately 30−40% of the parent compound plus its metabolites are excreted in human urine and feces, and these can be transported into surface waters directly or through wastewater treatment facilities. On the basis of increases in both medical and illicit usage, there is cause to speculate that the release of stimulants to various aquatic environments across the globe may be on the rise.

      And the discussion states:

      Our study demonstrates that illicit drugs may have the potential to alter stream structure and function.

      Conclusion:

      Evidence is lacking that excrement from recreational drug users has anything to do with environmentally harmful levels of AMPH in Baltimore streams. There seems to be a bigger issue with pollution in the Baltimore streams, with the study stating:

      As much as 65% of the average flow in the Gwynns Falls can be attributed to untreated sewage from leaking infrastructure

      In such a polluted aquatic environment, I suspect there are several more pressing and detrimental contaminants than recreational drugs. Finally, there are related studies, such as Jiang JJ, 2015, that I haven't had time to investigate.

      Update 2016-09-01:

      Here is more evidence that the 630 ng/ml of amphetamine observed in Gwynns Run at Carroll Park is extremely high. At that concentration, only 7.94 liters of stream water contain an effective dose of AMPH (5 mg). At 1000 ng/ml, 5.0 liters of water contain an effective dose of AMPH.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 31, Daniela Drandi commented:

Dr. Kumar and colleagues gave a comprehensive description of the role of the older (MFC and ASO-qPCR) and the new high-throughput (NGF and NGS) MRD techniques in MM. However, in the “molecular methods for MRD detection” section, the authors briefly refer to our original work (Drandi D et al. J Mol Diagn. 2015;17(6):652-60) in a way that misinterprets our findings. In fact, in their review the authors concluded that ddPCR is a “less applicable and more labor intensive” method compared to qPCR. This statement is in contrast to what was observed in our original work, where the comparison between qPCR and ddPCR showed that: 1) ddPCR has sensitivity, accuracy, and reproducibility comparable to qPCR; 2) ddPCR allows the standard curve issue to be bypassed, ensuring the quantification of samples with low tumor invasion at baseline or lacking MFC data; and 3) ddPCR offers a substantial benefit in terms of reduced costs, labor intensiveness and waste of precious tissue (see Drandi D et al., supplemental table S3). Notably, according to these findings, a standardization process is currently ongoing, both in the European (ESLHO-EuroMRD group) and in the Italian (Italian Lymphoma Foundation (FIL)-MRD Network) context. We agree that ddPCR does not overcome all the limitations of qPCR, including the need, in IGH-based MRD, for patient-specific ASO primers. However, as we showed, ddPCR is a feasible and attractive alternative method for MRD detection, especially in terms of applicability and labor intensiveness.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 24, Jordan Anaya commented:

      I think readers of this article will be interested in a comment I posted at F1000Research, which reads:

      I would like to clarify and/or raise some issues with this article and accompanying comments.

      One: Reviewers Prachee Avasthi and Cynthia Wolberger both emphasized the importance of being able to sort by date, and in response the article was edited to say: "Currently, the search.bioPreprint default search results are ordered by relevance without any option to re-sort by date. The authors are aware of the pressing need for this added feature and if possible will incorporate it into the next version of the search tool."

      However, it has been nearly a year and this feature has not been added.

      Two: The article states: "Until the creation of search.bioPreprint there has been no simple and efficient way to identify biomedical research published in a preprint format..."

      This is simply not true as Google Scholar indexes preprints. This was pointed out by Prachee Avasthi and in response the authors edited the text to include an incorrect method for finding preprints with Google Scholar. In a previous comment I pointed out how to correctly search for preprints with Google Scholar, and it appears the authors read the comment given they utilize the method at this page on their site: http://www.hsls.pitt.edu/gspreprints

      Three: In his comment the author states: "We want to stress that the 'Sort by date' feature offered by Google Scholar (GS) is abysmal. It drastically drops the number of retrieved articles compared to the default search results."

      This feature of Google Scholar is indeed limited, as it restricts the results to articles which were published in the past year. However, if the goal is to find recent preprints then this limitation shouldn't be a problem and I don't know that I would classify the feature as "abysmal".

      Four: The article states: "As new preprint servers are introduced, search.bioPreprint will incorporate them and continue to provide a simple solution for finding preprint articles."

      New preprint servers have been introduced, such as preprints.org and Wellcome Open Research, but search.biopreprint has not incorporated them.

      Five: Prachee Avasthi pointed out that the search.biopreprint search engine cannot find this F1000Research article about search.biopreprint. It only finds the bioRxiv version. In response the author stated: "The Health Sciences Library System’s quality check team has investigated this issue and is working on a solution. We anticipate a quick fix of this problem."

      This problem has not been fixed.

      Competing Interests: I made and operate http://www.prepubmed.org/, which is another tool for searching for preprints.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, Cicely Saunders Institute Journal Club commented:

      This paper was discussed on 12 May 2017 by the MSc students in Palliative Care at the KCL Cicely Saunders Institute.

      The study, which we read with great interest, is a retrospective cohort study examining the association between palliative homecare services and the number of emergency department visits (both high and low acuity). Previous studies have shown that palliative homecare services help reduce patients’ subsequent visits to the emergency department. Therefore, in this study the authors tested the hypothesis that life-threatening visits could be reduced with the introduction of palliative homecare services and education in treating high-acuity symptoms at home.

      The study used data from the Ontario Cancer Registry, including a large number of patients (54,743). The study showed that palliative homecare services could reduce the emergency department visit rate in both high- and low-acuity groups, which could be considered a benefit of palliative homecare services. However, more information on the definition and delivery of palliative homecare services would allow a better understanding of the generalizability of this finding. The authors used the Canadian Triage and Acuity Scale national guidelines as the classification, but we would have liked more information on the triage system and the allocation of patients according to their symptoms. For example, sore throat, malaise and fatigue are subjective symptoms that are less commonly classified as requiring emergency care or resuscitation, but in the study these were allocated to both acuity levels (high and low). We considered that this classification might affect the results significantly; therefore, we would have appreciated further explanation.

      Ka Meng Ao, Ming Yuang Huang, Pamela Turrillas, Myongjin Agnes Cho


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 13, Richard Jahan-Tigh commented:

      Might just be a case of Grover's disease? Good place for it clinically and in the right age group.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 10, Gerry Stimson commented:

      In addition to the comments made by Clive Bates about the limitations of the study, a further fault is that the research measured e-cigarette use, and did not establish whether the e-cigarettes actually contained nicotine. As the paper reports, 'Students were selected as ever e-cigarette users if they responded “yes” to the question “have you ever tried an e-cigarette”'. But the majority of youth vapers in the US do NOT use nicotine-containing e-cigarettes. The Monitoring the Future study reported that about 60% of youth vapers use e-cigarettes without nicotine. Lax scrutiny by the editor and reviewers means that this crucial issue is overlooked - indeed the article authors do not appear to have identified that this is a limitation. This further undermines the rather facile policy recommendations to limit e-cigarette availability to young people.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 10, Clive Bates commented:

      The authors appear to have discovered that people want to use e-cigarettes instead of smoking. For anyone taking a public health perspective, that is a positive development, given that e-cigarettes are likely to be at least 95% lower risk than smoking and that nearly all e-cigarette users are currently (or otherwise would be) smokers.

      The authors' policy proposals as stated in their conclusion do not follow from the observations they have made. The paper is insufficiently broad to draw any policy conclusions as it does not consider the interactions between vaping behaviour and smoking behaviour or wider effects on adult or adolescent welfare from increasing costs or reducing access. The paper does not give any insights into the effectiveness, costs, and risks of the proposed policies, so the authors have no foundation on which to make such recommendations.

      The authors appear to be unaware of the potential for unintended consequences arising from their ideas. For example, raising the cost of e-cigarettes may cause existing users to relapse to smoking or reduce the incentive to switch from smoking to vaping. They believe their policies will "be important for preventing continued use in youth", but the reaction may not be the one they want - complete abstinence. It may be a continuation of, or return to, smoking.

      Finally, editors and peer reviewers should be much firmer in disallowing policy recommendations based on completely inadequate reasoning and, in this case, on a misinterpretation of their own data in which they mischaracterize a benefit as a detriment and an opportunity as a threat.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 14, Haider Al-Darraji commented:

      Figure 1 doesn't seem to reflect its provided legend! It is the Andersen framework rather than the research sites.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 05, Peter H. Asdahl commented:

      I read with interest the study by Villani et al., who reported findings of a cancer surveillance program among TP53 mutation carriers. The authors analysed data from 89 TP53 mutation carriers diagnosed and followed at three tertiary cancer care centres in North America. Villani et al. concluded that their surveillance program is feasible, detects early tumour stages, and confers a sustained survival benefit. I emphasize a more conservative interpretation of the results because of biases common to observational studies of screening effects.

      The primary outcome of the study was incident cancers. If the surveillance and non-surveillance groups were exchangeable at baseline (i.e. had similar distribution of known and unknown factors that affect cancer incidence), we would expect either higher frequency or earlier detection of cancer in the surveillance group because of differential detection attributable to systematic cancer surveillance. The results reported by Villani et al. are counterintuitive: 49% and 88% of individuals in the surveillance and non-surveillance groups, respectively, were diagnosed with at least one incident cancer (crude risk ratio for the effect of surveillance=0.56, 95% confidence limits: 0.41, 0.76. – n.b. risk time distribution is not included in the manuscript). This inverse result suggests that the groups were not exchangeable, and thus confounding is a concern. The potential for confounding is further supported by baseline imbalances in age, sex, and previous cancer diagnosis, which favour higher cancer incidence in the non-surveillance group (the P-values in Table 1 are misleading because of low power to detect differences).
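
      For readers who want to see where the crude risk ratio quoted above comes from, here is a minimal sketch. The 49% and 88% proportions are from the paper as summarised above; the per-group sample sizes are hypothetical placeholders (the comment does not give the split of the 89 carriers), so the confidence interval printed is illustrative only and will differ slightly from the published 0.41-0.76.

      ```python
      import math

      # Proportions with at least one incident cancer, as reported above:
      # 49% in the surveillance group, 88% in the non-surveillance group.
      p_surv, p_nosurv = 0.49, 0.88
      print(f"Crude risk ratio = {p_surv / p_nosurv:.2f}")  # ~0.56, as quoted

      # Hypothetical group sizes (NOT given in the comment; chosen only so the two
      # groups sum to the 89 carriers) to illustrate a Wald CI for the log risk ratio.
      n_surv, n_nosurv = 40, 49
      a = round(p_surv * n_surv)      # cases in the surveillance group
      b = round(p_nosurv * n_nosurv)  # cases in the non-surveillance group
      rr = (a / n_surv) / (b / n_nosurv)
      se = math.sqrt(1/a - 1/n_surv + 1/b - 1/n_nosurv)
      lo, hi = (rr * math.exp(z * 1.96 * se) for z in (-1, 1))
      print(f"RR = {rr:.2f}, illustrative 95% CI ({lo:.2f}, {hi:.2f})")
      ```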

      The baseline imbalance between groups is exacerbated by lead time and length time biases when comparing survival. Detection of asymptomatic tumours by surveillance invariably adds survival time to the surveillance group and gives a spurious indication of improved survival. (Croswell JM, 2010) The effects of this bias are well-documented, and depending on the time from detection to symptom onset, the bias may be substantial. The survival analysis is further invalidated by the inclusion of non-cancerous lesions (e.g. fibroadenoma and osteochondroma, which are unlikely to affect survival) and pre-cancerous lesions (e.g. colonic adenoma and dysplastic naevus). Such lesions accounted for 40% and 16% of the incident neoplasms in the surveillance group and non-surveillance group, respectively.

      Annual MRI was included in the surveillance protocol. Many of the malignancies among TP53 mutation carriers are rapidly growing, and thus more often detected by symptoms rather than annual follow-up. For example, most medulloblastoma recurrences are detected by symptoms a median of four months after the last imaging. (Torres CF, 1994)

      In summary, the study by Villani et al. is not immune to biases common to observational studies of screening effects (Croswell JM, 2010), and the results should not be interpreted as showing a benefit of surveillance of TP53 mutation carriers, as many have uncritically done (a point well described in the accompanying editorial). The benefit, if any, of the proposed surveillance program cannot be assessed without a more rigorous study design to reduce known biases. For example, a study based on random allocation of individuals to surveillance and no surveillance would reduce the potential for confounding, assuming that randomization is successful. In addition, adjustment for lead and length time biases is recommended regardless of randomized or non-randomized study design.

      I would like to acknowledge Rohit P. Ojha, Gilles Vassal, and Henrik Hasle for their contributions to this comment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 01, Serina Stretton commented:

      We read with interest Vera-Badillo and colleagues’ recent publication entitled Honorary and Ghost Authorship in Reports of Randomised Clinical Trials in Oncology [1], which appears to show that these unethical practices are highly prevalent in oncology trial publications. While we applaud the authors for conducting original research in this field, we are concerned that the nonstandard definitions used for ghost authorship may have skewed the results.

      Vera-Badillo and colleagues assessed oncology trial publications where the trial protocol was available and contained a list of investigators. They defined ghost authorship as being present if an individual met one of the following criteria: “(i) investigators listed in the protocol were neither included as authors nor acknowledged in the article; (2) the individual who performed the statistical analyses was neither listed as an author nor acknowledged; (3) assistance of a medical writer was acknowledged in the publication.” No rationale or references were provided in support of this definition. However, while similar definitions have been used in some surveys of unethical authorship practices [2, 3], the definition provided by Vera-Badillo and colleagues is not uniformly accepted [4-7] and is not consistent with the International Committee of Medical Journal Editors (ICMJE) authorship criteria [8] or with the Council of Science Editors (CSE) definition of ghost authorship [9].

      There may be many valid reasons why participating investigators or statisticians may not be eligible for authorship of publications arising from a trial [8]. Here, we would like to respond to Vera-Badillo and colleagues’ assertion that medical writers who ARE acknowledged for their contributions are ghost authors. As specified by the ICMJE, medical writing is an example of a contribution that alone does not merit authorship and, therefore, should be disclosed as an acknowledgement. Also according to ICMJE, appropriate disclosure of medical writing assistance in the acknowledgements is not ghost authoring unless the medical writer was also involved in the generation of the research or its analysis, was responsible for the integrity of the research, or was accountable for the clinical interpretation of the findings. In their publication, Vera-Badillo and colleagues reported evidence of ghost authorship in 66% of evaluated studies. Of these, 34% had acknowledged medical writer assistance. Clearly, inclusion of declared medical writing assistance as ghost authorship has inflated the prevalence of ghost authoring reported in this study. Failure to apply standardised definitions of ghost authorship, guest (or honorary) authorship, and ghostwriting, limits the comparability of findings across studies and can mislead readers as to the true prevalence of these distinct practices [10-12].

      As recognised by the ICMJE [8], the CSE [9], and the World Association of Medical Editors [13], professional medical writers have a legitimate and valued role in assisting authors disclose findings from clinical trials in the peer-reviewed literature. Vera-Badillo and colleagues state in the discussion that medical writers either employed or funded by the pharmaceutical industry are “likely to write in a manner that meets sponsor approval”. No evidence is cited to support this claim. If sponsor approval requires accurate and robust reporting of trial results in accordance with international guidelines on reporting findings from human research [8, 14, 15], then yes, we agree. Professional medical writers employed or funded by the pharmaceutical industry routinely work within ethical guidelines and receive mandatory training on ethical publication practices [16-19]. Although medical writers may receive requests from authors or sponsors that they believe to be unethical, findings from the Global Publication Survey, conducted from November 2012 to February 2013, showed that most requests (93%) were withdrawn after the need for compliance with guidelines was made clear to the requestor [19].

      By expanding the definition of ghost authorship to include disclosed medical writing assistance, Vera-Badillo and colleagues have inflated the prevalence of ghost authorship in oncology trial publications. Such an unbalanced approach has the potential to detract from the true prevalence of ghost authorship where an individual who is deserving of authorship is hidden from the reader.

      The Global Alliance of Publication Professionals (www.gappteam.org)

      Serina Stretton, ProScribe – Envision Pharma Group, Sydney, NSW, Australia; Jackie Marchington, Caudex – McCann Complete Medical Ltd, Oxford, UK; Cindy W. Hamilton Virginia Commonwealth University School of Pharmacy, Richmond; Hamilton House Medical and Scientific Communications, Virginia Beach, VA, USA; Art Gertel, MedSciCom, LLC, Lebanon, NJ, USA

      GAPP is a group of independent individuals who volunteer their time and receive no funding (other than website hosting fees from the International Society for Medical Publication Professionals). All GAPP members have held, or do hold, leadership roles at associations representing professional medical writers (eg, AMWA, EMWA, DIA, ISMPP, ARCS), but do not speak on behalf of those organisations. GAPP members have or do provide professional medical writing services to not-for-profit and for-profit clients.

      References [1] Vera-Badillo FE et al. Eur J Cancer 2016;66:1-8. [2] Healy D, Cattell D. Br J Psychiatry 2003;183:22-7. [3] Gøtzsche PC et al. PLoS Med 2007;4:0047-52. [4] Flanagin A et al. JAMA 1998;280:222-4. [5] Jacobs A, Hamilton C. Write Stuff 2009;18:118-23. [6] Wislar JS et al. BMJ 2011;343:d6128. [7] Hamilton CW, Jacobs A. AMWA J 2012;27:115. [8] http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html; 2015 [accessed 12.09.16]. [9] http://www.councilscienceeditors.org/resource-library/editorial-policies/white-paper-on-publication-ethics/ [accessed 13.09.16]. [10] Stretton S. BMJ Open 2014;4(7):e004777. [11] Marušić A et al. PLoS One 2011;6(9):e23477. [12] Marchington J et al. J Gen Intern Med 2016;31:11. [13] http://www.wame.org/about/policy-statements#Ghost Writing; 2005 [accessed 12.09.16]. [14] WMA. JAMA 2013;310(20):2191-4. [15] Moher D et al. J Clin Epidemiol 2010;63:e1-37. [16] http://www.ismpp.org/ismpp-code-of-ethics [accessed 12.09.16]. [17] http://www.amwa.org/amwa_ethics [accessed 12.09.16]. [18] Jacobs A, Wager E. Curr Med Res Opin 2005;21(2):317-21. [19] Wager E et al. BMJ Open 2014;4(4):e004780.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 16, Amanda Capes-Davis commented:

      This paper has problems with its authentication testing, resulting in at least three misidentified cell lines (HeLa) being used as models for liver cancer.

      The Materials and Methods of the paper state that "All cell lines were regularly authenticated by morphologic observation under microscopy". This is consistent with the policy of the journal, Cancer Research, which strongly encourages authentication testing of cell lines used in its publications.

      However, morphologic observation is not a suitable method for authentication testing. Changes in morphology can be subtle and difficult to interpret; cultures can be misidentified before observation begins. To investigate the latter possibility, I examined publicly available datasets of STR genotypes to see if the cell lines listed in the paper are known to be misidentified.

      Three of the cell lines used in this paper (Bel-7402, L-02, SMMC-7721) had STR genotypes published by Bian X, 2017 and Huang Y, 2017. All three "liver" cell lines correspond to HeLa and are therefore misidentified.

      HeLa and its three misidentified derivatives were used in the majority of figures (Figures 2, 3, 5, and 6). Although the phosphorylation data appear to be unaffected, the conclusions regarding liver cancer metastasis must be re-examined.

      What can we learn to improve the validity of our research publications?

      For authors and reviewers:

      For journal editors and funding bodies:

      • Encouragement of authentication testing is a step forward, but is insufficient to stop use of misidentified cell lines.
      • Mandatory testing using an accepted method is effective (Fusenig NE, 2017) and would have detected and avoided this problem prior to publication.
      • Policy on authentication testing requires oversight and ongoing review in light of such examples. This is important for NIH and other funding bodies who have recently implemented authentication of key resources as part of grant applications.

      I am grateful to Rebecca Schweppe, Christopher Korch, Douglas Kniss, and Roland Nardone for their input to this comment and much helpful discussion.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 30, Paul Brookes commented:

      I submitted a response to this opinion piece to the journal (Circ. Res.), but unfortunately was informed that they do not accept or publish correspondence related to this type of article. So, here's my unpublished letter, which raises a number of issues with the article...

      A recent Circ. Res. viewpoint Loscalzo J, 2016 discussed the complex relationships between redox biology and metabolism in the setting of hypoxia, with an emphasis on the use of biochemically correct terminology. While there is broad agreement that the field of redox biology is often confounded by use of inappropriate methods and language Kalyanaraman B, 2012,Forman HJ, 2015, concern is raised regarding some ideas on reductive stress in the latter part of the article.

      In discussing the fate of glycolytically-derived NADH in hypoxia, the reader is urged to “Remember that while redirecting glucose metabolism to glycolysis decreases NADH production by the TCA cycle and decreases leaky electron transport chain flux, glycolysis continues to produce NADH". First, glucose undergoes glycolysis regardless of cellular oxygenation status; this simply happens at a faster rate in hypoxia. As such, glucose is not redirected but rather its product pyruvate is. Second, regardless of a proposed lower rate of NADH generation by the TCA cycle (which may not actually be the case Chouchani ET, 2014,Hochachka PW, 1975), NADH still accumulates in hypoxic mitochondria because its major consumer, the O2-dependent respiratory chain, is inhibited. It is clear that both NADH consumers and producers can determine the NADH/NAD+ ratio, and in hypoxia the consumption side of the equation cannot be forgotten.

      While the field is in broad agreement that NADH accumulates in hypoxia, the piece goes on to claim that “How the cell handles this mounting pool of reducing equivalents remained enigmatic until recently.” This is misleading. The defining characteristic of hypoxia, one that has dominated the literature in the nearly 90 years since Warburg's seminal work Warburg O, 1927, is the generation of lactate by lactate dehydrogenase (LDH), a key NADH consuming reaction that permits glycolysis to continue. Lactate is “How cells handle the mounting pool of reducing equivalents.”

      Without mentioning lactate, an alternate fate for hypoxic NADH is proposed, based on the recent discovery that both LDH and malate dehydrogenase (MDH) can use NADH to drive the reduction of 2-oxoglutarate (α-ketoglutarate, α-KG) to the L(S)-enantiomer of 2-hydroxyglutarate (L-2-HG) under hypoxic conditions Oldham WM, 2015,Intlekofer AM, 2015. We also found elevated 2-HG in the ischemic preconditioned heart Nadtochiy SM, 2015, and recently reported that acidic pH – a common feature of hypoxia – can promote 2-HG generation by LDH and MDH Nadtochiy SM, 2016.

      While there can be little doubt that the discovery of hypoxic L-2-HG accumulation is an important milestone in understanding hypoxic metabolism and signaling, the claim that L-2-HG is “a reservoir for reducing equivalents and buffers NADH/NAD+” is troublesome on several counts. From a quantitative standpoint, we reported the canonical activities of LDH (pyruvate + NADH --> lactate + NAD+) and of MDH (oxaloacetate + NADH --> malate + NAD+) are at least 3-orders of magnitude greater than the rates at which these enzymes can reduce α-KG to L-2-HG Nadtochiy SM, 2016. This is in agreement with an earlier study reporting a catalytic efficiency ratio of 10<sup>7</sup> for the canonical vs. L-2-HG generating activities of MDH Rzem R, 2007. Given these constraints, we consider it unlikely that the generation of L-2-HG by these enzymes is a quantitatively important NADH sink, compared to their native reactions. It is also misleading to refer to the α-KG --> L-2-HG reaction as a "reservoir for reducing equivalents", because even though this reaction consumes NADH, it is not clear whether the reverse reaction regenerates NADH. Specifically, the metabolite rescue enzyme L-2-HG-dehydrogenase uses an FAD electron acceptor and is not known to consume NAD+ Nadtochiy SM, 2016,Rzem R, 2007,Weil-Malherbe H, 1937.

      Another potentially important sink for reducing equivalents in hypoxia that was not mentioned is succinate. During hypoxia, NADH oxidation by mitochondrial complex I can drive the reversal of complex II (succinate dehydrogenase) to reduce fumarate to succinate Chouchani ET, 2014. This redox circuit, in which fumarate replaces oxygen as an electron acceptor for respiration, was first hinted at over 50 years ago Sanadi DR, 1963. Importantly (and in contrast to L-2-HG as mentioned above), the metabolites recovered upon withdrawal from a fumarate --> succinate "electron bank" are the same as those deposited.

      Although recent attention has focused on the pathologic effects of accumulated succinate in driving ROS generation at tissue reperfusion Chouchani ET, 2014,Pell VR, 2016, the physiologic importance of hypoxic complex II reversal as a redox reservoir and as an evolutionarily-conserved survival mechanism Hochachka PW, 1975 should not be overlooked. Quantitatively, the levels of lactate and succinate accumulated during hypoxia are comparable Hochachka PW, 1975, and both are several orders of magnitude greater than reported hypoxic 2-HG levels.

      While overall the article makes a number of important points regarding reductive stress and the correct use of terminology in this field, we feel that the currently available data do not support a quantitatively significant role for L-2-HG as a hypoxic reservoir for reducing equivalents. These quantitative limitations do not diminish the potential importance of L-2-HG as a hypoxic signaling molecule Nadtochiy SM, 2016,Su X, 2016,Xu W, 2011.

      Paul S. Brookes, PhD.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 04, Cicely Saunders Institute Journal Club commented:

      This paper was discussed on 2.5.17, by students on the KCL Cicely Saunders Institute MSc in Palliative Care

      We read with interest the systematic review article by Cahill et al on the evidence for conducting palliative care family meetings.

      We congratulate the authors on their effort to include as many papers as possible by using a wide search strategy. Ultimately, only a small number of papers were relevant to this review and were included. The authors found significant heterogeneity within the various studies, in terms of the patient settings, interviewer background, and country of origin and culture. Study methods included both qualitative and quantitative designs, and a range of outcome measures, but there was a notable lack of RCT studies.

      Two studies found a benefit of family meetings using validated outcome measures. A further four found a positive outcome of family meetings, but with non-validated outcome measures. We felt that the lack of validated outcome measures does not necessarily negate the value of these findings.

      We agree with the conclusions of the authors that there is limited evidence for family meetings in the literature and that further research would be of value. The small and diverse sample size leads to the potential for a beta error (not finding a difference where one exists). We were surprised by the final statement of the abstract that family meetings should not be routinely adopted into clinical practice, and we do not feel that the data in the paper support this: the absence of a finding is not synonymous with a finding of absence. Further, our experience in three health care settings (UK, Canada, Switzerland) is that family meetings are already widely and routinely used.

      Aina Zehnder, Emma Hernandez-Wyatt, James W Tam


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 03, Alfonso Leiva commented:

      I would like to remark that this study is the first prospective cohort to analyse the association between time to diagnosis and stage. Fiona Walter et al studied the factors related to longer time to diagnosis and tried to explain the lack of association between longer time to diagnosis and stage. We have recently published an article to explain this paradox and suggest confounding by an unknown factor as a possible explanation. We have suggested that the stage when symptoms appear is the main confounder in the association between time to diagnosis and stage at diagnosis, and we propose a graphic representation for the progression of CRC from a preclinical asymptomatic stage to a clinical symptomatic stage.

      Leiva A, Esteva M, Llobera J, Macià F, Pita-Fernández S, González-Luján L, Sánchez-Calavera MA, Ramos M. Time to diagnosis and stage of symptomatic colorectal cancer determined by three different sources of information: A population based retrospective study. Cancer Epidemiol. 2017 Jan 23;47:48-55.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 17, BSH Cancer Screening, Help-Seeking and Prevention Journal Club commented:

      The HBRC discussed this paper during the journal club held on November 15th 2016. This paper fits well with research conducted within our group on early diagnosis and symptomatic presentation. We considered this paper to be a useful addition to the literature and the paper raises some interesting findings which could be investigated further.

      The study examined the factors associated with a colorectal cancer (CRC) diagnosis and time to diagnosis (TDI). These factors included symptoms, clinical factors and sociodemographic factors. Given the important role early diagnosis plays in survival from CRC, it is pertinent to investigate at which point diagnosis may be delayed. Early diagnosis of CRC can be problematic because many of the symptoms can be attributed to other health problems or may be benign. The authors acknowledge that most cases of CRC present symptomatically.

      The group was interested in the finding that less specific symptoms, such as indigestion or abdominal pain, were associated with shorter patient intervals and that specific classic symptoms, such as rectal bleeding, were associated with shorter health system intervals (HSI). So what patients might perceive to be alarm symptoms differs from the perceptions of healthcare professionals. It was also highlighted that the patient interval found in this study differed from a previous study: this study showed a median patient interval of 35 days, compared with a primary care audit conducted by Lyratzopoulos and colleagues (2015), which showed a patient interval of 19 days. It was also interesting that family history of cancer was associated with a longer HSI, given that family history is a risk factor for cancer.

      The main advantage of this study is the prospective design, with the recruitment of patients prior to their diagnosis. Patients reported their symptoms and so provided insight into what they experienced, but the group did acknowledge that this reporting was retrospective, as the symptoms were those experienced before presenting to the GP, up to 2 years before diagnosis. The group felt the authors’ use of multiple regression models was a benefit to the study, allowing an investigation into time-constant and duration-varying effects; in line with previous research, it was shown that rectal bleeding becomes normalised over time.

      We discussed limitations of the study and recognised that the authors did not acknowledge the Be Clear on Cancer Awareness Campaigns, which took place during the data collection (Jan-March 2011, Jan-March 2012, Aug-Sept 2012) and could have had an impact by shortening the patient interval and increasing referral rates. We also discussed that there could be an inherent bias in GPs and that the HSI could be due to this bias of GPs wanting to reassure patients that their symptom is likely to be the sign of something other than cancer. This could also help explain the longer time to diagnosis and HSI in those with depression and anxiety, as GPs may feel the need to over-reassure these patients, recognising that they are already anxious. However, when symptoms have been shown to be a ‘false alarm’, overreassurance and undersupport from healthcare professionals have been shown to lead patients to interpret subsequent symptoms as benign and to express concern about appearing hypochondriacal (Renzi, Whitaker and Wardle, 2015). It may also be due to healthcare professionals attributing symptoms to some of the side effects of medication for depression and anxiety, such as diarrhoea, vomiting, and constipation. The authors also suggest that healthcare professionals might not take these patients’ physical symptoms seriously. There was also a small number of CRC patients given the number of patients approached, with the authors recognising that the study is underpowered. There may also have been an overestimate of the number of bowel symptoms in non-cancer patients, which was recognised by the authors. It was also unclear, until they were mentioned at the end of the results, that the authors had conducted univariate analyses and included them in the supplementary material.

      There may also be differences in TDI depending on the type of referral (e.g. two-week wait, safety netting), and the group would have liked some more information about this. The group would also have liked to see some discussion about the median HSI being longer (58 days) than the 31 days currently recommended for diagnosis from the day of referral and the new target for 2020 of 28 days from referral to diagnosis. It would also have been useful to have some information about how many consultations patients had before being referred, as the authors state in the introduction that 1/3 of CRC patients have three or more consultations with the GP before a referral is made. It would also have been informative to have data on how long participants took to return their questionnaire, with the authors stating that most were completed within 2 weeks, but that some took up to 3 months.

      It would be interesting to look further into the factors that lead some patients to present to their GP straight away with symptoms and others to delay. Possible explanations we discussed included personality, marked individual differences in whether symptoms are perceived as serious, and external factors such as being too busy. It would also be interesting to consider whether these symptoms were mentioned by patients as an afterthought at the end of a consultation about something else, or whether this was the symptom that patients primarily presented to the doctor with.

      In conclusion, the HBRC group read the article with great interest and would encourage further studies in this area.

      Conflicts of interest: We report no conflict of interests and note that the comments produced by the group are collective and not the opinion of any one individual.

      References

      1) Lyratzopoulos G, Saunders CL, Abel GA, McPhail S, Neal RD, Wardle J, Rubin GP (2015) The relative length of the patient and the primary care interval in patients with 28 common and rarer cancers. Br J Cancer 112(Suppl 1): S35–S40.

      2) Renzi C, Whitaker KL, Wardle J. (2015) Over-reassurance and undersupport after a 'false alarm': a systematic review of the impact on subsequent cancer symptom attribution and help seeking. BMJ Open. 5(2):e007002. doi: 10.1136/bmjopen-2014-007002.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 29, Michael Goard commented:

      We thank the Janelia Neural Circuit Computation Journal Club for taking the time to review our paper. However, we wish to clarify a few of the points brought up in the review.

      1) Off-target effects of inactivation. The authors of the review correctly point out that off-target effects can spread laterally from an inactivated region, potentially complicating the interpretation of the V1/PPC inactivation experiments. We have since carried out (not yet published) electrophysiology experiments in V1 during PPC photoinactivation and find there is some suppression (though not silencing) of V1 excitatory neurons through polysynaptic effects. The suppression is moderate and the V1 neurons maintain stimulus selectivity, so it is unlikely off-target suppression in V1 is responsible for the PPC inactivation effects, but the results do need to be interpreted with some caution.

      Notably, the suppression effect is not distance-dependent; it instead appears heterogeneous and is likely dependent on connectivity, as has been recently demonstrated in other preparations (Otchy et al, Nature, 2015). Given these findings, describing off-target effects as a simple function of power and distance is likely misleading. Indeed, even focal cortical silencing is likely to have complex effects on subcortical structures in addition to the targeted region. Instead, we suggest that while photoinactivation experiments are still useful for investigating the role of a region in behavior, the results need to be interpreted carefully (e.g., as demonstrating an area as permissive rather than instructive; per Otchy et al., 2015).

      2) Silencing of ALM in addition to M2. The photoinactivation experiments were designed to discriminate between sensory, parietal, and motor contributions to the task, rather than specific regions within motor cortex. We did not intend to suggest that ALM was unaffected in our photoinactivation experiments (this is the principal reason we used the agnostic term “fMC” rather than referring to a specific region). Although the center of our window was located posterior and medial to ALM, we used a relatively large window (2 x 2.5 mm), so ALM was likely affected.

      3) Rebound activity contributing to fMC photoinactivation effects. Rebound effects are not likely to be responsible for the role of fMC during the stimulus epoch. First, our photostimulus did not cause consistent rebound excitation (e.g., Figure 8B). This is likely due to the use of continuous rather than pulsed photoinactivation (see Figure 1G in Zhao et al., Nat Methods, 2011). Second, we did run several inactivation experiments with a 100-200 ms offset ramp (as in Guo et al., 2014), and found identical results (we did not include these experiments in the publication since we did not observe rebound activity). We suspect the discrepancy with Guo et al. is due to the unilateral vs. bilateral photoinactivation (Li, Daie, et al., 2016), as the reviewers suggest.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 07, Janelia Neural Circuit Computation Journal Club commented:

      Highlight/Summary

      This is one of several recent papers investigating cortical dynamics during head-restrained behaviors in mice using mostly imaging methods. The questions posed were:

      Which brain regions are responsible for sensorimotor transformation? Which region(s) are responsible for maintaining task-relevant information in the delay period between the stimulus and response?

      These questions were definitely not answered. However, the study contains some nice cellular calcium imaging in multiple brain regions in a new type of mouse behavior.

      The behavior is a Go / No Go behavioral paradigm. The S+ and S- stimuli were drifting horizontal and vertical gratings, respectively. The mouse had to withhold licking during a delay epoch. During a subsequent response epoch the mouse responded by licking for a reward on Go trials.

      Strengths

      Perhaps the greatest strength of the paper is that activity was probed in multiple regions in the same behavior (all L2/3 neurons, using two-photon calcium imaging). Activity was measured in primary visual cortex (V1), ‘posterior parietal cortex’ (PPC; 2 mm posterior, 1.7 mm lateral), and fMC. ‘fMC' overlaps sMO in the Allen Reference Atlas, posterior and medial to ALM (distance approximately 1 mm) (Li/Daie et al 2016). This location is analogous to rat 'frontal orienting field’ (Erlich et al 2011) or M2 (Murakami et al 2014). Folks who work on whiskers refer to this area as vibrissal M1, because it corresponds to the part of motor cortex with the lowest threshold for whisker movements.

      In V1, a large fraction (> 50 %) of neurons were active and selective during the sample epoch. One of the more interesting findings is that a substantial fraction of V1 neurons were suppressed during the delay epoch. This could be a mechanism to reduce ‘sensory gain’ and ’distractions' during movement preparation. Interestingly, PPC neurons were task-selective during the sample or response epochs; consistent with previous work in primates (many studies in parietal areas) and rats (Raposo et al 2014), individual neurons multiplexed sensory and movement selectivity. However, there was little activity / selectivity during the delay epoch. This suggests that their sequence-like dynamics in maze tasks (e.g. Harvey et al 2012) might reflect ongoing sensory input and movement in the maze tasks, rather than more cognitive variables. fMC neurons were active and selective during the delay and response epoch, consistent with a role in movement planning and motor control, again consistent with many prior studies in primates, rats (Erlich et al 2011), and mice (Guo/Li et al 2014).

      Weaknesses

      Delayed response or movement tasks have been used for more than forty years to study memory-guided movements and motor preparation. Typically different stimuli predict different movement directions (e.g. saccades, arm movements or lick directions). Previous experiments have shown that activity during the delay epoch predicts specific movements, long before the movement. In this study, Go and No Go trials are fundamentally asymmetric and it is unclear how this behavioral paradigm relates to the literature on movement preparation. What does selectivity during the delay epoch mean? On No Go trials a smart mouse would simply ignore the events post stimulus presentation, making delay activity difficult to interpret.

      The behavioral design also makes the interpretation of the inactivation experiments suspect. The paper includes an analysis of behavior with bilateral photoinhibition (Figure 9). The authors argue for several take-home messages (‘we were able to determine the necessity of sensory, association, and frontal motor cortical regions during each epoch (stimulus, delay, response) of a memory-guided task.'); all of these conclusions come with major caveats.

      1.) Inactivation of both V1 and PPC during the sample epoch abolishes behavior, caused by an increase in false alarm rate and decrease in hit rate (Fig. 9d). The problem is that the optogenetic protocol silenced a large fraction of the brain. The methods are unlikely to have the spatial resolution to specifically inactivate V1 vs PPC. The authors evenly illuminated a 2 mm diameter window with 6.5 mW/mm<sup>2</sup> light in VGat-ChR2 mice. This amounts to 20 mW laser power (a numerical check of this arithmetic is sketched after point 3 below). According to the calibrations performed by Guo / Li et al (2014) in the same type of transgenic mice, this predicts substantial silencing over a radius (!) of 2-3 mm (Guo / Li et al 2014; Figure 2). Photoinhibiting V1 will therefore silence PPC and vice versa. It is therefore expected that silencing V1 and silencing PPC have similar behavioral effects.

      2.) Silencing during the response window abolished the behavioral response (licking). Other labs have also observed total suppression of voluntary licking with frontal bilateral inactivation (e.g. Komiyama et al 2010; and unpublished). However, the proximal cause of the behavioral effect is likely silencing of ALM, which is anterior and lateral to ‘fMC’. ALM projects to premotor structures related to licking. Low-intensity activation of ALM, but not of more medial and posterior structures such as fMC, triggers rhythmic licking (Li et al 2015). The large photostimulus used here would have silenced ALM as well as fMC.

      3.) Somewhat surprisingly, behavior is perturbed after silencing fMC during the sample (stimulus) and delay epochs. In Guo / Li et al 2014, unilateral silencing of frontal cortex during the sample epoch (in this case ALM during a tactile decision task, 2AFC type) did not cause a behavioral effect (although bilateral silencing is likely different; see Li / Daie et al 2016). The behavioral effect in Goard et al 2016 may not be caused by the silencing itself, but by the subsequent rebound activity (an overshoot after silencing; see for example Guo JZ et al eLife 2016; Figure 4—figure supplement 2). Rebound activity is difficult to avoid, but can be minimized by gradually ramping down the photostimulus, a strategy that was not used here. The key indication that rebound was a problem is that behavior degrades almost exclusively via an increase in false alarm rate -- in other words, mice now always lick independent of trial type. Increased activity in ‘fMC’, as expected with rebound, is expected to promote these false alarms. More experiments are needed to make the inactivation experiments solid.
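
      A quick numerical check of the illumination figures in point 1 (a 2 mm diameter window at 6.5 mW/mm<sup>2</sup>, which works out to roughly 20 mW of total laser power) is sketched below; this is only the area-times-irradiance arithmetic and says nothing about how far the silencing spreads.

      ```python
      import math

      # Quick check of the illumination arithmetic in point 1:
      # a 2 mm diameter window evenly illuminated at 6.5 mW/mm^2.
      diameter_mm = 2.0
      irradiance_mw_per_mm2 = 6.5

      area_mm2 = math.pi * (diameter_mm / 2) ** 2          # ~3.14 mm^2
      total_power_mw = area_mm2 * irradiance_mw_per_mm2    # ~20.4 mW
      print(f"Total laser power ~ {total_power_mw:.1f} mW")
      ```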


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 18, Jen Herman commented:

      I would like to offer a possible alternative interpretation to explain the gain of interaction variants we identified for both SirA and DnaA that we did not conceive of at the time of publication.

      In the gain of interaction screen (bacterial two-hybrid - B2H) we obtained the surprising result that none of the variants we identified (either in SirA or DnaA) occurred near the known SirA-DnaA interaction interface. The DnaA gain of interaction substitutions occurred primarily in the region of DnaA important for DnaA oligomerization (Domain III). If these variants are defective for DnaA self interaction, then they might also be more available to interact with SirA in the B2H.

      If SirA, like DnaA, is also capable of forming higher order oligomers (at least at the higher copy numbers likely present in the B2H), then it is also conceivable that the gain of interaction variants we identified within SirA are also defective in this form of self-interaction. One piece of data to suggest this hypothesis might be correct is that truncating several amino acids from SirA's C-terminus (including the critical P141T residue) increases SirA solubility following overexpression. Previously, we and others were unable to identify conditions to solubilize any overexpressed wild-type SirA. Of course, this could simply be due to a propensity of SirA to form aggregates/inclusion bodies; however, another possibility is that SirA has an intrinsic tendency to oligomerize/polymerize at high concentrations, and that SirA's C-terminal region facilitates this particular form of self-interaction.

      If any of this is true, one should be able to design B2H gain of interaction screens to identify residues that likely disrupt the suspected oligomerization of any candidate protein suspected to multimerize (as we may have inadvertently done). This could potentially be useful for identifying monomer forms that are more amenable to, for example, protein overexpression or crystallization.

      In the bigger picture, one wonders how many proteins that are "insoluble" are actually forming ordered homomers of some sort due to their chiral nature. Relatedly, would this tendency be of any biological significance or simply a consequence of not being selected against in vivo (especially for proteins present at low copy number in the cell)? (see PMID 10940245 for a very nice review related to this subject).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 01, Tanya Halliday commented:

      A letter to the editor (and response) have been published indicating failure to account for regression to the mean in the article. Thus, the conclusion regarding effectiveness of the SHE program is not supported by the data.

      See: http://www.tandfonline.com/doi/full/10.1080/08952841.2017.1407575


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 11, Sudheendra Rao commented:

      I would be glad to get more information on the PKA. What exactly was detected: total protein, the regulatory/catalytic subunit, phosphorylation status, etc.? Alternatively, just providing information on the antibodies used would also do. Thanks.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 18, Julia Romanowska commented:

      Sounds interesting, but I couldn't find an option in the R package to run on several cores - and this is an important feature when using GWAS or EWAS.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 19, Mayer Brezis commented:

      The study shows a correlation between low albumin and mortality - which makes sense and confirms previous literature. Is the relationship CAUSAL? The authors suggest causality: "Maintaining a normal serum albumin level may not only prolong patient survival but also may prevent medical complications...". Couldn't low albumin simply be A MARKER of more severe morbidity? If a higher baseline albumin BEFORE initiation of the tube feeding predicts lower mortality, how could this feeding mediate the improved survival? Are the authors suggesting that low albumin should be a consideration AGAINST tube feeding because of predicted poorer prognosis? Similarly, stable or increased albumin predicts long-term survival not necessarily because of tube feeding, but simply as a marker of healthier people who survived the gastrostomy procedure. Causality cannot be implied in the absence of a control group and with follow-up missed in a third of the patients in the study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 19, Harri Hemila commented:

      Two unpublished trials bias the conclusions on vitamin C and atrial fibrillation

      In their meta-analysis on vitamin C and postoperative atrial fibrillation (POAF), Polymeropoulos E, 2016 state that “no significant heterogeneity was observed among [nine] included studies” (p. 244). However, their meta-analysis did not include the data of 2 large US trials that found no effect of vitamin C against POAF and have thus remained unpublished. If those 2 trials are included, there is significant heterogeneity in the effects of vitamin C. Vitamin C had no effects against POAF in 5 US trials, but significantly prevented POAF in a set of 10 trials conducted outside of the USA, mainly in Iran and Greece, see Hemilä H, 2017 and Hemilä H, 2017. Although the conclusion by Polymeropoulos E, 2016 that vitamin C does have effects against POAF seems appropriate, the effect has been observed only in studies carried out in less wealthy countries.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 16, Jon Simons commented:

      Thank you for alerting us to this problem with the GingerALE software. We will look into it and, if necessary, consult the journal about whether a corrective communication might be appropriate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 09, Christopher Tench commented:

      GingerALE version 2.3.2 has a bug in the FDR algorithm that resulted in false positive results. This bug was fixed in version 2.3.3.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 29, Inge Seim commented:

      Please note that GHRL (derived from genome sequencing data) in 31 bird species, including Columba livia, was reported in late 2014 (http://www.ncbi.nlm.nih.gov/pubmed/25500363). Unfortunately, Xie and colleagues did not cite this work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 02, MA Rodríguez-Cabello commented:

      Dear Sirs, this is a mistranslation of the original text. Where this text says "prostate-specific antigen level", the original abstract says ASA classification. I apologize for the error in the translation made from the original abstract provided by Elsevier.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 28, Lydia Maniatis commented:

      Followers of the school of thought to which the authors of this article belong believe, among other odd things, in the notion that visual perception can be studied without reference to form. Thus, the reference made in the title of this paper to "regular sparse micro-patterns." There are (micro)-patterns and there are (micro)-patterns; do the present conclusions apply to any and all "regular, sparse micro-patterns?" Or only selected ones?

      Among the other beliefs of this school is the notion that different retinal projections trigger processing at different levels of the visual system, such that, for example, the activities of V1 neurons may be directly discerned in a “simple” percept. These supposed V1 (etc) signatures, of course, only apply to restricted features of a restricted set of stimuli (e.g. "grid-textures") under restricted contexts. The supposed neural behaviors and their links to perception are simple, involving largely local summation and inhibition.

      The idea that different percepts/features selectively tap different layers of visual processing is not defensible, and no serious attempt has ever been made to defend it. The problem was flagged by Teller (1984), who labeled it the “nothing mucks it up proviso” highlighting the failure to explain the role of the levels of the visual system (whose processes involved unimaginably complex feedback effects) not invoked by a particular “low-level” explanation. With stunning lack of seriousness Graham (e.g 1992, see comments in PubPeer) proposed that under certain conditions the brain becomes transparent through to the lower levels, and contemporary researchers have implicitly embraced this view. The fact is, however, that even the stimuli that are supposed to selectively tap into low-level processes (sine wave gratings/Gabor patches) produce 3D percepts with the impression of light and shadow; these facts are never addressed by devotees of the transparent brain, whose models are not interested in and certainly couldn’t handle them.

      The use of “Gabor patches” is a symptom of the other untenable assumption that “low-levels” of the visual system perform a Fourier analysis of the luminance structure of the retinal projection at each moment. There is no conceivable reason why the visual system should do this, or how, as it would not contribute to use of luminance patterns to construct a representation of the environment. There is also no evidence that it does this.

      In addition, it is also, with no evidence, asserted that the neural “signal” is “noisy.” This assumption is quite convenient, as the degree of supposed “noise” can be varied ad lib for the purposes of model-fitting. It is not clear how proponents of a “signal detecting mechanism with noise” conceive of the distinction between neural activity denoting “signal” and neural activity denoting “noise.” In order to describe the percept as the product of “signal” and “noise,” investigators have to define the “signal,” i.e. what should be contained in the percept in the absence of (purely hypothetical) “noise;” But that means that rather than observing how the visual process handles stimulation, they preordain what the percept should be, and describe (and "model") deviations as being due to “noise.”

      Furthermore, observers employed by this school are typically required to make forced, usually binary, choices, such that the form of the data will comply with model assumptions, as opposed to being complicated by what observers actually perceive, (and by the need to describe this with precision).

      Taken together, the procedures and assumptions employed by Baker and Meese (2016) and many others in the field are very convenient, insofar as “theory” at no point has to come into contact with fact or logic. It is completely bootstrapped, as follows: A model of neural function is constructed, and stimuli are selected/discovered which are amenable to an ad hoc description in terms of this model; aspects of the percepts produced by the stimulus figure (as well as percepts produced by other known figures) that are not consistent with the model are ignored, as are all the logical problems with the assumptions (many of which Teller (1984), a star of the field, tried to call attention to with no effect); the stimulus is then recursively treated as evidence for the model. Variants of the restricted set of stimulus types may produce minor inconsistencies with the models, which are then adjusted accordingly, refitted, and so on. (Here, Baker and Meese freely conclude that perception of their stimulus indicates a "mid-level" (as they conceive it) contribution). It is a perfectly self-contained system - but it isn’t science. In fact, I think it is exactly the kind of activity that Popper was trying to propose criteria for excluding from empirical science.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 08, Harald HHW Schmidt commented:

This paper is poorly validated. The detection of NOX4 relies on a poorly validated antibody; in fact, no one in the field believes that it is specific. Others have shown that siRNAs can be highly non-specific. We and others cannot detect NOX4 in macrophages. Thus the title and conclusions appear to be invalid.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 18, Daniel Weeks commented:

      Relative effect sizes of FTO variants and rs373863828 on body mass index in Samoans

      We would like to thank Dr. Janssens for making these helpful comments about the presentation and interpretation of our findings. And we welcome this opportunity to present our results more precisely and clearly.

      Regarding the suggestion that we should have compared standardized effects, there exists some literature that argues that comparison of standardized effects can be misleading (Cummings P, 2004, Cummings P, 2011). Indeed, Rothman and Greenland (1998, p. 672) recommend that "effects should be expressed in a substantively meaningful unit that is uniform across studies, not in standard-deviation units." While the argument for comparing standardized effects may be more compelling when different studies used different measurement scales, in this case, body mass index (BMI) has been measured in prior studies and our current one using a common scale.

      As recommended, we have now assessed the effect of variants on BMI in the FTO region to allow for direct comparison in our Samoan population. As Table 1 indicates, while the effects of these FTO variants are not statistically significant in our discovery sample, the estimates of the effect size of the FTO variants are similar in magnitude to previous estimates in other populations, and the non-standardized effect of the missense variant rs373863828 in CREBRF is approximately 3.75 to 4.66 times greater than the effects of the FTO variants in our discovery sample.

      We concur with the important reminder that the odds ratio overestimates the relative risk when the outcome prevalence is high.

      Thank you,

      Daniel E. Weeks and Ryan Minster on behalf of all of the co-authors.

      Department of Human Genetics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA.

      References:

      Cummings P. (2004) Meta-analysis based on standardized effects is unreliable. Arch Pediatr Adolesc Med. 158(6):595-7. PubMed PMID: 15184227.

      Cummings P. (2011) Arguments for and against standardized mean differences (effect sizes). Arch Pediatr Adolesc Med. 165(7):592-6. doi: 10.1001/archpediatrics.2011.97. PubMed PMID: 21727271.

      Rothman, K.J. and Greenland S. (1998) Modern epidemiology, second edition. Lippincott Williams & Wilkins, Philadelphia.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 03, Cecile Janssens commented:

      This study showed that a variant in CREBRF is strongly associated with body mass index (BMI) in Samoans. The authors write that this gene variant is associated with BMI with “a much larger effect size than other known common obesity risk variants, including variation in FTO.” The risk variant was also “positively associated with obesity risk” and with other obesity-related traits. For a correct interpretation of these findings, two methodological issues need to be considered.

      Much larger effect size

      The effect size of the CREBRF variant (1.36-1.45 kg/m2 per copy of the risk allele) is indeed larger than that of FTO (0.39 kg/m2 per copy), but this comparison is not valid to claim that the gene variant has a stronger effect.

      The effect size for the FTO gene comes from a pooled analysis of studies in which the average BMI of the population was below 28kg/m2 with standard deviations lower than 4kg/m2. In this study, the mean BMI was 33.5 and 32.7 kg/m2 in the discovery and replication samples and the standard deviations were higher (6.7 and 7.2 kg/m2). To claim that the CREBRF has a stronger effect than FTO, the researchers should have compared standardized effects that take into account the differences in BMI between the study populations, or they should have assessed the effect of FTO to allow for a direct comparison in the Samoan population.

It is surprising that the authors have not considered this direct comparison between the genes, given that an earlier publication had reported on the relationship between FTO and BMI in the replication datasets of this study (Karns R, 2012). That study showed no association between FTO and BMI in the smaller, but a higher effect size (0.55-0.70 kg/m2) in the larger, of the two replication samples. The effect of the CREBRF gene may still be stronger than that of the FTO gene, but the difference may not be as large as the comparison of unstandardized effect sizes between the populations suggests.
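To make the scaling issue concrete, here is a minimal, purely illustrative Python sketch (not part of the original comment) of how per-allele effects would be expressed in standard-deviation units; the SD values are drawn from the ranges quoted above and are assumptions, not the exact study figures.

```python
# Illustrative sketch only: expressing a per-allele effect on BMI in SD units.
# The SDs below are assumptions drawn from the ranges quoted in the comment
# (FTO meta-analysis populations: SD < 4 kg/m2; Samoan samples: SD 6.7-7.2 kg/m2).

def standardized_effect(beta_kg_m2: float, sd_kg_m2: float) -> float:
    """Per-allele effect divided by the SD of BMI in the study population."""
    return beta_kg_m2 / sd_kg_m2

fto = standardized_effect(0.39, 4.0)      # ~0.10 SD per allele (using the upper-bound SD)
crebrf = standardized_effect(1.40, 7.0)   # ~0.20 SD per allele (midpoints of the quoted ranges)
print(f"FTO    ~{fto:.2f} SD per allele")
print(f"CREBRF ~{crebrf:.2f} SD per allele")
# On the SD scale the ratio (~2x) is smaller than the ratio of the raw effects (~3.5x),
# which is the point the comment makes about comparing effects across populations.
```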

      Impact on obesity risk

The authors also investigated the “impact of the gene variant on the risk of obesity” and found that the odds ratio for the gene variant was 1.44 in the replication sample. This value is an odds ratio and indicates the impact on the odds of obesity, not on the risk of obesity. The difference between the two is essential here.

The value of the odds ratio is similar to the relative risk when the outcome of interest is rare. In this study, the majority of the people were obese: 55.5% and 48.8% in the discovery and replication samples had a BMI higher than 32 kg/m2. When the prevalence of the outcome is this high, the odds ratio overestimates the relative risk. When the odds ratio is 1.44, the relative risk is 1.43 when the prevalence of obesity in noncarriers is 1%, 1.32 when it is 20%, 1.22 when it is 40%, 1.18 when it is 50%, and 1.16 when 55% of the noncarriers are obese. Regarding the impact on obesity risk, the gene variant might be more ordinary than suggested.
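As a minimal sketch (not part of the original comment), the relative risks quoted above can be reproduced from the odds ratio with the standard conversion RR = OR / (1 - p0 + p0 * OR), where p0 is the prevalence of obesity among noncarriers.

```python
# Minimal sketch: reproduce the odds-ratio-to-relative-risk figures quoted above
# using the standard conversion RR = OR / (1 - p0 + p0 * OR).

def odds_ratio_to_relative_risk(odds_ratio: float, baseline_risk: float) -> float:
    """Relative risk implied by an odds ratio at a given baseline risk p0 (prevalence in noncarriers)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

OR = 1.44  # odds ratio for obesity per risk allele in the replication sample
for p0 in (0.01, 0.20, 0.40, 0.50, 0.55):
    print(f"p0 = {p0:.0%}: RR ~ {odds_ratio_to_relative_risk(OR, p0):.2f}")
# Output matches the values in the comment: 1.43, 1.32, 1.22, 1.18, 1.16
```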


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 21, David Keller commented:

      Goulden's data actually confirms that minimum mortality occurs with light-to-moderate alcohol intake

      Goulden's study concludes: "Moderate alcohol consumption is not associated with reduced all-cause mortality in older adults", a finding he admits is contrary to that of many prior studies. He bases his analysis on a new category he designates as "occasional drinkers", but he gives two differently worded definitions of an "occasional drinker" in different parts of his paper. Goulden states he intends "occasional drinkers" to consume less alcohol than "light drinkers", a standard category of drinkers who consume 1 to 6 standard drinks per week, each containing about 14 grams of ethanol. Unfortunately, both of his definitions of "occasional drinker" can include heavy binge alcohol abusers, clearly not what he intends. By ignoring this new and superfluous group of drinkers, we see that his remaining data confirms that the minimum risk for mortality is associated with light to moderate alcohol intake (the familiar J-shaped curve).

      In his reply to my letter, Goulden wrote: "Keller raises the possibility that the 'occasional drinkers' group is, in fact, a group of light drinkers who have under-reported their level of consumption."

Or worse, as we shall see. In addition, both of the definitions he gives for "occasional drinker" do not make physiological sense and are superfluous and confusing. This new category does not contribute to understanding the data, and it increases the possibility of erroneous classification of drinkers.

In the abstract, Goulden defines an "occasional drinker" as one who reports drinking "at least once, but never more than less than once a week [sic]". In the body of the paper, he defines an "occasional drinker" as one who reports drinking on at least 1 occasion, but always less than once per week. By failing to specify the amount of alcohol consumed on each occasion, Goulden's definitions classify both of the following as "occasional drinkers": a subject who drinks a glass of champagne once a year on New Year's eve; and another who drinks an entire bottle of whiskey in one sitting every 8 days. These very different kinds of drinkers are both included in Goulden's definitions of "occasional drinker", by which I think he means those who have tried alcohol at least once, but only drink less than one drink per week. That reading rules out the heavy binge drinker, is easier to understand, and thus might reduce errors when classifying drinkers.

      Now, for my main point: Look at these hazard ratios (from Table 2) for all-cause mortality with their confidence limits removed for improved visibility, and the "occasional drinker" column removed because of reasons cited above. We are left with 5 columns of data, in 3 rows, which all exhibit a minimum hazard ratio for mortality at <7 drinks per week, which increases when you shift even 1 column to the left or right:

      Drinks/week..................zero.....<7.....7-13....14-20...>20

      Fully adjusted...............1.19....1.02....1.14....1.13....1.45

      Fully adjusted, men..........1.21....1.04....1.16....1.17....1.53

      Fully adjusted, women........1.16....1.00....1.13....1.11....1.59

Note that the data in every row approximate a J-shaped curve with the minimum hazard ratio in the column labeled <7 [drinks per week], which is light drinking. The next-lowest point is in the 7-13 drinks per week column, or about 2 drinks per day, which is moderate drinking. Although in some instances the confidence intervals overlap, we still have a set of trends which are consistent with past studies, demonstrating the typical J-shaped association between daily ethanol dose and the mortality hazard ratio. Such trends would likely become statistically significant if the study power were increased enough. The bottom line is that the data tend to support, rather than contradict, the often-observed phenomenon that all-cause mortality is minimized in persons who consume mild-to-moderate amounts of alcohol.
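A minimal sketch (not part of the letter) that reproduces the three rows above and checks where each row's minimum hazard ratio falls:

```python
# Minimal sketch: confirm that each fully adjusted row of the table above has its
# minimum all-cause mortality hazard ratio in the "<7 drinks/week" column.

columns = ["zero", "<7", "7-13", "14-20", ">20"]
rows = {
    "Fully adjusted":        [1.19, 1.02, 1.14, 1.13, 1.45],
    "Fully adjusted, men":   [1.21, 1.04, 1.16, 1.17, 1.53],
    "Fully adjusted, women": [1.16, 1.00, 1.13, 1.11, 1.59],
}

for label, hazard_ratios in rows.items():
    lowest = columns[hazard_ratios.index(min(hazard_ratios))]
    print(f"{label}: minimum HR in the '{lowest}' column")
# Every row prints '<7', i.e. the lowest hazard ratio sits at light drinking,
# consistent with the J-shaped curve described in the letter.
```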


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 21, David Keller commented:

      Goulden's results confirm the J-shaped relationship of all-cause mortality with alcohol intake

      In my recent letter about Goulden's study, I pointed out that his data actually confirm the benefit of mild alcohol ingestion for reducing all-cause mortality. I supply the text of my letter below, exactly as it was published, for convenient reference. Goulden should have titled his paper, "Yet more data confirming that a "J-shaped" relationship exists between the amount of alcohol consumed daily and the risk of all-cause mortality." My detailed rebuttal of Goulden's reply is at the following URL:

      http://www.ncbi.nlm.nih.gov/pubmed/27453387#cm27453387_26107

      Here is the text of my letter:

      "Goulden's conclusion that moderate alcohol consumption is not associated with reduced all-cause mortality in older adults conflicts with the findings of other studies, which he attributes mainly to residual confounding and bias. However, Goulden's own Table 2 indicates that regular drinkers who consume less than 7 drinks per week (whom I shall call “light drinkers”) actually do exhibit the lowest average mortality hazard ratio (HR), compared with nondrinkers or heavy drinkers (>21 drinks per week), even when fully adjusted by Goulden, for all 11 categories of subjects, based on age, sex, health, socioeconomic, and functional status.

      "Likewise, for those who consume 7 to 14 drinks per week (“moderate drinkers”), Table 2 reveals that their average mortality HR is less than that of nondrinkers or heavy drinkers, with only 1 outlying category (of 11 categories). This outlier data point is for subjects aged less than 60 years, which may be explained by the fact that the ratio of noncardiovascular mortality (particularly automobile accidents) to cardiovascular mortality is highest in this youngest age category. Thus, the trends exhibited by Goulden's average data are consistent with the previously reported J-shaped beneficial relationship between light-to-moderate ethanol ingestion and mortality, with the single exception explained above.

      "Goulden defines a new category, “occasional drinkers” as those who “report drinking at least once, but never more than ‘less than once per week,’” and assigns them the mortality HR of 1.00. Because occasional drinkers consume alcohol in amounts greater than nondrinkers, but less than light drinkers, their mortality should be between that of nondrinkers and that of light drinkers.

      "However, for 5 of the 11 categories of subjects analyzed, the mortality HR for occasional drinkers is less than or equal to that of light drinkers. This may be due to subjects who miscategorize their light alcohol intake as occasional. The effects of this error are magnified because of the small amounts of alcohol involved, and thereby obscure the J-shaped curve relating alcohol intake and benefit.

      "The only way to determine with certainty what effect ethanol ingestion has on cardiovascular and total mortality is to conduct a randomized, controlled trial, which is long overdue."

      Reference

      1: Goulden, R. Moderate alcohol consumption is not associated with reduced all-cause mortality. Am J Med. 2016; 129: 180–186.e4


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 11, Clive Bates commented:

      I should add that the authors are writing about the FDA ("FDA" appears 70 times in the paper), yet this work is funded in part by the FDA Center for Tobacco Products. It's no surprise then that it takes a wholly uncritical approach to FDA's system for consumer risk information. Somehow they managed to state:

      The authors of this manuscript have no conflicts of interest to declare.

      While the funding is made clear, the failure to acknowledge the COI is telling - perhaps a 'white hat bias'?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 11, Clive Bates commented:

      It may not be what the authors intended, but the paper offers a troubling insight into the ingrained pedantry of a regulatory bureaucracy and how this can obscure the truth and cause harm. The authors have approached their work without considering the wider implications for health of FDA's system for assessing risk communication.

      The core weakness in the paper is the assumption that nothing can be true unless the FDA says it is, and that the FDA has a way of efficiently establishing what is true. No evidence supports either contention.

      Some observations.

1. The paper was published before FDA had deemed e-cigs to be within its jurisdiction, so the retailers involved were free to make true and non-misleading claims - if they did that, they broke no laws.

2. Some of the vendors' claims documented in the paper are reasonable and true and some would benefit from more nuanced language - all are broadly communicating substantially lower risk. This is, beyond any reasonable doubt, factually correct. The makers of these products are trying to persuade people to take on much lower risk products than cigarettes, but the authors appear to believe this should be prevented. This is indistinguishable from a regulatory protection for the cigarette trade, with all that implies.

3. If public bodies like the CDC and FDA in the U.S. had been candid about these products from the outset instead of creating fear and confusion, vendors would not need to make claims or could quote them as reliable authorities. However, they have not done this in the way that their English equivalents have: see, for example, the Royal College of Physicians [1] and Public Health England [2]. These bodies have assessed the evidence and made estimates aiming to help consumers gain a realistic appreciation of the relative risk of smoking and vaping. They estimate that e-cigarette use, while not guaranteed entirely safe, is likely to be at least 95% lower risk than smoking.

      4. This contrasts with the FDA route to providing consumers with appropriate risk information - the Modified Risk Tobacco Product (MRTP) application. This approach already appears dysfunctional. It is now two years since Swedish Match filed a 130,000-page application to make a claim for snus (a form of smokeless tobacco) that is so obviously true it does not even justify the wear and tear on a rubber stamp: WARNING: No tobacco product is safe, but this product presents substantially lower risks to health than cigarettes. If a snus vendor cannot say that, then no claim is possible under this system.

      5. In contrast with its reluctance to allow manufacturers to state the obvious, FDA does not subject its own claims or risk communications to the public health test that it requires of manufacturers or vendors. FDA intends to require the packaging of e-cigarettes to carry the following: WARNING: This product contains nicotine. Nicotine is an addictive chemical. But how does it know this will not deter smokers from switching and therefore continuing to smoke? How does it know that it is not misleading consumers by the absence of realistic information on relative risk?

6. FDA (and the authors) take no responsibility for, and show no interest in, the huge misalignment between consumers' risk perceptions and expert judgement on the relative risks of smoking and vaping. Only 5.3% of Americans correctly say vaping is much less harmful than smoking, while 37.5% say it is just as harmful or more harmful [3] - a view no experts anywhere would support. By allowing these misperceptions to flourish, they are in effect indifferent to the likely harms arising from maintaining that smoking and vaping are of equivalent risk unless FDA says otherwise.

7. It is a perverse system that requires tobacco or e-cigarette companies to go through a heavily burdensome and expensive MRTP process before the consumer can be provided with truthful information about risks. Why should the commercial judgements of nicotine or tobacco companies on the value of going through this process determine what the consumer is told? For most companies, the cost and burden of the process will simply be too great to guarantee a return through additional sales, so no applications will be made and consumers will be left in the dark.

8. The FDA's restriction on communicating true and non-misleading information to consumers is part of the Nicopure Labs v FDA case - the challenge is made under the Constitutional First Amendment protection of free speech. The authors should not assume that the FDA is acting lawfully, and the burden of proof should fall on FDA (and the authors) to show that a vendor's claim is false or misleading.

      To conclude, the authors should return to the basic purpose of regulation, which is to protect health. They should then look carefully at how the legislation and its institutional implementation serve or defeat that purpose. If they did that, they would worry more about the barrier the FDA creates to consumer understanding and informed choice and less about the e-cigarette vendors' efforts, albeit imperfect, to inform consumers about the fundamental and evidence-based advantages of their products.

      [1] Royal College of Physicians, Nicotine with smoke: tobacco harm reduction. April 2016 [link]

      [2] Public Health England, E-cigarettes: an evidence update. August 2015 [Link]

[3] Risk perception data from the National Cancer Institute HINTS survey, 2015.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 09, Christopher Tench commented:

The version of GingerALE used (2.0) had a bug that resulted in false positive results. This bug was fixed in version 2.3.3.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 22, Holger Schunemann commented:

      Error in author listing; the correct citation for this article is http://www.bmj.com/content/354/bmj.i3507: BMJ. 2016 Jul 20;354:i3507. doi: 10.1136/bmj.i3507. When and how to update systematic reviews: consensus and checklist. Garner P, Hopewell S, Chandler J, MacLehose H, Akl EA, Beyene J, Chang S, Churchill R, Dearness K, Guyatt G, Lefebvre C, Liles B, Marshall R, Martínez García L, Mavergames C, Nasser M, Qaseem A, Sampson M, Soares-Weiser K, Takwoingi Y, Thabane L, Trivella M, Tugwell P, Welsh E, Wilson EC, Schünemann HJ


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 22, Stephen Tucker commented:

This version corrects labels missing from Fig 1 and Fig 5 of the original article.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 24, Maruti Singh commented:

Thanks for your interest. Actually, caustic soda is a common household powder used for washing clothes, especially whites. It is very cheap and usually easily available in villages in India. It is not used for birth control. The reason it was used here was to control the PPH following delivery, which may have been due to a tear in the vagina or cervix. Packs soaked in caustic soda are often used by untrained birth attendants to control PPH and to cauterise any vaginal or cervical tear. However, the patient was not very clear on what had happened, except that she was bleeding post delivery.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 24, Judy Slome Cohain commented:

      Would the authors please comment on the reasons for packing caustic soda in the patient's vagina? Was it meant as future birth control?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 06, University of Kansas School of Nursing Journal Club commented:

Flinn, Sidney; Moore, Nicholas; Spiegel, Jennifer; Schemmel, Alisa; Caedo, Cassandra; Fox, Leana; Hill, RaeAnn; & Hinman, Jill. [Team 1: KUSON Class of 2017]

      Introduction

We chose this article because this study focuses on how a clinical learning environment can impact a nursing student’s educational experience. In our Development of a Microsystem Leader course, we have been discussing clinical microsystems, the elements present in microsystem environments, and their impact on nurses’ satisfaction. As students, our “work environment” can be considered the environment in which we learn and are exposed to clinical practice. Within the context of nursing school, we have several learning microsystem environments, including our traditional classroom as well as our clinical rotations. These separate microsystems each provide us with unique learning experiences and opportunities, where we interact with each other throughout our time as nursing students. In our classroom microsystem, we learn concepts that are applicable to clinical practice, and we are encouraged to use what we have learned and acquire competency through our clinical experience. Both environments play an important role in our nursing preparation and affect our ability to provide effective care as future nurses. An unsatisfactory clinical experience can negatively impact our learning and could ultimately determine the outcome of our nursing preparation and influence our practice in the microsystem care setting.

      Methods

This article was found using the PubMed database. The purpose of this study was to examine whether nursing students were satisfied with their clinical settings as learning environments. The study used a quantitative descriptive correlational method with a sample of 463 undergraduate nursing students from three universities in Cyprus. Data were collected from the three universities’ nursing programs using the Clinical Learning Environment, Supervision and Nurse Teacher (CLES+T) questionnaire. The CLES+T was used to measure students’ satisfaction with their clinical learning environment. It consists of 34 items classified into 5 dimensions: pedagogical atmosphere on the ward; supervisory relationship; leadership style of the ward manager; premises of nursing on the ward; and role of the nurse teacher (NT) in clinical practice. Of the total 664 students from the three universities, 463 (70.3%) completed the self-report questionnaire. Along with the questionnaire, each student was asked to complete a demographic data sheet that included information such as age, gender, education level, and the hospital and unit to which they were assigned for clinical rotation. The data were collected in the last laboratory lesson of the 2012-2013 school year. Quantitative data were derived from the questionnaires through the use of descriptive statistics (Papastavrou, Dimitriadou, Tsangari & Andreou, 2016).

      Findings

Results showed that overall, nursing students rated their clinical learning environment as “very good” and were highly satisfied (Papastavrou et al., 2016, p.9). This was well correlated with all five dimensions indicated in the CLES+T questionnaire, including overall satisfaction. The biggest difference in scores was found among the students who met with their educators or managers frequently, which is considered successful supervision. This was considered the “most influential factor in the students’ satisfaction with the learning environment” (p. 2). Students who attended private institutions were less satisfied, as were those placed in a pediatrics unit or ward. Factors associated with high satisfaction included coming from a state university, having a mentor, and having high motivation. Limitations of the study included some students’ limited amount of time spent in their clinical environment at the time of the study, and the failure to use a “mixed methodology” to compare the findings of this study with those of other similar studies (Papastavrou et al., 2016).

      Implications to Nursing Education

This study is important to nursing because our educational and clinical preparation is the starting point of how we are shaped into successful nurses. The clinical learning environment is especially important because this is where we, as students, get the opportunity to take the knowledge and skills we learned in the classroom and apply them in the patient care setting. We get to actively practice assessments, apply hands-on skills, and interact with other medical professionals while becoming empowered to participate in practice autonomously. As mentioned in the article, a well-established mentorship with the nurses on the floor and with our clinical instructors sets the groundwork for a positive experience during clinical immersion experiences (Papastavrou et al., 2016). These positive relationships and experiences can lead to a healthy workplace that allows nursing students to feel empowered enough to practice their own skills, build trust with their instructors, and ask appropriate questions when necessary (Papastavrou et al., 2016). Many of us have had unsatisfactory clinical experiences where we had a disengaged clinical instructor or a designated nurse mentor who clearly lacked the mentoring skills to guide students in the learning environment. These situations led us to feel quite dissatisfied with our clinical experience and hindered our learning. It is important for nurses and clinical faculty to be aware of how important these clinical experiences and supervisory relationships are to our preparation. Without them we would not be able to fully grasp the complexities associated with nursing practice, and we would be inadequately prepared to work as nurses. In relation to clinical microsystems, this study can be considered to focus on a positive clinical microsystem learning experience in nursing school. These experiences become the foundation of how we are formed into future frontline leaders, and the guidance and mentorship involved bring out the confidence in us. It is crucial that nursing schools establish a positive learning environment, both in the classroom and in the clinical setting, that helps nursing students build competence and development which they can carry after graduation and apply in practice (Papastavrou et al., 2016). Creating a positive learning environment in nursing school will help bring students’ positive attitudes to the workplace, where they can be part of an empowered microsystem. This article provided us with well-defined guidelines on how to empower nursing students and create a healthy learning and work environment.

      Papastavrou, E., Dimitriadou, M., Tsangari, H., & Andreou, C. (2016). Nursing students’ satisfaction of the clinical learning environment: a research study. BMC Nursing, 15(1), 44. DOI: 10.1186/s12912-016-0164-4


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, Ellen M Goudsmit commented:

      I am not persuaded that ME, as described by clinicians and researchers prior to 1988, has much to do with neurasthenia. Indeed, fatigue was not a criterion for the diagnosis of ME [1]. It presents as a more neurological disorder, e.g. muscle weakness after minimal exertion. References to CFS/ME are misleading where research used criteria for chronic fatigue or CFS, rather than ME. The assumption of equivalence has been tested and the differences are of clinical significance.

      A useful strategy to avoid post-exertion related exacerbations is pacing [2]. I missed a reference.

      1 Goudsmit, EM, Shepherd, C., Dancey, CP and Howes, S. ME: Chronic fatigue syndrome or a distinct clinical entity? Health Psychology Update, 2009, 18, 1, 26-33. http://www.bpsshop.org.uk/Health-Psychology-Update-Vol-18-No-1-2009-P797.aspx

      2 Goudsmit, EM., Jason, LA, Nijs, J and Wallman, KE. Pacing as a strategy to improve energy management in myalgic encephalomyelitis/chronic fatigue syndrome: A consensus document. Disability and Rehabilitation, 2012, 34, 13, 1140-1147. doi: 10.3109/09638288.2011.635746.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 22, Tom Kindlon commented:

      Some information on an unpublished study on pupil responses:

      Dr Bansal mentions he has observed unusual responses by the pupils to light. I thought I would highlight a study that was done in the late 1990s looking at this. Unfortunately the researcher passed away before it could be published. Perhaps there are better sources than these lay articles but I thought they might be of some use in the hope that the finding might be followed up again.


      Eye test hope for ME sufferers

      Jenny Hope

      A new eye test can 'see' changes in the brain triggered by the crippling disease ME. The advance comes from a number of research projects that could lead to better treatments for the illness once ridiculed as 'yuppie flu'.

      It gives fresh hope to an estimated 150,000 victims of chronic fatigue syndrome, which can leave those worst affected bedridden with pain, suffering short-term memory loss and unable to walk even short distances.

      Scientists at the Royal Free Hospital and the City University in London have found a way to measure changes in the eyes of ME patients which may show they lack an important brain chemical.

      A study by Dr Ian James and Professor John Barbur checked the pupils of 16 ME patients and 24 healthy individuals, using a computer to measure changes identified between the two groups.

      They found patients with chronic fatigue had larger pupils and also had a stronger reaction to light and other stimuli. The changes could be linked to a deficiency of the brain chemical serotonin, which is known to occur in ME and is also linked to depression.

      Professor John Hughes, chairman of the Chronic Fatigue Syndrome Research Foundation, said the research should make it possible to understand changes occurring in the brain of a sufferer.

      This could help those studying the effect of different drugs and possibly help doctors diagnose CFS, he added.

      At present there are no reliable tests, although a checklist of symptoms developed five years ago is being used by doctors worldwide.


      BREAKTHROUGH FOR ME by Geraint Jones

For years, ME has been treated with suspicion by doctors. Many believe that for every genuine sufferer there is another who simply believes himself to be ill. Experts cannot agree on whether the condition is a physical illness or a psychological disorder which exists only in the victim's mind. One reason for this scepticism is that, as yet, no one has been able to provide an accurate diagnosis for ME, or myalgic encephalomyelitis, which is known to affect 150,000 people in Britain. There is no known cure and treatment is often based on antidepressant drugs like Prozac, with limited success.

      All this may be about to change. Dr Ian James, consultant and reader in clinical pharmacology at London's Royal Free Hospital School of Medicine, believes that he has found a way of diagnosing the chronic fatigue syndrome and hopes to use it to develop a treatment programme. The breakthrough came after months of research spearheaded by Dr James and Professor John Barbur of London's City University. It centres round the discovery that the eyes of ME sufferers respond to light and motion stimuli in an unusual way.

      "Several doctors treating ME patients noticed that they showed an abnormal pupil response", says Dr James. "When the pupil is subjected to changes in light, or is required to alter focus from a close object to one further away, it does so by constricting and dilating. ME patients' eyes do this as well but there is an initial period of instability when the pupil fluctuates in size".

      Using a computerised "pupilometer", which precisely measures eye responses, Dr James embarked on a detailed study of this phenomenon on ME patients, using non-sufferers as a control. A variety of shapes were flashed on to a screen and moved across it, while a computer precisely measured pupil reflex to each of the 40 tests. Results confirmed that the pupil fluctuation was peculiar to those participants who suffered from ME.

      Dr James concluded that the abnormal pupil response is a result of some kind of interference in the transfer of impulses from the brain to the eye. He believes that ME is the result of a deficiency of a neuro-transmitter called 5HT, whose job it is to pass impulses through nerves to cells. The eyes of ME sufferers treated with 5HT behave normally. "I do not yet know how the ME virus causes abnormalities in 5HT transmission but it does inhibit its function", says Dr James.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 26, Darren L Dahly commented:

      Minor comment: Reference 17 is in error. It should instead point to this abstract, which was presented at the same conference. The full analysis was later published here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 23, Sin Hang Lee commented:

      Thanks for the clarification.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 21, Steven M Callister commented:

In our report, we correctly state that the 142 bp segments of our amplified products had 100% homology with B. miyamotoi. However, the reader is also correct that our analyses did not include the entire amplified 145 bp segment, since we did not include the complete primer sequences. As the reader stated, there is indeed one mismatch when the primer sequences are included. However, there is still >99% homology with the B. miyamotoi CP006647.2 sequence, so the oversight does not change the legitimacy of our conclusion. As an additional point, we have also since sequenced the entire glpQ from a human patient from our region positive by PCR for B. miyamotoi, and found 100% homology with the CP006647.2 glpQ sequence.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 15, Sin Hang Lee commented:

To the Editors: Jobe and colleagues [1] used polymerase chain reaction (PCR) to amplify a 142-bp fragment of a borrelial glycerophosphodiester phosphodiesterase (glpQ) gene in the blood samples of 7 patients. The sequences of the PCR primers were 5′-GATAATATTCCTGTTATAATGC-3′ (forward) and 5′-CACTGAGATTTAGTGATTTAAGTTC-3′ (reverse), respectively. The DNA sequence of the PCR amplicon was reported to be 100% homologous with that of the glpQ gene of Borrelia miyamotoi LB-2001 (GenBank accession no. CP006647.2) in each case. However, the database entry retrieved from GenBank accession no. CP006647.2 shows a 907293-base complete genome of B. miyamotoi which contains a 145-nucleotide segment in the glpQ gene starting with sequence GACAATATTCCTGTTATAATGC and ending with sequence GAACTTAAATCACTAAATCTCAGTG (position 248633 to 248777) matching the binding sites of the PCR primers referenced above with one-base mismatch (C) at the forward primer site. Because there is at least one base mismatch and a 3-base difference between the size of the PCR amplicon and the length of the defined DNA sequence entered in the GenBank database, the amplicon reported by the authors cannot be “100% homologous with that of B. miyamotoi LB-2001”. The authors should publish the base-calling electropherogram of the 142-bp PCR amplicon for an open review. Perhaps, they have uncovered a novel borrelial species in these 7 patients.

References

1. Jobe DA, Lovrich SD, Oldenburg DG, Kowalski TJ, Callister SM. Borrelia miyamotoi infection in patients from upper midwestern United States, 2014–2015. Emerg Infect Dis. 2016 Aug. http://dx.doi.org/10.3201/eid2208.151878

Sin Hang Lee, MD

Milford Molecular Diagnostics Laboratory, Milford, CT

Shlee01@snet.net
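To illustrate the discrepancy described in the letter above, here is a minimal Python sketch (not part of the letter) comparing the published forward primer with the corresponding CP006647.2 sequence quoted above and checking the length of the quoted segment:

```python
# Minimal sketch: compare the published forward primer with the start of the
# CP006647.2 glpQ segment quoted in the letter, and check the segment length.

forward_primer = "GATAATATTCCTGTTATAATGC"        # forward primer from Jobe et al.
genome_site    = "GACAATATTCCTGTTATAATGC"        # start of the quoted GenBank segment

mismatches = [i + 1 for i, (p, g) in enumerate(zip(forward_primer, genome_site)) if p != g]
print(f"mismatched positions (1-based): {mismatches}")   # [3] - the single-base mismatch noted above

# The quoted segment spans positions 248633-248777 inclusive:
segment_length = 248777 - 248633 + 1
print(f"GenBank segment length: {segment_length} nt")    # 145, vs. the 142-bp amplicon reported
```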


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 14, Thales Batista commented:

Previously, we had explored the N-Ratio as a potential tool to select patients for adjuvant chemoradiotherapy after a D2-gastrectomy using four consecutive statistical steps. (Arq Gastroenterol. 2013 Oct-Dec;50(4):257-63.) First, we applied the c-statistic to establish the overall prognostic accuracy of the N-Ratio for predicting survival as a continuous variable. Second, we evaluated the prognostic value of the N-Ratio in predicting survival when categorized according to clinically relevant cutoffs previously published. Third, we confirmed the categorized N-Ratio as an independent predictor of survival using multivariate analyses to control for the effect of other clinical/pathologic prognostic factors. Finally, we performed stratified survival analysis comparing survival outcomes of the treatment groups among the N-Ratio categories. Thus, we confirmed the N-Ratio as a method to improve lymph node metastasis staging in gastric cancer and suggested the cutoffs provided by Marchet et al. (Eur J Surg Oncol. 2008;34:159-65.) [i.e.: 0%, 1%~9%, 10%~25%, and >25%] as the best way for its categorization after a D2-gastrectomy. In these settings, the N-Ratio appears to be a useful tool to select patients for adjuvant chemoradiotherapy, and the benefit of adding this type of adjuvancy to D2-gastrectomy is suggested to be limited to patients with milder degrees of lymphatic spread (i.e., NR2, 10%–25%).

Recently, Fan M et al. (Br J Radiol. 2016;89(1059):20150758.) also explored the role of adjuvant chemoradiation vs chemotherapy, and found results similar to ours, namely that patients with N1-2 stage rather than those with N3 stage benefit most from additional radiation after D2 dissection. However, using data from the important RCT named the ARTIST Trial, Kim Y et al. present different results favoring the use of chemoradiotherapy after D2 gastrectomy in patients having N-Ratios >25%. These contrary findings warrant further investigation in future prospective studies, but highlight the N-Ratio as a useful tool for more tailored radiation-based therapy for gastric cancer patients. Since targeted therapies are currently focused on sophisticated molecular classifications, this approach might serve to improve patient selection for adjuvant radiotherapy based on simple and easily available clinicopathological findings.

In this setting, we would like to congratulate the authors on their interesting paper re-exploring the data from the ARTIST Trial, and also to invite other authors to re-explore their data using a similar approach.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, Judy Slome Cohain commented:

It is logical that birth centre outcomes are and will always be the same as hospital birth outcomes. Women who leave home are consciously leaving the safety of their home because they are under the misconception that the dangers of birth justify doing so. Leaving home releases higher levels of fear hormones, such as norepinephrine and ATP, and of course exposes the mother, fetus and newborn to the unfamiliar and potentially hostile bacteria of a strange environment. When we are home and the doors are locked, we are more relaxed and our unconscious brains can function better, which promotes a faster and easier birth. Being home and having lower levels of stress hormones released serves to reassure the fetus, which prevents the fetal distress detected at about 20% of hospital and birth center births. The practitioner physically holds the door key to the birth center, and the birth center is for her convenience, not the birthing woman's. If the woman had the door key and was encouraged to lock out whomever she wanted, as she does at home, that might influence outcomes at birth centers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 25, Judy Slome Cohain commented:

I agree. Holding one's newborn baby works far better than ice on the perineum and rectal area. But in midwifery it is important to NEVER SAY NEVER, because every once in a while, ice is very helpful 5 minutes after birth.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 25, Maarten Zwart commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 25, Maarten Zwart commented:

      Thanks for the comments, John!

      I agree the paired recordings you're suggesting would be interesting to do, but it's very difficult (I've not had any success) to separate excitation from inhibition in voltage clamp because of space clamp issues in these neurons. It seems interesting enough to try to get it to work, though. A more general description of the rhythm generator(s) will hopefully also come from further EM-based reconstructions.

      Most larval muscles receive input from a MN that only innervates that particular muscle ("unique innervator"), as well as a MN that innervates multiple muscles ("common innervator"). LT1-4 and LO1 are different; they only receive input from the "unique" MN, so that simplifies things. There are no known inhibitory MNs in the larva, which is an interesting quirk if this indeed holds up.

      It's not been exhaustively explored how different larval MNs compare in their intrinsic properties, but there is an interesting difference between the "unique" innervators and the "common" ones, with the latter showing a delay-to-first-spike caused by an Ia-type current (Choi, Park, and Griffith, 2004). I looked into intrinsic properties to test whether a similar delay-to-first-spike mediated the sequence. There will certainly be differences in input resistances between some MNs as they are not all the same size, but fast and slow ones have yet to be described.

      Thanks for the heads up on the PTX effect. We've seen different effects at different concentrations, with higher concentrations affecting intersegmental coordination in addition to intrasegmental coordination, and we've jotted these down to simply a more effective receptor block, but that's very interesting!

      Thanks again John!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Jul 25, John Tuthill commented:

      Do transverse and longitudinal MNs receive any synchronous or correlated excitatory input? Figure S4 shows paired recordings between aCC and LO1 and they look relatively correlated. Would be interesting to look at LO1 and LT2 pairs to see whether the inputs they share drive synchronous activation at particular phases of the fictive rhythm cycle, which might be suppressed by inhibition (from iINs) at other phases. This would provide some indication of whether there is a single “CPG” that serves as a common clock/oscillator for all the MNs within a segment. It would also have some bearing on your model that intra-segmental timing is generated by selective inhibition, rather than specificity of excitation.

      Each larval muscle is controlled by multiple MNs. These different MNs receive many, but not all presynaptic inputs in common (figure 2). How does this affect the phase relationship of MNs that innervate a common muscle? A broader question might be, in an oscillating population of MNs, how well can you predict phase relationships by quantifying the proportion of overlapping presynaptic inputs to those MNs?

      Are larval MNs divided into fast/slow neurons, as in the adult? On a related note, do all larval MNs exhibit the vanilla intrinsic properties shown in Fig 1? Do fly larvae have inhibitory MNs like many adult insects? (Interested in these questions since we are working on them in the adult).

      Technical note: @ 1 micromolar, picrotoxin only blocks GABAa receptors, not GluCl (at least in the adult CNS, see Wilson and Laurent, 2005 and Liu and Wilson, 2013).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 24, Anthony Jorm commented:

Thank you to the authors for providing the requested data. I would like to provide a further comment on the effect size for the primary outcome of their intervention, the Social Acceptance Scale. Using the pre-test and post-test means and standard deviations and the correlation between pre-test and post-test, they calculate a Cohen’s d of 0.186, which is close to Cohen’s definition of a ‘small’ effect size (d = 0.2). However, I believe this is not the appropriate method for calculating the effect size. Morris & DeShon Morris SB, 2002 have reviewed methods of calculating effect sizes from repeated measures designs. They distinguish between a ‘repeated measures effect size’ and an ‘independent groups effect size’. Koller & Stuart appear to have used the repeated measures effect size (equation 8 of Morris & DeShon). This is not wrong, but it is a different metric from that used in most meta-analyses. To allow comparison with published meta-analyses, it is necessary to use the independent groups effect size, which I calculate to give a d = 0.14 (using equation 13 of Morris & DeShon). This effect size can be compared to the results of the meta-analysis of Corrigan et al. Corrigan PW, 2012 which reported pooled results from studies of stigma reduction programs with adolescents. The mean Cohen’s d scores for ‘behavioral intentions’ (which the Social Acceptance Scale aims to measure) were 0.302 for education programs, 0.457 for in-person contact programs and 0.172 for video contact programs. I would therefore conclude that the contact-based education program reported by Koller & Stuart has a ‘less than small’ effect and that it is less than the effects seen in other contact-based and education programs for stigma reduction in adolescents.
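A minimal numerical sketch (not part of the comment) reproducing the two effect-size metrics discussed above from the reported means, SDs and pre-post correlation (pre 24.56, SD 6.71; post 23.62, SD 6.93; r = 0.73); the function names and the pooled-SD form of the independent-groups metric are illustrative choices rather than quotations from Morris & DeShon.

```python
# Minimal sketch: the two effect-size metrics discussed above, computed from the
# reported Social Acceptance Scale values. The pooled-SD form used for the
# independent-groups metric is an illustrative choice, not quoted from Morris & DeShon.
from math import sqrt

def repeated_measures_d(m_pre, m_post, sd_pre, sd_post, r):
    # change score standardized by the SD of the change scores
    sd_change = sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    return (m_pre - m_post) / sd_change

def independent_groups_d(m_pre, m_post, sd_pre, sd_post):
    # change score standardized by the pooled raw-score SD (the metric used in most meta-analyses)
    sd_pooled = sqrt((sd_pre**2 + sd_post**2) / 2)
    return (m_pre - m_post) / sd_pooled

print(round(repeated_measures_d(24.56, 23.62, 6.71, 6.93, 0.73), 2))   # ~0.19, the 0.186 quoted above
print(round(independent_groups_d(24.56, 23.62, 6.71, 6.93), 2))        # ~0.14, the value calculated above
```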


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 10, Heather Stuart commented:

We would like to thank Professor Jorm for his careful consideration of our results and his comment. As requested, we have provided the following additional data analysis.

1. Report means, standard deviations and Cohen’s d with 95% CI for the primary outcome. This will allow comparison with the results of the meta-analyses by Corrigan et al. Corrigan PW, 2012 and Griffiths et al. Griffiths KM, 2014.

Professor Jorm’s questions raise the important issues of what constitutes a meaningful outcome when conducting anti-stigma research and how much of an effect is noteworthy (statistical significance aside). We discussed these issues at length when designing the evaluation protocol and, based on the book Analysis of Pretest-Posttest Designs (Bonate, 2000), we took the approach that scale scores are not helpful for guiding program improvements. Aggregated scale scores do not identify which specific areas require improvement, whereas individual survey items do. We also considered what would be a meaningful difference to program partners (who participated actively in this discussion) and settled on the 80% (A grade) threshold as a meaningful heuristic describing the outcome of an educational intervention. Thus, we deliberately did not use the entire scale score to calculate a difference of means. Our primary outcome was the adjusted odds ratio. When we convert the odds ratio to an effect size (Chinn, 2000) we get an effect size of 0.52, reflecting a moderate effect. The mean pretest Social Acceptance score was 24.56 (SD 6.71, CI 24.34-24.75) and for the post-test it was 23.62 (SD 6.93, CI 23.40-23.83). Using these values and the correlation between the 2 scores (0.73), the resulting Cohen’s d is 0.186, reflecting a small and statistically significant effect size. It is important to point out that the mean differences reported here do not take into consideration the heterogeneity across programs, so they most likely underestimate the effect. This might explain why the effect size when using the OR (which was corrected for heterogeneity) was higher than the unadjusted mean standardized effect. Whether using a mean standardized effect size or the adjusted odds ratio, the results suggest that contact-based education is a promising practice for reducing stigma in high school students.

2. Data on the percentage of ‘positive outliers’ to compare with the ‘negative outliers’.

Because we had some regression to the mean in our data, we used the negative outliers to rule out the hypothesis that the negative changes noted could be entirely explained by this statistical artefact. We defined negative outliers as the 25th percentile minus 1.5 times the interquartile range. Outliers were 3.8% for the Stereotype Scale difference score and 2.8% for the Social Acceptance difference score, suggesting that some students actually got worse. We noted that males were more likely to be among the outliers. Our subsequent analysis of student characteristics showed that males who did not self-disclose a mental illness were less likely to achieve a passing score. This supported the idea that a small group of students may be reacting negatively to the intervention and becoming more stigmatized. While the OR alone (or the mean standardized difference) could, as Professor Jorm indicates, mask some deterioration in a subset of students, our full analysis was designed to uncover this exact phenomenon.

Professor Jorm has asked that we show the positive outliers.
If we define a positive outlier as the 75th percentile plus 1.5 times the interquartile range, then 1.9% were outliers on the Stereotype Scale difference score and 2.3% were outliers on the Social Acceptance difference score, suggesting that the intervention also resonated particularly well with a small group of students. Thus, while contact-based interventions appear to be generally effective (i.e. when using omnibus measures such as a standardized effect size or the adjusted odds ratio), our findings support the idea that effects are not uniform across all sub-sets of students (or, indeed, programs). Consequently, more nuanced approaches to anti-stigma interventions are needed, such as those that are sensitive to gender and personal disclosure, along with fidelity criteria to maximize program effects.

3. Data on changes in ‘fail grades’, i.e. whether there was any increase in those with less than 50% non-stigmatizing responses.

In response to Professor Jorm’s request for a reanalysis of students who failed, we defined a fail grade as giving a stigmatising response to at least 6 of the 11 statements (54% of the questions). At pretest, 32.8% of students ‘failed’ on the Stereotype scale, dropping to 23.7% at post-test (a decrease of 9.1%). For the Social Acceptance scale, 28.5% ‘failed’ at pretest, dropping to 24.8% at post-test (a decrease of 3.7%). Using McNemar’s test, the changes on both the Stereotype scale (X2 (1) = 148.7, p <.001) and the Social Acceptance scale (X2 (1) = 28.4, p <.001) were statistically significant, lending further support to our conclusion that the interventions were generally effective.

Bonate, P. L. (2000). Analysis of Pretest-Posttest Designs. CRC Press.

Chinn, S. (2000). A simple method for converting an odds ratio to effect size for use in meta-analysis. Statistics in Medicine, 3127-3131.
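As a minimal sketch (not part of the reply), the Chinn (2000) odds-ratio-to-d conversion and the quartile-based outlier definition referred to above look like this in code; the quartile values shown are placeholders, since the reply does not report the quartiles of the difference scores.

```python
# Minimal sketch: the odds-ratio-to-d conversion (Chinn, 2000) and the 1.5 * IQR
# outlier fences used above. The quartile values are placeholders, not study data.
from math import log, pi, sqrt

def odds_ratio_to_d(odds_ratio):
    # Chinn (2000): d = ln(OR) * sqrt(3) / pi
    return log(odds_ratio) * sqrt(3) / pi

def outlier_fences(q1, q3):
    # lower/upper cut-offs: Q1 - 1.5*IQR and Q3 + 1.5*IQR
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

print(round(odds_ratio_to_d(2.57), 2))   # ~0.52, the moderate effect size reported above
print(outlier_fences(-3.0, 3.0))         # placeholder quartiles -> (-12.0, 12.0)
```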


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Jul 20, Anthony Jorm commented:

      The authors of this study conclude that “contact-based education appears to be effective in improving students’ behavioural intentions towards people who have a mental illness”. However, it is not clear that the data on the primary outcome measure (the Social Acceptance Scale) support this conclusion. The authors measured change on this primary outcome in two ways. The first is a difference score calculated by subtracting post-test scores from pre-test scores. The second is a dichotomous grade score, with 80% non-stigmatizing responses defined as an ‘A grade’. With the difference scores, the authors do not report the means, standard deviations and an effect size measure (e.g. Cohen’s d) at pre-test and post-test, as is usually done. This makes it impossible to compare the effects to those reported in meta-analyses of the effects of stigma reduction interventions. Instead, they report the percentage of participants whose scores got worse, stayed the same or got better. It is notable that a greater percentage got worse (28.3%) than got better (19.8%), indicating that the overall effect may have been negative. The authors also report on the percentage of participants who got worse by 5 or more points (the ‘negative outliers’: 2.8%), but they do not report for comparison the percentage who got better by this amount. The dichotomous A grade scores do appear to show improvement overall, with an odds ratio of 2.57. However, this measure could mask simultaneous deterioration in the primary outcome in a subset of participants. This could be assessed by also reporting the equivalent of a ‘fail grade’. I request that the authors report the following to allow a full assessment of the effects of this intervention: 1. Means, standard deviations and Cohen’s d with 95% CI for the primary outcome. This will allow comparison with the results of the meta-analyses by Corrigan et al. Corrigan PW, 2012 and Griffiths et al. Griffiths KM, 2014. 2. Data on the percentage of ‘positive outliers’ to compare with the ‘negative outliers’. 3. Data on changes in ‘fail grades’, i.e. whether there was any increase in those with less than 50% non-stigmatizing responses.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 19, Jan Tunér commented:

The authors used 780 nm, 20 mW, 0.04 cm2, 10 seconds, 0.2 J per point, and 1.8 J per session. This is a very low energy. Energy (J) and dose (J/cm2) both have to be within the therapeutic window. By using a thin probe, a high dose can easily be reached, but the energy here is much too low in my opinion. The authors quote Kymplova (2003) as having had success with these parameters, but this is not correct. Kymplova used a multimode approach with the following light sources: a laser with a wavelength of 670 nm and power of 20 mW, with continuous alternation of frequencies of 10 Hz, 25 Hz, and 50 Hz; a polarized light source with a wavelength of 400-2,000 nm, power of 20 mW, and frequency of 100 Hz; and a monochromatic light source with a wavelength of 660 nm and power of 40 mW, with simultaneous application of a magnetic field at an induction of 8 mT.
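As a minimal sketch (not part of the comment), the energy and dose implied by the quoted parameters can be computed directly; the number of points per session is inferred from the stated 1.8 J total.

```python
# Minimal sketch: energy and dose implied by the quoted parameters
# (20 mW, 0.04 cm2 probe area, 10 s per point, 1.8 J per session).

power_w = 0.020          # 20 mW
time_s = 10.0            # irradiation time per point
spot_area_cm2 = 0.04     # probe aperture area

energy_per_point_j = power_w * time_s                         # 0.2 J per point, as stated
dose_per_point_j_per_cm2 = energy_per_point_j / spot_area_cm2 # 5 J/cm2 per point
points_per_session = 1.8 / energy_per_point_j                 # 9 points implied by 1.8 J per session

print(energy_per_point_j, dose_per_point_j_per_cm2, points_per_session)
# The small probe area yields a sizeable dose (J/cm2) per point even though the
# delivered energy (J) is low - the distinction the comment draws.
```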


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 08, Melissa Rethlefsen commented:

      I thank the authors of this Cochrane review for providing their search strategies in the document Appendix. Upon trying to reproduce the Ovid MEDLINE search strategy, we came across several errors. It is unclear whether these are transcription errors or represent actual errors in the performed search strategy, though likely the former.

      For instance, in line 39, the search is "tumour bed boost.sh.kw.ti.ab" [quotes not in original]. The correct syntax would be "tumour bed boost.sh,kw,ti,ab" [no quotes]. The same is true for line 41, where the commas are replaced with periods.

      In line 42, the search is "Breast Neoplasms /rt.sh" [quotes not in original]. It is not entirely clear what the authors meant here, but likely they meant to search the MeSH heading Breast Neoplasms with the subheading radiotherapy. If that is the case, the search should have been "Breast Neoplasms/rt" [no quotes].

      In lines 43 and 44, it appears as though the authors were trying to search for the MeSH term "Radiotherapy, Conformal" with two different subheadings, which they spell out and end with a subject heading field search (i.e., Radiotherapy, Conformal/adverse events.sh). In Ovid syntax, however, the correct search syntax would be "Radiotherapy, Conformal/ae" [no quotes] without the subheading spelled out and without the extraneous .sh.

      In line 47, there is another minor error, again with .sh being extraneously added to the search term "Radiotherapy/" [quotes not in original].

      Though these errors are minor and are highly likely to be transcription errors, when attempting to replicate this search, each of these lines produces an error in Ovid. If a searcher is unaware of how to fix these problems, the search becomes unreplicable. Because the search could not have been completed as published, it is unlikely this was actually how the search was performed; however, it is a good case study to examine how even small details matter greatly for reproducibility in search strategies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 21, Jacob H. Hanna commented:

      In 2014, Theunissen et al. Cell Stem Cell 2014 Theunissen TW, 2014 reported an absolute failure to detect human naïve PSC derived cell integration in chimeric mouse embryos obtained following micro-injection into mouse blastocysts, as was reported for the first time by our group (Gafni et al. Nature 2013). However, the authors failed to discuss that the imaging and cell detection methods applied by Theunissen et al. Cell Stem Cell 2014 were (and still are) not on par with those applied by Gafni et al. Nature 2013.

      Regardless, we find it important to alert the readers that Theunissen and Jaenisch have now revised (de facto, retracted) their previous negative results, and are able to detect naïve human PSC derived cells in more than 0.5-2% of the mouse embryos obtained (Theunissen et al. Cell Stem Cell 2016 - Figure 7) Theunissen TW, 2016 < http://www.cell.com/cell-stem-cell/fulltext/S1934-5909(16)30161-8 >. They now apply GFP and RFP fluorescence detection and PCR based assays for mitochondrial DNA, which were applied by the same group to elegantly claim contribution of human neural crest cells to mouse embryos (albeit also at low efficiency; Cohen et al. PNAS 2016 Cohen MA, 2016).

      While the authors of the latter recent paper avoided conducting advanced imaging and/or histology sectioning on the embryos thus obtained, we also note that the reported 0.5-2% efficiency is remarkable considering that the 5i/LA (or 4i/LA) naïve human cells used lack epigenetic imprinting (due to aberrant near-complete loss of DNMT1 protein that is not seen in mouse naive ESCs!! http://imgur.com/M6FeaTs ) and are chromosomally abnormal. The latter features are well-known inhibitors of chimera formation even when attempting to conduct a same-species chimera assay with mouse naïve PSCs.

      Jacob (Yaqub) Hanna M.D. Ph.D.

      Department of Molecular Genetics (Mayer Bldg. Rm.005)

      Weizmann Institute of Science | 234 Herzl St, Rehovot 7610001, Israel

      Email: jacob.hanna at weizmann.ac.il

      Lab website: http://hannalabweb.weizmann.ac.il/

      Twitter: @Jacob_Hanna


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, Judy Slome Cohain commented:

      What are the implications of this study? Has mercury intake been associated with detrimental effects? A recent review found that the benefits of diets providing moderate amounts of fish during pregnancy outweigh the potential detrimental effects of mercury with regard to offspring neurodevelopment (1). Wouldn't the benefits of rice in rural China, the staple of the diet, outweigh the detrimental effects also?

      1. Fish intake during pregnancy and foetal neurodevelopment--a systematic review of the evidence. Nutrients. 2015 Mar 18;7(3):2001-14. doi: 10.3390/nu7032001. Review.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 25, Francisco Felix commented:

      Overall survival is based on a follow-up time that is not yet long enough (7.8-year median follow-up versus a 10-year OS estimate), so the mature value will probably be somewhat lower, maybe (and this is a blind shot) near 70-75%. Nevertheless, this is a homage to all the efforts and good will of the many people devoted to bringing about better results for the treatment of these kids. At the same time, the bleak prognosis of relapsed patients reminds us all that there is so much more to do... I believe that transnational cooperative group projects have done a formidable job so far, but it is now time to move on to the next step: global open science initiatives.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 17, Stuart RAY commented:

      And now a more recent one, with overlap in topic and authorship, is reportedly being retracted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Oct 17, Stuart RAY commented:

      A paper Inbar R, 2016 with the same authors and a nearly-identical title was previously published in Vaccine and retracted by the publisher.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 16, Niels Vrang commented:

      We welcome discussion and critique, although we do not approve of the rather broad spectrum of points raised in this comment. We stand by our results, which were generated by several skilled technicians and scientists versed in the art of neuroscience, and the methods employed live up to scientific standards. Our findings are but one set of findings aimed at understanding the putative role of GLP-1 agonists in neurodegenerative disorders, and in this particular case the findings were negative. The discussion of the paper clearly recognises that others have found positive data. We believe that the best way to advance science is by reporting both positive and negative data, and we are thankful to PLOS ONE for making the reporting of negative findings possible.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 08, Christian Holscher commented:

      This is a misleading and unscientific paper. The authors measure amyloid plaque load in a London mutation APP mouse model which does not develop amyloid plaques! In the London mutation, the amyloid stays mostly inside the cells. Fig. 5 clearly shows that the brains are virtually free of plaques, yet the authors conclude that the drug failed to reduce amyloid plaque load! They should have measured total amyloid levels using the western blot technique. The memory tasks are similarly dubious. The wild type control mice are a lot lighter than the transgenic mice (Fig 1A) and they swim a lot faster in the water maze (Fig. 3A-C), which makes the interpretation of the result questionable. The reduced latency can be explained entirely by the faster swim speed. The memory tests of the APP/PS1 mice are just as unscientific. Fig. 4 shows that the saline treated APP/PS1 mice do not show a memory deficit in both tasks when compared to controls. The drug cannot improve a non-existing memory deficit! The experiment did not work and needs to be repeated. Showing these graphs in the publication is misleading. This study is deeply flawed and the conclusions are not supported by the data shown. It should be retracted.

      Prof. Christian Holscher, PhD, Lancaster University, UK


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 05, David Evans commented:

      Mendelian randomization (MR) is an epidemiological method that can be used to strengthen causal inference regarding the relationship between a modifiable environmental exposure and a medically relevant trait and to estimate the magnitude of this relationship [1]. Whilst the principles on which MR is based are relatively easy to comprehend, many scientists report finding it difficult to understand the method at first. Reviews of MR that present the methodology and concrete examples in the context of particular biomedical issues may be of considerable value in introducing the ideas to practitioners and researchers in different fields, and several have appeared [2,3].

      In a recent issue of Nature Reviews Rheumatology, Robinson et al. review some of the theory behind MR as well as application of the technique to the field of rheumatology [4]. Whilst a useful introduction for a rheumatology audience, Robinson et al.’s article contains some errors and inaccuracies in the description of the MR method that might mislead researchers who attempt to apply the approach based on this review. There are some infelicities in the description of some biological processes (e.g. the authors contend in their abstract that “alleles for genetic variants are randomly inherited at meiosis” - this is false: alleles are inherited at conception, not during meiosis), but here we have restricted ourselves to pointing out some of the issues that are directly relevant to the theory and practice of MR:

      (1) The authors seem to be confused in the choice of independent and dependent variables in the two stage least squares instrumental variables regression analyses. The correct way of describing the first stage of this procedure is that the exposure is regressed on the instrumental variable (not vice versa as the authors sometimes do in their manuscript). In addition, the second panel of Figure 2 in their paper illustrates fitted values from the first stage analysis regressed on the outcome variable. This is not correct. The second stage of the two stage least squares procedure is equivalent to regressing the outcome variable on the fitted values from the first stage regression (not vice versa). The authors also do not comment on the requirement of correcting the standard errors of the parameter estimates should the analysis be performed in this two-step fashion (although most statistics packages that implement two stage least squares regression will do this automatically for the user). We also point out that technically the authors describe a variant of a two-stage residual inclusion estimator rather than two-stage least squares [5].
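      To make the direction of the two regressions in point (1) concrete, the following minimal simulation (not taken from the reviewed article) runs the two-stage procedure as described: the exposure is regressed on the instrument in the first stage, and the outcome is regressed on the first-stage fitted values in the second stage. The naive second-stage standard errors are not valid and require the correction mentioned above; packaged IV routines apply it automatically.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      g = rng.binomial(2, 0.3, n)           # instrument: genotype (0/1/2 alleles)
      u = rng.normal(size=n)                # unmeasured confounder
      x = 0.5 * g + u + rng.normal(size=n)  # exposure, confounded by u
      y = 0.3 * x + u + rng.normal(size=n)  # outcome; true causal effect = 0.3

      def ols_slope(dep, indep):
          """Slope from regressing dep on indep (with an intercept)."""
          X = np.column_stack([np.ones(len(indep)), indep])
          return np.linalg.lstsq(X, dep, rcond=None)[0][1]

      # Stage 1: regress the EXPOSURE on the instrument and keep the fitted values.
      b_gx = ols_slope(x, g)
      x_hat = x.mean() + b_gx * (g - g.mean())

      # Stage 2: regress the OUTCOME on the fitted exposure values.
      print("2SLS estimate:", ols_slope(y, x_hat))   # ~0.3, recovers the causal effect
      print("Naive OLS estimate:", ols_slope(y, x))  # biased upward by the confounder
      ```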

      (2) The visual description of residuals in the first panel of Figure 2 is incorrect. As in ordinary least squares regression, residuals refer to the part of the dependent variable (here urate) that is not predicted by the regression. The residuals should therefore be represented by vertical double headed arrows between the individual data points and the regression line, not by horizontal double headed arrows between the data points and the regression line.

      (3) The authors claim that “The genotypic measure of exposure is simple to obtain and being objective is not subject to experimental biases (such as recall bias)”. Whilst it is true that genotypes are not subject to many of the biases common in classical epidemiology, they are still subject to possible measurement error (i.e. genotyping error, imputation uncertainty and population stratification), all of which must be borne in mind to ensure that the results of any MR analysis are robust.

      (4) The authors mistakenly claim that it is straightforward to demonstrate that a genetic variant is not related to possible confounders of the exposure-outcome association. We disagree. Whilst it is usually elementary to show that a genetic variant is associated with an exposure of interest (hence satisfying one of the core assumptions for a valid genetic instrument), demonstrating that a genetic variant is not associated with factors that confound the association between exposure and outcome is impossible [6]. The best an investigator can hope to do is to show that the putative genetic instrument is unrelated to a range of potential confounding variables [7]. If no association is found (or fewer associations than are expected by chance), then this will increase confidence that the genetic variant fulfils this core assumption, but an investigator can never prove this assumption outright, since there may still be residual/unmeasured confounders/confounding that are associated with the genetic variants but have not been tested explicitly.

      (5) The authors misunderstand the nature of two-sample MR analysis and the Wald statistic used in this procedure. The authors claim that the Wald method does not provide an estimate of the causal effect of the exposure on the outcome. This is false. The Wald method provides estimates of the causal effect of the exposure on the outcome and their standard errors [8]. The authors also claim that MR analysis requires measurement of the biological exposure. Again, this is false. Investigators can use two-sample MR on summary results data, obviating the requirement of measuring an exposure variable in their analyses, and indeed this is one of the benefits of this type of analysis [9].
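      For point (5), the Wald ratio estimator referred to here can be written down from summary statistics alone (the genetic association with the exposure and with the outcome, which may come from two different samples). The numbers below are placeholders, not values from the reviewed article, and the standard error uses the common first-order delta-method approximation that treats the SNP-exposure estimate as fixed.

      ```python
      def wald_ratio(beta_gx, beta_gy, se_gy):
          """Two-sample MR Wald estimate of the causal effect of exposure on outcome,
          with a first-order delta-method standard error."""
          beta_iv = beta_gy / beta_gx
          se_iv = se_gy / abs(beta_gx)
          return beta_iv, se_iv

      # Placeholder summary statistics: per-allele SNP-exposure effect 0.35,
      # SNP-outcome effect 0.07 with standard error 0.02.
      beta, se = wald_ratio(beta_gx=0.35, beta_gy=0.07, se_gy=0.02)
      print(f"causal estimate = {beta:.3f}, "
            f"95% CI = {beta - 1.96 * se:.3f} to {beta + 1.96 * se:.3f}")
      ```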

      (6) The authors misinterpret results obtained from the Durbin-Wu-Hausman statistic (a statistical test typically used to compare observational and instrumental variable estimates of the association between the exposure and the outcome). They incorrectly state that, in the presence of reverse causality, estimates from an MR analysis will be in the direction opposite to the observational association. This is not the case. In reality, reverse causality would result in a causal estimate of zero rather than an estimate in the opposite direction to the observational association (assuming that the genetic variant that instruments the exposure is a valid instrument). Typically a significant Durbin-Wu-Hausman statistic indicates a difference between observational and causal estimates of the exposure-outcome association and can be a result of the presence of latent confounding in the observational analysis or indeed reverse causality. The statistic makes the strong assumption that the model for the instrumental variable analysis is valid, and also often has low power.

      Finally, we note that the authors have failed to mention several recent extensions of MR methodology that allow relaxation of some aspects of the IV assumptions, providing forms of sensitivity analysis to conventional approaches [10,11]. These will become progressively more useful as the number of genetic variants known to be related to various medically relevant exposures increases. Extensive discussion of the MR methodology as well as some recent developments in sensitivity analyses are available elsewhere [9,12,13].

      David M Evans, Tom Palmer, George Davey Smith

      References

      [1] Davey Smith et al (2003). Int J Epidemiol, 32(1):1-22.

      [2] Jansen et al (2014). Eur Heart J, 35(29), 1917-24.

      [3] Sekula et al (in press). J Am Soc Nephrol.

      [4] Robinson et al (2016). Nat Rev Rheumatol, 12(8), 486-96.

      [5] Terza et al (2008). J Health Econ, 27, 531-543.

      [6] Didelez et al (2007). Stat Methods Med Res, 16:309-30.

      [7] Davey Smith et al (2007). PLOS Med, 4(12), e352.

      [8] Pierce et al (2013). Am J Epidemiol, 178:1177–84.

      [9] Davey Smith et al (2014). Hum Mol Genet, 23(R1):R89-98.

      [10] Bowden et al (2015). Int J Epidemiol, 44(2):512-25.

      [11] Bowden et al (2016). Genet Epidemiol, 40(4):304-14.

      [12] Evans et al (2015). Ann Rev Genom Hum Genet, 16:327-50.

      [13] Burgess et al (in press). Epidemiology.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 01, Melissa Rethlefsen commented:

      I thank Dr. Thombs for his responses, particularly for pointing out the location of the search strategy in the appendix of Thombs BD, 2014. I am still uncertain whether the search strategies in question were the ones used to validate whether the primary studies would be retrieved ("In addition, for all studies listed in MEDLINE, we checked whether the study would be retrieved using a previously published peer-reviewed search [9].") for two reasons: 1) The cited study (Sampson M, 2011, about the method of validation) does not include the search strategy Dr. Thombs notes below, though the strategy is cited previously when the search to identify meta-analyses meeting the inclusion criteria was discussed, and 2) The search strategy in Thombs BD, 2014 is very specific to the "Patient Health Questionnaire." Was this search strategy modified to include other instruments? Or was the Patient Health Questionnaire the only depression screening tool in this project? It appeared as though other scales were included, such as the Geriatric Depression Scale and the Hospital Anxiety and Depression Scale, hence my confusion.

      I greatly appreciate the information about the reduction in the number of citations to examine; this is indeed highly beneficial information. I am curious whether the high number of citations came from primarily the inclusion of one or more Web of Science databases? Looking at the Thombs BD, 2014 appendix, multiple databases (SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH) were searched in the Web of Science platform. Were one or more of those a big contributor to extra citations retrieved?

      Though Dr. Thombs and his colleagues make excellent points about the need to maximize resources at the expense of completeness, which I fully agree with, my concern is that studies which do post-hoc analysis of database contributions to systematic reviews lead those without information retrieval expertise to believe that searching one database is comprehensive, when in fact, the skill of the searcher is the primary factor in recall and precision. Most systematic review teams do not have librarians or information specialists, much less ones with the skill and experience of Dr. Kloda. I appreciate that Dr. Thombs acknowledges the importance of including information specialists or librarians on systematic review teams, and I agree with him that the use of previously published, validated searches is a highly promising method for reducing resource consumption in systematic reviews.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 26, Brett D Thombs commented:

      We thank Ms. Rethlefsen for her comments on our study. We agree with her about the importance of working with a skilled information specialist or librarian on all systematic reviews and that, as she notes, the quality of searches is often very poor in systematic reviews. She has correctly noted some of the limitations in our study, as we did in the study itself.

      We do not share Ms. Rethlefsen’s concern with our use of what she refers to as an “uncited search strategy in an unspecified version of MEDLINE on the Ovid SP platform.” The full peer-reviewed search strategy that we used is provided in the appendix of the systematic review protocol that we cited (1). Ms. Rethlefsen seems to criticize this approach because it “can only find 92% of the included articles, versus the 94% available in the database.” Systematic reviews are done for different purposes, and there is always a balance between resource consumption and completeness. In many cases, identifying 94% of all test accuracy evidence will be sufficient, and, in those cases, identifying 92% is not substantively different.

      Ms. Rethlefsen questioned whether searching only MEDLINE would indeed reduce the number of citations and the burden in evaluating them. She is correct that we did not assess that. However, based on our initial search (not including updates) for studies of the diagnostic test accuracy of the Patient Health Questionnaire (1), using MEDLINE only would have cut the total number of citations to process from 6449 to 1389 (22%) compared to searching MEDLINE, PsycINFO, and Web of Science. Thus, it does seem evident that, in this area of research, using such a strategy would have a significant impact on resource use. Whether or not it is the right choice depends on the specific purposes of the systematic review and would be conditional on using a well-designed, peer-reviewed search.

      (1) Thombs BD, Benedetti A, Kloda LA, et al. The diagnostic accuracy of the Patient Health Questionnaire-2 (PHQ-2), Patient Health Questionnaire-8 (PHQ-8), and Patient Health Questionnaire-9 (PHQ-9) for detecting major depression: protocol for a systematic review and individual patient data meta-analyses. Systematic Reviews. 2014;3:124.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 21, Melissa Rethlefsen commented:

      The authors are asking an important question—which database(s) should be searched in a systematic review? Current guidance from the Cochrane Collaboration, the Institute of Medicine, and most information retrieval specialists suggests that searching multiple databases is a necessity for a comprehensive search of the literature, but searching multiple databases can be time-consuming and may result in more citations than are manageable to review. In this paper, the authors posit that searching MEDLINE alone would be sufficient to locate relevant studies when conducting systematic reviews with meta-analysis on depression screening tools.

      Though the authors’ methodology is detailed, one limitation noted in the paper was: “we were unable to examine whether the search strategies used by authors in each meta-analysis did, in fact, identify the articles indexed in MEDLINE as most included meta-analyses did not provide reproducible search strategies.” Because of this, the conclusions of this study must be viewed with caution. If the searches conducted by the authors did not locate the studies in MEDLINE, the fact that the studies could have theoretically been located in MEDLINE is irrelevant. Finding results in MEDLINE is largely due to the ability of the searcher, the sensitivity of the search design, and the skill of the indexer. Wieland LS, 2012 Suarez-Almazor ME, 2000 Golder S, 2014 O'Leary N, 2007 Searching for known items to assess database utility in systematic reviews has been previously done (see, for example, Gehanno JF, 2013), but it has been critiqued due to the lack of search strategy assessment. Boeker M, 2013 Giustini D, 2013

      The authors, using an uncited search strategy in an unspecified version of MEDLINE on the Ovid SP platform that they state had been “a previously published peer-reviewed search,” indeed can only find 92% of the included articles, versus the 94% available in the database. Unfortunately, there is little reason to suppose that the authors of systematic review articles can be expected to conduct a “reasonable, peer-reviewed search strategy.” In fact, researchers have repeatedly shown that even fully reproducible reported search strategies often have fatal errors and major omissions in search terms and controlled vocabulary. Sampson M, 2006 Rethlefsen ML, 2015 Koffel JB, 2016 Golder S, 2008 Though working with a librarian or information specialist is recommended as a way to enhance search strategy quality, studies have shown that certain disciplines never work with librarians on their systematic reviews Koffel JB, 2016, and those disciplines where it is more common still only work with librarians about a third of the time. Rethlefsen ML, 2015 Tools like PRESS were developed to improve search strategies McGowan J, 2016, but search peer review is rarely done. Rethlefsen ML, 2015

      The authors also state that, “searching fewer databases in addition to MEDLINE will result in substantively less literature to screen.” This has not been shown by this study. The authors do not report on the number of articles retrieved by their search or any of the searches undertaken in the 16 meta-analyses they evaluate. Furthermore, no data on precision, recall, or number needed to read was given for either their search or for the meta-analyses. These data could be reconstructed and would give readers concrete information to make this case. That would be particularly helpful in light of the information provided about the number and names of the databases searched. Other studies looking at database performance for systematic reviews have included precision and recall information for the original search strategies and/or from the reported found items. Preston L, 2015 Bramer WM, 2013 Bramer WM, 2016 These studies have largely found that searching multiple databases is of benefit.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 04, Martine Crasnier-Mednansky commented:

      Expression of crp in Escherichia coli was found 'not to be' post-transcriptionally regulated by sRNAs including SdsR (Lee HJ, 2016). In sharp contrast, this paper reports SdsR 'strongly' affects the expression of crp in Salmonella typhimurium. What causes such discrepancy?

      In all fairness, Lee HJ, 2016 noted "a few sRNAs [which included SdsR] were close to the two-fold cutoff for repression of crp" and also noted "our translational fusions will only detect regulation in the 5’ UTR and the first 20 codons of the targets". Therefore, it was prudently suggested that expression of crp was not affected by sRNAs.

      Here, the authors observed a two-fold repression of crp by SdsR using whole genome microarray (table 1), and an almost 2-fold repression using a gfp reporter fusion (figure 1B). Thus it appears there is no data discrepancy between the present work and Lee HJ, 2016.

      The authors' contention that SdsR strongly affects the expression of crp is based on data obtained with the sRNA CyaR (as reported in figure 6). Figure 6A indicates that, in early stationary phase, there is no synthesis of SdsR. SdsR appears at +3h, when the cells are supposedly well advanced in the stationary phase. This suggests regulation by SdsR occurs late in the stationary phase, as mentioned by the authors. Figure 6A also indicates that constitutively expressed SdsR is overexpressed, and most importantly the correlation between SdsR and crp mRNA is not straightforward, as observed by comparing lane 4 and lane 10 (or 11) in figure 6A.

      CyaR expression is positively regulated by CRP-cAMP (De Lay N, 2009); therefore a carbon source triggering a relatively high cAMP level as compared to glucose (maltose in this paper) caused an increase in the CyaR level both in the presence and absence of SdsR (figure 6B, lanes 1 to 8). With SdsR overexpressed, the CyaR level significantly decreased for cells grown on maltose (figure 6B, lanes 11 and 12). The authors concluded that the lack of CyaR is related to the repression of crp by SdsR, yet the level of CRP was not monitored.

      It is reasonable to conclude that regulation of crp expression by sRNAs does not appear physiologically relevant during growth or entry into stationary phase. However, this regulation may be significant upon accumulation of SdsR in nutrient-limited cells. If this is the case, CRP-dependent synthesis of post-exponential starvation proteins, which are not essential for survival (Schultz JE, 1988), will gradually be shut off. This tentative proposal is grounded in data from Lévi-Meyrueis C, 2014 indicating that lack of SdsR results in impaired competitive fitness, however only after 2 to 3 days in stationary phase.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 05, Antonio Palazón-Bru commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 05, Antonio Palazón-Bru commented:

      Dear Dr Martí-Carvajal,

      Thank you very much for your interest in our work. In the paper we had written the following phrase about the indicated issue: "ROMPA has a stratified randomisation based on gender, age (≤65 or >65 years) and Simplified Acute Physiology Score (SAPS) III score (<50 or ≥51)." Please, if you have any doubt about this, do not hesitate to contact us for further information.

      Best regards, The authors


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Feb 16, Arturo Martí-Carvajal commented:

      This protocol reports an unclear randomization process. 1. How was the sequence generation done? 2. How did the trial authors ensure randomness?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 10, Lydia Maniatis commented:

      The inexplicable insistence on a logically and empirically invalid elementaristic approach is reflected in Gheorghiu et al’s description of color as a “low-level” feature (of the proximal, distal, perceptual ? – stimulus): “Together these studies suggest that while symmetry mechanisms are sensitive to across-the-symmetry-midline correlations in low-level features such as color…” I’ve already discussed the problem with describing symmetry (or asymmetry) as a collection of correlations; here, the point has to do with color. Color, as we know, is not a feature of the distal stimulus, or of the proximal stimulus; it is a feature and a product of perceptual processes. As we (visual perception types) know, furthermore, there is no unique collection of wavelengths associated with the perception of a given color. How a local patch will look is wholly dependent on the structure and interpretation (via process) of the surrounding patches as well as that particular one. A patch reflecting the full spectrum of wavelengths in the visible spectrum can appear any color we want, because of the possibility of perceiving transparency and double layers of color. (Google/Image Purves cubes to see an example). So, although the term “low-level” is rather vague, it is clear that the perceived color of a patch in the visual field is the result of the highest level of perceptual processes, up to and including the process that produces consciousness. This type of fundamental confusion at the root of a research program should call that program into question.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 08, Lydia Maniatis commented:

      Point 3

      A persistent question that I have is why is it acceptable, in the world of psychophysics, for authors to act as subjects, especially when the number of observers is typically so small, and when authors are careful to point out that non-author subjects were "naive."

      Quoting Gheorghiu et al: "Six observers participated in the experiments: the first author and five subjects who were naive with regard to the experimental aims."

      Indeed, in certain conditions, the lead author acted as one of only three, or even of only two, subjects:

      "For the number of blobs experiment three observers (EG, RA, CM) took part in all stimulus conditions and for the stimulus presentation duration experiment only two observers (EG and RA) participated."

      If naivete is important, then why is this acceptable? It seems like a straightforward question. Maybe there's a straightforward answer, but I don't know where to find it.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 07, Lydia Maniatis commented:

      The authors are asking questions of the nature: “How many types of phlogiston does wood contain,” comparing the results of burning wood in a variety of conditions, and interpreting them under the assumptions of “phlogiston detection theory.”

      The key point is that their major assumption or hypothesis - the existence of phlogiston, is never questioned or tested even though evidence and arguments against it are of long-standing. Here, ‘phlogiston’ is equivalent to “symmetry channels” and ‘assumptions of ‘phlogiston detection theory’ are equivalent to the assumptions of “probability summation of independent color-symmetry channels within the framework of signal-detection theory.”

      As noted in the earlier post, signal detection is a wholly inappropriate concept with respect to perception. But this doesn’t inhibit the study from proceeding, because logical problems are ignored and data is simply interpreted as though the “framework of sdt” were applicable.

      The basic logical problem is that the perception of a symmetrical form derives from the detection of local symmetry elements. These local elements supposed to instigate local signals, which are summed, and this sum mediates whether symmetry will or will not be perceived:

      “In the random-segregated condition the local symmetry signals would be additively combined into a single color-selective symmetry channel, producing a relatively large symmetry signal in that color channel and zero symmetry signal in the other color channels. In the non-segregated condition on the other hand, there would be symmetry information in all channels but the information in each channel would be much weaker…Probability summation across channels would result in an overall stronger signal in the random-segregated compared to non-segregated condition16. If there are no color-selective symmetry channels, then all color-symmetry signals will be pooled into one single channel.”

      How inappropriate the above quote is becomes easier to appreciate if we look at cases in which the physical and proximal configurations aren’t symmetrical, but the perceived configuration is. Take, for example, a picture of a parallelogram that looks like a slanted rectangle (as tends to be the case, e.g. in the three visible sides of the Necker cube). If the parallelogram is perceived as rectangular, then it looks symmetrical. This being the case, does it make sense to talk about “local symmetry signals” being summed up to produce the perceived symmetry? Isn’t the perception of the whole rectangle itself prior to and inextricably tied to the perception of its symmetry? If we are willing to invoke “local symmetry signals” then we could just as well invoke “local asymmetry signals,” since perceived asymmetry in a form is just as salient as symmetry - and just as dependent on prior organization. In perception (unlike in cognition), formal features such as symmetry are never disembodied; we never perceive “symmetry” as such, we perceive a symmetrical object. So, just as you can’t separate a shadow from the object that casts it, you can’t separate symmetry from the form that embodies it, and thus you can’t localize it.

      The logical problem is the same whether or not the distal or proximal stimuli are symmetrical. For a given pair of dots in Gheorghiu et al’s stimuli to be tagged as a“local symmetry signal,” they must already have been perceptually incorporated in a perceived shape. Symmetry will be a feature of that shape, as a whole. It is therefore redundant to say that we perceive symmetry by going back and summing up “local signals” from the particular pairs of points that are matched only because they are already players in a global shape percept. If we don’t assume this prior organization, then any pair of dots in the stimuli are eligible to be called “symmetry signals” simply by imagining an axis equidistant from both.

      In general, it isn’t reasonable or even intelligible to argue that any aspect of a shape, e.g. the triangularity of a triangle, is perceivable via a piecemeal process of detection of local “triangularity signals.” This was the fundamental observation of the Gestaltists; sadly, it has never sunk in.

      In a subsequent post I will discuss the problem with the two alternative forced choice method used here. This method forces observers to choose one of two options, even if neither of those matches their perceptual experience. Here, I want to point out that this experiment is set up in precisely the same way: Data are used to choose among alternatives, none of which reflect nature.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Nov 04, Lydia Maniatis commented:

      Preliminary to a more general critique of this study, whose casual approach to theory and method is unfortunately typical in the field of vision science, I would like to point out the conceptual confusion expressed in the first few sentences of the introductory remarks.

      Here, Gheorghiu et al (2016) state that "Symmetry is a ubiquitous feature in natural images, and is found in both biological and man-made objects...symmetry perception plays an important role in object recognition, [etc]."

      If by "natural images" the authors are referring either to the retinal projection or to any projection of the natural or even man-made world, the statement is incorrect. It will be rare that the projection of either a symmetrical or an asymmetrical object will be symmetrical in the projection. The authors are making what the Gestaltists used to call the "experience error," equating the properties of the products of perception with the properties of the proximal stimulus.

      Yes, the world contains many quasi-symmetrical objects; yes, man-made objects are, more often than not, symmetrical; and yes, we generally perceive physically symmetrical objects as symmetrical. But this occurs not because the proximal stimulus mirrors this symmetry, but in spite of the fact that it does not.

      The misunderstanding vis a vis the properties of the physical source of the retinal projection vs the properties of the projection vs the properties of the percept runs deep and is fundamental to studies that, like this one, treat perception as a "signal detection" problem.

      When an observer says "I see symmetry" in this object or picture, this does not mean that the observer's retinal projection contains symmetrical figures (even if (and this is an insurmountable if) it were theoretically acceptable to treat the projection as being "pre-treated" by a figure-ground process that segregates and integrates photon-hits on the basis of the physical coherence of sources that reflected them).

      So in what sense is symmetry being "detected"? Only in the sense that the conscious observer is inspecting the products of perceptual processes that occur automatically, and are the only link between conscious experience and the outside world. Because of this, an observer may "detect" symmetry in a perceptual stimulus even if the source of that stimulus is, in fact, asymmetrical. For example, when we look at a Necker cube, we "detect" symmetry, even though the figure that reflected the light is not symmetrical. When it comes to perception, the "feature detector" concept is a non-starter, because it ignores the nature of the proximal stimulation and mixes up cause and effect.

      The fact that the authors use actually symmetrical figures as their experimental objects obscures this truth, of which they should be aware.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 19, Amit Dutt commented:

      We thank the reader for the general appreciation and interest expressed in the TMC-SNPdb initiative. We agree that using tumor-adjacent normal samples from cancer patients has limitations, as described unambiguously in our article. In addition to our study, the Exome Aggregation Consortium (Lek M, 2016) – involving 7,601 normal blood samples derived from cancer patients out of the 60,706 normal samples studied – has similarly described the limitation of using such normal samples. Of note, the TMC-SNPdb is a pilot initiative. As it evolves with the inclusion of additional normal samples, we anticipate further refinements in subsequent releases/versions of the database.

      Unfortunately, the suggested comparison between the TMC-SNPdb and the Sudmant et al. 2015 study cannot be made, as Sudmant et al. (doi:10.1038/nature15394) performed low-pass whole genome sequencing at ~8x coverage to describe “structural alterations”. Such low-pass coverage studies are not ideally suited for variant analysis. However, in a separate study, the 1000 Genomes Project consortium (1000 Genomes Project Consortium., 2015) has described whole exome sequence data of samples including data from >400 "normal" people of South-East Asian/Indian ethnicity at a high mean coverage of ~75x to exhaustively catalogue the SNPs present in the population. Our study describing the TMC-SNPdb does compare against and deplete the SNPs reported in that study, to develop a unique set of as-yet-undescribed SNPs specific to the Indian population.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 31, Chandan Kumar commented:

      This seems like a welcome initiative to generate a SNP database for Indian populations. However, as noted in the discussion, use of tumor-adjacent normal samples derived from cancer patients to generate a reference germline database is problematic. Interestingly, the 1000 Genomes Project (Sudmant et al, An integrated map of structural variation in 2,504 human genomes, Nature 526, 75–81 (01 October 2015), doi:10.1038/nature15394) includes data from >400 "normal" people of South-East Asian/Indian ethnicity. It may be useful to compare the two datasets.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 25, Ram Dessau commented:

      Discussion of formal criteria for website presentation appears more important than medical or scientific issues in this commentary.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 02, Peter Hajek commented:

      The finding that stop-smoking medications had beneficial rather than adverse effects on pregnancy outcomes is an important contribution. However, despite the claim in the Conclusion, the study does not really provide any information on their efficacy for smoking cessation. Medication users were trying to quit and an unknown but possibly large proportion of them were already non-smokers at intake (data on this crucial variable are not provided). The control group were all smokers. They were also not engaged in stop-smoking treatments and so likely to have a lower interest in quitting than those taking stop-smoking medications.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 13, Clive Bates commented:

      The authors base their advice on four main pillars, each of which is unreliable.

      First, that we lack evidence of safety. We lack evidence of the complete safety of anything, including medicines, and notably those used in smoking cessation. What matters is the relative risk. We do have good evidence that e-cigarette vapour is much less hazardous than cigarette smoke [1]. Most of the hazardous agents in cigarette smoke are either not present at detectable levels in e-cigarette aerosol or present at levels far below those found in cigarette smoke. The authors mention diacetyl but fail the basic requirement of risk presentation, which is quantification - magnitude and materiality matter. The levels of diacetyl found in e-cigarettes (if it is used at all) are several hundred times lower than in cigarette smoke, and there are no known cases of 'popcorn lung' in smokers [2]. Overall, the Royal College of Physicians reviewed the safety of e-cigarettes and concluded [3]:

      "Although it is not possible to precisely quantify the long-term health risks associated with e-cigarettes, the available data suggest that they are unlikely to exceed 5% of those associated with smoked tobacco products, and may well be substantially lower than this figure". (Section 5.5 page 87)

      Second, the authors argue that e-cigarettes may be ineffective as a smoking cessation aid. There are few RCTs evaluating these products because they are not medicines, but low-risk consumer alternatives to cigarettes. RCTs are not well suited to evaluating complex behaviour change in a rapidly evolving marketplace. However, there is considerable evidence of smokers using these products as alternatives to smoking, and the studies cited by the authors are limited by confounding, selection bias and inappropriate aggregation of heterogeneous studies. May I suggest that interested clinicians cut through the thicket of claims and counterclaims, and read some of the accounts of smokers whose lives have been revolutionised by these products [4]? For the more cautious, e-cigarettes may be an option to suggest once the other options have been exhausted, but the approach certainly should not be discarded in its entirety.

      Third, the authors argue that these products are not approved by the FDA and that there are fire and poisoning risks. Regulation by the FDA is no guarantee of safety - many medicines have severe side-effects and cigarettes are regulated by the FDA under the Tobacco Control Act. There have been a few cases of highly publicised e-cigarette battery fires - a small risk with all lithium-ion batteries. But this should be set against fire risks from smoking, which the National Fire Protection Service estimates causes the following damage [5]:

      "In 2011, U.S. fire departments responded to an estimated 90,000 smoking-material fires in the U.S., largely unchanged from 90,800 in 2010. These fires resulted in an estimated 540 civilian deaths, 1,640 civilian injuries and $621 million in direct property damage".

      It is true that calls to poison centers rose 'exponentially' along with the growth of the product category from a low base. But the word 'exponential' does not mean 'large'. Calls to US Poison Centers amounted to 4,014 for e-cigarettes in 2014, but that compares with 291,062 for analgesics and 199,291 for cosmetics, and 2.2 million in total [6]. It is a small risk amongst many others common in the home.

      Fourth and finally, the authors make a "first, do no harm" argument. But paradoxically they create potentially serious harm with their chosen analogy:

      Jumping from the 10th floor of a burning building rather than the 15th floor offers no real benefit.

      This is an implied relative risk claim, which can be interpreted in two ways. First, that cigarettes and e-cigarettes are equally lethal (the likely result of jumping from the 10th and 15th floor is near certain death) or that e-cigarettes offer about two-thirds of the risk of smoking, based on the relative energy of impact (proportional to the distance fallen). Neither is remotely supportable by any evidence, but it is this kind of casually misleading risk communication that is likely to cause fewer smokers to switch and more e-cigarette users to relapse. The debate about the relative risk of e-cigarettes and smoking is better represented by a comparison of stumbling on the building's entrance steps with jumping from the 4th floor, the estimated height causing death in 50% of falls [7]. The weakness and inappropriateness of such analogies have been heavily criticised [8].

      These metaphors, like other false and misleading anti-harm-reduction statements are inherently unethical attempts to prevent people from learning accurate health information. Moreover, they implicitly provide bad advice about health behavior priorities and are intended to persuade people to stick with a behavior that is more dangerous than an available alternative. Finally, the metaphors exhibit a flippant tone that seems inappropriate for a serious discussion of health science.

      The responsible clinician should be providing accurate, realistic information and advice conveyed in a way that promotes understanding rather than unwarranted fear or confusion and helps the patient make an informed choice. It is important to recognise that smokers are at great risk, and if these products offer an attractive and appealing way out of smoking that works for them, the clinician should not be a barrier to them taking that path.

      [1] Farsalinos KE, Polosa R. Safety evaluation and risk assessment of electronic cigarettes as tobacco cigarette substitutes: a systematic review. Ther Adv Drug Saf 2014;5:67-86. [Link]

      [2] Siegel M. New Study Finds that Average Diacetyl Exposure from Vaping is 750 Times Lower than from Smoking, The Rest of the Story, 10 December 2015. [Link]

      [3] Royal College of Physicians, London. Nicotine without smoke:tobacco harm reduction, 28 April 2016 [Link]

      [4] Consumer Advocates for Smoke-free Alternatives Associations (CASAA) User testimonials, accessed 12 July 2016 [Link]

      [5] Hall J. The Smoking-Material Fire Problem, National Fire Protection Service, July 2013 [Link]

      [6] Mowry JB, Spyker DA, Brooks DE, et al. 2014 Annual Report of the American Association of Poison Control Centers' National Poison Data System (NPDS): 32nd Annual Report. Clin Toxicol 2015;53:962-1147. [Link].

      [7] Marx J, Hockberger R, Walls R. Rosen's Emergency Medicine - Concepts and Clinical Practice, 8th Edition, Table 36-1, page 290, August 2013. [Link]

      [8] Phillips CV, Guenzel B, Bergen P. Deconstructing anti-harm-reduction metaphors; mortality risk from falls and other traumatic injuries compared to smokeless tobacco use. Harm Reduct J 2006;3:15.[Link]

      Competing interests: I am a longstanding advocate for 'harm reduction' approaches to public health. I was director of Action on Smoking and Health UK from 1997-2003. I have no competing interests with respect to any of the relevant industries.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 07, Zvi Herzig commented:

      The authors argue that their study Gmel G, 2016 is not cross-sectional because smoking was measured at the baseline. However, both the exposure (EC use) and the outcome (cessation) are measured simultaneously at follow-up. This leaves the analysis open to the most basic limitation of cross-sectional studies: reverse causality.

      Contrary to the authors' interpretation that the findings demonstrate that vaping had no beneficial effect, the negative correlation between EC use and cessation likely results from successful quitters lacking the need to initiate EC use.

      The fact that EC use was rare towards the beginning of the study only exacerbates the aforementioned flaw: those who quit early on had little opportunity to initiate e-cigarette use as smokers, hence the lower e-cigarette use among tobacco quitters.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 19, Zvi Herzig commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 13, Daniel Mønsted Shabanzadeh commented:

      Response to Frisch and Earp’s comments on Systematic Review

      Daniel Mønsted Shabanzadeh<sup>1,2,3</sup> Signe Düring<sup>4</sup> Cai Frimodt-Møller<sup>5</sup>

      <sup>1</sup> Digestive Disease Center, Bispebjerg Hospital <sup>2</sup> Research Centre for Prevention and Health, Capital Region of Denmark <sup>3</sup> Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen <sup>4</sup> Mental Health Services of the Capital Region Region of Denmark <sup>5</sup> Department of Urology, CFR Hospitals, Denmark

      This response was posted on the Danish Medical Journal's website July 2016: http://ugeskriftet.dk/files/response_to_frisch_and_earp_dmj.pdf


      Dear Morten Frisch and Brian Earp

      We thank you both for the comments(1) on our systematic review(2).

      We respectfully disagree that the conduct of systematic reviews is unjustified. We can only emphasize the importance of identifying all available literature, for clarity, before drawing conclusions on a delimited objective, such as whether the exposure of circumcision has an impact on outcomes of perceived sexual function in adult males. The systematic process was performed according to the PRISMA statement and our conclusion reflected the lack of research in specific domains. We therefore do not feel the need to justify the methodology any further.

      You have problematized that we did not include a Canadian study of the sexual partners of circumcised males; however, this was not part of our research objective. To address the impact of circumcision on sexual partners' perceived sexual function would require yet another systematic review process.

      Circumcision is performed both for clinical indications, such as penile or prepuce pathology, and for nonclinical purposes, such as cultural practice or with the aim of HIV prevention. As we have demonstrated in the paper, many studies fail to distinguish these two populations, which is a major limitation from a clinical perspective, and one should therefore not draw conclusions about either from such studies. Frisch and Earp suggest that a number of other factors besides this clinical perspective may contribute to the outcome of perceived sexual function in males, and we do agree. We have raised the issue of heterogeneity and the limitations of the available literature in the discussion.

      Our conclusion clearly states the results of the highest quality of available evidence and notes the lack of high-quality studies on the consequences of medically indicated circumcision and of age at circumcision needed to fully answer our study objectives, and we have specifically stated that a majority of the studies do not take sexual orientation into account. We have suggested specific study designs on how to fill the gaps in evidence for future research.

      We would like to extend to you both, and all other interested parties, an invitation to collaborate in the future. We can all agree that the field calls for further research, and would be happy to join forces, with contributions from both clinical, epidemiological and physiological angles.

      References

      (1) Frisch M, Earp B. Problems in the qualitative synthesis paper on sexual outcomes following nonmedical male circumcision by Shabanzadeh et al http://ugeskriftet.dk/files/201607001_commentary_frisch_earp_on_paper_by_shabanzadeh_et_al_dmj_1.pdf

      (2) Shabanzadeh DM, During S, Frimodt-Moller C. Male circumcision does not result in inferior perceived male sexual function - a systematic review. Danish medical journal. 2016;63(7)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Dec 06, Morten Frisch commented:

      Problems in the qualitative synthesis paper on sexual outcomes following non-medical male circumcision by Shabanzadeh et al

      by Frisch M<sup>1,2</sup> & Earp BD<sup>3</sup>

      <sup>1</sup> Statens Serum Institut, Copenhagen, Denmark <sup>2</sup> Aalborg University, Aalborg, Denmark <sup>3</sup> Hastings Center Bioethics Research Institute, USA

      The comment below was published on the Danish Medical Journal's (Ugeskrift for Læger's) website on July 1, 2016: http://ugeskriftet.dk/files/2016-07-01_commentary_frisch_earp_on_paper_by_shabanzadeh_et_al_dmj_1.pdf

      _____________________________________________________________________________________________________________

      Shabanzadeh et al (1) claim in their title that “Male circumcision does not result in inferior perceived male sexual function.” Yet such a categorical conclusion does not follow from the data and analysis presented in the paper itself. As the authors state, there was “considerable clinical heterogeneity in circumcision indications and procedures, study designs, quality and reporting of results” in the studies they reviewed, which precluded an objective, quantitative assessment. Inadequate follow-up periods of only 1-2 years in the prospective studies imply that their results cannot be generalized beyond that range. In addition, “Risks of observer and selective reporting bias were present in the included studies … only half of the studies included validated questionnaires and some studies reported only parts of questionnaires.”

There is also a troubling heteronormativity to the authors’ headline claim. As they state: “Most studies focused on the heterosexual practice of intravaginal intercourse and did not take into account other important heterosexual or homosexual practices that comprise male sexual function.” Such practices include, inter alia, styles of masturbation that involve manipulation of the foreskin itself, as well as “docking” among men who have sex with men (MSM), both of which are rendered impossible by circumcision (2). Related to this, a recent Canadian study, not included in the paper by Shabanzadeh et al, found “a large preference toward intact partners for anal intercourse, fellatio, and manual stimulation of his partner’s genitals,” in a small but demographically diverse sample of MSM (3). Against such a backdrop, the authors’ characterization of their paper as “a systematic review” showing a definitive lack of adverse effects of circumcision on perceived male sexual function is unjustified. As Yavchitz et al argue, putting such a conclusive ‘spin’ on findings that are in truth more mixed or equivocal “could bias readers' interpretation of [the] results” (4). Thus, while the literature search performed by Shabanzadeh et al may well have been carried out in a systematic manner, their ‘qualitative synthesis without metaanalysis’ leaves the distinct impression of a partial (in both senses of the word) assessment.

      The authors mention that the rationale for undertaking their analysis was “the debate on non-medical male circumcision [that has been] gaining momentum during the past few years”. But the public controversy surrounding male circumcision has to do with the performance of surgery on underage boys, specifically, in the absence of medical necessity. By contrast, therapeutic circumcisions that cannot be deferred until an age of individual consent are broadly perceived to be ethically uncontroversial, as are voluntary circumcisions performed for whatever reason on adult men, who are free to make such decisions about their own genitals (5). Consequently, studies dealing with either therapeutic or adult circumcisions are irrelevant to the ongoing controversy and should have been excluded by the authors in light of their own aims; such exclusion would have left only a handful of relevant investigations out of the 38 included studies.

      As one of us has noted elsewhere: “the [sexual] effects of adult circumcision, whatever they are, cannot be simply mapped on to neonates” or young children (2). This is because studies assessing sexual outcome variables in adults typically do not account for socially desirable responding (6); they concern men who, by definition, actively desire to undergo the surgery to achieve a perceived benefit, and are therefore likely to be psychologically motivated to regard the result as an improvement overall; and such studies are typically hampered by limited follow-up (as noted above), rarely if ever extending into older age, when sexual problems begin to increase markedly (7). In infant or early childhood circumcision, by contrast, “the unprotected head of the penis has to rub against clothing (etc.) for over a decade before sexual debut. In this latter case … the affected individual has no point of comparison by which to assess his sexual sensation or satisfaction - his foreskin was removed before he could acquire the relevant frame of reference - and thus he will be unable to record any differences” (2).

      The sexual consequences of circumcision are likely to vary from person to person. All-encompassing statements, such as that forming the title of the paper by Shabanzadeh et al, do not reflect this lived reality. Individual differences in sexual outcome variables will be shaped by numerous factors, such as the unique penile anatomy of each male, the type of circumcision and the timing of the procedure, the motivation behind it, the cultural context, whether it was undertaken voluntarily (or otherwise), the man’s subjective feelings about having been circumcised, his underlying psychological profile, and so on (8, 9). Collapsing across all of these factors to draw general conclusions can only serve to obscure such crucial variance (10).

Therefore, the authors’ choice to include any study looking at sexual outcomes after circumcision, whether in boys or adult males, whether in healthy individuals or in patients with a foreskin problem, whether in Africa or in Western settings, and whether with a follow-up period of decades or of only a few months to years, is problematic. Such a cacophony of 38 studies, dominated by findings on short-term sexual consequences of voluntary adult male circumcision, has limited relevance, if any, to the authors’ stated research question: how non-therapeutic circumcision in boys affects the sex lives of the adult men they will one day become.

      References

(1) Shabanzadeh DM, Düring S, Frimodt-Møller C. Male circumcision does not result in inferior perceived male sexual function – a systematic review. Danish Medical Journal 2016; 63: A5245 (http://www.danmedj.dk/portal/page/portal/danmedj.dk/dmj_forside/PAST_ISSUE/2016/DMJ201607/A5245).

      (2) Earp BD. Sex and circumcision. American Journal of Bioethics 2015; 15: 43-5.

      (3) Bossio JA, Pukall CF, Bartley K. You either have it or you don't: the impact of male circumcision status on sexual partners. Can J Hum Sex 2015; 24: 104-19.

      (4) Yavchitz A, Boutron I, Bafeta A, Marroun I, Charles P, Mantz J, Ravaud P. Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study. PLoS Med 2012; 9, e1001308.

      (5) Darby R. Targeting patients who cannot object? Re-examining the case for nontherapeutic infant circumcision. SAGE Open 2016; 6: 2158244016649219.

      (6) Earp BD. The need to control for socially desirable responding in studies on the sexual effects of male circumcision. PLoS ONE 2015; 10: 1-12.

      (7) Earp BD. Infant circumcision and adult penile sensitivity: implications for sexual experience. Trends in Urology & Men’s Health 2016; in press.

      (8) Goldman R. The psychological impact of circumcision. BJU International 1999; 83: 93-102.

      (9) Boyle GJ, Goldman R, Svoboda JS, Fernandez E. Male circumcision: pain, trauma and psychosexual sequelae. Journal of Health Psychology 2002; 7: 329-343.

      (10) Johnsdotter S. Discourses on sexual pleasure after genital modifications: the fallacy of genital determinism (a response to J. Steven Svoboda). Global Discourse 2013; 3: 256-265.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 07, Janelia Neural Circuit Computation Journal Club commented:

Highlight/Summary

Howe and Dombeck imaged the activity in dopaminergic projection axons from the midbrain to the striatum during self-initiated locomotion and unexpected-reward delivery in mice. They reported a rapid increase in activity, which apparently preceded the locomotion onset. Rapid signalling in these axons was correlated with changes in acceleration during locomotion. Furthermore, axons that carried locomotion signals were found to be distinct from those that responded to unpredicted rewards. The authors concluded that distinct populations of midbrain dopaminergic neurons play a role in locomotion control and processing of unexpected reward.

Strengths

This is one of a few studies that uses self-initiated locomotion to contrast dopaminergic signalling during movement-related activity and reward. By imaging separately from axonal projections originating in the substantia nigra (SNc) and ventral tegmental area (VTA), the authors found that the SNc signal was more ‘movement-related’, whereas the VTA conveyed both movement and ‘reward-related’ signals. The authors confirmed this observation by describing a hitherto unknown gradient in locomotion-associated signal versus reward signal along the dorsal-ventral axis of the striatum, which was consistent with a previously reported pattern of projections from the VTA and SNc to the striatum.

      Weaknesses

There are several technical issues that make the interpretation of these results difficult:

Detecting neural activity in individual axons using two-photon calcium imaging could be confounded by brain motion, which is expected to be exacerbated during locomotion. Electrophysiological recordings from the VTA and the SNc during locomotion and reward delivery could have provided less ambiguous results. In fact, extracellular recordings from the SNc, recorded in a similar task, suggested that movement onset is represented by a pause in firing of SNc neurons (Dodson et al., 2016), rather than by an increase in firing rate.

The authors reported that the dopaminergic signal preceded locomotion initiation by ~100 msec, as measured indirectly by treadmill acceleration. However, any postural changes or micro-movements could have started before treadmill movement was detected. Therefore, whether movement initiation precedes or lags behind the dopaminergic signal is unclear based on these data, until more direct measurements of motion initiation are conducted (e.g., EMG or detailed video analysis of paw kinematics).

The authors claim that dopaminergic signalling during continuous locomotion was associated with changes in acceleration. However, during locomotion, changes in acceleration had a rhythmic pattern at ~3.5 Hz, which complicates the interpretation of whether the dopaminergic signalling reflected past or future changes of the movement pattern. Furthermore, previous studies have reported oscillatory activity in the VTA at 4 Hz (Fujisawa & Buzsaki, 2011), and hence it is unclear whether the rhythmic signalling in the dopaminergic neurons during locomotion reported by Howe and Dombeck could simply reflect a coupling of the activity of the dopaminergic neurons to the LFP.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 17, George McNamara commented:

A more nuanced review is available at https://www.ncbi.nlm.nih.gov/pubmed/28481306

along with a compelling blog post and Perspective blasting MTH1:

      http://www.icr.ac.uk/blogs/the-drug-discoverer/page-details/call-to-bioscientists-choose-and-use-your-chemicai-probes-very-carefully

      Blagg & Workman 2017 Cancer Cell (open access) http://dx.doi.org/10.1016/j.ccell.2017.06.005


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 25, Morten Oksvold commented:

      In figure 3E the micrograph of merged KI67 and DAPI staining does not correspond with the single DAPI staining.

A similar problem is also seen in figure 4D, where the PI-positive cells do not correspond to the Hoechst-stained cells. This becomes apparent when the brightness is adjusted. In this experiment, cells are exposed to 5 mg/ml PI for 30 min. The usual practice is to use 1 microgram/ml and examine the cells immediately; otherwise the PI will also stain the live cells.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 17, David Keller commented:

      Berger's erroneous remarks in paragraph 4 imply possible serious errors in the Cologuard internal assay results

      Berger wrote: "Dr. Keller’s attempt to draw a correlation between the occult hemoglobin value within a Cologuard test result and the stool hemoglobin concentration thresholds for positive results in a commercially available fecal immunochemical tests with separately validated reference points is just one example of how easily the erroneous use of data can yield potentially inappropriate clinical decisions. "

      Keller: Each Cologuard specimen is subjected to an assay for occult hemoglobin concentration, measured in ng/mL, a measurement which must be accurate and repeatable by other modes of testing, including "commercially available fecal immunochemical tests (FIT)". Any biochemical measurement, such as fecal hemoglobin concentration, should remain constant and not vary depending on the test used to measure it. The Cologuard composite score requires an accurate measurement of fecal hemoglobin concentration, and it does not matter how that concentration is measured, but the concentrations measured by different tests must come out the same, within the error limits of the tests, or there is a problem.

      Berger wrote: "For instance, as discussed at the FDA panel review of Cologuard, due to the Cologuard test’s use of significantly more stool in the Cologuard hemoglobin tube, the cut off of 100ng/mL concentration in the hemoglobin tube collection buffer used by some fecal immunochemical tests was estimated by FDA reviewers to be likely equivalent to more than twice that level within a Cologuard test."

      Keller: Your statement is completely false, and reveals a serious misunderstanding of the very simple and basic concept of concentration itself. Assuming a homogeneous specimen, the concentration of a solute (in this case hemoglobin) is independent of the sample size. The fact that the Cologuard assay uses "significantly more stool" should not affect the measured hemoglobin concentration. The amount of collection buffer solution relative to the amount of stool specimen in the test tube should not make any difference. If two different assays are measuring different fecal hemoglobin concentrations for the same homogeneous sample, then one or both assays are erroneous. Because the commercial FIT assays are calibrated and validated by the FDA, while you admit that Cologuard's internal assays are not validated or calibrated to that standard, it appears that the internal Cologuard fecal hemoglobin concentration measurement is erroneous, which, in turn, impairs the accuracy of all Cologuard screens!

      Berger wrote: "Even then, the theoretically resulting comparative threshold level is merely the product of conjecture because it has not been studied for that purpose, and to our understanding, the four most commonly used fecal immunochemical tests all generate only qualitative positive or negative results without release of the underlying quantitative information".

      Keller: You are wrong - commercial FIT tests yield a BINARY result of positive or negative, which is completely QUANTITATIVE and not in any way qualitative. For example, the FIT test used as the control comparator in the Multitarget clinical trial [1] was positive for fecal hemoglobin concentrations greater than or equal to 100 ng/mL, and negative for lesser concentrations. The results of that FIT test were binary (positive or negative), quantitative, accurate, repeatable and did not vary with the size of the stool specimen! The results of the Cologuard internal fecal hemoglobin concentration assay had better agree with the FIT test results, or there is a serious problem, and it is probably in your assay (see my rebuttal of your second paragraph for a full explanation of why).

      Reference

Imperiale, T.F., Ransohoff, D.F., Itzkowitz, S.H., Turnbull, B.A., Ross, M.E. Colorectal Cancer Study Group. Fecal DNA versus fecal occult blood for colorectal cancer screening in an average risk population. N Engl J Med. 2004 Dec 23;351(26):2704-14. PubMed PMID: 15616205.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 16, David Keller commented:

      Rebuttal of Berger's paragraph 3; and an explanation of how to define any test's normal range

      Berger wrote: "Reporting of the constituent values within a Cologuard test is not only an unapproved use of Cologuard, but the granularity that individual marker levels and the specific composite score might be assumed to provide for clinical decision making in screening patients is not supported."

      Keller: The result of any assay you run on my body tissues or wastes, regardless of the granularity or the FDA approval status of the assay, must be made available to me upon request. This is a right of any patient. Nothing you have said releases you from the obligation to inform me of my test results.

      Berger wrote: "While the algorithm and the algorithm cut-off that define positive/negative are clinically validated for screening, the individual constituent values are not approved for clinical use outside the context of the algorithm.

      Keller: According to the FDA, the Cologuard individual DNA assays are not approved for use outside the Multitarget algorithm primarily because Exact Sciences (your employer) did not apply for their approval, which is the necessary first step in the approval process. Every patient has the right to be informed of the results of any assay you run on his or her DNA, regardless of whether or not the assay is FDA-approved.

      Berger wrote: "The component markers do not have individual “normal” reference ranges associated with them and, as a result, these intermediate analytes are not separately interpretable."

      Keller: The normal range of any test may be defined as the average value of all test results, plus or minus 2 standard deviations, a range which will contain 95% of the results for the population. The remaining 5% may be defined as "abnormal".
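As a minimal sketch of the reference-range definition described above (assuming an approximately normal distribution of results, and using made-up values purely for illustration), the calculation could be expressed as:

```python
import statistics

def reference_range(results, k=2.0):
    """Return (lower, upper) bounds defined as mean +/- k standard deviations.

    Under an approximately normal distribution, mean +/- 2 SD covers roughly
    95% of the population, so values outside it could be labeled 'abnormal'
    in the sense described above.
    """
    mean = statistics.mean(results)
    sd = statistics.stdev(results)  # sample standard deviation
    return mean - k * sd, mean + k * sd

# Hypothetical assay values, purely for illustration
values = [12.1, 9.8, 11.4, 10.7, 13.0, 10.2, 11.9, 12.5, 9.5, 11.1]
low, high = reference_range(values)
print(f"Reference range: {low:.1f} to {high:.1f}")
```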

      Berger wrote: "This is different from the way that the separate tests ordered in a “test panel” can be interpreted, as under those circumstances, each test in the panel has its own separate reference range."

Keller: No, there is no difference; the process of defining a normal range is the same regardless of whether you are defining it for an internal Cologuard assay or for a component of a "test panel".

Regardless of how or whether a reference range is defined, this does not alter the ethical imperative that you must release the result of any test you perform on a patient to that patient. In my original commentary, I presented two clinical scenarios where patients could come to harm because of the failure of Exact Sciences to report extreme or "abnormal" internal assay results, such as a high fecal hemoglobin concentration caused by a non-neoplastic colon disease, or a patient with a high-negative composite score who has a higher-than-average risk of a false-negative Cologuard screening result and is given the same 3-year rescreening schedule as a patient whose fecal specimen result has the ideal lowest-risk composite score of zero. In the event that a patient comes to harm as a result of one of these clinical scenarios, Exact Sciences could, and should, become the target of a product liability lawsuit because of the failure to inform these patients of their abnormal internal assay results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Jul 12, David Keller commented:

      Rebuttal of paragraph 2: Dr. Berger mistakes "binary" for "qualitative"

      Berger: "The Cologuard test is intended for the qualitative detection of colorectal neoplasia associated biomarkers."

      Keller: This so-called qualitative detection process is actually totally quantitative in nature. Each biomarker (DNA mutation, methylation abnormality or hemoglobin concentration) is carefully measured and inserted into a precise equation to generate a precise "Composite Score" which is directly related to the risk of colon cancer. This is a totally quantitative process.

      Berger: " The numerical values generated from the component assays of the Cologuard test are excluded from the scope of the approval and are not clinically validated as individual test results."

      Keller: I asked the FDA about that. Their reply to me was essentially as follows: the primary reason FDA did not approve the component assays as individual test results was that your company, Exact Sciences, did not apply for such approval. The first step of the FDA approval process for a test is for the manufacturer to apply for approval.

The MultiTarget algorithm was well validated at a composite score of 183 in a large randomized trial [1], which measured the risk of neoplasia (among other parameters) at that point. That validation can be extended across the entire range of "negative" Cologuard scores, from zero to 182, but clinical validation of one or more additional composite scores in that range is required. A reasonable minimum number of points to validate could be composite scores of zero, 60 and 120, thus providing (along with the already validated composite score of 183) an approximation of the risk of neoplasia as a function of composite score by the three straight line segments defined by the risk of neoplasia at composite scores of zero, 60, 120 and 183. The mathematical name for this process is "interpolation", and it will provide a reasonable estimate of neoplasia risk across the entire range of negative composite scores (0 - 182). The more points that are validated within that range, the more accurate the estimate of neoplasia risk will be across the entire range (as measured by root-mean-squared error).
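To make the interpolation idea concrete, here is a minimal sketch; only the composite-score break points of 0, 60, 120 and 183 come from the text above, while the anchor risks in the example are invented placeholders standing in for values a validation study would have to supply:

```python
def interpolate_risk(score, anchors):
    """Piecewise-linear interpolation of neoplasia risk between validated anchor scores.

    `anchors` is a list of (composite_score, risk) pairs sorted by score.
    """
    s0, r0 = anchors[0]
    if score <= s0:
        return r0
    for s1, r1 in anchors[1:]:
        if score <= s1:
            # linear interpolation between the two bracketing anchor points
            return r0 + (r1 - r0) * (score - s0) / (s1 - s0)
        s0, r0 = s1, r1
    return r0

# Hypothetical anchor risks -- NOT validated values, illustration only
anchors = [(0, 0.002), (60, 0.010), (120, 0.030), (183, 0.080)]
print(round(interpolate_risk(150, anchors), 4))
```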

      Berger: "These numerical values are only constituents of the validated test algorithm that generates the qualitative Cologuard composite result (positive/negative)."

Keller: Numerical values are quantitative by definition. Your so-called "qualitative" result is derived from a number called the "composite score". The Cologuard result is "positive" if the composite score is 183 or greater, and "negative" if the score is 182 or less. The Cologuard result is therefore not qualitative; it is binary. Dr. Berger has clearly mistaken a binary decision-making process for a qualitative one. This is a clear-cut error of nomenclature and conceptualization, for which I will submit an erratum to this journal.
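As a trivial sketch of the point being made here (the 183 cut-off is the one stated above; the function name is hypothetical), the reported result is simply a binary threshold applied to a quantitative score:

```python
POSITIVE_THRESHOLD = 183  # composite score at or above which the result is reported "positive"

def cologuard_report(composite_score: float) -> str:
    """Binary result derived from a fully quantitative composite score."""
    return "positive" if composite_score >= POSITIVE_THRESHOLD else "negative"

print(cologuard_report(182))  # negative
print(cologuard_report(183))  # positive
```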

      Reference

1: Imperiale, T.F., Ransohoff, D.F., Itzkowitz, S.H., Turnbull, B.A., Ross, M.E. Colorectal Cancer Study Group. Fecal DNA versus fecal occult blood for colorectal cancer screening in an average risk population. N Engl J Med. 2004 Dec 23;351(26):2704-14. PubMed PMID: 15616205.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Jul 12, David Keller commented:

      Beyond rebuttal: constructive suggestions for unleashing the full power of ColoGuard screening

      My point-by-point rebuttal of the response by Dr. Berger and Dr. Lidgard to my commentary is necessary but of little importance. What is important is that their company's outstanding colon cancer screening system, an implementation of the MultiTarget algorithm marketed under the commercial name "ColoGuard", could function at an even higher level of accuracy if its full power were unleashed for each fecal sample screened.

      I propose that the Composite Score be used for risk-stratification of "negative" ColoGuard screens, so that we do not perpetuate the potentially dangerous situation we have now, where patients with Composite Scores ranging from zero to 182 are all treated the same: they are told their result is negative, nobody is certain when to repeat the test, but Medicare will pay for it again in 3 years. However, a composite score of 182 must confer a higher risk of being a false negative than a score of zero. Further, a composite score of 183 is only 0.5% higher than a score of 182, yet a patient with a score of 182 is told their screening result is negative and sent home, while a patient with score 183 is told he needs a colonoscopy immediately. That kind of discontinuity, called a Heaviside step function, is not found in nature. The risk of cancer must, as a natural phenomenon, vary across the range of composite scores in a manner which is smooth and continuous.

      In a large randomized clinical trial, Cologuard was found to have a sensitivity of 92.3% for detecting colorectal cancer, and hence, a false-negative rate of 7.7%. The Cologuard assays for malignancy-associated DNA yield results which are directly related to cancer risk, so the risk of a false-negative result must therefore vary directly with the composite score, increasing monotonically over the range of negative composite scores, from zero to 182. In other words, across the range of zero to 182, any composite score of N+1 must confer an incrementally higher risk of false-negative colon cancer than is conferred by a composite score of N. Therefore, patients whose Cologuard screening result is currently reported simply as "negative" can be further risk-stratified by their composite scores, for example:


      Score.........Repeat screening interval

      0 - 60........Repeat Cologuard in 3 years

      61 - 120......Repeat Cologuard in 2 years

      121 - 182.....Repeat Cologuard in 1 year
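Expressed as code, the proposed stratification is a simple lookup; this is a sketch only, and the break points are the illustrative ones proposed above, not validated values:

```python
def rescreening_interval_years(composite_score: int) -> int:
    """Map a negative-range Cologuard composite score (0-182) to the proposed
    repeat-screening interval, using the illustrative break points above."""
    if not 0 <= composite_score <= 182:
        raise ValueError("expected a negative-range composite score (0 - 182)")
    if composite_score <= 60:
        return 3
    if composite_score <= 120:
        return 2
    return 1

print(rescreening_interval_years(45))   # 3
print(rescreening_interval_years(150))  # 1
```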

      The retail price of one Cologuard test kit is $649, very expensive compared with standard fecal immunochemical screening for colon cancer, which retails for $15 per kit. Risk-stratification would concentrate medical resources where the risk is highest, by allocating additional Cologuard kits to patients at the highest risk of an initial false-negative screen. At the same time, fewer false positive screens would occur, compared with simply repeating the Cologuard screen annually for all patients.

      Validation of the composite score across its entire negative range of zero to 182, and determination of where to position the break points in composite score for optimal risk stratification, can be performed using Monte Carlo simulation, with models based on post-approval Cologuard clinical data. Similar techniques were employed by CISNET to compare the outcomes of various screening algorithms, and these simulation results were recently published in JAMA, to support the latest USPSTF colon cancer screening recommendation update [1]. These results can also be applied to hybrid screening strategies, devised in response to spending limits or other constraints [2].
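A toy sketch of the kind of Monte Carlo exercise described above might look as follows; the score distribution, risk curve, and break points are all invented assumptions, standing in for models that would have to be fitted to post-approval clinical data:

```python
import random

def simulate_stratum_risks(n=100_000, seed=1):
    """Draw composite scores from an assumed distribution, assign neoplasia
    status from an assumed monotone risk curve, and estimate the missed-lesion
    risk within each proposed stratum. Illustration only."""
    rng = random.Random(seed)
    strata = {"0-60": [0, 0], "61-120": [0, 0], "121-182": [0, 0]}  # [cases, total]
    for _ in range(n):
        score = min(182, int(rng.expovariate(1 / 40)))  # assumed score distribution
        risk = 0.001 + 0.0004 * score                   # assumed monotone risk curve
        case = rng.random() < risk
        key = "0-60" if score <= 60 else "61-120" if score <= 120 else "121-182"
        strata[key][0] += case
        strata[key][1] += 1
    return {k: round(c / t, 4) for k, (c, t) in strata.items() if t}

print(simulate_stratum_risks())
```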

      References

      1: Knudsen AB, Zauber AG, Rutter CM, Naber SK, Doria-Rose VP, Pabiniak C, Johanson C, Fischer SE, Lansdorp-Vogelaar I, Kuntz KM. Estimation of Benefits, Burden, and Harms of Colorectal Cancer Screening Strategies: Modeling Study for the US Preventive Services Task Force. JAMA. 2016 Jun 21;315(23):2595-609. doi: 10.1001/jama.2016.6828. PubMed PMID: 27305518.

      2: Keller DL. A Hybrid Non-Invasive Colon Cancer Screening Strategy, To Maximize Sensitivity With Medicare Coverage. PubMed Commons, accessed on 7/30/2016 at the following URL:

      http://www.ncbi.nlm.nih.gov/pubmed/27305518#cm27305518_22819


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 09, Lydia Maniatis commented:

      The trigger for choosing this article to comment on was the reference in the title to “low-level mediation.” It reflects one of the deadly sins of contemporary vision science, i.e. the invalid assumption that aspects of perceptual experience may be directly attributed to the activities of particular sets of neurons in particular, anatomically early (more closely linked to the retina) areas of the visual system.

One problem with such a view is that the activities of neurons in these areas (like the retinal cells themselves and, arguably, all the neurons in the visual system and beyond) are the physiological basis for all aspects of the percept. Thus, it doesn’t make sense to specially attribute, e.g., the movement of a grey dot to the special sensitivities of neural population x, when these underlie the perception of the grey dot, the background, the relative position of dot and frame, and every other feature of the percept. Such claims are, in other words, arbitrary and cannot be corroborated by ascertaining what subjects are seeing.

The most popular area to invoke in this respect is, as is the case here, area V1. The reason is historical: Hubel and Wiesel’s work on the striate cortex (corresponding to V1) of cats and later monkeys. They reported on the relative firing rates of neurons in response to straight bars differing in orientation, length, and direction of movement. These results were over-interpreted to mean that neurons in V1 were akin to specialized detectors of the particular stimuli, within the narrow set tested, that made them fire fastest. However, as Teller (1984) noted in her critique of common psychophysical linking propositions, (a) we know that the neural code is based not on maximum but on relative firing rates, and (b) there is an infinitely large equivalence class of stimuli that would cause these same neurons to fire.

      Nevertheless, investigators keep the flawed, fragile story flickering by employing stimuli and conditions whose results may be, though loosely and permissively, interpreted according to such crude assumptions. That the story is woefully inadequate (that investigators don’t know what factors to control so as to, at least, consistently interpret results) is reflected in the perennial admission, as in this article, that psychophysical experiments get conflicting results which investigators are unable to rationalize:

      “Differences in the outcomes of different psychophysical procedures have already been noted elsewhere, and perhaps deserve more attention…

      “Although our manipulation of attention did not produce a directional aftereffect, Lankheet and Verstraten’s (1995) manipulation of attention did. The reason for this discrepancy remains unclear…” Morgan et al (2016, p. 2630).

      “…our results (Exp 4) did not confirm the factual basis for the claim (Blaser et al , 2005) that a 90 degree probe…we cannot be certain why our results are different…Differences include the psychophysical method (2AFC rather than MSS…), the statistical methods of analysis, the use of colors that appeared equally salient to the observer….”

      If the authors are saying that even the statistical methods of analysis can potentially produce opposing interpretations of data, but investigators don’t understand how or why, what can possibly be the theoretical value of their own data/interpretations? How are we supposed to evaluate these things, if theoretical control of conditions is so inadequate and interpretation of experimental results so mystifyingly wide open and susceptible to inexplicable “butterfly effects”?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 24, Mike KLYMKOWSKY commented:

The cilium of a prokaryote is a completely different structure from that of a eukaryote. This makes the first sentence of the abstract obscure, to say the least.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 23, University of Kansas School of Nursing Journal Club commented:

      Team Members: Hayley Janner, Annie Yungmeyer, Katherine Barnthouse, McKenzie Baker, Dylan Severson, Macy McKee [Class of 2017]

      Background

Our team selected this article due to the significant role that workplace bullying plays in determining whether a healthy work environment can be established and maintained. Nursing has historically been associated with high rates of workplace bullying, especially towards new or inexperienced nurses--best known by the common expression “nurses eat their young.” Because of this reputation, the effect of bullying on work environments and nursing satisfaction should be explored.
      

Thus, this study fills a vital gap in the literature by examining both the causes and effects of workplace bullying in the microsystem. This article applies to several of the modules for processes of a healthy work environment, including authentic leadership, as Yokoyama et al. (2016) found that a lack of authentic leadership contributes to a work environment that fosters bullying, and creating a motivating environment, as workplace bullying has been found to be acutely and chronically demotivating for nurses.

      Methods

This article was found via a literature search of CINAHL. The study used a cross-sectional design, and data were collected via a self-administered questionnaire. The questionnaire was distributed to nurses in Japan attending various nursing conferences, all of which were unaffiliated with any nursing workplaces. The survey consisted of the Negative Acts Questionnaire-Revised (NAQ-R), which assesses workplace bullying behaviors, and the Practice Environment Scale of the Nursing Work Index (PES-NWI), which assesses the quality of the work environment. Participant demographics and other workplace factors (such as average hours of overtime work, average days off in a month, opportunity to request work off, and more) were also collected in the questionnaire (Yokoyama et al., 2016). The issue of workplace bullying affects nurses in all areas of the world. However, because the population surveyed was Japanese nurses, this study directly represents the work environments of Japanese nurses. Nonetheless, the study is still significant, as its results could provide a starting point for researchers in other countries studying the relationship between workplace bullying and a healthy work environment.
      

      Findings

The study found that workplace bullying in Japan is a significant issue, as 18.5% of participants were classified by survey responses as being victims of bullying. Bullying was considered to have taken place when participants reported that any negative behavior mentioned in the NAQ-R was directed towards them either on a “weekly” or “daily” basis. The most common bullying behaviors that took place were “someone withholding information that affects your performance,” “being exposed to an unmanageable workload,” and “being shouted at or being the target of spontaneous anger (or rage)” (Yokoyama et al., 2016, p. 2481). Demographic and workplace factors determined to be associated with being a victim of bullying included being “unmarried, holding a bachelor’s degree or higher, having registered nurse and additional qualifications, fewer years of nursing experience, fewer years of experience in current workplace, more overtime hours per day, not always having the opportunities to request days off, working on more days off, and a less HWE (defined by lower than average scores on the five PES-NWI subscales)” (Yokoyama et al., 2016, pp. 2481-2482). Workplace bullying was also associated with lower scores on the nurse manager section of the PES-NWI, including leadership, ability, and support of nurses.
      
There are several limitations to this study. One is the cross-sectional design, which prevented the authors from establishing causal factors for bullying--only correlational factors can be described. Secondly, because the questionnaire was answered by the nurses themselves, results may be subjective, depending on the nurses’ mindset and personal evaluation of appropriate workplace conduct and bullying, among other factors. Finally, this study surveyed only Japanese nurses. Thus, while the study may be representative of workplace bullying in Japanese nursing environments, it may not be possible to generalize its findings to all nursing workplace bullying.
      

      Implications

Workplace bullying is an important issue in nursing due to the numerous negative consequences it has been linked with, most importantly reduced nurse retention and satisfaction, nurse depression, and lowered quality of patient care. Nurse depression contributes to burnout and frequent turnover, adding to the ever-rising costs of the healthcare industry. Additionally, depression can decrease the quality of patient care, leading to increased medical errors and negative patient outcomes. Thus, workplace bullying in nursing is a vital issue that needs to be addressed, as its consequences run contrary to the purpose of medicine and nursing as a whole.
      
This issue is important to us on a personal level as graduating nursing students, since we came into nursing to help and heal patients because we care deeply about them. These caring values and attitudes should extend to our coworkers as well. It is utterly unacceptable that a significant number of individuals in an industry known for helping people heal instead choose to tear each other down. Our efforts should instead be focused on banding together not only to provide the best care possible to patients, but also to care for and nurture each other—the same reason that the majority of us entered nursing in the first place.
      
This study is also important in highlighting the importance of leadership development in the microsystem and the implications of the leadership role, given the contribution of poor nursing leadership to workplace bullying. In their study, Yokoyama et al. (2016) found that individuals who scored their managers low on leadership, ability, and support of nurses were more likely to be victims of workplace bullying. In the PES-NWI, leadership specifically comprises five items that are similar to traits held by authentic leaders. Nurse managers who are not effective leaders are less effective at dismantling a culture of bullying on their unit. Thus, development of authentic leadership in nurse managers carries the possibility of reducing the workplace bullying that nurses experience.
      

This article contributes greatly to our future nursing practice because it has clearly displayed to us the far-reaching impact that poor leadership can have on several unit factors, such as the inability to change a toxic, bullying workplace culture, which in turn leads to poor patient outcomes. This publication has re-emphasized to us the importance of practicing authentic leadership in our own careers, especially if we advance into leadership positions such as nurse manager. Finally, we will not turn a blind eye to workplace bullying in our own future employment, as the consequences for patients are simply not acceptable.

References

Yokoyama, M., Suzuki, M., Takai, Y., Igarashi, A., Noguchi-Watanabe, M. and Yamamoto-Mitani, N. (2016). Workplace bullying among nurses and their related factors in Japan: a cross-sectional survey. Journal of Clinical Nursing, 25: 2478–2488. doi:10.1111/jocn.13270


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 19, Prashant Sharma, MD, DM commented:

      Quoted from the abstract above, "Following WHO recommendations, refractory anemia with excess blasts (RAEB)-2 diagnosis is not possible in MDS-E, as patients with 10% to < 20% BM blasts from TNCs fulfill erythroleukemia criteria".

      Err... where would WHO 2008 place a case with 51% erythroid cells and 3% blasts with Auer rods that's negative for the leukemia-defining recurring genetic abnormalities?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 19, Prashant Sharma, MD, DM commented:

Very interesting... This would certainly upgrade more than a few cases. Hopefully the WHO is listening, and other large centres are reanalyzing their marrow differentials to confirm or refute this "game-changing" paper.

What about the blast% in remission-status marrows from acute leukemia patients? It is currently mostly calculated from TNCs, but based on this study, is it possible that calculating it from NECs would provide better prognostication?

      • Prashant Sharma, Assoc. Prof. of Hematology, PGIMER, Chandigarh, India


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 27, Santosh Kondekar commented:

It was interesting to read about the pathogenesis of COPD, which is beautifully differentiated here from asthma. The concept of early childhood onset of COPD needed an extra mention, as COPD of early onset often has its roots in early childhood. Early-onset COPD often follows some neonatal or infantile insult that causes a fixed parenchymal injury, which makes the airways more susceptible to dysplasia or remodelling. Because these cases are often overlooked as refractory childhood asthma, they are often mistreated; early detection of a low FEV1 and of non-responsiveness to steroids should compel a clinician to take an appropriate step, such as a chest CT at an early age, to pick up fixed parenchymal damage or fixed airway changes. Clinically, to avoid overdiagnosis of asthma, we follow criteria called "other than asthma" or OTA criteria; these help us segregate cases with potential for development of COPD in the future. The OTA criteria can be read at http://childasthma.weebly.com/ota-other-than-asthma-or-asthma-like-illnesses.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 14, David Keller commented:

      The placebo effect is mediated via the very same neural pathways damaged in Parkinson's disease

      Neural imaging studies have demonstrated that the placebo effect in healthy subjects is mediated by the same neural pathway damaged in Parkinson disease (PD), namely the transport and release of dopamine from the substantia nigra to the striatum.[1] This is why placebos of all types can cause surprisingly robust and durable clinical improvements in PD. The authors of this anecdotal case note that meditation has been shown to release dopamine in the striatum. The questions include: does meditation release more dopamine than placebo? What is an effective placebo control for meditation, and how can it be distinguished or truly differ from real meditation? How can subjects be instructed to meditate or placebo-meditate in a double-blinded fashion? and so on. The hypothesis that meditation might cure, remit, or benefit PD patients in any way more so than placebo seems difficult if not impossible to test. For people with Parkinson's who doze off when they try to meditate, it may at least be helpful in inducing drowsiness, which in turn can increase their sleep benefit.

Incidentally, there was no indication for a DaT scan in this patient, whose clinical diagnosis of Parkinson's disease was indisputable at the time the scan was done. I point this out because the DaT scan selectively delivers 21 times as much ionizing radiation to the striatum as does a standard CT scan of the whole brain.[2]

      References

      1: de la Fuente-Fernández R, Lidstone S, Stoessl AJ. Placebo effect and dopamine release. J Neural Transm Suppl. 2006;(70):415-8. Review. PubMed PMID: 17017561.

2: Keller DL. Proposal for a clinical trial to test the safety of a widely-used radionuclide scan. Comment on PMID: 26236969. In: PubMed Commons [Internet]. 2015 Sept 15 [cited 2015 Nov 20]. Available from http://www.ncbi.nlm.nih.gov/pubmed/26236969#cm26236969_11818


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 08, Martin Mayer commented:

      Reply to Dong D. Wang, MD, ScD and Frank B. Hu, MD, PhD

      Author: Martin Mayer, MS, PA-C

      Conflict of interest: None

      I sincerely appreciate the reply from Drs. Wang and Hu, including the time they took to read my original commentary on their study and the time they took to compose a response. However, their reply ultimately does not resolve the issues I present in my original commentary, and I am concerned they may mistakenly believe I am attempting to dismiss entirely the field of nutritional epidemiology or the potential benefits of a sound diet; neither of these are true, and nothing herein or in my original commentary should be construed as a suggestive, definitive, or de facto exoneration or dismissal of various patterns of fat intake or dietary composition. Such impressions would suggest having missed the central thrust behind my original commentary, namely (1) researchers should always endeavor to provide balanced and objective qualitative and quantitative context for their research findings, and (2) those reading research articles should consider these issues during evidence appraisal, synthesis, translation, and application. Nevertheless, and even though I am a strong advocate for healthy lifestyles (including a sound diet), I stand by my original commentary, and I respond here in a point-by-point fashion.

      (Post edited after original posting to update the link to my reply, as it was not displaying correctly.)

      Note: I edited this post and my full reply to Wang and Hu on September 14, 2017 to update all URLs hyperlinking to my original commentary due to a rebranding of the website on which my blog post appears. I did not make any other changes to this post (I even include the originally-present parenthetical note about editing the original post due to issues with how my reply was displaying) or my full reply. The original post appears below in its original form for the sake of completeness and transparency of the record, as does the original link to my full reply to Wang and Hu.

      -----------------------Begin original post from March 8, 2017-----------------------

      Reply to Dong D. Wang, MD, ScD and Frank B. Hu, MD, PhD

      Author: Martin Mayer, MS, PA-C

      Conflict of interest: None

      I sincerely appreciate the reply from Drs. Wang and Hu, including the time they took to read my original commentary on their study and the time they took to compose a response. However, their reply ultimately does not resolve the issues I present in my original commentary, and I am concerned they may mistakenly believe I am attempting to dismiss entirely the field of nutritional epidemiology or the potential benefits of a sound diet; neither of these are true, and nothing herein or in my original commentary should be construed as a suggestive, definitive, or de facto exoneration or dismissal of various patterns of fat intake or dietary composition. Such impressions would suggest having missed the central thrust behind my original commentary, namely (1) researchers should always endeavor to provide balanced and objective qualitative and quantitative context for their research findings, and (2) those reading research articles should consider these issues during evidence appraisal, synthesis, translation, and application. Nevertheless, and even though I am a strong advocate for healthy lifestyles (including a sound diet), I stand by my original commentary, and I respond here in a point-by-point fashion.

      (Post edited after original posting to update the link to my reply, as it was not displaying correctly.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 14, Lydia Maniatis commented:

      "Our findings are consistent with ... biological plausibility...." "Plausibility" is quite a low and rather subjective bar; an argument against outright rejection, not an argument in support of...


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Feb 11, Dong Wang commented:

      Reply to Martin Mayer, MS, PA-C

      Authors: Dong D. Wang, MD, ScD and Frank B. Hu, MD, PhD

      From the Departments of Nutrition (DDW and FBH) and Epidemiology (FBH), Harvard T. H. Chan School of Public Health, Boston, MA; The Channing Division for Network Medicine, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA (FBH)

We agree with Mr. Mayer that ‘well-designed, well-executed randomized controlled trials (RCTs)’ can provide strong evidence for the causal effect of dietary fatty acids on mortality. However, because of multiple methodological limitations, e.g., poor compliance and high drop-out rates, decades-long RCTs testing effects of dietary interventions on hard endpoints, such as cardiovascular disease (CVD) incidence and mortality, are extremely difficult to conduct [1]. High cost and ethical considerations are additional challenges for conducting such an RCT. Further, the notion that RCTs are ‘confounding-free’ holds only when there are low rates of drop-out and a high degree of compliance. In most large-scale, long-term RCTs, biases may occur after baseline randomization due to differential adherence to assigned treatment regimens, differential loss to follow-up, and other differences between comparison groups [2]. In addition, our findings based on prospective cohorts are consistent with the effects of replacing saturated fatty acid (SFA) with polyunsaturated fatty acid (PUFA) on both blood lipids [3] and cardiovascular disease [4] from RCTs. Thus, in most situations, large prospective cohort studies of hard clinical endpoints, when well designed and interpreted in the context of smaller RCTs on intermediate endpoints such as blood lipids, can provide the best available evidence to inform dietary recommendations. One such example is trans fat. Large epidemiologic studies like ours found a consistent positive association between trans fat intake and risk of cardiovascular disease. Meanwhile, small RCTs found that trans fatty acids increase total and LDL cholesterol. The combination of these two types of evidence led to policies that resulted in food labeling and bans in the food supply [5].

Citing Nissen and Ioannidis’ attacks on methodological issues of nutritional epidemiology [6, 7], Mr. Mayer questioned the validity of the food frequency questionnaires (FFQs) in assessing dietary intakes. However, Nissen and Ioannidis’ viewpoints and Mr. Mayer’s question simply reflect a lack of understanding of the basic methodology of nutritional epidemiology and human nutrition research. Contrary to Mr. Mayer’s claim, our food frequency questionnaires (FFQs) have been demonstrated to be a useful and valid dietary assessment instrument to measure long-term usual dietary intake in well-conducted epidemiological studies [1, 8]. The validity of our FFQs against multiple-day diet records and biomarkers in the validation studies has been extensively documented [8]. For example, the correlations between energy-adjusted intakes assessed by the 1986 FFQ and the mean of multiple weighed 1-week dietary records collected in 1980 and 1986, corrected for variation in the records, were 0.67 for total fat, 0.70 for SFA, 0.69 for MUFA, and 0.64 for PUFA [8]. Correlations increased when the mean of 3 FFQs (1980, 1984, and 1986) was used; for example, for SFAs the correlation was 0.95. The correlations between dietary fatty acid intake assessed by the FFQ and the composition of fatty acids in adipose tissue were 0.51 for TFA, 0.35 for LA, and 0.48 for long-chain n-3 PUFA in the NHS [9], and 0.29 for TFA, 0.48 for LA, and 0.47 for EPA in the HPFS. Moreover, adjustment for total energy intake, along with use of cumulative average intake calculated from many repeated FFQs, further dampens the measurement errors and improves the validity [8].
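For readers unfamiliar with the phrase "corrected for variation in the records", a minimal sketch of the usual deattenuation correction is given below. The form used here, r_corrected = r_observed × sqrt(1 + λ/n), with λ the within- to between-person variance ratio of the reference records and n the number of record replicates, is the commonly cited Rosner–Willett form and is an assumption on our part, as are all the numbers in the example; the exact method used in the cited validation studies may differ.

```python
import math

def deattenuated_correlation(r_observed, lambda_ratio, n_replicates):
    """Correct an observed FFQ-vs-record correlation for random within-person
    variation in the reference records (commonly cited deattenuation form)."""
    return r_observed * math.sqrt(1 + lambda_ratio / n_replicates)

# Hypothetical inputs, purely to show the mechanics of the correction
print(round(deattenuated_correlation(0.55, 1.5, 7), 2))
```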

By pointing out that our study population was ‘exclusively health care professionals with noteworthy exclusion criteria’, Mr. Mayer questioned the generalizability of our findings. However, the effect estimates represent the underlying physiological mechanisms relating fatty acid intake to mortality, which are generally applicable to other populations. In addition, for the estimated effect of substituting SFA with PUFA, the hazard ratio (HR) of CVD mortality in our study (0.72, 95% CI, 0.65-0.80) is similar to the HR of coronary death (0.74, 95% CI, 0.61-0.89) estimated from a pooled analysis including 11 cohorts with diverse sociodemographic characteristics, which further supports the generalizability of our findings [10]. Because our study intended to mimic a primary prevention setting, we excluded participants with major chronic diseases, including CVD, cancer and diabetes, at baseline. Contrary to Mr. Mayer’s assertion, by applying these exclusion criteria our study produced findings that are more generalizable for informing dietary recommendations for primary prevention of disease outcomes in the general population.

Mr. Mayer criticized our use of the HR, a ratio measure, and claimed that reporting only HRs is ‘considerably less informative and can contribute to distorted appraisal of research findings’. These assertions are unfounded. Both ratio and difference measurements have their own merits and usefulness. Difference measures capture the public health and clinically relevant effect of exposure, whereas relative measures capture the biological strength of the association between an exposure and a disease outcome. Therefore, reporting HRs is compatible with the objective of our study, i.e., to examine the associations of specific dietary fats with total and cause-specific mortality. From a technical perspective, HRs are the default output estimated by the multiplicative Cox proportional hazards model, the most robust and widely applied statistical model for time-to-event data. It is important to note that HRs can be compared across different studies and populations, whereas difference measures are difficult to compare because of different baseline risks in different populations.
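As background for readers, a minimal sketch of how an HR is obtained from a Cox proportional hazards model is shown below, using the third-party lifelines package and entirely synthetic data; it is not the authors' analysis, and the "true" hazard ratio of 0.7 is an arbitrary assumption chosen only for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
exposure = rng.integers(0, 2, n)  # hypothetical binary exposure (e.g., PUFA replacing SFA)
baseline_hazard = 0.05
# simulate exponential event times with an assumed hazard ratio of 0.7 in the exposed group
time_to_event = rng.exponential(1 / (baseline_hazard * np.where(exposure == 1, 0.7, 1.0)))
censor_time = rng.exponential(30, n)

df = pd.DataFrame({
    "exposure": exposure,
    "duration": np.minimum(time_to_event, censor_time),
    "event": (time_to_event <= censor_time).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.hazard_ratios_)  # HR for `exposure` should be close to the assumed 0.7
```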

      In summary, our study provided strong evidence because of the solid study design, such as many repeated measurements of diet, validated measurement methods and high follow-up rates over decades, and sophisticated statistical analysis, i.e., extensive adjustment for a large number of potential confounding factors. Our findings are consistent with other high-quality evidence from both observational cohort studies and RCTs [3, 4, 10] and meet multiple key Bradford-Hill criteria, including the strength and consistency of the evidence, biological plausibility, temporal relationships and experimental evidence on intermediate biomarkers.

      Conflict of interest: None

      Reference

      [1] Satija, A., et al., Advances in nutrition, 2015. 6(1): p. 5-18.

      [2] Manson, J.E., et al., Jama, 2016. 315(21): p. 2273-4.

      [3] Mensink, R.P., et al., The American journal of clinical nutrition, 2003. 77(5): p. 1146-55.

      [4] Mozaffarian, D., et al., PLoS Med, 2010. 7(3): p. e1000252.

[5] National Conference of State Legislatures: http://www.ncsl.org/issues-research/health/trans-fat-and-menu-labeling-legislation.aspx.

      [6] Ioannidis, J.P., BMJ, 2013. 347: p. f6698.

      [7] Nissen, S.E., Annals of internal medicine, 2016. 164(8): p. 558-559.

      [8] Willett, W.C., Nutritional epidemiology 2013, Oxford University Press.

      [9] London, S.J., et al., The American journal of clinical nutrition, 1991. 54(2): p. 340-5.

      [10] Jakobsen, M.U., et al., The American journal of clinical nutrition, 2009. 89(5): p. 1425-32.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jan 14, Martin Mayer commented:

      Reporting and appraising research: a cautionary tale

      Substituting various fats for carbohydrates or saturated fat: an uncertain recipe missing quantitative context and a cautionary example of reporting and appraising research

      Broadly speaking, science is a way of thinking that involves asking answerable questions about phenomena and then systematically and impartially pursuing means to reduce uncertainty about the answer as much as possible. During the pursuit, findings must always be appropriately contextualized to avoid inaccurate, disproportionate, or otherwise mistaken interpretations, as such mistaken interpretations run contrary to the raison d’être of scientific inquiry. Unfortunately, confusion about and mistaken or overreaching interpretations of research abound.

      Wang and colleagues recently published an article in JAMA Internal Medicine investigating various patterns of fat intake on total and cause-specific mortality. Their article speaks to the above and will add tangibility to the above considerations; it therefore serves as an instructive example to be considered in some detail, but the concepts considered herein are certainly more broadly applicable.

      Read the rest here (http://blogs.bmj.com/bmjebmspotlight/2016/10/03/reporting-and-appraising-research-a-cautionary-tale/).

      Note: I edited this post on September 14, 2017 to update all URLs hyperlinking to my original commentary due to a rebranding of the website on which my blog post appears. I did not make any other changes. The original post appears below in its original form for the sake of completeness and transparency of the record.

      -----------------------Begin original post from January 14, 2017-----------------------

      Reporting and appraising research: a cautionary tale

      Substituting various fats for carbohydrates or saturated fat: an uncertain recipe missing quantitative context and a cautionary example of reporting and appraising research

      Broadly speaking, science is a way of thinking that involves asking answerable questions about phenomena and then systematically and impartially pursuing means to reduce uncertainty about the answer as much as possible. During the pursuit, findings must always be appropriately contextualized to avoid inaccurate, disproportionate, or otherwise mistaken interpretations, as such mistaken interpretations run contrary to the raison d’être of scientific inquiry. Unfortunately, confusion about and mistaken or overreaching interpretations of research abound.

      Wang and colleagues recently published an article in JAMA Internal Medicine investigating various patterns of fat intake on total and cause-specific mortality. Their article speaks to the above and will add tangibility to the above considerations; it therefore serves as an instructive example to be considered in some detail, but the concepts considered herein are certainly more broadly applicable.

      Read the rest here (http://blogs.bmj.com/ebm/2016/10/03/reporting-and-appraising-research-a-cautionary-tale/).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 19, Venkatesh Thiruganasambandamoorthy commented:

      Thanks to QxMD and Dr. Schwartz for bringing it to end users as a convenient app


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 02, Daniel Schwartz commented:

      The Canadian Syncope Risk Score can be accessed at the point of care using the mobile and web app 'Calculate':

      http://www.qxmd.com/calculate/calculator_383/canadian-syncope-risk-score-csr

      Conflict of interest: Medical Director, QxMD


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 11, Johannes W. Dietrich commented:

      We got feedback from readers that SPINA-GT and SPINA-GD are difficult to calculate. There is free software available from http://spina.sf.net that can calculate SPINA parameters and Jostel's TSH index from equilibrium concentrations of TSH, free T4 and free T3. In addition, software for using the UCLA platform is available on request from the UCLA Biocybernetics Laboratory.
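      For readers who want a quick orientation before turning to the software, the minimal sketch below computes Jostel's TSH index from its commonly published formula (ln TSH + 0.1345 × FT4, with TSH in mIU/l and FT4 in pmol/l). This is an illustrative example only; the coefficient and the example values are assumptions of this sketch, and SPINA-GT and SPINA-GD additionally depend on the structure parameters of the underlying model, so those should be calculated with the referenced software.

      import math

      def jostel_tsh_index(tsh_miu_per_l, ft4_pmol_per_l):
          # Jostel's standard TSH index: ln(TSH) + 0.1345 * FT4
          # (TSH in mIU/l, FT4 in pmol/l); coefficient as commonly published.
          return math.log(tsh_miu_per_l) + 0.1345 * ft4_pmol_per_l

      # Hypothetical example values: TSH = 2.0 mIU/l, FT4 = 15 pmol/l
      print(round(jostel_tsh_index(2.0, 15.0), 2))  # about 2.71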


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 15, Anthony Jorm commented:

      In their response to the letter by Jorm et al., which questioned whether scaling up treatment is likely to reduce the prevalence of depression and anxiety (Jorm AF, 2016), Chisholm and colleagues commented that “population ageing… in many contexts will cancel out or more than counteract the impact of treatment” (Chisholm D, 2016). As far as high-income countries are concerned, this is unlikely. A review of age-group differences in the risk of anxiety and depression found that the evidence supported a reduction in risk with older age (Jorm AF, 2000). These findings imply that population ageing is unlikely to counteract the impact of increased provision of treatment on the prevalence of depression and anxiety in high-income countries.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 27, Kerin Tyrrell commented:

      This article can also be viewed and/or downloaded from an author's personal web site:

      https://is.gd/CQP1TZ


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 27, Nisha G Arya commented:

      We reran the meta-analysis using the new version of GingerALE (2.3.6). We obtained exactly the same coordinates as in our previous analysis, except that the midbrain and pons were no longer identified as significantly activated. In the discussion of the paper, we had already noted that the midbrain and pons are difficult to image, so these new results are not surprising.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 09, Christopher Tench commented:

      The version of GingerALE used (2.3.1) had bugs that produced false-positive results. The bugs were fixed in versions 2.3.3 and 2.3.6.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 28, Wichor Bramer commented:

      I spotted a very small error that might lead to confusion in table 1: In C3 it says: "Review the top references without page numbers and those with page numbers, starting with number 1 for equivalent author names."

      We meant: "Review the top references, those without page numbers and those with page numbers starting with number 1, for equivalent author names."

      It is not necessary to review all references manually. Only references that lack a page number, or whose page range starts at 1 (e.g., 1-3, 1-27), can potentially contain false duplicates and should be checked manually, as sketched below.
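      As a rough illustration of this rule only (the data structure and field names here are hypothetical and not part of the published method), a short script could flag the records that still need a manual check:

      def needs_manual_check(pages):
          # Flag references that can hide false duplicates under the rule above:
          # no page numbers at all, or a page range starting at 1 (e.g. "1-3", "1-27").
          pages = (pages or "").strip()
          return pages == "" or pages.split("-")[0].strip() == "1"

      records = [
          {"author": "Smith J", "pages": "1-27"},    # needs manual check
          {"author": "Smith J", "pages": "341-50"},  # can be skipped
          {"author": "Jones K", "pages": ""},        # needs manual check
      ]
      to_review = [r for r in records if needs_manual_check(r["pages"])]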


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 05, Kath Wright commented:

      This paper has been added to the ISSG Search Filters Resource at https://sites.google.com/a/york.ac.uk/issg-search-filters-resource/adverse-events-filters


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 19, Kelly Farrah commented:

      Happy to hear that you found the analysis useful Michelle. Which filter did you end up choosing?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 14, Michelle Fiander commented:

      Thanks very much for this analysis; am in the midst of a project and you have helped me determine which filter to use.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 14, Thomas E. Nichols commented:

      This work has two unfortunate statements that can be misunderstood to mean that the findings apply to all "40,000" publications in the fMRI literature. The following two corrections will resolve this problem:

      The last sentence of the Significance statement should read: “These results question the validity of a number of fMRI studies and may have a large impact on the interpretation of weakly significant neuroimaging results.”

      The first sentence after the heading “The future of fMRI” should have read: “Due to lamentable archiving and data-sharing practices it is unlikely that problematic analyses can be redone.”

      For more on this, see the blog entry "Bibliometrics of Cluster Inference" http://blogs.warwick.ac.uk/nichols/entry/bibliometrics_of_cluster/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 28, GERALD SMITH commented:

      Comments on an article by Stahl et al., Genetics (2016)

      In their article “Apparent epigenetic meiotic double-strand-break disparity in Saccharomyces cerevisiae: A meta-analysis,” STAHL et al. (2016) reanalyze published data on meiotic gene conversion patterns in S. cerevisiae. They infer that the pattern is subject to epigenetic influence and suggest that further experiments, which they are not able to conduct, should be done to test this idea. Relevant data showing this feature had been previously published in Genetics by the group of JÜRG KOHLI.

      In their article “The mating-type-related bias of gene conversion in Schizosaccharomyces pombe,” PARVANOV et al. (2008) assayed gene conversion at the ura4A hotspot during meiosis. They mated two pairs of strains that were isogenic except for the coupling relations of the mat and ura4A alleles, which are on separate chromosomes. Conversion was assayed after no mitotic divisions (zygotic meiosis) and after extensive (52 or 70) mitotic divisions (azygotic meiosis). The coupling relation made a highly significant (t-test, p < 0.001) 2-fold difference in zygotic meiosis, as also seen in extensive data in BAUR et al. (2005), but no significant difference in azygotic meiosis (ratios of 1.04 and 1.05 were observed). Thus, the coupling effect disappears during mitotic growth, fully consistent with the coupling effect on meiotic gene conversion pattern being epigenetic. BAUR et al. showed that the homolog entering the zygotic crosses in coupling with the h+ mating-type allele converted to wt about twice as frequently as did the homolog in coupling with h-.

      The reduction or abolition of the coupling effect by mutations that remove histone-modifying enzymes (the acetyltransferases Gcn5 and Ada2 and the deacetylase Clr6), shown by PARVANOV et al., is also strong evidence that this effect is via chromatin structure (one definition of “epigenetic”).

      It seems simplest to consider this epigenetic effect to be differential frequency of DSBs at the ura4A hotspot, depending on the coupling relation with the mating-type locus. Differential DSB frequency on the homologs was proposed by STAHL et al. for the effects on gene conversion patterns at HIS4 of S. cerevisiae reported by them and others. In both cases, however, the effect could be via differential repair of a DSB with the sister (resulting in no visible conversion, i.e., restoration) or with the homolog (potentially producing a convertant, either full or half). PARVANOV et al. discuss this possibility of differential repair, citing the differential binding of Swi5 DNA strand-exchange protein to the silent mat2 – mat3 loci in h- and h+ strains (JIA et al. 2004). But this possibility seems at odds with swi2Δ having no significant effect on the coupling effect, yet Swi2 being required for the differential binding of Swi5. In addition, it is unclear that the differential binding of Swi5 to heterochromatin (mat2 – mat3) extends to euchromatin (i.e., at ura4A). Since ura4A has an exceptionally strong meiotic DSB hotspot associated with this transplacement (GREGAN et al. 2005) and since meiotic DSBs are affected by chromatin structure, it seems likely that the effect is via differential DSB frequency, as BAUR et al. and PARVANOV et al. also discuss.

      Regardless of the molecular basis of the coupling effect, its disappearance upon mitotic growth of the diploid and its dependence on chromatin modifications establishes the effect as “epigenetic” by a commonly used definition. At the end of their Discussion, STAHL et al. say, “Of course, the conclusions and surmises of this paper are testable by the execution of properly controlled crosses, studies that we are unable to undertake ourselves.” These surmises had been tested years earlier by BAUR et al. and PARVANOV et al. and found to be true. It is unfortunate that their work was not cited by STAHL et al.

      BAUR, M., E. HARTSUIKER, E. LEHMANN, K. LUDIN, P. Munz et al., 2005 The meiotic recombination hot spot ura4A in Schizosaccharomyces pombe. Genetics 169: 551-561.

      GREGAN, J., P. K. RABITSCH, B. SAKEM, O. CSUTAK, V. LATYPOV et al., 2005 Novel genes required for meiotic chromosome segregation are identified by a high-throughput knockout screen in fission yeast. Current Biology 15: 1663-1669.

      JIA, S., T. YAMADA and S. I. GREWAL, 2004 Heterochromatin regulates cell type-specific long-range chromatin interactions essential for directed recombination. Cell 119: 469-480.

      PARVANOV, E., J. KOHLI and K. LUDIN, 2008 The mating-type-related bias of gene conversion in Schizosaccharomyces pombe. Genetics 180: 1859-1868.

      STAHL, F. W., M. B. REHAN, H. M. FOSS and R. H. BORTS, 2016 Apparent epigenetic meiotic double-strand-break disparity in Saccharomyces cerevisiae: A meta-analysis. Genetics 204: 129-137.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 24, azita Hekmatdoost commented:

      Comments on “No effects of oral vitamin D supplementation on non-alcoholic fatty liver disease in patients with type 2 diabetes: a randomized, double-blind, placebo-controlled trial”

      Makan Cheraghpour (1,a); Alireza Ghaemi (2,a); Azita Hekmatdoost (3,*)

      1 Nutrition and Metabolic Diseases Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, IR Iran

      2 Department of Basic Sciences and Nutrition, Health Sciences Research Center, School of Public Health, Mazandaran University of Medical Sciences, Sari, IR Iran

      3 Department of Clinical Nutrition, Faculty of Nutrition and Food Technology, Shahid Beheshti University of Medical Sciences, Tehran, IR Iran

      * Corresponding author: Azita Hekmatdoost, Department of Clinical Nutrition, Faculty of Nutrition and Food Technology, Shahid Beheshti University of Medical Sciences, Tehran, IR Iran

      a These two authors have equally contributed to this work.

      Barchetta et al. (1) recently reported that vitamin D supplementation for 24 weeks had no effect on non-alcoholic fatty liver disease (NAFLD) in patients with type 2 diabetes. Their results showed that high-dose vitamin D led to no significant changes in metabolic and cardiovascular parameters or in hepatic steatosis in these patients. Given the contradiction between these results and previous studies (2-9), we would like to point out some overlooked aspects of this otherwise outstanding study.

      First, NAFLD is now one of the most common chronic diseases in the world, and it is directly linked to other metabolic disorders such as obesity, type 2 diabetes and cardiovascular disease (10). Lifestyle appears to play an important role in the development and progression of the disease, whose prevalence varies widely with diet and lifestyle (11-12). In studies of patients with NAFLD, evaluating the diet and its components, such as energy, processed meat, total fat, trans and saturated fatty acids, and the type of carbohydrates, is essential, because changes in their intake can confound the results of this kind of study (13-19). However, dietary intake was not assessed in this study, and the participants were not given any dietary recommendations at baseline to reduce this source of confounding.

      Second, many studies have shown that physical activity can lead to significant improvements in metabolic parameters and liver steatosis in patients with NAFLD (11). This confounding factor was not controlled for in this study, as it was neither assessed during the trial nor addressed by any recommendation to the patients.

      Third, sunlight is one of the most important sources of vitamin D in the human body; sun exposure alone can supply the daily requirement of vitamin D. This variable was not assessed in the study, so its confounding effect could not be accounted for.

      Finally, we recommend that this study be revisited, because vitamin D is a cheap, safe supplement that is acceptable to most patients. Vitamin D supplementation for the treatment or prevention of metabolic diseases such as NAFLD could therefore benefit the health and well-being of society. More research is needed in this area.

      References

      1. Barchetta I, Del Ben M, Angelico F, Di Martino M, Fraioli A, La Torre G, et al. No effects of oral vitamin D supplementation on non-alcoholic fatty liver disease in patients with type 2 diabetes: a randomized, double-blind, placebo-controlled trial. BMC Med. 2016;14:92.

      2. Chung GE, Kim D, Kwak MS, Yang JI, Yim JY, Lim SH, et al. The serum vitamin D level is inversely correlated with nonalcoholic fatty liver disease. Clin Mol Hepatol. 2016 Mar;22(1):146-51.

      3. Foroughi M, Maghsoudi Z, Askari G. The effect of vitamin D supplementation on blood sugar and different indices of insulin resistance in patients with non-alcoholic fatty liver disease (NAFLD). Iran J Nurs Midwifery Res. 2016 Jan-Feb;21(1):100-4.

      4. Leung PS. The Potential Protective Action of Vitamin D in Hepatic Insulin Resistance and Pancreatic Islet Dysfunction in Type 2 Diabetes Mellitus. Nutrients. 2016 Mar;8(3):147.

      5. Luger M, Kruschitz R, Kienbacher C, Traussnigg S, Langer FB, Schindler K, et al. Prevalence of Liver Fibrosis and its Association with Non-invasive Fibrosis and Metabolic Markers in Morbidly Obese Patients with Vitamin D Deficiency. Obes Surg. 2016 Mar 17.

      6. Mohamed Ahmed A, Abdel Ghany M, Abdel Hakeem GL, Kamal A, Khattab R, Abdalla A, et al. Assessment of Vitamin D status in a group of Egyptian children with non alcoholic fatty liver disease (multicenter study). Nutr Metab (Lond). 2016;13:53.

      7. Nelson JE, Roth CL, Wilson LA, Yates KP, Aouizerat B, Morgan-Stevenson V, et al. Vitamin D Deficiency Is Associated With Increased Risk of Non-alcoholic Steatohepatitis in Adults With Non-alcoholic Fatty Liver Disease: Possible Role for MAPK and NF-kappaB? Am J Gastroenterol. 2016 Jun;111(6):852-63.

      8. Wang D, Lin H, Xia M, Aleteng Q, Li X, Ma H, et al. Vitamin D Levels Are Inversely Associated with Liver Fat Content and Risk of Non-Alcoholic Fatty Liver Disease in a Chinese Middle-Aged and Elderly Population: The Shanghai Changfeng Study. PLoS One. 2016;11(6):e0157515.

      9. Zhai HL, Wang NJ, Han B, Li Q, Chen Y, Zhu CF, et al. Low vitamin D levels and non-alcoholic fatty liver disease, evidence for their independent association in men in East China: a cross-sectional study (Survey on Prevalence in East China for Metabolic Diseases and Risk Factors (SPECT-China)). Br J Nutr. 2016 Apr;115(8):1352-9.

      10. Eslamparast T, Eghtesad S, Poustchi H, Hekmatdoost A. Recent advances in dietary supplementation, in treating non-alcoholic fatty liver disease. World J Hepatol. 2015 Feb 27;7(2):204-12.

      11. Ghaemi A, Taleban FA, Hekmatdoost A, Rafiei A, Hosseini V, Amiri Z, et al. How Much Weight Loss is Effective on Nonalcoholic Fatty Liver Disease? Hepat Mon. 2013;13(12):e15227.

      12. Hekmatdoost A, Shamsipour A, Meibodi M, Gheibizadeh N, Eslamparast T, Poustchi H. Adherence to the Dietary Approaches to Stop Hypertension (DASH) and risk of Nonalcoholic Fatty Liver Disease. Int J Food Sci Nutr. 2016 Jul 19:1-6.

      13. Eslamparast T, Poustchi H, Zamani F, Sharafkhah M, Malekzadeh R, Hekmatdoost A. Synbiotic supplementation in nonalcoholic fatty liver disease: a randomized, double-blind, placebo-controlled pilot study. Am J Clin Nutr. 2014 Mar;99(3):535-42.

      14. Faghihzadeh F, Adibi P, Hekmatdoost A. The effects of resveratrol supplementation on cardiovascular risk factors in patients with non-alcoholic fatty liver disease: a randomised, double-blind, placebo-controlled study. Br J Nutr. 2015 Sep 14;114(5):796-803.

      15. Faghihzadeh F, Adibi P, Rafiei R, Hekmatdoost A. Resveratrol supplementation improves inflammatory biomarkers in patients with nonalcoholic fatty liver disease. Nutr Res. 2014 Oct;34(10):837-43.

      16. Rahimlou M, Yari Z, Hekmatdoost A, Alavian SM, Keshavarz SA. Ginger Supplementation in Nonalcoholic Fatty Liver Disease: A Randomized, Double-Blind, Placebo-Controlled Pilot Study. Hepat Mon. 2016 Jan;16(1):e34897.

      17. Shavakhi A, Minakari M, Firouzian H, Assali R, Hekmatdoost A, Ferns G. Effect of a Probiotic and Metformin on Liver Aminotransferases in Non-alcoholic Steatohepatitis: A Double Blind Randomized Clinical Trial. Int J Prev Med. 2013 May;4(5):531-7.

      18. Yari Z, Rahimlou M, Eslamparast T, Ebrahimi-Daryani N, Poustchi H, Hekmatdoost A. Flaxseed supplementation in non-alcoholic fatty liver disease: a pilot randomized, open labeled, controlled study. Int J Food Sci Nutr. 2016 Jun;67(4):461-9.

      19. Askari F, Rashidkhani B, Hekmatdoost A. Cinnamon may have therapeutic benefits on lipid profile, liver enzymes, insulin resistance, and high-sensitivity C-reactive protein in nonalcoholic fatty liver disease patients. Nutr Res. 2014 Feb;34(2):143-8.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 05, Holger Schunemann commented:

      The comment by Messori and colleagues puzzles me. Why would one search using the term "GRADE method"? GRADE has rarely, if ever, been referred to as a "method"; it has typically been described as a system, approach or framework. What if Messori and colleagues used a more appropriate (more sensitive) search including "GRADE system", "GRADE approach", "GRADE framework" or simply "GRADE"? Without an appropriate search for information, this comment does not seem useful. Also, perhaps, once the purpose of the comment is clear, a citation-based search would be helpful. GRADE articles have been cited over 20,000 times, and there will probably be useful information in hundreds of guidance documents that have been developed with GRADE and the cited GRADE publications (beginning in 2003).
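      Purely as an illustration of what a more sensitive strategy might look like (this exact string is not proposed in the comment above, and the chosen terms are examples only), a PubMed title/abstract search could combine the common designations of GRADE:

      ("GRADE approach"[tiab] OR "GRADE system"[tiab] OR "GRADE framework"[tiab] OR "Grading of Recommendations Assessment, Development and Evaluation"[tiab])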


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.