  1. Jul 2018
    1. On 2017 Aug 21, Luiza Rodrigues commented:

      Hi. Why is Supplement 2 not available? I would like to see eFigures 3, 4, and 5, but I couldn't find them at the link. Thanks, Dr Luíza


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 18, Nicoletta Villa commented:

      1. The first question is how to define this chromosome. It showed a complete X banding pattern, exactly symmetrical above and below the primary constriction, and presented an active centromere in an anomalous position, definable as a neocentromere even though it is composed of canonical sequences (i.e., centromere repositioning).
      2. According to the International System for Human Cytogenetic Nomenclature (ISCN) 2016, the definition of centric fission is a “break in the centromere resulting in two derivative chromosomes composed of the short and long arms, respectively”. This is not what happened in the X chromosome described here, since the whole chromosome (old centromere included) constitutes the isochromosome. This was also confirmed by the paracentric inversion, which offers a complicated but plausible mechanism for the isochromosome formation. Regarding the cited review (Lin and Yan, Mutat Res 2008; 658:95), we mentioned it in the introduction and discussion sections; in both cases, we reported general aspects of telomere-like sequences. We speculated that the paracentric inversion of the entire Xp arm could be the result of non-allelic homologous recombination mediated by inverted repeats, as reported by Warburton and Dittwald (see the article for details). FISH with pan-telomeric probes revealed the anomalous presence of TTAGGG repeats near the inactive centromeres in a highly symmetrical manner, while these repeats were absent from the Xp termini. We therefore used BAC probes and identified the paracentric inversion of the entire short arm, which made the common telomeric sequences completely interstitial. Silahtaroglu et al (J Med Genet 1998; 35:682) reported a paracentric inversion that did not involve the telomeric region in an XXY male (“Simultaneous hybridisation with biotin labelled "All Centromere" and digoxigenin labelled "All Telomere" probes showed that the telomeric sequences were not inverted”). This is not our case.
      3. Rivera et al. (Clin Genet 1999, 55:122) reported a case showing a rearrangement due to a centric fission of chromosome 12 and a translocation onto chromosome 8p. This latter rearrangement resulted in a fusion between 8ptel and 12cen mediated by interstitial telomeric sequences, as clearly stated in the abstract. In our case there was no fusion between two different chromosomes, but an isochromosome, confirmed by banding, FISH, and a microsatellite segregation study. Moreover, we demonstrated the presence of telomeric sequences near the old centromeres.
      4. We did not perform the androgen receptor inactivation test because the itrc(X) was always inactive in reverse banding (RBA), as can be seen in Figure 1B, and the microsatellite polymorphisms never showed a third allele. Moreover, the mosaicism, together with the loss of Xq, made a quantitative analysis very difficult or even impossible.
      5. The frequency of chromosomal abnormalities in couples undergoing medically assisted procreation (PMA) appears to be increased (literature data), but we could not correlate the chromosomal rearrangement described here with PMA. We do not know the parental origin of the rearrangement, and the parents refused further analyses.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 19, Horacio Rivera commented:

      Centromere repositioning or neocentromere in a tricentric X chromosome? Regarding the unique tricentric X chromosome described by Villa et al. (2017), I offer the following remarks:

      1. The apparent discrepancy between title and content regarding the paper’s key concept illustrates how difficult it is to classify the single functional centromere of the rearranged X: did it result from centromere repositioning, making it a class III neocentromere as stated in the title, or is it a novel kind of neocentromere? Since the former mechanism implies the emergence of a new non-alphoid centromere in otherwise intact chromosomes (Marshall et al. Am J Hum Genet 2008, 82:261; Liehr et al. Cytogenet Genome Res 2010, 128:189), it seems better to opt for the more general term neocentromere, despite its being composed of alphoid sequences.
      2. Since the authors plausibly ascribe the emergence of the functional centromere at an unexpected place to an initial paracentric inversion of the entire Xp arm “shifting a part of the centromere at the p end”, they should have designated such centromeric breakage with the specific term centric fission. It is significant that the hypothetical telomere-like sequences mapping at Xp11.21 or 22, thought by the authors to be involved in the rearrangement, are simply not referred to in the cited review (Lin and Yan, Mutat Res 2008; 658:95); moreover, the authors appear to contradict themselves when they conclude that “the first event could be a result of a non-allelic homologous recombination mediated by inverted low-copy repeats”. Regardless of the sequences concerned, the exact breakpoint should be revised to Xp10 and the Xp rearrangement designated a centric inversion after Silahtaroglu et al. (J Med Genet 1998, 35:682), who described an inverted 12p resulting from a centric fission coupled with a subtelomeric breakpoint.
      3. According to the underlying mechanism advanced by the authors, two true centromere-telomere fusions (Rivera et al. Clin Genet 1999, 55:122) occurred in the rearranged chromosome. Yet the authors also fail to recognize this phenomenon.
      4. Despite the analysis of microsatellite polymorphisms, the parental derivation of the tricentric X chromosome was not determined. The HUMARA assay could likely have resolved this point.
      5. The fact that the patient was conceived after intracytoplasmic sperm injection recalls other chromosome rearrangements and gonosomal aneuploidies found in children conceived by means of such a technique (Venkataraman and Craft, Hum Reprod 2002, 17:2560; Alfonsi et al. Cytogenet Genome Res 2012, 36:1; Rivera and Domínguez, Clinics 2012, 67:669).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 26, Duke RNA Biology Journal Club commented:

      This paper was chosen based on its well-written abstract. The observation in the title appeared interesting and novel, with implications for a role of miRNA in epigenetic responses to environmental conditions. However, upon further reading and analysis of the paper, we expect more work will be needed to solidify these initial conclusions. We agree that miRNA expression diverges when flies are grown at different temperatures; additionally, piRNAs do seem to be expressed more highly at lower temperatures. That this holds true when flies are subsequently switched to different temperatures was also fascinating. However, the paper remains mostly observational. We expected, from the general patterns observed in the narratives of other papers, that the authors would examine the protein expression of the machinery associated with each process, miRNA and piRNA processing, at each temperature, to help elucidate a mechanism for these observations. Instead, what followed was a speculative section based on RNA expression, which does not always correlate with protein expression. Overall, we think this article provides a great starting point for future work on the functional impact that these molecular responses to environmental stimuli can have on an organism.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 28, Morten Oksvold commented:

      In this article the authors cite Deleris, A. et al. Hierarchical action and inhibition of plant Dicer-like proteins in antiviral defense. Science 313, 68–71 (2006) (reference number 7).

      That paper is one of the articles that were supposed to be retracted (see the quote from the report below):

      From the investigation report: "Although it is obviously the journal's prerogative, the former (category 2) papers, particularly those containing well documented intentional manipulations (PLoS Pathogens 2013 9:e1003435; Plant Cell 2004 16: 1235; Science 2006 313: 68; PNAS 2006 103: 19593 and EMBO J 2010 29: 1699), should be retracted through OV's requests as being non-factual, irrespectively of whether the reported observations have been reproduced by others."

      Link to full report here: https://www.ethz.ch/content/dam/ethz/news/medienmitteilungen/2015/PDF/untersuchungsbericht/Report_of_ETH_Commission_Voinnet.pdf

      I find it problematic that Nature Plants accepts this kind of practice, apparently legitimizing well-documented intentional manipulations as facts.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 21, Atanas G. Atanasov commented:

      Thank you so much for the excellent joint work, dear colleagues! I have featured our manuscript on the INPST website: https://sites.google.com/view/inpst/1-xanthohumol


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Feb 04, Sin Hang Lee commented:

      The medical profession, including medical schools and hospitals, is now a part of the health care industry, and implementation of editorial policies of medical journals is commonly biased in favor of business interests. PubMed Commons has offered the only, albeit constrained, open forum to air dissenting research and opinions in science-based language. Discontinuation of PubMed Commons will silence any questioning of the industry-sponsored promotional publications indexed in PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 24, Sin Hang Lee commented:

      Jorge Cervantes proposed a new theory to argue against the existence of chronic Lyme disease with persistent infection [1]. According to this theory, “after antibiotic eradication of Bb, its DNA is able to persist in anatomical locations that coincide with sites of inflammation.” He assumed that free, naked, water-soluble DNA molecules released from dying Borrelia burgdorferi spirochetes remain in the extracellular matrix of the patient’s tissues. However, under section “3. Borrelia DNA persistence” of his article, the references cited did not test for free borrelial DNA at all and therefore do not support his theory. For example, in the reference by Li et al. [2], the authors concluded that the DNA detected was in moribund or dead B. burgdorferi cells, not in free form. In the references by Schmidt et al. [3] and Aberer et al. [4], borrelial DNA was detected in the pellet of patients’ urine samples after centrifugation at 14,000 × g and 36,000 × g, respectively. Since soluble free DNA molecules cannot be pelleted by such low centrifugal forces, the borrelial DNA detected by these authors must still have been bound to bacterial cells or cell fragments in the urine. In the reference by Kubanek et al. [5], the authors actually showed by electron microscopy that the tissues that tested positive for borrelial DNA clearly contained borrelial bacteria, not free DNA.

      Free, extracellular, naked bacterial DNA is very prone to decay. Foreign DNA experimentally introduced into a mammal is degraded and eliminated from the host’s blood within 48 hours [6]. The stability of extracellular DNA also depends on its form and even on its nucleotide sequence. Circular plasmid DNA is more stable in vitro than a segment of linear chromosomal DNA after release from the bacterial cell. Even DNase I does not cleave DNA randomly, although it is neither base- nor sequence-specific. Extracellular bacterial 16S rDNA is known to be degraded much more rapidly in the environment than 16S rDNA bound to cell fragments [7]. Borrelial 16S rDNA extracted by ammonium hydroxide and stored in TE buffer is stable, but it is degraded rapidly in human serum at room temperature (unpublished personal observation). DNA sequencing-confirmed detection of borrelial 16S rDNA in the pellet of serum or plasma samples derived from patients’ venous blood constitutes solid molecular evidence of spirochetemia in Lyme borreliosis [8, 9]. Whether spirochetemia in chronic Lyme disease needs to be treated with prolonged antibiotics is an important health care issue that should be further discussed. Pushing an elusive DNA-binding AMP treatment of chronic Lyme disease can only direct attention away from the real issue of how to define Lyme disease, acute or chronic, as an emerging infectious disease, like Ebola and Zika, for proper patient management. There is no evidence that free naked borrelial DNA has been demonstrated in any patient samples.

      The author should also cite a reference to back up his claim that human macrophages can remove extracellular Bb DNA. The reference by Brencicova and Diebold [10], cited by the author, clearly stated “Endosomal TLR are situated in the membrane of the endolysosomal compartment of APC and sample the content of these compartments for the presence of nucleic acid agonists. Pathogens or dead cells gain access to the compartment by endocytosis. Alternatively, infection-induced autophagy can shuttle viral nucleic acids and antigens into the endolysosomal compartment and allow for recognition of replicating virus within infected cells by endosomal TLR.” Free DNA was not mentioned.

      References
      [1] Cervantes J. Doctor says you are cured, but you still feel the pain. Borrelia DNA persistence in Lyme disease. Microbes Infect 2017 Jun 15. pii: S1286-4579(17)30090-4. doi: 10.1016/j.micinf.2017.06.002. [Epub ahead of print] Review.
      [2] Li X, McHugh GA, Damle N, Sikand VK, Glickstein L, Steere AC. Burden and viability of Borrelia burgdorferi in skin and joints of patients with erythema migrans or Lyme arthritis. Arthritis Rheum 2011;63:2238-47.
      [3] Schmidt B, Muellegger RR, Stockenhuber C, Soyer HP, Hoedl S, Luger A, et al. Detection of Borrelia burgdorferi-specific DNA in urine specimens from patients with erythema migrans before and after antibiotic therapy. J Clin Microbiol 1996;34:1359-63.
      [4] Aberer E, Bergmann AR, Derler AM, Schmidt B. Course of Borrelia burgdorferi DNA shedding in urine after treatment. Acta Derm Venereol 2007;87(1):39-42.
      [5] Kubanek M, Sramko M, Berenova D, Hulinska D, Hrbackova H, Maluskova J, et al. Detection of Borrelia burgdorferi sensu lato in endomyocardial biopsy specimens in individuals with recent-onset dilated cardiomyopathy. Eur J Heart Fail 2012;14:588-96.
      [6] Schubbert R, Renz D, Schmitz B, Doerfler W. Foreign (M13) DNA ingested by mice reaches peripheral leukocytes, spleen, and liver via the intestinal wall mucosa and can be covalently linked to mouse DNA. Proc Natl Acad Sci U S A 1997;94:961-6.
      [7] Corinaldesi C, Danovaro R, Dell'Anno A. Simultaneous recovery of extracellular and intracellular DNA suitable for molecular studies from marine sediments. Appl Environ Microbiol 2005;71:46-50.
      [8] Lee SH, Vigliotti JS, Vigliotti VS, Jones W, Shearer DM. Detection of borreliae in archived sera from patients with clinically suspect Lyme disease. Int J Mol Sci 2014;15:4284-98.
      [9] Lee SH. Lyme disease caused by Borrelia burgdorferi with two homeologous 16S rRNA genes: a case report. Int Med Case Rep J 2016;9:101-6.
      [10] Brencicova E, Diebold SS. Nucleic acids and endosomal pattern recognition: how to tell friend from foe? Front Cell Infect Microbiol 2013;3:37.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 26, Duke RNA Biology Journal Club commented:

      We were excited to see another paper on ribosome heterogeneity from Maria Barna’s lab since their 2011 paper linking loss of a ribosomal protein (RP) to developmental phenotypes in mice (PMCID: PMC4445650). In comparison, we found this paper less of a complete story but still powerful in the questions it raises. To summarize, Shi and co-workers used a combination of selected-reaction-monitoring-based proteomics and tandem mass tagging to identify and orthogonally validate 4 RPs as substoichiometric in tissue culture polysomes. We viewed this result as the major breakthrough in the field. The remainder of the paper looked more closely at two RPs, RPS25 and RPL10A, and their associated transcriptomes. Interestingly, the authors found divergent enrichments within the Ribo-seq datasets associated with these proteins. We speculated, from the distinct transcriptomes of these RPs, that cellular environment could play a large role in RP composition. However, instead of following up on these observations, the authors probe RPL10A’s function in IRES-mediated translation. After performing multiple rounds of experiments with bicistronic constructs, they found that this protein can interact with some IRES types, such as HCV and host mRNAs, but not all. We discussed previous publications linking ribosomal proteins (PMCID: PMC4243054) and RP PTMs (PMCID: PMC2253395) to proper translation of HCV. We therefore found the observations in Fig. 6 interesting but unsurprising, and wondered how each observation fits into the larger picture of viral translation. This conversation brought us to the conclusion of the paper, with its noticeably short discussion section. We were left with two main unanswered questions: are these substoichiometric differences ever combined, or are they limited to one RP at a time; and does RP composition change with cell environment and location? This publication makes a big step toward answering these questions, especially given the quantitative lengths taken to determine stoichiometric ratios of RPs, but we found the paper lacking in proposing a distinct in vivo role for these substoichiometric ribosomes, and we look forward to follow-up publications addressing these questions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 18, Hilda Bastian commented:

      An assessment of a critical problem, with important conclusions. It would be helpful, though, if the scope of the 4 guidelines were shown. The inclusion criteria are not very specific on this matter, and the citations of the versions of the 4 included guidelines are not provided.

      In addition to the scope, the dating of the guidelines' last search for evidence (if available) with respect to the dates of the systematic reviews would be valuable. Gauging to what extent systematic reviews were not included because of being out of scope, out of date, or not yet published is important to interpreting these findings. Given how quickly systematic reviews can go out of date (Shojania KG, 2007), the non-inclusion of older systematic reviews may have been deliberate.

      The publisher of the article does not appear to have uploaded Appendix A, which includes the references to the systematic reviews. Further, confusion has been created by linking the citations of the first 44 systematic reviews to the references of the article's own text. The end result is that neither the 4 guidelines nor the 71 systematic reviews are identifiable. It would be helpful if the authors would post these 75 citations here.

      Disclosure: I work on PubMed Health, the PubMed resource on systematic reviews and information based on them.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 06, Stuart Buck commented:

      What bothers me most is the following statement from a study author: "Each of these increase LDL-cholesterol compared to carbohydrate and more so when compared to the unsaturated fats. This is sufficient to warn the public about anticipated adverse effects of coconut oil on CVD."

      No, it is not. There are at least 5 treatments that lower LDL without lowering CVD, and sometimes even make CVD worse. See Table 1: http://www.nejm.org/doi/full/10.1056/NEJMp1508120?af=R&rss=currentIssue#t=article.

      Nutritionists should not give advice based on trials about LDL while ignoring that LDL manipulation is often disconnected from or even directly contrary to CVD outcomes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.


    2. On 2017 Aug 02, Hilda Bastian commented:

      Thank you for the reply, Professor Sacks. However, the reply does not address the errors I pointed to, nor does it respond directly to the key problems I raised. Much of it is directed at rebutting claims I did not make.

      ... (1) Lack of reporting on the processes for selecting evidence

      My first point was that although the statement asserts that the totality of evidence and recent studies were reviewed, it does not report the process for identifying the systematic reviews it selected. No validated method for evaluating the systematic reviews is reported, and the reasons for excluding each of the trials in the chosen systematic reviews are not reported either (with the exception of 6 trials, accounting for 10 trials in total). Hamley S, 2017, for example, lists 19 randomized trials on the question of replacing saturated with polyunsaturated fat, drawn from 8 systematic reviews/meta-analyses (Table 2). I stress that my point here is not about the conclusions, but about the adequacy and transparency of the methodology.

      The totality of evidence approach considering a variety of research types does not obviate the need to explain how the studies were sought, selected, and appraised (Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, 2011).

      ... (2) Singling out coconut oil

      The reply reiterates a statement based on a single survey of people's beliefs about coconut oil. But there are no data showing that dietary coconut oil is consumed at levels that warrant this attention while palm oil, for example, is not. I am not sure whether the data I could find on this are an accurate reflection or not (Bastian, June 2017). If they are, however, then the issue of replacing palm oil in commercially produced food would have warranted more attention than coconut oil. Given the very different standards applied to studies of coconut oil, the question of why it was addressed at all, when so much else in scope was not, remains a relevant one.

      ... (3) Inadequacy of Eyres L, 2016 as a basis for wide-ranging conclusions on health effects of coconut oil

      I reiterate the point I made: the conclusions that clinical trials on the effects on CVD measures have not been reported, and that there are "no known offsetting favorable effects", would require a high-quality systematic review of the effects of dietary coconut oil on both CVD and non-CVD health outcomes. Eyres L, 2016 is not that review. By whichever of the validated and accepted methodologies for assessing the quality of a systematic review you choose (Pussegoda K, 2017), the Eyres review would not fare well. It does not include elements required of a high-quality systematic review - such as reporting on the excluded studies and a study-by-study assessment of the methodological characteristics and risk of bias of included studies. More importantly, its scope is too narrow.

      I identified the 8 trials in the 7 papers I mention in a quick search to test the adequacy of coverage of the Eyres review. I included only those on CVD outcomes; there are undoubtedly further relevant trials. Even that short search, though, established the limits of the scope of the Eyres review, even within CVD health.

      This is how the authors of the Eyres review characterize the evidence they found:

      "Much of the research has important limitations that warrant caution when interpreting results, such as small sample size, biased samples, inadequate dietary assessment, and a strong likelihood of confounding. There is no robust evidence on disease outcomes, and most of the evidence is related to lipid profiles."

      I agree with that assessment, and the reply offers no methodologically sound counter to it. Instead, the studies not in the Eyres review were critiqued. The reply cites these criteria for excluding all but 3 of the 8 studies from consideration (presumably the 2 reported in a single paper were regarded as a single study):

      [A]mong the 7 studies...4 would appropriately be excluded as result of being non-randomized, uncontrolled, using a very small amount, not including a control group or not even being a trial of coconut oil.

      I don't really know what to make of "uncontrolled" and "not including a control group" as 2 separate criteria, given that all these trials are controlled: the final 3 that aren't rejected don't make this clear to me either. No threshold is offered for what counts as a large enough dose, so I can't work with that either. However, I took the other 2 criteria - randomization, and having a solely coconut oil arm - as objective criteria I could apply to the 8 trials within Eyres and the 8 trials outside it (and extracted some additional data). This is reported in full in a blog post (Bastian, August 2017). In summary:

      • The Eyres group has fewer randomized trials: 4/8 compared to 7/8 in the non-Eyres group (or 6/7 for non-Eyres after knocking out the trial with no separate coconut arm).
      • There are fewer randomized participants in the Eyres group: 143 compared to 234 in 6 non-Eyres randomized trials with a separate coconut arm.
      • All the trials in the Eyres group look only at blood lipid profiles, whereas most in the non-Eyres group assess at least 1 non-blood-test outcome (5/8 or 4/7). That is in part a consequence of the Eyres exclusion criteria (e.g., rejecting any trial in a specific population or clinical subgroup, such as overweight people).

      The Eyres group cannot be regarded as an adequate or representative subset of trials. And the same level of critique has not been applied even-handedly.

      ... (4) Errors in representation of the Eyres findings on coconut oil versus other saturated fats

      As this was not addressed in the reply, I'll reiterate it, with additional detail. This is what the Eyres review concludes on this question:

      "In comparison with other fat sources, coconut oil did not raise total or LDL cholesterol to the same extent as butter in one of the studies by Cox et al., but it did increase both measures to a greater extent than did cis unsaturated vegetable oils...[W]hen the data from the 5 trials that directly compared coconut oil with another saturated fat are examined collectively, the results are largely inconsistent".

      This is what the AHA writes:

      "The authors also noted that the 7 trials did not find a difference in raising LDL cholesterol between coconut oil and other oils high in saturated fat such as butter, beef fat, or palm oil".

      As there was no meta-analysis of these trials, there is no single estimate to discuss. Of the 5 trials that did include a comparison with saturated fats, there were differences among their results: the AHA had pointed out 1 of them just a few sentences before their "no difference" statement. This is objectively a misstatement of the Eyres review's findings, and it exaggerates the strength of the evidence.

      Nothing in the reply to my comment changes, for me, the conclusion I came to in my first blog post on this:

      "On coconut oil, the AHA has taken a stand on very shaky ground with some major claims – as though they had a very strong systematic review of reliable research on all possible health consequences of dietary coconut oil. They don’t. The people arguing the opposite – that coconut oil is so healthy you should try to use it every day – are also on shaky ground".

      Disclosure: I have no financial, livelihood, or intellectual conflicts of interest in relation to coconut or dietary fats. I discuss my personal, social, and professional biases in a blog post that discusses the AHA advisory on coconut oil in detail (Bastian, August 2017). This PubMed Commons comment also contains some excerpts from that post.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jul 24, Frank M Sacks commented:

      On behalf of the authors, I respond to comments by Hilda Bastian about the American Heart Association Presidential Advisory on Dietary Fats and Cardiovascular disease Sacks FM, 2017.

      The comprehensive advisory includes: (i) clinical trials that tested the effects of dietary saturated fat, compared with unsaturated fat or carbohydrate, on cardiovascular disease (CVD) events, e.g., heart attack; (ii) clinical trials that tested the effects of dietary fats on lipid risk factors, e.g., LDL-cholesterol; (iii) prospective epidemiological studies of dietary fats and carbohydrates and CVD; and (iv) animal models of diet and atherosclerosis. Thus, it reflects the “totality of evidence”. The confluence of findings provides a very strong scientific case for the recommendation that dietary saturated fat be replaced with unsaturated fat, especially polyunsaturated fat.

      Recent systematic reviews and meta-analyses (Mozaffarian D, 2010; Chowdhury R, 2014; Hooper L, 2015, PMID: 26068959) used well-accepted methodologies and included trials published up to 2009, 2013, and 2014, respectively. Only a small number of clinical trials have evaluated direct effects of dietary fat on CVD. Most of these studies, and all that have an impact on the overall findings, were conducted years ago and are well known. Contrary to Bastian’s comments, there are no more recent trials on this topic. These 3 meta-analyses each confirm the beneficial effect of replacing saturated with polyunsaturated fat. The similarity of findings lends robustness to the overall conclusions of the report. The meta-analyses and all the individual trials are discussed critically and in detail in the advisory.

      Because the topic of the advisory is the effect of dietary fats on CVD, coconut oil is well within its scope. Coconut oil is currently rated as a healthy oil by 72% of the American public, despite being composed of 98% saturated fats, which increase the blood level of LDL-cholesterol, a cause of atherosclerosis and CVD. The meta-analysis by Mensink reports the quantitative effects on LDL-cholesterol of the saturated fats in coconut oil, mainly lauric, myristic, and palmitic acids. Each of these increases LDL-cholesterol compared with carbohydrate, and more so when compared with unsaturated fats. This is sufficient to warn the public about the anticipated adverse effects of coconut oil on CVD.

      Some studies tested coconut oil itself and found that it increases LDL-cholesterol, as would be predicted from its saturated fat content. These studies were identified and summarized in the systematic review by Eyres L, 2016, which used rigorous, well-accepted methodology. The criteria for inclusion of an article in the systematic review were well conceived. Eyres et al. concluded, “Overall, the weight of the evidence from intervention studies to date suggests that replacing coconut oil with cis unsaturated fats would alter blood lipid profiles in a manner consistent with a reduction in risk factors for cardiovascular disease.” Bastian implies that this systematic review is composed of weak studies and omitted several studies that would affect the advisory's conclusion to avoid eating coconut oil. This is not true. Eyres et al. identified eight studies; all were controlled clinical trials that used valid nutritional protocols and statistical analyses. All reported higher LDL-C levels when coconut oil was consumed compared with unsaturated oils, including olive, corn, and soybean oils, statistically significantly so in 7 of them. Together, these trials included populations from the US, Sri Lanka, New Zealand, the Pacific Islands, and Malaysia, demonstrating generalizability. There is no objective scientific reason to disparage them. The only substantive criticisms mentioned by Bastian are short duration and small sample size. These criticisms are unwarranted. Effects of diet on blood lipids, especially LDL-cholesterol, are established quickly, by 2 weeks. A small sample, with careful dietary control and execution, can yield a well-powered trial with valid results. In summary, the 8 trials in the Eyres et al. systematic review provide strong evidence that coconut oil increases LDL-C levels compared with unsaturated oils.

      What about the 7 studies named by Bastian that were not included in the systematic review? McKenney JM, 1995 reported that coconut oil increased LDL-cholesterol significantly, by 12%, compared with canola oil in 11 patients with hypercholesterolemia. In a second study, in 17 patients treated with lovastatin, LDL-C increased nonsignificantly during the coconut oil period. Thus, the results of this small study would add to the overall effects of coconut oil shown in the other studies to increase LDL-cholesterol. Ganji V, 1996 reported that coconut oil increased LDL-cholesterol compared with soybean oil in 10 normal participants. Assunção ML, 2009 reported no difference in the effects of coconut and soybean oils on LDL-cholesterol levels; however, LDL-cholesterol levels increased during the soybean oil period, clearly an anomalous result. Cardoso DA, 2015 conducted a nonrandomized study comparing coconut oil, 13 mL per day, with no supplemental oil. Because there was no control for the coconut oil, it is unclear how to interpret the lack of difference in LDL-cholesterol between the groups. de Paula Franco E, 2015 conducted a sequential study of a calorie-reduced diet followed by coconut flour, 26 g per day; this study was not randomized and did not have a control group. Enns reported, in her Ph.D. dissertation at the University of Manitoba, the results of a randomized trial that compared a 2:1:1 mix of butter, coconut oil, and high-linoleic safflower oil, 25 g per day, with canola oil, 25 g per day. This trial did not claim to be a study of the effects of coconut oil. Finally, Shedden reported, in her M.S. thesis at Arizona State University, the results of a placebo-controlled randomized trial of coconut oil, 2 g per day; this minuscule amount of coconut oil did not affect LDL-cholesterol. In summary, among the 7 studies cited by Bastian that are not in the Eyres review, 4 would appropriately be excluded as a result of being non-randomized, uncontrolled, using a very small amount, not including a control group, or not even being a trial of coconut oil. Among the 3 randomized trials, by McKenney et al., Ganji et al., and Assunção et al., the first two found that coconut oil increased LDL-cholesterol levels. The trial of Assunção et al. would likely fail an outlier test because it is the only one among 12 studies in which LDL-C levels were lower on coconut than on soybean oil. Given the differences in study designs, populations, and localities, the results of coconut oil trials are remarkably uniform in showing that it increases LDL-cholesterol levels, an established cause of cardiovascular disease.

      Bastian employs a tactic common among some other critics of good nutritional science, namely: a) disparaging and misrepresenting high-quality studies that show harmful effects of saturated fat; b) promoting and misrepresenting seriously flawed and irrelevant studies that report the opposite; and c) citing meta-analyses with faulty designs, often based on inclusion of flawed studies. We offer a challenge to those who assert health benefits of coconut oil, or of saturated fat in general: produce well-designed and well-executed studies showing beneficial effects on a bona fide health outcome or a recognized surrogate, e.g., LDL-cholesterol.

      Frank M. Sacks, for the authors.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jun 30, Hilda Bastian commented:

      The authors state that this advisory "reviews and discusses the scientific evidence, including the most recent studies", and that its primary recommendation is made, "taking into consideration the totality of the scientific evidence, satisfying rigorous criteria for causality". They do not report what evidence was sought and how, or the basis upon which it was selected. There is little in this report to suggest that "the totality of scientific evidence" was considered.

      For example, four reviews of trials are referred to.

      However, the more recent systematic review and meta-analysis within Ramsden CE, 2016 (date of last search March 2015) was not mentioned. Nor were, for example, these systematic reviews: Skeaff CM, 2009; Stroster, 2013; National Clinical Guideline Centre (UK), 2014; Schwingshackl L, 2014; Pimpin L, 2016.

      The AHA advisory includes sections reviewing two specific sources of saturated fat, dairy and coconut oil. Dairy products are a major source of dietary saturated fats. However, no basis for singling out coconut oil is offered, or for not addressing evidence about other, and larger, sources of saturated fats in Americans' diets. The section concludes: "we advise against the use of coconut oil".

      There are three conclusions/statements leading to that recommendation:

      • Eyres L, 2016 "noted that the 7 trials did not find a difference in raising LDL cholesterol between coconut oil and other oils high in saturated fat such as butter, beef fat, or palm oil."
      • "Clinical trials that compared direct effects on CVD of coconut oil and other dietary oils have not been reported."
      • Coconut oil increases LDL cholesterol "and has no known offsetting favorable effects".

      The only studies of coconut oil cited by the advisory to support these conclusions are one review (Eyres L, 2016) - reasonably described by its authors as a narrative, not systematic, review - and 7 of the 8 studies included in that review. The date of that review's last search was the end of 2013 (with an apparently abbreviated update search, not fully reported, in 2015). Not only is that too long ago to be reasonably certain there are no recent studies, but the review's inclusion and exclusion criteria are also too narrow to support broad conclusions about coconut oil and CVD or other health effects.

      The AHA's first statement - that Eyres et al noted no difference between 7 trials comparing coconut oil with other saturated fats - is not correct. Only 5 small trials included such comparisons, and their results were inconsistent (with 2 of the 3 randomized trials finding a difference). There was no meta-analysis, so there was no single summative finding. The trials in question are very small, none lasting longer than eight weeks, and have a range of methodological quality issues. The authors of the Eyres review caution about interpreting conclusions based on the methodologically limited evidence in their paper. In accepting these trials as a reliable basis for a strong recommendation, the AHA has not applied as rigorous a standard of proof as they did for the trials they designated as "non-core" and rejected for their meta-analysis on replacing dietary saturated fat with polyunsaturated fat.

      Further, even a rapid, unsystematic search shows that there are more participants in relevant randomized trials not included in the Eyres review than there are randomized participants within it. For example: McKenney JM, 1995; Ganji V, 1996; Assunção ML, 2009; Cardoso DA, 2015; de Paula Franco E, 2015; and Enns, 2015 (as well as another published since the AHA's panel finished its work, Shedden, 2017).

      The conclusions of the coconut oil section of the AHA advisory are not supported by the evidence it cites. A high quality systematic review that minimizes bias is required to draw any conclusion about the health effects of coconut oil.

      Disclosure: I have no financial, livelihood, or intellectual conflicts of interest in relation to coconut or dietary fats. I discuss my personal, social, and professional biases in a blog post that discusses the AHA advisory on coconut oil in detail (Bastian, June 2017).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 07, JEANNIE LEE commented:

      Authors: John E. Froberg,* Chen-Yu Wang,* Roy Blum, Yesu Jeon, and Jeannie T. Lee**

      * equal contribution; ** corresponding author

      We appreciate the response of Chen et al. to our Technical Comment. However, we do not believe the response satisfies our concerns regarding the genotypes of the cells used in Chen et al. We outline our critique below and invite commentary from the community:

      1. Chen et al. continued to maintain that ∆LBS is a true deletion and stated that they confirmed ∆LBS by Sanger sequencing. They should provide FASTA files containing the Sanger sequence data from the same clone used in the study.

      2. Chen et al. erroneously stated that our idea of an LBS inversion was based merely on the presence of discordant reads within the Xist locus. The “inversion reads” are a specific subset of discordant reads with one end mapped within the LBS region and the other just outside it, where both reads align to the same strand (see the sketch after this list for the read-orientation logic). Whereas discordant reads can indeed be found outside the LBS region in both WT and ∆LBS cells, such inversion reads are detected only in ∆LBS and ∆LBS rescue, not in WT, LBR knockdown, or SHARP knockdown samples. Thus, “inversion reads” are specific to ∆LBS and ∆LBS rescue and not simply an artifact of the discordant RAP reads.

      3. In Figure 2, Chen et al. proposed that the discordant reads suggesting an inversion are simply an artifact of sequencing the RAP capture probes. This cannot be the case. The inversion reads were also found in the input samples for both ∆LBS and ∆LBS rescue. The input was set aside before the addition of capture probes; therefore, the presence of inversion reads in the input cannot be explained by sequencing of the probes. Thus, the inversion reads likely arose from a genuine inverted sequence at the endogenous locus, not from capture probe contamination.

      4. To explain why we were able to align many reads to the LBS region, Chen et al. proposed that LBS inserted into a repetitive region of chromosome 12 (and that the region was not flipped in situ). The authors provided no data to support this. We were also unable to see an enrichment of chr12 (or any other autosome) among the discordant reads. The authors must provide the inverse PCR Sanger sequence data and confirmation of a chr12 insertion by FISH, Southern blot, or other means.

      5. Chen et al. stated that our analysis of the LBS deletion is flawed because we used the wrong coordinates for ∆LBS. We used chrX:100676777-100677593 (mm9). These coordinates completely overlap and are nearly identical to the ∆LBS mm9 coordinates provided by Chen et al. (chrX:100676791-100677575). Thus, the coordinates are essentially the same and cannot explain our differences.

      6. We raised the major concern that the authors’ CLIP data lack the crossover reads that must be present if a deletion is present (the sketch after this list shows the spanning criterion a crossover read must satisfy). In response, Chen et al. suggested that there were 4 crossover reads. We dispute that these 4 reads cross over the deleted region. Read 1 (1928049) could potentially cross over “∆LBS”, but showed deletions and mismatches that gave the read a poor alignment score (right-side pair CIGAR: 21M3D14M). Read 2 (4677231) could also potentially cross over, but its quality score was so low that it was filtered out by our pipeline. Read 3 (1928051) does not cross over at all and was also filtered out due to a low quality score. Read 4 (4677228) also does not cross over and instead aligned upstream of the LBS region. Thus, the 4 reads do not qualify as crossovers, leaving unanswered why the CLIP data failed to reveal a deletion, an inversion, or an intact sequence.

      7. The RNA FISH experiment in Figure 1F, which the authors use to argue against an inversion, may not have worked. Chen et al. used the absence of RNA FISH signal with probes antisense to LBS to argue that LBS is deleted and not inverted. However, the antisense probe should have produced one pinpoint spot in wild-type cells due to Tsix transcription, and there is no such pinpoint spot in the wild-type cells shown.

      8. The ∆A cell line they used for RAP does not have a Repeat A deletion, at least at the endogenous locus. Chen et al. admitted that they used the wrong cell line and published an incorrect dataset. They should provide full characterization of the cells actually used in the published experiment and explain how the data coincidentally supported their conclusions.

      9. Chen et al. claimed that our RAP analysis is different from theirs because we did not properly account for probe sequences. This suggestion fails to explain the differences between our analyses. First, Chen et al. did not describe how they “account for probe sequences”. Second, our RAP patterns were derived after excluding discordant reads. This means that we already excluded the reads Chen et al. claim cause the discrepancy between our RAP patterns and theirs. Finally, excluding probe sequences will not change the global X-chromosomal RAP pattern because probe sequences are only found within Xist.

      10. We question how Chen et al. scaled RAP coverage tracks when comparing different RAP samples. Our analysis showed that, except for the WT RAP, all RAP experiments had very limited coverage on the X outside of the Xist locus, strongly indicating that those RAP experiments failed. These observations are not consistent with the continued assertion of Chen et al. that reduced Xist binding is found in ∆LBS and ∆A HPRT (even though they admittedly used the wrong cell line), but not in ∆LBS rescue. It remains unclear how they scaled and normalized the different RAPs and how they arrived at these conclusions.

      11. Furthermore, the possibility that the observed phenotypes arise from experimental variability cannot be excluded. Chen et al. argue that because multiple Xist mutants (LBR KD, ∆LBR, an undescribed ∆A line) all show the same phenotype, replicates are unnecessary. This is simply not true; it is possible that all mutants show an Xist localization defect because all of the mutant RAPs failed. The authors also argue that replicates aren’t necessary since they compare average profiles across all active vs. inactive regions of the chromosome. However, this comparison does not take RAP efficiency into account, and comparing RAP signal between active and inactive regions in an inefficient RAP is not biologically meaningful. Biological replicates of the same mutants, run in parallel with wild-type, are absolutely essential for evaluating RAP phenotypes.

      12. Chen et al. utilized FISH to show that perturbing the LBR-Xist interaction abolishes the ability of Xist to internalize active genes into the Xist cloud (Chen et al. 2016), a phenotype similar to that of the ΔA Xist mutants (Chaumeil et al. 2006). Strikingly, the distance from X-linked genes to the ΔA or ΔLBS Xist clouds measured by Chen et al. spanned roughly half the nuclear radius, far greater than what was described in Chaumeil et al. This very large nuclear distance calls into question the specificity of the FISH experiments.

      13. In the final paragraph, the authors questioned the nucleation site model. However, no evidence was provided. The references they cited in connection to this statement are not relevant to the role of YY1 or to the Repeat F region for Xist nucleation. In fact, the original LBR study of Chen et al. argued that the nucleation site (a.k.a. “LBS region”) is needed for proper Xist spreading. This argument (notwithstanding the questionable genotypes) would be entirely consistent with the original findings of Jeon & Lee in 2011, which reported that deleting this region of Xist inhibits Xist spreading as a consequence of failed nucleation.
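      For readers who want to retrace the read-level arguments in points 2 and 6 above, the sketch below spells out the two criteria in code. This is a minimal illustration, not the pipeline used by either group: it assumes paired-end alignments in a coordinate-sorted, indexed BAM file readable with pysam, and the file name, the FLANK window, and the anchor length are illustrative assumptions; the ∆LBS coordinates are the mm9 values quoted in point 5.

      # Minimal sketch (not the authors' pipeline) of the two read-level
      # tests discussed above: (a) "inversion reads" -- discordant pairs
      # with one mate inside the LBS region, the other just outside, and
      # both mates aligned to the same strand (the orientation signature
      # of an inversion); (b) "crossover reads" -- alignments that span
      # the deleted interval with anchoring sequence on both sides.
      # BAM file name, FLANK, and anchor are assumptions for illustration.
      import pysam

      BAM = "rap_alignments.bam"                 # hypothetical input file
      CHROM = "chrX"
      LBS_START, LBS_END = 100676791, 100677575  # mm9 LBS coords (point 5)
      FLANK = 5000                               # "just outside" window (assumed)

      def find_inversion_reads(bam_path):
          """Yield names of pairs with one mate inside LBS, the other mate
          nearby but outside it, and both mates aligned to the same strand."""
          with pysam.AlignmentFile(bam_path, "rb") as bam:
              for read in bam.fetch(CHROM, LBS_START, LBS_END):
                  if (not read.is_paired or read.is_unmapped
                          or read.mate_is_unmapped
                          or read.next_reference_name != CHROM):
                      continue
                  mate_pos = read.next_reference_start
                  mate_outside = (LBS_START - FLANK <= mate_pos < LBS_START
                                  or LBS_END <= mate_pos < LBS_END + FLANK)
                  # Proper FR pairs put mates on opposite strands, so
                  # same-strand mates flanking the region imply an inversion.
                  if mate_outside and read.is_reverse == read.mate_is_reverse:
                      yield read.query_name

      def crosses_deletion(read, del_start=LBS_START, del_end=LBS_END, anchor=10):
          """True if a mapped alignment spans the deleted interval with at
          least `anchor` reference bases on each side -- the minimal property
          a genuine crossover read over a genomic deletion must have."""
          return (read.reference_end is not None
                  and read.reference_start <= del_start - anchor
                  and read.reference_end >= del_end + anchor)

      Note that a read such as the one with CIGAR 21M3D14M can satisfy the reference-span test above while still aligning poorly, which is why alignment-quality filtering (as applied in our pipeline) must be imposed separately.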


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 07, Cicely Saunders Institute Journal Club commented:

      The Cicely Saunders Institute journal club reviewed this paper on Wednesday 5th July 2017. We enjoyed discussing this paper and felt that the authors had addressed an important area, especially given the increasing importance of how we coordinate services at the end of life. The authors provided detailed information about the settings, which was useful in understanding the context in which the data were collected. Additionally, the authors provide an allied health professional perspective, highlighting the difference that can be made by changes to the environment and equipment.

      We felt that the authors could have applied a theoretical framework, such as Andersen’s healthcare utilisation model, to strengthen understanding of the proposed topic and their findings. We were interested in their focus on patient, carer, and professional views, but felt that these could have benefitted from triangulation in the narrative of the paper.

      The authors highlighted many interesting aspects (tipping points for carers, how patients will manage increased risk to be able to stay at home); we would have liked supplementary materials to explore these further, as the authors collected such a wealth of data from both interviews and observations.

      Commentary by Sophie Pask and Dr Catherine Evans


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Feb 04, Sin Hang Lee commented:

      The medical profession, including medical schools and hospitals, is now a part of the health care industry, and implementation of editorial policies of medical journals is commonly biased in favor of business interests. PubMed Commons has offered the only, albeit constrained, open forum to air dissenting research and opinions in science-based language. Discontinuation of PubMed Commons will silence any questioning of the industry-sponsored promotional publications indexed in PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 23, Samuel Shor commented:

      Marzec et al. (1) described 5 cases of treated chronic Lyme disease that resulted in poor outcomes. We are concerned about 3 of their conclusions: (1) the characterization of chronic Lyme disease as an invalid, nebulous condition; (2) the claim of “…evidence that the recommended two-tiered serologic testing is actually more sensitive the longer B. burgdorferi infection has been present”; and (3) the claim that “studies have not shown that such treatments lead to substantial long-term improvements for patients.” We too are concerned about any individual whose outcome represents a complication of well-intentioned intervention. However, there is substantive support in the literature for the following points.

      1. Chronic Lyme disease. Our perspective is that this represents the clinical manifestations of ongoing active infection by the Borrelia burgdorferi (Bb) sensu lato complex in chronically untreated or inadequately treated individuals. The likelihood of undiagnosed acute Lyme disease is increased by how infrequently patients recall tick bites: in one study of Lyme disease diagnosed by CDC criteria, only 14% had that recollection (2). Nor are all cases of acute Lyme disease associated with an erythema migrans (EM) rash; over 15 years, 31% of reported surveillance cases lacked an EM rash (3). The ILADS guidelines (4) describe the post-treatment “persistence of B. burgdorferi in specific individuals and animal models.” The 2012 Embers nonhuman primate study (5) and the 2014 Hodzic murine study (6) provide evidence of persistence of Bb infection after MBC-adequate courses of antimicrobials. Additional animal and human studies support this concept (7-10). We want to emphasize that other etiologies may be causal, but a cohort of these patients likely has a perpetuation of chronic signs and symptoms due to active Bb infection.

      2. Sensitivity of two-tiered testing in late Lyme disease. Based on a 2008 study by Steere et al. (11), “the sensitivity of 2-tier testing in patients with later manifestations of Lyme disease was 100%, and the specificity was 99%.” However, the entrance criteria of that study for late-stage Lyme disease were: “In all patients with neurologic, cardiac, or joint involvement, a serologic result positive for B. burgdorferi by ELISA and Western blot was required for case inclusion….” As Stricker noted, “Because the entrance criteria for the aforementioned analysis REQUIRED positive serologies … by definition, all patients with disseminated or persistent Lyme disease were required to have a positive serologic test result. It is disingenuous to define a condition by a positive test result and then state that the test has 100% sensitivity…” (12). By extension, seronegativity is well documented in cases of chronic Lyme disease (13-15). In one study of patients with positive culture and/or PCR results and active late Lyme disease, 63.5% were not two-tier positive (16). A second study of PCR-positive late Lyme patients found that 56.3% were seronegative (17).

      3. “Studies have not shown that such treatments lead to substantial long-term improvements for patients.” A number of studies contradict this claim. In 2 of the 4 NIH-supported prospective human trials, by Fallon (18) and Krupp (19), sub-cohort analysis showed statistically significant benefit from retreatment. In the former study, 37 patients suspected of having active neuroborreliosis were treated with 10 weeks of 2 g/day IV ceftriaxone; pain and physical functioning improved at 12 weeks, and the improvement was sustained at 24 weeks. The authors indicated that “these benefits were felt to be independent of carefully assessed placebo effects.” In the latter study, 55 patients felt to have active Bb infection, with persistent severe fatigue of 6 or more months, received 28 days of IV ceftriaxone; a significant improvement in fatigue was sustained at 6 months. Other prospective trials of prolonged antimicrobial treatment have also shown statistically significant improvements in outcomes (20-22).

      In summary, as unfortunate as the 5 cases reported by Marzec are, it is this author’s belief that they should not be used to discount a real entity, chronic Lyme disease. Whether due to lack of timely diagnosis or inadequate intervention, the literature supports the concept of chronic active Bb infection, shows that the diagnostic sensitivity of the 2-tiered paradigm is flawed, and shows that seronegative active Bb infection exists. Emphasis should be placed on a careful differential diagnosis and on proactive management, with probiotics and careful monitoring, in the selective use of long-term antibiotics. In this way, these often disabled individuals will more readily have access to the care they deserve, with compassion and empathetic oversight.

      Samuel Shor, MD, FACP; President, ILADS [International Lyme and Associated Diseases Society]; Associate Clinical Professor, George Washington University Health Care Sciences

      References: 1. Marzec NS, 2017; 2. Berger BW, 1989; 3. Bacon RM, 2008; 4. Cameron DJ, 2014; 5. Embers ME, 2012; 6. Hodzic E, 2014; 7. Treib J, 1998; 8. Steere AC, 1990; 9. Dvoráková J, 2004; 10. Berglund J, 2002; 11. Steere AC, 2008; 12. Stricker RB, 2008; 13. Breier F, 2001; 14. Dejmková H, 2002; 15. Schutzer SE, 1990; 16. Oksi J, 1995; 17. Chmielewski T, 2003; 18. Fallon BA, 2008; 19. Krupp LB, 2003; 20. Cameron D, 2008; 21. Wahlberg P, 1994; 22. Oksi J, 1998.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jun 20, Sin Hang Lee commented:

      Dear L Ide: I have stopped reading the NEJM since its publisher fired Dr. Jerome Kassirer as editor-in-chief. If you have published an article in the NEJM against intravenous or prolonged antibiotic treatment of SBE cases, please list the reference here; then I will try to read it. What is ELISpot, anyway? Have you used it? I was talking about using borrelial gene sequencing to diagnose chronic Lyme disease. Please write more if you have any objections.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jun 19, L Ide commented:

      Sin Hang Lee, please read the NEJM. Long-term antibiotics are rubbish. I hope you're not in favor of the non-evidence-based ELISpot test? Thank you, Marzec et al., for your article.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Jun 19, Sin Hang Lee commented:

      Marzec and colleagues presented five “chronic Lyme disease” patients who did not benefit from additional antibiotic treatment when the diagnoses of “chronic Lyme disease” were made on the basis of clinical symptoms and signs (www.cdc.gov/lyme/diagnosistesting/) or by unvalidated tests. The authors have not presented evidence to show that chronic Lyme disease patients with borrelial spirochetemia proven by culture or by gene sequencing do not benefit from additional antibiotic treatment. In medicine, certain chronic infections, such as subacute bacterial endocarditis, may require intravenous or prolonged antibiotic treatment [1] in spite of its potential side effects.

The CDC should give practitioners a case definition of Lyme disease, as it has for Ebola and Zika.

      Conflicts of Interest: Sin Hang Lee, MD is the director of Milford Molecular Diagnostics Laboratory specialized in developing DNA sequencing-based diagnostic tests.

References: 1. Hoen B. Epidemiology and antibiotic treatment of infective endocarditis: an update. Heart 2006;92:1694-700. Review.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Jun 18, Raphael Stricker commented:

      Chronic Lyme Disease Treatment: Science versus Anecdotes.

      Lorraine Johnson, Raphael B. Stricker, MD.

      Lymedisease.org, PO Box 1352, Chico, CA 95927; ILADS, PO Box 341461, Bethesda, MD 20827

      lorrainejohnson@outlook.com; rstricker@usmamed.com

      The article by Marzec et al. published in MMWR purports to show the dangers of treatment in patients diagnosed with chronic Lyme disease (1). Recent reports from the Centers for Disease Control and Prevention (CDC) indicate that more than 300,000 new cases of Lyme disease are diagnosed each year in the USA (2). The MMWR article from the CDC describes five anecdotal cases of treatment complications in these patients while ignoring the significant morbidity related to denial of treatment for chronic Lyme disease (2,3). The resultant biased report raises scientific and ethical issues about the CDC's role in promoting the best care for patients with tickborne diseases.

      The MMWR piece resulted from anecdotal reports gathered by Dr. Christina Nelson of the CDC. The article notes that the information was gathered because “clinicians and state health departments periodically contact CDC concerning patients who have acquired serious bacterial infections during treatments for chronic Lyme disease.” However, an ethics complaint filed against Dr. Nelson by the Lyme disease patient advocacy group LymeDisease.org suggests that these adverse event reports were in fact specifically solicited by Dr. Nelson via emails distributed in 2014 (4). Dr. Nelson asked clinicians from the Infectious Diseases Society of America (IDSA) to provide anecdotal evidence of harm to patients from intravenous antibiotic therapy related to Lyme disease, and she apparently offered coauthorship of her article as an incentive to describe these adverse events. She did not ask for consequences of failing to treat these patients, nor did she solicit commentary from practitioners who treat chronic Lyme disease according to the guidelines of the International Lyme and Associated Diseases Society (ILADS).

      The risk of any medical treatment is extremely context-sensitive. A crucial question is whether the risks of treatment are warranted given the potential benefits, the availability of other treatment options, the severity of the patient's presentation, and the risk tolerance of the individual patient. By asking for an assessment of treatment risks only, Dr. Nelson is framing the issue in a manner that excludes the other half of the equation in a risk/benefit assessment. She is also ignoring an issue that is critical to patients who suffer a profoundly diminished quality of life due to their illness, namely the risk of not treating (5,6). Moreover, by failing to mention that these adverse event reports were rare and specifically solicited, she implies that these rare occurrences are a common concern. In reality, studies of the risks and benefits associated with intravenous antibiotic treatment for Lyme disease indicate that the risks of adverse events are no greater than the risks of intravenous therapy in other unrelated diseases (7,8).

      By asking the question only of those on one side of the controversy, Dr. Nelson is further demonstrating favoritism and a lack of impartiality on the part of the CDC. Accordingly, Dr. Nelson's solicitation of anecdotal adverse events for case studies of Lyme disease is a highly inappropriate partisan act of favoritism toward the IDSA viewpoint at the expense of critical stakeholders - Lyme disease patients and their treating physicians - and an attack on the ILADS viewpoints.

References

1. Marzec NS, Nelson C, Waldron PR, et al. Serious bacterial infections acquired during treatment of patients given a diagnosis of chronic Lyme disease - United States. MMWR Morb Mortal Wkly Rep. 2017 Jun 16;66(23):607-609.
2. Stricker RB, Johnson L. Lyme disease: Call for a "Manhattan Project" to combat the epidemic. PLoS Pathog. 2014;10(1):e1003796.
3. Stricker RB, Fesler MC. Chronic Lyme disease: A working case definition. Chronic Dis Int. 2017;4(1):1025.
4. Leland DK. TOUCHED BY LYME: CDC ignores ethics, attacks "chronic Lyme". Available at https://www.lymedisease.org/touchedbylyme-cdc-ignores-ethics/. Accessed June 16, 2017.
5. Johnson L, Aylward A, Stricker RB. Healthcare access and burden of care for patients with Lyme disease: a large United States survey. Health Policy. 2011;102:64-71.
6. Johnson L, Wilcox S, Mankoff J, Stricker RB. Severity of chronic Lyme disease compared to other chronic conditions: a quality of life survey. PeerJ. 2014;2:e322.
7. Stricker RB, Green CL, Savely VR, Chamallas SN, Johnson L. Safety of intravenous antibiotic therapy in patients referred for treatment of neurologic Lyme disease. Minerva Med. 2010;101:1-7.
8. Stricker RB, Delong AK, Green CL, et al. Benefit of intravenous antibiotic therapy in patients referred for treatment of neurologic Lyme disease. Int J Gen Med. 2011;4:639-646.

Disclosure: RBS and LJ are members of the International Lyme and Associated Diseases Society (ILADS) and directors of LymeDisease.org. They have no financial or other conflicts to declare.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 02, Lise Bankir commented:

After we published the paper above, "Relationship between Sodium Intake and Water Intake: The False and the True," we rediscovered a publication that we should have mentioned in this paper because it is one more study showing no change in urine volume (= water excretion) in normal subjects in response to marked changes in sodium intake (leading to corresponding changes in sodium excretion). The reference of this paper is: Sagnella GA, Markandu ND, Buckley MG, Miller MA, Singer DR, MacGregor GA. Hormonal responses to gradual changes in dietary sodium intake in humans. Am J Physiol. 1989 Jun;256(6 Pt 2):R1171-5. PMID: 2525347.

Table 1 shows urine volumes in six normal subjects submitted to increasing sodium intakes. After 4 days on a very low sodium intake (12 mmol/24h), sodium intake was increased gradually by 50 mmol/day on successive days. The paired values were:

Sodium intake (mmol/day): 10, 50, 100, 150, 200, 300, 350, 350
Urine volume (L/24h): 1.43, 1.40, 1.23, 1.38, 1.47, 1.96, 1.69, 1.53

No comment is made in the paper about this relatively stable urine volume.
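To quantify how flat this relationship is, here is a minimal sketch (not from either paper; it assumes only numpy) that fits a least-squares line to the values quoted above:

    import numpy as np

    sodium = np.array([10, 50, 100, 150, 200, 300, 350, 350])           # mmol/day
    urine = np.array([1.43, 1.40, 1.23, 1.38, 1.47, 1.96, 1.69, 1.53])  # L/24h

    slope, intercept = np.polyfit(sodium, urine, 1)
    print(f"fitted slope: {slope * 100:.2f} L per 100 mmol/day of sodium")
    # ~0.11 L per 100 mmol/day: across a 35-fold range of sodium intake,
    # the fitted change in urine volume is well under half a litre.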

This study, like several others cited in our paper, confirms that urine volume, and thus probably fluid intake, does not increase with increasing sodium intake, in the absence of other changes, when studied in healthy young subjects.

This observation in 1989 is at variance with the results of an experimental study reported by the same group in 2001, in hypertensive patients (off their usual treatment for 3 months): He FJ, Markandu ND, Sagnella GA, MacGregor GA. Effect of salt intake on renal excretion of water in humans. Hypertension. 2001 Sep;38(3):317-20. PMID: 11566897.

      After a few days on a 350 mmol/day salt intake, these hypertensive patients were switched to a low salt intake of 10-20 mmol/day. Urine volume fell significantly from 2.2 ± 0.09 to 1.3 ± 0.05 L/day. The difference between the two studies can possibly be explained by the fact that the 1989 study was conducted in healthy young subjects (age range 19-21 y) whereas the 2001 study concerned hypertensive patients (mean age 48 y, range 19-70).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 29, Alex Vasquez commented:

As a researcher, author, clinician, and recent (2015) reviewer for PLOS One, I am writing with great concern about the editorial quality and review process of this previously esteemed journal, particularly with regard to the article discussed below. This letter was sent to the PLOS One Editor by email on 20 July 2017, with receipt acknowledged on 21 July 2017; the response that "We encourage you to contact the corresponding authors of the article directly" is absurd, as it will not address 1) the lack of editorial quality, 2) the defects in the review process, or 3) the [in my opinion] erroneous publication of this article. Oddly, PLOS One does not publish letters and therefore provides no means for the archived biomedical record (e.g., in PubMed, PubMed Central, Europe PMC, etc.) to be corrected. Thus, given the aforementioned lackluster response and nearly nonexistent channels to correct and critique this PLOS One article, this critique is published online publicly at https://www.academia.edu/34072801 and also via PubMed Commons.

In their 4-week open trial, which had no control group, no laboratory testing, and relied on participant-reported "data" for every aspect of treatment compliance and treatment effect, Lawrence and Hyde [1] prescribed a program of psychosocial support (conference call with in-person/online contact with healthcare provider and support group) and dietary improvements (including avoidance of sugar, alcohol, grains and refined carbohydrates) and then attributed the purported health benefits (including self-reported weight loss and subjective improvements in digestion, cognition, physical and emotional wellbeing) to changes in the participants' gut microbiome. Participants were a self-selected "convenience sample of people seeking the services of a nutritional therapist. The majority of participants' primary goal was therefore weight-loss, with several additionally aiming to improve digestive symptoms (chronic acid reflux, bloating, constipation, loose stools, wind), plus energy, and pain issues." Without providing any data to support their conclusion that the intervention changed a single microbe or metabolite from the gut microbiota, the authors assert that, "This dietary microbiome intervention has the potential to improve physical and emotional wellbeing in the general population but also to be investigated as a treatment option for individuals with conditions as diverse as IBS, anxiety, depression and Alzheimer's disease." Similarly, and without evidence showing that the intervention modified any parameter of the gut microbiome, the authors assert, "Taken together with our findings, these results suggest that dietary interventions to optimise gut bacteria may have a role to play both in the treatment of Alzheimer's disease but also in optimising cognitive functioning in non-clinical populations." The most remarkable aspect of this research is the attribution of the mechanism of action of their diet intervention to gut microbiomal modification without a single original data point of evidence; indeed, the authors acknowledge that the effect of the "diet on the health and diversity of the microbiome has not been directly tested." The authors utilized a participant-completed medical screening questionnaire which they acknowledge "was not selected specifically for the purpose of research, and is lacking in detailed information relating to its reliability and validity compared to scales used more specifically for research."

Therefore, in summary, this "research article" has the following attributes:

1. No objective data on health outcomes were collected; the study presents only participant-reported subjective data.
2. No objective data on treatment compliance were collected; we do not know whether the participants followed the diet, nor to what extent.
3. No objective data on treatment effect/mechanisms: the authors claim that the intervention changes the gut microbiome but failed to measure even a single parameter, microbe, molecule, or metabolite.
4. Participants were a positively self-selected "convenience sample," ripe and ready for a placebo response given their demonstrated positive expectations.
5. Impossible attribution, especially to the gut microbiome: with no control group, no one knows whether the supposed "improvements" were due to the psychosocial intervention, the diet, the season, the natural history of the non-disease being non-studied, or chance; the attribution of supposed benefit to a mechanism involving the gut microbiome is not supported by any data in this publication.
6. Short duration with no durability of effect: no demonstrated durability of the supposed benefits; the study was of notably short duration (4 weeks).
7. Wild attribution without any shred of evidence: the treatment included 1) diet intervention and 2) psychosocial support, and the authors then attributed (without any supporting data whatsoever) the subjective/undocumented/purported benefits to 3) changes in the gut microbiotal composition.
8. Unreliable methods: the authors note that their use of the Functional Medicine Medical Symptoms Questionnaire "was not selected specifically for the purpose of research, and is lacking in detailed information relating to its reliability and validity compared to scales used more specifically for research."
9. No previous validation: the diet plan "has not been directly tested" for its effect on the "health and diversity of the microbiome." If this had been a follow-up survey or symptom assessment based on previous research, then such a publication might be reasonable; however, such is not the case with this publication.
10. A financial and self-promotional conflict of interest, with prominent mention of the authors' proprietary book no fewer than 16 times in the manuscript. Does PLOS One now publish thinly veiled infomercials—masquerading as clinical research—for proprietary products?

Given all these confounding variables and the lack of objective data, including zero data showing changes in the gut microbiome, a reasonable reviewer and reader can ask: "What—if any—scientific value does this article provide?"

      To be sure, we as researchers and clinicians have increasingly appreciated the role of the body-wide microbiome in health and disease, and we appreciate that diet potently shapes the gut microbiome [2,3]. However, to ascribe health benefits to changes in the microbiome from uncontrolled positively-selected participant-collected data following diet modification and psychosocial intervention is premature at best, unscientific at worst; the authors failed to collect even a single data point showing change in any microbe, molecule, or metabolite related to the gut microbiome, and yet the published title of the article is “Microbiome restoration diet.” Studies attributing therapeutic benefit to microbiome improvements should provide evidence supporting their hypothesis via the quantitative correlation of direct microbial analysis (or at the very least via surrogate markers such as serum endotoxin levels or serum 16SrRNA, both of which are also influenced by other factors such as intestinal permeability) with mostly objective data from biochemical markers and reliable tests of neurocognitive and emotional status. The financial and self-promotional conflict of interest and the prominent mention of their proprietary book not fewer than 16 times in the manuscript further calls into question the motive and actual value of this publication in a scientific journal.

Citations:

1. Lawrence K, Hyde J. PLoS One 2017 Jun:e0179017.
2. Vasquez A. Nutritional and Botanical Treatments against Silent Infections and Gastrointestinal Dysbiosis. Nutr Perspect 2006 Jan. https://www.academia.edu/3862817
3. Vasquez A. Human Microbiome and Dysbiosis in Clinical Disease, Volume 1. Barcelona: International College of Human Nutrition and Functional Medicine; 2015. ISBN13: 978-0990620419


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 30, DANIEL BROWN commented:

      Mycobacterium is not the same thing as Mycoplasma. Never has been, never will be.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 05, Suzy Chapman commented:

      As mentioned in my previous comment, in the literature one observes frequent instances where the term "bodily distress disorder" has been used when what is actually being discussed within the paper or editorial is the Fink et al. (2010) "bodily distress syndrome (BDS)" disorder construct.

      For example, "bodily distress disorder" is used interchangeably with "bodily distress syndrome" in the editorial (Creed et al. 2010): Is there a better term than "medically unexplained symptoms"? [1].

      In this (Rief and Isaac 2014) editorial: The future of somatoform disorders: somatic symptom disorder, bodily distress disorder or functional syndromes? the authors are using the term, "bodily distress disorder" while clearly discussing the Fink et al. (2010) BDS construct [2].

      The ICD-11 S3DWG sub working group's proposed term is seen, here, as "Bodily distress disorder (Fink and Schroder 2010)" in Slide #3 of the symposium presentation: An introduction to "medically unexplained" persistent physical symptoms. (Professor Trudie Chalder, Department of Psychological Medicine, King’s Health Partners, 2014) [3].

      This paper: Medium- and long-term prognostic validity of competing classification proposals for the former somatoform disorders (Schumacher et al. 2017) compares prognostic validity of DSM-5 "somatic symptom disorder (SSD)" with "bodily distress disorder (BDD)" and "polysymptomatic distress disorder (PSDD)" and discusses their potential as alternatives to SSD for the replacement of the somatoform disorders for the forthcoming ICD-11 [4].

      The authors state, "the current draft of the WHO group is based on the BDD proposal." But the authors have confirmed that for their study, they had operationalized "Bodily distress disorder based on Fink et al. 2007" [5].

In the Fink et al. (2007) paper: Symptoms and syndromes of bodily distress: an exploratory study of 978 internal medical, neurological, and primary care patients, the authors conclude: "We identified a general, distinct, bodily distress syndrome or disorder that seems to encompass the various functional syndromes advanced by different medical specialties as well as somatization disorder and related diagnoses of the psychiatric classification."

      There are other examples in research literature, publications [6] and in the field.

      But these examples above suffice to demonstrate that the term, "bodily distress disorder" is already used synonymously with disorder term "bodily distress syndrome (BDS)" and that many researchers and clinicians do not differentiate between the two.

      These examples also serve to demonstrate that the "bodily distress disorder" term is already being used outside ICD-11 Beta draft to describe a diagnostic construct that subsumes CFS, ME, IBS and FM under a single, unifying disorder construct - which does not correspond with how ICD Revision has defined "BDD" for the ICD-11 core edition, in which these categories remain discretely classified in chapters outside the Mental, behavioural or neurodevelopmental disorders chapter.

Since researchers/clinicians do not differentiate between "bodily distress syndrome" and "bodily distress disorder" (and in some cases, one also observes the conflations "bodily distress syndrome or disorder" and "bodily distress syndrome/disorder"), ICD Revision needs to give urgent consideration to the difficulties and implications of maintaining the discrete identity of its proposed disorder once ICD-11 is released and in the hands of its end users – clinicians, allied health professionals and coders – and to urgently review its current choice of nomenclature.

      1 Creed F, Guthrie E, Fink P, Henningsen P, Rief W, Sharpe M, White P. Is there a better term than "medically unexplained symptoms"? J Psychosom Res. 2010 Jan;68(1):5-8. doi:10.1016/j.jpsychores.2009.09.004. [PMID: 20004295]

      2 Rief W, Isaac M. The future of somatoform disorders: somatic symptom disorder, bodily distress disorder or functional syndromes? Curr Opin Psychiatry September 2014 – Volume 27 – Issue 5 – p315–319. [PMID: 25023885]

      3 Chalder, T. An introduction to "medically unexplained" persistent physical symptoms. Presentation, Department of Psychological Medicine, King’s Health Partners, 2014. [Accessed 27 February 2017]

      4 Schumacher S, Rief W, Klaus K, Brähler E, Mewes R. Medium- and long-term prognostic validity of competing classification proposals for the former somatoform disorders. Psychol Med. 2017 Feb 9:1-14. doi: 10.1017/S0033291717000149. [PMID: 28179046]

      5 Fink P, Toft T, Hansen MS, Ornbol E, Olesen F. Symptoms and syndromes of bodily distress: an exploratory study of 978 internal medical, neurological, and primary care patients. Psychosom Med. 2007 Jan;69(1):30-9. [PMID: 17244846]

      6 Medically Unexplained Symptoms, Somatisation and Bodily Distress: Developing Better Clinical Services, Francis Creed, Peter Henningsen, Per Fink (Eds), Cambridge University Press, 2011.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 05, Suzy Chapman commented:

      In his lecture, Prof Fink discusses DSM-5's "somatic symptom disorder (SSD)" and its alignment with the proposed new diagnostic category, "bodily distress disorder (BDD)", as defined and characterized for the core edition of ICD-11 [1].

      Prof Fink notes that SSD is a diagnosis mainly based on psychological and behavioural characteristics, with a very low symptom threshold - only one chronic, disturbing symptom required to meet the criteria, which may or may not be associated with a general medical condition - and suggests this will include most patients seen in both primary and secondary care.

      That DSM-5's SSD (and the similarly conceptualized BDD, as defined by ICD-11) risk mislabelling millions of patients with an inappropriate mental disorder diagnosis was identified in 2013 by Frances & Chapman [2] and Frances [3][4].

      Prof Fink is rightly concerned that ICD-11's proposed new single category replacement for the ICD-10 somatoform disorders will share the problems inherent in the DSM-5 SSD diagnosis.

      However, Prof Fink omits from his lecture a crucial consideration concerning proposed nomenclature.

Since at least 2007, the term "bodily distress disorder" has frequently been seen in the literature, at symposia and in presentations, used interchangeably with the term "bodily distress syndrome (BDS)" - the diagnostic construct developed by Prof Fink and his colleagues, which he confirms has been rejected by ICD Revision for inclusion in the ICD-11 core edition.

      To the best of my knowledge, no clinician or researcher has published on the potential for confusion and conflation between the two disorder constructs or the implications for maintaining disorder integrity within and beyond ICD-11 - if ICD Revision names its differently conceptualized construct, with its very different criteria set and which potentially captures a different patient population, "bodily distress disorder."

      Thus far, ICD Revision has provided no rationale for re-purposing a disorder term that is already closely associated with the Fink et al (2010) disorder construct and criteria set.

      There is no justification for introducing a new disorder category into ICD-11 that has greater conceptual alignment with the DSM-5 SSD construct but is proposed to be assigned a disorder name that is closely associated with a divergent (and operationalized) construct/criteria set, that is already in use in research and clinical settings in Denmark and beyond.

      This is unsafe and unsound classificatory practice and a very obvious flaw in their recommendations that remains unaddressed.

      It is disappointing, then, that whilst having identified problems with clinical utility and given some consideration to the implications for patients for a diagnosis of SSD or its ICD-11 sister diagnosis, the author misses the opportunity to alert his audience to the potential for disorder conflation between ICD-11's proposed "BDD" and his own divergent, "BDS" diagnostic construct.

      Comment from the author on this specific issue of nomenclature would be welcomed.

      Secondly, there have been two working groups making recommendations to ICD Revision for the revision of the somatoform disorders.

      Within his lecture, Prof Fink also refers to the proposals of the ICD-11 Primary Care Consultation Group (PCCG), that is chaired by Prof Sir David Goldberg.

      The 28 mental disorders proposed for inclusion in the abridged primary care version (ICD-11 PHC) will require a corresponding category within the core edition. However, the PCCG considers that the "BDD" construct, as defined and characterized for the ICD-11 core edition, lacks utility in primary care settings.

      The PCCG's recommendation is for an alternative construct for use in the primary care version which is a modification of the Fink et al (2010) BDS diagnostic construct and criteria set.

      Prof Fink states that the PCCG is recommending the name "bodily stress disorder (BSD)" for the new disorder category which it proposes as the replacement for the ICD-10 PHC "F45 Unexplained somatic complaints" category rather than use the name "bodily distress syndrome (BDS)."

But according to Goldberg et al (2017), the PCCG would appear to continue to recommend the term "bodily stress syndrome (BSS)" for their modification - not "bodily stress disorder (BSD)" as Prof Fink has reported [5]. It would be helpful to have this apparent anomaly clarified.

      If the PCCG's proposals for the abridged primary care version are approved by WHO/ICD Revision, there will be a lack of correspondence between the ICD-11 core edition replacement for the ICD-10 somatoform disorders and the primary care version.

      A lack of consistency between the two editions risks confusion and conflation between the "BSS" BDS modification, the Fink et al (2010) unmodified BDS and the ICD-11 core edition defined BDD, resulting in loss of disorder definition integrity, lack of clarity over which patient populations these constructs are intended to capture, potential misapplication, confusion between different diagnoses across primary care and specialty settings, and will hamper statistical analyses.

      Furthermore, and crucially, there appear to be no exclusions or differential diagnoses within the PCCG's proposed "BSS" criteria for CFS, ME, IBS and FM - diagnostic categories that are discretely classified within ICD-11 under chapters outside the mental, behavioural or neurodevelopmental disorders chapter.

      This issue is still unaddressed by ICD Revision.

      With only a few months left before the Beta draft needs to be finalized, the revision of the somatoform disorders for ICD-11 and ICD-11 PHC remains an indigestible alphabet soup.

      1 Gureje O, Reed GM. Bodily distress disorder in ICD-11: problems and prospects. World Psychiatry. 2016 Oct;15(3):291-292. doi: 10.1002/wps.20353. [PMID: 27717252]

2 Frances A, Chapman S. DSM-5 somatic symptom disorder mislabels medical illness as mental disorder. Aust N Z J Psychiatry. 2013 May;47(5):483-4. [PMID: 23653063]

      3 Frances A. DSM-5 Somatic Symptom Disorder. J Nerv Ment Dis. 2013 Jun;201(6):530-1. doi: 10.1097/NMD.0b013e318294827c [PMID: 23719325]

4 Frances A. The new somatic symptom disorder in DSM-5 risks mislabeling many people as mentally ill. BMJ. 2013 Mar 18;346:f1580. doi: 10.1136/bmj.f1580. [PMID: 23511949]

5 Goldberg D, et al. Primary care physicians' use of the proposed classification of common mental disorders for ICD-11. Fam Pract. 2017 May 4. doi: 10.1093/fampra/cmx033. [PMID: 28475675]


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 20, NephJC - Nephrology Journal Club commented:

      The exciting paper “Canagliflozin and Cardiovascular and Renal Events in Type 2 Diabetes” was discussed on June 27th and 28th 2017 on #NephJC, the open online nephrology journal club.

Introductory comments written by Swapnil Hiremath are available at the NephJC website.

Nearly 200 people participated in the discussion, with over 1000 tweets. One of the authors, Vlado Perkovic, also kindly joined the journal club.

      The highlights of the tweetchat were:

      • There has been very heterogenous use of SGLT2 inhibitors (SGLT2i) across the globe to date. They tend to be more commonly started by endocrinologists. There have been some cases of euglycemic DKA noted.

• The studies were felt to be well designed and to have followed FDA guidance for non-inferiority meticulously. According to the authors, unexpected effects made it difficult to proceed to a larger study without understanding cardiovascular safety in detail - hence CANVAS-R.

      • It was unusual that such a high percentage (70%) of the group had normoalbuminuria.

• It would be interesting to determine how much of the weight loss was caloric and how much was a diuretic effect.

      • The excess of amputations and fractures in the canagliflozin group was surprising. Postulated mechanisms for this included expression of SGLT2i elsewhere in the body and differential effects on oxidative phosphorylation.

• Overall there were promising composite renal endpoints, but there is still some concern about the potential adverse events revealed here, whose full extent time may better delineate.

Transcripts of the tweetchats, and curated versions on Storify, are available from the NephJC website.

Interested individuals can track and join in the conversation by following @NephJC or #NephJC on Twitter, liking @NephJC on Facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 18, Thomas Heston commented:

This trial basically shows that treating diabetes helps decrease cardiovascular morbidity, which is not a new finding. To determine whether canagliflozin has a unique property of decreasing cardiovascular events beyond simply lowering blood sugar, the analysis would have had to match patients by their hemoglobin A1c and then compare outcomes of placebo vs canagliflozin. Amazingly, this was not done. They did not look at cardiovascular events after correcting for hemoglobin A1c levels. Note that this was a pharmaceutical-company-funded research project, and the conclusion heavily implies that canagliflozin (as opposed to any agent that lowers blood sugar) has a unique quality of lowering cardiovascular events in diabetics. The authors did not prove that canagliflozin had any unique cardiovascular protective properties [Heston TF, 2017]. Because the analysis does not separate out the potential unique effects of canagliflozin beyond just lowering blood sugar, the results regarding a unique cardiovascular effect are basically meaningless and, even worse, misleading.
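As a rough sketch of the stratified comparison Heston describes (comparing arms within HbA1c strata, so that any drug-placebo difference is not attributable to glucose lowering alone), consider the following; the data, column names, and bin edges are entirely hypothetical and are not drawn from the CANVAS trial:

    import pandas as pd

    # Toy dataset: achieved HbA1c, treatment arm, and a cardiovascular event flag.
    df = pd.DataFrame({
        "hba1c":    [7.1, 7.2, 8.0, 8.1, 9.0, 9.1, 7.3, 8.2],
        "arm":      ["canagliflozin", "placebo"] * 4,
        "cv_event": [0, 1, 0, 1, 1, 1, 0, 0],
    })

    # Stratify by achieved HbA1c, then compare event rates within each stratum.
    df["hba1c_bin"] = pd.cut(df["hba1c"], bins=[7.0, 8.0, 9.0, 10.0])
    print(df.groupby(["hba1c_bin", "arm"], observed=True)["cv_event"].mean())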


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 14, Kay Tye commented:

      We would like to make a clarification regarding the following sentences addressing the impact of previous social experience or social history on DRN DA dynamics across multiple stimuli:

      In reference to the fiber photometry signals in response to a number of stimuli, Cho and colleagues stated that “These findings did not vary with subject social history, as similar patterns of DRNDA activation were seen in group-housed mice (Figures S2J–S2L)” and “DRNDA activation by a wide variety of rewarding and aversive stimuli, both social and non-social, occurred irrespective of the subject’s social history, as the stimulus-evoked change in DRNDA fluorescence was not influenced by chronic separation from cage-mates (Figures S2J–S2M).”

      However, with respect to the presentation of a social stimulus, Cho and colleagues report a similar trend as Matthews and colleagues, albeit one that was not statistically significant:

      In Matthews et al., Cell (2016) the experiments in Figure 2 showed a significantly greater DRN DA fiber photometry signal (GCaMP6m in TH::Cre male mice) in response to a juvenile male when subjects were acutely (24 hours) single-housed than when group-housed (n=9, within-subject comparison). PMID: 26871628

In Cho et al., Neuron (2017), the experiments in Figure S2J showed a non-significant trend reflecting a greater mean DRN DA fiber photometry signal (GCaMP6f in TH::Cre male mice) in response to an adult female when subjects were chronically single-housed (at least 4 weeks, standard for sleep studies; n=7) versus group-housed mice (n=4, between-subjects comparison).

      Examining the impact of social stimuli on DRN DA activity was not the focus of the study by Cho and colleagues. Further, as stated in the discussion of Cho and colleagues, a number of experimental parameters such as the nature of social target (male exposed to juvenile vs. female), isolation paradigms (acute vs. chronic) and optical fiber tip location differed between these studies (please see the publications for details). Nonetheless, the experiments related to social behavior in these two studies produce largely consistent results.

      This post was composed by Kay M. Tye, a corresponding author of the Matthews and colleagues study, following discussion with and approval from Viviana Gradinaru, the corresponding author of the Cho and colleagues study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 04, Marilyn Tirard commented:

      We have published a response to the comment by Wilkinson et al, which can be found with the online version of the original article: https://elifesciences.org/articles/26338#annotations:1fKgOgXREei5NxuiEWEQ0w


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Sep 20, Kevin Wilkinson commented:

      Is the His6-HA-SUMO1 knock-in mouse a valid model system to study protein SUMOylation?

      K.A. Wilkinson<sup>1*</sup> , S. Martin<sup>2</sup> , S.K. Tyagarajan<sup>3</sup> , O Arancio<sup>4</sup> , T.J. Craig<sup>5</sup> , C. Guo<sup>6</sup> , P.E. Fraser<sup>7</sup> , S.A.N. Goldstein<sup>8</sup> , J.M. Henley<sup>1*</sup> .

      <sup>1</sup> School of Biochemistry, Centre for Synaptic Plasticity, University of Bristol, Bristol, UK. <sup>2</sup> Université Côte d’Azur, INSERM, CNRS, IPMC, France. <sup>3</sup> Institute of Pharmacology and Toxicology, University of Zurich, Switzerland. <sup>4</sup> Taub Institute & Dept of Pathology and Cell Biology, Columbia University, New York, NY, USA. <sup>5</sup> Centre for Research in Biosciences, University of the West of England, Bristol, UK. <sup>6</sup> Department of Biomedical Science, University of Sheffield, Sheffield, UK. <sup>7</sup> Tanz Centre for Research in Neurodegenerative Diseases, University of Toronto, Toronto, Canada. <sup>8</sup> Stritch School of Medicine, Loyola University, Chicago, USA.

      *Address for correspondence: kevin.wilkinson@bristol.ac.uk or J.M.Henley@bristol.ac.uk.

      Introduction

      There is a large and growing literature on protein SUMOylation in neurons and other cell types. While there is a consensus that most protein SUMOylation occurs within the nucleus, SUMOylation of many classes of extranuclear proteins has been identified and, importantly, functionally validated. Notably, in neurons these include neurotransmitter receptors, transporters, sodium and potassium channels, mitochondrial proteins, and numerous key pre- and post-synaptic proteins (for reviews see Henley JM, 2014, Wasik U, 2014, Peng J, 2016, Martin S, 2007, Luo J, 2013, Craig TJ, 2012, Scheschonka A, 2007, Guo C, 2014, Wu H, 2016, Schorova L, 2016). Furthermore, several groups have reported SUMO1-ylated proteins in synaptic fractions using biochemical subcellular fractionation approaches, using a range of different validated anti-SUMO1 antibodies (Martin S, 2007, Feligioni M, 2009, Marcelli S, 2017, Loriol C, 2012) and many studies have independently observed colocalisation of SUMO1 immunoreactivity with synaptic markers (Konopacki FA, 2011, Ghosh H, 2016, Gwizdek C, 2013, Jaafari N, 2013, Hasegawa Y, 2014). Tirard and co-workers (Daniel JA, 2017) directly challenge this wealth of compelling evidence. Primarily using a His6-HA-SUMO1 knock-in (KI) mouse the authors contest any significant involvement of post-translational modification by SUMO1 in the function of synaptic proteins.

      On what basis do Daniel et al. argue against synaptic SUMOylation?

      Most of the experiments reported by Daniel et al. use a knock-in (KI) mouse that expresses His6-HA-SUMO1 in place of endogenous SUMO1. Using tissue from these mice, followed by immunoprecipitation experiments, they fail to biochemically identify SUMOylation of the previously validated SUMO targets synapsin1a (Tang LT, 2015), gephyrin (Ghosh H, 2016), GluK2 (Martin S, 2007, Konopacki FA, 2011, Chamberlain SE, 2012, Zhu QJ, 2012), syntaxin1a (Craig TJ, 2015), RIM1α (Girach F, 2013), mGluR7 (Wilkinson KA, 2011, Choi JH, 2016), and synaptotagmin1 (Matsuzaki S, 2015). Moreover, by staining and subcellular fractionation, they also fail to detect protein SUMOylation in synaptic fractions or colocalisation of specific anti-SUMO1 signal with synaptic markers. On this basis, they conclude there is essentially no functionally relevant SUMOylation of synaptic proteins.

      What are the reasons for these discrepancies?

      • Inefficiency of His6-HA-SUMO1 conjugation and compensation by SUMO2/3

A major cause for concern is that there is 20-30% less SUMO1-ylation in His6-HA-SUMO1 KI mice than in wild-type (WT) mice (Daniel JA, 2017, Tirard M, 2012). Moreover, in the paper initially characterising these KI mice, Tirard et al. showed that while total protein SUMO1-ylation is reduced, total SUMO2/3-ylation is correspondingly increased (Tirard M, 2012). Thus, His6-HA-SUMO1 conjugation is significantly impaired and most likely compensated for by increased conjugation by SUMO2/3. Crucially, however, Daniel et al. do not examine modification by SUMO2/3 at any point in their recent study. Given that SUMO modification is notoriously difficult to detect, the 20-30% reduction in His6-HA-SUMO1 conjugation compared to wild-type SUMO1 conjugation will make detection even more technically challenging. Moreover, this deficit in SUMO1-ylation may well be offset by an increase in SUMO2/3-ylation of individual proteins, but this likely compensation was not tested. Since these deficits alone could explain why Daniel et al. failed to detect SUMO1 modification of the previously characterised synaptic substrate proteins, it is surprising that they did not attempt to recapitulate SUMO1-ylation of the target proteins under the endogenous conditions in the wild-type systems used in the original papers.

      • Lack of functional studies on the substrates they examine

Daniel et al. confine their studies to immunoblotting and immunolabelling. However, these techniques address only one aspect of validating a bona fide SUMO substrate. It is at least as important to examine the effects of target protein SUMOylation in functional assays. Function-based approaches such as electrophysiology or neurotransmitter release assays are not reported or even discussed by Daniel et al. This is an extremely important omission. We argue that simply because SUMO1-ylation of a protein is beneath the detection sensitivity in a model system that exhibits sub-endogenous levels of SUMO1-ylation does not mean that the protein is not a functionally important and physiologically relevant SUMO1 substrate.

      • Insensitivity or inadequate use of assay systems

      Failure to detect GluK2 SUMOylation

      GluK2 is a prototypic synaptic SUMO1 substrate that has been validated in exogenous expression systems, neuronal cultures and rat brain (Martin S, 2007, Konopacki FA, 2011, Chamberlain SE, 2012, Zhu QJ, 2012). Daniel et al. attempt to detect SUMOylation of GluK2 using immunoprecipitation experiments from the His6-HA-SUMO1 KI mice. However, a key flaw in this experiment is that the C-terminal anti-GluK2 monoclonal rabbit antibody used does not recognise SUMOylated GluK2 because its epitope is masked by SUMO conjugation. Thus, due to technical reasons, the experiment shown could not possibly detect SUMOylated GluK2 whether or not it occurs in the KI mice.

      Subcellular fractionation and immunolabelling

      Daniel et al. perform subcellular fractionation and anti-SUMO1 Western blots to compare His6-HA-SUMO1 KI and SUMO1 knockout (KO) mice. In the KI mice they fail to detect SUMO1-ylated proteins in synaptic fractions. Importantly, however, they do not address what happens in WT mice, which, unlike the KI mice, exhibit normal levels of SUMO1-ylation. While the authors provide beautiful images of SUMO1 immunolabelling in neurons cultured from WT, His6-HA-SUMO1 KI mice and SUMO1 KO mice, in stark contrast to previous reports using rat cultures (Martin S, 2007, Konopacki FA, 2011, Gwizdek C, 2013, Jaafari N, 2013), they detect no specific synaptic SUMO1 immunoreactivity in neurons prepared from WT mice. We note, however, that the nuclear SUMO1 staining in neurons from His6-HA-SUMO1 KI mice is weak, and even weaker in WT neurons. Given that a very large proportion of SUMO1 staining is nuclear, these low detection levels would almost certainly rule out visualisation of the far less abundant, but nonetheless functionally important, extranuclear SUMO1 immunoreactivity.

      In conclusion

      Given these caveats we suggest that the failure of Daniel et al. to detect synaptic protein SUMO1-ylation in His6-HA-SUMO1 KI mice is due to intrinsic deficiencies in this model system that prevent it from reporting the low, yet physiologically relevant, levels of synaptic protein modification by endogenous SUMO1. In consequence, we question the conclusions reached and the usefulness of this model for investigation of previously identified and novel SUMO1 substrates.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 29, David Keller commented:

      Were serum cholesterol levels adequately controlled for?

Low LDL cholesterol, and low total cholesterol, have been associated with higher risk of Parkinson disease (PD) in recent studies [1]. Consumption of low-fat dairy products, by displacing full-fat dairy products from the diet, certainly leads to lower serum cholesterol levels. Were serum cholesterol levels adequately controlled for? Does a person who consumes a lot of low-fat dairy food run a higher risk of PD than a person with the same lipid levels who does not consume low-fat dairy?

      Reference

      1: Huang X, Alonso A, Guo X, Umbach DM, Lichtenstein ML, Ballantyne CM, Mailman RB, Mosley TH, Chen H. Statins, plasma cholesterol, and risk of Parkinson's disease: a prospective study. Mov Disord. 2015 Apr;30(4):552-9. doi: 10.1002/mds.26152. Epub 2015 Jan 14. PubMed PMID: 25639598; PubMed Central PMCID: PMC4390443.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 21, Lydia Maniatis commented:

      This paper seems to hinge on a notion that makes no functional sense, has never been corroborated, and has led to an absurd claim by one of its main proponents attempting to salvage it.

The notion is that the visual system performs a Fourier analysis on the "image" via "spatial frequency filters." As far as I can see, the reasons for the adoption of this functionally absurd notion were an ill-conceived analogy to hearing and the Fourier analysis that happens in the inner ear, combined with a gross over-interpretation of the early findings of Hubel and Wiesel on the striate cortex of the cat. Psychophysicists were fabulously successful at corroborating the notion, accumulating heaps of evidence in its favor. Unfortunately, as Graham (2011) points out, the evidence generated had been interpreted in terms of a visual system that consisted solely of V1 (which was supposed to contain the orientation/frequency detectors), while the visual system was later understood to be far more complex. The stimuli that had been interpreted as tapping into V1 had somehow been ignored by neurons in V2, V3, V4, etc.! In these circumstances, Graham considers the "success" of the psychophysical program as something akin to "magic," and decides to explain it by arguing that, in the case of very simple stimuli, the brain becomes "transparent" down to the lower levels. Earlier, Teller (1984) had censured such attitudes as examples of an untenable "nothing mucks it up proviso." Below is the relevant passage from Graham (2011):

      "The simple multiple-analyzers model shown in the top panel of Fig. 1 was and is a very good account, qualitatively and quantitatively, of the results of psychophysical experiments using near-threshold contrasts . And by 1985 there were hundreds of published papers each typically with many such experiments. It was quite clear by that time, however, that area V1 was only one of 10 or more different areas in the cortex devoted to vision. ...The success of this simple multiple-analyzers model seemed almost magical therefore. How could a model account for so many experimental results when it represented most areas of the visual cortex and the whole rest of the brain by a simple decision rule? One possible explanation of the magic is this: In response to near-threshold patterns, only a small proportion of the analyzers are being stimulated above their baseline. Perhaps this sparseness of information going upstream limits the kinds of processing that thehigher levels can do, and limits them to being described by simple decision rules because such rules may be close to optimal given the sparseness. It is as if the near-threshold experiments made all higher levels of visual processing transparent, therefore allowing the properties of the low-level analyzers to be seen." Or, as is well-known, it's always possible to arrange experiments, including employing a very restricted set of stimuli, so as to achieve consistency with any hypothesis.

      I guess the fairly brief exposures used in the present experiment are supposed to qualify then for transparency status, unless the authors have their own views about the anatomical loci of the supposed spatial frequency filters, but all of this really should be discussed and defended explicitly.

      The idea that the visual system performs a Fourier analysis i.e. analyzes the visual stimulation into spatial frequency patterns, is absurd for a number of reasons. First, the retinal stimulation is initially point stimulation, a mosaic of points (photoreceptors) whose activities at any given moment depend on the intensity/wavelength of the photons striking them. Therefore, to organize this mosaic into a set of images based on spatial frequency is not first a problem of detection, but of organization. The spatial frequency kind of organization in no way furthers, but rather impedes, the task that the visual system has to achieve, which is to group points of the mosaic such that the boundaries of those groups correspond to the boundaries of the objects in the visual field. So it is not only incredibly difficult (no mechanism has been proposed), it is a pointless diversion. Even if this were not the case, the requirement for a 'transparency hypothesis" renders it absurd on that basis alone. There is no credible evidence in its favor.

      Other seemingly absurd claims include the statement that: "it recently has been shown that the availability of horizontal structure underlies the face-specific N170 response...." Is there such a thing as an image lacking "horizontal structure," or lacking structure in any direction? In other words, can we test this statement by controlling for "horizontal structure"? The term is too general to serve as a substitute for the much more specific and theoretically baseless image manipulation to which it refers.

      Another problem I have is the problem of sampling. With complex stimuli, the number of potential confounds is large. Not only is the sample of stimuli used here small; in addition, the authors don't indicate that they were randomly selected, only that they were "selected." On what basis? It seems to me that faces with well-defined eyebrows, for example, would be more likely to produce the desired results, given that the vertical filters seem to make them disappear in the sample provided in the article.

      I agree that familiarity makes perceptual tasks easier, and even that we notice relationships across the vertical axis more easily than across the horizontal, but the present experiment has nothing to do with that.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 22, Lydia Maniatis commented:

This study is an exercise in generating and crunching numbers following a 50-year-old recipe containing many untested and implausible assumptions, i.e. Green and Swets' (1966) analogizing of perception to signal detection. I've commented on this issue in detail in a number of PubPeer/PubMed Commons comments on other articles.

      I'll just make one additional comment here. In the Appendix, we're casually informed, as a preliminary, that:

      "All targets were assumed to be known to the ideal observer."

      In what sense is this omniscient-homunculus-assumption related to human perception? What is the neuroscientific theory behind it, and what is the relationship of this homunculus to the processes doing the "detecting and discriminating?" In other words, what is the relevance of the "ideal observer model" other than to provide an arbitrary value with which to compare the human data?

      (Also, what does it mean, exactly, that the targets were "known" to the ideal observer? What does this knowing consist of?)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 22, Clive Bates commented:

      I am struggling to see any useful purpose in this paper.

It seems to consist of finding alcohol flavours in tobacco products and e-liquids by looking on the internet. There is no data on volumes or any age-related information about sales or appeal. The authors have done nothing to show an effect that requires a regulatory intervention or anything to justify their policy conclusion.

"The widespread availability of alcohol-flavoured tobacco products illustrates the need to regulate characterising flavours on all tobacco products."

Not so. The widespread availability of a product is not in itself a problem. Nor is the widespread use of a product, unless its use can be linked to a harm - which the authors have not done here, other than by just asserting it. It is likely, and certainly plausible, that these flavours are beneficial by encouraging adults to switch from smoking to vaping, and that a regulatory intervention would cause more harm than good. It is also possible that if young people were attracted to such flavours, they might be diverted from smoking to vaping, which is a benefit. Needless to say, the authors do not consider such real-world possibilities.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 23, Dorothy V M Bishop commented:

There is some evidence for diagnostic substitution from developmental language disorder to autism. I suspect, unfortunately, that this might be harder to quantify in public data than the substitution from intellectual disability, unless states have been consistent in the terminology used for language disorder. Our study on this topic was just based on an adult follow-up of a small sample of children who had been diagnosed with language disorder, but it was striking how some cases identified as language disorder (or specific language impairment) 20-30 years earlier would nowadays be seen as clearcut cases of ASD - not because of any change in their profiles, but rather because of less restrictive diagnostic criteria for autism. Bishop DVM, Whitehouse AJO, Watt HJ, et al. Autism and diagnostic substitution: evidence from a study of adults with a history of developmental language disorder. Dev Med Child Neurol. 2008 May;50(5):341-345.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 22, Sunil Verma commented:

      Dear Colleagues,

This paper used our universal primers (mcb398/mcb869 - US patent 7141364) along with other primers and cited our paper, Verma and Singh 2003, Mol Ecol Notes 3:28–31; I therefore became interested in it.

Being the inventor of the primers used in this study, I am aware that they CANNOT establish the identity of a species from "ash". I was genuinely shocked to read a title suggesting that someone could establish species identity from "ash" using my primers!

I was concerned that all the scientific queries I have answered over the last 20 years, and all the arguments I have made as a wildlife forensic expert in the court of law (namely, that species identity cannot be established from "ash"), would be proven wrong in light of this paper.

After going through the abstract itself, I understood the matter. I also went through the full paper and concluded that its title is misleading. The authors did not in fact establish identity from ash; they established it from partially burnt biological material recovered from the scene of crime. Thus, the title of the paper should NOT have been "Molecular identification from ash".

My scientific viewpoint is that the title of this paper should be corrected as appropriate and an erratum published in the respective journal.

      Dr Sunil Kumar Verma

      [Sunil Kumar Verma]


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 21, Lydia Maniatis commented:

      (Comment #2)

The three sentences of the conclusion, which I annotate below in their entirety, reflect the article's utter lack of content:

      1. *"Motion information generated by moving specularities across a surface is used by human observers when judging the bumpiness of 3D shapes." *

      The boundaries of specularities are effectively contour lines. It would be thoroughly unrealistic to predict that they would not play a role, moving or not. And the observation had already been made.

      1. "In the presence of specular motion, observers tend to not rely on the motion parallax information generated by the matte-textured reflectance component."

      The two parts of this sentence seem to be a non-sequitur - how could observers of specular motion employ information generated by matte-textured objects (i.e. objects other than the ones they were observing)? What the authors mean to say is that observers don't use the motion parallax info generated by the specular stimulus. While they frame this as though it were an actual finding, it is, as discussed above, a purely speculative attempt to explain the poorer performance with specular objects.

      1. *"This study further highlights how 3D shape, surface material, and object motion interact in dynamic scenes." *

      It really doesn't, given the mixed results and failed predictions. It couldn't for a number of other reasons, discussed below.

      1. All of the heavy lifting in this article is done by computer programmers, whose renderings are supposed to qualify as "specular," "specular motion," "matte-textured," etc. These renderings rest on theoretical assumptions, most of which are never made explicit. They are, however, inadequate; we learn that observers sometimes saw the moving specular stimuli as non-rigid. This is a problem. There is no objective description of the phenomenon "specular object in motion around an axis" other than "objects generated by this particular program." Is there any doubt that results would have been different if the renderings had accurately mimicked the physical phenomenon? If the surface of the object is seen as changing, doesn't this affect the "motion parallax" hypothesis? The speed with which a particular point on a surface is moving optically is confounded with the speed with which it is moving on its own.

      2. The so-called matte-textured objects appeared purely reflective when not in motion. The apparent specularities were "stuck-on," so that they moved with the surface. I have never seen a matte surface with this characteristic. I would be curious to see the in-motion renderings, because I cannot imagine what they look like. What is clear is that a simple reference to matte-textured objects is not appropriate. We are talking about a different phenomenon, which may not correspond to any physically actualizable one. This latter fact wouldn't matter if the theoretical framework were tight enough that such stimuli allowed isolation of some particular factor of interest. Here, however, it just means that "matte" doesn't mean what it normally is thought to mean.

      3. Observers were confused about the meaning of the term "bumpiness." Stimuli involve hills and ridges of various extents as well as varying apparent heights. The authors were interested in height. They instructed observers who asked for clarification (not the others) that they were interested in "the amplitude not the frequency." I would say a large hill or a wide ridge could qualify as more ample for people not thinking in terms of graphs with height on the ordinate. In other words, I think there is an observational confound between extent and height of the bumps.

      4. In the introduction, the authors refer to previous papers which came to opposite conclusions. Presumably, this means that some relevant factors/confounds were not considered. But the authors don't attempt to analyze these conflicted citations, which thus merely function as window-dressing. They move on to their experiments, on the slightest and vaguest of pretexts, with poorly described stimuli and poorly controlled tasks.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 20, Lydia Maniatis commented:

      The term "motion parallax" figures prominently as an (oddly negative) explanatory concept in Dovencioglu et al's (2017) brief conclusion. Which is interesting, since we do not come across it in the title, abstract, introduction, methods or results sections of the paper.

      It shows up for the first time in the discussion, in the form of a highly speculative suggestion aiming to explain the study's unpredicted and uninterpretable results, in particular why it failed to produce an effect that, throughout the paper, the authors create the impression it did produce.

      I don't know if they deliberately meant to mislead, but until I got to a pair of isolated statements in the Discussion, I was under the impression that they were claiming to have shown that "specular motion" (a highly problematic term in its own right, as will be discussed in a subsequent comment) improved observers' ability to make veridical estimates of 3D shape. This is definitely not the case.

      While they're careful not to say it outright in the abstract, and while the title, as is typical of vision papers, is uninformative, the prominence of the "specular" term might lead an unsuspecting reader to assume that the claim that results "provide an additional layer of evidence for the capacity of the visual system to exploit image information for shape inference" is referring to some value-added information provided by "specular," as opposed to non-specular, stimuli.

      Similarly, the text of the final section of the introduction, titled "Does specular flow facilitate or interfere with 3D shape estimation?", might give the impression that the former is the case. For example:

      "...we focus on...whether specular flow...can provide better information on 3D shape than optic flow...specular flows are directly related to 3D curvature and seem to be less sensitive to the particular motion of the object, whereas optic flows vary more substantially with the latter. Thus, if a perceptual task required observers to make judgments abouta an objects' 3D curvature structure...one would expect more consistent shape perception across changes in object rotation axis."

      Given the lead-up, I think a reader would be justified in interpreting the phrase "more consistent shape perception" as equivalent to "better shape perception." Note, by the way, that "more consistent" as distinct from "facilitating" or "interfering" does not seem to be among the choices entertained in the title heading. That dichotomy has effectively disappeared by the end of the passage, because it cannot be settled by the data. In what is effectively a case of "bait and switch," a notion of "consistent" performance has been substituted for "better" (facilitated) performance. Under the circumstances, I think a reader might be excused for forming the impression that the answer to the question posed in the section heading was that specularity facilitates. The idea that more "consistent" performance corresponds to overall worse performance is not something that one would naturally assume.

      In any event, at a couple of points in a very opaque and incoherent text, the authors share with us the fact that "Surprisingly, overall [so-called] specular objects tended to be less discriminable than [so-called] matte-textured objects" (p. 10) and that "for in-depth rotations, discriminability of specular objects was overall lower than that of matte shapes, and for viewing axis rotations, it never exceeded that of matte objects" (p. 11). (As far as I can see, these facts are not made clear in the Results section.)

      So, to make lemonade out of lemons, the authors treat the (unexplained) lesser variability (across conditions) but overall worse performance in the "specular" case across conditions as advantageous relative to the (unexplained) overall better performance but greater variability (across conditions) of the "matte" case. The one specific claim, post hoc, that motion parallax is not used in the specular case, but is used in the matte case, is only a somewhat bizarre and superficial attempt to explain the worse performance (but not the greater "consistency") of the specular case.

      What is, in fact, the case is that the premises (tbd) and methods (tbd) of this project are sloppy, the analysis confused, forced, and misleading.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 12, David Keller commented:

      Pre-Existing Rheumatoid Arthritis Should Increase, Not Reduce, the Risk of Incident Parkinson Disease

      The authors of this editorial ask "Would tamping down the immune system be a good thing for PD (Parkinson disease) symptoms, or would activation of the immune system be advantageous?"[1] They cite epidemiologic studies which showed "a higher risk of PD in patients with type 1 diabetes mellitus (T1DM) and Crohn disease (CD), among others [2]", in conflict with studies that showed a decreased risk of PD in patients with rheumatoid arthritis (RA). So, does the presence of RA increase or decrease the incidence of PD?

      Genome-wide association studies (GWAS) found "enrichment" of loci associated with Parkinson disease conditional on the presence of loci associated with each of 7 autoimmune diseases, to varying degrees. The graphs in Figure 1 [3] demonstrate that for all 7 autoimmune diseases, the greater the population of SNPs associated with autoimmunity, the greater the enrichment of PD SNPs. The authors designate this as "leftward" deviation of the curves, although mathematically it is really UPWARD deviation from the straight line representing the null hypothesis. So, in genetic studies, all seven of the tested autoimmune diseases (T1DM, CD, Ulcerative Colitis, Celiac Disease, Psoriasis, Multiple Sclerosis and Rheumatoid Arthritis) were genetically associated with increased risk of incident PD.

      How can genetic risk of PD increase directly with the genetic risk of RA, yet epidemiological studies demonstrate an inverse association between established RA and the incidence of PD? The authors of one such epidemiological study did not believe their own results, and hypothesized that "the decreased risk [of incident PD] among patients with RA might be explained by underdiagnosis of movement disorders such as PD in this patient group, or by a protective effect of treatment with anti-inflammatory drugs over prolonged periods." [4] In other words, early signs of PD, such as bradykinesia, could be masked in RA patients, in whom slow movement might be attributed to pain or joint destruction, and ibuprofen use could have further confounded their results.

      The nonsteroidal antiinflammatory drugs (NSAIDs) have been studied extensively, and the only one which significantly reduces the risk of incident PD is ibuprofen.[5] A study by Sung and colleagues [6] concluded that pre-existing RA reduces the risk of incident PD, but they corrected their data for the use of any NSAID, rather than the use of ibuprofen, introducing systematic errors in their results and potentially invalidating their conclusions. [7]

      Can a destructive autoimmune disease like RA reduce the risk of incident PD, in contrast to 6 other autoimmune diseases, which raise risk for PD? Or, do symptom masking and the protective effects of ibuprofen explain the reduction in incident PD seen in patients with RA? In an unpublished reply to these arguments, Sung's group wrote: "[Keller's] criticism focuses on the issue whether [any] non-aspirin NSAID or ibuprofen only, has the truly protective effect against the development of PD", and agreed that "ibuprofen was associated with decreased risk of PD, but not aspirin or other NSAIDs" and concluded that "ibuprofen use should be considered as an important covariable in future correlational research in PD." [8]

      References

      1: McFarland NR, McFarland KN, Golde TE. Parkinson Disease and Autoimmune Disorders-What Can We Learn From Genome-wide Pleiotropy? JAMA Neurol. 2017 Jul 1;74(7):769-770. doi: 10.1001/jamaneurol.2017.0843. PubMed PMID: 28586798.

      2: Lin JC, Lin CS, Hsu CW, Lin CL, Kao CH. Association Between Parkinson's Disease and Inflammatory Bowel Disease: a Nationwide Taiwanese Retrospective Cohort Study. Inflamm Bowel Dis. 2016 May;22(5):1049-55. doi: 10.1097/MIB.0000000000000735. PubMed PMID: 26919462.

      3: Witoelar A, Jansen IE, et al. for the International Parkinson’s Disease Genomics Consortium. Genome-wide Pleiotropy Between Parkinson Disease and Autoimmune Diseases. JAMA Neurol. 2017;74(7):780–792. doi:10.1001/jamaneurol.2017.0469

      4: Rugbjerg K, Friis S, Ritz B, Schernhammer ES, Korbo L, Olsen JH. Autoimmune disease and risk for Parkinson disease: a population-based case-control study. Neurology. 2009 Nov 3;73(18):1462-8. doi: 10.1212/WNL.0b013e3181c06635. Epub 2009 Sep 23. PubMed PMID: 19776374; PubMed Central PMCID: PMC2779008.

      5: Gao X, Chen H, Schwarzschild MA, Ascherio A. Use of ibuprofen and risk of Parkinson disease. Neurology. 2011 Mar 8;76(10):863-9. doi: 10.1212/WNL.0b013e31820f2d79. Epub 2011 Mar 2. PubMed PMID: 21368281; PubMed Central PMCID: PMC3059148.

      6: Sung YF, Liu FC, Lin CC, Lee JT, Yang FC, Chou YC, Lin CL, Kao CH, Lo HY, Yang TY. Reduced Risk of Parkinson Disease in Patients With Rheumatoid Arthritis: A Nationwide Population-Based Study. Mayo Clin Proc. 2016 Oct;91(10):1346-1353. doi: 10.1016/j.mayocp.2016.06.023. PubMed PMID: 27712633.

      7: Keller DL, Only ibuprofen is associated with reduced PD risk - controlling for use of any NSAID introduces error. PubMed Commons Comment, accessed on 9/12/2017 at the following URL: https://www.ncbi.nlm.nih.gov/pubmed/27712633#cm27712633_34408

      8: Sung YF, Lin CL, Kao CH, and Yang TY. Reply to Keller's unpublished letter to Mayo Clinic Proceedings. Received by email on November 22, 2016.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 21, Lydia Maniatis commented:

      I don't get it; are citations like these valid? They seem to be doing most of the heavy lifting.

      "The finger spread and beauty judgement were recorded by our web app, emotiontracker.com (A.A.B., L. Vale, and D.G.P., unpublished data).

      As previous work has shown (A.A.B., L. Vale, and D.G.P., unpublished data), continuous pleasure ratings are well fit by a simple model, refined here ((Equation 1), (Equation 2) ; (Equation 3) and Figure 1B). The model supposes..."

      What gives the model assumptions their credibility? Are readers now supposed to take claims on faith?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 21, Stuart RAY commented:

      Conclusions revised; see Jakobsen JC, 2017


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 05, David Thomas commented:

      Readers should exercise considerable caution interpreting this report. The review correctly identifies a sparse literature comparing placebo to treatment. As they observe, "there were no data on hepatitis C-related morbidity and very few data on mortality." However, they mistakenly conclude in the preceding sentence that "DAAs do not seem to have any effects on the risk of hepatitis C-related morbidity or all-cause mortality." If there are no data, how could this evidence-based review make that conclusion? The lack of evidence of an effect is not evidence of a lack of effect.

      Even greater clarity on the matter comes from understanding why those data are missing. The paucity of long-term mortality data from persons with HCV randomized to placebo versus treatment exists because, as Dr. Ray points out, the medical consensus underscored by AASLD/IDSA, EASL, the US National Academies of Sciences, Engineering, and Medicine and others is that treatment with DAAs is beneficial. There would be ethical challenges to randomizing someone to placebo and observing them for 10-20 years to see if they acquire conditions that we know the virus causes, when a treatment that eliminates the infection exists. Of course, not all will develop liver cancer or liver failure, but no complication occurs in every patient, even for conditions whose treatment has universal support.

      Readers should note that the lack of placebo-controlled long-term data exists because DAAs are effective, not as evidence that "DAAs do not seem to have any effects".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jun 28, Stuart RAY commented:

      This review acknowledges some limitations of the evidence, but fails to weigh those limitations against the impracticality of examining long-term outcomes in short-term trials, which is why SVR was used as a well-founded surrogate endpoint; perhaps more importantly, the review falls short in the larger context of the global HCV epidemic and elimination campaigns. That this is the case is illustrated by the responses from organizations such as AASLD and IDSA, EASL, and related groups in Australia and Asia. Each of these statements provides considerations for the Cochrane review and its interpretation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 05, Lydia Maniatis commented:

      You should be faulted for the nature and extent of the problems, not for making them explicit, which is, indeed, useful. I think the nature of the problems reflects an unproductive approach to science.

      Below, I contrast this approach with an alternative and briefly explain why the latter is productive while the former is not. (The alternative is presented first.)

      1. Someone makes a guess about the causal basis of an effect or phenomenon. These guesses entail various assumptions. The assumptions should be adequate to explain the thing they have been proposed to explain. Additionally, one may derive certain other implications, pointing to effects or facts that have not, as yet, been observed, i.e. predicted effects or facts. The data in an experiment or investigation designed to produce those predicted effects, or discover these predicted facts, thus act as a test of those predictions and the related assumptions. The criterion for provisional acceptance of assumptions is, in other words, the match between their observable implications and observation.

      2. Data collected are interpreted on the basis of assumptions which they are not designed to test, and cannot test. Thus, the experiment plays no role in corroborating or falsifying these assumptions. The criterion for adopting them is simply the personal preference of the investigator - a criterion which, again, is independent of experiment.

      Because this approach is not designed to test assumptions, it is inherently uninformative as to their verisimilitude, their relationship to the "ground truth." The titles of the corresponding articles are similarly uninformative. They report "exploring," "characterizing," "measuring," various effects, or simply state the topic area they are addressing without giving any hint of their conclusions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 05, Michael Tarr commented:

      Interested parties should read the entire paper and make up their own minds. Every scientific study has limitations and we should not be faulted for making them explicit so as to inform interested readers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jun 03, Lydia Maniatis commented:

      “Pick your poison”

      I’m speculating, but it seems to me that, no longer able to bar post-publication critiques of low-quality studies (via muscular refusal to publish critical Letters to the Editor), journal editors have adopted a compromise strategy of continuing to accept low-quality papers while requiring authors to enumerate (many of) the study’s (often fatal) flaws in a section referring to “limitations of the study.” This is an improvement on the old system; however, this special section should be placed at the front, not at the tail end, of the paper. More often than not, it reveals that methods were confounded and interpretation based on flimsy, vaguely elucidated and untested assumptions; as such, conclusions can carry little or no theoretical weight.

      This is the case here; among the “Issues and limitations,” of the study the authors mention:

      a. That the study lacks an important control condition (“we did not collect neural responses for any untrained face stimuli…” (p. 18)) and that it is not clear how the necessary control might be achieved.

      b. That it is unclear whether results generalize in order to explain what they are supposed to explain; this, we’re told, is contingent on whether certain vague assumptions adopted by the authors, about what observers are doing, actually hold ("we hypothesized that this task prompted subjects to learn...").

      c. That it is not clear that subjects were discriminating faces holistically, or only on the basis of the simple variations used in the stimulus set. The authors explain that they prefer to make the more convenient assumption that “the task used in our study was biased towards facial discrimination rather than facial part discrimination.”

      d. Most interestingly, we learn that, in order for the results of the imaging technique used to be interpretable, it is necessary to impose certain constraints on the analysis, and that the choice of constraints “can lead to source reconstructions that are different from the true activity in the brain.” What should a scientist do, in the absence of information about the proper (true) assumptions to make? I would say, make and test the assumptions you need in order to ascertain the proper constraints. Instead, the authors take a riskier route. In the absence of knowledge, they say, one has to “pick his poison by simply choosing some constraints.” Of course, the authors “tried to choose reasonable constraints…”

      To make the situation clear: to interpret their data, collected under conditions which are heavily confounded (and which the authors unconfound simply on the basis of wishful thinking), they must make further, untested assumptions in a way that is so rigorous that they analogize it to picking a poison. Conclusions are thus wholly contingent on layers of highly speculative assumptions. Until these are clarified, tested, and corroborated, the empirical content of this project - the theoretical weight of the conclusions - is null.

      Assuming these articles are actually written to be read, the poison label should be affixed prominently at the top.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 19, Stefano Casola commented:

      The authors acknowledge the insightful comments. We would like to draw the attention of Dr Woodgett to the following observations:

      • The antibody used to detect GSK3-beta Serine-9 phosphorylation does not recognize phosphorylated forms of GSK3-alpha (https://www.cellsignal.com/products/primary-antibodies/phospho-gsk-3-beta-ser9-d3a4-rabbit-mab/9322). Thus, whereas our data indicate that loss of the BCR in MYC-driven lymphoma cells leads to a reduction in GSK3-beta Ser-9 phosphorylation, it remains to be investigated whether GSK3-alpha is similarly affected in these cells.

      • GSK3-beta knock-down experiments were performed using six independent shRNAs (referred to in the Methods section of the paper). Data obtained with the two most effective hairpins are shown in Extended Data Figure 5c, d. Importantly, the shRNAs were selected for their ability to target GSK3-beta while sparing GSK3-alpha. Despite only partial GSK3-beta knock-down, lymphoma cells losing BCR expression withstood counter-selection by their BCR+ counterparts substantially better in competition assays, with the most effective hairpin (shRNA# 2) causing a complete block of their counter-selection (Extended Data Figure 5e). These results closely mirror those obtained studying BCR+/BCR- lymphoma cell competitions treated with the GSK3 inhibitor CHIR99021 (Figure 3d and Extended Data Figure 5a).

      Therefore, whereas we cannot exclude a contribution of GSK3-alpha, our data indicate that modest changes in GSK3-beta expression/phosphorylation are sufficient to critically affect BCR-controlled fitness of MYC-driven lymphoma cells.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 15, Jim Woodgett commented:

      This is an interesting paper but, from what I can see, the evidence for these effects being dependent on GSK3beta (rather than a combination of GSK-3beta plus GSK-3alpha) is limited to a partial (maximally 35%) knockdown by siRNA in Extended Data Figure 5, where a marginal effect was observed (partial knockdown of GSK3alpha may have given a similar result). The pharmacological inhibitor used, CHIR99021, has NO significant selectivity for GSK3beta over GSK3alpha (the authors do refer to it being a GSK3 inhibitor in two places). In every example where phosphorylation of Serine 9 of GSK3beta has been examined along with phosphorylation of Serine 21 of GSK3alpha, they are phosphorylated in parallel. There are no kinases that target these sites selectively (not true of a more C-terminal site, Ser389, targeted on GSK3beta by p38 MAPK). Why is this important? Because throughout the paper there are over 50 mentions of GSK3beta, including the title, yet there was no measurement of GSK3alpha phosphorylation or knockdown.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 03, David Mage commented:

      The authors have performed a thorough study of SUID rates in the entire U.S. for years 1995-1997 and 2011-2013 and test for significant differences (at P<0.05) in Table 2.

      However, they neglected to make the finite population correction (fpc) that adjusts for the sample size n of independent probability samples without replacement from a finite population of size N.

      Given that they sampled the entire U.S. infant populations for SUID during those years so that n = N, the fpc = 0 as shown below because there is no sampling error.

      Consequently footnote 'a' and related discussion could be removed.

      fpc = Sqrt[(N - n)/(N - 1)] = 0 for n = N.
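
      To make the arithmetic concrete, here is a minimal Python sketch of the correction described above; the population and sample sizes are hypothetical and serve only to illustrate the formula.

          # Minimal sketch of the finite population correction (fpc).
          # The values of N and n below are hypothetical, chosen for illustration.
          import math

          def fpc(N: int, n: int) -> float:
              """fpc for sampling without replacement from a population of size N."""
              return math.sqrt((N - n) / (N - 1))

          def corrected_se(se: float, N: int, n: int) -> float:
              """Standard error multiplied by the fpc."""
              return se * fpc(N, n)

          # A partial sample leaves some sampling error:
          print(fpc(N=10_000, n=1_000))    # ~0.949
          # Sampling the entire population (n = N) gives fpc = 0,
          # i.e. no sampling error -- the point made above:
          print(fpc(N=10_000, n=10_000))   # 0.0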


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 30, E Naydenov commented:

      The expectations about IRT should be realistic. Our experience so far indicates that it might be a usable localization tool only in cases of subcortical convexity tumors. The results are more evident in patients with metastatic brain disease. Another study on the latter topic is under way.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 14, Jean-Jacques Letesson commented:

      To my knowledge, this is the first reported bacteriologically positive case of Brucella infection of the eye.

      Existing information on uveitis and other affections of the eye suggests that these are not such rare complications of brucellosis. Depending on the series, up to 50% of patients (B. melitensis) can show some of these troubles (early observations are in Dalrymple-Champneys, W., 1960. Brucella Infection and Undulant Fever in Man. Oxford University Press, London, pp. 88; also, in a recent work done in an endemic area, about 20% of the patients showed these complications: Gulten Karatas S, et al. 2009. Canadian Journal of Ophthalmology 44, 598–601). However, Brucella has never been isolated from enucleated human eyes to confirm the diagnosis (Brucellosis. M. Monir Madkour. ISBN 0-7236-0941-1).

      This ocular localization could be related to the presence of aldose reductase (AR) in these organs, aldose reductase probably being involved (among other functions) in the production of erythritol, the favorite carbon source of Brucella (Erythritol Availability in Bovine, Murine and Human Models Highlights a Potential Role for the Host Aldose Reductase during Brucella Infection. Front. Microbiol., 13 June 2017 | https://doi.org/10.3389/fmicb.2017.01088).

      Actually, the lens and retina, while not described as sites of high AR expression, have also been analyzed for AR content, because the expression of AR is induced by high glucose or osmotic stress and because of the role of the polyol pathway in diabetic cataract and retinopathy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 31, Lydia Maniatis commented:

      p.s. The fact that references to decomposition into layers are references to perceptual facts also means that simply asking people to qualitatively describe their experience is a necessary and sufficient means for deciding between the authors' two proposed "models" of these experiences, as both refer to perceptual dimensions and only perceptual dimensions (regardless of the presumed physical causes). Thus: "The hybrid decomposition thus suggests that the human visual system may decompose compound BRDF's into two perceptual components..."

      As this simple operation of asking people whether they perceive layers, etc., was not performed, the authors try to guess at whether their results are consistent with one or another description. "To this end, we have performed a number of linear regressions on our experimental data..." but of course "it should be noted that the evidence derived from the current experiment is mixed, so additional, more targeted [less confounded, better rationalized] experiments would be required..." But still, they go on to elaborate a post hoc description of results, for what it's worth.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 31, Lydia Maniatis commented:

      What do Vangorp et al (2017) have to say about the “perception of hazy gloss” (and why don’t they break the news up front)?

      If the final statement of their discussion is any indication, it seems they ended up where they began: "An exciting avenue of future work is thus to understand how the human visual system encodes hazy gloss and distinguishes it from glare in images." (p. 14)

      But this study also used images, and its stated goal was “to investigate the conditions in which haze gloss is encoded in the perceptual representation of glossy materials.” (p. 2)

      As reported, the experiments performed were hopelessly confounded, their results uninterpretable, and their rationale conceptually confused. As is too often the case, all the authors can do under the circumstances is to describe their results in a purely self-referential way, offering vague and incoherent post hoc speculations as to the causes of their particular outcomes with respect to their particular conditions. Thus:

      Experiment 1

      Experiment 1 doesn’t actually evaluate perception of a quality that observers would necessarily describe (and which the authors vaguely describe) as “hazy gloss,” but, rather, the ability to match certain parameters of certain graphic images.

      Results are highly variable and don’t possess a form the authors did or could have predicted or are able to interpret: “The space of average matches is severely compressed towards the middle of the valid space. The variance of these average matches is so large that representing it as ellipses would clutter the graph.” (p. 6) Vague speculations follow:

      “One interpretation could be that a perceptual decomposition of sharp and wide specular components [being graphic images, the stimuli do not actually contain specular components, unless we are talking about reflections on the screen] is difficult…” (p. 6)

      “However…these results do not prove that participants perceptually separated the material into two components.” This is partly due to confounds: “In each condition only a single parameter is kept correct, while the other three vary…This means that observers might attend to the narrow shading features to judge…Observers might also attend to other shading features…” (p. 6)

      “Since there is no correct match [in the single component condition], success must be defined as the consistency between participants and consistency with plausible models…” (p. 6) There is, however, no discussion about the plausibility of models.

      “The polar plots show skewed and even bimodal probability distributions…” (p. 6)

      “The findings suggest that, at least for some parameter ranges, particularly in the upper edges of the stimulus space, participants are able to match the properties of two-component specular materials [assuming that the graphics program accurately mimics the optical effects of the referenced specular conditions].” (p. 6)

      It should be clear that the muddled, ad hoc conclusions, to the extent that they are intelligible at all, refer narrowly to the particular experimental conditions and stimuli; the conceptual analysis is not such as to allow us to draw out implications outside of this self-referential space.

      (And yet another confound is referred to in the description of Experiment 2: “The diffuse component in Experiment 1 could have obscured the intended differences in the specular [so-called] components…we performed the 4AFC task in a darkened room …in order to further optimize the performance of the observers.” (p. 9-10) (Further optimize....as in “even more perfect...”)

      Experiment 2

      This is a forced choice discrimination task. Forced choice means we don’t really care about what observers are actually perceiving, only that they give some manageable responses.

      Again, there is much post hoc speculation (very briefly, to give the style of the account): “For all three classes of distractors there are ranges of corresponding two component materials that are easily distinguishable.” “The main differences between conditions may be explained by…This would explain the opposite slopes of the performance curves…However, Figure 10 suggests that a simple interpretation…is unlikely… Nevertheless, the discrimination experiment suggests…”

      In any case: “However, this in itself does not tell us about the subjective interpretation of these differences. Does the [presumed, and variable] ability to distinguish between different BRDF’s reflect a distinct perceptual parameter, or are the image differences detectable but not interpretable?”

      Answering this question is a job for Experiment 3, which, naturally, employs entirely different graphic material effects (silvery as opposed to plasticky...Why?) with different graphic lighting effects. (“This experiment used a different lighting environment representing another church interior, Galileo’s Tomb.” Well, as long as it's a church...We’re not told why, or how such a change might be expected to affect the phenomenon of interest.) The authors also give us various other details about the parameters employed, but what’s the point?

      The results of Experiment 3 are as informative as those of the other two. The arguably tautological conclusion is drawn that “the subjective impression of hazy or layered materials is crucially associated with a “bloom” or “halo” around sharp reflections.” (p. 10). (The use of quotation marks indicates that a subjective impression is being explained in terms of a similar, differently-expressed subjective impression.)

      The theoretical discussion generally is conceptually confused, essentially failing to make the crucial distinctions among the distal object, the proximal (retinal) stimulation and the perceptual experience. This is illustrated, for example, by the following phrase: “It is highly unlikely that the human visual system refers to only two specific angles [of reflections] to infer haze gloss…” The angles being referred to have no retinal correlate, so of course they could not be the cause of the percept. These angles of reflection may have a correlate in the percept, but this cannot be used as an explanatory principle for the formation of the percept.

      Another funny thing the authors do (they’re not the only ones) is to express a perceptual fact as though it were a theoretical one, treating a perceptual fact as something yet to be ascertained experimentally. Thus:

      “We therefore propose that the visual system may represent haze by decomposing the material response into two distinct components, or causal layers. One obvious choice for such a decomposition would be the physical components themselves (i.e. the broad and narrow specular terms). [Note that, as usual, the authors are inappropriately treating the physical features of the distal stimulus as explanatory variables]. This decomposition separates the composite reflection into two [and the authors can only be referring to perceived layers, as in their later reference to transparency] superimposed layers…much like the decomposition of image patches...in transparency perception. This would be broadly [a high standard!] consistent with our findings that observers base many of their judgments on the narrow component.”

      We don’t have to guess as to whether the percept contains perceived layers; this is, by definition, a subjective fact and can be directly ascertained by personal experience and/or by asking other people what they see. If they answer in the positive, then any experiment that indicated otherwise would obviously be in error.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 31, Xiaolin Wu commented:

      This paper raises an important safety issue for gene therapy applications of CRISPR-Cas9. However, there are serious doubts about the results and their interpretation. First of all, the authors listed the top 10 predicted off-target sites, but all the genes are wrong! Looking at the sequences they listed (Supp. Figure 3), you will not be able to find them in those genes! After careful inspection, the first predicted off-target is actually the "on-target" sequence for the Pde6b gene. For such a high-profile journal, you can't be so sloppy. This is not just a typo: I inspected them, and they are all assigned to the wrong gene. If you can't even get your on-target correct, how do you think people can trust your data? Some genes are even assigned to the wrong chromosomes! Supp. Figure 3, panel b lists the Herc1 gene on chr11; that gene is supposed to be on chr9. After this first figure, I don't even know if any other information reported here is correct!

      I then went on to inspect Supp. Tables 1-3. The authors listed all off-targets observed from the WGS. However, Pde6b pTyr347fs/c1041_1050CGTAGCAGAA is actually the on-target indel - and the authors did not even notice this is their target gene, and listed it as one of the two off-target genes with a mouse phenotype? The CRISPR-Cas9 system is supposed to create the indel here! You simply did not repair it; you replaced the stop codon with the indel. I downloaded the raw sequence and found that this specific deletion (CTGAGCAGAA) cannot be found. Only by reading the authors' previous paper did I figure out that they mean a 10 bp deletion, but they don't even have the correct deletion sequence!

      After seeing all these careless mistakes, I don't even know if they mislabeled the mice or samples! It is hard for me to imagine that CRISPR-Cas9 causes so many homozygous deletions in two independent mice (all right, it may happen in rare cases for a specific sgRNA like this one). And even if some of the mutations/indels are real, they may have nothing to do with CRISPR-Cas9. For example, the authors see a homozygous deletion in the Pde9a gene in both animals. Did the authors consider the possibility that this deletion might be created by totally unrelated mechanisms and strongly selected for in vivo, since Pde9a and Pde6b are paralogues? The easiest way to test whether these are real CRISPR-Cas9 off-targets is to check these loci in treated cells in vitro. In that setting, you can check millions of cells to see whether they do or do not occur. Maybe none of them is created by CRISPR-Cas9 off-target activity. But during embryo development, these mutations may be created and strongly selected for, to compensate for something. I admit that in vitro does not speak for in vivo, but you can't just assume these mutations are generated by CRISPR-Cas9.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 13, Atanas G. Atanasov commented:

      Congratulations on the excellent overview, dear colleagues. I have featured your work in my recent science popularization article “Are Probiotics Useful For Therapy of Autoimmune Diseases?”: https://www.consumerhealthdigest.com/general-health/probiotics-and-autoimmune-diseases.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 31, Jens Staal commented:

      Interesting. I wonder if the common factor is CARD9, which is known to be required for both pathways in mammals.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 27, Andrea Messori commented:

      THE ADVANTAGES OF NET MONETARY BENEFIT IN HANDLING MULTIPLE SIMULTANEOUS COMPARISONS

      A. Messori, HTA Unit, Regional Health Service, Firenze, Italy

      The paper by Trippoli [1] discusses the main advantages of net monetary benefit (NMB) in comparison with incremental cost-effectiveness ratio (ICER). There is however another important methodological advantage that deserves to be discussed in detail.

      In the “classic” analyses based on ICER, only two comparators are directly managed. For example, if A is the innovative therapy and B is the standard therapy (and assuming that all values of cost and effectiveness are normalised to 1 patient), the ICER is defined as follows:

      Equation 1)  ICER(A vs B) = (cost_A - cost_B) / (effectiveness_A - effectiveness_B)

      After this calculation, ICER(A vs B) is evaluated against the pre-defined threshold (T) of cost-effectiveness (e.g. £30,000 in the UK or around $100,000 in the US) to decide if using A as opposed to B has a favourable cost-effectiveness (ICER < T) or an unfavourable cost-effectiveness (ICER > T).

      In Western countries, the process of in-hospital procurement is often managed by running competitive tenders, particularly in the field of implantable medical devices. The problem is that, while tenders generally evaluate three or more comparators, the design of Equation 1 manages just a single comparison, i.e. two comparators only.

      As pointed out by Trippoli [1], one important advantage of the NMB is that this parameter can be separately calculated for each of the (three or more) comparators under examination. Furthermore, these (three or more) values of NMB can then be compared with one another in any binary comparison and, finally, these values are expressed according to easily understandable units (represented by “differences in benefit”, where all benefits and all costs are expressed in monetary units normalized to 1 patient).

      On the other hand, in the “classic” approach based on ICER, the issue of comparing three or more comparators (e.g. four comparators named A, B, C, and D) is usually addressed by applying some methodological tricks. One of these tricks introduces ‘no treatment’ as a further comparator (although ‘no treatment’ is in some cases a reasonable comparator, in other cases it is not). Another is to identify, as standard treatment (ST), a single comparator among A, B, C, and D (e.g. B, so that ST=B), and to calculate ICER(A vs ST), ICER(C vs ST), and ICER(D vs ST); according to this latter solution, if ST=C, the calculation involves ICER(A vs ST), ICER(B vs ST), and ICER(D vs ST); if ST=D, the calculation involves ICER(A vs ST), ICER(B vs ST), and ICER(C vs ST); and so on. The drawback to all of these “classic” approaches is that the units of ICERs (ratio of incremental cost to incremental effectiveness) make their interpretation very difficult, and finding a role for T in this type of reasoning is difficult as well.

      In conclusion, the NMB is much more efficient than the ICER in performing the simultaneous comparison of three or more comparators.
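
      As an illustration of this point, the following minimal Python sketch computes the NMB for four comparators at once; all cost and effectiveness values are hypothetical, chosen only to show the mechanics, and are not taken from Trippoli [1].

          # Minimal sketch: NMB = T * effectiveness - cost (per patient).
          # All numbers below are hypothetical illustrations.
          T = 30_000  # cost-effectiveness threshold (e.g. per QALY)

          # Per-patient cost and effectiveness for four comparators A, B, C, D.
          comparators = {
              "A": {"cost": 42_000, "effect": 2.1},
              "B": {"cost": 32_000, "effect": 1.7},
              "C": {"cost": 25_000, "effect": 1.5},
              "D": {"cost": 18_000, "effect": 1.1},
          }

          # One NMB per comparator, so any number of comparators can be ranked:
          nmb = {k: T * v["effect"] - v["cost"] for k, v in comparators.items()}
          for name, value in sorted(nmb.items(), key=lambda kv: kv[1], reverse=True):
              print(f"{name}: NMB = {value:,.0f}")

          # Any binary comparison is simply a difference in NMB, in monetary units:
          print(f"A vs B: {nmb['A'] - nmb['B']:,.0f}")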

       

       <br> References

      [1] Trippoli S. Incremental cost-effectiveness ratio and net monetary benefit: current use in pharmacoeconomics and future perspectives. Eur J Int Med 2017 Sep;43:e36.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 09, Lydia Maniatis commented:

      The idea that one can define a spatial extent (an aperture) for which perception of faces is equivalent to perception without such a constraint is not credible, and the conclusions are not interpretable; the conditions under which it was defined here make it all the more uninterpretable. The large difference local details of stimulation can make to the organization of the resulting percept, and the contingency of the effect of each detail on all of the other local details and their relationship to the entire collection, are facts not compatible with simply additive, spatially-defined predictions.

      In addition, the experiments are highly confounded and the authors address these confounds post hoc, using a hodge podge of analytical tools that entail many untested, untestable, and very likely false, assumptions. Among these is the "Random Field Theory" used to assess similarity between images of faces, which involves a pixel-by-pixel definition of similarity that, given the nature of perception and particularly face perception, is not credible. (It is certain that two faces might have a strong family resemblance which would not be correlated with a pixel-by-pixel definition. Similarity is a notoriously difficult concept and in practice comparisons are highly selective).

      Given the post hoc ad hoc style of the analysis and weak theoretical framework, it is virtually certain that these results could not be replicated using a different sample of images understood as "faces" (the choice of the sample images has no more specificity than that) but otherwise identical procedures and analysis. This, the authors make pretty clear:

      "Following this idea, we emphasize that the Facespan should not be considered as an absolute quantity. Inasmuch as the perceptual span for reading is not absolute, but instead flexible, the Facespan reported here should be considered as an average benchmark obtained under the aforementioned specific viewing conditions and task."

      Obtained - and re-obtainable? It seems contradictory to suggest that a flexible outcome, the parameters mediating whose flexibility are undetermined, can serve as a benchmark, average or otherwise. We're left with a vague, less than credible, not-really-quantified concept - the Facespan.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 15, Ricky Turgeon commented:

      Duplication of PLATO data in meta-analyses

      Upon reviewing this meta-analysis, I noticed that the data from PLATO have inadvertently been duplicated in all meta-analyses of clinical outcomes in this review. This was done by including both the results of the primary publication (Wallentin et al, NEJM 2009 - reference 28 in this article) and the subgroup of PLATO patients planned for an invasive strategy (Cannon et al, Lancet 2010 - reference 21).

      Ideally, this review should be retracted and republished in its corrected form that excludes the double-counting of events and ~13,000 participants from the Cannon, et al Lancet 2010 subgroup analysis of PLATO.

      Sincerely, Ricky Turgeon BSc(Pharm), ACPR, PharmD

      (copy of comment posted on PLOS ONE)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 22, Peter Gøtzsche commented:

      Leucht et al. conclude that “approximately twice as many patients improved with antipsychotics as with placebo” (1). However, no reliable conclusion can be made, as all of the 167 double-blind trials were flawed. None of the trials exclusively examined first-episode patients, and when patients who already receive antipsychotic drugs are randomised to placebo, serious withdrawal symptoms will occur. Yet this withdrawal group is mischaracterized as a placebo group. Even trials with a long tapering period before randomization are unreliable. Antipsychotics cause permanent brain damage, e.g. about 5% a year develop tardive dyskinesia, but the drugs may mask this, so that it appears for the first time when the drugs are stopped.

      My conclusion is that 60 years of “placebo”-controlled trials of antipsychotics have been wasted. We need to do trials in drug-naïve patients with their first episode of psychosis if we want to know what these drugs do to people. We also need to ensure that the trials are adequately blinded by adding a substance to the placebo that gives side effects. Leucht et al. praise the NIMH study from 1964, but in this trial the psychiatrists reported the exact opposite of what happens when people get antipsychotics. The drugs were said to reduce apathy, improve motor movement and make patients less indifferent (2).

      1. Leucht S, Leucht C, Huhn M, Chaimani A, Mavridis D, Helfer B, et al. Sixty years of placebo-controlled antipsychotic drug trials in acute schizophrenia: systematic review, bayesian meta-analysis, and meta-regression of efficacy predictors. Am J Psychiatry 2017;174:927-942.

      2. Cole JO. Phenothiazine treatment in acute schizophrenia; effectiveness: the National Institute of Mental Health Psychopharmacology Service Center Collaborative Study Group. Arch Gen Psychiatry 1964;10:246-61.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 17, Randi Pechacek commented:

      Simon Lax, first author of this paper, wrote a brief blog post on microbe.net about the importance of the hospital microbiome in the context of this paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 31, Lydia Maniatis commented:

      "In summary, we have shown a nonlinear response to a compound of gratings (plaid) that does not arise purely from contrast normalization between spatial frequency channels."

      There is zero evidence and zero rationale for the existence of "spatial frequency channels" in the visual system. A spatial frequency analysis of the retinal stimulation would not contribute to the derivation of the percept, but only make the job more difficult-to-impossible. The notion has never been tested, and it's not clear how it could be tested. Simple data derived from carefully constrained experimental conditions (for which the assumptions can be made to seem to fit) are simply interpreted as though these untestable/untested/false assumptions were true, and it were only a matter of fine-tuning "models" via data-fitting. Similarly, I could posit other invisible forces as being responsible, and interpret accordingly.

      The notion that EEGs correlated with particular percepts can be directly (theoretically) correlated with the activity of particular populations of neurons in the brain is absurd and, at any rate, has not been tested, and cannot be tested in the foreseeable future. The many reasons why it is absurd have been discussed by me in various PPPR comments and by Teller (1984).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 29, Andreas Lundh commented:

      Comment on Association of streptococcal throat infection with mental disorders

      A recent Danish register-based cohort study(1) concludes that “individuals with a streptococcal throat infection had elevated risks of mental disorders, particularly OCD and tic disorders.” However, some methodological issues need consideration.

      Firstly, the choice of exposure carries a risk of misclassification. Exposure (i.e. Group A Streptococcal (GAS) throat infection) was the combination of a rapid antigen test performed by the patient’s general practitioner and a subsequent antibiotic prescription. In Denmark, use of the rapid antigen test and antibiotic prescription is guided by modified Centor criteria(2,3), whereby symptomatic patients at high risk of GAS throat infection are not tested but instead receive empirical antibiotic treatment. This leads to such patients being misclassified as unexposed, since they are never tested.

      Secondly, the choice of outcome has a similar risk of misclassification. Psychiatric diagnoses are identified from national databases that only contain hospital information. Psychiatric patients that are not treated in hospitals (e.g. treated by primary care psychiatrists) are misclassified as not having had the outcome. Misclassification seems likely as only 0.1% and 0.2%, respectively, had a diagnosis of OCD or tics in the study period.

      Thirdly, the analytical strategy has a risk of bias. The authors compared patients that had received both a rapid antigen test and antibiotics with a group that was never tested. GAS throat infection will in most cases resolve spontaneously and many patients will never contact their general practitioner for testing. The group tested therefore likely differs from the group not being tested and represents a group with certain healthcare seeking behavior. This is substantiated by the findings that risk of mental disorders seems to increase with number of tests and regardless of whether the tests are negative or positive. A more reasonable analysis that avoids confounding by test indication would be to compare the group of tested patients prescribed antibiotics with the group of tested patients without prescribed antibiotics. This comparison weakens the association and it is no longer statistically significant for tics.

      Instead of describing this as a possible source of bias, the authors conclude that nonstreptococcal throat infection was also associated with increased risk of mental disorders, a theory that was not part of the original study hypothesis. Another interpretation is that these associations can be explained by a certain healthcare-seeking behavior of patients and parents, leading to an increased probability of receiving an antigen test, being prescribed an antibiotic and being treated in hospital.

      References

      1) Orlovska S, Vestergaard CH, Bech BH, Nordentoft M, Vestergaard M, Benros ME. Association of Streptococcal Throat Infection With Mental Disorders: Testing Key Aspects of the PANDAS Hypothesis in a Nationwide Study. JAMA Psychiatry 2017;74:740-6.

      2) Bjerrum L, Gahrn-Hansen B, Hansen MP, Córdoba G, Aabenhus R, Monrad RN. [Airway infections – diagnosis and treatment. Clinical guideline for general practitioners]. Copenhagen: Danish College of General Practitioners; 2014.

      3) Choby BA. Diagnosis and treatment of streptococcal pharyngitis. Am Fam Physician 2009; 79:383-90.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 28, Guan-Hua Huang commented:

      Dear Dr. Bryson:

      Thank you very much for your questions.

      The variance between the outcomes registered on ClinicalTrials.gov and those reported in JAMA Surgery resulted from our delay in updating the final project outcomes. Since the original protocol was written in Chinese and had not been fully translated to English at the time, our research nurse started with one primary outcome first, and there was a delay in updating the others. Thank you so much for bringing this important omission to our attention. All outcomes have now been accurately registered on ClinicalTrials.gov. In addition, we have now updated and confirmed that the research protocol, which documented all outcomes and was published on the JAMA Surgery website as a supplement, was approved by the institutional review board at the study site.

      With regard to the sample size estimation, we were also delayed in posting these results. Initially, we had difficulty finding similar studies to estimate the effect size for delirium, as well as in identifying appropriate methodologic approaches to be used in power analysis for cluster-randomized controlled trials when a binary outcome is modeled. We have now updated to include the power analysis for the cluster continuous outcome and found that 270 patients were required for 80% power, and 360 patients were required for 90% power. In the end, we managed to recruit 377 patients, and post hoc analysis indicated that our study was powered at 81% for delirium.
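
      For readers unfamiliar with this kind of calculation, here is a minimal Python sketch of a per-arm sample-size estimate for a binary outcome, inflated by the design effect commonly used for cluster-randomized trials. All parameter values (event rates, cluster size, ICC) are hypothetical placeholders, not the values used in this study.

          # Minimal sketch: two-proportion sample size with a cluster design effect.
          # Event rates, cluster size, and ICC below are hypothetical.
          from scipy.stats import norm

          def n_per_arm(p1, p2, alpha=0.05, power=0.80):
              """Per-arm n for comparing two proportions (normal approximation)."""
              z_a = norm.ppf(1 - alpha / 2)
              z_b = norm.ppf(power)
              variance = p1 * (1 - p1) + p2 * (1 - p2)
              return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

          def design_effect(cluster_size, icc):
              """Inflation factor for cluster randomization."""
              return 1 + (cluster_size - 1) * icc

          # Hypothetical delirium rates of 15% vs 5%, clusters of 10, ICC of 0.05:
          n = n_per_arm(0.15, 0.05) * design_effect(10, 0.05)
          print(round(n))  # inflated per-arm sample size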

      Thank you for the careful read, and for bringing these important issues to our attention.

      We would be glad to answer any further questions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Sep 20, Greg Bryson commented:

      Would the authors (or editors) comment on: a) the variance between the outcomes registered on ClinicalTrials.gov, those reported in the published manuscript, and those described in the protocol appended as an electronic supplemental file; and b) the absence of a formal sample size estimate from the published report (CONSORT 7a) and from the protocol (SPIRIT 14). Thank you.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 22, thomas samaras commented:

      Taller people have a higher risk of atrial fibrillation, blood clots, pulmonary infarction and abdominal aortic aneurysms. The heart problems of shorter people are not due to inherent biological factors. For example, CHD was rare before the industrial revolution, when people were shorter. In addition, during the early 1900s, CHD was low and people in the US and UK were shorter. Women are shorter than men and have lower death rates from heart disease. In addition, many short populations studied in the 20th century were found to be free of CHD and stroke. Some possible confounders are BMI differences, socioeconomic status, catch-up growth of lower-birth-weight infants, childhood illnesses that stunt growth and promote adult health problems, and poorer-quality diets. In addition, shorter people are often more overweight than taller people. Three recent studies, by Sohn, Shapiro, and Elsayed, found that shorter people had lower all-cause and cardiovascular disease mortality. (The Sohn finding for lower CVD was, however, non-significant.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 28, Philippe Gorphe commented:

      Please note that:

      1) the Hep-2 cell line is HeLa cells, and thus originates from the cervix;

      2) the M2e cell line does not exist (ATCC can provide the Me2 cell line, not M2e, but it is a melanoma cell line);

      3) the TU212 cell line is a misidentified cell line that is thought not to originate from the larynx: https://www.ncbi.nlm.nih.gov/pubmed/21868764

      The authors would benefit from strong methodological support in future work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 26, Valerio Cozzani commented:

      Classification of Aerosols formed in the operation of Heat-not-Burn tobacco products.

      “Smoke” is nowadays a term frequently used in colloquial, non-technical language to indicate a wide range of different aerosols originated by the combustion or pyrolysis of a material. From a technical point of view, however, owing to its importance for fire science, several definitions have been adopted to clarify which aerosols should be classified as “smoke” (see, e.g., the comprehensive discussions provided by Gross et al. (1967), Mulholland (2008), and Drysdale (2011)). A comprehensive definition is given by the National Fire Protection Association (see NFPA 921): smoke is composed of airborne solid and liquid particulates and gases evolved when a material undergoes pyrolysis or combustion, together with the quantity of air that is entrained or otherwise mixed into the mass. Moreover, this definition makes clear that smoke, as is evident from the literature, may have a very different chemical and physical nature compared to other aerosols:

      1) from a physical point of view it may be composed of solid and liquid particles with different size distributions and concentrations;

      2) from a chemical point of view, smoke components may be: i) condensed, liquid combustion products; and/or ii) condensed, liquid products of partial combustion (such as high-molecular-weight and/or low-vapor-pressure organic liquids formed during fuel primary pyrolysis and volatile emission); and/or iii) solid combustion products (mainly graphitic carbon particles such as soot, and inorganic fly ashes); and/or iv) unburned or partially oxidized solid or liquid fuel particles.

      Smoke from tobacco combustion has specific features, as reported in reference publications (e.g. see Baker, 2006), and has a very complex composition, including all the above cited chemical components such as: condensed liquid drops of volatiles (tar), soot, and ashes. There is no doubt that the aerosol stream produced by a burning cigarette may be classified as “smoke” according to its definitions reported in the scientific and technical literature.

      Recently, Philip Morris International (PMI) developed a heat-not-burn tobacco product that operates very differently from conventional cigarettes. Although an aerosol stream is also formed in the operation of the heat-not-burn tobacco product, its classification as “smoke” is not appropriate. The aerosol generated in the PMI heat-not-burn tobacco product is very different in chemical composition from the smoke formed by the self-sustained smoldering combustion of tobacco in cigarettes, and more generally from smoke formed in combustion processes. The aerosol generated in the heat-not-burn tobacco product is composed mainly of water and of products deriving from the evaporation, in the absence of chemical reactions, of substances present in the original tobacco substrate. Even the more comprehensive definitions of smoke reported in the literature do not apply to the aerosol produced in the operation of such devices, since:

      i) experimental data have confirmed that combustion processes are absent in the PMI tobacco product when heated in the heat-not-burn device;

      ii) the aerosol produced by the heat-not-burn tobacco product during operation is formed mostly by vaporization phenomena, as proven by the experimental data showing its chemical characterization;

      iii) only very limited low-temperature pyrolysis phenomena may be present in the tobacco substrate in the device during operation of the heat-not-burn tobacco product (temperatures during operation are lower than 350°C).

      Nevertheless, it should be remarked that the above concerns only the correct scientific definition of “smoke” and its applicability to heat-not-burn tobacco products, and that in no way does it address health issues related to the inhalation of such aerosols.

      References

      Baker R.R., Smoke generation inside cigarette: Modifying combustion to develop cigarettes that may be less hazardous to health, Progress in Energy and Combustion Science, 32, 373-385, 2006.

      Drysdale D., Introduction to Fire Dynamics, 3rd Edition, J.Wiley & Sons Ltd, UK, 2011

      Gross D., J.J. Loftus, A.F. Robertson, Method for measuring smoke from burning materials. Symposium on Fire Test Methods – Restraint and Smoke, 1966 ASTM STP 422 (ed. A.F. Robertson), pp. 166–204. American Society for Testing and Materials, Philadelphia, PA.

      Mulholland G.W., Smoke production and properties, SFPE Handbook of Fire Protection Engineering, 4th Ed. (Eds Di Nenno et al.), pp. 2.291–2.302. National Fire Protection Association, Quincy, MA, 2008.

      National Fire Protection Association, NFPA Glossary of Terms, 2016 Edition, Updated September 23rd 2016, 2016; p.1336. http://www.nfpa.org/codes-and-standards/resources/glossary-of-terms. Last accessed June 26th, 2017.

      Valerio Cozzani is Professor of Chemical Engineering at the University of Bologna, Italy. This comment is provided on the basis of the results of a scientific evaluation of PMI’s heat-not-burn device commissioned by PMI.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 31, Manuel Peitsch commented:

      Philip Morris International (PMI) supports scientific data transparency, data sharing and actively encourages the conduct of independent studies on IQOS that are aimed at advancing scientific and medical knowledge or verifying the results we obtained through our assessment program described by Smith MR, 2016. To date we have published over 30 peer-reviewed articles describing studies specifically conducted with IQOS [1]. We have also submitted a Modified Risk Tobacco Product (MRTP) application with the U.S. FDA [2]. It is important that independent studies are conducted with a high degree of scientific rigor, the same degree of rigor that is expected from our company. Key standards for scientific research are:

      1. Studies should be conducted using fit-for-purpose and validated methods
      2. Study results should be interpreted in an appropriate and non-misleading manner

      As independent scientists begin to analyze our products, it is perhaps not surprising that different methodologies are applied. We note, for example, that Auer R, 2017, reporting on a chemical analysis of the IQOS aerosol, used a “smoking device designed and tested in [their] facility”. Without further description of this system, it is hard to compare their analysis with those we have reported previously using standard and validated smoking-machine systems and procedures. While some of the reported results seem consistent with those we have previously published, significant differences between the described methodologies may account for the disagreements between the results obtained by Auer et al. and our peer-reviewed, published data.

      1. Methodologies for smoke and aerosol generation have been amply described in the literature. We are therefore surprised that the authors did not use a recognized smoking regime, but rather a hybrid between the ISO and the Health Canada Intense (HCI) regimes [3], and then used these data to compare their results with those published using the ISO regime (Vu AT, 2015).
      2. We were surprised by the measurement results the authors obtained for the volatile organic compounds. For instance, it is perplexing that they report yields of approximately 1 µg acrolein per stick for both IQOS and the cigarette they used in their study. The acrolein values reported by the authors for the cigarette appear to be 50 times lower than those published by, e.g., Health Canada (Hammond D, 2008). Similarly, Auer et al. find formaldehyde levels in cigarettes 10 times lower than Health Canada's. Furthermore, the yields of acetone, crotonaldehyde and propionaldehyde are also underestimated.
      3. It is generally established that the HCI regime is more relevant to how humans smoke than the ISO regime. Under HCI we find that the reference cigarette 3R4F yields 154±20 µg acrolein per stick (Schaller JP, 2016), which is similar to the 142±17 µg/stick reported by Health Canada. We have also reported that IQOS yields 11±2.36 µg acrolein per stick under HCI. Furthermore, under the ISO regime, IQOS yields 4.89±0.74 µg acrolein per stick (Schaller JP, 2016).
      4. Regarding the polycyclic aromatic hydrocarbons, Auer et al. reported a level of acenaphthene for IQOS that is threefold higher than for cigarettes. Acenaphthene is not part of the list of 58 substances we routinely quantify, nor is it part of any regulatory list (including the most extensive list, the FDA 93). It is, however, a compound we have measured in the smoke of 3R4F but could not detect in the IQOS aerosol. Our method is based on mass spectrometry, which is a specific detector, as opposed to the non-specific detection system used by the authors. Their reported level for IQOS may therefore be an artefact not linked specifically to acenaphthene.
      5. It is also surprising that for many analytes, the reported standard deviations are close to, or even larger than the mean values.
      6. Taken together, the above-mentioned issues lead us to question the analytical methods that were used. For future studies, we recommend that the authors reduce the measurement variability between replicates, validate their methods with a reference cigarette (e.g. 3R4F), and compare their results with those published by a recognized regulatory agency.
      7. Unfortunately, the results for carbon monoxide (CO) measurements are reported in ppm (parts per million). We would recommend that they be converted to mg/stick. Reporting in the way the authors do precludes a comparison of the measured levels with a standard reference cigarette. In addition, the level of CO for the cigarette was above the measurement range of the instrument used, which precludes a comparison between the IQOS and cigarette yields.
      8. Contrary to the authors’ suggestions, “Heat-not-burn” is not an advertising slogan but a shorthand for a product description. We have clearly demonstrated the absence of combustion in IQOS through robust scientific substantiation, which we summarized on PMIscience [1] and in our MRTP application to the U.S. FDA [2]. This has been corroborated by several combustion experts. Furthermore, we have never claimed that IQOS is devoid of pyrolytic processes, which are well known to increase with temperature, and are responsible for much of the remaining HPHCs found in the IQOS aerosol.
      9. The authors suggest that we are “dancing around the definition of smoke to avoid indoor-smoking bans”. Unfortunately, the authors did not present any data regarding the impact of IQOS on indoor use. Due to the way in which IQOS functions and is used, its impact on air quality cannot be linearly extrapolated from mainstream aerosol chemistry data. Towards that end, proper indoor air quality studies, using validated methods are needed. One such example can be found in Mitova MI, 2016.
      10. PMI has consistently communicated that the IQOS aerosol is not devoid of HPHCs, and has transparently published the relative yields of HPHCs in comparison with cigarette smoke. Chemical analysis of the IQOS aerosol shows on average a >90% reduction in the levels of HPHCs compared with the smoke of the 3R4F reference cigarette. This furthermore leads to a concomitant reduction in cytotoxicity and genotoxicity (Schaller JP, 2016).
      11. Since we understand the skepticism around tobacco industry-generated data, we also commissioned an independent, recognized and accredited laboratory to quantify the 58 analytes we routinely measure in our studies [4]. The data was submitted as part of our MRTP application to the U.S. FDA [2].
      12. The totality of the evidence collected to date, across a broad range on toxicology, systems toxicology and clinical studies, indicates that IQOS has the potential to present less risk of harm compared to continued smoking for adult smokers who switch to it completely [1].

      [1] Smith M, Haziza C, Hoeng J, Lüdicke F, Maeder S, Vanscheeuwijck P and Peitsch MC (2017) The Science behind the Tobacco Heating System: a summary of published scientific articles. Available at: https://www.pmiscience.com/library/pmi-science-ths-executive-summary.

      [2] U.S. Food and Drug Administration (FDA). Philip Morris Products S.A. Modified Risk Tobacco Product (MRTP) Applications. May 24, 2017. Available from: https://www.fda.gov/TobaccoProducts/Labeling/MarketingandAdvertising/ucm546281.htm.

      [3] Health Canada (2000). Health Canada - Tobacco Products Information Regulations SOR/2000-273, Schedule 2. http://laws-lois.justice.gc.ca/PDF/SOR-2000-273.pdf.

      [4] Available at: https://www.pmiscience.com/platform-development/platform-development/aerosol-chemistry-physics/hphcs/levels-hphcs-measured.

      Manuel Peitsch is a fully paid employee of PMI, the manufacturer of IQOS.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 14, Daniel Weeks commented:

      Concerns about the analyses presented in the Han et al (2017) paper have been described by Frank Harrell in his "Statistical Thinking" blog in the "Improper Subgrouping" section of the post entitled "Statistical Errors in the Medical Literature". He points out that this paper "makes the classic statistical error of attempting to learn about differences in treatment effectiveness by subgrouping rather than by correctly modeling interactions. They compounded the error by not adjusting for covariates when comparing treatments in the subgroups, and even worse, by subgrouping on a variable for which grouping is ill-defined and information-losing: age.". For further details and additional concerns, please see the blog post.
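
      As a rough sketch of the interaction-modeling approach Harrell advocates, the following simulation (entirely hypothetical data; the column names event, treat and age are invented, not taken from Han et al.) fits one logistic model with age kept continuous, so the treat:age coefficient directly tests whether the treatment effect varies with age, instead of comparing unadjusted effects across age subgroups.

          import numpy as np
          import pandas as pd
          import statsmodels.formula.api as smf

          # Simulate a trial in which the treatment effect varies smoothly with age.
          rng = np.random.default_rng(0)
          n = 2000
          treat = rng.integers(0, 2, n)
          age = rng.uniform(40, 80, n)
          lin = -1.5 - 0.8 * treat + 0.02 * (age - 60) + 0.03 * treat * (age - 60)
          event = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)
          df = pd.DataFrame({"event": event, "treat": treat, "age": age})

          # One model, age continuous: the treat:age term is the test of
          # effect modification, with covariate adjustment added as needed.
          fit = smf.logit("event ~ treat * age", data=df).fit()
          print(fit.summary())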


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 05, David Keller commented:

      Rationale for performing a per-protocol analysis of this study's data

      The percentage of control patients who were switched to statin treatment by their personal physicians reached 29.0% by the end of the study. Meanwhile, non-adherence to statin therapy grew to 22.2% in the active treatment group.

      Intention-to-treat (ITT) analysis assigns the events experienced by a patient (e.g. death, heart attack) to the group to which the patient was initially randomized, regardless of whether the patient actually took the study medicine (if he was randomized to take it), or whether he took the medicine off-protocol (despite being randomized to the control group). The net effect of these "crossover" events, from control to active treatment, or vice-versa, is to weaken the apparent benefit of the medication, as calculated using ITT. This is good, because as a sort of "worst-case scenario" evaluation, we are assured that, if our patients actually take the study medication, they should probably benefit at least as much as the patients did in the clinical trial.

      However, in a case where we are evaluating whether there is really no benefit to a medication, we should also consider a best-case scenario evaluation, because if the medicine is not beneficial even when it is evaluated in a manner which is more highly sensitive for detecting benefit, we can be that much more certain that the study medication has no role in treating the study population.

      Per-protocol analysis assigns the outcomes and events experienced by a patient based on his actual behavior during the study. If he crossed over from the control group to active treatment and then had a good outcome, that good outcome would be attributed by per-protocol analysis to the effects of active treatment, not to control treatment. Conversely, if a patient is randomized to active treatment, but never takes a pill, a bad outcome in his case would be "blamed" on the control treatment, not on the study medicine he never took. Per-protocol provides a "best case" scenario evaluation of the study drug; one can think of per-protocol analysis as having increased sensitivity to the benefits of the study medication.

      So, in a study like this, it is not enough to perform an intention-to-treat analysis, because all those crossovers might have obscured a significant signal of benefit. It is important to also perform a per-protocol analysis, to assure ourselves that, even in the best of circumstances, the medicine being studied is not beneficial, if even the per-protocol analysis cannot detect benefit.
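
      To illustrate the dilution mechanism with a toy calculation (the adherence and crossover percentages echo those quoted above, but the event counts are invented for illustration only):

          import pandas as pd

          # Each tuple: (assigned arm, treatment actually received, n, events).
          groups = [
              ("statin",  "statin", 78, 8),    # adherent to active arm
              ("statin",  "none",   22, 3),    # non-adherent (22%)
              ("control", "none",   71, 11),   # untreated controls
              ("control", "statin", 29, 3),    # controls crossed over (29%)
          ]
          rows = [{"assigned": a, "received": r, "event": int(i < e)}
                  for a, r, n, e in groups for i in range(n)]
          df = pd.DataFrame(rows)

          # ITT contrast: analyze as randomized (crossovers dilute the difference).
          print(df.groupby("assigned")["event"].mean())
          # Per-protocol-style contrast: analyze by treatment actually received.
          print(df.groupby("received")["event"].mean())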

      If per-protocol analysis reveals benefit to the study medication, but intention-to-treat analysis does not show any benefit, then a new study should be conducted, which is better-designed and more carefully executed. In this case, a stronger statin, like atorvastatin, could be tested, and the patients and investigators could be double-blinded, and so on...


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jul 05, David Keller commented:

      This randomized trial of pravastatin was not blinded, so expectation effects may run rampant

      Han and colleagues fail to mention in the above abstract that this randomized study of pravastatin for primary prevention of coronary heart disease and mortality was conducted open-label (unblinded). Blinding of subjects and investigators in clinical trials is required to control expectation effects, which can have an important influence in triggering cardiovascular events. The open-label design of this study is mentioned in the Methods section of the body of the paper, but there is no further discussion of the role of uncontrolled placebo, nocebo, Pygmalion and other expectation effects on the outcome of this study. Why was this study not double-blinded, and how dependable are open-label data for making important clinical decisions regarding statin use?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 08, Christopher Southan commented:

      This paper was a follow up to the 2004 "Has the yo-yo stopped? An assessment of human protein-coding gene number" https://www.ncbi.nlm.nih.gov/pubmed/15174140


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 02, Christopher Southan commented:

      A revised version is planned in order to incorporate the very useful points raised by the open referees.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, Prashant Sharma, MD, DM commented:

      A read-only version of the paper is available at http://rdcu.be/td4R.

      [Author comment] In this series, currently the largest compilation of cases of this entity in the literature, we describe the frequencies of three unusual findings on hemoglobin CE-HPLC chromatograms - tiny S-window peaks, small spiky post-HbQ peaks, and split HbA2 peaks - that suggest HbQ-India trait as the likely diagnosis in the north Indian practice setting.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 30, Cicely Saunders Institute Journal Club commented:

      This paper was discussed at the Cicely Saunders Institute Journal Club on Wednesday 7th June.

      This paper reports a well conducted trial of an intervention to improve end of life care in hospitalised elderly people. The authors are to be commended for addressing a clinical priority in a population where end of life care is under-researched.

      We discussed this paper in a clinical-academic journal club. Our discussion of the paper was lively and generated a series of reflections on the methods used in the conduct of the trial as well as broader issues relating to the aims and processes of the study intervention.

      The long study set up period was considered a strength, enabling participating wards to become accustomed to the data collection procedures before the commencement of the study. We discussed the challenges related to selecting the proxy-reported primary outcome measures, and the potential impact of the unblinded nurses assessing outcomes of care they themselves delivered.

      We also discussed whether family recollection of symptom control several weeks into bereavement was a reliable measure of care quality, as it may not capture all the factors contributing to their experience of care, potentially confounding their report. It was suggested that collecting family members’ data via face to face interviews could enhance analysis of the quantitative findings.

      The intervention, as reported in this paper and supported by previously published development work, represents a comprehensive effort to improve the quality of care for elderly people dying in hospital settings. The group recognised the challenges and the range of competencies required of hospital medical and nursing staff delivering end of life care. We wondered how the reported changes to the training components of the intervention addressed criticisms of the Liverpool Care Pathway in terms of improving competencies in compassionate communication with families. We discussed the possibility of measuring family caregivers' experience of receiving safe, compassionate care as an alternative outcome for this intervention.

      In their discussion the authors report that qualitative work is to be conducted to explore the findings of the study in more depth, particularly those related to poorer family satisfaction with care in the intervention group. We felt this potential negative effect on families should be investigated before further roll-out of the intervention. We look forward to reading further outputs from this extensive and commendable body of work.

      Commentary by Jo Bayly, Dr Simon Etkind and Dr Wei Gao


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 14, Stuart RAY commented:

      These are substantial concerns - I have written to Professor Nowak to invite comment on this discussion.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 29, James M Heilman commented:

      Seriously "sea-buckthorn oil protects against infections, prevents allergies, eliminates inflammation and inhibits the aging process". This sounds like world changing news. Looking for the RCTs that back it up and not finding any. The only RCT listed in the refs found NO benefit. https://www.ncbi.nlm.nih.gov/pubmed/23131570 That paper which found NO benefit is used to support this sentence "Sea-buckthorn oil as well as extracts from its fruit are used as an adjunctive therapy in treatment of many diseases". If it has no effect that is not a treatment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 17, Ken G Ryan commented:

      Thanks Sarah. Yes, you are right of course. This was a typo introduced by the publisher at the last phase of publishing. I tried several times to get them to change it and nothing happened. You will see that throughout the paper the epithet is spelled correctly. It's very annoying. Ken


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Aug 16, Seán Turner commented:

      The species epithet 'torques' (sic) is misspelled in the title. The correct species name is Psychroflexus torquis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 07, DANIEL BARTH commented:

      The possible role of spike-wave discharges (SWDs) in epilepsy is highly controversial. Although we expected our work to generate debate, the eLetter from Blumenfeld et al. is disappointing.

      For a more detailed response to this eLetter, please see: https://www.dropbox.com/s/d8ut94ayf57f8pk/Response to responseBlumenfeld et al2017.pdf?dl=0

      • Partial consciousness and maintenance of cognitive functions during SWD

      eLetter: “Their logic appears to be based on the misperception that seizures in absence epilepsy (AE) are always associated with ‘profound impairment of consciousness,’ leading to the flawed premise of the study and its interpretation that anything less than full loss of consciousness must not be AE.”

      Loss of consciousness is stipulated as an inclusion criterion for the diagnosis of typical childhood absence epilepsy, whereas mild or no impairment of consciousness is an exclusion criterion (Loiseau and Panayiotopoulos, 2000; Engle, 2013). Accordingly, we characterized WAG/Rij rats as showing mild absence with partial impairment of consciousness during seizures.

      eLetter: “The ‘ability’ to modulate SWD severity in rodent models is not demonstrated since the reduction in number and duration can be explained otherwise.” The alternative explanation put forward is “a sensory cue white noise, which may increase arousal and vigilance known to reduce SWD”.

      We stated that operantly conditioned arousal is what terminates SWD bursts early. The possibility that the arousal is due only to the white noise cue, however, fails to account for the critical result that preemptive pellet checks occurred almost entirely in the seconds after each SWD burst, indicating awareness of the SWD, associative learning, and operant control over SWDs.

      • SWDs occur in several rodent strains

      eLetter: Blumenfeld et al. note SWDs are not observed in most laboratory rodent strains.

      That is not correct. Observations of SWDs are common in outbred Sprague Dawley, Long Evans, Wistar and hooded rats. Unlike in human absence epilepsy, SWDs become more prevalent with age. Our conclusion is not that SWDs cannot reflect absence epilepsy, but that their ubiquity in various outbred rat strains suggests their unreliability as a signature of absence epilepsy. We have no vested interest in whether SWDs are genetic epilepsy or part of normal rat behavior. We simply recommend a caution that Blumenfeld et al. appear to oppose.

      eLetter: Single gene mutations can lead to SWDs and therefore to absence epilepsy. We note that while genes can influence innate rhythms, this does not prove that all SWDs are epileptic or that all SWDs model genetic absence seizures. Furthermore, inbreeding does not seem to be a requirement for SWDs, since our outbred Sprague Dawley rats had the same amount of SWDs as our inbred WAG/Rij rats, and Long Evans rats had approximately four times this amount (Fig. 7).

      • SWD/immobility as a model of absence epilepsy

      Blumenfeld et al. list characteristics that support SWDs as a model of absence epilepsy. We do not understand why they raise this issue, since we clearly stated that inbred WAG/Rij rats model mild absence seizures in humans.

      We believe, however, that the case that all SWDs in outbred rats serve to model genetic absence seizures in humans is weak. We and others remain skeptical that most outbred rats have developed - or are developing - absence epilepsy; however, as we said, it is possible.

      Conclusion

      The eLetter by Blumenfeld et al. was written by experts with decades of publications in absence epilepsy. We have examined many of these papers, plus ones challenging the epileptic nature of SWDs (e.g. Kaplan, 1985; Wiest and Nicolelis, 2003). None of us have studied absence seizures or SWDs, except recently (Rodgers et al., 2015). The research history and publication record of Blumenfeld et al., however, could incline them toward an imbalanced interpretation of our results. We do not understand what specifically were the “overstatements” and inappropriate “assumptions” in Taylor et al. that Blumenfeld et al. claimed in the beginning of their eLetter. We urge readers to re-read our Significance Statement in the context of the eLetter by Blumenfeld et al.: “Our evidence that inbred and outbred rats learn to control the duration of spike–wave discharges (SWDs) suggests a voluntary behavior with maintenance of consciousness. If SWDs model mild absence seizures and/or complex partial seizures in humans, then an opportunity may exist for operant control complementing or in some cases replacing medication. Their equal occurrence in outbred rats also implies a major potential confound for behavioral neuroscience experiments, at least in adult rats where SWDs are prevalent. Alternatively, the presence and voluntary control of SWDs in healthy outbred rats could indicate that these phenomena do not always model heritable absence epilepsy or post-traumatic epilepsy in humans, and may instead reflect typical rodent behavior.”

      While writing and revising this manuscript in response to successive peer reviews, we responded to the points of Blumenfeld et al. and tried to objectively incorporate reviewers’ suggestions and avoid misinterpretations of our data. We are disappointed that our efforts were either largely ignored or misrepresented in the eLetter, since this unnecessarily complicates an already controversial subject and detracts from what we believe is the importance of this work.

      References

      Engle, J, Jr. (2013) Seizures and Epilepsy, 2nd ed. New York; London: Oxford University Press.

      Kaplan BJ (1985) The epileptic nature of rodent electrocortical polyspiking is still unproven. Exp Neurol 88:425–436.

      Loiseau P, Panayiotopoulos CP (2000) Childhood absence epilepsy. In: Neurobase. San Diego: Arbor.

      Rodgers KM, Dudek FE, Barth DS (2015) Progressive, seizure-like, spike-wave discharges are common in both injured and uninjured Sprague-Dawley rats: implications for the fluid percussion injury model of post-traumatic epilepsy. J Neurosci 35:9194–9204.

      Wiest MC, Nicolelis MAL (2003) Behavioral detection of tactile stimuli during 7-12 Hz cortical oscillations in awake rats. Nat Neurosci 6:913–914.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 29, Antoine Depaulis commented:

      This study presents interesting behavioral observations during seizures in absence epilepsy (AE). However, there are many overstatements that could be misinterpreted. This begins with the flawed premise that anything less than full loss of consciousness during spike-wave discharges (SWD) is not AE. The broad group of experts in absence epilepsy who signed this response strongly disagree as outlined below. A more complete collective response can be found at: <https://dl.dropboxusercontent.com/u/3541791/Collective reply to Taylor et al_2017.pdf>

      1. Partial consciousness during SWD. The authors claim that patients with AE experience "profound loss of consciousness" during seizures. On the contrary, some preservation of consciousness is quite common in human AE. Many clinical studies have shown highly variable responsiveness depending both on task difficulty and vigilance level, even from one SWD to the next in the same individual (Blumenfeld, 2005, Guo et al., 2016). Perception of sensory stimuli, discrimination between relevant and irrelevant stimuli during absence seizures, and preservation of some cortical processing have been shown in both rat models and human patients (e.g., Inoue et al., 1992, Chipaux et al., 2013, Berman et al., 2010, Drinkenburg et al., 2003, Guo et al., 2016). In addition, as the authors acknowledge, external stimuli like those in this study can increase vigilance and reduce SWD. Therefore, a preserved ability to respond during a task or to modulate seizure severity is not a surprise; instead it provides further support for the face validity of rodent SWD for human absence seizures.

      2. SWD occur in several rodent strains. The occurrence of SWD in some animals from outbred rodent strains has been published many times since the 1960s (Marescaux et al., 1992). However, SWDs are not observed in many individual animals in most inbred or outbred rodent strains (e.g., Letts et al., 2014). For example, when rats with SWD (about 30%) were selected from the initial Wistar colony of Strasbourg to produce the GAERS substrain, about 70% of the colony did not have SWD and were bred as the non-epileptic control (NEC) strain. No NEC rats display SWD, even when over one year old (Depaulis et al., 2016). Why SWD are so prevalent in some outbred strains is unknown, but this might be due to preferential selection of dominantly inherited mutated AE genes in docile animals chosen for breeding. The many examples of single gene mutations in mice that lead to SWD/AE not seen in wild-type littermates (Maheshwari and Noebels, 2014) provide further evidence that SWD are not normal in rodents. Some monogenic mutations likely result from genetic drift, such as the spontaneous Gria4 gene mutation causing SWD in C3H/HeJ mice, modulated by SWD-suppressor mutations in other genes (Beyer et al., 2008, Frankel et al., 2014). Several additional differences between rodents with or without SWD make it very doubtful that SWD reflect "typical rodent behavior" (see PubMed Commons for further details).

      3. SWD/immobility as a model of absence epilepsy. The authors disregard 4 decades of work firmly demonstrating the face validity, pharmacological predictivity and construct validity of rats and mice with SWD as models for AE (see Jarre et al., 2017 for a recent review). These animals fulfill many features relevant to human AE (Guillemain et al., 2012). In addition to SWD, immobility and mild facial clonus, rodent models exhibit behavioral, structural, molecular and functional co-morbidities not seen in animals without SWD but also observed in human patients (Shaw, 2007). Furthermore, the anti-epileptic drug profile in rodent AE models corresponds remarkably well with effects in human patients (Depaulis and van Luijtelaar, 2005, Shaw, 2007, Jarre et al., 2017). Over 20 single gene mutations associated with SWD have been identified in mice and in rats that are consistent with findings in human AE (Powell et al., 2009, Noebels and Sidman, 1979, Maheshwari and Noebels, 2014). Finally, many electrophysiological (see Depaulis et al., 2017 for a recent review) and fMRI studies (David et al., 2008, Mishra et al., 2011, 2013) in rat AE models agree with clinical data (Westmijse et al., 2009, Hamandi et al., 2008).

      Based on these lines of evidence, we assert that SWD/immobility represents a form of epilepsy in rodents. In our view, these episodes are not a natural behavior, nor do all individuals display this trait. Studying SWD in both outbred and inbred strains, as well as single gene mutations, has already enabled 1) the development of predictive models of antiepileptic drug efficacy (Tringham et al., 2012, Marks et al., 2016, Glauser et al., 2017), and 2) an enhanced understanding of the pathophysiology of the cortico-thalamic circuitry that generates and maintains SWD and of the mechanisms underlying associated comorbidities.

      Contributors and institutions (in alphabetical order):

      Hal Blumenfeld, Yale University, New Haven, CT, USA
      Stéphane Charpier, Pierre and Marie Curie University and INSERM, France
      Doug Coulter, University of Pennsylvania, Philadelphia, USA
      Vincenzo Crunelli, Cardiff University, Cardiff, UK
      Antoine Depaulis, Grenoble Alpes University and INSERM, France
      Wayne Frankel, Columbia University, NY, USA
      Martin J. Gallagher, Vanderbilt University, Nashville, TN, USA
      John Huguenard, Stanford University, Stanford, CA, USA
      Cian McCafferty, Yale University, New Haven, CT, USA
      Richard Ngomba, University of Lincoln, UK
      Jeffrey Noebels, Baylor College of Medicine, TX, USA
      Jeanne T. Paz, Univ California & Gladstone Institute of Neurological Disease, San Francisco, USA
      Terence J. O’Brien, University of Melbourne, Melbourne, Australia
      Filiz Onat, Marmara University, Turkey
      Gilles van Luijtelaar, Donders Centre for Cognition, Radboud University, Nijmegen, the Netherlands
      Laurent Vercueil, Grenoble University Hospital, Neurology Department, Grenoble, France

      REFERENCES

      Berman R et al. (2010) Epilepsia 51:2011–2022.
      Beyer B et al. (2008) Human Molecular Genetics 17:1738–1749.
      Blumenfeld H (2005) Epilepsia 46 Suppl 9:21–33.
      Chipaux M et al. (2013) PLoS One 8:e58180.
      David O et al. (2008) PLoS Biol 6:e315–e2697.
      Depaulis A, Charpier S (2017) Neurosci Letters 17:30141-6.
      Depaulis A et al. (2016) J Neurosci Meth 260:159–174.
      Depaulis A, van Luijtelaar G (2005) In: Models of seizures and epilepsy (Pitkänen A, Schwartzkroin P, Moshe S, eds), pp 233–248. Amsterdam; Oxford: Elsevier Academic.
      Drinkenburg WHIM et al. (2003) Behavioural Brain Research 143:141–146.
      Frankel WN et al. (2014) PLoS Genet 10:e1004454.
      Glauser TA et al. (2017) Ann Neurol 81:444–453.
      Guillemain I et al. (2012) Epileptic Disord 14:217–225.
      Guo JN et al. (2016) The Lancet 15:1336–1345.
      Hamandi K et al. (2008) NeuroImage 39:608–618.
      Inoue M et al. (1992) Electroencephalogr Clin Neurophysiol 84:172–9.
      Jarre G et al. (2017) In: Models of seizure and epilepsy, 2nd ed. (Pitkänen A, Buckmaster P, Galanopoulou AS, Moshe SM, eds). Elsevier, in press.
      Letts VA et al. (2014) Genes, Brain and Behavior 13:519–526.
      Maheshwari A, Noebels JL (2014) Monogenic models of absence epilepsy: windows into the complex balance between inhibition and excitation in thalamocortical microcircuits, 1st ed. Elsevier B.V.
      Marescaux C et al. (1992) J Neural Transm Suppl 35:37–69.
      Marks WN et al. (2016) Eur J Neurosci 43:25–40.
      Mishra AM et al. (2013) Epilepsia 54:1214–1222.
      Mishra AM et al. (2011) J Neurosci 31:15053–15064.
      Noebels JL, Sidman RL (1979) Science 204:1334–1336.
      Powell KL et al. (2009) J Neurosci 29:371–380.
      Shaw FZ (2007) J Neurophysiol 97:238–247.
      Tringham E et al. (2012) Science Transl Med 4:121ra19.
      Westmijse I et al. (2009) Epilepsia 50:2538–2548.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 22, Anton Pottegård commented:

      On the misleading conclusion reached by Wang et al. regarding use of phosphodiesterase 5 inhibitors and risk of melanoma

      Recently, Wang et al. published the findings of a meta-analysis on the putative association between use of phosphodiesterase 5 inhibitors (PDE5i) and risk of melanoma (Wang J, 2017). With this letter, we, the authors of four of the main studies on which their meta-analysis was founded, would like to express our serious concerns about the conclusions reached by Wang et al.

      The initial report of a potential association between use of the PDE5i sildenafil and an increased risk of melanoma was published by Li et al. in JAMA Internal Medicine in 2014 (Li WQ, 2014). A large proportion of melanomas contain mutations that lead to suppression of phosphodiesterase enzyme 5A (Gray-Schopfer V, 2007), which in turn leads to melanoma cell invasion (Arozarena I, 2011). As PDE5is exert a direct pharmacological inhibition of the same enzyme, the hypothesis that use of PDE5i would confer an increased risk of melanoma is biologically plausible.

      Four different teams subsequently tried to replicate these findings. In a case-control study using nationwide Swedish health registries, Loeb et al. reported a modestly increased risk among PDE5i users (adjusted odds ratio (OR) 1.21, 95% confidence interval (CI) 1.08-1.36) (Loeb S, 2015). However, they clearly pointed out that the association was unlikely to be causal, because there was a lack of dose-response in the observed association (i.e., no increase in risk with greater exposure). Further, the strongest association was observed with early-stage melanoma, which lacks a biological gradient. Lastly, the authors also observed that PDE5i users had an increased risk of basal cell carcinoma, a negative control outcome with no hypothesized association with PDE5i use. This last observation suggested possible confounding by UV exposure, which is the key risk factor for both BCC and melanoma.

      The next three studies were published in close succession. Two cohort studies based on data from the UK Clinical Practice Research Datalink (CPRD) found weak evidence of a small increased risk of melanoma among those prescribed a PDE5i (Lian et al. (Lian Y, 2016): hazard ratio (HR) 1.18, 95% CI 0.95-1.47; Matthews et al. (Matthews A, 2016): HR 1.14, 95% CI 1.01-1.29). In the study by Lian et al. (Lian Y, 2016), associations were also observed with basal cell and squamous cell carcinoma for certain prescription and pill categories, arguing against a causal association due to lack of specificity. Matthews et al. (Matthews A, 2016) again found no gradient in the association, and observed an association with UV-related negative control outcomes including basal cell carcinoma. They also reported an association between previous solar keratosis and later use of phosphodiesterase inhibitors, strongly suggesting confounding due to PDE5i users having more sun exposure. Lastly, in July 2016, Pottegård et al. (Pottegård A, 2016) published the results of two case-control studies, finding weak evidence of a small positive association in Denmark (OR 1.22, 95% CI 0.99-1.49) and no evidence of an association in the US (OR 0.95, 95% CI 0.78-1.14). Again, no dose-response pattern was found, and, as in the analysis by Loeb et al., the highest estimates were obtained for early-stage disease.

      In summary, all four papers reported a slightly increased risk of malignant melanoma among PDE5i users. Critically, however, all the studies included analyses that probed Hill's criteria for causality (Hill AB, 2015), and the findings of all four studies suggested the association was unlikely to be causal. In summarizing the evidence from these papers, the systematic review by Wang et al. focuses heavily on the single pooled point estimate obtained in their meta-analysis, disregarding the supplementary analyses and the conclusions regarding causality within the individual papers. For this reason, we believe that the conclusions reached by Wang et al. are misleading.
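
      For context, the single pooled number such a meta-analysis produces can be reproduced with a fixed-effect inverse-variance pooling of the five estimates quoted above (a minimal sketch for illustration only; the published meta-analyses may differ in model and study selection):

          import math

          studies = {  # (estimate, lower 95% CI, upper 95% CI)
              "Loeb 2015":           (1.21, 1.08, 1.36),
              "Lian 2016":           (1.18, 0.95, 1.47),
              "Matthews 2016":       (1.14, 1.01, 1.29),
              "Pottegard 2016 (DK)": (1.22, 0.99, 1.49),
              "Pottegard 2016 (US)": (0.95, 0.78, 1.14),
          }

          num = den = 0.0
          for est, lo, hi in studies.values():
              se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
              w = 1 / se ** 2                                  # inverse-variance weight
              num += w * math.log(est)
              den += w

          log_pooled, se_pooled = num / den, den ** -0.5
          print(f"pooled ratio {math.exp(log_pooled):.2f} "
                f"(95% CI {math.exp(log_pooled - 1.96 * se_pooled):.2f}-"
                f"{math.exp(log_pooled + 1.96 * se_pooled):.2f})")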

      Importantly, Loeb et al. have recently performed a similar meta-analysis to that of Wang et al., arriving at an almost identical point estimate, but with a considerably more informed interpretation (https://doi.org/10.1093/jnci/djx086). While acknowledging the weak positive association between PDE5i use and malignant melanoma, they conclude that the association is unlikely to be causal. Having authored the individual papers, we collectively support this conclusion.

      Anton Pottegård (1); Krishnan Bhaskaran (2); Anthony Matthews (2); Laurent Azoulay (3); Pär Stattin (4); Laurel A Habel (5); and Stacy Loeb (6)

      (1) Clinical Pharmacology and Pharmacy, Department of Public Health, University of Southern Denmark, Odense, Denmark
      (2) Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK
      (3) Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Canada
      (4) Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
      (5) Division of Research, Kaiser Permanente Northern California, Oakland, CA, USA
      (6) Department of Urology and Population Health, New York University, NY, NY, USA


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 12, John Sotos commented:

      West describes angiogenesis, erythropoiesis, and vasoconstriction as clinical sequelae arising from chronically hypoxic tissues at altitude. Another, similar, effect is solid-organ hyperplasia.

      In the late 1960s, a Peruvian medical student observed several-fold enlargement of the carotid bodies in Andean altitude dwellers (1,2). The degree of enlargement increased with time spent at altitude and, in animal models, reversed after restoration of normoxia (3). Interestingly, this hyperplasia is mediated via endothelin signalling, not by hypoxia-inducible factors (4).

      Tissue hyperplasia may also occur in hypoxic patients at sea level. For example, even before the Peruvian discovery, hyperplasia -- and sometimes malignancy -- of adrenal chromaffin cells, i.e. pheochromocytomas, was described in adults with uncorrected cyanotic congenital heart disease (5). Carotid body cells and adrenal chromaffin cells have a similar lineage (from the neural crest) and function (oxygen sensing).

      (1) Arias-Stella J. Human carotid body at high altitudes. (Abstract). American Journal of Pathology. 1969; 55: 82a.

      (2) Heath D. The carotid bodies in chronic respiratory disease. Histopathology. 1991; 18: 281-283.

      (3) Kay JM, Laidler P. Hypoxia and the carotid body. J Clin Pathol Suppl (R Coll Pathol). 1977; 11: 30-44.

      (4) Platero-Luengo A, González-Granero S, Durán R, Díaz-Castro B, Piruat J, García-Verdugo JM, Pardal R, López-Barneo J. An O2-Sensitive Glomus Cell-Stem Cell Synapse Induces Carotid Body Growth in Chronic Hypoxia. Cell. 2014; 156, 291–303.

      (5) Folger GM, Roberts WC, Mehrizi A, Shah KD, Glancy DL, Carpenter CCJ, Esterly JR. Cyanotic Malformations of the Heart with Pheochromocytoma: A Report of Five Cases. Circulation. 1964; 29: 750-757.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 18, Youhe Gao commented:

      The strategy I posted here last month was published three years ago in Proteome Sci. 2014 Feb 1;12(1):6. doi: 10.1186/1477-5956-12-6: "Fast fixing and comprehensive identification to help improve real-time ligands discovery based on formaldehyde crosslinking, immunoprecipitation an..." https://www.ncbi.nlm.nih.gov/pubmed/24484773


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 14, Youhe Gao commented:

      I found two instances of "overexpression" in the paper: "Although the extent of bait overexpression is difficult to judge and varies across IP's, previous experimentation has shown that over-expression has little effect on identification of true interacting partners (Sowa et al., 2009)" and "VAPBWT overexpression strongly increased the association of EGFP-LSG1 and OSBP with the ER (Figure 7E,G)". Personally, I am not sure those are enough. In a system, increasing [A] or [B] will lead to more [AB]. As we know more about protein interaction now, this kind of systematic false positive should not be ignored any more. In cells, overexpression with a tag may even change the location of the protein. That is why I think the next generation of massive protein interaction studies should start from in vivo crosslinking. I do not want to overemphasize the problem. Most of the protein interactions identified are probably true in cells. The amount of work done is very impressive and respected. For users who rely on a particular interaction as the only clue for their future experimental design, it may be worth starting with in vivo crosslinking as a confirmation of that interaction. It may make them more confident to proceed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jun 13, J Wade Harper commented:

      These issues have been addressed and discussed in many prior publications (see, for example, Cell. 2015 Jul 16;162(2):425-40) and are widely known.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jun 09, Youhe Gao commented:

      I hope that authors can discuss the impact and possibility of false positive and negative made by tagging, overexpression and washing conditions on protein interactions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 May 31, Youhe Gao commented:

      This is a great masterpiece in the field of protein interaction. It is the largest network so far and is extremely valuable for all biologists. I guess that for the authors to reach such a high throughput, using tagging, overexpression and affinity purification was almost inevitable. These procedures could produce some false positives and false negatives. A strategy named 4F-acts was proposed a few years ago to try to minimize false positives and false negatives. Fast Fixation is necessary to study real-time protein-protein interactions under physiological conditions. Fast formaldehyde crosslinking can fix transient and weak protein interactions. With brief exposure to a high concentration of formaldehyde during the crosslinking, the complex is crosslinked only partially, so that the complex is small enough to be resolved by SDS-PAGE, and the uncrosslinked parts of the proteins can be used for identification by shotgun proteomics. Immunoaffinity purification can Fish out complexes that include the proteins of interest. Because the complex is covalently bound, it can be washed as harshly as the antibody-antigen reaction can stand; the weak interactions will remain. Even if nonspecific binding persists on the beads or antibody, it will be eliminated at the next step. To Filter out these complexes, SDS-PAGE is used to disrupt non-covalent bonds, thereby eliminating uncrosslinked complexes and simultaneously providing molecular weight information for identification of the complex. The SDS-polyacrylamide gel can then be sliced on the basis of molecular weight without staining. All the protein complexes can be identified with the sensitivity of mass spectrometry rather than that of the staining method. The advantages are the following: (i) The method does not involve tagging. (ii) It does not include overexpression. (iii) Weak interactions can be detected because the complexes, being covalently crosslinked, can be washed as harshly as the antigen-antibody reaction can stand; no new covalent bond can form as a false positive result. (iv) The formaldehyde crosslinking can be performed at the cellular, tissue, or organ level fast enough that the protein complexes are fixed in situ in real time. The throughput of this strategy is probably not high enough yet, but I hope one day a large-scale study can be conducted with it. No matter what, this is a great milestone!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Jun 03, J Wade Harper commented:

      The results are being loaded now into biogrid. IntAct I believe will take data from biogrid to upload directly.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 May 29, Christopher Southan commented:

      Will these valuable results be loaded into the EBI IntAct Molecular Interaction Database? (http://www.ebi.ac.uk/intact/)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 28, Noam Y. Harel commented:

      This article has been RETRACTED. It is an obvious plagiarism of Han et al, Annals of Neurology, PMID 26814620. Please see this link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5440146/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 10, Christopher Southan commented:

      This valuable paper would be enhanced by including database links to the structures. In lieu of these, the following URL includes CID links for the 90 TCMDC identifiers from Supplementary Data 3: https://www.ncbi.nlm.nih.gov/sites/myncbi/christopher.southan.1/collections/52757717/public/ (this may need a couple of tries given the overloaded Entrez servers). Of these, 26 have vendor matches (maybe more via salt-stripping) and 10 have patent extraction matches (but probably not impinging on anti-infectives research use).

      The final six from Table 1 (3 with vendor matches) are:

      TCMDC-123767 = CID 1476278, TCMDC-125445 = CID 44522338, TCMDC-141698 = CID 44536555, TCMDC-141070 = CID 44535609, TCMDC-141154 = CID 44535720, TCMDC-124559 = CID 4782980

      https://www.ncbi.nlm.nih.gov/sites/myncbi/christopher.southan.1/collections/52765327/public/
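
      For anyone who wants to pull these records programmatically, here is a minimal sketch using PubChem's public PUG REST interface (the TCMDC-to-CID pairs are those listed above; the properties requested are standard PUG REST fields):

          import json
          import urllib.request

          # TCMDC-to-CID pairs taken verbatim from the comment above.
          tcmdc_to_cid = {
              "TCMDC-123767": 1476278,
              "TCMDC-125445": 44522338,
              "TCMDC-141698": 44536555,
              "TCMDC-141070": 44535609,
              "TCMDC-141154": 44535720,
              "TCMDC-124559": 4782980,
          }

          for tcmdc, cid in tcmdc_to_cid.items():
              url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/"
                     f"{cid}/property/MolecularFormula,MolecularWeight/JSON")
              with urllib.request.urlopen(url) as resp:
                  props = json.load(resp)["PropertyTable"]["Properties"][0]
              print(tcmdc, "CID", cid, props["MolecularFormula"],
                    props["MolecularWeight"])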


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 01, Andrea Brancaccio commented:

      An interesting and solid biochemical work. As the authors state, “the POMTs are suggested to have exquisitely narrow glycosylation functions of α-DG”. As an additional note of some interest, it may be worth considering that POMTs predate dystroglycan. In fact, orthologous members of this family can be found in Capsaspora owczarzaki (Accession: XP004363497) and in Monosiga brevicollis (Accession: XP001744300), belonging respectively to Filasterea and Choanoflagellata, two unicellular groups close to early-diverging metazoans in which dystroglycan has not been found (see Adams JC and Brancaccio A, The evolution of the dystroglycan complex, a major mediator of muscle integrity. Biol Open. 2015 4:1163-79, Adams JC, 2015).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 14,     commented:

      On behalf of the authors, I would like to refer readers to our recent study, which demonstrated the reproducibility of the original work after more than 10 years: http://dx.doi.org/10.1016/j.freeradbiomed.2016.10.456


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 22, Lydia Maniatis commented:

      Part 1

      This publication is burdened with an unproductive theoretical approach as well as methodological problems (including intractable sampling problems). Its conclusions range from trivial to doubtful.

      Contemporary vision science seems determined to take the organization of retinal stimulation out of the picture and replace it with raw numbers, whether neural firing rates or statistics. This is a fundamental error. A statistical heuristic strategy doesn’t work in any discipline, including physics. For example, a histogram of the relative heights of all the point masses in a particular patch of the world wouldn’t tell us anything about the mechanical properties of the objects in that scene, because it would not tell us about the distribution and cohesiveness of masses. (Would it tell us anything of interest?)

      In perception, it is more than well established that the appearance of any point in the visual field –with respect to lightness, color, shape, etc - is intimately dependent on the intensities/spectral compositions of the points in the surrounding (the entire) field (specifically their effects on the retina) and on the principles of organization that the visual process effectively applies to the stimulation. Thus, a compilation of, for example, the spectral statistics of Purves’ colored cube would not allow us either to explain or predict the appearance of colored illumination or transparent overlays. Or, rather, it wouldn’t allow us to predict these things unless we employed a very special sample of images, all of which produced such impressions of colored illumination. Then we might get a relatively weak correlation. This is because, within this sample, a preponderance of certain wavelengths would tend to correlate with e.g. a yellow, illumination impression, rather than being due, as might be true for the general case, to the presence of a number of unified apparently yellow and opaque surfaces. Thus, we see how improper sampling can allow us to make better (and, I would add, predictable) predictions without implying explanatory power. In perception, explanatory power strictly requires we take into account principles of organization.

      In contrast, the authors here take the statistics route. They want to show, or rather, don’t completely fail to corroborate, the observation that when surfaces are wet, their colors look deeper and more vivid, and also to corroborate the fact that changes in perception are linked to changes in the retinal stimulation. Using a set of ready-made images (criteria for the selection of which are not provided), they apply to them a manipulation (among others) that has the general effect of increasing the saturation of the colors perceived. One way to ascertain whether this manipulation causes a surface to appear wet would be to simply ask observers to describe the surface, without any clues to what was expected. Would the surface spontaneously be described as “wet” or “moist”? This would be the more challenging test, but is not the approach taken.

      Instead, observers are first trained on images (examples of which are not provided - I have requested examples) that we are told appear very wet (and the dry versions), and include shape-based cues, such as drops of water or puddles. They are told to use these as a guide to what counts as very wet, or a rating of 5. They are then shown a series of images containing both original and manipulated images (with more saturated colors, but lacking any shape-based cues), and asked to rate wetness from 1 to 5.

      The results are messy, with some transformed images getting higher ratings than the originals and others not, though on average they are more highly rated. But the ratings for all the images are relatively low; and we have to ask, how have the observers understood their task? Are they reporting an authentic perception of wetness or moistness, or are they trying to guess at how wet a surface actually is, based on a rule of thumb adopted during the training phase, in which, presumably, the wet images were also more color-saturated? (In other words, is the task authentically perceptual, or is it more cognitive guesswork?) What does it mean to rate the wetness of a surface at e.g. the “2” level?

      The cost of ignoring the factor of shape/structure is evident in the authors’ attempt to explain why the ratings for all images were so low, reaching 4 in only one case. They explain that it may be because their manipulation didn’t include areas that looked like drops or puddles. Does this mean that the presence of drops or puddles actually changes the appearance of the surrounding areas, and/or that perhaps those very different training images included other organized features that were overlooked and that affected perception? Did the training teach observers to apply a cue in practice that by itself produces somewhat different perceptual outcomes? I suppose we could ask the observers about their strategy, but this would muddy the facade of quantitative purity.

      At any rate, the manipulation (like most ad hoc assumptions) fails as a tool for prediction, leading the authors to acknowledge that “The image transformation greatly increased the wetness rating for some images but not for others…” (Again, it isn’t clear that “wetness rating” correlates with an authentically perceptual scale). Thus, relative success or failure of the transformation is image-specific, and thus sample-specific; some samples and sample sets would very likely not reach statistical significance. Thus the decision to investigate further (Experiment 1b) using (if I’m reading this correctly) only a single custom-made image that was not part of the original set (on what basis was this chosen?) seems unwise. (This might seem to worsen the sampling problem, but the problem is intractable anyway. As there is no possible sample that would allow the researchers to generate reliable statistics-based predictions for the individual case, any generalization would be instantly falsifiable, and thus lack explanatory power).

      The degree to which any conclusions are tied to the specific (and unrationalized) sample is illustrated by the fact that the technical manipulations were tailored to it (from Experiment 1a): “In deciding [the] parameters of the WET transformation, we preliminarily explored a range of parameters and chose ones that did not disturb the apparent naturalness of all the images used in Experiment 1a.” Note the lack of objective criteria for “naturalness.” (We’re not told on what basis the parameters in Experiment 1b were chosen.) In short, I don’t think this numbers game can tell us anything more from a theoretical point of view than casual observation and, e.g., trial and error by artists already have.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 22, Lydia Maniatis commented:

      Part II

      The authors also generate a hypothesis via data-digging on the statistics of the images used in 1a. This hypothesis is that greater “hue entropy” is correlated with a larger wetness impression (whatever this may mean given the study’s methods). This is tested with a new set of artificial images made using third-party software, which are also obviously subject to sampling/confounding problems. Results of manipulating this variable were mixed, with one comparison non-significant, one significant only by a relatively lenient standard (p = 0.04), and one significant at the 0.005 level. So the results are inconclusive, even for this particular sample. The authors note, further, that “the effect of hue entropy cannot be explained by ecological optics” and rationalize their (ambiguous) results in the following very casual and logically incoherent manner:

      “Since there is a significant overlap in the distribution of color saturation between dry and wet samples…[t]he key to resolving these ambiguities is to increase the number of samples. When wet-related image features are shared by many different parts in the scene, the image features are likely to be produced by a global common factor, such as wetting. In other words, the more independent colors the scene contains, the more reliably the visual system can judge scene wetness.”

      I don’t see why a larger sample would necessarily be a more colorful sample. Also, the authors are suggesting that a larger patch of the visual scene will be more likely to receive a higher wet score than a small patch; this seems very implausible. A Bayesian gloss of this explanation follows, complete with arbitrarily chosen “prior probabilities.” Such a mechanism would render the verisimilitude of human perception highly unreliable on a case-by-case basis, much more so than is the case. The fact is that the visual system doesn’t have to rely on weak probabilities for weakly correlated features when it has much more reliable structural principles to work with.
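
      To make the quantity under discussion concrete: “hue entropy” can be read as the Shannon entropy of an image’s hue histogram. A minimal sketch of one way it might be computed (my own illustration, not the authors’ code; the 36-bin resolution is an arbitrary assumption):

      ```python
      import numpy as np
      from PIL import Image

      def hue_entropy(path, bins=36):
          """Shannon entropy (bits) of an image's hue histogram."""
          hsv = np.asarray(Image.open(path).convert("HSV"))
          hue = hsv[..., 0].ravel()  # hue channel, values 0-255
          counts, _ = np.histogram(hue, bins=bins, range=(0, 256))
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log2(p)).sum())
      ```

      Note that such a summary statistic carries no information about how the hues are spatially organized, which is precisely the objection raised above.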

      The description of the stimulus as “a natural texture” is not informative from an experimental point of view. The potential choices are infinitely variable.

      In the text, the authors are using the term color as though it were an objective feature of the stimulus rather than a perceptual factor, which is confusing and should be avoided. (From Wikipedia: “As colorfulness, chroma and saturation are defined as attributes of perception they can not be physically measured as such.”)

      Bottom line: Statistical compilations divorced from reference to principles of organization lack explanatory and general predictive power, in perception as in every other discipline. They are not productive tools of scientific discovery.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 21, Lydia Maniatis commented:

      I don't get it; are citations like these valid? They seem to be doing most of the heavy lifting.

      "The finger spread and beauty judgement were recorded by our web app, emotiontracker.com (A.A.B., L. Vale, and D.G.P., unpublished data).

      As previous work has shown (A.A.B., L. Vale, and D.G.P., unpublished data), continuous pleasure ratings are well fit by a simple model, refined here ((Equation 1), (Equation 2) ; (Equation 3) and Figure 1B). The model supposes..."

      What gives the model assumptions their credibility? Are we supposed to take the claims on faith?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 21, Amanda Capes-Davis commented:

      The authors tested against "two different cancer cell lines", HeLa and KB. Unfortunately this is an example of a misidentified cell line causing confusion. KB is known to be misidentified and is actually HeLa, from cervical cancer, not oral carcinoma as originally reported - so the authors are comparing two samples from the same tumour and individual.

      A paper has just been published on the impact of KB in the scientific literature: https://www.ncbi.nlm.nih.gov/pubmed/28455420.

      Cell lines are increasingly used by medicinal chemists as models for treatment. The field must consider how to incorporate quality procedures for cell lines as part of manuscript submission if this problem is to be avoided.

      A simple check against a list - in this case the ICLAC list of known misidentified cell lines - would have picked up the problem in this instance. See: http://iclac.org/databases/cross-contaminations/

      To be confident that misidentification has not occurred, human cell lines should be tested for authenticity using short tandem repeat (STR) profiling. See: http://iclac.org/resources/advice-scientists/

      All scientists who use cell lines as models should be aware of guidelines for good cell culture practice. For an example see: https://www.ncbi.nlm.nih.gov/pubmed/25117809


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 15, Preben Berthelsen commented:

      During light chloroform anaesthesia, 1 in 2500 patients succumbed to sudden cardiac syncope – usually when the skin was incised.

      1 in 2000 has a genetically determined defect in repolarisation of the myocardium – the long-QT syndrome. Such patients may likewise succumb to sudden cardiac arrest when experiencing emotional and/or physically stressing events.

      The striking similarity in the mode of dying – the sudden unexpected arrest of the heart during stress - makes it a fair hypothesis/assumption that patients dying during chloroform anaesthesia were individuals with an inherited or acquired delay in myocardial repolarisation.

      No ECG recording from a patient dying during chloroform anaesthesia exists so the hypothesis cannot be proven.

      For 100 years, chloroform was used to alleviate labour pain – remarkably with no maternal deaths. A recent investigation has shown an oestradiol-mediated shortening of the QT-interval - both in normal women and in women with the inherited form of delayed myocardial repolarisation - providing a likely explanation for the safety of the obstetric use of chloroform and lending credence to the hypothesis presented in the paper.

      P.G.Berthelsen. MD. Charlottenlund, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 26, Jafar Kolahi commented:

      This is an update to the previous research: "Kolahi J, Khazaei S. Altmetric: Top 50 dental articles in 2014. Br Dent J. 2016 Jun 10;220(11):569-74. doi: 10.1038/sj.bdj.2016.411."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 23, Stuart RAY commented:

      The publisher has now added the URLs to references in the publication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 13, Stuart RAY commented:

      The reference URLs were lost along the way, but here they are (have also asked publisher to add these to the online version):

      1 - ORI. Guidelines for responsible data management in research. 2006.

      2 - DuBois JM, Chibnall JT, Tait R, Vander Wal J. Misconduct: Lessons from researcher rehab. Nature. 2016;534:173–175. doi: 10.1038/534173a. DuBois JM, 2016

      3 - ICMJE. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals 2015.

      4 - DHHS. HIPAA privacy rule – research. 2013.

      5 - NIH. Grants policy statement. Application and Information Processes. 2016.

      6 - NIH. NIH data sharing policy. 2007.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 14, Kiyoshi Ezawa commented:

      I think that this commentary is well written overall.

      Unfortunately, it lacks some important facts that would have assisted the readers' fair judgements.

      So, I have posted my post-publication peer review (PPPR), which also contains the aforementioned facts, onto PubPeer (https://pubpeer.com/publications/0BBC818513066058DB929595CE7C32/comments/120820).

      I would strongly recommend reading the PPPR as well, before or after reading this commentary, in order to form an unbiased opinion of its subjects.

      Kiyoshi Ezawa, Ph.D., the author of references [53,54,55], which are part of the main subjects of the commentary.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 05, Mogens Groenvold commented:

      Thank you very much to the Cicely Saunders Institute Journal Club and Javiera Leniz Martelli and Katherine Bristowe for an insightful discussion of key aspects of our article.

      Concerning ‘standard care’ in the control group we are currently analyzing and comparing the health care activities in the two groups. This will map the palliative care activities offered outside SPC and clarify whether there was ‘compensation’ in the control group.

      Regarding the primary outcome, the most important point to make is that while we did indeed devise and employ a new approach aimed at addressing the issue of heterogeneity in palliative care needs (Johnsen AT, 2013 p. 5 and Johnsen AT, 2014 p. 7), we also carried out a fully conventional analysis. The comparison of these two methods suggested that the new approach was slightly superior (Groenvold M, 2017 p. 822).

      In the conventional analysis we compared the change over time between groups according to each of the seven EORTC QLQ-C30 scales selected by our clinicians as the key targets of their SPC. Some of the previous trials found an impact on generic health-related quality of life scales, and we have no reason to believe that our measures are less sensitive (unless our clinicians picked the ‘wrong’ scales; this will be elucidated in a forthcoming analysis of explorative outcomes). Therefore, unfortunately, we do not believe that positive effects have been attenuated in our analysis of the primary outcome.

      In relation to the patient sample, we agree that just as we sought to minimize heterogeneity in palliative care needs, the heterogeneity in cancer diagnoses should be minimized if the main concern is to observe maximal effects on specific problems. However, we aimed at evaluating the impact of SPC in a mixed population of cancer patients with different needs, and, as suggested, a larger sample size may be required for this.

      Finally, we agree that the possible lack of uniformity of the intervention between the six SPC units may have weakened the ability to detect a positive effect. Future analyses will elucidate whether the interventions differed between the units.

      At this point in time we believe that the two most important explanations of the negative (or, in fact, neutral) trial outcome are that the SPC units may not have had an active, structured approach to early SPC, and that the eight-week trial period was too short. The ongoing analyses are likely to give additional insights.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Dec 19, Cicely Saunders Institute Journal Club commented:

      This paper was discussed at the Cicely Saunders Institute Journal Club on Wednesday 6th December. We appreciated the opportunity to discuss a well-conducted RCT, and to see it published even though it reported a negative result. We enjoyed discussing this paper and it generated a series of reflections on the methods, in particular regarding the primary outcome.

      We positively highlighted the detailed discussion of previous trials of early specialist palliative care (SPC) in the introduction, the clear description of the randomization process, the blinding of analysts, and the sensitivity analysis. We felt more information about what ‘standard care’ involved for individuals in the control group would have been beneficial.

      We discussed extensively the use of the patient’s primary need, based on the EORTC QLQ-C30 dimensions, as the primary outcome. We agree with the authors about the importance of addressing the heterogeneity in palliative care patients’ needs, and we value the use of measurements that are important for patients. We wondered if the nature of the outcome chosen required a larger sample size to show a difference between arms. It is clear that some of the dimensions would be easier for an SPC team to modify than others, so any positive effect SPC might have had on some dimensions would be attenuated in the weighted mean. In addition, SPC might have a different impact according to the type of cancer; including all types of cancer patients increases the variability of the effect, and might require a larger sample size. We wondered if the lack of uniformity of the intervention among the different centres also contributed to the lack of a significant difference in the primary outcome.

      Finally, we valued the discussion of the possible reasons for the lack of effect of the intervention, which need to be taken into account for future trials.

      Commentary by Javiera Leniz Martelli and Katherine Bristowe


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 08, BSH Cancer Screening, Help-Seeking and Prevention Journal Club commented:

      The BSH Cancer Screening, Help-Seeking and Prevention Journal Club read with great interest this paper, which we feel provides a useful addition to the literature. In this paper, the authors present a framework for the complex relations between cognition and affect in predicting health behaviour. The authors describe the model that is currently used most often in health behaviour research, which looks at the independent effects of cognition and affect on health behaviour (the “main effects approach”). They then convincingly argue that this model does not consider possible interconnections of thought and feeling and describe three alternative models that may be better at explaining subsequent health behaviour: a mediation model (either an affect-preceding-cognition model or a cognition-preceding-affect model), a moderation model, and a contextualised effects model, which are helpfully explained graphically in the Figure on page 3 of the paper. The authors also provide empirical evidence from the health behaviour literature to support each of these models, and argue that these complex relations should be routinely examined in health behaviour research.

      Our group recognised the importance and relevance of this topic to much of our work on the determinants and possible points for intervention in cancer screening, help-seeking, and cancer prevention. The authors provide a good reminder of how theory can be improved to better understand behaviour, which is relevant to our work on, for example, cancer screening as a teachable moment or cancer worry as determinant of screening uptake, but is also applicable to a wide range of other health behaviours such as smoking, exercise, and healthy eating.

      However, the paper also raised some questions in our group. First, it was unclear to us which model would be applicable under what circumstances. For example, is this a function of the behaviour or of the affective state that is under study, or could this also be construed as characteristic of an individual or group of individuals? For example, for some people, perhaps those who are more organised or conscientious, the cognition-preceding-affect model may better predict subsequent behaviour, while for others, perhaps those who are struggling to cope due to life difficulties or mental health issues, the affect-preceding-cognition model may better predict behaviour. The authors acknowledge that the models they present are “not mutually exclusive” (p.4), and so “multiple types of relations could be involved in determining engagement in a particular health behaviour” (p.4), but this does not provide much guidance on how these models might guide our formulation of hypotheses to be tested in a particular study. Relatedly, it is unclear how these models can (or should) be applied to existing health behaviour models, and to what extent they require an overhaul of these existing models. Our group noted that the inclusion of these complex relationships could water down existing theoretical models, especially if the specific relationship cannot be identified a priori on theoretical grounds but is a function of the behaviour, affective state, individual, or group under study. From the empirical examples that the authors provide throughout, it is unclear whether the complex relations in those studies were pre-specified based on theoretical grounds, or merely exploratory in nature. In their Discussion on p.11, the authors seem to acknowledge that inclusion of the often-neglected interconnections between cognition and affect will require an exploratory, theory-building approach. Our group would have found it helpful if the authors had provided more practical advice on how to formulate a priori hypotheses about these complex relations, and perhaps some worked examples.

      Other questions that were raised by our group are of a more pragmatic nature, such as what the implications of the inclusion of complex relations between cognition and affect would have on study sample size and power (especially if not pre-defined a priori but tested post hoc). A related concern was how to practically take the ideas presented by the authors forward, given that mediation and moderation analyses may require a slightly different skill set, and one that many social scientists may not be very familiar with.
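
      As a purely illustrative aside, a moderation model is just a regression with an interaction term; the sketch below uses simulated data and hypothetical variable names, not anything from the paper (Python with statsmodels, though the same model is a one-liner in most statistics packages):

      ```python
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data: cognition and affect scores predicting a health
      # behaviour, simulated with a built-in interaction effect.
      rng = np.random.default_rng(1)
      n = 500
      df = pd.DataFrame({"cognition": rng.normal(size=n), "affect": rng.normal(size=n)})
      df["behaviour"] = (0.4 * df["cognition"] + 0.3 * df["affect"]
                         + 0.2 * df["cognition"] * df["affect"]  # moderation
                         + rng.normal(size=n))

      # "Main effects approach": independent contributions only.
      main_effects = smf.ols("behaviour ~ cognition + affect", data=df).fit()

      # Moderation model: the interaction term tests whether the effect of
      # cognition on behaviour depends on the level of affect.
      moderation = smf.ols("behaviour ~ cognition * affect", data=df).fit()
      print(moderation.pvalues["cognition:affect"])
      ```

      A mediation analysis, by contrast, involves fitting more than one equation so that the effect can be decomposed into direct and indirect paths.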

      Overall, however, the group felt that these concerns - especially those of a more practical nature - do not override the importance of taking forward the excellent ideas presented in this paper, which could herald a new era for health behaviour research, both in terms of theory and practice.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 26, Hilda Bastian commented:

      These are interesting results, showing the critical importance of transparency about trials of pharmaceuticals. However, the study does not identify the trials it found, or the phases of those trials. It would be helpful if the authors were to release these data, for those interested in the results of this study, anyone interested in doing similar work, and those looking for trials on these particular drugs.

      The abstract reports the number of participants in the unpublished trials. It would be good to also provide the number of participants in the published trials.

      Note: I wrote a blog post about this study and its context.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 14, Lily Chu commented:

      The authors and others might be interested in the following case report from Dr. Nancy Klimas, published in the Journal of Chronic Fatigue Syndrome in 2001. Autologous lymph node transplant was done successfully in one subject with resulting improvements in clinical status and cytokine measurements:

      http://www.tandfonline.com/doi/abs/10.1300/J092v08n01_03?journalCode=icfs20


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 15, Lydia Maniatis commented:

      Below I comment on some serious conceptual and methodological problems with this publication.

      1. Observers are asked to decide which of four briefly presented images is the most blurred, in a forced choice task. Why? The authors tell us that "blur perception is an elemental feature of the human visual system" because it drives accommodation and vergence. The problem, of course, is that we don't perceive the blur driving these responses - at least I don't, as I look around me. So the term "blur perception" isn't valid. In fact, the authors are not talking, here, about the retinal stimulation just preceding and producing proper accommodation etc, but, rather, images that, even post-accommodation, lack sharpness due to the manner in which they were produced or designed. They are asking observers for a conscious assessment of the relationship between an image and the objects of which it might be an imperfect projection. They can call this "blur perception," but this is obviously a very different definition than 'the retinal stimulation preceding accommodation and vergence.' So from a theoretical point of view, I submit that at least the first five paragraphs of this article are irrelevant. The definition of blur perception shifts at paragraph 6, which refers to psychophysical studies. (Given the irrelevance of the exposition, we are left, of course, with the question of the theoretical interest of the present measurements).

      2. In paragraph 6, to accommodate the second definition of blur perception, we are referred to "the variance discrimination model of blur." This model "assumes that the visual system is attempting to estimate the local variance of the luminance profile of an image from a set of luminance samples. Each of these samples is, however, perturbed by some level of internal noise or intrinsic blur." The reference to "internal noise or intrinsic blur" is to be taken on faith (and is part of the untenable "signal detection" view of perception, see below). At best, the idea has never been tested and it is not clear how it could be tested. It is my suspicion that adherence to forced-choice paradigms in the area of psychophysics implicitly serves the purpose of ensuring that something that can be interpreted as "noise" (i.e. the shots in the dark by observers) will be present in the data. The same goes for the very brief presentations (which are also uncritically and irrationally assumed to tap into "lower levels" of visual processing).

      3. "This model well describes the dipper data and is grounded in signal detection theories of sensory discrimination and decision making (Green & Swets, 1966)." Again, signal detection "theories," are not remotely corroborated or conceptually valid. They date from the sixties when perception was crudely analogized to radar operators trying to decide whether a particular blip was a whale or a ship, and judging from the reference provided have apparently not been further developed since. The "model" along with methods amenable to producing data of the right shape in the context of multiple free parameters and post hoc adjustments was simply adopted uncritically. The concept, with its treatment of neurons as "noisy detectors" has been criticized by Teller (1984). I've discussed the problems in various comments, including here: https://pubpeer.com/publications/8B2F3402AFA4F136252567815CB415. The notion is implicitly homuncular, the homunculus observing neural firing rates at various levels of the visual system and assigning them meaning, presumably via other firing rates...

      4. Here's how you explain away discrepancies: "Overall, Intrinsic Blur estimates with our dead leaves stimuli were greater than those reported for border blur discrimination (Watson & Ahumada, 2011) or for blur discrimination with fractal patterns (Mather, 1997), suggesting that blur perception with naturalistic stimuli may be mediated by receptive fields with larger space constants (Mather & Smith, 2002)." It's that easy. Note that the term "naturalistic stimuli" is undefined. Many scholarly perception publications include in this category photos of buildings, sidewalks and shrubbery on college campuses. Not to mention that the "dead leaves" stimulus does not look remotely natural, nor would anyone spontaneously relate it to leaves.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 07, Lydia Maniatis commented:

      The major assumption of this article is unviable. That it appears premised wholly on early results of Hubel and Wiesel should be a clue:

      “The primary function of the biological visual system is to identify objects in an environment. To achieve this, given that in the earlier stages of visual processing an input image would be decomposed into fragmented components by neural mechanisms with localized receptive fields and specific tuning characteristics (Hubel & Wiesel, 1962, 1968), the visual system must be able to integrate image components into the percept of a coherent object.”

      As I discuss in a comment on a different article (positing “curvature detectors”), the idea that neurons at any level of the nervous system may be described as “local detectors” is unviable:

      (From https://pubpeer.com/publications/8B2F3402AFA4F136252567815CB415):

      “As simple as it may seem, the notion of “curvature-sensitive-filter,” or more generally the notion of neurons as “detectors” is wholly untenable for a number of reasons, including those discussed by Teller (1984). On neurons as detectors: “Single simple cells in area 17 are often described as being triggered by, or detecting, the presence of bars of light at particular orientations [because,] of the stimulus set that has been tried, an oriented bar of light seems to make the cell fire fastest. But to say that the cell has a special role in encoding the stimulus which makes it fire fastest is to commit the same fallacy as to say that a cone with maximum sensitivity at 570 nm is a 570nm detector. It would seem to make more sense to assume that each perceptual element is coded by a pattern of firing among many neurons, and that each different firing rate of each cortical cell is important to the neural code….To use the concept of a trigger feature appears to be to claim implicitly three things: that for any given cortical cell most of the stimuli in the universe are in the null class…that the later elements which receive inputs from the cell ignore variations in the firing rate of the cell and treat the cell as binary…it is hard to set aside the convictions that all of the possible firing rates of cortical cells play a role in the neural code; and that the use of broader universes of stimuli in physiological experiments would reveal the size and heterogeneity of the equivalence classes of neurons….” (p. 1243) The arguments obviously also apply to the case of “curvature-detectors,” for which any correlations to firing rates have not been ascertained, but have only been “modeled” on the basis of a narrow set of stimuli. If the outlines of our RF shapes and circles were constructed of dots, or radiating rays, or shapes producing illusory contours, would the data look different? How would such results connect with the “curvature filter” based model? Would it make sense to make yet more ad hoc models for these cases? The adoption of a particular type of stimulus by many researchers serves to immunize them from inconvenient results and create the illusion of an at least superficially coherent research program.

      Teller also challenges explanations based on firing activity of sets of cells at a peripheral level of the visual system, on the basis that it relies implicitly on a “nothing mucks it up” proviso, ignoring as it does questions of how this pattern is maintained through the system or how it constrains more complete models. Given such gaps, this type of explanation, she notes, amounts to little more than a “remote homunculus theory.”

      A more subtle (and too little appreciated) problem with Schmidtmann and Kingdom’s claims is that they are confusing perceptual cause and effect. The “curves” we are talking about are perceived curves, not actual, objectively curved objects. There are no curves in the retinal stimulation. As we well know, the presence of a particular form in perception is not a passive response to the retinal stimulation, which at its inception consists of points of contact with photons, but an active construction of forms based on principles the outlines of which may be discerned by the products of the process. There are many examples of geometrical forms delineated by luminance differences that are not perceived, e.g. the stimuli used by Gottschaldt (in Ellis (1938) A Source Book of Gestalt Psychology) and demonstrations by Kanizsa (1979, Organization in Vision). So to say that neurons are detecting curve maxima and minima is basically to say that forms that are first constructed by the visual system, on the basis of dynamic feedback and feedforward mechanisms which we are not even close to understanding, are brought to consciousness, and are secondarily inspected by a set of detectors said to live at some arbitrary level of the process, which signal the conscious observer in some cryptic way. Again, it’s a hall of mirrors.”

      In addition, one could note that, given the massive interconnectedness and interaction of the visual system, when Hubel and Wiesel were recording from particular neurons in V1 they were in effect recording from the whole visual process. That is, it is not possible to claim that these recordings were isolating the independent behavior of these particular neurons.

      In addition to the premise that neurons in the visual system act as “local linear detectors” another underlying premise that has zero theoretical support is that the conditions, stimuli and data of these experiments are such as to allow them to be used to infer the behavior of such detectors at specific locations of the visual system. Of course, as Graham (1992) has admitted, hundreds of threshold studies “consistent” with the gross over-interpretation of Hubel and Wiesel’s early results (at a time when V1 was thought to be all there was) were generated in the ensuing decades. This is just one of the latest.

      As usual in this type of study, the number of observers is very small (3), two are described as naïve but the third is an author, i.e. not naïve. If naivete matters, then why an author/subject?

      As usual in this type of study, a “model,” involving untested or false premises and various atheoretical free parameters, is constructed post hoc, narrowly tailored to the specific dataset, stimuli and conditions. Observations on all other stimuli, conditions fall outside its purview.

      As is usual in this type of study, we are given very detailed descriptions of stimuli and conditions, but no indication of their theoretical necessity, or of how data and interpretation would change if they were even slightly altered. Relatedly, it was interesting to note that stimuli were exposed for 167 ms, the identical interval used by Wilson and Wilkinson (1998). What is special about this interval?

      “It is known that the visual system cannot group two dots of opposite luminance polarities into a dipole [dot pair] (Glass & Switkes, 1976; J. A. Wilson et al., 2004).” I’m sure this isn’t true. Even if they are the only two dots in the visual field, we will see a pair of dots.

      The intellectual level of theorizing in this line of research is exemplified by the elevation of the observation that like figures tend to be grouped together in the visual percept (as in the case of the classic Gestalt dot demos) into “similarity theory.” (Casually tacking on the word “theory” to observations of effects is typical in psychology in general.) Similarity isn’t the only factor mediating organization of the visual stimulus.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 06, Hilda Bastian commented:

      The conclusion that implicit bias in physicians "does not appear to impact their clinical decision making" would be good news, but this systematic review does not support it. Coming to any conclusion at all on this question requires a strong body of high quality evidence, with representative samples across a wide range of representative populations, using real-life data not hypothetical situations. None of these conditions pertain here. I think the appropriate conclusion here is that we still do not know what role implicit racial bias, as measured by this test, has on people's health care.

      The abstract reports that "The majority of studies used clinical vignettes to examine clinical decision making". In this instance, "majority" means "all but one" (8 out of 9). And the single exception has a serious limitation in that regard, according to Table 1: "pharmacy refills are only a proxy for decision to intensify treatment". The authors' conclusions are thus related, not to clinical decision making, but to hypothetical decision making.

      Of the 9 studies, Table 1 reports that 4 had a low response rate (37% to 53%), and in 2 studies the response rate was unknown. As this is a critical point, and an adequate response rate was not defined in the report of this review, I looked at the 3 studies (albeit briefly). I could find no response rate in any of the 3. In 1 of these (Haider AH, 2014), 248 members of an organization responded. That organization currently reports having over 2,000 members (EAST, accessed 6 May 2017). (The authors report that only 2 of the studies had a sample size calculation.)

      It would be helpful if the authors could provide the full scoring: given the limitations reported, it's hard to see how some of these studies scored so highly. This accepted manuscript version reports that the criteria themselves are available in a supplement, but that supplement was not included.

      It would have been helpful if additional important methodological details of the included studies were reported. For example, 1 of the studies I looked at (Oliver MN, 2014) included an element of random allocation of race to patient photos in the vignettes: design elements such as this were not included in the data extraction reported here. Along with the use of a non-validated quality assessment method (9 of the 27 components of the instrument that was modified), these issues leave too many questions about the quality rating of included studies. Other elements missing from this systematic review (Shea BJ, 2007) are a listing of the excluded studies and assessing the risk of publication bias.

      The search strategy appears to be incompletely reported: it ends with an empty bullet point, and none of the previous bullet points refer to implicit bias or the implicit association test.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, William McBride commented:

      We have concerns that this paper overestimates the risk of dengue transmission through reliance on a test inadequately specific for blood donor screening. An earlier study of 100 donors from the same group Ashshi AM, 2015 acknowledged that the absence of molecular confirmation was a weakness of that study, but they utilised the same methods in this larger group. A similarly sized study conducted in Australia Rooks K, 2016 showed that of 973 donors tested, 3.3% were positive using the PanBio NS1 assay, but that no samples were positive using the BioRad NS1 assay. Further testing of over 6000 blood samples collected during 2 outbreaks was negative using a nucleic acid amplification assay. The question of the proportion of patients who remain asymptomatic during dengue infection is important, not just for assessing risk from blood donors, but for better understanding transmission of dengue more widely. A recent contribution to our understanding of the importance of asymptomatic dengue in transmission dynamics can be found at Duong V, 2015, which showed that around 7% of people infected with dengue remain asymptomatic. This rate may be even lower in an adult population.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 05, Donald Forsdyke commented:

      ORGANIC MEMORY

      The view that Richard Semon's work was neglected seems to be based on psychologist Daniel Schacter's 1982 text (1). This was reissued with a new title and a few changes in 2001, without mention of the profound interim account by historian Laura Otis (2). While the authors cite my 2006 text on Samuel Butler and Ewald Hering, later work corroborates and extends Otis’s study and casts a somewhat different light on the authors' prime hero (3, 4).

      Even if offering a list of heroes that is "entirely personal," a paper that extols the "benefits of exploring the history of science" and of acknowledging our "debts … to those scientists who have offered key ideas," could have mentioned the doubts cast on Semon by Freud and Hertzog, and Semon's dismissal of Butler's work as "rather a retrogression than an advance."

      1. Schacter DL (1982) Stranger behind the Engram: Theories of Memory and the Psychology of Science. Hillsdale, NJ: Erlbaum.

      2. Otis L (1994) Organic Memory. History and the Body in the Late Nineteenth and Early Twentieth Centuries. Lincoln: University of Nebraska Press.

      3. Forsdyke DR (2009) Samuel Butler and human long term memory: is the cupboard bare? J Theor Biol 258:156-164. Forsdyke DR, 2009

      4. Forsdyke DR (2015) "A vehicle of symbols and nothing more." George Romanes, theory of mind, information, and Samuel Butler. History of Psychiatry 26:270-287. Forsdyke DR, 2015


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 10, Kevin Hall commented:

      This is a corrigendum to the original article (https://www.ncbi.nlm.nih.gov/pubmed/28074888) with the following correction:

      Since the publication of the original article, the author has noticed that the references at the end of the third paragraph of the section ‘Experimental falsification of the carbohydrate–insulin model’ are incorrectly cited as 13, 14, where they should be 19, 20. The correction is given below:

      ‘In concordance with the model predictions, carbohydrate restriction led to increased fat oxidation reaching a maximum within a few days and remaining constant thereafter. However, neither study found the predicted augmentation of body fat loss with carbohydrate restriction. Rather, despite the reduction in insulin secretion, both studies found slightly less body fat loss during the carbohydrate restricted diets compared with isocaloric higher carbohydrate diets with identical protein.19,20’

      However, for some strange reason, this published corrigendum goes on to incorrectly state the following:

      The authors also noticed an error in reference 9. The correct reference is: Pahlavani N, Jafari M, Rezaei M, Rasad H, Sadeghi O, Rahdar HA, et al. L-arginine supplementation and risk factors of cardiovascular diseases in healthy men: a double-blind randomized clinical trial. F1000Res 2014; 3: 306. doi:10.12688/f1000research.5877.1.

      The original reference 9 is actually correct and the author is mystified as to the source of this error.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 23, Alexander Kraev commented:

      Regretfully, this article has a misleading title and abstract. The correct title should be "Strenuous exercise triggers a life-threatening response in C57BL/6J mice carrying RYR1 Y522S/WT and CASQ1 null mutations". Moreover, the authors never care to state that they are analyzing the pathogenesis of an experimental disease, without attempting to determine whether it is closely related to the corresponding disease in humans.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 23, Giles Hardingham commented:

      We would like to extend our sincere thanks to Michel Goedert for the use of his Thy1-P301S transgenic mouse (Allen et al. (2002), PMID: 12417659). Regrettably, this note was erroneously absent from the Acknowledgements section of the manuscript.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 09, Pavel Nesmiyanov commented:

      Funny, but β-endorphin, oxytocin, and dopamine are not neuropeptides. They are not even peptides.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 18, Martine Crasnier-Mednansky commented:

      The model for the chitin catabolic cascade indicates GlcNAc oligomers are degraded in the periplasm to ABC-transported (GlcNAc)2 and PTS-transported GlcNAc (see figure 5 in Park JK, 2002 for the original model). The authors used (GlcNAc)4 and, because Enzyme IIA(Glc) was largely phosphorylated in the presence of (GlcNAc)4, proposed there was "some mechanism for which chitin oligosaccharides escape from degradation into GlcNAc in the periplasmic space". If such a mechanism occurs under the authors’ experimental conditions, it precludes any major PTS-transport effects on the chitin cascade, i.e. via dephosphorylation of Enzyme IIA(Glc) during GlcNAc transport.

      Working with Vibrio furnissii, Keyhani NO, 1996 argued, "since (GlcNAc)2 is an important inducer in the cascade, it must resist hydrolysis in the periplasm", and further provided an explanation for the stability of (GlcNAc)2 in the periplasm, particularly in sea water. It may well be that the rapid catabolism of (GlcNAc)2 is 'free' from any PTS control, and as such the cAMP necessary for the chitin cascade is provided.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 21, CO Stocco commented:

      Thank you to the authors for such a wonderful and detailed description of the extremely complex interaction between FSH and locally produced factors in the regulation of granulosa cells. This review will surely foster innovative ideas and projects to further explore the role of gonadotropins and growth factors in the regulation of female fertility.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Feb 04, Sin Hang Lee commented:

      The medical profession, including medical schools and hospitals, is now a part of the health care industry, and implementation of editorial policies of medical journals is commonly biased in favor of business interests. PubMed Commons has offered the only, albeit constrained, open forum to air dissenting research and opinions in science-based language. Discontinuation of PubMed Commons will silence any questioning of the industry-sponsored promotional publications indexed in PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 03, Sin Hang Lee commented:

      In this Reply, Marks and colleagues did not dispute that 16S rRNA gene sequencing has provided objective evidence regarding the existence of chronic Lyme disease.

      Marks and colleagues contended “persistent polymerase chain reaction (PCR) positivity for Borrelia burgdorferi does not signify the presence of active infection or bacteremia that merits prolonged antibiotic treatment.” However, neither PCR positivity, nor prolonged antibiotic treatment, was an issue raised in my Letter to Editor. The issue is: Does chronic Lyme disease exist? The answer is "yes". Whether bacteremia needs antibiotic treatment is beyond the scope of this discussion.

      If a B burgdorferi 16S rRNA gene is detected in the DNA extracted from the pellet of a centrifuged serum or plasma sample from a patient, the positive test result serves to confirm that there were Lyme disease bacteria, dead or alive, circulating in the patient’s blood at the time the blood sample was drawn – the definition of bacteremia. Dead bacteria are quickly removed by the spleen and macrophages in other organs. Free foreign DNA in the blood of living mammals is known to be degraded or removed within 48 hours [1]. Many infectious diseases are diagnosed by testing the nucleic acid of the causative agents, for example the hepatitis C virus and the human papillomaviruses, which are difficult to culture. Some of the bacterial strains causing Lyme borreliosis are not easily cultivated in artificial media. The references cited to dismiss the significance of gene sequencing in the diagnosis of Lyme disease are inappropriate.

      Reference: [1] Schubbert R et al. Foreign (M13) DNA ingested by mice reaches peripheral leukocytes, spleen, and liver via the intestinal wall mucosa and can be covalently linked to mouse DNA. Proc. Natl. Acad. Sci. U. S. A. 1997; 94: 961-6.

      Conflicts of Interest: Dr Lee is the director of Milford Molecular Diagnostics Laboratory, which specializes in developing DNA sequencing-based diagnostic tests.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Feb 04, Sin Hang Lee commented:

      The medical profession, including medical schools and hospitals, is now a part of the health care industry, and implementation of editorial policies of medical journals is commonly biased in favor of business interests. PubMed Commons has offered the only, albeit constrained, open forum to air dissenting research and opinions in science-based language. Discontinuation of PubMed Commons will silence any questioning of the industry-sponsored promotional publications indexed in PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 05, Sin Hang Lee commented:

      In their Reply, Marks and colleagues did not dispute that 16S rRNA gene sequencing has provided objective evidence regarding the existence of chronic Lyme disease.

      Marks and colleagues contended “persistent polymerase chain reaction (PCR) positivity for Borrelia burgdorferi does not signify the presence of active infection or bacteremia that merits prolonged antibiotic treatment.” However, neither PCR positivity, nor prolonged antibiotic treatment, was an issue raised in my Letter to Editor. The issue is: Does chronic Lyme disease exist? The answer is "yes". Whether bacteremia needs antibiotic treatment is beyond the scope of this discussion.

      If a B burgdorferi 16S rRNA gene is detected in the DNA extracted from the pellet of a centrifuged serum or plasma sample from a patient, the positive test result serves to confirm that there were Lyme disease bacteria, dead or alive, circulating in the patient’s blood at the time the blood sample was drawn – the definition of bacteremia. Dead bacteria are quickly removed by the spleen and macrophages in other organs. Free foreign DNA in the blood of living mammals is known to be degraded or removed within 48 hours [1]. Many infectious diseases are diagnosed by testing the nucleic acid of the causative agents, for example the hepatitis C virus and the human papillomaviruses, which are difficult to culture. Some of the bacterial strains causing Lyme borreliosis are not easily cultivated in artificial media. The references cited to dismiss the significance of gene sequencing in the diagnosis of Lyme disease are inappropriate.

      Reference: [1] Schubbert R et al. Foreign (M13) DNA ingested by mice reaches peripheral leukocytes, spleen, and liver via the intestinal wall mucosa and can be covalently linked to mouse DNA. Proc. Natl. Acad. Sci. U. S. A. 1997; 94: 961-6.

      Conflicts of Interest: Dr Lee is the director of Milford Molecular Diagnostics Laboratory, which specializes in developing DNA sequencing-based diagnostic tests.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 11, Alexis Frazier-Wood commented:

      The key points section of this article, the purpose of which is to isolate the ‘key conclusion and implication based on the primary [study] finding(s)’, states that: “A randomized intervention that increased breastfeeding intensity was not associated with reduced obesity”. This is a selective interpretation of the study data on weight status, which show that maternal participation in the PROBIT intervention (which increased breastfeeding exclusivity and duration) was 1. associated with increased odds of offspring having adolescent overweight/obesity (odds ratio=1.14; 1.02-1.28) but 2. not associated with having adolescent obesity (odds ratio=1.09; 0.92-1.29), although it can be seen that both associations were in the same direction. Body mass index (BMI) was also higher in the children from the intervention group by adolescence (mean difference Δ = 0.21; 0.06-0.36).

      The reasons for the difference in significance (according to the specified alpha) between the results specifying BMI and overweight/obesity as outcomes vs. those specifying obesity likely arise from the differential power for the two outcomes. The analysis of breastfeeding and obesity had less power than that of breastfeeding and overweight/obesity, largely due to the lower number of cases (obesity N=589 vs. overweight/obesity N=1868). Simulations in R v3.3.3 suggest the power for the “overweight/obesity” outcome vs. the “obesity” outcome was around 74% vs. 17%. These simulations did not account for the intention-to-treat procedure nor the correction for data clustering (since not enough data on e.g. ICC correlations were available), but because the effects of these on power are equal across outcomes, an assessment of relative power can still be made. In addition, the alternative hypothesis (breastfeeding -> overweight/obesity) was formally tested, but the null hypothesis (breastfeeding ≠ obesity) was not, given the absence of any equivalence testing. Positive associations between breastfeeding and offspring adiposity have been reported before, but have not reached statistical significance (see Cope MB, 2008). Therefore, while the association of increased breastfeeding with significantly increased odds of overweight/obesity represents a novel finding which needs to be subjected to replication, omitting this from the overall interpretation of the study in favor of a lesser-powered, untested hypothesis represents a form of bias.
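
      To illustrate the relative-power point, here is a rough analogue of the kind of simulation described, in Python rather than R (the arm size, control-group prevalences, and use of an uncorrected chi-square test are my own assumptions, chosen only to approximate the quoted case counts, not the study's exact procedure):

      ```python
      import numpy as np
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(0)

      def simulated_power(n_per_arm, p_control, odds_ratio, n_sims=2000, alpha=0.05):
          """Fraction of simulated two-arm trials with a significant 2x2 chi-square test."""
          odds_treat = (p_control / (1 - p_control)) * odds_ratio
          p_treat = odds_treat / (1 + odds_treat)
          hits = 0
          for _ in range(n_sims):
              cases_t = rng.binomial(n_per_arm, p_treat)
              cases_c = rng.binomial(n_per_arm, p_control)
              table = [[cases_t, n_per_arm - cases_t],
                       [cases_c, n_per_arm - cases_c]]
              _, p_value, _, _ = chi2_contingency(table, correction=False)
              hits += p_value < alpha
          return hits / n_sims

      # Assumed inputs: ~13,500 adolescents analysed (6,750 per arm),
      # control-group prevalences of ~13% (overweight/obesity) and ~4%
      # (obesity), and the odds ratios quoted above (1.14 and 1.09).
      print(simulated_power(6750, 0.13, 1.14))  # overweight/obesity outcome
      print(simulated_power(6750, 0.04, 1.09))  # obesity outcome
      ```

      Under these assumed inputs, the two calls roughly reproduce the quoted 74% vs. 17% asymmetry.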

      While the manifestation of bias in the article may be small, its effect can still be pernicious. Several organizations, including the World Health Organization and the American Heart Association, state that breastfeeding provides protection against offspring obesity (see WHO report and AHA Fact Sheets). However, this lacks strong statistical justification, given that any inverse associations between breastfeeding and offspring obesity are derived from observational designs and likely to reflect confounding (Kramer MS, 2002), and that probability theory suggests that the breastfeeding-offspring obesity data in the literature as a whole reflect one of two situations: (1) publication bias, or (2) a true positive association between breastfeeding and offspring obesity in at least one other published sample (Cope MB, 2008). That is not to deny that there may be a number of valid reasons to support breastfeeding, not related to obesity (see e.g. APA report). But perhaps it is this which has led to a problem with ‘white hat bias’ in the breastfeeding-obesity literature - a term coined by Cope and Allison to denote ‘bias leading to the distortion of research-based information in the service of what may be perceived as righteous ends’ (Cope MB, 2010). One such reason to support breastfeeding is to enable personal choice for parents and caregivers. However, this is incompatible with the practice of giving misleading information on the benefits of breastfeeding, which actually deprives caregivers of their right to make informed decisions about feeding infants.

This is a problematic situation, and it needs to be corrected. The causes are unknown, but distorted presentation of data has been identified in multiple reports of randomized clinical trials, often in only one section, e.g. the abstract (Boutron I, 2010), and often in the secondary literature, such as press releases (Cope MB, 2010). Therefore all authors need to express conclusions with great clarity and consistency, and not selectively include and exclude results without recourse to their relative empirical strengths. Reported accurately, the results of this study as a whole are most consistent with either (1) an association between breastfeeding and increased offspring overweight/obesity, or (2) a lack of empirical strength in this study to contribute to the debate on whether there is an association between the two constructs.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 08, Ellis Muggleton commented:

HEp-2 cells are HeLa cells, not laryngeal cancer cells.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 13, Annie De Groot MD commented:

The author GROOT is actually De Groot. See De Groot, AS in PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 03, Kenneth Pollard commented:

      Last author should be Pollard KM.

      Author affiliation for last author should be 5 Department of Molecular Medicine, The Scripps Research Institute, La Jolla, CA, USA 92037. mpollard@scripps.edu.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 07, Clive Bates commented:

Has it occurred to the authors that the value (or 'USP') of new and emerging tobacco or nicotine products like e-cigarettes or heated tobacco products might be that they really are very much "better and safer" than smoking?

      No serious scientist doubts this. The question is by how much, with many credible sources suggesting 95% or greater reduced risk (see, for example, the Royal College of Physicians' 2016 report Nicotine without smoke: tobacco harm reduction).

      However, the authors' conclusion appears to be inviting regulators to mislead the public about the risks of these products in order to reduce demand for them. There are many problems with such an approach:

• It is ethically improper for the state to intervene in this way to manipulate adult choices by withholding or distorting information (see: Kozlowski LT, 2016).
      • The unintended, but wholly foreseeable, effect of trying to prevent people using safer alternatives to cigarettes is not that they quit smoking, but that they carry on smoking - and are harmed as a result of regulatory misinformation.
      • How would regulators (or the authors) take responsibility and assume liability for harms arising from deceptive communications that adversely influence behaviour?
      • Companies have a right to make true and non-misleading statements about their products. Under what principle should they be prevented from doing that?

      The appropriate approach for a regulator is to truthfully inform smokers of the relative risks of different nicotine products. That would allow consumers to make informed choices that could prevent disease, save life and improve welfare. It is not to enforce abstinence from the use of the legal drug nicotine, which in itself, and without delivery through tobacco smoke, poses low risks to health.

      Once again, tobacco control academics proceed from results to conclusions and on to policy prescription without any remotely adequate policy evaluation framework or any apparent awareness of the limitations of their analysis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 29, Ellen M Goudsmit commented:

      I am concerned that two previous efforts to correct factual errors have not been incorporated in the revision.

1. I have previously written to the main author that Wallman et al evaluated pacing, as defined by Goudsmit et al (2012). This is very different from the GET protocols used in other RCTs. I suspect that few readers would be aware of the difference between GET and pacing.
2. No study assessing GET used the original or revised London criteria for classic ME (Goudsmit et al 2009). The version published by the Westcare ME Task Force is different from both, as well as incomplete. Research has indicated that the Westcare ME criteria select a different sample (Jason et al, personal communication). As no study has yet assessed exercise for classic ME, one cannot generalise any conclusion about efficacy from the trials in the review to patients with this disease.
3. As pointed out by Professor Jason, who devised the Envelope theory, Adaptive Pacing Therapy (APT) is not based on the former. Again, this has been pointed out before.
4. APT should not be equated with the strategy of pacing recommended by many self-help groups. Pacing helps (cf. all surveys conducted to date); APT is of little value (White et al, 2011). NB: The PACE trial did not assess pacing.

Science demands precision, so I hope that this third attempt to correct errors will be responded to in an appropriate manner. To repeat inaccurate information undermines the scientific process.

      Goudsmit EM, Jason LA, Nijs J, et al. (2012) Pacing as a strategy to improve energy management in myalgic encephalomyelitis/chronic fatigue syndrome: A consensus document. Disability and Rehabilitation 34(13): 1140-1147.

      Goudsmit EM, Shepherd C, Dancey CP, et al. (2009) ME: Chronic fatigue syndrome or a distinct clinical entity? Health Psychology Update 18(1): 26-33. Available at: http://shop.bps.org.uk/publications/publications-by-subject/health/health-psychology-update-vol-18-no-1-2009.html

      Jason LA (2017) The PACE trial missteps on pacing and patient selection. Journal of Health Psychology. Epub ahead of print 1 February.

      Jason LA, Brown M, Brown A, et al. (2013) Energy conservation/envelope theory interventions. Fatigue: Biomedicine, Health & Behavior 1(1-2): 27-42.

      ME Association (2015) ME/CFS Illness management survey results. ‘No decisions about me without me’. Part 1. Available at: http://www.meassociation.org.uk/wp-content/uploads/2015-ME-Association-Illness-Management-Report-No-decisions-about-me-without-me-30.05.15.pdf (Various survey results in Appendix 6)

      White PD, Goldsmith KA, Johnson AL, et al. (2011) PACE trial management group. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): A randomised trial. The Lancet 377: 823–836.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 28, Andrea Messori commented:

      Ventral hernia surgery: economic information in a series of Italian patients

      Andrea Messori, Sabrina Trippoli

      HTA Unit, ESTAR Toscana, Firenze, Italy

The paper by Rampado et al. [1] is the first analysis to examine the issue of costs in Italian patients undergoing incisional hernia repair with synthetic or biological meshes. One interesting finding of this study is the analysis of DRGs and reimbursements observed in this real-life setting. Table 2 of the article by Rampado et al. [1] shows that 7 different DRGs were employed for the overall series of 76 patients divided into three groups. The amounts reimbursed according to these 7 DRGs ranged from EUR 1,704.03 to EUR 13,352.72 (mean value = EUR 2,901, weighted according to the number of patients in the three groups). The length of stay was more homogeneous across the three groups (7 days in Group 1, N=35; 7 days in Group 2, N=31; 13 days in Group 3, N=11), with a mean value of 7.87 days weighted according to the size of the three groups. According to Rampado et al. [1], DRG reimbursements in Italy underestimate real costs: while the weighted mean of reimbursements is EUR 2,901, the weighted mean cost in the same patients is EUR 6,908. This real-life information on costs can be extremely useful for modeling studies that evaluate the cost-effectiveness of meshes in Italian patients undergoing incisional hernia repair.
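As an aside for readers who wish to check the weighted-mean arithmetic, here is a minimal sketch using the group sizes and per-group lengths of stay reported above (the same formula applies to the reimbursement figure, whose per-group amounts are not reproduced here):

```python
# Weighted mean of length of stay across the three groups as reported:
# 7 days (N=35), 7 days (N=31) and 13 days (N=11).
sizes = [35, 31, 11]
length_of_stay = [7, 7, 13]

weighted_mean = sum(n * d for n, d in zip(sizes, length_of_stay)) / sum(sizes)
print(f"{weighted_mean:.2f} days")  # ~7.9 days, close to the 7.87 quoted
```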

      References

      1. Rampado S, Geron A, Pirozzolo G, Ganss A, Pizzolato E, Bardini R. Cost analysis of incisional hernia repair with synthetic mesh and biological mesh: an Italian study. Updates Surg. 2017 Sep;69(3):375-381. doi:10.1007/s13304-017-0453-9.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 28, Gabriel Lima-Oliveira commented:

      "Brazilian scientific societies currently allow laboratory directors choose between fasting/no-fasting time for all laboratory tests when prescribed together with lipid profile; but such a ‘‘permit’’ is not granted by any scientific evidence. Fasting time for most blood tests should be 12 hours, whereas for lipid profile alone is an exception based on European consensus."

      Text published by Journal of Clinical Lipidology Official Journal of National Lipid Association. All rights reserved


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 13, David Nunan commented:

      On the day this editorial was released we contacted the Editor for consideration of the following commentary. We have yet to hear back from the Editor. To avoid further delay via formal submission, we present here a truncated version of our commentary.

      Response to “Saturated fat does not clog the arteries: coronary heart disease is a chronic inflammatory condition, the risk of which can be effectively reduced from healthy lifestyle interventions”

      Implausible discussions in saturated fat “research”

      Definitive solutions won’t come from another million editorials (or a million views of one).

The British Journal of Sports Medicine again acts as the unusual home to an opinion editorial advocating for public health guidance on saturated fat to be revised based on selected “evidence”. As an editorial, it was always going to struggle to avoid calls of “cherry picking”. More worrying was the failure to apply even the most basic evidence-based principles. Here, we do the job of the authors (and editor[s]) by addressing the quality of the evidence presented and highlighting some of the contradictory evidence and the complexity and uncertainty of the evidence base, whilst being mindful of our own cognitive biases.

      Effects of reducing saturated fat intake for cardiovascular disease

The authors refer to evidence from a “landmark” meta-analysis of observational studies to show a lack of an association between saturated fat consumption and all-cause mortality, coronary artery disease incidence and mortality, ischaemic stroke, and type 2 diabetes [1]. According to best practice evidence-based methods, the types of studies included here provide low quality evidence (unless specific criteria are met) [2]. Indeed, the review authors actually reported that the certainty of the reported associations (or lack thereof) was “very low”, indicating any estimates of the effect are very uncertain [1].

Conversely, a high-quality meta-analysis of available RCTs (n=17, with ~59,000 participants) from the Cochrane Collaboration found moderate quality evidence from long-term trials that reducing dietary saturated fat lowered the risk of cardiovascular events (number needed to treat [NNT]=14), but found no effect on all-cause and cardiovascular mortality, risk of myocardial infarction, or stroke, compared with usual diet [3]. The Cochrane review also found, in subgroup analyses, that the reduction in cardiovascular events was observed in the studies replacing saturated fat with polyunsaturated fat (but not with carbohydrates, protein, or monounsaturated fat).

      Thus the consensus viewpoint of a beneficial effect of reduced dietary saturated fat and replacement with polyunsaturated fat in the general population appears to be underpinned by a higher quality evidence base.

      Benefits of a Mediterranean diet on primary and secondary cardiovascular disease

      In the section “dietary RCTs with outcome benefit in primary and secondary prevention”, the authors switch from saturated fat to low fat diets and cite two trials, namely the PREDIMED study [5] and the Lyon Diet Heart study [6].

      The PREDIMED study investigated the effects of a Mediterranean diet including fish, whole grain cereals, fruits and supplemented with extra-virgin olive oil versus the same Mediterranean diet supplemented with mixed nuts, versus advice to reduce dietary fat on primary prevention of cardiovascular disease. The dietary interventions in PREDIMED were designed to increase intakes of mono- and poly-unsaturated fat and reduce intake of saturated fat.

The Lyon Diet Heart study examined the impact of a Mediterranean, alpha-linolenic acid-rich diet (with significantly less lipids, saturated fat, cholesterol, and linoleic acid) compared with no dietary advice. This study also aimed to assess the effect of increased dietary intake of unsaturated (polyunsaturated) fats.

Both these studies support the current consensus to increase intake of polyunsaturated dietary fats in replacement of saturated fat. These findings also suggest that placing a limit on the percentage of calories from unsaturated fats may be unwarranted, which has now been acknowledged in a recent consensus [7].

Furthermore, a meta-analysis reviewing the effects of the Mediterranean diet on vascular disease and mortality [8] found that, using the best available data, the Mediterranean diet reduced vascular events and incidence of stroke, but did not improve all-cause mortality, cardiovascular mortality, coronary events, or heart failure compared with controls. The review authors highlighted the limited quantity and quality of the evidence, the uncertainty of the effects of a Mediterranean diet on cardiovascular outcomes, and the absence of data about adverse outcomes.

      LDL-Cholesterol and Cardiovascular mortality

The authors support their view that the cardiovascular risk of LDL-cholesterol has been exaggerated with 45-year-old data from the Minnesota Coronary Experiment (MCE) [9] and a systematic review of observational studies [10]. However, the authors do not address the observed limitations of the MCE study, including discrepant event rates and selective outcome reporting, over 80% attrition with a lack of intention-to-treat analysis, and a small event rate difference (n=21) plausibly driven by a higher unexplained drop-out in the control group [11].

      The review cited found that LDL-cholesterol is not associated with cardiovascular disease and is inversely associated with all-cause mortality in elderly populations [10]. However, the methodological quality of this review has been judged to be poor for, among other problems, non-uniform application of inclusion/exclusion criteria, a lack of critical appraisal of the methods used in the eligible studies (low quality observational studies), failure to account for multifactorial analysis (i.e., lack of control for confounders), and not considering statin use (see Eatz letter in response to [12] and [13]).

The authors fail to discuss large-scale RCT evidence showing that LDL-cholesterol-reducing statin therapy reduces the risk of coronary deaths, myocardial infarctions, strokes, and coronary revascularisation procedures by ~25% for each mmol/L reduction in LDL-cholesterol during each year (after the first) that it is taken [14]. We are aware of the ongoing debate around the integrity of the data in relation to statins, particularly around associated harms and their potential mechanisms. However, there appears to be general consensus on their effectiveness in reducing hard endpoints, regardless of the underlying mechanism.

      Therefore, given the flaws of the referenced trial and systematic review of observational studies and evidence in support of benefits of LDL-cholesterol lowering therapy, it is too early to dismiss LDL-cholesterol as a risk factor for cardiovascular disease and mortality.

We note with interest the authors’ statement “There is no business model or market to help spread this simple yet powerful intervention.” It is not beyond comprehension that journals present a credible business model based on attracting controversy in areas of public health importance where clarity, not confusion, is needed. Notable conflicts of interest include income from a low-budget film promoting the purported benefits of a high saturated fat diet.

The latest opinion editorial overlooks a large contradictory evidence base and the inherent uncertainty of nutritional epidemiological studies and trials [15]. Arguably, what is needed is a balanced discussion of dietary patterns, over and above individual macronutrients, that considers collaborative efforts for improving the evidence base and our understanding of the complex relationship between dietary fat and health.

      References available from corresponding author.

David Nunan (1*), Ian Lahart (2). (1) Senior Researcher, University of Oxford, UK. david.nunan@phc.ox.ac.uk. *Corresponding author. (2) Senior Lecturer in Exercise Physiology, University of Wolverhampton, UK.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

Can you possibly provide the coordinates used, as it is not possible to understand exactly what analysis has been performed without them?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 12, Rima Obeid commented:

      Trimethylamine N-oxide and platelets aggregation: insufficient evidence for causal inference in thrombosis - http://amj.amegroups.com/article/view/4016/4744

Trimethylamine N-oxide (TMAO) is an amine oxide generated in the liver from nutrients such as choline, betaine, or carnitine via an intermediate gut-bacteria-driven metabolite, trimethylamine (TMA). Recently, Zhu et al. conducted a 2-month open-label, non-placebo-controlled intervention in vegetarians and omnivores using 450 mg total choline/day (1). Zhu et al. reported a significant increase in plasma TMAO concentrations (from 2.5 to 36.4 microM in omnivores and from 2.6 to 27.2 microM in vegetarians). A corresponding increase in platelet aggregation, according to an in vitro platelet function test and expressed as percentage of maximal amplitude, was also reported. The assumed effects of TMAO on platelet aggregation were observed at the first follow-up visit, 1 month after the start of supplementation, and the effects were stronger in omnivores than in vegetarians (1). No further changes occurred in the following month of supplementation. There is no information on the latency period, the sustainability of the effect, or resistance to high TMAO levels. The results suggest that the increase in platelet aggregation had leveled off after 1 month (no further increase between month 1 and month 2). There is no evidence on the sustainability of the effect after the last oral choline dose, which was taken the evening before the platelet function test. The results of the adenosine diphosphate (ADP)-induced platelet aggregation test in vitro were interpreted as a prothrombotic effect of TMAO (1). After aspirin use in subjects without platelet disorders, lowering of in vitro platelet reactivity to 5 microM ADP was hypothesized to indicate that “TMAO may overcome antiplatelet effects of aspirin”. Nevertheless, the interactive effect of aspirin and TMAO can equally be argued to indicate that “TMAO may reduce the risk of bleeding from aspirin” or “TMAO may reduce resistance to aspirin in subjects who need antiplatelet drugs”. But how should the results be interpreted in terms of cause and effect?

Platelet aggregation is a highly complex process involving numerous cellular receptors and transmembrane pathways. Platelet activation occurs when agonists, such as ADP, thromboxane A2 (TxA2), and thrombin, bind to their receptors. This physiological process is involved in protective hemostasis (i.e., it prevents bleeding by forming a clot) as well as in pathological thrombosis (over-aggregation). A variety of agonists such as ADP, epinephrine, arachidonic acid, or collagen can induce platelet aggregation via different mechanisms (2). This characteristic has been used for the in vitro diagnosis of platelet disorders and for monitoring resistance to antiplatelet drugs. Nevertheless, assays that use a single agonist, or a single concentration of any agonist, are an oversimplification of platelet function, which could be completely different under physiological conditions (3).

In vivo, platelet activation causes ADP to be released from dense granules. ADP activates the surface glycoprotein IIb/IIIa, which attaches to fibrinogen, thus leading to aggregation of platelets onto the adherent layer. Adding ADP to platelet-rich plasma (in vitro) causes an initial increase in aggregation due to activation of the glycoprotein IIb/IIIa platelet membrane receptor and a second wave of aggregation due to recruitment of additional platelet aggregates. In contrast, aspirin inhibits platelet activation mainly by targeting cyclooxygenase 1 (COX-1), thus inhibiting TxA2 formation. Because arachidonic acid acts on the COX-1/TxA2 system, this compound is used, instead of ADP, for the in vitro induction of platelet aggregation in platelet-rich plasma under aspirin treatment. Although aspirin has been shown to reduce ADP-induced platelet aggregation, its inhibition of aggregation induced by arachidonic acid is greater, and the latter test is used for routine monitoring of the aspirin effect (4,5).

Zhu et al. observed higher platelet aggregation at high TMAO (compared with low TMAO) and lower aggregation under aspirin compared with the same subjects without aspirin (1). The results are not interpretable, for the following reasons: first, because results of platelet aggregation in platelet-rich plasma are not comparable between studies, agonists, and agonist concentrations (6); second, because TMAO was anticipated to inhibit surface glycoprotein IIb/IIIa (which is activated by ADP), whereas aspirin acts mainly via TxA2. Thus, using ADP as an agonist for the surrogate platelet aggregation test is not selective for the aspirin effect. However, what would have happened in subjects with an indication for antiplatelet treatment? Could high TMAO be protective against bleeding? Could it reduce resistance to long-term antiplatelet therapy? Could there be platelet adaptation to high choline intake? Clearly, these questions are not answered yet.

The long-term risk of thrombosis associated with high choline intake or high plasma TMAO is not evident. The value of platelet function tests in predicting future thrombosis in non-symptomatic individuals has been questioned in the Framingham Heart Study cohort, where no association was found between several platelet function tests (including ADP-aggregation) and future thrombosis after controlling for other likely competing risk factors (6). Similar negative results were reported by Weber et al., who found that ADP-aggregation was not associated with thrombosis (7). Moreover, compared with omnivores, vegetarians could have fewer or larger platelets. In addition, any possible association between TMAO and platelet function could be subject to effect modification by dietary components such as betaine, carnitine, fatty acids, lipids, or micronutrients (8,9). In line with this, Zhu et al. have indeed shown that the platelet aggregation results, which were not different between vegetarians and omnivores at baseline, became different after 1 and 2 months of supplementation with 450 mg/day choline. Therefore, since the intervention was identical in both groups, the results strongly suggest the presence of effect modification via yet unknown factors.

Zhu et al. have shown that aspirin lowers plasma TMAO after a choline load by almost 50% within 1 month (1). This could be related to changes in gastrointestinal acidity and bacterial populations, thus affecting the production rate of TMA; to effects on the FMO3 system; or to effects on a yet unknown TMA-metabolizing system. The results also draw attention to the role of aspirin (and possibly many other drugs) as an effect modifier in clinical studies on the role of TMAO in vascular diseases.

If the study of Zhu et al. (1) is to be used for synthesizing evidence, the following arguments can be made: the hypothesis could be that “exposure to TMAO causes thrombosis (shown by using an appropriate surrogate test)” (Figure 1). A randomized controlled trial would be an appropriate design. However, dietary intake of other sources of TMAO should be controlled, and confounding from aspirin or other well-known factors (i.e., renal dysfunction, inflammation, or vascular diseases) that affect TMAO and, simultaneously, the outcome “platelet aggregation” should be conditioned on. Information on short- and long-term effects of high choline intake is equally important because of platelet adaptation and the analytical limitations of most available surrogate in vitro tests. Since the effect does not appear to increase further over time, resistance or adaptation to high TMAO could equally be a valid explanation.

Taken together, because of serious limitations in the study design, inappropriate surrogate outcomes, unknown kinetics of the platelet response to TMAO, and uncontrolled confounders, there is a risk in using such data for causal inference on a proposed direct prothrombotic effect of dietary choline.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 13, David Keller commented:

      Low LDL cholesterol was associated with significantly higher risk of Parkinson disease than was high LDL

      The authors report that the adjusted hazard ratio for Parkinson disease ("PD") in subjects with LDL < 1.8 mmol/L versus those with LDL >4.0 mmol/L was 1.70 [1.03 - 2.79]. Note that the 95% confidence interval (in brackets) does not cross 1.0, so this association of low LDL with increased risk of PD is statistically significant.

The above data were then subjected to "genetic, causal analysis", which yielded a risk ratio for a lifelong 1 mmol/L lower LDL cholesterol level of 1.02 [0.26 - 4.00] for Parkinson's disease. Note that this 95% confidence interval crosses 1.0, and the mean risk ratio of 1.02 is barely elevated.

      The tiny and non-significant increase in PD risk caused by a lifelong 1 mmol/L lower serum LDL level, as calculated by the genetic causal analysis, appears to contradict the significant increase in risk of PD for subjects with LDL < 1.8 mmol/L, as compared with subjects with LDL > 4.0 mmol/L seen in the observational analysis.

      This apparent contradiction may be an artifact of the diminished statistical power of the genetic causal analysis (which compared change in PD risk for a change in LDL of only 1 mmol/L) versus the observational study, which found significantly higher PD risk associated with LDL < 1.8 mmol/L than with LDL > 4.0 mmol/L. In the observational analysis, the LDL in the high-PD-risk subjects was at least 2.2 mmol/L lower than in the low-risk subjects (ie: 4.0 - 1.8 = 2.2 mmol/L). Thus, the genetic causal analysis calculated the effect of an LDL lower by only 1.0 mmol/L, while the observational analysis looked at the effect of an LDL lower by at least 2.2 mmol/L.

I suggest that the authors compare apples with apples by recalculating the genetic causal analysis to determine the effect of lifelong lowering of LDL by at least 2.2 mmol/L, the minimum separation of LDL levels between the comparator groups in the observational analysis. Comparing the effect of a larger decrease in LDL may enhance the size and significance of the results calculated by the genetic analysis, and bring them into agreement with the significantly increased risk at lower LDL found in the observational analysis.
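To give a sense of the scale involved, here is a minimal sketch of rescaling the published per-1 mmol/L genetic estimate to a 2.2 mmol/L difference on the log scale. This is only a back-of-the-envelope approximation of what a re-run genetic causal analysis would give, since it simply stretches the published point estimate and interval proportionally rather than adding statistical power:

```python
# Hedged sketch: re-express a ratio estimate per a larger LDL difference
# by scaling on the log scale. Point estimate and CI are those quoted
# above for the genetic causal analysis (per 1 mmol/L lower LDL).
import math

rr_per_1 = 1.02
ci_per_1 = (0.26, 4.00)
delta = 2.2  # mmol/L: minimum LDL separation between observational groups

def rescale(estimate, factor):
    """Re-express a ratio estimate per `factor` units of exposure."""
    return math.exp(factor * math.log(estimate))

print(f"RR per {delta} mmol/L: {rescale(rr_per_1, delta):.2f}")  # ~1.04
print([round(rescale(x, delta), 2) for x in ci_per_1])           # ~[0.05, 21.11]
```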


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 26, Zhang Weihua commented:

      Congratulations! Great work! Thanks for citing our paper!

      Formation of solid tumors by a single multinucleated cancer cell. Weihua Z, Lin Q, Ramoth AJ, Fan D, Fidler IJ. Cancer. 2011 Sep 1;117(17):4092-9. doi: 10.1002/cncr.26021. Epub 2011 Mar 1. PMID: 21365635

It is time to rethink our experimental models for cancer study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 24, Jim van Os commented:

      We would like to report two small and non-essential errors in the publication of this paper:

1. The first error is in Table 2, page 6: the bottom row in the column of mean PRS values, now depicted as ‘-0.28’ with a standard deviation of ‘0.55’, is incorrect and has to be replaced by ‘0.77’ with S.D. = ‘0.19’.

2. The second error pertains to the text in the results section on page 5, fifth paragraph, under the heading ‘Associations in relatives and healthy comparison subjects’, in which the results of associations between PRS and CASH-based lifetime depressive and manic episodes are reported. The ORs, CIs and p-values for the outcome ‘any affective episode’ in both the relatives group and the healthy comparison group in this text have to be replaced with the corresponding ORs, CIs and p-values reported in Table 7, page 11, for any affective episode.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 22, Alessandro Rasman commented:

      Aldo Bruno MD, Pietro M. Bavera MD, Aldo d'Alessandro MD, Giampiero Avruscio MD, Pietro Cecconi MD, Massimiliano Farina MD, Raffaello Pagani MD, Pierluigi Stimamiglio MD, Arnaldo Toffon MD and Alessandro Rasman

We read with interest this study by Zakaria et al. titled "Failure of the vascular hypothesis of multiple sclerosis in a rat model of chronic cerebrospinal venous insufficiency" (1). Unfortunately, the authors ligated the external jugular veins of the rats, not the internal jugular veins. Dr. Zamboni's theory of chronic cerebrospinal venous insufficiency concerns the internal jugular veins, not the external jugular veins (2). Perhaps the authors could read the two papers by Dr. Mancini et al. (3,4). In our opinion, therefore, the title of this study is absolutely not correct.

References:
1. Zakaria, Maha MA, et al. "Failure of the vascular hypothesis of multiple sclerosis in a rat model of chronic cerebrospinal venous insufficiency." Folia Neuropathologica 55.1 (2017): 49-59.
2. Zamboni, Paolo, et al. "Chronic cerebrospinal venous insufficiency in patients with multiple sclerosis." Journal of Neurology, Neurosurgery & Psychiatry 80.4 (2009): 392-399.
3. Mancini, Marcello, et al. "Head and neck veins of the mouse. A magnetic resonance, micro computed tomography and high frequency color Doppler ultrasound study." PloS one 10.6 (2015): e0129912.
4. Auletta, Luigi, et al. "Feasibility and safety of two surgical techniques for the development of an animal model of jugular vein occlusion." Experimental Biology and Medicine 242.1 (2017): 22-28.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 12, Lydia Maniatis commented:

      "The high levels of correlation between the four measures used in this study (fixations, interest points, taps and computed salience; see Fig. 3) support the conclusion that the tapping paradigm is a valid measure of salience."

Ascertaining that people look where they point (how could they guide their movement if they didn't?) has to be very high on the list of predictable predictions. With respect to "computed salience": "Finally, saliency maps computed from the Itti et al. (1998) model were compared against the tap data and found to correlate beyond the null hypothesis, R(bT) = 0.21, p = 4.3 × 10^-15, though not significantly below the sample error hypothesis, R(eST) = 0.25, p = 0.075. This relatively low value of R(eST) is obtained because the computed saliency maps were relatively diffuse."

      "In the absence of a specific task (蘦ree viewing� it seems reasonable to assume that at least for the first few images, and for the first few fixations in these images, observers let themselves be guided by the visual input, rather than by some more complex strategy..."

      The criterion of "it seems reasonable [to us] to assume that..." is the contemporary definition of rigor (providing a solid rationale or even testing assumptions, in this case at the least debriefing subjects). In contrast, it seems reasonable to me to assume that if someone asks me to freely select a place in a picture to point to, I would want to point at something interesting or meaningful, not at the first thing that caught my attention, e.g. the brightest spot. That is, observers awareness that someone else is observing and in some way assessing their choices makes the authors assumptions that they are limiting "top-down" influences seem very weak to me. Of course, the top-down/bottom up distinction is itself completely vague. If, in the image, I see two chairs and a sofa and point to the one that I immediately recognize as having seen in IKEA, is this top-down or bottom up?

Relatedly, the authors casually address the issue of how many fixations preceded the pointing during the 1.4 seconds of viewing time: "Note that for the tapping study, the reaction time includes the time after the subject has decided where to tap, the movement of the hand, as well as the (relatively short) delay between the tap on the initialization screen and the presentation of the image. We therefore estimate that the majority of subjects performed three or fewer saccades before deciding where to tap." So, at least 127/252? Is this really an adequate assumption? And what is the rationale for "three or fewer" being an important cut-off?

      It's also typical of the contemporary approach that the experimental emphasis is wholly on technique and statistics and completely agnostic to the actual stimuli/conditions and to the percepts to which they give rise, as well as to the many fundamental conceptual issues that such considerations entail, and of course the effect of stimulus variations on the shape of the data.

      This empirical agnosticism is reflected in the use of the term "natural scene" to characterize stimuli; it is completely uninformative as to the characteristics of the stimuli. (This is especially the case as "natural scene" here includes, as it often does in scholarly publications, images of buildings on a college campus).

      Surely, certain sets of such stimuli would produce greater or smaller inter-individual differences than others, altering the already weak data significantly as to "saliency maps." For example, if an image contained one person, then attention would generally fall on this person. But if there were two people, the outcome would probably be divided between the two, and so on. (Is seeing a person in a brief presentation top-down or bottom-up?)

      Wouldn't it be weird if "attentive pointing" DIDN'T correlate with "other measures of attention"? So weird that the interpretation of the results would probably be chalked up to the many sampling uncertainties and confounding factors that are, in the predictable case, bustled through with lots of convenient (or "reasonable") assumptions and special pleading for weak data.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 01, Lydia Maniatis commented:

      Like too many in the vision literature, this article recycles various unviable odds and ends from the theoretical attic. The general discussion begins with this amazing statement: “It is generally accepted that lateral inhibition among orientation-selective units is responsible for the tilt illusion.” The collection of citations mentioned in the subsequent discussion provides absolutely no support for this completely untenable (if somewhat ambiguous) statement, as all they show is that the perception of any element in the visual field is affected by the structure of the surrounding field. Perhaps (no guarantee) the cases cited could superficially be reconciled with the "lateral inhibition" claim; but it crashes and burns in an infinite number of other cases, the perception of orientation being wholly contingent on a sophisticated structural interpretation of the retinal stimulation; and the neurons responsible for this interpretive activity are, of course, the same neurons supposedly acting via dumb (so to speak) local interactions to produce the "tilt illusion."

      The claim that neurons act as detectors of orientation, each attuned to a particular value, is equally untenable (as Teller (1984) has pointed out in detail). Again, claims such as : “lateral interactions between these lines, or neurons, can skew the distribution and change the perceived orientation” are blind to the fact that perception does not act from local to global, but is effectively constrained by the whole visual field and the values inherent in the possible organizations of this field, which are infinite and among which it generally "chooses" only one. We don’t see the tilt of the lines composing the drawn Necker cube veridically from a 2D point of view; so if there were “labeled lines” for tilt, as the authors suggest, then these responses cannot directly affect the percept; but direct percepts are what the authors are using to draw their conclusions. Also, an orientation is a feature of a structure; and any structures in perception are constructed, along with their orientation, from point stimulation from photons striking the retina; so this is a case of the visual system supposedly "detecting" features of things that it has itself created.

      Similarly: “Known psychophysical features of the tilt illusion … also suggest low-level locus of the tilt illusion. Taken together, V1 is a likely locus for the main site of the tilt illusion.“ The attribution of perceptual experiences to ‘low level’ or peripheral cortical processes was also criticized by Teller (1984) who noted that it implicitly relies on a “nothing mucks it up” proviso, i.e. assuming the low level activity is directly reflected in the percept, without explaining what happens upstream. Again, attributing a perceptual effect such as the perceived tilt of an image to simple interactions in the same V1 neurons that are responsible for observers' perception of e.g. forms of the room, the computer, the investigators, the keypad, etc., is not credible. It would be paradoxical to claim, as Graham (1992) has done, that some percepts are a direct, conscious reflection of low level neural activity, as there would have to be a higher level process deciding that the interpretation of image x should be mediated only by the lower level, and the products shunted directly to consciousness. Such arguments should never be made again.

      Similarly: “To summarize, spatial contextual modulations of V1 neurons and their population responses seem to be likely candidates for the neural basis for simultaneous hue contrast.”

The reference to “simultaneous contrast mechanisms” is inapt for all the same reasons, i.e. that this is an effect highly sensitive to global context, with sophisticated criteria, and thus cannot simply be segregated theoretically from the processes of perceptual organization in general.

      Finally, I don't get this: "No fixation point was provided..." but then "The observers' task was to adjust the orientation of the comparison grating, which was presented on the other side of the fixation point…” Was there or wasn't there?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 07, Michelle Fiander commented:

PRISMA describes elements to report, not how to conduct a systematic review. The Newcastle–Ottawa scale is appropriate for non-RCTs but not for RCTs. Of the 2 RCTs included in this review, Rezk 2016, for example, may not have scored as high on the Cochrane RoB tool as on the Newcastle scale.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 31, Lise Bankir commented:

      About osmoles and osmolytes

      It is important to use the right words to ensure an unambiguous understanding of the diverse aspects of scientific studies. Thus, I would like to draw attention to the difference between "osmolytes" and "osmoles".

      The word "osmolyte" is misused in this paper and should be replaced throughout by "osmole".

Osmoles (e.g. sodium, potassium, chloride, urea, glucose) are substances that increase the osmolarity of the fluid in which they are dissolved. Osmolytes (e.g. betaine, sorbitol, myoinositol, glycine, taurine, methylamines) are substances that accumulate inside cells to protect them from a high ambient osmolarity.

See the definition of osmolytes in the two encyclopedias below.

      http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/osmolyte

      https://en.wiktionary.org/wiki/osmolyte

      See also reviews about osmolytes (two examples given below).

      J Exp Biol. 2005 Aug;208(Pt 15):2819-30. Organic osmolytes as compatible, metabolic and counteracting cytoprotectants in high osmolarity and other stresses. Yancey PH

      Curr Opin Nephrol Hypertens. 1997 Sep;6(5):430-3. Renal osmoregulatory transport of compatible organic osmolytes. Burg MB


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Aug 28, NephJC - Nephrology Journal Club commented:

      The controversial and thought-provoking paper “Increased salt consumption induces body water conservation and decreases fluid intake.” was discussed on June 6th and 7th 2017 on #NephJC, the open online nephrology journal club.

      Introductory comments written by Joel Topf are available at the NephJC website here

181 people participated in the discussion, with nearly 1000 tweets. We were delighted that Paul Welling, an expert in renal physiology, also joined in the chat.

      The highlights of the tweetchat were:

• Nephrologists were surprised that the ‘basic tenet’ of nephrology, steady-state sodium balance, is now in dispute.

• The methodology of this study was very impressive, with the simulated Mars missions Mars105 and Mars520 providing a unique opportunity to do prolonged metabolic balance studies, albeit in only 10 subjects.

      • It was unclear if the salt content was blinded or not and this may limit result interpretation.

      • It’s interesting that cortisol may have a more important role in sodium/water balance than previously thought via its stimulation of protein catabolism to generate more urea for urine concentration, however its overall significance is still thought to be considerably less than that of ADH/aldosterone.

Transcripts of the tweetchats, and curated versions as Storify, are available from the NephJC website.

Interested individuals can track and join in the conversation by following @NephJC or #NephJC on Twitter, liking @NephJC on Facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 31, Lise Bankir commented:

      About osmoles and osmolytes

      It is important to use the right words to ensure an unambiguous understanding of the diverse aspects of scientific studies. Thus, I would like to draw attention to the difference between "osmolytes" and "osmoles".

      The word "osmolytes" is misused in this paper and should be replaced throughout by "osmoles".

Osmoles (e.g. sodium, potassium, chloride, urea, glucose) are substances that increase the osmolarity of the fluid in which they are dissolved. Osmolytes (e.g. betaine, sorbitol, myoinositol, glycine, taurine, methylamines) are substances that accumulate inside cells to protect them from a high ambient osmolarity.

See the definition of osmolyte in the two encyclopedias below.

      http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/osmolyte

      https://en.wiktionary.org/wiki/osmolyte

      See also reviews about osmolytes (two examples given below).

      J Exp Biol. 2005 Aug;208(Pt 15):2819-30. Organic osmolytes as compatible, metabolic and counteracting cytoprotectants in high osmolarity and other stresses. Yancey PH

      Curr Opin Nephrol Hypertens. 1997 Sep;6(5):430-3. Renal osmoregulatory transport of compatible organic osmolytes. Burg MB


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Oct 31, Lise Bankir commented:

      See E-Letter to the JCI Editor about this article, by Richard Sterns and Lise Bankir

      "Of Salt and Water: Let's Keep it Simple"

      https://www.jci.org/eletters/view/88532#sec1


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 06, Christopher Southan commented:

IMCO it should have been incumbent on the Bentham Editor-in-Chief, the reviewing Editor, and the 3 referees to have spotted the severe grammatical problems and offered appropriate editorial support. This would have spared these non-native English authors the global embarrassment of publishing such a glaringly broken abstract. However, on checking, it looks like I could be naive in expecting this (https://en.wikipedia.org/wiki/Bentham_Science_Publishers).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 21, Manuel Menéndez González commented:

Yes, right. Though that would be an application to adjust pressures. There are many other potential applications where modifying the composition of CSF may represent a treatment for the condition.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 18, Darko Lavrencic commented:

I believe that implantable systems for continuous liquorpheresis and CSF replacement could also be used successfully for the intracranial hypotension-hypovolemia syndrome, as it could be caused by decreased CSF formation. See: http://www.med-lavrencic.si/research/correspondence/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 19, Seán Turner commented:

The authors describe eight (8) new species, not nine (9) as indicated in the title. In the manuscript, the accession numbers for the 16S rRNA genes of the type strains of Mailhella massiliensis (strain Marseille-P3199) and Mordavella massiliensis (strain Marseille-P3246) are switched; the correct assignments are LT615363: Mailhella massiliensis Marseille-P3199, and LT598584: Mordavella massiliensis Marseille-P3246.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 11, Anders von Heijne commented:

In addition to the complications of treatment with fingolimod that the authors report, there are a number of reported cases of PRES, with obvious radiological implications. In the EudraVigilance database there are currently (September 2017) 21 reported cases of fingolimod-related PRES.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 24, Lydia Maniatis commented:

This article’s casual approach to theory is evident in the first few sentences. After noting, irrelevantly, that “Since their introduction (Wilkinson, Wilson, & Habak, 1998), RF patterns have become a popular class of stimuli in vision science, commonly used to study various aspects of shape perception,” the authors immediately continue to say that “Theoretically, RF pattern detection (discrimination against a circle) could be realized either by local filters matched to the parts of the pattern, or by a global mechanism that integrates local parts operating on the scale of the entire pattern.” No citation is offered for this vague and breezy assertion, which begs a number of questions.

      1. How did we jump from “shape perception” to “RF detection against a circle”? How is the latter related to the former?

      2. Is the popularity of a pattern sufficient reason to assume that there exist special mechanisms – special detectors, or filters – tailored to its characteristics? Is there any basis whatsoever for this assertion?

      3. Given that we know that the whole does determine the parts perceived, why are we talking about integration of “local” elements? And how do we define local? Doesn’t a piece of a shape also consist of smaller pieces, etc? What is the criterion for designating part and whole in a stimulus pattern (as opposed to the fully-formed percept)?

      Apparently, there have been many ‘models’ proposed for special mechanisms for “RF detection against a circle,” addressing the question in these local/local-to-global terms. Could the mechanism involve maximum curvature integration, tangent orientations at inflection points, etc.? These simply take for granted the underlying assumption that there are special “filters” for “RF discrimination against a circle.” The only question is to what details of the figure are these mechanisms attuned.

      What if we were dealing with different types of shapes? What if the RF boundary shape were formed by different sized dots, or dashes, or rays of different lengths radiating from a center? Would we be talking about dot filters, or line length filters? Why put RF patterns in general, and RF patterns of this type in particular, on such an explanatory pedestal?

      More critically, how is it possible to leverage such patterns to dissect the neural processes underlying perception? When I look at one of these patterns, I don’t have any trouble distinguishing it from a circle. What can this tell me about the underlying process?

A subculture of vision science has opted to uncritically embrace the view that underlying processes can be inferred quite straightforwardly on the basis of certain procedures that mimic the general framework of signal detection. This view is labeled “signal detection theory” or SDT, but “theory” is overstating it. As noted in my earlier comment, Schmidtmann and Kingdom (2017) never explain why they make what, to a naïve observer, must seem very arbitrary methodological choices, nor does their main reference, Wilkinson, Wilson and Habak (1998). So we have to go back further to find some suggestion of a rationale.

      The founding fathers of the aforementioned subculture include Swets, Tanner and Birdsall (e.g. 1961). As may be seen from a quote from that article (below), the framing of the problem is artificial; major assumptions are adopted wholesale; “perception” is casually converted to “detection” (in order to fit the analogy of a radar observer attempting to guess which blip is the object of interest).

      “In the fundamental detection problem, an observation is made of events occurring in a fixed interval of time and a decision is made; based on this observation, whether the interval contained only the background interference or a signal as well. The interference, which is random, we shall refer to as noise and denote as N; the other alternative we shall term signal plus noise, SN. In the fundamental problem, only these two alternatives exist…We shall, in the following, use the term observation to refer to the sensory datum on which the decision is based. We assume that this observation may be represented as varying continuously along a single dimension…it may be helpful to think of the observation as…the number of impulses arriving at a given point in the cortex within a given time.” Also “We imagine the process of signal detection to be a choice between Gaussian variables….The particular decision that is made depends on whether or not the observation exceeds a criterion value….This description of the detection process is an almost direct translation of the theory of statistical decision.”
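To make the quoted framework concrete for readers unfamiliar with it, here is a minimal sketch of the equal-variance Gaussian detection model the passage describes; the d′ separation and criterion values are arbitrary illustrations, not parameters from any study:

```python
# Sketch of the equal-variance Gaussian SDT model quoted above: an
# "observation" is a draw from one of two unit-variance normals (noise N
# or signal-plus-noise SN), and the response is "signal" whenever the
# draw exceeds a fixed criterion. d_prime and criterion are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d_prime = 1.0       # separation between the N and SN distributions
criterion = 0.5     # decision cutoff on the observation axis
n_trials = 100_000

noise = rng.normal(0.0, 1.0, n_trials)       # N-only trials
signal = rng.normal(d_prime, 1.0, n_trials)  # SN trials

hit_rate = np.mean(signal > criterion)         # "signal" said when SN shown
false_alarm_rate = np.mean(noise > criterion)  # "signal" said when only N
print(f"hits: {hit_rate:.3f}, false alarms: {false_alarm_rate:.3f}")
```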

      In what sense does the above framework relate to visual perception? I think we can easily show that, in concept and application, it is wholly incoherent and irrational.

      I submit, first, that when I look around me, I don’t see any noise, I just see things. I’m also not conscious of looking for a signal to compare to noise; I just see whatever comes up. I don’t have a criterion for spotting what I don’t know will come up, and I don’t feel uncertain of - I certainly hardly ever have to guess at – what I’m seeing. The very effortlessness of perception is what made it so difficult to discern the fundamental theoretical problems. This is not, of course, to say that what the visual system does in constructing the visual percept from the retinal stimulation isn’t guesswork; but the actual process is light years more complex and subtle than a clumsy and artificial “signal detection” framework.

Given the psychological certainty of normal perceptual experience, it’s hard to see how to apply this SDT framework. The key seems to be to make conditions of observation so poor as to impede normal perception, making the observer so unsure of what they saw or didn’t see that they must be forced to choose a response, i.e. to guess. One way to degrade viewing conditions is to make the image of interest very low contrast, so that it is barely discernible; another way is to flash it for very brief intervals. Now, in these presentations, the observer presumably sees something; so these manipulations don’t necessarily produce an uncertain perceptual situation (though the brevity of the presentation may make the recollection of that impression mnemonically challenging). Where the uncertainty comes in is in the demand by investigators that observers decide whether the impression is consistent with a quick, degraded glimpse of a particular figure, in this case an RF of a certain type or a circle. I don’t see how one can defend the notion put forth by Swets et al (1961) that this decision, which is more a conscious, cognitive one than a spontaneous perceptual one, is based on a continuously varying criterion. The decision, for example, may be based on a glimpse of one diagnostic feature or another, or on where, by chance, the fovea happens to fall in the 180ms (Schmidtmann and Kingdom, 2017) or 167ms (Wilkinson et al, 1998) interval allowed. But the forced noisiness (due to the poor conditions), the Gaussian presumptions, the continuous variable assumption, and the binary forced-choice outputs are needed for the SDT framework to be laid on top of the data.

      For rest of comment (here limited by comment size limits), please see PubPeer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 23, Lydia Maniatis commented:

      It is oddly difficult to explain why a particular publication has no scientific content, even when, here, this is unequivocally the case. I think it’s important to try and make this quite clear.

      Before addressing the serious theoretical problems, I would like to make the easier points that show that, even on its own terms, the project is sloppy and unsuccessful.

According to the authors, whatever it is they are proposing is “physiologically unrealistic” (p. 24). Yet they continue to say: “Nonetheless, the model presented here will hopefully serve as a basis for developing a more physiological model of LF and RF detection.” No rationale is offered to underpin this inarticulate hope, which seems even more misplaced given that “there is a modest, systematic mismatch between the [unrealistic] model and the data,” despite the very permissive model-fitting (three free parameters, the post hoc introduction of a corrective function). That the modeling is strictly post hoc and ad hoc in character is reflected in the following statements: “The CFSF model presented here does not predict the inevitable increase in thresholds at frequencies higher than those explored in the present study. To do so would require CFSF with a somewhat different shape to the one shown in Figure 4…However, because we do not have the requisite data showing an upturn in thresholds at very high frequencies, we have not incorporated this feature into our present model.” (p. 24). We are dealing with atheoretical, condition- and data-specific post hoc model-fitting with no heuristic value.

      There is also a lack of methodological care in the procedure. As is usual in papers of this type, the number of observers is very small, and they are not all naïve (here, three of four are). One is apparently an author (GS). If naïveté doesn’t matter, then why mention it, and if it does, why the author participation? Also, while we’re given quite detailed descriptions of many aspects of the stimuli per se – details whose theoretical basis or relevance is unclear – we’re only told that the “monitor’s background was initially set to a mean luminance (grey).” The reference to “grey” is uninformative with respect to actual luminance. The monitor is part of the stimulus. (I don’t understand the reference to “initially.” Maybe I’m missing something.) The following statement also seems strangely casual and vague: “Observers usually completed two experimental blocks for each experimental conditions [sic]…” Usually?

      As for this: "The cross-sectional luminance profile was defined by a Gaussian with a standard deviation of 0.05 deg" -- it's just a part of the culture, no explanation needed.

      And then this - in the context of trying to rationalize differences between the present results and those of previous studies: “In addition to the reported data, we conducted a control experiment to measure detection thresholds for RF and LF patterns with a modulation frequency of 30 for two additional naïve observers. Results show that thresholds are no higher than for a modulation frequency of 20.” Why are we discussing unreported data? Why wasn’t this control experiment reported in the body of the paper?

      Experimental stimuli were exposed for 180ms, with a 400ms ISI. Why not 500ms, with a 900ms ISI? Or something else? 180ms is very short, when we consider the time it takes to initiate a saccade. Was this taken into consideration? Does it matter? In general, on what theoretical basis were the conditions selected? Would changing some or all of them change the results? What would that mean with respect to theory? Is the model so narrowly applicable that it extends only to these specific and apparently arbitrary conditions? If changing conditions would lead to different results, and to different post hoc models, and if the authors can’t predict and assign a theoretical meaning to these different possible outcomes, then it should be clear that the model has no explanatory status, that it is merely an ad hoc mathematical exercise.

      The idea that binary forced choices, with their necessary loss of information, are a good idea is mind-boggling, compounded by the arbitrariness of defining “thresholds” based on a 75% correct rate. Why not 99%? (As I'll discuss later, the SDT rationale is wholly inappropriate here). Why wouldn’t vision scientists be interested in what observers are actually seeing, instead of lumping together who knows what impressions experienced under extremely suboptimal conditions? (The reason for this SDT-related, unfortunate indifference to perception by vision scientists will be discussed in a following comment). Generating data in the required form seems more important than understanding what natural phenomena it reflects and explains, if any. Relatedly, I would note that it is indispensable to the evaluation of any visual perception study for the actual stimuli to be presented for interested readers’ inspection. I have asked the authors for access to these stimuli but haven’t yet received a response.

      But these are minor problems. The fundamental problem is that the authors have implicitly and explicitly adopted assumptions of visual system function that are never tested and are demonstrably lacking in face validity. (In a nutshell we are talking about the major fallacy of treating perception as a signal detection problem and neurons as "detectors.") In other words, even if the assumptions are false, the experiments premised on them are not designed to reveal this. (Yet, not only do existing facts and logical analysis falsify the premises, it would be easy to design similar experiments within the same framework that would falsify or render its arbitrariness evident, as I'll discuss in my second comment). Rather, data generated are simply assumed to reflect the claimed mechanisms, and loosely, with the help of lots of free parameters and ad hoc manipulations, are perpetually interpreted (via model-fitting) in these terms, with tweaks and excuses for every new and slightly different data set that comes along.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 13, John Sotos commented:

      To stimulate data-sharing, Bierer et al propose a new type of authorship, "data author," to credit persons who collect data but do not analyze it as part of a scientific study. Their proposal, however, requires a non-trivial modification to the structure of the PubMed database and revision of authorship criteria across thousands of journals, and it assigns a specialness to data authorship that could equally be claimed for "statistical authorship," "drafting authorship," "study-conceiving authorship," "benchwork authorship," etc.

      Reviving decades-old proposals for fractional authorship (1) could better achieve the same laudable aims, especially if open-source "blockchain" software technology (2)(3)(4) were used to conveniently, publicly, quantitatively, and securely track authorship credit in perpetuity.

      Authorship would thereby have some features of an alternative currency (e.g. Bitcoin): senior authors could use future authorship credits to "purchase" data from owners according to the data's value. They could also assign roles from a controlled vocabulary (data author, statistical author, etc.) to some or all authors. Over time, norms for pricing and authorship roles would coalesce in the scientific community.

      Overall, a blockchain fractional-authorship system would be more flexible and extensible than a special case made for data authors.
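
      To illustrate the idea (a hypothetical sketch, not a system proposed in the letter or an existing implementation), a fractional-authorship ledger might chain records like the following, each assigning a role and a credit fraction to one contributor; all names and identifiers are invented:

        import hashlib, json, time
        from dataclasses import dataclass, asdict

        # One block of a hypothetical fractional-authorship ledger: each entry
        # records a contributor's role and credit share, and is chained to the
        # previous entry by its hash.
        @dataclass
        class CreditBlock:
            paper_doi: str          # identifier of the work (invented)
            contributor: str
            role: str               # e.g. "data author", "statistical author"
            credit_fraction: float  # fractional authorship share
            prev_hash: str
            timestamp: float

            def block_hash(self) -> str:
                payload = json.dumps(asdict(self), sort_keys=True).encode()
                return hashlib.sha256(payload).hexdigest()

        genesis = CreditBlock("10.1000/example", "A. Researcher",
                              "data author", 0.25, "0" * 64, time.time())
        nxt = CreditBlock("10.1000/example", "B. Analyst",
                          "statistical author", 0.15, genesis.block_hash(),
                          time.time())
        print(nxt.block_hash())

      Tampering with any earlier record would change its hash and break the chain, which is the property that would let authorship credit be tracked publicly, quantitatively, and securely in perpetuity.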

      (1) Shaw BT. The Use of Quality and Quantity of Publication as Criteria for Evaluating Scientists. Washington, DC: Agriculture Research Service, USDA Miscellaneous Publication No. 1041, 1967. Available at: http://bit.ly/2pVTImI

      (2) Nakamoto S. Bitcoin: A Peer-to-Peer Electronic Cash System. October 31, 2008. https://bitcoin.org/bitcoin.pdf

      (3) Tapscott D, Tapscott A. Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World. New York: Portfolio/Penguin, 2016

      (4) Sotos JG, Houlding D. Blockchains for data sharing in clinical research: trust in a trustless world. (Blockchain Application Note #1.) March 2017. https://simplecore.intel.com/itpeernetwork/wp-content/uploads/sites/38/2017/05/Intel_Blockchain_Application_Note1.pdf


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 17, NephJC - Nephrology Journal Club commented:

      This randomised controlled trial of the C5a receptor inhibitor avacopan in ANCA-associated vasculitis was discussed on May 8th and 9th, 2017 on #NephJC, the open online nephrology journal club. Introductory comments written by Tom Oates are available at the NephJC website here.

      There was significant interest in this promising trial, with 141 participants in the discussion and nearly 700 tweets.

      The highlights of the tweetchat were:

      • There is a lot of concern in the Nephrology community about the long-term side-effects of steroid use in ANCA vasculitis and that an alternative agent that would allow for lower glucocorticoid exposure would be very welcome.

      • Overall, it was thought to be a well-designed and well-conducted trial.

      • The chosen primary endpoint, a decrease in Birmingham Vasculitis Activity Score of 50% or more, was hotly debated. Although the score is very frequently used in vasculitis research, using observed changes from baseline as a trial endpoint in a parallel-group study may render it a less valid tool.

      • The group also questioned whether vaccination would be required with avacopan; it decided that it wouldn’t be, because avacopan is a receptor blocker, unlike eculizumab, which is a complement cleavage inhibitor.

      • The treatment response to Avacopan without steroids was excellent and it appears to be a safe drug. We look forward to seeing results of the Phase III studies and some long-term data regarding relapse rates in the absence of steroids.

      Transcripts of the tweetchats, and curated versions on Storify, are available from the NephJC website.

      Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visit the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 28, Hilda Bastian commented:

      This is an interesting methodological approach to a thorny issue. But the abstract and the coverage (such as in Nature) gloss over the fact that the results measure the study method's biases more than they measure scientists on Twitter. I think the method is identifying a subset of people working in a limited set of science-based professions.

      The list of professions sought is severely biased. It includes 161 professional categories and their plural forms, in English only. It was based on a U.S. list of occupations (SOC) and an ad hoc Wikipedia list. A brief assessment of the 161 titles in comparison with an authoritative international list shows a strong skew towards social scientists and practitioners of some science-based occupations, and away from medical science, engineering, and more (United Nations Educational, Scientific and Cultural Organization (UNESCO)'s nomenclature for fields of science and technology, SKOS).

      Of the 161 titles, 17% are varieties of psychologist, for example, but psychiatry isn't there. Genealogists and linguists are there, but geometers, biometricians, and surgeons are not. The U.S. English-language bias is a major problem for a global assessment of a platform where people are communicating with the general public.

      Influence is measured in 3 ways, but I couldn't find a detailed explanation of the calculations, or a reference to one, in the paper. It would be great if the authors could point to that here. More detail on the "Who is who" service used, in terms of how up-to-date it is, would be useful as well.

      I have written more about this paper at PLOS Blogs, and point to key numbers that aren't reported, such as who was excluded at different stages. The paper says that data sharing is limited by Twitter's terms of service, but it doesn't specify what that covers. Providing a full list of proportions in the 161 titles, and descriptions of more than 15 of the communities they found (none of which appear to be medical science circles), seems unlikely to be affected by that restriction. More data would be helpful to anyone trying to make sense of these results, or extend the work in ways that minimize the biases in this first study.

      There is no research cited that establishes the representativeness of data from a method that can only classify less than 2% of people who are on multiple lists. The original application of the method (Sharma, 2011) was aimed at a very different purpose, so representativeness was not such a big issue there. There was no reference in this article to data on list-creating behavior. There could be a reason historians came out on top in this group: list-curating is probably not a randomly-distributed proclivity.

      It might be possible with this method to better identify Twitter users who work in STEM fields. Aiming for "scientists", though, remains, it seems to me, unfeasible at scale. Methods described by the authors as product-centric (e.g. who is sharing links to scientific articles and/or discussing them, or discussing blogs where those articles are cited), and key nodes such as science journals and organizations seem essential.

      I would also be interested to know the authors' rationale for trying to exclude pseudonyms - as well as the data on how many were excluded. I can see why methods gathering citations for Twitter users exclude pseudonyms, but am not sure why else they should be excluded. A key reason for undertaking this kind of analysis is to understand to what extent Twitter expands the impact of scientific knowledge and research. That inherently means looking to wider groups, and the audiences for their conversations. Thank you to the authors, though, for a very interesting contribution to this complex issue.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 06, John Greenwood commented:

      (Cross-posted from PubPeer; comment numbers refer to that discussion, but the content is the same.)

      To address your comments in reverse order -

      Spatial vision and spatial maps (Comment 19):

      We use the term “spatial vision” in the sense defined by Russell & Karen De Valois: “We consider spatial vision to encompass both the perception of the distribution of light across space and the perception of the location of visual objects within three-dimensional space. We thus include sections on depth perception, pattern vision, and more traditional topics such as acuity." De Valois, R. L., & De Valois, K. K. (1980). Spatial Vision. Annual Review of Psychology, 31(1), 309-341. doi:10.1146/annurev.ps.31.020180.001521

      The idea of a "spatial map” refers to the representation of the visual field in cortical regions. There is extensive evidence that visual areas are organised retinotopically across the cortical surface, making them “maps". See e.g. Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007). Visual field maps in human cortex. Neuron, 56(2), 366-383.

      Measurement of lapse rates (Comments 4, 17, 18):

      There really is no issue here. In Experiment 1, we fit a psychometric function in the form of a cumulative Gaussian to responses plotted as a function of (e.g.) target-flanker separation (as in Fig. 1B), with three free parameters: midpoint, slope, and lapse rate. The lapse rate is 100 − x, where x is the asymptote of the curve. It accounts for lapses (keypress errors etc.) when performance is otherwise high; i.e. it is independent of the chance level. In this dataset it is never above 5%. However, its inclusion does improve the estimate of the slope (and therefore the threshold), which we are interested in. Any individual differences are therefore better estimated by factoring out individual differences in lapse rate. Its removal does not qualitatively affect the pattern of results in any case. You cite Wichmann and Hill (2001) and that is indeed the basis of this three-parameter fit (though ours is custom code that doesn’t apply the bootstrapping procedures etc. that they use).
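
      For readers unfamiliar with this kind of fit, a minimal sketch in the Wichmann and Hill (2001) form follows; this is not the authors' custom code, and the data, bounds, and chance level (GAMMA) are illustrative assumptions:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        GAMMA = 0.5  # chance level, fixed by the task design (assumed 2AFC here)

        def psychometric(x, mu, sigma, lam):
            # Cumulative Gaussian with midpoint mu, slope sigma, lapse rate lam;
            # the curve rises from GAMMA and asymptotes at 1 - lam.
            return GAMMA + (1.0 - GAMMA - lam) * norm.cdf(x, loc=mu, scale=sigma)

        # Hypothetical proportions correct at each target-flanker separation (deg).
        sep = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
        p_correct = np.array([0.52, 0.61, 0.78, 0.90, 0.95, 0.96])

        params, _ = curve_fit(psychometric, sep, p_correct,
                              p0=[1.5, 0.5, 0.02],                  # initial guesses
                              bounds=([0, 0.01, 0], [5, 5, 0.06]))  # lapse capped at 6%
        mu, sigma, lam = params
        print(f"midpoint={mu:.2f}, slope={sigma:.2f}, lapse rate={lam:.3f}")

      The point of the free lapse parameter is visible here: a stray error at a large separation pulls the asymptote below 1, and absorbing it into lam keeps the slope (and hence the threshold) estimate from being distorted.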

      Spatial representations (comment 8):

      We were testing the proposal that crowding and saccadic preparation might depend on some degree of shared processes within the visual system. Specific predictions for shared vs distinct spatial representations are made on p E3574 and in more detail on p E3576 of our manuscript. The idea comes from several prior studies arguing for a link between the two, as we cite, e.g.: Nandy, A. S., & Tjan, B. S. (2012). Saccade-confounded image statistics explain visual crowding. Nature Neuroscience, 15(3), 463-469. Harrison, W. J., Mattingley, J. B., & Remington, R. W. (2013). Eye movement targets are released from visual crowding. The Journal of Neuroscience, 33(7), 2927-2933.

      Bisection (Comments 7, 13, 15):

      Your issue relates to biases in bisection. This is indeed an interesting area, mostly studied for foveal presentation. These biases are however small in relation to the size of thresholds for discrimination, particularly for the thresholds seen in peripheral vision where our measurements were made. An issue with bias for vertical judgements would lead to higher thresholds for vertical vs. horizontal judgements, which we don’t see. The predominant pattern in bisection thresholds (as with the other tasks) is a radial/tangential anisotropy, so vertical thresholds are worse than horizontal on the vertical meridian, but better than horizontal thresholds on the horizontal meridian. The role of biases in that anisotropy is an interesting question, but again these biases tend to be small relative to threshold.

      Vernier acuity (Comment 6):

      We don’t measure vernier acuity, for exactly the reasons you outline (stated on p E3577).

      Data analyses (comment 5):

      The measurement of crowding/interference zones follows conventions established by others, as we cite, e.g.: Pelli, D. G., Palomares, M., & Majaj, N. J. (2004). Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4(12), 1136-1169.

      Our analyses are certainly not post-hoc exercises in data mining. The logic is outlined at the end of the introduction for both studies (p E3574).

      Inclusion of the authors as subjects (Comment 3):

      In what way should this affect the results? This can certainly be an issue for studies where knowledge of the various conditions can bias outcomes. Here this is not true. We did of course check that data from the authors did not differ in any meaningful way from other subjects (aside from individual differences), and it did not. Testing (and training) experienced psychophysical observers takes time, and authors tend to be experienced psychophysical observers.

      The theoretical framework of our experiments (Comments 1 & 2):

      We make an assumption about hierarchical processing within the visual system, as we outline in the introduction. We test predictions that arise from this. We don’t deny that feedback connections exist, but I don’t think their presence would alter the predictions outlined at the end of the introduction. We also make assumptions regarding the potential processing stages/sites underlying the various tasks examined. Of course we can’t be certain about this (and psychophysics is indeed ill-poised to test these assumptions) and that is the reason that no one task is linked to any specific neural locus, e.g. crowding shows neural correlates in visual areas V1-V4, as we state (e.g. p E3574). Considerable parts of the paper are then addressed at considering whether some tasks may be lower- or higher-level than others, and we outline a range of justifications for the arguments made. These are all testable assumptions, and it will be interesting to see how future work then addresses this.

      All of these comments are really fixated on aspects of our theoretical background and minor details of the methods. None of this in any way negates our findings. Namely, there are distinct processes within the visual system, e.g. crowding and saccadic precision, that nonetheless show similarities in their pattern of variations across the visual field. We show several results that suggest these two processes to be dissociable (e.g. that the distribution of saccadic errors is identical for trials where crowded targets were correctly vs incorrectly identified). If they’re clearly dissociable tasks, how then to explain the correlation in their pattern of variation? We propose that these properties are inherited from earlier stages in the visual system. Future work can put this to the test.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 08, Lydia Maniatis commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jul 07, Lydia Maniatis commented:

      I'm not an expert in statistics, but it seems to me that the authors have conducted multiple and sequential comparisons without applying the appropriate correction. In addition, the number of subjects is small.

      Also, the definition of key variables - "crowding zone" and "saccade error zone" - seems arbitrary given that they are supposed to tap into fundamental neural features of the brain. The former is defined as "target-flanker separation at which performance reached 80% correct [i.e. 20% incorrect]...which we take as the dimensions of the crowding zone," the latter by fitting "2D Gaussian functions to the landing errors and defin[ing] an ellipse with major and minor axes that captured 80% of the landing positions (shown with a black dashed line in Fig. 1C). The major and minor axes of this ellipse were taken as the radial and tangential dimensions of the “saccade error zone.”"

      What is the relationship between what the authors "take as" the crowding/saccade error zones and a presumptive objective definition? What is the theoretical significance of the 80% cut-off? What would the data look like if we used a 90% cut-off?
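
      One way to see the force of this question is to invert a psychometric function at different cutoffs. The sketch below uses a plain cumulative Gaussian with invented parameters (ignoring the chance level, for simplicity), so the numbers are purely illustrative:

        from scipy.stats import norm

        # Invented midpoint and slope for a performance-vs-separation curve;
        # the "zone" is the separation at which performance reaches the cutoff.
        mu, sigma = 1.5, 0.6   # deg of target-flanker separation (assumed)

        for cutoff in (0.80, 0.90):
            zone = norm.ppf(cutoff, loc=mu, scale=sigma)
            print(f"{cutoff:.0%} cutoff -> zone = {zone:.2f} deg")

      Because the curve is monotonic, a 90% criterion necessarily yields a larger zone than an 80% one, so the reported "dimensions" of the crowding zone scale with an analytic choice rather than with any fixed neural quantity.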

      Is a "finding" that a hierarchical linear regression "explains" 7.3% of the variance meaningful? The authors run two models, and in one saccades are a "significant predictor" of the data while in the other they are no longer significant, while gap resolution and bisection are. Conclusions seem to be based more on chance than necessity, so to speak.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jul 07, Lydia Maniatis commented:

      What would it mean for "crowding and saccade errors" to rely on a common spatial representation of the visual field? The phenomena are clearly not identical - one involves motor planning, for example - and thus their neural substrates will not be identical. To the extent that "spatial map" refers to a neural substrate, then these will not be identical. So I'm not understanding the distinction being made between spatial maps "with inherited topological properties" and "distinct spatial maps."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Jul 02, Lydia Maniatis commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Jul 02, Lydia Maniatis commented:

      Part 6

      With respect to line bisection:

      It is mentioned by Arnheim (Art and Visual Perception) that if you ask a person to bisect a vertical line under the best conditions - that is, conditions of free-viewing without time limits - they will tend to place the mark too high:

      "An experimental demonstration with regard to size is mentioned by Langfeld: "If one is asked to bisect a perpendicular line without measuring it, one almost invariably places the mark too high. If a line is actually bisected, it is with difficulty that one can convince oneself that the upper half is not longer than the lower half." This means that if one wants the two halves to look alike, one must make the upper half shorter. " (p. 30).

      As the authors of this study don't seem to have taken this apparent, systematic bias into account, their "correct" and "incorrect" criterion of line bisection under the adverse conditions they impose may not be appropriate. It is also obvious that the results of the method used did not alert the authors to the possibility of such a bias.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 Jun 29, Lydia Maniatis commented:

      Part 5

      With respect to Vernier acuity, and in addition to my earlier objections, I would add that a "low-level" description seems to be at odds with the fact that Vernier acuity, which is also described as hyperacuity, is better than would be expected on the basis of the spacing of the receptors in the retina.

      "Yet spatial distinctions can be made on a finer scale still: misalignment of borders can be detected with a precision up to 10 times better than visual acuity. This hyperacuity, transcending by far the size limits set by the retinal 'pixels', depends on sophisticated information processing in the brain....[the] quintessential example and the one for which the word was initially coined,[1] is vernier acuity: alignment of two edges or lines can be judged with a precision five or ten times better than acuity. " (Wikipedia entry on hyperacuity).

      When an observer is asked a question about the alignment of two line segments, the answer they give is, always, based on the percept, i.e. a high-level, conscious product of visual processing. It is paradoxical to argue that some percepts are high- and others low-level, because even if one wanted to argue that some percepts reflect low-level activity, the decision to derive the percept or features thereof from a particular level in one case and another level in another case would have to be high-level. The perceived better-than-it-should-be performance that occurs in instances of so-called hyperacuity is effectively an inference, as are all interpretations of the retinal stimulation, whether a 3D Necker cube or the Mona Lisa. It's not always the case that two lines that are actually aligned will appear aligned. (Even a single continuous line may appear bent – yet line segments are supposed to be the V1 specialty.) It all depends on the structure of the whole retinal configuration, and the particular, high-level inferences to which this whole stimulation gives rise in perception.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2017 Jun 29, Lydia Maniatis commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    9. On 2017 Jun 28, Lydia Maniatis commented:

      Part 3

      I don't understand why it has become normalized for observers in psychophysical experiments to include the authors of a study. Here, authors form one quarter of the participants in the first experiment and nearly one third in the second. Aside from the authors, participants are described as "naive." As this practice is accepted at PNAS, I can only imagine that psychophysical experiments require a mix of subjects who are naive to the purpose and subjects who are highly motivated to achieve a certain result. I only wish the reasons for the practice were made explicit. Because it seems to me that if it's too difficult to find enough naive participants for a study that requires them, then it's too difficult to do the study.

      If the inclusion of authors as subjects seems to taint the raw data, there is also a problem with the procedure to which the data are subjected prior to analysis. This essentially untestable, assumption-laden procedure is completely opaque, and mentioned fleetingly in the Methods:

      "Psychometric functions were fitted to behavioral data using a cumulative Gaussian with three parameters (midpoint, slope, and lapse rate). "

      The key term here is "lapse rate." The lapse rate concept is a controversial theoretical patch-up developed to deal with the coarseness of the methods adopted in psychophysics, specifically the use of forced choices. When subjects are forced to make a choice even when what they perceive doesn't fall into the two, three or four choices preordained by the experimenters, then they are forced to guess. The problem is serious because most psychophysical experiments are conducted under perceptually very poor conditions, such as low contrast and very brief stimulus presentations. This obviously corrupts the data. At some point, practitioners of the method decided they had to take into account this "lapse rate," i.e. the "guess rate." That the major uncertainty incorporated into the forced-choice methodology could not be satisfactorily resolved is illustrated in comments by Prins (2012/JOV), whose abstract I quote in full below:

      "In their influential paper, Wichmann and Hill (2001) have shown that the threshold and slope estimates of a psychometric function may be severely biased when it is assumed that the lapse rate equals zero but lapses do, in fact, occur. Based on a large number of simulated experiments, Wichmann and Hill claim that threshold and slope estimates are essentially unbiased when one allows the lapse rate to vary within a rectangular prior during the fitting procedure. Here, I replicate Wichmann and Hill's finding that significant bias in parameter estimates results when one assumes that the lapse rate equals zero but lapses do occur, but fail to replicate their finding that freeing the lapse rate eliminates this bias. Instead, I show that significant and systematic bias remains in both threshold and slope estimates even when one frees the lapse rate according to Wichmann and Hill's suggestion. I explain the mechanisms behind the bias and propose an alternative strategy to incorporate the lapse rate into psychometric function models, which does result in essentially unbiased parameter estimates."

      It should be obvious that calculating the rate at which subjects are forced to guess is highly condition-sensitive and subject-sensitive, and that even if one believes the uncertainty can be removed by a data manipulation, there can be no one-size-fits-all method. Which strategy for calculating the guessing rate have Greenwood et al (2017) adopted? Why? What was the "lapse rate"? There would seem to be no point in even looking at the data unless their data manipulation and its rationale are made explicit.
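
      The bias Prins describes is easy to reproduce in a toy simulation; everything below (observer parameters, trial counts, stimulus levels) is invented for illustration:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        def psi(x, mu, sigma, lam):
            # 2AFC psychometric function: chance = 0.5, asymptote = 1 - lam.
            return 0.5 + (0.5 - lam) * norm.cdf(x, loc=mu, scale=sigma)

        # Simulate an observer who lapses on 4% of trials (100 trials/level).
        levels = np.linspace(0.2, 2.0, 7)
        p_obs = rng.binomial(100, psi(levels, 1.0, 0.3, 0.04)) / 100

        # Fit once wrongly assuming no lapses, once with the lapse rate free.
        (mu0, sig0), _ = curve_fit(lambda x, m, s: psi(x, m, s, 0.0),
                                   levels, p_obs, p0=[1.0, 0.3])
        (mu1, sig1, lam1), _ = curve_fit(psi, levels, p_obs,
                                         p0=[1.0, 0.3, 0.02],
                                         bounds=([0, 0.01, 0], [3, 2, 0.1]))
        print(f"lapse fixed at 0: mu={mu0:.2f}, sigma={sig0:.2f}")
        print(f"lapse free:       mu={mu1:.2f}, sigma={sig1:.2f}, lam={lam1:.3f}")

      The zero-lapse fit flattens the estimated slope to accommodate the lapse-depressed asymptote; whether freeing the lapse rate fully removes that bias is exactly what Prins (2012) disputes.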


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    10. On 2017 Jun 27, Lydia Maniatis commented:

      Part 2b (due to size limit)

      I would note, finally, that unless the authors are also believers in a transparent brain for some, but not other, perceived features resulting from a retinal stimulation event, the idiosyncratic/summative/inherited/low-level effects claims should presumably be detectable in a wide range of normal perceptual experiences, not only in peripheral vision under conditions which are so poor that observers have to guess at a response some unknown proportion of the time, producing very noisy data interpreted in vague terms with a large number of researcher degrees of freedom and a great deal of theoretical special pleading. Why not look for these hypothesized effects where they would be expected to be most clearly expressed?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    11. On 2017 Jun 27, Lydia Maniatis commented:

      Part 2 A related and equally untenable theoretical claim casually adopted by Greenwood et al (2017) is one which Teller (1984) has explicitly criticized and Graham (2011) has uncritically embraced and unintentionally satirized. It is the notion that some aspects of perceptual experience are directly related to - and can be used to discern - the behavior of neurons in the "lower levels" of the visual system, usually V1:

      "Prior studies have linked variations in both acuity (26) and perceived object size (59) with idiosyncrasies in visual cortical regions as early as V1."

      "To consider the origin of the relationship between crowding and saccades, we conducted a second experiment to compare crowding with two "lower-level" measures of spatial localization: gap resolution and bisection thresholds."

      "If these similarities were to arise due to an inheritance of a common topology from earlier stages of the visual system, we would expect to see similar patterns of variations in tasks that derive from lower-level processes."

      Before addressing the fatal flaws with the theoretical premise, I would like to note that the two references provided in no way rise to the occasion. Both are attempts to link some measures of task performance to area V1 based on fMRI results. fMRI is still a very crude method of studying neural function to begin with. Additionally, the interpretation of the scans is assumption-laden, and we are supposed to take all of the underlying assumptions as given, with no arguments or evidence. For example, from citation 26:

      "To describe the topology of a given observer's V1, we fit these fMRI activity maps with a template derived from a conformal mapping method developed by Schwartz (Schwartz 1980, Schwartz 1994). According to Schwartz, two-dimensional visual space can be projected onto the two-dimensional flattened cortex using the formula w=k x log(z + a), where z is a complex number representing a point in visual space, and w represents the corresponding point on the flattened cortex. [n.b. It is well-known that visual experience cannot be explained on a point by point basis]. The parameter a reflects the proportion of V1 devoted to the foveal representation, and the parameter k is an overall scaling factor."

      The 1994 Schwartz reference is to a book chapter, and the method being referenced appears to have been proposed in 1980 (pre-fMRI?). I guess we have to take it as given that it is valid.
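
      For concreteness, the quoted mapping w = k log(z + a) can be computed directly; the parameter values below are invented for illustration and are not those of the cited study:

        import numpy as np

        # Schwartz-style log-polar mapping from visual field to flattened cortex:
        # z is a point in visual space as a complex number, w its cortical image.
        k, a = 15.0, 0.7   # scale and foveal-expansion parameters (assumed)

        def visual_to_cortex(ecc_deg, angle_rad):
            z = ecc_deg * np.exp(1j * angle_rad)  # visual-field position
            w = k * np.log(z + a)
            return w.real, w.imag                 # nominal flattened-cortex coords

        # Equal ratios of eccentricity map to roughly equal cortical steps,
        # i.e. the foveal region is greatly magnified relative to the periphery.
        for ecc in (0.5, 2.0, 8.0, 32.0):
            x, _ = visual_to_cortex(ecc, 0.0)
            print(f"{ecc:5.1f} deg -> cortical x = {x:6.2f}")

      Note that this is a point-to-point map, which bears on the bracketed objection above: visual experience cannot be explained on a point-by-point basis.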

      From ref. 59:

      "For pRF spread we used the raw, unsmoothed pRF spread estimates produced by our fine-fitting procedure. However, the quantification of surface area requires a smooth gradient in the eccentricity map without any gaps in the map and with minimal position scatter in pRF positions. therefore, we used the final smoothed prameter maps for this analysis. The results for pRF spread are very consistent when using smoothed parameter maps, but we reasoned that the unsmoothed data make fewer assumptions."

      One would ask that the assumptions be made explicit and rationalized. So, again, references act as window-dressing for unwarranted assertions that the tasks used by the authors directly reflect V1 activity.

      The theoretical problem is that finding some correlation between some perceptual task and some empirical observations of the behavior of neurons in some part of the visual system in no way licenses the inference that the perceptual experience tapped by the task is a direct reflection of the activities of those particular neurons. Such correlations are easy to come by, but the inference is not tenable in principle. If the presumed response properties of neurons in V1, for example, are supposed to directly cause feature x of a percept, we have to ask a. how is this assumption reconciled with the fact that the activities of the same "low-level" neurons underlie all features of the percept, and b. how is it that, for this feature, all of the other interconnectivities with other neural layers and populations are bypassed?

      Tolerance for the latter problem was dubbed by Teller (1984) the "nothing mucks it up proviso." As an example of the fallacious nature of such thinking, she refers to the Mach bands and their supposed connection to the responses of ganglion cells as observed via single cell recordings:

      "Under the right conditions, the physiological data "look like" the psychophysical data. The analogy is very appealing, but the question is, to what extent, or in what sense, do these results provide an explanation of why we see Mach bands?" (And how, I would add, is this presumed effect supposed to be expressed perceptually in response to all other patterns of retinal stmulation? How does it come about that the responses of ganglion cells are simultaneously shunted directly to perceptual experience, and at the same time participate in the normal course of events underlying visual process as a whole?)

      Teller then points out that, in the absence of an explicit treatment of "the constraints that the hypothesis puts on models of the composite map from the peripheral neural level [in this she includes V1] and the bridge locus, and between the bridge locus and phenomenal states," the proposal is nothing more than a "remote homunculus theory," with a homunculus peering down at ganglion cell activity through "a magical Maxwellian telescope." The ganglion cell explanation continues to feature in perception textbooks and university perception course websites.

      It is interesting to note that Greenwood et al's first mention of "lower-level" effects (see quote above) is placed between scare quotes, yet nowhere do they qualify the term explicitly.

      The ease with which one can discover analogies between presumed neural behavior and psychophysical data was well-described by Graham (2011):

      "The simple multiple-analyzers model shown in the top panel of Fig. 1 was and is a very good account, qualitatively and quantitatively, of the results of psychophysical experiments using near-threshold contrasts . And by 1985 there were hundreds of published papers each typically with many such experiments. It was quite clear by that time, however, that area V1 was only one of 10 or more different areas in the cortex devoted to vision. ...The success of this simple multiple-analyzers model seemed almost magical therefore. [Like a magical Maxwellian telescope?] How could a model account for so many experimental results when it represented most areas of the visual cortex and the whole rest of the brain by a simple decision rule? One possible explanation of the magic is this: In response to near-threshold patterns, only a small proportion of the analyzers are being stimulated above their baseline. Perhaps this sparseness of information going upstream limits the kinds of processing that the higher levels can do, and limits them to being described by simple decision rules because such rules may be close to optimal given the sparseness. It is as if the near-threshold experiments made all higher levels of visual processing transparent, therefore allowing the properties of the low-level analyzers to be seen." Rather than challenging the “nothing mucks it up proviso” on logical and empirical grounds, Graham has uncritically and absurdly embraced it. (I would note that the reference to "near-threshold" refers only to a specific feature of the stimulation in question, not the stimulation as a whole, e.g. the computer screen on which stimuli are being flashed, which, of course, is above-threshold and stimulating the same neurons.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    12. On 2017 Jun 26, Lydia Maniatis commented:

      Over thirty years ago, Teller (1984) attempted to inspire a course correction in a field that had become much too reliant on very weak arguments and untested, often implausible assumptions. In other words, she tried to free it from practices appropriate to pseudoscience. Unfortunately, as Greenwood et al (2017) illustrates so beautifully, the field not only ignored her efforts, it became, if anything, even less rigorous.

      The crux of Teller's plea is captured by the passage below (emphasis mine). Her reference is to "linking propositions" which she defines as "statements that relate perceptual states to physiological states, and as such are one of the fundamental building blocks of visual science."

      "Twenty years ago, Brindley pointed out that in using linking hypotheses, visual scientists often introduce unacknowledged, non-rigorous steps into their arguments. Brindley's remarks correctly sensitized us to the lack of rigor ** with which linking propositions have undoubltedly often been used, but led to few detailed, explicit discussions of linking propositions. it would seem usefule to encourage such discussions, and to encourage visual scientists to make linking propositions explicit **so that linking propositions can be subjected to the requirements of consistency and the risks of falsification appropriate to the evaluation of all scientific [as opposed to pseudoscientific] propositions."

      Data itself tells us nothing; it must be interpreted. The interpretation of data requires a clear theoretical framework. One of the requirements of a valid theoretical framework is that its assumptions be a. consistent with each other; b. consistent with known facts; and c. testable, in the sense that it makes new predictions about potentially observable natural phenomena. ("Linking propositions" are effectively just another term for the link between data and theory, applied to a particular field). Theoretical claims, in other words, are not to be made arbitrarily and casually because they are key to the valid interpretation of data.

      The major theoretical premise of Greenwood et al (2017) is arbitrary and inconsistent with the facts as we know them and as we can infer them logically. The authors don't even try to provide supporting citations that are anything more than window-dressing. The premise is contained in the following two excerpted statements:

      "Given the hierarchical structure of th eviusal system, with inherited receptive field properties at each stage (35), variations in this topological representation could arise early in the viusal system, with pattenrs specific to each individual that are inherited throughout later stages." (Introduction, p. E3574).

      "Given that the receptive fields at each stage in the visual system are likely built via the summation of inputs from the preceding stages (e.g. 58)..." (Discussion, p. E3580).

      The statements are false, so it is no surprise that neither of the references provided is anywhere near adequate to support what we are supposed to accept as "given."

      The first reference is to Hubel and Wiesel (1962), an early study recording from the striate cortex of the cat. Its theoretical conclusions are early, speculative, based on a narrow set of stimulus conditions, and apply to a species with rather different visual skills than humans. Even so, the paper does not support Greenwood et al's breezy claim; it includes statements that contradict both of the quoted assertions, e.g. (emphasis mine):

      "Receptive fields were termed complex when the response to light could not be predicted from the arrangements of excitatory and inhibitory regions. Such regions could generally not be demonstrated; when they could the laws of summation and mutual antagonism did not apply." (p. 151). Even the conclusions that may seem to apply are subject to a conceptual error noted by Teller (1984); the notion that a neuron is specialized to detect the stimulus (of the set selected for testing) to which it fires the fastest. (She likens this error to treating each retinal cone as a detector of wavelength to which it fires the fastest, or at all, when as we know the neural code for color is contingent on relative firing rates of all three cones).

      Well before Hubel and Wiesel, it had become abundantly clear that the link between retinal stimulation and perception could not remotely be described in terms of summative processes. (What receptive field properties have been inherited by the neurons whose activity is responsible for the perception of an edge in the absence of a luminance or spectral step? Or an amodal contour? Or a double layer? Etc.) Other than as a crude reflection of the fact that neurons are all interconnected in some way, the "inherited" story has no substance and no support.

      And of course, it is well known that neural connections in the brain are so extraordinarily dynamic and complex (feedforward, feedback, feed-sideways, diagonal; even the effect of the feedforward component, so to speak, is contingent on the general system state at a given moment of stimulation) that to describe the system as "hierarchical" is basically to mislead.

      The second supporting citation, to Felleman and van Essen (1991) is also to a paper in which the relevant claims are presented in a speculative fashion.

      To be continued. (In addition to further theoretical problems, the method and analysis, which are mostly post hoc, are also highly problematic.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 12, Takashi Shichita commented:

      As noted in the comment on PubMed Commons, the targeted site of our guide RNA2 was split over exons 2 and 3. This was our mistake. However, we successfully obtained the Msr1-deficient RAW cell clone through limiting dilution. Our guide RNA1 is thought to function correctly for the disruption of the Msr1 gene.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Oct 09, Nirajkumar Makadiya commented:

      Hi Authors,

      I have some questions regarding the methods section of PMID: 28394332. 1) Guide RNA 2 (5′-CTTCCTCACAGCACTAAAAA-3′) for the CRISPR spans exons 2 and 3. I was wondering if that can work? 2) Did you try any other sgRNA sequences to knock out the Msr1 gene?

      Thank you!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 19, Seán Turner commented:

      The accession number for the 16S rRNA gene sequence is incorrectly cited in the manuscript as LN598544.1. The correct number is LT598544.1.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 16, David Keller commented:

      Thank you, again, for your illuminating and scholarly reply to my comments and questions. Your field of causal inference theory may provide a much-needed bridge spanning the chasm between the land of rigor where mathematicians dwell, and the land of rigor mortis inhabited by clinicians and patients. I will continue to follow your work, and that of your colleagues, with great interest.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Nov 16, Ian Shrier commented:

      We completely agree on the objectives and importance of estimating the per protocol effect. It is absolutely the effect that I am interested in as a patient, and therefore as a physician who wants to communicate important information to the patient.

      I do think we have different experiences of how people interpret the words "per protocol analysis". Historically, this term has been used to mean an analysis that does not estimate the per protocol effect except in unusual contexts. More recently, some have used it to refer to a different type of analysis that does estimate the per protocol effect. The field of causal inference is still relatively new and there are other examples of changing terminology. I expect the terminology will stabilize over the next 10 years, which will make it much easier for readers, authors and reviewers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Nov 15, David Keller commented:

      Thank you for your thoughtful reply to my comment. In response, I would first remark that, while "it is sometimes difficult to disentangle the jargon", it is always worthwhile to clearly define our terms.

      Commonly, an "analysis" is a method of processing experimental data, while an "effect" is a property of experimental data, observed after subjecting it to an "analysis".

      As applied to our discussion, the "per protocol effect" would be observed by applying "per protocol analysis" to the experimental data.

      My usage of "per protocol results" was meant to refer to the "per protocol effect" observed in a particular trial.

      The above commonly-understood definitions may be a reason causal inference terminology "can get quite confusing to others who may not be used to reading this literature", for example, by defining "per protocol effect" differently than as "the effect of per protocol analysis".

      Nevertheless, clinicians are interested in how to analyze clinical study data such that the resulting observed effects are most relevant to individual patients, especially those motivated to gain the maximal benefit from an intervention. For such patients, I want to know the average causal effect of the intervention protocol, assuming perfect adherence to protocol, and no intolerable side-effects or unacceptable toxicities. This tells the patient how much he can expect to benefit if he can adhere fully to the treatment protocol.

      Of course, the patient must understand that his benefits will be diminished if he fails to adhere fully to treatment, or terminates it for any reason. Still, this "average expected benefit of treatment under ideal conditions" remains a useful goal-post and benchmark of therapy, despite any inherent bias it may harbor compared with the results of intention-to-treat analysis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Oct 31, Ian Shrier commented:

      Thank you for your comment. We seem to be in agreement on what many patients may be most interested in.

      In your comment, you say "the per-protocol results of a treatment are, therefore, of interest to patients and their clinicians, and should be reported by clinical trials, along with appropriate statistical caveats and disclaimers."

      The words "per protocol results" might mean different things to different people and I thought it important to clarify some of the terminology, which can get quite confusing to others who may not be used to reading this literature.

      In the causal inference literature, it has been suggested that we use "per protocol analysis" to refer to an analysis that examines only those participants who follow their assigned treatment. This is different from the "per protocol effect" (also known as population average causal effect), which estimates the causal effect of what would be observed if the entire population received a treatment compared to the entire population not receiving a treatment.

      Further, when we refer to the causal effect of “treatment”, we really mean the causal effect of a “treatment strategy”. For example, clinical practice would be to discontinue a medication if there is a serious side effect. In a trial, this would be part of the protocol. Therefore, a person with a serious side effect still counts as following the “treatment strategy” (i.e. the protocol of a per protocol effect) even though they are no longer on treatment.

      In brief, the per protocol analysis and per protocol effect are only the same under certain conditions. Assume a randomized trial with the control group receiving usual care and also not having access to the active treatment. In this case, those who are assigned active treatment and do not take their active treatment still receive the same usual care as the control group. The per protocol analysis will be the same as the per protocol effect only if these non-adherent active treatment group participants receiving usual care have the same outcomes on average as those assigned to the control group receiving usual care. This is an assumption that many of us are reluctant to make because the reasons for non-adherence are often related to the probability of the outcome. This is why more sophisticated analyses are helpful in estimating the true population average causal effect.
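
      A toy calculation (all numbers invented) makes the distinction concrete: if non-adherers are sicker at baseline, an analysis that simply drops them overstates the effect of the treatment strategy:

        # Toy numbers illustrating the gap between a "per protocol analysis"
        # and the "per protocol effect" when non-adherence tracks prognosis.
        p_adhere = 0.8
        risk_adherer, risk_nonadherer = 0.20, 0.50  # untreated outcome risks
        benefit = 0.10                              # absolute risk reduction

        baseline_risk = p_adhere * risk_adherer + (1 - p_adhere) * risk_nonadherer

        # Per protocol effect: entire population treated vs entire population not.
        true_effect = (baseline_risk - benefit) - baseline_risk        # -0.10

        # Per protocol analysis: adherent treated subjects vs the control arm.
        pp_analysis = (risk_adherer - benefit) - baseline_risk         # -0.16

        print(f"per protocol effect  : {true_effect:+.2f}")
        print(f"per protocol analysis: {pp_analysis:+.2f}  (exaggerated)")

      Here the per protocol analysis suggests a 16-point risk reduction when the true population average causal effect of the treatment strategy is 10 points, which is why the more sophisticated analyses mentioned above are needed.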

      I hope this makes sense. It is sometimes difficult to disentangle the jargon and still be 100% correct in statements.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Oct 24, David Keller commented:

      Patients considering an intervention want to know the actual effect of receiving it

      Motivated patients are not very interested in how much benefit they can expect to receive from being assigned to a therapy; they want to know the benefits and risks of actually receiving the treatment. The average causal effect of treatment is, for these patients, more clinically relevant than the average causal effect of assignment to treatment.

      Intention-to-treat analysis may be ideal for making public health decisions, but it is largely irrelevant for treatment decisions involving particular individuals. Patients want personalized medical advice. A patient's genetic and environmental history may modify his expected results of receiving treatment, and the estimated effects should be discussed.

      Regardless of their inherent biases, the per-protocol results of a treatment are, therefore, of interest to patients and their clinicians, and should be reported by clinical trials, along with appropriate statistical caveats and disclaimers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 17, Lydia Maniatis commented:

      To synopsize the problem as I understand it: The author is claiming that large sections of the population are walking around, in effect, with differently tinted lenses, some more bluish, some more white or yellowish.

      This is a radical claim, and would have wide-ranging consequences, not least of which is that, as in the case of the dress, there would be a general disagreement about color. Such a general disagreement would not be detectable, as no one could know that what we all, for example, might call blue is actually perceived in different ways.

      The reason we can know that we disagree about the colors of the dress is that we agree on colors generally, and the dress constitutes a surprising exception.

      If the author believes in his hypothesis, a strong, direct experimental test is in order. (The hypothesis is certainly falsifiable.) If he insists on focussing on correlations with "owls" and "larks," then he should better control his populations, e.g. use night watchmen for the owls and park rangers for the larks, or investigate how the dress is perceived by populations in e.g. Norway, where the days and nights are months-long and the same for everyone. Do we get less variation there?

      What doesn't seem worth pursuing is another uninterpretable replication based on poor quality, muddy and uncheckable data from anonymous readers of Slate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 15, Lydia Maniatis commented:

      Please see comments/author responses on PubPeer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 12, Lydia Maniatis commented:

      "Although the results from both our experiments appear to be consistent with previous research (Atsma et al., 2012; Fencsik et al., 2007; Franconeri et al., 2012; Howe & Holcombe, 2012; Iordanescu et al., 2009; Keane & Pylyshyn, 2006; Khan et al., 2010; Lovejoy et al., 2009; Luu & Howe, 2015; Szinte et al., 2015; Watamaniuk & Heinen, 2015), they do not seem to be consistent with each other. Obviously, the two experiments we have described here were not exactly the same. We will discuss some of the differences that might explain the seemingly conflicting results."

      The conflict between the results has to also be a conflict between some of the results and the hypothesis being tested. The broad speculation as to which of the many confounds may be responsible just shows that there were too many confounds. Such as:

      "the amount of attentional resources dedicated to the task might have been different between the two experiments. For both overtly tracked and covertly tracked targets, we see that the overall probe detection rate was higher in the second experiment compared to the first. Moreover, the feedback we received from several participants in both experiments suggests that tracking the objects in Experiment 1 was so easy that participants were very easily distracted by their thoughts, and that Experiment 2 was more challenging and engaging. We therefore speculate that participants focused their attention more strongly (i.e., dedicated more attentional resources) toward tracking each target during Experiment 2 than during Experiment 1."

      The only way to test that speculation is to do another experiment, hopefully one less confounded. Otherwise - if speculation by itself can resolve serious confounds in an otherwise inconclusive experiment - why do any experiments at all? Just assume that any differences between future results and prediction will be due to confounds.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 12, Christopher Southan commented:

      This is a welcome development, particularly the instantiation of QC'd compounds. However, the utility would be enhanced by submission of the 4,707 structures to PubChem, including enabling selects for the 1,988 approved and 1,348 Phase 1 compounds (easily done via SID tags).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 07, Tanai Cardona commented:

      Quite an interesting article.

      On page 16, you say: "However, several lines of evidence are consistent with the existence of oxygenic photosynthesis hundreds of millions of years before the Archean–Proterozoic boundary [...]"

      Another line of evidence for an early origin of oxygenic photosynthesis comes from the evolution of the photochemical reaction centers and Photosystem II, the enzyme that oxidizes water to oxygen. I have recently shown that the earliest events in the evolution of water oxidation catalysis likely date back to the early Archaean. See Cardona, 2016, Front. Plant Sci. 7:257, doi: 10.3389/fpls.2016.00257; and also Cardona et al., 2017, bioRxiv, doi.org/10.1101/109447 for an in-depth follow-up.

      I am very glad to read that a largely anaerobic Archaean atmosphere with oxygen levels as low as 10^-7 is not inconsistent with the presence of oxygenic photosynthesis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 06, Dale D O Martin commented:

      It should be noted that myristoylation occurs on N-terminal glycines. It can only happen on internal glycines if there is a proteolytic event that generates a new N-terminal glycine on the new C-terminal fragment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.