  1. Jul 2018
    1. On 2017 Jul 03, Vijay Sankaran commented:

      Our re-analysis of the gene expression data presented in this paper shows confounding due to variation in erythroid maturation. Correction for these changes results in alternative conclusions from those presented here. This re-analysis has been published: https://www.ncbi.nlm.nih.gov/pubmed/28615220


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 28, Preben Berthelsen commented:

      The three original insights: John Snow issued the first warning on the prolonged use of oxygen in the new-born in his 1841 paper on “Asphyxia and Resuscitation of Still-born Children”.

      In 1850, Snow was the first to advocate and use chloroform in the treatment of status asthmaticus.

      The first description of “Publication Bias”, in the medical literature, was by Snow in 1858 in “On Chloroform and other Anæsthetics: Their Action and Administration”.

      The misconception. Snow believed that death during chloroform anaesthesia was caused by “air too heavily charged with chloroform” and could be prevented by “judicious” as opposed to “freely” administration of the vapour.

      Preben G. Berthelsen, MD. Charlottenlund, Denmark.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 07, Peter Hajek commented:

      This paper presents some interesting data, but the language it uses is misleading. The word ‘initiation’ implies a start of regular use, but the data concern mostly a single instance when people tried an e-cigarette. Among non-smokers, progression to regular vaping is extremely rare. Trying vaping once or twice and never going back to it does not initiate anything. (Among smokers, a switch to vaping is a good thing).

Describing e-cigarette use as ‘e-cigarettes smoking’ is another misleading sound-bite. Vaping poses only a small fraction of the risks of smoking.

      Finally, preventing e-cigarette use is not ‘tobacco use prevention’. Vaping does not include use of tobacco. If the authors mean by this phrase that experimentation with e-cigarettes inevitably leads to smoking, there is no sign of that. Smoking prevalence in youth is declining at an unprecedented rate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 24, Rafael Fridman commented:

Nice study, but the title of the paper does not reflect the findings and is thus misleading. There is no functional evidence that the pathway presented (TM4SF1/DDR1) indeed "promotes metastasis". Invasion in vitro is not metastasis (a complex process); cells may be invasive in vitro but not metastatic in vivo. Therefore, the authors are respectfully encouraged to change the title of the paper to better represent the actual findings and the limitations of the experimental systems.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 02, William McAuliffe commented:

This is a valuable addition to the literature, but its generalizability is limited by the recruitment source of the prescription group. Most iatrogenically addicted patients do not seek treatment in a drug treatment program, because of the demographic differences between them and the typical non-medical opioid addict. The pain patients are much more likely to go to a pain clinic or to be simply tapered off the drug by the original providers. There are important differences between the pain patients who go to drug treatment programs and those who go to pain clinics; these differences are attenuated in this study and likely account for its failure to find many significant effects.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 23, Md. Shahidul Islam commented:

In human beta cells TRPM5 is almost absent while its closest relative TRPM4 is abundant. Marabita F, Islam MS. Expression of Transient Receptor Potential Channels in the Purified Human Pancreatic β-Cells. Pancreas. 2017 Jan;46(1):97-101. PMID: 27464700. DOI: 10.1097/MPA.0000000000000685


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 12, Yu-Chen Liu commented:

We sincerely appreciate your insightful feedback and constructive advice on our research, in particular the suggestion to exclude all sequences that map to mammalian genomes before searching for potential plant miRNAs. On the other hand, for reads that map to both plant and mammalian genomes, whether they are false-positive mammalian sequences cannot be settled without further experimental validation. From the perspective of candidate discovery, reads that map to both plant and mammalian genomes should not be discarded indiscriminately. This is, in my opinion, a trade-off between avoiding false positives and increasing the discovery rate, and perhaps both measures should be taken in future studies.

Thank you again for the valuable advice and for the dedication shown in reviewing this research.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 12, Kenneth Witwer commented:

      Liu YC, 2017 reported mapping very low levels of mature plant miRNAs in a subset of public data gathered from 198 human plasma samples, concluding that this was evidence of "cross-kingdom RNAi"; however, both the authors and I observed that only one putatively foreign sequence, "MIR2910," mapped consistently and at levels above a reasonable noise threshold. No data were presented to support functional RNAi. I further noted that MIR2910 is a plant rRNA sequence, has been removed from miRBase, and also maps with 100% coverage and identity to human rRNA. In the comment below, Dr. Liu now links to unpublished predicted hairpin mapping data that were not included in the Liu YC, 2017 BMC Genomics conference article, which, like my comments, focused on mature putative xenomiRs. Dr. Liu states that mapping has been done not only to the putative MIR2910 mature sequence (as reported), but also to the predicted MIR2910 precursor hairpin sequence.

      This is an interesting development, and I strongly and sincerely commend Liu et al for sharing their unpublished data in this forum. This is exactly what PubMed Commons is about: a place for scientists to engage in civil and constructive discourse.

      However, examination of the new data reinforces my observation that the only consistently mapped "foreign" sequence in the Liu YC, 2017 study is a human rRNA sequence, not a plant miRNA, mature or otherwise. Beyond the 100% identity of the 21nt putative MIR2910 mature sequence with human rRNA, a 47nt stretch (80%) of the plant "pre-MIR2910" rRNA fragment aligns to human 18S rRNA with only one mismatch (lower-case), and indeed Liu et al allowed one mismatch:

      Plant rRNA fragment:

      UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGuUAACGAACGAGAC

      Human rRNA fragment:

      UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGaUAACGAACGAGAC
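
As a quick check, the following minimal Python sketch (an editorial illustration, not part of the original comment) compares the two 47-nt fragments quoted above and counts mismatches, showing that they differ at a single position and therefore cannot be distinguished under a one-mismatch mapping policy:

```python
# Compare the plant and human rRNA fragments quoted above and count mismatches.
plant = "UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGuUAACGAACGAGAC"
human = "UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGaUAACGAACGAGAC"

mismatches = [
    (i, p, h)
    for i, (p, h) in enumerate(zip(plant.upper(), human.upper()))
    if p != h
]
print(len(plant), "nt compared;", len(mismatches), "mismatch:", mismatches)
# Output: 47 nt compared; 1 mismatch: [(33, 'U', 'A')]
```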

      Dr. Liu provides the example of a plasma small RNA sequencing dataset, DRR023286, as the primary example of plant miRNA mapping, so let us examine this finding more closely. DRR023286 was by far the most deeply sequenced of the six plasma samples in the Ninomiya S, 2015 study (71.9 million reads), as re-analysed by Liu et al. Yet, like all other data examined in the Liu et al study, and despite the much deeper sequencing, DRR023286 yielded only the pseudo-MIR2910 as a clear "xenomiR" (mature or precursor). Of special note, previously reported dominant dietary plant xenomiRs such as MIR159, MIR168a, and the rRNA fragment "MIR2911" were not detected reliably, even with one mismatch.

      The precursor coverage plots for DRR023286 (and less deeply sequenced datasets, for that matter), also according to the newly provided Liu et al data, show that any coverage is in the 5' 80% of the putative MIR2910 sequence: exactly the part of the sequence that matches human rRNA. The remaining 12 nucleotides at the 3' end of the purported MIR2910 precursor are conspicuously absent and never covered in their entirety. To give one example from the Liu et al data, in the deepest-sequenced DRR023286 dataset, even the single short read that includes just 11 of these 12 nucleotides has a mismatch. Furthermore, various combinations of these 3' sequences match perfectly to rRNA sequences in plant and beyond (protist, bacterial, etc.). Hence, the vanishingly small number of sequences that may appear to support a plant hairpin could just as convincingly be attributed to bacterial contamination...but we are already playing in the noise.

      As noted, Liu et al allowed one mismatch to plant in their mapping and described no pre-filtering against human sequences. In the DRR023286 dataset, fully 90% of the putative mapping to plant MIR2910 included a mismatch (and thus human)...the other 10% were mostly sequences 100% identical to human.

      In conclusion, the main points I raised previously have not been disputed by Liu et al:

1) numerous annotated plant miRNAs in certain plant "miRNA" databases appear to have been misannotated or to be the result of contamination, including some sequences reported by Liu et al that map to human and not plant;

      2) the mature "MIR2910" sequence is a plant ribosomal sequence that also maps perfectly to human rRNA; and

      3) the read counts for all but one putative plant xenomiR in the Liu et al study are under what one might consider a reasonable noise threshold for low-abundance RNA samples (plasma) with tens of millions of reads each.

      Furthermore, the current interaction establishes a new point:

      4) the plant rRNA sequence annotated by some as a "MIR2910" precursor maps almost entirely (and for all practical purposes entirely) to a human rRNA sequence.

      Future, more rigorous searches for plant xenomiRs in mammalian tissues and fluids will require a pre-filtering step to exclude all sequences that map with one (or more) mismatches to all mammalian genomes/transcriptomes and preferably other possible contaminants, followed by a zero-mismatch requirement for foreign mapping.
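
To make the proposed workflow concrete, here is a rough Python sketch of the pre-filtering logic described above; the function names and inputs are illustrative placeholders, and a real pipeline would use a dedicated aligner against full genomes/transcriptomes rather than this brute-force scan:

```python
def hamming(a, b):
    """Mismatch count for equal-length sequences; inf if lengths differ."""
    return sum(x != y for x, y in zip(a, b)) if len(a) == len(b) else float("inf")

def maps_within(read, references, max_mismatches):
    """True if the read matches any same-length window of any reference sequence."""
    for ref in references:
        for i in range(len(ref) - len(read) + 1):
            if hamming(read, ref[i:i + len(read)]) <= max_mismatches:
                return True
    return False

def candidate_xenomirs(reads, mammal_refs, plant_refs):
    """Pre-filter against mammalian references, then require zero-mismatch plant hits."""
    kept = []
    for read in reads:
        if maps_within(read, mammal_refs, max_mismatches=1):
            continue  # host or host-like sequence: excluded before foreign mapping
        if maps_within(read, plant_refs, max_mismatches=0):
            kept.append(read)  # accept only zero-mismatch foreign hits
    return kept
```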


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Apr 11, Yu-Chen Liu commented:

The authors appreciate the insightful feedback and agree that hypotheses derived from small RNA-seq data analysis deserve skeptical examination and further experimental validation. Regarding Prof. Witwer's concerns on this issue, whether a specific sequence indeed originates from plants can be validated by examining the 2’-O-methylation at its 3’ end (Chin, et al., 2016; Yu, et al., 2005). The threshold copy number per cell required for plant miRNAs to affect human gene expression has also been discussed in previous studies (Chin, et al., 2016; Zhang, et al., 2012).

Some apparent misunderstandings need to be clarified:

      In the commentary of Prof. Witwer:

      “A cross-check of the source files and articles shows that the plasma data evaluated by Liu et al were from 198 plasma samples, not 410 as reported. Ninomiya et al sequenced six human plasma samples, six PBMC samples, and 11 cultured cell lines 19. Yuan et al sequenced 192 human plasma libraries (prepared from polymer-precipitated plasma particles). Each library was sequenced once, and then a second time to increase total reads.”

      Authors’ response:

First of all, the statement "410 samples" in the article referred to the number of small RNA-seq runs conducted in the cited studies. Whether multiple NGS runs conducted on the same plasma sample should be counted as individual experimental replicates is debatable. The analysis of each small RNA-seq run was conducted independently. The authors appreciate the comment pointing out the potential confusion on this issue.

      In the commentary of Prof. Witwer:

      “Strikingly, the putative MIR2910 sequence is not only a fragment of plant rRNA; it has a 100% coverage, 100% identity match in the human 18S rRNA (see NR 003286.2 in GenBank; Table 3). These matches of putative plant RNAs with human sequences are difficult to reconcile with the statement of Liu et al that BLAST of putative plant miRNAs "resulted in zero alignment hit", suggesting that perhaps a mistake was made, and that the BLAST procedure was performed incorrectly.”

      Authors’ response:

The precursor sequences of the plant miRNAs, including the stem-loop sequences, were used in the BLAST sequence alignment in this work. The precursor sequence of peu-MIR2910, “UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGUUAACGAACGAGACCUCAGCCUGCUA”, was used; the alignment was not performed merely with the mature sequence, “UAGUUGGUGGAGCGAUUUGUC”. The stem-loop sequences, as well as the alignment of the sequences against the plant genomes, were taken into consideration by using miRDeep2 (Friedländer, et al., 2012). As illustrated in the provided figures, sequencing reads were mapped to the precursor sequences of MIR2910 and MIR2916. As listed in the table below, many sequencing reads aligned to regions of the precursor sequences other than the mature sequences. For instance, in the small RNA-seq data of DRR023286, 5369 reads mapped to the mature sequence of peu-MIR2910 and 4010 reads mapped to other regions of its precursor sequence.

miRNA | Run | Total reads | On mature | On precursor (other regions)
--- | --- | --- | --- | ---
peu-MIR2910 | DRR023286 | 9370 | 5369 | 4010
peu-MIR2910 | SRR2105454 | 3013 | 1433 | 1580
peu-MIR2914 | DRR023286 | 1036 | 19 | 1017
peu-MIR2916 | SRR2105342 | 556 | 227 | 329

      (Check the file MIR2910_in_DRR023286.pdf, MIR2910_in_SRR2105454.pdf, MIR2914_in_DRR023286 and MIR2916_in_SRR2105342.pdf)

      The pictures are available in the URL:

      https://www.dropbox.com/sh/9r7oiybju8g7wq2/AADw0zkuGSDsTI3Aa_4x6r8Ua?dl=0

As described in the article, all reported reads mapped onto the plant miRNA sequences were also mapped onto the five conserved plant genomes. A compressed archive, “miRNA_read.tar.gz”, is available at the link provided. The results of the miRDeep2 analysis are summarized in these PDF files, each named according to the summarized reads, the sequencing run, and the mapped plant genome. For example, reads from run SRR2105181 that aligned to both the Zea mays genome and the peu-MIR2910 precursor sequence are summarized in the figure file “SRR2105181_Zea_mays_peu-MIR2910.pdf”.
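
To make the tallies in the table above concrete, the following Python sketch shows one way reads mapped to a precursor could be split into "on mature" versus "elsewhere on precursor" counts. The precursor and mature sequences are those quoted above; the read coordinates are purely hypothetical placeholders, not data from the study:

```python
# Classify reads mapped along a precursor as overlapping the mature sequence or not.
precursor = "UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGUUAACGAACGAGACCUCAGCCUGCUA"
mature = "UAGUUGGUGGAGCGAUUUGUC"
m_start = precursor.find(mature)
m_end = m_start + len(mature)

def classify(read_start, read_len):
    """'mature' if the read overlaps the mature region, else 'precursor_other'."""
    read_end = read_start + read_len
    return "mature" if read_start < m_end and read_end > m_start else "precursor_other"

# Hypothetical mapped reads as (start, length) pairs on the precursor:
reads = [(0, 21), (2, 20), (25, 22), (35, 20)]
counts = {"mature": 0, "precursor_other": 0}
for start, length in reads:
    counts[classify(start, length)] += 1
print(counts)   # {'mature': 2, 'precursor_other': 2}
```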

      In the commentary of Prof. Witwer:

      “Curiously, several sequences did not map to the species to which they were ascribed by the PMRD. Unfortunately, the PMRD could not be accessed directly during this study; however, other databases appear to provide access to its contents.”

      Authors’ response:

All the stem-loop sequences of plant miRNAs were acquired from the 2016 updated version of PMRD (Zhang, et al., 2010), which was not properly cited. The data used are provided at the previously mentioned URL.

      In the commentary of Prof. Witwer:

      “Counts were presented as reads per million mapped reads (rpm). In contrast, Liu et al appear to have reported total mapped reads in their data table. Yuan et al also set an expression cutoff of 32 rpm (log2 rpm of 5 or above). With an average 12.5 million reads per sample (the sum of the two runs per library), and, on average, about half of the sequences mapped, the 32 rpm cutoff would translate to around 200 total reads in the average sample as mapped by Liu et al.”

      Authors’ response:

Regarding the reads per million mapped reads (rpm) threshold, the authors appreciate the reminder that read counts should be normalized to rpm for proper comparison between samples of different sequencing depth. However, such a comparison was not conducted in this work. Given that the reads were mapped onto plant genomes rather than the human genome, this normalization would be of limited use, considering that the mapped putative plant reads constitute only ~3% of the overall reads. In addition, the amount of cell-free RNA present in plasma samples is generally lower than that in cellular samples (Schwarzenbach, et al., 2011).
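
For reference, the rpm arithmetic quoted from the commentary above can be reproduced with a short sketch; the figures are those stated in the quoted text, not new data:

```python
# Worked sketch of the rpm arithmetic quoted from the commentary above.
total_reads = 12.5e6      # average reads per sample (sum of the two runs per library)
mapped_fraction = 0.5     # about half of the sequences mapped, per the quoted text
rpm_cutoff = 32           # expression cutoff used by Yuan et al. (log2 rpm of 5)

mapped_reads = total_reads * mapped_fraction
raw_reads_at_cutoff = rpm_cutoff * mapped_reads / 1e6
print(raw_reads_at_cutoff)   # 200.0 -- the "around 200 total reads" in the quote
```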

      Reference

      Chin, A.R., et al. Cross-kingdom inhibition of breast cancer growth by plant miR159. Cell research 2016;26(2):217-228.

      Friedländer, M.R., et al. miRDeep2 accurately identifies known and hundreds of novel microRNA genes in seven animal clades. Nucleic acids research 2012;40(1):37-52.

      Schwarzenbach, H., Hoon, D.S. and Pantel, K. Cell-free nucleic acids as biomarkers in cancer patients. Nature Reviews Cancer 2011;11(6):426-437.

      Yu, B., et al. Methylation as a crucial step in plant microRNA biogenesis. Science 2005;307(5711):932-935.

      Zhang, L., et al. Exogenous plant MIR168a specifically targets mammalian LDLRAP1: evidence of cross-kingdom regulation by microRNA. Cell research 2012;22(1):107-126.

      Zhang, Z., et al. PMRD: plant microRNA database. Nucleic acids research 2010;38(suppl 1):D806-D813.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Apr 07, Kenneth Witwer commented:

      Caution is urged in interpreting this conference article, as described in more detail in my recent commentary. For careful analysis of this issue, using greater numbers of studies and datasets and coming to quite the opposite conclusions, see Kang W, 2017 and Zheng LL, 2017. Here, Liu et al examined sequencing data from two studies of a total of 198 plasma samples (not 410 as reported). Although no canonical plant miRNAs were mapped above a reasonable background threshold, one rRNA degradation fragment that was previously and erroneously classified as a plant miRNA, MIR2910, was reported at relatively low but consistent counts. However, this rRNA fragment is found in human 18S rRNA and is thus most simply explained as part of the human degradome. The other reportedly detected plant miRNAs were mostly found in a small minority of samples and in those were mapped at average read counts of less than one per million. These sequences may be amplification, sequencing, or mapping errors, since reads were mapped directly to plant (with one mismatch allowed) with no pre-filtering against mammalian genomes/transcriptomes. Several purported plant sequences, e.g., ptc-MIRf12412-akr and ptc-MIRf12524-akr, map perfectly to human sequences but do not appear to map to Populus or to other plants, suggesting that the plant miRNA database used by the authors and published in 2010 may include some human sequences. This is not a surprise, given pervasive low-level contamination in sequencing data, as reported by many authors.

      Of course, even if some of the mapped sequences were genuine plant RNAs, they would be present in blood at greatly subhormonal levels unlikely to affect biological processes. No evidence of function is provided, apart from in silico predictions of human targets of the putative MIR2910 sequence, which, as noted above, is a human sequence. Thus, the titular claim of "evidences of cross-kingdom RNAi" is wholly unsupported. Overall, the results of this study corroborate the findings of Kang W, 2017 and previous studies: that dietary xenomiR detection is likely artifactual.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 05, Frances Cheng commented:

      Immobility in mice, calm or stressed?

      Frances Cheng, PhD; Ingrid Taylor, DVM; Emily R Trunnell, PhD

      People for the Ethical Treatment of Animals

      This paper by Yackle, et al. claims to have found a link between breathing and calmness in mice. However, the conclusions made by the authors leave several critical questions unanswered.

Mice are naturally inquisitive, and the expression of exploratory behavior is generally interpreted as good welfare. Conversely, being motionless or less exploratory is typically thought to be indicative of stress, pain, or poor welfare. There is no established relationship between calmness and sitting still; in fact, the literature would likely attribute time spent immobile to anxiety in this species. Similarly, the relationship between grooming and calmness is not clear. Grooming can be elicited by both stressful and relaxing situations, and as such is problematic to use as an absolute marker of stress levels. In some cases, grooming can be a stress reliever, and restraint stress can increase grooming (1). A study by Kalueff and Tuohimaa attempted to differentiate between stress grooming and relaxed grooming, stating: “While a general pattern of self-grooming uninterrupted cephalocaudal progression is normally observed in no-stress (comfort) conditions in mice and other rodents, the percentage of ‘incorrect’ transitions between different stages and the percentage of interrupted grooming bouts may be used as behavioural marker of stress” (2). Indeed, the preBötC-ablated mouse in this supplementary video (http://science.sciencemag.org/content/sci/suppl/2017/03/29/355.6332.1411.DC1/aai7984s1.mp4) appears to be less active, but without other objective measurements, it is a leap to conclude that this mouse is calm.

      The chamber used to measure the animals’ behavior is extremely small and inadequate for observing mouse behavior and making conclusions about calmness or other emotions, particularly when the differences in behavior between the two mice being compared are very subtle, as they are here. In addition, simply being in a chamber of this size could potentially be restraining and stress-inducing. A larger chamber would allow for more traditional measures of calmness and anxiety, such as exploratory behavior, where the amount of time the mouse spends along the wall of the chamber versus the amount of time he or she leaves the safety of the walls to explore the center area is measured and scored (the Open Field test).
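
For readers unfamiliar with the measure, a minimal sketch of the center-versus-wall scoring used in the Open Field test follows; the arena size, zone margin, and position samples are hypothetical placeholders, not data from the paper:

```python
# Fraction of tracked time spent in the center zone versus along the walls.
arena = 40.0    # square arena side, cm (hypothetical)
margin = 10.0   # distance from the wall still counted as "wall zone" (hypothetical)

def center_time_fraction(track):
    """track: iterable of (x, y) positions sampled at a fixed rate."""
    in_center = sum(
        1 for x, y in track
        if margin <= x <= arena - margin and margin <= y <= arena - margin
    )
    return in_center / len(track)

# Hypothetical positions: mostly near a wall, with a few center entries.
track = [(2, 5), (3, 6), (2, 8), (20, 20), (22, 18), (3, 30), (1, 35), (2, 36)]
print(center_time_fraction(track))   # 0.25 of samples fall in the center zone
```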

      As mentioned, sitting still does not necessarily dictate calmness, and in many behavioral paradigms, immobility is thought to be an outward sign of anxiety or distress. The authors mention that they observe different breathing rates associated with different behaviors, e.g., faster during sniffing and slower during grooming; however, observed breathing rate alone cannot be used as the sole measure to associate an emotion with a behavior. Consider that humans under stress can hyperventilate when sitting still. It would be more informative, and more applicable to calmness, to know whether or not ablation of Cdh9/Dbx1 double-positive preBötC neurons would influence one’s ability to control their breathing and potentially to breathe slower during a psychologically stressful situation, rather than how the ablation impacts breathing coupled with normal physiological functions.

      It is not clear if the experimental group is physiologically capable of breathing faster in response to external stimuli. Not being able to do so—in other words being programmed to breathe a certain way—could be distressing. The lack of compensatory respiration mechanisms, such as increased respiration in unknown, potentially dangerous situations, could affect prey species such as mice in ways that have not been previously characterized.

      The experimental groups were born with full use of the Cdh9/Dbx1 double-positive preBötC neurons. These neurons were ablated in adult animals. If these animals did not have a full range of control of their breathing after ablation, they might have experienced unpleasant psychological reactions to the forced change in breathing pattern, which could be distressing.

      In the Methods section, the authors did not specify whether or not the behavioral experiment was performed during the mice's light or dark cycle. From the Supplementary video, linked above, the artificial lighting of the indoor facility also makes this determination impossible. As you may know, mice are nocturnal. Conducting behavioral tests during the light cycle, when the mice would normally be sleeping, can lead to dramatically different results (3,4). Interruption of a rodent’s normal sleeping period reduces welfare and increases stress (5,6). It has been recommended that for behavioral phenotyping of genetically engineered mice, dark-phase testing allows researchers to better discriminate these strains against wild-type animals and provides superior outcomes (7).

      To fully assess calmness or stress, one can measure physiological parameters such as hormone levels or heart rate, to name a few. However, the authors did not examine any measure of stress beyond breathing rate, which they artificially manipulated, not even to measure the baseline stress level between groups. The use of theta rhythm as secondary external validation for emotion further supports our concerns that the authors have drawn a broad conclusion based on rather tenuous connections. The relationship between theta rhythm and arousal may depend entirely on locomotion. As noted by Biskamp and colleagues, “The power of hippocampal theta activity, which drives theta oscillations in the mPFC, depends on locomotion and is attenuated when animals remain immobile” (8). The authors conclude that mice are “calm” for simply sitting still, a behavior that has in most other cases been attributed to decreased well-being.

      For the reasons listed above, we are concerned that the authors may have drawn premature and/or incorrect conclusions regarding the relative “calmness” of the mice with preBötC ablation. Importantly, the authors claim as a justification for their work that this data may be useful in understanding the effects of pranayama yoga on promoting “mental calming and contemplative states”. However, the practice of pranayama includes not only controlled breathing but also mental visualization and an increased emphasis on abdominal respiration. There are also periods when the breath is held deliberately. It cannot be assumed the various components of pranayama can individually achieve a calmer state in humans, and, crucially, these components cannot be modeled or replicated in animals.

      References

      1) S.D. Paolo et al., Eur J Pharmacol., 399, 43-47 (2000).

2) A.V. Kalueff, P. Tuohimaa, Brain Res. Protoc., 13, 151-158 (2004).

      3) A. Nejdi, J. M. Gustavino, R. Lalonde, Physiol. Behav., 59, 45-47 (1995).

      4) A. Roedel, C. Storch, F. Holsboer, F. Ohl, Lab. Anim., 40, 371-381 (2006).

      5) U. A. Abou-Ismail, O. H. P. Burman, C. J. Nicol, M. Mendl, Appl. Anim. Behav. Sci., 111, 329-341 (2008).

      6) U. A. Abou-Ismail, R. A. Mohamed, S. Z, El-Kholya, Appl. Anim. Behav. Sci., 162, 47-57 (2015).

      7) S. M. Hossain, B. K. Y. Wong, E. M. Simpson, Genes Brain Behav., 3, 167-177 (2004).

      8) J. Biskamp, M. Bartos, J. Sauer, Sci. Rep., 7, 45508 (2017).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 10, Thomas Heston commented:

This is an interesting concept which could improve scientific research and trust in science. One possible shortcoming of the proposal is that it relies on a private network, as opposed to an open network with no central authority as proposed in the Blockchain-based scientific study (Digit Med 2017;3:66-8). These concepts of combining blockchain technology with smart contracts are a step in the right direction towards making research studies more reproducible, reliable, and trusted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 01, Lydia Maniatis commented:

      Last sentence of abstract: "Our findings suggest participants integrate shape, motion, and optical cues to infer stiffness, with optical cues playing a major role for our range of stimuli."

The authors offer no criterion for "range of stimuli," even though they clearly limit their conclusions to this undefined set. This means that, if one wanted to attempt a replication, it would not be clear what "range of stimuli" would constitute a valid attempt. The relevance of this point can be appreciated in the context of statements like: "Compared with previous studies, we find a much less pronounced effect of shape cues compared with material cues (Han & Keyser, 2015, 2016; Paulun et al., 2017)." Speculation as to the reasons for this discrepancy is moot if we can't circumscribe our stimulus set in a theoretically clear way.

      Also, the terms "shape, motion, and optical cues," as they are used by the authors, reflect an unproductive failure to distinguish between perceived and physical properties. Relevant physical properties of a stimulus are limited to the retinal stimulation it produces. The correct title for this paper would be "Inferring the stiffness of unfamiliar objects from their inferred surface properties, shapes, and motions."

      Instead, the authors are treating stiffness as an inference and the rest as objective fact. (Not that thinking about perceptual qualities like stiffness, which seem more indirectly* inferred than the others, isn't interesting in itself, but the lines between perception and physics, and the relationships between them, shouldn't be blurred).

      *Having said this, I don't think it's actually appropriate to characterize any perceived quality as more or less indirect than others. Even the seemingly simplest things - such as extent (which includes amodal completion), or lightness (which includes double layers, subjective contours), are not in any sense read directly off of the retinal stimulation.

The problem of confusing perceived and objective properties is all the more acute given that the investigators aren't using real objects, but objects rendered by a third-party computer program:

      "The render engine used to generate the final images was Maxwell 3.0.1.3 (NextLimit Technologies, Madrid, Spain).... Specifically, they were designed to approximate the following materials: black marble, white marble, porcelain, nickel, concrete paving, cement, ceramic, steel, copper, light wood, dark wood, silvered glass, glass, stone, leather, wax, gelatine, cardboard, plastic, paper, latex, cork, ice cream, lichen, waffle, denim, moss, and velvet. Some of these materials were downloaded or based on downloads from the Maxwell free resources library (http://resources.maxwellrender.com), and others were designed by us."

      The authors are skipping all the good parts. What are the theoretical underpinnings of what are essentially assumptions about what various computer images will look like? Why is Maxwell's rendering of "velvet" equivalent, in terms of the retinal stimulation it generates, with real velvet? What are the criteria of a valid rendering of all of these perceived qualities and substances?

      The criteria are empirical (see below), but loose. It is not clear that they are statistically valid, or how this could be assessed:

      "Finally, Supplementary Figure S2 summarizes the results of the free material-naming task. Generally, they show that most of our renderings yielded compelling impressions of realistic materials that observers were able to reliably classify. In the following, the naming results were used to decide on the materials to test in Experiments 3 and 4 by choosing only materials that were identified as the same material by at least 50% of participants (see Stimuli section of Experiment 3)."

Fifty percent agreement seems like a pretty low bar. Why not at least 51% (which would still be low)? Why not shoot for 100%? Is normal inter-individual variability in perception of materials this low in real life? Or are the renderings generally inadequate? Even poor pictorial renderings of materials can contain cues - e.g. wood grain - which could produce seemingly clear answers that don't really reflect a valid percept in all the particulars. The very brief description of the naming task doesn't make it clear whether or not it was a forced answer, i.e. whether or not participants were allowed to say "not sure," which seems relevant.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 05, Gwinyai Masukume commented:

      The authors state that South Africa is the “country with the highest global incidence of HIV/AIDS.”

      This is an oversight because both Swaziland and Lesotho have an estimated HIV incidence rate among adults (15-49) greater than South Africa’s. The incidence rates as of 2015, the most recent available from http://aidsinfo.unaids.org/, are 1.44 for South Africa, 1.88 for Lesotho and 2.36 for Swaziland.

      This translates into approximately 380 000 new HIV infections per year for South Africa, 18 000 for Lesotho and 11 000 for Swaziland. South Africa has a much larger population, about 55 million people compared to Lesotho’s of about 2 million and to Swaziland’s of about 1.5 million https://www.cia.gov/library/publications/the-world-factbook/rankorder/2119rank.html#sf.

Although the absolute number of new HIV infections is higher for South Africa, both Lesotho and Swaziland, relative to their population sizes, have disproportionately more new HIV infections (higher incidence) than South Africa.
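
Expressed per head of total population using the figures quoted above (these are crude per-population rates, not the adult 15-49 incidence rates cited), the contrast is explicit:

```python
# Annual new HIV infections per 1,000 total population, from the figures quoted above.
countries = {
    # name: (new infections per year, total population)
    "South Africa": (380_000, 55_000_000),
    "Lesotho": (18_000, 2_000_000),
    "Swaziland": (11_000, 1_500_000),
}
for name, (new_infections, population) in countries.items():
    per_1000 = new_infections / population * 1000
    print(f"{name}: {per_1000:.1f} new infections per 1,000 population per year")
# South Africa ~6.9, Swaziland ~7.3, Lesotho ~9.0 -- the two smaller countries carry
# proportionally more new infections despite far smaller absolute numbers.
```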


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 09, Kenneth J Rothman commented:

      Lappe et al. (1) reported that women receiving vitamin D and calcium supplementation had 30% lower cancer risk than women receiving placebo after four years (hazard ratio (HR)=0.70, 95% confidence interval (CI): 0.47 to 1.02). Remarkably, they interpreted this result as indicating no effect. So did the authors of the accompanying editorial (2), who described the 30% lower risk for cancer as “the absence of a clear benefit,” because the P-value was 0.06. Given the expected bias toward a null result in a trial that comes from non-adherence coupled with an intent-to-treat analysis (3), the interpretation of the authors and editorialists is perplexing. The warning issued last year by the American Statistical Association (ASA) (4) about this type of misinterpretation of data should be embraced by researchers and journal editors. In particular, the ASA stated: “Scientific conclusions …should not be based only on whether a p-value passes a specific threshold.” Editors in particular ought to guide their readership and the public at large to avoid such mistakes and foster more responsible interpretation of medical research.
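
For illustration, the reported hazard ratio and confidence interval can be converted to an approximate two-sided p-value under a normal approximation on the log scale (a simplification, not the trial's actual survival analysis), showing that an upper confidence limit barely above 1.0 corresponds to a p-value barely above 0.05:

```python
import math

# HR and 95% CI as reported: 0.70 (0.47 to 1.02).
hr, lo, hi = 0.70, 0.47, 1.02
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # CI width in log units / (2 * z_0.975)
z = math.log(hr) / se
p_two_sided = math.erfc(abs(z) / math.sqrt(2))    # P(|Z| >= |z|) for a standard normal
print(round(z, 2), round(p_two_sided, 3))          # about -1.8 and 0.07, near the reported P = 0.06
# Nothing about the evidence changes meaningfully as the p-value crosses 0.05.
```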

      EE Hatch, LA Wise

      Boston University School of Public Health

      KJ Rothman

      Research Triangle Institute & Boston University School of Public Health

      References

(1) Lappe J, Watson P, Travers-Gustafson D, et al. Effect of vitamin D and calcium supplementation on cancer incidence in older women. JAMA. 2017;317:1234-1243. doi:10.1001/jama.2017.2115

      (2) Manson JE, Bassuk SS, Buring JE. Vitamin D, Calcium, and Cancer. Approaching Daylight? JAMA 2017; 317:1217-1218.

      (3) Rothman KJ. Six persistent research misconceptions. J Gen Intern Med 2014; 29:1060-1064. doi: 10.1007/s11606-013-2755-z

(4) ASA statement on statistical significance and P-values. Am Stat. 2016. doi:10.1080/00031305.2016.1154108.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 01, JOANN MANSON commented:

      We are writing in response to the comment by EE Hatch, LA Wise, and KJ Rothman that was posted on PubMed Commons on May 9. The authors questioned our interpretation [1] of the key finding of the recent randomized trial by Lappe et al. [2], asserting that we relied solely on the p-value of 0.06 and noting that “Scientific conclusions…should not be based only on whether a p-value passes a specific threshold.” However, the p-value in isolation was not the basis for our interpretation of this trial’s results or our conclusion regarding the effectiveness of vitamin D supplementation as a chemopreventive strategy. As we stated in our editorial, “...the absence of a clear benefit for this endpoint [in the Lappe et al. trial] is in line with the totality of current evidence on vitamin D and/or calcium for prevention of incident cancer..... [F]indings from observational epidemiologic studies and randomized clinical trials to date have been inconsistent. Previous trials of supplemental vitamin D, albeit at lower doses ranging from 400 to 1100 IU/d and administered with or without calcium, have found largely neutral results for cancer incidence; a 2014 meta-analysis of 4 such trials [3-6] with a total of 4333 incident cancers among 45,151 participants yielded a summary relative risk (RR) of 1.00 (95% CI, 0.94-1.06) [7]. Similarly, previous trials of calcium administered with or without vitamin D have in aggregate demonstrated no effect on cancer incidence, with a 2013 meta-analysis reporting a summary RR of 0.95 (0.76-1.18) [8].” (Parenthetically, we note that, in aggregate, vitamin D trials do find a small reduction in cancer mortality [summary RR=0.88 (0.78-0.98)] [7], but, as stated in our editorial, “[t]he modest size, relatively short duration, and relatively small numbers of cancers in the [recent Lappe et al.] trial … preclude[d] robust assessment” of the cancer mortality endpoint.) If the commenters believe that a p-value of 0.06 in the context of the generally null literature (at least for the endpoint of cancer incidence) should be interpreted as a positive finding, then where do they draw the line? A p-value of 0.07, 0.10, 0.20, or elsewhere? Large-scale randomized trials of high-dose supplemental vitamin D are in progress and are expected to provide definitive answers soon regarding its utility for cancer prevention.

      --JoAnn E. Manson, MD, DrPH1,2, Shari S. Bassuk, ScD1, Julie E. Buring, ScD1,2

1Division of Preventive Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston; 2Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston

      References

      1. Manson JE, Bassuk SS, Buring JE. Vitamin D, calcium, and cancer: approaching daylight? JAMA 2017;317:1217-8.
      2. Lappe J, Watson P, Travers-Gustafson D, et al. Effect of vitamin D and calcium supplementation on cancer incidence in older women: a randomized clinical trial. JAMA 2017;317:1234-43.
      3. Trivedi DP, Doll R, Khaw KT. Effect of four monthly oral vitamin D3 (cholecalciferol) supplementation on fractures and mortality in men and women living in the community: randomised double blind controlled trial. BMJ 2003;326:469.
      4. Wactawski-Wende J, Kotchen JM, Anderson GL, et al. Calcium plus vitamin D supplementation and the risk of colorectal cancer. N Engl J Med 2006;354:684-96.
      5. Lappe JM, Travers-Gustafson D, Davies KM, Recker RR, Heaney RP. Vitamin D and calcium supplementation reduces cancer risk: results of a randomized trial. Am J Clin Nutr 2007;85:1586-91.
      6. Avenell A, MacLennan GS, Jenkinson DJ, et al. Long-term follow-up for mortality and cancer in a randomized placebo-controlled trial of vitamin D3 and/or calcium (RECORD trial). J Clin Endocrinol Metab 2012;97:614-22.
      7. Keum N, Giovannucci E. Vitamin D supplements and cancer incidence and mortality: a meta-analysis. Br J Cancer 2014;111:976-80.
      8. Bristow SM, Bolland MJ, MacLennan GS, et al. Calcium supplements and cancer risk: a meta-analysis of randomised controlled trials. Br J Nutr 2013;110:1384-93.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 19, Helmi BEN SAAD commented:

      The correct names of the authors are: "Khemiss M, Ben Khelifa M, Ben Rejeb M, Ben Saad H".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 19, Donald Forsdyke commented:

      THE VIRUS-VIRUS ARMS RACE

      For commentary on this paper please see ArXiv preprint (1). For further discussion see commentary on a BioRxiv preprint (2).

      (1) Forsdyke DR (2016) Elusive preferred hosts or nucleic acid level selection? ArXiv Preprint (https://arxiv.org/abs/1612.02035).

      (2) Shmakov SA, Sitnik V, Makarova KS, Wolf YI, Severinov KV, Koonin EV (2017) The CRISPR spacer space is dominated by sequences from the species-specific mobilome. BioRxiv preprint (http://biorxiv.org/content/early/2017/05/12/137356).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 12, Bastian Fromm commented:

Summary

The author describes the results of a combined small RNA sequencing and BLAST approach in Taenia ovis (Tov). Specifically, RNA was retrieved from Tov metacercaria and then mapped to the genome of T. solium. Mapped reads were then blasted against miRBase, and on this basis the author describes 34 miRNAs as present in Tov.

      Major problems

1. The author uses miRBase as the reference for cestode miRNAs although it is very outdated (last update 2014). The author should rather have used the available literature for comparisons (1-9).
2. Consequently (?), the author fails to consider his results in the light of standard work in the field of miRNA evolution in flatworms (10-12) and to draw conclusions about the completeness of his predictions.
3. The approach of mapping a small RNA sequencing library of a given species against the genome of another is problematic, and I cannot understand why the author does not at least try to use classical PCR to confirm loci.

Minor problems

1) Page 3, line 61: the author should make this sentence clearer. As written, it reads as if the author removed all reads that had adapter sequences.

Recommendation

1) The author should obtain all available precursor (PRE) sequences for cestodes, map his Tov reads to them with liberal settings, and report the results.

      1. Jiang, S., Li, X., Wang, X., Ban, Q., Hui, W. and Jia, B. (2016) MicroRNA profiling of the intestinal tissue of Kazakh sheep after experimental Echinococcus granulosus infection, using a high-throughput approach. Parasite, 23, 23.
      2. Kamenetzky, L., Stegmayer, G., Maldonado, L., Macchiaroli, N., Yones, C. and Milone, D.H. (2016) MicroRNA discovery in the human parasite Echinococcus multilocularis from genome-wide data. Genomics, 107, 274-280.
      3. Macchiaroli, N., Cucher, M., Zarowiecki, M., Maldonado, L., Kamenetzky, L. and Rosenzvit, M.C. (2015) microRNA profiling in the zoonotic parasite Echinococcus canadensis using a high-throughput approach. Parasit Vectors, 8, 83.
      4. Jin, X., Guo, X., Zhu, D., Ayaz, M. and Zheng, Y. (2017) miRNA profiling in the mice in response to Echinococcus multilocularis infection. Acta tropica, 166, 39-44.
      5. Bai, Y., Zhang, Z., Jin, L., Kang, H., Zhu, Y., Zhang, L., Li, X., Ma, F., Zhao, L., Shi, B. et al. (2014) Genome-wide sequencing of small RNAs reveals a tissue-specific loss of conserved microRNA families in Echinococcus granulosus. BMC genomics, 15, 736.
      6. Cucher, M., Prada, L., Mourglia-Ettlin, G., Dematteis, S., Camicia, F., Asurmendi, S. and Rosenzvit, M. (2011) Identification of Echinococcus granulosus microRNAs and their expression in different life cycle stages and parasite genotypes. International journal for parasitology, 41, 439-448.
      7. Ai, L., Xu, M.J., Chen, M.X., Zhang, Y.N., Chen, S.H., Guo, J., Cai, Y.C., Zhou, X.N., Zhu, X.Q. and Chen, J.X. (2012) Characterization of microRNAs in Taenia saginata of zoonotic significance by Solexa deep sequencing and bioinformatics analysis. Parasitology research, 110, 2373-2378.
      8. Wu, X., Fu, Y., Yang, D., Xie, Y., Zhang, R., Zheng, W., Nie, H., Yan, N., Wang, N., Wang, J. et al. (2013) Identification of neglected cestode Taenia multiceps microRNAs by illumina sequencing and bioinformatic analysis. BMC veterinary research, 9, 162.
      9. Ai, L., Chen, M.-X., Zhang, Y.-N., Chen, S.-H., Zhou, X.-N. and Chen, J.-X. (2014) Comparative analysis of the miRNA profiles from Taenia solium and Taenia asiatica adult. African Journal of Microbiology Research, 8, 895-902.
      10. Fromm, B., Worren, M.M., Hahn, C., Hovig, E. and Bachmann, L. (2013) Substantial Loss of Conserved and Gain of Novel MicroRNA Families in Flatworms. Molecular biology and evolution, 30, 2619-2628.
      11. Cai, P., Gobert, G.N. and McManus, D.P. (2016) MicroRNAs in Parasitic Helminthiases: Current Status and Future Perspectives. Trends Parasitol, 32, 71-86.
      12. Fromm, B., Ovchinnikov, V., Hoye, E., Bernal, D., Hackenberg, M. and Marcilla, A. (2016) On the presence and immunoregulatory functions of extracellular microRNAs in the trematode Fasciola hepatica. Parasite immunology.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 16, Michael Tatham commented:

      Is CoAlation biologically relevant or a non-functional by-product of the chemical reaction between CoA and cysteine thiols in proximal proteins under certain redox conditions?

Firstly, in my opinion the work described here is technically sound. The development of the specific antibody for CoA and the mass spectrometric method to detect the modification on peptides are key tools in the analysis of any post-translational modification. However, there is a risk, when using these super-sensitive methods, that one can detect vanishingly small amounts of modified peptides, which inevitably calls relevance into question. More specifically, modern mass-spectrometry-based proteomics in combination with peptide-level enrichment of modified species has allowed us to identify modification sites in the order of tens of thousands for phosphorylation, ubiquitination, acetylation and SUMOylation (as of May 2017). For these fields, the onus on the researcher has very quickly shifted from identification of sites to evidence for biological meaning. In short, the question is no longer “Which proteins?”, but “Why?”.

      Taking acetylation as an example: Phosphositeplus (www.phosphosite.org) lists over 37000 acetylation sites, the majority identified via MS-based proteomics where acetylated peptides have been enriched using acetylated lysine specific antibodies. However, further work investigating endogenous stoichiometry (or site occupancy) of acetylated lysines has revealed that the vast majority are below 1%. Meaning, for most sites, less than 1% of the pool of a protein actually has an acetyl group on a particular lysine (see https://www.ncbi.nlm.nih.gov/pubmed/26358839 and https://www.ncbi.nlm.nih.gov/pubmed/24489116). This clearly calls into question the ability of acetylation to drastically alter the function of most of the proteins identified as ‘targets’.
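
For readers unfamiliar with the term, site occupancy (stoichiometry) can be thought of as the modified fraction of the protein pool at a given site. The sketch below uses illustrative intensity values only; real occupancy estimates, such as those in the studies cited above, involve considerably more careful quantification:

```python
# Site occupancy: fraction of the protein pool carrying the modification at a site.
def occupancy(modified_intensity, unmodified_intensity):
    """Fraction of the peptide pool that is modified at this site."""
    return modified_intensity / (modified_intensity + unmodified_intensity)

# A site detectable by enrichment plus sensitive MS can still sit far below 1% occupancy
# (the intensity values here are illustrative placeholders, not measured data):
print(occupancy(modified_intensity=1.0, unmodified_intensity=250.0))   # ~0.004, i.e. 0.4%
```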

      A very interesting hypothesis is emerging, whereby many of the identified sites of acetylation are not mediated by the specific transfer of acetyl groups via acetyl-transferase enzymes in cells, but are direct acceptors of acetyl groups from reactive chemicals such as acetyl-CoA, or acetyl-phosphate (an earlier review can be found here https://www.ncbi.nlm.nih.gov/pubmed/24725594). This is termed, non-enzymatic, or chemical modification.

      Intriguingly, this proximity-based direct modification process may not be restricted to non-enzymatic modification systems. In fact the majority of enzyme-catalysed cellular post-translational modifications involve highly reactive intermediates (such as thioester-bonded ubiquitin or ubiquitin-like modifiers to E1 or E2 enzymes), which can modify lysines in absence of the specificity-determining enzymes (E3 ligases). So it follows that ‘unintended’ modifications can occur for any biologically relevant post-translational modification simply by spatial proximity. This actually also fits with the acetylation site occupancy studies that showed (relatively) higher occupancy in proteins that are themselves involved in acetylation dynamics. Couple these theories with the exquisitely sensitive detection methods used in modern proteomics studies, and we have the potential to create huge lists of modification sites where the proportion with true biological relevance is unknown.

      Where does this all fit in with this work describing post-translational modification of cellular proteins with CoA? Reviewing these data bearing the above in mind, it seems the simplest explanation is that non-enzymatic CoAlation occurs in cells when the redox potential has shifted to tip the balance in favour of reaction of CoA with cysteine thiols in proximal proteins. Removal of oxidising agents would allow the balance to revert to more reducing conditions, and so reversal of the CoAlation. The data presented in this paper support this idea as CoAlation is redox-dependent and ‘targets’ proteins that are known to interact with CoA in the cell.

In short, as with many of the published post-translational modification proteomes, much needs to be done to give biological credibility to sites of CoAlation. In particular, occupancy calculations and protein-specific evidence that CoAlation regulates function in vivo will go a long way to putting the notion of biological relevance beyond reasonable doubt. Until then, we should consider the possibility that in many cases, post-translational modifications identified by modern methods have the potential to be the unintended consequence of interactions between reactive molecules and nearby proteins. It is worth noting that such a situation does not exclude biological relevance, but it makes finding any very challenging.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 05, Leonie Brose commented:

      Correction to Figure: The parts of figure 1 are in the wrong place so that the results and the figure legend refer to the wrong bar chart; the chart shown as 1c (workplaces) should be at the top of the figure as 1a, thereby shifting 1a (homes) to 1b, and 1b (extending law) to 1c. In the legend for 1b, ‘by socio-economic status’ is incorrect and should be omitted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 28, Peter Hajek commented:

      After stop-smoking treatment, smokers who quit successfully have no need to use e-cigarettes and so treatment successes are concentrated in the group that did not vape post-treatment. Quit rates are of course higher in this group.

      A more informative analysis would compare quit rates at one year in people who failed to stop smoking after treatment and who did and did not try vaping during the follow-up period (though even this would face the problem of self-selection).

      The results as reported just show that people who fail to stop smoking with other methods are more likely to try e-cigarettes than those who quit smoking successfully.

      It is unfortunate that the Conclusions fail to point this out and instead indicate that vaping undermined quitting. It did no such thing, but as with previous such reports, this is how anti-vaping activists are likely to misrepresent this study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 28, Miguel Lopez-Lazaro commented:

      Cancer etiology: assumptions lead to erroneous conclusion

      The authors claim that cancer is caused and driven by mutations, and that two-thirds of the mutations required for cancer are caused by unavoidable errors arising during DNA replication. The first claim is based on the somatic mutation theory. The second claim is based on a highly positive correlation between the lifetime number of stem cell divisions in a tissue and the risk of cancer in that tissue, and on their method for estimating the proportion of mutations that result from heredity (H mutations), environmental factors (E mutations) and unavoidable errors arising during DNA replication (R mutations). These claims raise several questions:

      1. Sequencing studies have found zero mutations in the genes of a variable proportion of different cancer types (see, e.g., https://dx.doi.org/10.1093/jnci/dju405 and references therein). If cancer is caused by mutations in driver genes, could the authors explain what causes these cancers with zero mutations? Could the authors use their method for estimating the proportion of cancer risk that is preventable and unpreventable in people with tumors lacking driver gene mutations?

2. Environmental factors are known to affect stem cell division rates. According to IARC, drinking very hot beverages probably causes esophageal cancer (Group 2A). If you drink something hot enough to severely damage the cells lining the esophagus, the stem cells located in deeper layers have to divide to produce new cells to replace the damaged cells. These stem cell divisions, triggered by an environmental factor, will lead to mutations arising during DNA replication. However, these mutations are avoidable if you do not drink very hot beverages. Should these mutations be counted as environmental mutations (E mutations) or as unavoidable mutations arising during DNA replication (R mutations)?

      3. The authors' work is based on the somatic mutation theory. This theory is primarily supported by the idea that cancer incidence increases exponentially with age. Since our cells are known to accumulate mutations throughout life, the accumulation of driver gene mutations in our cells would perfectly explain why the risk of cancer increases until death. However, it is now well established that cancer incidence does not increase exponentially with age for some cancers (acute lymphoblastic leukemia, testicular cancer, cervical cancer, Hodgkin lymphoma, thyroid cancer, bone cancer, etc). It is also well known that cancer incidence decreases late in life for many cancer types (lung cancer, breast cancer, prostate cancer, etc). For example, according to SEER cancer statistics review, 1975-2014, men in their 80s have approximately half the risk of developing prostate cancer than men in their 70s. The somatic mutation theory, which is the basis for this article, does not explain why the lifetime accumulation of driver gene mutations in the cells of many tissues is not translated into an increase in cancer incidence throughout life. Are the authors' conclusions applicable to all cancers or only to those few cancers in which incidence increases exponentially with age until death?

      4. The authors estimate that 23% of the mutations required for the development of pancreatic cancer are associated with environmental and hereditary factors; the rest (77%) are mutations arising during DNA replication. However, Notta et al. recently found that 65.4% of pancreatic tumors develop catastrophic mitotic events that lead to mutations associated with massive genomic rearrangements (https://doi.org/10.1038/nature19823). In other words, Notta et al. demonstrate that cell division not only leads to mutations arising during DNA replication, but also to mutations arising during mitosis. For this cancer type, the authors could introduce a fourth source of mutations, estimate the proportion of mutations arising during mitosis (M mutations), and re-estimate those arising during DNA replication (R mutations). Alternatively, they could reanalyze their raw data without assuming that the parameters “stem cell divisions” and “DNA replication mutations” are interchangeable. Cell division, the process by which a cell copies and separates its cellular components to finally split into two cells, can lead to mutations occurring during DNA replication, but also to other cancer-promoting errors, such as chromosome aberrations arising during mitosis, errors in the distribution of cell-fate determinants between the daughter cells, and failures to restore physical interactions with other tissue components. Would the authors' conclusions stand without assuming that the parameters “stem cell divisions” and “DNA replication mutations” are interchangeable?

      5. The authors report a striking correlation between the number of stem cell divisions in a tissue and the risk of cancer in that tissue. They do not report any correlation between the number of mutations in a tissue and the risk of cancer in that tissue; in fact, these parameters are not correlated (see, e.g., https://doi.org/10.1038/nature19768). In addition, the authors discuss that most of the mutations required for cancer are a consequence, not a cause, of the division of stem cells. So, why do the authors use their correlation to say that cancer is caused by the accumulation of mutations in driver genes instead of saying that cancer is caused by the accumulation of cell divisions in stem cells?

      For references and additional information see: Comment on 'Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention' DOI: 10.13140/RG.2.2.28889.21602 https://www.researchgate.net/publication/318744904; also https://www.preprints.org/manuscript/201707.0074/v1/download


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 29, Daniel Corcos commented:

      The authors confuse mutation incidence with cancer incidence. Furthermore, the factors are not additive. Mutations are obviously related to the number of cell divisions, which is well known, but this does not tell us anything about the contribution of heredity and environment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Mar 29, Atanas G. Atanasov commented:

      Compliments to the authors for this very interesting work; I have featured it at: http://healthandscienceportal.blogspot.com/2017/03/new-study-points-that-two-thirds-of.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 26, Zhang Weihua commented:

      These authors should have cited our publication that, for the first time, shows that multinucleated giant cells are drug resistant and capable of generating tumors and metastases from a single cell in vivo.

      Formation of solid tumors by a single multinucleated cancer cell. Weihua Z, Lin Q, Ramoth AJ, Fan D, Fidler IJ. Cancer. 2011 Sep 1;117(17):4092-9. doi: 10.1002/cncr.26021. Epub 2011 Mar 1. PMID: 21365635


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 04, L Lefferts commented:

      This study, funded by members of the International Association of Color Manufacturers (IACM) and written by IACM staff, members, and consultants touting the safety of food dyes, is so riddled with inaccuracies and misleading statements that it should be retracted and disregarded. Each of its conclusions is incorrect. The Corrigendum only partially and inadequately addresses the errors. Bastaki et al. mischaracterizes the relationship between the study’s exposure estimates and actual concentrations measured analytically by the US Food and Drug Administration (FDA), systematically underestimates food dye exposure, and relies on acceptable daily intake (ADI) estimates that are based on outdated animal studies incapable of detecting the kinds of adverse behavioral effects reported in multiple double-blind clinical trials in children. Bastaki et al. also ignores the nine recent reviews (including three meta-analyses), drawing on over 30 such double-blind clinical trials, that all conclude that excluding food dyes, or adherence to a diet that eliminates food dyes as well as certain other foods and ingredients, reduces adverse behavior in some children (Arnold et al. 2012, Arnold et al. 2013, Faraone and Antshel 2014, Nigg et al. 2012, Nigg and Holton 2014, Schab and Trinh 2004, Sonuga-Barke et al. 2013, Stevens et al. 2011, Stevenson et al. 2014). While Bastaki et al. has been revised to delete the incorrectly reported doses used in the Southampton study, it still makes misleading statements about that study.

      Each erroneous conclusion is addressed in turn in a letter sent to the editor, signed by me, Lisa Lefferts, Senior Scientist, Center for Science in the Public Interest, and Jim Stevenson, Emeritus Professor of Developmental Psychopathology, School of Psychology, University of Southampton, and available at <https://cspinet.org/sites/default/files/attachment/dyes Bastaki LTE.pdf>.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 23, Markus Meissner commented:

      During this lengthy discussion, the Kursula group succeeded in solving part of the puzzle regarding the polymerisation mechanism of apicomplexan actin and confirmed our suspicion (see below) that sedimentation assays, while useful in other systems, lead to variable and unreliable results in the case of apicomplexan actin (see: https://www.nature.com/articles/s41598-017-11330-w). Briefly, in this study polymerisation assays based on pyrene labelling were used to compare the polymerisation kinetics of Plasmodium and rabbit actin and conclusively showed that:

      - Apicomplexan actin polymerises in a cooperative manner with a critical concentration similar to that of canonical actin.
      - The shorter filament lengths result from a higher depolymerisation rate.

      Since Skillmann et al., 2013 reached their conclusion exclusively on the basis of sedimentation assays, their conclusion regarding an isodesmic polymerisation mechanism of apicomplexan actin, as discussed below, should be viewed with great scepticism. As discussed in that study, these in vitro data also support our in vivo findings (Periz et al., 2017, Whitelaw et al., 2017 and Das et al., 2017) suggesting that a critical concentration of G-actin is required in order to form F-actin filaments. Therefore, the hypothesis of an isodesmic polymerisation mechanism can be considered falsified.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 20, Robert Insall commented:

      Professor Sibley's most extensive comments are based on a single paper (Skillman et al. 2013) that concluded the polymerization of Toxoplasma actin uses an isodesmic, rather than a nucleation-based, mechanism. While this work was well executed and thorough, it is not on its own sufficient to support the level of absolutism in evidence in these comments. In particular, results from actin that has been exogenously expressed (in this case, in baculovirus) are less reliable than those from native apicomplexan actin. The folding of actin is infamously complex, with a full set of specialist chaperones and idiosyncratic N-terminal modifications. Even changes in the translation rate of native actin can affect its function and stability (see for example Zhang, 2010). Exogenously expressed actin may be fully folded but still not representative of the physiological protein. Thus it is not yet appropriate to make dogmatic statements about the mechanism of apicomplexan actin function until native actin has been purified and its polymerization measured. When this occurs, as it surely will soon, stronger rulings may be appropriate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jun 20, Markus Meissner commented:

      We thank David Sibley for his last comment. As we mentioned previously, it was not the aim of this study to prove or disprove isodesmic polymerisation. We highlighted the current discussion in the field regarding isodesmic polymerisation (see previous comments). It is counterproductive to turn the comments on this paper into a discussion of Skillmann et al., 2013, which is viewed with great scepticism in the field. We made our views clear in previous responses and we hope that future results will help to clarify this issue. However, we find it concerning (and distracting) that, in contrast to his earlier comments, according to which our data can be reconciled with isodesmic polymerisation, David Sibley is now doubting the validity of our data, mentioning that CB might affect actin dynamics. This is certainly the case, as shown in the study, and as is the case with most actin-binding proteins used to measure actin dynamics in eukaryotic cells. This issue was discussed at length in the manuscript, in the reviewers' comments and in the authors' response, which can all be easily accessed: https://elifesciences.org/articles/24119 The above statement reflects the joint opinions of: Markus Meissner (University of Glasgow), Aoife Heaslip (University of Connecticut) and Robert Insall (Beatson Institute, Glasgow).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jun 19, L David Sibley commented:

      Based on the most recent response by Dr. Meissner, it is clear that there is still some confusion about the difference between measuring the kinetics of actin polymerization in vitro vs. monitoring actin dynamics in vivo. These are fundamentally different processes, the former of which cannot be directly inferred from the latter. Given this confusion, it is worth reviewing how these two processes are distinct, yet inter-related.

      When referring to the mechanism of actin polymerization in vitro, nucleation is the process of forming stable trimers, which is normally limited by an intrinsic kinetic barrier imposed by unstable dimers. Due to this intrinsic instability, the nucleation step is normally revealed as a pronounced lag phase in the time course of polymerization, after which filaments undergo rapid elongation Pollard TD, 2000. TgACT1 lacks this nucleation step and instead uses a non-cooperative, isodesmic process. The Arp2/3 complex facilitates formation of the trimer by acting as the barbed end, thus reducing the lag time and accelerating polymerization, typically by side branching from existing filaments. Toxoplasma has no use for such a step, as it would not affect the efficiency of an isodesmic process, since dimers and trimers normally form without a lag phase Skillman KM, 2011. By contrast, formins bind to the barbed end of existing filaments and promote elongation, both by preventing capping protein from binding and by using profilin to gather actin monomers for addition to the barbed end. Formins may also nucleate F-actin by binding to two monomers to lower the lag phase for trimer formation, thus facilitating elongation, although this role is less well studied. Importantly, formins can act on actins that use either an intrinsic “nucleation-elongation” cooperative mechanism or an isodesmic process, such as that used by Toxoplasma. Hence, the fact that formins function in Toxoplasma has no bearing on the intrinsic polymerization mechanism of TgACT1.

      Once the above definitions are clearly understood, it becomes apparent why the isodesmic polymerization process used by Toxoplasma is fully compatible with both the short-filament, rapid-turnover dynamics that have been described previously Sahoo N, 2006, Skillman KM, 2011, Wetzel DM, 2003, and the new findings of long, stable filaments described in the present paper Periz J, 2017. These different states of actin polymerization represent dynamics that are driven by the combination of the intrinsic polymerization mechanism and the various actin-binding proteins that modulate this process. However, the dynamic processes that affect the status of G- and F-actin in vivo cannot be used to infer anything about the intrinsic mechanism of actin polymerization as it occurs in solution. As such, we strongly disagree that there is an issue to resolve regarding the intrinsic mechanism of actin polymerization in Toxoplasma, nor do any of the studies in the present report address this point. Our data on the in vitro polymerization kinetics of TgACT1 clearly fit an isodesmic process Skillman KM, 2013, and we are unaware of any data that demonstrate otherwise. Hence we fail to see why this conclusion is controversial, and find it surprising that these authors continue to question this point in their present work Periz J, 2017, previous report Whitelaw JA, 2017, and comments by Dr. Meissner. As it is not possible to predict the intrinsic mechanism of actin polymerization from the behavior observed in vivo, these comments are erroneous and misleading. On the other hand, if these authors have new data that speak directly to the topic of the intrinsic polymerization mechanism of TgACT1, we would welcome them to provide it for discussion.

      Although we disagree with the authors on the above points, we do agree that the fact that actin filaments can be visualized in Toxoplasma for the first time is interesting and certainly in contrast to previous studies. For example, previous studies failed to reveal such filaments using YFP-ACT1, despite the fact that this tagged form of actin is readily incorporated into Jasplakinolide-stabilized filaments Rosenberg P, 1989. As well, filaments have not been seen by cryo-EM tomography Paredes-Santos TC, 2012 or by many studies using conventional transmission EM. This raises some concern that the use of chromobodies (Cb) that react with F-actin may stabilize filaments and thus affect dynamics. Although the authors make some attempt to monitor this in transfected cells, it is very difficult to rule out that Cb are in fact enhancing filament formation. One example of this is seen in Figure 6A, where, in a transiently transfected cell, actin filaments are seen with both the Cb staining and anti-actin, while in the non-transfected cell it is much less clear that filaments are detected with anti-actin Periz J, 2017. Instead the pattern looks more like punctate clusters that concentrate at the posterior pole or residual body. Thus, while we would agree that the Cb-stained filaments also stain with antibodies to F-actin, it is much less clear that they exist in the absence of Cb expression. It would thus be nice to see these findings independently reproduced with another technique. It would also be appropriate to test the influence of Cb on TgACT1 in vitro to determine whether it stabilizes filaments. There are published methods to express Toxoplasma actin in a functional state, so this could easily be tested Skillman KM, 2013. Given the isodesmic mechanism used by TgACT1, it is very likely that any F-actin-binding protein would increase the stability of the short filaments that normally form spontaneously, thus leading to longer, more stable filaments. This effect is likely to be less pronounced when using yeast or mammalian actins, as they intrinsically form stable filaments above their critical concentration. Testing the effects of Cb on TgACT1 polymerization in vitro would provide a much more sensitive readout than has been provided here, and would help address the question of whether expression of Cb alters in vivo actin dynamics.

      In summary, we find the reported findings of interest, but do not agree that they change the view of how actin polymerization operates in Toxoplasma at the level of the intrinsic mechanism. They instead reveal an important aspect of in vivo dynamics, and it will be important to determine what factors regulate this process in future studies.

      The above statement reflects the joint opinions of: John Cooper (Washington University), Dave Sept (University of Michigan) and David Sibley (Washington University).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Jun 16, Markus Meissner commented:

      Thank you for your comment, which appears to be only a slight update of the comments already made on the eLife website; it would be helpful for all readers who wish to follow this discussion if we could stick to the website where the discussion started (see: https://elifesciences.org/articles/24119).

      Regarding the second comment of David Sibley: It is good to see that the authors of the Skillmann paper (Skillmann et al., 2013) are able to reconcile our data with their unusual, isodesmic polymerisation model, despite their initial interpretation, which clearly states that “…an isodesmic mechanism results in a distribution of SMALL OLIGOMERS, which explains why TgACTI only sediments efficiently at higher g force. Our findings also explain why long TgACTI filaments have not been observed in parasites by any method, including EM, fluorescence imaging of GFP–TgACTI and Ph staining." While it appears that we will need a lengthy discussion about Skillmann et al., 2013, or, even better, more reliable assays to answer the question of isodesmic vs cooperative polymerisation, our study did not aim to answer this open issue, which we briefly introduced in Periz et al., 2017 to give a more complete picture of the open questions regarding apicomplexan actin. As soon as more convincing evidence is available for cooperative or isodesmic polymerisation of apicomplexan actin, we will be happy to integrate it into our interpretation. Meanwhile we remain of the opinion that our in vivo data (see also Whitelaw et al., 2017) best reflect the known behaviours of canonical actin. While it seems that under the conditions used by Skillmann et al., 2013 apicomplexan actin polymerises in an isodesmic manner, in the in vivo situation F-actin behaviour appears very similar to that in other, well-characterised model systems. However, we would like to point out that a major argument in the interpretation of Skillmann et al., 2013 for isodesmic polymerisation is that “This discovery explains previous differences from conventional actins and offers insight into the behaviour of the parasite in vivo. First, nucleation is not rate limiting, so that T. gondii does not need nucleation-promoting factors. Indeed, homologs of actin nucleating proteins, such as Arp2/3 complex have not been identified within apicomplexan genomes”. This statement is oversimplified and cannot be reconciled with the literature on eukaryotic actin. For example, Arp2/3 knockouts have been produced in various cell lines (and obviously their actin does not switch to an isodesmic polymerisation process). Instead, within cells, regulated actin assembly is initiated by two major classes of actin nucleators, the Arp2/3 complex and the formins (Butler and Cooper, 2012). Therefore, we thought it necessary to mention in Periz et al., 2017 that apicomplexans do possess nucleators, such as formins. Several studies agree that apicomplexan formins efficiently NUCLEATE actin in vitro, both rabbit and apicomplexan actin (Skillmann et al., 2012, Daher et al., 2010 and Baum et al., 2008). In summary, we agree that future experiments will be required to resolve this issue, and we are glad that David Sibley agrees with the primary findings of our study. We hope that future in vitro studies will help to resolve the question of an isodesmic vs cooperative polymerisation mechanism in the case of apicomplexan actin so that a better integration of in vivo and in vitro data will be possible.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Jun 14, L David Sibley commented:

      We feel it is worth briefly reviewing the concept of the critical concentration (Cc), and the properties of nucleation-dependent actin polymerization, since there seems to be some misconception about these terms as they are used in this paper Periz J, 2017.

      Polymerization assays using muscle or yeast actin clearly show that these actins undergo nucleation-dependent assembly. Nucleation is a cooperative assembly process in which monomers of actin (i.e., G-actin) form small, unstable oligomers that readily dissociate. The Cc is the concentration of free actin above which a stable nucleus is formed and filament elongation begins, a process that is more thermodynamically favorable than the nucleation step. A key feature of this nucleation-elongation mechanism is that, for total actin concentrations above the Cc, the concentration of free G-actin remains fixed at the Cc, and all of the additional actin, over and above the Cc, is polymerized into filaments (i.e., F-actin). In contrast, an isodesmic polymerization process is not cooperative, and all steps (formation of dimer, trimer, etc.) occur with the same binding and rate constants. With isodesmic polymerization, the monomer concentration (G-actin concentration) does not display a fixed limit; instead, as the total actin concentration increases, the G-actin concentration continues to increase. Another key difference with isodesmic polymerization is that polymer forms at all concentrations of total actin (i.e., there is no concept of a critical concentration, Cc, that must be exceeded in order to achieve polymer formation).

      The inherent differences between nucleation-elongation and isodesmic polymerization give rise to distinct kinetic and thermodynamic signatures in experiments. Because the nucleation process is unfavorable and cooperative, the time course of nucleation-elongation polymerization shows a characteristic lag phase, with a relatively low rate of initial growth, before the favorable elongation phase occurs. In contrast, isodesmic polymerization shows no lag phase, but exhibits linear growth vs. time from the start at time zero. The thermodynamic differences are manifested in experiments examining the fractions of polymer (F-actin) and monomer (G-actin) at steady state. Since nucleation-elongation has a critical concentration (Cc), the monomer concentration plateaus at this value and remains flat as the total protein concentration is increased. Polymer concentration is zero until the total concentration exceeds the critical concentration, and above that point, all the additional protein exists as polymer. In the isodesmic model, in stark contrast, the monomer concentration continues to increase and polymer forms at all concentrations of total protein. These two distinct behaviors are illustrated in Figure 1 from Miraldi ER, 2008.
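
      To make these two steady-state signatures concrete, the following minimal numerical sketch in Python (added for illustration only; it is not taken from any of the cited studies, and the critical concentration and equilibrium constant are arbitrary assumed values) computes the monomer concentration as a function of total actin under each model: with nucleation-elongation the monomer plateaus at the critical concentration, whereas with isodesmic assembly the monomer keeps rising and polymer is present at every total concentration.

      ```python
      import numpy as np

      def nucleation_elongation(total, cc=0.1):
          """Monomer/polymer split when a critical concentration cc (arbitrary units) exists."""
          monomer = np.minimum(total, cc)
          return monomer, total - monomer

      def isodesmic(total, K=10.0):
          """Monomer/polymer split for isodesmic assembly with step equilibrium constant K.
          Solves the mass balance total = m / (1 - K*m)**2 for the monomer concentration m."""
          monomer = np.empty_like(total)
          for i, ct in enumerate(total):
              lo, hi = 0.0, min(ct, 1.0 / K)      # monomer is bounded by total and by 1/K
              for _ in range(60):                 # simple bisection
                  mid = 0.5 * (lo + hi)
                  if mid / (1.0 - K * mid) ** 2 < ct:
                      lo = mid
                  else:
                      hi = mid
              monomer[i] = 0.5 * (lo + hi)
          return monomer, total - monomer

      total = np.linspace(0.01, 1.0, 10)          # total actin, arbitrary units
      print(nucleation_elongation(total)[0])      # monomer plateaus at cc; polymer only above cc
      print(isodesmic(total)[0])                  # monomer keeps increasing; polymer at all totals
      ```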

      Our previous study on yeast and Toxoplasma actin Skillman KM, 2013 shows sedimentation assays that are closely matched by the theoretical results discussed above. In our study, yeast actin (ScACT, Figure 2c) displays the saturation behavior characteristic of a nucleation-elongation mechanism; however, for TgACT1 (Figure 2a), the monomer concentration (red) continues to increase as total actin increases. In addition, the inset to Figure 2a shows that filaments (blue) are present at the lowest concentrations of total actin, with no evidence of a lag Skillman KM, 2013. Based on these features, it is unequivocal that Toxoplasma actin follows an isodesmic polymerization process, with no evidence of cooperativity.

      Several of the comments in the response may lead the reader to confound polymerization behavior in vitro with that observed for actin polymerization in vivo in cells. The question of whether actin polymerization occurs by a nucleation-elongation mechanism or by an isodesmic mechanism is one that can only be determined in vitro using a solution of pure actin, because this is a property of the actin molecule itself, irrespective of other components. While the in vitro polymerization behavior is relevant as the template upon which various actin-binding proteins act, the polymerization mechanism of the actin alone cannot be inferred from in vivo observations due to the presence of actin-interacting proteins.

      The authors state that the presence of “nucleation centers” in the parasite is not easy to consolidate with the isodesmic model Periz J, 2017. We disagree completely and emphatically. While we agree that there are “centers” of accumulation of F-actin in the cell, these foci should not be referred to as “nucleation” centers in this case, because the term “nucleation” has a specific meaning in regard to the polymerization mechanism. F-actin may accumulate in these foci over time as a result of any one or more of several dynamic processes: new filament formation, elongation of short filaments, decreased turnover, or clustering of pre-existing filaments. The result is interesting and important; however, it cannot be used to infer a polymerization mechanism.

      The authors imply that these centers of F-actin correspond to sites of action of formins Periz J, 2017, which are capable of binding to actin monomers or actin filaments and thereby promoting actin polymerization. With vertebrate or yeast actin, which has a nucleation-elongation mechanism, formins do accelerate the nucleation process, and they also promote the elongation process. In the case of the isodesmic model for actin polymerization, formins would still function to promote polymerization by interacting with actin filaments and actin monomers. Indeed, the short filaments that form with the isodesmic mechanism are ideal templates for elongation from the barbed end (which formins enhance). We have previously shown that, when polymerized in the presence of formins, TgACT1 assembles into clusters of intermediate-sized filaments that resemble the in vivo centers Skillman KM, 2012. Hence, as we commented previously, the isodesmic mechanism is entirely consistent with the observed in vivo structures labeled by the chromobodies.

      The authors also suggest that evidence of a nucleation-elongation mechanism, with a critical concentration, is provided by the observation that the actin filaments seen with chromobodies in vivo do not form in a conditional knockdown of TgACT1 Periz J, 2017. In our view, this conclusion is based on incorrectly using observations of in vivo dynamics to infer the intrinsic polymerization mechanism of the pure actin protein. Higher total actin concentration leads to higher actin filament concentration under both models, with control provided by the various actin-binding proteins of the cell and their relative ability to drive filament formation and turnover in vivo. However, dependence on total actin concentration is not a reflection of the intrinsic polymerization mechanism. The polymerization mechanism of TgACT1, whether isodesmic or nucleation-elongation, is unlikely to be the critical determinant of actin dynamics in vivo; instead, actin monomers and filaments are substrates for numerous actin-binding proteins that regulate filament elongation, filament turnover, and G-actin sequestration, that is, the whole of actin cytoskeleton dynamics.

      Although we agree that much more study is needed to unlock the molecular basis of actin polymerization and dynamics in apicomplexans, it will be important to distinguish between properties that are intrinsic to the polymerization process as it occurs in vitro vs. interactions with proteins that modulate actin dynamics in vivo. The challenge, as has been the case in better-studied systems Pollard TD, 2000, will be to integrate both sets of findings into a cohesive model of actin regulation and function in apicomplexans.

      The above statement reflects the joint opinions of: John Cooper (Washington University), Dave Sept (University of Michigan) and David Sibley (Washington University).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 05, thomas samaras commented:

      This study supports the work of Lindeberg and Lundh some 25 years ago. They found no evidence of CHD or stroke among the natives of Kitava, and they also reported no evidence of CHD or stroke in all of Papua New Guinea. Eaton et al. also reported the rarity or absence of CHD and stroke in the Solomon Islands, PNG, Kalahari bushmen, and Congo pygmies.

      Lindeberg and Lundh. Apparent absence of stroke and ischaemic heart disease in a traditional Melanesian island: a clinical study in Kitava. Journal of Internal Medicine 1993; 233: 269-275.

      Eaton, Konner, Shostak. Stone agers in the fast lane: chronic degenerative diseases in evolutionary perspective. The American Journal of Medicine 1988; 84: 739-749.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 27, Janet Kern commented:

      In the Endres et al. study above, the authors found a marginally significant trend toward decreased glutathione (GSH) signals between the two groups in the dorsolateral prefrontal cortex (p=0.076). It did not quite reach statistical significance. To achieve statistical significance, a study must have sufficient statistical power. Statistical power, or the power of a test to correctly reject the null hypothesis, is affected by the effect size, the sample size, and the alpha significance criterion. In their study, for the overall and single-group differences in neurometabolite signals, the level of significance was corrected for multiple tests using the Bonferroni approach (p < 0.025, due to performing the measurements in two independent regions). So, the alpha significance criterion was 0.025. Effect size is the magnitude of the sizes of associations or the sizes of differences. Typically, a small effect size is considered about 0.2, a medium effect size about 0.5, and a large effect size about 0.8. Assuming a two-tailed alpha of 0.025, a large effect size of 0.8 and a power of 0.8, the total number of subjects required would be 62 (or 31 in each group). The sample size of the Endres et al. study was 24 ASD patients and 18 matched control subjects. Was there truly no statistically significant difference between the two groups, or was the study underpowered?
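
      For illustration, the sample-size figure quoted above can be reproduced with a standard power calculation for a two-sample t-test; the short sketch below (not part of the original comment) uses the statsmodels package and assumes the stated parameters of effect size 0.8, two-sided alpha 0.025, power 0.8 and equal group sizes.

      ```python
      from statsmodels.stats.power import TTestIndPower

      # Assumed parameters taken from the comment: large effect size (0.8),
      # Bonferroni-corrected two-sided alpha (0.025), conventional 80% power.
      n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.025,
                                                power=0.8, ratio=1.0,
                                                alternative='two-sided')
      print(round(n_per_group))   # about 31 per group, i.e. roughly 62 subjects in total
      ```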


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 07, Noel de Miranda commented:

      Interesting report that confirms the work of Kloor et al. 2005 (Cancer Res) and Dierssen et al. 2007 (BMC Cancer), which are not cited in the current work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 22, Vojtech Huser commented:

      This is a nice example of semantic integration. Inclusion of these CDEs in the common NIH portal for CDEs would be an added bonus.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 10, Helge Knüttel commented:

      Unfortunately, the search strategy for this systematic review was not published by the journal. It may be found here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 04, Steve Alexander commented:

      The title refers to plasma, as do the findings, but the Methods section talks about serum, although it describes EDTA collection. I'm assuming it's plasma throughout, but it is confusing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 22, Paola Pizzo commented:

      We thank the Authors for their Reply to our Letter. However, given the importance of this topic, we have to add an additional comment to further clarify some criticisms. First, the Authors reason that the average mitochondrial surface in contact with ER can be extracted from the data presented in tables S1, S2 and S3 of their paper (Naon D, 2016). However, these data do not have the same relevance as presenting the total number of ER-mitochondria contacts in the two situations. Indeed, the percentage of mitochondrial surface in contact with ER substantially varies depending on whether the analysis is restricted only to mitochondria that display contacts with the ER or also includes contact-deprived mitochondria. In their analysis, the Authors considered only those mitochondria that are engaged in the interaction with the ER (and not the total mitochondrial population), as they stated in the Results section (“we devised an ER-mitochondria contact coefficient (ERMICC) that computes [....] the perimeter of the mitochondria involved in the interaction”). This approach could be misleading: considering that in Mfn2-depleted cells a higher percentage of mitochondria endowed with contacts has been found (Filadi R, 2016), the fraction of contact-deprived mitochondria should be taken into account to calculate the real average OMM juxtaposition surface. Second, the Authors argue that the fluorescent organelle proximity probes they used (both ddGFP and the FRET-based FEMP probe) do not artificially juxtapose organelles: we did not claim this in our Letter, and we apologize if, for space limitations, this was not clear enough. Nevertheless, as to the FEMP probe, its propensity to artificially force ER-mitochondria juxtaposition, already a few minutes after rapamycin treatment, has been clearly shown by EM analysis in the original paper describing this tool (Csordás G, 2010). Additionally, it is worth mentioning that comparison of FRET values between different conditions is possible only when the dynamic range (i.e., the difference between minimal and maximal FRET values) of a given FRET probe is similar in these different conditions. The new data provided by the Authors in their Reply (Tables 1 and 2) show that the average rapamycin-induced maximal FRET values are dramatically different between wt and Mfn2-depleted cells. Thus, we believe that at least some caution should be adopted before claiming this probe is a reliable tool for the comparison of ER-mitochondria tethering in such different conditions. Recently, we suggested how the fragmented/altered mitochondrial and ER morphology present in Mfn2-depleted cells may impair the rapamycin-induced assembly of this probe (Filadi R, 2017), thus severely complicating the interpretation of any result. Regarding the other fluorescent probe (the ddGFP) used by Naon et al. (Naon D, 2016), it is unclear why only ~10% of the transfected wt/control cells (mt-RFP positive cells; Fig. 1G and 2E) are positive for the ddGFP signal (claimed as an indicator of organelle tethering). Should not ER-mitochondria juxtaposition be a feature of every cell? Concerning the Ca<sup>2+</sup> experiments, we are forced to discuss additional criticisms present in the Authors’ Reply. Our observation that, in Naon et al. (Naon D, 2016), mitochondrial Ca<sup>2+</sup> peaks in control cells (mt-YFP traces), presented in Fig. 3F, are ~100-fold higher than those in Fig. 3B was rejected by the Authors, because in Fig. 3F “Mfn2<sup>flx/flx</sup> cells were preincubated in Ca<sup>2+</sup>-free media to equalize cytosolic Ca<sup>2+</sup> peaks”. However, in our Letter we clearly referred to control, mt-YFP expressing cells, and not to Cre-infected Mfn2<sup>flx/flx</sup> cells. Nevertheless, even if a “preincubation in a Ca<sup>2+</sup>-free media to equalize cytosolic Ca<sup>2+</sup> peaks” (i.e., a treatment that decreases the ER Ca<sup>2+</sup> content) was applied to both cell types, the prediction is that, in Fig. 3F, the ATP-induced mitochondrial Ca<sup>2+</sup> peaks would be lower for both Mfn2<sup>flx/flx</sup> and control (mt-YFP) cells, and not higher than those presented in Fig. 3B. Lastly, as members of a lab where mitochondrial Ca<sup>2+</sup> homeostasis has been studied over the last three decades, we have to point out that, in our opinion, the reported values for ATP-induced mitochondrial [Ca<sup>2+</sup>] peaks (i.e., 160 nM and 390 nM in Fig. 3B and 3C, respectively) are unusually low and can hardly be considered to be above the basal mitochondrial matrix [Ca<sup>2+</sup>] (~100 nM). Furthermore, these low [Ca<sup>2+</sup>] values cannot be reliably measured by the mitochondrial aequorin probe (Brini M, 2008) used by Naon et al. (Naon D, 2016). Finally, concerning the speed of Ca<sup>2+</sup> accumulation in isolated mitochondria, we clearly stated in our Letter that at 50 uM CaCl2 in the medium and no Mg<sup>2+</sup>, the rate of Ca<sup>2+</sup> accumulation is limited by the activity of the respiratory chain (Heaton GM, 1976), and thus does not offer any information on the MCU content. We did not refer to problems in respiratory chain activity in Mfn2-depleted cells, as interpreted by Naon et al. in their Reply. Overall, while we appreciate the attempt of the Authors to highlight some aspects of the controversy, we renew all the concerns we discussed in our Letter.
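
      As a purely hypothetical numerical illustration of the averaging point raised above (the values below are invented and are not taken from either paper), the average juxtaposition surface changes markedly depending on whether contact-deprived mitochondria are included in the denominator:

      ```python
      # Hypothetical example: fraction of OMM perimeter in contact with the ER,
      # measured only for mitochondria that display at least one contact.
      contact_fractions = [0.20, 0.15, 0.25]    # invented values for contacting mitochondria
      n_without_contact = 7                     # invented number of contact-free mitochondria

      mean_contacting_only = sum(contact_fractions) / len(contact_fractions)
      mean_all_mitochondria = sum(contact_fractions) / (len(contact_fractions) + n_without_contact)

      print(mean_contacting_only)    # 0.20: average over contacting mitochondria only
      print(mean_all_mitochondria)   # 0.06: average over the whole mitochondrial population
      ```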


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 19, Martine Crasnier-Mednansky commented:

      The authors have previously reported that very little SgrT is made in E. coli as compared with Salmonella typhimurium, which led them to conclude that E. coli K12 has "lost the need for SgrT" (Wadler CS, 2009). Later on, the rationale for using S. typhimurium instead of E. coli for studying SgrT was reinforced (Balasubramanian D, 2013). In the present work, the authors use E. coli sgrS mutant strains overproducing SgrT. Therefore, the present work does not establish a 'physiological' role for SgrT in preventing the E. coli PTS transport of glucose; thus, the title of the article is misleading.

      The authors’ interpretation of figure 1 does not agree with the following data. E. coli mutant strains lacking Enzyme IICB<sup>Glc</sup> (PtsG) do grow on glucose (see Curtis SJ, 1975; Table VIII in Stock JB, 1982). Mutant strains lacking both the glucose and mannose enzyme II grow very slowly on glucose. In other words, because growth had been observed on mannose, growth should have been observed on glucose. Furthermore, the authors should have been aware that an increased level of cAMP from overexpressing SgrT further impairs growth on glucose.

      PtsG is not "comprised of three main functional domains", as the authors state. PtsG has two functional domains (IIB and IIC) connected by a flexible linker. In the nomenclature for PTS proteins (Saier MH Jr, 1992), PtsG translates into Enzyme IICB<sup>Glc</sup>, which is informative (and therefore should be preferred to any other designation) because it indicates a two-domain structure, a specificity for glucose, and the order of the domains (from N to C terminus).

      Kosfeld A, 2012 clearly established, by cross-linking experiments, the interaction between SgrT and Enzyme IICB<sup>Glc</sup> in the presence of glucose. They also visualized the recruitment of SgrT to the membrane by in vivo fluorescence microscopy. It is therefore unwarranted for the authors to 'hypothesize' an interaction and localization to the membrane, and to state: "Once we established that SgrT inhibits PtsG specifically and its localization to the membrane …". In addition, the demonstration by Kosfeld A, 2012 that the motif KTPGRED (in the flexible linker) is the main target of SgrT is rather convincing.

      Finally, the statement "SgrT-mediated relief of inducer exclusion may allow cells experiencing glucose-phosphate stress to utilize alternative carbon sources" is inaccurate because it ignores the positive effect of cAMP on the utilization of alternative carbon sources like lactose.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 28, Thomas Perls, MD, MPH commented:

      Some studies cite this very compelling study as evidence against the Compression of Morbidity hypothesis. This study observes progressively higher prevalence rates of morbidity and disability with increasing age among octogenarians, nonagenarians and centenarians. However, the authors were unable to determine when in their lives these individuals developed these problems, and therefore the work does not describe any differences in the compression of disability or morbidity. One of the virtues of becoming a centenarian is the likelihood of compressing the time that you experience disability towards the end of your life. Surviving to ages that approach the human lifespan (e.g. >105 years) likely also entails compressing morbidity as well Andersen SL, 2012.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 19, Mauro Podda commented:

      Dear Sir/Madam. Many thanks for showing your interest in our manuscript. We can guarantee that our systematic review with meta-analysis of RCTs comparing antibiotic therapy and surgery for uncomplicated acute appendicitis was performed in accordance with the instructions provided by the Cochrane Handbook for Systematic Reviews of Interventions, and the PRISMA statement was used for reporting the research in the systematic review. Probably, this should be better clarified in the text. With regard to the search keys, this is the strategy: (((((appendicitis) AND antibiotic treatment) OR conservative management) OR nonoperative management) OR nonoperative treatment) AND appendectomy


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 16, Michelle Fiander commented:

      What is a Systematic Literature Search?

      This review describes its literature search as "systematic" but provides no evidence to support the statement. Reproducible search strategies are not provided (per PRISMA item 8), but two sets of keywords are. I created a PubMed search strategy (copied below) by making an educated guess as to how the authors combined the list of terms provided and, in doing so, found over 3800 citations up to April 2016 (one month before the search date reported in the review). The review reports screening 938 citations, so it is unclear how the evidence for this review was identified.

      The authors say the meta-analysis was "performed in accordance with the recommendations from the...PRISMA Statement." PRISMA does not recommend methodological approaches to data analysis; it describes the data authors should provide in a systematic review manuscript. For recommendations on how to analyze data in a systematic review, sources such as the Cochrane Handbook should be consulted.

      There has been recent research on the poor quality of published systematic reviews; journal editors should engage with methodologists conversant with systematic review methodology to ensure that the reviews they publish are rigorously reported.


      PubMed: Search (antibiotic OR "nonoperative treatment" OR "conservative management" OR "nonoperative management" OR "medical treatment" OR appendectomy OR appendicectomy OR laparoscopy) AND ("acute appendicitis") AND Filters: Publication date to 2016/04/30
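
      For readers who wish to reproduce such a count, the sketch below (not part of the original comment) runs the query above against PubMed through the NCBI E-utilities via Biopython; the query string and date cutoff are taken from this comment, the e-mail address is a placeholder required by NCBI, and the returned count may differ over time as records are added or re-indexed.

      ```python
      from Bio import Entrez

      Entrez.email = "your.name@example.org"   # placeholder; NCBI asks for a contact address

      query = ('(antibiotic OR "nonoperative treatment" OR "conservative management" '
               'OR "nonoperative management" OR "medical treatment" OR appendectomy '
               'OR appendicectomy OR laparoscopy) AND ("acute appendicitis")')

      # Restrict by publication date up to 2016/04/30, matching the filter quoted above.
      handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                              mindate="1800/01/01", maxdate="2016/04/30", retmax=0)
      record = Entrez.read(handle)
      handle.close()
      print(record["Count"])   # total number of matching citations
      ```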


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 23, Wenqiang Yu commented:

      My comments on this impressive paper mainly regard the relationship between enhancers and microRNAs. MicroRNAs are expressed in a tissue- and cell-type-specific manner, as are enhancers. Therefore, it would be intriguing to know whether miRNAs and enhancers may be intrinsically linked while regulating gene expression. The results of this paper are interesting because Suzuki HI et al. found that enhancer regions overlap with miRNA genomic loci and may play a role in shaping the tissue-specific gene expression pattern. These findings directly support our earlier results from a 5-year-long project that was finally published in RNA Biology at the beginning of 2016, “MicroRNAs Activate Gene Transcription Epigenetically as an Enhancer Trigger” (http://www.tandfonline.com/doi/abs/10.1080/15476286.2015.1112487?journalCode=krnb20). In that paper, we not only found that many miRNA genomic loci overlap with enhancer regions, but also identified a subset of miRNAs in the nucleus that function as universal and natural gene activators emanating from enhancer loci, which we termed NamiRNA (Nuclear activating miRNA; although this specific term was not used in the paper). These miRNAs are associated with active enhancers characterized by distinct H3K27ac enrichment, p300/CBP binding and DNase I hypersensitivity. We also presented evidence that NamiRNAs promote genome-wide gene transcription through the binding and activation of their targeted enhancers. Thus, we anticipate that the NamiRNA-enhancer-mRNA activation network may be involved in cell behavior modulation during development and disease progression. Having said all that, we hope our results published in RNA Biology can be cited by this paper. Meanwhile, we want to emphasize the dual functionality of miRNAs that is supported by our results: they work as activators via enhancers in the nucleus and as traditional silencers in the cytoplasm. In light of this, more attention should be paid to research that clarifies the details of these NamiRNA functions. It is our belief that the miRNA-enhancer-gene activation network may be the intrinsic link between miRNAs and enhancers when the two coordinate in regulating gene expression during cell fate transitions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 04, Ralph Brinks commented:

      One of the key methods used in this paper is not appropriate. Note the comment on this paper: Hoyer A, 2017


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 11, Lydia Maniatis commented:

      Reading this article, one gets the impression that the authors don’t quite believe their own claims, or aren’t really sure about what they’re claiming. This is illustrated by the following statement (caps mine): “It could be that other factors often associated with perceptual inferences and top-down processing—scission, grouping, object formation, and so forth—could affect the filters OR ACTUALLY ARE THE FILTERS” (page 17).

      Whereas up to this point the notion of “filters,” described as acting “early” in the visual process, has referred to a technical manipulation of the image based on sharpening or blurring luminance “edges” (a process which is unnecessarily and somewhat confusingly described in terms of removing “low or high spatial frequency content,” even though the elements referred to are not repeating) we are now told that this manipulation - the “simple filter” - may be equivalent to processes of perceptual organization that produce de facto inferences about the distal stimulus, such as the process of figure-ground segregation (which is, of course, prior to “object formation”). This is quite a surprise – in this case we perhaps could refer to, and more explicitly address, these organizing processes - assuming the authors decide definitively that this is what they mean, or unless the term “filter” is only intended to mean “whatever processes are responsible for certain products of perception.”

      With respect to the specific story being told: As with all "spatial filtering" accounts to date, it is acknowledged to be ad hoc, and to apply to a small, arbitrary set of cases (those for which it has been found to "work"). Which means that it is falsified by those cases which it cannot explain. The ad hoc-ness is incorporated into the “hypothesis: “The results support the hypothesis that, under some conditions, high spatial frequency content remains invariant to changes in illuminant” (p. 17). Which conditions are being referred to is not specified, begging the question of the underlying rationale. The authors continue to say that “Of course, this hypothesis may not be true for complex scenes with multiple illuminants or large amounts of interreflection.” Arguably, all natural scenes are effectively under multiple illuminants due to shadows created by obstructions and orientations relative to light sources.

      In fact, as with all filtering accounts to date, the account doesn’t even adequately explain the cases it is supposed to explain. The reason for this is that it doesn’t address the “double layers” present in perception with respect to illumination. It isn't fair to say that when a perceived surface is covered by a perceived shadow, we are discarding the illumination; the shadow is part of the percept. So to the extent that Dixon and Shapiro’s manipulation describes the perceptual product as containing only perceived surface lightness/color values but not illumination values, it is not representative of the percept corresponding to the image being manipulated.

      Relatedly, Dixon and Shapiro don’t seem to understand the nature of the problem they are addressing. They say that: “Most explanations of the dress assume that a central task of color perception is to infer the reflectance of surface material by way of discounting the illumination falling on the object” (p. 14). This may be accurate with respect to “most explanations” but, again, such explanations are inapt. As I have noted in connection to one such explanation (https://pubpeer.com/publications/17A22CF96405DA0181E677D42CC49E), attributing perceived surface color to perceived illumination is equivalent to attributing perceived illumination color/quality to perceived surface color. They are simultaneous and correlated inferences, and treating one as a cause of the other is like treating the height of one side of a seesaw as the cause of the height of the other side. You need to explain both what you see and what you saw.

      The confusion is similarly illustrated in Dixon and Shapiro's description of Purves’ cube demo, described as “an iconic image for illustrating the effect of illumination on color appearance…” (p. 3). But the demo is actually not illuminated if on a computer screen, and if observed on a page, is typically observed under ordinary lighting. Again, both the color of the surfaces and the color of the illumination are inferred on the basis of the chromatic structure of the unitary image (of its retinal projection); and both are effects, not causes, and both are represented in perception. I see the fabric of the dress as white and gold, and the illumination as bluish shadow. Neither is “discounted” in the sense that Dixon and Shapiro seem to be claiming.

      With respect to the problem of the dress specifically, none of these explanations address why it is interpreted one way by some and another way by others. The ambiguity of light/surface applies to all images, so general explanations in terms of illumination/surface color estimation don't differentiate cases in which there is agreement from those rarer ones for which there is disagreement. The reference to one perceptual outcome of the dress as indicating “poor color constancy” or “good color constancy” is inapt, as the images do not differ in illumination or surface color, but only in interpretation.

      As I've also noted previously, the proof of understanding the dress is to be able to construct other images with similar properties. So far they've only been found by chance.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 18, Seán Turner commented:

      The genus name Eggerthella is not novel: Eggerthella Wade et al. 1999 (PubMed Id 10319481). In the title, "gen. nov." should be replaced by "sp. nov."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 18, Seán Turner commented:

      The authors refer to the type strain variably as "Marseille P-3114", "Marseille-P3114", and "Marseille-P-3114" in the manuscript. Both "Marseille-P3114" and "Marseille-P-3114" are used in the protologue of the novel species Metaprevotella massiliensis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 17, Seán Turner commented:

      Contrary to the title, Pseudoflavonifractor is not a novel genus (Pseudoflavonifractor Carlier et al. 2010; PubMed Id 19654357).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 10, Melissa Vaught commented:

      Prior randomized controlled trials have examined the impact of organizational social media promotion (i.e., using journals' official social media accounts) on article views (Fox CS, 2015, Fox CS, 2016, Adams CE, 2016) or on article downloads and citations (Tonia T, 2016). With the exception of Adams CE, 2016, no significant effect of social media posting on page views or downloads has been observed. However, a key question has remained: Might sharing by individuals have an effect where publisher promotion has not?

      The trial reported here attempts to address this question directly. The authors enlisted members and trainees on the journal's editorial board to share links to articles from their personal social media accounts (enhanced Twitter intervention). Outcomes were compared to control and to sharing via the journal's official Twitter account (basic Twitter intervention). Selected publications were between 2 months and more than 2 years old at the time of intervention.

      Similar to Fox CS, 2015, Fox CS, 2016, & Tonia T, 2016, posts by the @JACRJournal account did not increase article views (though removing a 'most read' outlier that had been randomized to the control group changed this conclusion). As summarized in the abstract, weekly page views were higher in the enhanced intervention than in the control and basic groups. In fact, the enhanced group outperformed the basic group in all 4 primary and secondary endpoints. The authors found no significant effect of publication age.

      The authors note that the difference between enhanced and basic groups may derive from multiple vs. single posting of a link. The difference in effects is not proportional to the number of posts, and as the authors note, Fox CS, 2016 used a high frequency social media posting to no avail. In addition, the JACR authors observed that 1 team had a much larger effect on page views than the other 3, and the effect did not track with follower count.

      I would first note that some limitations in the methods and/or reporting might influence interpretation of these comparisons. The Methods state that team members were assigned 1 article to tweet per day, and they were to post about each article only once. However, there is no indication that participants' accounts were reviewed to check adherence to the instructions, in particular whether all 4 team members posted the assigned article with a functioning link on the designated day. It was unclear to me whether team members were sent a link on the day they were assigned to post it, or whether links might have been provided in batches with instructions to tweet on the assigned day. The article also does not discuss how teams were assigned and whether members knew who their teammates were. Finally, although the team effect did not correlate with follower number, it would have been useful to know the number of followers for @JACRJournal at the start of the intervention, for comparison.

      Nonetheless, the outsized effect on outcomes for 1 team is interesting. Though largely beyond the scope of this article, additional analytics could provide the basis for some interesting exploratory analysis and might be worth consideration in future studies of this type. At the Twitter account level, the authors reported the number of followers for each team member, but the age and general activity of the account during the intervention period could be relevant. Follower overlap between team members (or more refined user network analysis, as suggested by the authors) might also be informative.

      It might also have been useful to gather tweet-level analytics from team members, to identify high-engagement tweets (e.g., based on URL clicks and/or replies). This could determine whether team performance was driven by a single member, particular publications/topics, or discussion about a publication. I liked that team members composed their own tweets about articles, so that the tweets in the intervention had a chance of having a congruent “voice”/style. Pairing tweet-level analytics with content analysis—even as simple as whether a hashtag or an author’s Twitter handle was included—could offer some insight.

      Overall, I appreciate the authors’ efforts to untangle questions about how organizational and individual social media promotion might differentially influence viewing (and perhaps reading) of scholarly publications.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 03, Lily Chu commented:

      I have written a comment, published in the Annals of Internal Medicine online comments section linked to this article, which can be accessed here:

      http://annals.org/aim/article/2607809/cytokine-inhibition-patients-chronic-fatigue-syndrome-randomized-trial

      I asked whether the authors had considered subgrouping subjects by infectious/inflammatory symptoms and comparing their responses to treatment, and I mentioned two trials using another cytokine inhibitor (of TNF-alpha), etanercept, in the treatment of ME/CFS. Materials related to those trials can be accessed here:

      1. Vallings R. A report from the 5th International AACFS Conference. Available at: http://phoenixrising.me/conferences-2/a-report-from-the-fifth-international-aacfs-conference-by-dr-rosamund-vallings.
      2. Fluge, O. Tumor necrosis factor-alpha inhibition in chronic fatigue syndrome. Available at: https://clinicaltrials.gov/ct2/show/NCT01730495.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 11, Andrew Kewley commented:

      I thank the authors for conducting this study and note that such studies are of value even if the outcome is a null result.

      While I agree with the overall conclusion that subcutaneous anakinra is ineffective, I carefully note that the published manuscript contains an error.

      In the abstract of the article, it states: "At 4 weeks, 8% (2 of 25) of anakinra recipients and 20% (5 of 25) of placebo recipients reached a fatigue level within the range reported by healthy persons."

      The closest reference in the body of the manuscript is the following: "In the anakinra group, 2 patients (8%) were no longer severely fatigued after the intervention period (reflected by a CIS-fatigue score <35 [47]), compared with 5 patients (20%) in the placebo group (difference, -12.0 percentage points [CI, -31.8 to 7.8 percentage points]; P = 0.22)."

      Where the reference [47] was: 47. Wiborg JF, van Bussel J, van Dijk A, Bleijenberg G, Knoop H. Randomised controlled trial of cognitive behaviour therapy delivered in groups of patients with chronic fatigue syndrome. Psychother Psychosom. 2015;84:368-76. [PMID: 26402868] doi:10.1159 /000438867

      However, the claims made in the abstract refer to healthy ranges, which is not the same as no longer being "severely fatigued" as operationalised by a CIS-fatigue score of less than or equal to 35.

      The healthy ranges are instead provided by another study which has also been cited: 41. Vercoulen JH, Alberts M, Bleijenberg G. The Checklist Individual Strength (CIS). Gedragstherapie. 1999;32:131-6.

      That study found that a group of 53 healthy controls (mean age 37.1, SD 11.5) had a mean CIS-fatigue score of 17.3 (SD 10.1). This would give a cut-off for the "healthy range" of ~27. The manuscript of the present RCT does not report how many patients met this cut-off score.
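      (A quick arithmetic check, assuming the conventional definition of the upper bound of the healthy range as the control-group mean plus one standard deviation: $17.3 + 10.1 = 27.4 \approx 27$.)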

      Also of note, a study co-authored by one of the authors of the present study utilised a threshold for a "level of fatigue comparable to healthy people" of less than or equal to 27.

      See: Knoop H, Bleijenberg G, Gielissen MFM, van der Meer JWM, White PD: Is a full recovery possible after cognitive behavioural therapy for chronic fatigue syndrome? Psychother Psychosom 2007; 76: 171–176.

      Therefore the claim made in the abstract of patients reaching "a fatigue level within the range reported by healthy persons" is not based on evidence provided in the manuscript, or is simply incorrect. I ask the authors to provide the results of how many patients in both groups met the criteria of having a CIS-fatigue score of less than 27.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 08, Lydia Maniatis commented:

      The authors discuss “face signals” and “face aftereffects” and “face-selective neurons,” but these terms have no conceptual backing. The underlying basis of the “aftereffects” that have been reported using faces is not known; we could argue (always prematurely) that adaptation to particular contour patterns affects the organization of these contours into faces. To give a parallel, the case of color aftereffects doesn’t license us to posit adaptation of “color-selective” neurons, even though the perception of color is a “high-level” process; we know that they are the perceptual consequences of pre-perceptual, cone-level activity.

      The idea of face-selective neurons is highly problematic, as are all claims that visual neurons act as “detectors.” (They are essentially homuncular, among other problems). What, exactly, is being detected? If we draw any shape and stick two dots on it along the horizontal, it becomes a face. The extent to which the references to visual neural processes in this paper are premature cannot be overstated.

      With respect to functional explanations, if we don’t know the reason for the effects – again, references to “face-selective neurons” are hopelessly vague and have serious logical problems – then it’s premature to speculate about whether they serve a function and what that function might be.

      All of this seems almost moot given the comments in the discussion that “Our results seemingly contradict those of Kiani et al. (2014), who found that identity aftereffects did decay with time (and that this decay was accelerated when another face was presented between adaptor and test).” The authors don’t know why this is, but speculate that “it is likely that the drastically different temporal parameters between the two studies contributed to the different findings… It is plausible that this extended adaptation procedure would have induced larger and more persistent aftereffects than Kiani et al.'s procedure…. It is plausible that aftereffects resulting from short-term fatigue might recover rapidly with time, whereas aftereffects resulting from more structural changes might be more long-lived and require exposure to an unbiased diet of gaze directions to reset.”

      Given that the authors evidently didn’t expect their results to contradict those being discussed, these casual post hoc speculations are not informative. “Plausibility” is not a very high bar, and it is rather subjective. What is clear is that the authors don’t understand (were not able to predict and control for) how variations in their parameters affect the phenomenon of interest.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 19, Paul Grossman commented:

      Unfortunately, the authors of this paper so far refuse to discuss key, and most likely severely flawed, assumptions of their "recommendations" paper in any open forum available to scientists (here, ResearchGate or PubMed Commons: see their reply to my and others' comments on ResearchGate, https://www.researchgate.net/publication/313849201_Heart_Rate_Variability_and_Cardiac_Vagal_Tone_in_Psychophysiological_Research_Recommendations_for_Experiment_Planning_Data_Analysis_and_Data_Reporting )

      I have been active in this field for over 30 years. Vagal tone is not "variability in heart rate between inhalation and exhalation"; the latter is termed respiratory sinus arrhythmia (RSA, or also high-frequency heart-rate variability, HRV) and under very specific conditions may sometimes partially reflect, or be a marker of, cardiac vagal tone. Cardiac vagal tone, on the other hand, is defined as the magnitude of mean heart rate change from one condition to another (e.g. rest to different levels of physical exertion or to pharmacological blockade of parasympathetic control) that is a specific consequence of parasympathetic effects. Obviously the two phenomena are not equivalent: respiratory sinus arrhythmia is an inherently phasic (not tonic) phenomenon (heart rate shifting rhythmically from inspiration to expiration), whereas cardiac vagal tone characterizes the average effect of vagal influences upon heart rate during a particular duration of time. Changes in breathing frequency can have dramatic effects upon the magnitude of RSA without any effects upon cardiac vagal tone. There are also other conditions in which the two phenomena do not change proportionally to each other: e.g. sometimes when sympathetic activity substantially changes; or when efferent vagal traffic to the heart is blocked by chemicals before it can reach the sinoatrial node; or probably when vagal discharge is so great that the vagal traffic saturates the sinoatrial node, leading to profound slowing of heart rate during both inspiration and expiration. These effects are rather clearly shown in the autonomic cardiovascular physiological literature but fail to be acknowledged in much of the psychological or psychophysiological literature. Thus it is plainly wrong to believe that RSA is vagal tone. There is really so much evidence here that is often systematically ignored, particularly by psychologists working in the field.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 13, Paul Grossman commented:

      The issue of the influence of respiration (breathing rate and volume) confounding heart-rate variability (HRV) as an index of within-individual changes of cardiac vagal tone remains inadequately covered in this review. My colleagues' and my 1991 and 1993 papers (Grossman, Karemaker & Wieling, 1991; Grossman & Kollai, 1993), using pharmacological blockade controls, rather conclusively show that respiratory sinus arrhythmia (RSA, or high-frequency HRV) under spontaneously varying rates and/or depths of breathing does not provide an accurate reflection of quantitative within-individual variations in cardiac vagal tone. Our results are also clearly far from the only findings demonstrating this fact (see also the literature of Saul and Berger, as well as others). Yet this rich resource of findings is neither cited nor addressed in the paper. I would be curious to know why. The research clearly and consistently shows that when a person's heart-rate changes from one condition to another are completely vagally mediated (documenting changes in cardiac vagal tone), changes in RSA WILL NOT ACCURATELY REFLECT those variations in cardiac vagal tone whenever breathing parameters substantially change as well: the alterations in RSA amplitude will be much more closely correlated with respiratory pattern changes, but may not at all reflect vagal tone alterations! The proper method to correct for this issue is, however, another question. The crucial point is that this confound must no longer be swept under the carpet. I welcome any dialogue about this from the authors or others.

      A somewhat simpler explanation: We and others (e.g. Grossman, Karemaker & Wieling, 1991; Grossman & Kollai, 1993; Eckberg, various publications; JP Saul, various publications; R Berger, various publications) have consistently shown that changes in breathing rate and volume can easily and dramatically alter heart-rate variability (HRV) indices of cardiac vagal tone without actual corresponding changes in cardiac vagal tone occurring. This point is not at all considered in this or many other HRV papers. Any paper purporting to provide standards in this area must deal with this issue. If the reader of this comment is a typically young healthy person, this point can easily be documented by noting your pulse rate as you voluntarily or spontaneously alter your breathing frequency substantially: slow breathing will bring about an often perceptibly more irregular pulse over the respiratory cycle than fast breathing, but there will be little-to-no change in average heart rate over that time (which would almost certainly have to occur for such dramatic perceptible changes in HRV to reflect cardiac vagal tone: heart rate should slow as vagal tone increases, and should speed as vagal tone decreases, provided there are no sympathetic shifts in activity—extremely unlikely in this little experiment!).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 07, Paul Brookes commented:

      This is an interesting study, but I feel it's a little too close to a study published by my own lab last year... http://www.jbc.org/content/291/38/20188.abstract

      Specifically, several of the findings in this new paper are repeats of findings in our paper (such as the pH sensitive nature of 2HG generation by LDH and MDH, and the discovery that 2HG underlies acid induction of HIF). While it's always great to have your work validated, it's also nice to be CITED when that happens.

      Our study was posted on BioRXiv on May 3rd 2016 (http://biorxiv.org/content/early/2016/05/03/051599), 6 weeks before this paper was submitted to Nature Chem Biol. At that time we also submitted to a journal where the senior author here is on the editorial board, only to be rejected. Our paper was finally published by JBC in August, >4 months before the revised version of this current study was submitted.

      Furthermore, during the revision of our work, another paper came out from the lab of Josh Rabinowitz, showing 2HG generation by LDH and also demonstrating pH sensitivity Teng X, 2016. We put an addendum in our JBC paper to specifically mention this work as a "note added in proof". The current paper doesn't cite the Rabinowitz study either.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 07, Seán Turner commented:

      Strain MCCC 1A03042 is not a type strain of Thalassospira xiamenensis. According to Lai and Shao (2012) [PubMed ID 23209216], strain MCCC 1A00209 is the type.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 16, Sebastien Delpeut commented:

      One sentence was accidentally omitted from the acknowledgments.

      We thank all laboratory members for continuing support and constructive discussion and Angelita Alcos for technical support.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 05, Lydia Maniatis commented:

      “The present findings demonstrate that it is difficult to tease apart low-level (e.g., contrast) and midlevel (e.g., transparency) contributions to lightness phenomena in simple displays… Dissociating midlevel transparency explanations from low-level contrast explanations of the crispening effect will always be problematic, as by definition information is processed by “low-level” mechanisms before higher visual processing areas responsible for the midlevel segmentation of surfaces.”

      As the above passage indicates, the authors of this article are endorsing the (untenable but common) notion that, within a visual percept, some features are reflections of “low-level” processes, i.e. activities of neurons at anatomical levels nearer to the retinal starting point, while other features are reflections of the activities of “mid-level” neurons, later in the anatomical pathway. Still others, presumably, are reflections of the activities of “high-level” neurons. Thus, when we observe that a grey square on a dark grey background appears lighter than the same grey square on light grey background, this is the result of “low-level” firing patterns, while if we perceive a grey film overlying both squares and backgrounds (an effect we can achieve by simply making certain alterations in the wider configuration, leaving the "target" area untouched), this is a consequence of “mid-level” firing activity. And so on. Relatedly, the story goes, we can effectively observe and analyze neural processes at selected levels by examining selected elements of the percepts to which various stimuli give rise.

      These assumptions are not based on any evidence or rational arguments; the arguments, in fact, are all against.

      That such a view constitutes a gross and unwarranted oversimplification of an unimaginably complex system whose mechanics, and the relationships between those mechanics and perception, we are not even close to understanding, should be self-evident.

      Even if this were not the case, the view is paradoxical. It’s paradoxical for many reasons, but I’ll focus on one here. We know that at any given location in the visual percept – any patch – what is perceived – with respect to any and all features – is contingent on the entire area of stimulation. That is, with respect to the percept, we are not dealing with a process of “and-sum.” This has been demonstrated ad infinitum.

      But the invocation of “low-level” processes is simultaneously an invocation of “local” processes. So to say that the color of area “x” in this visual percept is the product of local process “y” is tantamount to saying that for some reason, the normal, organized feedback/feedforward response to the retinal stimulation stopped short at this low-level. But when and how does the system decide when to stop processing at the lower-level? Wouldn't some process higher up, with a global perspective, need to ok this shutting down of the more global process (to be sure, for example, that a more extended view doesn’t imply transparency)? And if so, would we still be justified in attributing the feature to a low-level process?

      In addition, the “mid-level segmentation of surfaces” has strong effects on perceived lightness; are these supposed to be added to the “low-level contrast effects” (with the "low-level" info simultaneously underpinning the "mid-level" activity)? A rationale is desperately needed.

      Arbitrarily interpreting the visual percept in terms of piecemeal processes for one feature and semi-global processes for another and entirely global processes for a third, and some or all at the same time, is not a coherent position.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 07, Lydia Maniatis commented:

      We can have an interesting conversation right here. Unless you clarify your assumptions, and properly control your variables and sample sizes, further research efforts will be wasted. Every difference in conscious experience is inevitably correlated with differences in physiology. The trick is in how you interpret the inevitable variations in the latter. (At this point in our understanding of brain function, I submit that such efforts are extremely premature.) In your case, you don't even seem to know what experience you are trying to correlate with brain activity.

      I believe that your short stimulus duration and the fixation on the red spots may have biased the perception of figure to the corresponding surfaces in older viewers. You clearly don't have an alternative hypothesis that doesn't suffer from serious logical problems (as I noted in an earlier comment).

      Peer reviewers are obviously not infallible, which is why this forum exists, and invoking them doesn't count as a counterargument.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 06, Jordan W Lass commented:

      There are many interesting followups to our work indeed, some of which I believe merit further study. You have begun to identify some of these, and it seems to me there is the possibility for a constructive conversation to be had here. I highly encourage you to stop by our upcoming poster at the Vision Sciences Society conference in Florida in May, where we extend this work by exploring electrophysiological correlates of performance on this task in various conditions in both age groups. Our research group would be happy to discuss the issues you are taking with our work, as well as potentially-fruitful followups that can further address the questions we have raised in this work and that you have touched in some of the above comments.

      I believe that, especially due to the presentation of this work at a number of conferences where I was challenged by experts in the field who helped me formulate and refine the ideas presented, as well as the rigorous peer-review editorial process leading to the publication of this work in a high-quality journal, the rationale of our hypothesis and the interpretation of our results are clearly laid out in the paper.

      Thank you again for your keen interest in this work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Apr 06, Jordan W Lass commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Apr 05, Lydia Maniatis commented:

      You say that what you mean by “reduced ability to resolve figure-ground competition…is an open question.” But the language is clear, and regardless of whether concave or convex regions are seen as figure, the image is still being resolved into figure and ground. In other words, your experiments in no sense provide evidence that older people are not resolving images into figure and ground, only that convexity may not be as dispositive a factor as in younger people. Perhaps they are influenced more by the location of the red dot, as I believe that it is more likely that fixated regions will be seen as figure, all other things being equal.

      In your response you specify that 'failure to resolve' may be interpreted in the sense of "decreased stability of the dominant percept and increased flipping." However, in your discussion you note that, on the contrary, other researchers have found increased stability of the initial percept and difficulty in reversing ambiguous stimuli in older adults. If your inhibition explanation is consistent with BOTH increased flipping and greater stability, then it's clearly too flexible to be testable. And, again, an increased flipping rate is not really the same thing as an "inability to resolve."

      The second alternative you propose is that stimuli are "not perceived to have figure ground character, perhaps being perceived as flat patterns." This is obviously also in conflict with the other studies cited above. If the areas are perceived as adjacent rather than as having a figure-ground relationship, this also involves perceptual organization. For normal viewers, such a percept – e.g., simultaneously seeing both the faces and the vase in the Rubin vase – is very difficult, so it is hard to imagine it occurring in older viewers, but who knows. If such an idea is testable, then you should test it.

      You say the logic of your hypothesis is sound and your interpretations parsimonious, but in fact it isn't clear what your hypothesis is (i.e., what 'failure to resolve' means). If your results are replicable, you may have demonstrated that, under the conditions of your experiment, convexity is a less dispositive factor in older adults. But in no sense have you properly formed or tested any explanatory hypotheses as to why this occurred.

      In addition, I don't think it's fair to say that you've excluded the possible effect of the brevity of the stimulus. 250 ms is still pretty short, considering that saccades typically take about 200 ms to initiate. We know that older people generally respond more slowly at any task. The fact that practical considerations make it hard to work with longer exposure times doesn't make this less of a problem.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Apr 04, Jordan W Lass commented:

      The interpretation that “differences in the ability to resolve the competition between alternative figure-ground interpretations of those stimuli" comes from the combination of results across experiments, and the literature on figure-ground and convexity context effects in specific. Given that we used a two-alternative forced choice paradigm, which has been commonly used to measure perception even when stimuli are presented below threshold, chance performance is P(convex=figure) = .5. Our observation was regressions to chance in the older group in both convexity bias and CCEs, which is consistent with the interpretation that the older group showed reduced ability to resolve figure-ground competition. Interestingly, as you may be getting at, what "reduced ability to resolve figure-ground" means is an open question: could it be decreased stability of the dominant percept and increased flipping between them or time spent in transition states? could it be that the stimuli are not perceived to have figure-ground character, perhaps being perceived as flat patterns? These are interesting questions indeed, which your idea of adding another response option "no figure-ground observed" is one way of addressing, although it comes with its own set of limitations.

      Alternatively, as you propose, it may be the case that the older adults are resolving figure-ground as well as younger adults, but with an increased tendency to perceive concave figures compared to the younger group, which would also bring P(convex=figure) closer to .5. However, I can think of no literature or reasoning as to why that would be the case, so I see that as a less parsimonious interpretation. I am intrigued, though, and if you are able to develop a hypothesis as to why this would be the case, it could make for an interesting experiment that might shed light on the nature of figure-ground organization in healthy aging.

      Critically, the results of Experiment 4 showed a strong CCE in older adults when only concave regions were homogeneously coloured, which is a stimulus class that has been shown to be processed more quickly in younger adults (e.g., Salvagio and Peterson, 2012). Since no conCAVity-context effects were observed when only convex regions were homogeneously coloured (the opposite stimulus properties of the reduced competition stimuli), the Experiment 4 results are strongly supportive of the notion that older adults do show the CCE pattern well-characterized in younger adults, but that the high competition stimuli used in Experiment 1 are particularly difficult for them to resolve.

      The logic of our hypothesis is sound, and our interpretation is the most parsimonious we are aware of based on all the results. Thank you for your question; I would be happy to discuss further if you would like further clarification, or are interested in discussing some of these interesting followups.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Apr 04, Lydia Maniatis commented:

      If I’m reading this paper correctly, there’s a problem with the logic of the argument.

      The finding is that older people are less likely than young people to see convex regions of a stimulus as figure. The authors say that this implies age “differences in the ability to resolve the competition between alternative figure-ground interpretations of those stimuli.”

      However, the question they are asking in Exp’t1 is whether a red spot is seen as “on or off the region perceived as figure.” This implies that in every case, one of two border regions is seen as figure; at least, the authors don’t suggest that older people saw neither region as figure - and the question doesn’t allow for this possibility. So my question is, why doesn’t seeing the concave region as figure count as a resolution, inhibitory-competition-wise?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 04, Lydia Maniatis commented:

      What Ratnasingam and Anderson are doing here is analogous to this imaginary example: Let’s say that I have a strong allergy to food x, a milder one to food y, none to food z, and so on, and that my allergies produce various symptoms. Let’s assume also that some of these effects can be interpreted fairly straightforwardly in terms of formal structural relationships between my immune system and the molecular components of the foods, and others not. For these others, we can assume either a functional rationale or perhaps consider them a side effect of structure or function. We don’t know yet. For other individuals, other allergy/food combinations have corresponding effects. Again, if we know something about the individual we can predict some of the allergic reactions based on known principles.

      How much sense would it make now, to conduct a study whose goal is: “to articulate general principles that can predict when the size of an allergic reaction will be large or small for arbitrarily chosen food/patient combinations…. What (single) target food generates the greatest allergic difference when ingested by two arbitrarily chosen patients?” (“Our goal is to articulate general principles that can predict when the size of induction will be large or small for arbitrarily chosen pairs of center-surround displays…. What (single) target color generates the greatest perceptual difference when placed on two arbitrarily chosen surround colors?”)

      Furthermore, having gotten their results, our researchers now decline to attempt to interpret them in terms of the nuanced understanding already available.

      The most striking thing about the present study is that a researcher who has done (unusually) good work in studying the role of structure and chromatic/lightness relationships in the perception of color is now throwing all this insight overboard, ignoring what is known about these factors and lumping them all together, in the hope of arriving at some magic, universal formula for “simultaneous contrast” that is blind to them. Obviously the effort is bound to fail, and the title – framed as a question, not an answer – is evidence of this. Here is a sample, revealing caveat:

      “Finally, it should also be noted that although some of our comparisons involved target–surround combinations in which some targets can appear as both an increment and decrement relative to the two surrounds, which would induce differences in both hue and saturation (e.g., red and green). Such pairs may be rated as more dissimilar than two targets of the same hue (e.g., red and redder), but it could be argued that this does not imply that the size of simultaneous contrast is larger in these conditions. However, it should be noted that such conditions are only a small subset of those tested herein.” Don’t bother us with specifics, we’re lumping.

      As the authors discuss in their introduction, studies (treating “simultaneous contrast” in a crude, structure-and-relationship-blind way) produce conflicting results: “The conflicting empirical findings make it difficult to articulate a general model that predicts when simultaneous contrast effects will be large or small, since there is currently no model that captures how the magnitude of induction varies independently of method used…. “ Of course. When you don’t take into account relevant principles, and control for relevant factors, your results will always mystify you.

      The conflation of, or refusal to distinguish explicitly between, cases in which transparency arises and cases in which it does not is really inexplicable.

      "The suggestion that the strongest forms of simultaneous contrast arise in conditions that induce the perception of transparency gains conceptual support from evidence showing that transparency can generate dramatic transformations in both perceived lightness and color..." But the contextual conditions that produce transparency are really quite...transparent...There's no clear reason to lump these with situations that are perceptually and logically distinct.

      Also: "In simultaneous contrast displays, the targets and surrounds are also texturally continuous, in the sense that they are both uniform, but there are no strong geometric cues for the continuation of the surround through the target region of the kind known to give rise to vivid percepts of transparency (such as contours or textures). It is therefore difficult to generate a prediction for when transparency should be induced in homogeneous center-surround patterns, or how the induction of transparency should modulate the chromatic appearance of a target as a function of the chromatic difference between a target and its surround."

      First, I'll pay him the compliment of saying that I don't think that it would be that difficult for Anderson to generate predictions for when transparency should occur...(I think even I could do it). Second, if this theoretical gap really exists, then this is the problem that should be addressed, not "what happens if we test a lot of random combinations and average the results." It might be useful to take into consideration a demo devised by Soranzo, Galmonte & Agostini (2010), which is a case of a transparency effect that lacks the "cues" mentioned here - and thus by these authors' criteria qualifies as a basic simultaneous contrast display. (I don't think it's that difficult to explain, but maybe I haven't thought about it enough.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 09, Lydia Maniatis commented:

      First, I apologize for the error vis a vis the open-loop experiment.

      With respect to "very standard procedures:" Vision science is riddled with references to "standard," "popular," "traditional" "common" "widely used" procedures that have no theoretical rationale. "It is considered safe" is also not a rationale.

      With respect to fitting, you calculate r-squared by fitting data in the context of very specific conditions whose selection is without a clear rationale, and thus it is very likely that conditions similar in principle but different in detail would yield different results. For example, you use Gabors, which are widely used but seem to be based on the idea that the visual process performs a Fourier analysis - and a local one at that - for which there are no arguments in favor.

      Your findings don't warrant any substantial conclusion. You claim in the paper that: “we have shown that eye and hand movements made toward identical targets can end up in different locations, revealing that they are based on different spatial information.” Only the former claim is true.

      Your prior discussion reveals that the conclusions couldn't be more speculative and go far beyond the data : “This difference between hand movements and saccades might reflect the different functional specificity of the two systems… One interpretation of the current results is that there are two distinct spatial maps, or spatial representations, of the visual world.”

      Your arguments in favor of this explanation are peppered with casual assumptions: "...the priority for the saccade system might be to shift the visual axis toward the target as fast as possible with little cost for small foveating errors. If integrating past sensory signals with the current input increases processing time (Greenwald, Knill, & Saunders, 2005), the saccadic system might prefer to use current input and maximize the speed of the eye movement. For hand movements instead, a small error might make the hand miss its target with potentially large behavioral costs."

      Might + might + might + might (eleven in all in the discussion) means that the effect that you report is far too limited in its implications to warrant the claim you make, quite unequivocally, in your title. Most of your "mights" are not currently testable, and almost certainly represent an overly simplistic view of a system of which we have only the crudest understanding at the neural level. The meaning of the term "sensory signals" also needs clarification, as it also implies a misunderstanding of the nature of perceptual processes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 07, Matteo Lisi commented:

      Thanks for the interest in our study.

      I invite you to read the article more carefully; the experiment that you describe as "post-hoc" is actually the open-loop pointing experiment: the details of the method are described on pages 4-5, and the results are reported on page 7.

      The truncation of response latencies is a very standard procedure, used to prevent extreme spurious RTs (e.g. due to anticipation or attention lapses) from being included in the analysis. While there is no universal agreement on the ideal procedure for selecting cut-off criteria, it is considered safe to use extreme cut-offs that result in the exclusion of only a tiny fraction of trials (<0.5%); see for example the recommendations by Ulrich R, 1994, page 69.
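      As a minimal sketch of this kind of extreme cut-off truncation (illustrative only, not the analysis code used in the paper; the column name, thresholds and toy data are assumptions chosen to mirror the values quoted above):

      import pandas as pd

      # Toy trial-level data: one saccade latency (ms) per trial.
      trials = pd.DataFrame({"latency_ms": [250, 90, 310, 640, 275, 500, 230, 415]})

      # Extreme cut-offs: drop likely anticipations (<100 ms) and lapses (>600 ms).
      kept = trials[trials["latency_ms"].between(100, 600)]

      excluded = 1 - len(kept) / len(trials)
      print(f"excluded {excluded:.2%} of trials")  # should be only a tiny fraction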

      I am confused by your comment regarding the "fitting". The statistical model used in the analysis is a standard multivariate linear regression, with the x, y location of the response as dependent variables. It doesn't require any particular assumption, other than the usual assumptions of all linear models (such as independence of errors, homoscedasticity, normally-distributed residuals), which were not violated in our dataset. These are the same assumptions required also by other common linear models such as simple linear regression and ANOVA.
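      To make the model concrete, here is a minimal sketch of a multivariate linear regression with the x, y response location as a two-column dependent variable (illustrative only; the toy data and variable names are assumptions, not the study's actual data or code):

      import numpy as np

      # Toy data: one row per trial. Predictor: horizontal target position (deg);
      # responses: x and y coordinates of the movement endpoint (deg).
      target_x = np.array([-8.0, -4.0, 0.0, 4.0, 8.0, -8.0, 4.0, 8.0])
      endpoints = np.array([[-7.5, 0.3], [-3.8, -0.1], [0.2, 0.4], [4.1, 0.0],
                            [7.6, -0.2], [-7.9, 0.1], [3.7, 0.2], [8.2, -0.3]])

      # Design matrix with an intercept; lstsq fits both response columns at once,
      # i.e. an ordinary multivariate (multiple-response) linear regression.
      X = np.column_stack([np.ones_like(target_x), target_x])
      coef, *_ = np.linalg.lstsq(X, endpoints, rcond=None)
      print(coef)  # rows: intercept, slope; columns: x endpoint, y endpoint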


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Mar 07, Lydia Maniatis commented:

      The discussion of this paper refers to a post hoc experiment whose results were apparently used in the analysis of the reported results, but it gives details of neither the methods nor the results, nor any citation. This seems oddly casual:

      "To test this hypothesis, we repeated the pointing task in a condition in which vision was blocked during the execution of the movement by means of shutter glasses (open-loop hand pointing), making it impossible to use visual feedback for online correction of the hand movement. The results of this experiment replicated those of the experiment with normal pointing; the only difference was a moderate increase in the variability of finger landing positions, which is reflected in the decreased r2 values of the model used to analyze pointing locations in the open-loop pointing condition with respect to the “normal” pointing condition (see also Figures 1D, E and 2)."

      As is usual but hard to understand, an author made up a large proportion of the subjects (1/6), confusing the issue of whether naivete is or is not an important condition to control: "all [subjects] except the author were naïve to the specific purpose of the experiments."

      And this:

      "In the experiment involving saccades, we excluded trials with latency less than 100 ms or longer than 600 ms (0.36% of total trials); the average latency of the remaining trials was 279.89 ms (SD = 45.88 ms). In the experiment involving pointing, we excluded trials in which the total response time (i.e., the interval between the presentation of the target and the recording of a touch response on the tactile screen) was longer than 3 s (normal pointing: 0.45% of total trials; open-loop pointing: 0.26% of total trials). The average response time in the remaining trials was 1213.53 ms (SD = 351.61 ms) for the experiment with normal pointing and 1004.70 ms (SD = 209.83 ms) for the experiment with open-loop pointing."

      It's not clear whether this was a post hoc decision, or, whether planned or not, what the rationale was.

      As usual, there was a lot of fitting, using assumptions whose rationale is also not clear.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 06, Lydia Maniatis commented:

      The authors’ theoretical position does not seem coherent. They are making an unintelligible distinction between what they call “low-level” stimulus features – among which they list “brightness, contrast, color or spatial frequency” – on the one hand, and “high-level information such as depth cues.” The latter include “texture and shading.” But in an image, the latter are simply descriptions of perceptual effects of variations in luminance, etc. For example, in a black and white photo what we might refer to as “shading” is objectively changes in the luminance of the surface, and the reaction of our visual system to these variations. Similarly for texture. So when they say that “The perception of depth can have the effect of over-riding some of the salient 2-D cues,” one wonders whether they mean to suggest that the “perception of depth” is based on some kind of clairvoyance. And when they say that “The results lend support to a depth cue invariant mechanism for object inspection and gaze planning” they’re basically just saying “how we look at something depends on what it looks like.” And what it looks like depends on…

      With respect to the division of perceptual features into “high-level” and “low-level,” this is also a theoretical non-starter, which I’ve discussed in various comments, including a recent one on Schmid and Anderson (2017), copied below.

      The methods are for the most part pre-packaged, from various sources. Their theoretical underpinnings are questionable. The figure 2 example of a face based on texture just doesn’t look like a face at all. We’re told that it was generated using the method described by Liu et al (2005). I guess that will have to do…The use of forced choices is indefensible, resulting in the loss of information and the need to invent untestable “guess rates” and “lapse rates:” “The guessing and the lapse rates were fixed to 0.25 and 0.001, respectively.” The stimuli were very ambiguous, rendering the recognition task difficult, which necessitated certain post hoc measures to clean up the data. Basically we end up comparing a couple of arbitrary manipulations without any interpretable theoretical significance.

      From comment on Schmid and Anderson (2017) https://pubpeer.com/publications/8BCF47A7F782E357ECF987E5DBFC55#fb117951

      “The present findings demonstrate that it is difficult to tease apart low-level (e.g., contrast) and midlevel (e.g., transparency) contributions to lightness phenomena in simple displays… Dissociating midlevel transparency explanations from low-level contrast explanations of the crispening effect will always be problematic, as by definition information is processed by “low-level” mechanisms before higher visual processing areas responsible for the midlevel segmentation of surfaces.”

      As the above passage indicates, the authors of this article are endorsing the (untenable but common) notion that, within a visual percept, some features are reflections of “low-level” processes, i.e. activities of neurons at anatomical levels nearer to the retinal starting point, while other features are reflections of the activities of “mid-level” neurons, later in the anatomical pathway. Still others, presumably, are reflections of the activities of “high-level” neurons. Thus, when we observe that a grey square on a dark grey background appears lighter than the same grey square on light grey background, this is the result of “low-level” firing patterns, while if we perceive a grey film overlying both squares and backgrounds (an effect we can achieve by simply making certain alterations in the wider configuration, leaving the "target" area untouched), this is a consequence of “mid-level” firing activity. And so on. Relatedly, the story goes, we can effectively observe and analyze neural processes at selected levels by examining selected elements of the percepts to which various stimuli give rise.

      These assumptions are not based on any evidence or rational arguments; the arguments, in fact, are all against.

      That such a view constitutes a gross and unwarranted oversimplification of an unimaginably complex system whose mechanics, and the relationships between those mechanics and perception, we are not even close to understanding, should be self-evident.

      Even if this were not the case, the view is paradoxical. It’s paradoxical for many reasons, but I’ll focus on one here. We know that at any given location in the visual percept – any patch – what is perceived – with respect to any and all features – is contingent on the entire area of stimulation. That is, with respect to the percept, we are not dealing with a process of “and-sum.” This has been demonstrated ad infinitum.

      But the invocation of “low-level” processes is simultaneously an invocation of “local” processes. So to say that the color of area “x” in this visual percept is the product of local process “y” is tantamount to saying that for some reason, the normal, organized feedback/feedforward response to the retinal stimulation stopped short at this low-level. But when and how does the system decide when to stop processing at the lower-level? Wouldn't some process higher up, with a global perspective, need to ok this shutting down of the more global process (to be sure, for example, that a more extended view doesn’t imply transparency)? And if so, would we still be justified in attributing the feature to a low-level process?

      In addition, the “mid-level segmentation of surfaces” has strong effects on perceived lightness; are these supposed to be added to the “low-level contrast effects” (with the "low-level" info simultaneously underpinning the "mid-level" activity)? A rationale is desperately needed.

      Arbitrarily interpreting the visual percept in terms of piecemeal processes for one feature and semi-global processes for another and entirely global processes for a third, and some or all at the same time, is not a coherent position.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 03, M Mangan commented:

      There's a strong response to this by the NASEM. Full version: http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=312017b&_ga=1.59874159.399318741.1481664833

      "....The National Academies of Sciences, Engineering, and Medicine have a stringent, well-defined, and transparent conflict-of-interest policy, with which all members of this study committee complied. It is unfair and disingenuous for the authors of the PLOS article to apply their own perception of conflict of interest to our committee in place of our tested and trusted conflict-of-interest policies...."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 09, Norberto Chavez-Tapia commented:

      Since the first descriptions of pioglitazone treatment in NAFLD we have been skeptical; unfortunately, at present we still lack pharmacological treatment options for this prevalent disease. Despite its favorable results, this manuscript still does not adequately guide clinicians in decision making. Based on this, we performed trial sequential analysis to assess the reliability of the data in the cumulative meta-analysis. For improvement in advanced cirrhosis for all NASH patients, the analysis shows robustness: the accrued sample size crosses the monitoring boundaries, indicating that the available data are more than enough to sustain the conclusion. This statistical robustness is accompanied by a number needed to treat for improvement in advanced cirrhosis for all NASH patients of 14 (95% CI 8.9-29.7) for rosiglitazone-pioglitazone and 11 (95% CI 7.2-22) for pioglitazone. The best candidates for this therapy are those with advanced fibrosis; in this case the number needed to treat is 3 (95% CI 1.8-4.1) for rosiglitazone-pioglitazone and 2 (95% CI 1.4-2.9) for pioglitazone. The clinical decision should be balanced against the risks described earlier with pioglitazone use, one of the most relevant being the increased risk of bladder cancer (HR 2.642; 95% CI 1.106-6.31, p=0.029), with a number needed to harm of 1200. These data emphasize the need for suitable assessment of fibrosis, so that pioglitazone is offered to highly selected patients who will derive the greatest benefit while reducing exposure to potential adverse effects. Finally, the references in the figures are incorrect, making it difficult to follow the original sources.
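      (For readers translating these figures into absolute risk differences: assuming the NNTs were computed in the usual way as the reciprocal of the absolute risk difference, $\mathrm{NNT} = 1/\mathrm{ARR}$, an NNT of 14 corresponds to an ARR of roughly $1/14 \approx 7.1\%$, and the confidence bounds of 8.9 and 29.7 correspond to ARRs of roughly $11\%$ and $3.4\%$, respectively.)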


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 22, NephJC - Nephrology Journal Club commented:

      This trial comparing peritoneal dialysis with furosemide in pediatric post-operative acute kidney injury was discussed on May 23rd and 24th 2017 on #NephJC, the open online nephrology journal club. Introductory comments written by Michelle Rheault are available at the NephJC website here. The discussion was quite detailed, with over 90 participants, including pediatric and adult nephrologists and fellows, and was joined by author Dave Kwiatkowski. The highlights of the tweetchat were:

      • The authors should be commended for designing and conducting this important trial, with funding received from the American Heart Association–Great Rivers Affiliate and the Cincinnati Children’s Hospital Medical Center

      • Overall, it was thought to be a well-designed and well-conducted trial, with possible weaknesses being the use of bolus (rather than continuous infusion) furosemide in the control arm and the use of negative fluid balance at day 1 as an important outcome

      • The results were thought to be quite valid and important and, given the not uncommon risk of acute kidney injury and fluid overload in this setting, it was felt that preemptive peritoneal dialysis catheters should be considered more often in children at high risk

      Transcripts of the tweetchats, and curated versions as Storify, are available from the NephJC website. Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 07, Claudiu Bandea commented:

      Are the conclusions of the Lancet Neurology article by Stopschinski and Diamond flawed?

      In their article entitled “The prion model for progression and diversity of neurodegenerative diseases”(1), Barbara Stopschinski and Marc Diamond conclude: “We do not know if common neurodegenerative diseases (e.g. Alzheimer's disease, Parkinson's disease, ALS and Huntington's disease) involve transcellular propagation of protein aggregation, as predicted by the prion model. Until specific interventions are able to block protein propagation and successfully treat patients, this model will be mainly speculative” (italics and parenthesis added).

      Given that one of the authors, Marc Diamond, published several previous articles in which he refers to the primary proteins implicated in neurodegenerative diseases as 'prions' [see for example: “Tau Prion Strains Dictate Patterns of Cell Pathology, Progression Rate, and Regional Vulnerability In Vivo” (2) and “Propagation of prions causing synucleinopathies in cultured cells” (3)], it is surprising to learn in this new article (1) that, in fact, we do not know if these neurodegenerative disorders are caused by 'prions'. Does this mean that there are no “Tau Prion Strains” and there are no “Propagation of prions causing synucleinopathies”?

      Also, how do the authors reconcile their conclusion with the following statement in the Abstract of a concurrently published paper entitled “Cellular Models for the Study of Prions”(4): “It is now established that numerous amyloid proteins associated with neurodegenerative diseases, including tau and α-synuclein, have essential characteristics of prions, including the ability to create transmissible cellular pathology in vivo”?

      Further, Stopschinski and Diamond state that “until specific interventions are able to block protein propagation and successfully treat patients” the prion model remains speculative. If that’s the case, have these “specific interventions” (which would prove that Alzheimer's, Parkinson's, ALS and Huntington's are indeed caused by ‘prions') been used to also validate that the disorders traditionally defined as ‘prion diseases’, such as Creutzfeldt-Jakob disease, are indeed caused by 'prions'? If not, is the prion model for Creutzfeldt-Jakob disease just speculative?

      In their outline of future directions, the authors write: “Given the wide ranging role of self-replicating protein aggregates in biology, we propose that pathological aggregation might in fact represent a dysregulated, but physiological function of some proteins—ie, the ability to change conformation, self-assemble, and propagate” (1).

      This is a remarkable statement in that it points to the radical idea that the pathological aggregation of the proteins implicated in neurodegenerative diseases, such as tau, amyloid β, α-synuclein and ‘prion protein’, is an intrinsic phenomenon associated with their physiological function, which is a profound departure from the conventional view presented in thousands of publications over the last few decades. However, I have a problem with the formulation of the statement, specifically with “…we propose…”. The authors might not be fully familiar with the literature on neurodegenerative diseases, but what they are proposing has been the primary topic of articles published several years ago (e.g. 5, 6).

      Given the extraordinary medical, public health and economic burden associated with neurodegenerative diseases, it should be expected for the authors or the editorial team/reviewers (7) to address the questions and issues posted in this comment.

      References

      (1) Stopschinski BE, Diamond MI. 2017. The prion model for progression and diversity of neurodegenerative diseases. Lancet Neurology. doi: 10.1016/S1474-4422(17)30037-6. Stopschinski BE, 2017

      (2) Kaufman SK, Sanders DW, Thomas TL et al. 2016. Tau Prion Strains Dictate Patterns of Cell Pathology, Progression Rate, and Regional Vulnerability In Vivo. Neuron. 92(4):796-812. Kaufman SK, 2016

      (3) Woerman AL, Stöhr J, Aoyagi A, et al. 2015. Propagation of prions causing synucleinopathies in cultured cells. Proc Natl Acad Sci U S A. 112(35):E4949-58. Woerman AL, 2015

      (4) Holmes BB, Diamond MI. 2017. Cellular Models for the Study of Prions. Cold Spring Harb Perspect Med. doi: 10.1101/cshperspect.a024026. Holmes BB, 2017

      (5) Bandea CI. 2009. Endogenous viral etiology of prion diseases. Nature Precedings. http://precedings.nature.com/documents/3887/version/1/files/npre20093887-1.pdf

      (6) Bandea CI. 2013. Aβ, tau, α-synuclein, huntingtin, TDP-43, PrP and AA are members of the innate immune system: a unifying hypothesis on the etiology of AD, PD, HD, ALS, CJD and RSA as innate immunity disorders. bioRxiv. doi:10.1101/000604; http://biorxiv.org/content/biorxiv/early/2013/11/18/000604.full.pdf

      (7) George S, Brundin P. 2017. Solving the conundrum of insoluble protein aggregates. Lancet Neurol. doi: 10.1016/S1474-4422(17)30045-5. George S, 2017


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 05, Anders von Heijne commented:

      If you like this article, you simply must read this: Lawson AE, Daniel ES. Inferences of clinical diagnostic reasoning and diagnostic error. J Biomed Inform. 2011 Jun;44(3):402-12. The US philosopher Charles Sanders Peirce has a lot of important things to say about how we create our hypotheses!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 18, Tamás Ferenci commented:

      I congratulate the authors for making the results of Nolte et al. much more accessible. To further facilitate the application and investigation of this model, I have implemented their reparameterized version in mrgsolve, a free and open-source package that runs in R.

      The model is available at https://github.com/tamas-ferenci/NolteWeisser_AluminiumKinetics.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 29, Dalia Al-Karawi commented:

      Dear Authors, This meta-analysis was previously published in Phytotherapy Research in February 2016. The data included in this meta-analysis are identical to those in the one we published before. The original publication, The Role of Curcumin Administration in Patients with Major Depressive Disorder: Mini Meta-Analysis of Clinical Trials, included the same studies and quantified the same effect size as this paper. The same goes for the interpretation of the data and the conclusions you reached. I wonder what this paper adds on the topic that was not reported before?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 05, Anil Makam, MD, MAS commented:

      We thank Dr. Atkin and colleagues for publishing long-term outcomes after a one-time screening flexible sigmoidoscopy.(1) Although we agree that this strategy reduces the relative risk of colorectal cancer (CRC) diagnoses and death, we disagree with the methods the authors used to calculate the absolute magnitude of benefit, which is critical in determining whether screening is actually worth the burden, harms and cost.(2) By relying on per protocol analyses, the authors overestimate the absolute benefit of flexible sigmoidoscopy given healthy user and adherer biases inherent in preventive health interventions— i.e., those who adhere to CRC screening also have other behaviors that reduce their overall risk of cancer and death (e.g. diet, smoking habits, exercise, etc.) independent of the screening test itself.(3, 4) There is strong evidence for the presence of these biases in the UK Flexible Sigmoidoscopy Screening Trial given the marked differences in all-cause mortality within the invited group when stratified by those who were adherent versus those who were not adherent to flexible sigmoidoscopy (20.7% versus 29.5%), a screening test that does not reduce overall mortality. Assessing the absolute benefits for screening from the intention-to-treat analyses gives the most accurate estimates and avoids the pitfalls of these biases. This approach results in markedly attenuated estimates of the benefits (Table). Because screening does not save lives (number needed to screen of infinity for all-cause mortality), accurate estimates of the absolute benefit on reducing CRC diagnoses and CRC-related death are key to informing decision aid development and shared decision making.

      See Table here: https://twitter.com/AnilMakam/status/860490959225847809
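
      As an illustration of the arithmetic behind such a table, the sketch below computes the absolute risk reduction (ARR) and number needed to screen (NNS) from intention-to-treat event counts; the counts used here are hypothetical placeholders, not the trial's data.

      # Hypothetical illustration of ARR and NNS computed from intention-to-treat
      # event counts (the numbers are placeholders, not data from the UK trial).

      def arr_and_nns(events_invited, n_invited, events_control, n_control):
          """Return (absolute risk reduction, number needed to screen)."""
          risk_invited = events_invited / n_invited
          risk_control = events_control / n_control
          arr = risk_control - risk_invited
          nns = float("inf") if arr <= 0 else 1 / arr
          return arr, nns

      # e.g. 30 CRC deaths per 10,000 invited vs 40 per 10,000 controls (invented)
      arr, nns = arr_and_nns(30, 10_000, 40, 10_000)
      print(f"ARR = {arr:.4f} ({arr * 100:.2f} percentage points), NNS = {nns:.0f}")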

      Anil N. Makam, MD, MAS; Oanh K. Nguyen, MD, MAS

      Department of Internal Medicine, UT Southwestern Medical Center, Dallas, Texas, USA

      We declare no competing interests.

      REFERENCES
      (1) Atkin W, Wooldrage K, Parkin DM, Kralj-Hans I, MacRae E, Shah U, Duffy S, Cross AJ. Long term effects of once-only flexible sigmoidoscopy screening after 17 years of follow-up: the UK Flexible Sigmoidoscopy Screening randomised controlled trial. Lancet. 2017.
      (2) Makam AN, Nguyen OK. An Evidence-Based Medicine Approach to Antihyperglycemic Therapy in Diabetes Mellitus to Overcome Overtreatment. Circulation. 2017;135(2):180-195.
      (3) Shrank WH, Patrick AR, Brookhart MA. Healthy user and related biases in observational studies of preventive interventions: a primer for physicians. J Gen Intern Med. 2011;26(5):546-550.
      (4) Imperiale TF, Monahan PO, Stump TE, Glowinski EA, Ransohoff DF. Derivation and Validation of a Scoring System to Stratify Risk for Advanced Colorectal Neoplasia in Asymptomatic Adults: A Cross-sectional Study. Ann Intern Med. 2015;163(5):339-346.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 03, Peter Hajek commented:

      Meta-analyses do not normally depart from a widely accepted standard, and this field has one (see the more than a dozen Cochrane meta-analyses, the Russell Standard, and virtually every other norm in this field). As far as I know, no meta-analysis or individual study over the past 20 years or so has included completers only. As explained earlier, the key point is that ‘missingness’ in this field is not considered random. If, among 100 smokers who had treatment, only 10 answer follow-up calls and report abstinence, the success rate is considered to be 10%, not the 100% you would report.

      Re: studies that exclude treatment successes, imagine a treatment with good efficacy that helps 50% of patients, but only the 50% who were not helped are followed up. These treatment failures may have worse outcomes than a random comparator group (they could have been treatment resistant, e.g. because they have a more severe condition or other adverse circumstances). Your approach would interpret the finding as showing that the treatment is not only ineffective, but that it causes harm – when in fact it shows no such thing and the result is simply an artefact of the selection bias.

      I appreciate that getting these things right can be difficult and would leave this alone if it was not such an important topic open to ideological misuse. And I agree that more studies are needed for the definitive answers to emerge.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 17, Regina El Dib commented:

      In his comment, Dr. Hajek states that 'in smoking cessation trials, drop-outs are classified as non-abstainers'. The approach to dealing with missing data in a meta-analysis is different from that in trials. A survey of the methods literature identified four proposed approaches for dealing with missing outcome data when conducting a meta-analysis (https://www.ncbi.nlm.nih.gov/pubmed/26202162). All approaches recommended the use of a complete case analysis as the primary analysis. This is exactly how we conducted our meta-analysis (figure 5 in the published paper); the pooled relative risk (RR) was 2.03 (95% CI 0.94 to 4.38) for smoking cessation with ENDS relative to ENNDS.

      The same proposed approaches recommended additional sensitivity analyses using different imputation methods. The main purpose of these additional analyses is to assess the extent to which missing data may be biasing the findings of the primary analysis (https://www.ncbi.nlm.nih.gov/pubmed/23451162). Accordingly, we have conducted two sensitivity analyses respectively assuming that all participants with missing data had success or failure in smoking cessation. When assuming success, the pooled RR was 0.95 (95% CI 0.76 to 1.18, p=0.63) with ENDS relative to ENNDS; when assuming failure, the pooled RR was 2.27 (95% CI 1.04 to 4.95, p=0.04). This dramatic variation in the results when making different assumptions is clearly an indicator that the missingness of data is associated with a risk of bias, and that decreases our confidence in the results. We have already reflected that judgment in our risk of bias assessment of these two studies, in table 4 and figure 2; and in our assessment of the quality of evidence in table 7.
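
      To make these analyses concrete, here is a minimal sketch for a single hypothetical two-arm trial; the counts (randomized, observed quitters, missing) are invented for illustration and are not data from the included studies.

      # Missing-outcome sensitivity analysis for one hypothetical two-arm trial
      # (all counts are illustrative only).

      def risk_ratio(quit_a, n_a, quit_b, n_b):
          """Risk ratio of quitting, arm A relative to arm B."""
          return (quit_a / n_a) / (quit_b / n_b)

      ends  = dict(randomized=150, quit_observed=30, missing=40)   # nicotine e-cigarettes
      ennds = dict(randomized=150, quit_observed=15, missing=60)   # non-nicotine e-cigarettes

      # 1) Complete-case analysis: participants with missing outcomes are excluded
      rr_cc = risk_ratio(ends["quit_observed"], ends["randomized"] - ends["missing"],
                         ennds["quit_observed"], ennds["randomized"] - ennds["missing"])
      # 2) Assume all missing participants failed to quit
      rr_fail = risk_ratio(ends["quit_observed"], ends["randomized"],
                           ennds["quit_observed"], ennds["randomized"])
      # 3) Assume all missing participants quit
      rr_success = risk_ratio(ends["quit_observed"] + ends["missing"], ends["randomized"],
                              ennds["quit_observed"] + ennds["missing"], ennds["randomized"])

      print(f"complete case: RR = {rr_cc:.2f}")
      print(f"all missing = failure: RR = {rr_fail:.2f}")
      print(f"all missing = success: RR = {rr_success:.2f}")

      The spread between the three estimates is what signals that missing data may be biasing the complete-case result.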

      Even if we were to consider the RR of 2.27 as the best effect estimate (i.e., assuming all those with missing data had failure with smoking cessation), the findings would not support the effectiveness of e-cigarettes for smoking cessation. Indeed, the included trials do not address that question, and our review found no study comparing e-cigarettes to no e-cigarettes. The included trials compare two forms of e-cigarettes.

      When assessing an intervention A (e.g., e-cigarettes) that has two types A1 (e.g., ENDS) and A2 (e.g., ENNDS), it would be important to first compare A (A1 and/or A2) to the standard intervention (e.g., no intervention or nicotine replacement therapy (NRT)), before comparing A1 to A2. If A1 and A2 are inferior to the standard intervention with A1 being less inferior than A2 (but still inferior to the standard intervention), focusing on the comparison of A1 to A2 (and ignoring the comparison to the standard intervention) will show that A1 is better than A2. That could also falsely suggest that at least A1 (and maybe A2) is favorable. Therefore, a recommendation of A1 vs. A2 should be considered only if A is already recommended over the standard intervention (i.e. A is non inferior to the standard intervention).

      Dr. Hajek also criticizes the inclusion of studies that recruited smokers who used e-cigarettes in the past but continue to smoke. When discussing treatment and examining evidence, we refer to effectiveness (known as pragmatic; a treatment that works under real-world conditions). This includes (among other criteria) the inclusion of all participants who have the condition of interest, regardless of their anticipated risk, responsiveness, comorbidities or past compliance. Therefore, the inclusion of studies that recruited smokers who used e-cigarettes in the past but continue to smoke had a role in portraying the impact of ENDS and ENNDS on long-term tobacco use in cigarette smokers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Mar 02, Wasim Maziak commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Mar 02, Peter Hajek commented:

      Wow! Let me try to explain these points in turn.

      Re excluding drop-outs: Imagine 100 people get real treatment and 40 quit, while 100 get placebo and 20 quit. All those who were successful attend follow-up (they feel good, are grateful, get praised), but far fewer among the treatment failures are willing to face the music (they feel they disappointed the clinicians, may be told off, or feel that the treatment was rubbish). If none of the failures attend, the success rate among attenders will be identical (100%) in both study arms, and if only a fraction attend it will be badly inflated in both arms, despite the real quit rates being 40% vs 20%. Check https://www.ncbi.nlm.nih.gov/pubmed/15733243 I cannot think of a mechanism through which this could act in the opposite direction, as you assert.
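
      A minimal numerical sketch of this scenario (the 40% vs 20% quit rates are those of the example above; the 30% follow-up attendance among failures is an assumed, illustrative figure):

      # Selective follow-up of treatment failures versus counting drop-outs as
      # non-abstainers, using the hypothetical 40/100 vs 20/100 quit rates above.

      def quit_rates(n, quitters, failure_attendance):
          """Return (completer-only rate, intention-to-treat rate)."""
          attenders = quitters + (n - quitters) * failure_attendance
          completer_only = quitters / attenders       # drop-outs excluded
          intention_to_treat = quitters / n           # drop-outs counted as smokers
          return completer_only, intention_to_treat

      for failure_attendance in (0.0, 0.3):           # none vs 30% of failures attend
          print(f"failure attendance = {failure_attendance:.0%}")
          for arm, quitters in (("treatment", 40), ("placebo", 20)):
              completer, itt = quit_rates(100, quitters, failure_attendance)
              print(f"  {arm}: completer-only = {completer:.0%}, ITT = {itt:.0%}")

      With no failures attending, both arms show 100% abstinence among attenders and the real difference disappears; with partial attendance both rates are inflated, whereas the ITT rates keep the true 40% vs 20% difference.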

      Re including irrelevant studies, your response provided no explanation for doing this.

      Re. your statements about 'industry people' who should conduct independent science: I do not have, and have never had, any links with any tobacco or e-cigarette manufacturers, and I have published dozens of studies on smoking cessation treatments. I believe that anti-vaping activism presented as science needs challenging, because misinforming smokers keeps them smoking, and undermining much less risky alternatives to cigarettes protects the cigarette monopoly and harms public health.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Mar 02, Wasim Maziak commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Mar 01, Peter Hajek commented:

      The conclusion that further trials of e-cigarettes are needed is correct, but there are two major problems with this review of the work that has been done so far.

      In smoking cessation trials, drop-outs are classified as non-abstainers because treatment failures are less likely to engage in further contact than treatment successes. Practically all smoking cessation trials and reviews published in the last 10 years or so use this approach. Removing drop-outs from the sample, as was done here, dilutes any treatment effect.

      The second serious issue is the inclusion of studies that recruited smokers who used e-cigarettes in the past but continue to smoke. Such studies have a higher proportion of treatment failures in the ‘tried e-cig cohort’ and so have less quitting in this subgroup, but they provide no useful information on the efficacy of e-cigarettes. Saying that they provide low quality evidence is wrong – they provide no evidence at all.

      The narrative that some studies show that vaping helps quitting and some that it hinders misrepresents the evidence. No study showed that vaping hinders quitting smoking. The two RCTs with long-term outcome, if analysed in an unbiased way, show a positive effect despite controlling for sensorimotor effects and using low nicotine delivery products.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

      Can you possibly provide the coordinates used as it is not possible to understand exactly what analysis has been performed without them.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 24, KEVIN BLACK commented:

      Nature.com: ... a link to the "MRI-specific module to complement the methods reporting checklist," please?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 24, Mary Klem commented:

      A primary rationale provided by Cooper et al. for conducting this study is that a previous review (Olatunji et al. 2014) used a search strategy that "was not systematic or clearly defined" (pg 111). Cooper et al. claim that they conducted a systematic search "...following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance..." (pg 110). However, details provided in the Methods section strongly suggest that Cooper et al. themselves failed to conduct searches that were systematic, and that they have failed to provide a clear explanation of what and how they searched. Thus, this review suffers from the same flaws Cooper et al. identified in Olatunji et al. 2014.

      PRISMA recommends that authors provide, for each database searched, the database name, the platform used to search the database, and the start and end dates for the search of each database. Cooper et al. fail to do this, providing only database names and no start-end dates. They also include EBSCO in the list of databases searched, even though EBSCO is not a database. EBSCO is a platform that can be used to search a variety of databases, and it is not clear from the paper which databases were searched using this platform.

      PRISMA also recommends that authors present the complete search strategy used for at least one of the major databases searched. Cooper et al. fail to do this, providing only what appears to be a simple description of their search terms. The authors note that the search terms "were searched in key words, title, abstract, and as MeSH subject headings" (pg 112). This is an odd statement, because the authors say they searched multiple databases, yet MeSH is a controlled vocabulary only available for use in PubMed. So did the authors utilize each database's controlled vocabulary, e.g., Emtree terms for Embase or PsycINFO thesaurus terms? Or did they somehow attempt to use MeSH in these other databases and fail to use the appropriate controlled vocabulary? Given this apparent confusion about the nature of subject headings, I have little confidence that the authors conducted systematic or comprehensive searches.

      Overall, then, this review suffers from a lack of clarity about the search strategies used, and the search strategies themselves are suspect. With these limits, it is unlikely that the findings of this study can add to the current literature in any substantive way.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 25, Marcia Herman-giddens commented:

      It is already known that C6 is not a test for acute LB. This study found that "All (34/34) seropositive blood donors followed over time remained seropositive at follow-up after 22-29 months." Perhaps seropositivity remains much longer than this. (Doesn't this question beg for a longer study, unless the whole test is thrown out as useless?) This could explain why the older people get, the more likely they are to be positive, since they would have a longer exposure period ("seroprevalence was significantly higher in males and increased with age"). Males are usually more exposed to the outdoors over a lifetime than women. Thus, it would seem that the conclusion of the study should be that a lot of people have had LB infections in Kalmar County, and that a positive C6 could well indicate ongoing infection or a past infection, rather than the study's implication of false positivity. One wonders who would want blood from a C6-positive donor. The bottom line from this study seems to be that C6 is a useless test for blood screening for LB and should not be used.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 22, Martine Crasnier-Mednansky commented:

      The terminology 'induction prevention' does not apply to cAMP-dependent Carbon Catabolite Repression (CCR). It has been used to illustrate CCR in Bacillus subtilis, see figure 2 in Görke B, 2008. Escherichia coli cAMP-dependent CCR does not cause induction prevention. It is therefore incorrect to state: "The current model for glucose preference over other sugars involves inducer exclusion and induction prevention, both of which are strictly dependent on the phosphorylation state of EIIA<sup>Glc</sup> in E. coli". Moreover, the major player in induction prevention is HPr, not EnzymeIIA<sup>Glc</sup>.

      Some referenced papers were misinterpreted by the authors. Lengeler J, 1972 reported, in wild type cells, induction of the mannitol operon 'is not prevented' by glucose. Lengeler J, 1978 reported expression of the mtl operon, in both constitutive and inducible strains, is 'resistant to CCR' caused by glucose. This expression was nearly insensitive to cAMP addition, even though expression of the mtl operon is dependent on cAMP (a cya mutant strain does not grow on mannitol). Hence, the level of cAMP in glucose-growing cells is probably sufficient for expression of the mannitol operon. It is unclear how the authors monitored CCR with their inducible strains, as data were not shown.

      It was proposed induction of the mannitol operon may take place in the absence of PTS transport as follows. In the unphosphorylated state, transport of mannitol by Enzyme IICBA<sup>Mtl</sup> (MtlA) occurred by facilitated diffusion, upon high-affinity binding of mannitol to the IIC domain Lolkema JS, 1990. Thus, the IIC domain appears as a transporter by itself translocating mannitol at a slow rate. This provides an explanation for the observations that (1) mutant strains lacking Enzyme I and/or HPr were still inducible by mannitol (which originally led to the proposal mannitol may be the inducer of the mannitol operon Solomon E, 1972) and (2) mutant strains lacking Enzyme IICBA<sup>Mtl</sup> could not be induced unless mannitol was artificially generated in the cytoplasm. It was therefore concluded mannitol was the inducer of the mannitol operon. Interestingly, in the phosphorylated state, transport of free mannitol by Enzyme IICBA<sup>Mtl</sup> can be detected on the condition that the transporter has a poor phosphorylation activity Otte S, 2003.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 23, Marko Premzl commented:

      The third party data gene set of eutherian kallikrein genes LT631550-LT631670 was deposited in the European Nucleotide Archive under the research project "Comparative genomic analysis of eutherian genes" (https://www.ebi.ac.uk/ena/data/view/LT631550-LT631670). The 121 complete coding sequences were curated using tests of reliability of eutherian public genomic sequences, as part of a eutherian comparative genomic analysis protocol that includes gene annotations, phylogenetic analysis and protein molecular evolution analysis (RRID:SCR_014401).

      Project leader: Marko Premzl PhD, ANU Alumni, 4 Kninski trg Sq., Zagreb, Croatia

      E-mail address: Marko.Premzl@alumni.anu.edu.au

      Internet: https://www.ncbi.nlm.nih.gov/myncbi/mpremzl/cv/130205/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 12, Christian J. Wiedermann commented:

      After critical reading of the paper, I find that the authors' conclusions suggesting renal safety of hydroxyethyl starch (HES) are not warranted. Since BMC Anesthesiology does not offer a correspondence or letters-to-the-editor section, where some of the study's limitations could be discussed, I would like to post a comment here.

      Zhang et al. describe a multicenter, double-blind, controlled randomized clinical trial that evaluated the renal safety of HES in patients undergoing hip arthroplasty under spinal anesthesia, apparently showing that there is no increase in renal injury with 6% HES 130/0.4 compared with lactated Ringer’s solution during this type of orthopedic surgery:

      • The reported methodology for the study provided no information on the statistical approach, either regarding the sample size calculation or the comparisons between groups for each outcome. Thus, it is impossible to determine whether this study, which involved a relatively small patient population (120 patients randomized), was powered sufficiently to detect statistically significant and clinically important differences between HES and Ringer’s lactate in the primary and secondary outcomes.
      • Eligibility criteria meant that the patient population did not include patients with American Society of Anesthesiologists physical status score >III, thus limiting this elderly population to those at a lower risk of developing AKI.
      • The primary outcome of the study was the levels of urine and plasma neutrophil gelatinase-associated lipocalin (NGAL) and plasma interleukin 18 (IL-18), which were used as biomarkers for the early detection of AKI. NGAL has been widely investigated as a biomarker for AKI; however, its clinical utility remains unclear because of difficulties in interpreting results due to different settings, sampling time points, measurement methods, and cut-off values [Singer E, 2013]. Although IL-18 holds promise as a biomarker for the prediction of AKI, it has only moderate diagnostic value [Lin X, 2015].
      • The follow-up period in the study was only 5 days, and consequently HES-induced AKI may have been missed (the FDA and EMA recommended monitoring of renal function in patients for at least 90 days [Wiedermann CJ, 2017]).

      Thus, no conclusions regarding the renal safety of HES can be drawn from the study. The assessment of the benefit/risk profile associated with the perioperative administration of HES will require rigorously designed, sufficiently powered randomized controlled clinical trials incorporating a clinically meaningful outcome for the licensed dosage/indication in an appropriate patient population. Most clinical research fails to be useful not because of its findings but because of its design [Ioannidis JP, 2016], which is particularly true for studies with HES [Wiedermann CJ, 2014].


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 04, william jobin commented:

      Where people are using boats to cross to the island, the crossing points are obvious places for snail control through periodic weed removal and application of Bayluscide. This technique was successfully demonstrated years ago on Volta Lake. Why do you limit yourselves to drugs?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 27, Harri Hemila commented:

      Djulbegovic and Guyatt do not refute my criticism of their 3 novel EBM principles

      Djulbegovic and Guyatt challenge my short critique of Djulbegovic B, 2017 by stating that “the BMJ’s rating of EBM as one of medicine’s 15 most important advances since 1840 is but one testimony to the impact of its conceptualization of key principles of medical practice, see BMJ 2007 Jan 20;334(7585):111”.

      The short news article to which they refer to has the title “BMJ readers choose the sanitary revolution as greatest medical advance since 1840”.

      First, that BMJ news article is a 305-word summary of findings from a Gallup poll of 11341 readers of the BMJ, only one third of whom were physicians. Reporting the opinions of BMJ readers does not refute my criticisms.

      Second, the 305-word BMJ text was published in 2007. The BMJ readers could not have anticipated in 2007 the revision of the basic principles of EBM a decade later by Djulbegovic B, 2017. The findings in a 10-year-old survey are not relevant to the current discussion of whether the 3 novel EBM principles are reasonable.

      Third, the short BMJ text does not mention the term EBM anywhere in the piece. The text states that “sanitation topped the poll, followed closely by the discovery of antibiotics and the development of anaesthesia.” There are no references to EBM whatsoever.

      Fourth, one major proposal of the original EBM-paper in JAMA (1992) was that physicians should not lean uncritically on authorities: “the new [EBM] paradigm puts a much lower value on authority”, see p. 2421 in Evidence-Based Medicine Working Group., 1992. Thus, it is very odd that Djulbegovic and Guyatt argue for the importance of the EBM-approach, while they simultaneously consider that the BMJ is such an important authority. It is especially surprising that the mere reference to a 305-word text in the BMJ somehow would refute my comments, even though the text does not discuss EBM or other issues related to my comments either literally or by implication.

      Fifth, Djulbegovic and Guyatt state that the 305-word text in the BMJ is a “testimony” in favor of EBM. Testimonies do not seem relevant to this kind of academic discussion. Testimonies are popularly used when trying to impress uncritical readers about claims to which there is no sound support, such as testimonies for homeopathy on numerous pages in the internet.

      Djulbegovic and Guyatt also write “The extent to which EBM ideas are novel or, rather, an extension, packaging and innovative presentation of antecedents, is a matter we find of little moment.”

      I do not agree with their view. If there is no novelty in the 3 new EBM principles proposed by Djulbegovic B, 2017, and if there is no reasonable demarcation line between “evidence-based medicine” and “ordinary medicine” or simply “medicine”, why should we reiterate the prefix “evidence-based” instead of simply stating that we are discussing “medicine” and that we are trying to make progress in medicine? If “evidence-based” does not give any added meaning in a discussion, why should such a prefix be used?

      Djulbegovic and Guyatt stated that I “do not appear to disagree with [their] overview of the progress in EBM during last quarter of century”. That statement is not entirely correct. A short commentary must have a narrow focus. The focus in my comment was simply on the 3 new EBM principles that were presented by Djulbegovic B, 2017. The absence of any other comments on their overview does not logically mean that I agree with other parts of their overview.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Nov 23, BENJAMIN DJULBEGOVIC commented:

      Hemila does not appear to disagree with our overview of the progress in EBM during last quarter of century. His main concerns seem to relate to the origin of the ideas. The extent to which EBM ideas are novel or, rather, an extension, packaging and innovative presentation of antecedents, is a matter we find of little moment. The BMJ’s rating of EBM as one of medicine’s 15 most important advances since 1840 is but one testimony to the impact of its conceptualization of key principles of medical practice (see BMJ 2007 Jan 20;334(7585):111. doi: 10.1136/bmj.39097.611806.DB).

      Benjamin Djulbegovic & Gordon Guyatt


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Nov 19, Harri Hemila commented:

      The three novel principles for EBM are old: the emperor has no clothes

      In their paper, Djulbegovic B, 2017 describe three novel principles for EBM.

      Djulbegovic and Guyatt write (p. 416): “the first EBM epistemological principle is that not all evidence is created equal, and that the practice of medicine should be based on the best available evidence.”

      There is no novelty in that statement. Even before 1992, scientists, including those in the medical fields, understood that some types of research give more reliable answers than others.

      Furthermore, Djulbegovic and Guyatt do not follow the first principle in their own paper. They write (p. 416): “Millions of healthy women were prescribed hormone replacement therapy [HRT] on the basis of hypothesised reduction in cardiovascular risk; randomised trials refuted these benefits and demonstrated that hormone replacement therapy increased the incidence of breast cancer.”

      In an earlier paper, Vandenbroucke JP, 2009 wrote “Recent reanalyses have brought the results from observational and randomised studies into line. The results are surprising. Neither design held superior truth. The reasons for the discrepancies were rooted in the timing of HRT and not in differences in study design.” In another paper, Vandenbroucke JP, 2011 wrote “Four meta-analyses contrasting RCTs and observational studies of treatment found no large systematic differences … the notion that RCTs are superior and observational studies untrustworthy … rests on theory and singular events”.

      Djulbegovic and Guyatt thus reiterate old assumptions about the unambiguous superiority of RCTs compared with observational studies. They do not follow their own first EBM principle that arguments ”should be based on the best available evidence”. The above-mentioned papers by Vandenbroucke had already been published and were therefore available; thus they should have been taken into account when Djulbegovic and Guyatt argued for the superiority of RCTs in 2017.

      Djulbegovic and Guyatt further write (p. 416): “the second [EBM] principle endorses the philosophical view that the pursuit of truth is best accomplished by evaluating the totality of the evidence, and not selecting evidence that favours a particular claim.”

      There is no novelty in espousing that principle either. Objectivity has been a long term goal in the natural sciences, and also in the medical fields.

      Furthermore, Djulbegovic and Guyatt do not follow the second principle in their own paper. Their reference 94 is to the paper by Lau J, 1992, to which Djulbegovic and Guyatt refer with the following statement (p. 420): “the history of a decade-or-more delays in implementing interventions, such as thrombolytic therapy for myocardial infarction.” However, in the same paper, Lau J, 1992 also calculated that there was very strong evidence that magnesium was a useful treatment for infarctions, with an OR = 0.44 (95% CI: 0.27 - 0.71). However, in the ISIS-4 trial, magnesium had no effect: “Lessons from an effective, safe, simple intervention that wasn't”, see Egger M, 1995.

      Thus, Djulbegovic and Guyatt cherry-picked one intervention (thrombolytic therapy) to support their statement that many interventions should have been taken into use much more rapidly, but they dismissed another intervention in the paper by Lau J, 1992 that would serve as an unequivocal counterexample to the same statement. This surely is an example of “selecting evidence that favours a particular claim”.

      Principles 1 and 2 had already been advocated in James Lind’s book on scurvy (1753), which was listed as reference number 1 in Djulbegovic B, 2017. Lind wrote: “As it is no easy matter to root out old prejudices, or to overturn opinions which have acquired an establishment by time, custom and great authorities; it became therefore requisite for this purpose, to exhibit a full and impartial view of what has hitherto been published on the scurvy; and that in a chronological order, by which the sources of those mistakes may be detected. Indeed, before this subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish.” See Milne I, 2012.

      Thus, Djulbegovic and Guyatt’s EBM principles 1 and 2 are not new; they are more than 250 years old.

      Djulbegovic and Guyatt write further (p. 416): “the third epistemological principle of EBM is that clinical decision making requires consideration of patients’ values and preferences.”

      The importance of patient autonomy is not an innovation that is attributable to EBM, however.

      When the EBM movement started with the publication of the JAMA (1992) paper by the Evidence-Based Medicine Working Group., 1992, there was novelty in the proposals. The suggestion that each physician should himself or herself read the original literature, to the extent proposed by EBM enthusiasts in 1992, was novel as far as I can comprehend from the history. The strength of the suggestion to restrict the source of valid evidence about the efficacy of medical interventions to RCTs was also novel as far as I can see. Thus, the Evidence-Based Medicine Working Group., 1992 had novel ideas and described the background for those ideas. We can disagree about the 1992 proposals – as many have done – but I do not consider it fair to claim that the JAMA (1992) paper had no novelty.

      In contrast, the aforementioned three principles described by Djulbegovic B, 2017 are not novel. The principles can be traced back to times long before even 1992. In addition, none of the principles, alone or in combination, sets any unambiguous demarcation line as to what EBM is in 2017 and what it is not. How does evidence-based medicine differ from “ordinary” medicine, which has been using the same three principles for ages? If there is no difference between the two, why should the term “evidence-based” be used instead of simply writing “medicine”?

      In their paper, Djulbegovic and Guyatt also describe their visions for the future, but I cannot see that any of their visions is specific to EBM. We could just as well write their visions for the future by changing “EBM” to “medicine”.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 02, Gwinyai Masukume commented:

      In an article he co-authored in The Lancet, the late Hans Rosling, internationally acclaimed for his entertaining and informative videos on global health Maxmen A, 2016, contended that the discipline has entered the post-fact era Nordenstedt H, 2016. The article makes the case that in the post-fact era data are tweaked for advocacy purposes in medical journals, with the inadvertent consequence of misguiding the investments needed to achieve the Sustainable Development Goals.

      In this article on male circumcision for HIV prevention Downs JA, 2017, it is stated that a “systematic review estimated a risk reduction between 38% and 66%” for female-to-male HIV transmission conferred by voluntary medical male circumcision. It is not clarified whether this risk reduction is absolute or relative, although the cited systematic review, in its abstract, notes that it is a relative risk reduction Siegfried N, 2009. An article titled “Misleading communication of risk” Gigerenzer G, 2010 discusses how such risk communication is non-transparent. When the baseline risk is omitted, the bigger relative numbers make better headlines and better advocacy. However, Hans Rosling cautioned against operating on facts tweaked for advocacy, especially in medical journals, because journals can easily propagate an inaccurate understanding of the situation, which can make fixing global health problems more challenging Maxmen A, 2016 Nordenstedt H, 2016.
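
      To illustrate the distinction, the sketch below converts a relative risk reduction into an absolute risk reduction and a number needed to treat for an assumed baseline risk; the 2% baseline figure is purely hypothetical and is not taken from the cited trials.

      # Converting a relative risk reduction (RRR) into an absolute risk
      # reduction (ARR) and number needed to treat (NNT); baseline risk is invented.

      def absolute_effect(baseline_risk, relative_risk_reduction):
          """Return (ARR, NNT) implied by a baseline risk and an RRR."""
          arr = baseline_risk * relative_risk_reduction
          return arr, 1 / arr

      baseline_risk = 0.02   # assumed 2% risk of HIV acquisition over the study period
      for rrr in (0.38, 0.66):
          arr, nnt = absolute_effect(baseline_risk, rrr)
          print(f"RRR {rrr:.0%}: ARR = {arr:.2%}, NNT = {nnt:.0f}")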


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 07, Victoria MacBean commented:

      Plain English Summary:

      This study examined differences in children’s awareness of breathing difficulty, specifically the influence of weight and asthma. With obesity on the rise in Western society and asthma being a common long-term medical condition, it is crucial to understand why obese, asthmatic children report more breathlessness than asthmatic children who are not overweight, even when there are no differences in the severity of their asthma. It has previously been suggested that overweight children may have an increased awareness of breathing effort.

      This study compared various aspects of breathing across three groups of children: asthmatic children with healthy weight, overweight children with asthma and a control group of healthy weight children. The project involved the children breathing through a device which added resistance to breathing. Children were asked to rate how hard they felt it was to breathe, and the tests also measured the children’s breathing muscle activity to find out how hard the breathing muscles were working as the researchers purposefully increased the children’s effort to breathe.

      The anticipated results were that healthy weight asthmatic children and healthy weight children would show similar results, that is that their breathing effort scores would steadily increase as they found it harder to breathe, with the breathing muscles working gradually harder. Meanwhile, the overweight asthmatic children would show a much steeper increase.

      From the 27 children who were studied, the results showed that the overweight children gave higher effort scores throughout the tests, but that these scores increased at the same rate. There were no differences in the way the children's breathing muscles responded to the tests. The reason for the higher overall effort scores in the overweight asthmatic children was that their muscles were already working harder than those of the other two groups before the experiment, due to the changes that occur in the lung with increased weight. It was therefore concluded that overweight asthmatic children do not have differences in their awareness of breathing effort, but that their additional body mass means their muscles are already working harder.

      This summary was produced by Sarah Ezzeddine, Year 13 student from Harris Academy Peckham, London and Neta Fibeesh, Year 13 student from JFS School, Harrow, London as part of the authors' departmental educational outreach programme.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 20, Daniel Corcos commented:

      1) What you call "underlying increasing incidence" of breast cancer is the increase due to x-ray-induced carcinogenesis. 2) As expected, the major spike in breast cancer incidence (invasive + in situ) is at the end of the 1990s and the beginning of the 2000s in the USA, contrasting with very little change in women under 50. Similar changes are seen in other countries in the appropriate age group after implementation of mammography screening.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 20, Daniel B Kopans commented:

      Response to Daniel Corcos' two concerns: 1. Actually, the annual incidence of "true" breast cancers has not been stable. It had been increasing going back to the 1940s. That is the basic point. When women are screened, the cancers detected earlier are layered on top of the increasing baseline. The incidence of invasive breast cancers was increasing in the U.S. and the other countries you mention long before screening became available.

      It is fundamental epidemiology that when screening begins (the prevalence screen) you find the cancers that would have become clinically evident that year PLUS the cancers that were clinically evident that year but overlooked PLUS the cancers that are found 1, 2, or 3 years earlier by the screening test. Consequently, when new women begin screening, the cancers detected (unfortunately called annual incidence) jump up. If this is done in one year (rarely), the rate will go up and come back down toward the baseline incidence. If screening continues, it never reaches the baseline again, since there will be new women each year having their prevalence screen. In addition, since the incidence of breast cancer increases with age, and screening advances the date of detection (a 47 year old woman will have the incidence of a 49 year old if screening finds cancers 2 years earlier), the annual detection rate will come back down toward the “baseline”, but not reach it. In the U.S. SEER data you can see that the prevalence numbers remained high from the mid 1980s (when screening began) to 1999, when they turned back down. This is because, in the U.S., the number of women participating in screening steadily increased each year (new prevalence screens) until it plateaued in 1999. This was followed by a decline in the annual detection rate back toward the baseline. However, you will note that the entire SEER curve is tilted up. This is because the baseline was almost certainly increasing by 1% per year over the same time period (as it had been doing going back to 1940). This is why, despite fairly steady participation in screening after 1999, the annual incidence in 2012 is higher than in 1978. It is not because screening is finding fake cancers, but because the underlying incidence of breast cancer has been steadily increasing. This is evident in other countries as well.

      2. Radiation risk to the breast is age-related and drops rapidly with increasing age, so that by the age of 40 there is no measurable risk at mammographic doses. All of the estimates are extrapolated, and even these are below even the smallest benefit from screening. Millions (hundreds of millions??) of mammograms were performed in the 1980s. If mammography were causing cancers, we would have expected a major spike in breast cancer at the end of the 1990s (a latency of 8-10 years). Instead, the incidence of breast cancer began to fall in 1999, consistent with the end of the prolonged prevalence peak. Even those who are trying to reduce access to screening no longer point to the radiation risk, because there are no data to support it for women ages 40 and over.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Feb 20, Daniel Corcos commented:

      Clearly, all the evidence for the overdiagnosis epidemic rests on the assumption that the annual incidence of "true" (unable to spontaneously regress) breast cancers is stable after implementation of mammography screening. You acknowledge that breast cancer incidence has increased in the USA after implementation of screening. You should also acknowledge that breast cancer incidence has increased in every country after implementation of screening. This cannot be a coincidence. However, as you have noticed, these cancers must be "true" cancers. So, there is misinformation, but it also comes from those pretending that low-dose x-rays are safe.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 18, Seán Turner commented:

      The genus name Roultibacter is not novel. It was published previously by the same research group: "Raoultibacter" Traore et al. 2016 (PubMed Id 27595003).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 17, Clive Bates commented:

      It's a surprise that the authors are apparently unaware of the efforts that have been made to reduce the supply of, and demand for, nicotine in its most dangerous form (smoked cigarettes). These include high taxation, advertising bans, public smoking bans, warnings, plain packaging, communications campaigns, smoking cessation services and so on. In fact, a whole WHO treaty (the FCTC) is devoted to it.

      The idea of reducing the supply and demand of the very low-risk alternative is obviously absurd. The whole point of harm reduction is to expand the supply of and demand for the low-risk harm-reduction alternative at the expense of the high-risk product or behaviour. Are they seriously suggesting that we should take measures to reduce the supply of clean needles or reduce the demand for condoms in high HIV risk settings?

      The main problem is that many commentators from this school are thoughtful harm-reductionists when it comes to illicit drugs, sexual behaviours and other risks, but inexplicably become 'abstinence-only' when it comes to the mildly psychoactive drug nicotine. It is a glaring inconsistency that this article helps to illuminate.

      The point is that having low-risk alternatives to smoking is synergistic with the tobacco control measures favoured by these authors. E-cigarettes, smokeless tobacco and heated tobacco products increase the range of options smokers have for responding to the pressures of tobacco control policies (e.g. taxation) without requiring abstinence, recourse to the black market, or enduring the unwanted effects of tobacco policies on continuing smokers - like regressive tax burdens. That should appeal to those with genuine concerns for public health and wider wellbeing, unless part of the purpose is to force smokers into an abstinence-only 'quit or die' choice and to make life harder for them.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 07, Harri Hemila commented:

      Statistical problems in the vitamin D review by Martineau et al.

      In their abstract, Martineau et al. state that "Vitamin D supplementation reduced the risk of acute respiratory tract infection among all participants (adjusted odds ratio 0.88 ...".

      The odds ratio [OR] is often used as an approximation for the risk ratio [RR]. Martineau's abstract suggests that vitamin D might reduce the risk of respiratory infections by 12%. However, when the incidence of events is high, the OR can be highly misleading, as it exaggerates the size of the effect; see e.g. Viera AJ, 2008, Knol MJ, 2012, Katz KA, 2006, and Holcomb WL Jr, 2001.

      Acute respiratory infections are not rare. In Figure 2 of the Martineau et al. meta-analysis, only 2 of the 24 trials had event rates of less than 20% in both groups. I reproduced their Figure 2 using the random-effects Mantel-Haenszel approach and calculated OR=0.82 (95% CI 0.72 to 0.95) for the 24 trials. The minor discrepancy with their published OR (i.e. OR=0.80) in Figure 2 is explained by adjustments. The Figure 2 data give RR=0.92 (95% CI 0.87 to 0.98). Thus, the OR suggests that the incidence of respiratory infections might be reduced by 18%, but the RR shows that an 8% reduction is the valid estimate. The OR therefore exaggerates the effect of vitamin D more than twofold.
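
      A minimal sketch of this point: for a common outcome, the odds ratio computed from a 2x2 table drifts away from the risk ratio, whereas for a rare outcome the two nearly coincide. The counts below are invented for illustration and are not the meta-analysis data.

      # OR exaggerates RR when the outcome is common; counts are invented.

      def or_and_rr(events_tx, n_tx, events_ctrl, n_ctrl):
          """Return (odds ratio, risk ratio) for treatment vs control."""
          risk_tx, risk_ctrl = events_tx / n_tx, events_ctrl / n_ctrl
          rr = risk_tx / risk_ctrl
          odds_ratio = (risk_tx / (1 - risk_tx)) / (risk_ctrl / (1 - risk_ctrl))
          return odds_ratio, rr

      for label, counts in (("common outcome (60% vs 65%)", (60, 100, 65, 100)),
                            ("rare outcome (6% vs 6.5%)", (60, 1000, 65, 1000))):
          odds_ratio, rr = or_and_rr(*counts)
          print(f"{label}: OR = {odds_ratio:.2f}, RR = {rr:.2f}")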

      Further statistical problems are described at the BMJ pages: http://www.bmj.com/content/356/bmj.i6583/rr-3 and http://www.bmj.com/content/356/bmj.i6583/rr-8 and in some other BMJ rapid responses.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 05, Andrea Messori commented:


      Is baricitinib more effective than adalimumab?

      by Andrea Messori (HTA unit, ESTAR, Firenze, Italy) and Daniele Mengato (Dept. of Pharmaceutical Sciences, University of Padova, Padova, Italy)


      To increase the amount of clinical evidence supporting biosimilars, one report [1] has recently proposed carrying out a network meta-analysis (NETMA) that includes not only the equivalence study comparing the biosimilar with the originator, but also all randomized studies (RCTs) comparing the originator with the previous non-biologic standard of care; most of these RCTs were conducted over the interval between the registration of the originator and the registration of the biosimilar. This approach, originally aimed at biosimilars, can also be employed to better evaluate a newly developed biologic rather than a newly registered biosimilar. In the case of a newly registered biosimilar, the objective is to establish whether the equivalence between the biosimilar and the originator, already demonstrated in the registrative RCT, is also confirmed by the NETMA. In the case of a newly developed biologic, the objective is to establish whether the superiority of the new biologic over the old one, already demonstrated in the pivotal RCT, is also confirmed by the NETMA.

      In patients with rheumatoid arthritis, baricitinib (recently developed) has been shown to be more effective than adalimumab (end-point = ACR20; odds ratio = 1.44, 95% confidence interval: 1.06 to 1.95) [2]. We have reassessed this comparison using an "enhanced evidence" NETMA (Bayesian approach, random-effects model, 60,000 iterations) in which 7 RCTs were included (Table 1). Our results (odds ratio = 1.44; 95% credible interval: 0.50 to 3.83) did not confirm the superiority of baricitinib over adalimumab.


      References


      [1] Messori A, Trippoli S, Marinai C. Network meta-analysis as a tool for improving the effectiveness assessment of biosimilars based on both direct and indirect evidence: application to infliximab in rheumatoid arthritis. Eur J Clin Pharmacol. 2016 Dec 14. [Epub ahead of print] PubMed PMID: 27966035.  


      [2] Taylor PC, Keystone EC, van der Heijde D, et al. Baricitinib versus Placebo or Adalimumab in Rheumatoid Arthritis. N Engl J Med. 2017;376:652-662.  


      [3] Hazlewood GS, Barnabe C, Tomlinson G, Marshall D, Devoe D, Bombardier C. Methotrexate monotherapy and methotrexate combination therapy with traditional and biologic disease modifying antirheumatic drugs for rheumatoid arthritis: abridged Cochrane systematic review and network meta-analysis. BMJ. 2016;353:i1777.  


      Table 1. Achievement of ACR20 in 7 randomized trials: the 6 trials comparing adalimumab+methotrexate vs methotrexate alone have been reported by Hazlewood et al.[3] while the 3-arm trial by Taylor et al.[2] has recently been published in the NEJM.


      ACR20 at 24/26 weeks

      STUDY                 BARICITINIB+METHOTREXATE   ADALIMUMAB+METHOTREXATE   METHOTREXATE MONOTHERAPY   DURATION
      Kim HY et al.                   -                        40/65                      23/63             24 weeks
      ARMADA trial                    -                        45/67                       9/62             24 weeks
      HOPEFUL 1 study                 -                       129/171                     92/163            26 weeks
      Keystone EC et al.              -                       131/207                     59/200            24 weeks
      Weinblatt ME et al.             -                        39/59                      23/61             24 weeks
      OPTIMA trial                    -                       207/466                    112/460            26 weeks
      Taylor et al.                360/487                    219/330                    179/488            26 weeks
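
      As a rough illustration of how a common comparator links the treatments in Table 1, the sketch below performs a simple Bucher-style (frequentist) indirect comparison on invented counts; it is a simplified stand-in for, not a reproduction of, the Bayesian random-effects NETMA described above.

      # Bucher-style indirect comparison through a common comparator (MTX).
      # All counts are invented placeholders, not the Table 1 data.
      import math

      def log_or_and_se(events_a, n_a, events_b, n_b):
          """Log odds ratio of A vs B and its standard error from a 2x2 table."""
          a, b = events_a, n_a - events_a
          c, d = events_b, n_b - events_b
          return math.log((a * d) / (b * c)), math.sqrt(1/a + 1/b + 1/c + 1/d)

      bari_vs_mtx = log_or_and_se(300, 400, 160, 400)    # baricitinib+MTX vs MTX (invented)
      adali_vs_mtx = log_or_and_se(250, 400, 160, 400)   # adalimumab+MTX vs MTX (invented)

      # Indirect comparison: baricitinib vs adalimumab through the MTX arm
      log_or = bari_vs_mtx[0] - adali_vs_mtx[0]
      se = math.sqrt(bari_vs_mtx[1] ** 2 + adali_vs_mtx[1] ** 2)
      lo, hi = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
      print(f"indirect OR (baricitinib vs adalimumab) = {math.exp(log_or):.2f} "
            f"(95% CI {lo:.2f} to {hi:.2f})")

      Because an indirect estimate carries the uncertainty of both direct comparisons, its interval is wider than that of a head-to-head trial, which is consistent with the wider credible interval reported above.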



      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 28, SANGEETA KASHYAP commented:

      I appreciate the comment by Dr. Weiss, as medical therapy for diabetes is constantly evolving and improving. However, patients enrolled in this trial were poorly controlled despite using 3 or more glucose-lowering agents at baseline, with over half requiring basal-bolus insulin. This, coupled with the fact that two-thirds had class 2 or greater obesity, made them somewhat refractory to IMT. It is unlikely that patients like these would ever be able to maintain therapeutic targets of tight glycemic control for five years. Those who do obviously should not consider bariatric surgery. Being in a rigorous clinical trial such as this, all subjects had benefits of care that many real-world patients do not receive, and such patients go on to develop complications of the disease. Although the medical algorithm developed for this trial incorporated elements from ACCORD, titration of medical therapy was in some ways patient-driven, in that weight gain and hypoglycemia limit adherence to therapy. In medically refractory patients like these, surgery was more effective in treating hyperglycemia for five years, with fewer medications overall.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 03, JOHN KIRWAN commented:

      The success of the medical-therapy-alone arm in the STAMPEDE trial is clearly evidenced by the data, i.e., a 1.7% reduction in HbA1c at 1 year in this patient group. When one considers that RCTs evaluating the effectiveness of diabetes medications alone report HbA1c reductions of <1.0%, the outcome for the combined drug/lifestyle/education approach in STAMPEDE is consistent and, it could be argued, superior to most pharmacotherapy interventions currently reported in the extant literature. If one looks at this from a slightly different perspective and compares the medical therapy arm of STAMPEDE to LookAHEAD, another intensive intervention (exercise/diet/education and pharmacotherapy) for obese patients (average BMI 36 kg/m2) with type 2 diabetes, where the reduction in HbA1c was less than 1% at 1 year, then, as in the previous example, it is clear that the medical-therapy-alone arm in STAMPEDE was highly effective.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Apr 02, Deepak Bhatt commented:

      Having been involved with designing several trials, I would state that the control arm of STAMPEDE did indeed provide optimal medical therapy which exceeded what is generally obtained in real world practice. Randomized trials of surgical procedures are relatively uncommon, and STAMPEDE has helped greatly advance knowledge of the benefits of metabolic surgery. Adherence to polypharmacy is understandably challenging for many patients, and surgery gets around this barrier quite effectively. Furthermore, this landmark trial should not be viewed as surgery versus medical therapy, but rather surgery versus no surgery on a background of excellent medical therapy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Apr 01, PHILIP SCHAUER commented:

      I beg to differ with Dr. Weiss: the control group of our study was provided intensive, multi-agent medical therapy as tolerated, with an intent to reach an HbA1c < 6%, as per ACCORD. Furthermore, medication choice, intensification, dose and frequency were managed by a highly qualified, experienced team of expert endocrinologists at an academic center. A favorable decrease in HbA1c of 1.7% from baseline (> 9% HbA1c) was achieved initially in the control group, which was already heavily medicated at baseline (average of 3+ diabetes drugs). Thus, many would agree that our approach was "intensive". This initial improvement, however, was not sustained, possibly due to inherent limitations of medical therapy related to adherence, side effects, and cost. Surgery is much less adherence-dependent, which likely accounts for some of its sustained benefit. Many will disagree with Dr. Weiss that ACCORD defines “true intensive medical therapy”, since that regimen actually increased mortality compared with standard therapy, likely due to drug-related effects (e.g. hypoglycemia). On the contrary, more than 10 comparative, non-randomized studies show a long-term mortality reduction with surgery compared with medical therapy alone (1). New, widely endorsed guidelines by the American Diabetes Association and others now support the role of surgery for treating diabetes in patients with obesity, especially for patients who are not well controlled on medical therapy (2).
      (1) Schauer et al. Diabetes Care 2016 Jun;39(6):902-11. (2) Rubino et al. Diabetes Care 2016 Jun;39(6):861-77.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Mar 15, Daniel Weiss commented:

      The benefit of weight loss on glycemic control for those with Type 2 Diabetes has been recognized for decades. The five-year outcomes of STAMPEDE are not surprising. However there was a major flaw in the design of this trial: despite its title, the control group was not provided “intensive medical therapy”.

      The primary outcome was to compare “intensive medical therapy” alone to bariatric surgery plus medical therapy in achieving a glycated hemoglobin of 6% or less. The medical therapy group was seen every 3 months and had a minimal increase in medications (a mean of 2.8 medications at baseline and 3 at one year). At the one-year and five-year time points, substantially fewer patients were on insulin as compared to baseline. At one year, 41 percent were on a glucagon-like peptide-1 receptor agonist.

      Minimally intensified medical therapy obviously would bias results toward surgery. True intensive medical therapy as in the landmark ACCORD trial (Action to Control Cardiovascular Risk in Diabetes) involved visits every 2-4 weeks with actual medication intensification.

      Reference: The Action to Control Cardiovascular Risk in Diabetes Study Group. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med 2008;358:2545-59.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 10, Paul Sullins commented:

      Reported findings of "no differences" by parent type in this study are an artifact of a well-known sampling error which conflates same-sex couples with a larger group of miscoded different-sex couples. Large disparities between the reported sample and same-sex couple population data reported by Statistics Netherlands strongly confirm this conclusion. The remainder of this comment presents detailed analysis supporting these claims. A longer critique, with standard citations and a table, is available at http://ssrn.com/author=2097328 .

      The authors report that same-sex couples were identified using “information about the gender of the participating parent and the gender of the participant’s partner” (p. 5). However, validation studies of the use of this procedure on other large representative datasets, including the 2000 U.S. Census, the U.S. National Health Interview Survey (NHIS), and the National Longitudinal Study of Adolescent to Adult Health (“Add Health”), have found that most "same-sex couples" identified in this way are actually misclassified different-sex couples.

      The problem stems from the fact that, like all survey items, the indication of one’s own sex or the sex of one’s partner is subject to a certain amount of random error. Respondents may inadvertently mark the wrong box or press the wrong key on the keyboard, thus indicating by mistake that their partner is the same sex as themselves. Black et al., who examined this problem in the U.S. Census, explain that “even a minor amount of measurement error, when applied to a large group, can create a major problem for drawing inferences about a small group in the population. Consider, for example, a population in which 1 out of 100 people are HIV-positive. If epidemiologists rely on a test that has a 0.01 error rate (for both false positives and false negatives), approximately half of the group that is identified as HIV-positive will in fact be misclassified” (The measurement of same-sex unmarried partner couples in the 2000 US Census). Since same-sex couples comprise less than one percent of all couples in the population of Dutch parent couples studied by Bos et al., even a small random error in sex designation can result in a large inaccuracy in specifying the members of this tiny subpopulation.
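      To make the base-rate arithmetic in the HIV analogy concrete, here is a minimal sketch using only the illustrative numbers from the quote above (not data from the study under discussion):

      ```python
      # Base-rate illustration: 1% true prevalence, 1% error rate in both directions.
      prevalence = 0.01   # fraction of the population that is truly positive
      error_rate = 0.01   # probability of a false positive or a false negative

      true_positives = prevalence * (1 - error_rate)     # positives correctly identified
      false_positives = (1 - prevalence) * error_rate    # negatives misclassified as positive

      share_misclassified = false_positives / (true_positives + false_positives)
      print(f"Misclassified share of identified positives: {share_misclassified:.1%}")  # ~50%
      ```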

      A follow-up consistency check can effectively correct the problem; without one, however, it can be quite severe. When the NHIS inadvertently skipped such a consistency check for 3.5 years, the CDC estimated that 66% to 84% of initially identified same-sex married couples were erroneously classified different-sex married couples (Division of Health Interview Statistics, National Center for Health Statistics, 2015, Changes to Data Editing Procedures and the Impact on Identifying Same-Sex Married Couples: 2004-2007 National Health Interview Survey). Likewise, Black et al. reported that in the affected portion of the 2000 Census “only 26.6 percent of same-sex female couples and 22.2 percent of same-sex male couples are correctly coded” (Black et al., p. 10). The present author found, in an Add Health study that ignored a secondary sex verification, that 61% of the cases identified as “same-sex parents” actually consisted of different-sex parent partners (The Unexpected Harm of Same-sex Marriage: A Critical Appraisal, Replication and Re-analysis of Wainright and Patterson’s Studies of Adolescents with Same-sex Parents. British Journal of Education, Society & Behavioural Science, 11(2)).

      The 2011 Statistics Netherlands data used by Bos et al. are based on computer-assisted personal interviews (CAPI), in which the respondent uses a computer keyboard to indicate his or her responses to interview questions presented by phone, website or in person. The sex of the respondent and of the partner is indicated by entering "1" or "2" on the keyboard, a procedure in which a small rate of error, hitting the wrong key, would be quite normal. The Statistics Netherlands interview lacks any additional verification of sex designation, making sample contamination very probable. [Centraal Bureau voor de Statistiek, Divisie Sociale en Ruimtelijke Statistieken, Sector Dataverzameling. (2010). Jeugd en Opgroeien (SCP) 2010 Vraagteksten en schema’s CAPI/CATI. The Hague.]

      Several key features of the reported control sample strongly confirm that sample contamination has occurred. First, in the Netherlands in 2011, the only way for a same-sex co-parent to have parental rights was to register an adoption, so we would expect one of the partners, for most same-sex couples, to be reported as an adoptive parent [Latten, J., & Mulder, C. H. (2012). Partner relationships at the dawn of the 21st century: The case of the Netherlands. In European Population Conference pp. 1–19]. But in Bos et al.'s sample, none of the same-sex parents are adoptive parents, and both parents indicate that the child is his/her "own child" (eigen kind). This is highly unlikely for same-sex couples, but it is what we would expect to see if a large proportion of the "same-sex" couples were really erroneously coded opposite-sex couples. Second, the ratio of male to female same-sex couples in the Bos et al. sample is implausibly high. In every national and social setting studied to date, far fewer male same-sex couples raise children than do female ones. Statistics Netherlands reports that in 2011 the disparity in the Netherlands was about seven to one: of the (approximately) 30,000 male and 25,000 female same-sex couples counted in that year, “[o]nly 3% (nearly 800) of the men's pairs had one or more children, compared to 20% (almost 5000) of the female couples.” [de Graaf, A. (2011). Gezinnen in cijfers, in Gezinsrapport 2011: Een portret van het gezinsleven in Nederland. The Hague: The Netherlands Institute for Social Research.] Yet Bos et al. report, implausibly, roughly equal numbers of lesbian and gay male couples with children, in fact slightly more male couples (68) than female couples (63) with children over age 5. They also report that 52% of Dutch same-sex parenting couples in 2011 were male, whereas Statistics Netherlands reports only 14%. The Bos sample is in error exactly to the degree that we would expect if these were (mostly) different-sex couples inaccurately classified as same-sex due to random errors in partner sex designation.
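      A quick check of the two percentages just cited, using only the counts quoted above (my own arithmetic):

      ```python
      # Share of same-sex parenting couples that are male couples.
      bos_male, bos_female = 68, 63      # Bos et al. sample, couples with children over age 5
      cbs_male, cbs_female = 800, 5000   # approximate Statistics Netherlands counts, 2011

      print(f"Bos sample: {bos_male / (bos_male + bos_female):.0%} male couples")              # ~52%
      print(f"Statistics Netherlands: {cbs_male / (cbs_male + cbs_female):.0%} male couples")  # ~14%
      ```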

      Third, according to figures provided by Eurostat and Statistics Netherlands [Eurostat. (2015). People in the EU: who are we and how do we live? - 2015 Edition. Luxembourg: Publications Office of the European Union.] [Nordholt, E. S. (2014). Dutch Census 2011: Analysis and Methodology. The Hague: Statistics Netherlands.] (www.cbs.nl/informat), same-sex parents comprised an estimated 0.28 percent of all Dutch parenting couples in 2011, but in the Bos sample the prevalence is more than three times this amount, at 0.81 percent. From this disparity, it can be estimated roughly that about 65% of the Bos control sample consisted of misclassified different-sex parents (see the sketch below). This rate of sample contamination is very similar to that estimated for the three datasets discussed above (61% for Add Health; 66% or higher for NHIS; and about 75% for the 2000 U.S. Census).

      The journal Family Process has advised that it is not interested in addressing errors of this type in its published studies. I therefore invite the authors to provide further population evidence in this forum, if possible, showing why their findings should be considered credible and not spurious.
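      A minimal sketch of the arithmetic behind the 65% estimate above, assuming the entire excess over the population prevalence reflects miscoding:

      ```latex
      % If same-sex parent couples are 0.28% of all Dutch parenting couples
      % but 0.81% of the Bos sample, and the excess is attributed entirely
      % to miscoded different-sex couples, then
      \[
        \frac{0.81 - 0.28}{0.81} \approx 0.65,
      \]
      % i.e. about 65% of the "same-sex" control sample would be misclassified.
      ```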

      Paul Sullins, Ph.D. Catholic University of America sullins@cua.edu


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 10, Paul Sullins commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 26, Martine Crasnier-Mednansky commented:

      I do appreciate your answer to my comment, to which I gladly reply. First, there is prior work by Ghosh S, 2011 indicating that colonization was attenuated in mutant strains incapable of utilizing GlcNAc, which included a nagE mutant strain. Second, Mondal M, 2014 analyzed the products of the ChiA2 reaction and found that GlcNAc was the most abundant product. In fact, the amount of (GlcNAc)2 was found to be very low as compared to GlcNAc and (GlcNAc)3. Therefore, it is fully legitimate to conclude that the PTS substrate GlcNAc is utilized in the host by V. cholerae for growth and survival.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 26, Ankur Dalia commented:

      Re: Martine Crasnier-Mednansky

      I appreciate your evaluation of the manuscript; however, I have to disagree with your comment. The study by Mondal et al. indicates that ChiA2 can liberate GlcNAc from mucin in vitro and that it is critical for bacterial growth in vivo; however, they did not test the role of GlcNAc uptake and/or catabolism in that study. In our manuscript, by contrast, we demonstrate that loss of all PTS transporters (including the GlcNAc transporter) does not result in attenuation in the same infant mouse model, which is a more formal test of the role of GlcNAc transport during infection. It is formally possible that other carbohydrate moieties required for the growth of V. cholerae in vivo are liberated via the action of ChiA2; however, our results indicate that these are not translocated by the PTS. Alternatively, the reduced virulence of the ChiA2 mutant observed in the Mondal et al. study may indicate that ChiA2 has other effects in vivo (e.g. on immune evasion, resistance to antimicrobial peptides, etc.).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jul 24, Martine Crasnier-Mednansky commented:

      The authors’ proposal that 'the PTS has a limited role during infection' and their concluding remark that 'PTS carbohydrates are not available and/or not utilized in the host' are both questionable. Mondal M, 2014 established that, when Vibrio cholerae colonizes the intestinal mucus, the PTS substrate GlcNAc (released upon mucin hydrolysis) is utilized for growth and survival in the host intestine.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 21, Simon Young commented:

      This paper reports a concentration of tryptamine in human cerebrospinal fluid (CSF) of 60 nmol/L. A concentration that high seems unlikely. The concentration in human CSF of the related compound 5-hydroxytryptamine (serotonin) is very much lower. Although levels of serotonin in human CSF reported in the literature vary over several orders of magnitude, most of the results reported are probably false due to lack of rigorous methodology and analytical inaccuracy Young SN, 2010. In a study with rigorous methodology, measurements were performed in two different laboratories using different HPLC columns and eluting buffers Anderson GM, 2002. One lab used an electrochemical detector (detection limit 7 – 8 pg/ml for serotonin) and the other a fluorometric detector (detection limit 7 – 15 pg/ml). In both labs, N-methylserotonin was used as an internal standard and a sample was injected directly into the HPLC after removal of proteins. Neither system could detect serotonin in any CSF sample. The conclusion was that the real value was less than 10 pg/ml (0.057 nmol/L, about three orders of magnitude less than the level reported for tryptamine). Anderson et al Anderson GM, 2002 suggest that the higher values for serotonin reported in the literature can be attributed to a failure to carry out rigorous validation steps needed to ensure that a peak in HPLC is in fact the analyte of interest and not another compound with a similar retention time and fluorescent or electrochemical properties.
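      As a check on the units (my own arithmetic, taking the molecular weight of serotonin to be roughly 176 g/mol):

      ```latex
      % Converting the serotonin detection threshold of 10 pg/ml to molar units:
      \[
        \frac{10\ \text{pg/ml}}{176\ \text{g/mol}}
        = \frac{10 \times 10^{-9}\ \text{g/L}}{176\ \text{g/mol}}
        \approx 5.7 \times 10^{-11}\ \text{mol/L}
        \approx 0.057\ \text{nmol/L},
      \]
      % about three orders of magnitude below the 60 nmol/L reported here for tryptamine.
      ```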

      The concentration of tryptamine in rat brain is very much lower than the concentration of serotonin Juorio AV, 1985, and levels of the tryptamine metabolite, indoleacetic acid, in human CSF are lower than the levels of the serotonin metabolite, 5-hydroxyindoleacetic acid Young SN, 1980. Thus, the finding that the concentration of tryptamine in human CSF is about a thousand times greater than the concentration of serotonin does not seem plausible. There are three possible explanations for this finding. First, there may be some unknown biochemical or physiological factor that explains the finding. Second, the result may be due to the use of CSF obtained postmortem instead of from a live human. Levels of some neuroactive compounds change rapidly after death. For example, levels of acetylcholine decrease rapidly after death due to the continued action of acetylcholinesterase, the enzyme that breaks down acetylcholine Schmidt DE, 1972. Serotonin can be measured in postmortem samples because the rate-limiting enzyme in the conversion of tryptophan to serotonin, tryptophan hydroxylase, and the main enzyme metabolizing serotonin, monoamine oxidase, both require oxygen. The brain becomes anoxic quickly after death, thereby preventing synthesis or catabolism of serotonin. Tryptamine is synthesized by the action of aromatic amino acid decarboxylase, which does not require oxygen, but is metabolized by monoamine oxidase, which does require oxygen. Autopsies usually occur many hours after death, and therefore the high levels of tryptamine reported in this study may reflect continued synthesis, and the absence of catabolism, of tryptamine after death. Third, there may be problems with the HPLC and fluorometric detection of tryptamine in this paper, in the same way that there have been many papers reporting inaccurate measurements of serotonin in human CSF, as outlined above. The method reported in this paper would have greater credibility if the same results were obtained with two different methods, as for serotonin Anderson GM, 2002.

      In conclusion, more work needs to be done to establish a reliable method for measuring tryptamine in CSF obtained from living humans. Levels in human CSF obtained postmortem may have no physiological relevance.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 07, Victoria MacBean commented:

      Plain English Summary:

      Neural respiratory drive (NRD) is commonly used as a measure of respiratory function, as it measures the overall muscular effort required to breathe in the presence of the changes that occur in lung disease. Both bronchoconstriction (airway narrowing) and hyperinflation (over-inflation of the chest, caused by air trapped in deep parts of the lung) occur in lung disease and are known to have detrimental effects on breathing muscle activity. Electromyography (EMG) is a measure of electrical activity being supplied to a muscle and can be used to measure the NRD leaving the brain towards respiratory muscles (in this study the parasternal intercostals – small muscles at the front of the chest). This study aimed to research the individual contributions of bronchoconstriction and hyperinflation on EMG and the overall effectiveness of the EMG as an accurate marker of lung function.

      A group of 32 young adults were tested as subjects for this study, all of whom had lung function within normal limits at rest prior to testing. The subjects inhaled increasing concentrations of the chemical methacholine to stimulate contraction of the airway muscles, imitating a mild asthma attack. Subjects’ EMG, spirometry (to measure airway narrowing) and inspiratory capacity (IC, to test for hyperinflation) were measured. Detailed statistical testing was used to assess the relationships between all the measures.

      The results show that obstruction of the airway was closely related to the increase in EMG, whereas inspiratory capacity was not. The data suggest that over-inflation of the chest had less of an effect on the EMG than airway diameter (bronchoconstriction). This helps advance the understanding of how EMG can be used to assess lung disease.

      This summary was produced by Talia Benjamin, Year 13 student from JFS School, Harrow, London as part of the authors' departmental educational outreach programme.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 11, Prashant Sharma, MD, DM commented:

      A full-text, read-only version of this article is available at http://rdcu.be/nxtG.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 19, Martine Crasnier-Mednansky commented:

      Novick A, 1957 reaffirmed that a fully induced culture could be maintained fully induced at low inducer concentrations. In this paper, the authors reported that cells preinduced with melibiose do not maintain induction of the melibiose (mel) operon in the presence of 1 mM TMG. However, experimental conditions and data interpretation are both questionable in view of the following.

      The authors used a lacY strain whose percentage of induction by 1 mM TMG is less than 0.2%, 100% being for melibiose as the inducer (calculated from data in Tables 1 and 3). They transferred the cells from a melibiose minimal medium to a glycerol minimal medium supplemented with 1 mM TMG. The cells therefore have to 'enzymatically adapt' to glycerol while facing pyrimidine starvation (Jensen KF, 1993, Soupene E, 2003). Under these conditions, cells are unlikely to maintain induction of the mel operon (even if they could, see below) because uninduced cells have a significant growth advantage over induced cells. Incidentally, Novick A, 1957 noted, "the fact that a maximally induced culture can be maintained maximally induced for many generations [by using a maintenance concentration of inducer] shows that the chance of a bacterium becoming uninduced under these conditions is very small. Were any uninduced organisms to appear, they would be selected for by their more rapid growth". Advancing further, the percentage of induction by TMG for the mel operon in a wild-type strain (lacY+) is 16% (calculated as above). This induction is due mostly to TMG transport by LacY, considering the sharp decrease in the percentage of induction with a lacY strain (to <0.2%). Consequently, in the presence of TMG, any uninduced lacY cells remain uninduced. Thus, it appears a population of uninduced cells is likely to 'take over' rapidly under the present experimental conditions.

      In the presence of LacY, the internal TMG concentration is about 100 times that in the medium, and under these conditions induction of the mel operon by TMG is only 16%. Therefore, the cells could not possibly maintain their full level of induction, simply because TMG is a relatively poor inducer of the mel operon. It seems the rationale behind this experiment does not make much sense.

      Note: The maintenance concentration of inducer is the concentration of inducer added to the medium of fully induced cells and allowing maintenance of the enzyme level for at least 25 generations (Figure 3 in Novick A, 1957). It is not the intracellular level of inducer, as used in this paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 01, Kevin Hall commented:

      The corrected manuscript is now posted online, including the Supplemental Materials describing the methodology for the systematic review and meta-analysis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 22, Kevin Hall commented:

      Dr. Harnke is correct that the early online publication did not provide the peer-reviewed Supplemental Materials describing the methodology for the systematic review and meta-analysis. Also, the online publication erroneously provided the penultimate version of the figures. The Supplemental Materials and the updated figures are available upon request: kevinh@niddk.nih.gov.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Mar 20, Ben Harnke commented:

      The E-pub ahead of print version of this article does not appear to provide details about the search strategy, databases searched, limits, etc. used to identify the included studies. Without this information it is impossible to replicate the study or to verify that all relevant citations were located. Hopefully these details will be included in the final published version and/or online supplementary material.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 08, Cicely Saunders Institute Journal Club commented:

      We selected and discussed this paper at our monthly journal club on 1st November 2017.

      The paper generated a lot of discussion and we felt that this was an important concept, especially for clinicians, to think about. The topic of QALYs was unfamiliar to some of us and we found that the authors explained it very clearly in the paper. We were intrigued by the use of an integrative review method and discussed this at length. It may have been helpful to read more explanation of this method and know how it differs from other types of review methods. We also wondered about some of the inclusion/exclusion criteria such as the exclusion of reviews and the decision making process for the theoretical papers included. We enjoyed discussing the themes which emerged from this paper and the wider debate around the most appropriate measures for palliative care populations, particularly in light of the recent paper by Dzingina et al. 2017 (https://www.ncbi.nlm.nih.gov/pubmed/28434392). We feel this paper will be a useful educational resource.

      Commentary by Dr. Nilay Hepgul & Dr. Deokhee Yi on behalf of researchers at Cicely Saunders Institute of Palliative Care, Policy & Rehabilitation, King’s College London.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 04, Ralf Koebnik commented:

      This publication states that “HprX is the AraC/XylS regulator of the Xanthomonas citri T3SS and is activated by HrpG, a sensor kinase that phosphorylates HrpX [43].” This is a wrong interpretation of Ref. 43. First, it is HrpX, not HprX; second, HrpX is not activated via phosphorylation by HrpG. Instead, HrpG (probably in a phosphorylated state, but this is still speculation) activates the transcription of the hrpX gene. HrpX in turn binds to the promoter region of hrp genes that encode the structural components of the T3SS.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 12, John Tucker commented:

      The article summarizes the results of Ohio’s 2011-2015 program to reduce prescription painkiller overdose deaths by stating that prescribing was reduced by 10%, leading to a reduction in the percentage of drug overdose deaths attributable to prescription painkillers from 45% to 22%.

      While it would be natural for a reader to assume that this percentage reduction arose from a decline in prescription drug overdoses, this is not the case. Instead, overdoses due to prescription painkillers remained relatively constant while heroin and illicit fentanyl deaths skyrocketed.

      CDC WONDER gives the following death counts for Ohio in 2011 and 2015

      Heroin: 325 in 2011 and 1103 in 2015

      Other Opioids: 197 in 2011 and 340 in 2015

      Methadone: 70 in 2011 and 50 in 2015

      Other synthetic narcotics (including fentanyl): 57 in 2011 and 891 in 2015

      Unspecified narcotics: 62 in 2011 and 80 in 2015

      Total opioid overdose deaths: 711 in 2011 and 2464 in 2015.

      Thus the opioid overdose death rate in Ohio increased by 246% during the analysis period, compared to 40% for the United States as a whole.
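      A minimal sketch verifying these figures from the CDC WONDER counts listed above (the category totals and the percentage increase; the 40% national figure is quoted, not recomputed here):

      ```python
      # Ohio opioid overdose deaths by category (counts quoted above).
      deaths_2011 = {"heroin": 325, "other_opioids": 197, "methadone": 70,
                     "other_synthetics": 57, "unspecified": 62}
      deaths_2015 = {"heroin": 1103, "other_opioids": 340, "methadone": 50,
                     "other_synthetics": 891, "unspecified": 80}

      total_2011 = sum(deaths_2011.values())   # 711
      total_2015 = sum(deaths_2015.values())   # 2464

      increase = (total_2015 - total_2011) / total_2011
      print(total_2011, total_2015, f"{increase:.1%}")   # 711 2464 246.6%
      ```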

      The program cannot by any stretch of the imagination be considered a success, and simply serves as yet another example of the futility of supply-side focused approaches to drug abuse.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 04, Andrea Messori commented:

      PubMed database: A selection of pharmacoeconomic studies based on the net monetary benefit

      Andrea Messori, HTA Unit, Regional Health Service, Firenze, Italy

      The objective of the study by Capri and co-workers was to compare the cost-effectiveness of pazopanib versus sunitinib as first-line therapy in patients with advanced or metastatic renal cell carcinoma; the perspective was that of the Italian National Health Service.

      In patients with cancer, most economic studies based on this design are carried out by determining the incremental cost-effectiveness ratio (ICER). In contrast, one reason for interest in the study by Capri and co-workers is that the net monetary benefit (NMB) was the methodological tool employed to carry out the pharmacoeconomic analysis.

      Although the NMB is not the standard tool for performing these analyses, there are some advantages in using this parameter as opposed to the ICER. For example, while the relationship between the cost of the intervention and the ICER is nonlinear, the relationship between the cost of the intervention and the NMB is linear. Hence, predicting the consequences of an increased (or decreased) cost of treatment is easier, or more intuitive, if the NMB is used rather than the ICER.
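      For readers less familiar with these measures, the textbook definitions are sketched below (standard notation; these formulas are not taken from Capri et al.):

      ```latex
      % Incremental cost \Delta C, incremental effectiveness \Delta E (e.g., QALYs
      % gained) and willingness-to-pay threshold \lambda per unit of effectiveness:
      \[
        \mathrm{ICER} = \frac{\Delta C}{\Delta E},
        \qquad
        \mathrm{NMB} = \lambda\,\Delta E - \Delta C .
      \]
      % By construction the NMB is a linear function of the incremental cost
      % (slope -1 for fixed \Delta E and \lambda), whereas the ICER is a ratio of
      % increments; with \Delta E > 0, a treatment is deemed cost-effective when
      % NMB > 0, i.e. when ICER < \lambda.
      ```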

      Using a standard syntax of PubMed search (“net monetary benefit”[text]; search date = 4 March 2017), we identified a total of 148 citations that met this criterion. Among these citations, we selected 20 studies published between 2010 and 2016 in which the NMB played a key role in generating the pharmacoeconomic results (see http://www.osservatorioinnovazione.net/papers/nmb20examples.html).

      Curiously enough, among these 148 citations, some studies in which the NMB had been successfully employed were missing (e.g. Ganesalingam J, Pizzo E, Morris S, Sunderland T, Ames D, Lobotesis K. Cost-Utility Analysis of Mechanical Thrombectomy Using Stent Retrievers in Acute Ischemic Stroke. Stroke. 2015 Sep;46(9):2591-8, https://www.ncbi.nlm.nih.gov/pubmed/26251241/); this indicates that the keyword “net monetary benefit” in PubMed misses a number of pertinent articles.

      In conclusion, despite the low number of retrieved articles, our preliminary overview of the literature shows that the NMB is still being used in pharmacoeconomic studies and deserves to be further investigated.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 05, Julie Glanville commented:

      As we note in the Methods section of our paper, this paper is informed by an update of a systematic review:

      "For the update to the systematic review, we searched 18 databases, websites, trial registries and three conference websites between February and March 2016. The search strategy for MEDLINE is shown in Supplemental Fig. 1 (Fig. S1) and the other searches are available on request. This search strategy was originally developed in 2013, and then updated for this analysis taking into account relevant recent changes in indexing [19, 20]. The searches were not limited by date, language, or document type. The information sources searched are shown in Supplemental Table 1 (Table S1). Search results were downloaded into Endnote and de-duplicated against each other and against the results of the original review."

      Figure 1 in the supplementary material shows the Medline update search. Although it is a search carried out to identify new records since the original searches, we did not limit to recent years but reran the search for all years. This resulted in 6845 records. However, many of these records had been processed in the initial SR, so "Search results were downloaded into Endnote and de-duplicated against each other and against the results of the original review". This resulted in a much lower number of records which needed to be assessed for relevance from Medline in the update as can be seen in Table S2. In Table S2 the second column shows the number of records downloaded before deduplication against the original search results and the third column shows the number of records assessed for relevance after deduplication against the original search results. Hence the number difference. We acknowledge that there has been a transposition error for the Medline results in Table S2 - the search resulted in 6845 records and we have entered 2845 by mistake. We will correct the transposition error.

      Despite what we say in the Methods section, the full search strategies for the original search and the update searches are in the supplementary material at page 28 onwards.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 04, Wichor Bramer commented:

      The authors of this article seem to have made a mistake in their reporting of results per database. The supplements show that EMBASE alone retrieved over 7000 results and SCI more than 6000, yet the total number of articles after deduplication is 5500.

      Only the Medline search is provided in detail. The number of results shown in the search strategy seems to be 6800, whereas in the overview table of all databases the number is 2800.

      It is recommended to add the search strategies for all databases and to keep a good and clear track of the results per database before and after deduplication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 25, Donald Forsdyke commented:

      LECTIN PATHWAY STUDIES WITH PLANT MANNOSE-BINDING LECTINS

      Papers on the lectin pathway (LP) of complement activation in animal sera generally refer to animal mannose-binding lectins (MBLs), with little reference to work with plant MBLs. For example, citing May and Frank (1973), this fine paper states: "Reports of unconventional complement activation in the absence of C4 and/or C2 predate the discovery of LP." Actually, a case can be made that the discovery of the LP predates May-Frank.

      The MASP-binding motif on animal MBL, which is necessary for complement activation, includes the amino acid sequence GKXG (at positions 54-57), where X is often valine. The plant lectin concanavalin-A (Con-A) has this motif at approximately the same position in its sequence (the 237-amino-acid subunit of Con-A has the sequence GKVG at positions 45-48). The probability of this being a chance event is very low. Indeed, prior to the discovery of MASP involvement, Milthorp & Forsdyke (1970) reported the dosage-dependent activation of complement by Con-A.

      As far as I am aware, it has not been formally shown that MASP is involved in the activation of the complement pathway by this plant MBL. Our studies in the 1970s demonstrated that Con-A activates complement through a cluster-based mechanism, which is consistent with molecular studies of animal MBL showing “juxtaposition- and concentration dependent activation” (Degn et al. 2014). References to our several papers on the topic may be found in a review of innate immunity (Forsdyke 2016).

      Degn SE et al. (2014) Complement activation by ligand-driven juxtaposition of discrete pattern recognition complexes. Proc Natl Acad Sci USA 111:13445-13450. Degn SE, 2014

      Forsdyke DR (2016) Almroth Wright, opsonins, innate immunity and the lectin pathway of complement activation: a historical perspective. Microb Infect 18: 450-459. Forsdyke DR, 2016

      May JE, Frank MM (1973) Hemolysis of sheep erythrocytes in guinea pig serum deficient in the fourth component of complement. I. antibody and serum requirements. J Immunol 111: 1671-1677. May JE, 1973

      Milthorp PM, Forsdyke DR (1970) Inhibition of lymphocyte activation at high ratios of concanavalin A to serum depends on complement. Nature 227:1351-1352 Milthorp P, 1970

      Yaseen S et al. (2017) Lectin pathway effector enzyme mannan-binding lectin-associated serine protease-2 can activate native complement C3 in absence of C4 and/or C2. FASEB J 31:2210-2219 Yaseen S, 2017


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 18, Clive Bates commented:

      So we learn from this study that pharmacists demand:

      training in the form of information packs (88%), online tutorials (67%), continuous professional development (CPD) workshops (43%) to cover safety, counselling, dosage instructions, adverse effects and role in the smoking cessation care pathway in the future.

      But how many of them have made use of the existing resources already provided by the UK National Centre for Smoking Cessation and Training, in particular its excellent E-cigarettes: a briefing for stop smoking services (2016)? This briefing is readable and accessible and easily found by anyone with a professional interest.

      If they wanted to go into the issue more deeply, there is the Royal College of Physicians report, Nicotine without smoke: tobacco harm reduction, 2016 which provides a scientific assessment for UK health professionals, and concludes:

      that e-cigarettes are likely to be beneficial to UK public health. Smokers can therefore be reassured and encouraged to use them, and the public can be reassured that e-cigarettes are much safer than smoking.

      As they are selling these products, isn't there a legitimate professional expectation that community pharmacists should make a modest effort to find out more about them? The survey reveals a disturbing level of ignorance and unscientific assertion and the demand for more training is the flip side of an admission of ignorance. A good question would have found out whether they have made any effort at all to resolve their uncertainties, for example by consulting the sources above.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 12, Romain Brette commented:

      From the perspective of a computational neuroscientist, I believe a very important point is made here. Models are judged on their ability to account for experimental data, so the critical question is what counts as relevant data? Data currently used to constrain models in systems neuroscience are most often neural responses to stereotypical stimuli, and results from behavioral experiments with well-controlled but unecological tasks, for example conditioned responses to variations in one dimension of a stimulus.

      In sound localization, for example, one of the four examples in this essay, a relevant problem for a predator or a prey is to locate the source of a sound, i.e. absolute localization. But models such as the recent model mentioned in the essay (which is influential but not universally accepted) have been proposed on the basis of their performance in discriminating between identical sounds played at slightly different angles, a common experimental paradigm. Focusing on this paradigm leads to models that maximize sensitivity but perform very poorly on the more ecologically relevant task of absolute localization, which casts doubt on the models (Brette R, 2010; Goodman DF, 2013). Unfortunately, the available set of relevant behavioral data is incomplete (e.g., what is the precision of sound localization with physical sound sources in ecological environments, and are orienting responses invariant to non-spatial aspects of sounds?). Thus I sympathize with the statement in this essay that more proper behavioral work should be done.

      In other words, a good model should not only explain laboratory data: it should also work (i.e. explain how the animal manages to do what it does). It is good to be reminded of this crucial epistemological point.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 03, Dmitri Rusakov commented:

      We thank the Reviewer for following up the story. Below are our point-by-point reply to his latest set of comments. This reply and the manuscript version revised accordingly appeared to satisfy the journal Editorial Board and the other reviewer(s). We appreciate that this reviewer might not have therefore received the full set of explanations shown below. We however urge him to look at the published paper and its Supplementary data as these do contain material answers to his key questions.

      Reviewer #1

      Q: The authors have taken a rather minimalist approach to my suggestion that the fluorescence anistropy measurements be analysed and presented in greater detail in the MS.

      A: There must have been a misunderstanding. In response to the original Reviewer's comments, we have provided a full point-by-point response with extended explanations, added quantitative evidence for the two-exponential approximation, and provided a full summary Table for the FLIM characteristics over six different areas of interest. This is precisely what was requested in the original comments.

      Q: In addition, in considering this revision, additional questions have arisen. I therefore give a more detailed and prescriptive list of the data that needs to be shown. The following conditions are of interest: 1) free solution 2) with extracellular dye a) measurement inside cell b) measurement over neuropil c) measurement over synapse d) measurement inside pipette 3) with intracellular dye a) measurement inside cell 4) no dye a) measurement of autofluorescence over neuropil b) measurement inside soma For each of the above conditions, please show: 1) full fluorescence time course -1 to 12 ns 2) full anisotropy time course -1 to 12 ns 3) specimen traces 4) global averages 5) fit of global average 6) timing of the light pulse should always be indicated (I assume it occurs at 1ns, but this must be made explicit)

      A: We note that all the requested information is contained, in the shape of single-parameter outcomes, in the original figures and Tables. We also note that in healthy brain slices autofluorescence (two-photon excitation) is undetectable. With all due respect, we did not fully understand the grounds for requesting excessive primary material: the process of analysing anisotropy FLIM data involves automated, pixel-by-pixel data collection and curve fittings representing tens of thousands of single-pixel plots at all stages of the data processing. Presenting such data does not appear technically feasible. However, we have added some extensive primary-data examples, as requested, to illustrate:

      (a) Fluorescence decay in parallel and perpendicular detectors at different viscosity values (Fig. S1a);

      (b) Instrument response for the two-detector system (Fig. S1b), indicating that it has much faster dynamics than the anisotropy decay;

      (c) Anisotropy decay data in slice tissue after dye washout - indicating a specific reduction of the fast (free-diffusion) rather than slow (membrane-bound) molecular component (Fig. S1c);

      (d) AF350 anisotropy decay examples recorded in a free medium, intracellular compartment, extracellular space in the synapse and neuropil extracellular space (Fig. S1d).

      Q: It remains a good idea to try the same measurements with a second dye.

      A: AF350 is the smallest bright fluorophore which shows no lifetime dependency on physiological cellular environment. It is therefore the best candidate to explore the movement of small ions or neurotransmitter molecules such as glutamate in similar environment. We have tried other fluorophores such as AF594, which is three times heavier, more prone to photobleaching and more likely to bind to cell membranes in slices, all of which makes interpreting the data more difficult. We believe that the experimental evidence obtained in the present study has led to a self-contained set of conclusions that are of interest to the journal readership. Repeating the entire study with a different dye, or a different animal species, or a different preparation, or else, might be a good idea for future research.

      Q: Why is the rise of the anisotropy not instantaneous in Fig. 1B? I couldn't find any mention of temporal filtering.

      A: The rising phase is influenced by the instrument response to the femtosecond laser pulse in raw lifetime data, which remained unmodified in the presented plots. Signal deconvolution would produce instantaneous anisotropy (while increasing noise in raw data) which is not our preferred way of presenting the data. We have added this explanation to the text.

      Q: Assuming that the light pulse occurs at 1ns in Fig. S1A, why is the anisotropy shown so high? I would have thought that it should have decayed over a time constant (to about 1/e) by the point where the data shown, yet it is only reduced by about 15%. Was some additional normalisation carried out that I missed? If so it should certainly be removed and the true time course shown.

      A: The example in question shows a direct comparison between decay shapes in baseline conditions and after photobleaching: for illustration purposes, the graph displays a fragment of raw decay data, including the instrument response (without deconvolution). The latter at least doubles the apparent decay constant, plus the fast component has a y-offset of 0.2-0.3 due to the slow-component. These concomitants make the fast decay appear slower but this is irrelevant for the purposes of this particular raw data illustration (in contrast, Table S1 summary data are obtained with the instrument response de-convolved and removed). The text and figure legend has been expanded to explain this.

      Q: In case it is not clear, I am interested in the possibility of the fast component in fact containing two components. The data do not allow me to evaluate this possibility.

      A: Whilst we appreciate personal scientific interests of the Reviewer, we see no scientific reasons, in the present context, to try and find 'the possibility' of fast anisotropy decay sub-components: we simply refer to all molecular sub-populations showing distinctly fast anisotropy decay as one free-diffusion pool.

      Q: Does Scientific Reports required the authors to provide access to the original data upon publication? What is the authors' position on this?

      A: All original data, including tens of thousands of single-pixel FLIM data plots at different analysis stages, etc. are available from the authors on request.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 25, Boris Barbour commented:

      Below I show the key parts of my first and second reviews of this paper, which reports a very interesting and powerful optical technique for probing the microscopic properties of fluid compartments in the brain. I felt that greater detail of the analysis should be shown to support fully the conclusion of slowed diffusion in the extracellular space and synaptic cleft. It will be apparent that some questions unfortunately only occurred to me upon reading the first revision. The authors only responded to some of the points raised before the paper was published without my seeing it again. In particular, no global average anisotropy time courses are shown and the timing of the excitation pulses remains rather mysterious.


      First review

      This MS reports an extremely interesting approach to providing quantitative information about the diffusion of small molecules in micro-compartments of brain tissue - potentially resolving the intracellular and extracellular spaces, as well as providing information about diffusion within the synaptic cleft.

      The basis for the approach is to measure the relaxation of fluorescence polarisation following two-photon excitation of small compartments. If polarised exciting light is used, the emitted light is also polarised, as long as the orientation of the fluorophore remains unchanged. However, as the molecule undergoes thermal reorientations, that polarisation is lost. The authors use this technique to measure the diffusion of a small molecule, Alexa Fluor 350.
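      For context, the standard relations behind such a measurement are sketched below (textbook fluorescence-anisotropy formulas, stated here for orientation rather than quoted from the manuscript):

      ```latex
      % Time-resolved anisotropy from the parallel and perpendicular emission channels:
      \[
        r(t) = \frac{I_{\parallel}(t) - I_{\perp}(t)}{I_{\parallel}(t) + 2\,I_{\perp}(t)}
      \]
      % For a fluorophore rotating freely and isotropically, the anisotropy decays as
      % a single exponential with rotational correlation time \theta:
      \[
        r(t) = r_0\, e^{-t/\theta}, \qquad \theta = \frac{\eta V_h}{k_B T},
      \]
      % where \eta is the local viscosity, V_h the hydrodynamic volume of the molecule,
      % k_B the Boltzmann constant and T the temperature; slower decay (larger \theta)
      % reports a more viscous or crowded micro-environment, while a non-decaying
      % component reflects immobilised (e.g. membrane-bound) fluorophore.
      ```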

      A subsequent section of the MS reports some synaptic modelling, applying the tissue/free ratio obtained for the fluorophore to the modelled glutamate. However, the important and by far the most interesting part of the MS is the diffusion measurement. I would be happy for the MS to consist solely of an expanded and more detailed analysis of these measurements, postponing the modelling to another paper.

      My main comments relate to the analysis, presentation and interpretation of these diffusion measurements. The authors report that the relaxation time for the polarisation displays two phases - a rapid phase, which is somewhat slowed in brain tissue, and a slow phase attributed to membrane binding. But the authors do not illustrate the analysis of the fast component. As this is critical, a good deal more detail should be shown.

      A first issue is whether there are additional bound/retarded states other than "fixed". To examine this, the authors should show the fit to the average relaxation; it is important to be able to verify that there is a single-exponential fast component rather than some mixture (it may not be possible to tell with any certainty). Some examination of the robustness and precision of the fitting would also be desirable.

      The authors should also characterise the variation between different measurements of the same compartments. The underlying question here is how the various decay components might vary as, for instance, the ratio of membrane to extracellular space varies across different measurement points.

      I think the authors need to give more thought to the possibility that some of the slowing they observe arises from fluorophore embedded in the membrane without being immobilised. I don't see how this can be easily ruled out - certainly the two-photon resolution does not permit distinguishing the membrane and fluid phases in the neuropil. Additionally, how would the authors rule out adsorption onto some extracellular proteins?

      The reason I raise these points is because I have at least a slight difficulty with the interpretation. A slowing of molecular rotation of 70% (intracellular) suggests to me that a large fraction of fluorophores, essentially 100%, must be in direct contact with some larger molecule. This seems quite extreme even in the crowded intracellular space. I have similar reservations about the synaptic cleft and extracellular space. Is at least 50% and 30% of the volume of these spaces really occupied by macromolecules? There may be somewhat reduced diffusion due to a boundary layer at the membrane or near macromolecules. Do estimates of the thickness of such a boundary layer exist (and its effect on diffusion)?


      Second review

      The following conditions are of interest:

      1) free solution

      2) with extracellular dye

      a) measurement inside cell

      b) measurement over neuropil

      c) measurement over synapse

      d) measurement inside pipette

      3) with intracellular dye

      a) measurement inside cell

      4) no dye

      a) measurement of autofluorescence over neuropil

      b) measurement inside soma

      For each of the above conditions, please show:

      1) full fluorescence time course -1 to 12 ns

      2) full anisotropy time course -1 to 12 ns

      3) specimen traces

      4) global averages

      5) fit of global average

      6) timing of the light pulse should always be indicated (I assume it occurs at 1ns, but this must be made explicit)

      It remains a good idea to try the same measurements with a second dye.

      Why is the rise of the anisotropy not instantaneous in Fig. 1B? I couldn't find any mention of temporal filtering.

      Assuming that the light pulse occurs at 1ns in Fig. [1E], why is the anisotropy shown so high? I would have thought that it should have decayed over a time constant (to about 1/e) by the point where the data shown, yet it is only reduced by about 15%. Was some additional normalisation carried out that I missed? If so it should certainly be removed and the true time course shown.

      In case it is not clear, I am interested in the possibility of the fast component in fact containing two components. The data do not allow me to evaluate this possibility.


      (Note to moderators: as the copyright holder of my reviews, I am entitled to post them.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 24, Jakob Näslund commented:

      For a discussion of this study regarding some outstanding issues relating to methodology, as well as the presence of a number of possible inaccuracies, see our commentary in Acta Neuropsychiatrica entitled "Multiple possible inaccuracies cast doubt on a recent report suggesting selective serotonin reuptake inhibitors to be toxic and ineffective", available at https://doi.org/10.1017/neu.2017.23


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 24, Konstantinos Fountoulakis commented:

      This paper confirms that the overall SMD is around or above 0.30, adding to the literature against the idea that antidepressants do not work (e.g. Kirsch 2008). The question of the magnitude of the effect has been discussed in the literature again and again. NICE abandoned the 3-point threshold for defining 'clinical relevance' years ago, and an SMD of 0.30 is the effect size expected for successful psychiatric treatments and also for treatments elsewhere in medicine. Of course we would like more, but this is the only means we have. All the other options (psychotherapy, alternative therapies, etc.) do not meet the stringent criteria of this meta-analysis, as they are essentially not blinded and not adequately placebo-controlled, not to mention the risk of bias. I strongly disagree with the authors' comment on the HDRS. Yes, regulatory authorities do recommend the HDRS, but this does not constitute an essential argument. It is not correct to double-register an event as both a symptom and an adverse event; it is one or the other (at least in principle). Unfortunately, the HDRS is based on an antiquated model of depression, while the MADRS is tailored to the needs of trials. For a review, please see Fountoulakis et al., J Psychopharmacol. 2014 Feb;28(2):106-17. Concerning adverse events, yes indeed there is a significant effect of the active drug, but the NNH are high and there was no difference in severe adverse events in comparison with placebo. In my opinion, the real SMD of SSRIs is much higher, though not really impressive. This is masked by the properties of the HDRS but also by the possibility that a large number of patients enrolled in these studies are not suitable for a number of reasons. I would love to see a US vs. rest-of-world comparison.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jun 08, Christian Gluud commented:

      Response to Søren Dinesen Østergaard's critique

      Søren Dinesen Østergaard (SDØ) [1] criticizes our systematic review on selective serotonin reuptake inhibitors (SSRIs) for patients with major depressive disorder [2] for using the 17-item Hamilton depression rating scale (HDRS17) instead of HDRS6. SDØ refers to four studies ‘documenting’ his claims [3-6].

      Two of the studies relate to duloxetine and desvenlafaxine, which are dual action drugs and not SSRIs [3, 4]. The third study is a meta-analysis assessing fluoxetine versus placebo [5]. The results show a mean effect size of the SSRI of -0.30 (95% confidence interval (CI) -0.39 to -0.21) when using HDRS17 and an effect size of -0.37 (95% CI -0.46 to -0.28) when using HDRS6. The difference of 0.07 corresponds to 0.7 HDRS17 points assuming a standard deviation of 10 points. The fourth study is a patient-level analysis of 18 industry-sponsored placebo-controlled trials regarding paroxetine, citalopram, sertraline, or fluoxetine [6]. The authors report a mean effect size of the SSRIs of -0.27 when using HDRS17 and an effect size of -0.35 when using HDRS6 [6]. The difference of 0.08 corresponds to 0.8 HDRS17 points assuming a standard deviation of 10 points. Hence, the absolute effect size difference between the two scales seems less than 1 HDRS point. The National Institute for Clinical Excellence (NICE) recommended a difference of 3 points on the HDRS17 for 'a minimal effect' [7-9]. However, the required minimal clinical relevant difference is probably much larger than this figure. One study showed that a mirtazapine-placebo mean difference of up to 3.0 points on the HDRS corresponds to ‘no clinical change’ [10]. Another study showed that a SSRI-placebo mean difference of 3.0 points is undetectable by clinicians, and that a mean difference of 7.0 HDRS17 points is required to correspond to a rating of ‘minimal improvement’ [11].

      Moreover, none of the meta-analyses [5, 6] take into account risks of systematic errors (‘bias’) [12-14] or risks of random errors [15]. Hence, there are risks that the two meta-analyses may overestimate the beneficial effects of SSRIs.

      Other studies have shown that HDRS17 and HDRS6 largely show similar results [16,17]. It cannot be concluded that HDRS6 is a better assessment scale than HDRS17, just considering the psychometric validity of the two scales. If the total score of HDRS17 is affected by some of the adverse effects of SSRIs, then this might in fact better reflect the actual summed clinical effects of SSRIs than HDRS6 ignoring these effects. Until scales are validated against patient-centred clinically relevant outcomes, such scales are merely non-validated surrogate outcomes [18].

      National and international medical agencies [19-21] all recommend HDRS17 for assessing depressive symptoms. We need access to all individual patient data from all randomised clinical trials to compare the effects of antidepressants on HDRS6 to HDRS17 [22].

      SDØ states “no conflicts of interest”. We are aware that SDØ has received substantial support from ‘Lundbeckfonden’, its main objective being to maintain and expand the activities of the Lundbeck Group, one of the companies producing and selling SSRIs [23]. We think it would have been fair to declare this.

      Janus Christian Jakobsen, Kiran Kumar Katakam, Naqash Javaid Sethi, Jane Lindshou, Jesper Krogh, and Christian Gluud.

      Conflicts of interest:
 None known.

      Copenhagen Trial Unit, Centre for Clinical Intervention Research, Rigshospitalet, Copenhagen, Denmark

      References
      1. Ostergaard SD: Do not blame the SSRIs: blame the Hamilton Depression Rating Scale. Acta Neuropsychiatrica 2017:1-3.
      2. Jakobsen JC, Katakam KK, Schou A, et al: Selective serotonin reuptake inhibitors versus placebo in patients with major depressive disorder. A systematic review with meta-analysis and Trial Sequential Analysis. BMC Psychiatr 2017, 17(1):58.
      3. Bech P, Kajdasz DK, Porsdal V: Dose-response relationship of duloxetine in placebo-controlled clinical trials in patients with major depressive disorder. Psychopharmacology 2006, 188(3):273-280.
      4. Bech P, Boyer P, Germain JM, et al: HAM-D17 and HAM-D6 sensitivity to change in relation to desvenlafaxine dose and baseline depression severity in major depressive disorder. Pharmacopsychiatry 2010, 43(7):271-276.
      5. Bech P, Cialdella P, Haugh MC, et al: Meta-analysis of randomised controlled trials of fluoxetine v. placebo and tricyclic antidepressants in the short-term treatment of major depression. Br J Psychiatr 2000, 176:421-428.
      6. Hieronymus F, Emilsson JF, Nilsson S, Eriksson E: Consistent superiority of selective serotonin reuptake inhibitors over placebo in reducing depressed mood in patients with major depression. Mol Psychiatr 2016, 21(4):523-530.
      7. Fournier JC, DeRubeis RJ, Hollon SD, et al: Antidepressant drug effects and depression severity: a patient-level meta-analysis. JAMA 2010, 303(1):47-53.
      8. Mathews M, Gommoll C, Nunez R, Khan A: Efficacy and safety of vilazodone 20 and 40 mg in major depressive disorder: a randomized, double-blind, placebo-controlled trial. Int Clin Psychopharmacol 2015, 30.
      9. Kirsch I, Deacon BJ, Huedo-Medina TB, et al: Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Medicine 2008, 5(2):e45.
      10. Leucht S, Fennema H, Engel R, et al: What does the HAMD mean? J Affect Disord 2013, 148(2-3):243-248.
      11. Moncrieff J, Kirsch I: Empirically derived criteria cast doubt on the clinical significance of antidepressant-placebo differences. Cont Clin Trials 2015, 43:60-62.
      12. Hróbjartsson A, Thomsen ASS, Emanuelsson F, et al: Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors. CMAJ 2013, 185(4):E201-211.
      13. Lundh A, Lexchin J, Mintzes B, Scholl JB, Bero L: Industry sponsorship and research outcome. Cochrane Database Syst Rev 2017, (2):MR000033. DOI: 10.1002/14651858.MR000033.pub3.
      14. Savovic J, Jones HE, Altman DG, et al: Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials. Ann Intern Med 2012, 157(6):429-438.
      15. Wetterslev J, Jakobsen JC, Gluud C: Trial Sequential Analysis in systematic reviews with meta-analysis. BMC Medical Research Methodology 2017, 17(1):39.
      16. Hooper CL, Bakish D: An examination of the sensitivity of the six-item Hamilton Rating Scale for Depression in a sample of patients suffering from major depressive disorder. J Psychiatry Neurosci 2000, 25(2):178-184.
      17. O'Sullivan RL, Fava M, Agustin C, Baer L, Rosenbaum JF: Sensitivity of the six-item Hamilton Depression Rating Scale. Acta Psychiatrica Scandinavica 1997, 95(5):379-384.
      18. Gluud C, Brok J, Gong Y, Koretz RL: Hepatology may have problems with putative surrogate outcome measures. J Hepatol 2007, 46(4):734-742.
      19. Sundhedsstyrelsen (Danish Health Agency): Referenceprogram for unipolar depression hos voksne (Guideline for unipolar depression in adults). http://wwwsstdk/~/media/6F9CE14B6FF245AABCD222575787FEB7ashx 2007.
      20. European Medicines Agency: Guideline on clinical investigation of medicinal products in the treatment of depression. EMA/CHMP/185423/2010 Rev 2, previously (CPMP/EWP/518/97, Rev 1) 2013.
      21. U.S. Food and Drug Administration: https://www.fda.gov/ohrms/dockets/AC/07/briefing/2007-4273b1_04-DescriptionofMADRSHAMDDepressionR(1).pdf.
      22. Skoog M, Saarimäki JM, Gluud C, et al.: Transparency and Registration in Clinical Research in the Nordic Countries. Nordic Trial Alliance, NordForsk. 2015:1-108.
      23. Lundbeck Foundation. http://www.lundbeckfonden.com/about-the-foundation.25.aspx.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Mar 03, Søren Dinesen Østergaard commented:

      For a comment on this meta-analysis, see the letter to the editor in Acta Neuropsychiatrica "Do not blame the SSRIs: blame the Hamilton Depression Rating Scale": https://doi.org/10.1017/neu.2017.6

      SDØ declares no conflicts of interest. Grants received from the Lundbeck foundation (see comment below) were all non-conditional and not given to support studies on antidepressant efficacy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 06, Andrea Messori commented:

      Clinical and administrative issues in the in-hospital management of innovative pharmaceuticals and medical devices

      by Andrea Messori and Valeria Fadda

      ESTAR, Regional Health System, via San Salvi 12, 50100 Firenze (Italy)​

      Innovative treatments are increasingly being developed for a variety of disease conditions, particularly in the fields of pharmaceuticals and medical devices. In this scenario, two needs have clearly emerged in recent years: firstly, the activities of horizon scanning, assessment of innovation, and prediction of health-care expenditure for new products are becoming more and more important in practical terms, underscoring that all the main components of HTA must be strengthened to improve the governance of in-hospital innovation; secondly, in most jurisdictions of national health systems (NHS), the management of innovation also includes the administrative process of procurement, and this mix of clinical and bureaucratic pathways further complicates the practical handling of new products. On the other hand, optimizing the supply chain and the procurement of pharmaceuticals and medical devices is known to be essential for the governance of public health care [1].

      To our knowledge, the current medical literature does not include any real-life experience in which the management of new products for in-hospital use has been described in the context of a public NHS with attention to both clinical and administrative aspects. In this brief report, we present one such experience, carried out in 2017 in a regional setting (the Tuscany region) of the Italian NHS.

      The Tuscany region includes a total of 3.75 million inhabitants; there are 7 separate Local Health Authorities with an overall number of 27 hospitals and 11,000 beds.

      Since 2014, requests for any new pharmaceutical or medical device from the regional hospitals have been submitted through a regional website. The activity in May 2017 (601 requests) is taken here as an example.

      If one examines separately the data for pharmaceuticals (N=436) and medical devices (N=165), the reasons for these requests were of an administrative nature in 42% of cases for pharmaceuticals and in 84% of cases for devices; in more detail, administrative requests dealt with the need to extend a contract close to expiration or to run a new tender including some new products to replace an old tender. New products were requested in 38% and 16% of cases for pharmaceuticals and devices, respectively, but most of these new products could not be classified as innovative (according to the criteria of our national medicines agency [2]). Overall, there were only 3 innovative products (two medical devices and one drug), and their requests, according to our internal procedures, were transmitted to the Regional Unit responsible for HTA reports. These 3 products that met our criteria for innovation represented only 0.5% of all requests received through our website.

      In conclusion, in the perspective of public hospitals, our experience shows that the management of innovation raises several practical problems because clinical and administrative aspects often co-exist and cannot be easily separated from one another. One risk in this joint management of clinical and administrative issues is that administrative criteria eventually prevail over a sound HTA assessment of the product concerned.

      References

      [1] Seidman G, Atun R. Do changes to supply chains and procurement processes yield cost savings and improve availability of pharmaceuticals, vaccines or health products? A systematic review of evidence from low-income and middle-income countries. BMJ Glob Health. 2017 Apr 13;2(2):e000243.

      [2] Motola D, De Ponti F, Poluzzi E, Martini N, Rossi P, Silvani MC, Vaccheri A, Montanaro N. An update on the first decade of the European centralized procedure: how many innovative drugs? Br J Clin Pharmacol. 2006 Nov;62(5):610-6.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 03, Ole Jakob Storebø commented:

      In their editorial, Gerlach and colleagues make several critical remarks (Gerlach M, 2017) regarding our Cochrane systematic review on methylphenidate for children and adolescents with attention-deficit hyperactivity disorder (ADHD) (Storebø OJ, 2015). While we thank them for drawing attention to our review, we shall here try to explain our findings and standpoints.

      They argue, on behalf of the World Federation of ADHD and EUNETHYDIS, that the findings of our Cochrane systematic review contrast with previously published systematic reviews and meta-analyses (National Collaborating Centre for Mental Health (UK), 2009, Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), all of which judged the included trials more favorably than we did.

      There are methodological flaws in most of these reviews that could have led to inaccurate estimates of effect. For example, most of these reviews did not publish an a priori protocol (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), did not present data on spontaneous adverse events (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), did not report on adverse events as measured by rating scales (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), and did not systematically assess the risk of random errors, risk of bias, and trial quality (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008). King et al. emphasised in the quality assessments for the NICE review that almost all studies scored poorly and that, consequently, results should be interpreted with caution (King S, 2006).

      The authors of this editorial refer to many published critical editorials and argue that the issues they have raised have not been adequately addressed by us. On closer examination, it is clear that virtually the same criticism has been levelled at us each time by the same group of authors, published in several journal articles, blogs, letters, and comments (Banaschewski T, 2016, BMJ comment, Banaschewski T, 2016, Hoekstra PJ, 2016, Hoekstra PJ, 2016, Romanos M, 2016, Mental Elf blog).

      Each time, we have responded with clear counter-arguments, recalculation of data, and detailed explanations (Storebø OJ, 2016, Storebø OJ, 2016, Storebø OJ, 2016, PubMed comment, Storebø OJ, 2016, BMJ comments, Responses on Mental Elf, PubMed comment).

      Our main point is that the very low quality of the evidence makes it impossible to estimate, with any certainty, what the true magnitude of the effect might be.

      It is correct that a post-hoc exclusion of the four trials with co-interventions in both the methylphenidate and control groups and the one trial in preschool children changes the standardised mean difference effect size from 0.77 to 0.89. However, even if the effect size increases upon excluding these trials, the overall risk of bias and quality of the evidence renders this discussion irrelevant. As mentioned above, we have responded several times to this group of authors (Storebø OJ, 2016, Storebø OJ, 2016, PMID: 27138912, PubMed comment, Storebø OJ, 2016, BMJ comments, Responses on Mental Elf, PubMed comment).

      We did not exclude any trials for the use of a cross-over design, as these were included in a separate analysis. The use of end-of-period data in cross-over trials is problematic due to the risk of a “carry-over effect” (Cox DJ, 2008) and “unit of analysis errors” (http://www.cochrane-handbook.org). In addition, we tested for the risk of a “carry-over effect” by comparing trials with first-period data to trials with end-of-period data in a subgroup analysis. This showed no significant subgroup difference, but the analysis has sparse data and one can therefore not rule out this risk. Even with no statistical difference in our subgroup analysis comparing parallel-group trials to end-of-period data from cross-over trials, there was high heterogeneity. This means that the risk of “unit of analysis errors” and a “carry-over effect” is uncertain, and could be real. The point about our bias assessment has been raised earlier by these authors and others affiliated with EUNETHYDIS; in fact, we see nothing new here. There is considerable evidence that trials sponsored by industry overestimate benefits and underestimate harms (Flacco ME, 2015, Lathyris DN, 2010, Kelly RE Jr, 2006). Moreover, the AMSTAR tool for methodological quality assessment of systematic reviews includes funding and conflicts of interest as a domain (http://amstar.ca/). The Cochrane Bias Methods Group (BMG) is currently working on including vested interests in the upcoming version of the Cochrane Risk of Bias tool.

      The question of whether teachers can detect well-known adverse events of methylphenidate has also been raised earlier by these authors and others affiliated with EUNETHYDIS (Banaschewski T, 2016, BMJ comment, Banaschewski T, 2016, Hoekstra PJ, 2016, Hoekstra PJ, 2016, Romanos M, 2016, Mental Elf blog). We have continued to argue that teachers can detect the well-known adverse events of methylphenidate, such as loss of appetite and disturbed sleep. We highlighted this in our review (Storebø OJ, 2015) and have answered this point in several replies to these authors (Storebø OJ, 2016, Storebø OJ, 2016, Storebø OJ, 2016, PubMed comment, Storebø OJ, 2016, BMJ comments, Responses on Mental Elf, PubMed comment). The well-known adverse events of loss of appetite and disturbed sleep are easily observable by teachers as uneaten food left on lunch plates, yawning, general tiredness, and weight loss.

      We have taken the persistent, repeated criticism by these authors seriously, but no evidence was provided to justify changing our conclusions regarding the very low quality of evidence of the methylphenidate trials, which makes the true estimate of the methylphenidate effect unknowable. This is a methodological rather than a clinical or philosophical issue. We had no preconceptions about the findings of this review and followed the published protocol; therefore, any manipulations of the data proposed by this group of authors would contradict the accepted methods of high-quality meta-analyses. As we have repeatedly and clearly responded to the criticism of these authors, and it is unlikely that their view of our (transparent) work is going to change, we propose to agree to disagree.

      Finally, we do not agree that the recent analysis from registries provides convincing evidence on the long-term benefits of methylphenidate, due to the multiple limitations of this kind of study, albeit that interesting perspectives are provided. They require further study to be regarded as reliable.

      Ole Jakob Storebø, Morris Zwi, Helle B. Krogh, Erik Simonsen, Carlos Renato Maia, Christian Gluud


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 29, Lydia Maniatis commented:

      It is interesting that the strawman description (the purely feedforward description) that Heeger is correctly and trivially rejecting in this article is the description serving as the major theoretical premise of a more recent PNAS article (Greenwood, Szinte, Sayim and Cavanagh (2017)) of which Heeger served as editor, and which I have been extensively critiquing. Some representative quotes from that article:

      "Given that the receptive fields at each stage in the visual system are likely built via the summation of inputs from the preceding stages..."

      "...idiosyncrasies in early retinotopic maps...would be propagated throughout the system and magnified as one moved up the cortical hierarchy."

      "Given the hierarchical structure of the visual system, with inherited receptive field properties at each stage..."

      These descriptions are never qualified in Greenwood et al (2017), and guide the interpretation of data. How does Heeger reconcile the assertions in the paper he edited with the assertions in his own paper?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 17, Lydia Maniatis commented:

      Given that the number of possible realities (distribution of matter and light) underlying each instance of retinal stimulation is infinite; given that each instance of retinal stimulation is unique; given that any individual's experience represents only a small subsample of possible experience; given that, as is well-known, lifetime experience (not least for the reasons given above) cannot explain perception (what do we see when we lack the experience needed to re-cognize something?), Heeger's (2017) references to prior probability distributions are unintelligible (and thus untestable).

      The question of how these statistical distributions are supposed to be instantiated in the brain is also left open, another reason this non-credible "theory" is untestable. All we have is a set of equations that can't be linked to any relevant aspect of the reality they're supposed to explain.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 12, Randi Goldman commented:

      I'm the first author on this study, and there's now a free online version of this tool that can be used for counseling patients. I hope you find it useful: https://www.mdcalc.com/bwh-egg-freezing-counseling-tool-efct


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 05, Anne Niknejad commented:

      Error in the Figure 3 legend: "The enzyme activity with pNP-myristate was taken as 100%."

      should be "The enzyme activity with pNP-butyrate was taken as 100%."

      to be in accordance with results displayed and text: "Maximum enzymatic activity was observed with pNP-butyrate (C4) (100%, 176.7U/mg), while it showed weaker activity with p-NP acetate (C2, 53.08%), p-NP octanoate (C8, 25.59%), p-NP deconoate (C10, 35.05%), p-NP laureate (C12, 18.51%), p-NP myristate (C14, 7.39%), p-NP palmitate (C16, 2.23%) than that with C4 ( Fig. 3)."

      Also, it is 'decanoate', not 'deconoate' (this kind of detail impacts text mining).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 03, Randi Pechacek commented:

      One of the authors of this paper, Despina Lymperopoulou, wrote about this paper on microBEnet discussing some of the background. Read about it here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 14, Peter Hajek commented:

      To clarify the dependence potential of vaping, could the authors provide data on never-smokers who became daily vapers please?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 15, Lydia Maniatis commented:

      In short, there are too many layers of uncertainty and conceptual vagueness here for this project to offer any points of support for any hypothesis or theory.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 15, Lydia Maniatis commented:

      Every hypothesis or theory is a story, but in the current relaxed climate in vision science, the story doesn't need to be empirically tested and well-rationalized in order to be publishable. We just need a special section in the paper acknowledging these problems, often entitled "limitations of the study" or, as here, "qualifying remarks." Below I excerpt remarks from this and other sections of the paper (caps mine):

      "A major goal of the present work was to test hypotheses related to multi-stage programming of saccades in infants. On the empirical side, we exposed 6-month-old infants to double-step trials but DID NOT SUCCEED IN COLLECTING RELIABLE DATA. In this respect, the model simulations were used to investigate an aspect of eye movement control that was not tested empirically."

      "In the model simulations, certain model parameters were allowed to vary across age groups and/or viewing conditions, based on theoretical and empirical considerations. We then interpreted the constellation of best-fitting parameters." There is a great deal of flexibility in post hoc data-fitting with numerous free parameters.

      "in supplementary analyses (not presented here) we used this approach to follow up on the results obtained in Simulation Study 2. To determine the individual contributions of saccade programming and saccade timing model parameters in generating the fixation duration distributions from LongD and ShortD groups during free viewing of naturalistic videos, we ran simulations in which we estimated the saccade programming parameters (mean durations of labile and non-labile stages) while keeping the saccade timing parameters fixed, and vice versa. In brief, the results confirmed that, for both ShortD and LongD groups, a particular combination of saccade-programming and saccade timing parameters was needed to achieve a good fit. HOLDING EITHER SET OF PARAMETERS FIXED DID NOT RESULT IN AN ADEQUATE FIT."

      There are also a lot of researcher degrees of freedom in generating and analysing data. From the methods:

      "Fixation durations (FDs). Eye-tracking data from infants may contain considerably higher levels of noise than data from more compliant participants such as adults due to various factors including their high degree of movement, lack of compliance tothe task, poor calibration and corneal reflection disturbances dueto the underdeveloped cornea and iris (Hessels, Andersson,Hooge, Nystr歬 & Kemner, 2015; Saez de Urabain, Johnson, &Smith, 2015; Wass, Smith, & Johnson, 2013). To account for this potential quality/age confound, dedicated in-house software for parsing and cleaning eye tracking data has been developed (GraFix, Saez de Urabain et al., 2015). This software allows valid fixations to be salvaged from low-quality datasets whilst also removing spurious invalid fixations. In the present study, both adult and infant datasets were parsed using GraFix鳠two-stage semi-automated process (see Appendix A for details). The second stage of GraFix involves manual checking of the fixations detected automatically during the first stage. This manual coding stage was validated by assessing the degree of agreement between two different raters. ONE RATER WAS ONE OF THE AUTHORS (IRSdU)."

      The "in-house software" changes the data, a d therefore the assumptions implicit in its computations should be made explicit. The p-value used to assess rater agreement was p<05, which nowadays is considered rather low. We need more info on the "valid/invalid" distinction as well as on how the software is supposed to make this distinction.

      From the "simulation studies": "The parameter for the standard deviation of the gamma distributions (rc) is a fixed parameter. To accommodate the higher variability generally observed in infant data compared to adult data, it was set to 0.33 for the infant data and 0.25 for the adult data. These values were adopted from previous model simulations (Engbert et al., 2005; Nuthmann et al., 2010; Reichle et al., 1998, 2003)."

      Using second-hand values adopted by other researchers in the past doesn't absolve the current ones from explaining the rationale behind these choices (assuming there is one).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 07, Dejan Stevanovic commented:

      Since this review was done in 2014, one study has been published supporting the cross-cultural validity of the Revised Child Anxiety and Depression Scale (RCADS) (https://www.ncbi.nlm.nih.gov/pubmed/27353487).

      Two studies have been published evidencing that the self-report Strengths and Difficulties Questionnaire (SDQ) lacks cross-cultural validity and is not suitable for cross-cultural comparisons (https://www.ncbi.nlm.nih.gov/pubmed/28112065, https://www.ncbi.nlm.nih.gov/pubmed/?term=New+evidence+of+factor+structure+and+measurement+invariance+of+the+SDQ+across+five+European+nations).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 19, Helmi BEN SAAD commented:

      The exact names of the authors are: Ben Moussa S, Sfaxi I, Ben Saad H, Rouatbi S.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 19, Helmi BEN SAAD commented:

      The exact names of the authors are: Ben Saad H, Khemiss M, Nhari S, Ben Essghaier M, Rouatbi S.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 13, Khaled Moustafa commented:

      Thank you, Josip, for your thoughtful comments. It is a general issue, indeed, among many others that need to be fixed in the publishing industry. Hopefully, some publishers will be all ears.

      Regards,

      KM


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 08, Josip A. Borovac commented:

      Great input, Khaled! Thank you so much for these observations. You are not alone my friend! Best wishes, JAB


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Feb 08, Khaled Moustafa commented:

      Thank you all for your comments.

      Further reasons that strongly support the idea not to ask for any specific format or style at the submission stage include:

      1) It is rare that a manuscript is accepted immediately on the first submission; in most cases there is a revision, even if only a minor one. So editors can ask authors to apply the journal's style and page setup in the revised version only, not at the submission stage.

      2) In most cases, the final publication format is different from the initial submission format, whatever the style required by the journal and applied by the authors. That is, even if we apply a particular journal's style at the submission stage, the accepted final version (i.e., the PDF file) often appears in a different format and style than the one required at submission. So, asking for a given page setup, citation style or specific format at the submission phase but not carrying it into the final version is an obvious waste of authors' time. Much time is indeed lost doing things that are not taken into consideration in the final published versions (except maybe in the HTML version or the authors' version posted online prior to the proof version, but not in the final PDF format). So, once again, it does not make much sense to require drastic formatting at the submission stage.

      Beyond the innumerable reference styles (by name or date, with superscript or brackets, journal names in italic or not, in bold or not, underlined or not, etc.), some journals return manuscripts to the authors just because the references are indented or non-indented, or because the headers were enumerated or non-enumerated. Other journals ask to upload two files of the same manuscript (a Word file and a PDF file), and some others ask to include the images/figures or tables in the text or in separate files, etc. All these are trivial issues of form that do not change the inherent value of a manuscript. As it is the content that should matter, not the format or style, page setup or styling could be done only in a revised version once the manuscript is accepted, not before.

      At the very least, journals should make it optional for authors to apply the journal's style at the submission stage. On the other hand, the submission steps themselves are also long and overwhelming in many journals. In my view, these too need to be shortened to the strict minimum (e.g., log in and upload files). Then, if the manuscript is accepted, authors could provide the long list of required information (statement of conflict of interest, list of keywords, and all the other questions and answers currently stuffed into the submission process, etc.).

      Regards,

      KM


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Feb 07, Saul Shiffman commented:

      Super-agree. Even application of this attitude to paper length could be useful. I'm sure we've all gone to the trouble of whittling a paper down to the (ridiculously low) word-count requirements of a particular journal, only to have the paper rejected.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Feb 07, Josip A. Borovac commented:

      Agree with this - a well-identified problem and a good article. Some journals are trying to implement a "your paper - your way" policy, but this should be taken to a whole other level, generally speaking. Many journals insist on very obscure formatting styles, and resubmitting to another journal is a nightmare and definitely time-consuming. Researchers should focus on science as much as possible, and much less on submission technicalities and crude formatting issues. I never understood the point of having 3 billion citation styles, for example. What is the true purpose of that except making our lives miserable?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Feb 07, Thittayil Suresh Apoorv commented:

      This is a nice suggestion by the author. Some journals already follow this approach. Journals like Cytokine are part of Elsevier's Article Transfer Service (ATS). Reformatting is required only after acceptance.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 Feb 07, Francesco Brigo commented:

      I couldn't agree more: time is brain!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 13, Konstantinos Fountoulakis commented:

      This paper discusses possible mechanisms of the antidepressant effect (1). Although the whole discussion is intriguing, it should be noted that the fundamental assumption of the authors is mistaken. More specifically, the authors start from the position that there is a latency of approximately two weeks from the initiation of antidepressant treatment to the manifestation of the treatment effect. This is an old concept, now proven wrong. We now know that the treatment response starts within days, and that it takes two weeks not for the treatment effect to appear but for the medication group to separate from placebo (2). This conclusion is so solid that it has been incorporated in the NICE CG90 guidelines for the treatment of depression (available at https://www.nice.org.uk/guidance/cg90/evidence/full-guidance-243833293, page 413). These are two completely different concepts, often confounded in the literature. It is clear that medication significantly improves the chances of a patient being better after two weeks in comparison to placebo, but the improvement itself has started much earlier. We also know that the trajectories of patients who respond to medication are similar to the trajectories of those who improve under placebo. One possible consequence of this observation is that there might not be a physiological difference underlying response under medication in comparison to response under placebo; however, the chances that these physiological mechanisms are activated are higher under medication.

      References

      1. Harmer CJ, Duman RS, Cowen PJ. How do antidepressants work? New perspectives for refining future treatment approaches. The Lancet Psychiatry 2017.

      2. Posternak MA, Zimmerman M. Is there a delay in the antidepressant effect? A meta-analysis. The Journal of Clinical Psychiatry 2005; 66(2): 148-58.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 19, NephJC - Nephrology Journal Club commented:

      This trial of extended hemodialysis hours and quality of life was discussed on September 12th and 13th, 2017 on #NephJC, the open online nephrology journal club. Introductory comments written by Swapnil Hiremath are available at the NephJC website here.

      The highlights of the tweetchat were:

      • The study was well-designed and relevant.

      • Interpreting the results, can you validly fit a linear model to a questionnaire score?

      • Some would argue that including both incident and prevalent patients may have confounded some of the results as LV mass regresses in the first few months after initiation.

      • This paper confirmed previous literature that preserving residual renal function is really essential for better outcomes on dialysis and that HD dose should track with this.

      • Fluid control may be more important than solute clearance.

      Transcripts of the tweetchats, and curated versions as a Storify, will shortly be available from the NephJC website.

      Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visit the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 06, Clive Bates commented:

      To build on Professor Hajek's cogent criticism, I would like to add a further three points:

      First, the authors offer the usual disclaimer that they cannot make causal inferences from a study of this nature, which is correct. But they go on to do exactly that within the same paragraph:

      Finally, although the cross-sectional design of our study allowed us to determine associations between variables, it restricted our ability to draw definitive causal inferences, particularly about the association between ENDS use and smoking cessation. Nevertheless, the association between ENDS use and attempts at smoking cessation suggests that a substantial proportion of smokers believe that ENDS use will help with smoking cessation. Furthermore, the inverse association between ENDS use and smoking cessation suggests that ENDS use may actually lower the likelihood of smoking cessation. (emphasis added)

      From that, they build a policy recommendation:

      "Tobacco cessation programs should tell cigarette smokers that ENDS use may not help them quit smoking"

      That statement is literally true for e-cigarettes and every other way of quitting smoking, but it is not a meaningful or legitimate conclusion to draw from this study because the design does not allow for causal inferences.

      Second, the authors characterise e-cigarette use as 'ever use' in calculating their headline odds ratio (0.53).

      Our most important finding was that having ever used ENDS was significantly associated with reduced odds of quitting smoking.

      Ever use could mean anything from 'used once and never again' to 'use all day, every day' or 'used once when I couldn't smoke'. What it does not mean is 'used an e-cigarette in an attempt to quit smoking'. So this way of characterising e-cigarette use can tell us little about people who do try to quit smoking using an e-cigarette or whether that approach should be recommended.

      Third, as well as the basic timing point made by Professor Hajek, the authors do not consider a further obvious contributory explanation: reverse causality. It is quite possible that those who find it hardest to quit or don't want to quit may be those who are drawn to trying e-cigarettes - either because they don't want to stop or have tried everything else already and failed. It is not safe to assume that the population is homogeneous in the degree of nicotine dependence, that e-cigarette ever-use is randomly distributed across the population or that e-cigarette use is generally undertaken with the intention of quitting.

      The analysis provides no insights relevant to the efficacy of e-cigarettes in smoking cessation and building any sort of recommendation to smoking cessation programs based on this survey is wrong and inappropriate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 06, Peter Hajek commented:

      The unsurprising finding that people who quit smoking between 2009 and 2013 were less likely to try e-cigarettes than those who still smoked in 2014 is presented as if this shows that the experience with vaping somehow ‘reduced odds of quitting smoking’. It shows no such thing.

      It is obvious that current smokers must be more likely to try e-cigarettes than smokers who quit years ago. E-cigarettes were more widely used in 2014 than in previous years. People who quit smoking up to five years earlier would have had far fewer (or even no) opportunities to try vaping, and no reason to do so after they quit. Current smokers, in contrast, continue to have a good reason to try e-cigarettes, and have many more opportunities to do so. This provides no information at all about odds of quitting smoking or about whether e-cigarettes are effective or not.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 12, Lydia Maniatis commented:

      It occurs to me that there are two ways to interpret the finding that people were influenced by the stable versions of the dress image in the different settings. The authors say that these versions introduced a bias as to the illumination. But it seems to me more straightforward to assume that they introduced a bias or expectation with respect to the actual colors of the dress, that is, a perceptual set mediating the latter. It took me a while to realize this - an example of how explanations that are given to us can create a 'conceptual set.'


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 15, Lydia Maniatis commented:

      I don’t think the authors have addressed the unusual aspects of the dress in saying that “The perceived colors of the dress are due to (implicit) assumptions about the illumination.” As they note themselves, “This is exactly what would be predicted from classical color science…”

      The spectrum of light reflected to our eye is a function of the reflectance properties of surfaces and the spectrum of the illumination; both are disambiguated on the basis of implicit assumptions, and both are represented in the percept. The two perceptual features (surface color and illumination) are two sides of the same coin: Just as we can say that seeing a surface as having color x of intensity y is due to assumptions about the color and intensity of the illuminants, so we can say that seeing illumination of color x and intensity y is due to implicit assumptions about the reflectance (how much light they reflect) and the chromaticity (which wavelengths they reflect/absorb) of the viewed surfaces. We haven’t explained anything unless we can explain both things at the same time.

      The authors are choosing one side of the perceptual coin – the apparent illumination – and claiming to have explained the other. Again, it’s a truism to say that seeing a patch of the dress as color “x” implies we are seeing it as being under illumination “y,” while perceiving the patch as a different color means perceiving a different illumination. This doesn’t explain what makes the dress unusual - why it produces different color/illumination impressions in different people.

      The authors seem to want to take the “experience” route (“prior experiences may influence this perception”); this is logically and empirically untenable, as has been shown and argued innumerable times in the vision literature. For one thing, such a view is circular, since what we see in the first place is a product of the assumptions implicit in the visual process. It’s not as though we see things first, and then adopt assumptions that allow us to see it…In addition, why would such putative experience influence only the dress, and not each and every percept? (The same objection applies to explanations in terms of physiological differences). Again, the question of what makes the dress special is left unaddressed.

      It’s odd that, for another example of such a phenomenon, vision researchers need to turn to “poppunkblogger.” If they understood it in principle, then they would be able to construct any number of alternative versions. Even if they could show the perception of the dress to be experience-based (which, again, is highly unlikely to impossible), this would not not help; they would still be at a loss to explain why different people see different versions of one image and not most others. To understand the special power of the dress, they need at a minimum to analyze its structure, not only in terms of color but in terms of shape, which is the primary mediator of all aspects of perception. Invoking “scene interpretation” and “the particular color distributions” are only placeholders for all the things the authors don’t understand.

      The construction of images that show that the dress itself can produce consistent percepts is genuinely interesting, but it is a problem that the immediate backgrounds are not the same (e.g. arm placements). This produces confounds. The claim that these confounds are designed to produce the opposite effect of what is seen, based on contrast effects, is not convincing, since the idea that illusions involving transparency/illumination are based on local contrast effects is a claim that is easy to falsify empirically, and has been falsified. So we are dealing with unanalyzed confounds, and one has to wonder how much blind trial and error was involved in generating the images.

      Finally, I’m wondering why a cutout of the dress wasn’t also placed against a plain background as a control; what happens in this case? Has this been done yet?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 03, Elisabeth Schramm commented:

      In reply to a comment by Falk Leichsenring

      Allegiance effects controlled

      Elisabeth Schramm, PhD; Levente Kriston, PhD; Ingo Zobel, PhD; Josef Bailer, PhD; Katrin Wambach, PhD; Matthias Backenstrass, PhD; Jan Philipp Klein, MD; Dieter Schoepf, MD; Knut Schnell, MD; Antje Gumz, MD; Paul Bausch, MSc; Thomas Fangmeier, PhD; Ramona Meister, MSc; Mathias Berger, MD; Martin Hautzinger, PhD; Martin Härter,MD, PhD

      Corresponding Author: Elisabeth Schramm, PhD, Department of Psychiatry, Faculty of Medicine, University of Freiburg, Hauptstrasse 5, 79104 Freiburg, Germany (elisabeth.schramm@uniklinik-freiburg.de)

      We acknowledge the comment of Drs. Steinert and Leichsenring (1) on our study (2), reasoning that our findings may at least in part be attributed to allegiance effects. Unfortunately, they provide neither a clarification of what exactly they refer to with the term “allegiance effects” nor a specific description of the presumed mechanisms (chain of effects) through which they think allegiance may have influenced our results. In fact, as specified both in the trial protocol (3) and the study report (2), we took a series of carefully implemented measures to minimize bias. Contrary to what is stated in the comment, training and supervision of the study therapists and the center supervisors were performed by qualified and renowned experts for both investigated approaches (Martin Hautzinger for Supportive Psychotherapy and Elisabeth Schramm for the Cognitive Behavioral Analysis System of Psychotherapy). Moreover, none of them has been involved in treating any study patients in this trial. We are confident that any possible allegiance of the participating researchers, therapists, supervisors, or other involved staff towards any, both, or none of the investigated interventions is very unlikely to have been able to surmount all of the implemented measures against bias and to affect the results substantially.

      References

      (1) Steinert C, Leichsenring F. The need to control for allegiance effects in psychotherapy research. PubMed Commons. Sep 08 2017

      (2) Schramm E, Kriston L, Zobel I, Bailer J, Wambach K, Backenstrass M, Klein JP, Schoepf D, Schnell K, Gumz A, Bausch P, Fangmeier T, Meister R, Berger M, Hautzinger M, Härter M. Effect of Disorder-Specific vs Nonspecific Psychotherapy for Chronic Depression: A Randomized Clinical Trial. JAMA Psychiatry. Mar 01 2017; 74(3): 233-242

      (3) Schramm E, Hautzinger M, Zobel I, Kriston L, Berger M, Härter M. Comparative efficacy of the Cognitive Behavioral Analysis System of Psychotherapy versus supportive psychotherapy for early onset chronic depression: design and rationale of a multisite randomized controlled trial. BMC Psychiatry. 2011;11:134


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Sep 08, Falk Leichsenring commented:

      The need to control for allegiance effects in psychotherapy research

      Christiane Steinert1,2, PhD and Falk Leichsenring, DSc2

      1 Medical School Berlin, Department of Psychology, Calandrellistraße 1-9, 12247 Berlin

      2 University of Giessen, Department of Psychosomatics and Psychotherapy, Ludwigstraße 76, 35392 Giessen

      Corresponding author: Prof. Dr. Falk Leichsenring University of Giessen Department of Psychosomatics and Psychotherapy Ludwigstr. 76, 35392 Giessen, Germany Fon | +49-641-99 45647 Fax | +49-641-99 45669 Mail | falk-leichsenring@psycho.med.uni-giessen.de

      In a recent trial of psychotherapeutic efficacy, Schramm et al. hypothesized that the Cognitive Behavioral Analysis System of Psychotherapy (CBASP) would be superior to supportive therapy (SP) in the treatment of chronic depression.1 This hypothesis was corroborated: CBASP improved depression significantly more, but only by a moderate effect size of 0.31. Some issues, however, raise the question of possible allegiance effects on several levels.2 (1) The authors clearly are in favour of CBASP and expect it to be superior to SP (primary hypothesis). (2) Furthermore, six authors participated as therapists in the CBASP group, while none of the authors seems to have participated in the SP group. The authors can be expected to have an allegiance to CBASP. (3) In addition, the therapy sessions were supervised by the very same authors participating in the CBASP group as therapists. (4) No expert in SP was listed as having participated in the study, neither as a researcher, a therapist nor a supervisor. (5) The authors stated that all therapists had completed a 3-year psychotherapeutic training program or were in an advanced stage of training. However, at least in Germany, there is no 3-year training program specifically for SP, only for CBT. Thus, it is unlikely that the therapists in the SP condition had an allegiance to SP in the same way as the therapists in the CBASP condition. Further information on the background of the therapists in SP would be informative.

      Thus, a researcher, a therapist and a supervisor allegiance effect can be expected to be present in this study.3

      Munder et al. found an association between researcher allegiance and outcome of r = 0.35, which corresponds to a medium effect size.2 For this reason the possibility cannot be ruled out that the moderate between-group effect size of 0.31 is at least in part due to allegiance effects. The fact that the treatments do not seem to differ with regard to treatment fidelity ratings does not rule out this possibility. This is also true for the fact that therapists met the criteria for mastery of CBASP and SP before treating study patients.

      References

      1. Schramm E, Kriston L, Zobel I, Bailer J, Wambach K, et al. Effect of Disorder-Specific vs Nonspecific Psychotherapy for Chronic Depression: A Randomized Clinical Trial. JAMA Psychiatry. Mar 01 2017;74(3):233-242.
      2. Munder T, Flückiger, C, Gerger, H, Wampold, BE, Barth, J. Is the Allegiance Effect an Epiphenomenon of True Efficacy Differences Between Treatments? A Meta-Analysis. J Couns Psychol. 2012(Epub ahead of print).
      3. Steinert C, Munder T, Rabung S, Hoyer J, Leichsenring F. Psychodynamic Therapy: As Efficacious as Other Empirically Supported Treatments? A Meta-Analysis Testing Equivalence of Outcomes. Am J Psychiatry. May 25 2017:appiajp201717010057.
      4. Cuijpers P, Huibers MJ, Furukawa TA. The Need for Research on Treatments of Chronic Depression. JAMA Psychiatry. Mar 01 2017;74(3):242-243.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 16, Lydia Maniatis commented:

      "Our findings suggest that people who perceive the dress as blue might rely less on contextual cues when estimating surface color."

      Can there be a "context-free" condition, in principle? What would this look like? The term context seems far too general as it is used here. The conclusion as framed has no theoretical content. If the reference is to specific manipulations, and the principles behind them, then it's an entirely different thing, and should be specified.

      The fact that the results of this study differed significantly from the results of others should be of concern with respect to all of them. Might replication attempts be in order, or are chatty post-mortems enough?

      "Our results lend direct support to the idea that blue and white perceivers see the dress in a different color because they discount different illumination colors."

      This statement involves a major conceptual error in the sense that it cannot function as an explanation. The visual system infers both surface color and illumination from the stimulation of the retina by various wavelengths of various intensities. Both surface appearance and illumination are inferred from the same stimulation; to make an inference about illumination is to simultaneously make an inference about reflectance/chromaticity. One “explains” the other in the sense that each inference is contingent on the other; but to say that one inference has priority over the other is like saying the height of one side of a see-saw determines the height of the other; it’s an empty statement. What we need to explain is the movement of the whole, interconnected see-saw.

      This error is unfortunately a common one; it's also made by Witzel, Racey and O'Regan (2017) in this special issue. In short, this is a non-explanation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 08, Raphael Stricker commented:

      Another Lyme OspA Vaccine Whitewash

      The meta-analysis by Zhao and colleagues comes to the conclusion that "the OspA vaccine against Lyme disease is safe and its immunogenicity and efficacy have been verified." The authors arrive at this sunny conclusion by excluding 99.6% of published articles that demonstrate potential problems with the OspA vaccine. Furthermore, the authors ignore peer-reviewed studies, FDA regulatory meetings and legal proceedings that point to major problems with OspA vaccine safety (1-3). This whitewash bodes ill for future Lyme vaccine candidates because it fosters disregard for vaccine safety among Lyme vaccine manufacturers and mistrust among potential Lyme vaccinees.

      References

      1. Stricker RB (2008) Lymerix® risks revisited. Microbe 3: 1–2.

      2. Marks DH (2011) Neurological complications of vaccination with outer surface protein A (OspA). Int J Risk Saf Med 23: 89–96.

      3. Stricker RB, Johnson L (2014) Lyme disease vaccination: safety first. Lancet Infect Dis 14(1):12.

      Disclosure: RBS is a member of the International Lyme and Associated Diseases Society (ILADS) and a director of LymeDisease.org. He has no financial or other conflicts to declare.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 02, Stuart RAY commented:

      Regarding the prominent concluding statement of the abstract, what is the evidence that treatment success (with current recommended regimens) will be reduced by the RASs found in this study?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 15, thomas samaras commented:

      This is an excellent paper on height trends in Sardinia. However, I disagree with the premise that height can be used as a measure of health and longevity. Many researchers view height as a byproduct of the Industrial Revolution and the Western diet. In actuality, greater height and the associated weight are harmful to our long-term health and longevity. There are many reasons for this position.

      1. Carrera-Bastos reported that our modern diet is not the cause of increased life expectancy (LE). Instead, our progress in sanitation, hygiene, immunization, and medical technology has driven the rise in life expectancy. This increase in life expectancy is not that great at older ages; e.g., in 1900, a 75-year-old man could expect to live another 8.5 years. A hundred years later, he could expect to live 10 years. Not a substantial improvement in spite of great advances in food availability, lifestyle, medicine, worker safety, etc.

      2. A number of researchers have associated our increased height with excess nutrition, not better quality nutrition (Farb, Galton, Gubner, and Campbell).

      3. Nobel prize winner Charles Townes stated that shorter people live longer. Other scientists supporting the longevity benefits of smaller body size within the same species include Bartke, Rollo, Kraus, Pavard, Promislow, Richardson, Topol, Ringsby, Barrett, Storms, Moore, Elrick, De Magahlaes and Leroi.

      4. Carrera-Bastos, et al. reported that pre-Western societies rarely get age-related chronic diseases until they transition to a Western diet. Trowell and Burkitt found this to be true based on their research over 40 years ago (Book: Western Diseases, Trowell and Burkitt.) Popkin noted that the food system developed in the West over the last 100+ years has been “devastating” to our health.

      5. A 2007 report by the World Cancer Research Fund/American Institute of Cancer Research concluded that the Industrial Revolution gave rise to the Western diet that is related to increased height, weight and chronic diseases. (This report was based on evaluation of about 7000 papers and reports.)

      6. US males are 9% taller and have a 9% shorter life expectancy. Similar differences among males and females in Japan and California Asians were found. It is unlikely that the inverse relationship in life expectancy and height is a coincidence. (Bulletin of the World Health Organization, 1992, Table 4.)

      7. High animal protein intake is a key aspect of the Western diet, but it has many negative effects. For example, a high-protein diet increases the levels of CRP, fibrinogen, Lp(a), IGF-1, Apo B, homocysteine, type 2 diabetes, and free radicals. In addition, the metabolism of protein has more harmful byproducts; e.g., the metabolism of fats and carbohydrates produces CO2 and water, whereas protein metabolism produces ammonia, urea, uric acid and hippuric acid (Fleming, Levine, Lopez).

      8. The high LE ranking of tall countries is often cited as supporting the conviction that taller people live longer. However, if we eliminate non-developed countries, which have high death rates during the first 5 years of life and poor medical care, the situation changes: among developed countries, shorter countries rank higher than tall countries. For example, of the top 10 countries, only Iceland is a tall country. The other developed countries are relatively short or medium in height. The top 10 countries include: Monaco (1), Singapore, Japan, Macau, San Marino, Iceland (tall exception), Hong Kong, Andorra, Switzerland, and Guernsey (10). The Netherlands, one of the tallest countries in Europe, ranks 25th from the top. The rankings of other tall countries include: Norway (21), Germany (34), Denmark (47), and Bosnia and Herzegovina (84). Source for LE data: CIA World Factbook, 2016 data. Male height data from Wikipedia.

      It should be pointed out that a number of confounders exist that can invalidate mortality studies that show shorter people have higher mortality. Some of these confounders include socioeconomic status, higher weight for height in shorter people, smoking, and failure to focus on ages exceeding 60 years (differences showing shorter people live longer generally occur after 60 years of age). For example, Waaler’s mortality study covered the entire age range. He found that between 70 and 85 years of age, tall people had a higher mortality than shorter men between 5’7” and 6’. An insurance study (Build Study, 1979) found that when they compared shorter and taller men with the same degree of overweight, the shorter men had a slightly lower mortality.

      Anyone interested in the evidence showing that smaller body size is related to improved health and longevity can find evidence in the article below which is based on over 140 longevity, mortality, survival and centenarian studies.

      Samaras TT. Evidence from eight different types of studies showing that smaller body size is related to greater longevity. JSRR 2014. 2(16): 2150-2160, 2014; Article no. JSRR.2014.16.003


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 01, Hilda Bastian commented:

      The authors raise interesting and important points about the quandaries and complexities involved in updating a systematic review and reporting the update. However, their review of the field and conclusion that of the 250 journals they looked at, only BMC Systematic Reviews has guidance on the process of updating is deeply flawed.

      One of the 185 journals in the original sample they included (Page MJ, 2016) is the Cochrane Database of Systematic Reviews. Section 3.4 of the Cochrane Handbook is devoted to updating, and updating is addressed within several other sections as well. The authors here refer to discussion of updating in Cochrane's MECIR standards. Even though this does not completely cover Cochrane's guidance to authors, it contradicts the authors' conclusion that BMC Systematic Reviews is the only journal with guidance on updating.

      The authors cite a recent useful analysis of guidance on updating systematic reviews (Garner P, 2016). Readers who are interested in this topic could also consider the broader systematic review community and methodological guidance. Garritty C, 2010 found 35 organizations that have policy documents at least on updating, and many of these have extensive methodological guidance, for example AHRQ (Tsertsvadze A, 2008). Recently, guidelines for updating clinical guidelines have also been published (Vernooij RW, 2017).

      The authors reference some studies that address updating strategies, however this literature is quite extensive. You can use this filter in PubMed along with other search terms for studies and guidance: sysrev_methods [sb] (example). (An explanation of this filter is on the PubMed Health blog.)

      Disclosure: I work on PubMed Health, the PubMed resource on systematic reviews and information based on them.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 13, GARRET STUBER commented:

      The corrected version of this manuscript is now online at the journal's website. A detailed correction notice is linked to the corrected article.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 09, GARRET STUBER commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 20, Erin Frazee Barreto commented:

      The Cystatin C-Guided Vancomycin Dosing tool can be accessed using the mobile or web app 'Calculate', available from QxMD:

      https://qxmd.com/calculate/calculator_449/vancomycin-dosing-based-on-egfr-creatinine-cystatin-c


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 08, Israel Hanukoglu commented:

      A three dimensional (3D) video of the human eccrine sweat gland duct covered with sodium channels can be seen at: https://www.youtube.com/watch?v=JcddOILffOM

      The video was generated based on the results of this study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 17, Israel Hanukoglu commented:

      Our lab has undertaken to map the sites of expression and localization of ENaC and CFTR in epithelial tissues. This article is the second in the series and concentrates on the skin.

      In the first paper, we covered the sites of localization of ENaC and CFTR in the respiratory tract and the female reproductive tract. Both of these tissues contain large stretches of epithelium covered with multi-ciliated cells. We had shown that in these epithelia with motile cilia, ENaC is expressed along the entire length of the cilia. Reference: https://www.ncbi.nlm.nih.gov/pubmed/22207244

      In the current work on the skin, epidermis, and epidermal appendages, ENaC was found to be located mostly in the cytoplasm of keratinocytes, sebaceous glands, and smooth muscle cells. Only in the eccrine sweat glands were ENaC and CFTR found predominantly on the luminal membrane facing the duct lumen. Thus, the reuptake of Na+ ions secreted in sweat probably takes place in the eccrine glands.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 28, Eric Fauman commented:

      For a report on Somascan pQTLs from a much larger but cross-sectional population, check out the recent (not yet peer-reviewed) BioRxiv paper from Karsten Suhre and colleagues:

      Connecting genetic risk to disease endpoints through the human blood plasma proteome http://biorxiv.org/content/early/2016/11/09/086793

      For example, the association reported above (rs3197999, p-value = 6e-10 for MST1 levels) is reported in the BioRxiv paper with a p-value of 1e-242.

      The data from the BioRxiv paper can be explored at http://proteomics.gwas.eu


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 05, Martine Crasnier-Mednansky commented:

      All data in this paper should be discarded simply because the strains used by the authors are not what the authors think they are, as explained below.

      An Escherichia coli strain lacking PEP synthase does not grow on pyruvate. In fact, the crucial role of PEP synthase during growth on pyruvate is well documented. In brief, mutant strains were isolated which could grow on glucose or acetate but not on pyruvate; it was found they lacked PEP synthase (see Cooper RA, 1967 for an early paper). Furthermore, because the PEP synthase gene (ppsA) is transcriptionally positively regulated by the fructose repressor FruR (Geerse RH, 1986, also known as Cra), fruR mutant strains are routinely checked for their inability to grow on pyruvate. Therefore, data (in supplementary Fig. 1) indicating wild type and ppsA strains grow equally well on pyruvate are incorrect; the strain used by the authors is not a ppsA strain.

      The ptsI strain also does not appear to be a ptsI strain: it grows on xylose as well as a wild-type strain does (Figure 3b), which it should not, because growth on xylose requires cAMP, whose synthesis requires the phosphorylated form of Enzyme IIA<sup>Glc</sup>.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 12, Atanas G. Atanasov commented:

      Silymarin is indeed a very prominent herbal product with a variety of demonstrated bioactivities. It was also recently studied in my group in the context of regulation of PPARgamma activity and macrophage cholesterol efflux. I have enjoyed reading this review focused on the usefulness of the plant product in chronic liver disease, and have featured it on: http://healthandscienceportal.blogspot.com/2017/03/how-beneficial-is-silymarinsilybin-use.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.