- Jul 2018
-
europepmc.org
-
On 2017 Apr 01, PHILIP SCHAUER commented:
I beg to differ with Dr. Weiss: the control group of our study was provided intensive, multi-agent medical therapy as tolerated, with an intent to reach an HbA1c < 6%, as per ACCORD. Furthermore, medication choice, intensification, dose and frequency were managed by a highly qualified, experienced team of expert endocrinologists at an academic center. A favorable decrease in HbA1c of 1.7% from baseline (> 9% HbA1c) was achieved initially in the control group, which was already heavily medicated at baseline (an average of 3+ diabetes drugs). Thus, many would agree that our approach was "intensive". This initial improvement, however, was not sustained, possibly due to inherent limitations of medical therapy related to adherence, side effects, and cost. Surgery is much less adherence-dependent, which likely accounts for some of its sustained benefit. Many will disagree with Dr. Weiss that ACCORD defines "true intensive medical therapy", since that regimen actually increased mortality compared to standard therapy, likely due to drug-related effects (e.g., hypoglycemia). On the contrary, more than 10 comparative, non-randomized studies show a long-term mortality reduction with surgery compared to medical therapy alone (1). New, widely endorsed guidelines by the American Diabetes Association and others now support the role of surgery for treating diabetes in patients with obesity, especially for patients who are not well controlled on medical therapy (2).
1) Schauer et al. Diabetes Care. 2016 Jun;39(6):902-11. 2) Rubino et al. Diabetes Care. 2016 Jun;39(6):861-77.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 15, Daniel Weiss commented:
The benefit of weight loss on glycemic control for those with Type 2 Diabetes has been recognized for decades. The five-year outcomes of STAMPEDE are not surprising. However, there was a major flaw in the design of this trial: despite its title, the control group was not provided "intensive medical therapy".
The primary outcome was to compare "intensive medical therapy" alone to bariatric surgery plus medical therapy in achieving a glycated hemoglobin of 6% or less. The medical therapy group was seen every 3 months and had minimal increase in medications (a mean of 2.8 medications at baseline and 3 at one year). Moreover, at the one-year and five-year time points, substantially fewer patients were on insulin as compared to baseline. At one year, 41 percent were on a glucagon-like peptide-1 receptor agonist.
Minimally intensified medical therapy obviously would bias results toward surgery. True intensive medical therapy as in the landmark ACCORD trial (Action to Control Cardiovascular Risk in Diabetes) involved visits every 2-4 weeks with actual medication intensification.
Reference: The Action to Control Cardiovascular Risk in Diabetes Study Group. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med 2008;358:2545-59.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 10, Paul Sullins commented:
Reported findings of "no differences" by parent type in this study are an artifact of a well-known sampling error which conflates same-sex couples with a larger group of miscoded different-sex couples. Large disparities between the reported sample and same-sex couple population data reported by Statistics Netherlands strongly confirm this conclusion. The remainder of this comment presents detailed analysis supporting these claims. A longer critique, with standard citations and a table, is available at http://ssrn.com/author=2097328 .
The authors report that same-sex couples were identified using “information about the gender of the participating parent and the gender of the participant’s partner” (p. 5). However, validation studies of the use of this procedure on other large representative datasets, including the 2000 U.S. Census, the U.S. National Health Interview Survey (NHIS), and the National Longitudinal Study of Adolescent to Adult Health (“Add Health”), have found that most "same-sex couples" identified in this way are actually misclassified different-sex couples.
The problem stems from the fact that, like all survey items, the indication of one's own sex or the sex of one's partner is subject to a certain amount of random error. Respondents may inadvertently mark the wrong box or press the wrong key on the keyboard, thus indicating by mistake that their partner is the same sex as themselves. Black et al., who examined this problem in the 2000 U.S. Census, explain that "even a minor amount of measurement error, when applied to a large group, can create a major problem for drawing inferences about a small group in the population. Consider, for example, a population in which 1 out of 100 people are HIV-positive. If epidemiologists rely on a test that has a 0.01 error rate (for both false positives and false negatives), approximately half of the group that is identified as HIV-positive will in fact be misclassified" [The measurement of same-sex unmarried partner couples in the 2000 US Census]. Since same-sex couples comprise less than one percent of all couples in the population of Dutch parent couples studied by Bos et al., even a small random error in sex designation can result in a large inaccuracy in specifying the members of this tiny subpopulation.
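The base-rate arithmetic behind the quoted HIV example can be checked in a few lines. This is a sketch of the hypothetical example only, using the 1% prevalence and 0.01 error rate from the quotation; no census data are involved:

```python
# Hypothetical example from Black et al.: 1% prevalence, 1% test error rate.
prevalence = 0.01   # fraction of the population that is truly positive
error_rate = 0.01   # false-positive and false-negative rate of the test

# Expected true and false positives per person screened:
true_pos = prevalence * (1 - error_rate)    # correctly identified positives
false_pos = (1 - prevalence) * error_rate   # negatives misread as positive

misclassified_share = false_pos / (true_pos + false_pos)
print(f"{misclassified_share:.0%} of identified positives are misclassified")
# Prints 50%: with equal prevalence and error rate, exactly half of the
# "positives" are false, matching "approximately half" in the quotation.
```

The same mechanism applies to any small subpopulation identified from two error-prone survey items, which is the core of the sampling critique above.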
A follow-up consistency check can effectively correct the problem; without one, however, the contamination can be quite severe. When the NHIS inadvertently skipped such a consistency check for 3.5 years, the CDC estimated that from 66% to 84% of initially identified same-sex married couples were in fact misclassified different-sex married couples [Division of Health Interview Statistics, National Center for Health Statistics. 2015. Changes to Data Editing Procedures and the Impact on Identifying Same-Sex Married Couples: 2004-2007 National Health Interview Survey]. Likewise, Black et al. reported that in the affected portion of the 2000 Census "only 26.6 percent of same-sex female couples and 22.2 percent of same-sex male couples are correctly coded" [Black et al., p. 10]. The present author found, in an Add Health study that ignored a secondary sex verification, that 61% of the cases identified as "same-sex parents" actually consisted of different-sex parent partners [The Unexpected Harm of Same-sex Marriage: A Critical Appraisal, Replication and Re-analysis of Wainright and Patterson's Studies of Adolescents with Same-sex Parents. British Journal of Education, Society & Behavioural Science, 11(2)].
The 2011 Statistics Netherlands data used by Bos et al. are based on computer-assisted personal interviews (CAPI), in which the respondent uses a computer keyboard to indicate his or her responses to interview questions that are presented by phone, website or in person. Sex of respondent and partner is indicated by the respondent entering "1" or "2" on the keyboard, a procedure in which a small rate of error (hitting the wrong key) would be quite normal. The Statistics Netherlands interview lacks any additional verification of sex designation, making sample contamination very probable. [Centraal Bureau voor de Statistiek Divisie Sociale en Ruimtelijke Statistieken Sector Dataverzameling. (2010). Jeugd en Opgroeien (SCP) 2010 Vraagteksten en schema's CAPI/CATI. The Hague].
Several key features of the reported control sample strongly confirm that sample contamination has occurred. First, in the Netherlands in 2011, the only way for a same-sex co-parent to have parent rights was to register an adoption, so we would expect one of the partners, for most same-sex couples, to be reported as an adoptive parent [Latten, J., & Mulder, C. H. (2012). Partner relationships at the dawn of the 21st century: The case of the Netherlands. In European Population Conference pp. 1–19]. But in Bos et al.'s sample, none of the same-sex parents are adoptive parents, and both parents indicate that the child is his/her "own child" (eigen kind). This is highly unlikely for same-sex couples, but what we would expect to see if a large proportion of the "same-sex" couples were really erroneously-coded opposite-sex couples. Second, the ratio of male to female same-sex couples in the Bos et al. sample is implausibly high. In every national and social setting studied to date, far fewer male same-sex couples raise children than do female ones. Statistics Netherlands reports that in 2011 the disparity in the Netherlands was about seven to one: Of the (approximately) 30,000 male and 25,000 female same-sex couples counted in that year “[o]nly 3% (nearly 800) of the men's pairs had one or more children, compared to 20% (almost 5000) of the female couples.” [de Graaf, A. (2011). Gezinnen in cijfers, in Gezinsrapport 2011: Een portret van het gezinsleven in Nederland. The Hague: The Netherlands Institute for Social Research.] Yet Bos et al. report, implausibly, that they found about equal numbers of both lesbian and gay male couples with children, actually more male couples (68) than female (63) with children over age 5. They also report that 52% of Dutch same-sex parenting couples in 2011 were male, but Statistics Netherlands reports only 14%. 
The Bos sample is in error exactly to the degree that we would expect if these were (mostly) different-sex couples that were inaccurately classified as being same-sex due to random errors in partner sex designation.
Third, according to figures provided by Eurostat and Statistics Netherlands [Eurostat. (2015). People in the EU: who are we and how do we live? - 2015 Edition. Luxembourg: Publications Office of the European Union.] [Nordholt, E. S. (2014). Dutch Census 2011: Analysis and Methodology. The Hague: Statistics Netherlands.] (www.cbs.nl/informat), same-sex parents comprised an estimated 0.28 percent of all Dutch parenting couples in 2011, but in the Bos sample the prevalence is more than three times this amount, at 0.81 percent. From this disparity, it can be estimated roughly that about 65% of the Bos control sample consisted of misclassified different-sex parents. This rate of sample contamination is very similar to that estimated for the three datasets discussed above (61% for Add Health; 66% or higher for NHIS; and about 75% for the 2000 U.S. Census).
The journal Family Process has advised that it is not interested in addressing errors of this type in its published studies. I therefore invite the authors to provide further population evidence in this forum, if possible, showing why their findings should be considered credible and not spurious.
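The rough 65% contamination estimate follows from the two prevalence figures alone. A minimal sketch, assuming (as the argument above does) that all of the excess over the population rate consists of miscoded different-sex couples:

```python
true_prevalence = 0.0028      # same-sex parenting couples, Statistics Netherlands
observed_prevalence = 0.0081  # prevalence reported in the Bos et al. sample

# If the excess over the population rate is entirely misclassification,
# the contaminated fraction of the identified "same-sex" group is:
contamination = (observed_prevalence - true_prevalence) / observed_prevalence
print(f"estimated contamination: {contamination:.0%}")  # prints 65%
```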
Paul Sullins, Ph.D. Catholic University of America sullins@cua.edu
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
-
-
europepmc.org
-
On 2017 Jul 26, Martine Crasnier-Mednansky commented:
I do appreciate your answer to my comment, to which I gladly reply. First, there is prior work by Ghosh S, 2011 indicating that colonization was attenuated in mutant strains incapable of utilizing GlcNAc, including a nagE mutant strain. Second, Mondal M, 2014 analyzed the products of the ChiA2 reaction and found GlcNAc was the most abundant product; in fact, the amount of (GlcNAc)2 was very low compared to GlcNAc and (GlcNAc)3. Therefore, it is fully legitimate to conclude that the PTS substrate GlcNAc is utilized in the host by V. cholerae for growth and survival.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 26, Ankur Dalia commented:
Re: Martine Crasnier-Mednansky
I appreciate your evaluation of the manuscript; however, I have to disagree with your comment. The study by Mondal et al. indicates that ChiA2 can liberate GlcNAc from mucin in vitro and that it is critical for bacterial growth in vivo; however, they did not test the role of GlcNAc uptake and/or catabolism in that study. In our manuscript, by contrast, we demonstrate that loss of all PTS transporters (including the GlcNAc transporter) does not result in attenuation in the same infant mouse model, which is a more formal test of the role of GlcNAc transport during infection. It is formally possible that other carbohydrate moieties required for growth of V. cholerae in vivo are liberated via the action of ChiA2; however, our results would indicate that these are not translocated by the PTS. Alternatively, the reduced virulence of the ChiA2 mutant observed in the Mondal et al. study may indicate that ChiA2 has other effects in vivo (e.g., on immune evasion, resistance to antimicrobial peptides, etc.).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 24, Martine Crasnier-Mednansky commented:
The authors' proposal that 'the PTS has a limited role during infection' and their concluding remark that 'PTS carbohydrates are not available and/or not utilized in the host' are both questionable. Mondal M, 2014 established that, when Vibrio cholerae colonizes the intestinal mucus, the PTS substrate GlcNAc (released upon mucin hydrolysis) is utilized for growth and survival in the host intestine.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 21, Simon Young commented:
This paper reports a concentration of tryptamine in human cerebrospinal fluid (CSF) of 60 nmol/L. A concentration that high seems unlikely, as the concentration in human CSF of the related compound 5-hydroxytryptamine (serotonin) is very much lower. Although levels of serotonin in human CSF reported in the literature vary over several orders of magnitude, most of the results reported are probably false due to lack of rigorous methodology and analytical inaccuracy Young SN, 2010. In a study with rigorous methodology, measurements were performed in two different laboratories using different HPLC columns and eluting buffers Anderson GM, 2002. One lab used an electrochemical detector (detection limit 7-8 pg/ml for serotonin) and the other a fluorometric detector (detection limit 7-15 pg/ml). In both labs, N-methylserotonin was used as an internal standard and a sample was injected directly into the HPLC after removal of proteins. Neither system could detect serotonin in any CSF sample. The conclusion was that the real value was less than 10 pg/ml (0.057 nmol/L), about three orders of magnitude less than the level reported for tryptamine. Anderson et al. (Anderson GM, 2002) suggest that the higher values for serotonin reported in the literature can be attributed to a failure to carry out the rigorous validation steps needed to ensure that a peak in HPLC is in fact the analyte of interest and not another compound with a similar retention time and fluorescent or electrochemical properties.
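The pg/ml-to-nmol/L conversion behind the 0.057 nmol/L figure is straightforward. A quick sketch; the only value added here is the molar mass of serotonin, about 176.2 g/mol:

```python
# Convert a serotonin detection limit of 10 pg/ml into nmol/L.
MOLAR_MASS = 176.2  # g/mol for serotonin (5-hydroxytryptamine)

pg_per_ml = 10.0
g_per_l = pg_per_ml * 1e-12 / 1e-3       # 10 pg/ml = 1e-8 g/L
nmol_per_l = g_per_l / MOLAR_MASS * 1e9  # mol/L converted to nmol/L

print(f"{nmol_per_l:.3f} nmol/L")  # prints 0.057 nmol/L, as stated above
```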
The concentration of tryptamine in rat brain is very much lower than the concentration of serotonin Juorio AV, 1985, and levels of the tryptamine metabolite, indoleacetic acid, in human CSF are lower than the levels of the serotonin metabolite, 5-hydroxyindoleacetic acid Young SN, 1980. Thus, the finding that the concentration of tryptamine in human CSF is about a thousand times greater than the concentration of serotonin does not seem plausible. There are three possible explanations for this finding. First, there may be some unknown biochemical or physiological factor that explains the finding. Second, the result may be due to the use of CSF obtained postmortem instead of from a live human. Levels of some neuroactive compounds change rapidly after death. For example, levels of acetylcholine decrease rapidly after death due to the continued action of acetylcholinesterase, the enzyme that breaks down acetylcholine Schmidt DE, 1972. Serotonin can be measured in postmortem samples because the rate-limiting enzyme in the conversion of tryptophan to serotonin, tryptophan hydroxylase, and the main enzyme metabolizing serotonin, monoamine oxidase, both require oxygen. The brain becomes anoxic quickly after death, thereby preventing synthesis or catabolism of serotonin. Tryptamine is synthesized by the action of aromatic amino acid decarboxylase, which does not require oxygen, but is metabolized by monoamine oxidase, which does require oxygen. Autopsies usually occur many hours after death, and therefore the high levels of tryptamine reported in this study may reflect continued synthesis, and the absence of catabolism, of tryptamine after death. Third, there may be problems with the HPLC and fluorometric detection of tryptamine in this paper, in the same way that there have been many papers reporting inaccurate measurements of serotonin in human CSF, as outlined above.
The method reported in this paper would have greater credibility if the same results were obtained with two different methods, as for serotonin Anderson GM, 2002.
In conclusion, more work needs to be done to establish a reliable method for measuring tryptamine in CSF obtained from living humans. Levels in human CSF obtained postmortem may have no physiological relevance.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Nov 07, Victoria MacBean commented:
Plain English Summary:
Neural respiratory drive (NRD) is commonly used as a measure of respiratory function, as it reflects the overall muscular effort required to breathe in the presence of the changes that occur in lung disease. Both bronchoconstriction (airway narrowing) and hyperinflation (over-inflation of the chest, caused by air trapped in deep parts of the lung) occur in lung disease and are known to have detrimental effects on breathing muscle activity. Electromyography (EMG) measures the electrical activity supplied to a muscle and can be used to measure the NRD leaving the brain towards respiratory muscles (in this study the parasternal intercostals, small muscles at the front of the chest). This study aimed to investigate the individual contributions of bronchoconstriction and hyperinflation to the EMG, and the overall effectiveness of the EMG as an accurate marker of lung function.
A group of 32 young adults took part in this study, all of whom had lung function within normal limits at rest, prior to testing. The subjects inhaled increasing concentrations of the chemical methacholine to stimulate contraction of the airway muscles, imitating a mild asthma attack. Subjects' EMG, spirometry (to measure airway narrowing) and inspiratory capacity (IC, to test for hyperinflation) were measured. Detailed statistical testing was used to assess the relationships between all the measures.
The results show that obstruction of the airway was closely related to the increase in EMG, whereas inspiratory capacity was not. The data suggest that over-inflation of the chest had less of an effect on the EMG than airway narrowing (bronchoconstriction). This helps advance the understanding of how EMG can be used to assess lung disease.
This summary was produced by Talia Benjamin, Year 13 student from JFS School, Harrow, London as part of the authors' departmental educational outreach programme.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2018 Jan 11, Prashant Sharma, MD, DM commented:
A full-text, read-only version of this article is available at http://rdcu.be/nxtG.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jun 19, Martine Crasnier-Mednansky commented:
Novick A, 1957 reaffirmed that a fully induced culture could be maintained fully induced at low inducer concentrations. In this paper, the authors reported that cells preinduced with melibiose do not maintain induction of the melibiose (mel) operon in the presence of 1 mM TMG. However, the experimental conditions and data interpretation are both questionable in view of the following.
The authors used a lacY strain whose percentage of induction by 1 mM TMG is less than 0.2%, 100% being for melibiose as the inducer (calculated from data in Tables 1 and 3). They transferred the cells from minimal medium with melibiose to minimal medium with glycerol supplemented with 1 mM TMG. The cells therefore have to 'enzymatically adapt' to glycerol while facing pyrimidine starvation (Jensen KF, 1993, Soupene E, 2003). Under these conditions, cells are unlikely to maintain induction of the mel operon (even if they could, see below) because uninduced cells have a significant growth advantage over induced cells. Incidentally, Novick A, 1957 noted, "the fact that a maximally induced culture can be maintained maximally induced for many generations [by using a maintenance concentration of inducer] shows that the chance of a bacterium becoming uninduced under these conditions is very small. Were any uninduced organisms to appear, they would be selected for by their more rapid growth". Furthermore, the percentage of induction by TMG for the mel operon in a wild-type strain (lacY<sup>+</sup>) is 16% (calculated as above). This induction is due mostly to TMG transport by LacY, considering the sharp decrease in the percentage of induction with a lacY strain (to <0.2%). Consequently, in the presence of TMG, any uninduced lacY cells remain uninduced. Thus, it appears a population of uninduced cells is likely to 'take over' rapidly under the present experimental conditions.
In the presence of LacY, the internal TMG concentration is about 100 times the medium one, and under these conditions, induction of the mel operon by TMG is only 16%. Therefore, the cells could not possibly maintain their full level of induction simply because TMG is a relatively poor inducer of the mel operon. It seems the rationale behind this experiment does not make much sense.
Note: The maintenance concentration of inducer is the concentration of inducer added to the medium of fully induced cells and allowing maintenance of the enzyme level for at least 25 generations (Figure 3 in Novick A, 1957). It is not the intracellular level of inducer, as used in this paper.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
www.ncbi.nlm.nih.gov
-
On 2017 May 01, Kevin Hall commented:
The corrected manuscript is now posted online, including the Supplemental Materials describing the methodology for the systematic review and meta-analysis.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 22, Kevin Hall commented:
Dr. Harnke is correct that the early online publication did not provide the peer-reviewed Supplemental Materials describing the methodology for the systematic review and meta-analysis. Also, the online publication erroneously provided the penultimate version of the figures. The Supplemental Materials and the updated figures are available upon request: kevinh@niddk.nih.gov.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 20, Ben Harnke commented:
The E-pub ahead of print version of this article does not appear to provide details about the search strategy, databases searched, limits, etc. used to identify the included studies. Without this information it is impossible to replicate the study or to verify that all relevant citations were located. Hopefully these details will be included in the final published version and/or online supplementary material.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Nov 08, Cicely Saunders Institute Journal Club commented:
We selected and discussed this paper at our monthly journal club on 1st November 2017.
The paper generated a lot of discussion and we felt that this was an important concept, especially for clinicians, to think about. The topic of QALYs was unfamiliar to some of us and we found that the authors explained it very clearly in the paper. We were intrigued by the use of an integrative review method and discussed this at length. It may have been helpful to read more explanation of this method and know how it differs from other types of review methods. We also wondered about some of the inclusion/exclusion criteria such as the exclusion of reviews and the decision making process for the theoretical papers included. We enjoyed discussing the themes which emerged from this paper and the wider debate around the most appropriate measures for palliative care populations, particularly in light of the recent paper by Dzingina et al. 2017 (https://www.ncbi.nlm.nih.gov/pubmed/28434392). We feel this paper will be a useful educational resource.
Commentary by Dr. Nilay Hepgul & Dr. Deokhee Yi on behalf of researchers at Cicely Saunders Institute of Palliative Care, Policy & Rehabilitation, King’s College London.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 04, Ralf Koebnik commented:
This publication states that "HprX is the AraC/XylS regulator of the Xanthomonas citri T3SS and is activated by HrpG, a sensor kinase that phosphorylates HrpX [43]." This is a wrong interpretation of Ref. 43. First, the regulator is HrpX, not HprX; second, HrpX is not activated via phosphorylation by HrpG. Instead, HrpG (probably in a phosphorylated state, though this is still speculation) activates transcription of the hrpX gene. HrpX in turn binds to the promoter regions of hrp genes that encode the structural components of the T3SS.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
www.sciencedirect.com
-
On 2017 Mar 12, John Tucker commented:
The article summarizes the results of Ohio’s 2011-2015 program to reduce prescription painkiller overdose deaths by stating that prescribing was reduced by 10%, leading to a reduction in the percentage of drug overdose deaths attributable to prescription painkillers from 45% to 22%.
While it would be natural for a reader to assume that this percentage reduction arose from a decline in prescription drug overdoses, this is not the case. Instead, overdoses due to prescription painkillers remained relatively constant while heroin and illicit fentanyl deaths skyrocketed.
CDC WONDER gives the following death counts for Ohio in 2011 and 2015:
Heroin: 325 in 2011 and 1103 in 2015
Other Opioids: 197 in 2011 and 340 in 2015
Methadone: 70 in 2011 and 50 in 2015
Other synthetic narcotics (including fentanyl): 57 in 2011 and 891 in 2015
Unspecified narcotics: 62 in 2011 and 80 in 2015
Total opioid overdose deaths: 711 in 2011 and 2464 in 2015.
Thus the opioid overdose death rate in Ohio increased by 246% during the analysis period, compared to 40% for the United States as a whole.
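The totals and the percentage increase follow directly from the category counts listed above (a quick arithmetic check; the 40% national figure is taken from the comment as stated):

```python
deaths_2011 = {"heroin": 325, "other opioids": 197, "methadone": 70,
               "other synthetic narcotics": 57, "unspecified narcotics": 62}
deaths_2015 = {"heroin": 1103, "other opioids": 340, "methadone": 50,
               "other synthetic narcotics": 891, "unspecified narcotics": 80}

total_2011 = sum(deaths_2011.values())  # 711
total_2015 = sum(deaths_2015.values())  # 2464

increase = (total_2015 - total_2011) / total_2011
print(f"{total_2011} -> {total_2015}: +{int(increase * 100)}%")
# Prints "711 -> 2464: +246%", matching the 246% increase cited above.
```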
The program cannot by any stretch of the imagination be considered a success, and simply serves as yet another example of the futility of supply-side focused approaches to drug abuse.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Mar 04, Andrea Messori commented:
PubMed database: A selection of pharmacoeconomic studies based on the net monetary benefit
Andrea Messori, HTA Unit, Regional Health Service, Firenze, Italy
The objective of the study by Capri and co-workers was to compare the cost-effectiveness of pazopanib versus sunitinib as first-line therapy in patients with advanced or metastatic renal cell carcinoma; the perspective was that of the Italian National Health Service.
In patients with cancer, most economic studies with this design are carried out by determining the incremental cost-effectiveness ratio (ICER). In contrast, one point of interest of the study by Capri and co-workers is that the net monetary benefit (NMB) was the methodological tool employed to carry out the pharmacoeconomic analysis.
Although the NMB is not the standard tool for performing these analyses, there are some advantages in using this parameter as opposed to the ICER. For example, while the relationship between the cost of the intervention and the ICER is nonlinear, the relationship between the cost of the intervention and the NMB is linear. Hence, predicting the consequences of an increased cost of treatment (or a decreased cost of treatment) is easier, or more intuitive, if the NMB is used rather than the ICER.
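To make the comparison concrete: the incremental NMB is commonly written as iNMB = lambda * dE - dC, where lambda is the willingness-to-pay threshold, dE the incremental effectiveness (e.g., in QALYs) and dC the incremental cost, so it varies linearly with the cost of the intervention; the equivalent ICER decision rule compares dC/dE against lambda. A minimal sketch with purely illustrative numbers (not figures from the Capri study):

```python
def incremental_nmb(delta_qalys, delta_cost, wtp_threshold):
    """Incremental net monetary benefit; positive means cost-effective."""
    return wtp_threshold * delta_qalys - delta_cost

def icer(delta_qalys, delta_cost):
    """Incremental cost-effectiveness ratio (cost per QALY gained)."""
    return delta_cost / delta_qalys

# Illustrative inputs: 0.5 QALYs gained, 10,000 extra cost, 30,000 threshold.
d_qalys, d_cost, wtp = 0.5, 10_000.0, 30_000.0
print(incremental_nmb(d_qalys, d_cost, wtp))  # 5000.0 -> cost-effective
print(icer(d_qalys, d_cost))                  # 20000.0, below the threshold
# Linearity: each extra unit of cost lowers the iNMB by exactly one unit,
# which is why cost changes are easy to propagate through an NMB analysis.
```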
Using a standard syntax of PubMed search (“net monetary benefit”[text]; search date = 4 March 2017), we identified a total of 148 citations that met this criterion. Among these citations, we selected 20 studies published between 2010 and 2016 in which the NMB played a key role in generating the pharmacoeconomic results (see http://www.osservatorioinnovazione.net/papers/nmb20examples.html).
Curiously enough, these 148 citations lacked some studies in which the NMB had been successfully employed (e.g. Ganesalingam J, Pizzo E, Morris S, Sunderland T, Ames D, Lobotesis K. Cost-Utility Analysis of Mechanical Thrombectomy Using Stent Retrievers in Acute Ischemic Stroke. Stroke. 2015 Sep;46(9):2591-8, https://www.ncbi.nlm.nih.gov/pubmed/26251241/); this indicates that the keyword "net monetary benefit" in PubMed misses a number of pertinent articles.
In conclusion, despite the low number of retrieved articles, our preliminary overview of the literature shows that the NMB is still being used in pharmacoeconomic studies and deserves to be further investigated.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 05, Julie Glanville commented:
As we note in the Methods section of our paper, this paper is informed by an update of a systematic review:
"For the update to the systematic review, we searched 18 databases, websites, trial registries and three conference websites between February and March 2016. The search strategy for MEDLINE is shown in Supplemental Fig. 1 (Fig. S1) and the other searches are available on request. This search strategy was originally developed in 2013, and then updated for this analysis taking into account relevant recent changes in indexing [19, 20]. The searches were not limited by date, language, or document type. The information sources searched are shown in Supplemental Table 1 (Table S1). Search results were downloaded into Endnote and de-duplicated against each other and against the results of the original review."
Figure 1 in the supplementary material shows the Medline update search. Although it is a search carried out to identify new records since the original searches, we did not limit to recent years but reran the search for all years. This resulted in 6845 records. However, many of these records had been processed in the initial SR, so "Search results were downloaded into Endnote and de-duplicated against each other and against the results of the original review". This resulted in a much lower number of records which needed to be assessed for relevance from Medline in the update as can be seen in Table S2. In Table S2 the second column shows the number of records downloaded before deduplication against the original search results and the third column shows the number of records assessed for relevance after deduplication against the original search results. Hence the number difference. We acknowledge that there has been a transposition error for the Medline results in Table S2 - the search resulted in 6845 records and we have entered 2845 by mistake. We will correct the transposition error.
In fact, despite what we say in the Methods section (that the other searches are available on request), the full search strategies for the original search and the update searches are in the supplementary material from page 28 onwards.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 04, Wichor Bramer commented:
The authors of this article seem to have made a mistake in their reporting of results per database. The supplements show that EMBASE alone retrieved over 7000 results and SCI more than 6000. Yet the total number of articles after deduplication is 5500.
Only the Medline search is provided in detail. The number of results shown in the search strategy seems to be 6800, whereas in the overview table of all databases the number is 2800.
It is recommended to add the search strategies for all databases and to keep a clear record of the results per database before and after deduplication.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jul 25, Donald Forsdyke commented:
LECTIN PATHWAY STUDIES WITH PLANT MANNOSE-BINDING LECTINS
Papers on the lectin pathway (LP) of complement activation in animal sera generally refer to animal mannose-binding lectins (MBLs), with little reference to work with plant MBLs. For example, citing May and Frank (1973), this fine paper states: "Reports of unconventional complement activation in the absence of C4 and/or C2 predate the discovery of LP." Actually, a case can be made that the discovery of the LP predates May-Frank.
The MASP-binding motif on animal MBL, which is necessary for complement activation, includes the amino acid sequence GKXG (at positions 54-57), where X is often valine. The plant lectin concanavalin-A (Con-A) has this motif at approximately the same position in its sequence (the 237-amino-acid subunit of Con-A has the sequence GKVG at positions 45-48). The probability of this being a chance event is very low. Indeed, prior to the discovery of MASP involvement, Milthorp & Forsdyke (1970) reported the dosage-dependent activation of complement by Con-A.
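The "very low" chance probability can be sanity-checked with a back-of-envelope calculation, assuming (unrealistically) uniform and independent amino-acid frequencies:

```python
# Rough expected number of chance GKXG occurrences in a 237-residue
# protein, assuming uniform amino-acid frequencies (a simplification).

P_MATCH = (1 / 20) ** 3          # G, K, G fixed; X is a wildcard
subunit_length = 237             # Con-A subunit length quoted above
windows = subunit_length - 3     # number of 4-residue windows
expected = windows * P_MATCH
print(round(expected, 3))        # ~0.029 expected chance occurrences
```

Even this generous estimate gives well under one expected chance occurrence per subunit, before additionally requiring the match to fall at approximately the same position as in animal MBL.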
As far as I am aware, it has not been formally shown that MASP is involved in the activation of the complement pathway by this plant MBL. Our studies in the 1970s demonstrated that Con-A activates complement through a cluster-based mechanism, which is consistent with molecular studies of animal MBL showing “juxtaposition- and concentration dependent activation” (Degn et al. 2014). References to our several papers on the topic may be found in a review of innate immunity (Forsdyke 2016).
Degn SE et al. (2014) Complement activation by ligand-driven juxtaposition of discrete pattern recognition complexes. Proc Natl Acad Sci USA 111:13445-13450.
Forsdyke DR (2016) Almroth Wright, opsonins, innate immunity and the lectin pathway of complement activation: a historical perspective. Microb Infect 18:450-459.
May JE, Frank MM (1973) Hemolysis of sheep erythrocytes in guinea pig serum deficient in the fourth component of complement. I. Antibody and serum requirements. J Immunol 111:1671-1677.
Milthorp PM, Forsdyke DR (1970) Inhibition of lymphocyte activation at high ratios of concanavalin A to serum depends on complement. Nature 227:1351-1352.
Yaseen S et al. (2017) Lectin pathway effector enzyme mannan-binding lectin-associated serine protease-2 can activate native complement C3 in absence of C4 and/or C2. FASEB J 31:2210-2219.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 18, Clive Bates commented:
So we learn from this study that pharmacists demand:
training in the form of information packs (88%), online tutorials (67%), continuous professional development (CPD) workshops (43%) to cover safety, counselling, dosage instructions, adverse effects and role in the smoking cessation care pathway in the future.
But how many of them have made use of the existing resources already provided by the UK National Centre for Smoking Cessation and Training, in particular its excellent E-cigarettes: a briefing for stop smoking services, 2016? This briefing is readable and accessible and easily found by anyone with a professional interest.
If they wanted to go into the issue more deeply, there is the Royal College of Physicians report, Nicotine without smoke: tobacco harm reduction, 2016 which provides a scientific assessment for UK health professionals, and concludes:
that e-cigarettes are likely to be beneficial to UK public health. Smokers can therefore be reassured and encouraged to use them, and the public can be reassured that e-cigarettes are much safer than smoking.
As they are selling these products, isn't there a legitimate professional expectation that community pharmacists should make a modest effort to find out more about them? The survey reveals a disturbing level of ignorance and unscientific assertion, and the demand for more training is the flip side of an admission of ignorance. A good survey question would have asked whether respondents had made any effort at all to resolve their uncertainties, for example by consulting the sources above.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
www-sciencedirect-com.ezaccess.libraries.psu.edu
-
On 2017 Feb 12, Romain Brette commented:
From the perspective of a computational neuroscientist, I believe a very important point is made here. Models are judged on their ability to account for experimental data, so the critical question is: what counts as relevant data? Data currently used to constrain models in systems neuroscience are most often neural responses to stereotypical stimuli, and results from behavioral experiments with well-controlled but non-ecological tasks, for example conditioned responses to variations in one dimension of a stimulus.
In sound localization, for example, one of the four examples in this essay, a relevant problem for a predator or a prey is to locate the source of a sound, i.e. absolute localization. But models such as the recent model mentioned in the essay (which is influential but not consensual) have been proposed on the basis of their performance in discriminating between identical sounds played at slightly different angles, a common experimental paradigm. Focusing on this paradigm leads to models that maximize sensitivity but perform very poorly on the more ecologically relevant task of absolute localization, which casts doubt on the models (Brette R, 2010; Goodman DF, 2013). Unfortunately, the available set of relevant behavioral data is incomplete (e.g., what is the precision of sound localization with physical sound sources in ecological environments, and are orienting responses invariant to non-spatial aspects of sounds?). Thus I sympathize with the statement in this essay that more appropriate behavioral work should be done.
In other words, a good model should not only explain laboratory data: it should also work (i.e. explain how the animal manages to do what it does). It is good to be reminded of this crucial epistemological point.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 03, Dmitri Rusakov commented:
We thank the Reviewer for following up the story. Below is our point-by-point reply to his latest set of comments. This reply, and the manuscript version revised accordingly, appeared to satisfy the journal Editorial Board and the other reviewer(s). We appreciate that this reviewer might therefore not have received the full set of explanations shown below. We nevertheless urge him to look at the published paper and its Supplementary data, as these do contain material answers to his key questions.
Reviewer #1
Q: The authors have taken a rather minimalist approach to my suggestion that the fluorescence anisotropy measurements be analysed and presented in greater detail in the MS.
A: There must have been a misunderstanding. In response to the original Reviewer's comments, we have provided a full point-by-point response with extended explanations, added quantitative evidence for the two-exponential approximation, and provided a full summary Table for the FLIM characteristics over six different areas of interest. This is precisely what was requested in the original comments.
Q: In addition, in considering this revision, additional questions have arisen. I therefore give a more detailed and prescriptive list of the data that needs to be shown. The following conditions are of interest: 1) free solution 2) with extracellular dye a) measurement inside cell b) measurement over neuropil c) measurement over synapse d) measurement inside pipette 3) with intracellular dye a) measurement inside cell 4) no dye a) measurement of autofluorescence over neuropil b) measurement inside soma For each of the above conditions, please show: 1) full fluorescence time course -1 to 12 ns 2) full anisotropy time course -1 to 12 ns 3) specimen traces 4) global averages 5) fit of global average 6) timing of the light pulse should always be indicated (I assume it occurs at 1ns, but this must be made explicit)
A: We note that all the requested information is contained, in the shape of single-parameter outcomes, in the original figures and Tables. We also note that in healthy brain slices autofluorescence (two-photon excitation) is undetectable. With all due respect, we did not fully understand the grounds for requesting excessive primary material: the process of analysing anisotropy FLIM data involves automated, pixel-by-pixel data collection and curve fittings representing tens of thousands of single-pixel plots at all stages of the data processing. Presenting such data does not appear technically feasible. However, we have added some extensive primary-data examples, as requested, to illustrate:
(a) Fluorescence decay in parallel and perpendicular detectors at different viscosity values (Fig. S1a);
(b) Instrument response for the two-detector system (Fig. S1b), indicating that it has much faster dynamics than the anisotropy decay;
(c) Anisotropy decay data in slice tissue after dye washout - indicating a specific reduction of the fast (free-diffusion) rather than slow (membrane-bound) molecular component (Fig. S1c);
(d) AF350 anisotropy decay examples recorded in a free medium, intracellular compartment, extracellular space in the synapse and neuropil extracellular space (Fig. S1d).
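For readers unfamiliar with the two-detector arrangement mentioned in (b): time-resolved anisotropy is computed per time bin from the parallel and perpendicular channel counts using the standard formula r = (I∥ − G·I⊥)/(I∥ + 2G·I⊥). A minimal sketch (the G-factor and counts below are invented, not taken from the paper):

```python
# Standard time-resolved fluorescence anisotropy, computed per time bin
# from parallel and perpendicular detector counts. The G-factor corrects
# for unequal detector sensitivities. All numbers here are invented.

def anisotropy(i_par, i_perp, g=1.0):
    return [(p - g * s) / (p + 2 * g * s) for p, s in zip(i_par, i_perp)]

# Toy counts for three time bins: anisotropy decays as molecules rotate.
i_par = [1000.0, 700.0, 500.0]
i_perp = [500.0, 450.0, 420.0]
print([round(r, 3) for r in anisotropy(i_par, i_perp)])  # [0.25, 0.156, 0.06]
```

The fitted decay of this r(t) is what the fast (free-diffusion) and slow (membrane-bound) components describe.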
Q: It remains a good idea to try the same measurements with a second dye.
A: AF350 is the smallest bright fluorophore which shows no lifetime dependency on the physiological cellular environment. It is therefore the best candidate to explore the movement of small ions or neurotransmitter molecules such as glutamate in similar environments. We have tried other fluorophores such as AF594, which is three times heavier, more prone to photobleaching and more likely to bind to cell membranes in slices, all of which makes interpreting the data more difficult. We believe that the experimental evidence obtained in the present study has led to a self-contained set of conclusions that are of interest to the journal readership. Repeating the entire study with a different dye, a different animal species or a different preparation might be a good idea for future research.
Q: Why is the rise of the anisotropy not instantaneous in Fig. 1B? I couldn't find any mention of temporal filtering.
A: The rising phase is influenced by the instrument response to the femtosecond laser pulse in raw lifetime data, which remained unmodified in the presented plots. Signal deconvolution would produce instantaneous anisotropy (while increasing noise in raw data) which is not our preferred way of presenting the data. We have added this explanation to the text.
Q: Assuming that the light pulse occurs at 1ns in Fig. S1A, why is the anisotropy shown so high? I would have thought that it should have decayed over a time constant (to about 1/e) by the point where the data are shown, yet it is only reduced by about 15%. Was some additional normalisation carried out that I missed? If so it should certainly be removed and the true time course shown.
A: The example in question shows a direct comparison between decay shapes in baseline conditions and after photobleaching: for illustration purposes, the graph displays a fragment of raw decay data, including the instrument response (without deconvolution). The latter at least doubles the apparent decay constant, plus the fast component has a y-offset of 0.2-0.3 due to the slow component. These concomitants make the fast decay appear slower, but this is irrelevant for the purposes of this particular raw-data illustration (in contrast, the Table S1 summary data are obtained with the instrument response de-convolved and removed). The text and figure legend have been expanded to explain this.
Q: In case it is not clear, I am interested in the possibility of the fast component in fact containing two components. The data do not allow me to evaluate this possibility.
A: Whilst we appreciate the personal scientific interests of the Reviewer, we see no scientific reason, in the present context, to try and find 'the possibility' of fast anisotropy decay sub-components: we simply refer to all molecular sub-populations showing distinctly fast anisotropy decay as one free-diffusion pool.
Q: Does Scientific Reports require the authors to provide access to the original data upon publication? What is the authors' position on this?
A: All original data, including tens of thousands of single-pixel FLIM data plots at different analysis stages, etc. are available from the authors on request.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 25, Boris Barbour commented:
Below I show the key parts of my first and second reviews of this paper, which reports a very interesting and powerful optical technique for probing the microscopic properties of fluid compartments in the brain. I felt that greater detail of the analysis should be shown to fully support the conclusion of slowed diffusion in the extracellular space and synaptic cleft. It will be apparent that some questions unfortunately only occurred to me upon reading the first revision. The authors responded to only some of the points raised, and the paper was published without my seeing it again. In particular, no global average anisotropy time courses are shown and the timing of the excitation pulses remains rather mysterious.
First review
This MS reports an extremely interesting approach to providing quantitative information about the diffusion of small molecules in micro-compartments of brain tissue - potentially resolving the intracellular and extracellular spaces, as well as providing information about diffusion within the synaptic cleft.
The basis for the approach is to measure the relaxation of fluorescence polarisation following two-photon excitation of small compartments. If polarised exciting light is used the emitted light is also polarised, as long as the orientation of the fluorophore remains unchanged. However, as the molecule undergoes thermal reorientations, that polarisation is lost. The authors use this technique to measure the diffusion of a small molecule - alexa fluor 350.
A subsequent section of the MS reports some synaptic modelling, applying the tissue/free ratio obtained for the fluorophore to the modelled glutamate. However, the important and by far the most interesting part of the MS is the diffusion measurement. I would be happy for the MS to consist solely of an expanded and more detailed analysis of these measurements, postponing the modelling to another paper.
My main comments relate to the analysis, presentation and interpretation of these diffusion measurements. The authors report that the relaxation time for the polarisation displays two phases - a rapid phase, which is somewhat slowed in brain tissue, and a slow phase attributed to membrane binding. But the authors do not illustrate the analysis of the fast component. As this is critical, a good deal more detail should be shown.
A first issue is whether there are additional bound/retarded states other than "fixed". To examine this the authors should show the fit to the average relaxation; it is important to be able to verify that there is a single exponential fast component rather than some mixture (it may not be possible to tell with any certainty). Some examination of the robustness and precision of the fitting would also be desirable.
The authors should also characterise the variation between different measurements of the same compartments. The underlying question here is how the various decay components might vary as, for instance, the ratio of membrane to extracellular space varies across different measurement points.
I think the authors need to give more thought to the possibility that some of the slowing they observe arises from fluorophore embedded in the membrane without being immobilised. I don't see how this can be easily ruled out - certainly the two-photon resolution does not permit distinguishing the membrane and fluid phases in the neuropil. Additionally, how would the authors rule out adsorption onto some extracellular proteins?
The reason I raise these points is because I have at least a slight difficulty with the interpretation. A slowing of molecular rotation of 70% (intracellular) suggests to me that a large fraction of fluorophores, essentially 100%, must be in direct contact with some larger molecule. This seems quite extreme even in the crowded intracellular space. I have similar reservations about the synaptic cleft and extracellular space. Is at least 50% and 30% of the volume of these spaces really occupied by macromolecules? There may be somewhat reduced diffusion due to a boundary layer at the membrane or near macromolecules. Do estimates of the thickness of such a boundary layer exist (and its effect on diffusion)?
Second review
The following conditions are of interest:
1) free solution
2) with extracellular dye
a) measurement inside cell
b) measurement over neuropil
c) measurement over synapse
d) measurement inside pipette
3) with intracellular dye
a) measurement inside cell
4) no dye
a) measurement of autofluorescence over neuropil
b) measurement inside soma
For each of the above conditions, please show:
1) full fluorescence time course -1 to 12 ns
2) full anisotropy time course -1 to 12 ns
3) specimen traces
4) global averages
5) fit of global average
6) timing of the light pulse should always be indicated (I assume it occurs at 1ns, but this must be made explicit)
It remains a good idea to try the same measurements with a second dye.
Why is the rise of the anisotropy not instantaneous in Fig. 1B? I couldn't find any mention of temporal filtering.
Assuming that the light pulse occurs at 1ns in Fig. [1E], why is the anisotropy shown so high? I would have thought that it should have decayed over a time constant (to about 1/e) by the point where the data are shown, yet it is only reduced by about 15%. Was some additional normalisation carried out that I missed? If so it should certainly be removed and the true time course shown.
In case it is not clear, I am interested in the possibility of the fast component in fact containing two components. The data do not allow me to evaluate this possibility.
(Note to moderators: as the copyright holder of my reviews, I am entitled to post them.)
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 10, Fernanda Leite commented:
https://www.nature.com/nature/journal/v542/n7640/full/nature21363.html#comment-69823
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
bmcpsychiatry.biomedcentral.com
-
On 2017 Jul 24, Jakob Näslund commented:
For a discussion of this study regarding some outstanding issues relating to methodology, as well as the presence of a number of possible inaccuracies, see our commentary in Acta Neuropsychiatrica entitled "Multiple possible inaccuracies cast doubt on a recent report suggesting selective serotonin reuptake inhibitors to be toxic and ineffective", available at https://doi.org/10.1017/neu.2017.23
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 24, Konstantinos Fountoulakis commented:
This paper confirms that the overall SMD is around or above 0.30, adding to the literature against the idea that antidepressants do not work (e.g. Kirsch 2008). The question of the magnitude of the effect has been discussed in the literature again and again. NICE abandoned years ago the 3-point threshold used to define 'clinical relevance', and an SMD of 0.30 is the effect size expected for successful psychiatric treatments and also for treatments elsewhere in medicine. Of course we would like more, but this is the only means we have. All the other options (psychotherapy, alternative therapies etc.) do not meet the stringent criteria of this meta-analysis, as they are essentially not blinded and not adequately placebo-controlled, not to mention the risk of bias. I strongly disagree with the authors' comment on the HDRS. Yes, regulatory authorities do recommend the HDRS, but this does not constitute an essential argument. It is not correct to register an event twice, as both a symptom and an adverse event; it is one or the other (at least in principle). Unfortunately, the HDRS is based on an antiquated model of depression, while the MADRS is tailored to the needs of trials. For a review please see Fountoulakis et al., J Psychopharmacol. 2014 Feb;28(2):106-17. Concerning the adverse events: yes, indeed there is a significant effect of the active drug, but the NNH are high and there was no difference in severe adverse events in comparison with placebo. In my opinion, the real SMD of SSRIs is much higher, but not really impressive. This is masked by the properties of the HDRS but also by the possibility that a large number of patients enrolled in these studies are not suitable for a number of reasons. I would love to see a US vs. rest-of-world comparison.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 08, Christian Gluud commented:
Response to Søren Dinesen Østergaard's critique
Søren Dinesen Østergaard (SDØ) [1] criticizes our systematic review on selective serotonin reuptake inhibitors (SSRIs) for patients with major depressive disorder [2] for using Hamilton's depression rating scale (HDRS)17 instead of HDRS6. SDØ refers to four studies 'documenting' his claims [3-6].
Two of the studies relate to duloxetine and desvenlafaxine, which are dual action drugs and not SSRIs [3, 4]. The third study is a meta-analysis assessing fluoxetine versus placebo [5]. The results show a mean effect size of the SSRI of -0.30 (95% confidence interval (CI) -0.39 to -0.21) when using HDRS17 and an effect size of -0.37 (95% CI -0.46 to -0.28) when using HDRS6. The difference of 0.07 corresponds to 0.7 HDRS17 points assuming a standard deviation of 10 points. The fourth study is a patient-level analysis of 18 industry-sponsored placebo-controlled trials regarding paroxetine, citalopram, sertraline, or fluoxetine [6]. The authors report a mean effect size of the SSRIs of -0.27 when using HDRS17 and an effect size of -0.35 when using HDRS6 [6]. The difference of 0.08 corresponds to 0.8 HDRS17 points assuming a standard deviation of 10 points. Hence, the absolute effect size difference between the two scales seems less than 1 HDRS point. The National Institute for Clinical Excellence (NICE) recommended a difference of 3 points on the HDRS17 for 'a minimal effect' [7-9]. However, the required minimal clinical relevant difference is probably much larger than this figure. One study showed that a mirtazapine-placebo mean difference of up to 3.0 points on the HDRS corresponds to ‘no clinical change’ [10]. Another study showed that a SSRI-placebo mean difference of 3.0 points is undetectable by clinicians, and that a mean difference of 7.0 HDRS17 points is required to correspond to a rating of ‘minimal improvement’ [11].
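The point conversions above are simply the effect-size difference multiplied by the assumed standard deviation of 10 HDRS17 points; a quick check:

```python
# Converting standardised mean difference (SMD) differences into HDRS17
# points, under the assumption (stated above) of a 10-point SD.

SD_HDRS17 = 10.0

def smd_to_points(smd_difference, sd=SD_HDRS17):
    return smd_difference * sd

# Bech et al. 2000: HDRS6 vs HDRS17 effect sizes of -0.37 and -0.30.
print(round(smd_to_points(0.37 - 0.30), 2))  # 0.7 HDRS17 points
# Hieronymus et al. 2016: effect sizes of -0.35 and -0.27.
print(round(smd_to_points(0.35 - 0.27), 2))  # 0.8 HDRS17 points
```

Both differences fall well below the 3-point threshold NICE recommended for a minimal effect (0.30 SMD × 10 = 3.0 points).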
Moreover, none of the meta-analyses [5, 6] take into account risks of systematic errors (‘bias’) [12-14] or risks of random errors [15]. Hence, there are risks that the two meta-analyses may overestimate the beneficial effects of SSRIs.
Other studies have shown that HDRS17 and HDRS6 largely show similar results [16,17]. It cannot be concluded that HDRS6 is a better assessment scale than HDRS17, just considering the psychometric validity of the two scales. If the total score of HDRS17 is affected by some of the adverse effects of SSRIs, then this might in fact better reflect the actual summed clinical effects of SSRIs than HDRS6 ignoring these effects. Until scales are validated against patient-centred clinically relevant outcomes, such scales are merely non-validated surrogate outcomes [18].
National and international medical agencies [19-21] all recommend HDRS17 for assessing depressive symptoms. We need access to all individual patient data from all randomised clinical trials to compare the effects of antidepressants on HDRS6 to HDRS17 [22].
SDØ states “no conflicts of interest“. We are aware that SDØ has received substantial support from ‘Lundbeckfonden’, its main objective being to maintain and expand the activities of the Lundbeck Group, one of the companies producing and selling SSRIs [23]. We think it would have been fair to declare this.
Janus Christian Jakobsen, Kiran Kumar Katakam, Naqash Javaid Sethi, Jane Lindshou, Jesper Krogh, and Christian Gluud.
Conflicts of interest: None known.
Copenhagen Trial Unit, Centre for Clinical Intervention Research, Rigshospitalet, Copenhagen, Denmark
References
1. Ostergaard SD: Do not blame the SSRIs: blame the Hamilton Depression Rating Scale. Acta Neuropsychiatrica 2017:1-3.
2. Jakobsen JC, Katakam KK, Schou A, et al: Selective serotonin reuptake inhibitors versus placebo in patients with major depressive disorder. A systematic review with meta-analysis and Trial Sequential Analysis. BMC Psychiatr 2017, 17(1):58.
3. Bech P, Kajdasz DK, Porsdal V: Dose-response relationship of duloxetine in placebo-controlled clinical trials in patients with major depressive disorder. Psychopharmacology 2006, 188(3):273-280.
4. Bech P, Boyer P, Germain JM, et al: HAM-D17 and HAM-D6 sensitivity to change in relation to desvenlafaxine dose and baseline depression severity in major depressive disorder. Pharmacopsychiatry 2010, 43(7):271-276.
5. Bech P, Cialdella P, Haugh MC, et al: Meta-analysis of randomised controlled trials of fluoxetine v. placebo and tricyclic antidepressants in the short-term treatment of major depression. Br J Psychiatr 2000, 176:421-428.
6. Hieronymus F, Emilsson JF, Nilsson S, Eriksson E: Consistent superiority of selective serotonin reuptake inhibitors over placebo in reducing depressed mood in patients with major depression. Mol Psychiatr 2016, 21(4):523-530.
7. Fournier JC, DeRubeis RJ, Hollon SD, et al: Antidepressant drug effects and depression severity: a patient-level meta-analysis. JAMA 2010, 303(1):47-53.
8. Mathews M, Gommoll C, Nunez R, Khan A: Efficacy and safety of vilazodone 20 and 40 mg in major depressive disorder: a randomized, double-blind, placebo-controlled trial. Int Clin Psychopharmacol 2015, 30.
9. Kirsch I, Deacon BJ, Huedo-Medina TB, et al: Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Medicine 2008, 5(2):e45.
10. Leucht S, Fennema H, Engel R, et al: What does the HAMD mean? J Affect Disord 2013, 148(2-3):243-248.
11. Moncrieff J, Kirsch I: Empirically derived criteria cast doubt on the clinical significance of antidepressant-placebo differences. Cont Clin Trials 2015, 43:60-62.
12. Hróbjartsson A, Thomsen ASS, Emanuelsson F, et al: Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors. CMAJ 2013, 185(4):E201-211.
13. Lundh A, Lexchin J, Mintzes B, Scholl JB, Bero L: Industry sponsorship and research outcome. Cochrane Database Syst Rev 2017, Art. No.: MR000033. DOI: 10.1002/14651858.MR000033.pub3.
14. Savovic J, Jones HE, Altman DG, et al: Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials. Ann Intern Med 2012, 157(6):429-438.
15. Wetterslev J, Jakobsen JC, Gluud C: Trial Sequential Analysis in systematic reviews with meta-analysis. BMC Medical Research Methodology 2017, 17(1):39.
16. Hooper CL, Bakish D: An examination of the sensitivity of the six-item Hamilton Rating Scale for Depression in a sample of patients suffering from major depressive disorder. J Psychiatry Neurosci 2000, 25(2):178-184.
17. O'Sullivan RL, Fava M, Agustin C, Baer L, Rosenbaum JF: Sensitivity of the six-item Hamilton Depression Rating Scale. Acta Psychiatrica Scandinavica 1997, 95(5):379-384.
18. Gluud C, Brok J, Gong Y, Koretz RL: Hepatology may have problems with putative surrogate outcome measures. J Hepatol 2007, 46(4):734-742.
19. Sundhedsstyrelsen (Danish Health Agency): Referenceprogram for unipolar depression hos voksne (Guideline for unipolar depression in adults). http://www.sst.dk/~/media/6F9CE14B6FF245AABCD222575787FEB7.ashx 2007.
20. European Medicines Agency: Guideline on clinical investigation of medicinal products in the treatment of depression. EMA/CHMP/185423/2010 Rev 2 (previously CPMP/EWP/518/97, Rev 1) 2013.
21. U.S. Food and Drug Administration: https://www.fda.gov/ohrms/dockets/AC/07/briefing/2007-4273b1_04-DescriptionofMADRSHAMDDepressionR(1).pdf.
22. Skoog M, Saarimäki JM, Gluud C, et al: Transparency and Registration in Clinical Research in the Nordic Countries. Nordic Trial Alliance, NordForsk. 2015:1-108.
23. Lundbeck Foundation. http://www.lundbeckfonden.com/about-the-foundation.25.aspx.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 03, Søren Dinesen Østergaard commented:
For a comment on this meta-analysis, see the letter to the editor in Acta Neuropsychiatrica "Do not blame the SSRIs: blame the Hamilton Depression Rating Scale": https://doi.org/10.1017/neu.2017.6
SDØ declares no conflicts of interest. Grants received from the Lundbeck foundation (see comment below) were all non-conditional and not given to support studies on antidepressant efficacy.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Aug 06, Andrea Messori commented:
Clinical and administrative issues in the in-hospital management of innovative pharmaceuticals and medical devices
by Andrea Messori and Valeria Fadda
ESTAR, Regional Health System, via San Salvi 12, 50100 Firenze (Italy)
Innovative treatments are increasingly being developed for a variety of disease conditions, particularly in the field of pharmaceuticals and medical devices. In this scenario, two needs have clearly emerged in recent years: firstly, the activities of horizon scanning, assessment of innovation and prediction of health-care expenditure for the new products are becoming more and more important in practical terms, thus underscoring that all the main components of HTA must be strengthened to improve the governance of in-hospital innovation; secondly, in most jurisdictions of national health systems (NHS), the management of innovation also includes the administrative process of procurement, and this mix of clinical and bureaucratic pathways further complicates the practical handling of the new products. On the other hand, optimizing the supply chain and the procurement of pharmaceuticals and/or medical devices is known to be essential for the governance of public health care [1].
To our knowledge, the current medical literature does not include any real-life experience in which the management of new products for in-hospital use is described in the context of a public NHS with attention to both clinical and administrative aspects. In this brief report, we present one such experience, carried out in 2017 in a regional setting (the Tuscany region) of the Italian NHS.
The Tuscany region includes a total of 3.75 million inhabitants; there are 7 separate Local Health Authorities with an overall number of 27 hospitals and 11,000 beds.
Since 2014, the requests for any new pharmaceutical or medical device by the regional hospitals are submitted through a regional website. The activity carried out in May 2017 (601 requests) has been taken as an example.
If one examines separately the data for pharmaceuticals (N=436) and medical devices (N=165), the reasons for these requests were of an administrative nature in 42% of cases for pharmaceuticals and in 84% of cases for devices; in more detail, administrative requests dealt with the need to extend a contract close to expiration or to run a new tender including some new products in replacement of the old tender. New products were requested in 38% and 16% of cases for pharmaceuticals and devices, respectively, but most of these new products could not be classified as innovative (according to the criteria of our national medicines agency [2]). Overall, there were only 3 innovative products (two medical devices and one drug), and their requests, according to our internal procedures, were transmitted to the Regional Unit responsible for HTA reports. These 3 products that met our criteria for innovation represented only 0.5% of all requests received at our website.
In conclusion, in the perspective of public hospitals, our experience shows that the management of innovation raises several practical problems because clinical and administrative aspects often co-exist and cannot be easily separated from one another. One risk in this joint management of clinical and administrative issues is that administrative criteria eventually prevail over a sound HTA assessment of the product concerned.
References
[1] Seidman G, Atun R. Do changes to supply chains and procurement processes yield cost savings and improve availability of pharmaceuticals, vaccines or health products? A systematic review of evidence from low-income and middle-income countries. BMJ Glob Health. 2017 Apr 13;2(2):e000243.
[2] Motola D, De Ponti F, Poluzzi E, Martini N, Rossi P, Silvani MC, Vaccheri A, Montanaro N. An update on the first decade of the European centralized procedure: how many innovative drugs? Br J Clin Pharmacol. 2006 Nov;62(5):610-6.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Mar 03, Ole Jakob Storebø commented:
In their editorial, Gerlach and colleagues make several critical remarks (Gerlach M, 2017) regarding our Cochrane systematic review on methylphenidate for children and adolescents with attention-deficit hyperactivity disorder (ADHD) (Storebø OJ, 2015). While we thank them for drawing attention to our review we shall here try to explain our findings and standpoints.
They argue, on behalf of the World Federation of ADHD and EUNETHYDIS, that the findings from our Cochrane systematic review contrast with previously published systematic reviews and meta-analyses (National Collaborating Centre for Mental Health (UK), 2009, Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), all of which judged the included trials more favorably than we did.
There are methodological flaws in most of these reviews that could have led to inaccurate estimates of effect. For example, most of these reviews did not publish an a priori protocol (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), did not present data on spontaneous adverse events (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), did not report on adverse events as measured by rating scales (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008), and did not systematically assess the risk of random errors, risk of bias, and trial quality (Faraone SV, 2010, King S, 2006, Van der Oord S, 2008). King et al. emphasised in the quality assessments for the NICE review that almost all studies scored poorly and that, consequently, results should be interpreted with caution (King S, 2006).
The authors of this editorial refer to many published critical editorials and argue that the issues they have raised have not been adequately addressed by us. On closer examination, it is clear that virtually the same criticism has been levelled at us each time by the same group of authors, published in several journal articles, blogs, letters, and comments (Banaschewski T, 2016, BMJ comment, Banaschewski T, 2016, Hoekstra PJ, 2016, Hoekstra PJ, 2016, Romanos M, 2016, Mental Elf blog).
Each time, we have responded with clear counter-arguments, recalculations of data, and detailed explanations (Storebø OJ, 2016, Storebø OJ, 2016, Storebø OJ, 2016, PubMed comment, Storebø OJ, 2016, BMJ comments, responses on Mental Elf, PubMed comment).
Our main point is that the very low quality of the evidence makes it impossible to estimate, with any certainty, what the true magnitude of the effect might be.
It is correct that a post-hoc exclusion of the four trials with co-interventions in both MPH and control groups and the one trial of preschool children changes the standardised mean difference effect size from 0.77 to 0.89. However, even if the effect size increases upon excluding these trials, the overall risk of bias and quality of the evidence deems this discussion irrelevant. As mentioned above, we have responded several times to this group of authors (Storebø OJ, 2016, Storebø OJ, 2016, PMID: 27138912, PubMed comment, Storebø OJ, 2016, BMJ comments, responses on Mental Elf, PubMed comment).
We did not exclude any trials for the use of the cross-over design, as these were included in a separate analysis. The use of end-of-period data in cross-over trials is problematic due to the risk of "carry-over effect" (Cox DJ, 2008) and "unit of analysis errors" (http://www.cochrane-handbook.org). In addition, we tested for the risk of "carry-over effect" by comparing trials with first-period data to trials with end-of-period data in a subgroup analysis. This showed no significant subgroup difference, but the analysis has sparse data and one can therefore not rule out this risk. Even with no statistical difference in our subgroup analysis comparing parallel-group trials to end-of-period data in cross-over trials, there was high heterogeneity. This means that the risk of "unit of analysis error" and "carry-over effect" is uncertain, and could be real.
The point about our bias assessment has been raised earlier by these authors and others affiliated with EUNETHYDIS; we see nothing new here. There is considerable evidence that trials sponsored by industry overestimate benefits and underestimate harms (Flacco ME, 2015, Lathyris DN, 2010, Kelly RE Jr, 2006). Moreover, the AMSTAR tool for methodological quality assessment of systematic reviews includes funding and conflicts of interest as a domain (http://amstar.ca/). The Cochrane Bias Methods Group (BMG) is currently working on including vested interests in the upcoming version of the Cochrane Risk of Bias tool.
The question of whether teachers can detect well-known adverse events of methylphenidate has also been raised earlier by these authors and others affiliated with EUNETHYDIS (Banaschewski T, 2016, BMJ comment, Banaschewski T, 2016, Hoekstra PJ, 2016, Hoekstra PJ, 2016, Romanos M, 2016, Mental Elf blog). We have continued to argue that teachers can detect the well-known adverse events of methylphenidate, such as loss of appetite and disturbed sleep. We highlighted this in our review (Storebø OJ, 2015) and have answered this point in several replies to these authors (Storebø OJ, 2016, Storebø OJ, 2016, Storebø OJ, 2016, PubMed comment, Storebø OJ, 2016, BMJ comments, responses on Mental Elf, PubMed comment). The well-known adverse events of loss of appetite and disturbed sleep are easily observable by teachers as uneaten food left on lunch plates, yawning, general tiredness, and weight loss.
We have taken the persistent, repeated criticism by these authors seriously, but no evidence was provided to justify changing our conclusions regarding the very low quality of evidence of the methylphenidate trials, which makes the true estimate of the methylphenidate effect unknowable. This is a methodological rather than a clinical or philosophical issue.
We had no preconceptions about the findings of this review and followed the published protocol; therefore, any manipulations of the data proposed by this group of authors would contradict the accepted methods of high-quality meta-analyses. As we have repeatedly responded clearly to the criticism of these authors, and it is unlikely that their view of our (transparent) work is going to change, we propose to agree to disagree.
Finally, we do not agree that the recent analyses from registries provide convincing evidence on the long-term benefits of methylphenidate, due to the multiple limitations of this kind of study, albeit that interesting perspectives are provided. They require further study to be regarded as reliable.
Ole Jakob Storebø, Morris Zwi, Helle B. Krogh, Erik Simonsen, Carlos Renato Maia, Christian Gluud
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jun 29, Lydia Maniatis commented:
It is interesting that the strawman description (the purely feedforward description) that Heeger is correctly and trivially rejecting in this article is the description serving as the major theoretical premise of a more recent PNAS article (Greenwood, Szinte, Sayim and Cavanagh (2017)) of which Heeger served as editor, and which I have been extensively critiquing. Some representative quotes from that article:
"Given that the receptive fields at each stage in the visual system are likely built via the summation of inputs from the preceding stages..."
"...idiosyncrasies in early retinotopic maps...would be propagated throughout the system and magnified as one moved up the cortical hierarchy."
"Given the hierarchical structure of the visual system, with inherited receptive field properties at each stage..."
These descriptions are never qualified in Greenwood et al (2017), and guide the interpretation of data. How does Heeger reconcile the assertions in the paper he edited with the assertions in his own paper?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 17, Lydia Maniatis commented:
Given that the number of possible realities (distributions of matter and light) underlying each instance of retinal stimulation is infinite; given that each instance of retinal stimulation is unique; given that any individual's experience represents only a small subsample of possible experience; given that, as is well-known, lifetime experience (not least for the reasons given above) cannot explain perception (what do we see when we lack the experience needed to re-cognize something?), Heeger's (2017) references to prior probability distributions are unintelligible (and thus untestable).
The question of how these statistical distributions are supposed to be instantiated in the brain is also left open, another reason this non-credible "theory" is untestable. All we have is a set of equations that can't be linked to any relevant aspect of the reality they're supposed to explain.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Mar 10, Stuart RAY commented:
Response from EASL here: EASL Recommendations for Treatment of Hepatitis C 2016 Panel. Electronic address: easloffice@easloffice.eu., 2017 [not linked above in PubMed, currently]
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jun 12, Randi Goldman commented:
I'm the first author on this study, and there's now a free online version of this tool that can be used for counseling patients. I hope you find it useful: https://www.mdcalc.com/bwh-egg-freezing-counseling-tool-efct
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jul 05, Anne Niknejad commented:
Error in the Figure 3 legend: "The enzyme activity with pNP-myristate was taken as 100%."
should be "The enzyme activity with pNP-butyrate was taken as 100%."
to be in accordance with results displayed and text: "Maximum enzymatic activity was observed with pNP-butyrate (C4) (100%, 176.7U/mg), while it showed weaker activity with p-NP acetate (C2, 53.08%), p-NP octanoate (C8, 25.59%), p-NP deconoate (C10, 35.05%), p-NP laureate (C12, 18.51%), p-NP myristate (C14, 7.39%), p-NP palmitate (C16, 2.23%) than that with C4 ( Fig. 3)."
Also, it is 'decanoate', not 'deconoate' (this kind of detail impacts text mining).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Apr 03, Randi Pechacek commented:
One of the authors of this paper, Despina Lymperopoulou, wrote about this paper on microBEnet discussing some of the background. Read about it here.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Feb 14, Peter Hajek commented:
To clarify the dependence potential of vaping, could the authors provide data on never-smokers who became daily vapers please?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 May 15, Lydia Maniatis commented:
In short, there are too many layers of uncertainty and conceptual vagueness here for this project to offer any points of support for any hypothesis or theory.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 May 15, Lydia Maniatis commented:
Every hypothesis or theory is a story, but in the current relaxed climate in vision science, the story doesn't need to be empirically tested and well-rationalized in order to be publishable. We just need a special section in the paper acknowledging these problems, often entitled "limitations of the study" or, as here, "qualifying remarks." Below I excerpt remarks from this and other sections of the paper (caps mine):
"A major goal of the present work was to test hypotheses related to multi-stage programming of saccades in infants. On the empirical side, we exposed 6-month-old infants to double-step trials but DID NOT SUCCEED IN COLLECTING RELIABLE DATA. In this respect, the model simulations were used to investigate an aspect of eye movement control that was not tested empirically."
"In the model simulations, certain model parameters were allowed to vary across age groups and/or viewing conditions, based on theoretical and empirical considerations. We then interpreted the constellation of best-fitting parameters." There is a great deal of flexibility in post hoc data-fitting with numerous free parameters.
"in supplementary analyses (not presented here) we used this approach to follow up on the results obtained in Simulation Study 2. To determine the individual contributions of saccade programming and saccade timing model parameters in generating the fixation duration distributions from LongD and ShortD groups during free viewing of naturalistic videos, we ran simulations in which we estimated the saccade programming parameters (mean durations of labile and non-labile stages) while keeping the saccade timing parameters fixed, and vice versa. In brief, the results confirmed that, for both ShortD and LongD groups, a particular combination of saccade-programming and saccade timing parameters was needed to achieve a good fit. HOLDING EITHER SET OF PARAMETERS FIXED DID NOT RESULT IN AN ADEQUATE FIT."
There are also a lot of researcher degrees of freedom in generating and analysing data. From the methods:
"Fixation durations (FDs). Eye-tracking data from infants may contain considerably higher levels of noise than data from more compliant participants such as adults due to various factors including their high degree of movement, lack of compliance to the task, poor calibration and corneal reflection disturbances due to the underdeveloped cornea and iris (Hessels, Andersson, Hooge, Nyström, & Kemner, 2015; Saez de Urabain, Johnson, & Smith, 2015; Wass, Smith, & Johnson, 2013). To account for this potential quality/age confound, dedicated in-house software for parsing and cleaning eye tracking data has been developed (GraFix, Saez de Urabain et al., 2015). This software allows valid fixations to be salvaged from low-quality datasets whilst also removing spurious invalid fixations. In the present study, both adult and infant datasets were parsed using GraFix's two-stage semi-automated process (see Appendix A for details). The second stage of GraFix involves manual checking of the fixations detected automatically during the first stage. This manual coding stage was validated by assessing the degree of agreement between two different raters. ONE RATER WAS ONE OF THE AUTHORS (IRSdU)."
The "in-house software" changes the data, and therefore the assumptions implicit in its computations should be made explicit. The criterion used to assess rater agreement was p < .05, which nowadays is considered rather weak. We need more information on the "valid/invalid" distinction, as well as on how the software is supposed to make this distinction.
From the "simulation studies": "The parameter for the standard deviation of the gamma distributions (rc) is a fixed parameter. To accommodate the higher variability generally observed in infant data compared to adult data, it was set to 0.33 for the infant data and 0.25 for the adult data. These values were adopted from previous model simulations (Engbert et al., 2005; Nuthmann et al., 2010; Reichle et al., 1998, 2003)."
Using second-hand values adopted by other researchers in the past doesn't absolve the current authors from explaining the rationale behind these choices (assuming there is one).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Feb 07, Dejan Stevanovic commented:
Since this review was done in 2014, one study has been published supporting the cross-cultural validity of the Revised Child Anxiety and Depression Scale (RCADS) (https://www.ncbi.nlm.nih.gov/pubmed/27353487).
Two studies have been published evidencing that the self-report Strengths and Difficulties Questionnaire (SDQ) lacks cross-cultural validity and is not suitable for cross-cultural comparisons (https://www.ncbi.nlm.nih.gov/pubmed/28112065, https://www.ncbi.nlm.nih.gov/pubmed/?term=New+evidence+of+factor+structure+and+measurement+invariance+of+the+SDQ+across+five+European+nations).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Sep 19, Helmi BEN SAAD commented:
The exact names of the authors are: Ben Moussa S, Sfaxi I, Ben Saad H, Rouatbi S.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Sep 19, Helmi BEN SAAD commented:
The exact names of the authors are: Ben Saad H, Khemiss M, Nhari S, Ben Essghaier M, Rouatbi S.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Feb 13, Khaled Moustafa commented:
Thank you, Josip, for your thoughtful comments. It is a general issue, indeed, among many others that need to be fixed in the publishing industry. Hopefully, some publishers will be all ears.
Regards,
KM
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 08, Josip A. Borovac commented:
Great input, Khaled! Thank you so much for these observations. You are not alone my friend! Best wishes, JAB
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 08, Khaled Moustafa commented:
Thank you all for your comments.
Further reasons that strongly support the idea not to ask for any specific format or style at the submission stage include:
1) It is rare that a manuscript is accepted immediately on the first round; in most cases there is a revision, even if only a minor one. So, editors could ask authors to apply the journal's style and page setup in the revised version only, not at the submission stage.
2) In most cases, the final publication format is different from the initial submission format, whatever the style required by the journal and applied by the authors. That is, even if we apply a particular journal's style at the submission stage, the accepted final version (i.e., the PDF file) often appears in a different format and style than the one required at submission. So, asking for a given page setup, citation style or specific format at the submission phase but not carrying it into the final version is an obvious waste of authors' time. Much time indeed is lost doing things that are not taken into account in the final published versions (except maybe in the HTML version or the authors' version posted online prior to the proof version, but not in the final PDF format). So, once again, it does not make much sense to require drastic formatting at the submission stage.
Regardless of the innumerable reference styles (by name or date, with superscripts or brackets, journal names in italic or not, in bold or not, underlined or not, etc.), some journals return manuscripts to the authors just because the references are indented or non-indented, or because the headings were enumerated/non-enumerated. Other journals ask authors to upload two files of the same manuscript (a Word file and a PDF file); some others ask to include the images/figures or tables in the text or in separate files, etc. All these are trivial issues of form that do not change the inherent value of a manuscript. As it is the content that should matter, not the format or style, page setup or styling could be required only in a revised version, once the manuscript is accepted, but not before.
At the least, journals should make it optional for authors to apply the journal's style at the submission stage. The submission steps themselves, in turn, are also long and overwhelming in many journals.
In my view, these also need to be shortened to the strict minimum (e.g., log in and upload files). Then, if the manuscript is accepted, authors could provide the long list of required information (statement of conflicts of interest, list of keywords, all the questions and answers currently stuffed into the submission process, etc.).
Regards,
KM
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 07, Saul Shiffman commented:
Super-agree. Even application of this attitude to paper length could be useful. I'm sure we've all gone to the trouble of whittling a paper down to the (ridiculously low) word-count requirements of a particular journal, only to have the paper rejected.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 07, Josip A. Borovac commented:
Agree with this - a well-identified problem and a good article. Some journals are trying to implement a "your paper, your way" policy, but this should be taken to a whole other level, generally speaking. Many journals mandate very obscure formatting styles, and resubmitting to another journal is a nightmare and definitely time-consuming. Researchers should focus on science as much as possible, and much less on submission technicalities and crude formatting issues. I never understood the point of having 3 billion citation styles, for example. What is their true purpose, except making our lives miserable?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 07, Thittayil Suresh Apoorv commented:
This is a nice suggestion by the author. Some journals already follow it: journals like Cytokine are part of Elsevier's Article Transfer Service (ATS), and reformatting is required only after acceptance.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 07, Francesco Brigo commented:
I couldn't agree more: time is brain!
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 May 10, Christopher Tench commented:
Implementation errors in the GingerALE Software: Description and recommendations http://onlinelibrary.wiley.com/doi/10.1002/hbm.23342/full
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Apr 13, Konstantinos Fountoulakis commented:
This paper discusses possible mechanisms of the antidepressant effect [1]. Although the whole discussion is intriguing, it should be noted that the fundamental assumption of the authors is mistaken. More specifically, the authors start from the position that there is a latency of approximately two weeks from the initiation of antidepressant treatment to the manifestation of the treatment effect. This is an old concept, now proven wrong. We now know that the treatment response starts within days, and that it takes two weeks not for the treatment effect to appear but for the medication group to separate from placebo [2]. This conclusion is so solid that it has been incorporated in the NICE CG90 guidelines for the treatment of depression (available at https://www.nice.org.uk/guidance/cg90/evidence/full-guidance-243833293, page 413). These are two completely different concepts, often confounded in the literature. Medication significantly improves a patient's chances of being better after two weeks in comparison to placebo, but the improvement itself has started much earlier. We also know that the trajectories of patients who respond to medication are similar to the trajectories of those who improve under placebo. One possible consequence of this observation is that there might be no physiological difference underlying response under medication versus response under placebo; however, the chances that these physiological mechanisms are activated are higher under medication.
References
1. Harmer CJ, Duman RS, Cowen PJ. How do antidepressants work? New perspectives for refining future treatment approaches. The Lancet Psychiatry 2017.
2. Posternak MA, Zimmerman M. Is there a delay in the antidepressant effect? A meta-analysis. The Journal of Clinical Psychiatry 2005; 66(2): 148-58.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Sep 19, NephJC - Nephrology Journal Club commented:
This trial of extending hemodialysis hours and quality of life was discussed on September 12th and 13th, 2017, on #NephJC, the open online nephrology journal club. Introductory comments written by Swapnil Hiremath are available at the NephJC website.
The highlights of the tweetchat were:
The study was well-designed and relevant.
Interpreting the results, can you validly fit a linear model to a questionnaire score?
Some would argue that including both incident and prevalent patients may have confounded some of the results as LV mass regresses in the first few months after initiation.
This paper confirmed previous literature that preserving residual renal function is really essential for better outcomes on dialysis and that HD dose should track with this.
Fluid control may be more important than solute clearance.
Transcripts of the tweetchats, and curated versions as a Storify, will shortly be available from the NephJC website.
Interested individuals can track and join in the conversation by following @NephJC or #NephJC on Twitter, liking @NephJC on Facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Feb 06, Clive Bates commented:
To build on to Professor Hajek's cogent criticism, I would like to add a further three points:
First, the authors offer the usual disclaimer that they cannot make causal inferences from a study of this nature, which is correct. But they go on to do exactly that within the same paragraph:
Finally, although the cross-sectional design of our study allowed us to determine associations between variables, it restricted our ability to draw definitive causal inferences, particularly about the association between ENDS use and smoking cessation. Nevertheless, the association between ENDS use and attempts at smoking cessation suggests that a substantial proportion of smokers believe that ENDS use will help with smoking cessation. Furthermore, the inverse association between ENDS use and smoking cessation suggests that ENDS use may actually lower the likelihood of smoking cessation. (emphasis added)
From that, they build a policy recommendation:
"Tobacco cessation programs should tell cigarette smokers that ENDS use may not help them quit smoking"
That statement is literally true for e-cigarettes and every other way of quitting smoking, but it is not a meaningful or legitimate conclusion to draw from this study because the design does not allow for causal inferences.
Second, the authors characterise e-cigarette use as 'ever use' in calculating their headline odds ratio (0.53).
Our most important finding was that having ever used ENDS was significantly associated with reduced odds of quitting smoking.
Ever use could mean anything from 'used once and never again' to 'use all day, every day' or 'used once when I couldn't smoke'. What it does not mean is 'used an e-cigarette in an attempt to quit smoking'. So this way of characterising e-cigarette use can tell us little about people who do try to quit smoking using an e-cigarette or whether that approach should be recommended.
Third, as well as the basic timing point made by Professor Hajek, the authors do not consider a further obvious contributory explanation: reverse causality. It is quite possible that those who find it hardest to quit or don't want to quit may be those who are drawn to trying e-cigarettes - either because they don't want to stop or have tried everything else already and failed. It is not safe to assume that the population is homogeneous in the degree of nicotine dependence, that e-cigarette ever-use is randomly distributed across the population or that e-cigarette use is generally undertaken with the intention of quitting.
The analysis provides no insights relevant to the efficacy of e-cigarettes in smoking cessation and building any sort of recommendation to smoking cessation programs based on this survey is wrong and inappropriate.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 06, Peter Hajek commented:
The unsurprising finding that people who quit smoking between 2009 and 2013 were less likely to try e-cigarettes than those who still smoked in 2014 is presented as if this shows that the experience with vaping somehow ‘reduced odds of quitting smoking’. It shows no such thing.
It is obvious that current smokers must be more likely to try e-cigarettes than smokers who quit years ago. E-cigarettes were more widely used in 2014 than in previous years. People who quit smoking up to five years earlier would have much less (or even no) opportunities to try vaping, and no reason to do so after they quit. Current smokers in contrast continue to have a good reason to try e-cigarettes, and are having many more opportunities to do so. This provides no information at all about odds of quitting smoking or about whether e-cigarettes are effective or not.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
jov.arvojournals.org
-
On 2017 Apr 12, Lydia Maniatis commented:
It occurs to me that there are two ways to interpret the finding that people were influenced by the stable versions of the dress image in the different settings. The authors say that these versions introduced a bias as to the illumination. But it seems to me more straightforward to assume that they introduced a bias or expectation with respect to the actual colors of the dress, that is, a perceptual set mediating the latter. It took me a while to realize this - an example of how explanations that are given to us can create a 'conceptual set.'
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 15, Lydia Maniatis commented:
I don’t think the authors have addressed the unusual aspects of the dress in saying that “The perceived colors of the dress are due to (implicit) assumptions about the illumination.” As they note themselves, “This is exactly what would be predicted from classical color science…”
The spectrum of light reflected to our eye is a function of the reflectance properties of surfaces and the spectrum of the illumination; both are disambiguated on the basis of implicit assumptions, and both are represented in the percept. The two perceptual features (surface color and illumination) are two sides of the same coin: Just as we can say that seeing a surface as having color x of intensity y is due to assumptions about the color and intensity of the illuminants, so we can say that seeing illumination of color x and intensity y is due to implicit assumptions about the reflectance (how much light they reflect) and the chromaticity (which wavelengths they reflect/absorb) of the viewed surfaces. We haven’t explained anything unless we can explain both things at the same time.
The authors are choosing one side of the perceptual coin – the apparent illumination – and claiming to have explained the other. Again, it’s a truism to say that seeing a patch of the dress as color “x” implies we are seeing it as being under illumination “y,” while perceiving the patch as a different color means perceiving a different illumination. This doesn’t explain what makes the dress unusual - why it produces different color/illumination impressions in different people.
The authors seem to want to take the “experience” route (“prior experiences may influence this perception”); this is logically and empirically untenable, as has been shown and argued innumerable times in the vision literature. For one thing, such a view is circular, since what we see in the first place is a product of the assumptions implicit in the visual process. It’s not as though we see things first, and then adopt assumptions that allow us to see them. In addition, why would such putative experience influence only the dress, and not each and every percept? (The same objection applies to explanations in terms of physiological differences.) Again, the question of what makes the dress special is left unaddressed.
It’s odd that, for another example of such a phenomenon, vision researchers need to turn to “poppunkblogger.” If they understood it in principle, then they would be able to construct any number of alternative versions. Even if they could show the perception of the dress to be experience-based (which, again, is highly unlikely to impossible), this would not help; they would still be at a loss to explain why different people see different versions of one image and not of most others. To understand the special power of the dress, they need at a minimum to analyze its structure, not only in terms of color but in terms of shape, which is the primary mediator of all aspects of perception. Invoking “scene interpretation” and “the particular color distributions” only provides placeholders for all the things the authors don’t understand.
The construction of images that show that the dress itself can produce consistent percepts is genuinely interesting, but it is a problem that the immediate backgrounds are not the same (e.g. arm placements). This produces confounds. The claim that these confounds are designed to produce the opposite effect of what is seen, based on contrast effects, is not convincing, since the idea that illusions involving transparency/illumination are based on local contrast effects is a claim that is easy to falsify empirically, and has been falsified. So we are dealing with unanalyzed confounds, and one has to wonder how much blind trial and error was involved in generating the images.
Finally, I’m wondering why a cutout of the dress wasn’t also placed against a plain background as a control; what happens in this case? Has this been done yet?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Nov 03, Elisabeth Schramm commented:
In reply to a comment by Falk Leichsenring
Allegiance effects controlled
Elisabeth Schramm, PhD; Levente Kriston, PhD; Ingo Zobel, PhD; Josef Bailer, PhD; Katrin Wambach, PhD; Matthias Backenstrass, PhD; Jan Philipp Klein, MD; Dieter Schoepf, MD; Knut Schnell, MD; Antje Gumz, MD; Paul Bausch, MSc; Thomas Fangmeier, PhD; Ramona Meister, MSc; Mathias Berger, MD; Martin Hautzinger, PhD; Martin Härter,MD, PhD
Corresponding Author: Elisabeth Schramm, PhD, Department of Psychiatry, Faculty of Medicine, University of Freiburg, Hauptstrasse 5, 79104 Freiburg, Germany (elisabeth.schramm@uniklinik-freiburg.de)
We acknowledge the comment of Drs. Steinert and Leichsenring (1) on our study (2) reasoning that our findings may at least in part be attributed to allegiance effects. Unfortunately, they provide neither a clarification of what exactly they refer to with the term “allegiance effects” nor a specific description of the presumed mechanisms (chain of effects) through which they think allegiance may have influenced our results. In fact, as specified both in the trial protocol (3) and the study report (2), we implemented a series of carefully designed measures to minimize bias. Contrary to what is stated in the comment, training and supervision of the study therapists and the center supervisors were performed by qualified and renowned experts for both investigated approaches (Martin Hautzinger for Supportive Psychotherapy and Elisabeth Schramm for the Cognitive Behavioral Analysis System of Psychotherapy). Moreover, neither of them was involved in treating any study patients in this trial. We are confident that any possible allegiance of the participating researchers, therapists, supervisors, or other involved staff towards any, both, or none of the investigated interventions is very unlikely to have been able to surmount all of the implemented measures against bias and to affect the results substantially.
References
(1) Steinert C, Leichsenring F. The need to control for allegiance effects in psychotherapy research. PubMed Commons. Sep 08 2017
(2) Schramm E, Kriston L, Zobel I, Bailer J, Wambach K, Backenstrass M, Klein JP, Schoepf D, Schnell K, Gumz A, Bausch P, Fangmeier T, Meister R, Berger M, Hautzinger M, Härter M. Effect of Disorder-Specific vs Nonspecific Psychotherapy for Chronic Depression: A Randomized Clinical Trial. JAMA Psychiatry. Mar 01 2017; 74(3): 233-242
(3) Schramm E, Hautzinger M, Zobel I, Kriston L, Berger M, Härter M. Comparative efficacy of the Cognitive Behavioral Analysis System of Psychotherapy versus supportive psychotherapy for early onset chronic depression: design and rationale of a multisite randomized controlled trial. BMC Psychiatry. 2011;11:134
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Sep 08, Falk Leichsenring commented:
The need to control for allegiance effects in psychotherapy research
Christiane Steinert1,2, PhD and Falk Leichsenring, DSc2
1 Medical School Berlin, Department of Psychology, Calandrellistraße 1-9, 12247 Berlin
2 University of Giessen, Department of Psychosomatics and Psychotherapy, Ludwigstraße 76, 35392 Giessen
Corresponding author: Prof. Dr. Falk Leichsenring, University of Giessen, Department of Psychosomatics and Psychotherapy, Ludwigstr. 76, 35392 Giessen, Germany. Phone: +49-641-99 45647; Fax: +49-641-99 45669; Email: falk-leichsenring@psycho.med.uni-giessen.de
In a recent trial of psychotherapeutic efficacy, Schramm et al. hypothesized that the Cognitive Behavioral Analysis System of Psychotherapy (CBASP) would be superior to supportive therapy (SP) in the treatment of chronic depression.1 This hypothesis was corroborated: CBASP improved depression significantly more, but only with a moderate effect size of 0.31. Some issues, however, raise the question of possible allegiance effects on several levels.2 (1) The authors clearly are in favour of CBASP and expected it to be superior to SP (primary hypothesis). (2) Furthermore, six authors participated as therapists in the CBASP group, while none of the authors seems to have participated in the SP group. The authors can be expected to have an allegiance to CBASP. (3) In addition, the therapy sessions were supervised by the very same authors who participated in the CBASP group as therapists. (4) No expert in SP was listed as having participated in the study, whether as a researcher, a therapist or a supervisor. (5) The authors stated that all therapists had completed a 3-year psychotherapeutic training program or were in an advanced stage of training. However, at least in Germany, there is no 3-year training program specifically for SP, only for CBT. Thus, it is unlikely that the therapists in the SP condition had the same allegiance to SP as the therapists in the CBASP condition had to CBASP. Further information on the background of the therapists in SP would be informative.
Thus, researcher, therapist and supervisor allegiance effects can be expected to be present in this study.3
Munder et al. found an association between researcher allegiance and outcome of r = 0.35, which corresponds to a medium effect size.2 For this reason the possibility cannot be ruled out that the moderate between-group effect size of 0.31 is at least in part due to allegiance effects. The fact that the treatments do not seem to differ with regard to treatment fidelity ratings does not rule out this possibility. This is also true of the fact that therapists met the criteria for mastery of CBASP and SP before treating study patients.
References
(1) Schramm E, Kriston L, Zobel I, Bailer J, Wambach K, et al. Effect of Disorder-Specific vs Nonspecific Psychotherapy for Chronic Depression: A Randomized Clinical Trial. JAMA Psychiatry. Mar 01 2017;74(3):233-242.
(2) Munder T, Flückiger C, Gerger H, Wampold BE, Barth J. Is the Allegiance Effect an Epiphenomenon of True Efficacy Differences Between Treatments? A Meta-Analysis. J Couns Psychol. 2012 (Epub ahead of print).
(3) Steinert C, Munder T, Rabung S, Hoyer J, Leichsenring F. Psychodynamic Therapy: As Efficacious as Other Empirically Supported Treatments? A Meta-Analysis Testing Equivalence of Outcomes. Am J Psychiatry. May 25 2017:appiajp201717010057.
(4) Cuijpers P, Huibers MJ, Furukawa TA. The Need for Research on Treatments of Chronic Depression. JAMA Psychiatry. Mar 01 2017;74(3):242-243.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 16, Lydia Maniatis commented:
"Our findings suggest that people who perceive the dress as blue might rely less on contextual cues when estimating surface color."
Can there be a "context-free" condition, even in principle? What would it look like? The term "context" seems far too general as it is used here. The conclusion as framed has no theoretical content. If the reference is to specific manipulations, and the principles behind them, then it's an entirely different thing, and should be specified.
The fact that the results of this study differed significantly from the results of others should be of concern with respect to all of them. Might replication attempts be in order, or are chatty post-mortems enough?
"Our results lend direct support to the idea that blue and white perceivers see the dress in a different color because they discount different illumination colors."
This statement involves a major conceptual error, in the sense that it cannot function as an explanation. The visual system infers both surface color and illumination from the stimulation of the retina by various wavelengths at various intensities. Both surface appearance and illumination are inferred from the same stimulation; to make an inference about illumination is to simultaneously make an inference about reflectance/chromaticity. One “explains” the other in the sense that each inference is contingent on the other; but to say that one inference has priority over the other is like saying the height of one side of a see-saw determines the height of the other; it’s an empty statement. What we need to explain is the movement of the whole, interconnected see-saw.
This error is unfortunately a common one; it's also made by Witzel, Racey and O'Regan (2017) in this special issue. In short, this is a non-explanation.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 08, Raphael Stricker commented:
Another Lyme OspA Vaccine Whitewash
The meta-analysis by Zhao and colleagues comes to the conclusion that "the OspA vaccine against Lyme disease is safe and its immunogenicity and efficacy have been verified." The authors arrive at this sunny conclusion by excluding 99.6% of published articles that demonstrate potential problems with the OspA vaccine. Furthermore, the authors ignore peer-reviewed studies, FDA regulatory meetings and legal proceedings that point to major problems with OspA vaccine safety (1-3). This whitewash bodes ill for future Lyme vaccine candidates because it fosters disregard for vaccine safety among Lyme vaccine manufacturers and mistrust among potential Lyme vaccinees.
References
1. Stricker RB (2008) Lymerix® risks revisited. Microbe 3: 1–2.
2. Marks DH (2011) Neurological complications of vaccination with outer surface protein A (OspA). Int J Risk Saf Med. 23: 89–96.
3. Stricker RB, Johnson L (2014) Lyme disease vaccination: safety first. Lancet Infect Dis. 14(1):12.
Disclosure: RBS is a member of the International Lyme and Associated Diseases Society (ILADS) and a director of LymeDisease.org. He has no financial or other conflicts to declare.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 02, Stuart RAY commented:
Regarding the prominent concluding statement of the abstract, what is the evidence that treatment success (with current recommended regimens) will be reduced by the RASs found in this study?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 15, thomas samaras commented:
This is an excellent paper on height trends in Sardinia. However, I disagree with the premise that height can be used as a measure of health and longevity. Many researchers view greater height as a byproduct of the Industrial Revolution and the Western diet. In actuality, greater height and its associated weight are harmful to our long-term health and longevity. There are many reasons for this position.
Carrera-Bastos reported that our modern diet is not the cause of increased life expectancy (LE). Instead, our progress in sanitation, hygiene, immunization, and medical technology has driven the rise in life expectancy. This increase is not that great at older ages; e.g., in 1900, a 75-year-old man could expect to live another 8.5 years. A hundred years later, he could expect to live another 10 years. That is not a substantial improvement in spite of great advances in food availability, lifestyle, medicine, worker safety, etc.
A number of researchers have associated our increased height with excess nutrition, not better quality nutrition (Farb, Galton, Gubner, and Campbell).
Nobel prize winner Charles Townes stated that shorter people live longer. Other scientists supporting the longevity benefits of smaller body size within the same species include Bartke, Rollo, Kraus, Pavard, Promislow, Richardson, Topol, Ringsby, Barrett, Storms, Moore, Elrick, de Magalhães and Leroi.
Carrera-Bastos, et al. reported that pre-Western societies rarely get age-related chronic diseases until they transition to a Western diet. Trowell and Burkitt found this to be true based on their research over 40 years ago (Book: Western Diseases, Trowell and Burkitt.) Popkin noted that the food system developed in the West over the last 100+ years has been “devastating” to our health.
A 2007 report by the World Cancer Research Fund/American Institute of Cancer Research concluded that the Industrial Revolution gave rise to the Western diet that is related to increased height, weight and chronic diseases. (This report was based on evaluation of about 7000 papers and reports.)
US males are 9% taller than US females and have a 9% shorter life expectancy. Similar differences between males and females were found in Japan and among California Asians. It is unlikely that this inverse relationship between height and life expectancy is a coincidence. (Bulletin of the World Health Organization, 1992, Table 4.)
High animal protein intake is a key aspect of the Western diet, but it has many negative effects. For example, a high-protein diet increases levels of CRP, fibrinogen, Lp(a), IGF-1, Apo B, homocysteine, type 2 diabetes risk, and free radicals. In addition, the metabolism of protein yields more harmful byproducts: whereas the metabolism of fats and carbohydrates produces CO2 and water, protein metabolism produces ammonia, urea, uric acid and hippuric acid. (Fleming, Levine, Lopez)
The high LE ranking of tall countries is often cited as supporting the conviction that taller people live longer. However, if we eliminate non-developed countries, which have high death rates during the first 5 years of life and poor medical care, the situation changes: among developed countries, shorter countries rank higher than tall ones. For example, of the top 10 countries, only Iceland is a tall country; the other developed countries are relatively short or medium in height. The top 10 countries are: Monaco (1), Singapore, Japan, Macau, San Marino, Iceland (the tall exception), Hong Kong, Andorra, Switzerland, and Guernsey (10). The Netherlands, one of the tallest countries in Europe, ranks 25th from the top. The rankings of other tall countries include Norway (21), Germany (34), Denmark (47), and Bosnia and Herzegovina (84). Source for LE data: CIA World Factbook, 2016 data. Male height data from Wikipedia.
It should be pointed out that a number of confounders exist that can invalidate mortality studies that show shorter people have higher mortality. Some of these confounders include socioeconomic status, higher weight for height in shorter people, smoking, and failure to focus on ages exceeding 60 years (differences showing shorter people live longer generally occur after 60 years of age). For example, Waaler’s mortality study covered the entire age range. He found that between 70 and 85 years of age, tall people had a higher mortality than shorter men between 5’7” and 6’. An insurance study (Build Study, 1979) found that when they compared shorter and taller men with the same degree of overweight, the shorter men had a slightly lower mortality.
Anyone interested in the evidence showing that smaller body size is related to improved health and longevity can find evidence in the article below which is based on over 140 longevity, mortality, survival and centenarian studies.
Samaras TT. Evidence from eight different types of studies showing that smaller body size is related to greater longevity. JSRR 2014. 2(16): 2150-2160, 2014; Article no. JSRR.2014.16.003
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 01, Hilda Bastian commented:
The authors raise interesting and important points about the quandaries and complexities involved in updating a systematic review and reporting the update. However, their review of the field, and their conclusion that of the 250 journals they looked at only BMC Systematic Reviews has guidance on the process of updating, is deeply flawed.
One of the 185 journals in the original sample they included (Page MJ, 2016) is the Cochrane Database of Systematic Reviews. Section 3.4 of the Cochrane Handbook is devoted to updating, and updating is addressed within several other sections as well. The authors here refer to discussion of updating in Cochrane's MECIR standards. Even though this does not completely cover Cochrane's guidance to authors, it contradicts the authors' conclusion that BMC Systematic Reviews is the only journal with guidance on updating.
The authors cite a recent useful analysis of guidance on updating systematic reviews (Garner P, 2016). Readers who are interested in this topic could also consider the broader systematic review community and methodological guidance. Garritty C, 2010 found 35 organizations that have policy documents at least on updating, and many of these have extensive methodological guidance, for example AHRQ (Tsertsvadze A, 2008). Recently, guidelines for updating clinical guidelines have also been published (Vernooij RW, 2017).
The authors reference some studies that address updating strategies; however, this literature is quite extensive. You can use this filter in PubMed along with other search terms for studies and guidance: sysrev_methods [sb] (example). (An explanation of this filter is on the PubMed Health blog.)
Disclosure: I work on PubMed Health, the PubMed resource on systematic reviews and information based on them.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jul 13, GARRET STUBER commented:
The corrected version of this manuscript is now online at the journal's website. A detailed correction notice is linked to the corrected article.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 09, GARRET STUBER commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Nov 20, Erin Frazee Barreto commented:
The Cystatin C-Guided Vancomycin Dosing tool can be accessed using the mobile or web app 'Calculate' available at QxMD
https://qxmd.com/calculate/calculator_449/vancomycin-dosing-based-on-egfr-creatinine-cystatin-c
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2018 Jan 08, Israel Hanukoglu commented:
A three-dimensional (3D) video of the human eccrine sweat gland duct covered with sodium channels can be seen at: https://www.youtube.com/watch?v=JcddOILffOM
The video was generated based on the results of this study.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 May 17, Israel Hanukoglu commented:
Our lab has undertaken to map the sites of expression and localization of ENaC and CFTR in epithelial tissues. This article is the second one in the series and it concentrates on the skin.
In the first paper, we covered the sites of localization of ENaC and CFTR in the respiratory tract and the female reproductive tract. Both of these tissues contain large stretches of epithelium covered with multi-ciliated cells. We had shown that in these epithelia with motile cilia, ENaC is expressed along the entire length of the cilia. Reference: https://www.ncbi.nlm.nih.gov/pubmed/22207244
In the current work on the skin, epidermis and epidermal appendages, ENaC was found mostly in the cytoplasm of keratinocytes, sebaceous glands, and smooth muscle cells. Only in the eccrine sweat glands were ENaC and CFTR found predominantly on the luminal membrane of the ducts, facing the lumen. Thus, the reuptake of Na+ ions secreted in sweat probably takes place in the eccrine glands.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jan 28, Eric Fauman commented:
For a report on SOMAscan pQTLs from a much larger but cross-sectional population, check out the recent (not yet peer-reviewed) bioRxiv paper from Karsten Suhre and colleagues:
Connecting genetic risk to disease endpoints through the human blood plasma proteome http://biorxiv.org/content/early/2016/11/09/086793
For example, the association reported above (rs3197999, p-value = 6e-10 for MST1 levels) is reported in the bioRxiv paper with a p-value of 1e-242.
The data from the BioRxiv paper can be explored at http://proteomics.gwas.eu
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2018 Jan 05, Martine Crasnier-Mednansky commented:
All data in this paper should be discarded simply because the strains used by the authors are not what the authors think they are, as explained below.
An Escherichia coli strain lacking PEP synthase does not grow on pyruvate. In fact, the crucial role of PEP synthase during growth on pyruvate is well documented. In brief, mutant strains were isolated which could grow on glucose or acetate but not on pyruvate; it was found they lacked PEP synthase (see Cooper RA, 1967 for an early paper). Furthermore, because the PEP synthase gene (ppsA) is transcriptionally positively regulated by the fructose repressor FruR (Geerse RH, 1986, also known as Cra), fruR mutant strains are routinely checked for their inability to grow on pyruvate. Therefore, data (in supplementary Fig. 1) indicating wild type and ppsA strains grow equally well on pyruvate are incorrect; the strain used by the authors is not a ppsA strain.
The ptsI strain also does not appear to be a ptsI strain: it grows on xylose as well as the wild-type strain does (Figure 3b), which it should not. Growth on xylose requires cAMP, which in turn requires the phosphorylated form of Enzyme IIA<sup>Glc</sup>.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Mar 12, Atanas G. Atanasov commented:
Silymarin is indeed a very prominent herbal product with a variety of demonstrated bioactivities. It was also recently studied in my group in the context of the regulation of PPARgamma activity and macrophage cholesterol efflux. I enjoyed reading this review focused on the usefulness of the plant product in chronic liver disease, and have featured it on: http://healthandscienceportal.blogspot.com/2017/03/how-beneficial-is-silymarinsilybin-use.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 15, Vojtech Huser commented:
This is a very interesting study. Since it uses the OMOP CDM, it would be interesting to execute it on additional datasets. The appendix provides some guidance. Are there plans to release (possibly with some delay) additional analysis code details? (The study possibly uses existing software packages, for which case studies of their application are very valuable.)
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
onlinelibrary.wiley.com
-
On 2017 Nov 28, Natalie Clairoux commented:
Free version of this article available at https://papyrus.bib.umontreal.ca/xmlui/handle/1866/19505
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 22, KEVIN BLACK commented:
The version of record is available via the DOI shown above. The peer-reviewed manuscript version appears by agreement with the publisher at http://works.bepress.com/kjb/66/ .
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jan 28, Thomas Jeanne commented:
There is ambiguity in the wording that the authors use in the body text and the abstract to describe the observed HbA1c reductions. A "1.1% mean reduction" implies a relative reduction, e.g., a reduction from 7.6% A1c to about 7.52% A1c. It is only after looking at Table 2 that it becomes clear that the observed change was an absolute reduction in the percentage of hemoglobin that was glycated. To avoid such confusion, use of the term "percentage point" is well established (e.g., Hayward RA, 1997; Vijan S, 2014; Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Research Group, 2016).
The mean reduction in HbA1c level from baseline was 1.1 percentage points in the CGM group at 12 weeks, not 1.1%.
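The difference between the two readings is easy to make concrete with a couple of lines of arithmetic (the 7.6% baseline here is the illustrative value from the example above, not a figure from the trial):

```python
baseline_a1c = 7.6  # illustrative baseline HbA1c, as % of hemoglobin glycated

# Absolute reduction of 1.1 percentage points (what the trial observed):
after_absolute = baseline_a1c - 1.1            # 6.5% A1c

# Relative reduction of 1.1% (what the wording could be misread as):
after_relative = baseline_a1c * (1 - 0.011)    # about 7.52% A1c

print(round(after_absolute, 1), round(after_relative, 2))
```

The two interpretations differ by roughly a full percentage point of HbA1c, which is clinically meaningful, hence the importance of the "percentage point" wording.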
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 08, Christophe Dessimoz commented:
Story behind the paper in this blog post: http://lab.dessimoz.org/blog/2017/02/08/sex-alcohol-and-structural-variants-in-fission-yeast
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 04, Carl V Phillips commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 25, Peter Hajek commented:
The press release claimed that ‘E-Cigarettes are Expanding Tobacco Product Use Among Youth’, but this study showed no such thing. It detected no increase in youth smoking; on the contrary, the continuing decline in smoking shows that e-cigarettes are not expanding smoking.
In fact, the data in the paper suggest that, if anything, the increase in vaping has been associated with an accelerated decline in smoking. The cut-off point of 2009 seems to have been selected to show no acceleration, but very few young people tried vaping in 2009. By 2011, only 1.5% of middle and high school students had vaped within the past 30 days, and the figures went up after that. If the decline in smoking over 2004-2011 is compared with the decline over the following years, it may well have significantly accelerated.
The final conclusion that ‘E-cigarette–only users would be unlikely to have initiated tobacco product use with cigarettes’ makes no sense because e-cigarette only users have not initiated any tobacco product use!
If the authors mean by this that they initiated nicotine use, this is unlikely. In this as in other similar reports, smokers were asked on how many days they smoked in the past 30 days, and it is most likely that the same question was asked of vapers, but these results are not reported. Studies that assessed frequency of use report that, as with non-smokers who try nicotine replacement products such as nicotine chewing gum, it is extremely rare for non-smokers who try vaping to progress to regular use. While some smokers find e-cigarettes satisfactory and switch to vaping, the majority of non-smokers who experiment with e-cigarettes try them only once or twice, and virtually none progress to daily use.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 08, Lydia Maniatis commented:
Let me make clear what Adamian and Cavanagh (2017) do, and what they don’t do, in this publication. What they don’t do is test a hypothesis. What they do is present a casual, ad hoc explanation of the Frohlich effect based on the results of past experiments, which they replicate here. The proposal remains untested. Even the ad hoc, untested assumptions (“we assume that the critical delay in producing the Fröhlich effect is not just the delay of attention in arriving at the target but also the time a saccade would then need to land on the target, if one were executed;”) can’t explain the results of their experiments, requiring more ad hoc proposals about complex processes: “The results suggest that the simultaneous onsets may be held in iconic memory and the cued motion trajectory can be retrieved if the cue arrives soon enough;” “A late SOA implies a longer memory retention period, and that means that the reported shifts could arise from working memory limitations and might not be perceptual in nature.”
Is Adamian and Cavanagh’s assumption that “the critical delay is not just the delay of attention….but also the time a saccade would then need to land on the target…” testable?
How would one go about testing it, as well as the additional assumptions the authors feel obliged to make with respect to memory?
Why didn’t the authors attempt to test their proposal to begin with, rather than simply performing replications that, even if successful, could do no more than leave the issue unresolved? They have not even proposed possible tests.
Obviously, replication was the safer choice, but one, again, that is essentially uninformative vis-à-vis an ad hoc proposal. It should be clear that the subject of eye movements and their role in perception is extremely complex and that casual speculations are unlikely to be borne out, if properly tested.
I think Adamian and Cavanagh’s proposal is so vague, the confounds so many, and (least of all, at present) the technical demands so great, that it cannot be tested. If all of the main and subsidiary assumptions, and their implications, were clarified enough to allow them to be critically assessed for logical coherence and consistency with other known facts, it might well fail at this stage, obviating the need for experimental tests.
Of course, I could be wrong in the present case; the authors may intend, post-replication, to attempt to concretize and subject their proposal to a genuine test; that would be genuinely refreshing.
I would note, as an afterthought, the uninformative nature of the title of the article, which is typical of many vision science articles and reflects the essentially uninformative nature of the work itself. The title tells us what the article is about, but not what it concluded or implied.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jan 24, Jim Johnson commented:
This paper is missing some highly relevant references from the Kieffer lab, including recent studies that establish the requirement for insulin in the anti-diabetic actions of leptin.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
www.ncbi.nlm.nih.gov
-
On 2017 Mar 08, Atanas G. Atanasov commented:
Very important and nicely summarized information of very high relevance for the general public. I have featured this review at: http://healthandscienceportal.blogspot.com/2017/03/potential-benefits-and-harms-of-fasting.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Aug 08, Christopher Tench commented:
Can you possibly provide the coordinates used? It is not possible to understand exactly what analysis has been performed without them.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jan 31, Sin Hang Lee commented:
I assume you have read the reference under the Comment.
Perhaps you, or someone responding on behalf of Dr. Mark Schiffman and colleagues, would like to reply to the comment on PubMed Commons below the abstract of the following article: https://www.ncbi.nlm.nih.gov/pubmed/27905473
I would like to initiate a forum of open discussion, not one-sided proclamations.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 31, Stuart RAY commented:
The comment above does not provide any evidence for the dissent stated. What is "biased and dangerous" about the HPV vaccination recommendation?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 24, Sin Hang Lee commented:
The Editorial “Trump’s vaccine-commission idea is biased and dangerous” in Nature 2017 Jan 17;541(7637):259 is debatable. At least one article published by the Nature Publishing Group, in Nature Reviews Disease Primers 2016;2:16086 [1], promoting mass human papillomavirus (HPV) vaccination of girls 9-13 years of age and teenage boys at the cost of >$50 million for every 100,000 adolescents in the name of cervical cancer prevention is equally biased and dangerous. Medical journal censorship of dissenting data and opinions has suppressed the facts that the benefits of mass HPV vaccination are uncertain and the risks are substantial at great cost to society.
References:
1. Nature Reviews Disease Primers 2016;2:16086.
Sin Hang Lee, shlee01@snet.net, Milford Molecular Diagnostics Laboratory, Milford, CT
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jan 23, Andy Collings commented:
(Original comment found at: https://elifesciences.org/content/6/e17044#disqus_thread)
Response to “Replication Study: Discovery and preclinical validation of drug indications using compendia of public gene expression data”
Atul J Butte, Marina Sirota, Joel T Dudley
We represent three of the key authors of the original work.
In October 2013, we were pleased to see that our original 2011 publication (Sirota et al., 2011) was chosen as one of the top 50 influential cancer studies selected for reproducibility. Our initial impression, probably like that of most investigators reading this letter, was that such recognition would be a mixed blessing for us. Most of the work for this paper was conducted in 2009, four years before we were approached. We can see now that this reproducibility effort is one of the first 10 to be completed, and one of the first 5 to be published, more than 3 years later. The reproducibility team should be commended on their diligence in repeating the experimental details as much as possible.
The goal of the original study was to evaluate a prediction from a novel systematic computational technique that used open-access gene-expression data to identify potential off-indication therapeutic effects of several hundred FDA approved drugs. We chose to evaluate cimetidine based on the biological novelty of its predicted connection to lung cancer and availability of local collaborators in this disease area.
The key experiment replicated here involved 18 mice treated with three varying doses of cimetidine (ranging from 25 to 100 mg/kg) administered via intraperitoneal injection daily to SCID mice after implantation of A549 human adenocarcinoma cells, along with 6 mice treated with doxorubicin as a positive control, and 6 mice treated only with vehicle as a negative control. The reproducibility team used many more mice in their experiment, but tested only the highest dose of cimetidine.
First, it is very important to clearly note that we are truly impressed with how much Figure 1 in the reproducibility paper matches Figure 4c in our original paper, and this is the key finding that cimetidine has a biological effect between PBS/saline (the negative control) and doxorubicin (the positive control). We commend the authors for even using the same colors as we did, to better highlight the match between their figure and ours.
While several valid analytic methods were used on the new tumor volume data, the analysis most similar to the original was the t-test we conducted on the measurements from day 11, with 100 mg/kg cimetidine compared to vehicle control. The new measurements were evaluated with a Welch t-test yielding t(53) = 2.16, with p=0.035. We are extremely pleased to see this raw p-value come out from their experiment.
However, the reproducibility team then decided to apply a Bonferroni adjustment, resulting in a corrected p=0.105. While this Bonferroni adjustment was decided a priori and documented (Kandela et al., 2015), we fundamentally do not agree with their approach.
The reproducibility team took on this validation effort by starting with our finding that cimetidine demonstrated some efficacy in the pre-clinical experiments. However, our study did not start with that prediction. We started our experiments with open data and a novel computational effort. Readers of our original paper (Sirota et al., 2011) will see that we started our study much earlier in the process, with publicly-available gene expression data on drugs and diseases, and computationally made predictions that certain drugs could be useful to treat certain conditions. We then chose cimetidine and lung adenocarcinoma from among the list of significant drug-disease pairs for validation. This drug-disease pairing was statistically significant in our computational analysis, which included the formal evaluation of multiple-hypothesis testing using random shuffled data and the calculation of q-values and false discovery rates. These are commonly used methods for controlling for the testing of multiple hypotheses. Aside from the statistical significance, local expertise in lung cancer and the availability of reagents and A549 cells and mouse models in our core facilities guided the selection. We then chose an additional pairing that we explicitly predicted (by the computational methodology) would fail. We again used cimetidine and found we had ACHN cells that could represent a model of renal cancer. Scientists will recognize this as a negative control.
At no point did we feel the comparison of cimetidine against A549 cells had anything to do with the effect of cimetidine in ACHN cells; these were independently run experiments. The ACHN cell test was to test the specificity of the computational process upstream of all of this; it had nothing to do with our belief in cimetidine in A549 cells. Thus, we would not agree with the replication team’s characterization that these were all multiple hypotheses being validated equally, and thus merited a common adjustment of p-values. As described above, we corrected for the multiple hypothesis testing earlier in our process, at the computational stage. We never expected the cimetidine/ACHN experiment to succeed when we ran it. Similarly, our test of doxorubicin in A549 cells was performed as a positive control experiment; we fully expected that experiment to succeed.
In email discussion, we learned the replication team feels these three hypotheses were tested equally, and thus adjusted the p-values by multiplying them by 3. We are going to have to respectfully “agree to disagree” here.
We note some interesting results of their adjustments, such as the reproducibility team also not finding doxorubicin to have a statistically significant effect compared to vehicle-treated mice. Again, Welch’s t-test on this comparison yielded p=0.0325, but with their Bonferroni correction, this would no longer be deemed a significant association. Doxorubicin has been used as a known drug against A549 cells for nearly 30 years (Nishimura et al., 1989), and our use of this drug was only as a positive-control agent.
Figure 3 was also very encouraging, where we do see a significant effect from the original and reproduced studies, and the meta-analysis together.
In the end, we want to applaud replication efforts like this. We do believe it is important for the public to have trust in scientists, and belief in the veracity of our published findings. However, we recommend that future replication teams choose papers in a more impactful manner. While it is an honor for our paper to be selected, we were never going to run a clinical trial of cimetidine in lung adenocarcinoma, and we cannot see any such protocol being listed in clinicaltrials.gov. Our publication was aimed more at demonstrating the value of open data, through the validation of a specific computational prediction. We suggest that future replication studies of pre-clinical findings should really be tailored towards those most likely to actually be heading into clinical trials.
References
Sirota M, Dudley JT, Kim J, Chiang AP, Morgan AA, Sweet-Cordero A, Sage J, Butte AJ. Discovery and preclinical validation of drug indications using compendia of public gene expression data. Sci Transl Med. 2011 Aug 17;3(96):96ra77. doi: 10.1126/scitranslmed.3001318.
Kandela I, Zervantonakis I; Reproducibility Project: Cancer Biology. Registered report: Discovery and preclinical validation of drug indications using compendia of public gene expression data. Elife. 2015 May 5;4:e06847. doi: 10.7554/eLife.06847.
Nishimura M, Nakada H, Kawamura I, Mizota T, Shimomura K, Nakahara K, Goto T, Yamaguchi I, Okuhara M. A new antitumor antibiotic, FR900840. III. Antitumor activity against experimental tumors. J Antibiot (Tokyo). 1989 Apr;42(4):553-7.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 20, Robert Tibshirani commented:
The Replication Study by Kandela et al of the Sirota et al paper “Discovery and Preclinical Validation of Drug Indications Using Compendia of Public Gene Expression Data“ reports a non-significant p-value of 0.105 for the test of the main finding for cimetidine in lung adenocarcinoma. They obtained this from a Bonferroni adjustment of the raw p-value of 0.035, multiplying this by three because the authors had also tested a negative and a positive control.
This seems to me to be an inappropriate use of a multiple comparison adjustment. These adjustments are designed to protect the analyst against errors in making false discoveries. However if Sirota et al had found that the negative control was significant, they would not have reported it as a "discovery". Instead, it would have pointed to a problem with the experiment. Similarly, the significant result in the positive control was not considered a "discovery" but rather was a check of the experiment's quality.
Now it is true that Kandela et al specified in their protocol that they would use a (conservative) Bonferroni adjustment in their analysis, and used this fact to choose a sample size of 28. This yielded an estimated power of 80%. If they had chosen to use the unadjusted test, the estimated power for n=28 would have been a little higher—about 90%. I think that the unadjusted test is appropriate here.
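The arithmetic at issue in this exchange is small enough to check directly. The following is an illustrative sketch only (not the replication team's actual analysis code), using the raw Welch t-test p-values quoted in the comments above and the three-comparison Bonferroni factor the protocol pre-specified:

```python
# Bonferroni adjustment: multiply each raw p-value by the number of
# planned comparisons (three here: cimetidine, the doxorubicin positive
# control, and the negative control), capping the result at 1.0.
def bonferroni(p, m):
    return min(p * m, 1.0)

M = 3  # comparisons pre-specified in the registered report

# Raw Welch t-test p-values reported in the discussion above
raw = {
    "cimetidine vs vehicle (day 11)": 0.035,
    "doxorubicin vs vehicle": 0.0325,
}

for name, p in raw.items():
    print(f"{name}: raw p = {p}, adjusted p = {bonferroni(p, M):.4f}")
# cimetidine vs vehicle (day 11): raw p = 0.035, adjusted p = 0.1050
# doxorubicin vs vehicle: raw p = 0.0325, adjusted p = 0.0975
```

Both raw p-values fall below the conventional 0.05 threshold, and both adjusted values fall above it, which is exactly why the choice of whether to correct, debated above, determines the study's headline conclusion.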
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
elifesciences.org
-
On 2017 Jan 23, Andy Collings commented:
(Original comment found at: https://elifesciences.org/content/6/e21634#disqus_thread)
Response to: “Replication Study: Melanoma genome sequencing reveals frequent PREX2 mutations"
Lynda Chin and Levi Garraway
We applaud the Reproducibility Project and support its goal to reproduce published scientific results. We also thank Horrigan et al for a carefully executed study, for which we provided reagents and extensive consultation throughout. Their work illustrates the inherent challenges in attempting to reproduce scientific results.
We summarize below the results of Horrigan et al., first in lay terms and then in more scientific detail.
Description for Lay Readers
Briefly, our 2012 paper reported that human melanoma patients often carry mutations in the PREX2 gene. To study the effect of mutations in PREX2, we made modified versions of a commonly used immortalized human melanocyte cell line (called p’mels) and injected them into mice. When mice were injected with cells carrying an irrelevant gene or a normal copy of PREX2 (“control mice”), tumors started to form in about 9 weeks. When mice were injected with cells carrying the mutated PREX2 genes (“experimental mice”), tumors began to form after around 4-5 weeks—indicating that mutated PREX2 accelerated tumor formation.
When Horrigan et al. tried to reproduce our experiment, they found that tumors began to form in their control mice after about 1 week—not 9-10 weeks. Because their control developed tumors so rapidly, Horrigan et al. recognized that they could not meaningfully test our finding that mutant PREX2 accelerated the tumor formation.
Why did the human melanocyte cells grow tumors in the control mice so much faster in Horrigan et al.’s experiment? The likely explanation is that human cells engineered in this way are known to undergo dramatic changes when they are grown for extended periods in culture. Therefore, Horrigan et al.’s study underscores how important it is to have appropriate control cells, before attempting to reproduce experimental findings.
Finally, we emphasize that Horrigan et al.’s results do not call into question our results about PREX2 because their experiment was not informative. Moreover, we have recently validated the findings about PREX2 in an independent way—by creating genetically engineered mice that carry mutated PREX2 in their own genomes. These PREX2 mutant mice showed accelerated tumor growth compared to controls.
Description for Scientific Readers
The authors repeated a xenograft experiment (Figure 3b) in our 2012 report. In our experiment, we overexpressed GFP (negative control), wild type PREX2 (normal control) and two PREX2 mutants (G844D and Q1430*) (experimental arm) in a TERT-immortalized human melanocyte line engineered with RB and p53 inactivation (p’mel). To further sensitize these melanocytes for tumorigenicity, they were also engineered to overexpress oncogenic NRASG12D. We showed that the mutant PREX2 expression in p’mel cells significantly accelerated tumor formation in vivo. However, Horrigan et al found that the control and PREX WT or mutant expressing p’mels all behaved identically, forming tumors rapidly in vivo (within 1 week of implantation). This finding differed from our study, in which the control cells (both GFP and PREX2) did not form tumors until >10 weeks after implantation.
The fact that Horrigan et al observed rapid tumor formation in all settings means that their findings are uninformative with regard to the reproducibility of a central conclusion of our 2012 report, namely that mutant PREX2 can accelerate tumor formation in vivo. Testing this hypothesis requires a control arm in which tumor formation is sufficiently latent so that a discernible effect on the rate of tumorigenesis by the mutants can be observed. In the Horrigan et al study, tumorigenesis in the control arms was so rapid that it essentially became impossible to detect any additional effect of mutant PREX2.
Why were the controls so much more tumorigenic in the hands of Horrigan et al.? We note that although the investigators were provided with clones from the same p’mels used in the 2012 study, by the time Horrigan et al. received the cells, more than two years had passed since the original p’mel cells were engineered. This is a crucial point because, like many other cell lines, these “primed” human primary melanocytes are known to readily undergo adaptation during extended cultivation in vitro. In particular, these p’mels can spontaneously acquire a more transformed phenotype over time (we have seen this happen on multiple occasions). Thus, although a clone from the same engineered cells was provided to Horrigan et al., the fact that that clone of p’mel cells exhibited a very different phenotype suggests that the additional passages, a major geographic relocation, and subsequent freeze-thaw manipulations had rendered them unsuitable as an experimental frame of reference.
When we notice such “drifts” in engineered cell culture models, we often have to re-derive the relevant lines starting from even earlier stages in order to have controls with suitable tumorigenic latency. For example, in this case, we would have re-introduced NRASG12D into a clone of non-transformed melanocytes harboring TERT immortalization and RB/P53 inactivation to re-engineer a p’mel cell line. Had Horrigan et al used less tumorigenic controls, they would have a much better chance to reproduce an accelerating effect of mutant PREX2.
To validate our initial observations regarding the oncogenic role of mutant PREX2, we have since taken an orthogonal approach: we created a genetically engineered mouse (GEM) model targeting both a truncating PREX2 mutation (E824*) and oncogenic NRASG12D expression to melanocytes under a tet-regulated promoter. In this GEM model, we observed significantly increased penetrance and decreased latency of melanoma formation (Lissanu Deribe et al PNAS, 2016, E1296-305; see Figure 3b in Lissanu Deribe et al PNAS, 2016), thus confirming the xenograft findings of our 2012 report showing that mutant PREX2 is oncogenic.
In summary, we support rigorous assessments of reproducibility such as this. Equally, we consider it crucial to recognize and account for salient underlying properties of the model systems and experimental controls in order to minimize the risk of misleading conclusions regarding the reproducibility of any given experiment. Indeed, Horrigan et al. nicely articulated the importance of these considerations when discussing their results.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
elifesciences.org
-
On 2017 Jan 23, Andy Collings commented:
(Original comment in full found at: https://elifesciences.org/content/6/e18173#disqus_thread)
Response to: “Replication Study: The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors”
Irving Weissman for the authors of "The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors"
Our original paper by Willingham and Volkmer et al. in PNAS reported the results of experiments testing the hypothesis that CD47 might be expressed on, and demonstrate dominant ‘don’t eat me’ functions in, human solid cancers, as in our previously described studies with human leukemias and lymphomas and mouse leukemias. The study consisted primarily of experiments on primary patient solid cancers with minimal passage as xenografts in immune-deficient mice, tested in vitro and as xenografts in mice lacking adaptive immune system T, B, and NK cells but possessing all other bone marrow-derived innate immune system cells, such as macrophages. We included one experiment on a long-passaged mouse breast cancer line transplanted into syngeneic immunocompetent FVB mice. The Replication Study by Horrigan et al. in eLife reports the results of efforts to repeat the experiments on the passaged mouse breast cancer line, but none of the experiments on human primary and minimally passaged cancers of several different solid tumor types, either in vitro or as xenografts in mice in which the CD47 ‘don’t eat me’ signal was blocked with monoclonal antibodies.
When we were requested to participate in a replication study of our paper entitled “The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors” we agreed, but were worried that we had spent years developing the infrastructure to obtain human cancers from de-identified patients, found ways to transplant them into immune deficient mice, and limited our studies to human cancers within 1 to less than 10 transplant passages in these mice. Our major objective in the study was to test whether the CD47 molecule was present on these human solid tumors, if it acted as a ‘don’t eat me’ signal for mouse and human macrophages, and whether these tumors in immune deficient mice were susceptible to blocking anti-CD47 antibodies. This was a scientific paper to answer these questions, and not a preclinical study preparatory to human clinical trials.
To our surprise, our study verified on all tested human cancers that they express CD47, perhaps the first cancer gene commonly expressed on all cancers; and it is a molecule which provides a ‘don’t eat me’ function; and we showed that blocking that function led to tumor attack by macrophages.
Unfortunately, the independent group who accepted the task of replicating our studies did not do a single study with human cancers, nor did they study the effect of our blocking antibodies to the CD47 tumor cell surface molecule on the phagocytic removal of human cancers.
Horrigan et al did begin, with our help, to replicate the one study we did as a pilot to see if anti-CD47 antibodies that also bind to mouse CD47 would have an effect on a long-transplanted mouse breast cancer line. We and others have found that the exact way you transplant these mouse cancers is critical to achieve engraftment of the cancers in appropriate immune competent mice. As we learned from Dr Sean Morrison, UT Southwestern Childrens Hospital, many cancers won’t grow in mice unless a special type of matrigel is used to support the cells in vitro and in transplant. Without it, transplantation may be sporadic and/or absent. The replication team found their own matrigel, and for reasons unknown to us, could not get reproducible transplantation in their testing. This was picked up in reviews of the paper by eLife referees, including a request for repeating the studies a number of ways, but that did not happen.
There is therefore no study that addresses the title of the paper and its major conclusions: human cancers express CD47 and our studies show that it is a target for therapeutic studies.
Several independent papers since ours have replicated not only our findings, but have extended them to many other human cancers (see below). So replication of our major points have occurred with independent groups.
But we agree that everything we publish, major or minor, central or peripheral, must be replicable. Even in our human tumor studies there were a few outlier cancers that did not diminish growth in the presence of blocking anti-CD47 antibodies.
The beginning of replication is to show experience and competence in the transplantability of the cancer. There are many possible reasons why the basic transplantation of MT1A2 breast cancer cells into syngeneic FVB mice was not replicated in the experiments carried out by Horrigan et al., who got only a fraction of the mice transplanted. These could include the particular matrigel used, or a problem with using long-passaged cell lines (which may be heterogeneous and altered by the passaging in vitro and in vivo) rather than primary or recent mouse or human cancers. It could be inherent in how Horrigan et al. did the experiments. Oddly, the control antibodies did diminish the growth of the MT1A2 cancers in their single experiment. Among the reasons concerning the heterogeneity of long-passaged cell lines, we might cite that we have discovered two more ‘don’t eat me’ molecules on cancers that interact with other receptors on macrophages. Although those papers are submitted but not yet published, we cannot specify the details lest we endanger their publishability. (Readers who send us a request will receive copies of the papers when published.) Laboratories that study tumors at different transplant passages have often found that variant subsets of cells within the cancer can rapidly outgrow the major population of cells transplanted, and it is common for successive transplants to grow more aggressively in the same strain of mice, even though the name of the tumor is retained. For that reason, it is clear that studies on long-passaged tumors may be studying some properties of the passaged cell rather than the original cancer in the individual. There are other possibilities. When the replication study lab interacted with us early on, we offered to do the experiments side by side with them to facilitate technology transfer. Horrigan et al. declined. The offer still stands.
Before this paper was published, we published other papers demonstrating that CD47 was expressed on all samples of human AML and human NHL tested, usually at a higher level than on normal human cells of the same stage or type. Further, we showed, both by genetic manipulation of the expression of CD47 on human cells and by treatment of those cells with blocking antibodies to CD47 that interrupt its interaction with the macrophage receptor Sirpα, that mouse or human macrophages phagocytose and kill the tumor target cells. We used anti-human antibodies that did not trigger phagocytosis by ‘opsonization’, as the isotype of the antibodies used for blocking was not the isotype that is highly efficient at triggering complement activation or ADCC (activation, via the Fc receptor, of NK-lineage killer cells), and we demonstrated that on human lymphomas. [...]
(The comment in full can be found at: https://elifesciences.org/content/6/e18173#disqus_thread)
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Aug 29, Laura M Cox commented:
Thank you for catching this typo. The journal will issue a corrigendum soon.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Aug 24, Seán Turner commented:
In the title, "Faecalibacterium (sic) rodentium" should be "Faecalibaculum rodentium."
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 22, Tom Yates commented:
Shah and colleagues are to be congratulated for an important study (Shah NS, 2017), emphasising the major role of transmitted resistance in the epidemiology of extensively drug-resistant tuberculosis (XDR-TB). However, methodological issues will have impacted the results.
As the authors acknowledge, in such studies, missing data bias estimates, with linked isolates wrongly designated unique. Their decision to look at a convenience sample of 51% of cases from throughout KwaZulu-Natal rather than attempt complete enrolment in a smaller area will have accentuated this bias.
A growing body of research suggests transmission between members of the same household only explains a small proportion of all Mycobacterium tuberculosis (MTB) transmission in Sub Saharan Africa (Verver S, 2004, Andrews JR, 2014, Middelkoop K, 2015, Glynn JR, 2015). In the present study, recall bias plus disease prompting contacts to test for XDR-TB will likely have resulted in household, workplace and hospital contacts being captured more consistently than more casual community contacts.
Determining the proportion of total MTB transmission occurring in specific locations would allow disease control programmes to be better targeted. I agree with Shah et al that this should be a research priority.
Dr Tom A. Yates, Institute for Global Health, University College London, t.yates@ucl.ac.uk
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jan 24, Pushkar Malakar commented:
This study has the potential to open a new field to explore, such as viral communication or signaling in viruses. Somewhere down the line, there is the possibility that we will learn how communication or signaling systems evolved between two organisms. The study also has therapeutic potential, as viruses are responsible for many fatal human diseases such as cancer and AIDS. If we understand the communication or signaling system between viruses, then therapeutics can be developed to block it, which will help prevent viral diseases. A better understanding of viral communication or signaling systems might also be used in biotech industries to produce cheaper and more useful bioproducts, as viruses replicate very fast. Furthermore, viruses might use different signaling molecules to communicate with each other for different activities. Signaling systems might also help in the classification of viruses. In any case, this study is just the beginning, and many more mysteries about viruses may be solved in the near future.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 06, Andrea Giaccari commented:
PubMed states this article is Free Full Text, but then links to http://www.bmj.com/content/356/bmj.i6505 asking for "Article access for 1 day: Purchase this article for £23 $37 €30 + VAT"
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Aug 05, Jon-Patrick Allem commented:
There are at least five problems with this paper. First, the authors simply assume that the pro-e-cigarette tweets are wrong and need their corrective input. What if users are right to be positive? The authors have not demonstrated any material risk from vapour aerosol. To the extent that there is evidence of exposure, the levels are so low as to be very unlikely to be a health concern. The presence of a hazardous agent does not in itself imply a risk to health; there has to be sufficient exposure to be toxicologically relevant.
This critique is misguided. The goal of this paper was to characterize public perception of e-cigarette aerosol by using a novel data source (tweets) and not to demonstrate any material risk from e-cigarette aerosol.
Second, they have also not considered what harmful effects their potentially misleading 'health education messages' may have. For example, by exaggerating a negligible risk they may be discouraging people from e-cigarette use, and potentially causing relapse to smoking and reducing the incentive to switch - thus doing more harm than had they not intervened. We already know the vast majority of smokers think e-cigarettes are much more dangerous than the toxicological profile of the aerosol suggests - see National Cancer Institute HINTS data. The authors' ideas would aggravate these already highly damaging misperceptions of risk.
This critique is misguided. This study did not design educational messages. It described people’s perceptions about e-cigarette aerosol.
Third, as so often happens with tobacco control research, the authors make a policy proposal for which their paper comes nowhere close to providing an adequate justification: "Public health and regulatory agencies could use social media and traditional media to disseminate the message that e-cigarette aerosol contains potentially harmful chemicals and could be perceived as offensive." They have not even studied the effects of the messages they are recommending on the target audience or tested such messages through social media. If they did, they would discover that users are not passive or compliant recipients of health messages, especially if they suspect they are wrong or ill-intentioned. Social media creates two-way conversations in which often very well-informed users will respond persuasively to what they find to be poorly informed or judgemental health messages. Until the authors have tested a campaign of the type they have in mind, they have no basis for recommending that agencies spend public money in this way.
This critique is misguided. There was no policy proposal made in the passage highlighted here. The suggestion that social media platforms can be used as a communication channel is not a policy. It is a communication strategy. The idea that social media can be used to obtain information and later communicate messages is completely in line with the work presented in this paper. The notion that every paper answers every research question pertaining to a topic is an unreasonable expectation.
Fourth, the authors suggest that users should be warned by public health agencies that "e-cigarette aerosol ... could be perceived as offensive". If there were warnings from public health and regulatory agencies about everything that could be perceived as offensive by someone, then we would be inundated with warnings. This is not a reliable basis or priority for public health messaging. Given the absence of any demonstrable material risk from e-cigarette aerosol, the issue is one of etiquette and nuisance. This does not require government intervention of any sort. Vaping policy in any public or private place should be a matter for the owners or managers, who may not find it offensive nor wish to offend their clientele. It is not a matter for legislators, regulators or health agencies.
This critique is based on one's own opinion about the role of government and could be debated with no clear stopping point.
Fifth (and with thanks to Will Moy's tweet), the work is pointless and wasteful. Who cares what people are saying on Twitter about e-cigarettes and secondhand aerosol exposure? Why is this even a subject worthy of study and what difference could it make to any outcomes that are important for health or any other policy? What is the rationale for spending research funds on this form of vaguely creepy social media surveillance? Updated 21-Jan-17 with fifth point.
Big social media data (Twitter, Instagram, Google Web Search) can be used to fill certain knowledge gaps quickly. While one study using one data source is by no means definitive, one study based on timely data can provide an important starting point to address an issue of great import to public health. This paper describes why understanding public sentiment toward e-cigarette aerosol is relevant and utilizes a data source that allowed people to organically report on their sentiment toward e-cigarette aerosol unprimed by a researcher, without instrument bias, and at low cost. Also, policy development and communication campaigns are two distinct areas of research. The goal of this study was to inform the latter.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 25, Erica Melief commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 21, Clive Bates commented:
There are at least five problems with this paper:
First, the authors simply assume that the pro-e-cigarette tweets are wrong and need their corrective input. What if users are right to be positive? The authors have not demonstrated any material risk from vapour aerosol. To the extent that there is evidence of exposure, the levels are so low as to be very unlikely to be a health concern. The presence of a hazardous agent does not in itself imply a risk to health; there has to be sufficient exposure to be toxicologically relevant.
Second, they have also not considered what harmful effects their potentially misleading 'health education messages' may have. For example, by exaggerating a negligible risk they may be discouraging people from e-cigarette use, and potentially causing relapse to smoking and reducing the incentive to switch - thus doing more harm than had they not intervened. We already know the vast majority of smokers think e-cigarettes are much more dangerous than the toxicological profile of the aerosol suggests - see National Cancer Institute HINTS data. The authors' ideas would aggravate these already highly damaging misperceptions of risk.
Third, as so often happens with tobacco control research, the authors make a policy proposal for which their paper comes nowhere close to providing an adequate justification.
Public health and regulatory agencies could use social media and traditional media to disseminate the message that e-cigarette aerosol contains potentially harmful chemicals and could be perceived as offensive.
They have not even studied the effects of the messages they are recommending on the target audience or tested such messages through social media. If they did, they would discover that users are not passive or compliant recipients of health messages, especially if they suspect they are wrong or ill-intentioned. Social media creates two-way conversations in which often very well-informed users will respond persuasively to what they find to be poorly informed or judgemental health messages. Until the authors have tested a campaign of the type they have in mind, they have no basis for recommending that agencies spend public money in this way.
Fourth, the authors suggest that users should be warned by public health agencies that "e-cigarette aerosol ... could be perceived as offensive". If there were warnings from public health and regulatory agencies about everything that could be perceived as offensive by someone, then we would be inundated with warnings. This is not a reliable basis or priority for public health messaging. Given the absence of any demonstrable material risk from e-cigarette aerosol, the issue is one of etiquette and nuisance. This does not require government intervention of any sort. Vaping policy in any public or private place should be a matter for the owners or managers, who may not find it offensive nor wish to offend their clientele. It is not a matter for legislators, regulators or health agencies.
Fifth (and with thanks to Will Moy's tweet), the work is pointless and wasteful. Who cares what people are saying on Twitter about e-cigarettes and secondhand aerosol exposure? Why is this even a subject worthy of study and what difference could it make to any outcomes that are important for health or any other policy? What is the rationale for spending research funds on this form of vaguely creepy social media surveillance?
Updated 21-Jan-17 with fifth point.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jan 21, Clive Bates commented:
How did the author manage to publish a paper with the title "E-cigarettes: Are they as safe as the public thinks?", without citing any data on what the public actually does think? There is data in the National Cancer Institute's HINTS survey 2015. This is what it says:
Compared to smoking cigarettes, would you say that electronic cigarettes are…
- 5.3% say much less harmful
- 20.6% say less harmful
- 32.8% say just as harmful
- 2.7% say more harmful
- 2.0% say much more harmful
- 1.2% have never heard of e-cigarettes
- 33.9% don’t know enough about these products
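The categories above can be tallied directly (a trivial check; the figures are the HINTS 2015 percentages exactly as quoted in this comment):

```python
# Tally of the quoted HINTS 2015 responses to "Compared to smoking
# cigarettes, would you say that electronic cigarettes are..."
responses = {
    "much less harmful": 5.3,
    "less harmful": 20.6,
    "just as harmful": 32.8,
    "more harmful": 2.7,
    "much more harmful": 2.0,
    "never heard of e-cigarettes": 1.2,
    "don't know enough": 33.9,
}

# Respondents who think e-cigarettes are as harmful as, or more harmful
# than, smoking - the misperception discussed later in this thread.
as_or_more_harmful = sum(
    responses[k] for k in ("just as harmful", "more harmful", "much more harmful")
)
print(f"As harmful or more harmful: {as_or_more_harmful:.1f}%")  # 37.5%
```

Summing the "just as", "more", and "much more" harmful categories is the source of the roughly 37% figure cited later in this thread.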
Which brings me to the main issue with the paper. The author claims that there is insufficient knowledge to determine if these products are safer than cigarettes. This is an extraordinary and dangerous claim given what is known about e-cigarettes and cigarettes. It is known with certainty that there are no products of combustion of organic material (i.e. tobacco leaf) in e-cigarette vapour - this is a function of the physical and chemical processes involved. We also know that products of combustion cause almost all of the harm associated with smoking. There is also extensive measurement of harmful and potentially harmful constituents of cigarette smoke and e-cigarette aerosol showing many are not detectable or are present at levels two orders of magnitude lower in the vapour aerosol (e.g. see Farsalinos KE, 2014, Burstyn I, 2014). So the emissions are dramatically less toxic and exposures much lower.
The author provides a familiar non-sequitur: "There are no current studies that prove that e-cigarettes are safe". There never will be. Firstly because it is impossible to prove something to be completely safe, and almost nothing is. Secondly, no serious commentators claim they are completely safe, just very much safer than smoking. Hence the term 'harm reduction' to describe the benefits of switching to these products.
This view commands support in the expert medical profession. The Royal College of Physicians (London) assessed the toxicology evidence in its 2016 report Nicotine without smoke: tobacco harm reduction and concluded:
Although it is not possible to precisely quantify the long-term health risks associated with e-cigarettes, the available data suggest that they are unlikely to exceed 5% of those associated with smoked tobacco products, and may well be substantially lower than this figure. (Section 5.5 page 87)
This is a carefully measured statement that aims to provide useful information to both users of the products and health and medical professionals while reflecting residual uncertainty. It contrasts with the author's information leaflet for patients, which even suggests there is no basis for believing e-cigarettes to be safer than smoking:
If you are smoking and not planning to quit, we don't know if e-cigarettes are safer. Talk to your health care provider.
But we do know beyond any reasonable doubt that e-cigarettes are very much safer - the debate is whether they are 90% safer or 99.9% safer than smoking. Regrettably, only 5.3% of American adults correctly believe that e-cigarettes are very much less harmful than smoking, while 37% incorrectly think they are as harmful or more harmful (see above). The danger with these misperceptions of risk is that they affect behaviour, causing people to continue to smoke when they might otherwise switch to much safer vaping. The danger with a paper like this and its patient-facing leaflet is that it nurtures these harmful risk misperceptions and becomes, therefore, a vector for harm.
To return to the author's title question: E-Cigarettes: Are They as Safe as the Public Thinks?. The answer is: "No, they are very much safer than the public thinks".
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jan 22, Eric Fauman commented:
I know nothing about cow genetics, but I have done some work on the genetics of metabolites in humans, so I was interested to see how the authors derived biological insights from this genetic study. In particular, I was intrigued by the suggestion in the abstract that they found evidence that genes involved in the synthesis of “milk components” are important for lactation persistence.
Unfortunately, the more I studied the paper the more problems I found that call this claim into question.
First off, the Q-Q plot is currently unavailable, but the text mentions there’s only a “slight deviation in the upper right tail”, which could mean there are no true significant signals.
To account for multiple testing, the authors decided to use a genome-wide association p-value cutoff of 0.95/44100 = 2.15e-5 instead of a more defensible 0.05/44100 = 1.1e-6.
Since their initial p-value cutoff yielded a relatively small number of significant SNPs, the authors used a much more lenient p-value cutoff of 5e-4 which presumably is well within the linear portion of the Q-Q plot.
The biggest problem with the enrichment analysis, however, is that they’ve neglected to account for genes drawn from a common locus. Often, paralogs of similar function are proximal in the genome. But typically we assume that a single SNP is affecting the function of only a single gene at a locus. So, for example, a SNP near the APOA4/APOA1/APOC3/APOA5 locus can tag all 4 genes, but it’s unfair to consider that 4 independent indications that “phospholipid efflux”, “reverse cholesterol transport”, “triglyceride homeostasis” and other pathways are “enriched” in this GWAS.
This issue, of overcounting pathways due to gene duplication, affects all their top findings, presumably rendering them non-significant. Besides lipid pathways, this issue also pertains to the “lactation” GO term, which was selected based on the genes GC, HK2, CSN2 and CSN3. GC, CSN2 and CSN3 are all co-located on Chromosome 6.
A perplexing claim in the paper is for the enrichment of the term “lipid metabolic process” (GO:0006629). According to the Ensembl Biomart, 912 Bos taurus genes fall into this category, or about 4% of the bovine protein coding genes (24616 according to Ensembl). So out of their set of 536 genes (flanking SNPs with P < 5e-4) we’d expect about 20 “lipid metabolic process” genes. And yet, this paper reports only 7. This might be significant, but for depletion, not enrichment.
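The arithmetic behind these objections is easy to check. Below is a stdlib-only Python sketch, using only the counts quoted in this comment (not the study's underlying data), that reproduces the two multiple-testing cutoffs and the expected number of "lipid metabolic process" genes, plus a one-sided exact binomial probability of observing 7 or fewer such genes - a small value indicates depletion rather than enrichment:

```python
# Back-of-envelope check of the multiple-testing and enrichment figures
# quoted above. Counts are taken from the comment, not re-derived.
from math import comb

n_snps = 44100
bonferroni = 0.05 / n_snps   # conventional genome-wide cutoff (~1.1e-6)
lenient = 0.95 / n_snps      # cutoff actually used in the paper (~2.15e-5)

# Expected "lipid metabolic process" genes among the 536 flanking genes,
# assuming genes were drawn at random from the annotated bovine genome.
n_genes, n_lipid, n_hits = 24616, 912, 536
p_lipid = n_lipid / n_genes
expected = n_hits * p_lipid  # roughly 20 genes expected by chance

# One-sided exact binomial tail: probability of seeing 7 or fewer lipid
# genes if the 536 genes were a random draw.
observed = 7
p_depletion = sum(
    comb(n_hits, k) * p_lipid**k * (1 - p_lipid)**(n_hits - k)
    for k in range(observed + 1)
)
print(f"Bonferroni cutoff:           {bonferroni:.2e}")
print(f"Lenient cutoff:              {lenient:.2e}")
print(f"Expected lipid genes:        {expected:.1f} (observed: {observed})")
print(f"One-sided depletion p-value: {p_depletion:.4f}")
```

The binomial tail is an approximation of the exact hypergeometric calculation (it ignores sampling without replacement), but with 536 draws from ~24,600 genes the difference is negligible, and either way the observed count of 7 sits well below chance expectation.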
Sample size is of course a huge issue in GWAS. While 3,800 cows is a large number, it appears this trait may require a substantially larger number of animals before it can yield biologically meaningful results.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jan 16, Stuart RAY commented:
This is a scientifically interesting report, but the use of "mutation rate" in the title, abstract, and in some portions of the text is unfortunate, because the process being observed and measured in this report is the evolutionary rate of substitution (as noted in the authors' Tables 1 and 2). The evolutionary rate of substitution results from a variety of processes: the rate of mutation (determined in particular by the polymerase), positive and negative selection, and stochastic events at multiple levels (from individual cell to population). Thus, the term "mutation rate" is confusing and potentially misleading. With every RNA genome replication, there is a nonzero rate of mutation; what we estimate when we sequence virus obtained from infected individuals, sampled over a period of years, is the evolutionary rate of substitution.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jun 24, Shafic Sraj commented:
Cubital tunnel score in the presence of carpal tunnel syndrome
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jan 21, Thomas Heston commented:
This appears to be a classic example of the Hawthorne Effect, i.e. what gets examined tends to improve (http://www.economist.com/node/12510632). The conclusion of this research seems to be that focusing on a problem by providing feedback tends to improve that problem, compared to doing nothing.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jan 31, Stuart RAY commented:
This is a very interesting and well-done study of a model system. That said, the unqualified use of the terms "antiviral effects of IFN-lambda" and "norovirus" in the title of this article might be misleading without context. Readers should be alert: (a) the noroviruses are very diverse biologically and phylogenetically; (b) murine norovirus is distinct from human noroviruses in apparent tropism, receptor binding (sialic acid vs blood group antigens), and pH dependence of viral entry (Kim Y. Green, Fields Virology 2013, chapter 20); (c) there are significant biological differences between human and mouse responses to lambda interferon (Hermant P, 2014); and (d) B6 mice lack functional MX1 (Pillai PS, 2016, Moritoh K, 2009). Given differences in virus and host, whether the findings presented by Baldridge et al. can be extrapolated to other systems (e.g. natural human norovirus infection) is highly uncertain; therefore, I suggest that the title should end with "in mice".
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jan 14, Sin Hang Lee commented:
In a research paper titled “Specific microbiota direct the differentiation of IL-17-producing T-helper cells in the mucosa of the small intestine” published by Ivanov et al. in Cell Host Microbe. 2008 Oct 16;4(4):337-49, antibiotic treatment of the specific microbiota has been shown to inhibit TH17 cell differentiation. Perhaps, Strle and colleagues may consider developing microbiological tests for accurate diagnosis of the early infection of Lyme borreliosis in patients with or without skin lesions for timely appropriate antibiotic treatment to prevent excessive TH17 responses and the subsequent autoimmune disorders.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 05, Kevin Hall commented:
The theoretical basis of the carbohydrate-insulin model (CIM) relies on generally accepted physiology about endocrine regulation of adipose tissue - data that were all collected on short time scales. Ludwig appears to suggest that this long debate has been about a "straw man" short-term version of the CIM. This apparently explains why the purported metabolic advantages have been elusive when assessed by inpatient controlled feeding studies that were simply too short to unveil the metabolic advantages of the CIM. Indeed, Ludwig believes he has scored a win in this debate by acknowledging that these metabolic advantages of low carbohydrate diets on energy expenditure and body fat predicted by the CIM must operate on longer time scales, conveniently where no inpatient data have been generated either supporting or negating those predictions. This was accurately described in my review as an ad hoc modification of the CIM - a possibility currently unsupported by data but obviously supported by sincere belief.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 05, DAVID LUDWIG commented:
With Hall’s comment of 4 Feb 2017, this long debate nears resolution. He acknowledges it’s “possible” that the very short metabolic studies do not reflect the long-term effects of macronutrients on body weight. We disagree on how likely that possibility is, and now must await further research to resolve the scientific uncertainties.
Finally, on an issue of academic interest only, Hall creates a straw man in claiming to have “falsified” the Carbohydrate-Insulin Model (CIM). Versions of CIM were originally proposed more than a century ago, as detailed by Taubes G, 2013, before short term studies of substrate oxidation would have been possible. Furthermore, in the second paragraph of his review, Hall cites an article I coauthored Ludwig DS, 2014 and three by others Lustig RH, 2006, Taubes G, 2013, Wells JC, 2011 as recent iterations of CIM. Each of these articles focuses on long-term effects, and none asserts that 1 week should be adequate to prove or falsify CIM. In view of the failure of conventional approaches to address the massive public health challenge of obesity, let’s now refocus our energies into the design and execution of more definitive research.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 05, Kevin Hall commented:
Ludwig suggests that demonstration of any metabolic adaptations occurring on a time scale of > 1 week after introduction of an isocaloric low carbohydrate diet somehow invalidates all of the inpatient controlled feeding studies with results that violate carbohydrate-insulin model (CIM) predictions. This presents a false dilemma and is a red herring.
There are indeed metabolic adaptations that take place on longer time scales, but many of these changes actually support the conclusion that the purported metabolic advantages for body fat loss predicted by the CIM are inconsistent with the data. For example, as evidence for a prolonged period of fat adaptation, Ludwig notes modest additional increases in blood and urine ketones observed after 1 week of either starvation Owen OE, 1983 or consuming a hypocaloric ketogenic diet Yang MU, 1976. The implication is that daily fat and ketone oxidation presumably increase along with their blood concentrations over extended time periods to eventually result in an acceleration of body fat loss with low carbohydrate high fat diets as predicted by the CIM. But since acceleration of fat loss during prolonged starvation would be counterproductive to survival, might there be data supporting a more physiological interpretation of the prolonged increase in blood and urine ketones?
Both adipose lipolysis Bortz WM, 1972 and hepatic ketone production Balasse EO, 1989 reach a maximum within 1 week as demonstrated by isotopic tracer data. Therefore, rising blood ketone concentrations after 1 week must be explained by a reduced rate of removal from the blood. Indeed, muscle ketone oxidation decreases after 1 week of starvation and, along with decreased overall energy expenditure, the reduction in ketone oxidation results in rising blood concentrations and increased urinary excretion (page 144-152 of Burstztein S, et al. ‘Energy Metabolism, Indirect Calorimetry, and Nutrition.’ Williams & Wilkins 1989). Therefore, rather than being indicative of progressive mobilization of body fat to increase oxidation and accelerate fat loss, rising concentrations of blood ketones and fatty acids occurring after 1 week arise from reductions in ketone and fat oxidation concomitant with decreased energy expenditure.
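The clearance argument here can be made explicit with a simple mass-balance sketch (an illustrative model, not taken from the cited papers; $P$ is the rate of ketone production and $k$ a first-order removal constant):

$$\frac{d[K]}{dt} = P - k\,[K], \qquad [K]_{ss} = \frac{P}{k}$$

If production $P$ has plateaued by the end of the first week, a continued rise in the steady-state blood concentration $[K]_{ss}$ can only arise from a falling $k$, i.e. reduced removal (oxidation plus excretion per unit concentration), not from further mobilization and oxidation of body fat.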
The deleterious effects of a 600 kcal/d low carbohydrate ketogenic diet on body protein and lean mass were demonstrated in Vazquez JA, 1992 and were found to last about 1 month. Since weight loss was not significantly different compared to an isocaloric higher carbohydrate diet, body fat loss was likely attenuated during the ketogenic diet and therefore in direct opposition to the CIM predictions. Subsequent normalization of nitrogen balance would tend to result in an equivalent rate of body fat loss between the isocaloric diets over longer time periods. In Hall KD, 2016, urinary nitrogen excretion increased for 11 days after introducing a 2700 kcal/d ketogenic diet and coincided with attenuated body fat loss measured during the first 2 weeks of the diet. The rate of body fat loss appeared to normalize in the final 2 weeks, but did not exceed the fat loss observed during the isocaloric high carbohydrate run-in diet. Mere normalization of body fat and lean tissue loss over long time periods cannot compensate for early deficiencies. Therefore, these data run against CIM predictions of augmented fat loss with lower carbohydrate diets.
While I believe that outpatient weight loss trials demonstrate that low carbohydrate diets often outperform low fat diets over the short-term, there are little body weight differences over the long-term Freedhoff Y, 2016. However, outpatient studies cannot ensure or adequately measure diet adherence and therefore it is unclear whether greater short-term weight losses with low carbohydrate diets were due to reduced diet calories or the purported “metabolic advantages” of increased energy expenditure and augmented fat loss predicted by the CIM. The inpatient controlled feeding studies demonstrate that the observed short-term energy expenditure and body fat changes often violate CIM predictions.
Ludwig conveniently suggests that all existing inpatient controlled feeding studies have been too short and that longer duration studies might produce results more favorable to the CIM. But even if this were true, the current data demonstrating repeated violations of CIM model predictions constitute experimental falsifications of the CIM requiring an ad hoc modification of the model such that the metabolic advantages only begin after a time lag lasting many weeks. This is possible, but unlikely.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 04, DAVID LUDWIG commented:
Boiling down his comment of 3 Feb 2017, Hall disputes that the metabolic process of adapting to a high-fat/low-carbohydrate diet confounds interpretation of his and other short term feeding studies. If we can provide evidence that this process could take ≥ 1 week, the last leg of his attack on the Carbohydrate-Insulin Model collapses. Well, a picture is worth a thousand words, and here are 4:
For convenience, these figures can be viewed at this link:
Owen OE, 1983 Figure 1. Ketones are, of course, the hallmark of adaptation to a low-carbohydrate ketogenic diet. Generally speaking, the most potent stimulus of ketosis is fasting, since the consumption of all gluconeogenic precursors (carbohydrate and protein) is zero. As this figure shows, the blood levels of each of the three ketone species (BOHB, AcAc and acetone) continues to rise for ≥3 weeks. Indeed, the prolonged nature of adaptation to complete fasting has been known since the classic starvation studies of Cahill GF Jr, 1971. It stands to reason that this process might take even longer on standard low-carbohydrate diets, which inevitably provide ≥ 20 g carbohydrate/d and substantial protein.
Yang MU, 1976 Figure 3A. Among men with obesity on an 800 kcal/d ketogenic diet (10 g/d carbohydrate, 50 g/d protein), urinary ketones continued to rise for 10 days through the end of the experiment, and by that point had achieved levels equivalent only to those on day 4 of complete fasting. Presumably, this process would be even slower with a non-calorie restricted ketogenic diet (because of inevitably higher carbohydrate and protein content).
Vazquez JA, 1992 Figure 5B. On a conventional high-carbohydrate diet, the brain is critically dependent on glucose. With acute restriction of dietary carbohydrate (by fasting or a ketogenic diet), the body obtains gluconeogenic precursors by breaking down muscle. However, with rising ketone concentrations, the brain becomes adapted, sparing glucose. In this way, the body shifts away from protein to fat metabolism, sparing lean tissue. This phenomenon is clearly depicted among women with obesity given a calorie-restricted ketogenic diet (10 g carbohydrate/d) vs a nonketogenic diet (76 g carbohydrate/d), both with 50 g protein/d. For 3 weeks, nitrogen balance was strongly negative on the ketogenic diet compared to the non-ketogenic diet, but this difference was completely abolished by week 4. What would subsequently happen? We simply can’t know from the short-term studies.
Hall KD, 2016 Figure 2B. Hall’s own study shows that the transient decrease in rate of fat loss upon initiation of the ketogenic diet accelerates after 2 weeks.
The existence of this prolonged adaptive process explains why metabolic advantages for low-fat diet are consistently seen in very short metabolic studies. But after 2 to 4 weeks, advantages for low-carbohydrate diets begin to emerge, as summarized in my comment of 3 Feb 2017, below.
Fat adaptation on low-carbohydrate diets has admittedly not been thoroughly studied, and its duration may differ among individuals and between experimental conditions. Nevertheless, there is strong reason to think that short feeding studies (i.e., < 3 to 4 weeks) have no relevance to the long-term effects of macronutrients on metabolism and body composition.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 04, Kevin Hall commented:
Is it really an “extreme argument” to conclude that important aspects of the carbohydrate-insulin model (CIM) have been falsified based on data from 20 highly controlled inpatient human feeding studies that failed to support key CIM model predictions? While previously ignoring this confirming body of data from other research groups, Ludwig now conveniently concludes that all of these studies were flawed in some way and are therefore irrelevant and incapable of testing any aspect of the carbohydrate-insulin model.
To Ludwig, more relevant for assessing the energy expenditure and body fat predictions of the CIM are rodent studies and outpatient human studies where diet adherence cannot be adequately controlled or assessed Winkler JT, 2005. One such study Ludwig uses to bolster the CIM Ebbeling CB, 2012 did not measure body fat during the test diets and showed no significant energy expenditure differences between diets with the same amount of protein but varying in carbohydrate vs. fat. Ludwig claims that the study supports the CIM because energy expenditure was observed to increase with a very low carbohydrate diet. But the concomitant 50% increase in protein vs. the comparator diets makes it impossible to definitively conclude that any observed effect was due to carbohydrate reduction alone. Ludwig’s arguments about the possibly minimal effects of dietary protein changes on energy expenditure cannot eliminate this important confound.
Ludwig argues that “adaptation to a higher-fat diet can take at least a week and perhaps considerably longer”. One of Ludwig’s citations in this regard describes the role of diet composition on fuel utilization and exercise performance Hawley JA, 2011. This review paper reported that adaptation to a high fat diet for <1 week was sufficient to alter fuel utilization, but 4-7 days of fat adaptation was required to maintain subsequent exercise performance. Interestingly, longer periods of fat adaptation during training (7 weeks) were concluded to limit exercise capacity and impair exercise performance. The other two studies Ludwig cited to support the necessity for long term fat adaptation fail to support the CIM. An inpatient controlled feeding study Vazquez JA, 1992 showed that a very low carbohydrate, high fat diet led to significantly greater loss of body protein and lean tissue mass despite no significant difference in weight loss compared to an isocaloric higher carbohydrate, lower fat diet. The second study was an outpatient feeding trial Velum VL, 2017 that failed to demonstrate a significant difference in body weight or fat loss despite prescribing diets substantially varying in carbohydrate vs. fat for 3 months.
I agree with Ludwig that it likely takes a long time to equilibrate to added dietary fat without simultaneously reducing carbohydrate because, unlike carbohydrate and protein, dietary fat does not directly promote its own oxidation and does not significantly increase daily energy expenditure Schutz Y, 1989 and Horton TJ, 1995. Unfortunately, these observations also run counter to CIM predictions because they imply that added dietary fat results in a particularly efficient means to accumulate body fat compared to added carbohydrate or protein Bray GA, 2012. If such an added fat diet is sustained, adipose tissue will continue to expand until lipolysis is increased to sufficiently elevate circulating fatty acids and thereby increase daily fat oxidation to reestablish balance with fat intake Flatt JP, 1988.
In contrast, when added fat is accompanied by an isocaloric reduction in carbohydrate, daily fat oxidation plateaus within the first week as indicated by the rapid and sustained drop in daily respiratory quotient in Hall KD, 2016 and Schrauwen P, 1997. Similarly, Hall KD, 2015 observed a decrease and plateau in daily respiratory quotient with the reduced carbohydrate diet, whereas the reduced fat diet resulted in no significant changes indicating that daily fat oxidation was unaffected. As further evidence that adaptations to carbohydrate restriction occur relatively quickly, adipose tissue lipolysis is known to reach a maximum within the first week of a prolonged fast Bortz WM, 1972 as does hepatic ketone production Balasse EO, 1989.
While there is no evidence that carbohydrate restricted diets lead to an acceleration of daily fat oxidation on time scales longer than 1 week, and there is no known physiological mechanism for such an effect, this possibility cannot be ruled out. Such speculative long term effects constitute an ad hoc modification of the carbohydrate-insulin model whereby repeated violations of model predictions on time scales of 1 month or less are somehow reversed.
As I have repeatedly acknowledged, prescribing lower carbohydrate diets in free-living subjects generally leads to greater loss of weight and body fat over the short-term when people are likely adhering most closely to the diet prescriptions. The CIM suggests that such diets offer a “metabolic advantage” that substantially increases energy expenditure and body fat loss even if diet calories are equal. However, inpatient controlled feeding studies do not support this contention as they have repeatedly failed to show significant differences in energy expenditure and body fat. Furthermore, such studies have occasionally measured significant differences in diametrically opposite directions than were predicted on the basis of carbohydrate intake and insulin secretion. These apparent falsifications of the CIM do not imply that dietary carbohydrates and insulin are unimportant for energy expenditure and body fat regulation. Rather, their role is more complicated than the CIM suggests and the model requires thoughtful modification.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 03, DAVID LUDWIG commented:
In his comment of January 31, 2017, Hall presses an extreme argument, that he successfully "falsified" major aspects of the Carbohydrate-Insulin Model (CIM) of obesity, and complains that opponents won't embrace their error. His argument boils down to 3 points:
First, Hall’s small 6-day study and his small, “observational,” “pilot” study are fundamentally correct. Regarding the 6-day study Hall KD, 2015, he continues to insist that results of a very short intervention have relevance to understanding the long-term effects of macronutrients on body composition, despite evidence that adaptation to a higher-fat diet can take at least a week and perhaps considerably longer Hawley JA, 2011 Vazquez JA, 1992 Veum VL, 2017. (We need look no further than his observational study Hall KD, 2016, to see in Figure 2B that the transient decrease in rate of fat loss upon initiation of the low-carbohydrate diet accelerates after 2 weeks.) Of note, the 36 g/d greater predicted body fat loss on his low-fat diet would, if persistent, translate into a massive advantage in adiposity after just one year. If anything, the meta-analyses of long-term clinical trials suggest the opposite Tobias DK, 2015 Mansoor N, 2016 Mancini JG, 2016 Sackner-Bernstein J, 2015 Bueno NB, 2013. Furthermore, Hall’s two studies are mutually inconsistent: The 6-day study implies a major increase in energy expenditure from fat oxidation on the low-fat diet, whereas the observational study shows an increase in energy expenditure after 2 weeks (by doubly-labeled water) on the low-carbohydrate diet. Other limitations of his observational study have been considered elsewhere.
Second, our randomized 3-arm cross-over study Ebbeling CB, 2012 is fundamentally wrong. I’ve addressed Hall’s concerns elsewhere. Here, he reiterates that the 10% difference in protein content (intended by design to reflect the Atkins diet) could account for our observed 325 kcal/d difference in energy expenditure. However, there is no basis in the literature for this belief. Among 10 studies published at the time of our feeding trial in which protein intake was compared within the physiological range (10 to 35% of total energy), energy expenditure on the higher vs. lower protein diets ranged from +95 kcal/d to -97 kcal/d, with the mean difference of near zero Dulloo AG, 1999 Hochstenbach-Waelen A, 2009 Lejeune MP, 2006 Luscombe ND, 2003 Mikkelsen PB, 2000 Veldhorst MA, 2009 Veldhorst MA, 2010 Westerterp KR, 1999 Westerterp-Plantenga MS, 2009 Whitehead JM, 1996. Though these studies have methodological limitations themselves, the finding is consistent with thermodynamic considerations that indicate a very minor increment in the "thermic effect of food" from a 10% increase in protein.
Third, 18 other studies provide definitive support for his position. This facile contention disregards that these studies are riddled with the same inherent limitations as his studies, including a combination of short duration, highly limited power, indirect measurements of body composition, reliance on metabolic chambers (which have been shown to underestimate adaptive thermogenesis compared to doubly-labeled water Rosenbaum M, 1996), quality control concerns and other issues. Of the cited studies, six were 1 to 4 days Astrup A, 1994 Dirlewanger M, 2000 Davy KP, 2001 Smith SR, 2000 Thearle MS, 2013 Verboeket-van de Venne WP, 1996, seven were 7 to 15 days Horton TJ, 1995 Shepard TY, 2001 Eckel RH, 2006 Hill JO, 1991 Schrauwen P, 1997 Treuth MS, 2003 Yang MU, 1976, and just five were 4 to 6 weeks. One of these longer studies was based on recovered data from about 30 years prior to publication, with no direct measurements of body composition or energy expenditure Leibel RL, 1992. The other four longer studies employed severe calorie restriction, which would plausibly obscure macronutrient effects over this short duration. Two of these studies had just 4 subjects per diet group Rumpler WV, 1991 Bogardus C, 1981. The remaining two showed either a non-significant (2 kg lower total body fat) Golay A, 1996 or significant (30 cc lower visceral fat) Miyashita Y, 2004 advantage for the lower-carbohydrate diet. We’ve been down this road before, with the launch of the 40-year low-fat diet era based on over-interpretation of methodologically limited research. Let’s not make the same mistake again.
Even as he over-interprets the short-term feeding studies, Hall disregards extensive animal research, high quality observational studies, mechanistic studies, and clinical trials in support of CIM, as summarized here and elsewhere Ludwig DS, 2014 Lucan SC, 2015 Templeman NM, 2017.
Finally, Hall claims that I misunderstand the notion of “energy gap.” As both Hall and I Katan MB, 2010 have considered elsewhere, a decrease in energy intake produces a compensatory decrease in energy expenditure, resulting in less weight loss than would be predicted from the simple observation that a pound of fat contains 3500 kcal. However, here we consider the opposite phenomenon – an increase in energy expenditure resulting from changing dietary quality, not quantity. There is no reason to believe that compensatory increases in energy intake would occur as a result of faster metabolic rate over a similar time frame as that observed with compensatory changes to energy restriction. (Indeed, Hall himself acknowledges the possibility that low-carbohydrate diets might also lower energy intake.) Of course, progressive weight loss regardless of cause would eventually reduce energy expenditure, but we cannot infer from current data when that energy gap would reach zero. Even with conventional assumptions, NIDDK’s Body Weight Planner indicates the 150 kcal/d change in energy balance Hall found on the low-carbohydrate diet by doubly-labeled water would produce more than a 15 lb weight loss for a typical individual over several years – amounting to half the mean change in weight that occurred during the obesity epidemic in the U.S. Why would we dismiss findings with such major potential public health significance?
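The arithmetic behind this point can be sketched with the widely cited rule of thumb from Hall's own dynamic modeling work: roughly 10 kcal/d of sustained energy-balance change per pound of eventual weight change, with about half of the change reached in the first year. The sketch below is an illustrative back-of-envelope approximation, not the NIDDK Body Weight Planner itself:

```python
# Illustrative approximation of the dynamic energy-balance rule of thumb
# (~10 kcal/d of sustained change per pound of eventual weight change,
# with about half of the change reached in the first year). This is NOT
# the NIDDK Body Weight Planner, just the back-of-envelope version.

def eventual_weight_change_lb(delta_kcal_per_day):
    """Steady-state weight change (lb) for a sustained energy-balance change."""
    return delta_kcal_per_day / 10.0

def weight_change_lb(delta_kcal_per_day, years, half_life_years=1.0):
    """Weight change after `years`, assuming exponential approach to steady state."""
    steady = eventual_weight_change_lb(delta_kcal_per_day)
    return steady * (1.0 - 0.5 ** (years / half_life_years))

# The 150 kcal/d expenditure difference discussed above:
print(eventual_weight_change_lb(150))        # 15.0 lb at steady state
print(round(weight_change_lb(150, 3.0), 1))  # 13.1 lb after ~3 years
```

Under these assumptions a sustained 150 kcal/d difference indeed corresponds to roughly a 15 lb change approached over several years, which is the magnitude at issue here.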
Hall's premature claims of (at least partial) victory and calls for curtailment of funding for more research Freedhoff Y, 2016 do not do justice to a complicated scientific question. In view of the failure of conventional obesity treatment and the massive public health challenges, all participants in this debate would do well to acknowledge the limitations of existing evidence and join in the design of more definitive research.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 31, Kevin Hall commented:
Science progresses through an iterative process of formulating models to explain our observations and subjecting those models to experimental interrogation. A single valid experimental result that runs counter to a model prediction falsifies the model and thereby requires its reformulation. Alternatively, refutation of an apparent model falsification requires demonstrating that the experimental observation was invalid.
My review of the carbohydrate-insulin model (CIM) presented a synthesis of the evidence from 20 inpatient controlled feeding studies strongly suggesting that at least some important aspects of the model are in need of modification. In particular, our recent studies Hall KD, 2015, Hall KD, 2016 employing carefully controlled inpatient isocaloric diets with constant protein, but differing in carbohydrate and fat, resulted in statistically significant differences between the diets regarding body fat and energy expenditure that were in directions opposite to predictions of the CIM.
Rather than using our experimental results as the basis for clarifying and reformulating the CIM, Ludwig challenges their validity and simply ignores the 18 other inpatient controlled feeding studies, whose consistent body of results fails to support the energy expenditure or body fat predictions of the CIM.
Ludwig’s comments on the diets used in Hall KD, 2015 are irrelevant to whether they resulted in a valid test of the CIM predictions. We fed people diets that selectively reduced 30% of baseline calories solely by restricting either carbohydrate or fat. These diets achieved substantial differences in daily insulin secretion as measured by ~20% lower 24hr urinary C-peptide excretion with the reduced carbohydrate diet as compared with the reduced fat diet (p= 0.001) which was unchanged from baseline. Whereas the reduced fat diet resulted in no significant energy expenditure changes from baseline, carbohydrate restriction resulted in a ~100 kcal/d decrease in both daily energy expenditure and sleeping metabolic rate. These results were in direct opposition to the CIM predictions, but in accord with the previous studies described in the review as well as a subsequent study demonstrating that lower insulin secretion was associated with a greater reduction of metabolic rate during weight loss Muller MJ, 2015.
Ludwig erroneously claims that the study suffered from an “inability to directly document change in fat mass by DXA”, but DXA measurements indicated statistically significant reductions in body fat with both diets. While DXA was not sufficiently precise to detect significant differences between the diets, even this null result runs counter to the predicted greater body fat loss with the reduced carbohydrate diet. Importantly, the highly sensitive fat balance technique demonstrated small but statistically significant differences in cumulative body fat loss (p<0.0001) in the direction opposite to the CIM predictions. Ludwig claims that our results are invalid because “rates of fat oxidation, the primary endpoint, are exquisitely sensitive to energy balance. A miscalculation of available energy for each diet of 5% in opposite directions could explain the study’s findings.” However, it is highly implausible that small uncertainties in the metabolizable energy content of the diet amounting to <100 kcal/d could explain the >400 kcal/d (p<0.0001) measured difference in daily fat oxidation rate. Furthermore, our results were robust to the study errors and exclusions fully reported in Hall KD, 2015 and clearly falsified important aspects of the CIM.
We previously responded Hall KD, 2016b to Ludwig’s comments Ludwig DS, 2016 on our ketogenic diet study. Ludwig now argues that we set the bar too high regarding the energy expenditure predictions of the CIM based on “speculative claims by non-scientists like Robert Atkins”. But scientists well-known for promoting low carb diets have claimed that “very low carbohydrate diets, in their early phases, also must supply substantial glucose to the brain from gluconeogenesis…the energy cost, at 4–5 kcal/gram could amount to as much as 400–600 kcal/day” Fine EJ, 2004. Ludwig also sets the energy expenditure bar quite high in his New York Times opinion article, JAMA commentary, and book “Always Hungry,” where he claims to have demonstrated a 325 kcal/d increase in expenditure in accordance with the CIM predictions Ebbeling CB, 2012. What Ludwig fails to mention is that such an interpretation is confounded by the low-carbohydrate diet having 50% greater dietary protein, which is well known to increase expenditure. Ludwig also doesn’t mention that his study failed to demonstrate a significant effect on either resting or daily energy expenditure when comparing diets with the same protein content, but varying in carbohydrate and fat.
What was the energy expenditure bar set by our ketogenic diet study Hall KD, 2016? The clinical protocol specified that the primary daily energy expenditure outcome (measured by room calorimetry) must increase by >150 kcal/d to be considered physiologically meaningful. With the agreement of funders at the Nutrition Science Initiative, notable proponents of the CIM, the pre-specified 150 kcal/d threshold was used to calculate the number of study subjects required to estimate the energy expenditure effect size in a homogeneous population of men consuming an extremely low carbohydrate diet. If the measured effect size exceeded 150 kcal/d then the results could be reasonably interpreted as a physiologically important increase in energy expenditure worthy of future study in a wider population using more realistic and sustainable diets. Unfortunately, the primary energy expenditure outcome was substantially less than 150 kcal/d and it would have been unethical to retrospectively “move the goal posts” or emphasize exploratory outcomes that could possibly be interpreted as more favorable to the CIM.
Ludwig sets the bar far too low when he claims that a ~100 kcal/d effect size “would be of major scientific and clinical significance” for treatment of obesity. Ludwig bases this claim on a misunderstanding of the tiny “energy imbalance gap” between calorie intake and expenditure corresponding with the rise of population obesity prevalence Hall KD, 2011. This is especially puzzling since Ludwig himself used the same mathematical model calculations to conclude that development of obesity in adults requires an increased energy intake (or decreased expenditure) amounting to ~400-700 kcal/d Katan MB, 2010.
As described in my review, the carbohydrate-insulin model is clearly in need of reformulation regarding the predicted effects of isocaloric variations in dietary carbohydrate and fat on energy expenditure and body fat. However, other aspects of the model remain to be adequately investigated and reasonable ad hoc modifications of the model have been proposed. Finally, it is important to emphasize that regardless of whether the carbohydrate-insulin model is true or false, dietary carbohydrates and insulin may promote obesity and low carbohydrate diets may offer benefits for weight loss and metabolic health.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jan 17, DAVID LUDWIG commented:
In this review, Hall claims to have “falsified” the Carbohydrate-Insulin Model (CIM) of obesity as iterated by Mark Friedman and me in 2014 Ludwig DS, 2014. Hall describes this achievement as “rare” in nutritional science and analogous to the refutation of the “luminiferous ether” hypothesis of the 19th century. Elsewhere, he argues that the published data are so definitive as to warrant curtailment of further funding for macronutrient-focused obesity research Freedhoff Y, 2016.
To loosely paraphrase Mark Twain, rumors of CIM’s demise have been greatly exaggerated.
Hall bases his case mainly on his two feeding studies, one small and short (6 days), the other small, non-randomized (i.e., observational) and designated a pilot.
In the discussion section of the 6-day study Hall KD, 2015, Hall and colleagues write: “Our relatively short-term experimental study has obvious limitations in its ability to translate to fat mass changes over prolonged durations” (NB: it can take the body weeks to fully adapt to a high fat diet Hawley JA, 2011 Vazquez JA, 1992 Veum VL, 2017). This appropriately cautious interpretation was evidently abandoned in the current review. Indeed, the study has numerous limitations beyond short duration, as reviewed elsewhere, including: 1) inability to directly document change in fat mass by DXA; 2) use of an exceptionally low fat content for the low-fat diet (< 8% of total energy), arguably without precedent in any population consuming natural diets; 3) use of a relatively mild restriction of carbohydrate (30% of total energy), well short of typical very-low-carbohydrate diets; and 4) experimental errors and exclusions of data that could confound findings. In addition, the investigators failed to verify biologically available energy of the diet (e.g., by analysis of the diets and stools for energy content). Rates of fat oxidation, the primary endpoint, are exquisitely sensitive to energy balance. A miscalculation of available energy for each diet of 5% in opposite directions could explain the study’s findings – and this possibility can’t be ruled out in studies of such short duration.
Hall’s non-randomized pilot Hall KD, 2016 potentially suffers from all the well-recognized limitations of small observational studies, importantly including confounding by any time-varying covariate. One such factor is miscalculation of energy requirements, leading to progressive weight loss that would have introduced bias against the very-low-carbohydrate diet. Other major design and interpretive limitations have been considered elsewhere.
Furthermore, Hall sets the bar for the CIM unrealistically high (i.e., 400 to 600 kcal/d greater total energy expenditure), citing speculative claims by non-scientists like Robert Atkins. In fact, effect estimates of 100 to 300 kcal/day – as demonstrated by Hall himself Hall KD, 2016 and by us Ebbeling CB, 2012 using doubly-labeled water – would be of major scientific and clinical significance if real, and do not represent "ad hoc modifications" to evade "falsification." (For comparison, Hall previously argued that the actual energy imbalance underlying the entire obesity epidemic is < 10 kcal/d Hall KD, 2011.)
To test the CIM, we need high-quality studies of adequate duration to eliminate transient biological processes (ideally ≥ 1 month); using a randomized-controlled design; with definitive measurements of body composition (e.g. DXA or MRI); and including appropriate process measures to assure that the diets are properly controlled for biologically available energy content. No such studies have yet been published. Thus, the CIM is neither proven nor “falsified” by existing data. In view of the complexity of diet, many high-quality studies will likely be needed to provide a complete answer to this question, versions of which have been debated for a century.
The CIM aims to explain a paradox: Body weight is controlled (“defended”) by biological factors affecting fat storage, hunger and energy expenditure Leibel RL, 1995. However, the average defended body weight has increased rapidly throughout the world among genetically stable populations. Lacking a definitive explanation for the ongoing obesity epidemic, or effective non-surgical treatment, we should not casually dismiss CIM, especially in light of many studies suggesting benefits of carbohydrate-modified/higher-fat diets for obesity Tobias DK, 2015 Mansoor N, 2016 Mancini JG, 2016 Sackner-Bernstein J, 2015 Bueno NB, 2013, cardiovascular disease Estruch R, 2013 and possibly longevity Wang DD, 2016.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Dec 05, Alex Vasquez commented:
Apparently the publisher is not connecting these related publications for appropriate context; see: Vasquez A. Correspondence regarding Cutshall, Bergstrom, Kalish's "Evaluation of a functional medicine approach to treating fatigue, stress, and digestive issues in women.” Complement Ther Clin Pract. 2016 Oct 19 https://doi.org/10.1016/j.ctcp.2016.10.001
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jul 27, Robin W.M. Vernooij commented:
We have developed a fillable, user-friendly PDF version of CheckUp, which can be found at the EQUATOR ( Enhancing the QUAlity and Transparency Of health Research ) Library (http://www.equator-network.org/reporting-guidelines/reporting-items-for-updated-clinical-guidelines-checkup/).
CheckUp has recently been translated into Spanish and Dutch (Chinese and Czech versions are being prepared); these translated versions can also be found at the EQUATOR library. Researchers are invited to translate CheckUp into other languages.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jul 01, Lydia Maniatis commented:
"Could these null findings result simply from poor data quality in infants?"
That a study even warrants such a statement implies a lack of theoretical and methodological rigor. Such questions cannot be resolved post hoc - experiments need to be planned so as to avoid them altogether. The authors feel that "Several observations argue against this [poor quality data] interpretation," but such special pleading by the authors doesn't make me feel any better.
This is a study in which the conceptual categories are crude - e.g. "scenes" is a category - calling into question its replicability (given the broad latitude in selecting stimuli we could label "scenes.") Post hoc evaluations of data - model-fitting, etc - are also poor practice. All they can do is describe a particular dataset, confounds and all. Because the authors make no predictions, emphasizing instead the relative novelty of their technique, one might overlook the fact that data generated without a clear theoretical premise guiding control of variables/potential confounds is of very limited theoretical value. Basically, they're just playing with their toys.
Despite what I see as poor scientific practice, I don't think we needed an fMRI study to to "suggest" to us that, by 4–6 months, babies can distinguish "faces" from "scenes."
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Apr 04, Randi Pechacek commented:
Aaron Weimann, the 1st author on this paper, wrote a blog post on microBEnet briefly discussing this new software. Read about it here.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Mar 09, Atanas G. Atanasov commented:
Very interesting and relevant topic. I have featured this publication at: http://healthandscienceportal.blogspot.com/2017/03/role-of-intestinal-microbiota-and.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jan 07, Lydia Maniatis commented:
“In conclusion, using a psychophysical method, for the first time we showed that the timescale of adaption mechanisms for the mid-level visual areas were substantially slower than those for the early visual areas.”
Psychophysical methods are amazing. They let you tap into specific levels of the brain, just by requiring observers to press one of two buttons. As you can imagine, some heavy-duty theoretical and empirical preparation has gone into laying the ground for such a simple but penetrating method.
One example of this preparation is the assertion by Graham (1992) that under certain “simple” conditions, the brain becomes transparent, so that the percept is a direct reflection of, e.g. the activity of V1 neurons. (Like bright students, the higher levels can’t be bothered to respond to very boring stimulation). She concluded this after a subset of a vast number of experiments performed in the very active field had proven “consistent” with the “classical” view of V1 behavior, at a time when V1 was thought to be pretty much all there was (for vision). (The “classical” view was later shown to be premature and inadequate, making the achievement of consistency in this body of work even more impressive). If one wanted to be ornery, one might compare Graham’s position to saying that we can drop an object into a Rube Goldberg contraption and trigger only the first event in the series, while the other events simply disengage, due to the simplicity of the object – perhaps a simple, sinusoidal surface. To be fair, though, the visual system is not as integrated or complex as those darned contraptions.
The incorporation of this type of syllogism into the interpretation of psychophysical data was duly noted by Teller (1984), who, impressed, dubbed it the “nothing mucks it up proviso.” It has obviously remained a pillar of psychophysical research, then and now.
The other important proviso is the assumption that the visual system performs a Fourier analysis, or a system of little Fourier analyses, or something, on the image. There is no evidence for or logic to this proviso (e.g. no imaginable functional reason or even remotely adequate practical account), but, in conjunction with the transparency assumption, it becomes a very powerful tool: little sinusoidal patches tap directly into particular neural populations, or “spatial filters,” whose activity may be observed via a perceiving subject’s button tap (and a few dozen other “linking propositions,” methodological choices and number-crunching/modeling choices for which we have to consult each study individually). (There are also certain minor logical problems with the notion of “detectors,” a concept invoked in the present paper; interested readers should consult Teller (1984))
The basic theoretical ground has been so thoroughly packed that there is little reason for authors to explain their rationale before launching into their methods and results. The gist of the matter, as indicated in the brief introduction, is that Hancock and Pierce (2008) “proposed that the exposure to the compound [grating] pattern gave rise to more adaptation in the mid-level visual areas (e.g., V4) than the exposure to the component gratings.” Hancock and Pierce (2008) doubtless had a good reason for so proposing. Mei et al (2017) extend these proposals, via more gratings, and button presses, to generate even more penetrating proposals. These may become practically testable at some point in the distant future; the rationale, as mentioned, is already well-developed.
n.b. Due to transparency considerations, results apply to gratings only, either individual or overlapping.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 May 25, Lydia Maniatis commented:
Comment 2: Below are some of the assumptions entailed by the sixty-year-old "signal detection theory," as described by Nevin (1969) in a review of Green and Swets (1966), the founding text of SDT.
"Signal detection theory [has proposed] an indirectly derived measure of sensitivity...This measure is defined as the separation...between a pair of hypothesized normal density functions representing the internally observed effects of signal plus noise, and noise alone."
In other words, for any image an investigator might present, the nervous system of the observer generates a pair of probability functions related to the presence or absence of a feature of that image that the investigator has in mind and which he/she has instructed the observer to watch for. The observer perceives this feature on the basis of some form of knowledge of these functions. These functions have no perceptual correlate, nor is the observer aware of them, nor is there any explanation of how or why they would be represented at the neural level.
"The subject's pre-experimental biases, his expectations based on instructions and the a priori probability of signal, and the effects of the consequences of responding, are all subsumed under the parameter beta. The subject is assumed to transform his observations into a likelihood ratio, which is the ratio of the probability density of an observation if a signal is present to the probability density of that observation in the absence of signal. He is assumed, further, to partition the likelihood ratio continuum so that one response occurs if the likelihood ratio exceeds beta, and the other if it is less than beta."
Wow. None of these assumptions have any relationship to perceptual experience. Are they in the least plausible, or in any conceivable way testable? They underlie much of the data collection in contemporary vision science. They are dutifully taught by instructors; learning such material clearly requires that students set aside any critical thinking instincts.
The chief impetus behind SDT seems to have been a desire for mathematical neatness, rather than for the achievement of insight and discovery.
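For readers unfamiliar with the machinery being criticized, the likelihood-ratio rule quoted above has a simple closed form in the standard equal-variance Gaussian case (noise ~ N(0,1), signal-plus-noise ~ N(d′,1)): deciding "likelihood ratio > beta" is equivalent to comparing the observation to a fixed criterion. A minimal, illustrative sketch (nothing here is specific to the paper under discussion):

```python
import math

def likelihood_ratio(x, d_prime):
    """f(x | signal) / f(x | noise) for unit-variance Gaussians at d' and 0.

    Algebraically this simplifies to exp(d' * (x - d'/2)).
    """
    return math.exp(-0.5 * (x - d_prime) ** 2) / math.exp(-0.5 * x ** 2)

d_prime, beta = 1.5, 2.0

# "L(x) > beta" is the same decision as "x > c", with the fixed criterion:
c = math.log(beta) / d_prime + d_prime / 2.0

# Verify the equivalence at points on both sides of the criterion:
for x in (-1.0, 0.0, 1.0, c - 0.01, c + 0.01, 3.0):
    assert (likelihood_ratio(x, d_prime) > beta) == (x > c)
```

This equivalence is why the "beta" partition of the likelihood-ratio continuum collapses, in practice, to a single cutoff on the decision variable.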
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 May 23, Lydia Maniatis commented:
"For over 60 years, signal detection theory has been used to analyze detection and discrimination tasks. Typically, sensory data are assumed to be Gaussian with equal variances but different means for signal-absent and signal-present trials. To decide, the observer compares the noisy sensory data to a fixed decision criterion. Performance is summarized by d′ (discriminability) and c (decision criterion) based on measured hit and false-alarm rates."
What should be noted about the above excerpt is the way in which a statement of historical fact is offered as a substitute for a rationale. What scientist could argue with a 60-year-old practice?
The "typical" assumptions are not credible, but if the authors believe in them, it would be great if they could propose a way to test them, as well as work out arguments against the seemingly insurmountable objections to treating neurons as detectors, objections raise by, for example, Teller (1984).
While they are at it, they might explain what they mean by "sensory data." Are they referring to the reaction of a single photoreceptor when struck by a photon of a particular wavelength and intensity? Or to one of the infinitely variable combinations of photon intensities/wavelengths hitting the entire retina at any given moment - combinations which mediate what is perceived at any local point in the visual field? How do we get a Gaussian distribution when every passing state of the whole retina, and even parts of it, is more than likely unique? When, with eyes open, is the visual system in a "signal-absent" state?
There is clearly a perfect confusion here about the "decision" by the visual process that produces the conscious percept and the decision by the conscious observer trying to recall and compare percepts presented under suboptimal conditions (very brief presentation times) and decide whether they conform to an extrinsic criterion. (What is the logic of the brief presentation? And why muddy the waters with forced choices? (I suspect it's to ensure the necessary "noisiness" of results)).
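For reference, the summary statistics named in the quoted abstract are computed from hit and false-alarm rates as follows; a minimal sketch under the equal-variance assumption (the example rates below are made up for illustration):

```python
from statistics import NormalDist

def dprime_and_c(hit_rate, fa_rate):
    # Equal-variance SDT summaries: d' = z(H) - z(F) measures discriminability,
    # c = -(z(H) + z(F)) / 2 measures criterion placement, where z is the
    # inverse standard-normal CDF.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

d, c = dprime_and_c(0.69, 0.31)  # symmetric rates give an unbiased criterion
```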
"For 5 out of 10 observers in the covert-criterion task, the exponentially weighted movingaverage model fit the best. Of the remaining five observers, one was fit equally well by the exponentially weighted moving-average and the limited-memory models, one was fit best by the Bayesian selection, exponentially weighted moving-average, and the reinforcement learning models, one was fit best by the Bayesian selection and the reinforcement learning models, one was fit best by the exponentially weighted moving-average and reinforcement learning models, and one was best fit by the reinforcement learning model. At the group level, the exceedance probability for the exponentially weighted moving-average is very high (慸ponential = .95) suggesting that given the group data, it is a more likely model than the alternatives (Table 1). In the overt-criterion task, the exponentially weighted moving-average model fit best for 5 out of 10 observers. Of the remaining five observers, one was fit equally well by the exponentially weighted moving-average and the reinforcement learning models, two were fit best by the reinforcement-learning model, and two were fit best by the limited-memory model. At the group level, the exceedance probability for the exponentially weighted moving-average model (慸ponential = .78) is higher than the alternatives suggesting that it is more likely given the group data (Table 2)."
Note how, in the modern conception of vision science practice, failure is not an option; the criterion is simply which of a number of arbitrary models "fits best," overall. Inconsistency with experiment is not cause to reject a "model," as long as other models did worse, or did as well but in fewer cases.
What is the aim here?
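For readers unfamiliar with the winning model class: an exponentially weighted moving average is simply a running estimate nudged toward each new observation. A minimal sketch (the smoothing parameter here is illustrative, not a fitted value):

```python
def ewma_update(estimate, observation, alpha=0.1):
    # Move the running criterion estimate a fixed fraction alpha toward the
    # latest observation; older observations decay geometrically in weight.
    return estimate + alpha * (observation - estimate)

criterion = 0.0
for obs in [1.0, 1.0, 1.0, 1.0]:
    criterion = ewma_update(criterion, obs, alpha=0.5)
```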
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Aug 04, Henri de la Salle commented:
This review privileges the view that mRNAs are translated in platelets. However, the biological function of mRNAs in platelets is not so clear. Our work does not agree with the conclusions drawn from other studies quoted in this review. We have demonstrated (Angénieux et al., PLoS One. 2016 Jan 25;11(1):e0148064. doi: 10.1371/journal.pone.0148064) that the lifespan of mRNAs and rRNAs in platelets is short, only a few hours. Accordingly, translational activity in platelets decays rapidly, within a few hours. Thus, in vivo, translation of non-mitochondrial mRNAs occurs only in young platelets, which represent a few percent of blood platelets under physiologic conditions. Most of the work reporting translation in platelets should be revisited by quantifying the number of transcripts of interest actually present in platelets. The RNAscope technique is a powerful way to investigate this problem; our work indicated that the most frequent transcript (e.g., beta actin mRNA) can be detected in most if not all young platelets, but in only a few percent of total blood platelets under homeostatic conditions. Finally, the biological role of translation in young platelets needs to be established using accurate quantification methods, which is not easy with these cells.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Dec 26, Elena Gavrilova commented:
The initial article with the experimental study can be found here: https://www.ncbi.nlm.nih.gov/pubmed/27769099. The current article provides a detailed response to E.V. Dueva's concerns regarding the experimental study, including the concerns raised in her comment. Both of our articles are fully transparent, so readers have the opportunity to familiarize themselves with the study results and the detailed response to the concerns, and to form their own opinion.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jul 21, Hans Bisgaard commented:
Thank you for your interest in our study. We will be pleased to address any questions or comments in the proper scientific manner, where you submit these to the journal as a Letter to the Editor.
Sincerely
Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke COPSAC, Copenhagen, Denmark
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 29, Martijn Katan commented:
My colleague Paul Brand and I published a Dutch-language comment on this paper in the Dutch Medical Journal. The English abstract is below and on www.ncbi.nlm.nih.gov/pubmed/28635579.
Abstract: Taking fish oil supplements in the third trimester of pregnancy was associated with significantly less wheezing or asthma in the child at the age of 3-5 years, according to a randomized clinical trial by Bisgaard et al., NEJM 2017. However, the results of this study should be interpreted with caution. The primary end points were modified at a late stage in the study, and two primary end points, eczema in the first 3 years of life and allergic sensitization at 18 months of age, were demoted to secondary end points, and showed no significant effect of treatment. Furthermore, the age range for the published primary end point, persistent wheeze, differed from that in the protocol. Additional concerns include the emphasis on outcomes by omega-3 fatty acid levels in the blood, a post hoc subgroup analysis not included in the protocol. In our opinion, this study does not justify advising routine fish oil supplements in pregnancy.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 25, Konstantinos Fountoulakis commented:
A nice way to reply without replying to exact and specific questions. You already know that the NEJM editor rejected a letter by me, and as I can see here he has also rejected other similar letters which raised the same questions. These specific questions remain pressing, and I mention them again here:
- Did you or did you not change the primary outcome after registering the trial and during the study, after the results of some of the subjects were available? (Not in my comments, but it needs a definite answer, which I have not seen so far.)
- Did you or did you not include in the paper a different primary outcome (3-5 years) from the one you had registered in the protocol (0-3 years), while specifically stating in the paper that this was the primary outcome of the study? Is 0-3 identical to 3-5?
I have no way of publishing this as a letter to the editor; I have already tried. To make things worse, the reply letter says (verbatim) that 'As clearly stated in the article the primary outcome was extended to distinguish wheezing children from asthmatic children'. I hope you will respond to the above issues and clarify the problem once and for all.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 20, Konstantinos Fountoulakis commented:
You do not like my criticism, and your reply is insulting. However, https://clinicaltrials.gov/ct2/show/NCT00798226 says verbatim: 'Primary Outcome Measures: Persistent wheeze 0 to 3 years of age [ Time Frame: 3 years ]'
In the paper you say (again verbatim): Primary End Point During the prespecified, double-blind follow-up period, which covered children from birth to between 3 and 5 years of age, 136 of 695 children (19.6%) received a diagnosis of persistent wheeze or asthma, and this condition was associated with reduced lung function by 5 years of age, with parental asthma, and with a genetic risk of asthma
'0 to 3' is quite different from 'birth to between 3 and 5'.
I would appreciate an answer on this. By the way, Facebook is also good at disseminating scientific findings.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 16, Hans Bisgaard commented:
Reply to comment from Konstantinos Fountoulakis:
We think that the tone and scientific level of this correspondence are inappropriate for a scientific discussion and rather resemble a Facebook discussion. The primary outcome reported in the paper is identical to the registered primary outcome of asthmatic symptoms during the prespecified, double-blind follow-up period until the youngest child turned 3 years of age. This primary outcome does not include any “unblinded" observation period. The definition of the primary outcome was predefined based upon a previously published algorithm using diary registration of asthma symptoms and a predefined treatment algorithm, and the statistical model (survival analysis by Cox regression) was also predefined. As evident from the paper, the analyses related to further follow-up until the youngest child turned 5 years of age, as requested by NEJM, are clearly reported separately as the results of a "continued follow-up period”.
Sincerely
Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke
COPSAC, Copenhagen, Denmark
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 12, Konstantinos Fountoulakis commented:
The paper reports that high-dose supplementation with n−3 LCPUFA in the third trimester of pregnancy reduces the incidence of wheezing in the offspring [1]. However, the primary outcome as registered [2] is incidence at 3 years, while in the paper it is erroneously reported as incidence between 3 and 5 years. This is highly problematic and raises a number of issues. Any changes to the protocol or to the way results are presented should have been made clear in the manuscript. Any other way of presenting the results and conclusions is problematic. It is not acceptable that the NEJM asked for an extension of the primary outcome. This could have been added as an additional post-hoc analysis. The results concerning the real primary outcome are not reported, but they are probably negative, taking into consideration Figure 1 and the marginal significance (p=0.03) at year 5. Furthermore, the trial becomes gradually single-blinded after year 3, which makes conclusions problematic. In conclusion, the paper clearly violates the CONSORT statement [3], is probably negative concerning the primary outcome (which is in accord with the negative secondary outcomes), and is written in a misleading way.
- Bisgaard H, Stokholm J, Chawes B, Vissing N, Bjarnadóttir E, Schoos A et al. Fish Oil–Derived Fatty Acids in Pregnancy and Wheeze and Asthma in Offspring. New England Journal of Medicine. 2016;375(26):2530-2539.
- ClinicalTrials.gov [Internet]. Bethesda (MD): National Library of Medicine (US). 2000 Feb 29 - . Identifier NCT00798226, Fish Oil Supplementation During Pregnancy for Prevention of Asthma, Eczema and Allergies in Childhood; 2008, Nov 25 [cited 2017 Jan 8]; Available from: https://clinicaltrials.gov/ct2/show/record/NCT00798226
- Schulz K, Altman D, Moher D. CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomised Trials. PLoS Medicine. 2010;7(3):e1000251.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 06, Robert Goulden commented:
Hi Hans,
Many thanks for the reply. I really don't mean to sound incriminating or pedantic, but the switching of primary and secondary outcomes is a widespread problem in clinical trials. Anyone looking at the history of changes to the COPSAC registration would, I think, be keen to find out whether that had occurred here.
You say 'Before unblinding of the trial we became aware that ranking of outcomes in this registration was not clear and we therefore changed this'. By that, do you mean that 'Development of eczema from 0 to 3 years of age' and 'Sensitization at 18 months of age' were mistakenly listed as primary outcomes in the original registration and subsequent revisions (until correction in Feb 2014), when your original intent was for them to be secondary outcomes from the outset? I of course understand how such an error can be made, but I hope you feel this is a reasonable question given the importance of this issue for determining the appropriate statistical significance threshold.
Rob
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 26, Hans Bisgaard commented:
Reply to question from Robert Goulden
We must admit that we find this comment very incriminating and with no contribution to a scientific discussion. The primary outcome of our study was always 'wheeze’ (early asthmatic symptoms). Otherwise, we would not have reported it as such, and we doubt that the New England Journal of Medicine would have published it. Similarly, the diagnostic algorithm based upon episodes of 'troublesome lung symptoms' was pre-specified, as was the analysis method (risk of developing wheeze analyzed by Cox regression), in line with previous studies from our COPSAC birth cohorts. It is correct that wheeze, eczema and allergic sensitization (in that order) were all listed as ‘primary outcomes’ in the initial ClinicalTrials.gov registration. Before unblinding of the trial we became aware that the ranking of outcomes in this registration was not clear and we therefore changed this (still unaware of the results of the trial). The only change after unblinding of the trial in relation to the primary outcome was the change in nomenclature to ‘Persistent wheeze or asthma’. This was due to a request from the New England Journal of Medicine for an additional 2 years of follow-up, from 3 to 5 years of age, thereby including an age at which we would normally use the term ‘asthma’.
Sincerely
Hans Bisgaard, Bo Chawes, Jakob Stokholm and Klaus Bønnelykke
COPSAC, Copenhagen, Denmark
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 09, Robert Goulden commented:
Here's a letter I sent to NEJM which they declined to publish. Hopefully the authors can respond here:
A review of the history of changes on the ClinicalTrials.gov entry (NCT00798226) for Bisgaard et al.’s study raises questions about the selection of their primary outcome and the statistical significance of their positive result.
When first registered in 2008, the trial had three primary outcomes: development of wheeze, development of eczema, and sensitization. In February 2014, two months before the study completion date, the entry was edited to have just persistent wheeze as the primary outcome, with eczema and sensitization switched to secondary outcomes. The published study in NEJM shows that persistent wheeze – presented as the sole primary outcome – was the only one of the three original primary outcomes to be statistically significant (P = 0.035).
Given multiple primary outcomes, an adjustment such as Bonferroni should have been made to the significance threshold: 0.05/3 = 0.017. Accordingly, the effect on wheeze was not statistically significant. Would the authors comment on their selection of the only ‘significant’ primary outcome as their final primary outcome? Were they aware of the study results at this point and did this influence their decision?
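The arithmetic behind this argument is simple enough to check; a minimal sketch:

```python
def bonferroni_threshold(alpha, n_tests):
    # Bonferroni correction: divide the familywise alpha by the number of
    # primary outcomes tested.
    return alpha / n_tests

threshold = bonferroni_threshold(0.05, 3)   # three original primary outcomes
significant = 0.035 < threshold             # reported P for persistent wheeze
```

With the corrected threshold of about 0.017, the reported P = 0.035 would not count as significant, which is the letter's point.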
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Apr 06, Cicely Saunders Institute Journal Club commented:
This paper was reviewed in the Cicely Saunders Institute Journal Club on 1st March 2017.
The paper reports on the independent associations of income, education and multimorbidity with aggressiveness of end of life care, using rich data from the Health and Retirement Study (HRS) linked to the National Death Index (NDI) and Medicare data. We enjoyed discussing this paper and agree with the authors about the importance of understanding social determinants alongside clinical determinants of care at the end of life. We liked the measure of multimorbidity used, comprising items related to comorbidity, functional limitations and geriatric syndromes, and thought this comprehensive approach was useful in this population. We were not sure why the sample was limited to fee-for-service patients and whether this may have disproportionately excluded some socio-economic groups. As a non-US audience we would have welcomed some further justification for restricting the sample in this way and discussion of potential limitations, perhaps using a CONSORT diagram to explain the steps. We enjoyed the presentation of the bivariate associations in the bar charts, which helped us to understand the U- and J-shaped relationships between some of the variables. More information about what exactly the income variable was capturing (i.e. whether it included pensions, and whether the household income was total or averaged across the number of people in the household) would have been useful. We also felt the race variable was broad and interpretation of the results would have benefited from more refined categories. Overall the paper sparked a good discussion about the importance of measuring social determinants and illness- and function-related factors in end of life populations and how best to capture these.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 May 29, Rashmi Das commented:
We thank Harri for his PERSONAL (NON-PEER-REVIEWED) OPINION, which is available at the above HANDLE (http://hdl.handle.net/10138/153180) and THAT CONTAINS A DIRECT COPY AND PASTE OF THREE FIGURES/IMAGES FROM OUR PREVIOUS PUBLICATIONS (JAMA 2014 and Cochrane 2013). We are happy to reply to the above comments made by Harri. First, regarding the Cochrane review which was withdrawn in 2015, the detailed report is already available at the following link (http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001364.pub5/abstract). This report is the collaborative observation and conclusion of the Cochrane editors (UNLIKE THE HANDLE, WHICH CONTAINS MORE OF A PERSONAL OPINION THAT HAD ALREADY BEEN EXAMINED BY THE COCHRANE EDITORS BEFORE REACHING THE CONCLUSION). The same HANDLE WAS SENT TO THE JAMA EDITORS REGARDING THE JAMA CLINICAL SYNOPSIS (PUBLISHED IN 2014), AND HARRI REQUESTED THE EDITORS TO CARRY OUT AN INVESTIGATION AND VERIFY. THE EDITORS ASKED US FOR A REPLY, WHICH WE PROVIDED IN A POINT-TO-POINT MANNER (BOTH THE COMMENT BY HARRI AND OUR REPLY WERE PUBLISHED, SEE BELOW). HAD THE COMMENT/REPORT BY HARRI BEEN ENTIRELY CORRECT, THE JAMA EDITORS COULD HAVE STRAIGHTAWAY RETRACTED/WITHDRAWN THE SYNOPSIS WITHOUT GOING FOR PUBLICATION OF THE COMMENT/REPLY (Both are available at the following: https://www.ncbi.nlm.nih.gov/pubmed/26284729; https://www.ncbi.nlm.nih.gov/pubmed/26284728). IT HAS TO BE MADE CLEAR THAT THE JAMA SYNOPSIS (DAS 2014) WAS WITHDRAWN BECAUSE THE SOURCE DOCUMENT ON WHICH IT WAS BASED (THE COCHRANE 2013 REVIEW) WAS WITHDRAWN (NOT BASED ON THE REPORT IN THE HANDLE, WHICH IS A PERSONAL NON-PEER-REVIEWED OPINION). The irony is that though HARRI'S COMMENT got published as a LETTER TO THE EDITOR in JAMA after OUR REPLY, the NON-PEER-REVIEWED HANDLE THAT CONTAINS A DIRECT COPY OF THREE FIGURES/IMAGES FROM OUR PUBLICATIONS IS STILL BEING PROPAGATED.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 May 24, Harri Hemila commented:
Background of the retraction
Concerns were expressed about unattributed copying of text and data, and about numerous other problems in the Cochrane review “Zinc for the Common Cold” by Singh M, 2013. Details of the concerns are available at: http://hdl.handle.net/10138/153180.
The Cochrane review was withdrawn, see Singh M, 2015.
The JAMA summary of the Cochrane review by Das RR, 2014 had numerous additional problems of its own.
Detailed description of problems in Das RR, 2014 are available at http://hdl.handle.net/10138/153617.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jan 15, Farrel Buchinsky commented:
How can I find out more about "well-newborn care in the inpatient setting"? It was $12000 and almost exclusively associated with uncomplicated delivery. In other words preterm birth complications, neonatal encephalopathy, and other neonatal disorders were not the major contributors. I do not understand that. What is being charged? Is the labor and delivery being charged to the mother or the child or being split equally among them? Surely it cannot be the "room" charge for hanging out in the newborn nursery for 2 days?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jan 03, Peter Hajek commented:
The concerns and warnings about 'dual use' are not justified. There is no evidence that dual use of cigarettes and e-cigarettes poses any additional risk; on the contrary, the available evidence suggests that dual use of cigarettes and e-cigarettes has the same or a better effect than dual use of cigarettes and nicotine replacement treatments (NRT), which is the basis for licensing NRT for 'cut down to quit' use. It reduces smoke intake (and therefore toxin intake, even of chemicals present in both products, such as acrolein) and increases the chance of quitting smoking later on. The evidence we have to date leaves no doubt that smokers should be informed truthfully about the risk differential and encouraged to switch to vaping.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Mar 05, Boris Barbour commented:
See also the PubMed Commons comments below Gomes JM, 2016
And https://arxiv.org/abs/1612.08457
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jan 03, Donald Forsdyke commented:
ASSUME A SPHERICAL COW?
Following a multidisciplinary study of milk production at a dairy farm, a physicist returned to explain the result to the farmer. Drawing a circle, she began: "Assume the cow is a sphere … ." (1) This insider math joke may explain Koonin’s puzzlement that "most biologists do not pay much attention to population genetic theory" (2).
The bold statement that "nothing in evolution makes sense except in the light of population genetics," cannot be accepted by biologists when evolution is portrayed in terms of just two variables, "an interplay of selection and random drift," constituting a "core theory." While mathematical biologists might find it "counterintuitive" that "the last common eukaryotic ancestor had an intron density close to that in extant animals," this is not necessarily so for their less mathematical counterparts. They are not so readily inclined to believe that an intron "is apparently there just because it can be" (3).
While expediently adopting "null models" to make the maths easier, population geneticists are not "refuted by a new theoretical development." They have long been refuted by old theoretical developments, as illustrated by the early twentieth century clash between the Mendelians and the Biometricians (4). It is true that by adjusting "selection coefficient values" and accepting that "streamlining is still likely to efficiently purge true functionless sequences," the null models can closer approximate reality. But a host of further variables – obvious to many biologists – still await the acknowledgement of our modern Biometricians.
1.Krauss LM (1994) Fear of Physics: A Guide for the Perplexed. Jonathan Cape, London.
2.Koonin EV (2016) Splendor and misery of adaptation, or the importance of neutral null for understanding evolution. BMC Biology 14:114
3.Forsdyke DR (2013) Introns First. Biological Theory 7, 196-203.
4.Cock AG, Forsdyke DR (2008) "Treasure Your Exceptions." The Science and Life of William Bateson. Springer, New York.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Feb 18, Atanas G. Atanasov commented:
Dear co-authors, thanks a lot for the excellent joint work! I have referenced our work at: http://healthandscienceportal.blogspot.co.at/2017/02/what-are-therapeutic-properties-of.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2017 Jul 29, Christopher Southan commented:
BIA 10-2474 is now available from vendors (see https://cdsouthan.blogspot.se/2016/01/molecular-details-related-to-bia-10-2474.html). The experimental verification of the predictions in this paper is thus awaited with interest.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
On 2016 Dec 29, Lydia Maniatis commented:
The title of this article indicates that the authors may have something to say about “lightness perception for surfaces moving through different illumination levels”, but leaves us in the dark as to what that might be.
The abstract isn’t much more illuminating. The somewhat vague message seems to be that the perceived lightness of a patch in the visual field depends on the structure of the “light field,” the “choice of fixation positions,” and whether the scene is viewed freely or not, and that “eye movements in [dynamic scenes and nonuniform light fields] are chosen to improve lightness constancy.”
Unfortunately and fatally absent from the terms of the discussion is any reference to shape. Yet, shape (i.e. the organization (segregation/unification, 3D interpretation), via the visual process, of the points in the retinal projection into perceived forms) is the only available means to the goal of creating percepts of lightness as well as relative illumination of surfaces. This is obvious with respect to the authors' stimuli, which are images on a computer screen. The luminance structure of the light emitting points on that screen is the only information the visual system has to work with, and unless those points are grouped and boundaries and depth relations inferred there is no basis for designating continuous surfaces, their lightness, their relative illumination. Whether areas of the visual field are interpreted as changing in reflectance or illumination is contingent on which parts of the field are eligible to be grouped into perceived physical units with a homogeneous surface.
In other words: When the luminance of a surface in a part of the visual field changes, (e.g. from lighter to darker), the change may be interpreted as being due to a change in illumination of a surface in that location, a change in the color of the surface at that location, the presence of a fog overlying the surface at that location, etc., or a combination of these possibilities. How is the solution (the percept) arrived at? For example, at the lower left side of Toscani et al’s (2016) Figure 1, an edge between a dark area (the “wall”) and a lighter area (the “side of a cube”) to its right is perceived as a lightening in terms of both perceived illumination and perceived reflectance, while a change from the same lighter area to a darker area to its right is seen as a change in illumination only. The reason is structural, based on the very principles of organization not mentioned by the authors.
The consequence of the failure to consider principles of organization in any study of lightness perception is that ANY resulting claims can be immediately falsified. It is impossible to predict how a surface will look when placed in any given location in the visual field by referring only to the distribution of incident illumination, since this information doesn’t in the least allow us to predict luminance structure. And a description of luminance structure doesn’t help us if we don’t consider visual principles of organization. The former fact should be particularly obvious to people using uniformly illuminated pictorial stimuli, whether on a page or on a screen, which produce impressions of non-uniform illumination. Like reflectance, the perception of illumination is constructed, it isn’t an independent variable for vision; so it makes no sense, in the context of perception experiments, to refer to it as though it is – as the authors do in the phrase “moving through different illumination levels” - especially if we aren’t even talking about actual illumination levels, but only visually-constructed ones! The perception of changing illumination levels is the flip side to the perception of unchanging surfaces, and vice versa. Like lightness, perceived illumination is dependent on principles of organization, starting with figure/ground segregation.
So, for example, when the authors say that the brightest parts of a (perceived) surface’s luminance distribution is “an efficient…heuristic for the visual system to achieve accurate…judgments of lightness,” we can counter (falsify) with the glare illusion (http://www.opticalillusion.net/optical-illusions/grey-glow-illusion-the-glare-effect/) in which the brightest area is not perceived as the plain view color of the surface, which appears black and obscured by a glare or bright fog.
With respect to eye movements and fixation: It seems to be the case that fixations are the product, not the cause, of perceptual solutions. For example, it has been shown that while viewing the Muller-Lyer illusion, eye movements trace a longer path when we’re looking at the apparently longer figure and vice versa. Another problem with the claim that eye movements have a causal role by sampling “more relevant” parts of the field is that all parts of the field are taken into account in the generation of a percept, e.g. in order for the visual system to conclude that a particular patch is the lightest part of a homogeneously-colored but differently-illuminated physical unit, rather than a differently colored patch on a different unit. Since the perceived relative lightness/illumination of that particular patch is related to the perceived lightness/illumination of the whole visual field, isolating that patch by fixation can’t be uniquely informative. As we know, reduction conditions can transform the perception of surfaces.
I would note that the emphasis on "lightness constancy" rather than "principles of lightness perception" is common but ill-conceived. With respect to understanding perception, understanding lightness constancy is no more informative than understanding lightness inconstancy. (For a great example, complete with movement, of lightness INconstancy, see https://www.youtube.com/watch?v=z9Sen1HTu5o). In either case, what is constant are the underlying perceptual principles; to understand one effect is to understand the other. This is another reason the claim that eye movements are chosen "to improve lightness constancy" is ill-conceived. Only an all-knowing homunculus could know, a priori, which areas of the visual field represent stimulation from physical surfaces with constant reflectance x, which represent physical surfaces obscured by fog or in shadow, which represent physical surfaces that are actually changing in their light-reflecting properties (a squid, for example: do we want to improve his or her constancy?), and so on. The visual system has to go where the evidence goes, as interpreted via the evolved process. This process achieves veridicality, e.g. seeing surface properties as unchanging when they're unchanging and as changing when they're changing, in typical conditions.
Ironically, observers in Toscani et al.'s (2016) experiments are not perceiving surfaces veridically: parts of the screen surface that are actually varying in their color are perceived as unchanging in that respect, while, correspondingly, they are seen (incorrectly) as undergoing changing illumination. So we are actually talking about the converse of lightness constancy; the authors are equating a physical surface that is unchanging in its light-reflecting properties but experiencing changing illumination with a surface (a section of the screen) that is actually changing in its light-reflecting/emitting properties independently of incident illumination, which is constant. In the former case, seeing the surface as unchanging parallels the physical situation, while in the latter case it is opposite to the physical situation. Calling both situations examples of "lightness constancy" only confuses the issue, which is: "Why does a retinal projection with a particular luminance structure result in patch x looking the way it does?" The question, again, cannot be answered reliably without invoking principles of organization, i.e. the consequences of that luminance structure for perceived shape.
Short version: Shape.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
On 2017 Jan 04, Lydia Maniatis commented:
This article belongs to the popular "transparent brain" school of thought. (The label is inspired by Graham (1992); see the comment at https://pubpeer.com/publications/8F9314481736594E8D58E237D3C0D0.)
That is, certain visual scenes selectively tap neurons at particular levels of the visual system, such that by analyzing the percept we can draw conclusions about the behavior of groups of neurons at that level.
Teller (1984) called this view the "nothing mucks it up proviso," referring to the fact that it assumes all other parts of the hierarchically-organized, complicatedly interconnected visual system play no role in the particular effect of interest.
The untenable transparent-brain fiction is compromised even further by the "simple" stimuli that are supposed to enable the transparent view into V1, etc., as these stimuli actually elicit highly sophisticated 3D percepts, including effects such as the perception of light and shadow and of fog/transparency. Of course, these perceptual features are mediated by the activity of V1 (etc.) neurons. But the factors the investigators reference - orientation here, often contrast - are somehow supposed to retain their power to reflect only the behavior of V1 (or whatever level particular investigators claim to isolate and "model").
On 2017 Jan 23, Scott D Slotnick commented:
There has been a call for peer commentary on the Editorial/Discussion Paper (Slotnick SD, 2017) in the journal Cognitive Neuroscience (due February 13th, 2017). The Editorial/Discussion Paper, Commentaries, and an Author Response will be published in an issue of Cognitive Neuroscience later this year.
On 2017 Jan 04, Gerard Ridgway commented:
This editorial suggests that the problems identified by Eklund A, 2016 arise solely from the use of resting-state data in place of null data. It seems to overlook the fact that Eklund A, 2016 used randomly timed designs (two blocked and two event-related), meaning the unknown timecourse of default mode network (DMN) activity cannot consistently give rise to design-synchronised activation (except perhaps in the special case of the initial transients, which is mentioned briefly in an article by Flandin and Friston, 2016, but probably warrants further investigation). On the other hand, DMN activity could perhaps be contributing to non-Gaussianity of the residuals and/or a more complex spatial autocorrelation function (ACF) than is typically modelled, but these aspects seem not to be mentioned, nor addressed by the author's recommended simulation approach (which appears to be very similar to the original AlphaSim from AFNI).
Regarding the spatial ACF in particular, but also the issue of the cluster-defining threshold (CDT), this article should be contrasted with Cox et al., 2016, which recommends a new long-tailed non-Gaussian ACF available in newer versions of AlphaSim, together with a CDT of 0.001 or below.
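The simulation approach at issue may be easier to see in code. The following is a minimal sketch of my own (not AFNI's actual AlphaSim implementation; grid size, smoothness, and iteration count are illustrative assumptions): smooth Gaussian null volumes are thresholded at a cluster-defining threshold (CDT), and the null distribution of maximum cluster sizes yields a cluster-extent threshold. Note that the single Gaussian smoothing kernel here embodies exactly the Gaussian-ACF assumption that Cox et al. (2016) argue is too short-tailed for real fMRI noise.

```python
# Monte Carlo cluster-extent thresholding sketch (AlphaSim-style).
import numpy as np
from scipy import ndimage, stats

def max_cluster_size(shape=(32, 32, 32), fwhm_vox=2.0, cdt_p=0.001, rng=None):
    """Largest suprathreshold cluster (in voxels) in one smooth null volume."""
    rng = np.random.default_rng(rng)
    vol = rng.standard_normal(shape)
    # Gaussian smoothing mimics a spatial ACF; this is the Gaussian-ACF
    # assumption questioned by Cox et al. (2016).
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    vol = ndimage.gaussian_filter(vol, sigma)
    vol /= vol.std()                      # re-standardise after smoothing
    z_thresh = stats.norm.isf(cdt_p)      # CDT expressed as a z-value
    mask = vol > z_thresh
    labels, n = ndimage.label(mask)       # connected suprathreshold clusters
    if n == 0:
        return 0
    return int(np.max(ndimage.sum(mask, labels, range(1, n + 1))))

# Null distribution of the maximum cluster extent over Monte Carlo runs;
# its 95th percentile is the cluster-size threshold at alpha = 0.05.
sizes = [max_cluster_size(rng=i) for i in range(200)]
k_thresh = np.percentile(sizes, 95)
```

Replacing the Gaussian smoothing step with a long-tailed ACF, as recommended by Cox et al., lengthens the tail of the null cluster-size distribution and hence raises the cluster-extent threshold.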
On 2017 Feb 08, Chris Del Mar commented:
Has this trial report hidden the results -- that symptomatic outcomes are clinically almost identical -- in plain sight? See this blog, which plots the results more transparently:
Chris Del Mar cdelmar@bond.edu.au Paul Glasziou pglaszio@bond.edu.au
On 2017 Mar 24, Dorothy V M Bishop commented:
It is a pleasure to see this paper, which has the potential to transform the field of ERP research by setting new standards for reproducibility.
I have one suggestion to add to those already given for reducing the false discovery rate in this field: include dummy conditions where no effect is anticipated. This is exactly what the authors did in their demonstration example, but it can also be incorporated in an experiment. We started to do this in our research on mismatch negativity (MMN), inspired by a study by McGee et al. (1997); they worked at a time when it was not unusual for the MMN to be identified by 'experts', and what they showed was that experts were prone to identify MMNs even when the standard and deviant stimuli were identical. We found this approach, the inclusion of a 'dummy' mismatch, invaluable when attempting to study MMN in individuals (Bishop & Hardiman, 2010). It was particularly helpful, for instance, when validating an approach for identifying time periods of significant mismatch in the waveform.
Another suggestion is that the field could start to work more collaboratively to address these issues. As the authors note, replication is the best way to confirm that one has a real effect. Sometimes it may be possible to use an existing dataset to replicate a result, but data-sharing is not yet the norm for the field – journals could change that by requiring deposition of the data for published papers. But, more generally, if journals and/or funders started to require replications before work could be published, then one might see more reciprocal arrangements, whereby groups would agree to replicate each other's findings. Years ago, when I suggested this, I remember some people said you could not expect findings to replicate because everyone had different systems for data acquisition and processing. But if our data are specific to the lab that collected them, then surely we have a problem.
Finally, I have one request, which is that the authors make their simulation script available. My own experience is that working with simulations is the best way to persuade people that the problems you have highlighted are real and not just statistical quibbles, and we need to encourage researchers in this area to become familiar with this approach.
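In the spirit of the simulation request above, here is a minimal sketch of my own (not the authors' script; subject count, time points, and iteration count are illustrative assumptions) showing the kind of problem such simulations expose: with pure-noise "ERP" difference data, selecting the analysis window post hoc at the point of the largest observed difference drives the false-positive rate far above the nominal 5%. Independent noise across time points is a simplification; real ERP noise is autocorrelated, which changes the numbers but not the conclusion.

```python
# False-positive inflation from post-hoc time-window selection on null data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_time = 20, 100            # subjects x time points, all null data

false_pos = 0
n_experiments = 500
for _ in range(n_experiments):
    # Condition-difference waveforms with a true effect of exactly zero.
    diff = rng.standard_normal((n_subj, n_time))
    # Cherry-pick the time point where the group mean difference is largest...
    best_t = np.argmax(np.abs(diff.mean(axis=0)))
    # ...then test that same data point as if the window were chosen a priori.
    t_stat, p = stats.ttest_1samp(diff[:, best_t], 0.0)
    false_pos += p < 0.05

rate = false_pos / n_experiments    # far exceeds the nominal 0.05
```

Pre-registering the window, or correcting for the selection over all time points, brings the rate back to the nominal level.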
Bishop, D. V. M., & Hardiman, M. J. (2010). Measurement of mismatch negativity in individuals: a study using single-trial analysis. Psychophysiology, 47, 697-705 doi:10.1111/j.1469-8986.2009.00970.x
McGee, T., Kraus, N., & Nicol, T. (1997). Is it really a mismatch negativity? An assessment of methods for determining response validity in individual subjects. Electroencephalography and Clinical Neurophysiology, 104, 359-368.
On 2017 Jan 03, Daniel Schwartz commented:
The BC Cardiac Surgical Intensive Care Score is available online or via the Calculate mobile app for iOS, Android and Windows 10 at https://www.qxmd.com/calculate/calculator_36/bc-cardiac-surgical-intensive-care-score
Conflict of interest: Medical Director, QxMD
On 2016 Dec 23, Alessandro Rasman commented:
Bernhard HJ. Juurlink MD, Giovanni Battista Agus MD, Dario Alpini MD, Maria Amitrano MD, Giampiero Avruscio MD, Pietro Maria Bavera MD, Aldo Bruno MD, Pietro Cecconi MD, Elcio da Silveira Machado MD, Miro Denislic MD, Massimiliano Farina MD, Hector Ferral MD, Claude Franceschi MD, Massimo Lanza MD, Marcello Mancini MD, Donato Oreste MD, Raffaello Pagani MD, Fabio Pozzi Mucelli MD, Franz Schelling MD, Salvatore JA Sclafani MD, Adnan Siddiqui MD, Pierluigi Stimamiglio MD, Arnaldo Toffon MD, Antonio Tori MD, Gianfranco Vettorello MD, Ivan Zuran MD and Pierfrancesco Veroux MD
We read with interest the study titled "Free serum haemoglobin is associated with brain atrophy in secondary progressive multiple sclerosis" (1). Dr. Zamboni first outlined the similarities between impaired venous drainage in the lower extremities and MS in 2006 in his "Big Idea" paper (2). Chronic venous insufficiency can cause a breakdown of red blood cells, leading to increased levels of free hemoglobin. This is exactly what the London researchers saw a decade later.
References:
1) Lewin, Alex, et al. "Free serum haemoglobin is associated with brain atrophy in secondary progressive multiple sclerosis." Wellcome Open Research 1-10 (2016).
2) Zamboni, Paolo. "The big idea: iron-dependent inflammation in venous disease and proposed parallels in multiple sclerosis." Journal of the Royal Society of Medicine 99.11 (2006): 589-593.
On 2017 Feb 27, Zvi Herzig commented:
Prolonged exposures of oropharyngeal tissue submerged in refill liquids are a poor comparison to brief exposures from accidents.
The constituents of EC liquids other than nicotine (glycerol, propylene glycol and food flavorings) are generally recognized as safe (GRAS) for oral consumption. It is therefore unlikely that these would pose a particular hazard with respect to oral cancer.
Likewise, with regard to nicotine, the epidemiology of prolonged oral exposure via snus shows no link to oral cancer either Lee PN, 2011.
Thus none of the known e-liquid constituents is plausibly related to oral cancer, which supports the above conclusion that the study's results are unrelated to normal exposures.
On 2017 Jan 03, Christian Welz commented:
We discussed this issue in the publication: "Because most EC users refill their cartridges by themselves, and incidental or accidental contact is logical and described (Varelt et al., 2015; Vakkalanka et al., 2014), we intentionally used unvapored liquids for our experiments...."
On 2016 Dec 20, Zvi Herzig commented:
This study directly exposes cells to liquids, despite the fact that users are exposed to the vapor, not the liquid. This issue has been noted previously Hajek P, 2014, Farsalinos KE, 2014. The ~3 ml of liquid that EC users consume daily Farsalinos KE, 2014 is diluted in a large volume of air over hundreds of puffs. This is incomparable to the direct exposure to liquids in the present study.
On 2017 Jun 09, M Mangan commented:
There has now been an EFSA review of the paper, with an eye towards regulatory aspects in the EU. They describe the work as incomplete and as having "severe shortcomings".
http://onlinelibrary.wiley.com/doi/10.2903/sp.efsa.2017.EN-1249/abstract
On 2017 Jan 03, M Mangan commented:
There have now been some really good summaries of issues with this work.
Is GM corn really different to non-GM corn? http://sciblogs.co.nz/code-for-life/2016/12/31/gm-corn-really-different-non-gm-corn/
What are isogenic lines and why should they be used to study GE traits? http://themadvirologist.blogspot.com/2017/01/what-is-isogenic-line-and-why-should-it.html
Another: http://biobeef.faculty.ucdavis.edu/2017/01/03/i_would_appreciate_your_comments/
On 2016 Dec 22, M Mangan commented:
The top fold-change protein reported in this paper turns out to be a plant-pathogen protein that might have been affecting the maize. http://www.uniprot.org/uniprot/W7LNM5
If that is the case, it may demonstrate the power of -omics in revealing that the claims of differences have to be evaluated very carefully. They may not be what the authors claim they are.
We await some explanation of this large difference between samples from the authors.
Edit to add: The author Mesnage asked me to post questions at the journal site, but he is not coming over to answer them. Maybe the authors will find them here, so I'll add them here as well.
There's a lot of nonsense drama below now, but I want to hear from the authors (Robin Mesnage asked me to post here, but I can't see if he's responding):
What is your explanation for the fact that top fold-change proteins in your data set are fungal proteins (and it's a known maize pathogen)?
Are you aware that fungal contamination could result in similar changes in regards to the pathway changes that you describe? Did you consider this at all? Why didn't you address this in your paper?
If you wish to dismiss your own top reported proteins, how can you stand by the importance of the fold-change claims you are making about other proteins?
Thanks for your guidance on this. It's very perplexing.
On 2017 Jan 10, Jesper M Kivelä commented:
The URL in reference number 2 is invalid (i.e., ".html" is missing at the end). This error slipped through my (evidently not so) pedantic eyes at the proof stage.
On 2017 Mar 15, Sin Hang Lee commented:
Correspondence submitted to Nat. Rev. Dis. Primers
In their recent Primer [Lyme borreliosis. Nat. Rev. Dis. Primers 2, 16091 (2016)], Allen Steere and colleagues described Lyme borreliosis as an important emerging infectious disease [1]. The authors assert that the natural history of untreated Lyme borreliosis can be divided into stages 1, 2 and 3, and that early stage 1 infections can be treated successfully with a 10–14 day course of antibiotics. However, the authors also state that demonstration of borrelial infection by laboratory testing is required for reliable diagnosis of Lyme borreliosis, with the exception of erythema migrans, and that serodiagnostic tests are insensitive during the first several weeks of infection. If the infection is not treated early, "within days to weeks, the strains of B. burgdorferi in the United States commonly disseminate from the site of the tick bite to other regions of the body". In other words, the authors have affirmed that if Lyme borreliosis is reliably diagnosed at the early stage of infection, it can be cured with timely, appropriate antibiotics, preventing deep tissue damage and the associated clinical manifestations that result from the host immune response to various spirochetal products or components. In the Outlook (Diagnostic tests) section of the article, the authors fail to mention that the diagnosis of emerging infectious diseases now depends largely on finding evidence of the causative agent in the host by nucleic acid-based tests [2], not serodiagnostic tests, which usually turn positive only during convalescence. The authors seem to advise medical practitioners not to treat Lyme disease patients until the proliferating spirochetes in the host have elicited immune responses that can be confirmed by serologic tests. Such practice should not be accepted or continued, for obvious reasons.
The authors stated, "After being deposited in the skin, B. burgdorferi usually multiplies locally before spreading through tissues and into the blood or lymphatic system, which facilitates migration to distant sites." This statement acknowledges that spirochetemia is an early phase in the pathogenesis of Lyme borreliosis. Under the Diagnostic tests section, however, the polymerase chain reaction (PCR) test is mentioned only for synovial fluid from patients with late Lyme arthritis and for cerebrospinal fluid (CSF) in late neuroborreliosis. To refute the usefulness of DNA testing for Lyme disease diagnosis, the authors cited a study which showed that borrelial DNA detected in the synovial fluid of Lyme arthritis patients came from moribund or dead spirochetes [3]. However, the authors failed to discuss the significance of detecting borrelial DNA in the diagnosis of spirochetemia, and failed to acknowledge that even the finding of moribund or dead borrelial cells circulating in the blood is diagnostic of an active infection. Free foreign DNA is degraded and eliminated from the mammalian host's blood within 48 hours [4]. Detection of any borrelial DNA validated by DNA sequencing therefore indicates the recent presence of spirochetes, dead or alive, in the circulating blood, which is evidence of an active infection beyond a reasonable doubt.
It seems unfortunate for many current Lyme disease patients that Lyme arthritis was described before the era of Sanger sequencing and PCR [5]. If Lyme borreliosis were discovered as an emerging infectious disease today, Lyme disease would probably be routinely diagnosed using a highly accurate nucleic acid amplification test, as reiterated by Dr. Tom Frieden, director of the Centers for Disease Control and Prevention (CDC) for Zika virus infection [6], or by the European Centre for Disease Prevention and Control for the case definition of Ebola virus infection [7]. Now there is evidence that clinical “Lyme disease” in the United States may be caused by B. miyamotoi [8-10], co-infection of B. burgdorferi and B. miyamotoi [9], a novel CDC strain (GenBank ID# KM052618) of unnamed borrelia [10], and a novel strain of B. burgdorferi with two homeologous 16S rRNA genes [11]. The Lyme disease patients infected with these less common strains of borreliae may have negative or non-diagnostic two-tiered serology test results. Neither erythema migrans nor serologic test is reliable for the diagnosis of Lyme disease. In one summer, the emergency room of a small hospital in Connecticut saw 7 DNA sequencing-proven B. burgdorferi spirochetemic patients. Only three of them (3/7) had a skin lesion and only one (1/7) had a positive two-tiered serologic Lyme test [12].
After a 40-year delay, the medical establishment should begin to diagnose “Lyme disease” as an emerging infectious disease by implementing nucleic acid-based diagnostic tests in the Lyme disease-endemic areas. A national proficiency test program to survey the competency of diagnostic laboratories in detecting various pathogenic borrelia species is urgently needed for stimulating diagnostic innovation. We should treat the borrelial infection of “Lyme disease” to reduce its autoimmune consequences, just like treating streptococcal infection early to reduce the incidence of rheumatic heart disease in the past.
Allen Steere and colleagues have written a prescription to treat Lyme borreliosis in their lengthy article raising numerous questions [1], but paid little attention to the issue of how to select the patients at the right time for the most effective treatment. For the physicians managing current and future Lyme disease patients, a sensitive and no-false positive molecular diagnostic test is a priority, also the most important issue for the patients that Allen Steere and his colleagues have simply glossed over.
Conflict of Interest: Sin Hang Lee is the director of Milford Molecular Diagnostics Laboratory specialized in developing DNA sequencing-based diagnostic tests for community hospital laboratories.
References
1. Steere, A.C. et al. Lyme borreliosis. Nat. Rev. Dis. Primers 2, 16091 (2016).
2. Olano, J.P. & Walker, D.H. Diagnosing emerging and reemerging infectious diseases: the pivotal role of the pathologist. Arch. Pathol. Lab. Med. 135, 83-91 (2011).
3. Li, X. et al. Burden and viability of Borrelia burgdorferi in skin and joints of patients with erythema migrans or Lyme arthritis. Arthritis Rheum. 63, 2238–2247 (2011).
4. Schubbert, R. et al. Foreign (M13) DNA ingested by mice reaches peripheral leukocytes, spleen, and liver via the intestinal wall mucosa and can be covalently linked to mouse DNA. Proc. Natl. Acad. Sci. U. S. A. 94, 961-966 (1997).
5. Steere, A.C. et al. Lyme arthritis: an epidemic of oligoarticular arthritis in children and adults in three Connecticut communities. Arthritis Rheum. 20, 7–17 (1977).
6. Frieden, T. Transcript for CDC Telebriefing: Zika Update. https://www.cdc.gov/media/releases/2016/t0617-zika.html (2016).
7. ECDC. Ebola virus disease case definition for reporting in EU. http://ecdc.europa.eu/en/healthtopics/ebola_marburg_fevers/EVDcasedefinition/Pages/default.aspx#sthash.LvKojQGu.wf5kwZDT.dpuf (last accessed 2016).
8. Jobe, D.A. et al. Borrelia miyamotoi infection in patients from upper midwestern United States, 2014-2015. Emerg. Infect. Dis. 22, 1471-1473 (2016).
9. Lee, S.H. et al. Detection of borreliae in archived sera from patients with clinically suspect Lyme disease. Int. J. Mol. Sci. 15, 4284-4298 (2014).
10. Lee, S.H. et al. DNA sequencing diagnosis of off-season spirochetemia with low bacterial density in Borrelia burgdorferi and Borrelia miyamotoi infections. Int. J. Mol. Sci. 15, 11364-11386 (2014).
11. Lee, S.H. Lyme disease caused by Borrelia burgdorferi with two homeologous 16S rRNA genes: a case report. Int. Med. Case Rep. J. 9, 101-106 (2016).
12. Lee, S.H. et al. Early Lyme disease with spirochetemia - diagnosed by DNA sequencing. BMC Res. Notes 3, 273 (2010).
On 2016 Dec 28, Marcia Herman-Giddens commented:
While there are many aspects of this review paper by Steere et al. that beg for comment, I focus on the erythema migrans (EM) rash. Steere et al. state that "erythema migrans is the presenting manifestation of Lyme borreliosis in ~80% of patients in the United States," based on their 2003 paper. It is unclear from that paper exactly how this figure was obtained. As far as I know, there has never been a well-designed study of this issue.
I was pleased to see Figure 5 showing photographs of EM rashes with their more accurate solid red appearance. Research has shown that, contrary to popular belief (likely because of the promotion of the so-called 'target' or 'bull's-eye' type of lesion), most EMs are solid red. As stated by Shapiro in 2014 in the NEJM, "Although reputed to have a bull's-eye appearance, approximately two thirds of single erythema migrans lesions either are uniformly erythematous or have enhanced central erythema without clearing around it." Later, some may develop central clearing. The CDC estimates that "70-80%" of Lyme disease patients have an EM rash and calls the picture on its webpage "classic" even though it shows a bull's-eye or target-type lesion.
One outcome of this misrepresentation as a bull's-eye or target lesion is that patients with the more common solid EM rash may not present to their medical provider in a timely manner, thinking that it does not represent possible Lyme disease. I know of several cases where this happened and the patients went on to develop late Lyme disease. Aucott et al., in their 2012 paper "Bull's-Eye and Nontarget Skin Lesions of Lyme Disease: An Internet Survey of Identification of Erythema Migrans," found that many of the general-public participants were familiar with the classic target-type erythema migrans lesion but only 20.5% could correctly identify the nonclassic erythema migrans. In addition, many health care providers are not well trained in the recognition of EM rashes. In a case series by Aucott et al. in 2009, among Lyme disease patients presenting with a rash, the diagnosis of EM was initially missed by providers in 23%.
The well-known lack of sensitivity of the recommended two-tier test for diagnosing early Lyme disease infections, and the probability that many EM rashes are misdiagnosed or missed (especially among people living alone, or when the rash occurs in the hairline, etc.), contribute to the lack of accurate data on the incidence of EM rashes following infection with B. burgdorferi. These and other factors affect the collection of accurate data on the proportion of patients newly infected with B. burgdorferi who do develop erythema migrans, and suggest that the true incidence is likely lower than 70-80%.
Steere et al. Lyme borreliosis. Nat Rev Dis Primers. 2016 Dec 15;2:16090. doi: 10.1038/nrdp.2016.90.
Steere AC, Sikand VK. The presenting manifestations of Lyme disease and the outcomes of treatment. New England Journal of Medicine. 2003 Jun 12;348(24):2472-4.
Shapiro ED. Lyme disease. New England Journal of Medicine. 2014 May 1;370(18):1724-31.
www.cdc.gov/lyme/signs_symptoms/
Aucott JN, Crowder LA, Yedlin V, Kortte KB. Bull's-Eye and Nontarget Skin Lesions of Lyme Disease: An Internet Survey of Identification of Erythema Migrans. Dermatology Research and Practice. 2012 Oct 24;2012.
Aucott J, Morrison C, Munoz B, Rowe PC, Schwarzwalder A, West SK. Diagnostic challenges of early Lyme disease: lessons from a community case series. BMC Infectious Diseases. 2009 Jun 1;9(1):1.
On 2016 Dec 25, Raphael Stricker commented:
Lyme Primer is Obsolete (Part 1)
Raphael B. Stricker, Union Square Medical Associates, San Francisco, CA; Lorraine Johnson, LymeDisease.org, Chico, CA. rstricker@usmamed.com; lbjohnson@lymedisease.org
The Lyme primer by Steere and colleagues presents an overview of the epidemiology, pathogenesis, diagnosis and treatment of Lyme disease. The authors adhere to the dogma and opinions of the Infectious Diseases Society of America (IDSA), and as a result the primer showcases the schizoid nature of the IDSA view of Lyme disease: while the pathogenesis of the disease is highly complex and worthy of a formidable infectious agent, the epidemiology, diagnosis and treatment of the disease is ridiculously simple and rather banal ("hard to catch and easy to cure"). As a result, the primer propagates the myths and misinformation about Lyme disease that have made the IDSA view obsolete and contributed to the unchecked spread of the tickborne disease epidemic around the world. The following points address significant flaws and deficiencies in the primer, with appropriate references.
There are two standards of care for Lyme disease. One is based on the guidelines of IDSA (Reference 101 in the primer) and the other is based on the guidelines of the International Lyme and Associated Diseases Society (ILADS) (1). The primer adheres to the IDSA guidelines, which are based largely on "expert opinion" (2,3) and were recently delisted by the National Guideline Clearinghouse (NGC) because they are obsolete and fail to meet methodological quality standards for guideline development set forth by the Institute of Medicine (IOM) (1). The NGC recognizes the ILADS guidelines, which were developed using the GRADE methodology endorsed by the IOM (1). Much of the clinical information in the primer is refuted by the ILADS guidelines, as outlined below.
In the Abstract, the primer states that "All manifestations of the infection can usually be treated successfully with appropriate antibiotic regimens, but the disease may be followed by post-infectious sequelae in some patients." Current evidence from "big data" analysis indicates that 36-63% of patients treated with IDSA-recommended short-course antibiotics may fail this therapy (4-6). The concept of "post-infectious sequelae" ignores the extensive literature on persistent Borrelia burgdorferi (Bb) infection despite antibiotic treatment (1,7).
The primer states that "Infection through alternate modes of transmission, including transfusion, sexual contact, semen, urine, or breast milk, has not been demonstrated." This is a very strong statement that ignores growing evidence of other modes of Bb transmission, especially via pregnancy and sexual contact (8-10). The primer states that Borrelia does not produce its own matrix degrading proteases. This statement ignores the description of a Bb aggrecanase that plays a role in tissue invasion by the spirochete and probably facilitates chronic infection as well (11,12).
Neurological syndromes associated with Bb infection are considered "controversial" by IDSA proponents because only hard neurological signs (Bell's palsy, meningoencephalitis) are accepted as significant by that group. In contrast, many Lyme patients have only soft neurological signs (cognitive and memory problems, severe fatigue, neuropathy), and these features of chronic Lyme disease are ignored by the primer authors despite supportive literature (13,14). The concept that neurological and cardiac involvement in Lyme disease resolves spontaneously, even without treatment, promotes a pet IDSA theme that Lyme disease is a trivial illness. This concept is not supported by recent literature that has documented cardiac deaths in untreated patients (15).
The primer repeats the discredited view that Lyme testing is "virtually 100%" positive after 4-8 weeks of untreated infection. This unreferenced statement ignores the fact that two-tier testing for persistent Bb infection has poor sensitivity (46%) despite excellent specificity (99%) (16,17). The studies that allegedly show high sensitivity of two tier testing used circular reasoning to arrive at this conclusion: patients were chosen because they had positive Lyme tests, and then they had positive Lyme tests (18). Thus the primer propagates one of the biggest myths about Lyme disease diagnosis instead of acknowledging the dreadful state of 30-year-old Lyme serology and the need for better testing, such as companion and molecular diagnostics.
On 2016 Dec 25, Raphael Stricker commented:
Lyme Primer is Obsolete (Part 2)
Raphael B. Stricker, Union Square Medical Associates, San Francisco, CA; Lorraine Johnson, LymeDisease.org, Chico, CA. rstricker@usmamed.com; lbjohnson@lymedisease.org
The primer states that a tick must usually be attached for more than 24 hours to transmit Bb, and that a single 200 mg dose of doxycycline can prevent transmission of Bb. The former statement is not supported by recent literature, especially when coinfecting agents are transmitted along with Bb (19). The latter statement is based on a flimsy study that has been attacked repeatedly for its many flaws (1).
The statement that there has been no evidence of Bb drug resistance ignores studies showing that resistance may occur (20-22). The issue of cyst forms that evade the immune system and antibiotic therapy is also ignored (6,23), and the primer disregards recent literature on antibiotic-tolerant persister organisms in Lyme disease (24-26). Once again the primer propagates the IDSA theme that Lyme disease is a trivial infection, with statements about quality of life after short-course treatment such as this: "Regardless of the disease manifestation, most patients with Lyme borreliosis respond well to antibiotic therapy and experience complete recovery." This statement whitewashes the significant morbidity associated with chronic Lyme disease symptoms (4,5). Approximately 42% of respondents in a survey of over 3,000 patients reported that they stopped working as a result of Lyme disease (with 24% reporting that they received disability as a result of chronic Lyme disease), while 25% reported having to reduce their work hours or change the nature of their work due to Lyme disease (4,5).
The unreferenced statement that two weeks of antibiotics cures Lyme carditis is not supported by the literature (1). The primer has limited the discussion of longer antibiotic treatment for post-treatment Lyme disease to studies by Klempner et al. and Berende et al. The authors ignore the positive studies of Krupp et al. and Fallon et al. showing benefit of longer antibiotic treatment, and they avoid discussion of the deep flaws in the negative Lyme treatment trials that lacked the size and power to yield meaningful results (27,28).
The primer calls the LYMErix(R) Lyme vaccine that was withdrawn from the market "safe and efficacious" and the authors blame Lyme advocacy groups for the vaccine failure. This mantra of "blaming the victims" has become a familiar excuse for the failed vaccine, which generated a class action lawsuit based on its lack of safety (29,30). Until vaccine developers come to grips with the very real potential hazards of Lyme vaccine constructs, a successful Lyme vaccine will remain out of reach.
Under "competing interests", there is no disclaimer by Paul Mead, who is an employee of the Centers for Disease Control and Prevention (CDC). Does this mean that Mead represents the CDC in endorsing this slanted and obsolete view of Lyme disease? If that is the case, it is disturbing that a government agency is shirking its responsibility to lead the battle against tickborne disease and instead endorses a regressive viewpoint that stunts science and harms patients.
References 1. Cameron et al, Expert Rev Anti Infect Ther. 2014;12:1103-35. 2. Johnson & Stricker, Philos Ethics Human Med 2010;5:9. 3. Johnson & Stricker, Clin Infect Dis. 2010;51:1108-9. 4. Johnson et al, Health Policy 2011;102:64-71. 5. Johnson et al, PeerJ 2014;2:e322; DOI 10.7717/peerj.322. 6. Adrion et al, PLoS ONE 2015;10:e0116767. 7. Stricker & Johnson, Infect Drug Resist 2011;4:1-9. 8. Wright & Nielsen, Am J Vet Res. 1990;51:1980-7. 9. MacDonald, Rheum Dis Clin NA. 1989;15:657-677. 10. Stricker & Middelveen, Expert Rev Anti-Infect Ther. 2015;13:1303-6. 11. Russell and Johnson, Mol Microbiol. 2013;90:228-40. 12. Stricker & Johnson, Front Cell Infect Microbiol. 2013;3:40. 13. Cairns & Godwin, Int J Epidemiol. 2005;34:1340-5. 14. Fallon et al, Neurobiol Dis. 2010;37:534-41. 15. Muehlenbachs et al, Am J Pathol. 2016;186:1195-205. 16. Ang et al, Eur J Clin Microbiol Infect Dis. 2011;30:1027-32. 17. Stricker & Johnson, Minerva Med. 2010;101:419-25. 18. Stricker/PMC, Comment on Cook & Puri, Int J Gen Med. 2016;9:427-40. 19. Cook, Int J Gen Med. 2014;8:1-8. 20. Terekhova et al, Antimicrob Agents Chemother. 2002;46:3637-40. 21. Galbraith et al, Antimicrob Agents Chemother. 2005;49:4354-7. 22. Hunfeld & Brade, Wien Klin Wochenschr. 2006;118:659-68. 23. Merilainen et al, Microbiology. 2015;161:516-27. 24. Feng et al, Emerg Microbes Infect. 2014;3:e49. 25. Sharma et al, Antimicrob Agents Chemother. 2015;59:4616-24. 26. Hodzic, Bosn J Basic Med Sci. 2015;15:1-13. 27. Delong et al, Contemp Clin Trials 2012;33:1132-42. 28. Stricker/PMC, Comment on Berende et al, N Engl J Med. 2016;374:1209-20. 29. Marks, Int J Risk Saf Med. 2011;23:89-96. 30. Stricker & Johnson, Lancet Infect Dis. 2014;14:12. Disclosure: RBS and LJ are members of the International Lyme and Associated Diseases Society (ILADS) and directors of LymeDisease.org. They have no financial or other conflicts to declare.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2016 Dec 20, Sin Hang Lee commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2016 Dec 17, Lydia Maniatis commented:
While the title of this article refers to perception, the term is used loosely. The authors are not examining the process or principles via which a percept is formed, but, rather, how certain already-formed features of that percept are used to identify and label various substances whose general appearance is only one, and sometimes not even the most important, characteristic normally used to identify the substance. The question seems trivial and essentially non-perceptual.
An excerpt from the concluding paragraph of the paper may help convey the level of the conceptual discussion:
“…the name “chocolate” is assigned to all viscosities as long as the optical material is chocolate. This presumably reflects the fact that different concentrations and temperatures of chocolate yield a wide range of viscosities, but changes to the surface color and optical appearance are less common… The term “water” specifies a specific colorless transparent appearance and a specific (runny) viscosity. “
What does it mean to say that “the optical material is chocolate.” It seems like just a jargony way of saying “the name chocolate is assigned to anything that looks like chocolate” which begs the (trivial) question, and which isn’t even necessarily true, since the dispositive feature of chocolate is the flavor. Similarly, how do the authors distinguish between the label "water" and the labels "alcohol," "white vinegar," "salt solution" etc?
The distinction the authors are making between “optical” and “mechanical” properties indicates they haven’t understood the problem of perception. It’s not clear what distinction they are making between “optical appearance” and simply "appearance." In the category of “optical” characteristics they place color, transparency, gloss, etc. But color, transparency, gloss as experienced by observers are perceptual characteristics, and as such are in exactly the same category as perceived “mechanical” characteristics, among which the authors place perceived viscosity.
That the distinction they are making is a perceptual one is in no doubt as they are using images as stimuli, i.e. stimuli whose perceived "optical and mechanical" properties differ greatly from their physical properties. Even if they were using actual objects, the objection would be the same, as the actual stimulus would be the retinal projection, which contains neither color, viscosity, etc.
It is also inexplicable to me why the authors would refer to “optical appearance” as equivalent to “low-level image correlates.” Converting a retinal projection into a percept containing features such as transparency or color or gloss requires the very highest level of visual processes.
All in all, the article conveys conceptual confusion about the basic problem of perception, let alone how it is achieved, while the problem chosen for study doesn’t touch on any of these problems or solutions.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2016 Dec 22, William Davies commented:
I would like to congratulate the authors on their interesting study. We have recently shown that acute inhibition of the steroid sulfatase enzyme in new mouse mothers results in increased brain expression of the Nov/Ccn3 gene; as there is some evidence for the extracellular CCN3 protein interacting with integrin B1, I wondered whether the authors had, or had considered, looking at CCN3 levels in their cellular model?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Mar 19, Harri Hemila commented:
Problems in study inclusions, in data extraction, and in the scales may bias the meta-analysis on vitamin C and post-operative atrial fibrillation (POAF)
Hu X, 2017 state in their methods that “studies that met the following criteria were included: (1) randomized controlled trials (RCTs) of adult patients who underwent cardiac surgery; (2) patients randomly assigned to receive vitamin C or placebo …”. However, Hu X, 2017 included the study by Carnes CA, 2001 although that was not an RCT; instead, “an age- and gender-matched control group (not receiving ascorbic acid) was retrospectively selected”. In addition, the Hu X, 2017 meta-analysis did not include the data of 2 rather large US trials that found no effect of vitamin C against POAF and thus remained unpublished, leading to publication bias; see Hemilä H, 2017. Furthermore, Hu X, 2017 claimed that “funnel plots showed no evidence of publication bias”, but the existence of the 2 unpublished US studies refutes that statement.
Furthermore, Altman DG, 1998 pointed out that “the odds ratio [OR] should not be interpreted as an approximate relative risk [RR] unless the events are rare in both groups (say, less than 20-30%)”. However, in Fig. 2 of Hu X, 2017, the lowest incidence of POAF in the placebo groups was 19%, and 6 of the 8 studies had a POAF incidence over 30% in their placebo groups. In such a case the OR does not properly approximate the RR, and the authors should have calculated the effect on the RR scale instead.
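Altman's caveat can be checked numerically. The figures below are hypothetical (not taken from the included trials) and merely illustrate how, at event rates above 30%, the OR overstates the corresponding RR:

```python
# Hypothetical illustration: with a common event (POAF in 50% of
# placebo vs 35% of vitamin C patients), the odds ratio suggests a
# larger effect than the risk ratio describing the same data.

def risk_ratio(events_t, n_t, events_c, n_c):
    """Relative risk: ratio of event proportions (treatment/control)."""
    return (events_t / n_t) / (events_c / n_c)

def odds_ratio(events_t, n_t, events_c, n_c):
    """Odds ratio: ratio of event odds (treatment/control)."""
    odds_t = events_t / (n_t - events_t)
    odds_c = events_c / (n_c - events_c)
    return odds_t / odds_c

rr = risk_ratio(35, 100, 50, 100)
orr = odds_ratio(35, 100, 50, 100)
print(f"RR = {rr:.2f}, OR = {orr:.2f}")  # RR = 0.70, OR = 0.54
```

With rare events (say 3% vs 2%) the two measures nearly coincide, which is why the approximation is acceptable only at low incidence.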
In their Fig. 4, Hu X, 2017 state that the mean duration of intensive care unit (ICU) stay in the vitamin C group was 24.9 hours in Colby JA, 2011. However, Colby JA, 2011 reported in their Table 1 that the duration of ICU stay was 249.9 hours, i.e. 10 times greater. Evidently, such an error leads to bias in the pooled estimate of effect, but also leads to exaggeration of the heterogeneity between the included trials.
Hu X, 2017 calculated the effect of vitamin C on the duration of ICU stay and of hospital stay on the absolute scale, i.e. in days, although there were substantial variations in the placebo groups, and thus the relative scale would have been much more informative (Hemilä H, 2016). As an illustration of this difference between the scales, Hemilä H, 2017 calculated that the effect of vitamin C on hospital stay in days was significantly heterogeneous with I² = 60% (P = 0.02). In contrast, the effect of vitamin C on hospital stay on the relative scale was not significantly heterogeneous with I² = 39% (P = 0.09). The lower heterogeneity on the relative scale is explained by the adjustment for the baseline variations in the studies.
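The scale effect described above can be sketched with hypothetical numbers (not taken from the trials in question): two trials with very different baseline lengths of stay can show an identical relative reduction while differing four-fold in absolute days, which inflates apparent heterogeneity when pooling on the absolute scale:

```python
# Hypothetical: two trials with different baseline lengths of stay.
# Both show the same 10% relative reduction, but very different
# absolute reductions in days.
trials = [
    {"control_days": 5.0,  "vitc_days": 4.5},
    {"control_days": 20.0, "vitc_days": 18.0},
]
for t in trials:
    absolute = t["control_days"] - t["vitc_days"]      # days saved
    relative = 1 - t["vitc_days"] / t["control_days"]  # fraction saved
    print(f"absolute: {absolute:.1f} days, relative: {relative:.0%}")
# absolute: 0.5 days, relative: 10%
# absolute: 2.0 days, relative: 10%
```

A meta-analysis in days would see effects of 0.5 and 2.0 and report heterogeneity; the relative scale sees a consistent 10% in both trials.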
Hu X, 2017 write “compared with placebo group, vitamin C administration was not associated with any length of stay, including in the ICU”. However, Hemilä H, 2017 calculated that there was strong evidence from 10 RCTs that vitamin C shortened ICU stay in the POAF trials by 7% (P = 0.002).
Hu X, 2017 also concluded that vitamin C did not shorten the duration of hospital stay, whereas Hemilä H, 2017 calculated that vitamin C shortened hospital stay in 11 POAF trials by 10% (P = 10⁻⁷).
Although the general conclusion of Hu X, 2017 that vitamin C has effects against POAF seems reasonable, there is very strong evidence of heterogeneity in the effect. Five trials in the USA found no benefit, discouraging further research in the USA. However, positive findings in less wealthy countries suggest that the effect of vitamin C should be further studied in such countries, Hemilä H, 2017.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Sep 24, Harri Hemila commented:
A manuscript version is available at HDL.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 21, Stuart RAY commented:
Here is the retraction: Finelli C, 2016
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2016 Dec 14, Matthew Wargo commented:
While this report increases our understanding of SphR, many of the findings, including SphR's direct transcriptional control of the neutral ceramidase (gene designation cerN (PA0845)), promoter mapping, and binding site determination, were previously reported in LaBauve and Wargo 2014 (PMID 24465209).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2016 Dec 12, Lydia Maniatis commented:
It’s amazing how shy Agaoglu and Chung (2017) are in making their legitimate point, and how gently and casually they make it when they finally do get there (even giving credit where credit is not due). We can’t be sure, until we get to the very end (of the article, not the abstract), what the authors think about the question posed, somewhat awkwardly, in their title: “Can (should) theories of crowding be unified?” The title begs the question: if they can be unified, then why shouldn’t they be? The explanation for the awkwardness, also saved for the end, is that the term “crowding” is conceptually vague, and has been used as a catch-all term for all manner of demonstrations of the fact that peripheral vision is not as good as central vision. “All in all, although we applaud the attempts at unifying various types of response errors in crowding studies, we think that without a better taxonomy of crowding—instead of calling everything crowding, perhaps introducing types of crowding (as in the masking literature)—unifying attempts will remain unsuccessful.” In other words, the issue of a unifying “theory of crowding” is moot given that we’re talking about a hodge-podge of poorly-understood phenomena.
What is also moot are the experiments reported in the article. While I think they have a lot of problems common to the field (layers of untested/untestable or even false, arbitrarily chosen assumptions), that doesn’t matter. As should be clear from the authors’ own discussion, no new experiments were needed to make the necessary theoretical point, which is that a clear conceptual understanding of phenomena needs to precede any attempt at a technical, causal explanation. Clearly, the experiments added by the authors do not make more acute the stated need for “a better taxonomy of crowding.” The redundancy of the study is reflected in the statement (caps mine) that “Our empirical data and modeling results ALSO [i.e. in addition to previous existing evidence] suggest that crowded percepts cannot be fully accounted for by a single mechanism (or model).” They continue to say that “The part of the problem is that many seemingly similar but mechanistically different phenomena tend to be categorized under the same umbrella in an effort to organize the knowledge in the field. Therefore, constraints for theoretical models become inflated.”
Agaoglu and Chung’s (2016) message, in short, is that the heterogeneous class of phenomena/conditions referred to as “crowding” are clearly not candidates for a common explanation or “model” as they routinely produce mutually conflicting experimental outcomes (one model works for this one but not that one, etc) and that there is a need for investigators to clarify more precisely what they are talking about when they use the term crowding. While, as mentioned, they could surely have made their argument without new experiments, they couldn’t have published it, as the rational argument in science has unfortunately been demoted and degraded in favor of uninterpretable, un-unifyable, premature p-values.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2016 Dec 13, Lydia Maniatis commented:
Witzel et al’s (2016) degree of ignorance of the fundamentals of their chosen topic is capable of disconcerting even the most jaded observer of the vision literature (me). Intentionally or not, it reflects the tenacious, subterranean grip of the behaviorist tradition (consideration only of simple signal (stimulus)/response paradigms) despite overwhelming evidence, both logical and empirical, of its inadequacy. That editors allow such products to pass into the literature is, I guess, also par for the course.
The problem here is that the authors simply ignore one of the most basic facts about color perception, though it is highly relevant to their problem of interest. It is not clear from the text whether they are even aware of this fact, i.e. that the color perceived at any given location in the visual field is contingent on the light reflected both from the local segment of the scene and from areas in the scene as a whole (both adjacent and non-adjacent), and more specifically, on the structural relationships (and their implications) of the light-reflecting areas in question. These empirical facts are not in the least in question, yet for Witzel et al (2016) they might as well not exist. They acknowledge only local contrast and adaptation as possible reasons why similar local “color signals” can produce different color experiences:
“To clarify the role of metamer mismatching in color constancy, we investigated whether metamer mismatching predicts the variation of performance in color constancy that cannot be attributed to adaptation and local contrast.”
“In the present study, we tested different high-level (color categories) and sensory factors (metamer mismatching, sensory singularities, and cone ratios) that are likely to affect performance in color constancy beyond what is predicted by adaptation [for some reason “contrast” has been dropped].”
“However, it is known that color appearance and color naming can be influenced by context, such as local contrast and adaptation (Hansen et al., 2007).” Their conclusion that “a considerable degree of uncertainty (about 50%) in judging colors across illuminations is explained by the size of metamer mismatch volumes” is meaningless since results are condition-sensitive and the authors have not considered the relevant confounds (e.g. figure-ground structure of the visual field).
Bizarrely, they speculate that the unexplained “50%” of the failures of color constancy “beyond what is predicted by adaptation…may be rather the result of linguistic categorization.” Anything but consider the alternative that is well-known and rather well-understood. (They also considered the bizarre notion of the role of “color singularities” i.e. that the visual system “maps the sensory signal that results from looking directly at the light of the illumination (illuminant signal).” Given that their stimuli are pictorial, I don’t even know how to interpret their use of the term color singularity. At any rate there is no such illumination signal, as has been proven both logically and empirically (it would, for example, eliminate the possibility of pictorial lighting effects).
The presence of such articles in the vision science literature is mind-boggling.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2016 Dec 13, Lydia Maniatis commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 17, Andrey Khlystov commented:
Gentlemen, The problem with your arguments is that Brand II, though being the “cleaner” of the three we tested, still produces very high aldehyde emissions. Three out of five V2 liquids that we tested exceeded the one time (one time!) exposure limits in a single puff. They also produced higher emissions than non-flavored liquids used in more powerful Brand I and III e-cigarettes. Surely you can check that. Please see our reply to Farsalinos et al. letter to ES&T – there are other studies that found even higher aldehyde concentrations than we did. High aldehyde emissions are not limited to a single study or a single liquid. We also demonstrate that “dry puff” arguments that Farsalinos et al. use to dismiss all high aldehyde studies have absolutely no factual basis. I doubt I can add anything else to this discussion except for reminding you that the strength of science is not only in reproducing results, but also in not cherry-picking studies and data that fit one’s theories or expectations. I do appreciate your efforts in clarifying the benefits and risks of e-cigarette use and wish you success in this endeavor.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 13, Peter Hajek commented:
Dr Khlystov, I understand that your laboratory has a good track record. Your findings are potentially important, but they do need to be replicated so a possibility that they were an artefact of some part of your procedure is ruled out. The reason for wanting to test Brand 1 liquids is as follows. If other liquids show no alarming toxicant levels (which is possible because in previous studies where dry puffs were excluded, no such levels were found), the next level of explanation will be that the levels may be low in other liquids, but they were high in the liquid you tested. It is of course possible that your results will be replicated, but if not, this would necessitate another round of testing of the liquids you used. Testing your Brand 1 liquids straight away would remove this potential expense and delay in clarifying the issue. Providing information on which liquids you used, if this is available, should be simple and uncontroversial.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 11, Konstantinos Farsalinos commented:
Dr Khlystov mentions that there is information about 5 samples tested in their study. Unfortunately he refers to the samples with the lowest (by far) levels of carbonyl emissions. In fact, only one of these samples had high carbonyl emissions, unlike Brand I samples which showed very high levels of toxic emissions (especially for 3 of the samples, the levels found were extreme).
Liquids from different batches may not be the same, but finding almost 7000 ug/g of formaldehyde compared to < 0.65 ug/g from an unflavored sample can be easily reproduced with reasonable accuracy even with different batches. A replication study finding levels of carbonyl emissions lower by orders of magnitude cannot be attributed to different batches.
As I mentioned in my ES&T letter to the editor, the levels found by Khlystov and Samburova could only be explained by dry puffs, but this has been excluded because of the findings in unflavored liquids. Also, previous studies with verified realistic conditions (i.e. no dry puffs) have found aldehyde levels orders of magnitude lower compared to their study. This creates a crucial need to replicate the samples with the highest levels of carbonyl emissions, despite the reassurance about the laboratory quality. Replication is the epitome of science. But the authors are not providing the necessary information for these liquids.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 11, Andrey Khlystov commented:
Dr. Hajek,
You have not read the paper carefully. As I said earlier, our paper has information on 5 liquids that should be enough to get anybody started in replicating the data. They are from Brand II, which is V2 Standard (see Table 1). The brand is very easy to find (www.v2.com). The other liquids were from local vape shops, but this is of little consequence for the study, see below.
You seem to miss the point of our paper. Please let me briefly summarize its message: flavors, especially at higher concentrations, appear to dominate aldehyde production. To check the generality of our observations, all one needs to do is take any flavored liquid and test it at different concentrations and/or against unflavored liquids and see what happens.
Contrary to what you suggest, there is little value in testing exactly the same liquids. Testing specific liquids or flavors was not the point of our study. We observed that a fairly wide variety of randomly selected flavored liquids produce significant aldehyde emissions, with aldehyde profiles varying among different flavors. If only PG or VG were responsible for the majority of aldehyde emissions, there would be no differences among liquids that have the same PG/VG composition. Yet, we observed significant differences among such liquids.
I also doubt that liquids from different batches are exactly the same, especially from small-time operations. At the time of writing the paper, we did not measure concentrations of liquid constituents. Having understood their role, we are controlling for liquid composition in our on-going study. As we mentioned in the letter to ES&T, we see appreciable aldehyde concentrations in both mainstream and secondary aerosols for a wide variety of e-cigarettes and liquids that users bring to our study. It appears that high aldehyde emissions are not limited to the 15 liquids and 3 e-cigarette brands we tested in our original study; the problem seems to be quite universal.
As we stated in our letter to ES&T, we are calling for checking findings of ANY e-cigarette study. I would like to note, however, that if one doubts our measurements, he or she needs to come up with a plausible mechanism, other than the effect of flavors, that explains why unflavored liquids produced significantly lower emissions than flavored ones or why a diluted flavored liquid was producing less than a more concentrated one. Please note, this was observed for the same e-cigarette, the same power output, and the same experimental setup. As of now and as far as I know, nobody came up with a single credible reason to doubt our results. I would also like to stress that aldehyde measurements are not trivial. We have over 20 years of experience in these measurements with a solid track record of QA/QC. Please rest assured - we stand by the quality of our data.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 09, Peter Hajek commented:
Your paper does not identify the actual products. Can you let others know the product name and the online address where it was purchased? There is no point trying to clarify your finding with different e-liquids.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 08, Andrey Khlystov commented:
Please read our paper carefully. There is information on 5 liquids that can be easily ordered online. Good luck with your experiments.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 08, Peter Hajek commented:
In their just published response to Farsalinos et al. comment on these unexpected results, the authors acknowledge that replications are needed. Could they please respond to repeated requests to specify which e-liquids they used so a replication can be performed?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2016 Dec 12, Peter Hajek commented:
The authors are correct that other studies are needed to check this phenomenon. Can they specify which e-liquids they tested so a replication is possible?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 06, Joe G. Gitchell commented:
In the brief report at the link, we have provided an update to include the 2015 NHIS data.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Feb 21, Stuart RAY commented:
Definitely worth reading, a letter from one of the victims: Dansinger M, 2017
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
www.ncbi.nlm.nih.gov
-
On 2017 Feb 14, Lydia Maniatis commented:
In this article, Felin et al (2016) mix up two different issues, which, like oil and water, intermingle but don’t mix. Put another way, the authors alternate between the two poles of a false dichotomy. On the one side is a truism, on the other an epistemological fallacy.
The latter involves the view expressed by Koenderink (2014) in an editorial called “The All-Seeing-Eye?” Here, he essentially makes the old argument that because all knowledge, and all perception, is indirect, inferential and selective, it makes no sense to posit a unique, real world. As I have pointed out in Maniatis (2015), the anti-realist positions of Koenderink (2014), Rogers (2014), Hoffman (2009) and Purves et al (2014) (all of whom are cited in this respect by Felin et al (2016)) are paradoxical and inconsistently asserted. Rogers (2014), for example, describes the concept of “illusion” as invalid, while at the same time saying that “illusions” are useful. Inconsistency is inevitable if we want to make references to an objective world on the one hand (e.g. in making scientific statements), and at the same time claim that there is no unique, objective world.
It is clear from the text that Felin et al (2016) are not, in fact, adopting an anti-realist view. It has simply been inappropriately mixed into a critique of poor practices in the social sciences. These practices involve setting up situations in which participants make incorrect inferences, and treating this as an example of bias or irrationality. As the authors correctly discuss, this is pointless and inappropriate; the relevant question is how do organisms manage, most of the time, to make correct or useful inferences on the basis of information that is always inadequate and partial? What implicit assumptions allow them to fill in the gaps?
These are basic, fundamental points, but they do not license a leap to irrationality. Arguing that it is not useful to treat a visual illusion, for example, as simply a mistake is not a reason to reject the distinction between veridical and non-veridical solutions, and this applies also to cognitive inferences about the world.
As the authors themselves note, their points are not new - despite the existence of researchers who have ignored or failed to understand them. So they seem to be giving themselves a little too much credit when they describe their views as “provocative” and a prescription for a “radically different, organism-specific understanding of nature, perception, and rationality.”
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jun 14, Gregory Francis commented:
An analysis of the statistics reported in this paper suggests that the findings appear too good to be true. Details are published as an eLetter to the original article at
http://www.jneurosci.org/content/36/49/12448/tab-e-letters
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jan 15, Ashraf Nabhan commented:
I thank the authors for this elegant work. The results are reassuring regarding the risk of congenital malformations after ACE inhibitor exposure during the first trimester. The authors wisely note to caregivers that women taking ACE inhibitors during their reproductive years should be transitioned off these medications early in pregnancy to avoid the known adverse fetal effects associated with late-pregnancy exposure. This note should have appeared in the abstract: most readers read only the abstract, as many do not have access to the full text and some focus only on the authors' conclusion.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2016 Dec 13, Stuart RAY commented:
This finding is intriguing, but the reported findings seem inconclusive, for the following reasons: (1) Figure 2a (the tree of sequences from the Core/E1 region) shows weak clustering, with no significant bootstrap value to demonstrate confident clustering with subtype 3a and no significant bootstrap value within genotype 1 to exclude the query sequence; (2) the nearly full-length sequence was cobbled together from 12 overlapping PCR amplifications, raising the possibility that this was a mixed infection with different regions amplified from different variants in the blood, rather than a single recombinant genome; and (3) the title says "not uncommon" but it appears that detailed study was performed on only one specimen. It would be more convincing if a single longer amplicon, with phylogenetically informative sequences on both sides of the breakpoint and showing a reproducible breakpoint, were recovered from separate blood aliquots (i.e. fully independent amplifications). Submission of the resulting sequence(s) to GenBank is a reasonable and important expectation of such studies. In addition, the title of the paper should not say "not uncommon" unless the prevalence can be estimated with some modicum of confidence.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jul 14, Randi Pechacek commented:
Russell Neches, first author of this paper, wrote a blog post on microBEnet explaining the process and background of this experiment.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2016 Dec 07, Raphael Stricker commented:
Serological Test Sensitivity in Late Lyme Disease.
Raphael B. Stricker, MD, Union Square Medical Associates, San Francisco, CA. rstricker@usmamed.com
Cook and Puri have written an excellent review of the sorry state of commercial two-tier testing for Lyme disease (1). Unfortunately the authors failed to address the myth of high serological test sensitivity in late Lyme disease.
In the review, Figure 4 and Table 7 show a mean two-tier serological test sensitivity of 87.3-95.8% for late Lyme arthritis, neuroborreliosis and Lyme carditis. However, this apparently high sensitivity is based on circular reasoning: in order for patients to be diagnosed with these late conditions, they were required to have clinical symptoms AND POSITIVE SEROLOGICAL TESTING. Then guess what, they had positive serological testing! This spurious circular reasoning invalidates the high sensitivity rate and should have been pointed out by the authors of the review.
As an example, the study by Bacon et al. (2) contains the following language: "For late disease, the case definition requires at least one late manifestation AND LABORATORY CONFIRMATION OF INFECTION, and therefore the possibility of selection bias toward reactive samples cannot be discounted" (emphasis added). Other studies of late Lyme disease using spurious circular reasoning to prove high sensitivity of two-tier serological testing have been discussed elsewhere (3-5).
In the ongoing controversy over Lyme disease, it is important to avoid propagation of myths about the tickborne illness, and insightful analysis of flawed reasoning is the best way to accomplish this goal.
References
1. Cook MJ, Puri BK. Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy. Int J Gen Med 2016;9:427–40.
2. Bacon RM, Biggerstaff BJ, Schriefer ME, et al. Serodiagnosis of Lyme disease by kinetic enzyme-linked immunosorbent assay using recombinant VlsE1 or peptide antigens of Borrelia burgdorferi compared with 2-tiered testing using whole-cell lysates. J Infect Dis. 2003;187:1187–99.
3. Stricker RB, Johnson L. Serologic tests for Lyme disease: More smoke and mirrors. Clin Infect Dis. 2008;47:1111–2.
4. Stricker RB, Johnson L. Lyme disease: the next decade. Infect Drug Resist. 2011;4:1–9.
5. Stricker RB, Johnson L. Circular reasoning in CDC Lyme disease test review. PubMed Commons comment on: Moore A, Nelson C, Molins C, Mead P, Schriefer M. Current guidelines, common clinical pitfalls, and future directions for laboratory diagnosis of Lyme disease, United States. Emerg Infect Dis. 2016;22:1169–77.
Disclosure: RBS is a member of the International Lyme and Associated Diseases Society (ILADS) and a director of LymeDisease.org. He has no financial or other conflicts to declare.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 30, Alejandro Montenegro-Montero commented:
I read this very interesting review summarizing recent global studies aimed at characterizing circadian gene expression in mammals. It presents a nice summary of the different ways in which the clock can impact gene expression and additionally, briefly discusses various statistical methods currently used for the identification of rhythmic genes from global data sets. This is a very welcome review on the subject.
Readers might also be interested in our discussion of the relative contributions of the different stages of gene expression in determining rhythmic profiles in eukaryotes. In our commentary, "In the Driver's Seat: The Case for Transcriptional Regulation and Coupling as Relevant Determinants of the Circadian Transcriptome and Proteome in Eukaryotes", we discuss several scenarios in which gene transcription (even when apparently arrhythmic) might play a much more relevant role in determining oscillations in gene expression than currently estimated, regulating rhythms at downstream steps. Further, we argue that due to both biological and technical reasons, the jury is still out on the relative contributions of each of the different stages of gene expression in regulating output molecular rhythms.
We hope that reviews like the one by Mermet et al., and commentaries like the one presented in this post, stimulate further discussions on this exciting topic: there are still many important challenges ahead in the field of circadian gene regulation.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2016 Dec 08, Lydia Maniatis commented:
Part 2: 2. The terms “adaptation” and “aftereffects.” There is no way to discern the meaning of these terms in the context of this study. The authors seem to be using the term very loosely: “….many aspects of visual perception are adaptive, such that the appearance of a given stimulus may be affected by what has been seen before.” The example they give involves motion aftereffects, the cause of which, as far as I can discover, is still unknown. Continuing, they declare that “This perceptual bias is known as an aftereffect [last term italicized].”
This exposition conflates all possible effects of experience on subsequent perceptual impressions with the color aftereffects based on color opponency at the retinal level, and with the less well-understood motion aftereffects. In other words, we’re lumping together the phenomenon known as “perceptual set,” for example, with color aftereffects, as well as with motion aftereffects. In these latter cases, at least, it is intelligible to talk about aftereffects as being “opposite” to the original effects. In the case of motion, we are fairly straightforwardly talking about a percept of motion in the opposite direction. In the case of color, the situation is not based on percepts having perceptually opposing characteristics; what makes green the opposite of red is more physiological than perceptual. So even with respect to what the authors refer to as “low-level” effects, ‘opposite’ means rather different things.
The vagueness of the ‘opposite’ concept as used by Burton et al (2016) is expressed in their placement of quotation marks around the term: “In the facial expression aftereffect, adaptation to a face with a particular expression will bias participants’ judgments of subsequent faces towards the “opposite” expression: The expression with visual characteristics opposite those of the adaptor, relative to the central tendency of expressions.”
All of the unexamined theoretical assumptions implicit in the terms ‘visual characteristics,’ ‘adaptor’ ‘central tendency’ and, therefore, ‘opposite’ are embedded in the uncritically-adopted procedure of Tiddeman et al (2001). While the example the authors give – “Where fear has raised eyebrows and an open mouth, anti-fear has lowered eyebrows and a closed mouth, and so on” may seem straightforward, neither it nor the procedure is as straightforward as we might assume. The devil is in the “and so on.” First, “lowered eyebrows” is a relative term; lowered in relation to what? Different faces have different relationships between eyebrows, eyes, nose, hairline, etc. And a closed mouth is a very general state. Second, this discrete, if vague, description doesn’t directly reference the technical procedure developed by Tiddeman et al (2001). When we are told by Burton et al (2016) that “anti-expressions were created by morphing along a trajectory that ran from one of the identity-neutral faces [they look like nothing?] through the average expression and beyond it to a point that differed from the average to the same extent as the original expression,” we have absolutely no way to interpret this without examining the assumptions and mathematics utilized by Tiddeman et al (2001). On a conceptual and practical level, readers and authors are blind as to the theoretical significance of this manipulation and the description of its products in terms of “opposites.”
In addition, there is no way to distinguish the authors’ description of adaptation from any possible effect of previous experience, e.g. to distinguish it from the previously-mentioned concept of “perceptual set,” or from the fact that we perceive things in relative terms; a baby tiger cub, for example, evokes an impression of smallness while a smaller but fully-grown cat large for its size might evoke the impression of largeness. Someone used to being around short people might find average-height people tall, and vice versa. Should we lump this with color, motion aftereffects and perceptual set effects? Maybe we should, but we need to make the case, we need a rationale.
Implications: The authors say that their results indicate that “expression aftereffects” may have a significant impact on day-to-day expression perception, but given that they needed to train their observers to deliver adequate results, and given the very particular conditions that they chose (without explanation), this is not particularly convincing. Questions about the specifics of the design are always relevant in this type of study, where stimuli are very briefly presented. Why, for example, did Burton et al (2016) use a 150-millisecond ISI, versus the 500-millisecond ISI used by Skinner and Benton (2010)? With such tight conditions, such decisions can obviously influence results, so it’s important to rationalize them in the context of theory.
It should be obvious already, but the following statement, taken from the General discussion, is an apt demonstration of the general intellectual vagueness of the article: “Face aftereffects are often used to examine the visual representation of faces, with the assumption that these aftereffects tap the mechanisms of visual perception in the same way as lower level visual aftereffects.”
The phrase “in the same way…” is wholly without a referent; we don’t even know what level of analysis is being referred to. In the same way as (retinally-mediated, as far as we understand) color aftereffects? In the same (physiologically not well understood) way as motion aftereffects?” In the same way as the effects of perceptual set? In the same way as seeing size, or shape, or color, etc, in relative terms? What way is “the same way”?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2016 Dec 08, Lydia Maniatis commented:
Part 1: There seems to be a widespread misunderstanding in the vision sciences about the role of experiment in science. The value of experiment rests solely on the clarity, coherence, internal consistency, and respect for known facts of the underlying rationale, certain of whose predictions it is designed to test. The methodological choices made in conducting the experimental test are directly linked to this rationale. (For example, if I did a study on the heritability of a single (assumed) gene underlying nose size (plus “noise”), my results would be interpretable in terms of my assumption (especially if “size” were not clearly defined), even if my assumption were false. This is why it’s important to take care in constructing our hypotheses and tests.) If the underlying concepts are vague, or if the rationale lacks internal consistency, or if it lacks coherence, then the results of the experiment cannot be interpreted (in a scientific sense).
The problems of rationale and method are overwhelmingly on display here. Below, I enumerate and expand on various issues.
- The stimuli: The theory/method behind the (implicitly) theory-laden stimuli (averaged and morphed photos of faces having various expressions) is briefly described as having been adapted from those used by Skinner and Benton (2010). (As the corresponding author has clarified to me, the stimuli are, in fact, the same ones used in that study.) The pre-morphed versions of those stimuli came from a database of 70 individual faces exhibiting various expressions collected by Lundqvist, Flykt & Ohman (1998). This reference is not cited by Burton et al (2016), which I feel is an oversight, and neither Burton et al (2016) nor Skinner and Benton (2010) feel the need to address the sampling criteria used by Lundqvist et al (1998). Sampling methods are a science in themselves, and are always informed by the purpose and assumptions of a study.
However, we’ll assume the investigators evaluated Lundqvist et al’s (1998) sampling technique and found it acceptable, as it’s arguably less problematic than the theoretical problems glossed over in the morphing procedure. The only (non)description of this procedure provided by either Burton et al (2016) or Skinner and Benton (2010) is a single reference, to Tiddeman, Burt, & Perrett,( 2001). A cursory examination of the work reported on by those researchers reveals it to be a wholly inadequate basis for the use Burton et al (2016) make of it.
Tiddeman et al (2001) were interested in the problem of using morphing techniques to age faces. They were trying to improve the technique in comparison to previous attempts, with regard to this specific problem. They didn’t merely assume their computational solutions achieved the aim, but evaluated the results in empirical tests with observers. (However, they, too, fail to describe the source of the images on which they are basing their procedures; the reference to 70 original faces seems the only clue that we are dealing with the sample collected by Lundqvist et al (1998).) The study is clearly a preliminary step in the development of morphing techniques for a specific purpose: “We plan to use the new prototyping and transformation methods to investigate psychological theories of facial attraction related to aging….We’ll also investigate technical extensions to the texture processing algorithms. Our results show that previous statistical models of facial images in terms of shape and color are incomplete.”
The use of the computational methods being tentatively proposed by Tiddeman et al (2001) by Skinner and Benton (2010) and Burton et al (2016) for a very different purpose has been neither analyzed, rationalized nor validated by either group. Rather, the procedure is casually and thoughtlessly adopted to produce stimuli that the authors refer to as exhibiting “anti-expressions.” What this label means or implies at a theoretical level is completely opaque. I suspect the authors may not even know what the Tiddeman et al algorithm actually does. (Earlier work by Rhodes clearly shows the pitfalls of blindly applying computational procedures to stimuli and labeling them on the basis of the pre-manipulation perception. I remember seeing pictures of morphed or averaged “faces” in studies on the perception of beauty that appeared grossly deformed and non-human.)
Averaging of things in general is dicey, as the average may be a completely unrealistic or meaningless value. If we mix all the colors of the rainbow, do we get an “average” color? All the more so when it comes to complex shapes. If we combined images of a pine tree and a palm tree, would the result be the average of the two? What would this mean? Complex structures are composed of multiple internal relationships and the significance of the products of a crude averaging process is difficult to evaluate and should be used with caution. Or not.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2016 Dec 12, David Mage commented:
Carlin and Moon have provided a very thorough review of risk factors for SIDS with one major flaw – a complete lack of recognition that SIDS has a male excess of 50%, corresponding to a male fraction of 0.60, or 3 males for every 2 females. CDC (http://wonder.cdc.gov) reports that for the years 1968-2014, for deaths attributed to SIDS, Unknown (UNK) and Accidental Suffocation and Strangulation in Bed (ASSB), there were 119,201 male and 79,629 female post-neonatal cases (28-364 days), for a male fraction of 0.600. Naeye et al. (PMID 5129451) were, to our knowledge, the first to claim that this 50% male excess in infant mortality must be X-linked. We agreed, and have proposed that an X-linked recessive allele with frequency q = 2/3 that is not protective against acute anoxic encephalopathy would place XY males at risk with frequency q = 2/3 and XX females at risk with frequency q*q = 4/9 (PMID 9076695, 15384886).
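The arithmetic of this X-linked model is easy to verify: if males are at risk with probability q and females with probability q², then with equal numbers of male and female births the predicted male fraction among cases is q/(q + q²) = 1/(1 + q) = 0.6 for q = 2/3, matching the CDC counts quoted above. A minimal check (Python, using only the figures given in the comment):

```python
# Check of the X-linked recessive SIDS model described in the comment.
# Assumption: equal numbers of male and female births, so the male
# fraction among cases is q / (q + q^2) = 1 / (1 + q).
from fractions import Fraction

q = Fraction(2, 3)        # proposed frequency of the non-protective allele
male_risk = q             # XY males need one copy
female_risk = q * q       # XX females need two copies

predicted_male_fraction = male_risk / (male_risk + female_risk)
print(predicted_male_fraction)           # 3/5

# CDC (1968-2014) post-neonatal SIDS/UNK/ASSB counts quoted in the comment
males, females = 119_201, 79_629
observed_male_fraction = males / (males + females)
print(round(observed_male_fraction, 3))  # 0.6
```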
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 22, Atanas G. Atanasov commented:
Dear Colleagues, thank you so much for the excellent collaborative work! Consumer Health Digest Scientific Abstract of this article is now available at: https://www.consumerhealthdigest.com/brain-health/protection-against-neurodegenerative-diseases.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 13, Konstantinos Fountoulakis commented:
This paper compares CBT with short-term psychoanalytical therapy and with a brief psychosocial intervention. The results suggest no difference between the three types of intervention, and the conclusion is as follows: ‘Short-term psychoanalytical psychotherapy is as effective as CBT and, together with brief psychosocial intervention, offers an additional patient choice for psychological therapy, alongside CBT, for adolescents with moderate to severe depression who are attending routine specialist child and adolescent mental health service clinics’. Essentially, this conclusion is misleading. The description of the ‘brief psychosocial intervention’ suggests it was something between a general psychoeducational approach and supportive psychotherapy, delivered by the usual general staff of the setting, without any specialized training, in a treatment-as-usual approach. Therefore the interpretation of the results is either that the brief psychosocial intervention was a kind of placebo, in which case the ‘active’ psychotherapies did not differ from a placebo condition (a negative trial), or, if the brief psychosocial intervention is indeed efficacious, that the results do not support the added value of the more complex, demanding, expensive and time-consuming CBT and psychoanalytical treatments delivered by highly trained therapists over simpler techniques applied by nurses. In my opinion, the results of this study do not support the applicability of cognitive and psychoanalytical theories in the treatment of adolescent depression, but this is not entirely clear and is debatable. What is clear is that CBT and psychoanalytic therapy are not better than the simpler and cheaper treatment-as-usual psychosocial interventions that are already routinely applied in many clinical settings.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 31, Krishnan Raghunathan commented:
"We would like to acknowledge the work of Dr. Ingela Parmryd and colleagues (1), who have shown that in living cells, crosslinking of, for instance, CTxB induces ordered domain formation in an actin-dependent manner.
(1) Dinic, J., Ashrafzadeh, P., Parmryd, I. (2013) "Actin filaments attachment at the plasma membrane in live cells cause the formation of ordered lipid domains" Biochimica et Biophysica Acta - Biomembranes, 1828(3): 1102-1111"
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jul 12, Bjarke M Klein commented:
Bjarke Mirner Klein, PhD, Director, Biometrics, Ferring Pharmaceuticals, Copenhagen, Denmark.
We thank the authors for their interest in our trial. This publication is the first reporting data from the ESTHER-1 trial and therefore presents the overall results at a high level. More publications presenting full details on different aspects of the data are in progress. In their comment, they raise the following three concerns: 1) the presentation of the number of oocytes retrieved, 2) the decision not to present the so-called ‘inverted’ responders, and 3) the possibility that the individualised dosing regimen may increase excessive response in low AMH patients. Each of these will be addressed in the following.
Concerning the presentation of the number of oocytes retrieved: Strict criteria for cycle cancellation due to poor or excessive response were specified in the trial protocol. Since a priori both scenarios were considered a possibility, and since there is no consensus on how the number of oocytes retrieved should be imputed for each scenario, it was decided to present the data as in Table 3. This is a transparent way of presenting the results, since the reader can derive the numbers for all subjects using his/her own assumptions on how to impute the cycle cancellations. This is exactly what the two authors have done, assuming that cycle cancellations due to poor response should be counted as zero oocytes retrieved in the calculations. It should be noted that the treatment difference remains the same irrespective of the method of display.
Concerning the ‘inverted’ responders: The terminology of ‘inverted’ ovarian response may be misunderstood, since it may suggest that, e.g., a subject who would have <4 oocytes retrieved using a standard starting dose of follitropin alfa (GONAL-F) would have ≥20 oocytes retrieved with the individualised follitropin delta (REKOVELLE®) dose. This would indeed be a dramatic and surprising impact considering that the maximum daily dose of follitropin delta is 12 mcg and the starting dose of follitropin alfa is 150 IU (11 mcg). However, as illustrated in Figures 1A and 1B, the consequences of the individualised follitropin delta (REKOVELLE®) dosing regimen are not that drastic. From these figures, it can be observed that the ovarian response in terms of the number of oocytes retrieved is comparable in the mid-range of AMH, while the treatment differences are seen at the lower and higher AMH levels.
Since the individualised dosing algorithm assigns the same daily dose of individualised follitropin delta (REKOVELLE®) to subjects with an AMH <15 pmol/L and gradually decreases the dose as a function of AMH for subjects with AMH ≥15 pmol/L it seems relevant to present the data by these subgroups. As can be seen from Table 3, the individualised follitropin delta (REKOVELLE®) dosing regimen shifts the distribution of oocytes retrieved upwards for subjects with AMH <15 pmol/L while it shifts the distribution downwards for subjects with AMH ≥15 pmol/L. Such a shift in the distribution obviously also affects the tails of the distribution, i.e. in this case the probability of either too low or too high number of oocytes retrieved. For the publication, it was considered relevant to focus on the risk of poor response in the subjects at risk of hypo-response and the risk of excessive response for the subjects at risk of hyper-response.
Concerning the possibility that the individualised dosing regimen may increase excessive response in low AMH patients: Relevant data on OHSS and preventive interventions are presented in Table 3, and the relationship to AMH is illustrated in Figure 1C. The authors are concerned that, since excessive response is not presented for the potential hypo-responders (AMH <15 pmol/L), the overall safety of the individualised dosing is unclear. We take the opportunity to present the data for excessive response in the subjects at risk of hypo-response: the observed incidence of having ≥15 oocytes retrieved among subjects with AMH <15 pmol/L was 6% and 5% in the follitropin delta and follitropin alfa groups, respectively.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 29, Jack Wilkinson commented:
Jack Wilkinson, Centre for Biostatistics, Institute of Population Health, Manchester Academic Health Science Centre (MAHSC), University of Manchester, Manchester, UK.
Sarah Lensen, Department of Obstetrics and Gynaecology, University of Auckland, New Zealand.
Nyboe Andersen and colleagues (2017) present a large randomised study comparing follitropin delta, with dosing based on AMH and body weight, against follitropin alfa, with dose selection by conventional means. Stated key findings of the study include the fact that more women achieved a target response of 8-14 oocytes (43.3% vs 38.4%) in the follitropin delta group, and that fewer women had poor (< 4 oocytes) or excessive (>= 15 or >= 20 oocytes) responses compared to the follitropin alfa group. However, on close inspection of the published study, it is not clear that follitropin delta, administered using this dose selection algorithm, has been shown to be superior to the standard dose of follitropin alfa in relation to response to stimulation.
We note that calculations regarding number of oocytes obtained do not include women who were randomised but subsequently had their stimulation cycle cancelled for anticipated poor response (prior to hCG trigger). Indeed, there were more women with cycles cancelled for anticipated poor response in the follitropin delta group than the follitropin alfa group. The claim that poor response was lower in the follitropin delta arm then appears to be technically correct but potentially highly misleading. It would be preferable to include women with cancelled cycles in the numerator and denominator (they appeared in neither numerator nor denominator in the analysis presented by Nyboe Andersen and colleagues) by setting the number of oocytes recovered for women with cancelled stimulation cycles to be zero. This would rectify any bias resulting from differential cancellation rates between the study arms. It would also preserve the balance over confounding factors produced by randomisation, which is otherwise violated. When we include all patients in this way, the mean number of oocytes retrieved in the follitropin delta and follitropin alfa arms is 9.6 and 10.1, respectively, and the numbers achieving the target response are 275 (41%) and 247 (37%), which gives p=0.15 from a chi-squared test.
A second concern is the fact that the authors present the rate of poor response in the low AMH group and the rate of hyper response in the high AMH group, but do not present the rate of poor response in the high AMH group or the rate of hyper response in the low AMH group. If we refer to hyper responses in low AMH patients and poor responses in high AMH patients as ‘inverted’, then we can calculate the number of inverted responses in each group. In fact, the number of inverted responses are greater in the follitropin delta group (5% vs 4% using 15 oocytes as the threshold for hyper response, or 4% vs 2% using 20 oocytes):
Number inverted (<4 or >=15): 34/640 (5%) in the experimental arm and 25/643 (4%) in the control arm, p=0.233 from Fisher's exact test.
Number inverted (<4 or >=20): 23/640 (4%) in the experimental arm and 11/643 (2%) in the control arm, p=0.038 from Fisher's exact test.
These calculations are conducted in patients achieving hCG trigger, as it does not appear to be possible to identify whether the women with cycles cancelled for poor response were in the high or low AMH stratum from the information presented. The number of hyper responses in the low AMH group is of particular interest, since this represents increased risk to the treated woman and any resulting offspring. Unfortunately, this information is obfuscated by the presentation in the manuscript. We also note that the decision to report the rates of hypo and hyper response only in these subgroups appears to be a departure from the clinicaltrials.gov record for the trial (https://clinicaltrials.gov/ct2/show/NCT01956110).
Given that the claim of the authors in relation to the effectiveness of treatment was one of non-inferiority, the matter of the safety of the individualised dosing regimen compared to standard dosing would appear to be of paramount importance. On the basis of the considerations outlined above, it is unclear that an advantage of the individualised follitropin delta regimen in relation to achieving target ovarian responses has been demonstrated. Moreover, the data leave open the possibility that the individualised regimen may increase excessive responses in low AMH patients. We would like to invite the authors to clarify this point.
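As a quick check of the 2x2 comparisons quoted above, the counts can be re-analysed with a Pearson chi-squared test (1 df, no continuity correction). This is a sketch only: the comment reports Fisher's exact test, which gives slightly different p-values on the same counts, but the conclusions agree.

```python
# Chi-squared test for the 2x2 tables of "inverted" responses quoted in the
# comment. The chi-squared p-values approximate, but do not equal, the
# Fisher exact p-values the comment reports (0.233 and 0.038).
import math

def chi2_2x2(a, n1, c, n2):
    """Pearson chi-squared statistic and p-value for events a/n1 vs c/n2."""
    b, d = n1 - a, n2 - c
    n = n1 + n2
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # survival function of chi2, 1 df
    return stat, p

# Inverted responses (<4 or >=15 oocytes): 34/640 vs 25/643
print(chi2_2x2(34, 640, 25, 643))  # p ~ 0.22, not significant

# Inverted responses (<4 or >=20 oocytes): 23/640 vs 11/643
print(chi2_2x2(23, 640, 11, 643))  # p ~ 0.04, significant at the 0.05 level
```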
A version of this comment has been posted on the journal's website.
References Nyboe Andersen, A., et al. (2017). "Individualized versus conventional ovarian stimulation for in vitro fertilization: a multicenter, randomized, controlled, assessor-blinded, phase 3 noninferiority trial." Fertil Steril 107(2): 387-396 e384.
Conflict of interest statement
JW is funded by a Doctoral Research Fellowship from the National Institute for Health Research (DRF-2014-07-050). The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health. JW also declares that publishing research is beneficial to his career. JW is a statistical editor for the Cochrane Gynaecology and Fertility Group, although the views expressed here are not necessarily those of the group.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
On 2017 Jan 09, Gerhard Holt commented:
Intimate Cohabitants' Shared Microbiomes and Alpha-Synuclein Pathology
Sampson et al.'s findings on the role of gut microbiota in Alpha-synucleinopathies [1] raise a considerable potential concern regarding the microbiome of intimate cohabitants of patients with Alpha-Synuclein pathology, since these cohabitants are likely to have substantial exposure to a shared microbiome.
Do these intimate cohabitants develop more early Alpha-Synuclein pathology than non-cohabitant controls?
Are they at greater risk for Parkinson's disease and other alpha-synucleinopathies?
From an Infectious Disease perspective, this might be an important clinical study.
Perhaps some of the intimate cohabitants are more resistant than others to A-SYN pathology, SCFAs, or the underlying gut microbes, despite similar microbiome exposure?
Perhaps differences in the interactions between their immune systems and their microbiomes alter outcomes?
This might be a fertile area for differential proteome studies.
Given the importance of Sampson et al.'s findings, hopefully a clinical trial of antibiotics followed by (healthy donor) fecal transplant for patients with Parkinson's Disease will soon follow - perhaps with a cross-over design.
Given the low risks of a potential treatment that uses well-known medications and a well-established procedure (fecal transplant), and given the grossly disproportionate potential benefit, hopefully this study will be expedited in humans.
It may be useful however to also study Alpha-Synuclein pathology in their intimate cohabitants relative to controls.
References :
[1] Gut Microbiota Regulate Motor Deficits and Neuroinflammation in a Model of Parkinson’s Disease Sampson, Timothy R. et al. Cell , Volume 167 , Issue 6 , 1469 - 1480.e12
<www.cell.com/cell/pdf/S0092-8674(16)31590-2.pdf>
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-