- Jul 2018
-
europepmc.org
-
On 2017 Aug 15, Varshil Mehta commented:
Great work. Congrats to the authors.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 08, Ellis Muggleton commented:
HEp-2 cells are HeLa cells, not laryngeal cancer cells.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 13, Annie De Groot MD commented:
The author GROOT is actually De Groot. See De Groot, AS in PubMed.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 03, Kenneth Pollard commented:
Last author should be Pollard KM.
Author affiliation for last author should be 5 Department of Molecular Medicine, The Scripps Research Institute, La Jolla, CA, USA 92037. mpollard@scripps.edu.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 07, Clive Bates commented:
Has it occurred to the authors that the value (or 'USP') of new and emerging tobacco or nicotine products like e-cigarettes or heated tobacco products might be that they really are very much "better and safer" than smoking?
No serious scientist doubts this. The question is by how much, with many credible sources suggesting 95% or greater reduced risk (see, for example, the Royal College of Physicians' 2016 report Nicotine without smoke: tobacco harm reduction).
However, the authors' conclusion appears to be inviting regulators to mislead the public about the risks of these products in order to reduce demand for them. There are many problems with such an approach:
- It is ethically improper for the state to intervene in this way to manipulate adult choices by withholding or distorting information (see: Kozlowski LT, 2016).
- The unintended, but wholly foreseeable, effect of trying to prevent people using safer alternatives to cigarettes is not that they quit smoking, but that they carry on smoking - and are harmed as a result of regulatory misinformation.
- How would regulators (or the authors) take responsibility and assume liability for harms arising from deceptive communications that adversely influence behaviour?
- Companies have a right to make true and non-misleading statements about their products. Under what principle should they be prevented from doing that?
The appropriate approach for a regulator is to truthfully inform smokers of the relative risks of different nicotine products. That would allow consumers to make informed choices that could prevent disease, save lives and improve welfare. It is not to enforce abstinence from the use of the legal drug nicotine, which in itself, and without delivery through tobacco smoke, poses low risks to health.
Once again, tobacco control academics proceed from results to conclusions and on to policy prescription without any remotely adequate policy evaluation framework or any apparent awareness of the limitations of their analysis.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 29, Ellen M Goudsmit commented:
I am concerned that two previous efforts to correct factual errors have not been incorporated in the revision.
- I have previously written to the main author that Wallman et al evaluated pacing, as defined by Goudsmit et al (2012). This is very different from the GET protocols used in other RCTs. I suspect that few readers would be aware of the difference between GET and pacing.
- No study assessing GET used the original or revised London criteria for classic ME (Goudsmit et al 2009). The version published by the Westcare ME Task Force is different from both as well as incomplete. Research has indicated that the Westcare ME criteria select a different sample (Jason et al, personal communication). As no study has yet assessed exercise for classic ME, one cannot generalise any conclusion about efficacy from the trials in the review to patients with this disease.
- As pointed out by Professor Jason who devised the Envelope theory, Adaptive Pacing Therapy (APT) is not based on the former. Again, this has been pointed out before.
- APT should not be equated with the strategy of pacing recommended by many self-help groups. Pacing helps (cf. all surveys conducted to date): APT is of little value (White et al, 2011). NB: The PACE trial did not assess pacing.
Science demands precision so I hope that this third attempt to correct errors will be responded to in an appropriate manner. To repeat inaccurate information undermines the scientific process.
Goudsmit EM, Jason LA, Nijs J, et al. (2012) Pacing as a strategy to improve energy management in myalgic encephalomyelitis/chronic fatigue syndrome: A consensus document. Disability and Rehabilitation 34(13): 1140-1147.
Goudsmit EM, Shepherd C, Dancey CP, et al. (2009) ME: Chronic fatigue syndrome or a distinct clinical entity? Health Psychology Update 18(1): 26-33. Available at: http://shop.bps.org.uk/publications/publications-by-subject/health/health-psychology-update-vol-18-no-1-2009.html
Jason LA (2017) The PACE trial missteps on pacing and patient selection. Journal of Health Psychology. Epub ahead of print 1 February.
Jason LA, Brown M, Brown A, et al. (2013) Energy conservation/envelope theory interventions. Fatigue: Biomedicine, Health & Behavior 1(1-2): 27-42.
ME Association (2015) ME/CFS Illness management survey results. ‘No decisions about me without me’. Part 1. Available at: http://www.meassociation.org.uk/wp-content/uploads/2015-ME-Association-Illness-Management-Report-No-decisions-about-me-without-me-30.05.15.pdf (Various survey results in Appendix 6)
White PD, Goldsmith KA, Johnson AL, et al. (2011) PACE trial management group. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): A randomised trial. The Lancet 377: 823–836.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Nov 28, Andrea Messori commented:
Ventral hernia surgery: economic information in a series of Italian patients
Andrea Messori, Sabrina Trippoli
HTA Unit, ESTAR Toscana, Firenze, Italy
The paper by Rampado et al. [1] is the first analysis that examines the issue of costs in Italian patients undergoing incisional hernia repair with synthetic or biological meshes. One interesting finding of this study is the analysis of DRGs and reimbursements observed in this real-life setting. Table 2 of the article by Rampado et al. [1] shows that 7 different DRGs were employed for the overall series of 76 patients divided into three groups. The amounts reimbursed according to these 7 DRGs ranged from EUR 1,704.03 to EUR 13,352.72 (mean value = EUR 2,901 weighted according to the number of patients in the three groups). The length of stay was more homogeneous across the three groups (7 days in Group 1, N=35; 7 days in Group 2, N=31; 13 days in Group 3, N=11), with a mean value of 7.87 days weighted according to the size of the three groups. According to Rampado et al. [1], DRGs in Italy are an underestimation of real costs. In fact, while the weighted mean of reimbursements is EUR 2,901, the weighted mean cost in the same patients is EUR 6,908. This real-life information on costs can be extremely useful for conducting modeling studies that evaluate the cost-effectiveness of meshes in Italian patients undergoing incisional hernia repair with synthetic or biological meshes.
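As an aside, the following is a minimal sketch (Python) of how the weighted means quoted above are obtained; the group sizes and lengths of stay are those stated in this comment, everything else is illustrative.

```python
# Illustrative only: reproduces the weighted mean length of stay quoted above
# from the group sizes and per-group stays reported in the comment.
# (The per-group DRG reimbursements are not given here, but the weighted mean
# reimbursement of EUR 2,901 would be computed the same way.)
group_sizes = [35, 31, 11]      # Group 1, Group 2, Group 3
length_of_stay = [7, 7, 13]     # days per group

weighted_los = sum(n * d for n, d in zip(group_sizes, length_of_stay)) / sum(group_sizes)
print(round(weighted_los, 2))   # ~7.86 days, close to the 7.87 days quoted above
```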
References
- Rampado S, Geron A, Pirozzolo G, Ganss A, Pizzolato E, Bardini R. Cost analysis of incisional hernia repair with synthetic mesh and biological mesh: an Italian study. Updates Surg. 2017 Sep;69(3):375-381. doi:10.1007/s13304-017-0453-9.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 28, Gabriel Lima-Oliveira commented:
"Brazilian scientific societies currently allow laboratory directors choose between fasting/no-fasting time for all laboratory tests when prescribed together with lipid profile; but such a ‘‘permit’’ is not granted by any scientific evidence. Fasting time for most blood tests should be 12 hours, whereas for lipid profile alone is an exception based on European consensus."
Text published by Journal of Clinical Lipidology Official Journal of National Lipid Association. All rights reserved
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
bjsm.bmj.com
-
On 2017 Jul 13, David Nunan commented:
On the day this editorial was released we contacted the Editor for consideration of the following commentary. We have yet to hear back from the Editor. To avoid further delay via formal submission, we present here a truncated version of our commentary.
Response to “Saturated fat does not clog the arteries: coronary heart disease is a chronic inflammatory condition, the risk of which can be effectively reduced from healthy lifestyle interventions”
Implausible discussions in saturated fat “research”
Definitive solutions won’t come from another million editorials (or a million views of one).
The British Journal of Sports Medicine again acts as the unusual home to an opinion editorial advocating for public health guidance on saturated fat to be revised based on selected "evidence". As an editorial it was always going to struggle to avoid calls of "cherry picking". More worrying was the failure to apply even the most basic of evidence-based principles. Here, we do the job of the authors (and editor[s]) in addressing the quality of the evidence presented and highlighting some of the contradictory evidence, and the complexity and uncertainty of the evidence base, whilst being mindful of our own cognitive biases.
Effects of reducing saturated fat intake for cardiovascular disease
The authors refer to evidence from a "landmark" meta-analysis of observational studies to show a lack of an association between saturated fat consumption and all-cause mortality, coronary artery disease incidence and mortality, ischaemic stroke, and type 2 diabetes [1]. According to best practice evidence-based methods, the types of studies included here provide low quality evidence (unless specific criteria are met) [2]. Indeed, the review authors actually reported that the certainty of the reported associations (or lack thereof) was "very low", indicating any estimates of the effect are very uncertain [1].
Conversely, a high-quality meta-analysis of available RCTs (n=17 with ~59,000 participants) from the Cochrane Collaboration found moderate quality evidence from long-term trials that reducing dietary saturated fat lowered the risk of cardiovascular events (number needed to treat [NNT]=14), but found no effect on all-cause and cardiovascular mortality, risk of myocardial infarction, or stroke, compared to usual diet [3]. The Cochrane review also found, in subgroup analyses, that the reduction in cardiovascular events was observed in the studies replacing saturated fat with polyunsaturated fat (but not with carbohydrates, protein, or monounsaturated fat).
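For readers less familiar with the NNT figure quoted above, a minimal sketch of the arithmetic follows (Python); the event rates used are hypothetical illustrations, not figures from the Cochrane review.

```python
# Rough illustration of the arithmetic behind NNT = 14: the number needed to
# treat is the reciprocal of the absolute risk reduction (ARR). The control
# event rate below is hypothetical and is NOT taken from the Cochrane review.
nnt = 14
arr = 1 / nnt                                    # ~0.071, i.e. ~7 percentage points
control_event_rate = 0.20                        # hypothetical event rate on the usual diet
treated_event_rate = control_event_rate - arr    # implied rate on the reduced-saturated-fat diet
print(round(arr, 3), round(treated_event_rate, 3))   # 0.071 0.129
```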
Thus the consensus viewpoint of a beneficial effect of reduced dietary saturated fat and replacement with polyunsaturated fat in the general population appears to be underpinned by a higher quality evidence base.
Benefits of a Mediterranean diet on primary and secondary cardiovascular disease
In the section “dietary RCTs with outcome benefit in primary and secondary prevention”, the authors switch from saturated fat to low fat diets and cite two trials, namely the PREDIMED study [5] and the Lyon Diet Heart study [6].
The PREDIMED study investigated the effects of a Mediterranean diet including fish, whole grain cereals, fruits and supplemented with extra-virgin olive oil versus the same Mediterranean diet supplemented with mixed nuts, versus advice to reduce dietary fat on primary prevention of cardiovascular disease. The dietary interventions in PREDIMED were designed to increase intakes of mono- and poly-unsaturated fat and reduce intake of saturated fat.
The Lyon Diet Heart study examined the impact of a Mediterranean, alpha-linolenic acid-rich diet (with significantly less lipids, saturated fat, cholesterol, and linoleic acid) compared to no dietary advice. This study also aimed to assess the effect of increased dietary intake of unsaturated (polyunsaturated) fats.
Both these studies support the current consensus to increase intakes of polyunsaturated dietary fats in replacement of saturated fat. These findings also suggest that placing a limit on the percentage of calories from unsaturated fats may be unwarranted, which has now been acknowledged in a recent consensus [7].
Furthermore, a meta-analysis reviewing the effects of the Mediterranean diet on vascular disease and mortality [8], found that using the best available data the Mediterranean diet reduced vascular events and incidence of stroke, but did not result in improvements in all-cause mortality, cardiovascular mortality, coronary events, or heart failure compared to controls. The review authors highlighted the limited quantity and quality of evidence and the uncertainty of the effects of a Mediterranean diet on cardiovascular outcomes, and the non-existence of data about adverse outcomes.
LDL-Cholesterol and Cardiovascular mortality
The authors support their view that the cardiovascular risk of LDL-cholesterol has been exaggerated with 45-year-old data from the Minnesota Coronary Experiment (MCE) [9] and a systematic review of observational studies [10]. However, the authors do not address observed limitations of the MCE study, including discrepant event rates and selective outcome reporting, over 80% attrition with a lack of intention-to-treat analysis, and a small event rate difference (n=21) plausibly driven by a higher unexplained drop-out in the control group [11].
The review cited found that LDL-cholesterol is not associated with cardiovascular disease and is inversely associated with all-cause mortality in elderly populations [10]. However, the methodological quality of this review has been judged to be poor for, among other problems, non-uniform application of inclusion/exclusion criteria, a lack of critical appraisal of the methods used in the eligible studies (low quality observational studies), failure to account for multifactorial analysis (i.e., lack of control for confounders), and not considering statin use (see Eatz letter in response to [12] and [13]).
The authors fail to discuss large-scale RCT evidence showing that LDL-cholesterol-reducing statin therapy reduces the risk of coronary deaths, myocardial infarctions, strokes, and coronary revascularisation procedures by ~25% for each mmol/L reduction in LDL-cholesterol during each year it is taken, after the first [14]. We are aware of the on-going debate around the integrity of the data in relation to statins, particularly around associated harms, and their potential mechanisms. However, there appears to be general consensus on their effectiveness in reducing hard endpoints regardless of their underlying mechanism.
Therefore, given the flaws of the referenced trial and systematic review of observational studies and evidence in support of benefits of LDL-cholesterol lowering therapy, it is too early to dismiss LDL-cholesterol as a risk factor for cardiovascular disease and mortality.
We note with interest the authors’ statement “There is no business model or market to help spread this simple yet powerful intervention.” It’s not beyond comprehension that journals present a credible business model based on attracting controversy in areas of public health importance where clarity, not confusion, is needed. Notable conflicts of interest include income from a low budget film purporting the benefits of a high saturated fat diet.
The latest opinion editorial overlooks a large contradictory evidence-base and the inherent uncertainty with nutritional epidemiological studies and trials [15]. Arguably what is needed is a balanced discussion of dietary patterns over and above individual macronutrients that considers collaborative efforts for improving the evidence-base and our understanding of the complex relationship between dietary fat and health.
References available from corresponding author.
David Nunan(1,*), Ian Lahart(2). (1) Senior Researcher, University of Oxford, UK. david.nunan@phc.ox.ac.uk. *Corresponding author. (2) Senior Lecturer in Exercise Physiology, University of Wolverhampton, UK.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Aug 08, Christopher Tench commented:
Can you possibly provide the coordinates used? It is not possible to understand exactly what analysis has been performed without them.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Sep 12, Rima Obeid commented:
Trimethylamine N-oxide and platelets aggregation: insufficient evidence for causal inference in thrombosis - http://amj.amegroups.com/article/view/4016/4744
Trimethylamine N-oxide (TMAO) is an amine oxide generated in the liver from nutrients such as choline, betaine, or carnitine via an intermediate gut-bacteria-driven metabolite, trimethylamine (TMA). Recently, Zhu et al. conducted a 2-month open-label, non-placebo-controlled intervention in vegetarians and omnivores using 450 mg total choline/day (1). Zhu et al. reported a significant increase in plasma TMAO concentrations (from 2.5 to 36.4 microM in omnivores and from 2.6 to 27.2 microM in vegetarians). A corresponding increase in platelet aggregation according to an in vitro platelet function test was also reported and expressed as a percentage of maximal amplitude. The assumed effects of TMAO on platelet aggregation were observed at the first follow-up visit, 1 month after the start of supplementation, and the effects were stronger in omnivores compared with vegetarians (1). No further changes occurred in the following month of supplementation. There is no information on latency period, sustainability of the effect, or resistance to high TMAO levels. The results suggest that the increase in platelet aggregation had leveled off after 1 month (no further increase between month 1 and month 2). There is no evidence on sustainability of the effect after the last oral choline dose, which was taken in the evening before the platelet function test. The results of the adenosine diphosphate (ADP)-induced platelet aggregation in vitro were interpreted as a prothrombotic effect of TMAO (1). After aspirin use in subjects without platelet disorders, lowering of in vitro platelet reactivity to 5 microM ADP was hypothesized to indicate that "TMAO may overcome antiplatelet effects of aspirin". Nevertheless, the interactive effect of aspirin and TMAO can equally be argued to indicate that "TMAO may reduce the risk of bleeding from aspirin" or "TMAO may reduce resistance to aspirin in subjects who need anti-platelet drugs". But how should the results be interpreted in terms of cause and effect?
Platelet aggregation is a highly complex process involving numerous cellular receptors and transmembrane pathways. Platelet activation occurs when agonists, such as ADP, thromboxane A2 (TxA2), and thrombin, bind to their receptors. This physiological process is involved in protective hemostasis (i.e., it prevents bleeding by forming a clot) as well as in pathological thrombosis (over-aggregation). A variety of agonists such as ADP, epinephrine, arachidonic acid, or collagen can induce platelet aggregation via different mechanisms (2). This characteristic has been used for in vitro diagnosis of platelet disorders and for monitoring resistance to anti-platelet drugs. Nevertheless, assays that use a single agonist or a single concentration of any agonist are an oversimplification of platelet function, which could be completely different under physiological conditions (3).
In vivo, platelet activation causes ADP to be released from dense granules. ADP activates surface glycoprotein IIb/IIIa, which binds fibrinogen, thus leading to aggregation of platelets onto the adherent layer. Adding ADP to platelet-rich plasma (in vitro) causes an initial increase in aggregation due to activation of the glycoprotein IIb/IIIa platelet membrane receptor and a second wave of aggregation due to recruitment of additional platelet aggregates. In contrast, aspirin inhibits platelet activation mainly by targeting cyclooxygenase 1 (COX-1), thus leading to inhibition of TxA2 formation. Because arachidonic acid affects the COX-1/TxA2 system, this compound is used, instead of ADP, for in vitro induction of platelet aggregation in platelet-rich plasma under aspirin treatment. Although aspirin has been shown to reduce platelet aggregation induced by ADP, aspirin inhibition of platelet aggregation after arachidonic acid is greater, and this test is used for routine monitoring of the aspirin effect (4,5).
Zhu et al. observed higher platelet aggregation at high TMAO (compared with low TMAO) and lower aggregation under aspirin compared with the same subject without aspirin (1). The results are not interpretable for the following reasons: first, because the results of platelet aggregation in platelet-rich plasma are not comparable between studies, agonists, and agonist concentrations (6); second, because TMAO was anticipated to inhibit surface glycoprotein IIb/IIIa (which is activated by ADP), but aspirin acts mainly via TxA2. Thus, using ADP as an agonist for the surrogate platelet aggregation test is not selective for the aspirin effect. However, what would have happened in subjects with an indication for anti-platelet treatment? Could high TMAO be protective against bleeding? Could it reduce resistance to long-term antiplatelet therapy? Could there be a platelet adaptation to high choline intake? Clearly these questions are not answered yet.
The long-term risk of thrombosis associated with high choline intake or high plasma TMAO is not evident. The value of platelet function tests in predicting future thrombosis in non-symptomatic individuals has been questioned by the Framingham Heart Study cohort, where no association was found between several platelet function tests (including ADP-aggregation) and future thrombosis after controlling for other likely competing risk factors (6). Similar negative results were reported by Weber et al., who found that ADP-aggregation was not associated with thrombosis (7). Moreover, compared with omnivores, vegetarians could have fewer or larger platelets. In addition, any possible association between TMAO and platelet function could be subject to effect modification from dietary components such as betaine, carnitine, fatty acids, lipids, or micronutrients (8,9). In line with this, Zhu et al. have indeed shown that the platelet aggregation results, which were not different between vegetarians and omnivores at baseline, became different after 1 and 2 months of supplementation with 450 mg/day choline. Therefore, since the intervention was identical in both groups, the results strongly suggest the presence of effect modification via as yet unknown factors.
Zhu et al. have shown that aspirin lowers plasma TMAO after a choline load by almost 50% within 1 month (1). This could be related to changing gastrointestinal acidity and bacterial populations, thus affecting the production rate of TMA; affecting the FMO3 system; or affecting an as yet unknown TMA-metabolizing system. The results also draw attention to the role of aspirin (and possibly many other drugs) as an effect modifier in clinical studies on the role of TMAO in vascular diseases.
If the study of Zhu et al. (1) is to be used for synthesizing evidence, the following arguments can be made: the hypothesis could be "exposure to TMAO causes thrombosis (shown by using an appropriate surrogate test)" (Figure 1). A randomized controlled trial would be an appropriate design. However, dietary intakes of other sources of TMAO should be controlled, and confounding from aspirin or other well-known factors (i.e., renal dysfunction, inflammation, or vascular diseases) that affect TMAO and simultaneously the outcome "platelet aggregation" should be conditioned on. Information on short- and long-term effects of high choline intake is equally important because of platelet adaptation and the analytical limitations of most available surrogate in vitro tests. Since the effect does not appear to further increase over time, resistance or adaptation to high TMAO could equally be a valid explanation.
Taken together, because of serious limitations in the study design, inappropriate surrogate outcomes, unknown kinetics of the response of platelets to TMAO, and uncontrolled confounders, there is a risk in using such data for causal inference on a proposed direct prothrombotic effect of dietary choline.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 13, David Keller commented:
Low LDL cholesterol was associated with significantly higher risk of Parkinson disease than was high LDL
The authors report that the adjusted hazard ratio for Parkinson disease ("PD") in subjects with LDL < 1.8 mmol/L versus those with LDL >4.0 mmol/L was 1.70 [1.03 - 2.79]. Note that the 95% confidence interval (in brackets) does not cross 1.0, so this association of low LDL with increased risk of PD is statistically significant.
The above data were then subjected to "genetic, causal analysis", which yielded a risk ratio for a lifelong 1 mmol/L lower LDL cholesterol level of 1.02 [0.26 - 4.00] for Parkinson's disease. Note that the 95% confidence interval crosses 1.0, and the mean risk ratio of 1.02 is barely elevated.
The tiny and non-significant increase in PD risk caused by a lifelong 1 mmol/L lower serum LDL level, as calculated by the genetic causal analysis, appears to contradict the significant increase in risk of PD for subjects with LDL < 1.8 mmol/L, as compared with subjects with LDL > 4.0 mmol/L seen in the observational analysis.
This apparent contradiction may be an artifact of the diminished statistical power of the genetic causal analysis (which compared change in PD risk for a change in LDL of only 1 mmol/L) versus the observational study, which found significantly higher PD risk associated with LDL < 1.8 mmol/L than with LDL > 4.0 mmol/L. In the observational analysis, the LDL in the high-PD-risk subjects was at least 2.2 mmol/L lower than in the low-risk subjects (ie: 4.0 - 1.8 = 2.2 mmol/L). Thus, the genetic causal analysis calculated the effect of an LDL lower by only 1.0 mmol/L, while the observational analysis looked at the effect of an LDL lower by at least 2.2 mmol/L.
I suggest that the authors compare apples with apples, by re-calculating the genetic, causal analysis to determine the effect of lifelong lowering of LDL by at least 2.2 mmol/L, the minimum separation of LDL levels between subjects in the comparator groups in the observational analysis. Comparing the effect of a larger decrease in LDL may enhance the size and significance of the results calculated by the genetic analysis, and bring it into agreement with the significantly increased risk of lower LDL found in the observational analysis.
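To make the suggested comparison concrete, here is a minimal sketch (Python) of how the reported per-1-mmol/L risk ratio would rescale to a 2.2 mmol/L contrast under the usual log-linear assumption of such genetic analyses; the figures are those quoted above, and the exercise is purely illustrative.

```python
import math

# Illustrative sketch of the comparison suggested above, under a standard
# log-linear assumption: the risk ratio for a 2.2 mmol/L lower LDL is the
# per-1-mmol/L risk ratio raised to the power 2.2.
# Point estimate and 95% CI are those quoted above (1.02 [0.26 - 4.00]).
rr_per_1_mmol, ci_low, ci_high = 1.02, 0.26, 4.00
delta_ldl = 2.2   # mmol/L, the minimum separation between the observational groups

rescale = lambda rr: math.exp(delta_ldl * math.log(rr))
print([round(rescale(x), 2) for x in (rr_per_1_mmol, ci_low, ci_high)])
# -> [1.04, 0.05, 21.11]; simple rescaling of the published estimate widens the
#    interval, so any gain in precision would have to come from re-estimating
#    the genetic analysis itself, as suggested above.
```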
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 26, Zhang Weihua commented:
Congratulations! Great work! Thanks for citing our paper!
Formation of solid tumors by a single multinucleated cancer cell. Weihua Z, Lin Q, Ramoth AJ, Fan D, Fidler IJ. Cancer. 2011 Sep 1;117(17):4092-9. doi: 10.1002/cncr.26021. Epub 2011 Mar 1. PMID: 21365635
It is time to rethink our experimental models for cancer study.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Oct 24, Jim van Os commented:
We would like to report two small and non-essential errors in the publication of this paper:
The first error is in Table 2, page 6: the bottom row in the column of mean PRS values, now depicted as ‘-0.28’ with a standard deviation of ‘0.55’, is incorrect and has to be replaced by ‘0.77’ with S.D. = ‘0.19’.
The second error pertains to the text in the results section on page 5, the fifth paragraph under the heading ‘Associations in relatives and healthy comparison subjects’, in which the results of associations between PRS and CASH-based lifetime depressive and manic episodes are reported. The ORs, CIs and p-values for the outcome of ‘any affective episode’ in both the relatives group and the healthy comparison group in this text have to be replaced with the corresponding ORs, CIs and p-values reported in Table 7, page 11, for any affective episode.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 22, Alessandro Rasman commented:
Aldo Bruno MD, Pietro M. Bavera MD, Aldo d'Alessandro MD, Giampiero Avruscio MD, Pietro Cecconi MD, Massimiliano Farina MD, Raffaello Pagani MD, Pierluigi Stimamiglio MD, Arnaldo Toffon MD and Alessandro Rasman
We read with interest this study by Zakaria et al. titled "Failure of the vascular hypothesis of multiple sclerosis in a rat model of chronic cerebrospinal venous insufficiency" (1). Unfortunately, the authors ligated the external jugular veins of the rats and not the internal jugular veins. Dr. Zamboni's theory on chronic cerebrospinal venous insufficiency is based on the internal jugular veins and not on the external jugular veins (2). Maybe the authors can read the two papers from Dr. Mancini et al. (3) and (4). So, in our opinion, the title of this study is absolutely not correct.
References:
1. Zakaria, Maha MA, et al. "Failure of the vascular hypothesis of multiple sclerosis in a rat model of chronic cerebrospinal venous insufficiency." Folia Neuropathologica 55.1 (2017): 49-59.
2. Zamboni, Paolo, et al. "Chronic cerebrospinal venous insufficiency in patients with multiple sclerosis." Journal of Neurology, Neurosurgery & Psychiatry 80.4 (2009): 392-399.
3. Mancini, Marcello, et al. "Head and neck veins of the mouse. A magnetic resonance, micro computed tomography and high frequency color Doppler ultrasound study." PloS one 10.6 (2015): e0129912.
4. Auletta, Luigi, et al. "Feasibility and safety of two surgical techniques for the development of an animal model of jugular vein occlusion." Experimental Biology and Medicine 242.1 (2017): 22-28.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 12, Lydia Maniatis commented:
"The high levels of correlation between the four measures used in this study (fixations, interest points, taps and computed salience; see Fig. 3) support the conclusion that the tapping paradigm is a valid measure of salience."
Ascertaining that people look where they point (how could they guide their movement if they didn't?) has to be very high on the list of predictable predictions. With respect to "computed salience": "Finally, saliency maps computed from the Itti et al. (1998) model were compared against the tap data and found to correlate beyond the null hypothesis, R[bT] = 0.21, p = 4.3 × 10^-15, though not significantly below the sample error hypothesis, R[eST] = 0.25, p = 0.075. This relatively low value of R[eST] is obtained because the computed saliency maps were relatively diffuse."
"In the absence of a specific task (蘦ree viewing� it seems reasonable to assume that at least for the first few images, and for the first few fixations in these images, observers let themselves be guided by the visual input, rather than by some more complex strategy..."
The criterion of "it seems reasonable [to us] to assume that..." is the contemporary definition of rigor (as opposed to providing a solid rationale or even testing assumptions, in this case at the least by debriefing subjects). In contrast, it seems reasonable to me to assume that if someone asks me to freely select a place in a picture to point to, I would want to point at something interesting or meaningful, not at the first thing that caught my attention, e.g. the brightest spot. That is, observers' awareness that someone else is observing and in some way assessing their choices makes the authors' assumption that they are limiting "top-down" influences seem very weak to me. Of course, the top-down/bottom-up distinction is itself completely vague. If, in the image, I see two chairs and a sofa and point to the one that I immediately recognize as having seen in IKEA, is this top-down or bottom-up?
Relatedly, the authors casually address the issue of how many fixations preceded the pointing during the 1.4 seconds of viewing time: "Note that for the tapping study, the reaction time includes the time after the subject has decided where to tap, the movement of the hand, as well as the (relatively short) delay between the tap on the initialization screen and the presentation of the image. We therefore estimate that the majority of subjects performed three or fewer saccades before deciding where to tap." So, at least 127/252? Is this really an adequate assumption? And what is the rationale for 'three or fewer' being an important cut-off?
It's also typical of the contemporary approach that the experimental emphasis is wholly on technique and statistics and completely agnostic to the actual stimuli/conditions and to the percepts to which they give rise, as well as to the many fundamental conceptual issues that such considerations entail, and of course the effect of stimulus variations on the shape of the data.
This empirical agnosticism is reflected in the use of the term "natural scene" to characterize stimuli; it is completely uninformative as to the characteristics of the stimuli. (This is especially the case as "natural scene" here includes, as it often does in scholarly publications, images of buildings on a college campus).
Surely, certain sets of such stimuli would produce greater or smaller inter-individual differences than others, altering the already weak data significantly as to "saliency maps." For example, if an image contained one person, then attention would generally fall on this person. But if there were two people, the outcome would probably be divided between the two, and so on. (Is seeing a person in a brief presentation top-down or bottom-up?)
Wouldn't it be weird if "attentive pointing" DIDN'T correlate with "other measures of attention"? So weird that the interpretation of the results would probably be chalked up to the many sampling uncertainties and confounding factors that are, in the predictable case, bustled through with lots of convenient (or "reasonable") assumptions and special pleading for weak data.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 01, Lydia Maniatis commented:
Like too many in the vision literature, this article recycles various unviable odds and ends from the theoretical attic. The general discussion begins with this amazing statement: “It is generally accepted that lateral inhibition among orientation-selective units is responsible for the tilt illusion.” The collection of citations mentioned in the subsequent discussion provides absolutely no support for this completely untenable (if somewhat ambiguous) statement, as all they show is that the perception of any element in the visual field is affected by the structure of the surrounding field. Perhaps (no guarantee) the cases cited could superficially be reconciled with the "lateral inhibition" claim; but it crashes and burns in an infinite number of other cases, the perception of orientation being wholly contingent on a sophisticated structural interpretation of the retinal stimulation; and the neurons responsible for this interpretive activity are, of course, the same neurons supposedly acting via dumb (so to speak) local interactions to produce the "tilt illusion."
The claim that neurons act as detectors of orientation, each attuned to a particular value, is equally untenable (as Teller (1984) has pointed out in detail). Again, claims such as : “lateral interactions between these lines, or neurons, can skew the distribution and change the perceived orientation” are blind to the fact that perception does not act from local to global, but is effectively constrained by the whole visual field and the values inherent in the possible organizations of this field, which are infinite and among which it generally "chooses" only one. We don’t see the tilt of the lines composing the drawn Necker cube veridically from a 2D point of view; so if there were “labeled lines” for tilt, as the authors suggest, then these responses cannot directly affect the percept; but direct percepts are what the authors are using to draw their conclusions. Also, an orientation is a feature of a structure; and any structures in perception are constructed, along with their orientation, from point stimulation from photons striking the retina; so this is a case of the visual system supposedly "detecting" features of things that it has itself created.
Similarly: “Known psychophysical features of the tilt illusion … also suggest low-level locus of the tilt illusion. Taken together, V1 is a likely locus for the main site of the tilt illusion.“ The attribution of perceptual experiences to ‘low level’ or peripheral cortical processes was also criticized by Teller (1984) who noted that it implicitly relies on a “nothing mucks it up” proviso, i.e. assuming the low level activity is directly reflected in the percept, without explaining what happens upstream. Again, attributing a perceptual effect such as the perceived tilt of an image to simple interactions in the same V1 neurons that are responsible for observers' perception of e.g. forms of the room, the computer, the investigators, the keypad, etc., is not credible. It would be paradoxical to claim, as Graham (1992) has done, that some percepts are a direct, conscious reflection of low level neural activity, as there would have to be a higher level process deciding that the interpretation of image x should be mediated only by the lower level, and the products shunted directly to consciousness. Such arguments should never be made again.
Similarly: “To summarize, spatial contextual modulations of V1 neurons and their population responses seem to be likely candidates for the neural basis for simultaneous hue contrast.”
The references to “simultaneous contrast mechanisms” is inapt for all the same reasons, i.e. that this is a an effect highly sensitive to global context with sophisticated criteria, and thus cannot be simply segregated theoretically from the processes of perceptual organization in general.
Finally, I don't get this: "No fixation point was provided..." but then "The observers' task was to adjust the orientation of the comparison grating, which was presented on the other side of the fixation point…” Was there or wasn't there?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Jun 07, Michelle Fiander commented:
PRISMA describes elements to report, not how to conduct a systematic review. The Newcastle-Ottawa scale is appropriate for non-RCTs but not for RCTs. Of the 2 RCTs included in this review, Rezk 2016, for example, may not have scored as high on the Cochrane risk-of-bias (RoB) tool as on the Newcastle-Ottawa scale.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Oct 31, Lise Bankir commented:
About osmoles and osmolytes
It is important to use the right words to ensure an unambiguous understanding of the diverse aspects of scientific studies. Thus, I would like to draw attention to the difference between "osmolytes" and "osmoles".
The word "osmolyte" is misused in this paper and should be replaced throughout by "osmole".
Osmoles (e.g. sodium, potassium, chloride, urea, glucose) are substances that increase the osmolarity of the fluid in which they are dissolved.
Osmolytes (e.g. betaine, sorbitol, myoinositol, glycine, taurine, methylamines) are substances that accumulate inside cells to protect them from a high ambient osmolarity.
See the definition of osmolytes in the two encyclopedias below.
http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/osmolyte
https://en.wiktionary.org/wiki/osmolyte
See also reviews about osmolytes (two examples given below).
J Exp Biol. 2005 Aug;208(Pt 15):2819-30. Organic osmolytes as compatible, metabolic and counteracting cytoprotectants in high osmolarity and other stresses. Yancey PH
Curr Opin Nephrol Hypertens. 1997 Sep;6(5):430-3. Renal osmoregulatory transport of compatible organic osmolytes. Burg MB
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Aug 28, NephJC - Nephrology Journal Club commented:
The controversial and thought-provoking paper “Increased salt consumption induces body water conservation and decreases fluid intake.” was discussed on June 6th and 7th 2017 on #NephJC, the open online nephrology journal club.
Introductory comments written by Joel Topf are available at the NephJC website.
181 people participated in the discussion with nearly 1000 tweets. We were delighted that Paul Welling, an expert in renal physiology also joined in the chat.
The highlights of the tweetchat were:
Nephrologists were surprised that the 'basic tenet' of nephrology, steady-state sodium balance, is now in dispute.
The methodology of this study was very impressive, with the simulated Mars missions Mars105 and Mars520 providing a unique opportunity to do prolonged metabolic balance studies, albeit in only 10 subjects.
It was unclear whether the salt content was blinded or not, and this may limit interpretation of the results.
It’s interesting that cortisol may have a more important role in sodium/water balance than previously thought, via its stimulation of protein catabolism to generate more urea for urine concentration; however, its overall significance is still thought to be considerably less than that of ADH/aldosterone.
Transcripts of the tweetchats, and curated versions on Storify, are available from the NephJC website.
Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visit the webpage at NephJC.com.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Oct 31, Lise Bankir commented:
About osmoles and osmolytes
It is important to use the right words to ensure an unambiguous understanding of the diverse aspects of scientific studies. Thus, I would like to draw attention to the difference between "osmolytes" and "osmoles".
The word "osmolytes" is misused in this paper and should be replaced throughout by "osmoles".
Osmoles (e.g. sodium, potassium, chloride, urea, glucose) are substances that increase the osmolarity of the fluid in which they are dissolved.
Osmolytes (e.g. betaine, sorbitol, myoinositol, glycine, taurine, methylamines) are substances that accumulate inside cells to protect them from a high ambient osmolarity.
See the definition of osmolyte in the two encyclopedias below.
http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/osmolyte
https://en.wiktionary.org/wiki/osmolyte
See also reviews about osmolytes (two examples given below).
J Exp Biol. 2005 Aug;208(Pt 15):2819-30. Organic osmolytes as compatible, metabolic and counteracting cytoprotectants in high osmolarity and other stresses. Yancey PH
Curr Opin Nephrol Hypertens. 1997 Sep;6(5):430-3. Renal osmoregulatory transport of compatible organic osmolytes. Burg MB
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Oct 31, Lise Bankir commented:
See E-Letter to the JCI Editor about this article, by Richard Sterns and Lise Bankir
"Of Salt and Water: Let's Keep it Simple"
https://www.jci.org/eletters/view/88532#sec1
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 May 06, Christopher Southan commented:
IMCO it should have been incumbent on the Bentham Editor-in-Chief, the reviewing Editor and the 3 referees, to have spotted the severe grammatical problems and offered appropriate editorial support. This would have spared these non-native English authors the global embarrassment of publishing such a glaringly broken abstract. However, on checking, it looks like I could be naive in expecting this (https://en.wikipedia.org/wiki/Bentham_Science_Publishers).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 21, Manuel Menéndez González commented:
Yes, right. Though that would be an application to adjust pressures. There are many other potential applications where modifying the composition of CSF may represent a treatment of the condition.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 18, Darko Lavrencic commented:
I believe that implantable systems for continuous liquorpheresis and CSF replacement could be successfully used also for intracranial hypotension-hypovolemia syndrome as it could be caused by decreased CSF formation. See: http://www.med-lavrencic.si/research/correspondence/
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 19, Seán Turner commented:
The authors describe eight (8) new species, not nine (9) as indicated in the title. In the manuscript, the accession numbers for the 16S rRNA genes of the type strains of Mailhella massiliensis (strain Marseille-P3199) and Mordavella massiliensis (strain Marseille-P3246) are switched; the correct assignments are LT615363: Mailhella massiliensis Marseille-P3199, and LT598584: Mordavella massiliensis Marseille-P3246.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Sep 11, Anders von Heijne commented:
In addition to the complications of treatment with fingolimod that the authors report, there are a number of reported cases of PRES, with obvious radiological implications. In the EudraVigilance database there are currently (September 2017) 21 reported cases of fingolimod-related PRES.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org
-
On 2017 Apr 24, Lydia Maniatis commented:
This article’s casual approach to theory is evident in the first few sentences. After noting, irrelevantly, that “Since their introduction (Wilkinson, Wilson, & Habak, 1998), RF patterns have become a popular class of stimuli in vision science, commonly used to study various aspects of shape perception,” the authors immediately continue to say that “Theoretically, RF pattern detection (discrimination against a circle) could be realized either by local filters matched to the parts of the pattern, or by a global mechanism that integrates local parts operating on the scale of the entire pattern.” No citation is offered for this vague and breezy assertion, which begs a number of questions.
How did we jump from “shape perception” to “RF detection against a circle”? How is the latter related to the former?
Is the popularity of a pattern sufficient reason to assume that there exist special mechanisms – special detectors, or filters – tailored to its characteristics? Is there any basis whatsoever for this assertion?
Given that we know that the whole does determine the parts perceived, why are we talking about integration of “local” elements? And how do we define local? Doesn’t a piece of a shape also consist of smaller pieces, etc? What is the criterion for designating part and whole in a stimulus pattern (as opposed to the fully-formed percept)?
Apparently, there have been many ‘models’ proposed for special mechanisms for “RF detection against a circle,” addressing the question in these local/local-to-global terms. Could the mechanism involve maximum curvature integration, tangent orientations at inflection points, etc.? These simply take for granted the underlying assumption that there are special “filters” for “RF discrimination against a circle.” The only question is to what details of the figure are these mechanisms attuned.
What if we were dealing with different types of shapes? What if the RF boundary shape were formed by different sized dots, or dashes, or rays of different lengths radiating from a center? Would we be talking about dot filters, or line length filters? Why put RF patterns in general, and RF patterns of this type in particular, on such an explanatory pedestal?
More critically, how is it possible to leverage such patterns to dissect the neural processes underlying perception? When I look at one of these patterns, I don’t have any trouble distinguishing it from a circle. What can this tell me about the underlying process?
A subculture of vision science has opted to uncritically embrace the view that underlying processes can be inferred quite straightforwardly on the basis of certain procedures that mimic the general framework of signal detection. This view is labeled “signal detection theory” or SDT, but “theory” is overstating it. As noted in my earlier comment, Schmidtmann and Kingdom (2017) never explain why they make what, to a naïve observer, must seem very arbitrary methodological choices, nor does their main reference, Wilkinson, Wilson and Habak (1998). So we have to go back further to find some suggestion of a rationale.
The founding fathers of the aforementioned subculture include Swets, Tanner and Birdsall (e.g. 1961). As may be seen from a quote from that article (below), the framing of the problem is artificial; major assumptions are adopted wholesale; “perception” is casually converted to “detection” (in order to fit the analogy of a radar observer attempting to guess which blip is the object of interest).
“In the fundamental detection problem, an observation is made of events occurring in a fixed interval of time and a decision is made; based on this observation, whether the interval contained only the background interference or a signal as well. The interference, which is random, we shall refer to as noise and denote as N; the other alternative we shall term signal plus noise, SN. In the fundamental problem, only these two alternatives exist…We shall, in the following, use the term observation to refer to the sensory datum on which the decision is based. We assume that this observation may be represented as varying continuously along a single dimension…it may be helpful to think of the observation as…the number of impulses arriving at a given point in the cortex within a given time.” Also “We imagine the process of signal detection to be a choice between Gaussian variables….The particular decision that is made depends on whether or not the observation exceeds a criterion value….This description of the detection process is an almost direct translation of the theory of statistical decision.”
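For readers unfamiliar with the framework under discussion, the following is a minimal, illustrative sketch (Python, invented parameters) of the equal-variance Gaussian "noise" versus "signal plus noise" decision model that the quotation above describes; it is not anyone's actual model.

```python
import random

# Minimal illustration of the decision model in the Swets et al. quote:
# every observation is a draw from one of two Gaussians ("noise" or
# "signal plus noise"), and the observer responds "signal" whenever the
# observation exceeds a fixed criterion. Parameters are illustrative only.
random.seed(1)
d_prime, criterion, n_trials = 1.0, 0.5, 10_000

hit_rate = sum(random.gauss(d_prime, 1) > criterion for _ in range(n_trials)) / n_trials
false_alarm_rate = sum(random.gauss(0, 1) > criterion for _ in range(n_trials)) / n_trials
print(hit_rate, false_alarm_rate)   # roughly 0.69 and 0.31 for these parameters
```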
In what sense does the above framework relate to visual perception? I think we can easily show that, in concept and application, it is wholly incoherent and irrational.
I submit, first, that when I look around me, I don’t see any noise, I just see things. I’m also not conscious of looking for a signal to compare to noise; I just see whatever comes up. I don’t have a criterion for spotting what I don’t know will come up, and I don’t feel uncertain of - I certainly hardly ever have to guess at – what I’m seeing. The very effortlessness of perception is what made it so difficult to discern the fundamental theoretical problems. This is not, of course, to say that what the visual system does in constructing the visual percept from the retinal stimulation isn’t guesswork; but the actual process is light years more complex and subtle than a clumsy and artificial “signal detection” framework.
Given the psychological certainty of normal perceptual experience, it’s hard to see how to apply this SDT framework. The key seems to be to make conditions of observation so poor as to impede normal perception, making the observer so unsure of what they saw or didn’t see that they must be forced to choose a response, i.e. to guess. One way to degrade viewing conditions is to make the image of interest very low contrast, so that it is barely discernible; another way is to flash it for very brief intervals. Now, in these presentations, the observer presumably sees something; so these manipulations don’t necessarily produce an uncertain perceptual situation (though the brevity of the presentation may make the recollection of that impression mnemonically challenging). Where the uncertainty comes in is in the demand by investigators that observers decide whether the impression is consistent with a quick, degraded glimpse of a particular figure, in this case an RF of a certain type or a circle. I don’t see how one can defend the notion put forth by Swets et al (1961) that this decision, which is more a conscious, cognitive one than a spontaneous perceptual one, is based on a continuously varying criterion. The decision, for example, may be based on a glimpse of one diagnostic feature or another, or on where, by chance, the fovea happens to fall in the 180ms (Schmidtmann and Kingdom, 2017) or 167ms (Wilkinson et al, 1998) interval allowed. But the forced noisiness (due to the poor conditions), the Gaussian presumptions, the continuous variable assumption, and the binary forced choice outputs are needed for the SDT framework to be laid on top of the data.
For the rest of this comment (here limited by comment size limits), please see PubPeer.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 23, Lydia Maniatis commented:
It is oddly difficult to explain why a particular publication has no scientific content, even when, here, this is unequivocally the case. I think it’s important to try and make this quite clear.
Before addressing the serious theoretical problems, I would like to make the easier points that show that, even on its own terms, the project is sloppy and unsuccessful.
According to the authors, whatever it is they are proposing is “physiologically unrealistic” (p. 24). Yet they continue on to say: “Nonetheless, the model presented here will hopefully serve as a basis for developing a more physiological model of LF and RF detection.” There is no rationale offered to underpin this inarticulate hope, which seems even more misplaced given that “there is a modest, systematic mismatch between the [unrealistic] model and the data,” despite the very permissive (three free parameters, the post hoc introduction of a corrective function) model-fitting. That the modeling is strictly post hoc and ad hoc in character is reflected in the following statements: “The CFSF model presented here does not predict the inevitable increase in thresholds at frequencies higher than those explored in the present study. To do so would require CFSF with a somewhat different shape to the one shown in Figure 4…However, because we do not have the requisite data showing an upturn in thresholds at very high frequencies, we have not incorporated this feature into our present model.” (p. 24). We are dealing with atheoretical, condition/data-specific post hoc model-fitting with no heuristic value.
There is also a lack of methodological care in the procedure. As is usual in papers of this type, the number of observers is very small, and they are not all naïve (here, 3 of 4). One is apparently an author (GS). If naivete doesn’t matter, then why mention it, and if it does, why the author participation? Also, while we’re given quite detailed descriptions of many aspects of the stimuli per se – details whose theoretical basis or relevance is unclear – we’re only told that the “monitor’s background was initially set to a mean luminance (grey).” The reference to “grey” is uninformative with respect to actual luminance. The monitor is part of the stimulus. (I don’t understand the reference to “initially.” Maybe I’m missing something.) The following statement also seems strangely casual and vague: “Observers usually completed two experimental blocks for each experimental conditions…” Usually?
As for this: "The cross-sectional luminance profile was defined by a Gaussian with a standard deviation of 0.05 deg" -- it's just a part of the culture, no explanation needed.
And then this - in the context of trying to rationalize differences between the present results and those of previous studies: “In addition to the reported data, we conducted a control experiment to measure detection thresholds for RF and LF patterns with a modulation frequency of 30 for two additional naïve observers. Results show that thresholds are no higher than for a modulation frequency of 20.” Why are we discussing unreported data? Why wasn’t this control experiment reported in the body of the paper?
Experimental stimuli were exposed for 180ms, with a 400ms isi. Why not 500ms, with a 900ms isi? Or something else? 180ms is very short, when we consider the time it takes to initiate a saccade. Was this taken into consideration? Does it matter? In general, on what theoretical basis were conditions selected? Would changing some or all change the results? What would it mean with respect to theory? Is the model so narrowly applicable that it extends only to these specific and apparently arbitrary conditions? If changing conditions would lead to different results, and to different post hoc models, and if the authors can’t predict and assign a theoretical meaning to these different possible outcomes, then it should be clear that the model has no explanatory status, that it is merely an ad hoc mathematical exercise.
The notion that binary forced choices, with their necessary loss of information, are a good idea is mind-boggling, compounded by the arbitrariness of defining “thresholds” based on a 75% correct rate. Why not 99%? (As I'll discuss later, the SDT rationale is wholly inappropriate here). Why wouldn’t vision scientists be interested in what observers are actually seeing, instead of lumping together who knows what impressions experienced under extremely suboptimal conditions? (The reason for this SDT-related, unfortunate indifference to perception by vision scientists will be discussed in a following comment). Generating data in the required form seems more important than understanding what natural phenomena it reflects and explains, if any. Relatedly, I would note that it is indispensable to the evaluation of any visual perception study for the actual stimuli to be presented for interested readers’ inspection. I have asked the authors for access to these stimuli but haven’t yet received a response.
But these are minor problems. The fundamental problem is that the authors have implicitly and explicitly adopted assumptions of visual system function that are never tested and are demonstrably lacking in face validity. (In a nutshell we are talking about the major fallacy of treating perception as a signal detection problem and neurons as "detectors.") In other words, even if the assumptions are false, the experiments premised on them are not designed to reveal this. (Yet, not only do existing facts and logical analysis falsify the premises, it would be easy to design similar experiments within the same framework that would falsify or render its arbitrariness evident, as I'll discuss in my second comment). Rather, data generated are simply assumed to reflect the claimed mechanisms, and loosely, with the help of lots of free parameters and ad hoc manipulations, are perpetually interpreted (via model-fitting) in these terms, with tweaks and excuses for every new and slightly different data set that comes along.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jun 13, John Sotos commented:
To stimulate data-sharing, Bierer et al propose a new type of authorship, "data author," to credit persons who collect data but do not analyze it as part of a scientific study. Their proposal, however, requires a non-trivial modification to the structure of the PubMed database and revision of authorship criteria across thousands of journals, and it assigns a specialness to data authorship that could equally be claimed for "statistical authorship," "drafting authorship," "study-conceiving authorship," "benchwork authorship," etc.
Reviving decades-old proposals for fractional authorship (1) could better achieve the same laudable aims, especially if open-source "blockchain" software technology (2)(3)(4) were used to conveniently, publicly, quantitatively, and securely track authorship credit in perpetuity.
Authorship would thereby have some features of alternate currency (e.g. BitCoin): senior authors could use future authorship credits to "purchase" data from owners according to the data's value. They could also assign roles from a controlled vocabulary (data author, statistical author, etc.) to some or all authors. Over time, norms for pricing and authorship roles would coalesce in the scientific community.
Overall, a blockchain fractional-authorship system would be more flexible and extensible than a special case made for data authors.
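As a purely illustrative sketch (my own toy example; the field names, roles and shares are invented and form no part of the proposal), each appended block of such a ledger might record a paper's DOI, a fractional credit and controlled-vocabulary role for each contributor, and a hash chaining it to the previous block so that past credit assignments cannot be silently rewritten:

```python
import hashlib
import json

def make_block(prev_hash, paper_doi, credits):
    """credits maps contributor name -> (role, fractional share); shares sum to 1."""
    assert abs(sum(share for _, share in credits.values()) - 1.0) < 1e-9
    payload = {"prev": prev_hash, "doi": paper_doi, "credits": credits}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "hash": digest}

block = make_block(
    prev_hash="0" * 64,                      # hash of the previous block in the chain
    paper_doi="10.1000/example",             # invented DOI
    credits={"A. Author": ("data author", 0.30),
             "B. Author": ("statistical author", 0.20),
             "C. Author": ("study-conceiving author", 0.50)})
print(block["hash"])
```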
(1) Shaw BT. The Use of Quality and Quantity of Publication as Criteria for Evaluating Scientists. Washington, DC: Agriculture Research Service, USDA Miscellaneous Publication No. 1041, 1967. Available at: http://bit.ly/2pVTImI
(2) Nakamoto S. Bitcoin: A Peer-to-Peer Electronic Cash System. October 31, 2008. https://bitcoin.org/bitcoin.pdf
(3) Tapscott D, Tapscott A. Blockchain revolution : how the technology behind bitcoin is changing money, business, and the world. New York: Portfolio / Penguin, 2016
(4) Sotos JG, Houlding D. Blockchains for data sharing in clinical research: trust in a trustless world. (Blockchain Application Note #1.) March 2017. https://simplecore.intel.com/itpeernetwork/wp-content/uploads/sites/38/2017/05/Intel_Blockchain_Application_Note1.pdf
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Aug 17, NephJC - Nephrology Journal Club commented:
This randomised controlled trial of the C5a Receptor Inhibitor Avacopan in ANCA-Associated Vasculitis was discussed on May 8th and 9th 2017 on #NephJC, the open online nephrology journal club. Introductory comments written by Tom Oates are available at the NephJC website here.
There was significant interest in this promising trial, with 141 participants in the discussion and nearly 700 tweets.
The highlights of the tweetchat were:
There is a lot of concern in the nephrology community about the long-term side effects of steroid use in ANCA vasculitis, and an alternative agent that allowed for lower glucocorticoid exposure would be very welcome.
Overall, it was thought to be a well-designed and well-conducted trial.
The chosen primary endpoint, a decrease in Birmingham Vasculitis Activity Score of 50% or more, was hotly debated. Although very frequently used in vasculitis research, using observed changes from baseline as a trial endpoint in a parallel group study may render it a less valid tool.
The group also questioned whether vaccination would be required with Avacopan, but decided that it would not be, because it is a receptor blocker, unlike Eculizumab, which is a complement cleavage inhibitor.
The treatment response to Avacopan without steroids was excellent and it appears to be a safe drug. We look forward to seeing results of the Phase III studies and some long-term data regarding relapse rates in the absence of steroids.
Transcripts of the tweetchats, and curated versions on Storify, are available from the NephJC website.
Interested individuals can track and join in the conversation by following @NephJC or #NephJC on Twitter, liking @NephJC on Facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 28, Hilda Bastian commented:
This is an interesting methodological approach to a thorny issue. But the abstract and coverage (such as in Nature) gloss over the fact that the results measure the study method's biases more than they measure scientists on Twitter. I think the method infers a subset of the people working in a limited set of science-based professions.
The list of professions sought is severely biased. It includes 161 professional categories and their plural forms, in English only. It was based on a U.S. list of occupations (SOC) and an ad hoc Wikipedia list. A brief assessment of the 161 titles in comparison with an authoritative international list (the United Nations Educational, Scientific and Cultural Organization (UNESCO) nomenclature for fields of science and technology, SKOS) shows a strong skew towards social scientists and practitioners of some science-based occupations, and away from medical science, engineering, and more.
Of the 161 titles, 17% are varieties of psychologist, for example, but psychiatry isn't there. Genealogists and linguists are there, but geometers, biometricians, and surgeons are not. The U.S. English-language bias is a major problem for a global assessment of a platform where people are communicating with the general public.
Influence is measured in 3 ways, but I couldn't find a detailed explanation of the calculations or a reference to one, in the paper. It would be great if the authors could point to that here. More detail on the "Who is who" service used in terms of how up-to-date it is would be useful as well.
I have written more about this paper at PLOS Blogs, where I point to key numbers that aren't reported about who was excluded at different stages. The paper says that data sharing is limited by Twitter's terms of service, but it doesn't specify what that covers. Providing a full list of proportions in the 161 titles, and descriptions of more than 15 of the communities they found (none of which appear to be medical science circles), seems unlikely to be affected by that restriction. More data would be helpful to anyone trying to make sense of these results, or extend the work in ways that minimize the biases in this first study.
There is no research cited that establishes the representativeness of data from a method that can only classify less than 2% of people who are on multiple lists. The original application of the method (Sharma, 2011) was aimed at a very different purpose, so representativeness was not such a big issue there. There was no reference in this article to data on list-creating behavior. There could be a reason historians came out on top in this group: list-curating is probably not a randomly-distributed proclivity.
It might be possible with this method to better identify Twitter users who work in STEM fields. Aiming for "scientists", though, remains, it seems to me, unfeasible at scale. Methods described by the authors as product-centric (e.g. who is sharing links to scientific articles and/or discussing them, or discussing blogs where those articles are cited), and key nodes such as science journals and organizations seem essential.
I would also be interested to know the authors' rationale for trying to exclude pseudonyms - as well as the data on how many were excluded. I can see why methods gathering citations for Twitter users exclude pseudonyms, but am not sure why else they should be excluded. A key reason for undertaking this kind of analysis is to understand to what extent Twitter expands the impact of scientific knowledge and research. That inherently means looking to wider groups, and the audiences for their conversations. Thank you to the authors, though, for a very interesting contribution to this complex issue.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Aug 06, John Greenwood commented:
(cross-posted from Pub Peer, comment numbers refer to that discussion but content is the same)
To address your comments in reverse order -
Spatial vision and spatial maps (Comment 19):
We use the term “spatial vision” in the sense defined by Russell & Karen De Valois: “We consider spatial vision to encompass both the perception of the distribution of light across space and the perception of the location of visual objects within three-dimensional space. We thus include sections on depth perception, pattern vision, and more traditional topics such as acuity." De Valois, R. L., & De Valois, K. K. (1980). Spatial Vision. Annual Review of Psychology, 31(1), 309-341. doi:doi:10.1146/annurev.ps.31.020180.001521
The idea of a "spatial map” refers to the representation of the visual field in cortical regions. There is extensive evidence that visual areas are organised retinotopically across the cortical surface, making them “maps". See e.g. Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007). Visual field maps in human cortex. Neuron, 56(2), 366-383.
Measurement of lapse rates (Comments 4, 17, 18):
There really is no issue here. In Experiment 1, we fit a psychometric function in the form of a cumulative Gaussian to responses plotted as a function of (e.g.) target-flanker separation (as in Fig. 1B), with three free parameters: midpoint, slope, and lapse rate. The lapse rate is 100 - x, where x is the asymptote of the curve (in percent). It accounts for lapses (keypress errors etc.) when performance is otherwise high - i.e. it is independent of the chance level. In this dataset it is never above 5%. However, its inclusion does improve the estimate of the slope (and therefore the threshold), which is what we are interested in. Any individual differences are therefore better estimated by factoring out individual differences in lapse rate. Its removal does not qualitatively affect the pattern of results in any case. You cite Wichmann and Hill (2001) and that is indeed the basis of this three-parameter fit (though ours is custom code that doesn’t apply the bootstrapping procedures etc. that they use).
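For readers unfamiliar with this kind of fit, a minimal sketch of the assumed functional form follows (illustrative Python, not our actual custom code; the parameter values are arbitrary). Gamma is the chance level fixed by the task design (0.5 for a 2AFC judgement) and is separate from the lapse rate, which only lowers the upper asymptote:

```python
import numpy as np
from scipy.stats import norm

def psychometric(x, mu, sigma, lam, gamma=0.5):
    # gamma: chance level fixed by the task; lam: lapse rate lowering the upper asymptote
    return gamma + (1.0 - gamma - lam) * norm.cdf(x, loc=mu, scale=sigma)

# With a 3% lapse rate the curve asymptotes at 97% rather than 100%,
# while the midpoint (mu) and slope (sigma) of the underlying Gaussian are unchanged.
x = np.linspace(0.0, 2.0, 9)
print(psychometric(x, mu=1.0, sigma=0.3, lam=0.03))
```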
Spatial representations (comment 8):
We were testing the proposal that crowding and saccadic preparation might depend on some degree of shared processes within the visual system. Specific predictions for shared vs distinct spatial representations are made on p E3574 and in more detail on p E3576 of our manuscript. The idea comes from several prior studies arguing for a link between the two, as we cite, e.g.: Nandy, A. S., & Tjan, B. S. (2012). Saccade-confounded image statistics explain visual crowding. Nature Neuroscience, 15(3), 463-469. Harrison, W. J., Mattingley, J. B., & Remington, R. W. (2013). Eye movement targets are released from visual crowding. The Journal of Neuroscience, 33(7), 2927-2933.
Bisection (Comments 7, 13, 15):
Your issue relates to biases in bisection. This is indeed an interesting area, mostly studied for foveal presentation. These biases are however small in relation to the size of thresholds for discrimination, particularly for the thresholds seen in peripheral vision where our measurements were made. An issue with bias for vertical judgements would lead to higher thresholds for vertical vs. horizontal judgements, which we don’t see. The predominant pattern in bisection thresholds (as with the other tasks) is a radial/tangential anisotropy, so vertical thresholds are worse than horizontal on the vertical meridian, but better than horizontal thresholds on the horizontal meridian. The role of biases in that anisotropy is an interesting question, but again these biases tend to be small relative to threshold.
Vernier acuity (Comment 6):
We don’t measure vernier acuity, for exactly the reasons you outline (stated on p E3577).
Data analyses (comment 5):
The measurement of crowding/interference zones follows conventions established by others, as we cite, e.g.: Pelli, D. G., Palomares, M., & Majaj, N. J. (2004). Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4(12), 1136-1169.
Our analyses are certainly not post-hoc exercises in data mining. The logic is outlined at the end of the introduction for both studies (p E3574).
Inclusion of the authors as subjects (Comment 3):
In what way should this affect the results? This can certainly be an issue for studies where knowledge of the various conditions can bias outcomes. Here this is not true. We did of course check that data from the authors did not differ in any meaningful way from other subjects (aside from individual differences), and it did not. Testing (and training) experienced psychophysical observers takes time, and authors tend to be experienced psychophysical observers.
The theoretical framework of our experiments (Comments 1 & 2):
We make an assumption about hierarchical processing within the visual system, as we outline in the introduction. We test predictions that arise from this. We don’t deny that feedback connections exist, but I don’t think their presence would alter the predictions outlined at the end of the introduction. We also make assumptions regarding the potential processing stages/sites underlying the various tasks examined. Of course we can’t be certain about this (and psychophysics is indeed ill-poised to test these assumptions) and that is the reason that no one task is linked to any specific neural locus, e.g. crowding shows neural correlates in visual areas V1-V4, as we state (e.g. p E3574). Considerable parts of the paper are then addressed at considering whether some tasks may be lower- or higher-level than others, and we outline a range of justifications for the arguments made. These are all testable assumptions, and it will be interesting to see how future work then addresses this.
All of these comments are really fixated on aspects of our theoretical background and minor details of the methods. None of this in any way negates our findings. Namely, there are distinct processes within the visual system, e.g. crowding and saccadic precision, that nonetheless show similarities in their pattern of variations across the visual field. We show several results that suggest these two processes to be dissociable (e.g. that the distribution of saccadic errors is identical for trials where crowded targets were correctly vs incorrectly identified). If they’re clearly dissociable tasks, how then to explain the correlation in their pattern of variation? We propose that these properties are inherited from earlier stages in the visual system. Future work can put this to the test.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 08, Lydia Maniatis commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 07, Lydia Maniatis commented:
I'm not an expert in statistics, but it seems to me that the authors have conducted multiple and sequential comparisons without applying the appropriate correction. In addition, the number of subjects is small.
Also, the definition of key variables - "crowding zone" and "saccade error zone" - seems arbitrary given that they are supposed to tap into fundamental neural features of the brain. The former is defined as "target-flanker separation at which performance reached 80% correct [i.e. 20% incorrect]...which we take as the dimensions of the crowding zone," the latter by fitting "2D Gaussian functions to the landing errors and defin[ing] an ellipse with major and minor axes that captured 80% of the landing positions (shown with a black dashed line in Fig. 1C). The major and minor axes of this ellipse were taken as the radial and tangential dimensions of the “saccade error zone.”"
What is the relationship between what the authors "take as" the crowding/saccade error zones and a presumptive objective definition? What is the theoretical significance of the 80% cut-off? What would the data look like if we used a 90% cut-off?
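To make concrete how much the chosen cut-off matters, here is a small sketch (my own, with made-up parameter values, not the authors' data or code). The first part shows where a cumulative-Gaussian performance curve crosses 80% versus 90% correct; the second shows how much larger an ellipse must be to capture 90% rather than 80% of 2D-Gaussian landing positions:

```python
import numpy as np
from scipy.stats import norm, chi2

# 1) "Crowding zone": the separation at which a cumulative-Gaussian performance curve
#    (midpoint mu, spread sigma, floor at chance = 0.5) reaches a chosen proportion correct.
def separation_at(criterion, mu=1.0, sigma=0.4, gamma=0.5):
    p = (criterion - gamma) / (1.0 - gamma)   # rescale criterion to the Gaussian's range
    return mu + sigma * norm.ppf(p)

print(separation_at(0.80), separation_at(0.90))   # the zone grows as the cut-off rises

# 2) "Saccade error zone": for a fitted 2D Gaussian, the ellipse containing a given
#    fraction of landing positions has axes scaled by sqrt(chi-square quantile, 2 df).
print(np.sqrt(chi2.ppf(0.80, df=2)), np.sqrt(chi2.ppf(0.90, df=2)))
```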
Is a "finding" that a hierarchical linear regression "explains" 7.3% of the variance meaningful? The authors run two models, and in one saccades are a "significant predictor" of the data while in the other they are no longer significant, while gap resolution and bisection are. Conclusions seem to be based more on chance than necessity, so to speak.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 07, Lydia Maniatis commented:
What would it mean for "crowding and saccade errors" to rely on a common spatial representation of the visual field? The phenomena are clearly not identical - one involves motor planning, for example - and thus their neural substrates will not be identical. To the extent that "spatial map" refers to a neural substrate, then these will not be identical. So I'm not understanding the distinction being made between spatial maps "with inherited topological properties" and "distinct spatial maps."
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 02, Lydia Maniatis commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 02, Lydia Maniatis commented:
Part 6
With respect to line bisection:
It is mentioned by Arnheim (Art and Visual Perception) that if you ask a person to bisect a vertical line under the best conditions - that is, conditions of free-viewing without time limits - they will tend to place the mark too high:
"An experimental demonstration with regard to size is mentioned by Langfeld: "If one is asked to bisect a perpendicular line without measuring it, one almost invariably places the mark too high. If a line is actually bisected, it is with difficulty that one can convince oneself that the upper half is not longer than the lower half." This means that if one wants the two halves to look alike, one must make the upper half shorter. " (p. 30).
As the authors of this study don't seem to have taken this apparent, systematic bias into account, their "correct" and "incorrect" criterion of line bisection under the adverse conditions they impose may not be appropriate. It is also obvious that the results of the method used did not alert the authors to the possibility of such a bias.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 29, Lydia Maniatis commented:
Part 5
With respect to Vernier acuity, and in addition to my earlier objections, I would add that a "low-level" description seems to be at odds with the fact that Vernier acuity, which is also described as hyperacuity, is better than would be expected on the basis of the spacing of the receptors in the retina.
"Yet spatial distinctions can be made on a finer scale still: misalignment of borders can be detected with a precision up to 10 times better than visual acuity. This hyperacuity, transcending by far the size limits set by the retinal 'pixels', depends on sophisticated information processing in the brain....[the] quintessential example and the one for which the word was initially coined,[1] is vernier acuity: alignment of two edges or lines can be judged with a precision five or ten times better than acuity. " (Wikipedia entry on hyperacuity).
When an observer is asked a question about alignment of two line segments, the answer they give is, always, based on the percept, i.e. a high-level, conscious product of visual processing. It is paradoxical to argue that some percepts are high and others low-level, because even if one wanted to argue that some percepts reflect low-level activity, the decision to derive the percept or features thereof from a particular level in one case and another level in another case would have to be high-level. The perceived better-than-it-should-be performance that occurs in instances of so-called hyperacuity is effectively an inference, as are all interpretations of the retinal stimulation, whether a 3D Necker cube or the Mona Lisa. It's not always the case that two lines that are actually aligned will appear aligned. (Even a single continuous line may appear bent - yet line segments are supposed to be the V1 specialty). It all depends on the structure of the whole retinal configuration, and the particular, high-level inferences to which this whole stimulation gives rise in perception.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 29, Lydia Maniatis commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 28, Lydia Maniatis commented:
Part 3
I don't understand why it has become normalized for observers in psychophysical experiments to include the authors of a study. Here, authors form one quarter of the participants in the first experiment and nearly one third in the second. Aside from the authors, participants are described as "naive." As this practice is accepted at PNAS, I can only imagine that psychophysical experiments require a mix of subjects who are naive to the purpose and subjects who are highly motivated to achieve a certain result. I only wish the reasons for the practice were made explicit. Because it seems to me that if it's too difficult to find enough naive participants for a study that requires them, then it's too difficult to do the study.
If the inclusion of authors as subjects seems to taint the raw data, there is also a problem with the procedure to which the data are subjected prior to analysis. This essentially untestable, assumption-laden procedure is completely opaque, and mentioned fleetingly in the Methods:
"Psychometric functions were fitted to behavioral data using a cumulative Gaussian with three parameters (midpoint, slope, and lapse rate). "
The key term here is "lapse rate." The lapse rate concept is a controversial theoretical patch-up developed to deal with the coarseness of the methods adopted in psychophysics, specifically the use of forced choices. When subjects are forced to make a choice even when what they perceive doesn't fall into the two, three or four choices preordained by the experimenters, then they are forced to guess. The problem is serious because most psychophysical experiments are conducted under perceptually very poor conditions, such as low contrast and very brief stimulus presentations. This obviously corrupts the data. At some point, practitioners of the method decided they had to take into account this "lapse rate," i.e. the "guess rate." That the major uncertainty incorporated into the forced-choice methodology could not be satisfactorily resolved is illustrated in comments by Prins (2012/JOV), whose abstract I quote in full below:
"In their influential paper, Wichmann and Hill (2001) have shown that the threshold and slope estimates of a psychometric function may be severely biased when it is assumed that the lapse rate equals zero but lapses do, in fact, occur. Based on a large number of simulated experiments, Wichmann and Hill claim that threshold and slope estimates are essentially unbiased when one allows the lapse rate to vary within a rectangular prior during the fitting procedure. Here, I replicate Wichmann and Hill's finding that significant bias in parameter estimates results when one assumes that the lapse rate equals zero but lapses do occur, but fail to replicate their finding that freeing the lapse rate eliminates this bias. Instead, I show that significant and systematic bias remains in both threshold and slope estimates even when one frees the lapse rate according to Wichmann and Hill's suggestion. I explain the mechanisms behind the bias and propose an alternative strategy to incorporate the lapse rate into psychometric function models, which does result in essentially unbiased parameter estimates."
It should be obvious that calculating the rate at which subjects are forced to guess is highly condition-sensitive and subject-sensitive, and that even if one believes the uncertainty can be removed by a data manipulation, there can be no one-size-fits-all method. Which strategy for calculating guessing rate have Greenwood et al (2017) adopted? Why? What was the "lapse rate"? There would seem to be no point in even looking at the data unless their data manipulation and its rationale are made explicit.
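As a hedged illustration of the Wichmann and Hill / Prins point (a toy simulation of my own, not the authors' procedure), one can generate 2AFC data from a curve with a small true lapse rate and then fit it with the lapse rate wrongly fixed at zero; the recovered slope parameter is then typically biased:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

def pf(x, mu, sigma, lam, gamma=0.5):
    return gamma + (1 - gamma - lam) * norm.cdf(x, mu, sigma)

x = np.linspace(0.2, 2.0, 10)                              # stimulus levels
n = 200                                                    # trials per level
k = rng.binomial(n, pf(x, mu=1.0, sigma=0.3, lam=0.03))    # simulated correct counts

def neg_loglik(params, lam_fixed=None):
    mu, sigma = params[:2]
    lam = params[2] if lam_fixed is None else lam_fixed
    p = np.clip(pf(x, mu, sigma, lam), 1e-6, 1 - 1e-6)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit_fixed = minimize(neg_loglik, [1.0, 0.3], args=(0.0,), method="Nelder-Mead")
fit_free = minimize(neg_loglik, [1.0, 0.3, 0.02], method="Nelder-Mead")
print("sigma with lapse fixed at 0:", fit_fixed.x[1])   # typically larger, i.e. shallower slope
print("sigma with lapse free:      ", fit_free.x[1])
```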
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 27, Lydia Maniatis commented:
Part 2b (due to size limit)
I would note, finally, that unless the authors are also believers in a transparent brain for some, but not other, perceived features resulting from a retinal stimulation event, the idiosyncratic/summative/inherited/low-level effects claims should presumably be detectable in a wide range of normal perceptual experiences, not only in peripheral vision under conditions which are so poor that observers have to guess at a response some unknown proportion of the time, producing very noisy data interpreted in vague terms with a large number of researcher degrees of freedom and a great deal of theoretical special pleading. Why not look for these hypothesized effects where they would be expected to be most clearly expressed?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 27, Lydia Maniatis commented:
Part 2 A related and equally untenable theoretical claim casually adopted by Greenwood et al (2017) is one which Teller (1984) has explicitly criticized and Graham (2011) has uncritically embraced and unintentionally satirized. It is the notion that some aspects of perceptual experience are directly related to - and can be used to discern - the behavior of neurons in the "lower levels" of the visual system, usually V1:
"Prior studies have linked variations in both acuity (26) and perceived object size (59) with idiosyncrasies in visual cortical regions as early as V1."
"To consider the origin of the relationship between crowding and saccades, we conducted a second experiment to compare crowding with two "lower-level" measures of spatial localization: gap resolution and bisection thresholds."
"If these similarities were to arise due to an inheritance of a common topology from earlier stages of the visual system, we would expect to see similar patterns of variations in tasks that derive from lower-level processes."
Before addressing the fatal flaws with the theoretical premise, I would like to note that the two references provided in no way rise to the occasion. Both are attempts to link some measures of task performance to area V1 based on fMRI results. fMRI is still a very crude method of studying neural function to begin with. Additionally, the interpretation of the scans is assumption-laden, and we are supposed to take all of the underlying assumptions as given, with no arguments or evidence. For example, from citation 26:
"To describe the topology of a given observer's V1, we fit these fMRI activity maps with a template derived from a conformal mapping method developed by Schwartz (Schwartz 1980, Schwartz 1994). According to Schwartz, two-dimensional visual space can be projected onto the two-dimensional flattened cortex using the formula w=k x log(z + a), where z is a complex number representing a point in visual space, and w represents the corresponding point on the flattened cortex. [n.b. It is well-known that visual experience cannot be explained on a point by point basis]. The parameter a reflects the proportion of V1 devoted to the foveal representation, and the parameter k is an overall scaling factor."
The 1994 Schwartz reference is to a book chapter, and the method being referenced appears to have been proposed in 1980 (pre-fMRI?). I guess we have to take it as given that it is valid.
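For what it is worth, the quoted mapping is easy to state numerically. The sketch below (my own, with arbitrary values of k and a) only makes the formula w = k x log(z + a) concrete; it says nothing about the validity of the model:

```python
import numpy as np

k, a = 15.0, 0.7                      # arbitrary scaling and foveal parameters
ecc = np.array([0.5, 2.0, 8.0])       # eccentricities in degrees
angle = np.deg2rad(45.0)              # a single polar angle
z = ecc * np.exp(1j * angle)          # visual-field positions as complex numbers
w = k * np.log(z + a)                 # corresponding positions on the flattened cortex

for e, wi in zip(ecc, w):
    print(f"{e:4.1f} deg -> cortex ({wi.real:6.2f}, {wi.imag:6.2f})")
```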
From ref. 59:
"For pRF spread we used the raw, unsmoothed pRF spread estimates produced by our fine-fitting procedure. However, the quantification of surface area requires a smooth gradient in the eccentricity map without any gaps in the map and with minimal position scatter in pRF positions. therefore, we used the final smoothed prameter maps for this analysis. The results for pRF spread are very consistent when using smoothed parameter maps, but we reasoned that the unsmoothed data make fewer assumptions."
One would ask that the assumptions be made explicit and rationalized. So, again, references act as window-dressing for unwarranted assertions that the tasks used by the authors directly reflect V1 activity.
The theoretical problem is that finding some correlation between some perceptual task and some empirical observations of the behavior of neurons in some part of the visual system in no way licenses the inference that the perceptual experience tapped by the task is a direct reflection of the activities of those particular neurons. Such correlations are easy to come by, but the inference is not tenable in principle. If the presumed response properties of neurons in V1, for example, are supposed to directly cause feature x of a percept, we have to ask a. how is this assumption reconciled with the fact that the activities of the same "low-level" neurons underlie all features of the percept, and b. how is it that, for this feature, all of the other interconnectivities with other neural layers and populations are bypassed?
Tolerance for the latter problem was dubbed by Teller (1984) the "nothing mucks it up proviso." As an example of the fallacious nature of such thinking, she refers to the Mach bands and their supposed connection to the responses of ganglion cells as observed via single cell recordings:
"Under the right conditions, the physiological data "look like" the psychophysical data. The analogy is very appealing, but the question is, to what extent, or in what sense, do these results provide an explanation of why we see Mach bands?" (And how, I would add, is this presumed effect supposed to be expressed perceptually in response to all other patterns of retinal stmulation? How does it come about that the responses of ganglion cells are simultaneously shunted directly to perceptual experience, and at the same time participate in the normal course of events underlying visual process as a whole?)
Teller then points out that, in the absence of an explicit treatment of "the constraints that the hypothesis puts on models of the composite map from the peripheral neural level [in this she includes V1] and the bridge locus, and between the bridge locus and phenomenal states," the proposal is nothing more than a "remote homunculus theory," with a homunculus peering down at ganglion cell activity through "a magical Maxwellian telescope." The ganglion cell explanation continues to feature in perception textbooks and university perception course websites.
It is interesting to note that Greenwood et al's first mention of "lower-level" effects (see quote above) is placed between scare quotes, yet nowhere do they qualify the term explicitly.
The ease with which one can discover analogies between presumed neural behavior and psychophysical data was well-described by Graham (2011):
"The simple multiple-analyzers model shown in the top panel of Fig. 1 was and is a very good account, qualitatively and quantitatively, of the results of psychophysical experiments using near-threshold contrasts . And by 1985 there were hundreds of published papers each typically with many such experiments. It was quite clear by that time, however, that area V1 was only one of 10 or more different areas in the cortex devoted to vision. ...The success of this simple multiple-analyzers model seemed almost magical therefore. [Like a magical Maxwellian telescope?] How could a model account for so many experimental results when it represented most areas of the visual cortex and the whole rest of the brain by a simple decision rule? One possible explanation of the magic is this: In response to near-threshold patterns, only a small proportion of the analyzers are being stimulated above their baseline. Perhaps this sparseness of information going upstream limits the kinds of processing that the higher levels can do, and limits them to being described by simple decision rules because such rules may be close to optimal given the sparseness. It is as if the near-threshold experiments made all higher levels of visual processing transparent, therefore allowing the properties of the low-level analyzers to be seen." Rather than challenging the “nothing mucks it up proviso” on logical and empirical grounds, Graham has uncritically and absurdly embraced it. (I would note that the reference to "near-threshold" refers only to a specific feature of the stimulation in question, not the stimulation as a whole, e.g. the computer screen on which stimuli are being flashed, which, of course, is above-threshold and stimulating the same neurons.)
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 26, Lydia Maniatis commented:
Over thirty years ago, Teller (1984) attempted to inspire a course correction in a field that had become much too reliant on very weak arguments and untested, often implausible assumptions. In other words, she tried to free it from practices appropriate to pseudoscience. Unfortunately, as Greenwood et al (2017) illustrates so beautifully, the field not only ignored her efforts, it became, if anything, even less rigorous.
The crux of Teller's plea is captured by the passage below (emphasis mine). Her reference is to "linking propositions" which she defines as "statements that relate perceptual states to physiological states, and as such are one of the fundamental building blocks of visual science."
"Twenty years ago, Brindley pointed out that in using linking hypotheses, visual scientists often introduce unacknowledged, non-rigorous steps into their arguments. Brindley's remarks correctly sensitized us to the lack of rigor ** with which linking propositions have undoubltedly often been used, but led to few detailed, explicit discussions of linking propositions. it would seem usefule to encourage such discussions, and to encourage visual scientists to make linking propositions explicit **so that linking propositions can be subjected to the requirements of consistency and the risks of falsification appropriate to the evaluation of all scientific [as opposed to pseudoscientific] propositions."
Data itself tells us nothing; it must be interpreted. The interpretation of data requires a clear theoretical framework. One of the requirements of a valid theoretical framework is that its assumptions be a. consistent with each other; b. consistent with known facts; and c. testable, in the sense that it makes new predictions about potentially observable natural phenomena. ("Linking propositions" are effectively just another term for the link between data and theory, applied to a particular field). Theoretical claims, in other words, are not to be made arbitrarily and casually because they are key to the valid interpretation of data.
The major theoretical premise of Greenwood et al (2017) is arbitrary and inconsistent with the facts as we know them and as we can infer them logically. The authors don't even try to provide supporting citations that are anything more than window-dressing. The premise is contained in the following two excerpted statements:
"Given the hierarchical structure of th eviusal system, with inherited receptive field properties at each stage (35), variations in this topological representation could arise early in the viusal system, with pattenrs specific to each individual that are inherited throughout later stages." (Introduction, p. E3574).
"Given that the receptive fields at each stage in the visual system are likely built via the summation of inputs from the preceding stages (e.g. 58)..." (Discussion, p. E3580).
The statements are false, so it is no surprise that neither of the references provided is anywhere near adequate to support what we are supposed to accept as "given."
The first reference is to Hubel and Wiesel (1962), an early study recording from the striate cortex of the cat. Its theoretical conclusions are early, speculative, based on a narrow set of stimulus conditions, and apply to a species with rather different visual skills than humans. Even so, the paper does not support Greenwood et al's breezy claim; it includes statements that contradict both of the quoted assertions, e.g. (emphasis mine):
"Receptive fields were termed complex when the response to light could not be predicted from the arrangements of excitatory and inhibitory regions. Such regions could generally not be demonstrated; when they could the laws of summation and mutual antagonism did not apply." (p. 151). Even the conclusions that may seem to apply are subject to a conceptual error noted by Teller (1984); the notion that a neuron is specialized to detect the stimulus (of the set selected for testing) to which it fires the fastest. (She likens this error to treating each retinal cone as a detector of wavelength to which it fires the fastest, or at all, when as we know the neural code for color is contingent on relative firing rates of all three cones).
Well before Hubel and Wiesel, it had become abundantly clear that the link between retinal stimulation and perception could not remotely be described in terms of summative processes. (What receptive field properties have been inherited by the neurons whose activity is responsible for the perception of an edge in the absence of a luminance or spectral step? Or an amodal contour? or a double-layer? etc). Other than as a crude reflection of the fact that neurons are all interconnected in some way, the "inherited" story has no substance and no support.
And of course, it is well-known that neural connections in the brain are so extraordinarily dynamic and complex (feedforward, feedback, feed-sideways, diagonal; even the effect of the feedforward component, so to speak, is contingent on the general system state at a given moment of stimulation) that to describe the system as "hierarchical" is basically to mislead.
The second supporting citation, to Felleman and van Essen (1991) is also to a paper in which the relevant claims are presented in a speculative fashion.
To be continued (in addition to additional theoretical problems, the method and analysis - mostly post hoc - is also highly problematic).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Oct 12, Takashi Shichita commented:
As noted in the comment on PubMed Commons, the targeted site of our guide RNA2 was split over exons 2 and 3. This was our mistake. However, we successfully obtained the Msr1-deficient RAW cell clone through limiting dilution. Our guide RNA1 is thought to function correctly for the disruption of the Msr1 gene.
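For readers wondering why a guide whose target is split across an exon-exon junction is problematic, the toy sketch below (made-up sequences, not the actual Msr1 locus) shows that such a protospacer exists as a contiguous stretch only in the spliced mRNA, not in the genomic DNA that Cas9 must target:

```python
exon2 = "ACGTACGTACGT"                 # invented exon 2 sequence
intron = "GTAAGTNNNNNNTTACAG"          # invented intron between exons 2 and 3
exon3 = "TTGGCCAATTGG"                 # invented exon 3 sequence

genomic = exon2 + intron + exon3       # the DNA that Cas9 actually targets
spliced = exon2 + exon3                # the mRNA after the intron is removed

guide_within_exon = exon2[2:12]                     # lies entirely inside exon 2
guide_across_junction = exon2[-6:] + exon3[:6]      # spans the exon 2/3 junction

print(guide_within_exon in genomic, guide_within_exon in spliced)           # True True
print(guide_across_junction in genomic, guide_across_junction in spliced)   # False True
```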
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Oct 09, Nirajkumar Makadiya commented:
Hi Authors,
We have some questions regarding the methods section of PMID: 28394332. 1) Guide RNA 2 (5′-CTTCCTCACAGCACTAAAAA-3′) for the CRISPR spans exons 2 and 3. We were wondering whether that can work. 2) Did you try other sgRNA sequences to knock out the Msr1 gene?
Thank you!
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 19, Seán Turner commented:
The accession number for the 16S rRNA gene sequence is incorrectly cited in the manuscript as LN598544.1. The correct number is LT598544.1.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 16, David Keller commented:
Thank you, again, for your illuminating and scholarly reply to my comments and questions. Your field of causal inference theory may provide a much-needed bridge spanning the chasm between the land of rigor where mathematicians dwell, and the land of rigor mortis inhabited by clinicians and patients. I will continue to follow your work, and that of your colleagues, with great interest.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Nov 16, Ian Shrier commented:
We completely agree on objectives and importance of estimating the per protocol effect. It is absolutely the effect that I am interested in as a patient, and therefore as a physician who wants to communicate important information to the patient.
I do think we have different experiences on how people interpret the words "per protocol analysis". Historically, this term has been used to mean an analysis that does not estimate the per protocol effect except under unusual contexts. More recently, some have used it to refer to a different type of analysis that does estimate the per protocol effect. The field of causal inference is still relatively new and there are other examples of changing terminology. I expect the terminology will stabilize over the next 10 years, which will make it much easier for readers, authors and reviewers.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Nov 15, David Keller commented:
Thank you for your thoughtful reply to my comment. In response, I would first remark that, while "it is sometimes difficult to disentangle the jargon", it is always worthwhile to clearly define our terms.
Commonly, an "analysis" is a method of processing experimental data, while an "effect" is a property of experimental data, observed after subjecting it to an "analysis".
As applied to our discussion, the "per protocol effect" would be observed by applying "per protocol analysis" to the experimental data.
My usage of "per protocol results" was meant to refer to the "per protocol effect" observed in a particular trial.
The above commonly-understood definitions may be a reason causal inference terminology "can get quite confusing to others who may not be used to reading this literature", for example, by defining "per protocol effect" differently than as "the effect of per protocol analysis".
Nevertheless, clinicians are interested in how to analyze clinical study data such that the resulting observed effects are most relevant to individual patients, especially those motivated to gain the maximal benefit from an intervention. For such patients, I want to know the average causal effect of the intervention protocol, assuming perfect adherence to protocol, and no intolerable side-effects or unacceptable toxicities. This tells the patient how much he can expect to benefit if he can adhere fully to the treatment protocol.
Of course, the patient must understand that his benefits will be diminished if he fails to adhere fully to treatment, or terminates it for any reason. Still, this "average expected benefit of treatment under ideal conditions" remains a useful goal-post and benchmark of therapy, despite any inherent bias it may harbor, compared with the results of intention-to-treat analysis.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Oct 31, Ian Shrier commented:
Thank you for your comment. We seem to be in agreement on what many patients may be most interested in.
In your comment, you say "the per-protocol results of a treatment are, therefore, of interest to patients and their clinicians, and should be reported by clinical trials, along with appropriate statistical caveats and disclaimers."
The words "per protocol results" might mean different things to different people and I thought it important to clarify some of the terminology, which can get quite confusing to others who may not be used to reading this literature.
In the causal inference literature, it has been suggested that we use "per protocol analysis" to refer to an analysis that examines only those participants who follow their assigned treatment. This is different from the "per protocol effect" (also known as the population average causal effect), which is the causal effect that would be observed if the entire population received a treatment compared with the entire population not receiving a treatment.
Further, when we refer to the causal effect of “treatment”, we really mean the causal effect of a “treatment strategy”. For example, clinical practice would be to discontinue a medication if there is a serious side effect. In a trial, this would be part of the protocol. Therefore, a person with a serious side effect still counts as following the “treatment strategy” (i.e. the protocol of a per protocol effect) even though they are no longer on treatment.
In brief, the per protocol analysis and per protocol effect are only the same under certain conditions. Assume a randomized trial with the control group receiving usual care and also not having access to the active treatment. In this case, those who are assigned active treatment and do not take their active treatment still receive the same usual care as the control group. The per protocol analysis will be the same as the per protocol effect only if these non-adherent active treatment group participants receiving usual care have the same outcomes on average as those assigned to the control group receiving usual care. This is an assumption that many of us are reluctant to make because the reasons for non-adherence are often related to the probability of the outcome. This is why more sophisticated analyses are helpful in estimating the true population average causal effect.
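A toy numerical sketch (invented numbers, unrelated to any particular trial) may make the distinction concrete: if frailer patients are both less likely to adhere to active treatment and more likely to have the outcome, the naive per protocol analysis overstates the true per protocol effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
frail = rng.random(n) < 0.3                       # 30% of participants are "frail"

# Potential outcome risks: treatment lowers everyone's risk by 5 percentage points.
risk_untreated = np.where(frail, 0.40, 0.10)
risk_treated = risk_untreated - 0.05

y0 = rng.random(n) < risk_untreated               # outcome if untreated
y1 = rng.random(n) < risk_treated                 # outcome if treated

true_per_protocol_effect = y1.mean() - y0.mean()  # about -0.05 by construction

# Randomize, and suppose only non-frail participants adhere to active treatment.
assigned_treatment = rng.random(n) < 0.5
adherent = ~frail
observed = np.where(assigned_treatment & adherent, y1, y0)

naive_pp_analysis = (observed[assigned_treatment & adherent].mean()
                     - observed[~assigned_treatment].mean())

print("true per protocol effect:   ", round(true_per_protocol_effect, 3))
print("naive per protocol analysis:", round(naive_pp_analysis, 3))   # roughly -0.14, overstated
```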
I hope this makes sense. It is sometimes difficult to disentangle the jargon and still be 100% correct in statements.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Oct 24, David Keller commented:
Patients considering an intervention want to know the actual effect of receiving it
Motivated patients are not very interested in how much benefit they can expect to receive from being assigned to a therapy; they want to know the benefits and risks of actually receiving the treatment. The average causal effect of treatment is, for these patients, more clinically relevant than the average causal effect of assignment to treatment.
Intention-to-treat analysis may be ideal for making public health decisions, but it is largely irrelevant for treatment decisions involving particular individuals. Patients want personalized medical advice. A patient's genetic and environmental history may modify his expected results of receiving treatment, and the estimated effects should be discussed.
Regardless of their inherent biases, the per-protocol results of a treatment are, therefore, of interest to patients and their clinicians, and should be reported by clinical trials, along with appropriate statistical caveats and disclaimers.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 17, Lydia Maniatis commented:
To synopsize the problem as I understand it: The author is claiming that large sections of the population are walking around, in effect, with differently tinted lenses, some more bluish, some more white or yellowish.
This is a radical claim, and would have wide-ranging consequences, not least of which is that, as in the case of the dress, there would be a general disagreement about color. Such a general disagreement would not be detectable, as no one could know that what we all, for example, might call blue is actually perceived in different ways.
The reason we can know that we disagree about the colors of the dress is that we agree on colors generally, and the dress constitutes a surprising exception.
If the author believes in his hypothesis, a strong, direct experimental test is in order. (It would certainly falsify.) If he insists on focussing on correlations with "owls" and "larks," then he should better control his populations, e.g. use night watchmen for the owls, and park rangers for the larks, or investigate how the dress is perceived by populations in e.g. Norway, where the days and nights are months-long and the same for everyone. Do we get less variation there?
What doesn't seem worth pursuing is another uninterpretable replication based on poor quality, muddy and uncheckable data from anonymous readers of Slate.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 15, Lydia Maniatis commented:
Please see comments/author responses on PubPeer.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 12, Lydia Maniatis commented:
"Although the results from both our experiments appear to be consistent with previous research (Atsma et al., 2012; Fencsik et al., 2007; Franconeri et al., 2012; Howe & Holcombe, 2012; Iordanescu et al., 2009; Keane & Pylyshyn, 2006; Khan et al., 2010; Lovejoy et al., 2009; Luu & Howe, 2015; Szinte et al., 2015; Watamaniuk & Heinen, 2015), they do not seem to be consistent with each other. Obviously, the two experiments we have described here were not exactly the same. We will discuss some of the differences that might explain the seemingly conflicting results."
The conflict between the results has to also be a conflict between some of the results and the hypothesis being tested. The broad speculation as to which of the many confounds may be responsible just shows that there were too many confounds. Such as:
"the amount of attentional resources dedicated to the task might have been different between the two experiments. For both overtly tracked and covertly tracked targets, we see that the overall probe detection rate was higher in the second experiment compared to the first. Moreover, the feedback we received from several participants in both experiments suggests that tracking the objects in Experiment 1 was so easy that participants were very easily distracted by their thoughts, and that Experiment 2 was more challenging and engaging. We therefore speculate that participants focused their attention more strongly (i.e., dedicated more attentional resources) toward tracking each target during Experiment 2 than during Experiment 1."
The only way to test that speculation is to do another experiment, hopefully one less confounded. Otherwise - if speculation by itself can resolve serious confounds in an otherwise inconclusive experiment - why do any experiments at all? Just assume that any differences between future results and prediction will be due to confounds.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 12, Christopher Southan commented:
This is a welcome development, particularly the instantiation of QC'd compounds. However, the utility would be enhanced by submission of the 4,707 structures to PubChem, including enabling selects for the 1,988 approved and 1,348 Ph1s (easily done via SID tags).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 11, Darko Lavrencic commented:
Here is new evidence supporting the presented hypothesis: http://www.med-lavrencic.si/research/the-intracraniovertebral-volumes/
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Oct 21, Anju Anand commented:
This is a commentary from our #rsjc on this article: http://www.thelancet.com/journals/lanres/article/PIIS2213-2600(17)30231-X/abstract
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 24, Harri Hemila commented:
A manuscript version is available at hdl.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 07, Misha Koksharov commented:
comments withdrawn as unwelcomed
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 07, Tanai Cardona commented:
Quite an interesting article.
On page 16, you say: "However, several lines of evidence are consistent with the existence of oxygenic photosynthesis hundreds of millions of years before the Archean–Proterozoic boundary [...]"
Another line of evidence for an early origin of oxygenic photosynthesis comes from the evolution of the photochemical reaction centers and Photosystem II, the enzyme that oxidizes water to oxygen. I have recently shown that the earliest events in the evolution of water oxidation catalysis likely date back to the early Archaean. See Cardona, 2016, Front. Plant Sci. 7:257 doi: 10.3389/fpls.2016.00257; and also Cardona et al., 2017, BiorXiv, doi.org/10.1101/109447 for an in depth follow up.
I am very glad to read that a largely anaerobic Archaean atmosphere with oxygen levels as low as 10E-7 is not inconsistent with the presence of oxygenic photosynthesis.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 06, Dale D O Martin commented:
It should be noted that myristoylation occurs on N-terminal glycines. It can only happen on internal glycines if there is a proteolytic event that generates a new N-terminal glycine on the new C-terminal fragment.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jul 03, Vijay Sankaran commented:
Our re-analysis of the gene expression data presented in this paper shows confounding due to variation in erythroid maturation. Correction for these changes results in alternative conclusions from those presented here. This re-analysis has been published: https://www.ncbi.nlm.nih.gov/pubmed/28615220
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 28, Natalie Clairoux commented:
Free version of this article available after April 29, 2018 at https://papyrus.bib.umontreal.ca/xmlui/handle/1866/19571
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 28, Preben Berthelsen commented:
The three original insights: John Snow issued the first warning on the prolonged use of oxygen in the new-born in his 1841 paper on “Asphyxia and Resuscitation of Still-born Children”.
In 1850, Snow was the first to advocate and use chloroform in the treatment of status asthmaticus.
The first description of “Publication Bias”, in the medical literature, was by Snow in 1858 in “On Chloroform and other Anæsthetics: Their Action and Administration”.
The misconception. Snow believed that death during chloroform anaesthesia was caused by “air too heavily charged with chloroform” and could be prevented by “judicious” as opposed to “freely” administration of the vapour.
Preben G. Berthelsen, MD. Charlottenlund, Denmark.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 07, Peter Hajek commented:
This paper presents some interesting data, but the language it uses is misleading. The word ‘initiation’ implies a start of regular use, but the data concern mostly a single instance when people tried an e-cigarette. Among non-smokers, progression to regular vaping is extremely rare. Trying vaping once or twice and never going back to it does not initiate anything. (Among smokers, a switch to vaping is a good thing).
Describing e-cigarette use as ‘e-cigarettes smoking’ is another misleading sound-bite. Vaping poses only a small fraction of the risks of smoking.
Finally, preventing e-cigarette use is not ‘tobacco use prevention’. Vaping does not include use of tobacco. If the authors mean by this phrase that experimentation with e-cigarettes inevitably leads to smoking, there is no sign of that. Smoking prevalence in youth is declining at an unprecedented rate.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Aug 24, Rafael Fridman commented:
Nice study but the title of the paper does not reflect the findings and thus is misleading. There is no functional evidence that the pathway presented (TM4SF1/DDR1) indeed "promotes metastasis". Invasion in vitro is not metastasis (a complex process). Cells may be invasive in vitro but not metastatic in vivo. Therefore, the authors are respectfully recommended to change the title of the paper to better represent the actual findings and the limitations of the experimental systems.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 02, William McAuliffe commented:
This is a valuable addition to the literature, but its generalizability is limited by the recruitment source of the prescription group. Most iatrogenically addicted patients do not seek treatment in a drug treatment program, because of the demographic differences between them and the typical non-medical opioid addict. Pain patients are much more likely to go to a pain clinic or to be simply tapered off the drug by the original providers. There are important differences between the pain patients who go to drug treatment programs and those who go to pain clinics; these differences are attenuated in this study and likely account for its failure to find many significant effects.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 23, Md. Shahidul Islam commented:
In human beta cells TRPM5 is almost absent while its closest relative TRPM4 is abundant. Marabita F, and Islam MS. Pancreas. 2017 Jan;46(1):97-101. Expression of Transient Receptor Potential Channels in the Purified Human Pancreatic β-Cells. PMID: 27464700 DOI: 10.1097/MPA.0000000000000685
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 12, Yu-Chen Liu commented:
We sincerely appreciate your insightful feedback and constructive advice on our research. We strongly appreciate the advice to exclude all sequences that map to the mammalian genome before searching for potential plant miRNAs. On the other hand, given that some reads map to both plant and mammalian genomes, whether such reads are in fact of mammalian origin (false positives) cannot be determined before further experimental validation. From the perspective of candidate discovery, reads that map to both plant and mammalian genomes should not be discarded indiscriminately. This is, in my opinion, a trade-off between avoiding false positives and increasing the discovery rate. Perhaps both measures should be taken in future studies.
Thank you again for the helpful advice and the effort dedicated to reviewing this research.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 12, Kenneth Witwer commented:
Liu YC, 2017 reported mapping very low levels of mature plant miRNAs in a subset of public data gathered from 198 human plasma samples, concluding that this was evidence of "cross-kingdom RNAi"; however, both the authors and I observed that only one putatively foreign sequence, "MIR2910," mapped consistently and at levels above a reasonable noise threshold. No data were presented to support functional RNAi. I further noted that MIR2910 is a plant rRNA sequence, has been removed from miRBase, and also maps with 100% coverage and identity to human rRNA. In the comment below, Dr. Liu now links to unpublished predicted hairpin mapping data that were not included in the Liu YC, 2017 BMC Genomics conference article, which, like my comments, focused on mature putative xenomiRs. Dr. Liu states that mapping has been done not only to the putative MIR2910 mature sequence (as reported), but also to the predicted MIR2910 precursor hairpin sequence.
This is an interesting development, and I strongly and sincerely commend Liu et al for sharing their unpublished data in this forum. This is exactly what PubMed Commons is about: a place for scientists to engage in civil and constructive discourse.
However, examination of the new data reinforces my observation that the only consistently mapped "foreign" sequence in the Liu YC, 2017 study is a human rRNA sequence, not a plant miRNA, mature or otherwise. Beyond the 100% identity of the 21nt putative MIR2910 mature sequence with human rRNA, a 47nt stretch (80%) of the plant "pre-MIR2910" rRNA fragment aligns to human 18S rRNA with only one mismatch (lower-case), and indeed Liu et al allowed one mismatch:
Plant rRNA fragment:
UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGuUAACGAACGAGAC
Human rRNA fragment:
UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGaUAACGAACGAGAC
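For readers who want to check this alignment directly, here is a minimal sketch (plain Python, added for illustration and not part of the original comment) that counts the mismatches between the two 47-nt fragments quoted above:

# Count mismatches between the plant and human rRNA fragments quoted above.
plant = "UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGUUAACGAACGAGAC"
human = "UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGAUAACGAACGAGAC"
mismatches = sum(a != b for a, b in zip(plant, human))
print(len(plant), "nt compared,", mismatches, "mismatch")  # 47 nt compared, 1 mismatch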
Dr. Liu provides the example of a plasma small RNA sequencing dataset, DRR023286, as the primary example of plant miRNA mapping, so let us examine this finding more closely. DRR023286 was by far the most deeply sequenced of the six plasma samples in the Ninomiya S, 2015 study (71.9 million reads), as re-analysed by Liu et al. Yet, like all other data examined in the Liu et al study, and despite the much deeper sequencing, DRR023286 yielded only the pseudo-MIR2910 as a clear "xenomiR" (mature or precursor). Of special note, previously reported dominant dietary plant xenomiRs such as MIR159, MIR168a, and the rRNA fragment "MIR2911" were not detected reliably, even with one mismatch.
The precursor coverage plots for DRR023286 (and less deeply sequenced datasets, for that matter), also according to the newly provided Liu et al data, show that any coverage is in the 5' 80% of the putative MIR2910 sequence: exactly the part of the sequence that matches human rRNA. The remaining 12 nucleotides at the 3' end of the purported MIR2910 precursor are conspicuously absent and never covered in their entirety. To give one example from the Liu et al data, in the deepest-sequenced DRR023286 dataset, even the single short read that includes just 11 of these 12 nucleotides has a mismatch. Furthermore, various combinations of these 3' sequences match perfectly to rRNA sequences in plant and beyond (protist, bacterial, etc.). Hence, the vanishingly small number of sequences that may appear to support a plant hairpin could just as convincingly be attributed to bacterial contamination...but we are already playing in the noise.
As noted, Liu et al allowed one mismatch to plant in their mapping and described no pre-filtering against human sequences. In the DRR023286 dataset, fully 90% of the putative mapping to plant MIR2910 included a mismatch (and thus human)...the other 10% were mostly sequences 100% identical to human.
In conclusion, the main points I raised previously have not been disputed by Liu et al:
1) numerous annotated plant miRNAs in certain plant "miRNA" databases appear to have been misannotated or to be the result of contamination, including some sequences reported by Liu et al that map to human and not to plant;
2) the mature "MIR2910" sequence is a plant ribosomal sequence that also maps perfectly to human rRNA; and
3) the read counts for all but one putative plant xenomiR in the Liu et al study are under what one might consider a reasonable noise threshold for low-abundance RNA samples (plasma) with tens of millions of reads each.
Furthermore, the current interaction establishes a new point:
4) the plant rRNA sequence annotated by some as a "MIR2910" precursor maps almost entirely (and for all practical purposes entirely) to a human rRNA sequence.
Future, more rigorous searches for plant xenomiRs in mammalian tissues and fluids will require a pre-filtering step to exclude all sequences that map with one (or more) mismatches to all mammalian genomes/transcriptomes and preferably other possible contaminants, followed by a zero-mismatch requirement for foreign mapping.
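To make the proposed filtering logic concrete, here is a minimal toy sketch in Python (my illustration, not part of the original comment; a real pipeline would use a dedicated aligner such as Bowtie against full genomes/transcriptomes, and the reference strings here are placeholders rather than actual data):

# Toy illustration of host pre-filtering before foreign (plant) mapping:
# discard any read that aligns within 1 mismatch to the host reference,
# then require an exact (0-mismatch) plant hit for the surviving reads.
def hits_within(read, reference, max_mismatches):
    # True if the read aligns to any position of the reference with <= max_mismatches.
    n = len(read)
    for i in range(len(reference) - n + 1):
        window = reference[i:i + n]
        if sum(a != b for a, b in zip(read, window)) <= max_mismatches:
            return True
    return False

def filter_xenomir_candidates(reads, host_ref, plant_ref):
    kept = []
    for read in reads:
        if hits_within(read, host_ref, max_mismatches=1):
            continue                      # host or host-like: exclude
        if hits_within(read, plant_ref, max_mismatches=0):
            kept.append(read)             # keep only perfect foreign matches
    return kept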
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 11, Yu-Chen Liu commented:
The authors appreciate the insightful feedback and agree that hypotheses derived from small RNA-seq data analysis deserve skeptical examination and further experimental validation. Regarding Prof. Witwer's skeptical view on this issue, whether a specific sequence indeed originates from plants can be validated by examining the 2’-O-methylation at its 3’ end (Chin, et al., 2016; Yu, et al., 2005). The threshold number of copies per cell required for plant miRNAs to affect human gene expression was also discussed in previous research (Chin, et al., 2016; Zhang, et al., 2012).
Some apparent misunderstandings need to be clarified:
In the commentary of Prof. Witwer:
“A cross-check of the source files and articles shows that the plasma data evaluated by Liu et al were from 198 plasma samples, not 410 as reported. Ninomiya et al sequenced six human plasma samples, six PBMC samples, and 11 cultured cell lines 19. Yuan et al sequenced 192 human plasma libraries (prepared from polymer-precipitated plasma particles). Each library was sequenced once, and then a second time to increase total reads.”
Authors’ response:
First of all, the statement "410 samples" in the article referred to the number of small RNA-seq runs conducted in the cited studies. Whether multiple NGS runs conducted on the same plasma sample should be counted as individual experimental replicates is debatable. The analysis of each small RNA-seq run was conducted independently. The authors appreciate the kind comment regarding the potential confusion on this issue.
In the commentary of Prof. Witwer:
“Strikingly, the putative MIR2910 sequence is not only a fragment of plant rRNA; it has a 100% coverage, 100% identity match in the human 18S rRNA (see NR 003286.2 in GenBank; Table 3). These matches of putative plant RNAs with human sequences are difficult to reconcile with the statement of Liu et al that BLAST of putative plant miRNAs "resulted in zero alignment hit", suggesting that perhaps a mistake was made, and that the BLAST procedure was performed incorrectly.”
Authors’ response:
The precursor (stem-loop) sequences of the plant miRNAs were utilized in the BLAST sequence alignment in this work. The precursor sequence of peu-MIR2910, “UAGUUGGUGGAGCGAUUUGUCUGGUUAAUUCCGUUAACGAACGAGACCUCAGCCUGCUA”, was used; the alignment was not performed merely with the mature sequence, “UAGUUGGUGGAGCGAUUUGUC”. The stem-loop sequences, as well as the alignment of the sequences against the plant genomes, were taken into consideration by using miRDeep2 (Friedländer, et al., 2012). As illustrated in the provided figures, sequencing reads were mapped to the precursor sequences of MIR2910 and MIR2916. As listed in the table below, many sequencing reads aligned to regions of the precursor sequences other than the mature sequence. For instance, in the small RNA-seq data of DRR023286, 5369 reads mapped to the mature peu-MIR2910 sequence and 4010 reads mapped to other regions of the precursor sequence.
miRNA | Run | Total reads | On mature | On precursor
peu-MIR2910 | DRR023286 | 9370 | 5369 | 4010
peu-MIR2910 | SRR2105454 | 3013 | 1433 | 1580
peu-MIR2914 | DRR023286 | 1036 | 19 | 1017
peu-MIR2916 | SRR2105342 | 556 | 227 | 329
(See the files MIR2910_in_DRR023286.pdf, MIR2910_in_SRR2105454.pdf, MIR2914_in_DRR023286 and MIR2916_in_SRR2105342.pdf.)
The figures are available at the following URL:
https://www.dropbox.com/sh/9r7oiybju8g7wq2/AADw0zkuGSDsTI3Aa_4x6r8Ua?dl=0
As described in the article, all reported reads mapped onto the plant miRNA sequences were also mapped onto the five conserved plant genomes. A compressed archive, “miRNA_read.tar.gz”, is available within the provided link. The results of the miRDeep2 analysis are summarized in these PDF files. Each figure file is named according to the summarized reads, the sequencing run, and the mapped plant genome. For example, reads from run SRR2105181 that aligned onto both the Zea mays genome and the peu-MIR2910 precursor sequence are summarized in the figure file “SRR2105181_Zea_mays_peu-MIR2910.pdf”.
In the commentary of Prof. Witwer:
“Curiously, several sequences did not map to the species to which they were ascribed by the PMRD. Unfortunately, the PMRD could not be accessed directly during this study; however, other databases appear to provide access to its contents.”
Authors’ response:
All the stem-loop sequences of plant miRNAs were acquired from the 2016 updated version of PMRD (Zhang, et al., 2010), which was not properly cited. The data used are provided at the previously mentioned URL.
In the commentary of Prof. Witwer:
“Counts were presented as reads per million mapped reads (rpm). In contrast, Liu et al appear to have reported total mapped reads in their data table. Yuan et al also set an expression cutoff of 32 rpm (log2 rpm of 5 or above). With an average 12.5 million reads per sample (the sum of the two runs per library), and, on average, about half of the sequences mapped, the 32 rpm cutoff would translate to around 200 total reads in the average sample as mapped by Liu et al.”
Authors’ response:
Regarding the concern about a reads per million mapped reads (rpm) threshold, the authors appreciate the kind reminder of the need to normalize sequence read counts to rpm for proper comparison between samples of different sequencing depth. However, such a comparison was unfortunately not conducted in this work. Given that the reads were mapped onto plant genomes instead of the human genome, and that the mapped putative plant reads constitute only ~3% of the overall reads, the normalization would be of limited value here. On the other hand, the amount of cell-free RNA present in plasma samples is generally lower than that within cellular samples (Schwarzenbach, et al., 2011).
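As an aside for readers, the arithmetic behind an rpm cutoff is straightforward; a minimal sketch (my illustration, using the figures from the quoted commentary rather than anything from the authors' pipeline):

# Convert an rpm (reads per million mapped reads) cutoff into a raw read count.
def rpm_to_raw_count(rpm_cutoff, total_reads, mapped_fraction):
    mapped_reads = total_reads * mapped_fraction
    return rpm_cutoff * mapped_reads / 1e6

# Figures from the quoted commentary: a 32 rpm cutoff, ~12.5 million reads per
# sample, roughly half of them mapped -> about 200 raw reads.
print(rpm_to_raw_count(32, 12.5e6, 0.5))  # 200.0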
Reference
Chin, A.R., et al. Cross-kingdom inhibition of breast cancer growth by plant miR159. Cell research 2016;26(2):217-228.
Friedländer, M.R., et al. miRDeep2 accurately identifies known and hundreds of novel microRNA genes in seven animal clades. Nucleic acids research 2012;40(1):37-52.
Schwarzenbach, H., Hoon, D.S. and Pantel, K. Cell-free nucleic acids as biomarkers in cancer patients. Nature Reviews Cancer 2011;11(6):426-437.
Yu, B., et al. Methylation as a crucial step in plant microRNA biogenesis. Science 2005;307(5711):932-935.
Zhang, L., et al. Exogenous plant MIR168a specifically targets mammalian LDLRAP1: evidence of cross-kingdom regulation by microRNA. Cell research 2012;22(1):107-126.
Zhang, Z., et al. PMRD: plant microRNA database. Nucleic acids research 2010;38(suppl 1):D806-D813.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 07, Kenneth Witwer commented:
Caution is urged in interpreting this conference article, as described in more detail in my recent commentary. For careful analysis of this issue, using greater numbers of studies and datasets and coming to quite the opposite conclusions, see Kang W, 2017 and Zheng LL, 2017. Here, Liu et al examined sequencing data from two studies of a total of 198 plasma samples (not 410 as reported). Although no canonical plant miRNAs were mapped above a reasonable background threshold, one rRNA degradation fragment that was previously and erroneously classified as a plant miRNA, MIR2910, was reported at relatively low but consistent counts. However, this rRNA fragment is found in human 18S rRNA and is thus most simply explained as part of the human degradome. The other reportedly detected plant miRNAs were mostly found in a small minority of samples and in those were mapped at average read counts of less than one per million. These sequences may be amplification, sequencing, or mapping errors, since reads were mapped directly to plant (with one mismatch allowed) with no pre-filtering against mammalian genomes/transcriptomes. Several purported plant sequences, e.g., ptc-MIRf12412-akr and ptc-MIRf12524-akr, map perfectly to human sequences but do not appear to map to Populus or to other plants, suggesting that the plant miRNA database used by the authors and published in 2010 may include some human sequences. This is not a surprise, given pervasive low-level contamination in sequencing data, as reported by many authors.
Of course, even if some of the mapped sequences were genuine plant RNAs, they would be present in blood at greatly subhormonal levels unlikely to affect biological processes. No evidence of function is provided, apart from in silico predictions of human targets of the putative MIR2910 sequence, which, as noted above, is a human sequence. Thus, the titular claim of "evidences of cross-kingdom RNAi" is wholly unsupported. Overall, the results of this study corroborate the findings of Kang W, 2017 and previous studies: that dietary xenomiR detection is likely artifactual.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 07, Atanas G. Atanasov commented:
A very interesting and innovative research topic… I have featured this manuscript at: http://healthandscienceportal.blogspot.com/2017/04/chronic-diseases-and-aging-potential.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
pubmed.ncbi.nlm.nih.gov pubmed.ncbi.nlm.nih.gov
-
On 2017 Jun 05, Frances Cheng commented:
Immobility in mice, calm or stressed?
Frances Cheng, PhD; Ingrid Taylor, DVM; Emily R Trunnell, PhD
People for the Ethical Treatment of Animals
This paper by Yackle, et al. claims to have found a link between breathing and calmness in mice. However, the conclusions made by the authors leave several critical questions unanswered.
Mice are naturally inquisitive and the expression of exploratory behavior is generally interpreted as good welfare. Conversely, being motionless or less exploratory is typically thought to be indicative of stress, pain, or poor welfare. There is no established relationship between calmness and sitting still; in fact the literature would likely attribute time spent immobile to anxiety in this species. Similarly, the relationship between grooming and calmness is not clear. Grooming can be elicited by both stressful and relaxing situations, and as such is problematic to use as an absolute marker of stress levels. In some cases, grooming can be a stress reliever and restraint stress can increase grooming (1). A study by Kaleuff and Tuohima attempted to differentiate between stress grooming and relaxed grooming, stating: “While a general pattern of self-grooming uninterrupted cephalocaudal progression is normally observed in no-stress (comfort) conditions in mice and other rodents, the percentage of ‘incorrect’ transitions between different stages and the percentage of interrupted grooming bouts may be used as behavioural marker of stress" (2). Indeed the preBötC ablated mouse in this supplementary video (http://science.sciencemag.org/content/sci/suppl/2017/03/29/355.6332.1411.DC1/aai7984s1.mp4) appears to be less active, but without other objective measurements, it is a leap to conclude that this mouse is calm.
The chamber used to measure the animals’ behavior is extremely small and inadequate for observing mouse behavior and making conclusions about calmness or other emotions, particularly when the differences in behavior between the two mice being compared are very subtle, as they are here. In addition, simply being in a chamber of this size could potentially be restraining and stress-inducing. A larger chamber would allow for more traditional measures of calmness and anxiety, such as exploratory behavior, where the amount of time the mouse spends along the wall of the chamber versus the amount of time he or she leaves the safety of the walls to explore the center area is measured and scored (the Open Field test).
As mentioned, sitting still does not necessarily dictate calmness, and in many behavioral paradigms, immobility is thought to be an outward sign of anxiety or distress. The authors mention that they observe different breathing rates associated with different behaviors, e.g., faster during sniffing and slower during grooming; however, observed breathing rate alone cannot be used as the sole measure to associate an emotion with a behavior. Consider that humans under stress can hyperventilate when sitting still. It would be more informative, and more applicable to calmness, to know whether or not ablation of Cdh9/Dbx1 double-positive preBötC neurons would influence one’s ability to control their breathing and potentially to breathe slower during a psychologically stressful situation, rather than how the ablation impacts breathing coupled with normal physiological functions.
It is not clear if the experimental group is physiologically capable of breathing faster in response to external stimuli. Not being able to do so—in other words being programmed to breathe a certain way—could be distressing. The lack of compensatory respiration mechanisms, such as increased respiration in unknown, potentially dangerous situations, could affect prey species such as mice in ways that have not been previously characterized.
The experimental groups were born with full use of the Cdh9/Dbx1 double-positive preBötC neurons. These neurons were ablated in adult animals. If these animals did not have a full range of control of their breathing after ablation, they might have experienced unpleasant psychological reactions to the forced change in breathing pattern, which could be distressing.
In the Methods section, the authors did not specify whether or not the behavioral experiment was performed during the mice's light or dark cycle. From the Supplementary video, linked above, the artificial lighting of the indoor facility also makes this determination impossible. As you may know, mice are nocturnal. Conducting behavioral tests during the light cycle, when the mice would normally be sleeping, can lead to dramatically different results (3,4). Interruption of a rodent’s normal sleeping period reduces welfare and increases stress (5,6). It has been recommended that for behavioral phenotyping of genetically engineered mice, dark-phase testing allows researchers to better discriminate these strains against wild-type animals and provides superior outcomes (7).
To fully assess calmness or stress, one can measure physiological parameters such as hormone levels or heart rate, to name a few. However, the authors did not examine any measure of stress beyond breathing rate, which they artificially manipulated, not even to measure the baseline stress level between groups. The use of theta rhythm as secondary external validation for emotion further supports our concerns that the authors have drawn a broad conclusion based on rather tenuous connections. The relationship between theta rhythm and arousal may depend entirely on locomotion. As noted by Biskamp and colleagues, “The power of hippocampal theta activity, which drives theta oscillations in the mPFC, depends on locomotion and is attenuated when animals remain immobile” (8). The authors conclude that mice are “calm” for simply sitting still, a behavior that has in most other cases been attributed to decreased well-being.
For the reasons listed above, we are concerned that the authors may have drawn premature and/or incorrect conclusions regarding the relative “calmness” of the mice with preBötC ablation. Importantly, the authors claim as a justification for their work that this data may be useful in understanding the effects of pranayama yoga on promoting “mental calming and contemplative states”. However, the practice of pranayama includes not only controlled breathing but also mental visualization and an increased emphasis on abdominal respiration. There are also periods when the breath is held deliberately. It cannot be assumed the various components of pranayama can individually achieve a calmer state in humans, and, crucially, these components cannot be modeled or replicated in animals.
References
1) S.D. Paolo et al., Eur J Pharmacol., 399, 43-47 (2000).
2) A.V. Kaleuff, P. Tuohimaa, Brain Res. Protoc., 13, 151-158 (2004).
3) A. Nejdi, J. M. Gustavino, R. Lalonde, Physiol. Behav., 59, 45-47 (1995).
4) A. Roedel, C. Storch, F. Holsboer, F. Ohl, Lab. Anim., 40, 371-381 (2006).
5) U. A. Abou-Ismail, O. H. P. Burman, C. J. Nicol, M. Mendl, Appl. Anim. Behav. Sci., 111, 329-341 (2008).
6) U. A. Abou-Ismail, R. A. Mohamed, S. Z, El-Kholya, Appl. Anim. Behav. Sci., 162, 47-57 (2015).
7) S. M. Hossain, B. K. Y. Wong, E. M. Simpson, Genes Brain Behav., 3, 167-177 (2004).
8) J. Biskamp, M. Bartos, J. Sauer, Sci. Rep., 7, 45508 (2017).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 02, Atanas G. Atanasov commented:
Thanks a lot for the excellent overview; I have featured the manuscript at: http://healthandscienceportal.blogspot.com/2017/04/recent-advances-in-parkinsons-disease.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 10, Thomas Heston commented:
This is an interesting concept which could improve scientific research and trust in science. One possible shortcoming is that the authors propose a private network, as opposed to an open network with no central authority as proposed in the blockchain-based scientific study (Digit Med 2017;3:66-8). These concepts of combining blockchain technology with smart contracts are a step in the right direction towards making research studies more reproducible, reliable, and trusted.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 01, Lydia Maniatis commented:
Last sentence of abstract: "Our findings suggest participants integrate shape, motion, and optical cues to infer stiffness, with optical cues playing a major role for our range of stimuli."
The authors offer no criterion for "range of stimuli," even though they clearly limit their conclusions to this undefined set. This means that, if one wanted to attempt a replication, it would not be clear what "range of stimuli" would constitute a valid attempt. The relevance of this point can be appreciated in the context of statements like: "Compared with previous studies, we find a much less pronounced effect of shape cues compared with material cues (Han & Keyser, 2015, 2016; Paulun et al., 2017)." Speculation as to the reasons for this discrepancy is moot if we can't circumscribe our stimulus set in a theoretically clear way.
Also, the terms "shape, motion, and optical cues," as they are used by the authors, reflect an unproductive failure to distinguish between perceived and physical properties. Relevant physical properties of a stimulus are limited to the retinal stimulation it produces. The correct title for this paper would be "Inferring the stiffness of unfamiliar objects from their inferred surface properties, shapes, and motions."
Instead, the authors are treating stiffness as an inference and the rest as objective fact. (Not that thinking about perceptual qualities like stiffness, which seem more indirectly* inferred than the others, isn't interesting in itself, but the lines between perception and physics, and the relationships between them, shouldn't be blurred).
*Having said this, I don't think it's actually appropriate to characterize any perceived quality as more or less indirect than others. Even the seemingly simplest things - such as extent (which includes amodal completion), or lightness (which includes double layers, subjective contours), are not in any sense read directly off of the retinal stimulation.
The problem of confusing perceived and objective properties is all the more acute given that the investigators aren't using real objects, but objects rendered by a third-party computer program.
"The render engine used to generate the final images was Maxwell 3.0.1.3 (NextLimit Technologies, Madrid, Spain).... Specifically, they were designed to approximate the following materials: black marble, white marble, porcelain, nickel, concrete paving, cement, ceramic, steel, copper, light wood, dark wood, silvered glass, glass, stone, leather, wax, gelatine, cardboard, plastic, paper, latex, cork, ice cream, lichen, waffle, denim, moss, and velvet. Some of these materials were downloaded or based on downloads from the Maxwell free resources library (http://resources.maxwellrender.com), and others were designed by us."
The authors are skipping all the good parts. What are the theoretical underpinnings of what are essentially assumptions about what various computer images will look like? Why is Maxwell's rendering of "velvet" equivalent, in terms of the retinal stimulation it generates, with real velvet? What are the criteria of a valid rendering of all of these perceived qualities and substances?
The criteria are empirical (see below), but loose. It is not clear that they are statistically valid, or how this could be assessed:
"Finally, Supplementary Figure S2 summarizes the results of the free material-naming task. Generally, they show that most of our renderings yielded compelling impressions of realistic materials that observers were able to reliably classify. In the following, the naming results were used to decide on the materials to test in Experiments 3 and 4 by choosing only materials that were identified as the same material by at least 50% of participants (see Stimuli section of Experiment 3)."
Fifty percent agreement seems like a pretty low bar. Why not at least 51% (which would still be low)? Why not shoot for 100%? Is normal inter-individual variability in perception of materials this low in real life? Or are the renderings generally inadequate? Even poor pictorial renderings of materials can contain cues - e.g. wood grain - which could produce seemingly clear answers that don't really reflect a valid percept in all the particulars. The very brief description of the naming task doesn't make it clear whether or not it was a forced answer, i.e. whether or not participants were allowed to say "not sure," which seems relevant.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 05, Gwinyai Masukume commented:
The authors state that South Africa is the “country with the highest global incidence of HIV/AIDS.”
This is an oversight because both Swaziland and Lesotho have an estimated HIV incidence rate among adults (15-49) greater than South Africa’s. The incidence rates as of 2015, the most recent available from http://aidsinfo.unaids.org/, are 1.44 for South Africa, 1.88 for Lesotho and 2.36 for Swaziland.
This translates into approximately 380 000 new HIV infections per year for South Africa, 18 000 for Lesotho and 11 000 for Swaziland. South Africa has a much larger population, about 55 million people compared to Lesotho’s of about 2 million and to Swaziland’s of about 1.5 million https://www.cia.gov/library/publications/the-world-factbook/rankorder/2119rank.html#sf.
Although the absolute number of new HIV infections is higher for South Africa, both Lesotho and Swaziland have, relative to their population sizes, disproportionately more new HIV infections (a higher incidence) than South Africa.
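The arithmetic behind this point can be sketched directly from the figures quoted above (my illustration; these are crude whole-population rates, not the UNAIDS incidence rates among adults aged 15-49, but the ordering is the same):

# Crude new-infection rates per 1,000 total population, using the figures above.
countries = {
    "South Africa": (380_000, 55_000_000),
    "Lesotho":      (18_000,   2_000_000),
    "Swaziland":    (11_000,   1_500_000),
}
for name, (new_infections, population) in countries.items():
    rate = 1000 * new_infections / population
    print(f"{name}: {rate:.1f} new infections per 1,000 population")
# South Africa ~6.9, Lesotho ~9.0, Swaziland ~7.3: both smaller countries
# exceed South Africa despite far fewer infections in absolute terms.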
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 09, Kenneth J Rothman commented:
Lappe et al. (1) reported that women receiving vitamin D and calcium supplementation had 30% lower cancer risk than women receiving placebo after four years (hazard ratio (HR)=0.70, 95% confidence interval (CI): 0.47 to 1.02). Remarkably, they interpreted this result as indicating no effect. So did the authors of the accompanying editorial (2), who described the 30% lower risk for cancer as “the absence of a clear benefit,” because the P-value was 0.06. Given the expected bias toward a null result in a trial that comes from non-adherence coupled with an intent-to-treat analysis (3), the interpretation of the authors and editorialists is perplexing. The warning issued last year by the American Statistical Association (ASA) (4) about this type of misinterpretation of data should be embraced by researchers and journal editors. In particular, the ASA stated: “Scientific conclusions …should not be based only on whether a p-value passes a specific threshold.” Editors in particular ought to guide their readership and the public at large to avoid such mistakes and foster more responsible interpretation of medical research.
EE Hatch, LA Wise
Boston University School of Public Health
KJ Rothman
Research Triangle Institute & Boston University School of Public Health
References
(1) Lappe J,Watson P, Travers-Gustafson D, et al. Effect of vitamin D and calcium supplementation on cancer incidence in older women. JAMA. 2017; 317:1234-1243. doi:10.1001/jama.2017.2115
(2) Manson JE, Bassuk SS, Buring JE. Vitamin D, Calcium, and Cancer. Approaching Daylight? JAMA 2017; 317:1217-1218.
(3) Rothman KJ. Six persistent research misconceptions. J Gen Intern Med 2014; 29:1060-1064. doi: 10.1007/s11606-013-2755-z
(4) ASA statement on statistical significance and P-values. Am Stat. 2016. doi:10.1080/ 00031305.2016.1154108.
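As a purely arithmetical aside (an illustration added here, not part of the original letter), the reported 95% CI can be used to back-calculate an approximate two-sided p-value under a normal approximation on the log hazard ratio scale:

import math

def approx_p_from_ci(hr, lo, hi):
    # Approximate two-sided p-value from a hazard ratio and its 95% CI,
    # assuming normality on the log scale.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = math.log(hr) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# HR = 0.70 with 95% CI 0.47 to 1.02 gives p of roughly 0.07, consistent with
# the reported P = 0.06; the interval is compatible with anything from a ~53%
# risk reduction to essentially no effect.
print(approx_p_from_ci(0.70, 0.47, 1.02))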
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jun 01, JOANN MANSON commented:
We are writing in response to the comment by EE Hatch, LA Wise, and KJ Rothman that was posted on PubMed Commons on May 9. The authors questioned our interpretation [1] of the key finding of the recent randomized trial by Lappe et al. [2], asserting that we relied solely on the p-value of 0.06 and noting that “Scientific conclusions…should not be based only on whether a p-value passes a specific threshold.” However, the p-value in isolation was not the basis for our interpretation of this trial’s results or our conclusion regarding the effectiveness of vitamin D supplementation as a chemopreventive strategy. As we stated in our editorial, “...the absence of a clear benefit for this endpoint [in the Lappe et al. trial] is in line with the totality of current evidence on vitamin D and/or calcium for prevention of incident cancer..... [F]indings from observational epidemiologic studies and randomized clinical trials to date have been inconsistent. Previous trials of supplemental vitamin D, albeit at lower doses ranging from 400 to 1100 IU/d and administered with or without calcium, have found largely neutral results for cancer incidence; a 2014 meta-analysis of 4 such trials [3-6] with a total of 4333 incident cancers among 45,151 participants yielded a summary relative risk (RR) of 1.00 (95% CI, 0.94-1.06) [7]. Similarly, previous trials of calcium administered with or without vitamin D have in aggregate demonstrated no effect on cancer incidence, with a 2013 meta-analysis reporting a summary RR of 0.95 (0.76-1.18) [8].” (Parenthetically, we note that, in aggregate, vitamin D trials do find a small reduction in cancer mortality [summary RR=0.88 (0.78-0.98)] [7], but, as stated in our editorial, “[t]he modest size, relatively short duration, and relatively small numbers of cancers in the [recent Lappe et al.] trial … preclude[d] robust assessment” of the cancer mortality endpoint.) If the commenters believe that a p-value of 0.06 in the context of the generally null literature (at least for the endpoint of cancer incidence) should be interpreted as a positive finding, then where do they draw the line? A p-value of 0.07, 0.10, 0.20, or elsewhere? Large-scale randomized trials of high-dose supplemental vitamin D are in progress and are expected to provide definitive answers soon regarding its utility for cancer prevention.
--JoAnn E. Manson, MD, DrPH (1,2); Shari S. Bassuk, ScD (1); Julie E. Buring, ScD (1,2)
1 Division of Preventive Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston
2 Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston
References
- Manson JE, Bassuk SS, Buring JE. Vitamin D, calcium, and cancer: approaching daylight? JAMA 2017;317:1217-8.
- Lappe J, Watson P, Travers-Gustafson D, et al. Effect of vitamin D and calcium supplementation on cancer incidence in older women: a randomized clinical trial. JAMA 2017;317:1234-43.
- Trivedi DP, Doll R, Khaw KT. Effect of four monthly oral vitamin D3 (cholecalciferol) supplementation on fractures and mortality in men and women living in the community: randomised double blind controlled trial. BMJ 2003;326:469.
- Wactawski-Wende J, Kotchen JM, Anderson GL, et al. Calcium plus vitamin D supplementation and the risk of colorectal cancer. N Engl J Med 2006;354:684-96.
- Lappe JM, Travers-Gustafson D, Davies KM, Recker RR, Heaney RP. Vitamin D and calcium supplementation reduces cancer risk: results of a randomized trial. Am J Clin Nutr 2007;85:1586-91.
- Avenell A, MacLennan GS, Jenkinson DJ, et al. Long-term follow-up for mortality and cancer in a randomized placebo-controlled trial of vitamin D3 and/or calcium (RECORD trial). J Clin Endocrinol Metab 2012;97:614-22.
- Keum N, Giovannucci E. Vitamin D supplements and cancer incidence and mortality: a meta-analysis. Br J Cancer 2014;111:976-80.
- Bristow SM, Bolland MJ, MacLennan GS, et al. Calcium supplements and cancer risk: a meta-analysis of randomised controlled trials. Br J Nutr 2013;110:1384-93.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 May 09, Kenneth J Rothman commented:
Lappe et al. (1) reported that women receiving vitamin D and calcium supplementation had 30% lower cancer risk than women receiving placebo after four years (hazard ratio (HR)=0.70, 95% confidence interval (CI): 0.47 to 1.02). Remarkably, they interpreted this result as indicating no effect. So did the authors of the accompanying editorial (2), who described the 30% lower risk for cancer as “the absence of a clear benefit,” because the P-value was 0.06. Given the expected bias toward a null result in a trial that comes from non-adherence coupled with an intent-to-treat analysis (3), the interpretation of the authors and editorialists is perplexing. The warning issued last year by the American Statistical Association (ASA) (4) about this type of misinterpretation of data should be embraced by researchers and journal editors. In particular, the ASA stated: “Scientific conclusions …should not be based only on whether a p-value passes a specific threshold.” Editors in particular ought to guide their readership and the public at large to avoid such mistakes and foster more responsible interpretation of medical research.
EE Hatch, LA Wise
Boston University School of Public Health
KJ Rothman
Research Triangle Institute & Boston University School of Public Health
References
(1) Lappe J,Watson P, Travers-Gustafson D, et al. Effect of vitamin D and calcium supplementation on cancer incidence in older women. JAMA. 2017; 317:1234-1243. doi:10.1001/jama.2017.2115
(2) Manson JE, Bassuk SS, Buring JE. Vitamin D, Calcium, and Cancer. Approaching Daylight? JAMA 2017; 317:1217-1218.
(3) Rothman KJ. Six persistent research misconceptions. J Gen Intern Med 2014; 29:1060-1064. doi: 10.1007/s11606-013-2755-z
(4) ASA statement on statistical significance and P-values. Am Stat. 2016. doi:10.1080/ 00031305.2016.1154108.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 20, MICHAEL BEETS commented:
Nice work
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 19, Helmi BEN SAAD commented:
The correct names of the authors are: "Khemiss M, Ben Khelifa M, Ben Rejeb M, Ben Saad H".
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 19, Helmi BEN SAAD commented:
The correct name of the last author is "Ben Saad H".
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 19, Helmi BEN SAAD commented:
The correct name of the last author is "Ben Saad H".
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 19, Helmi BEN SAAD commented:
The correct name of the last author is "Ben Saad H".
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 19, Donald Forsdyke commented:
THE VIRUS-VIRUS ARMS RACE
For commentary on this paper please see ArXiv preprint (1). For further discussion see commentary on a BioRxiv preprint (2).
(1) Forsdyke DR (2016) Elusive preferred hosts or nucleic acid level selection? ArXiv Preprint (https://arxiv.org/abs/1612.02035).
(2) Shmakov SA, Sitnik V, Makarova KS, Wolf YI, Severinov KV, Koonin EV (2017) The CRISPR spacer space is dominated by sequences from the species-specific mobilome. BioRxiv preprint (http://biorxiv.org/content/early/2017/05/12/137356).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jun 12, Bastian Fromm commented:
Summary: The author describes the results of a combined small RNA sequencing and BLAST approach in Taenia ovis (Tov). Specifically, RNA was retrieved from Tov metacercaria and then mapped to the genome of T. solium. Mapped reads were then blasted against miRBase, and on this basis the author describes 34 miRNAs as present in Tov.
Major problems
- The author uses miRBase as reference for cestode miRNAs although it is very outdated (last update 2014). The author should rather have used available literature for comparisons (1-9).
- Consequently (?) the author fails to acknowledge his results in the light of standard work in the field of miRNA evolution in flatworms (10-12) and to draw conclusions about the completeness of his predictions.
- The approach of mapping a small RNA sequencing library of one species against the genome of another is problematic, and I cannot understand why the author does not at least try to use classical PCR to confirm the loci.
Minor problems: 1) Page 3, line 61: the author should make the sentence clearer. It looks like the author removed all reads that had adapter sequences. Recommendation: 1) The author should get all available PRE- (precursor) sequences for cestodes, map his Tov reads to them with liberal settings, and report the results.
- Jiang, S., Li, X., Wang, X., Ban, Q., Hui, W. and Jia, B. (2016) MicroRNA profiling of the intestinal tissue of Kazakh sheep after experimental Echinococcus granulosus infection, using a high-throughput approach. Parasite, 23, 23.
- Kamenetzky, L., Stegmayer, G., Maldonado, L., Macchiaroli, N., Yones, C. and Milone, D.H. (2016) MicroRNA discovery in the human parasite Echinococcus multilocularis from genome-wide data. Genomics, 107, 274-280.
- Macchiaroli, N., Cucher, M., Zarowiecki, M., Maldonado, L., Kamenetzky, L. and Rosenzvit, M.C. (2015) microRNA profiling in the zoonotic parasite Echinococcus canadensis using a high-throughput approach. Parasit Vectors, 8, 83.
- Jin, X., Guo, X., Zhu, D., Ayaz, M. and Zheng, Y. (2017) miRNA profiling in the mice in response to Echinococcus multilocularis infection. Acta tropica, 166, 39-44.
- Bai, Y., Zhang, Z., Jin, L., Kang, H., Zhu, Y., Zhang, L., Li, X., Ma, F., Zhao, L., Shi, B. et al. (2014) Genome-wide sequencing of small RNAs reveals a tissue-specific loss of conserved microRNA families in Echinococcus granulosus. BMC genomics, 15, 736.
- Cucher, M., Prada, L., Mourglia-Ettlin, G., Dematteis, S., Camicia, F., Asurmendi, S. and Rosenzvit, M. (2011) Identification of Echinococcus granulosus microRNAs and their expression in different life cycle stages and parasite genotypes. International journal for parasitology, 41, 439-448.
- Ai, L., Xu, M.J., Chen, M.X., Zhang, Y.N., Chen, S.H., Guo, J., Cai, Y.C., Zhou, X.N., Zhu, X.Q. and Chen, J.X. (2012) Characterization of microRNAs in Taenia saginata of zoonotic significance by Solexa deep sequencing and bioinformatics analysis. Parasitology research, 110, 2373-2378.
- Wu, X., Fu, Y., Yang, D., Xie, Y., Zhang, R., Zheng, W., Nie, H., Yan, N., Wang, N., Wang, J. et al. (2013) Identification of neglected cestode Taenia multiceps microRNAs by illumina sequencing and bioinformatic analysis. BMC veterinary research, 9, 162.
- Ai, L., Chen, M.-X., Zhang, Y.-N., Chen, S.-H., Zhou, X.-N. and Chen, J.-X. (2014) Comparative analysis of the miRNA profiles from Taenia solium and Taenia asiatica adult. African Journal of Microbiology Research, 8, 895-902.
- Fromm, B., Worren, M.M., Hahn, C., Hovig, E. and Bachmann, L. (2013) Substantial Loss of Conserved and Gain of Novel MicroRNA Families in Flatworms. Molecular biology and evolution, 30, 2619-2628.
- Cai, P., Gobert, G.N. and McManus, D.P. (2016) MicroRNAs in Parasitic Helminthiases: Current Status and Future Perspectives. Trends Parasitol, 32, 71-86.
- Fromm, B., Ovchinnikov, V., Hoye, E., Bernal, D., Hackenberg, M. and Marcilla, A. (2016) On the presence and immunoregulatory functions of extracellular microRNAs in the trematode Fasciola hepatica. Parasite immunology.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 16, Michael Tatham commented:
Is CoAlation biologically relevant or a non-functional by-product of the chemical reaction between CoA and cysteine thiols in proximal proteins under certain redox conditions?
Firstly, in my opinion the work described here is technically sound. The development of the specific antibody for CoA, and the mass spectrometric method to detect the modification on peptides are key tools in the analysis of any post-translational modification. However, there is a risk when using these super-sensitive methods, that one can detect vanishingly small amounts of modified peptides, which inevitably calls relevance into question. More specifically, modern mass-spectrometry based proteomics in combination with peptide-level enrichment of modified species has allowed us to identify modification sites in the order of tens of thousands for phosphorylation, ubiquitination, acetylation and SUMOylation (as of May 2017). For these fields, the onus of the researcher has very quickly shifted from identification of sites, to evidence for biological meaning. In short, the question is no longer “Which proteins?”, but “Why?”.
Taking acetylation as an example: PhosphoSitePlus (www.phosphosite.org) lists over 37000 acetylation sites, the majority identified via MS-based proteomics where acetylated peptides have been enriched using acetylated lysine specific antibodies. However, further work investigating endogenous stoichiometry (or site occupancy) of acetylated lysines has revealed that the vast majority are below 1%. Meaning, for most sites, less than 1% of the pool of a protein actually has an acetyl group on a particular lysine (see https://www.ncbi.nlm.nih.gov/pubmed/26358839 and https://www.ncbi.nlm.nih.gov/pubmed/24489116). This clearly calls into question the ability of acetylation to drastically alter the function of most of the proteins identified as ‘targets’.
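As a rough illustration of what "site occupancy" means in this context (a simplified sketch with invented intensities; real occupancy estimates from MS data are considerably more involved):

# Stoichiometry (occupancy) of a modification site: the fraction of the protein
# pool carrying the modification. The intensities below are invented purely
# for illustration.
acetylated_intensity = 1.0e5
unacetylated_intensity = 1.5e7
occupancy = acetylated_intensity / (acetylated_intensity + unacetylated_intensity)
print(f"Site occupancy: {occupancy:.2%}")  # ~0.66%, i.e. below 1%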
A very interesting hypothesis is emerging, whereby many of the identified sites of acetylation are not mediated by the specific transfer of acetyl groups via acetyl-transferase enzymes in cells, but are direct acceptors of acetyl groups from reactive chemicals such as acetyl-CoA, or acetyl-phosphate (an earlier review can be found here https://www.ncbi.nlm.nih.gov/pubmed/24725594). This is termed, non-enzymatic, or chemical modification.
Intriguingly, this proximity-based direct modification process may not be restricted to non-enzymatic modification systems. In fact the majority of enzyme-catalysed cellular post-translational modifications involve highly reactive intermediates (such as thioester-bonded ubiquitin or ubiquitin-like modifiers to E1 or E2 enzymes), which can modify lysines in absence of the specificity-determining enzymes (E3 ligases). So it follows that ‘unintended’ modifications can occur for any biologically relevant post-translational modification simply by spatial proximity. This actually also fits with the acetylation site occupancy studies that showed (relatively) higher occupancy in proteins that are themselves involved in acetylation dynamics. Couple these theories with the exquisitely sensitive detection methods used in modern proteomics studies, and we have the potential to create huge lists of modification sites where the proportion with true biological relevance is unknown.
Where does this all fit in with this work describing post-translational modification of cellular proteins with CoA? Reviewing these data bearing the above in mind, it seems the simplest explanation is that non-enzymatic CoAlation occurs in cells when the redox potential has shifted to tip the balance in favour of reaction of CoA with cysteine thiols in proximal proteins. Removal of oxidising agents would allow the balance to revert to more reducing conditions, and so reversal of the CoAlation. The data presented in this paper support this idea as CoAlation is redox-dependent and ‘targets’ proteins that are known to interact with CoA in the cell.
In short, as with many of the published post-translational modification proteomes, much needs to be done to give biological credibility to sites of CoAlation. In particular occupancy calculations and protein-specific evidence that CoAlation regulates function in vivo, will go a long way to putting the notion of biological relevance beyond reasonable doubt. Until then we should consider the possibility that in many cases, post-translational modifications identified by modern methods have the potential to be the unintended consequence of interactions between reactive molecules and nearby proteins. It is worth noting that such a situation does not exclude biological relevance, but it makes finding any very challenging.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 05, Leonie Brose commented:
Correction to Figure: The parts of figure 1 are in the wrong place so that the results and the figure legend refer to the wrong bar chart; the chart shown as 1c (workplaces) should be at the top of the figure as 1a, thereby shifting 1a (homes) to 1b, and 1b (extending law) to 1c. In the legend for 1b, ‘by socio-economic status’ is incorrect and should be omitted.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 28, Peter Hajek commented:
After stop-smoking treatment, smokers who quit successfully have no need to use e-cigarettes and so treatment successes are concentrated in the group that did not vape post-treatment. Quit rates are of course higher in this group.
A more informative analysis would compare quit rates at one year in people who failed to stop smoking after treatment and who did and did not try vaping during the follow-up period (though even this would face the problem of self-selection).
The results as reported just show that people who fail to stop smoking with other methods are more likely to try e-cigarettes than those who quit smoking successfully.
It is unfortunate that the Conclusions fail to point this out and instead indicate that vaping undermined quitting. It did no such thing, but as with previous such reports, this is how anti-vaping activists are likely to misrepresent this study.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jul 28, Miguel Lopez-Lazaro commented:
Cancer etiology: assumptions lead to erroneous conclusion
The authors claim that cancer is caused and driven by mutations, and that two-thirds of the mutations required for cancer are caused by unavoidable errors arising during DNA replication. The first claim is based on the somatic mutation theory. The second claim is based on a highly positive correlation between the lifetime number of stem cell divisions in a tissue and the risk of cancer in that tissue, and on their method for estimating the proportion of mutations that result from heredity (H mutations), environmental factors (E mutations) and unavoidable errors arising during DNA replication (R mutations). These claims raise several questions:
1. Sequencing studies have found zero mutations in the genes of a variable proportion of different cancer types (see, e.g., https://dx.doi.org/10.1093/jnci/dju405 and references therein). If cancer is caused by mutations in driver genes, could the authors explain what causes these cancers with zero mutations? Could the authors use their method for estimating the proportion of cancer risk that is preventable and unpreventable in people with tumors lacking driver gene mutations?
2. Environmental factors are known to affect stem cell division rates. According to IARC, drinking very hot beverages probably causes esophageal cancer (Group 2A). If you drink something hot enough to severely damage the cells lining the esophagus, the stem cells located in deeper layers have to divide to produce new cells to replace the damaged cells. These stem cell divisions, triggered by an environmental factor, will lead to mutations arising during DNA replication. However, these mutations are avoidable if you do not drink very hot beverages. Should these mutations be counted as environmental mutations (E mutations) or as unavoidable mutations arising during DNA replication (R mutations)?
3. The authors' work is based on the somatic mutation theory. This theory is primarily supported by the idea that cancer incidence increases exponentially with age. Since our cells are known to accumulate mutations throughout life, the accumulation of driver gene mutations in our cells would perfectly explain why the risk of cancer increases until death. However, it is now well established that cancer incidence does not increase exponentially with age for some cancers (acute lymphoblastic leukemia, testicular cancer, cervical cancer, Hodgkin lymphoma, thyroid cancer, bone cancer, etc). It is also well known that cancer incidence decreases late in life for many cancer types (lung cancer, breast cancer, prostate cancer, etc). For example, according to the SEER cancer statistics review, 1975-2014, men in their 80s have approximately half the risk of developing prostate cancer compared with men in their 70s. The somatic mutation theory, which is the basis for this article, does not explain why the lifetime accumulation of driver gene mutations in the cells of many tissues is not translated into an increase in cancer incidence throughout life. Are the authors' conclusions applicable to all cancers or only to those few cancers in which incidence increases exponentially with age until death?
4. The authors estimate that 23% of the mutations required for the development of pancreatic cancer are associated with environmental and hereditary factors; the rest (77%) are mutations arising during DNA replication. However, Notta et al. recently found that 65.4% of pancreatic tumors develop catastrophic mitotic events that lead to mutations associated with massive genomic rearrangements (https://doi.org/10.1038/nature19823). In other words, Notta et al. demonstrate that cell division not only leads to mutations arising during DNA replication, but also to mutations arising during mitosis. For this cancer type, the authors could introduce a fourth source of mutations, and estimate the proportion of mutations arising during mitosis (M mutations) and re-estimate those arising during DNA replication (R mutations). Alternatively, they could reanalyze their raw data without assuming that the parameters “stem cell divisions” and “DNA replication mutations” are interchangeable. Cell division, the process by which a cell copies and separates its cellular components to finally split into two cells, can lead to mutations occurring during DNA replication, but also to other cancer-promoting errors, such as chromosome aberrations arising during mitosis, errors in the distribution of cell-fate determinants between the daughter cells, and failures to restore physical interactions with other tissue components. Would the authors' conclusions stand without assuming that the parameters “stem cell divisions” and “DNA replication mutations” are interchangeable?
5. The authors report a striking correlation between the number of stem cell divisions in a tissue and the risk of cancer in that tissue. They do not report any correlation between the number of mutations in a tissue and the risk of cancer in that tissue; in fact, these parameters are not correlated (see, e.g., https://doi.org/10.1038/nature19768). In addition, the authors discuss that most of the mutations required for cancer are a consequence, not a cause, of the division of stem cells. So, why do the authors use their correlation to say that cancer is caused by the accumulation of mutations in driver genes instead of saying that cancer is caused by the accumulation of cell divisions in stem cells?
For references and additional information see: Comment on 'Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention' DOI: 10.13140/RG.2.2.28889.21602 https://www.researchgate.net/publication/318744904; also https://www.preprints.org/manuscript/201707.0074/v1/download
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 29, Daniel Corcos commented:
The authors confuse mutation incidence with cancer incidence. Furthermore, the factors are not additive. Mutations are obviously related to the number of cell divisions, which is well known, but this tells us nothing about the contribution of heredity and environment.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 29, Atanas G. Atanasov commented:
Compliments to the authors for this very interesting work; I have featured it at: http://healthandscienceportal.blogspot.com/2017/03/new-study-points-that-two-thirds-of.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 26, Zhang Weihua commented:
These authors should have cited our publication that, for the first time, shows that multinucleated giant cells are drug resistant and capable of generating tumors and metastases from a single cell in vivo.
Formation of solid tumors by a single multinucleated cancer cell. Weihua Z, Lin Q, Ramoth AJ, Fan D, Fidler IJ. Cancer. 2011 Sep 1;117(17):4092-9. doi: 10.1002/cncr.26021. Epub 2011 Mar 1. PMID: 21365635
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Aug 04, L Lefferts commented:
This study, funded by members of the International Association of Color Manufacturers (IACM) and written by IACM staff, members, and consultants, touts the safety of food dyes but is so riddled with inaccuracies and misleading statements that it should be retracted and disregarded. Each of its conclusions is incorrect. The Corrigendum only partially and inadequately addresses the errors. Bastaki et al. mischaracterizes the relationship between the study’s exposure estimates and actual concentrations measured analytically by the US Food and Drug Administration (FDA), systematically underestimates food dye exposure, and relies on acceptable daily intake (ADI) estimates that are based on outdated animal studies that are incapable of detecting the kinds of adverse behavioral effects reported in multiple double-blind clinical trials in children. Bastaki et al. ignores the nine recent reviews (including three meta-analyses) drawing from over 30 such double-blind clinical trials that all conclude that excluding food dyes, or adherence to a diet that eliminates food dyes as well as certain other foods and ingredients, reduces adverse behavior in some children (Arnold et al. 2012, Arnold et al. 2013, Faraone and Antshel 2014, Nigg et al. 2012, Nigg and Holton 2014, Schab and Trinh 2004, Sonuga-Barke et al. 2013, Stevens et al. 2011, Stevenson et al. 2014). While Bastaki et al. has been revised to delete the incorrectly reported doses used in the Southampton study, it still makes misleading statements about the Southampton study.
Each erroneous conclusion is addressed in turn in a letter sent to the editor, signed by me, Lisa Lefferts, Senior Scientist, Center for Science in the Public Interest, and Jim Stevenson, Emeritus Professor of Developmental Psychopathology, School of Psychology, University of Southampton, and available at <https://cspinet.org/sites/default/files/attachment/dyes Bastaki LTE.pdf>.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 23, Markus Meissner commented:
During this lengthy discussion, the Kursula group succeeded in solving part of the puzzle regarding the polymerisation mechanism of apicomplexan actin and confirmed our suspicions (see below) that sedimentation assays, while useful in other systems, lead to variable and unreliable results in the case of apicomplexan actin (see: https://www.nature.com/articles/s41598-017-11330-w). Briefly, in this study, polymerisation assays based on pyrene labelling were used to compare the polymerisation kinetics of Plasmodium and rabbit actin and conclusively showed that apicomplexan actin polymerises in a cooperative manner with a critical concentration similar to that of canonical actin, and that the shorter filament lengths result from a higher depolymerisation rate. Since Skillman et al., 2013 reached their conclusion exclusively on the basis of sedimentation assays, their conclusion regarding an isodesmic polymerization mechanism of apicomplexan actin, as discussed below, should be viewed with great scepticism. As discussed in that study, these in vitro data also support our in vivo findings (Periz et al., 2017, Whitelaw et al., 2017 and Das et al., 2017) suggesting that a critical concentration of G-actin is required in order to form F-actin filaments. Therefore, the hypothesis of an isodesmic polymerisation mechanism can be considered falsified.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 20, Robert Insall commented:
Professor Sibley's most extensive comments are based around a single paper (Skillman et al. 2013) that concluded the polymerization of Toxoplasma actin uses an isodesmic, rather than a nucleation-based mechanism. While this work was well-executed, and thorough, it is not on its own sufficient to support the level of absolutism that is in evidence in these comments. In particular, results from actin that has been exogenously expressed (in this case, in baculovirus) are less reliable than native apicomplexan actin. The folding of actin is infamously complex, with a full set of specialist chaperones and idiosyncratic N-terminal modifications. Even changes in the translation rate of native actin can affect its function and stability (see for example Zhang, 2010). Exogenously-expressed actin may be fully-folded, but still not representative of the physiological protein. Thus it is not yet appropriate to make dogmatic statements about the mechanism of apicomplexan actin function until native actin has been purified and its polymerization measured. When this occurs, as it surely will soon, stronger rulings may be appropriate.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 20, Markus Meissner commented:
We thank David Sibley for his last comment. As we mentioned previously, it was not the aim of this study to prove or disprove isodesmic polymerisation. We highlighted the current discussion in the field regarding isodesmic polymerisation (see previous comments). It is counterproductive to turn the comments on this paper into a discussion of Skillman et al., 2013, which is viewed with great scepticism in the field. We made our views clear in previous responses and we hope that future results will help to clarify this issue. However, we find it concerning (and distracting) that, in contrast to his earlier comments, according to which our data can be consolidated with isodesmic polymerisation, David Sibley is now doubting the validity of our data, mentioning that CB might affect actin dynamics. This is certainly the case, as shown in the study, and as is the case with most actin-binding proteins used to measure actin dynamics in eukaryotic cells. This issue was discussed at length in the manuscript, in the reviewers' comments and in the authors' response, which can all be easily accessed at https://elifesciences.org/articles/24119. The above statement reflects the joint opinions of: Markus Meissner (University of Glasgow), Aoife Heaslip (University of Connecticut) and Robert Insall (Beatson Institute, Glasgow).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 19, L David Sibley commented:
Based on the most recent response by Dr. Meissner, it is clear that there is still some confusion about the difference between measuring the kinetics of actin polymerization in vitro vs. monitoring actin dynamics in vivo. These are fundamentally different processes, the former of which cannot be directly inferred from the latter. Given this confusion, it is worth reviewing how these two processes are distinct, yet inter-related.
When referring to the mechanism of actin polymerization in vitro, nucleation is the process of forming stable trimers, which are normally limited by an intrinsic kinetic barrier imposed by unstable dimers. Due to this intrinsic instability, the nucleation step is normally revealed as a pronounced lag phase in the time course of polymerization, after which filaments undergo rapid elongation Pollard TD, 2000. TgACT1 lacks this nucleation step and instead uses a non-cooperative, isodesmic process. The Arp2/3 complex facilitates formation of the trimer by acting as the barbed end, thus reducing the lag time and accelerating polymerization, typically by side branching from existing filaments. Toxoplasma has no use for such a step as it would not affect the efficiency of an isodesmic process since dimers and trimers normally form without a lag phase Skillman KM, 2011. By contrast, formins bind to the barbed end of existing filaments and promote elongation, both by preventing capping protein from binding and by using profilin to gather actin monomers for addition to the barbed end. Formins may also nucleate F-actin by binding to two monomers to lower the lag phase for trimer formation, thus facilitating elongation, although this role is less well studied. Importantly, formins can act on actins that use either an intrinsic “nucleation-elongation” cooperative mechanism or an isodesmic process, such as that used by Toxoplasma. Hence, the fact that formins function in Toxoplasma has no bearing on the intrinsic polymerization mechanism of TgACT1.
Once the above definitions are clearly understood, it becomes apparent why the isodesmic process of actin nucleation used by Toxoplasma is fully compatible with both the short filament, rapid turnover dynamics that have been described previously Sahoo N, 2006, Skillman KM, 2011, Wetzel DM, 2003, and the new findings of long-stable filaments described in the present paper Periz J, 2017. These different states of actin polymerization represent dynamics that are driven by the combination of the intrinsic polymerization mechanism and various actin-binding proteins that modulate this process. However, the dynamic processes that affect the status of G and F-actin in vivo cannot be used to infer anything about the intrinsic mechanism of actin polymerization as it occurs in solution. As such, we strongly disagree that there is an issue to resolve regarding the intrinsic mechanism of actin polymerization in Toxoplasma nor do any of the studies in the present report address this point. Our data on the in vitro polymerization kinetics of TgACT1 clearly fit an isodesmic process Skillman KM, 2013 and we are unaware of any data that demonstrates otherwise. Hence we fail to see why this conclusion is controversial and find it surprising that these authors continue to question this point in their present work Periz J, 2017, previous report Whitelaw JA, 2017, and comments by Dr. Meissner. As it is not possible to predict the intrinsic mechanism of actin polymerization from the behavior observed in vivo, these comments are erroneous and misleading. On the other hand, if these authors have new data that speaks directly to the topic of the intrinsic polymerization mechanism of TgACT1, we would welcome them to provide it for discussion.
Although we disagree with the authors on the above points, we do agree that the fact that actin filaments can be visualized in Toxoplasma for the first time is interesting and certainly in contrast to previous studies. For example, previous studies failed to reveal such filaments using YFP-ACT1, despite the fact that this tagged form of actin is readily incorporated into Jasplakinolide-stabilized filaments Rosenberg P, 1989. As well, filaments have not been seen by CryoEM tomography Paredes-Santos TC, 2012 or by many studies using conventional transmission EM. This raises some concern that the use of chromobodies (Cb) that react to F-actin may stabilize filaments and thus affect dynamics. Although the authors make some attempt to monitor this in transfected cells, it is very difficult to rule out that Cb are in fact enhancing filament formation. One example of this is seen in Figure 6A, where in a transiently transfected cell, actin filaments are seen with both the Cb-staining and anti-actin, while in the non-transfected cell, it is much less clear that filaments are detected with anti-actin Periz J, 2017. Instead the pattern looks more like punctate clusters that concentrate at the posterior pole or residual body. Thus, while we would agree that the Cb-stained filaments also stain with antibodies to F-actin, it is much less clear that they exist in the absence of Cb expression. It would thus be nice to see these findings independently reproduced with another technique. It would also be appropriate to test the influence of Cb on TgACT1 in vitro to determine if it stabilizes filaments. There are published methods to express Toxoplasma actin in a functional state and so this could easily be tested Skillman KM, 2013. Given the isodesmic mechanism used by TgACT1, it is very likely that any F-actin binding protein would increase the stability of the short filaments that normally form spontaneously, thus leading to longer, more stable filaments. This effect is likely to be less pronounced when using yeast or mammalian actins as they intrinsically form stable filaments above their critical concentration. Testing the effects of Cb on TgACT1 polymerization in vitro would provide a much more sensitive readout than has been provided here, and would help address the question of whether expression of Cb alters in vivo actin dynamics.
In summary, we find the reported findings of interest, but do not agree that they change the view of how actin polymerization operates in Toxoplasma at the level of the intrinsic mechanism. They instead reveal an important aspect of in vivo dynamics and it will be important to determine what factors regulate this process in future studies.
The above statement reflects the joint opinions of: John Cooper (Washington University), Dave Sept (University of Michigan) and David Sibley (Washington University).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 16, Markus Meissner commented:
Thank you for your comment, which appears to be only a slight update of the comments already made on the eLife website; it would be helpful for all readers who wish to follow this discussion if we could stick to the website where the discussion started (see: https://elifesciences.org/articles/24119).
Regarding the second comment of David Sibley: It is good to see that the authors of the Skillman paper (Skillman et al., 2013) are able to reconcile our data with their unusual, isodesmic polymerisation model, despite their initial interpretation, which clearly states that “…an isodesmic mechanism results in a distribution of SMALL OLIGOMERS, which explains why TgACTI only sediments efficiently at higher g force. Our findings also explain why long TgACTI filaments have not been observed in parasites by any method, including EM, fluorescence imaging of GFP–TgACTI and Ph staining." While it appears that we will need a lengthy discussion about Skillman et al., 2013, or, even better, more reliable assays to answer the question of isodesmic vs cooperative polymerisation, our study did not aim to answer this open issue, which we briefly introduced in Periz et al., 2017 to give a more complete picture of the open questions regarding apicomplexan actin. As soon as more convincing evidence is available for cooperative or isodesmic polymerisation of apicomplexan actin, we will be happy to integrate it into our interpretation. Meanwhile we remain of the opinion that our in vivo data (see also Whitelaw et al., 2017) best reflect the known behaviours of canonical actin. While it seems that under the conditions used by Skillman et al., 2013 apicomplexan actin polymerizes in an isodesmic manner, in the in vivo situation F-actin behaviour appears very similar to other, well characterised model systems. However, we would like to point out that a major argument in the interpretation of Skillman et al., 2013 for isodesmic polymerisation is that “This discovery explains previous differences from conventional actins and offers insight into the behaviour of the parasite in vivo. First, nucleation is not rate limiting, so that T.gondii does not need nucleation-promoting factors. Indeed, homologs of actin nucleating proteins, such as Arp2/3 complex have not been identified within apicomplexan genomes”. This statement is oversimplified and cannot be reconciled with the literature on eukaryotic actin. For example, Arp2/3 knockouts have been produced in various cell lines (and obviously their actin doesn’t switch to an isodesmic polymerisation process). Instead, within cells, regulated actin assembly is initiated by two major classes of actin nucleators, the Arp2/3 complex and the formins (Butler and Cooper, 2012). Therefore, we thought it necessary to mention in Periz et al., 2017 that apicomplexans do possess nucleators, such as formins. Several studies agree that apicomplexan formins efficiently NUCLEATE actin in vitro, both rabbit and apicomplexan actin (Skillman et al., 2012, Daher et al., 2010 and Baum et al., 2008). In summary, we agree that future experiments will be required to solve this issue and we are glad that David Sibley agrees with the primary findings of our study. We hope that future in vitro studies will help to solve the question of isodesmic vs cooperative polymerisation mechanism in the case of apicomplexan actin so that a better integration of in vivo and in vitro data will be possible.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jun 14, L David Sibley commented:
We feel it is worth briefly reviewing the concept of the critical concentration (Cc), and the properties of nucleation-dependent actin polymerization, since there seems to be some misconception about these terms as they are used in this paper Periz J, 2017.
Polymerization assays using muscle or yeast actin clearly show that these actins undergo nucleation-dependent assembly. Nucleation is a cooperative assembly process in which monomers of actin (i.e. G-actin) form small, unstable oligomers that readily dissociate. The Cc is the concentration of free actin above which a stable nucleus is formed and filament elongation begins, a process that is more thermodynamically favorable than the nucleation step. A key feature of this nucleation-elongation mechanism is that for total actin concentrations above the Cc, the concentration of free G-actin remains fixed at the Cc, and all of the additional actin, over and above the Cc, is polymerized into filaments (i.e. F-actin). In contrast, an isodesmic polymerization process is not cooperative, and all steps (formation of dimer, trimer, etc.) occur with the same binding and rate constants. With isodesmic polymerization, the monomer concentration (G-actin concentration) does not display a fixed limit; instead, as total actin concentration increases, the G-actin concentration continues to increase. Another key difference with isodesmic polymerization is that polymer forms at all concentrations of total actin (i.e. there is no concept of a critical concentration, Cc, that must be exceeded in order to achieve polymer formation).
The inherent differences between nucleation-elongation and isodesmic polymerization give rise to distinct kinetic and thermodynamic signatures in experiments. Because the nucleation process is unfavorable and cooperative, the time course of nucleation-elongation polymerization shows a characteristic lag phase, with a relatively low rate of initial growth, before the favorable elongation phase occurs. In contrast, isodesmic polymerization shows no lag phase, but exhibits linear growth vs. time from the start at time zero. The thermodynamic differences are manifested in experiments examining the fractions of polymer (F-actin) and monomer (G-actin) at steady state. Since nucleation-elongation has a critical concentration (Cc), the monomer concentration plateaus at this value and remains flat as the total protein concentration is increased. Polymer concentration is zero until the total concentration exceeds the critical concentration, and above that point, all the additional protein exists as polymer. In the isodesmic model, in stark contrast, the monomer concentration continues to increase and polymer forms at all concentrations of total protein. These two distinct behaviors are illustrated in Figure 1 from Miraldi ER, 2008.
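As a minimal numerical sketch of the two steady-state signatures just described (an illustration added here, not taken from the comment or from Skillman KM, 2013; the critical concentration Cc and the isodesmic equilibrium constant K are arbitrary example values), the following Python snippet computes free monomer vs. total protein under both models, using the standard isodesmic mass balance c_total = c1/(1 - K*c1)^2:

```python
# Sketch of the two steady-state signatures: nucleation-elongation (monomer plateaus at
# Cc) vs. isodesmic assembly (monomer rises smoothly, approaching 1/K only asymptotically,
# with polymer present at every total concentration). Cc and K are arbitrary values.
import numpy as np

def nucleation_elongation(total, cc=0.1):
    """Free monomer is capped at Cc; everything above Cc is polymer."""
    monomer = np.minimum(total, cc)
    return monomer, total - monomer

def isodesmic(total, K=50.0):
    """Solve total = c1 / (1 - K*c1)**2 for the free monomer c1 (0 <= K*c1 < 1)."""
    monomers = []
    for ct in np.atleast_1d(total):
        lo, hi = 0.0, min(ct, 0.999999 / K)
        for _ in range(60):                       # simple bisection on the mass balance
            mid = 0.5 * (lo + hi)
            if mid / (1.0 - K * mid) ** 2 < ct:
                lo = mid
            else:
                hi = mid
        monomers.append(0.5 * (lo + hi))
    monomer = np.array(monomers)
    return monomer, np.asarray(total) - monomer

total = np.linspace(0.01, 1.0, 5)
print(nucleation_elongation(total)[0])  # plateaus at Cc = 0.1
print(isodesmic(total)[0])              # keeps increasing, no sharp plateau
```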
Our previous study on yeast and Toxoplasma actin Skillman KM, 2013 shows sedimentation assays that are closely matched by the theoretical results discussed above. In our study, yeast actin (ScACT, Figure 2c) displays the saturation behavior characteristic of a nucleation-elongation mechanism; however, for TgACT1 (Figure 2a), the monomer concentration (red) continues to increase as total actin increases. In addition, the inset to Figure 2a shows that filaments (blue) are present at the lowest concentrations of total actin, and that polymer formation does not exhibit a lag Skillman KM, 2013. Based on these features, it is unequivocal that Toxoplasma actin follows an isodesmic polymerization process, with no evidence of cooperativity.
Several of the comments in the response may lead the reader to confound polymerization behavior in vitro with that observed for actin polymerization in vivo in cells. The question of whether actin polymerization occurs by a nucleation-elongation mechanism or by an isodesmic mechanism is one that can only be determined in vitro using a solution of pure actin, because this is a property of the actin molecule itself, irrespective of other components. While the in vitro polymerization behavior is relevant as the template upon which various actin-binding proteins act, the polymerization mechanism for the actin alone cannot be inferred from in vivo observations due to the presence of actin-interacting proteins.
The authors state that the presence of “nucleation centers” in the parasite is not easy to consolidate with the isodesmic model Periz J, 2017. We disagree completely and emphatically. We agree that there are “centers” of accumulation of F-actin in the cell, but these foci should not be referred to as “nucleation” centers in this case, because the term “nucleation” has a specific meaning in regard to the polymerization mechanism. F-actin may accumulate in these foci over time as a result of any one or more of several dynamic processes – new filament formation, elongation of short filaments, decreased turnover, or clustering of pre-existing filaments. The result is interesting and important; however, the result cannot be used to infer a polymerization mechanism.
The authors imply that these centers of F-actin correspond with sites of action of formins Periz J, 2017, which are capable of binding to actin monomers or actin filaments and thereby promoting actin polymerization. With vertebrate or yeast actin, which has a nucleation-elongation mechanism, formins do accelerate the nucleation process, and they also promote the elongation process. In the case of the isodesmic model for actin polymerization, formins would still function to promote polymerization, by interacting with actin filaments and actin monomers. Indeed, the short filaments that form with the isodesmic mechanism are ideal templates for elongation from the barbed end (which formins enhance). We have previously shown that TgACT1 polymerized in the presence of formins assembles into clusters of intermediate-sized filaments that resemble the in vivo centers Skillman KM, 2012. Hence, as we commented previously, the isodesmic mechanism is entirely consistent with the observed in vivo structures labeled by the chromobodies.
The authors also suggest that evidence of a nucleation-elongation mechanism, with a critical concentration, is provided by the observation that actin filaments seen by chromobodies in vivo do not form in a conditional knock down of TgACT1 Periz J, 2017. In our view, this conclusion is based on incorrectly using observations of in vivo dynamics to infer the intrinsic polymerization mechanism of pure actin protein. Higher total actin concentration leads to higher actin filament concentration under both models, with control provided by the various actin-binding proteins of the cell and their relative ability to drive filament formation and turnover in vivo. However, dependence on total actin concentration is not a reflection of the intrinsic polymerization mechanism. The polymerization mechanism of TgACT1, whether isodesmic or nucleation-elongation, is unlikely to be the critical determinant of actin dynamics in vivo; instead, actin monomers and filaments are substrates for numerous actin-binding proteins that regulate filament elongation, filament turnover, and G-actin sequestration, that is, the whole of actin cytoskeleton dynamics.
Although we agree that much more study is needed to unlock the molecular basis of actin polymerization and dynamics in apicomplexans, it will be important to distinguish between properties that are intrinsic to the polymerization process as it occurs in vitro, vs. interactions with proteins that modulate actin dynamics in vivo. The challenge, as has been the case in better studied systems Pollard TD, 2000, will be to integrate both sets of findings into a cohesive model of actin regulation and function in apicomplexans.
The above statement reflects the joint opinions of: John Cooper (Washington University), Dave Sept (University of Michigan) and David Sibley (Washington University).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 05, thomas samaras commented:
This study supports the work of Lindeberg and Lundh some 25 years ago. They found no evidence of CHD or stroke among the natives of Kitava. They also reported no evidence of CHD and stroke in all of Papua New Guinea. Eaton et al. also reported the rarity or absence of CHD and stroke in the Solomon Islands and PNG, and among the Kalahari bushmen and Congo pygmies.
Lindeberg and Lundh, Apparent absence of stroke and ischaemic heart disease in a traditional Melanesian island: a clinical study in Kitava, Journal of Internal Medicine 1993; 233: 269-275.
Eaton, Konner, Shostak, Stone agers in the fast lane: chronic degenerative diseases in evolutionary perspective. The American Journal of Medicine, 1988; 84: 739-749.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jun 02, Nicholas J L Brown commented:
This article has been retracted: http://onlinelibrary.wiley.com/doi/10.1111/apt.14044/full
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 27, Janet Kern commented:
In the Endres et al. study above, they found a marginally significant trend toward decreased glutathione (GSH) signals between the two groups in the dorsolateral prefrontal cortex (p=0.076). It did not quite reach statistical significance. To achieve statistical significance, a study must have sufficient statistical power. Statistical power, or the power of a test to correctly reject the null hypothesis, is affected by the effect size, the sample size, and the alpha significance criterion. In their study, for the overall and single-group differences in neurometabolite signals, the level of significance was corrected for multiple tests using the Bonferroni approach (p < 0.025 due to performing the measurements in two independent regions). So, the alpha significance criterion was 0.025. Effect size is the magnitude of the sizes of associations or the sizes of differences. Typically, a small effect size is considered about 0.2; a medium effect size is considered about 0.5; and a large effect size is considered about 0.8. Assuming a two-tailed alpha of 0.025, a large effect size of 0.8, and a power of 0.8, the total number of subjects required would be 62 (or 31 in each group). The sample size of the Endres et al. study was 24 ASD patients and 18 matched control subjects. Was there truly no statistically significant difference between the two groups, or was the study underpowered?
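As a quick check of the quoted sample-size figure (a sketch added here for illustration, not part of the original comment; the effect size, alpha, and power are the assumptions stated above), the calculation can be reproduced with statsmodels:

```python
# Minimal power-analysis sketch: required sample size per group for an independent-samples
# t-test under the assumptions stated in the comment above.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,           # large effect (Cohen's d)
    alpha=0.025,               # Bonferroni-corrected significance level, two-sided
    power=0.8,                 # desired power
    ratio=1.0,                 # equal group sizes
    alternative='two-sided',
)
print(round(n_per_group))      # ~31 per group, i.e. ~62 subjects in total
```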
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Dec 03, Philippe Katz commented:
Very interesting article, will try your treatment.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Sep 07, Noel de Miranda commented:
Interesting report that confirms the works of Kloor et al. 2005 Cancer Res and Dierssen et al. 2007 BMC Cancer which are not cited in the current work.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 22, Vojtech Huser commented:
This is a nice example of semantic integration. Inclusion of these CDEs in the common NIH portal for CDEs would be an added bonus.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 10, Helge Knüttel commented:
Unfortunately, the search strategy for this systematic review was not published by the journal. It may be found here.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 20, Atanas G. Atanasov commented:
Very good overview on MSM, I have featured this manuscript at: http://healthandscienceportal.blogspot.com/2017/03/the-novel-dietary-supplement.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jun 04, Steve Alexander commented:
The title describes plasma, as do the findings. But the methods section talks about serum, although it describes EDTA collection. I'm assuming it's plasma throughout, but it is confusing.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 22, Paola Pizzo commented:
We thank the Authors for their Reply to our Letter. However, given the importance of this topic, we have to add an additional comment to further clarify some criticisms. First, the Authors reason that the average mitochondrial surface in contact with ER can be extracted from the data presented in tables S1, S2 and S3 of their paper (Naon D, 2016). However, these data do not have the same relevance as presenting the total number of ER-mitochondria contacts in the two situations. Indeed, the percentage of mitochondrial surface in contact with ER substantially varies whenever the analysis is restricted only to mitochondria that display contacts with the ER, or also includes contact-deprived mitochondria. In their analysis, the Authors considered only those mitochondria that are engaged in the interaction with the ER (and not the total mitochondrial population), as they stated in the Results section (“we devised an ER-mitochondria contact coefficient (ERMICC) that computes [....] the perimeter of the mitochondria involved in the interaction”). This approach could be misleading: considering that in Mfn2-depleted cells a higher percentage of mitochondria endowed with contacts has been found (Filadi R, 2016), the fraction of contact-deprived mitochondria should be taken into account to calculate the real average OMM juxtaposition surface. Second, the Authors argue that the fluorescent organelle proximity probes they used (both ddGFP and the FRET-based FEMP probe) do not artificially juxtapose organelles: we did not claim this in our Letter, and we apologize if, for space limitations, this was not clear enough. Nevertheless, as to the FEMP probe, its propensity to artificially force ER-mitochondria juxtaposition, already after a few minutes of rapamycin treatment, has been clearly shown by EM analysis in the original paper describing this tool (Csordás G, 2010). Additionally, it is worth mentioning that comparison of FRET values between different conditions is possible only when the dynamic range (i.e., the difference between minimal and maximal FRET values) of a given FRET probe is similar in these different conditions. The new data provided by the Authors in their Reply (Tables 1 and 2) show that the average rapamycin-induced FRET maximal values are dramatically different between wt and Mfn2-depleted cells. Thus, we believe that at least some caution should be adopted before claiming that this probe is a reliable tool for the comparison of ER-mitochondria tethering in such different conditions. Recently, we suggested how the fragmented/altered mitochondrial and ER morphology, present in Mfn2-depleted cells, may impair the rapamycin-induced assembly of this probe (Filadi R, 2017), thus severely complicating the interpretation of any result. Regarding the other fluorescent probe (the ddGFP) used by Naon et al. (Naon D, 2016), it is unclear why only ~10 % of the transfected wt/control cells (mt-RFP positive cells; Fig. 1G and 2E) are positive for the ddGFP signal (claimed as an organelle-tethering indicator). Should not ER-mitochondria juxtaposition be a feature of every cell? Concerning the Ca<sup>2+</sup> experiments, we are forced to discuss additional criticisms present in the Authors’ Reply. Our observation that, in Naon et al. (Naon D, 2016), mitochondrial Ca<sup>2+</sup> peaks in control cells (mt-YFP traces), presented in Fig. 3F, are ~ 100-fold higher than those in Fig. 3B was rejected by the Authors, because in Fig.
3F “Mfn2<sup>flx/flx</sup> cells were preincubated in Ca<sup>2+</sup> -free media to equalize cytosolic Ca<sup>2+</sup> peaks”. However, in our Letter, we clearly referred to control, mt-YFP expressing cells, and not to Cre-infected Mfn2<sup>flx/flx</sup> cells. Nevertheless, even if a “preincubation in a Ca<sup>2+</sup> -free media to equalize cytosolic Ca<sup>2+</sup> peaks” (i.e., a treatment that decreases the ER Ca<sup>2+</sup> content) was applied to both cell types, the prediction is that, in Fig. 3F, the ATP-induced mitochondrial Ca<sup>2+</sup> peaks would be lower for both Mfn2<sup>flx/flx</sup> and control (mt-YFP) cells, and not higher than those presented in Fig. 3B. Lastly, as members of a lab where mitochondrial Ca<sup>2+</sup> homeostasis has been studied over the last three decades, we have to point out here that, in our opinion, the reported values for ATP-induced mitochondrial [Ca<sup>2+</sup> ] peaks (i.e., 160 nM and 390 nM in Fig. 3B and 3C, respectively) are unusually low and can hardly be considered to be above the basal mitochondrial matrix [Ca<sup>2+</sup> ] (~ 100 nM). Furthermore, these low [Ca<sup>2+</sup> ] values cannot be reliably measured by the mitochondrial aequorin probe (Brini M, 2008) used by Naon et al. (Naon D, 2016). Finally, concerning the speed of Ca<sup>2+</sup> accumulation in isolated mitochondria, we clearly stated in our Letter that at 50 uM CaCl2 in the medium and no Mg<sup>2+</sup>, the rate of Ca<sup>2+</sup> accumulation is limited by the activity of the respiratory chain (Heaton GM, 1976), and thus does not offer any information on the MCU content. We did not refer to problems in respiratory chain activity in Mfn2-depleted cells, as interpreted by Naon et al. in their Reply. Overall, while we appreciate the attempt of the Authors to highlight some aspects of the controversy, we renew all the concerns we discussed in our Letter.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 19, Martine Crasnier-Mednansky commented:
The authors have previously reported that very little SgrT is made in E. coli as compared to Salmonella typhimurium, which led them to conclude that E. coli K12 has "lost the need for SgrT" (Wadler CS, 2009). Later on, the rationale for using S. typhimurium instead of E. coli for studying SgrT was reinforced (Balasubramanian D, 2013). In the present work, the authors use E. coli sgrS mutant strains overproducing SgrT. Therefore, the present work does not establish a 'physiological' role for SgrT in preventing the E. coli PTS-transport of glucose, thus the title of the article is misleading.
The authors’ interpretation of figure 1 does not agree with the following data. E. coli mutant strains lacking Enzyme IICB<sup>Glc</sup> (PtsG) do grow on glucose (see Curtis SJ, 1975; Table VIII in Stock JB, 1982). Mutant strains lacking both the glucose and mannose enzyme II grow very slowly on glucose. In other words, because growth had been observed on mannose, growth should have been observed on glucose. Furthermore, the authors should have been aware that an increased level of cAMP from overexpressing SgrT further impairs growth on glucose.
PtsG is not "comprised of three main functional domains", as the authors state. PtsG has two functional domains (IIB and IIC) connected by a flexible linker. In the nomenclature for PTS proteins (Saier MH Jr, 1992), PtsG translates into Enzyme IICB<sup>Glc</sup>, which is informative (and therefore should be preferred to any other designations) because it indicates a two-domain structure, a specificity for glucose, and the order of the domains (from N to C terminus).
Kosfeld A, 2012 clearly established, by cross-linking experiments, the interaction between SgrT and Enzyme IICB<sup>Glc</sup> in the presence of glucose. They also visualized the recruitment of SgrT to the membrane by in vivo fluorescence microscopy. It is therefore unwarranted for the authors to 'hypothesize' an interaction and localization to the membrane, and to state: "Once we established that SgrT inhibits PtsG specifically and its localization to the membrane …". In addition, the demonstration by Kosfeld A, 2012 that the motif KTPGRED (in the flexible linker) is the main target for SgrT is rather convincing.
Finally, the statement "SgrT-mediated relief of inducer exclusion may allow cells experiencing glucose-phosphate stress to utilize alternative carbon sources" is inaccurate because it ignores the positive effect of cAMP on the utilization of alternative carbon sources like lactose.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 28, Thomas Perls, MD, MPH commented:
Some studies cite this very compelling study as evidence against the Compression of Morbidity hypothesis. This study observes progressively higher prevalence rates of morbidity and disability with increasing age among octogenarians, nonagenarians and centenarians. However, they were unable to determine when in their lives these individuals developed these problems, and therefore the work does not describe any differences in compression of disability or morbidity. One of the virtues of becoming a centenarian is the likelihood of compressing the time that you experience disability towards the end of your life. Surviving to ages that approach the human lifespan (e.g. >105 years) likely also entails compressing morbidity Andersen SL, 2012.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 19, Mauro Podda commented:
Dear Sir/Madam. Many thanks for showing your interest in our manuscript. We can guarantee that our systematic review with meta-analysis of RCTs comparing antibiotic therapy and surgery for uncomplicated acute appendicitis has been performed in accordance with the instructions provided by the Cochrane Handbook for Systematic Reviews of Interventions, and the PRISMA statement has been used for reporting research in the systematic review. Probably, this should have been better clarified in the text. With regard to the search keys, this is the strategy: (((((appendicitis) AND antibiotic treatment) OR conservative management) OR nonoperative management) OR nonoperative treatment) AND appendectomy
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 16, Michelle Fiander commented:
What is a Systematic Literature Search?
This review describes its literature search as "systematic" but provides no evidence to support the statement. Reproducible search strategies are not provided (per PRISMA item 8), but two sets of keywords are. I created a PubMed search strategy (copied below) by making an educated guess as to how the authors combined the list of terms provided and, in doing so, found over 3800 citations up to April 2016 (one month before the search date reported in the review). The review reports screening 938 citations, so it is unclear how the evidence for this review was identified.
The authors say the meta-analysis was "performed in accordance with the recommendations from the...PRISMA Statement." PRISMA does not recommend methodological approaches to data analysis; it describes the data authors should provide in a systematic review manuscript. For recommendations on how to analyze data in a systematic review, sources such as The Cochrane Handbook should be consulted.
There has been recent research on the poor quality of published systematic reviews; journal editors should engage with methodologists conversant with systematic review methodology to ensure the reviews they publish are rigorously reported.
PubMed: Search (antibiotic OR "nonoperative treatment" OR "conservative management" OR "nonoperative management" OR "medical treatment" OR appendectomy OR appendicectomy OR laparoscopy) AND ("acute appendicitis") AND Filters: Publication date to 2016/04/30
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 23, Wenqiang Yu commented:
My comments on this impressive paper mainly regard the relationship between enhancers and microRNAs. MicroRNAs are expressed in a tissue- and cell-type-specific manner, as are enhancers. Therefore, it would be intriguing to know whether miRNAs and enhancers may be intrinsically linked while regulating gene expression. Results of this paper are interesting because Suzuki HI et al found that enhancer regions overlap with miRNA genome loci and may play a role in shaping the tissue-specific gene expression pattern. These findings directly support our earlier results from a 5-year-long project that was finally published in RNA Biology at the beginning of 2016, “MicroRNAs Activate Gene Transcription Epigenetically as an Enhancer Trigger”. (http://www.tandfonline.com/doi/abs/10.1080/15476286.2015.1112487?journalCode=krnb20) In this paper, we not only found that many miRNA genome loci overlap with enhancer regions, but also identified a subset of miRNAs in the nucleus that function as universal and natural gene activators emanating from the enhancer loci, which we termed NamiRNA (Nuclear activating miRNA; although this specific term was not used in the paper). These miRNAs are associated with active enhancers characterized by distinct H3K27ac enrichment, p300/CBP binding and DNase I hypersensitivity. We also presented evidence that NamiRNA promotes genome-wide gene transcription through the binding and activating of its targeted enhancers. Thus, we anticipate that the NamiRNA-enhancer-mRNA activation network may be involved in cell behavior modulation during development and disease progression. Having said all that, we hope our results published in RNA Biology can be cited by this paper. Meanwhile, we want to emphasize the dual functionality of miRNAs that is supported by our results: they work as activators via enhancers in the nucleus and as traditional silencers in the cytoplasm. In light of this, more attention should be paid to research that clarifies the details of these NamiRNAs' functions. It is our belief that the miRNA-enhancer-gene activation network may be the intrinsic link between miRNA and enhancer when the two coordinate in regulating gene expression during cell fate transitions.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 04, Ralph Brinks commented:
One of the key methods used in this paper is not appropriate. Note the comment on this paper: Hoyer A, 2017
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 11, Lydia Maniatis commented:
Reading this article, one gets the impression that the authors don’t quite believe their own claims, or aren’t really sure about what they’re claiming. This is illustrated by the following statement (caps mine): “It could be that other factors often associated with perceptual inferences and top-down processing—scission, grouping, object formation, and so forth—could affect the filters OR ACTUALLY ARE THE FILTERS” (page 17).
Whereas up to this point the notion of “filters,” described as acting “early” in the visual process, has referred to a technical manipulation of the image based on sharpening or blurring luminance “edges” (a process which is unnecessarily and somewhat confusingly described in terms of removing “low or high spatial frequency content,” even though the elements referred to are not repeating), we are now told that this manipulation - the “simple filter” - may be equivalent to processes of perceptual organization that produce de facto inferences about the distal stimulus, such as the process of figure-ground segregation (which is, of course, prior to “object formation”). This is quite a surprise – in this case we perhaps could refer to, and more explicitly address, these organizing processes - assuming the authors decide definitively that this is what they mean, or unless the term “filter” is only intended to mean “whatever processes are responsible for certain products of perception.”
With respect to the specific story being told: As with all "spatial filtering" accounts to date, it is acknowledged to be ad hoc, and to apply to a small, arbitrary set of cases (those for which it has been found to "work"), which means that it is falsified by those cases which it cannot explain. The ad hoc-ness is incorporated into the hypothesis: “The results support the hypothesis that, under some conditions, high spatial frequency content remains invariant to changes in illuminant” (p. 17). Which conditions are being referred to is not specified, begging the question of the underlying rationale. The authors continue to say that “Of course, this hypothesis may not be true for complex scenes with multiple illuminants or large amounts of interreflection.” Arguably, all natural scenes are effectively under multiple illuminants due to shadows created by obstructions and orientations relative to light sources.
In fact, as with all filtering accounts to date, the account doesn’t even adequately explain the cases it is supposed to explain. The reason for this is that it doesn’t address the “double layers” present in perception with respect to illumination. It isn't fair to say that when a perceived surface is covered by a perceived shadow, we are discarding the illumination; the shadow is part of the percept. So to the extent that Dixon and Shapiro’s manipulation describes the perceptual product as containing only perceived surface lightness/color values but not illumination values, it is not representative of the percept corresponding to the image being manipulated.
Relatedly, Dixon and Shapiro don’t seem to understand the nature of the problem they are addressing. They say that: “Most explanations of the dress assume that a central task of color perception is to infer the reflectance of surface material by way of discounting the illumination falling on the object” (p. 14). This may be accurate with respect to “most explanations” but, again, such explanations are inapt. As I have noted in connection to one such explanation (https://pubpeer.com/publications/17A22CF96405DA0181E677D42CC49E), attributing perceived surface color to perceived illumination is equivalent to attributing perceived illumination color/quality to perceived surface color. They are simultaneous and correlated inferences, and treating one as a cause of the other is like treating the height of one side of a seesaw as the cause of the height of the other side. You need to explain both what you see and what you saw.
The confusion is similarly illustrated in Dixon and Shapiro's description of Purves’ cube demo, described as “an iconic image for illustrating the effect of illumination on color appearance…” (p. 3). But the demo is actually not illuminated if on a computer screen, and if observed on a page, is typically observed under ordinary lighting. Again, both the color of the surfaces and the color of the illumination are inferred on the basis of the chromatic structure of the unitary image (of its retinal projection); and both are effects, not causes, and both are represented in perception. I see the fabric of the dress as white and gold, and the illumination as bluish shadow. Neither is “discounted” in the sense that Dixon and Shapiro seem to be claiming.
With respect to the problem of the dress specifically, none of these explanations address why it is interpreted one way by some, another way by others. The ambiguity of light/surface applies to all images, so general explanations in terms of illumination/surface color estimation don't differentiate cases in which there is agreement from those rarer ones for which there is disagreement. The reference to one perceptual outcome of the dress as indicating “poor color constancy” or “good color constancy” is inapt as the images do not differ in illumination, or surface color, but only in interpretation.
As I've also noted previously, the proof of understanding the dress is to be able to construct other images with similar properties. So far they've only been found by chance.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 29, Natalie Clairoux commented:
Free version of this article available after December 31, 2018 at https://papyrus.bib.umontreal.ca/xmlui/handle/1866/19671
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 18, Seán Turner commented:
The genus name Eggerthella is not novel: Eggerthella Wade et al. 1999 (PubMed Id 10319481). In the title, "gen. nov." should be replaced by "sp. nov."
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 18, Seán Turner commented:
The authors refer to the type strain variably as "Marseille P-3114", "Marseille-P3114", and "Marseille-P-3114" in the manuscript. Both "Marseille-P3114" and "Marseille-P-3114" are used in the protologue of the novel species Metaprevotella massiliensis.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 17, Seán Turner commented:
Contrary to the title, Pseudoflavonifractor is not a novel genus (Pseudoflavonifractor Carlier et al. 2010; PubMed Id 19654357).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 10, Melissa Vaught commented:
Prior randomized controlled trials have examined the impact of organizational social media promotion (i.e., using a journal’s official social media accounts) on article views (Fox CS, 2015, Fox CS, 2016, Adams CE, 2016) or on article downloads and citations (Tonia T, 2016). With the exception of Adams CE, 2016, no significant effect of social media posting on page views or downloads has been observed. However, a key question has remained: Might sharing by individuals have an effect where publisher promotion has not?
The trial reported here attempts to address this question directly. The authors enlisted members and trainees on the journal’s editorial board to share links to articles from their personal social media accounts (enhanced Twitter intervention). Outcomes were compared to control and to sharing via the journal’s official Twitter account (basic Twitter intervention). Selected publications were between 2 months and more than 2 years old at the time of intervention.
Similar to Fox CS, 2015, Fox CS, 2016, & Tonia T, 2016, posts by the @JACRJournal account did not increase article views (though removing a ‘most read’ outlier that had been randomized to the control group changed this conclusion). As summarized in the abstract, weekly page views were higher in the enhanced intervention than in control and basic groups. In fact, the enhanced group outperformed the basic group in all 4 primary and secondary endpoints. Authors found no significant effect of publication age.
The authors note that the difference between enhanced and basic groups may derive from multiple vs. single posting of a link. However, the difference in effects is not proportional to the number of posts, and as the authors note, Fox CS, 2016 used high-frequency social media posting to no avail. In addition, the JACR authors observed that 1 team had a much larger effect on page views than the other 3, and the effect did not track with follower count.
I would first note that some limitations in the methods and/or reporting might influence interpretation of these comparisons. Methods state that team members were assigned 1 article to tweet per day, and they were to only post about each article once. However, there is no indication that participants’ accounts were reviewed to check adherence to the instructions, in particular whether all 4 team members posted the assigned article with a functioning link on the designated day. It was unclear to me whether team members were sent a link the day they were assigned to post it, or whether these might have been provided in batches with instruction to tweet on the assigned day. The article also does not discuss how teams were assigned and whether members knew who their teammates were. Finally, although the team effect did not correlate with follower number, it would have been useful to know the number of followers for @JACRJournal at the start of the intervention, for comparison.
Nonetheless, the outsized effect on outcomes for 1 team is interesting. Though largely beyond the scope of this article, additional analytics could provide the basis for some interesting exploratory analysis and might be worth consideration in future studies of this type. At the Twitter account level, the authors reported the number of followers for each team member, but the age and general activity of the account during the intervention period could be relevant. Follower overlap between team members (or more refined user network analysis, as suggested by the authors) might also be informative.
It might also have been useful to gather tweet-level analytics from team members, to identify high-engagement tweets (e.g., based on URL clicks and/or replies). This could determine whether team performance was driven by a single member, particular publications/topics, or discussion about a publication. I liked that team members composed their own tweets about articles, so that the tweets in the intervention had a chance of a congruent “voice”/style. Pairing tweet-level analytics with content analysis—even something as simple as whether a hashtag or an author's Twitter handle was included—could offer some insight.
Overall, I appreciate the authors' efforts to untangle questions about how organizational and individual social media promotion might differentially influence viewing (and perhaps reading) of scholarly publications.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 26, Atanas G. Atanasov commented:
Excellent review on a very interesting topic. I have featured it at: http://healthandscienceportal.blogspot.com/2017/03/cancer-acidity-and-its-potential-as.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 17, Atanas G. Atanasov commented:
Fascinating work… I have featured this study at: http://healthandscienceportal.blogspot.com/2017/03/stress-induced-despair-behavior-and.html
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 14, Gabriel Lima-Oliveira commented:
Please access full article free of charge
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 03, Lily Chu commented:
I have written a comment, published in the Annals of Internal Medicine online comments section linked to this article, which can be accessed here:
I asked whether the authors had considered subgrouping subjects by infectious/inflammatory symptoms and comparing their responses to treatment, and I mentioned two trials using another cytokine inhibitor (of TNF-alpha), etanercept, in the treatment of ME/CFS. Materials related to those trials can be accessed here:
- Vallings R. A report from the 5th International AACFS Conference. Available at: http://phoenixrising.me/conferences-2/a-report-from-the-fifth-international-aacfs-conference-by-dr-rosamund-vallings.
- Fluge, O. Tumor necrosis factor-alpha inhibition in chronic fatigue syndrome. Available at: https://clinicaltrials.gov/ct2/show/NCT01730495.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 11, Andrew Kewley commented:
I thank the authors for conducting this study and note that such studies are of value even if the outcome is a null result.
While I agree with the overall conclusion that subcutaneous anakinra is ineffective, I carefully note that the published manuscript contains an error.
In the abstract of the article, it states: "At 4 weeks, 8% (2 of 25) of anakinra recipients and 20% (5 of 25) of placebo recipients reached a fatigue level within the range reported by healthy persons."
The closest reference in the body of the manuscript is the following: "In the anakinra group, 2 patients (8%) were no longer severely fatigued after the intervention period (reflected by a CIS-fatigue score <35 [47]), compared with 5 patients (20%) in the placebo group (difference, -12.0 percentage points [CI, -31.8 to 7.8 percentage points]; P = 0.22)."
Where reference [47] was: 47. Wiborg JF, van Bussel J, van Dijk A, Bleijenberg G, Knoop H. Randomised controlled trial of cognitive behaviour therapy delivered in groups of patients with chronic fatigue syndrome. Psychother Psychosom. 2015;84:368-76. [PMID: 26402868] doi:10.1159/000438867
However, the claims made in the abstract refer to healthy ranges, which is not the same as the absence of "severe fatigue" as operationalised by a CIS-fatigue score below 35.
The healthy ranges are instead provided by another study which has also been cited: 41. Vercoulen JH, Alberst M, Bleijenberg G. The Checklist Individual Strength (CIS). Gedragstherapie. 1999;32:131-6.
That study found that a group of 53 healthy controls (mean age 37.1, SD 11.5) had a mean CIS-fatigue score of 17.3 (SD 10.1). This would provide a cut-off for the "healthy range" of ~27. The manuscript of the present RCT does not provide the results of how many patients met this cut-off score.
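For context, the ~27 figure appears to be simply the healthy-control mean plus one standard deviation (this is a reconstruction of the arithmetic, not a value stated explicitly in the cited paper):

\[ \text{cut-off} \approx \bar{x} + \mathrm{SD} = 17.3 + 10.1 = 27.4 \approx 27. \]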
Also of note, a study co-authored by one of the authors of the present study utilised a threshold for a "level of fatigue comparable to healthy people" of less than or equal to 27.
See: Knoop H, Bleijenberg G, Gielissen MFM, van der Meer JWM, White PD: Is a full recovery possible after cognitive behavioural therapy for chronic fatigue syndrome? Psychother Psychosom 2007; 76: 171–176.
Therefore the claim made in the abstract of patients reaching "a fatigue level within the range reported by healthy persons" is not based on evidence provided in the manuscript, or is simply incorrect. I ask the authors to provide the results of how many patients in both groups met the criteria of having a CIS-fatigue score of less than 27.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 08, Lydia Maniatis commented:
The authors discuss “face signals” and “face aftereffects” and “face-selective neurons,” but these terms have no conceptual backing. The underlying basis of the “aftereffects” that have been reported using faces is not known; we could argue (always prematurely) that adaptation to particular contour patterns affects the organization of these contours into faces. To give a parallel, the case of color aftereffects doesn’t license us to posit adaptation of “color-selective” neurons, even though the perception of color is a “high-level” process; we know that they are the perceptual consequences of pre-perceptual, cone-level activity.
The idea of face-selective neurons is highly problematic, as are all claims that visual neurons act as “detectors.” (They are essentially homuncular, among other problems.) What, exactly, is being detected? If we draw any shape and stick two dots on it along the horizontal, it becomes a face. The extent to which references to visual neural processes in this paper are premature cannot be overstated.
With respect to functional explanations, if we don't know the reason for the effects – again, references to “face-selective neurons” are hopelessly vague and have serious logical problems – then it's premature to speculate about whether there is a functional explanation and what it might be.
All of this seems almost moot given the comments in the discussion that “Our results seemingly contradict those of Kiani et al. (2014), who found that identity aftereffects did decay with time (and that this decay was accelerated when another face was presented between adaptor and test).” The authors don’t know why this is, but speculate that “it is likely that the drastically different temporal parameters between the two studies contributed to the different findings… It is plausible that this extended adaptation procedure would have induced larger and more persistent aftereffects than Kiani et al.'s procedure…. It is plausible that aftereffects resulting from short-term fatigue might recover rapidly with time, whereas aftereffects resulting from more structural changes might be more long-lived and require exposure to an unbiased diet of gaze directions to reset.”
Given that the authors evidently didn’t expect their results to contradict those being discussed, these casual post hoc speculations are not informative. “Plausibility” is not a very high bar, and it is rather subjective. What is clear is that the authors don’t understand (were not able to predict and control for) how variations in their parameters affect the phenomenon of interest.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 19, Paul Grossman commented:
Unfortunately, the authors of this paper so far refuse to discuss key, and most likely severely flawed, assumptions of their "recommendations" paper in any open forum available to scientists (here, on ResearchGate, or on PubMed Commons; see their reply to my and others' comments on ResearchGate, https://www.researchgate.net/publication/313849201_Heart_Rate_Variability_and_Cardiac_Vagal_Tone_in_Psychophysiological_Research_Recommendations_for_Experiment_Planning_Data_Analysis_and_Data_Reporting )
I have been active in this field for over 30 years. Vagal tone is not "variability in heart rate between inhalation and exhalation"; the latter is termed respiratory sinus arrhythmia (RSA, or high-frequency heart-rate variability, HRV) and under very specific conditions may sometimes partially reflect, or be a marker of, cardiac vagal tone. Cardiac vagal tone, on the other hand, is defined as the magnitude of mean heart rate change from one condition to another (e.g. rest to different levels of physical exertion, or to pharmacological blockade of parasympathetic control) that is a specific consequence of parasympathetic effects. Obviously the two phenomena are not equivalent: respiratory sinus arrhythmia is an inherently phasic (not tonic) phenomenon (heart rate shifting rhythmically from inspiration to expiration), whereas cardiac vagal tone characterizes the average effect of vagal influences upon heart rate during a particular period of time. Changes in breathing frequency can have dramatic effects upon the magnitude of RSA without any effects upon cardiac vagal tone. There are also other conditions in which the two phenomena do not change proportionally to each other: e.g. sometimes when sympathetic activity substantially changes; or when efferent vagal traffic to the heart is blocked by chemicals before it can reach the sinoatrial node; or probably when vagal discharge is so great that it saturates the sinoatrial node, leading to profound slowing of heart rate during both inspiration and expiration. These effects are shown rather clearly in the autonomic cardiovascular physiology literature but fail to be acknowledged in many psychological and psychophysiological publications. Thus it is plainly wrong to believe that RSA is vagal tone. There is really so much evidence that is often systematically ignored, particularly by psychologists working in the field.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 13, Paul Grossman commented:
The issue of the influence of respiration (breathing rate and volume) confounding heart-rate variability (HRV) as an index of within-individual changes of cardiac vagal tone remains inadequately covered in this review. My colleagues' and my 1991 and 1993 papers (Grossman, Karemaker & Wieling, 1991; Grossman & Kollai, 1993), using pharmacological blockade controls, rather conclusively show that respiratory sinus arrhythmia (RSA, or high-frequency HRV) under spontaneously varying rates and/or depths of breathing does not provide an accurate reflection of quantitative within-individual variations in cardiac vagal tone. Our results are also clearly far from the only findings demonstrating this fact (see also the literature of Saul and Berger, as well as others). Yet this rich resource of findings is neither cited nor addressed in the paper. I would be curious why. The research clearly and consistently shows that when a person's heart rate changes from one condition to another are completely vagally mediated (documenting changes in cardiac vagal tone), changes in RSA WILL NOT ACCURATELY REFLECT those variations in cardiac vagal tone whenever breathing parameters substantially change as well: the alterations in RSA amplitude will be much more closely correlated with respiratory pattern changes, but may not at all reflect vagal tone alterations! The proper method to correct for this issue is, however, another question. The crucial point is that this confound must no longer be swept under the carpet. I welcome any dialogue about this from the authors or others.
A bit simpler explanation: we and others (e.g. Grossman, Karemaker & Wieling, 1991; Grossman & Kollai, 1993; Eckberg, various publications; JP Saul, various publications; R Berger, various publications) have consistently shown that changes in breathing rate and volume can easily and dramatically alter heart-rate variability (HRV) indices of cardiac vagal tone, without actual corresponding changes in cardiac vagal tone occurring! This point is not at all considered in this or many other HRV papers. Any paper purporting to provide standards in this area must deal with this issue. If the reader of this comment is a typically young, healthy person, this point can easily be documented by noting your pulse rate as you voluntarily or spontaneously alter your breathing frequency substantially: slow breathing will bring about an often perceptibly more irregular pulse over the respiratory cycle than fast breathing, but there will be little to no change in average heart rate over that time (which would almost certainly have to occur for such dramatic, perceptible changes in HRV to reflect cardiac vagal tone: heart rate should slow as vagal tone increases, and should speed as vagal tone decreases, provided there are no sympathetic shifts in activity--extremely unlikely in this little experiment!).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 07, Paul Brookes commented:
This is an interesting study, but I feel it's a little too close to a study published by my own lab last year... http://www.jbc.org/content/291/38/20188.abstract
Specifically, several of the findings in this new paper are repeats of findings in our paper (such as the pH sensitive nature of 2HG generation by LDH and MDH, and the discovery that 2HG underlies acid induction of HIF). While it's always great to have your work validated, it's also nice to be CITED when that happens.
Our study was posted on BioRXiv on May 3rd 2016 (http://biorxiv.org/content/early/2016/05/03/051599), 6 weeks before this paper was submitted to Nature Chem Biol. At that time we also submitted to a journal where the senior author here is on the editorial board, only to be rejected. Our paper was finally published by JBC in August, >4 months before the revised version of this current study was submitted.
Furthermore, during the revision of our work, another paper came out from the lab of Josh Rabinowitz, showing 2HG generation by LDH and also demonstrating pH sensitivity (Teng X, 2016). We put an addendum in our JBC paper to specifically mention this work as a "note added in proof". The current paper doesn't cite the Rabinowitz study either.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Aug 15, Varshil Mehta commented:
Thank you very much for this article. Great work.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 07, Seán Turner commented:
Strain MCCC 1A03042 is not a type strain of Thalassospira xiamenensis. According to Lai and Shao (2012) [PubMed ID 23209216], strain MCCC 1A00209 is the type.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 16, Sebastien Delpeut commented:
One sentence was accidentally omitted from the acknowledgments.
We thank all laboratory members for continuing support and constructive discussion and Angelita Alcos for technical support.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 05, Lydia Maniatis commented:
“The present findings demonstrate that it is difficult to tease apart low-level (e.g., contrast) and midlevel (e.g., transparency) contributions to lightness phenomena in simple displays… Dissociating midlevel transparency explanations from low-level contrast explanations of the crispening effect will always be problematic, as by definition information is processed by “low-level” mechanisms before higher visual processing areas responsible for the midlevel segmentation of surfaces.”
As the above passage indicates, the authors of this article are endorsing the (untenable but common) notion that, within a visual percept, some features are reflections of “low-level” processes, i.e. activities of neurons at anatomical levels nearer to the retinal starting point, while other features are reflections of the activities of “mid-level” neurons, later in the anatomical pathway. Still others, presumably, are reflections of the activities of “high-level” neurons. Thus, when we observe that a grey square on a dark grey background appears lighter than the same grey square on light grey background, this is the result of “low-level” firing patterns, while if we perceive a grey film overlying both squares and backgrounds (an effect we can achieve by simply making certain alterations in the wider configuration, leaving the "target" area untouched), this is a consequence of “mid-level” firing activity. And so on. Relatedly, the story goes, we can effectively observe and analyze neural processes at selected levels by examining selected elements of the percepts to which various stimuli give rise.
These assumptions are not based on any evidence or rational arguments; the arguments, in fact, are all against.
That such a view constitutes a gross and unwarranted oversimplification of an unimaginably complex system whose mechanics, and the relationships between those mechanics and perception, we are not even close to understanding, should be self-evident.
Even if this were not the case, the view is paradoxical. It’s paradoxical for many reasons, but I’ll focus on one here. We know that at any given location in the visual percept – any patch – what is perceived – with respect to any and all features – is contingent on the entire area of stimulation. That is, with respect to the percept, we are not dealing with a process of “and-sum.” This has been demonstrated ad infinitum.
But the invocation of “low-level” processes is simultaneously an invocation of “local” processes. So to say that the color of area “x” in this visual percept is the product of local process “y” is tantamount to saying that for some reason, the normal, organized feedback/feedforward response to the retinal stimulation stopped short at this low-level. But when and how does the system decide when to stop processing at the lower-level? Wouldn't some process higher up, with a global perspective, need to ok this shutting down of the more global process (to be sure, for example, that a more extended view doesn’t imply transparency)? And if so, would we still be justified in attributing the feature to a low-level process?
In addition, the “mid-level segmentation of surfaces” has strong effects on perceived lightness; are these supposed to be added to the “low-level contrast effects” (with the "low-level" info simultaneously underpinning the "mid-level" activity)? A rationale is desperately needed.
Arbitrarily interpreting the visual percept in terms of piecemeal processes for one feature and semi-global processes for another and entirely global processes for a third, and some or all at the same time, is not a coherent position.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 07, Lydia Maniatis commented:
We can have an interesting conversation right here. Unless you clarify your assumptions and properly control your variables and sample sizes, further research efforts will be wasted. Every difference in conscious experience is inevitably correlated with differences in physiology. The trick is in how you interpret the inevitable variations in the latter. (At this point in our understanding of brain function, I submit that such efforts are extremely premature.) In your case, you don't even seem to know what experience you are trying to correlate with brain activity.
I believe that your brief stimulus duration and the fixation on the red spots may have biased the perception of figure toward the corresponding surfaces in older viewers. You clearly don't have an alternative hypothesis that doesn't suffer from serious logical problems (as I noted in an earlier comment).
Peer reviewers are obviously not infallible, which is why this forum exists, and invoking them doesn't count as a counterargument.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 06, Jordan W Lass commented:
There are many interesting followups to our work indeed, some of which I believe merit further study. You have begun to identify some of these, and it seems to me there is the possibility for a constructive conversation to be had here. I highly encourage you to stop by our upcoming poster at the Vision Sciences Society conference in Florida in May, where we extend this work by exploring electrophysiological correlates of performance on this task in various conditions in both age groups. Our research group would be happy to discuss the issues you are taking with our work, as well as potentially-fruitful followups that can further address the questions we have raised in this work and that you have touched in some of the above comments.
I believe that, especially due to the presentation of this work at a number of conferences where I was challenged by experts in the field who helped me formulate and refine the ideas presented, as well as the rigorous peer-review editorial process leading to the publication of this work in a high-quality journal, the rationale of our hypothesis and the interpretation of our results are clearly laid out in the paper.
Thank you again for your keen interest in this work.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 06, Jordan W Lass commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 05, Lydia Maniatis commented:
You say that what you mean by “reduced ability to resolve figure-ground competition…is an open question.” But the language is clear, and regardless of whether concave or convex regions are seen as figure, the image is still being resolved into figure and ground. In other words, your experiments in no sense provide evidence that older people are not resolving images into figure and ground, only that convexity may not be as dispositive a factor as in younger people. Perhaps they are influenced more by the location of the red dot, as I believe that it is more likely that fixated regions will be seen as figure, all other things being equal.
In your response you specify that "failure to resolve" may be interpreted in the sense of “decreased stability of the dominant percept and increased flipping.” However, in your discussion you note that, on the contrary, other researchers have found increased stability of the initial percept and difficulty in reversing ambiguous stimuli in older adults. If your inhibition explanation is consistent with BOTH increased flipping and greater stability, then it's clearly too flexible to be testable. And, again, increased flipping rate is not really the same thing as “inability to resolve.”
The second alternative you propose is that stimuli are "not perceived to have figure-ground character, perhaps being perceived as flat patterns." This is obviously also in conflict with the other studies cited above. If the areas are perceived as adjacent rather than as having a figure-ground relationship, this also involves perceptual organization. For normal viewers, such a percept – e.g. simultaneously seeing both the faces and the vase in the Rubin vase – is very difficult, so it is hard to imagine it occurring in older viewers, but who knows. If such an idea is testable, then you should test it.
You say the logic of your hypothesis is sound and your interpretations parsimonious, but in fact it isn't clear what your hypothesis is (what "failure to resolve" means). If your results are replicable, you may have demonstrated that, under the conditions of your experiment, a convex region is a less dispositive factor in older adults. But in no sense have you properly formed or tested any explanatory hypotheses as to why this occurred.
In addition, I don’t think its fair to say that you’ve excluded the possible effect of the brevity of the stimulus. 250ms is still pretty short, considering that saccades typically take about 200ms to initiate. We know that older people generally respond more slowly at any task. The fact that practical considerations make it hard to work with longer exposure times doesn’t make this less of a problem.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 04, Jordan W Lass commented:
The interpretation that there are “differences in the ability to resolve the competition between alternative figure-ground interpretations of those stimuli” comes from the combination of results across experiments, and from the literature on figure-ground and convexity context effects in particular. Given that we used a two-alternative forced-choice paradigm, which has been commonly used to measure perception even when stimuli are presented below threshold, chance performance is P(convex=figure) = .5. Our observation was regression to chance in the older group in both convexity bias and CCEs, which is consistent with the interpretation that the older group showed reduced ability to resolve figure-ground competition. Interestingly, as you may be getting at, what "reduced ability to resolve figure-ground" means is an open question: could it be decreased stability of the dominant percept and increased flipping between percepts, or more time spent in transition states? Could it be that the stimuli are not perceived to have figure-ground character, perhaps being perceived as flat patterns? These are interesting questions indeed, which your idea of adding another response option ("no figure-ground observed") is one way of addressing, although it comes with its own set of limitations.
Alternatively, as you propose, it may be the case that the older adults are resolving equally well as younger adults, but with increased tendency of perceiving concave figures compared to the younger group, which would also bring P(convex=figure) closer to .5. However, I can think of no literature or reasoning as to why that would be the case, so I see that as a less parsimonious interpretation. I am intrigued though, and if you are able to develop a hypothesis as to why this would be the case, it could make for an interesting experiment that might shed light into the nature of figure-ground organization in healthy aging.
Critically, the results of Experiment 4 showed a strong CCE in older adults when only concave regions were homogeneously coloured, which is a stimulus class that has been shown to be processed more quickly in younger adults (e.g., Salvagio and Peterson, 2012). Since no conCAVity-context effects were observed when only convex regions were homogeneously coloured (the opposite stimulus properties of the reduced-competition stimuli), the Experiment 4 results are strongly supportive of the notion that older adults do show the CCE pattern well characterized in younger adults, but that the high-competition stimuli used in Experiment 1 are particularly difficult for them to resolve.
The logic of our hypothesis is sound, and our interpretation is the most parsimonious we are aware of based on all the results. Thank you for your question, I would be happy to discuss further if you would like further clarification, or are interested in discussing some of these interesting followups.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 04, Lydia Maniatis commented:
If I’m reading this paper correctly, there’s a problem with the logic of the argument.
The finding is that older people are less likely than young people to see convex regions of a stimulus as figure. The authors say that this implies age “differences in the ability to resolve the competition between alternative figure-ground interpretations of those stimuli.”
However, the question they are asking in Exp’t1 is whether a red spot is seen as “on or off the region perceived as figure.” This implies that in every case, one of two border regions is seen as figure; at least, the authors don’t suggest that older people saw neither region as figure - and the question doesn’t allow for this possibility. So my question is, why doesn’t seeing the concave region as figure count as a resolution, inhibitory-competition-wise?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 04, Lydia Maniatis commented:
What Ratnasingam and Anderson are doing here is analogous to this imaginary example: Let’s say that I have a strong allergy to food x, a milder one to food y, none to food z, and so on, and that my allergies produce various symptoms. Let’s assume also that some of these effects can be interpreted fairly straightforwardly in terms of formal structural relationships between my immune system and the molecular components of the foods, and others not. For these others, we can assume either a functional rationale or perhaps consider them a side effect of structure or function. We don’t know yet. For other individuals, other allergy/food combinations have corresponding effects. Again, if we know something about the individual we can predict some of the allergic reactions based on known principles.
How much sense would it make now, to conduct a study whose goal is: “to articulate general principles that can predict when the size of an allergic reaction will be large or small for arbitrarily chosen food/patient combinations…. What (single) target food generates the greatest allergic difference when ingested by two arbitrarily chosen patients?” (“Our goal is to articulate general principles that can predict when the size of induction will be large or small for arbitrarily chosen pairs of center-surround displays…. What (single) target color generates the greatest perceptual difference when placed on two arbitrarily chosen surround colors?”)
Furthermore, having gotten their results, our researchers now decline to attempt to interpret them in terms of the nuanced understanding already available.
The most striking thing about the present study is that a researcher who has done (unusually) good work in studying the role of structure and chromatic/lightness relationships in the perception of color is now throwing all this insight overboard, ignoring what is known about these factors and lumping them all together, in the hope of arriving at some magic, universal formula for “simultaneous contrast” that is blind to them. Obviously the effort is bound to fail, and the title – framed as a question, not an answer – is evidence of this. Here is a sample, revealing caveat:
“Finally, it should also be noted that although some of our comparisons involved target–surround combinations in which some targets can appear as both an increment and decrement relative to the two surrounds, which would induce differences in both hue and saturation (e.g., red and green). Such pairs may be rated as more dissimilar than two targets of the same hue (e.g., red and redder), but it could be argued that this does not imply that the size of simultaneous contrast is larger in these conditions. However, it should be noted that such conditions are only a small subset of those tested herein.” Don’t bother us with specifics, we’re lumping.
As the authors discuss in their introduction, studies (treating “simultaneous contrast” in a crude, structure-and-relationship-blind way) produce conflicting results: “The conflicting empirical findings make it difficult to articulate a general model that predicts when simultaneous contrast effects will be large or small, since there is currently no model that captures how the magnitude of induction varies independently of method used…. “ Of course. When you don’t take into account relevant principles, and control for relevant factors, your results will always mystify you.
The conflation of – or refusal to distinguish explicitly between – cases in which transparency arises and cases in which it does not is really inexplicable.
"The suggestion that the strongest forms of simultaneous contrast arise in conditions that induce the perception of transparency gains conceptual support from evidence showing that transparency can generate dramatic transformations in both perceived lightness and color..." But the contextual conditions that produce transparency are really quite...transparent...There's no clear reason to lump these with situations that are perceptually and logically distinct.
Also: "In simultaneous contrast displays, the targets and surrounds are also texturally continuous, in the sense that they are both uniform, but there are no strong geometric cues for the continuation of the surround through the target region of the kind known to give rise to vivid percepts of transparency (such as contours or textures). It is therefore difficult to generate a prediction for when transparency should be induced in homogeneous center-surround patterns, or how the induction of transparency should modulate the chromatic appearance of a target as a function of the chromatic difference between a target and its surround."
First, I'll pay him the compliment of saying that I don't think it would be that difficult for Anderson to generate predictions for when transparency should occur... (I think even I could do it). Second, if this theoretical gap really exists, then this is the problem that should be addressed, not "what happens if we test a lot of random combinations and average the results." It might be useful to take into consideration a demo devised by Soranzo, Galmonte & Agostini (2010), which is a case of a transparency effect that lacks the "cues" mentioned here - and thus by these authors' criteria qualifies as a basic simultaneous contrast display. (I don't think it's that difficult to explain, but maybe I haven't thought about it enough.)
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 09, Lydia Maniatis commented:
First, I apologize for the error vis a vis the open-loop experiment.
With respect to "very standard procedures:" vision science is riddled with references to "standard," "popular," "traditional," "common," and "widely used" procedures that have no theoretical rationale. "It is considered safe" is also not a rationale.
With respect to fitting, you calculate r-squared by fitting data in the context of very specific conditions whose selection is without a clear rationale, and thus it is very likely that conditions similar in principle but different in detail would yield different results. For example, you use Gabors, which are widely used but seem to be based on the idea that the visual process performs a Fourier analysis - and a local one at that - for which there are no arguments in favor.
Your findings don't warrant any substantial conclusion. You claim in the paper that: “we have shown that eye and hand movements made toward identical targets can end up in different locations, revealing that they are based on different spatial information.” Only the former claim is true.
Your prior discussion reveals that the conclusions couldn't be more speculative and go far beyond the data: “This difference between hand movements and saccades might reflect the different functional specificity of the two systems… One interpretation of the current results is that there are two distinct spatial maps, or spatial representations, of the visual world.”
Your arguments in favor of this explanation are peppered with casual assumptions: "...the priority for the saccade system might be to shift the visual axis toward the target as fast as possible with little cost for small foveating errors. If integrating past sensory signals with the current input increases processing time (Greenwald, Knill, & Saunders, 2005), the saccadic system might prefer to use current input and maximize the speed of the eye movement. For hand movements instead, a small error might make the hand miss its target with potentially large behavioral costs."
Might + might + might + might (eleven in all in the discussion) means that the effect that you report is far too limited in its implications to warrant the claim you make, quite unequivocally, in your title. Most of your "mights" are not currently testable, and almost certainly represent an overly simplistic view of a system of which we have only the crudest understanding at the neural level. The meaning of the term "sensory signals" also needs clarification, as it also implies a misunderstanding of the nature of perceptual processes.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 07, Matteo Lisi commented:
Thanks for the interest in our study.
I invite you to read the article more carefully; the experiment that you indicate as "post hoc" is actually the open-loop pointing experiment: the details of the method are described on pages 4-5, and the results are reported on page 7.
The truncation of response latencies is a very standard procedure, to prevent extreme spurious RTs (e.g. due to anticipation or attention lapses) from being included in the analysis. While there is no universal agreement on the ideal procedure for selecting cut-off criteria, it is considered safe to use extreme cut-offs that result in the exclusion of only a tiny fraction of trials (<0.5%); see for example the recommendations by Ulrich R, 1994, page 69.
I am confused by your comment regarding the "fitting". The statistical model used in the analysis is a standard multivariate linear regression, with the x, y location of the response as dependent variables. It doesn't require any particular assumption, other than the usual assumptions of all linear models (such as independence of errors, homoscedasticity, normally-distributed residuals), which were not violated in our dataset. These are the same assumptions required also by other common linear models such as simple linear regression and ANOVA.
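For readers less familiar with this kind of model, here is a minimal sketch (in Python, with simulated data and hypothetical predictor names; it is not the authors' code) of a multivariate linear regression in which the x, y landing position of each response is the bivariate dependent variable. Because both coordinates share the same design matrix, fitting each coordinate by ordinary least squares reproduces the coefficients of the joint multivariate fit.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: horizontal and vertical target position (degrees).
target_x = rng.uniform(-10, 10, n)
target_y = rng.uniform(-10, 10, n)
X = sm.add_constant(np.column_stack([target_x, target_y]))

# Simulated landing positions: target location plus Gaussian motor/measurement noise.
land_x = target_x + rng.normal(0, 1.0, n)
land_y = target_y + rng.normal(0, 1.0, n)

# Equation-by-equation OLS on a common design matrix gives the same coefficient
# estimates as the multivariate regression; r-squared is reported per coordinate.
fit_x = sm.OLS(land_x, X).fit()
fit_y = sm.OLS(land_y, X).fit()
print("x-coordinate:", fit_x.params.round(3), "R2 =", round(fit_x.rsquared, 3))
print("y-coordinate:", fit_y.params.round(3), "R2 =", round(fit_y.rsquared, 3))
```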
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 07, Lydia Maniatis commented:
The discussion of this paper refers to a post hoc experiment whose results were apparently used in the analysis of the reported results, but it gives details of neither methods nor results, nor any citation. This seems oddly casual:
"To test this hypothesis, we repeated the pointing task in a condition in which vision was blocked during the execution of the movement by means of shutter glasses (open-loop hand pointing), making it impossible to use visual feedback for online correction of the hand movement. The results of this experiment replicated those of the experiment with normal pointing; the only difference was a moderate increase in the variability of finger landing positions, which is reflected in the decreased r2 values of the model used to analyze pointing locations in the open-loop pointing condition with respect to the “normal” pointing condition (see also Figures 1D, E and 2)."
As is usual but hard to understand, an author made up a large proportion of the subjects (1/6), confusing the issue of whether naivete is or is not an important condition to control: "all [subjects] except the author were naïve to the specific purpose of the experiments."
And this:
"In the experiment involving saccades, we excluded trials with latency less than 100 ms or longer than 600 ms (0.36% of total trials); the average latency of the remaining trials was 279.89 ms (SD = 45.88 ms). In the experiment involving pointing, we excluded trials in which the total response time (i.e., the interval between the presentation of the target and the recording of a touch response on the tactile screen) was longer than 3 s (normal pointing: 0.45% of total trials; open-loop pointing: 0.26% of total trials). The average response time in the remaining trials was 1213.53 ms (SD = 351.61 ms) for the experiment with normal pointing and 1004.70 ms (SD = 209.83 ms) for the experiment with open-loop pointing."
It's not clear if this was a post hoc decision, or, whether planned or not, what was the rationale.
As usual, there was a lot of fitting, using assumptions whose rationale is also not clear.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 06, Lydia Maniatis commented:
The authors’ theoretical position does not seem coherent. They are making an unintelligible distinction between what they call “low-level” stimulus features – among which they list “brightness, contrast, color or spatial frequency” – on the one hand, and “high-level information such as depth cues.” The latter include “texture and shading.” But in an image, the latter are simply descriptions of perceptual effects of variations in luminance, etc. For example, in a black and white photo what we might refer to as “shading” is objectively changes in the luminance of the surface, and the reaction of our visual system to these variations. Similarly for texture. So when they say that “The perception of depth can have the effect of over-riding some of the salient 2-D cues,” one wonders whether they mean to suggest that the “perception of depth” is based on some kind of clairvoyance. And when they say that “The results lend support to a depth cue invariant mechanism for object inspection and gaze planning” they’re basically just saying “how we look at something depends on what it looks like.” And what it looks like depends on…
With respect to the division of perceptual features into “high-level” and “low-level,” this is also a theoretical non-starter, which I’ve discussed in various comments, including a recent one on Schmid and Anderson (2017), copied below.
The methods are for the most part pre-packaged, from various sources, and their theoretical underpinnings are questionable. The figure 2 example of a face based on texture just doesn't look like a face at all. We're told that it was generated using the method described by Liu et al (2005); I guess that will have to do… The use of forced choices is indefensible, resulting in the loss of information and the need to invent untestable "guess rates" and "lapse rates": "The guessing and the lapse rates were fixed to 0.25 and 0.001, respectively." The stimuli were very ambiguous, rendering the recognition task difficult, which necessitated certain post hoc measures to clean up the data. Basically we end up comparing a couple of arbitrary manipulations without any interpretable theoretical significance.
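For reference, the "guess rate" and "lapse rate" are the γ and λ parameters of the standard psychometric-function formulation assumed by such fits (the formula below is the textbook form, not a quotation from the paper):

\[ \psi(x) = \gamma + (1 - \gamma - \lambda)\,F(x), \]

where F is the underlying sigmoidal function, γ is the guess rate (fixed here at 0.25, consistent with a four-alternative forced choice) and λ the lapse rate (fixed at 0.001).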
From comment on Schmid and Anderson (2017) https://pubpeer.com/publications/8BCF47A7F782E357ECF987E5DBFC55#fb117951
“The present findings demonstrate that it is difficult to tease apart low-level (e.g., contrast) and midlevel (e.g., transparency) contributions to lightness phenomena in simple displays… Dissociating midlevel transparency explanations from low-level contrast explanations of the crispening effect will always be problematic, as by definition information is processed by “low-level” mechanisms before higher visual processing areas responsible for the midlevel segmentation of surfaces.”
As the above passage indicates, the authors of this article are endorsing the (untenable but common) notion that, within a visual percept, some features are reflections of “low-level” processes, i.e. activities of neurons at anatomical levels nearer to the retinal starting point, while other features are reflections of the activities of “mid-level” neurons, later in the anatomical pathway. Still others, presumably, are reflections of the activities of “high-level” neurons. Thus, when we observe that a grey square on a dark grey background appears lighter than the same grey square on light grey background, this is the result of “low-level” firing patterns, while if we perceive a grey film overlying both squares and backgrounds (an effect we can achieve by simply making certain alterations in the wider configuration, leaving the "target" area untouched), this is a consequence of “mid-level” firing activity. And so on. Relatedly, the story goes, we can effectively observe and analyze neural processes at selected levels by examining selected elements of the percepts to which various stimuli give rise.
These assumptions are not based on any evidence or rational arguments; the arguments, in fact, are all against.
That such a view constitutes a gross and unwarranted oversimplification of an unimaginably complex system whose mechanics, and the relationships between those mechanics and perception, we are not even close to understanding, should be self-evident.
Even if this were not the case, the view is paradoxical. It’s paradoxical for many reasons, but I’ll focus on one here. We know that at any given location in the visual percept – any patch – what is perceived – with respect to any and all features – is contingent on the entire area of stimulation. That is, with respect to the percept, we are not dealing with a process of “and-sum.” This has been demonstrated ad infinitum.
But the invocation of “low-level” processes is simultaneously an invocation of “local” processes. So to say that the color of area “x” in this visual percept is the product of local process “y” is tantamount to saying that for some reason, the normal, organized feedback/feedforward response to the retinal stimulation stopped short at this low-level. But when and how does the system decide when to stop processing at the lower-level? Wouldn't some process higher up, with a global perspective, need to ok this shutting down of the more global process (to be sure, for example, that a more extended view doesn’t imply transparency)? And if so, would we still be justified in attributing the feature to a low-level process?
In addition, the “mid-level segmentation of surfaces” has strong effects on perceived lightness; are these supposed to be added to the “low-level contrast effects” (with the "low-level" info simultaneously underpinning the "mid-level" activity)? A rationale is desperately needed.
Arbitrarily interpreting the visual percept in terms of piecemeal processes for one feature and semi-global processes for another and entirely global processes for a third, and some or all at the same time, is not a coherent position.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 03, M Mangan commented:
There's a strong response to this by the NASEM. Full version: http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=312017b&_ga=1.59874159.399318741.1481664833
"....The National Academies of Sciences, Engineering, and Medicine have a stringent, well-defined, and transparent conflict-of-interest policy, with which all members of this study committee complied. It is unfair and disingenuous for the authors of the PLOS article to apply their own perception of conflict of interest to our committee in place of our tested and trusted conflict-of-interest policies...."
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Aug 09, Norberto Chavez-Tapia commented:
Since the first descriptions of pioglitazone treatment in NAFLD we have been skeptical; unfortunately, at the present time we still lack pharmacological treatment options for this prevalent disease. Despite its favorable results, this manuscript still does not adequately guide clinicians in decision making. Based on this, we decided to perform trial sequential analysis to assess the reliability of the data in the cumulative meta-analysis. For improvement in advanced cirrhosis for all NASH patients, the analysis shows robustness, with the cumulative sample size crossing the monitoring boundaries, indicating that the available data are more than enough to sustain the conclusion. This statistical robustness is accompanied by a number needed to treat for improvement in advanced cirrhosis for all NASH patients of 14 (95% CI 8.9-29.7) for rosiglitazone-pioglitazone, and 11 (95% CI 7.2-22) for pioglitazone. The best candidates for this therapy are those with advanced fibrosis, for whom the number needed to treat is 3 (95% CI 1.8-4.1) for rosiglitazone-pioglitazone, and 2 (95% CI 1.4-2.9) for pioglitazone. The clinical decision should be balanced against the risks described earlier with pioglitazone use, one of the most relevant being the increased risk of bladder cancer (HR 2.642; 95% CI 1.106-6.31, p=0.029), with a number needed to harm of 1200. These data emphasize the need for suitable assessment of fibrosis, so as to promote pioglitazone in highly selected patients who will obtain the greatest benefit while reducing exposure to potential adverse effects. Finally, the references in the figures are incorrect, making it difficult to follow the original sources.
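For context, a number needed to treat (or harm) is the reciprocal of the corresponding absolute risk difference, so the figures above imply, approximately (these back-calculations are illustrative only, not values reported in the comment):

\[ \mathrm{NNT} = \frac{1}{\mathrm{ARR}}: \qquad \mathrm{NNT}=14 \Rightarrow \mathrm{ARR} \approx 7\%, \qquad \mathrm{NNT}=3 \Rightarrow \mathrm{ARR} \approx 33\%, \qquad \mathrm{NNH}=1200 \Rightarrow \mathrm{ARI} \approx 0.08\%. \]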
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 22, NephJC - Nephrology Journal Club commented:
This trial comparing peritoneal dialysis versus furosemide in pediatric post-operative acute kidney injury was discussed on May 23rd and 24th, 2017 on #NephJC, the open online nephrology journal club. Introductory comments written by Michelle Rheault are available at the NephJC website here. The discussion was quite detailed, with over 90 participants, including pediatric and adult nephrologists and fellows, and was joined by author Dave Kwiatkowski. The highlights of the tweetchat were:
The authors should be commended for designing and conducting this important trial, with funding received from the American Heart Association–Great Rivers Affiliate and the Cincinnati Children’s Hospital Medical Center
Overall, it was thought to be a well-designed and well-conducted trial, with possible weaknesses being the use of bolus (rather than continuous infusion) furosemide in the control arm and the reliance on negative fluid balance at day 1 as an important outcome
The results were thought to be quite valid and important, and given the not uncommon risk of acute kidney injury and fluid overload in this setting, preemptive peritoneal dialysis catheters should be considered more often in children at high risk. Transcripts of the tweetchats, and curated versions as Storify, are available from the NephJC website. Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 28, Natalie Clairoux commented:
Free version of this article available at https://papyrus.bib.umontreal.ca/xmlui/handle/1866/19574
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 07, Claudiu Bandea commented:
Are the conclusions of the Lancet Neurology article by Stopschinski and Diamond flawed?
In their article entitled “The prion model for progression and diversity of neurodegenerative diseases”(1), Barbara Stopschinski and Marc Diamond conclude: “We do not know if common neurodegenerative diseases (e.g. Alzheimer's disease, Parkinson's disease, ALS and Huntington's disease) involve transcellular propagation of protein aggregation, as predicted by the prion model. Until specific interventions are able to block protein propagation and successfully treat patients, this model will be mainly speculative” (italics and parenthesis added).
Given that one of the authors, Marc Diamond, published several previous articles in which he refers to the primary proteins implicated in neurodegenerative diseases as 'prions' [see for example: “Tau Prion Strains Dictate Patterns of Cell Pathology, Progression Rate, and Regional Vulnerability In Vivo” (2) and “Propagation of prions causing synucleinopathies in cultured cells” (3)], it is surprising to learn in this new article (1) that, in fact, we do not know if these neurodegenerative disorders are caused by 'prions'. Does this mean that there are no “Tau Prion Strains” and there are no “Propagation of prions causing synucleinopathies”?
Also, how do the authors reconcile their conclusion with the following statement in the Abstract of a concurrently published paper entitled “Cellular Models for the Study of Prions”(4): “It is now established that numerous amyloid proteins associated with neurodegenerative diseases, including tau and α-synuclein, have essential characteristics of prions, including the ability to create transmissible cellular pathology in vivo”?
Further, Stopschinski and Diamond state that “until specific interventions are able to block protein propagation and successfully treat patients” the prion model remains speculative. If that’s the case, have these “specific interventions” (which would prove that Alzheimer's, Parkinson's, ALS and Huntington's are indeed caused by ‘prions') been used to also validate that the disorders traditionally defined as ‘prion diseases’, such as Creutzfeld-Jakob disease, are indeed caused by 'prions'? If not, is the prion model for Creutzfeld-Jakob disease just speculative?
In their outline of future directions, the authors write: “Given the wide ranging role of self-replicating protein aggregates in biology, we propose that pathological aggregation might in fact represent a dysregulated, but physiological function of some proteins—ie, the ability to change conformation, self-assemble, and propagate” (1).
This is a remarkable statement in that it points to the radical idea that the pathological aggregation of the proteins implicated in neurodegenerative diseases, such as tau, amyloid β, α-synuclein and ‘prion protein’, is an intrinsic phenomenon associated with their physiological function, which is a profound departure from the conventional view presented in thousands of publications over the last few decades. However, I have a problem with the formulation of the statement, specifically with “…we propose…”. The authors might not be fully familiar with the literature on neurodegenerative diseases, but what they are proposing has been the primary topic of articles published several years ago (e.g. 5, 6).
Given the extraordinary medical, public health and economic burden associated with neurodegenerative diseases, the authors or the editorial team/reviewers (7) should be expected to address the questions and issues raised in this comment.
References
(1) Stopschinski BE, Diamond MI. 2017. The prion model for progression and diversity of neurodegenerative diseases. Lancet Neurology. doi: 10.1016/S1474-4422(17)30037-6. Stopschinski BE, 2017
(2) Kaufman SK, Sanders DW, Thomas TL et al. 2016. Tau Prion Strains Dictate Patterns of Cell Pathology, Progression Rate, and Regional Vulnerability In Vivo. Neuron. 92(4):796-812. Kaufman SK, 2016
(3) Woerman AL, Stöhr J, Aoyagi A, et al. 2015. Propagation of prions causing synucleinopathies in cultured cells. Proc Natl Acad Sci U S A. 112(35):E4949-58. Woerman AL, 2015
(4) Holmes BB, Diamond MI. 2017. Cellular Models for the Study of Prions. Cold Spring Harb Perspect Med. doi: 10.1101/cshperspect.a024026. Holmes BB, 2017
(5) Bandea CI. 2009. Endogenous viral etiology of prion diseases. Nature Precedings. http://precedings.nature.com/documents/3887/version/1/files/npre20093887-1.pdf
(6) Bandea CI. 2013. Aβ, tau, α-synuclein, huntingtin, TDP-43, PrP and AA are members of the innate immune system: a unifying hypothesis on the etiology of AD, PD, HD, ALS, CJD and RSA as innate immunity disorders. bioRxiv. doi:10.1101/000604; http://biorxiv.org/content/biorxiv/early/2013/11/18/000604.full.pdf
(7) George S, Brundin P. 2017. Solving the conundrum of insoluble protein aggregates. Lancet Neurol. doi: 10.1016/S1474-4422(17)30045-5. George S, 2017
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 05, Anders von Heijne commented:
If you like this article, you simply must read this: Lawson AE, Daniel ES. Inferences of clinical diagnostic reasoning and diagnostic error. J Biomed Inform. 2011 Jun;44(3):402-12. The US philosopher Charles Sanders Peirce has a lot of important things to say about how we create our hypotheses!
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Aug 18, Tamás Ferenci commented:
I congratulate the authors for making the results of Nolte et al. much more accessible. To further facilitate the application and investigation of this model, I implemented their reparameterized version in the free and open-source mrgsolve package, which can be run from R.
The model is available at https://github.com/tamas-ferenci/NolteWeisser_AluminiumKinetics.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 29, Dalia Al-Karawi commented:
Dear Authors, This meta-analysis was previously published in Phytotherapy Research in February 2016. The data included in this meta-analysis are identical to those we published before. The original publication, The Role of Curcumin Administration in Patients with Major Depressive Disorder: Mini Meta-Analysis of Clinical Trials, included the same studies and quantified the same effect size as you did in this paper. The same goes for the interpretation of the data and the conclusions you came up with. I wonder: what is your take on this topic that was not mentioned before?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 05, Anil Makam, MD, MAS commented:
We thank Dr. Atkin and colleagues for publishing long-term outcomes after a one-time screening flexible sigmoidoscopy.(1) Although we agree that this strategy reduces the relative risk of colorectal cancer (CRC) diagnoses and death, we disagree with the methods the authors used to calculate the absolute magnitude of benefit, which is critical in determining whether screening is actually worth the burden, harms and cost.(2) By relying on per protocol analyses, the authors overestimate the absolute benefit of flexible sigmoidoscopy given healthy user and adherer biases inherent in preventive health interventions— i.e., those who adhere to CRC screening also have other behaviors that reduce their overall risk of cancer and death (e.g. diet, smoking habits, exercise, etc.) independent of the screening test itself.(3, 4) There is strong evidence for the presence of these biases in the UK Flexible Sigmoidoscopy Screening Trial given the marked differences in all-cause mortality within the invited group when stratified by those who were adherent versus those who were not adherent to flexible sigmoidoscopy (20.7% versus 29.5%), a screening test that does not reduce overall mortality. Assessing the absolute benefits for screening from the intention-to-treat analyses gives the most accurate estimates and avoids the pitfalls of these biases. This approach results in markedly attenuated estimates of the benefits (Table). Because screening does not save lives (number needed to screen of infinity for all-cause mortality), accurate estimates of the absolute benefit on reducing CRC diagnoses and CRC-related death are key to informing decision aid development and shared decision making.
See Table here: https://twitter.com/AnilMakam/status/860490959225847809
Anil N. Makam, MD, MAS; Oanh K. Nguyen, MD, MAS
Department of Internal Medicine, UT Southwestern Medical Center, Dallas, Texas, USA
We declare no competing interests.
REFERENCES
(1) Atkin W, Wooldrage K, Parkin DM, Kralj-Hans I, MacRae E, Shah U, Duffy S, Cross AJ. Long term effects of once-only flexible sigmoidoscopy screening after 17 years of follow-up: the UK Flexible Sigmoidoscopy Screening randomised controlled trial. Lancet. 2017.
(2) Makam AN, Nguyen OK. An Evidence-Based Medicine Approach to Antihyperglycemic Therapy in Diabetes Mellitus to Overcome Overtreatment. Circulation. 2017;135(2):180-195.
(3) Shrank WH, Patrick AR, Brookhart MA. Healthy user and related biases in observational studies of preventive interventions: a primer for physicians. J Gen Intern Med. 2011;26(5):546-550.
(4) Imperiale TF, Monahan PO, Stump TE, Glowinski EA, Ransohoff DF. Derivation and Validation of a Scoring System to Stratify Risk for Advanced Colorectal Neoplasia in Asymptomatic Adults: A Cross-sectional Study. Ann Intern Med. 2015;163(5):339-346.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 03, Peter Hajek commented:
Meta-analyses do not normally use approaches that differ from a widely accepted standard, and this field has one (see over a dozen Cochrane meta-analyses, the Russell Standard, and any other norm in this field). As far as I know, no meta-analysis or individual study over the past 20 years or so included completers only. As explained earlier, the key point is that ‘missingness’ in this field is not considered random. If among 100 smokers who had treatment only 10 answer follow-up calls and report abstinence, the success rate is considered to be 10% rather than 100% as you would report it.
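To make the arithmetic concrete, here is a minimal sketch in Python with purely hypothetical numbers (not drawn from any trial discussed here), contrasting a completers-only calculation with the intention-to-treat convention described above:

```python
# Hypothetical smoking-cessation arm: 100 randomised, 10 reached at follow-up,
# all 10 report abstinence. Numbers are illustrative only.
randomised = 100
followed_up = 10
abstinent_among_followed = 10

# Completers-only ("complete case") rate: dropouts are simply excluded.
completers_rate = abstinent_among_followed / followed_up   # 1.00 -> 100%

# Russell Standard / intention-to-treat convention: anyone lost to follow-up
# is counted as a continuing smoker, so the denominator is everyone randomised.
itt_rate = abstinent_among_followed / randomised            # 0.10 -> 10%

print(f"completers-only quit rate:    {completers_rate:.0%}")
print(f"intention-to-treat quit rate: {itt_rate:.0%}")
```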
Re: studies that exclude treatment successes, imagine a treatment with good efficacy that helps 50% of patients, but only the 50% who were not helped are followed up. These treatment failures may have worse outcomes than a random comparator group (they could have been treatment resistant, e.g. because they have a more severe condition or other adverse circumstances). Your approach would interpret the finding as showing that the treatment is not only ineffective, but that it causes harm – when in fact it shows no such thing and is simply an artefact of the selection bias.
I appreciate that getting these things right can be difficult and would leave this alone if it was not such an important topic open to ideological misuse. And I agree that more studies are needed for the definitive answers to emerge.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 17, Regina El Dib commented:
In his comment, Dr. Hajek states that ‘in smoking cessation trials, drop-outs are classified as non-abstainers’. The approach to dealing with missing data in a meta-analysis is different from that in trials. A survey of the methods literature identified four proposed approaches for dealing with missing outcome data when conducting a meta-analysis (https://www.ncbi.nlm.nih.gov/pubmed/26202162). All approaches recommended the use of a complete case analysis as the primary analysis. This is exactly how we conducted our meta-analysis (figure 5 in the published paper); the pooled relative risk (RR) was 2.03 (95% CI 0.94 to 4.38) for smoking cessation with ENDS relative to ENNDS.
The same proposed approaches recommended additional sensitivity analyses using different imputation methods. The main purpose of these additional analyses is to assess the extent to which missing data may be biasing the findings of the primary analysis (https://www.ncbi.nlm.nih.gov/pubmed/23451162). Accordingly, we have conducted two sensitivity analyses respectively assuming that all participants with missing data had success or failure in smoking cessation. When assuming success, the pooled RR was 0.95 (95% CI 0.76 to 1.18, p=0.63) with ENDS relative to ENNDS; when assuming failure, the pooled RR was 2.27 (95% CI 1.04 to 4.95, p=0.04). This dramatic variation in the results when making different assumptions is clearly an indicator that the missingness of data is associated with a risk of bias, and that decreases our confidence in the results. We have already reflected that judgment in our risk of bias assessment of these two studies, in table 4 and figure 2; and in our assessment of the quality of evidence in table 7.
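As an illustration of how strongly the imputation assumption can move a risk ratio, the following minimal sketch uses entirely hypothetical counts for a single two-arm trial; the review pooled across trials, but the per-trial logic is the same:

```python
# Hypothetical two-arm trial (ENDS vs ENNDS); counts are illustrative only.
# Per arm: number randomised, quitters observed among those followed up,
# and the number actually followed up.
arms = {
    "ENDS":  {"n": 100, "quit_observed": 20, "followed": 70},
    "ENNDS": {"n": 100, "quit_observed": 10, "followed": 75},
}

def rr(assumption):
    """Risk ratio ENDS vs ENNDS under a given handling of missing outcomes."""
    risks = {}
    for arm, d in arms.items():
        missing = d["n"] - d["followed"]
        if assumption == "complete_case":   # drop participants with missing data
            risks[arm] = d["quit_observed"] / d["followed"]
        elif assumption == "all_success":   # impute missing as quitters
            risks[arm] = (d["quit_observed"] + missing) / d["n"]
        elif assumption == "all_failure":   # impute missing as continuing smokers
            risks[arm] = d["quit_observed"] / d["n"]
    return risks["ENDS"] / risks["ENNDS"]

for a in ("complete_case", "all_success", "all_failure"):
    print(f"{a}: RR = {rr(a):.2f}")
```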
Even if we were going to consider the RR of 2.27 as the best effect estimate (i.e., assuming all those with missing data had failure with smoking cessation), the findings would not support the effectiveness of e-cigarettes for smoking cessation. Indeed, the included trials do not address that question, and our review found no study comparing e-cigarettes to no e-cigarettes. The included trials compare two forms of e-cigarettes.
When assessing an intervention A (e.g., e-cigarettes) that has two types A1 (e.g., ENDS) and A2 (e.g., ENNDS), it would be important to first compare A (A1 and/or A2) to the standard intervention (e.g., no intervention or nicotine replacement therapy (NRT)), before comparing A1 to A2. If A1 and A2 are inferior to the standard intervention with A1 being less inferior than A2 (but still inferior to the standard intervention), focusing on the comparison of A1 to A2 (and ignoring the comparison to the standard intervention) will show that A1 is better than A2. That could also falsely suggest that at least A1 (and maybe A2) is favorable. Therefore, a recommendation of A1 vs. A2 should be considered only if A is already recommended over the standard intervention (i.e. A is non inferior to the standard intervention).
Dr. Hajek also criticizes the inclusion of studies that recruited smokers who used e-cigarettes in the past but continue to smoke. When discussing treatment and examining evidence, we refer to effectiveness (known as pragmatic; a treatment that works under real-world conditions). This includes (among other criteria) the inclusion of all participants who have the condition of interest, regardless of their anticipated risk, responsiveness, comorbidities or past compliance. Therefore, the inclusion of studies that recruited smokers who used e-cigarettes in the past but continue to smoke had a role in portraying the impact of ENDS and ENNDS on long-term tobacco use in cigarette smokers.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 02, Wasim Maziak commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 02, Peter Hajek commented:
Wow! Let me try to explain these points in turn.
Re excluding drop-outs: Imagine 100 people get real treatment and 40 quit, 100 get placebo and 20 quit. All those who were successful attend for follow-up (they feel good, grateful, get praised) but many fewer among treatment failures are willing to face the music (they feel that they disappointed clinicians, may be told off, or feel that the treatment was rubbish). If the same proportion of failures attend in each group (or if none attend), the success rate among attenders will be identical for both study arms, despite the real quit rates being 40% vs 20%. Check https://www.ncbi.nlm.nih.gov/pubmed/15733243. I cannot think of a mechanism through which this could act in the opposite direction, as you assert.
Re including irrelevant studies, your response provided no explanation for doing this.
Re. your statements about 'industry people' who should conduct independent science: I do not have, and never had, any links with any tobacco or e-cigarette manufacturers, and I have published dozens of studies on smoking cessation treatments. I believe that anti-vaping activism presented as science needs challenging, because misinforming smokers keeps them smoking, and undermining much less risky alternatives to cigarettes protects the cigarette monopoly and harms public health.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 02, Wasim Maziak commented:
None
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 01, Peter Hajek commented:
The conclusion that further trials of e-cigarettes are needed is correct, but there are two major problems with this review of the work that has been done so far.
In smoking cessation trials, drop-outs are classified as non-abstainers because treatment failures are less likely to engage in further contact than treatment successes. Practically all smoking cessation trials and reviews published in the last 10 years or so use this approach. Removing drop-outs from the sample, as was done here, dilutes any treatment effect.
The second serious issue is the inclusion of studies that recruited smokers who used e-cigarettes in the past but continue to smoke. Such studies have a higher proportion of treatment failures in the ‘tried e-cig cohort’ and so have less quitting in this subgroup, but they provide no useful information on the efficacy of e-cigarettes. Saying that they provide low quality evidence is wrong – they provide no evidence at all.
The narrative that some studies show that vaping helps quitting and some that it hinders misrepresents the evidence. No study showed that vaping hinders quitting smoking. The two RCTs with long-term outcome, if analysed in an unbiased way, show a positive effect despite controlling for sensorimotor effects and using low nicotine delivery products.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
www.sciencedirect.com www.sciencedirect.com
-
On 2017 Aug 08, Christopher Tench commented:
Can you possibly provide the coordinates used? Without them it is not possible to understand exactly what analysis has been performed.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 24, KEVIN BLACK commented:
Nature.com: ... a link to the "MRI-specific module to complement the methods reporting checklist," please?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 24, Mary Klem commented:
A primary rationale provided by Cooper et al. for conducting this study is that a previous review (Olatunji et al. 2014) used a search strategy that "was not systematic or clearly defined" (pg 111). Cooper et al. claim that they conducted a systematic search "...following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance..." (pg 110). However, details provided in the Methods section strongly suggest that Cooper et al. themselves failed to conduct searches that were systematic and that they have failed to provide a clear explanation of what and how they searched. Thus, this review suffers from the same flaws Cooper et al. identified in Olatunji et al. 2014.
PRISMA recommends that authors provide, for each database searched, the database name, the platform used to search the database, and the start and end dates for the search of each database. Cooper et al., fail to do this, providing only database names and no start-end dates. They also include EBSCO in the list of databases searched, even though EBSCO is not a database. EBSCO is a platform that can be used to search a variety of databases, and it is not clear from the paper which databases were searched using this platform.
PRISMA also recommends that authors present the complete search strategy used for at least one of the major databases searched. Cooper et al. fail to do this, providing only what appears to be a simple description of their search terms. The authors note that the search terms "were searched in key words, title, abstract, and as MeSH subject headings" (pg 112). This is an odd statement, because the authors say they searched multiple databases, yet MeSH is a controlled vocabulary only available for use in PubMed. So did the authors utilize each database's controlled vocabulary, e.g., Emtree terms for Embase, PsycINFO thesaurus terms? Or did they somehow attempt to use MeSH in these other databases and fail to use the appropriate controlled vocabulary? Given this apparent confusion about the nature of subject headings, I have little confidence that the authors conducted systematic or comprehensive searches.
Overall, then, this review suffers from a lack of clarity about the search strategies used, and the search strategies themselves are suspect. With these limits, it is unlikely that the findings of this study can add to the current literature in any substantive way.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 25, Marcia Herman-giddens commented:
It is already known that C6 is not a test for acute LB. This study found "All (34/34) seropositive blood donors followed over time remained seropositive at follow-up after 22-29 months." Perhaps seropositivity remains much longer than this. (Doesn't this question beg for a longer study, unless the whole test is thrown out as useless?) This could explain why, the older people get, the more likely they are to be positive, since they would have a longer exposure period ("seroprevalence was significantly higher in males and increased with age"). Males are usually more exposed to the outdoors over a lifetime than women. Thus, it would seem that the conclusion of the study should be that a lot of people have had LB infections in Kalmar County, and that a positive C6 could well indicate ongoing infection or a past infection, rather than the study's implication of false positivity. One wonders who would want blood from a C6-positive donor? The bottom line from this study seems to be that C6 is a useless test for blood screening for LB and should not be used.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 May 22, Martine Crasnier-Mednansky commented:
The terminology 'induction prevention' does not apply to cAMP-dependent Carbon Catabolite Repression (CCR). It has been used to illustrate CCR in Bacillus subtilis, see figure 2 in Görke B, 2008. Escherichia coli cAMP-dependent CCR does not cause induction prevention. It is therefore incorrect to state: "The current model for glucose preference over other sugars involves inducer exclusion and induction prevention, both of which are strictly dependent on the phosphorylation state of EIIA<sup>Glc</sup> in E. coli". Moreover, the major player in induction prevention is HPr, not Enzyme IIA<sup>Glc</sup>.
Some referenced papers were misinterpreted by the authors. Lengeler J, 1972 reported, in wild type cells, induction of the mannitol operon 'is not prevented' by glucose. Lengeler J, 1978 reported expression of the mtl operon, in both constitutive and inducible strains, is 'resistant to CCR' caused by glucose. This expression was nearly insensitive to cAMP addition, even though expression of the mtl operon is dependent on cAMP (a cya mutant strain does not grow on mannitol). Hence, the level of cAMP in glucose-growing cells is probably sufficient for expression of the mannitol operon. It is unclear how the authors monitored CCR with their inducible strains, as data were not shown.
It was proposed that induction of the mannitol operon may take place in the absence of PTS transport, as follows. In the unphosphorylated state, transport of mannitol by Enzyme IICBA<sup>Mtl</sup> (MtlA) occurred by facilitated diffusion, upon high-affinity binding of mannitol to the IIC domain Lolkema JS, 1990. Thus, the IIC domain appears to be a transporter by itself, translocating mannitol at a slow rate. This provides an explanation for the observations that (1) mutant strains lacking Enzyme I and/or HPr were still inducible by mannitol (which originally led to the proposal that mannitol may be the inducer of the mannitol operon Solomon E, 1972) and (2) mutant strains lacking Enzyme IICBA<sup>Mtl</sup> could not be induced unless mannitol was artificially generated in the cytoplasm. It was therefore concluded that mannitol was the inducer of the mannitol operon. Interestingly, in the phosphorylated state, transport of free mannitol by Enzyme IICBA<sup>Mtl</sup> can be detected on the condition that the transporter has a poor phosphorylation activity Otte S, 2003.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 24, John Roger Andersen commented:
Read the publisher's PDF version in ReadCube for FREE: click here.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 23, Marko Premzl commented:
The third-party data set of eutherian kallikrein genes LT631550-LT631670 was deposited in the European Nucleotide Archive under the research project "Comparative genomic analysis of eutherian genes" (https://www.ebi.ac.uk/ena/data/view/LT631550-LT631670). The 121 complete coding sequences were curated using tests of reliability of eutherian public genomic sequences, as part of an eutherian comparative genomic analysis protocol that includes gene annotations, phylogenetic analysis and protein molecular evolution analysis (RRID:SCR_014401).
Project leader: Marko Premzl PhD, ANU Alumni, 4 Kninski trg Sq., Zagreb, Croatia
E-mail address: Marko.Premzl@alumni.anu.edu.au
Internet: https://www.ncbi.nlm.nih.gov/myncbi/mpremzl/cv/130205/
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jul 12, Christian J. Wiedermann commented:
After critical reading of the paper, the authors' conclusions suggesting renal safety of hydroxyethyl starch (HES) do not appear warranted. Since BMC Anesthesiology does not offer a correspondence or letters-to-the-editor section, where some of the study's limitations could be discussed, I would like to post a comment here.
Zhang et al. describe a multicenter, double-blind, controlled randomized clinical trial that evaluated the renal safety of HES in patients undergoing hip arthroplasty under spinal anesthesia, apparently showing that there is no increase in renal injury with 6% HES 130/0.4 compared with lactated Ringer’s solution during this type of orthopedic surgery:
- The reported methodology for the study provided no information on the statistical approach, either regarding the sample size calculation or the comparisons between groups for each outcome. Thus, it is impossible to determine whether this study, which involved a relatively small patient population (120 patients randomized), was powered sufficiently to detect statistically significant and clinically important differences between HES and Ringer’s lactate in the primary and secondary outcomes (a rough illustrative calculation follows at the end of this comment).
- Eligibility criteria meant that the patient population did not include patients with American Society of Anesthesiologists physical status score >III, thus limiting this elderly population to those at a lower risk of developing AKI.
- The primary outcome of the study was the levels of urine and plasma neutrophil gelatinase-associated lipocalin (NGAL) and plasma interleukin 18 (IL-18), which were used as biomarkers for the early detection of AKI. NGAL has been widely investigated as a biomarker for AKI; however, its clinical utility remains unclear because of difficulties in interpreting results due to different settings, sampling time points, measurement methods, and cut-off values [Singer E, 2013]. Although IL-18 holds promise as a biomarker for the prediction of AKI, it has only moderate diagnostic value [Lin X, 2015].
- The follow-up period in the study was only 5 days, and consequently HES-induced AKI may have been missed (the FDA and EMA recommended monitoring of renal function in patients for at least 90 days [Wiedermann CJ, 2017]).
Thus, no conclusions regarding the renal safety of HES can be drawn from the study. The assessment of the benefit/risk profile associated with the perioperative administration of HES will require rigorously designed, sufficiently powered randomized controlled clinical trials incorporating a clinically meaningful outcome for the licensed dosage/indication in an appropriate patient population. Most clinical research fails to be useful not because of its findings but because of its design [Ioannidis JP, 2016], which is particularly true for studies with HES [Wiedermann CJ, 2014].
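As a rough illustration of the sample-size concern raised in the first point above, the standard normal-approximation formula for comparing two proportions is sketched below; the AKI incidences used are hypothetical and chosen only to indicate the order of magnitude involved:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two proportions
    (normal-approximation formula, two-sided alpha)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical AKI incidences: 5% with Ringer's lactate vs 10% with HES.
print(n_per_group(0.05, 0.10))   # roughly 430+ patients per group
```

Under these assumptions a trial would need several hundred patients per arm, far more than the 120 randomized in total, which is why the absence of a reported sample-size calculation matters.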
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Oct 04, william jobin commented:
In the situation where people are using the boats to cross to the island, it is obviously a place to do snail control through periodic weed removal and application of bayluscide. This technique was successfully demonstrated years ago on Volta Lake. Why do you limit yourself to drugs?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 27, Harri Hemila commented:
Djulbegovic and Guyatt do not refute my criticism of their 3 novel EBM principles
Djulbegovic and Guyatt challenge my short critique of Djulbegovic B, 2017 by stating that “the BMJ’s rating of EBM as one of medicine’s 15 most important advances since 1840 is but one testimony to the impact of its conceptualization of key principles of medical practice, see BMJ 2007 Jan 20;334(7585):111”.
The short news article to which they refer to has the title “BMJ readers choose the sanitary revolution as greatest medical advance since 1840”.
First, that BMJ news article is a 305-word summary of findings from a Gallup poll of 11341 readers of the BMJ, only one third of whom were physicians. Reporting the opinions of BMJ readers does not refute my criticisms.
Second, the 305-word BMJ text was published in 2007. The BMJ readers could not have anticipated in 2007 the revision of the basic principles of EBM a decade later by Djulbegovic B, 2017. The findings in a 10-year-old survey are not relevant to the current discussion of whether the 3 novel EBM principles are reasonable.
Third, the short BMJ text does not mention the term EBM anywhere in the piece. The text states that “sanitation topped the poll, followed closely by the discovery of antibiotics and the development of anaesthesia.” There are no references to EBM whatsoever.
Fourth, one major proposal of the original EBM-paper in JAMA (1992) was that physicians should not lean uncritically on authorities: “the new [EBM] paradigm puts a much lower value on authority”, see p. 2421 in Evidence-Based Medicine Working Group., 1992. Thus, it is very odd that Djulbegovic and Guyatt argue for the importance of the EBM-approach, while they simultaneously consider that the BMJ is such an important authority. It is especially surprising that the mere reference to a 305-word text in the BMJ somehow would refute my comments, even though the text does not discuss EBM or other issues related to my comments either literally or by implication.
Fifth, Djulbegovic and Guyatt state that the 305-word text in the BMJ is a “testimony” in favor of EBM. Testimonies do not seem relevant to this kind of academic discussion. Testimonies are popularly used when trying to impress uncritical readers about claims to which there is no sound support, such as testimonies for homeopathy on numerous pages in the internet.
Djulbegovic and Guyatt also write “The extent to which EBM ideas are novel or, rather, an extension, packaging and innovative presentation of antecedents, is a matter we find of little moment.”
I do not agree with their view. If there is no novelty in the 3 new EBM principles proposed by Djulbegovic B, 2017, and if there is no reasonable demarcation line between “evidence-based medicine” and “ordinary medicine” or simply “medicine”, why should we reiterate the prefix “evidence-based” instead of simply stating that we are discussing “medicine” and are trying to make progress in medicine? If “evidence-based” does not give any added meaning in a discussion, why should such a prefix be used?
Djulbegovic and Guyatt stated that I “do not appear to disagree with [their] overview of the progress in EBM during last quarter of century”. That statement is not entirely correct. A short commentary must have a narrow focus. The focus in my comment was simply on the 3 new EBM principles that were presented by Djulbegovic B, 2017. The absence of any other comments on their overview does not logically mean that I agree with other parts of their overview.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Nov 23, BENJAMIN DJULBEGOVIC commented:
Hemila does not appear to disagree with our overview of the progress in EBM during last quarter of century. His main concerns seem to relate to the origin of the ideas. The extent to which EBM ideas are novel or, rather, an extension, packaging and innovative presentation of antecedents, is a matter we find of little moment. The BMJ’s rating of EBM as one of medicine’s 15 most important advances since 1840 is but one testimony to the impact of its conceptualization of key principles of medical practice (see BMJ. 2007 Jan 20;334(7585):111. doi: 10.1136/bmj.39097.611806.DB).
Benjamin Djulbegovic & Gordon Guyatt
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Nov 19, Harri Hemila commented:
The three novel principles for EBM are old: the emperor has no clothes
In their paper, Djulbegovic B, 2017 describe three novel principles for EBM.
Djulbegovic and Guyatt write (p. 416): “the first EBM epistemological principle is that not all evidence is created equal, and that the practice of medicine should be based on the best available evidence.”
There is no novelty in that statement. Even before 1992, scientists, including those in the medical fields, understood that some types of research give more reliable answers.
Furthermore, Djulbegovic and Guyatt do not follow the first principle in their own paper. They write (p. 416): “Millions of healthy women were prescribed hormone replacement therapy [HRT] on the basis of hypothesised reduction in cardiovascular risk; randomised trials refuted these benefits and demonstrated that hormone replacement therapy increased the incidence of breast cancer.”
In an earlier paper, Vandenbroucke JP, 2009 wrote “Recent reanalyses have brought the results from observational and randomised studies into line. The results are surprising. Neither design held superior truth. The reasons for the discrepancies were rooted in the timing of HRT and not in differences in study design.” In another paper, Vandenbroucke JP, 2011 wrote “Four meta-analyses contrasting RCTs and observational studies of treatment found no large systematic differences … the notion that RCTs are superior and observational studies untrustworthy … rests on theory and singular events”.
Djulbegovic and Guyatt thus reiterate old assumptions about the unambiguous superiority of RCTs compared with observational studies. They do not follow their own first EBM principle that arguments “should be based on the best available evidence”. The above-mentioned papers by Vandenbroucke had already been published and were therefore available; thus they should have been taken into account when Djulbegovic and Guyatt argued for the superiority of RCTs in 2017.
Djulbegovic and Guyatt further write (p. 416): “the second [EBM] principle endorses the philosophical view that the pursuit of truth is best accomplished by evaluating the totality of the evidence, and not selecting evidence that favours a particular claim.”
There is no novelty in espousing that principle either. Objectivity has been a long term goal in the natural sciences, and also in the medical fields.
Furthermore, Djulbegovic and Guyatt do not follow the second principle in their own paper. Their reference 94 is to the paper by Lau J, 1992, to which Djulbegovic and Guyatt refer with the following statement (p. 420): “the history of a decade-or-more delays in implementing interventions, such as thrombolytic therapy for myocardial infarction.” However, in the same paper, Lau J, 1992 also calculated that there was very strong evidence that magnesium was a useful treatment for infarctions with an OR = 0.44 (95% CI: 0.27 - 0.71). However, in the ISIS-4 trial, magnesium had no effect: “Lessons from an effective, safe, simple intervention that wasn't”, see Egger M, 1995.
Thus, Djulbegovic and Guyatt cherry-picked one intervention (thrombolytic therapy) to support their statement that many interventions should have been taken into use much more rapidly, but they dismissed another intervention in the paper by Lau J, 1992 that would serve as an unequivocal counter-example to the same statement. This surely is an example of “selecting evidence that favours a particular claim”.
Principles 1 and 2 had already been advocated in James Lind’s book on scurvy (1753), which was listed as reference number 1 in Djulbegovic B, 2017. Lind wrote: “As it is no easy matter to root out old prejudices, or to overturn opinions which have acquired an establishment by time, custom and great authorities; it became therefore requisite for this purpose, to exhibit a full and impartial view of what has hitherto been published on the scurvy; and that in a chronological order, by which the sources of those mistakes may be detected. Indeed, before this subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish.” See Milne I, 2012.
Thus, Djulbegovic and Guyatt’s EBM principles 1 and 2 are not new; they are more than 250 years old.
Djulbegovic and Guyatt write further (p. 416): “the third epistemological principle of EBM is that clinical decision making requires consideration of patients’ values and preferences.”
The importance of patient autonomy is not an innovation attributable to EBM, however.
When the EBM-movement started with the publication of the JAMA (1992) paper by the Evidence-Based Medicine Working Group., 1992, there was novelty in the proposals. The suggestion that each physician should himself or herself read the original literature to the extent proposed by EBM enthusiasts in 1992 was novel, as far as I can tell from the history. The strength of the suggestion to restrict the source of valid evidence about the efficacy of medical interventions to RCTs was also novel as far as I can see. Thus, the Evidence-Based Medicine Working Group., 1992 had novel ideas and described the background for those ideas. We can disagree about the 1992 proposals, as many have done, but I do not consider it fair to claim that the JAMA (1992) paper had no novelty.
In contrast, the aforementioned three principles described by Djulbegovic B, 2017 are not novel. The principles can be traced back to times long before even 1992. In addition, none of the principles, alone or in combination, sets any unambiguous demarcation line as to what EBM is in 2017 and what it is not. How does evidence-based medicine differ from “ordinary” medicine, which has been using the same three principles for ages? If there is no difference between the two, why should the “evidence-based” term be used instead of simply writing “medicine”?
In their paper, Djulbegovic and Guyatt also describe their visions for the future, but I cannot see that any of their visions is specific to EBM. We could as well write their visions for the future by changing “EBM” to “medicine”.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 02, Gwinyai Masukume commented:
In an article he co-authored in The Lancet, the late Hans Rosling, internationally acclaimed for his entertaining and informative videos on global health Maxmen A, 2016, contends that the discipline has entered the post-fact era Nordenstedt H, 2016. The article makes the case that, in the post-fact era, data are tweaked for advocacy purposes in medical journals, with the inadvertent consequence of misguiding the investments needed to achieve the Sustainable Development Goals.
In this article on male circumcision for HIV prevention Downs JA, 2017, it is stated that a “systematic review estimated a risk reduction between 38% and 66%” for female-to-male HIV transmission conferred by voluntary medical male circumcision. It is not clarified whether this risk reduction is absolute or relative, although the cited systematic review, in its abstract, remarks that it is a relative risk reduction Siegfried N, 2009. An article titled “Misleading communication of risk” Gigerenzer G, 2010 discusses how such risk communication is non-transparent. By omitting the baseline risk, the bigger numbers make better headlines and better advocacy. However, Hans Rosling cautioned against operating on tweaked facts designed for advocacy, especially in medical journals, because journals can easily propagate an inaccurate understanding of the situation, which can make fixing global health problems more challenging Maxmen A, 2016 Nordenstedt H, 2016.
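To illustrate why the distinction matters, the following minimal sketch uses a hypothetical baseline risk (not a figure from the cited review) to convert a relative risk reduction into absolute terms:

```python
# Illustrative only: the baseline risk is hypothetical, chosen simply to show
# how the same relative risk reduction translates into absolute terms.
baseline_risk = 0.02            # hypothetical 2% risk of acquisition over the study period
relative_risk_reduction = 0.50  # mid-range of the 38-66% relative reduction cited

risk_with_intervention = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - risk_with_intervention
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")   # 1.0 percentage point
print(f"number needed to treat:  {number_needed_to_treat:.0f}")    # 100
```

The same "50% reduction" headline thus corresponds to a one-percentage-point absolute change under this assumed baseline, which is the information omitted when only relative figures are communicated.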
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 07, Victoria MacBean commented:
Plain English Summary:
This study examined differences in children’s awareness of breathing difficulty, specifically the influence of weight and asthma. With obesity on the rise in Western society and asthma being a common long-term medical condition, it is crucial to understand why obese, asthmatic children report more breathlessness than asthmatic children who are not overweight, even when there are no differences in the severity of their asthma. It has previously been suggested that overweight children may have an increased awareness of breathing effort.
This study compared various aspects of breathing across three groups of children: asthmatic children with healthy weight, overweight children with asthma and a control group of healthy weight children. The project involved the children breathing through a device which added resistance to breathing. Children were asked to rate how hard they felt it was to breathe, and the tests also measured the children’s breathing muscle activity to find out how hard the breathing muscles were working as the researchers purposefully increased the children’s effort to breathe.
The anticipated results were that healthy weight asthmatic children and healthy weight children would show similar results, that is that their breathing effort scores would steadily increase as they found it harder to breathe, with the breathing muscles working gradually harder. Meanwhile, the overweight asthmatic children would show a much steeper increase.
From the 27 children who were studied, the results showed that the overweight children gave higher effort scores throughout the tests, but that these increased at the same rate. There were no differences in the way the children’s breathing muscles responded to the tests. The reason for the higher overall effort scores in the overweight asthmatic children was that their muscles are already working harder than the other two groups before the experiment due to the changes that occur in the lung with increased weight. It was then concluded that overweight asthmatic children do not have differences in their awareness of breathing effort, but that their additional body mass means their muscles are already working harder.
This summary was produced by Sarah Ezzeddine, Year 13 student from Harris Academy Peckham, London and Neta Fibeesh, Year 13 student from JFS School, Harrow, London as part of the authors' departmental educational outreach programme.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 20, Daniel Corcos commented:
1) What you call the "underlying increasing incidence" of breast cancer is the increase due to x-ray-induced carcinogenesis. 2) As expected, the major spike in breast cancer incidence (invasive + in situ) is at the end of the 1990s and the beginning of the 2000s in the USA, contrasting with very little change in women under 50. Similar changes are seen in other countries in the appropriate age group, after implementation of mammography screening.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 20, Daniel B Kopans commented:
Response to Daniel Corcos' two concerns: 1. Actually, the annual incidence of "true" breast cancers has not been stable. It had been increasing going back to the 1940s. That is the basic point. When women are screened, the cancers detected earlier are layered on top of the increasing baseline. The incidence of invasive breast cancers was increasing in the U.S. and the other countries you mention long before screening became available.
It is fundamental epidemiology that when screening begins (the prevalence screen) you find the cancers that would have become clinically evident that year PLUS the cancers that were clinically evident that year but overlooked PLUS the cancers that are found 1, 2, or 3 years earlier by the screening test. Consequently, when new women begin screening the cancers detected (unfortunately called annual incidence) jump up. If this is done in one year (rarely) then it will go up and come back down toward the baseline incidence. If screening continues it never reaches back to the baseline since there will be new women each year having their prevalence screen. In addition, since the incidence of breast cancer increases with age, and screening advances the date of detection (a 47 year old woman will have the incidence of a 49 year old if screening finds cancers 2 years earlier), the annual detection rate will come back down toward the "baseline", but not reach it.

In the U.S. SEER data you can see that the prevalence numbers remained high from the mid 1980s (when screening began) to 1999, when they turned back down. This is because, in the U.S., the number of women participating in screening steadily increased each year (new prevalence screens) until it plateaued in 1999. This was followed by a decline in the annual detection rate back toward the baseline. However, you will note that the entire SEER curve is tilted up. This is because the baseline was almost certainly increasing by 1% per year over the same time period (as it had been doing going back to 1940). This is why, despite fairly steady participation in screening after 1999, the annual incidence in 2012 is higher than in 1978. It is not because screening is finding fake cancers, but because the underlying incidence of breast cancer has been steadily increasing. This is evident in other countries as well.

2. Radiation risk to the breast is age related and drops rapidly with increasing age, so that by the age of 40 there is no measurable risk at mammographic doses. All of the estimates are extrapolated, and even these are below even the smallest benefit from screening. Millions (hundreds of millions??) of mammograms were performed in the 1980s. If mammography was causing cancers then we would have expected a major spike in breast cancer at the end of the 1990s (a latency of 8-10 years). Instead, the incidence of breast cancer began to fall in 1999, consistent with the end of the prolonged prevalence peak. Even those who are trying to reduce access to screening no longer point to the radiation risk because there are no data to support it for women ages 40 and over.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Feb 20, Daniel Corcos commented:
Clearly, all the evidence for the overdiagnosis epidemic rests on the assumption that the annual incidence of "true" (unable to spontaneously regress) breast cancers is stable after implementation of mammography screening. You acknowledge that breast cancer incidence has increased in the USA after implementation of screening. You should also acknowledge that breast cancer incidence has increased in every country after implementation of screening. This cannot be a coincidence. However, as you have noticed, these cancers must be "true" cancers. So, there is misinformation, but it also comes from those pretending that low-dose x-rays are safe.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 19, Alexander Kraev commented:
DOI: 10.1186/1745-6150-9-18
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 18, Seán Turner commented:
The genus name Roultibacter is not novel. It was published previously by the same research group: "Raoultibacter" Traore et al. 2016 (PubMed Id 27595003).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 17, Clive Bates commented:
It's a surprise that the authors are apparently unaware of the efforts that have been made to reduce the supply of and demand for nicotine in its most dangerous form (smoking cigarettes). These include high taxation, advertising bans, public smoking bans, warnings, plain packaging, communications campaigns, smoking cessation services and so on. In fact, a whole WHO treaty (the FCTC) is devoted to it.
The idea of reducing the supply and demand of the very low-risk alternative is obviously absurd. The whole point of harm reduction is to expand the supply of and demand for the low-risk harm-reduction alternative at the expense of the high-risk product or behaviour. Are they seriously suggesting that we should take measures to reduce the supply of clean needles or reduce the demand for condoms in high HIV risk settings?
The main problem is that many commentators from this school are thoughtful harm-reductionists when it comes to illicit drugs, sexual behaviours and other risks, but inexplicably become 'abstinence-only' when it comes to the mildly psychoactive drug nicotine. It is a glaring inconsistency that this article helps to illuminate.
The point is that having low-risk alternatives to smoking is synergistic with the tobacco control measures favoured by these authors. E-cigarette, smokeless tobacco and heated tobacco products increase the range of options for smokers to respond to the pressures from tobacco control policies (e.g. taxation) without requiring abstinence, recourse to the black market or enduring the unwanted effects of tobacco policies on continuing smokers - like regressive tax burdens. That should appeal to those with genuine concerns for public health and wider wellbeing unless part of the purpose is to force smokers into an abstinence-only 'quit or die' choice and to make life harder for them.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
www.bmj.com www.bmj.com
-
On 2017 Apr 07, Harri Hemila commented:
Statistical problems in the vitamin D review by Martineau et al.
In their abstract, Martineau et al. state that "Vitamin D supplementation reduced the risk of acute respiratory tract infection among all participants (adjusted odds ratio 0.88 ...".
The odds ratio [OR] is often used as an approximation for the risk ratio [RR]. Martineau's abstract suggests that vitamin D might reduce the risk of respiratory infections by 12%. However, when the incidence of events is high, the OR can be highly misleading, as it exaggerates the size of the effect; see e.g. Viera AJ, 2008, Knol MJ, 2012, Katz KA, 2006, and Holcomb WL Jr, 2001.
Acute respiratory infections are not rare. In Figure 2 of the Martineau et al. meta-analysis, only 2 of the 24 trials had event rates below 20% in both groups. I reproduced their Figure 2 using the random-effects Mantel-Haenszel approach and calculated OR=0.82 (95% CI 0.72 to 0.95) for the 24 trials. The minor discrepancy with their published OR (i.e. OR=0.80) in Figure 2 is explained by adjustments. The Figure 2 data give RR=0.92 (95% CI 0.87 to 0.98). Thus, the OR suggests that the incidence of respiratory infections might be reduced by 18%, but the RR shows that an 8% reduction is the valid estimate. The OR therefore exaggerates the effect of vitamin D more than twofold.
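The divergence between OR and RR at high event rates is easy to reproduce; the sketch below uses hypothetical counts (not the trial data) with a 40% control-arm event rate:

```python
# 2x2 table with a common outcome (hypothetical counts, roughly 40% of
# controls having an event, as in many of the trials in Figure 2).
events_treat, n_treat = 360, 1000   # 36% event rate with vitamin D
events_ctrl,  n_ctrl  = 400, 1000   # 40% event rate with placebo

risk_treat = events_treat / n_treat
risk_ctrl = events_ctrl / n_ctrl
rr = risk_treat / risk_ctrl                       # 0.90 -> a 10% relative reduction

odds_treat = events_treat / (n_treat - events_treat)
odds_ctrl = events_ctrl / (n_ctrl - events_ctrl)
odds_ratio = odds_treat / odds_ctrl               # ~0.84 -> looks like a 16% reduction

print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}")
```

With a rare outcome the two measures would nearly coincide; with event rates this high the OR overstates the apparent benefit, which is the point made above.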
Further statistical problems are described at the BMJ pages: http://www.bmj.com/content/356/bmj.i6583/rr-3 and http://www.bmj.com/content/356/bmj.i6583/rr-8 and in some other BMJ rapid responses.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Mar 05, Andrea Messori commented:
Is baricitinib more effective than adalimumab?
by Andrea Messori (HTA unit, ESTAR, Firenze, Italy) and Daniele Mengato (Dept. of Pharmaceutical Sciences, University of Padova, Padova, Italy)
To increase the amount of clinical evidence supporting biosimilars, one report[1] has recently proposed to carry out a network meta-analysis (NETMA) that includes not only the equivalence study comparing the biosimilar with the originator, but also all randomized studies (RCTs) comparing the originator with the previous non-biologic standard of care; most of these RCTs were conducted over the interval between the registration of the originator and the registration of the biosimilar. This approach, originally aimed at biosimilars, can also be employed to better evaluate a newly developed biologic rather than a newly registered biosimilar. In the case of a newly registered biosimilar, the objective is to establish if the equivalence between the biosimilar and the originator, already demonstrated in the registrative RCT, is also confirmed by the NETMA. In the case of a newly developed biologic, the objective is to establish if the superiority of the new biologic over the old one, already demonstrated in the pivotal RCT, is also confirmed by the NETMA.
In patients with rheumatoid arthritis, baricitinib (recently developed) has been shown to be more effective than adalimumab (end-point = ACR20; odds ratio = 1.44, 95% confidence interval: 1.06 to 1.95)[2]. We have reassessed this comparison using an "enhanced evidence" NETMA (Bayesian approach, random-effects model, 60,000 iterations) in which 7 RCTs were included (Table 1). Our results (odds ratio = 1.44; 95% credible interval: 0.50 to 3.83) did not confirm the superiority of baricitinib over adalimumab.
References
[1] Messori A, Trippoli S, Marinai C. Network meta-analysis as a tool for improving the effectiveness assessment of biosimilars based on both direct and indirect evidence: application to infliximab in rheumatoid arthritis. Eur J Clin Pharmacol. 2016 Dec 14. [Epub ahead of print] PubMed PMID: 27966035.
[2] Taylor PC, Keystone EC, van der Heijde D, et al. Baricitinib versus Placebo or Adalimumab in Rheumatoid Arthritis. N Engl J Med. 2017;376:652-662.
[3] Hazlewood GS, Barnabe C, Tomlinson G, Marshall D, Devoe D, Bombardier C. Methotrexate monotherapy and methotrexate combination therapy with traditional and biologic disease modifying antirheumatic drugs for rheumatoid arthritis: abridged Cochrane systematic review and network meta-analysis. BMJ. 2016;353:i1777.
Table 1. Achievement of ACR20 in 7 randomized trials: the 6 trials comparing adalimumab+methotrexate vs methotrexate alone have been reported by Hazlewood et al.[3] while the 3-arm trial by Taylor et al.[2] has recently been published in the NEJM.
ACR20 responders/total at 24 or 26 weeks (MTX = methotrexate)

STUDY                 BARICITINIB+MTX   ADALIMUMAB+MTX   MTX MONOTHERAPY   DURATION
Kim HY et al.         -                 40/65            23/63             24 weeks
ARMADA trial          -                 45/67            9/62              24 weeks
HOPEFUL 1 study       -                 129/171          92/163            26 weeks
Keystone EC et al.    -                 131/207          59/200            24 weeks
Weinblatt ME et al.   -                 39/59            23/61             24 weeks
OPTIMA trial          -                 207/466          112/460           26 weeks
Taylor et al.         360/487           219/330          179/488           26 weeks
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 28, SANGEETA KASHYAP commented:
I appreciate the comment by Dr. Weiss, as medical therapy for diabetes is constantly evolving and improving. However, patients enrolled in this trial were poorly controlled despite using 3 or more glucose-lowering agents at baseline, including over half requiring basal-bolus insulin. This, coupled with the fact that two thirds had class 2 or greater severity of obesity, made them somewhat refractory to IMT. It is unlikely that patients like these would ever be able to maintain therapeutic targets of tight glycemic control for five years. Those that do obviously should not consider bariatric surgery. Being in a rigorous clinical trial such as this, all subjects had benefits of care that many real-world patients do not, and such patients go on to develop complications of the disease. Although the medical algorithm developed for this trial incorporated elements from ACCORD, titration of medical therapy was in some ways patient-driven, in that weight gain and hypoglycemia limit adherence to therapy. In medically refractory patients like these, surgery was more effective in treating hyperglycemia for five years, with fewer medications overall.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 03, JOHN KIRWAN commented:
The success of the medical-therapy-alone arm in the STAMPEDE trial is clearly evidenced by the data, i.e., a 1.7% reduction in HbA1c at 1-year in this patient group. When one considers that RCTs evaluating the effectiveness of diabetes medications alone report HbA1c reductions of <1.0%, the outcome for the combined drug/lifestyle/education approach in STAMPEDE is consistent and it could be argued, is superior to most pharmacotherapy interventions currently reported in the extant-literature. If one looks at this from a slightly different perspective and compares the medical therapy arm of STAMPEDE to LookAHEAD, another intensive intervention (exercise/diet/education and pharmacotherapy) for obese patients (average BMI 36 kg/m2) with type 2 diabetes where the reduction in HbA1c was less than 1% at 1 year, then like the previous example, it is clear that the medical-therapy-alone arm in STAMPEDE was highly effective.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 02, Deepak Bhatt commented:
Having been involved with designing several trials, I would state that the control arm of STAMPEDE did indeed provide optimal medical therapy which exceeded what is generally obtained in real world practice. Randomized trials of surgical procedures are relatively uncommon, and STAMPEDE has helped greatly advance knowledge of the benefits of metabolic surgery. Adherence to polypharmacy is understandably challenging for many patients, and surgery gets around this barrier quite effectively. Furthermore, this landmark trial should not be viewed as surgery versus medical therapy, but rather surgery versus no surgery on a background of excellent medical therapy.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Apr 01, PHILIP SCHAUER commented:
I beg to differ with Dr. Weiss in that the control group of our study was provided intensive, multi-agent medical therapy as tolerated, with an intent to reach an HbA1c < 6%, as per ACCORD. Furthermore, medication choice, intensification, dose and frequency were managed by a highly qualified, experienced team of expert endocrinologists at an academic center. A favorable decrease in HbA1c of 1.7% from baseline (> 9% HbA1c) was achieved initially in the control group, which was already heavily medicated at baseline (average of 3+ diabetes drugs). Thus, many would agree that our approach was "intensive". This initial improvement, however, was not sustained, possibly due to inherent limitations of medical therapy related to adherence, side effects, and cost. Surgery is much less adherence-dependent, which likely accounts for some of the sustained benefit of surgery. Many will disagree with Dr. Weiss that ACCORD defines “true intensive medical therapy”, since that regimen actually increased mortality compared to standard therapy, likely due to drug-related effects (e.g. hypoglycemia). On the contrary, more than 10 comparative, non-randomized studies show a long-term mortality reduction with surgery compared to medical therapy alone (1). New, widely endorsed guidelines by the American Diabetes Association and others now support the role of surgery for treating diabetes in patients with obesity, especially for patients who are not well controlled on medical therapy (2).
1) Schauer et al. Diabetes Care 2016 Jun;39(6):902-11.
2) Rubino et al. Diabetes Care. 2016 Jun;39(6):861-77.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Mar 15, Daniel Weiss commented:
The benefit of weight loss on glycemic control for those with type 2 diabetes has been recognized for decades. The five-year outcomes of STAMPEDE are not surprising. However, there was a major flaw in the design of this trial: despite its title, the control group was not provided “intensive medical therapy”.
The primary outcome compared “intensive medical therapy” alone with bariatric surgery plus medical therapy in achieving a glycated hemoglobin of 6% or less. The medical-therapy group was seen every 3 months and had a minimal increase in medications (a mean of 2.8 medications at baseline and 3 at one year). Moreover, at the one-year and five-year time points, substantially fewer patients were on insulin than at baseline. At one year, 41 percent were on a glucagon-like peptide-1 receptor agonist.
Minimally intensified medical therapy would obviously bias results toward surgery. True intensive medical therapy, as in the landmark ACCORD trial (Action to Control Cardiovascular Risk in Diabetes), involved visits every 2-4 weeks with actual medication intensification.
Reference: The Action to Control Cardiovascular Risk in Diabetes Study Group. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med. 2008;358:2545-59.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Apr 10, Paul Sullins commented:
Reported findings of "no differences" by parent type in this study are an artifact of a well-known sampling error, which conflates same-sex couples with a larger group of miscoded different-sex couples. Large disparities between the reported sample and the same-sex couple population data reported by Statistics Netherlands strongly confirm this conclusion. The remainder of this comment presents a detailed analysis supporting these claims. A longer critique, with standard citations and a table, is available at http://ssrn.com/author=2097328.
The authors report that same-sex couples were identified using “information about the gender of the participating parent and the gender of the participant’s partner” (p. 5). However, validation studies of the use of this procedure on other large representative datasets, including the 2000 U.S. Census, the U.S. National Health Interview Survey (NHIS), and the National Longitudinal Study of Adolescent to Adult Health (“Add Health”), have found that most "same-sex couples" identified in this way are actually misclassified different-sex couples.
The problem stems from the fact that, like all survey items, the indication of one’s own sex or the sex of one’s partner is subject to a certain amount of random error. Respondents may inadvertently mark the wrong box or press the wrong key on the keyboard, thus indicating by mistake that their partner is the same sex as themselves. Black et al., who examined this problem in the 2000 U.S. Census, explain that “even a minor amount of measurement error, when applied to a large group, can create a major problem for drawing inferences about a small group in the population. Consider, for example, a population in which 1 out of 100 people are HIV-positive. If epidemiologists rely on a test that has a 0.01 error rate (for both false positives and false negatives), approximately half of the group that is identified as HIV-positive will in fact be misclassified” [The Measurement of Same-Sex Unmarried Partner Couples in the 2000 US Census]. Since same-sex couples comprise less than one percent of all couples in the population of Dutch parent couples studied by Bos et al., even a small random error in sex designation can result in a large inaccuracy in specifying the members of this tiny subpopulation.
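As an illustrative sketch of the arithmetic in the quoted passage (the 1-in-100 prevalence and the 0.01 error rate are the figures from the quote itself, not data from Bos et al.), the roughly 50% misclassification rate can be reproduced as follows:

```python
# Base-rate illustration of Black et al.'s point, using the figures quoted above.
prevalence = 0.01   # share of the population that is truly in the small group (1 in 100)
error_rate = 0.01   # probability of a false positive or a false negative

true_positives = prevalence * (1 - error_rate)    # small-group members correctly identified
false_positives = (1 - prevalence) * error_rate   # large-group members misclassified into the small group

share_misclassified = false_positives / (true_positives + false_positives)
print(f"Share of identified 'positives' that are misclassified: {share_misclassified:.0%}")  # ~50%
```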
A follow-up consistency check can effectively correct the problem; without one, however, the contamination can be quite severe. When the NHIS inadvertently skipped such a consistency check for 3.5 years, the CDC estimated that from 66% to 84% of initially identified same-sex married couples were actually misclassified different-sex married couples [Division of Health Interview Statistics, National Center for Health Statistics. 2015. Changes to Data Editing Procedures and the Impact on Identifying Same-Sex Married Couples: 2004-2007 National Health Interview Survey]. Likewise, Black et al. reported that in the affected portion of the 2000 Census “only 26.6 percent of same-sex female couples and 22.2 percent of same-sex male couples are correctly coded” [Black et al., p. 10]. The present author found, in an Add Health study that ignored a secondary sex verification, that 61% of the cases identified as “same-sex parents” actually consisted of different-sex parent partners [The Unexpected Harm of Same-sex Marriage: A Critical Appraisal, Replication and Re-analysis of Wainright and Patterson’s Studies of Adolescents with Same-sex Parents. British Journal of Education, Society & Behavioural Science, 11(2)].
The 2011 Statistics Netherlands data used by Bos et al. are based on computer-assisted personal interviews (CAPI), in which the respondent uses a computer keyboard to indicate his or her responses to interview questions that are presented by phone, website or in person. Sex of respondent and partner is indicated by the respondent entering "1" or "2" on the keyboard, a procedure in which a small rate of error (hitting the wrong key) would be quite normal. The Statistics Netherlands interview lacks any additional verification of sex designation, making sample contamination very probable. [Centraal Bureau voor de Statistiek Divisie Sociale en Ruimtelijke Statistieken Sector Dataverzameling. (2010). Jeugd en Opgroeien (SCP) 2010 Vraagteksten en schema’s CAPI/CATI. The Hague].
Several key features of the reported control sample strongly confirm that sample contamination has occurred. First, in the Netherlands in 2011, the only way for a same-sex co-parent to have parental rights was to register an adoption, so we would expect one of the partners, for most same-sex couples, to be reported as an adoptive parent [Latten, J., & Mulder, C. H. (2012). Partner relationships at the dawn of the 21st century: The case of the Netherlands. In European Population Conference pp. 1–19]. But in Bos et al.'s sample, none of the same-sex parents are adoptive parents, and both parents indicate that the child is his/her "own child" (eigen kind). This is highly unlikely for same-sex couples, but is what we would expect to see if a large proportion of the "same-sex" couples were really erroneously coded opposite-sex couples.

Second, the ratio of male to female same-sex couples in the Bos et al. sample is implausibly high. In every national and social setting studied to date, far fewer male same-sex couples raise children than do female ones. Statistics Netherlands reports that in 2011 the disparity in the Netherlands was about seven to one: of the (approximately) 30,000 male and 25,000 female same-sex couples counted in that year, “[o]nly 3% (nearly 800) of the men's pairs had one or more children, compared to 20% (almost 5000) of the female couples.” [de Graaf, A. (2011). Gezinnen in cijfers, in Gezinsrapport 2011: Een portret van het gezinsleven in Nederland. The Hague: The Netherlands Institute for Social Research.] Yet Bos et al. report, implausibly, that they found about equal numbers of lesbian and gay male couples with children, in fact more male couples (68) than female couples (63) with children over age 5. They also report that 52% of Dutch same-sex parenting couples in 2011 were male, but Statistics Netherlands reports only 14%. The Bos sample is in error exactly to the degree that we would expect if these were (mostly) different-sex couples that were inaccurately classified as being same-sex due to random errors in partner sex designation.
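For clarity, the 14% versus 52% comparison follows directly from the counts quoted above; a minimal sketch of the arithmetic (using the approximate Statistics Netherlands counts and the Bos et al. sample counts as given in this comment) is:

```python
# Male share of same-sex parenting couples, computed from the figures quoted above.
cbs_male = 800      # male same-sex couples with children, Statistics Netherlands 2011 (approximate)
cbs_female = 5000   # female same-sex couples with children, Statistics Netherlands 2011 (approximate)
bos_male = 68       # male same-sex couples with children over age 5, Bos et al. sample
bos_female = 63     # female same-sex couples with children over age 5, Bos et al. sample

print(f"Statistics Netherlands male share: {cbs_male / (cbs_male + cbs_female):.0%}")  # ~14%
print(f"Bos et al. sample male share:      {bos_male / (bos_male + bos_female):.0%}")  # ~52%
```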
Third, according to figures provided by Eurostat and Statistics Netherlands [Eurostat. (2015). People in the EU: who are we and how do we live? - 2015 Edition. Luxembourg: Publications Office of the European Union.] [Nordholt, E. S. (2014). Dutch Census 2011: Analysis and Methodology. The Hague: Statistics Netherlands.] (www.cbs.nl/informat), same-sex parents comprised an estimated 0.28 percent of all Dutch parenting couples in 2011, but in the Bos sample the prevalence is more than three times this amount, at 0.81 percent. From this disparity, it can be estimated roughly that about 65% of the Bos control sample consisted of misclassified different-sex parents. This rate of sample contamination is very similar to that estimated for the three datasets discussed above (61% for Add Health, 66% or higher for the NHIS, and about 75% for the 2000 U.S. Census).

The journal Family Process has advised that it is not interested in addressing errors of this type in its published studies. I therefore invite the authors to provide further population evidence in this forum, if possible, showing why their findings should be considered credible and not spurious.
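The rough 65% figure likewise follows from the two prevalence estimates quoted above; one plausible way to obtain it, assuming the entire excess over the population prevalence is attributed to miscoded different-sex couples, is sketched below:

```python
# Rough contamination estimate from the prevalence disparity quoted above.
expected_prevalence = 0.0028  # same-sex couples among Dutch parenting couples, Eurostat/Statistics Netherlands 2011
observed_prevalence = 0.0081  # prevalence of "same-sex" couples in the Bos et al. sample

# Attribute the excess over the population prevalence to miscoded different-sex couples.
estimated_contamination = (observed_prevalence - expected_prevalence) / observed_prevalence
print(f"Estimated share of misclassified couples: {estimated_contamination:.0%}")  # ~65%
```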
Paul Sullins, Ph.D. Catholic University of America sullins@cua.edu
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jul 26, Martine Crasnier-Mednansky commented:
I do appreciate your answer to my comment, to which I gladly reply. First, there is prior work by Ghosh S, 2011 indicating that colonization was attenuated in mutant strains incapable of utilizing GlcNAc, including a nagE mutant strain. Second, Mondal M, 2014 analyzed the products of the ChiA2 reaction and found that GlcNAc was the most abundant product. In fact, the amount of (GlcNAc)2 was found to be very low compared with GlcNAc and (GlcNAc)3. Therefore, it is fully legitimate to conclude that the PTS substrate GlcNAc is utilized in the host by V. cholerae for growth and survival.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 26, Ankur Dalia commented:
Re: Martine Crasnier-Mednansky
I appreciate your evaluation of the manuscript; however, I have to disagree with your comment. The study by Mondal et al. indicates that ChiA2 can liberate GlcNAc from mucin in vitro and that it is critical for bacterial growth in vivo; however, they did not test the role of GlcNAc uptake and/or catabolism in that study. In our manuscript, by contrast, we demonstrate that loss of all PTS transporters (including the GlcNAc transporter) does not result in attenuation in the same infant mouse model, which is a more formal test of the role of GlcNAc transport during infection. It is formally possible that other carbohydrate moieties required for growth of V. cholerae in vivo are liberated via the action of ChiA2; however, our results would indicate that these are not translocated by the PTS. Alternatively, the reduced virulence of the ChiA2 mutant observed in the Mondal et al. study may indicate that ChiA2 has other effects in vivo (e.g., on immune evasion, resistance to antimicrobial peptides, etc.).
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2017 Jul 24, Martine Crasnier-Mednansky commented:
The authors’ proposal that 'the PTS has a limited role during infection' and their concluding remark that 'PTS carbohydrates are not available and/or not utilized in the host' are both questionable. Mondal M, 2014 established that, when Vibrio cholerae colonizes the intestinal mucus, the PTS substrate GlcNAc (released upon mucin hydrolysis) is utilized for growth and survival in the host intestine.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Feb 21, Simon Young commented:
This paper reports a concentration of tryptamine in human cerebrospinal fluid (CSF) of 60 nmol/L. A concentration that high seems unlikely. The concentration in human CSF of the related compound 5-hydroxytryptamine (serotonin) is very much lower. Although levels of serotonin in human CSF reported in the literature vary over several orders of magnitude, most of the results reported are probably false due to lack of rigorous methodology and analytical inaccuracy Young SN, 2010. In a study with rigorous methodology, measurements were performed in two different laboratories using different HPLC columns and eluting buffers Anderson GM, 2002. One lab used an electrochemical detector (detection limit 7 – 8 pg/ml for serotonin) and the other a fluorometric detector (detection limit 7 – 15 pg/ml). In both labs, N-methylserotonin was used as an internal standard, and a sample was injected directly into the HPLC after removal of proteins. Neither system could detect serotonin in any CSF sample. The conclusion was that the real value was less than 10 pg/ml (0.057 nmol/L, about three orders of magnitude less than the level reported for tryptamine). Anderson et al. (Anderson GM, 2002) suggest that the higher values for serotonin reported in the literature can be attributed to a failure to carry out the rigorous validation steps needed to ensure that a peak in HPLC is in fact the analyte of interest and not another compound with a similar retention time and similar fluorescent or electrochemical properties.
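For readers checking the unit conversion, the 10 pg/ml ceiling translates to about 0.057 nmol/L using serotonin's molecular weight of roughly 176 g/mol (a standard value, assumed here rather than taken from the cited papers):

```python
# Converting the 10 pg/ml serotonin detection ceiling quoted above into nmol/L.
mw_serotonin = 176.2        # g/mol, molecular weight of serotonin (5-hydroxytryptamine); assumed standard value
ceiling_pg_per_ml = 10.0    # upper bound from Anderson GM, 2002

grams_per_litre = ceiling_pg_per_ml * 1e-12 / 1e-3          # 10 pg/ml = 1e-8 g/L
ceiling_nmol_per_l = grams_per_litre / mw_serotonin * 1e9   # g/L -> mol/L -> nmol/L
print(f"{ceiling_nmol_per_l:.3f} nmol/L")  # ~0.057 nmol/L, roughly 1000-fold below the 60 nmol/L reported for tryptamine
```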
The concentration of tryptamine in rat brain is very much lower than the concentration of serotonin Juorio AV, 1985, and levels of the tryptamine metabolite, indoleacetic acid, in human CSF are lower than the levels of the serotonin metabolite, 5-hydroxyindoleacetic acid Young SN, 1980. Thus, the finding that the concentration of tryptamine in human CSF is about a thousand times greater than the concentration of serotonin does not seem plausible. There are three possible explanations for this finding. First, there may be some unknown biochemical or physiological factor that explains the finding. Second, the result may be due to the use of CSF obtained postmortem instead of from a live human. Levels of some neuroactive compounds change rapidly after death. For example, levels of acetylcholine decrease rapidly after death due to the continued action of acetylcholinesterase, the enzyme that breaks down acetylcholine Schmidt DE, 1972. Serotonin can be measured in postmortem samples because the rate-limiting enzyme in the conversion of tryptophan to serotonin, tryptophan hydroxylase, and the main enzyme metabolizing serotonin, monoamine oxidase, both require oxygen. The brain becomes anoxic quickly after death, thereby preventing synthesis or catabolism of serotonin. Tryptamine is synthesized by the action of aromatic amino acid decarboxylase, which does not require oxygen, but is metabolized by monoamine oxidase, which does require oxygen. Autopsies usually occur many hours after death, and therefore the high levels of tryptamine reported in this study may reflect continued synthesis, and the absence of catabolism, of tryptamine after death. Third, there may be problems with the HPLC and fluorometric detection of tryptamine in this paper, in the same way that many papers have reported inaccurate measurements of serotonin in human CSF, as outlined above. The method reported in this paper would have greater credibility if the same results were obtained with two different methods, as for serotonin Anderson GM, 2002.
In conclusion, more work needs to be done to establish a reliable method for measuring tryptamine in CSF obtained from living humans. Levels in human CSF obtained postmortem may have no physiological relevance.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Nov 07, Victoria MacBean commented:
Plain English Summary:
Neural respiratory drive (NRD) is commonly used as a measure of respiratory function, as it reflects the overall muscular effort required to breathe in the presence of the changes that occur in lung disease. Both bronchoconstriction (airway narrowing) and hyperinflation (over-inflation of the chest, caused by air trapped in deep parts of the lung) occur in lung disease and are known to have detrimental effects on breathing muscle activity. Electromyography (EMG) measures the electrical activity supplied to a muscle and can be used to measure the NRD leaving the brain towards the respiratory muscles (in this study the parasternal intercostals, small muscles at the front of the chest). This study aimed to investigate the individual contributions of bronchoconstriction and hyperinflation to EMG activity, and the overall effectiveness of EMG as an accurate marker of lung function.
A group of 32 young adults took part in this study, all of whom had lung function within normal limits at rest, prior to testing. The subjects inhaled increasing concentrations of the chemical methacholine to stimulate the contraction of airway muscles, imitating a mild asthma attack. Subjects’ EMG, spirometry (to measure airway narrowing) and inspiratory capacity (IC, to test for hyperinflation) were measured. Detailed statistical testing was used to assess the relationships between all the measures.
The results show that obstruction of the airway was closely related to the increase in EMG, whereas inspiratory capacity was not. The data suggest that over-inflation of the chest had less of an effect on the EMG than airway narrowing (bronchoconstriction). This helps advance the understanding of how EMG can be used to assess lung disease.
This summary was produced by Talia Benjamin, Year 13 student from JFS School, Harrow, London as part of the authors' departmental educational outreach programme.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2018 Jan 11, Prashant Sharma, MD, DM commented:
A full-text, read-only version of this article is available at http://rdcu.be/nxtG.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
-
europepmc.org europepmc.org
-
On 2017 Jun 19, Martine Crasnier-Mednansky commented:
Novick A, 1957 reaffirmed that a fully induced culture could be maintained fully induced at low inducer concentrations. In this paper, the authors reported that cells preinduced with melibiose do not maintain induction of the melibiose (mel) operon in the presence of 1 mM TMG. However, the experimental conditions and the data interpretation are both questionable in view of the following.
The authors used a lacY strain whose percentage of induction by 1 mM TMG is less than 0.2%, 100% being for melibiose as the inducer (calculated from data in Tables 1 and 3). They transferred the cells from minimal medium with melibiose to minimal medium with glycerol supplemented with 1 mM TMG. The cells therefore have to 'enzymatically adapt' to glycerol while facing pyrimidine starvation (Jensen KF, 1993, Soupene E, 2003). Under these conditions, cells are unlikely to maintain induction of the mel operon (even if they could, see below) because uninduced cells have a significant growth advantage over induced cells. Incidentally, Novick A, 1957 noted, "the fact that a maximally induced culture can be maintained maximally induced for many generations [by using a maintenance concentration of inducer] shows that the chance of a bacterium becoming uninduced under these conditions is very small. Were any uninduced organisms to appear, they would be selected for by their more rapid growth". Furthermore, the percentage of induction by TMG for the mel operon in a wild-type strain (lacY<sup>+</sup>) is 16% (calculated as above). This induction is due mostly to TMG transport by LacY, considering the sharp decrease in the percentage of induction in a lacY strain (to <0.2%). Consequently, in the presence of TMG, any uninduced lacY cells remain uninduced. Thus, it appears that a population of uninduced cells is likely to 'take over' rapidly under the present experimental conditions.
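For clarity, the 'percentage of induction' figures quoted here (<0.2% and 16%) appear to normalize mel-operon expression under TMG to that under melibiose; a hedged reconstruction of the calculation, assuming the quantity compared is the measured mel-operon enzyme activity (this is an assumption, since the paper's tables are not reproduced in this comment), is:

\[
\%\,\text{induction} \;=\; 100 \times \frac{E_{\mathrm{mel}}(\text{1 mM TMG})}{E_{\mathrm{mel}}(\text{melibiose})}
\]

where \(E_{\mathrm{mel}}\) denotes the mel-operon enzyme activity measured under each inducer condition.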
In the presence of LacY, the internal TMG concentration is about 100 times the medium concentration, and under these conditions induction of the mel operon by TMG is only 16%. Therefore, the cells could not possibly maintain their full level of induction, simply because TMG is a relatively poor inducer of the mel operon. It seems the rationale behind this experiment does not make much sense.
Note: The maintenance concentration of inducer is the concentration of inducer that, when added to the medium of fully induced cells, allows maintenance of the enzyme level for at least 25 generations (Figure 3 in Novick A, 1957). It is not the intracellular level of inducer, as used in this paper.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-