1 Matching Annotation
  1. Jul 2018
    1. On 2016 Dec 08, Ole Jakob Storebø commented:

      Spin and double spin – the Letter to the Editor by Romanos et al. (2016) is indeed spinning.

      Response to “Check and Double Check – the Cochrane review by Storebø et al. is indeed flawed”

      (This letter was rejected by the editors of Zeitschrift für Kinder- und Jugendpsychiatrie und Psychotherapie).

      Romanos et al. 2016 (Romanos M, 2016) continue to publish disagreements with the findings of our Cochrane systematic review (Storebø OJ, 2015), none of which has any meaningful effect on our estimate of the effect size of methylphenidate for children and adolescents with ADHD. Our main point is that, because of the very low quality of all the evidence, one cannot state anything with certainty about the true magnitude of the effect.

      It is correct that a post hoc exclusion of the four trials with co-interventions in both the MPH and control groups, together with the one trial of preschool children, would change the effect size from 0.77 to 0.89. We have responded several times to this group of authors (Storebø OJ, 2015; Storebø OJ, 2016; Storebø OJ, 2016; Storebø OJ, 2016; OJ Storebø et al, 2016: doi:10.1136/eb-2016-102499) regarding these trials, which were included in keeping with our a priori published protocol (Storebø OJ, 2015).

      We agree that both clonidine and behavioral therapy might have an effect. However, these effects are balanced by their use as add-on therapies in both arms of the trials, i.e. the methylphenidate and the no-methylphenidate arms. Excluding these trials would be a post hoc decision made purely to increase the effect size and would conflict with our peer-reviewed protocol (Storebø OJ, 2015).

      There is no evidence supporting a valid cut-off score for the standardized mean difference (SMD) that clinicians could use. When reporting an SMD, one of the challenges facing researchers is to determine the significance of any differences observed and to communicate this to the clinicians who will apply the results of the systematic review in clinical practice.

      The use of a minimal clinically relevant difference (MIREDIF) is a valid way to express the minimum clinically important improvement considered worthwhile by clinicians and patients (Copay AG, 2007). Variability is also important, which is why we reported the 95% confidence intervals of the transformed mean value in our review. Even with a difference in means below the MIREDIF, a proportion of the patients will have a value above the MIREDIF; conversely, with a difference in means above the MIREDIF, a proportion of the patients will have a value below it.
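
      To make this concrete, the following is a minimal sketch, assuming roughly normally distributed individual responses; the SMD of 0.77 is the pooled estimate mentioned above, while the rating-scale SD and the MIREDIF threshold are hypothetical values chosen purely for illustration.

      ```python
      # Illustrative sketch only: the SMD of 0.77 is the pooled estimate discussed
      # above; the scale SD and MIREDIF threshold are hypothetical numbers.
      from scipy.stats import norm

      smd = 0.77        # pooled standardized mean difference
      scale_sd = 10.0   # hypothetical SD of the clinical rating scale, in points
      miredif = 8.0     # hypothetical minimal clinically relevant difference, in points

      # Back-transform the SMD to rating-scale points.
      mean_diff = smd * scale_sd
      print(f"Mean difference: {mean_diff:.1f} points (MIREDIF = {miredif} points)")

      # Assuming individual improvements are roughly normal around the mean difference
      # with SD equal to the scale SD, estimate the share of patients whose individual
      # improvement exceeds the MIREDIF even though the mean difference is below it.
      prop_above = 1 - norm.cdf(miredif, loc=mean_diff, scale=scale_sd)
      print(f"Proportion improving by more than the MIREDIF: {prop_above:.0%}")
      ```

      With these illustrative numbers the mean difference (7.7 points) falls just below the threshold, yet roughly half of the patients would still be expected to improve by more than the MIREDIF, which is the point made above.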

      The use of end-of-period data from cross-over trials is problematic because of the risk of a “carry-over effect” (Cox DJ, 2008) and of “unit of analysis errors” (http://www.cochrane-handbook.org). In addition, we tested for the risk of a carry-over effect by comparing trials with first-period data to trials with end-of-period data in a subgroup analysis. This showed no significant subgroup difference, but the analysis is based on sparse data, so the risk cannot be ruled out. Likewise, although there was no statistical difference in our subgroup analysis comparing parallel-group trials with end-of-period data from cross-over trials, heterogeneity was high, which could mean that the risks of “unit of analysis error” and “carry-over effect” were in fact real.
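
      As an illustration of how such a subgroup comparison can be carried out, here is a minimal sketch of a test for subgroup differences; every SMD and standard error in it is hypothetical and none of it is data from the review.

      ```python
      # Minimal sketch of a test for subgroup differences (first-period vs
      # end-of-period cross-over data). All SMDs and standard errors are hypothetical.
      import numpy as np
      from scipy.stats import chi2

      def pool_fixed(smds, ses):
          """Inverse-variance fixed-effect pooled estimate and its variance."""
          w = 1.0 / np.asarray(ses) ** 2
          return np.sum(w * np.asarray(smds)) / np.sum(w), 1.0 / np.sum(w)

      # Hypothetical subgroups: per-trial (SMD, SE) pooled within each subgroup.
      first_period  = pool_fixed([0.70, 0.85, 0.60], [0.20, 0.25, 0.30])
      end_of_period = pool_fixed([0.90, 0.75, 1.05], [0.15, 0.20, 0.25])

      # Cochran's Q across the two subgroup estimates (1 degree of freedom).
      ests = np.array([first_period[0], end_of_period[0]])
      variances = np.array([first_period[1], end_of_period[1]])
      overall = np.sum(ests / variances) / np.sum(1.0 / variances)
      q = np.sum((ests - overall) ** 2 / variances)
      p = chi2.sf(q, df=1)
      print(f"Subgroup estimates: {ests.round(2)}, Q = {q:.2f}, p = {p:.2f}")
      ```

      A non-significant result from a test like this does not rule out carry-over: with few trials per subgroup the comparison has little power, which is consistent with the caveat above.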

      We have continued to argue that the well-known adverse events of methylphenidate, such as loss of appetite and disturbed sleep, can be detected by teachers. We highlighted this in our review (Storebø OJ, 2015) and have answered this point in several replies to these authors (Storebø OJ, 2015; Storebø OJ, 2016; Storebø OJ, 2016; Storebø OJ, 2016; OJ Storebø et al, 2016: doi:10.1136/eb-2016-102499). It is not a matter of monitoring how much children eat in the schoolyard or assessing their sleep quality at night: the well-known adverse events “loss of appetite” and “disturbed sleep” are readily observable by teachers as uneaten food left on lunch plates, yawning, general tiredness and even weight loss.

      There is considerable evidence that trials sponsored by industry overestimate benefits and underestimate harms (Lundh A, 2012, other citations). We did receive a table for the Coghill 2013 trial from the authors. We did not, however, ask for information about funding, as Coghill 2013 clearly stated that the trial was funded by Shire Development LLC (Coghill D, 2013).

      It is true that some participants in the MTA study (10%) allocated to methylphenidate treatment were titrated to dextroamphetamine (Anonymous, 1999). We wanted to conduct a reanalysis of the data excluding the participants who did not receive methylphenidate. We contacted Dr. Swanson, who provided several helpful comments and enclosed published articles, but we did not receive additional data, in part because of the time frame of our review (Storebø OJ, 2015). A sensitivity analysis excluding the MTA study does not significantly change the effect estimate.
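
      As a sketch of what such an exclusion sensitivity analysis involves, the following re-pools the effect after leaving out one trial; the trial labels, SMDs and standard errors are hypothetical, and the DerSimonian-Laird random-effects pooling shown is a common choice rather than necessarily the review's exact model.

      ```python
      # Minimal sketch of an exclusion sensitivity analysis; all trial data below
      # are hypothetical. Only the procedure (re-pooling without one trial and
      # comparing the estimates) mirrors what is described above.
      import numpy as np

      def pool_dl(smds, ses):
          """DerSimonian-Laird random-effects pooled SMD."""
          y, se = np.asarray(smds, float), np.asarray(ses, float)
          w = 1.0 / se**2
          y_fixed = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - y_fixed) ** 2)
          c = np.sum(w) - np.sum(w**2) / np.sum(w)
          tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-trial variance
          w_star = 1.0 / (se**2 + tau2)
          return np.sum(w_star * y) / np.sum(w_star)

      # Hypothetical trials: label -> (SMD, SE).
      trials = {"Trial A": (0.65, 0.20), "Trial B": (0.80, 0.15),
                "Trial C": (0.90, 0.25), "MTA-like": (0.70, 0.10)}

      full = pool_dl([v[0] for v in trials.values()], [v[1] for v in trials.values()])
      kept = {k: v for k, v in trials.items() if k != "MTA-like"}
      reduced = pool_dl([v[0] for v in kept.values()], [v[1] for v in kept.values()])
      print(f"Pooled SMD, all trials: {full:.2f}; excluding the MTA-like trial: {reduced:.2f}")
      ```

      If the pooled estimates with and without the excluded trial are similar, the overall result is not driven by that single trial.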

      We have seriously considered the persistent, repeated criticisms by Romanos et al. published in a number of different journals; however, none of them has provided evidence that justifies changing our conclusions about the effects of MPH or about the very low quality of the evidence from the methylphenidate trials. We had no preconceptions about the findings of this review and followed the published protocol; any of the data manipulations proposed by this group of authors would therefore contradict the accepted methods of high-quality meta-analyses. As it is unlikely that any further criticism from these authors will change our conclusions, and as we feel we have responded clearly to each of their criticisms, we propose to agree to disagree.

      Ole Jakob Storebø, Psychiatric Research Unit, Psychiatric Department, Region Zealand, Denmark

      Morris Zwi, Islington CAMHS, Whittington Health, London, UK

      Carlos Renato Moreira-Maia, Federal University of Rio Grande do Sul, Porto Alegre, Brazil

      Camilla Groth, Pediatric Department, Herlev University Hospital, Herlev, Denmark

      Donna Gillies, Western Sydney Local Health District; Mental Health, Parramatta, Australia

      Erik Simonsen, Psychiatric Research Unit, Psychiatric Department, Region Zealand, Denmark

      Christian Gluud, Copenhagen Trial Unit, Centre for Clinical Intervention Research, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
