2 Matching Annotations
  1. Jul 2018
    1. On 2017 Mar 03, Ole Jakob Storebø commented:

      In their editorial, Gerlach and colleagues make several critical remarks (Gerlach M, 2017) regarding our Cochrane systematic review on methylphenidate for children and adolescents with attention-deficit hyperactivity disorder (ADHD) (Storebø OJ, 2015). We thank them for drawing attention to our review, and here we explain our findings and standpoints.

      They argue, on behalf of the World Federation of ADHD and EUNETHYDIS, that the findings of our Cochrane systematic review contrast with previously published systematic reviews and meta-analyses (National Collaborating Centre for Mental Health (UK), 2009; Faraone SV, 2010; King S, 2006; Van der Oord S, 2008), all of which judged the included trials more favourably than we did.

      There are methodological flaws in most of these reviews that could have led to inaccurate estimates of effect. For example, most of them did not publish an a priori protocol (Faraone SV, 2010; King S, 2006; Van der Oord S, 2008), did not present data on spontaneous adverse events (Faraone SV, 2010; King S, 2006; Van der Oord S, 2008), did not report adverse events as measured by rating scales (Faraone SV, 2010; King S, 2006; Van der Oord S, 2008), and did not systematically assess the risk of random errors, risk of bias, and trial quality (Faraone SV, 2010; King S, 2006; Van der Oord S, 2008). In the quality assessments for the NICE review, King et al. emphasised that almost all studies scored poorly and that, consequently, the results should be interpreted with caution (King S, 2006).

      The authors of this editorial refer to many published critical editorials and argue that the issues they have raised have not been adequately addressed by us. On closer examination, it is clear that virtually the same criticism has been levelled at us each time by the same group of authors, published in several journal articles, blogs, letters, and comments (Banaschewski T, 2016, BMJ comment; Banaschewski T, 2016; Hoekstra PJ, 2016; Hoekstra PJ, 2016; Romanos M, 2016; Mental Elf blog).

      Each time, we have responded with clear counter-arguments, recalculations of data, and detailed explanations (Storebø OJ, 2016; Storebø OJ, 2016; Storebø OJ, 2016; PubMed comment; Storebø OJ, 2016; BMJ comments; responses on Mental Elf; PubMed comment).

      Our main point is that the very low quality of the evidence makes it impossible to estimate, with any certainty, what the true magnitude of the effect might be.

      It is correct that a post-hoc exclusion of the four trials with co-interventions in both the methylphenidate and control groups, and of the one trial in preschool children, changes the standardised mean difference (SMD) from 0.77 to 0.89. However, even if the effect size increases upon excluding these trials, the overall risk of bias and the quality of the evidence render this discussion irrelevant. As mentioned above, we have responded several times to this group of authors (Storebø OJ, 2016; Storebø OJ, 2016; PMID: 27138912; PubMed comment; Storebø OJ, 2016; BMJ comments; responses on Mental Elf; PubMed comment).
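      For readers who wish to see mechanically how excluding a subset of trials can shift a pooled standardised mean difference, the sketch below pools SMDs under a DerSimonian-Laird random-effects model and repeats the pooling after removing a flagged subset. The per-trial effect sizes, variances, and exclusion flags are invented purely for illustration and are not the review's data; the helper function pool_random_effects is likewise ours, not part of any analysis software used in the review.

      ```python
      # Minimal sketch: DerSimonian-Laird random-effects pooling of standardised mean
      # differences (SMDs), rerun after a post-hoc exclusion of flagged trials.
      # All numbers below are hypothetical and illustrative only.
      import numpy as np

      def pool_random_effects(smd, var):
          """Return the pooled SMD under a DerSimonian-Laird random-effects model."""
          smd, var = np.asarray(smd, float), np.asarray(var, float)
          w = 1.0 / var                                  # inverse-variance (fixed-effect) weights
          fixed = np.sum(w * smd) / np.sum(w)
          q = np.sum(w * (smd - fixed) ** 2)             # Cochran's Q
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - (len(smd) - 1)) / c)      # between-trial variance estimate
          w_star = 1.0 / (var + tau2)                    # random-effects weights
          return np.sum(w_star * smd) / np.sum(w_star)

      # Hypothetical per-trial SMDs, variances, and exclusion flags (illustrative only).
      smd = [0.9, 1.1, 0.8, 0.4, 0.3, 0.5, 1.0]
      var = [0.05, 0.08, 0.06, 0.04, 0.05, 0.07, 0.06]
      excluded = [False, False, False, True, True, True, False]

      all_trials = pool_random_effects(smd, var)
      sensitivity = pool_random_effects(
          [s for s, flag in zip(smd, excluded) if not flag],
          [v for v, flag in zip(var, excluded) if not flag],
      )
      print(f"pooled SMD, all trials:      {all_trials:.2f}")
      print(f"pooled SMD, after exclusion: {sensitivity:.2f}")
      ```

      Whichever pooled number such a sensitivity analysis produces, our argument stands: when the underlying evidence is of very low quality, neither estimate can be taken as the true magnitude of effect.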

      We did not exclude any trials for using a cross-over design; these were included in a separate analysis. The use of end-of-period data from cross-over trials is problematic because of the risk of a “carry-over effect” (Cox DJ, 2008) and of “unit of analysis errors” (http://www.cochrane-handbook.org). In addition, we tested for the risk of a carry-over effect by comparing trials with first-period data to trials with end-of-period data in a subgroup analysis (a rough numerical sketch of such a comparison is given below). This showed no significant subgroup difference, but the analysis was based on sparse data, so the risk cannot be ruled out. Even though there was no statistical difference in our subgroup analysis comparing parallel-group trials with end-of-period data from cross-over trials, heterogeneity was high. This means that the risk of unit of analysis errors and carry-over effects remains uncertain and could be real.

      The points about our bias assessment have been raised earlier by these authors and others affiliated with EUNETHYDIS; we see nothing new here. There is considerable evidence that trials sponsored by industry overestimate benefits and underestimate harms (Flacco ME, 2015; Lathyris DN, 2010; Kelly RE Jr, 2006). Moreover, the AMSTAR tool for the methodological quality assessment of systematic reviews includes funding and conflicts of interest as a domain (http://amstar.ca/), and the Cochrane Bias Methods Group (BMG) is currently working on including vested interests in the upcoming version of the Cochrane Risk of Bias tool.
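      As a rough illustration of the subgroup comparison referred to above, the sketch below computes a chi-squared test for differences between two pooled subgroup estimates from their standard errors. The pooled SMDs and standard errors are hypothetical, chosen only to show the arithmetic, and the helper subgroup_difference_test is ours; none of these numbers come from the review.

      ```python
      # Rough sketch of a test for subgroup differences between two subgroups of trials
      # (e.g. first-period data versus end-of-period data from cross-over trials).
      # The pooled estimates and standard errors below are hypothetical.
      import numpy as np
      from scipy.stats import chi2

      def subgroup_difference_test(estimates, std_errors):
          """Chi-squared test comparing pooled subgroup estimates (df = groups - 1)."""
          est = np.asarray(estimates, float)
          se = np.asarray(std_errors, float)
          w = 1.0 / se ** 2                      # inverse-variance weight of each subgroup
          overall = np.sum(w * est) / np.sum(w)  # weighted average across subgroups
          q_between = np.sum(w * (est - overall) ** 2)
          p_value = chi2.sf(q_between, df=len(est) - 1)
          return q_between, p_value

      # Hypothetical pooled SMDs and standard errors for the two subgroups.
      q, p = subgroup_difference_test(estimates=[0.76, 0.84], std_errors=[0.09, 0.11])
      print(f"Q_between = {q:.2f}, p = {p:.2f}")
      ```

      With sparse data such a test has little power, which is precisely the caveat stated above: a non-significant subgroup difference does not rule out a carry-over effect or a unit of analysis error.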

      The question of whether teachers can detect well-known adverse events of methylphenidate has also been raised earlier by these authors and others affiliated with EUNETHYDIS (Banaschewski T, 2016, BMJ comment; Banaschewski T, 2016; Hoekstra PJ, 2016; Hoekstra PJ, 2016; Romanos M, 2016; Mental Elf blog). We maintain that teachers can detect the well-known adverse events of methylphenidate, such as loss of appetite and disturbed sleep. We highlighted this in our review (Storebø OJ, 2015) and have answered this point in several replies to these authors (Storebø OJ, 2016; Storebø OJ, 2016; Storebø OJ, 2016; PubMed comment; Storebø OJ, 2016; BMJ comments; responses on Mental Elf; PubMed comment). Loss of appetite and disturbed sleep are readily observable by teachers as uneaten food left on lunch plates, yawning, general tiredness, and weight loss.

      We have taken the persistent, repeated criticism by these authors seriously, but no evidence was provided to justify changing our conclusions regarding the very low quality of evidence in the methylphenidate trials, which makes the true estimate of the methylphenidate effect unknowable. This is a methodological rather than a clinical or philosophical issue.

      We had no preconceptions about the findings of this review and followed the published protocol; therefore, any manipulations of the data proposed by this group of authors would contradict the accepted methods of high-quality meta-analyses. As we have responded clearly and repeatedly to the criticism of these authors, and it is unlikely that their view of our (transparent) work is going to change, we propose to agree to disagree.

      Finally, we do not agree that the recent analysis from registries provides convincing evidence on the long-term benefits of methylphenidate, owing to the multiple limitations of this type of study. Although it offers interesting perspectives, these require further study before they can be regarded as reliable.

      Ole Jakob Storebø, Morris Zwi, Helle B. Krogh, Erik Simonsen, Carlos Renato Maia, Christian Gluud


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
