4 Matching Annotations
  1. Jul 2018
    1. On 2015 Nov 29, David Keller commented:

      Possible explanations why treated subjects rated as responders thought they received sham treatments

      This comment will focus on the actively treated subjects who were rated as "responders" to therapy, yet, when surveyed at the end of the study, answered that they thought they had received sham therapy. A responder to therapy is a subject who reports perceived benefits from therapy, which is inconsistent with the belief that he received sham therapy. A subject who believes he was treated with sham therapy must not have perceived any benefit from therapy, or he would not think it was sham. Since tinnitus is a purely subjective phenomenon, a lack of perceived benefit is inconsistent with response to therapy.

      Each of the "responders" who nevertheless believed they had received sham therapy must fall into one of the following categories:

      1) The subject perceived benefit from therapy, but did not understand that, by definition, sham therapy does not provide benefit.

      2) The subject perceived no benefit from therapy, but replied erroneously to questions in the Tinnitus Functional Index (TFI), causing it to mis-categorize him as a responder to therapy.

      3) The TFI is a faulty metric for the assessment of tinnitus, mis-categorizing subjects as "responders" to therapy even though these subjects perceived no benefit from therapy.

      Categories 1 and 2 above represent experimental errors resulting from a failure to properly instruct the trial subjects. In category 1, the subjects must be taught, and must understand, the defining distinction between active and sham therapy before being asked which they think they received. In category 2, the subjects must be instructed in how to reply properly to the questions in the TFI. With improved instruction and education of the experimental subjects, the contradictory findings noted in this trial could be reduced or eliminated in future trials.

      Category 3 represents experimental error resulting from erroneous measurement of the effects of therapy, which would require a fundamental redesign of the TFI, the metric employed to assess and report the results of this trial.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 26, David Keller commented:

      If treated subjects thought they received sham therapy, how could their tinnitus scores improve?

      The main outcome of this trial was based on improvements in the Tinnitus Functional Index (TFI). In his reply to my letter, Dr. Folmer indicated that some subjects in his study who received active treatment exhibited significant improvements in their TFI scores, yet believed that they had received sham treatment.

      Experimental subjects should be informed that sham treatments are, by definition and design, not capable of causing any true benefit. Thus, if properly informed, subjects should only guess that they received sham treatment when they truly cannot perceive any benefit from treatment. If the TFI scores of such subjects nevertheless improved significantly, then the reported TFI scores are not measuring tinnitus in a way that is clinically meaningful. That is, the TFI seems to be reporting clinical benefits which are not perceived by the subjects. This calls into question the results of the whole study.

      Tinnitus is a subjective problem. When a metric like the TFI measures significant benefits in a subject who thinks he received sham treatment, the metric is measuring something that must not be relevant to the subject's condition. Folmer's paper informs us that the American Academy of Otolaryngology (AAO) recommends against using repetitive transcranial magnetic stimulation (rTMS) to treat tinnitus. Folmer attributes the failure of rTMS to ameliorate tinnitus in past studies in part to older tinnitus rating scales not being sensitive enough to detect the benefits. The TFI seems to address the older scales' lack of sensitivity to small improvements in tinnitus, but was this achieved by making it so sensitive that it detects improvements too small for subjects to perceive? If so, then the only purpose it serves is to convert failed studies into ones that can report statistically significant improvements in tinnitus.
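
      To make the distinction concrete, the following is a minimal sketch, using entirely hypothetical score changes and an assumed perceptibility threshold (neither taken from Folmer's data), of how a group-level change in a tinnitus score can be many standard errors away from zero while no individual subject improves by an amount large enough to notice:

      ```python
      # Illustrative sketch: a statistically significant mean change in a tinnitus
      # score need not imply that any individual subject perceived a benefit.
      # All numbers are hypothetical; the 13-point "perceptible change" threshold
      # is an assumption for illustration, not a value taken from the trial.
      import math
      import statistics

      # Hypothetical pre-to-post changes in a tinnitus score for 20 treated
      # subjects (negative = improvement).
      changes = [-6, -4, -5, -3, -7, -2, -5, -4, -6, -3,
                 -5, -4, -2, -6, -5, -3, -4, -5, -6, -4]

      n = len(changes)
      mean_change = statistics.mean(changes)
      sd_change = statistics.stdev(changes)

      # One-sample t statistic for the null hypothesis "mean change = 0".
      t_stat = mean_change / (sd_change / math.sqrt(n))

      # Count subjects whose individual improvement exceeds the assumed threshold.
      threshold = 13  # assumed minimal perceptible improvement (illustrative only)
      perceptible = sum(1 for c in changes if -c >= threshold)

      print(f"mean change = {mean_change:.1f}, t = {t_stat:.1f} (n = {n})")
      print(f"subjects improving by >= {threshold} points: {perceptible}")
      ```

      With these made-up numbers the mean change is highly "significant," yet not one subject improves by the assumed perceptible amount; that is exactly the pattern in which a scale certifies a benefit that the subjects themselves cannot feel.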

      The small, perhaps imperceptible, benefits detected by the TFI may have been artifacts of unblinding and expectation effects, which were assessed only by asking the blinding question once at the end of the study, a point at which these effects confound one another. If the blinding question had been tracked throughout the study, we would have unconfounded data from its beginning and could see how expectation effects, treatment effects and unblinding evolved over the course of the trial.
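
      As an illustration of what such tracking would provide, here is a minimal sketch, using entirely hypothetical per-session answers, of how the proportion of correct guesses in each arm could be tabulated after every session; a proportion near 50% suggests preserved blinding, while a drift away from 50% across sessions would show when, and in which arm, unblinding or expectation effects emerged:

      ```python
      # Illustrative sketch of tracking the blinding question over a trial.
      # All data are hypothetical: each entry is one subject's answer ("active"
      # or "sham") to the blinding question after the indicated session.
      guesses = {
          "active": {1: ["sham", "active", "sham", "sham"],
                     2: ["sham", "active", "sham", "active"],
                     3: ["active", "active", "sham", "active"]},
          "sham":   {1: ["active", "sham", "sham", "active"],
                     2: ["sham", "sham", "active", "sham"],
                     3: ["sham", "sham", "sham", "active"]},
      }

      for arm, sessions in guesses.items():
          for session in sorted(sessions):
              answers = sessions[session]
              correct = sum(1 for a in answers if a == arm)  # guess matches true assignment
              pct = 100 * correct / len(answers)
              print(f"arm={arm:<6} session={session}  correct: {correct}/{len(answers)} ({pct:.0f}%)")
      ```

      Answers collected at the first session, before any treatment effect has had time to develop, reflect blinding and prior expectation alone; how these proportions change at later sessions is precisely what a single end-of-study question cannot reveal.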

      Park's editorial warned that asking the blinding question before the end of the study could cause patients to drop out by reminding them that they may have been randomized to sham treatment. However, any patients who have forgotten that they might have been randomized to sham treatment are not in a state of fully informed consent, and they must be reminded. Further, Park's advice to ask the blinding question only at the end of the study appeared to be conjecture based on anecdotal experience, unsupported by randomized data.

      The purpose of PubMed Commons is to discuss study results in greater depth, answer open questions, rebut criticisms and debate controversies. A one-line dismissive reply is as unhelpful as no reply at all.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Nov 26, Robert L Folmer commented:

      These issues were already discussed in correspondence published by JAMA Otolaryngology-Head & Neck Surgery 2015;141(11):1031-1032.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Nov 20, David Keller commented:

      Why the blinding of experimental subjects should be tracked during a study, from start to finish

      I wish to address the points raised by Folmer and Theodoroff in their reply [1] to my letter to the editor of JAMA Otolaryngology [2] concerning issues they encountered with unblinding of subjects in their trial of repetitive transcranial magnetic stimulation (rTMS) for tinnitus. These points are important to discuss in order to help future investigators optimize the design of studies of tinnitus therapies, which are highly subject to placebo, nocebo, Pygmalion, and other expectation effects.

      First, Folmer and Theodoroff object to my suggestion of asking the experimental subjects after each and every therapy session whether they think they have received active or sham (placebo) therapy in the trial so far (the "blinding question"). They quote an editorial by Park et al [3] which states that such frequent repetition of the blinding question might increase "non-compliance and dropout" by subjects. Park's statement is made without any supportive data and appears to be based on pure conjecture, as is his recommendation that subjects be asked the blinding question only at the end of a clinical trial. I offer the following equally plausible conjecture: if you ask a subject the blinding question after each session, it will soon become a familiar part of the experimental routine and will have no more effect on the subject's behavior than did his informed consent to be randomized to active treatment or placebo in the first place. Moreover, the experimenters will obtain valuable information about the evolution of the subjects' state of mind as the study progresses. We have no such data for the present study, which impairs our ability to interpret the subjects' answers to the blinding question when it is asked only once, at the end of the study.

      Second, Folmer and Theodoroff state that I "misinterpreted" their explanation of why so many of their subjects guessed they had received placebo even when they had experienced "significant improvement" in their tinnitus score. They object to my characterization of this phenomenon as due to the "smallness of the therapeutic benefit" of their intervention, but my wording summarizes their lengthier explanation: their subjects had a prior expectation of much greater benefit, so they incorrectly guessed they had been randomized to sham therapy even when they exhibited a small but statistically significant benefit from the active treatment. In other words, the "benefit" these subjects experienced was imperceptible to them, truly a distinction without a difference.

      A therapeutic trial hopes for the opposite form of unblinding of subjects, which is when the treatment is so dramatically effective that the subjects who were randomized to active therapy are able to answer the blinding question with 100% accuracy.

      Folmer and Theodoroff state that, in their experience, even if subjects with tinnitus "improve in several ways" due to treatment, some will still be disappointed if their tinnitus is not cured. Do these subjects then answer the blinding question by guessing they received placebo because their benefit was disappointing to them, imperceptible to them, as revenge against the trial itself, or for some other reason? Regardless, if you want to know how well they were blinded, independent of treatment effects and of treatment expectation effects, then you must ask them early in the trial, before treatment expectations have time to take hold. Ask the blinding question early and often. Clinical trials should not be afraid to collect data. Data are good; more data are better.

      References:

      1: Folmer RL, Theodoroff SM. Assessment of Blinding in a Tinnitus Treatment Trial-Reply. JAMA Otolaryngol Head Neck Surg. 2015 Nov 1;141(11):1031-1032. doi: 10.1001/jamaoto.2015.2422. PubMed PMID: 26583514.

      2: Keller DL. Assessment of Blinding in a Tinnitus Treatment Trial. JAMA Otolaryngol Head Neck Surg. 2015 Nov 1;141(11):1031. doi: 10.1001/jamaoto.2015.2425. PubMed PMID: 26583513.

      3: Park J, Bang H, Cañette I. Blinding in clinical trials, time to do it better. Complement Ther Med. 2008 Jun;16(3):121-3. doi: 10.1016/j.ctim.2008.05.001. Epub 2008 May 29. PubMed PMID: 18534323.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
