4 Matching Annotations
  1. Jul 2018
    1. On 2015 Oct 21, James C Coyne commented:

      I am delighted that your group has formed a PubMed Commons Journal Club and you have selected our article for comment. Hopefully my colleagues will respond as well, but here are my reactions.

      Our study grew out of an ambitious dissertation project in which the PhD student sought to implement best practices for recruiting a consecutive sample. Rather than obtaining the anticipated sample size, her effort stands as a cautionary note for anyone who would consider mounting such a randomized trial as part of a PhD effort without a lot of resources that are typically available.

      I would now suggest that if a PhD student wishes to conduct an evaluation of an intervention largely on their own, they should stick to evaluating feasibility and acceptability, not expect to accrue enough patients for an adequately powered estimate of effect size. We have far too many underpowered studies claiming to produce effect sizes that, in the end, are not reliable.

      Because recruitment for our study was not part of routine care, assent had to be obtained for approaching patients.

      The Dutch practice of talking to every patient who wants a discussion is admirable, but it is not screening. Throughout medical practice, screening involves making decisions about who will have a further discussion based on their obtaining a score above a cutpoint. Indeed, if taken seriously and literally, the widely touted international standards for screening threaten an excellent, well-established Dutch practice.

      I think that your comments, like a lot of the conventional understanding of cancer patients' interest in psychotherapy, reflect an overoptimism about their uptake. Our experience is actually quite consistent with other data that suggest interest in psychotherapy or counseling is a lot lower than generally assumed.

      With more resources, perhaps we could have relied on touchscreen assessments, but I doubt the yield would be much better than what we obtained.

      There is no evidence that systematic and routine screening of cancer patients for distress produces a better outcome than simply allowing patients access to services without the intervening screening. However, a large number of studies demonstrate that most patients who are interested in psychological services are not distressed enough to register an effect of receipt of those services. That is quite a dilemma.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 17, Radboudumc Psycho-Oncology Journal Club commented:

      This interesting study, which has important implications for psychosocial oncology researchers, was discussed by our Journal Club on 16th September 2015 and generated a lively discussion. The study reports on difficulties associated with using distress screening to identify patients for randomisation to psychological therapy intervention trials. The authors conclude that although distress screening of consecutive patients to determine psychological trial entry is recommended by guidelines, it may be an inefficient use of resources and may introduce bias.

      During the plenary discussion of this paper our journal club raised the following points:

      • 1) In this study nurses distributed screening instruments at Time 1, but the results of those questionnaires were mailed back to the researcher. At Time 2 all screening questionnaires were administered by the researcher via the mail, with a follow-up phone call by the researcher. Our group discussed that this scenario does not reflect how screening is (or should be) used in most clinical settings. To ensure the maximum benefits of screening, we believe it is important for the clinical staff responsible for screening to discuss the results with patients and offer services (or studies) according to identified needs. We wondered whether uptake of the PST intervention study might have been higher had this approach been adopted.
      • 2) This study also raises some broader issues about patient perceptions of psychological therapies and how best to explain to patients what is offered in psychological intervention trials. We noted with interest that need for help was explored with a rather general question, “Would you like to talk to a care provider about your situation?”; however, what was ultimately offered to the patient was the opportunity to speak with a psychologist. Our group felt that an important component of trial uptake is ensuring a good match between what is screened for and the services ultimately offered.
      • 3) The authors identify that a large component of the 17 hours calculated to recruit one patient to the study was taken up by data management issues. They recommend that automated methods may be more efficient. We wonder whether the authors have a sense of the efficiency gains offered by automated screening, and whether using automated methods makes the process of distress screening for trial entry efficient.
      • 4) We note with interest the current dilemma that, although it may be inefficient, screening of consecutive patients for trial entry is a required component of best-practice guidelines on the conduct of randomised controlled trials (e.g. CONSORT). Do the authors have any suggestions for what can be done to address this problem if distress screening for trial entry indeed introduces bias and threatens external validity?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
