1 Matching Annotation
  1. Jul 2018
    1. On 2013 Oct 25, Paul Glasziou commented:

      These comments are from a journal club at CREBP - www.crebp.net.au/ - where we later had the opportunity to put our questions to the paper's first author (F Légaré). Chris Del Mar presented this cluster RCT; Elaine Beller presented the study protocol (BMC Fam Pract 2011) and the pilot protocol (BMC Fam Pract 2007).

      Issues:

      1. The paper suggests that randomisation occurred after baseline data were collected (suboptimal). Although loss from each arm was similar, some baseline characteristics were not balanced (e.g. Tables 3 and 4). However, Légaré tells us that in fact the randomisation took place before the baseline data were analysed. An added refinement might be to stratify by the prescribing rates detected in the baseline phase (a sketch of what this could look like appears after this list).
      2. Patient recruitment was low: an average of 3 patients per physician for the whole season (and some physicians recruited none!). This was apparently because of ‘specialisation’ within the practices (‘clusters’), which are very large (20-40 GPs): some GPs see only obstetrics or child development, etc., and hence recruited 0 patients. Nevertheless, all the GPs in the clusters got the intervention, whether or not they see ‘drop-in’ patients (who are more likely to present with ARIs). Légaré comment: patients were recruited by an RA who was stationed full-time in the waiting room of each practice, so most eligible patients were recruited at each practice. This means the effect is unlikely to have been greater than reported (e.g. through a halo effect).
      3. Outcomes: ‘intention to use antibiotics’ is clearly a suboptimal primary outcome because it is so soft. It was not possible to measure antibiotics actually dispensed (about 70% are dispensed in the private sector, unsubsidised by the Province).
      4. What was the intervention? As with all complex interventions, the effective components are sometimes hard to tease out. In this case, was it the ‘epidemiological’ education that did the trick, or the introduction to ‘shared decision-making’?
      5. Was the effect sustained? Légaré comment: This was not measured in this study, but in the earlier pilot the effect was sustained.
      6. Were doctors paid to recruit patients? Légaré comment: No – their main incentive was CME credit.
      7. Did the intervention include the option of “delayed prescribing”? Légaré comment: No – this was not considered acceptable practice at the time.
      8. More minor things:
         a. More clusters would be better (and easier in Australia, where practices appear to be smaller); the design-effect calculation after this list shows why.
         b. What is the financial influence (perverse incentives)? For this trial the practice clusters were not paid by fee-for-service or capitation (although they were in the pilot study: LeBlanc A, Légaré F, Labrecque M, Godin G, Thivierge R, Laurier C, et al. Feasibility of a randomised trial of a continuing medical education program in shared decision-making on the use of antibiotics for acute respiratory infections in primary care: the DECISION+ pilot trial. Implement Sci. 2011;6:5. PMID: 21241514).
         c. Was there contamination, despite the controls not having access to the training? The GPs were academic doctors who may all have known about the trial, its intent, and its hoped-for outcome. We do know that the control practices were on a wait-list (they were offered the intervention after the trial), and some of the unpublished data from this showed a more modest response than the data from the trial proper. The response to offering the intervention to the control clusters might thus have been different (no confidence intervals are available for this comparison).
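
      On point 1 above: a minimal sketch, in Python, of what pair-matched stratified cluster randomisation by baseline prescribing rate could look like. The practice names, baseline rates, and pairing scheme are illustrative assumptions, not the trial's actual procedure.

      ```python
      import random

      # Hypothetical clusters with assumed baseline antibiotic prescribing
      # rates (fraction of ARI consultations ending in a prescription).
      clusters = {
          "practice_A": 0.62, "practice_B": 0.41, "practice_C": 0.55,
          "practice_D": 0.38, "practice_E": 0.70, "practice_F": 0.47,
          "practice_G": 0.59, "practice_H": 0.33, "practice_I": 0.66,
          "practice_J": 0.44, "practice_K": 0.52, "practice_L": 0.36,
      }

      # Pair-matched stratification: sort clusters by baseline rate, then
      # randomise one member of each adjacent pair to each arm, balancing
      # baseline prescribing across arms by construction.
      rng = random.Random(2013)  # fixed seed so the allocation is auditable
      ordered = sorted(clusters, key=clusters.get)
      allocation = {}
      for first, second in zip(ordered[::2], ordered[1::2]):
          arms = ["intervention", "control"]
          rng.shuffle(arms)
          allocation[first], allocation[second] = arms

      for name in ordered:
          print(f"{name}: baseline {clusters[name]:.2f} -> {allocation[name]}")
      ```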
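
      On point 8a: the statistical argument for more, smaller clusters is the design effect, 1 + (m − 1) × ICC, where m is the cluster size and ICC the intracluster correlation; it is the factor by which a cluster trial's sample size must be inflated relative to individual randomisation. A quick illustration, assuming an ICC of 0.05 (an assumption, not a figure from the trial):

      ```python
      def design_effect(cluster_size: int, icc: float) -> float:
          """Sample-size inflation factor for a cluster RCT, relative to
          randomising individuals."""
          return 1 + (cluster_size - 1) * icc

      # Compare a small (Australian-sized) practice with the 20-40 GP
      # clusters in this trial, under the assumed ICC of 0.05.
      for m in (5, 20, 40):
          print(f"cluster size {m:2d}: design effect = {design_effect(m, 0.05):.2f}")
      ```

      Under that assumption a 40-GP cluster nearly triples the required sample size (design effect 2.95), while a 5-GP cluster adds only 20% (1.20) – hence the attraction of many small clusters.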


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
