1 Matching Annotation
  1. Jul 2018
    1. On 2017 May 06, Hilda Bastian commented:

      The conclusion that implicit bias in physicians "does not appear to impact their clinical decision making" would be good news, but this systematic review does not support it. Coming to any conclusion at all on this question requires a strong body of high-quality evidence, with representative samples across a wide range of populations, using real-life data, not hypothetical situations. None of these conditions applies here. I think the appropriate conclusion is that we still do not know what role implicit racial bias, as measured by the implicit association test, plays in people's health care.

      The abstract reports that "The majority of studies used clinical vignettes to examine clinical decision making". In this instance, "majority" means "all but one" (8 out of 9), and the single exception has a serious limitation in that regard, according to Table 1: "pharmacy refills are only a proxy for decision to intensify treatment". The authors' conclusions thus relate not to actual clinical decision making, but to hypothetical decision making.

      Of the 9 studies, Table 1 reports that 4 had a low response rate (37% to 53%), and in 2 studies the response rate was unknown. As this is a critical point, and an adequate response rate was not defined in the report of this review, I looked (albeit briefly) at the remaining 3 studies. I could find no response rate reported in any of the 3. In 1 of these (Haider AH, 2014), 248 members of an organization responded; that organization currently reports having over 2,000 members (EAST, accessed 6 May 2017), which would imply a response rate of roughly 12% (248/2,000) if membership was similar at the time. (The authors report that only 2 of the studies had a sample size calculation.)

      It would be helpful if the authors could provide the full scoring: given the limitations reported, it's hard to see how some of these studies scored so highly. This accepted manuscript version reports that the criteria themselves are available in a supplement, but that supplement was not included.

      It would also have been helpful if additional important methodological details of the included studies had been reported. For example, 1 of the studies I looked at (Oliver MN, 2014) randomly allocated race to the patient photos in the vignettes; design elements such as this were not captured in the data extraction reported here. Along with the use of a non-validated quality assessment method (using 9 of the 27 components of the instrument that was modified), these issues leave too many questions about the quality ratings of the included studies. Other elements missing from this systematic review (Shea BJ, 2007) are a listing of the excluded studies and an assessment of the risk of publication bias.

      The search strategy appears to be incompletely reported: it ends with an empty bullet point, and none of the previous bullet points refer to implicit bias or the implicit association test.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
