5 Matching Annotations
  1. Jul 2018
    1. On 2018 Jan 06, Ivan Buljan commented:

      Dear Hilda,

      Thank you for your comments. You have made several excellent observations and we are glad to clarify those issues.

      As addressed in the limitations, a high dropout rate was present in the trials with consumers and physicians, while there was none in the student trial. Unlike the student trial, where we tested the efficacy of the formats, our intention was to test their effectiveness (real-world application). We believe our results are representative because, in the real world, a significant number of patients and physicians will not want to read the CSR. It is true that in the student trial there was no difference in reading experience between the PLS and the infographic. And although there was a significant difference between the infographic and the PLS in the consumer and physician trials, it has to be admitted (as we did in the article) that the difference was very small (only a couple of points on the reading experience scale), which raises the question of whether it is clinically significant.

      There were differences between the infographic and the PLS in the number of numerical expressions, but it cannot be claimed that this was the only reason for dropout from the trials. Participants may have refused to participate even before reading the text, or before answering the questions about the summary. Also, despite the different numbers of numerical expressions, numeracy was a predictor of results for all formats, meaning that participants with higher numeracy levels were better at understanding the results in a summary format even when fewer numerical expressions were used.

      There were indeed differences between the infographic and the PLS in how the quality of the evidence was presented, but we accounted for that when scoring the results. We scored an answer as correct if it described the studies with terms such as “low quality of evidence”, “the quality of the studies differed greatly”, “the strength of the evidence from the studies differed”, or “the studies were too small”, whereas incorrect answers included “the quality of evidence was very good”, “there were no differences in the quality of the studies”, or “all studies produced equal strength of evidence”. The result was that there was no difference between formats in the number of correct answers to that question in any of the three trials.

      As for the meta-analysis, we had individual-level data from all three trials, so a meta-analysis was not necessary to estimate the effect. We presented the pooled results from all three trials.
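      The distinction between the two approaches discussed here - pooling individual-level data versus meta-analysing per-trial effect sizes - can be sketched as follows. This is an editorial illustration with hypothetical reading-experience scores, not the trial data; the function names and numbers are invented for the example.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp

def meta_fixed_effect(ds, ns_a, ns_b):
    """Inverse-variance fixed-effect pooling of per-trial Cohen's d values."""
    num = den = 0.0
    for d, na, nb in zip(ds, ns_a, ns_b):
        # Approximate sampling variance of d (standard large-sample formula).
        var = (na + nb) / (na * nb) + d ** 2 / (2 * (na + nb))
        w = 1.0 / var
        num += w * d
        den += w
    return num / den

# Hypothetical reading-experience scores (infographic vs PLS) for three trials.
trials = [
    ([7, 8, 6, 9, 7, 8], [6, 7, 6, 7, 6, 5]),
    ([8, 9, 7, 8, 9, 7], [6, 6, 7, 5, 6, 6]),
    ([7, 7, 8, 6, 8, 7], [6, 7, 5, 6, 6, 7]),
]

# Option A (what the authors describe): pool the individual-level data.
pooled_info = [x for info, _ in trials for x in info]
pooled_pls = [x for _, pls in trials for x in pls]
d_pooled_individual = cohens_d(pooled_info, pooled_pls)

# Option B (what the comment asks about): meta-analyse per-trial effect sizes.
per_trial_d = [cohens_d(a, b) for a, b in trials]
d_meta = meta_fixed_effect(per_trial_d,
                           [len(a) for a, _ in trials],
                           [len(b) for _, b in trials])
print(per_trial_d, d_pooled_individual, d_meta)
```

      With individual-level data available, Option A is the simpler route; Option B is what would be required if only per-trial summary statistics had been reported.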

      The ways of presenting evidence to the public using infographics are still not well explored. We do not know whether the symbols used in this research were appropriate for consumers for the purposes they were intended to serve. To explore that, we would need to ask participants for their opinions of the symbols used in the infographic, which would require a qualitative approach; such a study is underway.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2018 Jan 01, Hilda Bastian commented:

      It is great to see randomized trials to test the effects of an infographic. However, I have concerns with the interpretation of the results of this set of 3 trials. The abstract states that these were randomized trials of 171 students, 99 consumers, and 64 doctors. However, those are the numbers of people who completed the knowledge and reading experience questions, not the number randomized: 171 students, 212 consumers, and 108 doctors were randomized. The extremely high dropout rate (e.g. 53% for consumers) leaves only the trial in students as a reliable base for conclusions. And for them, there was no difference in knowledge or reported reading experience - they did not prefer the infographic.
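      The dropout figures quoted above can be checked directly from the randomized and completed counts (a quick arithmetic sketch, using only the numbers stated in this comment):

```python
# Numbers quoted in the comment: randomized vs completed, per trial.
randomized = {"students": 171, "consumers": 212, "doctors": 108}
completed = {"students": 171, "consumers": 99, "doctors": 64}

# Dropout = share of randomized participants who did not complete.
dropout = {k: 1 - completed[k] / randomized[k] for k in randomized}
for group, rate in dropout.items():
    print(f"{group}: {rate:.0%} dropout")
```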

      The authors point out that the high dropout rate may have affected the results for consumers and doctors, especially as they faced a numeracy test after being given the infographic or summary to read. That must have skewed the results. In particular, since the infographic (here) has such different content to the plain language summary (here), this seems inevitably related to the issue of numeracy: the plain language summary is almost number-free, while the infographic is number-heavy (an additional 16 numerical expressions).

      The knowledge test comprised 10 questions, one of which related to the quality of the evidence included in the systematic review. The infographic and plain language summary contained very different information on this. The article's appendix suggests that the correct answer expected was included in the infographic but not in the plain language summary. It would be helpful to know whether this affected the knowledge scores for readers of the plain language summary.

      Cohen's d effect sizes are not reported for the 3 trials separately, and given the heterogeneity in those results, it is not accurate to use the combined result to conclude that all 3 participant groups preferred the infographic and the experience of reading it. (In addition, the method for the meta-analysis of effect sizes of the 3 trials is not reported.)

      The specific summary and infographic, although high quality, also point to some of the underlying challenges in communicating with consumers through these media. For example, the infographic uses a coffin as a pictograph for mortality, which I don't believe is appropriate in patient information. This highlights the risks inherent in using graphic elements where there aren't well-established conventions. Both the infographic and the plain language summary focus on information about the baby's wellbeing and the birth - but not the impact of the intervention on the pregnant woman, or her views of it. Whatever the format, issues remain with the process of determining the content of research summaries for consumers. (I have written more about the evidence on infographics and this study here.)

      Disclosure: The Cochrane (text) plain language summaries were an initiative of mine in the early days of the Cochrane Collaboration, when I was a consumer advocate. Although I wrote or edited most of those early Cochrane summaries, I had no involvement with the one studied here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Dec 28, Karen Woolley commented:

      In a consumer-focused world, focusing on consumer preferences is understandable, if not desirable. If an infographic approach to summarising health information appeals to consumers of health information without interfering with knowledge translation, should the Plain Language Expectations for Authors of Cochrane Summaries (PLEACS) be updated to consider the use of graphics? Currently, the PLEACS guidelines focus primarily on text - the use of data visualisation/graphics is not mentioned. The skills of designers and editors can complement the skills of medical writers to produce high-quality plain-language documents. If PubMed Commons made it easier to post graphics (vs hyperlinking to graphics like this https://twitter.com/KWProScribe/status/946247110600540160), then it could serve as a logical, free, readily accessible, international repository for visually engaging plain-language summaries...that could sit right under their matching scientific (and less consumer-friendly) publications.

      Disclosures: Financial: I am a paid employee of Envision Pharma Group, which provides medical communication services and technology solutions. I have shares in Johnson & Johnson and have been a government-appointed director on the board of 5 hospitals. Nonfinancial: I am an active member and past director of associations that advocate for ethical publication practices. I am a research partner with international patient leaders and advocacy organisations.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
