5 Matching Annotations
  1. Jul 2018
    1. On 2016 Mar 15, Wichor Bramer commented:

      Well, one can imagine that, across 120 reviews, some limited their dataset to English-language articles only, while others translated foreign-language articles. Likewise, some reviews I performed the searches for included unpublished articles from registries, and there these do make an important difference compared with reviews that only included published articles (such as Jaspers L, 2016, though that review was not used for this research).

      Some reviews excluded conference papers (especially if the number of hits was high in the reviewers' eyes, we resort to that to reduce the number of hits), while others included them. I must say that I don't see why these would not be found in Embase/Medline; this is particularly an issue when searching Embase, since Medline hardly includes detailed conference proceedings.

      In this research we only looked at the included references that had been published in a journal, and we considered conference proceedings published as supplements to journals to fall into that category.

      Regarding searching Cochrane CENTRAL: these results will be shown in upcoming articles based on partially overlapping data. I must say that, so far, among the 2,500 included references of the 60+ published reviews, Cochrane CENTRAL has not identified a single included reference that was not also retrieved by another database.

      In my opinion, when doing a systematic review, the authors should aim to find all relevant articles that can answer the research question. If that is not the goal, then it should not be called a systematic review; they could simply combine three MeSH terms in PubMed, extract some conclusions and automatically generate a rapid review.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 15, Hilda Bastian commented:

      Many thanks, Wichor and Dean - that's really helpful. Still not clear on whether there was a language restriction or not. I looked at a couple of the reviews you link to (thanks!), but couldn't see an answer in those either.

      On the question of implications for reviews: being included is a critical measure of the value of the search results, but with such major resource implications, it's not enough. One of the reasons that more detail about the spread of topics, and about the nature of what was not found, is important is to explain the difference between these results and those of other studies (for example, Waffenschmidt S, 2015, Halladay CW, 2015, Golder S, 2014, Lorenzetti DL, 2014).

      Even if studies like this don't go as far as exploring what it might mean for the conclusions of reviews, there are several aspects - like language - that matter. For example, the Cochrane trials register and other sources were searched as well. If studies from these sources were included based only on abstracts from conference proceedings, for example, then it's clear why they may not be found in EMBASE/MEDLINE. Methodological issues such as language restriction, or whether or not to include non-journal sources, are important questions for a range of reasons.

      One way that the potential impact of studies can be considered is the quality/risk of bias assessment of the studies that would not have been found. As Halladay CW, 2015 found, the impact of studies on systematic reviews can be modest (if they have an impact at all).

      Disclosure: I am the lead editor of PubMed Health, a clinical effectiveness resource and project that adds non-MEDLINE systematic reviews to PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Mar 15, Wichor Bramer commented:

      Dear Hilda,

      Thank you for your insightful comments, much appreciated. I have left comments via PubMed Commons before, but have never received any from other researchers. I will respond to your comments point by point:

      1) As we described in the last line of the second to last paragraph of the methods section of our paper, we searched all three databases post-hoc for included references.

      2) We searched the largest Ovid Medline file, comprising Ovid MEDLINE® In-Process & Other Non-Indexed Citations. For clarity, this is the only Medline database shown to end users at Erasmus MC, and it is referred to as Medline, though it includes non-Medline PMC records. Articles retrieved from PubMed, the subset as supplied by publishers, were not classified as resulting from Medline Ovid searches, but rather as unique results from the PubMed publisher subset (a classification not used in this article, but one that will be used in other articles based on partially overlapping datasets).

      3) As you pointed out, Bramer WM, 2015 is not a systematic review. After the article was accepted, I realized it would have been wise to limit our study to medical research questions only (this being the only non-medical topic). Not all 120 searches have resulted in published systematic reviews. In some cases the process is still ongoing, and in others the results were used to create other end products, such as clinical practice guidelines, grant proposals and thesis chapters. For 47 of the searches used in this research, the resulting articles have been published in PubMed. That selection can be viewed via http://bit.ly/bramer-srs-gs.

      4) The criteria for searches to be included in this research were that:

      a) researchers had requested librarian-mediated searches because they intended to write a systematic review (in that view, the title should be read as 120 systematic review requests)

      b) titles and abstracts for the results for all databases had been reviewed

      c) the full text of the relevant references had been critically read, and

      d) the resulting relevant references had been reported to us or were extractable from the resulting publication.

      Whether the searches result in finished published systematic reviews is independent of the search process. Retrospectively, it would have been wise to include a paragraph on this in the article.

      5) One of the peer reviewers also mentioned the expected difference between certain topics, and advised us to investigate that relation. However, it would be very complicated to group 120 unique and diverse topics systematically, and even within broad subjects such as surgery or pediatrics one can expect variation between research questions. For very distinct topics such as nursing or psychology one can expect differences, because of the need to search CINAHL or PsycINFO, respectively, but such research topics were scarce in our set. We do not believe large differences would occur in the performance of GS between topics, as the overall performance remains too low. We did observe that GS performed better for uncomplicated questions than for search strategies with many synonyms.

      6) We chose not to investigate in detail what the missed studies would have meant for the conclusions of the reviews, partially because of the vast number of topics, but also because we feel this does not add value to our conclusions about coverage, precision and recall. If searches in GS are likely to find fewer than 40% of all relevant references, or if there is a high likelihood that fewer than 80% are retrieved in Embase, the expected recall is too low for a systematic review, no matter the quality of the retrieved results. In follow-up research comparing the best database combinations (in that case for published medical systematic reviews, so only partially overlapping with this set), we plan to investigate in detail why certain references were found by GS but not by the traditional databases. One reason could be that articles are retrieved from lower-quality journals, as GS lacks quality requirements for inclusion, though there may be other reasons.

      Kind regards,

      Wichor Bramer


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Mar 12, Hilda Bastian commented:

      An interesting and very useful study of Google Scholar (GS). I am unclear, though, about the methods used to compare it with other databases. The abstract includes this step after the systematic review authors had a final list of included studies: "All three databases were then searched post hoc for included references not found in the original search results". That step is clearly described in the article for GS.

      However, for the other 2 databases (EMBASE and MEDLINE Ovid), the article describes the step this way: "We searched for all included references one-by-one in the original files in Endnote". "Overall coverage" is reported only for GS. Could you clarify whether the databases were searched post hoc for all 3 databases?

      I am also unclear about the MEDLINE Ovid search. It is stated that there was also a search of "a subset of PubMed to find recent articles". Were articles retrieved in this way classified as from the MEDLINE Ovid search? And if recent articles from PubMed were searched, does that mean that the MEDLINE Ovid search was restricted to MEDLINE content only, and not additional PubMed records (such as those via PMC)?

      There is little description of the 120 systematic reviews and citations are only provided for 5. One of those (Bramer WM, 2015) is arguably not a systematic review. What kind of primary literature was being sought is not reported, nor whether studies in languages other than English were included. And with only 5 topics given, it is not clear what role the subject matter played here. As Hoffmann T, 2012 showed, research scatter can vary greatly according to the subject. It would be helpful to provide the list of 120 systematic reviews.

      No data or description is provided about the studies missed with each strategy. Firstly, that makes it difficult to ascertain to what extent this reflects the quality of the retrieval rather than the contents of the databases. And secondly, with numbers alone and no information about the quality of the studies missed, the critical issue of the value of the missing studies is a blank space.

      Disclosure: I am the lead editor of PubMed Health, a clinical effectiveness resource and project that adds non-MEDLINE systematic reviews to PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
