3 Matching Annotations
  1. Jul 2018
    1. On 2017 Jun 05, Lydia Maniatis commented:

      You should be faulted for the nature and extent of the problems, not for making them explicit, which is indeed useful. I think the nature of the problems reflects an unproductive approach to science.

      Below, I contrast this approach with an alternative and briefly explain why the alternative is productive while the approach taken here is not. (The alternative is presented first.)

      1. Someone makes a guess about the causal basis of an effect or phenomenon. These guesses entail various assumptions. The assumptions should be adequate to explain the thing they were proposed to explain. Additionally, one may derive certain other implications, pointing to effects or facts that have not yet been observed, i.e., predicted effects or facts. The data from an experiment or investigation designed to produce those predicted effects, or to discover those predicted facts, thus act as a test of the predictions and the related assumptions. The criterion for provisional acceptance of assumptions is, in other words, the match between their observable implications and observation. (A minimal code sketch of this criterion appears after this list.)

      2. Data collected are interpreted on the basis of assumptions that they were not designed to test, and cannot test. Thus, the experiment plays no role in corroborating or falsifying these assumptions. The criterion for adopting them is simply the personal preference of the investigator - a criterion which, again, is independent of experiment.

      Because this approach is not designed to test assumptions, it is inherently uninformative as to their verisimilitude - their relationship to the "ground truth." The titles of the corresponding articles are similarly uninformative: they report "exploring," "characterizing," or "measuring" various effects, or simply state the topic area being addressed, without giving any hint of their conclusions.

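      A concrete, purely illustrative rendering of the first approach may help: derive a quantitative prediction from the assumptions, then compare it with observation. The model, data, and tolerance below are invented for demonstration and stand in for no particular study.

      ```python
      import numpy as np

      # Assumption under test (invented for illustration): the response
      # grows linearly with stimulus intensity, with slope 2.0.
      def predicted_response(intensity, slope=2.0):
          return slope * intensity

      # Implications of the assumption: predicted effects at intensities
      # that have not yet been observed.
      intensities = np.array([0.5, 1.0, 1.5, 2.0])
      predictions = predicted_response(intensities)

      # Hypothetical observations from an experiment designed to elicit
      # exactly those predicted effects.
      observations = np.array([1.1, 2.0, 2.9, 4.2])

      # Criterion for provisional acceptance: the match between the
      # assumption's observable implications and observation, within a
      # stated tolerance.
      tolerance = 0.3
      if np.all(np.abs(observations - predictions) < tolerance):
          print("Predictions borne out; assumptions provisionally retained.")
      else:
          print("A prediction failed; the assumptions are falsified as stated.")
      ```

      The point of the sketch is that the acceptance criterion is external to the investigator's preferences: the assumptions stake out observable implications in advance, and the data are in a position to refuse them. The second approach offers nothing the data could refuse.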

      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 05, Michael Tarr commented:

      Interested parties should read the entire paper and make up their own minds. Every scientific study has limitations and we should not be faulted for making them explicit so as to inform interested readers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jun 03, Lydia Maniatis commented:

      “Pick your poison”

      I’m speculating, but it seems to me that, no longer able to bar post-publication critiques of low-quality studies (via muscular refusal to publish critical Letters to the Editor), journal editors have adopted a compromise strategy: continuing to accept low-quality papers while requiring authors to enumerate (many of) the study’s (often fatal) flaws in a section on the “limitations of the study.” This is an improvement on the old system; however, this special section should be placed at the front, not at the tail end, of the paper. More often than not, it reveals that the methods were confounded and the interpretation based on flimsy, vaguely elucidated, and untested assumptions; as such, the conclusions can carry little or no theoretical weight.

      This is the case here; among the “Issues and limitations” of the study, the authors mention:

      a. That the study lacks an important control condition (“we did not collect neural responses for any untrained face stimuli…” (p. 18)) and that it is not clear how the necessary control might be achieved.

      b. That it is unclear whether results generalize in order to explain what they are supposed to explain; this, we’re told, is contingent on whether certain vague assumptions adopted by the authors, about what observers are doing, actually hold ("we hypothesized that this task prompted subjects to learn...").

      c. That it is not clear that subjects were discriminating faces holistically, or only on the basis of the simple variations used in the stimulus set. The authors explain that they prefer to make the more convenient assumption that “the task used in our study was biased towards facial discrimination rather than facial part discrimination.”

      d. Most interestingly, we learn that, in order for the results of the imaging technique used to be interpretable, it is necessary to impose certain constraints on the analysis, and that the choice of constraints “can lead to source reconstructions that are different from the true activity in the brain.” What should a scientist do, in the absence of information about the proper (true) assumptions to make? I would say, make and test the assumptions you need in order to ascertain the proper constraints. Instead, the authors take a riskier route. In the absence of knowledge, they say, one has to “pick his poison by simply choosing some constraints.” Of course, the authors “tried to choose reasonable constraints…” (A toy sketch of this dependence on the chosen constraints follows below.)

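      Why does the choice of constraints matter so much? Source reconstruction is an underdetermined inverse problem: many different source patterns fit the same sensor data equally well, and the chosen constraint (regularization) is what decides among them. The sketch below - an invented lead field and invented sources, not the authors’ pipeline or any particular MEG/EEG toolchain - shows two constraint choices fitting the same data essentially exactly while disagreeing with each other and with the “true” sources.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy inverse problem: 5 sensors, 20 candidate sources (underdetermined),
      # standing in for MEG/EEG source reconstruction. All numbers are invented.
      n_sensors, n_sources = 5, 20
      L = rng.standard_normal((n_sensors, n_sources))  # hypothetical lead field

      x_true = np.zeros(n_sources)                     # the "true" activity
      x_true[[3, 12]] = [1.0, -0.7]
      y = L @ x_true                                   # noiseless sensor data

      def constrained_solution(L, y, W):
          """Weighted minimum-norm estimate: among all x with L @ x = y,
          return the one minimizing x^T W^{-1} x (W encodes the constraint)."""
          return W @ L.T @ np.linalg.solve(L @ W @ L.T, y)

      # Constraint A: plain minimum norm (identity prior).
      # Constraint B: a different, equally arbitrary prior over the sources.
      W_a = np.eye(n_sources)
      W_b = np.diag(1.0 + 5.0 * rng.random(n_sources))

      for name, W in [("A", W_a), ("B", W_b)]:
          x_hat = constrained_solution(L, y, W)
          print(f"constraint {name}: "
                f"data misfit={np.linalg.norm(L @ x_hat - y):.2e}, "
                f"error vs true activity={np.linalg.norm(x_hat - x_true):.2f}")
      # Both constraints reproduce the sensor data essentially exactly, yet
      # yield different reconstructions; neither needs to match x_true.
      ```

      Since both constraints reproduce the measurements, fit to the data cannot arbitrate between them; that is precisely the “pick your poison” predicament.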
      To make the situation clear: to interpret their data, collected under conditions that are heavily confounded (and which the authors unconfound simply on the basis of wishful thinking), they must make further, untested assumptions in a way so lacking in rigor that they themselves analogize it to picking a poison. The conclusions are thus wholly contingent on layers of highly speculative assumptions. Until these are clarified, tested, and corroborated, the empirical content of this project - the theoretical weight of its conclusions - is null.

      Assuming these articles are actually written to be read, the poison label should be affixed prominently at the top.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
