4 Matching Annotations
  1. Jul 2018
    1. On 2016 Feb 16, Lydia Maniatis commented:

      Another example of the problem discussed by Teller can be found in Pelli (1990), who is proposing a model of perception:

      "In order to make the model as general as possible, yet still be able to measure its parameters, we need three assumptions, or constraints. First we assume that the observer's level of performance increases monotonically with the contrast of the signal (when everything else is fixed). This guarantees that there will be a unique threshold. Secondly, as indicated on the diagram by the prefix contrast-invariant we assume that the calculation performed is independent of the contrast of the effective stimulus, which is its immediate input. Together, assumptions 1 and 2 are a linking hypothesis. They imply that the observer's squared contrast threshold (which we can measure) is proportional to the effective noise level (which is inaccessible). Thirdly, we assume that the equivalent input noise is independent of the amplitude of the input noise and signal, or at least that it too is contrast-invariant, independent of the contrast of the effective image. These assumptions allow us to use two threshold measurements at different external noise levels to estimate the equivalent noise level. In effect, the assumptions state that the proportionality constant and the equivalent noise Neq are indeed constant, independent of the contrast of the effective image. These three assumptions are just enough to allow us to make psychophysical measurements that uniquely determine the parameters of our black-box model. Our model makes several testable predictions, as will be discussed below."

      It's worth noting that the main function of the rationale seems to be one of practical convenience.
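      For concreteness, the bookkeeping the quoted passage describes can be sketched in a few lines (my own illustration, not code from Pelli; the proportionality form c² = k(N + Neq) is simply the linking assumption the quote states):

      ```python
      # Equivalent-noise estimation as described in the quoted passage:
      # the linking assumption is that squared contrast threshold is
      # proportional to total noise, c^2 = k * (N + Neq), where N is the
      # external noise level and Neq the (inaccessible) equivalent noise.

      def estimate_equivalent_noise(N1, c1, N2, c2):
          """Recover k and Neq from two thresholds (c1, c2) measured at
          two external noise levels (N1, N2)."""
          k = (c2**2 - c1**2) / (N2 - N1)  # slope: the proportionality constant
          Neq = c1**2 / k - N1             # intercept on the noise axis
          return k, Neq
      ```

      Everything here hangs on the contrast-invariance assumptions: if k or Neq in fact varies with contrast, the two-point extrapolation has no warrant.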

      Theoretically, Pelli (1990) is proposing to make the case that: "the idea of equivalent input noise and a simplifying assumption called 'contrast invariance' allow the observer's overall quantum efficiency (as defined by Barlow, 1962a) to be factored into two components: transduction efficiency (called 'quantum efficiency of the eye' by Rose, 1948) and calculation efficiency..."

      Although he claims his model makes testable predictions, he also states that they had not, as of publication, been tested.

      Pelli and Farrell (1999) seem to be referencing the untested, two-component model when they state that: "it is not widely appreciated that visual sensitivity is a product of two factors. By measuring an additional threshold, on a background of visual noise, one can partition visual sensitivity into two components representing the observer's efficiency and equivalent noise. Although they require an extra threshold measurement, these factors turn out to be invariant with respect to many visual parameters and are thus more easily characterized and understood than their product, the traditional contrast threshold."

      No references are provided to suggest the proposal has been corroborated. This problem is not remedied by the subsequent statement that: "Previous authors have presented compelling theoretical reasons for isolating these two quantities in order to understand particular aspects of visual function (refs. 3–13)," since all of the references predate the Pelli (1990) claims. Conveniently, the authors "ignore theory, to focus on the empirical properties of the two factors, especially their remarkable invariances, which make them more useful than sensitivity". Ignoring theory unfortunately seems to be a hallmark of modern vision science.

      In this way, Pelli papers over a theoretical vacuum via technical elaboration of an untested model.

      Both Pelli (1990) and Pelli and Farrell (1999) are referenced by more recent papers as support for the use of the "equivalent noise" model.

      Pelli (1990) is cited by Solomon, May & Tyler (2016), without a further rationale for adopting the model (I've commented on that article here: https://pubpeer.com/publications/62E7CB814BC0299FBD4726BE07EA69).

      Dakin, Bex, Cass and Watt (2009) cite Pelli and Farrell (1999), their rationale being that the model "has been widely used elsewhere."

      I feel inclined to describe what is going on here (it is not uncommon) as a kind of "theory-laundering": ideas are proposed uncritically, then uncritically repeated, then become popular, and their popularity acts as a substitute for the missing rationale. Is this science?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Feb 15, Lydia Maniatis commented:

      In reading a number of papers in visual perception, I repeatedly noticed authors using the term "linking hypothesis." "Linking hypotheses," or assertions about how some experimental effect is linked to some cause, would be "introduced" without any ado, i.e. casually, without a rationale. It seemed as though we were supposed to take these "hypotheses" as givens in the interpretation of data, which didn't seem right. I googled the term "linking hypothesis" looking for some kind of formal definition, but didn't find anything of that kind. I did, however, find this article, which confirmed all my worst suspicions. I've only looked it over as yet, but here is the final, jaw-dropping paragraph:

      "Perhaps we should include more often in our publications a paragraph or two making our linking propositions explicit. It will be important to define the ranges of intended applicability of individual linking propositions, the kinds of support they receive from prior experiments, their consistency with other broadly accepted linking propositions, the constraints they place on the composite map, the ancillary assumptions involved and the overall fit of the linking propositions into the current theoretical network of visual science. Within a few years, enough such descriptions should become available that it would be possible to refer to and build upon earlier explications, rather than starting from scratch each time. Similarly, perhaps reviewers could be encouraged to use the explicitness and potential values of linking propositions as one of the criteria of excellence for papers in which arguments containing linking propositions are explicitly or implicitly made."

      Essentially, this paragraph reveals that, at least as of the time the article was written, vision "scientists" were basing research on arbitrary claims for which they did NOT define ranges of intended applicability, did not defend with respect to previous research findings, did not ensure consistency with accepted ideas (or at least explain how these were challenged), and did not explain general implications or connections to the current theoretical framework. They were, rather, "starting from scratch" on the basis of arbitrary assertions. This is not a recognisable picture of the scientific method.

      It doesn't appear that Teller's hopes for correction of these problems were realised. As I read the literature, the culture of "linking hypotheses" and all that that entails has become normalised. In fact, it may have gotten worse, as it has combined with the fashion for model-fitting in vision science. Thus, the data collected are arbitrarily assumed to be "linked" to underlying processes, and are fitted to an algorithm; given the high predictability of the general outcomes (e.g. that attention will limit performance) and the large number of free parameters, a fit will always be achieved "more or less," without, of course, having general applicability.
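      The worry about free parameters can be made concrete with a toy example (mine, not from any of the papers discussed): a model with as many free parameters as data points will "fit" pure noise exactly, so goodness of fit by itself certifies nothing.

      ```python
      import random

      random.seed(0)
      xs = list(range(6))
      ys = [random.gauss(0, 1) for _ in xs]  # pure noise: no underlying law to recover

      def lagrange_fit(xs, ys):
          """A polynomial model with as many free parameters as data points
          (Lagrange interpolation), standing in for any over-flexible model."""
          def model(x):
              total = 0.0
              for i, (xi, yi) in enumerate(zip(xs, ys)):
                  term = yi
                  for j, xj in enumerate(xs):
                      if j != i:
                          term *= (x - xj) / (xi - xj)
                  total += term
              return total
          return model

      model = lagrange_fit(xs, ys)
      # The "fit" to the noise is exact, yet the model explains nothing.
      max_residual = max(abs(model(x) - y) for x, y in zip(xs, ys))
      ```

      A perfect fit here says nothing about underlying processes, which is precisely the point: without an independent rationale, fit quality cannot do the work of a linking hypothesis.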

      I've criticised a number of articles from this unfortunate new (maybe not so new) tradition (a few examples below):

      https://pubpeer.com/publications/90941136CC181AFE4896477BF5BB44

      https://pubpeer.com/publications/62E7CB814BC0299FBD4726BE07EA69

      https://pubpeer.com/publications/F0FDE1805AE5BDE583E3883A8AF39C

      A particularly devilish aspect of this approach is that the absence of intellectual/theoretical rigor is masked not only by incoherence but also by mathematics, which may confuse or scare off potential critics. But the same thing applies to math as to computing: Garbage in, garbage out. The rationale is the thing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
