3 Matching Annotations
  1. Jul 2018
    1. On 2016 Jul 23, Lydia Maniatis commented:

      Part three: Here's an instant update: it seems that, a few years after this publication, the "merged models" were tested. They failed. This is to be expected with ad hoc models, but at least it shows that the authors were sincere in their attempts, though sincerity doesn't excuse sloppy reasoning at that level. They also seem inclined to continue in the same style ("adding components" to the failed models rather than rethinking their ideas). The transparency-requiring low-level channel assumptions are apparently untouchable, non-negotiable. I know it's a cliché, but it's a recipe for misguided, Ptolemaic-solar-system levels of complexity, as the authors demonstrate.

      The article is "Probed-sinewave paradigm: a test of models of light-adaptation dynamics" (Hood DC, Graham N, von Wiegand TE, Chase VM), and its authors frankly acknowledge the utter failure of their hypothesis:

      "Our purpose here was to explore a relatively unused paradigm, the probed-sinewave paradigm, as a vehicle for distinguishing among candidate models of light adaptation. The paradigm produced orderly data with clear features. The candidate light-adaptation models, however, were unable to predict these features and our attempts to rescue them by changing parameter values and the decision rule were unsuccessful. While it is plausible that other modifications would produce predictions closer to the data, it is hard to believe these models can be rescued without adding additional components. For discussion, we divide these possible components into those that seem to require an additional channel vs those components that can be added to the single-channel of the models in Fig. 6."

      "While this discussion suggests a number of plausible directions for future work, it is not at all clear which direction or model will ultimately prove most successful. We started with a paradigm we thought would be a strong test of existing models; the test turned out to be even stronger than we expected."

      So, pretty much starting at zero, after all that. Performing a strong test was the right move. The wrong move was not realizing, on the basis of solid reasoning, that the theoretical foundation of the model was flimsy, and that models with such flimsy foundations are bound to fail unless one is extraordinarily lucky.
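
      For readers unfamiliar with it, the probed-sinewave paradigm superimposes a brief test probe on a sinusoidally flickering background and measures the probe's threshold at various phases of the flicker cycle. A minimal sketch of such a stimulus (my own illustration, with arbitrary parameter values; not the authors' code):

      ```python
      # Minimal sketch of a probed-sinewave stimulus: a brief probe is added
      # to a sinusoidally flickering background at a chosen phase of the
      # cycle. All parameter values are arbitrary illustrations.
      import numpy as np

      def probed_sinewave(mean_lum=100.0, amplitude=50.0, freq_hz=1.0,
                          probe_lum=20.0, probe_phase_deg=90.0,
                          probe_dur_s=0.01, dur_s=2.0, fs=1000):
          """Return time axis and luminance trace with a probe at the given phase."""
          t = np.arange(0.0, dur_s, 1.0 / fs)
          background = mean_lum + amplitude * np.sin(2 * np.pi * freq_hz * t)
          # Place the probe at the requested phase of the second flicker cycle.
          onset = (1.0 + probe_phase_deg / 360.0) / freq_hz
          probe = np.where((t >= onset) & (t < onset + probe_dur_s), probe_lum, 0.0)
          return t, background + probe

      # Probe threshold is measured at each phase; models of light-adaptation
      # dynamics are then asked to predict how threshold varies over the cycle.
      t, stimulus = probed_sinewave(probe_phase_deg=90.0)
      ```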

      Let me add that all models eventually fail; but it is in the case of well-reasoned ones that these failures are informative, and necessary.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 23, Lydia Maniatis commented:

      Part two: Popper discussed traditions such as those described and illustrated above, and concluded that they are not the kind of traditions that endow scientific activity with its progressive character. He suggested that we choose not to immunize our hypotheses with the tactics discussed above - leaving problems for the future, piling ad hoc model on top of ad hoc model, tolerating logical inconsistencies. But that is a harder path than the vagueness, sloppiness and special pleading, masked in technical complexity, that have come to dominate contemporary vision science and, in fact, almost the whole of science today.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Jul 23, Lydia Maniatis commented:

      This article is illustrative of the degradation of theory and practice in vision science. The article is not new (but compare with, e.g., Graham, 2011), but many of its assumptions, and its style of argument - tolerance for layers upon layers of uncorroborated assumptions and for inadequate, ad hoc models - are still very current and underpin a broad swath of contemporary (arguably pseudoscientific) activity. Below are quotes from the article, followed by my comments.

      The abstract: "Light adaptation has been studied using both aperiodic and periodic stimuli. Two well-documented phenomena are described: the background-onset effect (from an aperiodic-stimulus tradition) and high-temporal-frequency linearity (from the periodic-stimulus tradition). These phenomena have been explained within two different theoretical frameworks. Here we briefly review those frameworks. We then show that the models developed to predict the phenomenon from one tradition cannot predict the phenomenon from the other tradition, but that the models from the two traditions can be merged into a class of models that predicts both phenomena."

      Comment: One wonders whether the merger was ultimately successful, and whether falsifying phenomena from yet other traditions could be merged with these two, thereby widening the ever-expanding ad hoc circle of tradition.

      Note that the piecemeal "merger" philosophy expressed by Graham and Hood seems to preclude falsification. Falsifying phenomena from another "tradition" simply lead to the summing of individual ad hoc models into a compound ad hoc model, and so on. The complexity, but not the information content, of the models will thus increase. (This will also likely produce internal inconsistencies that will not be noticed, because they will not be looked for.) The question of whether these models correspond with the facts they are supposedly trying to explain never really arises.

      All this can be garnered from the abstract; if anything, the text is even worse.

      "None of the parts of the merged models, however, is necessarily correct in detail. Many modifications or substitutes would clearly work just as well at this qualitative level. …Also, one could certainly expand the model to include more parts. …Similarly, the initial gain-controlling process might be composed of several processes having somewhat different properties"

      Comment: The models are arbitrary, ad hoc, incomplete on their own terms, and there is no rational criterion for choosing among an infinite number of alternatives.
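
      To make concrete the kind of structure under discussion, here is a toy sketch of a model of this general family - a multiplicative gain stage and a subtractive stage, each driven by a low-pass-filtered copy of the signal, followed by a final linear smoothing stage. The wiring and all time constants below are my own arbitrary choices, which is precisely the point: nothing in the framework fixes them.

      ```python
      # Toy sketch of a "merged"-style light-adaptation model: multiplicative
      # gain control plus subtractive adaptation plus linear filtering. All
      # time constants and the wiring are arbitrary illustrative choices.
      import numpy as np

      def lowpass(x, tau_s, fs):
          """First-order recursive low-pass filter with time constant tau_s."""
          alpha = 1.0 / (1.0 + tau_s * fs)   # equivalent to dt / (tau + dt)
          y = np.empty_like(x)
          acc = x[0]
          for i, xi in enumerate(x):
              acc += alpha * (xi - acc)
              y[i] = acc
          return y

      def toy_merged_model(stim, fs=1000):
          gain_state = lowpass(stim, tau_s=0.100, fs=fs)   # slow estimate of mean level
          gained = stim / (1.0 + gain_state)               # multiplicative gain control
          subtracted = gained - lowpass(gained, tau_s=0.500, fs=fs)  # subtractive adaptation
          return lowpass(subtracted, tau_s=0.010, fs=fs)   # fast output smoothing
      ```

      Every stage here could be swapped for a "modification or substitute" that "would clearly work just as well at this qualitative level" - which is exactly the problem.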

      "No attempt was made to fine-tune either model to predict all the details in any one set of psychophysical data…much less in all the other psychophysical results that such a model might bear on…To attempt such a project in the future might be worthwhile...particularly if the experimental results were all collected on the same subjects..."

      Comment: In other words, because the models are ad hoc, they may only explain the results to which they were tailored. It may (or may not??) be worth checking whether this is the case; that is, testing models for correspondence with the phenomena is treated as optional. The effects in question are, apparently, too fragile and too little understood to be tested across subjects with varying methods.

      "The more vaguely-stated possibility suggested by this observation, however, is that any decision rule of the kind considered here may be in principle inadequate for the suprathreshold-discrimination case. It may be impossible to explain suprathreshold discriminations without more explicit modeling of higher-level of higher-level visual processes than is necessary to explain detection (where detection is a discrimination between a stimulus and a neutral stimulus.)... Thus a simple decision rule may be suitable in the detection case simply because all the higher-level processes are reduced to such simple action THAT THEY BECOME TRANSPARENT." [caps mine].

      Comment: The idea that there are certain experimental conditions in which the conscious percept consists in a direct read-out of the activity of "low-level" neurons (as though there were a direct qualitative correspondence between neural firing and seeing, e.g., a blank screen!) is patently absurd, and has been criticized in depth and from different angles by Teller (1984).

      Even pretending to take it seriously, we could refute it by noting that the presumably simplest of all stimuli - a white surface free of any imperfections - produces the perception of a three-dimensional fog. Other experiments have also shown that organizational processes are engaged even at threshold conditions and that, indeed, they influence thresholds.

      The "transparency theory" is repeated by Graham (2011) in an article in Vision Research: "It is as if the near-threshold experiments made all higher levels of visual processing transparent, therefore allowing the properties of the low-level analyzers to be seen."

      This casually proposed "transparency" theory of near-threshold stimulation is apparently the basis of the widespread belief in spatial-frequency channels and of the extraordinary elaboration of associated assumptions. Without the transparency theory, it is not clear that even cherry-picked evidence can support this popular field of research. (If you've ever wondered at the widespread use of "Gabor patches" in vision research, this is the root cause: they are considered elemental because of the transparency-theory-supported hypothesis that low-level neurons are selectively sensitive to spatial frequency. I suspect many of the people using them don't even know why.)
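
      For reference, a Gabor patch is nothing more mysterious than a sinusoidal grating windowed by a two-dimensional Gaussian; a minimal sketch (parameter values arbitrary):

      ```python
      # Minimal sketch of a Gabor patch: a sinusoidal grating multiplied by a
      # 2-D Gaussian envelope. Parameter values are arbitrary illustrations.
      import numpy as np

      def gabor(size=128, cycles=4.0, sigma_frac=0.15,
                orientation_deg=0.0, phase=0.0):
          """Return a size-by-size Gabor patch with values in roughly [-1, 1]."""
          half = size / 2.0
          y, x = np.mgrid[-half:half, -half:half]
          theta = np.deg2rad(orientation_deg)
          xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinate
          grating = np.cos(2 * np.pi * cycles * xr / size + phase)
          envelope = np.exp(-(x**2 + y**2) / (2 * (sigma_frac * size)**2))
          return grating * envelope
      ```

      Its ubiquity rests on the assumption, noted above, that such stimuli selectively engage single spatial-frequency-tuned channels.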

      Graham and Hood (1992) also go into a little finer detail about the putative basis of transparency: "In the suprathreshold discrimination case, the observer is trying to discriminate between two sets of neural responses…both of which sets contain many non-baseline responses. In the detection case…the observer is simply trying to detect some non-baseline responses in either set (since that is the set most likely to be the non-blank stimulus). Thus a simple decision rule may be suitable in the detection case simply because all the higher-level processes are reduced to such simple action that they become transparent."
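
      The "simple decision rule" at issue is typically something like a maximum rule over channel outputs: report "present" if any channel departs sufficiently from baseline. A toy sketch of the general idea (my illustration, not the authors' specification):

      ```python
      # Toy "simple decision rule": the observer reports "present" if the
      # largest departure from baseline across channels exceeds a criterion.
      # Baseline and criterion values are arbitrary illustrations.
      import numpy as np

      def detect(channel_responses, baseline=0.0, criterion=1.0):
          """Max rule over channels: any sufficiently non-baseline response wins."""
          departures = np.abs(np.asarray(channel_responses) - baseline)
          return departures.max() > criterion

      print(detect([0.1, 0.2, 1.5]))   # True:  one channel clearly responds
      print(detect([0.1, 0.2, 0.3]))   # False: all responses near baseline
      ```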

      Comment: The suggestion that investigators are managing to set their low-level visual neurons at baseline at the start of experiments also seems implausible. I certainly don't think this has been explicitly discussed from the point of view of method.

      "However, at this moment in time, we seem to be able to explain background onset with a subtractive process…Thus, we can just leave as a marker for the future – should troubles in fully explaining the data arise – the possibility that the background-onset effect in particular (and perhaps all the results) will not be explained in detail without including more about higher-level visual processing."

      Comment: Translation: We'll accept our ad hoc hypothesis for now, though it's probably wrong; we'll worry about that later, if and when anyone makes trouble by bothering with actual tests. Because there's always a (zero) chance that it will be corroborated.

      Finally, the article also provides a good example of misleading use of references: "Quantal noise exists in the visual stimulus, as is well known (see Pelli for a current discussion.)"

      Comment: From Pelli, 1990: "It will be important to test the model…For gratings in dynamic white noise, this [model] prediction has been confirmed by Pelli (1981), disconfirmed by Kersten (1984) and reconfirmed by Thomas (1985). More work is needed." But we don't need to wait for evidence to firmly believe in "quantal noise."
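
      "Quantal noise" refers to the Poisson variability inherent in photon absorption: for a mean of N absorbed quanta, the standard deviation is the square root of N. A minimal sketch of how it is standardly modeled (numbers arbitrary):

      ```python
      # Minimal sketch of quantal (photon) noise as Poisson variability:
      # for mean N absorbed quanta, the standard deviation is sqrt(N).
      # The mean value below is an arbitrary illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      mean_quanta = 100.0                       # mean photon absorptions per flash
      samples = rng.poisson(mean_quanta, size=10_000)
      print(samples.mean(), samples.std())      # approximately 100 and 10
      ```

      The existence of such noise is not in dispute; the point of the Pelli quote above is that its modeling role was anything but settled.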


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
