1 Matching Annotation
  1. Jul 2018
    1. On 2016 Dec 28, Lydia Maniatis commented:

      Cherniawsky and Mullen’s (2016) article lies well within the perimeter of a school of thought that, despite its obvious intellectual and empirical absurdity, is popular within the vision science community.

      The school persists, and is relentlessly prolific, because it has insulated itself from the possibility of falsification, mainly by ignoring both fact and reason.

      Explanatory schemes are concocted with respect to a narrow set of stimuli and conditions. Data generated under this narrow set of conditions are always interpreted in terms of the narrow scheme of assumptions, via permissive post hoc modeling. When, as here, results contradict expectation, additional ad hoc assumptions are made with reference to the specific, narrow type of stimuli used, which then, of course, may subsequently be corroborated, more or less, using those same stimuli or mild variants thereof.

      The process continues ad infinitum via the same ad hoc route. This is the reason that, as Kingdom (2011) has noted, the study of lightness, brightness and transparency (and I would add, vision science in general) is divided into camps “each with its own preferred stimuli and methodology” and characterized by “ideological divides.” The term “ideological” is highly appropriate here, as it indicates a refusal to face facts and arguments that contradict or challenge the preferred view. It is obviously antithetical to the scientific attitude and, unfortunately, very typical of virtually all of contemporary vision science.

      The title of this paper, “The whole is other than the sum...”, indicates that a prediction of “summation” failed even under the gentle treatment it received. The authors don’t quite know what to make of their results, but a conclusion of “other” is enough by today’s standards.

      The ideological camp to which this article belongs is a scandal on many counts. First, it adopts the view that there are certain figures whose retinal projections trigger visual processes such that the ultimate percept directly reflects local “low-level” processes. More specifically, it reflects “low-level” processes as they are currently (and crudely) understood. The figures supposed to have this quality are those for which the appropriate “low-level” story du jour has been concocted.

      The success of the method is well described by Graham (1997, discussed in PubPeer), who notes that countless experiments were “consistent” with the behavior of V1 neurons at a time when V1 had only begun to be explored, and when researchers were unaware not only of the complexities of V1 but also of the many hierarchically higher-level processes that intervene between retina and percept. This amazing success is rationalized (if we may use the term loosely) by Graham, who, with magical thinking, reckons that under certain conditions the brain becomes “transparent” down to the initial processing levels. Teller (1984) had earlier (to no apparent effect) described such a view as “the nothing mucks it up proviso” and pointed out its obvious logical problems.

      Cherniawsky and Mullen premise their article on this view with their opening sentence: “Two-dimensional orthogonal gratings (plaids) are a useful tool in the study of complex form perception, as early spatial vision is well described by responses to simple one-dimensional sinusoidal gratings…” In fact, the “one-dimensional sinusoidal gratings” in question typically produce 3D percepts of light and shadow, and the authors’ plaids in Figure 1 appear curved and partially obscured by a foggy overlay. So, illogical as the transparent-brain hypothesis is to begin with, the stimuli supposed to tap into lower-level processes are not even consistent with a strictly “low-level” interpretive process.
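
      For readers unfamiliar with these stimuli, here is a minimal sketch of how they are constructed: a plaid is simply the pointwise sum of two orthogonal one-dimensional sinusoidal gratings. (Python/numpy; all parameter values below are arbitrary illustrative choices, not those of the paper.)

        import numpy as np

        # A plaid is the sum of two 1-D sinusoidal luminance gratings at
        # orthogonal orientations. Parameters are illustrative only.
        size = 256            # image width/height in pixels
        cycles = 8            # spatial frequency, in cycles per image
        contrast = 0.5        # Michelson contrast of each component

        x = np.arange(size) / size
        X, Y = np.meshgrid(x, x)

        horizontal = contrast * np.sin(2 * np.pi * cycles * Y)  # horizontal bars
        vertical   = contrast * np.sin(2 * np.pi * cycles * X)  # vertical bars

        # Sum the components and rescale around a mean luminance of 0.5.
        plaid = 0.5 * (1.0 + 0.5 * (horizontal + vertical))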

      The uninitiated might wonder why the authors use the term “spatial vision.” It is because they have uncritically adopted the partner of the transparent-brain hypothesis: the view that early visual processes perform a Fourier analysis on the retinal projection. It is not clear that this is at all realistic at the physiological level, and there is no apparent functional reason for such a challenging process, as it would in no way further the goal of organizing the incoming light into figures and grounds as the basis for further interpretation leading to a (usually) veridical representation of the environment.

      The Fourier conceit is, of course, maintained by employing sinusoidal gratings while ignoring their actual perceptual effects. That is, the sinusoidal gratings and combinations thereof are said to tap into low-level frequency channels, which then determine contrast via summation, inhibition, etc. (whatever post hoc interpretation the data of any particular experiment seem to require). These contrast impressions, though experienced in the context of, e.g., impressions of partially shadowed tubes, are never considered with respect to those complex 3D percepts. Lacking the necessary interpretive assumptions, investigators are reduced to describing their results as “other”: precisely described but theoretically unintelligible, tangled effects.
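
      To see why sinusoids are privileged under the Fourier view: each grating contributes a single pair of peaks to the amplitude spectrum of the image, and a plaid contributes two such pairs. The sketch below (my own illustration, not the authors’ analysis) makes this concrete.

        import numpy as np

        size = 256
        x = np.arange(size) / size
        X, Y = np.meshgrid(x, x)
        grating = np.sin(2 * np.pi * 8 * X)          # vertical grating, 8 cycles/image
        plaid = grating + np.sin(2 * np.pi * 8 * Y)  # add the orthogonal component

        for name, img in (("grating", grating), ("plaid", plaid)):
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
            # A pure grating yields one pair of spectral peaks; a plaid, two pairs.
            peaks = np.argwhere(spectrum > 0.5 * spectrum.max())
            print(name, "->", len(peaks), "dominant frequency components")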

      The idea that “summation” of local neural activities can explain perception is contradicted by a million cases, and counting, including the much-loved sinusoidal gratings and their shape-from-shading effects. But ideology is stronger and, apparently, good enough for vision science today.
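
      For context, the “summation” prediction at issue is commonly formalized in this literature as Minkowski (so-called probability) summation over component sensitivities. The generic sketch below is mine, with illustrative values, not the authors’ specific model.

        # Generic Minkowski ("probability") summation over component
        # sensitivities: q = 1 gives linear summation; large q approaches
        # winner-take-all (no summation advantage). Values are illustrative.
        def minkowski_sum(s1, s2, q):
            return (s1 ** q + s2 ** q) ** (1.0 / q)

        s1 = s2 = 10.0  # equal component sensitivities, arbitrary units
        for q in (1.0, 2.0, 4.0, 20.0):
            print(f"q = {q:>4}: combined sensitivity = {minkowski_sum(s1, s2, q):.2f}")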

      Finally, the notion of “detectors” is a staple of this school and the authors’ discussion; for a discussion of why this concept is untenable, please see Teller (1984).

      p.s. As usual, I’ll ask why it’s OK for an author to be one of a small number of subjects, the rest of whom are described as “naïve.” If it’s important to be naïve, then…

      Also, why use forced choices, and thus inject more uncertainty than necessary into the results? It’s theoretically possible that observers never see what you think they’re seeing… Obviously, if you’re committed to interpreting results a certain way, it’s convenient to force the data to look a certain way…
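
      To make the forced-choice worry concrete: a two-alternative forced-choice (2AFC) psychometric function is conventionally modeled with a 50% guessing floor, so “correct” responses at low contrast are indistinguishable, trial by trial, from lucky guesses. (A standard textbook formulation with illustrative parameters, not the authors’ fitting procedure.)

        import numpy as np

        # Standard 2AFC model: P(correct) = 0.5 + 0.5 * F(c), where F is the
        # probability of genuine detection (a Weibull here, a common choice).
        def p_correct(contrast, alpha=0.05, beta=3.0):
            detect = 1.0 - np.exp(-(contrast / alpha) ** beta)
            return 0.5 + 0.5 * detect  # guessing lifts the floor to 50%

        for c in (0.0, 0.02, 0.05, 0.10):
            print(f"contrast {c:.2f}: P(correct) = {p_correct(c):.3f}")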

      Also, no explanation is given for methodological choices, e.g., the (very brief) presentation times.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
