1 Matching Annotation
  1. Jul 2018
    1. On 2017 Jun 21, Lydia Maniatis commented:

      This paper seems to hinge on a notion that makes no functional sense, has never been corroborated, and has led one of its main proponents to an absurd claim in attempting to salvage it.

      The notion is that the visual system performs a Fourier analysis on the "image" via "spatial frequency filters." As far as I can see, the reasons for the adoption of this functionally absurd notion were an ill-conceived analogy to hearing and the Fourier analysis that happens in the inner ear, combined with a gross over-interpretation of the early findings of Hubel and Wiesel on the striate cortex of the cat. Psychophysicists were fabulously successful at corroborating the notion, accumulating heaps of evidence in its favor. Unfortunately, as Graham (2011) points out, the evidence generated had been interpreted in terms of a visual system that consisted solely of V1 (which was supposed to contain the orientation/frequency detectors), while the system was later understood to be far more complex. The stimuli that had been interpreted as tapping into V1 had somehow been ignored by neurons in V2, V3, V4, etc.! In these circumstances, Graham considers the "success" of the psychophysical program as something akin to "magic," and decides to explain it by arguing that, in the case of very simple stimuli, the brain becomes "transparent" down to the lower levels. Earlier, Teller (1984) had censured such attitudes as examples of an untenable "nothing mucks it up proviso." Below is the relevant passage from Graham (2011):

      "The simple multiple-analyzers model shown in the top panel of Fig. 1 was and is a very good account, qualitatively and quantitatively, of the results of psychophysical experiments using near-threshold contrasts . And by 1985 there were hundreds of published papers each typically with many such experiments. It was quite clear by that time, however, that area V1 was only one of 10 or more different areas in the cortex devoted to vision. ...The success of this simple multiple-analyzers model seemed almost magical therefore. How could a model account for so many experimental results when it represented most areas of the visual cortex and the whole rest of the brain by a simple decision rule? One possible explanation of the magic is this: In response to near-threshold patterns, only a small proportion of the analyzers are being stimulated above their baseline. Perhaps this sparseness of information going upstream limits the kinds of processing that thehigher levels can do, and limits them to being described by simple decision rules because such rules may be close to optimal given the sparseness. It is as if the near-threshold experiments made all higher levels of visual processing transparent, therefore allowing the properties of the low-level analyzers to be seen." Or, as is well-known, it's always possible to arrange experiments, including employing a very restricted set of stimuli, so as to achieve consistency with any hypothesis.

      I suppose the fairly brief exposures used in the present experiment are meant to qualify it for transparency status, unless the authors have their own views about the anatomical loci of the supposed spatial frequency filters; either way, all of this really should be discussed and defended explicitly.

      The idea that the visual system performs a Fourier analysis, i.e., analyzes the visual stimulation into spatial frequency patterns, is absurd for a number of reasons. First, the retinal stimulation is initially point stimulation, a mosaic of points (photoreceptors) whose activities at any given moment depend on the intensity/wavelength of the photons striking them. To organize this mosaic into a set of images based on spatial frequency is therefore not first a problem of detection, but of organization. Organization by spatial frequency in no way furthers, but rather impedes, the task that the visual system actually has to achieve: grouping points of the mosaic such that the boundaries of those groups correspond to the boundaries of the objects in the visual field. So it is not only incredibly difficult (no mechanism has been proposed); it is a pointless diversion. Even if this were not the case, the need for a "transparency hypothesis" renders it absurd on that basis alone. There is no credible evidence in its favor.
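
      To be clear about what is being posited: as mere image processing, the decomposition is trivial to state, which is precisely why its computational convenience says nothing about neural plausibility. A minimal sketch of a radial band-pass decomposition in the Fourier domain (my illustration, with assumed band edges, not anything from the paper):

      ```python
      import numpy as np

      def frequency_bands(image, edges=(0.02, 0.08, 0.25)):
          """Split a grayscale image into spatial-frequency bands by masking
          its 2-D Fourier transform with radial annuli (band edges in
          cycles/pixel), then inverting each band. The returned bands sum
          back to the original image."""
          F = np.fft.fftshift(np.fft.fft2(image))
          fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
          fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
          radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
          bounds = (0.0, *edges, np.inf)
          return [np.real(np.fft.ifft2(np.fft.ifftshift(
                      F * ((radius >= lo) & (radius < hi)))))
                  for lo, hi in zip(bounds[:-1], bounds[1:])]
      ```

      Note what the sketch does not contain: any grouping of points into object boundaries. Each band is just another full-field image, so the organizational problem described above is left entirely untouched.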

      Other seemingly absurd claims include the statement that "it recently has been shown that the availability of horizontal structure underlies the face-specific N170 response...." Is there such a thing as an image lacking "horizontal structure," or lacking structure in any direction? In other words, can we test this statement by controlling for "horizontal structure"? The term is too general to serve as a substitute for the much more specific and theoretically baseless image manipulation to which it refers.
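
      The manipulation that the term stands in for does have a concrete operational form in this literature: retaining only the Fourier energy within an orientation wedge. Roughly as follows (my sketch, with an assumed 20-degree bandwidth; the paper's actual filter parameters are not reproduced here):

      ```python
      import numpy as np

      def keep_orientation(image, theta_deg, bandwidth_deg=20):
          """Retain only image structure oriented within +/- bandwidth_deg of
          theta_deg (0 = horizontal structure) by masking a wedge of the 2-D
          Fourier spectrum. Structure oriented at theta in the image carries
          energy at theta + 90 degrees in the spectrum."""
          F = np.fft.fftshift(np.fft.fft2(image))
          fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
          fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
          FY, FX = np.meshgrid(fy, fx, indexing="ij")
          angle = np.degrees(np.arctan2(FY, FX))
          wedge_centre = (theta_deg + 90) % 180
          # angular distance on the 180-degree orientation circle
          dist = np.abs(((angle - wedge_centre) + 90) % 180 - 90)
          mask = dist <= bandwidth_deg
          mask[image.shape[0] // 2, image.shape[1] // 2] = True  # keep DC term
          return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
      ```

      Setting `theta_deg=90` gives the "vertical structure" counterpart. The point stands: virtually any natural image has energy at every orientation, so "lacking horizontal structure" can only ever mean "attenuated by this particular wedge filter," which is why the general term cannot substitute for the specific manipulation.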

      Another problem is sampling. With complex stimuli, the number of potential confounds is large. Not only is the sample of stimuli used here small; the authors also don't indicate that the stimuli were randomly selected, only that they were "selected." On what basis? It seems to me that faces with well-defined eyebrows, for example, would be more likely to produce the desired results, given that the vertical filters seem to make the eyebrows disappear in the sample provided in the article.

      I agree that familiarity makes perceptual tasks easier, and even that we notice relationships across the vertical axis more easily than across the horizontal, but the present experiment has nothing to do with that.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
