1 Matching Annotation
  1. Jul 2018
    1. On 2017 Jan 07, Lydia Maniatis commented:

      “In conclusion, using a psychophysical method, for the first time we showed that the timescale of adaption mechanisms for the mid-level visual areas were substantially slower than those for the early visual areas.”

      Psychophysical methods are amazing. They let you tap into specific levels of the brain, just by requiring observers to press one of two buttons. As you can imagine, some heavy-duty theoretical and empirical preparation has gone into laying the groundwork for such a simple but penetrating method.

      One example of this preparation is the assertion by Graham (1992) that under certain “simple” conditions, the brain becomes transparent, so that the percept is a direct reflection of, e.g., the activity of V1 neurons. (Like bright students, the higher levels can’t be bothered to respond to very boring stimulation.) She concluded this after a subset of a vast number of experiments performed in the very active field had proven “consistent” with the “classical” view of V1 behavior, at a time when V1 was thought to be pretty much all there was (for vision). (The “classical” view was later shown to be premature and inadequate, making the achievement of consistency in this body of work even more impressive.) If one wanted to be ornery, one might compare Graham’s position to saying that we can drop an object into a Rube Goldberg contraption and trigger only the first event in the series, while the other events simply disengage, due to the simplicity of the object – perhaps a simple sinusoidal surface. To be fair, though, the visual system is not as integrated or complex as those darned contraptions.

      The incorporation of this type of syllogism into the interpretation of psychophysical data was duly noted by Teller (1984), who, impressed, dubbed it the “nothing mucks it up proviso.” It has obviously remained a pillar of psychophysical research, then and now.

      The other important proviso is the assumption that the visual system performs a Fourier analysis, or a system of little Fourier analyses, or something, on the image. There is no evidence for or logic to this proviso (e.g., no imaginable functional reason or even remotely adequate practical account), but, in conjunction with the transparency assumption, it becomes a very powerful tool: little sinusoidal patches tap directly into particular neural populations, or “spatial filters,” whose activity may be observed via a perceiving subject’s button tap (and a few dozen other “linking propositions,” methodological choices and number-crunching/modeling choices for which we have to consult each study individually). (There are also certain minor logical problems with the notion of “detectors,” a concept invoked in the present paper; interested readers should consult Teller (1984).)
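      For readers unfamiliar with the jargon, here is a minimal sketch of the idea under criticism, not taken from any of the papers discussed: a “compound” grating is the sum of sinusoidal “component” gratings, and a Fourier transform of the compound recovers exactly the component frequencies. This is the sense in which sinusoidal patches are assumed to “tap” separate spatial-frequency channels. The frequencies (4 and 10 cycles per image) are arbitrary choices for illustration.

```python
import numpy as np

# One spatial dimension, 256 samples across the image.
x = np.linspace(0, 1, 256, endpoint=False)

# Two hypothetical component gratings at different spatial frequencies.
f1, f2 = 4, 10
component1 = np.sin(2 * np.pi * f1 * x)
component2 = np.sin(2 * np.pi * f2 * x)

# The compound grating is simply their superposition.
compound = component1 + component2

# The Fourier spectrum of the compound has energy at exactly the two
# component frequencies -- each assumed (under the proviso) to drive
# its own "spatial filter."
spectrum = np.abs(np.fft.rfft(compound))
peaks = sorted(np.argsort(spectrum)[-2:])
print(peaks)  # -> [4, 10]
```

      Whether anything like this decomposition is what the visual system actually does is, of course, precisely the assumption the comment above disputes.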

      The basic theoretical ground has been so thoroughly packed that there is little reason for authors to explain their rationale before launching into their methods and results. The gist of the matter, as indicated in the brief introduction, is that Hancock and Peirce (2008) “proposed that the exposure to the compound [grating] pattern gave rise to more adaptation in the mid-level visual areas (e.g., V4) than the exposure to the component gratings.” Hancock and Peirce (2008) doubtless had a good reason for so proposing. Mei et al. (2017) extend these proposals, via more gratings, and button presses, to generate even more penetrating proposals. These may become practically testable at some point in the distant future; the rationale, as mentioned, is already well-developed.

      n.b. Due to transparency considerations, results apply to gratings only, either individual or overlapping.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
