1 Matching Annotation
  1. Jul 2018
    1. On 2016 Dec 28, Lydia Maniatis commented:

      Followers of the school of thought to which the authors of this article belong hold, among other odd beliefs, that visual perception can be studied without reference to form; hence the reference in the title of this paper to "regular sparse micro-patterns." But there are (micro-)patterns and there are (micro-)patterns: do the present conclusions apply to any and all "regular, sparse micro-patterns," or only to selected ones?

      Among this school's other beliefs is the notion that different retinal projections trigger processing at different levels of the visual system, such that, for example, the activities of V1 neurons may be directly discerned in a “simple” percept. These supposed V1 (etc.) signatures, of course, apply only to restricted features of a restricted set of stimuli (e.g. "grid-textures") under restricted contexts. The supposed neural behaviors and their links to perception are simple, involving largely local summation and inhibition.

      The idea that different percepts/features selectively tap different layers of visual processing is not defensible, and no serious attempt has ever been made to defend it. The problem was flagged by Teller (1984), who labeled it the “nothing mucks it up proviso,” highlighting the failure to explain the role of the levels of the visual system (whose processes involve unimaginably complex feedback effects) that are not invoked by a particular “low-level” explanation. With a stunning lack of seriousness, Graham (e.g. 1992; see comments on PubPeer) proposed that under certain conditions the brain becomes transparent through to the lower levels, and contemporary researchers have implicitly embraced this view. The fact is, however, that even the stimuli that are supposed to selectively tap into low-level processes (sine-wave gratings/Gabor patches) produce 3D percepts with impressions of light and shadow; these facts are never addressed by devotees of the transparent brain, whose models take no interest in them and certainly could not handle them.

      The use of “Gabor patches” is a symptom of another untenable assumption: that “low levels” of the visual system perform a Fourier analysis of the luminance structure of the retinal projection at each moment. There is no conceivable reason why the visual system should do this, and no account of how it could; such an analysis would not contribute to the use of luminance patterns to construct a representation of the environment. Nor is there any evidence that it does.
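
      To make concrete what the criticized assumption amounts to, here is a minimal sketch of a linear Gabor “channel” model of the kind this literature presupposes. It is purely illustrative; the function names and parameter values are mine, not the authors', and nothing here is drawn from their paper.

          import numpy as np

          def gabor(size, wavelength, theta, sigma):
              # Cosine-phase Gabor: a sinusoidal carrier windowed by a Gaussian.
              coords = np.arange(size) - (size - 1) / 2.0
              x, y = np.meshgrid(coords, coords)
              xr = x * np.cos(theta) + y * np.sin(theta)  # carrier axis after rotation
              carrier = np.cos(2.0 * np.pi * xr / wavelength)
              envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
              return carrier * envelope

          def channel_responses(image, wavelengths, orientations, sigma=4.0):
              # Linear "response" of each hypothetical frequency/orientation
              # channel: the dot product of the (square) image with the filter.
              size = image.shape[0]
              return {(wl, th): float(np.sum(image * gabor(size, wl, th, sigma)))
                      for wl in wavelengths
                      for th in orientations}

      On this picture, a grating whose wavelength and orientation match a filter yields that channel's largest response; note that the model says nothing about the 3D, light-and-shadow character of the percepts such stimuli actually produce.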

      In addition, it is asserted, with no evidence, that the neural “signal” is “noisy.” This assumption is quite convenient, as the degree of supposed “noise” can be varied ad lib for the purposes of model-fitting. It is not clear how proponents of a “signal-detecting mechanism with noise” conceive of the distinction between neural activity denoting “signal” and neural activity denoting “noise.” In order to describe the percept as the product of “signal” and “noise,” investigators have to define the “signal,” i.e. what should be contained in the percept in the absence of (purely hypothetical) “noise.” But that means that rather than observing how the visual process handles stimulation, they preordain what the percept should be and describe (and "model") deviations as being due to “noise.”
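
      The convenience is easy to demonstrate. In the standard equal-variance signal-detection model, performance depends only on the ratio of “signal” to assumed internal “noise,” so any observed proportion correct can be accommodated after the fact by choosing the noise level. A minimal sketch (the function names are mine; this is the textbook model, not code from the paper):

          import math
          from statistics import NormalDist

          def predicted_pc(signal, sigma):
              # Equal-variance Gaussian model for two-alternative forced choice:
              # d' = signal / sigma; proportion correct = Phi(d' / sqrt(2)).
              d_prime = signal / sigma
              return NormalDist().cdf(d_prime / math.sqrt(2.0))

          def noise_that_fits(signal, observed_pc):
              # Invert the model: for any observed proportion correct
              # (0.5 < pc < 1), some internal-noise level "explains" it.
              d_prime = NormalDist().inv_cdf(observed_pc) * math.sqrt(2.0)
              return signal / d_prime

      For a fixed signal of 1.0, an observer scoring 60% correct is assigned a noise level of about 2.79, and one scoring 90% a noise level of about 0.55; the “noise” parameter absorbs whatever the model fails to predict.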

      Furthermore, observers employed by this school are typically required to make forced, usually binary, choices, such that the form of the data will comply with model assumptions rather than being complicated by what observers actually perceive (and by the need to describe this with precision).
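
      Again, to make this concrete: binary choices reduce, by construction, to counts of “yes” responses per stimulus level, which is exactly the input a standard psychometric-function fit expects; everything else the observer saw is discarded. A minimal sketch of such a fit (hypothetical names, a generic cumulative-Gaussian model, not the authors' code):

          import math
          from statistics import NormalDist

          def neg_log_likelihood(mu, sigma, levels, n_yes, n_trials):
              # Bernoulli likelihood of binary responses under a
              # cumulative-Gaussian psychometric function.
              nll = 0.0
              for x, k, n in zip(levels, n_yes, n_trials):
                  p = min(max(NormalDist(mu, sigma).cdf(x), 1e-9), 1 - 1e-9)
                  nll -= k * math.log(p) + (n - k) * math.log(1 - p)
              return nll

          def fit_by_grid(levels, n_yes, n_trials):
              # Coarse grid search over threshold (mu) and spread (sigma).
              lo, hi = min(levels), max(levels)
              grid = [(lo + i * (hi - lo) / 49, 0.05 + j * 0.04)
                      for i in range(50) for j in range(50)]
              return min(grid, key=lambda ms: neg_log_likelihood(
                  ms[0], ms[1], levels, n_yes, n_trials))

      Whatever the observer's actual experience, the model receives only three lists of numbers, and the fit returns a threshold and a slope.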

      Taken together, the procedures and assumptions employed by Baker and Meese (2016) and many others in the field are very convenient, insofar as “theory” at no point has to come into contact with fact or logic. It is completely bootstrapped, as follows: A model of neural function is constructed, and stimuli are selected/discovered that are amenable to an ad hoc description in terms of this model; aspects of the percepts produced by the stimulus figure (as well as percepts produced by other known figures) that are inconsistent with the model are ignored, as are all the logical problems with the assumptions (many of which Teller (1984), a star of the field, tried to call attention to, with no effect); the stimulus is then recursively treated as evidence for the model. Variants of the restricted set of stimulus types may produce minor inconsistencies with the models, which are then adjusted accordingly, refitted, and so on. (Here, Baker and Meese freely conclude that perception of their stimulus indicates a "mid-level" contribution, as they conceive it.) It is a perfectly self-contained system, but it isn’t science. In fact, I think it is exactly the kind of activity that Popper was trying to propose criteria for excluding from empirical science.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
