6 Matching Annotations
  1. Jul 2018
    1. On 2017 May 30, Lydia Maniatis commented:

Comment 3: The speciousness of the interpretation may perhaps be better grasped if we imagine that the displays whose centroids were judged contained patches differing in ways other than color. If, for example, some had been shaped like rectangles and others like stars, would we have been justified in concluding that we were measuring the activities of rectangle and star "filters"? Or if some had been x's and some o's, etc. Color might seem like a simpler property than shape, but given that it is wholly mediated by the organization of the visual field and the resulting shape properties, this intuition is in error (the tendency of vision science publications to refer to color as a "low-level" property notwithstanding).

      In fact, while we're talking about shape, there can be little doubt that the arrangement (e.g. symmetrical vs asymmetrical) of the differently colored patches in the present type of experiment will affect the accuracy of the responses. The effects might, perhaps, be averaged out, but this doesn't mean that these "high-level" effects of organization aren't mediating the purportedly "low-level" effects of color at all times.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 29, Lydia Maniatis commented:

      Comment 2:

      There seems to have been a kind of natural selection in vision science (and of course not only vision science) in which the following practice has come to dominate: The results of ad hoc measurements made under arbitrary (poorly rationalized) conditions and fitted to “models” with the help of ad hoc post hoc mathematical adjustments are treated as though they amounted to, or could amount to, functional principles.

      Thus, here, the data generated by a particular task are reified; the data patterns are labelled “attention filters,” and the latter are treated as though they corresponded to a fundamental principle of visual processing. But principles have generality, while the "attention filter" moniker is here applied in a strictly ad hoc fashion:

First, the model is based on an arbitrary definition of color in terms of isolated "colors" on a "neutral" background (i.e. conditions producing the perception of particular color patches on a neutral background), whose attributes we are told are “fully described by the relative stimulation of long, medium and short wavelength sensitive retinal cones.” These conditions and, thus, the specific patterns of stimulation correlated with them, constitute only one of an infinite number of possible conditions and thus of patterns of stimulation. (The naive equating of cone activity with color perception is a manifestation of the conceptual problems discussed in my earlier comment.)

      Second, the model is ad hoc (“particularized”); “The inference process is illustrated by the model of selective attention illustrated in Fig. 1B particularized for the present experiments.” What would the generalized form of the model look like?

Third, the results only apply to individual subject/context combinations: “The model’s optimally predictive filter f_k(i) is called the observed attention filter. It typically is a very good [post hoc] predictor of a subject’s observed centroid judgments. Therefore, we say for short that f_k(i) is the subject’s attention filter for attending to color C_k in that context.”
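The kind of post hoc fitting at issue can be sketched in a few lines. The following toy simulation (illustrative names and numbers, not the authors' actual procedure or data) recovers a per-color weight vector from simulated centroid judgments by total least squares; it illustrates that such a "filter" is simply whatever weight vector best reproduces a particular subject's responses under a particular set of conditions:

```python
# Toy sketch (hypothetical setup, not the authors' procedure): an "attention
# filter" as a per-color weight vector fit post hoc to a subject's centroid
# judgments by total least squares.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_dots, n_colors = 200, 8, 3

# Each trial: dot positions (1-D for simplicity) and each dot's color index.
positions = rng.uniform(0.0, 1.0, size=(n_trials, n_dots))
colors = rng.integers(0, n_colors, size=(n_trials, n_dots))

# Weights the simulated subject actually applies (attending mainly color 0).
true_f = np.array([1.0, 0.3, 0.3])

# Judged centroid = weighted centroid of the dots, plus response noise.
w = true_f[colors]
judged = (w * positions).sum(axis=1) / w.sum(axis=1)
judged += rng.normal(0.0, 0.01, n_trials)

# Linearize judged = (sum_k f_k s_k) / (sum_k f_k n_k), where s_k and n_k are
# the summed positions and the count of color-k dots on each trial, giving
# (S - judged * N) f = 0; solve for f (up to scale) via the smallest
# singular vector.
S = np.stack([(positions * (colors == k)).sum(axis=1) for k in range(n_colors)], axis=1)
N = np.stack([(colors == k).sum(axis=1) for k in range(n_colors)], axis=1)
A = S - judged[:, None] * N
f_hat = np.abs(np.linalg.svd(A)[2][-1])  # weights are nonnegative here
f_hat /= f_hat.max()                     # fix the arbitrary scale

print(np.round(f_hat, 2))  # approximately recovers true_f = [1.0, 0.3, 0.3]
```

Change the stimulus statistics, the observer, or the response noise and a somewhat different f_hat comes out; nothing in the fit itself certifies the recovered weights as a general principle.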

It is the case that different colors vary in their salience. We could perform any number of experiments under any number of conditions with any number of observers, and generate various numbers that reflected this fact. Our experiments would, hopefully, succeed in reproducing the general facts, but the actual numbers would differ. Unless underpinned by potentially informative rationalizations guiding experimental conditions, none of these quantifications would carry any more theoretical weight than any of the others (the value added via quantification would be zero). There is, in other words, nothing special about the numbers generated by Sun et al. (2017). They make no testable claims; their specific "predictions" are all post hoc. Their results are entirely self-referential.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 May 27, Lydia Maniatis commented:

      "The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA)."

There is something very wrong here. There are no visual images in the eyes, if by images we mean organized percepts: shaped figures with features such as colors, relative locations, etc. Such things are the products of the whole perceptual process, i.e. of a process that begins with the point stimulation of photoreceptors by light striking the retina, setting into motion dynamic interactions among the integrated neural elements of the visual system and ultimately leading to conscious percepts.

      Thus, the mechanism being referenced (if it exists) is selecting from features of the conscious products of these perceptual processes, not from the initial point information or early stages of processing in the retina.

      "...a color-attention filter describes the relative effectiveness with which each color in the retinal input ultimately influences performance."

      Again, there are no colors in the retinal input, color being a perceptual property of the organized output. So we are missing a retinal-state-based description of what the proposed "filters" are supposed to be attending to. This is a problem since, as is well-known, the physical (wavelength) correlate of any perceived color can have pretty much any composition, because what is perceived locally is contingent on the global context.
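The point about wavelength composition can be made concrete with a toy linear model (made-up sensitivity curves, not real cone fundamentals): because cone responses are a 3-dimensional projection of a much higher-dimensional spectrum, infinitely many physically different spectra produce identical cone outputs.

```python
# Toy illustration (random made-up sensitivities, not real cone fundamentals):
# cone responses are a linear projection of a high-dimensional spectrum onto
# three sensitivity curves, so physically different spectra (metamers) can
# yield identical responses.
import numpy as np

rng = np.random.default_rng(1)
n_bands = 31                          # e.g. 400-700 nm in 10 nm steps
M = rng.uniform(0, 1, (3, n_bands))   # made-up L, M, S sensitivity curves

spectrum = rng.uniform(0, 1, n_bands)

# Any vector in the nullspace of M can be added to the spectrum without
# changing the cone output; the nullspace here has n_bands - 3 dimensions.
null_dir = np.linalg.svd(M)[2][-1]    # one nullspace direction
metamer = spectrum + 0.1 * null_dir

print(np.allclose(M @ spectrum, M @ metamer))  # True: identical cone responses
print(np.allclose(spectrum, metamer))          # False: different spectra
```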

The use of the term "filter" here seems inappropriate, its misuse linked to the failure to distinguish between the proximal stimulation and perceptual facts. The implication seems to be that we are dealing with a constraint on what will be perceived, whereas on the contrary we are dealing with selection from among available perceptual facts.

The theoretical significance of measuring jnds (just-noticeable differences) is not clear, as they are known to be condition-sensitive in a way that is not predictable on the basis of available theory. The failure to discriminate between physical and perceptual facts also means it isn't clear which of these potential differences is being referred to.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
