1 Matching Annotation
  1. Jul 2018
    1. On 2016 Jun 24, Lydia Maniatis commented:

      The authors do two things in this study:

First, they point out that past studies on “constancy” have been hopelessly confounded, owing (a) to the condition-sensitivity of, and ambiguity in, what is actually perceived, and (b) to questions that confuse observers because they are vague, ambiguous, unintelligible, or unanswerable on the basis of the percept, forcing respondents to guess at the intended answer. As a result, the designers of these studies have often generated incoherent data and proffered vague speculations as to the reasons for the randomness of their results.

Second, as though teaching what to avoid by example, they produce yet another study embodying all of the problems they describe. Using arbitrary sets of stimuli, they ask an arbitrary set of questions of varying clarity and answerability-on-the-basis-of-the-percept, and they generate the typically heterogeneous, highly variable set of outcomes, accompanied by the usual vague and non-committal discussion. (The conditions and questions are arbitrary in the sense that we could easily produce a very different set of outcomes by changing the colors of the stimuli (especially) or the questions asked (less importantly).) Thus, the only possible value of these experiments would be to show the condition-dependence of the outcomes. But this was an already established fact, and one, furthermore, that any experimenter in any field should be aware of; it is the reason that planning an experiment requires careful, theory-guided control of conditions.

The authors make no attempt to hide the fact that some of the questions they ask participants cannot be answered by referring to the percept. For example, participants are asked about some physical characteristic of the stimulus, which is, of course, inaccessible to the human visual system and unavailable in the conscious percept. In these cases, we are not studying perception of the color of surfaces, but a different kind of problem-solving. The authors refer to answers “based on reasoning.” If we're interested in studying color perception, then the simple answer would be not to use this type of question. The authors seem to agree: “Although we believe that the question of how subjects can reason from their percepts is interesting in its own right, we think it is a different question from how objects appear. Our view is that instructional effects are telling us about the former, and that to get at the latter neutral instructions are the most likely to succeed...In summary, our results suggest that certain types of instructions cause subjects to employ strategies based on explicit reasoning—which are grounded in their perceptions and developed using the information provided in the instructions and training—to achieve the response they believe is requested by the experimenter.” This was all clearly known on the basis of prior experiments, as described in the introduction.

So, at any rate, the investigators express an interest in what is actually perceived by observers. But what is the question they're interested in answering? This is the real problem. The question, or goal, seems to be, “How do we measure color constancy?” But we don't measure things for measurement's sake. The natural follow-up is, “Why do we want to measure color constancy?” What is the theoretical goal, the question we want to answer? This matters because we can never arrive at some kind of universal, general number for a phenomenon that is totally condition-dependent. But I am unable to discern, in these authors' work, any indication of their purpose in making these highly unreliable measurements.

Color constancy refers to the fact that, sometimes, a surface “x” will continue to appear the same color even as the kind and intensity of the light it reflects to the eye changes. On the other hand, it is equally possible for that same surface to appear to change color even as the kind and intensity of the light it reflects to the eye remains the same. In both cases – constancy and inconstancy – the outcome depends on the total light projecting to the eye and the way the visual system organizes it. In both cases – constancy and inconstancy – the visual principles mediating the outcome are the same.
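
      To make this concrete: the light a surface sends to the eye is the wavelength-by-wavelength product of illuminant and reflectance, so the proximal stimulus by itself cannot settle what is surface and what is illumination. Below is a minimal numerical sketch of that ambiguity, with made-up spectra (not stimuli from the paper):

      ```python
      # Two different surface/illuminant pairs constructed to send the eye
      # the identical light. Spectra are invented for illustration.
      import numpy as np

      wavelengths = np.arange(400, 701, 10)             # nm, coarse sampling

      E1 = np.ones(wavelengths.shape)                   # flat illuminant (scene A)
      R1 = 0.2 + 0.6 * (wavelengths - 400) / 300.0      # surface in scene A

      E2 = 2.0 - (wavelengths - 400) / 300.0            # bluish illuminant (scene B)
      R2 = (E1 * R1) / E2                               # a different, valid surface
                                                        # (reflectance stays in [0, 1])

      # By construction E2 * R2 == E1 * R1: identical light at the eye.
      assert np.allclose(E1 * R1, E2 * R2)
      print("Same proximal stimulus from two different scenes.")
      ```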

The authors, in this and in previous studies, “measure constancy.” Sometimes it's higher, sometimes it's lower; it's condition-dependent. Even if they were actually measuring “constancy” in the sense of testing how an actually stable surface behaves under varying conditions, what would be the value of these data? We already know that constancy is condition-dependent, that it is often good or good enough, and that it can fail under certain well-understood conditions. (That these conditions are fairly well understood is the reason the authors possess a graphics program for simulating “constancy” effects.) How does simply measuring this rise and fall under random conditions (random because not guided by theory, so the results won't help clarify any particular theoretical question) provide any useful information? What, in short, is the point?
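
      (For readers unfamiliar with such programs: the core of a “constancy” simulation is just forward rendering – hold a reflectance fixed, change the simulated illuminant, and compute the receptor responses that result. The sketch below uses crude Gaussian stand-ins for the cone sensitivities, purely for illustration; it is not the authors' rendering pipeline.)

      ```python
      # One fixed surface rendered under two illuminants yields different
      # cone responses -- the raw material of any "constancy" measurement.
      # Sensitivities and spectra are invented for illustration.
      import numpy as np

      w = np.arange(400, 701, 10)                       # nm

      def gauss(mu, sd):
          """Crude Gaussian stand-in for a cone sensitivity curve."""
          return np.exp(-0.5 * ((w - mu) / sd) ** 2)

      cones = np.stack([gauss(570, 50), gauss(540, 45), gauss(445, 30)])  # L, M, S

      R = 0.5 + 0.3 * np.sin((w - 400) / 300.0 * np.pi)  # one fixed surface
      E_neutral = np.ones(w.shape)                       # neutral illuminant
      E_reddish = 0.3 + 1.4 * (w - 400) / 300.0          # reddish illuminant

      print(cones @ (E_neutral * R))                     # responses under neutral light
      print(cones @ (E_reddish * R))                     # same surface, different responses
      ```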

Yet another twist in the plot is that, in their experiments, the authors aren't actually measuring constancy. Because we are talking about simulations, in order to exhibit “constancy” observers often need to judge two surfaces with different spectral characteristics as being the same. This criterion is based on the investigators' assumptions as to which surfaces should look the same under different conditions/spectral properties. But this doesn't make sense. What does it mean, for example, if an observer returns “low constancy” results? It means that the conditions required for two actually spectrally different surfaces to appear the same simply didn't hold – in other words, that the investigators' assumptions as to the conditions that should produce this “constancy” result didn't hold. If the different stimuli were designed to actually test assumptions about the specific conditions that do or do not produce constancy, fine. But this is not the case. The stimuli are simply and crudely labelled “simplified” and “more realistic,” which means nothing with respect to constancy-inducing conditions. Both kinds of stimuli can produce any degree of “constancy” or “inconstancy” you want.
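
      For reference, the index such studies typically report is an Arend-style ratio (that these authors use exactly this formula is my assumption): the observer's match is located between a “zero constancy” prediction (the chromaticity of the light the test patch actually reflects) and a “full constancy” prediction (where the same reflectance would plot under the test illuminant), and the index is one minus the relative distance from the full-constancy point. A minimal sketch, with hypothetical coordinates:

      ```python
      # Arend-style constancy index: 1 at the full-constancy prediction,
      # 0 at the zero-constancy prediction, in between for partial constancy.
      import numpy as np

      def constancy_index(match, zero_pt, full_pt):
          match, zero_pt, full_pt = (np.asarray(p, float) for p in (match, zero_pt, full_pt))
          a = np.linalg.norm(full_pt - zero_pt)    # span between the two predictions
          b = np.linalg.norm(match - full_pt)      # miss relative to full constancy
          return 1.0 - b / a

      # hypothetical (u', v') chromaticities, for illustration only
      print(constancy_index(match=(0.21, 0.47),
                            zero_pt=(0.19, 0.45),
                            full_pt=(0.23, 0.49)))  # -> 0.5, "partial constancy"
      ```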

In short, we know that color perception is condition-sensitive and that some questions may fail to tap percepts; illustrating this yet again is the most that this experiment can be said to have accomplished.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
