2 Matching Annotations
  1. Jul 2018
    1. On 2015 Nov 08, Lydia Maniatis commented:

      As was the case with the authors' previous article this year (Testing the role of luminance edges..), this article should be read starting with the last section of the discussion (pp. 14-15). It is here that readers are filled in on methodological problems so severe that they (almost) render any theoretical criticism of the study moot.

      We learn that pilot experiments showed that there was a “high variability in responses even between experienced observers.” In some conditions the “test patch” was so “hard to detect” that “some observers looked at the stimulus for a very long time trying to detect the test patch, while others simply matched the lightness of the grating bar.” In other words, in some conditions the “test patch” was for some viewers a purely theoretical construct of the authors.

      The authors then attempted to make the task “more objective” (meaning they tried to make the data appear less messy) by imposing a forced-choice paradigm, but this couldn't solve the problem of the unseen target. On further testing, the authors found that “reducing presentation time...led to more similar behavior across subjects, so [they] opted for a lightness-matching paradigm with short presentation times.” Does “similar behavior” translate into comparable and/or interpretable behavior, or are subjects using an unknown strategy to achieve a common solution to an awkward task? What are they perceiving? Are they all perceiving the same thing? The authors don't know and don't seem to care, as long as the data seem consistent.

      Unfortunately, despite all these manipulations, the data remained messy: “...two of our 11 observers showed behavior very different from the others, and the magnitude of both White's illusion and the effect of noise differed widely across observers...”

      The authors understand that “One possible explanation for these difficulties is that the noise may make the matching task perceptually ill-defined.” In addition to ambiguity, it appears that “the noise can appear as a layer of clouds or haze in front of a homogeneous grating...At very high noise frequencies, the noise is so fine-grained that it is not difficult to get an impression of the average lightness of the test patch, which...may appear textured...At intermediate noise frequencies...it can be difficult even to detect the test patch as a separate region, which makes the matching most difficult.” Most difficult, indeed.

      Three related facts should be very clear from the above description.

      1) The data are uninterpretable and thus have no theoretical significance.

      2) The descriptor “noise frequency” is wholly inadequate to describe a manipulation that produces qualitatively different effects, ranging from haze/clouds to textured surfaces to invisibility. It is theoretically and practically meaningless.

      It is something like adding lines to a square to produce a cube percept, and describing the manipulation simply as “adding an intermediate number of lines”; or adding various chemical substances to a solution and describing them on the basis of their color. Such theoretically blind (a term that in perception science has a literal as well as metaphorical meaning) manipulations are antithetical to scientific investigation.

      3) The first of the effects in (2), transparent layers, cannot, to my knowledge (and please correct me if I'm wrong), currently be accounted for by any of the ad hoc spatial filtering models being tested, all of which are “successful” for a narrow set of stimulus types, but cannot begin to handle most perceptual lightness effects. Thus, they are already extensively falsified. The fact that contemporary perception science seems to give standing to failed models incapable of rationalizing their failures, even in principle, should not be allowed to obscure this fact.

      In sum, this study contains no usable data and has no necessary theoretical purpose.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
