Matching Annotations
  1. Jul 2018
    1. On 2016 Oct 11, Lydia Maniatis commented:

      The authors of this study transparently misrepresent and/or misunderstand the theoretical situation with respect to the subject they are investigating.

The most blatant oversight involves the failure to note that the type of stimulus being used here – a checkerboard configuration – was shown decades ago to reduce, rather than enhance, perceived contrast in adjacent surfaces (see demo here: https://www.researchgate.net/figure/225158523_fig5_Fig-5-The-DeValois-and-DeValois-Checkerboard-stimulus). This fact was similarly overlooked by Maertens, Wichmann and Shapley (2015), who simply assumed the opposite.

The checkerboard configuration is clearly a special and unusual case, in the sense that it elicits the impression of adjacent surfaces rather than the more typical experience of surfaces on top of backgrounds (figure/ground relationships). It is unlikely that an ad hoc model that involves averaging of luminances would work for both the former and the latter sets of conditions.*

      The Missing Segmentation Step

In general, Wiebel et al. (2016) have chosen not to address the fundamental factor mediating color and lightness, i.e., the structure of the stimulus and the principles by which the visual system organizes the retinal projection to form the percept. Yet Zeiner and Maertens (2014), whose “normalized contrast model” the authors are applying here, acknowledged this gap in retrospect:

      “The important piece of information that is still missing, and which we secretly inserted, was the knowledge about regions of different contrast range. Here we simply used the values that we knew to originate from the checks within the regions corresponding to plain view, shadow, or transparent media, but for a model to be applicable to any image this segmentation step still needs to be elucidated.”

But the entire problem - the ability to predict perceived lightness of any surface - lies precisely in the "segmentation step." Furthermore, since they haven't taken the connection of structure to assimilation and contrast into account, it is, as noted above, doubtful that their model would work in general, even if they were to similarly sneak segmentation in through the back door.
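The circularity can be made concrete with a toy sketch. The following is a hypothetical simplification of a region-normalized contrast computation, not the paper's actual formula; all luminance values and region labels are invented for illustration. The key point is in the `regions` array: the segmentation the model needs is simply typed in by hand, which is exactly the "secretly inserted" step the quoted passage concedes.

```python
import numpy as np

# Toy luminance values for checks (arbitrary units) -- invented, not
# the actual stimulus values from the paper.
luminances = np.array([10.0, 40.0, 90.0,   # checks in plain view
                        5.0, 15.0, 25.0])  # checks behind a dark transparency

# The "segmentation step": region labels inserted by hand, exactly the
# move the comment criticizes. A general model would have to recover
# these labels from the image itself.
regions = np.array([0, 0, 0, 1, 1, 1])

def normalized_contrast(lum, reg):
    """Per-check contrast relative to its region's mean, scaled by the
    region's Michelson contrast range. A hypothetical simplification of
    the Zeiner & Maertens (2014) model, for illustration only."""
    out = np.empty_like(lum)
    for r in np.unique(reg):
        m = reg == r
        lo, hi = lum[m].min(), lum[m].max()
        mean = lum[m].mean()
        rng = (hi - lo) / (hi + lo)  # region's contrast range
        out[m] = ((lum[m] - mean) / mean) / rng
    return out

print(normalized_contrast(luminances, regions))
```

With the labels supplied, the checks behind the toy "transparency" come out normalized to the same scale as the plain-view checks; delete the `regions` array and the computation cannot even begin, which is the point at issue.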

      Wiebel, Singh and Maertens sidestep the issue, simply assuming “that the visual system is sensitive to differences in contrast range and can use them to detect regions seen in plain view, because they have the highest contrast range.”

The problem, of course, is that the contrast ranges of different “regions” of an image depend on how the image is divided up in the perceptual process; again, they depend on the “segmentation step” that Wiebel, Singh and Maertens, like Zeiner and Maertens (2014), sneak in unanalyzed.

The failure to address structure and principles of organization is also reflected in the fact that their definition of contrast depends on comparing luminances within an arbitrary, local area of the total image, rather than across everything the observer can see, both on and off the screen. Again, how this larger image is partitioned depends on the “segmentation” step.
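The dependence on how the image is carved up can be shown with a second toy sketch (all luminance values and window boundaries below are invented for illustration). The same one-dimensional "image", divided in two different but equally arbitrary ways, nominates two different regions as the highest-contrast-range, "plain view" region:

```python
import numpy as np

# Toy 1-D "image" of luminances; values are hypothetical.
image = np.array([50., 55., 90., 10., 60., 58.])

def contrast_range(vals):
    """Michelson contrast range: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (vals.max() - vals.min()) / (vals.max() + vals.min())

# Two arbitrary segmentations of the same image.
split_a = [image[:3], image[3:]]   # boundary after the 3rd check
split_b = [image[:4], image[4:]]   # boundary after the 4th check

for name, split in [("A", split_a), ("B", split_b)]:
    ranges = [contrast_range(s) for s in split]
    print(name, [round(r, 3) for r in ranges],
          "-> 'plain view' region:", int(np.argmax(ranges)))
```

Under split A the second region has the larger contrast range; under split B the first does. The "region seen in plain view" is thus not a property the rule can read off the image; it is an artifact of the segmentation assumed beforehand.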

      Model Failures Can't Be Redeemed By Ad Hoc Successes

The “model” the authors leave us with is not only ad hoc, it fails tests within this study. The abstract indicates that the Zeiner and Maertens model fails even for the narrow (and very unusual) set of conditions chosen, but that “model extensions” “fit the observed data well.” In the discussion we find that the fits weren’t all that good: “For the normalized contrast model, significant differences between model predictions and observed data were shown for Reflectance 6 (p < 0.05) in the light transparency. For the dark transparency, significant differences were found for Reflectances 3, 5, 6, 8, and 9 (p < 0.05).” In other words, even for the highly selective conditions used, fits were hit-and-miss. The authors don’t have the theoretical tools to explain why (though the tools are arguably available). But they speculate, and make the following very odd statement.

In contrasting their “normalized contrast model” with the “contrast ratio model,” they note that one works better for the “light transparency” conditions and the other for the “dark transparency” conditions. This, they argue, is due to the anchoring assumptions of each model. They propose to create new stimuli to “decide between the two models.”

      But each of the models has already failed. These failures won’t be undone by any future “successes.” Any such successes would obviously be ad hoc, another theoretically and factually agnostic exercise such as the one we are discussing. No amount of such structure-blind, ad hoc model-building could take us even to the point of knowledge already available, though ignored.

      Short Summary: 1. The authors construct ad hoc models of lightness perception without taking into account the fundamental mediator of lightness, i.e. stimulus structure and the principles by which the visual system organizes the retinal projection.

2. Their model fails a number of tests in this study, but they propose to keep on testing it, in order to “decide” between it and an alternative, which has also failed a number of the tests put to it in this study.

It is not clear what the purpose is in testing two failed, ad hoc models. Clearly, they are both capable of “succeeding” and of failing in an infinite number of cases.

*The assimilation that we see in the checkerboard demo is linked to the fact that contrast enhancement is highly correlated with perceived figure-ground relationships, making the borders of the figure more visually salient. Perceived figure-ground relationships in turn hinge on conditions indicating figural overlap, such as intersecting continuous edges of potentially closed figures. In the case of the checkerboard, the perceptual relationship between squares is one of adjacency rather than overlap. Luminance changes interpreted as occurring within a single surface tend to produce assimilation, as for example in the case of the Koffka ring.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
