On 2016 Jan 14, Lydia Maniatis commented:
Conceptual problems: The authors argue that when the visual system gets things wrong, the problem is that the scene is “impoverished” and “unnatural.” Such terms are not specific enough to be evaluated scientifically. If we use the term “natural” to mean created by natural processes, then the stimuli used in this study were not natural. If the authors mean more than that, they need both to specify what they mean and to control experimentally for the relevant factors. Otherwise, the definition of “natural” becomes tautological – any scene that produces more or less veridical percepts is defined as “natural.” A non-tautological step forward requires examining cases where vision fails, discerning a potential distinction between the features of the cases where it succeeds and the cases where it fails, and testing the assumption that the distinguishing factors matter. Simply defining conditions where vision fails as “impoverished” and “unnatural” doesn't allow such a test (and/or the claims are easily falsified); it leads to confusion, since, for example, it would be difficult to argue that a photograph is an “impoverished” stimulus, or, again, that man-made objects are “natural.” If the authors are suggesting that all types of asymmetry compromise the accuracy of aspects of the 3D percept, then they need to rationalize the claim theoretically and/or frame it precisely enough that it can be tested. (I'm quite sure any clearly-framed symmetry claim can be easily falsified.)
It is surprising that the authors claim not to understand the value of illusions in studying perception. The value lies in testing hypotheses about the principles underlying visual performance. For example, the pictorial perception of 3D shape and the Ames room contradict Marr's 2.5D sketch concept, i.e. the primacy of depth maps in the perception of the third dimension.
The confusion as to the role of illusions is correlated with an epistemological confusion as to the use and logic of falsification. “It follows,” write the authors, “that with the types of models we are using, falsification is not your ‘best friend’ as it often is elsewhere. In 3D vision, there is usually no model you can turn to, so there is nothing to falsify. Simply put, scientific discovery in 3D vision is not accomplished by using an ANOVA or Bayesian tests to reject some hypotheses. Discovery is accomplished by correctly guessing which cost function is actually being used by the visual system.” But to corroborate a guess (the assumptions incorporated in a model) we must test it, and the test may prove it wrong, i.e. may falsify it. This does not mean testing merely by using ANOVAs to show significance, or lack thereof, in cases where the outcome is foggy, or “Bayesian” tests (to show...whatever), but actually showing that our assumptions – e.g. that binocular vision is not necessary or sufficient for the perception of 3D shape – do, or do not, hold up to rigorous testing.
Finally, it is not made clear which “Gestalt-like” constraints are being referred to. (Organisational principles are necessary for all perception; is more specificity than this intended?)
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.