2 Matching Annotations
  1. Jul 2018
    1. On 2015 Nov 20, Lydia Maniatis commented:

      This paper has all the hallmarks of the “Bayesian program”: a predictable prediction confirmed, draped over a sketchy, ad hoc (I will dare to say unintelligible) mathematical model structured around a normal distribution, a model then said to have “influenced” the results rather than simply being roughly and post hoc correlated with them, and spiced, here, by the improper discarding of inconvenient datasets.

      To begin with the last point: in Experiment 1, two of the fifteen subjects' datasets were “excluded from analysis.” The brief rationale, “because they performed below 55% correct,” might mislead readers into assuming that these subjects were performing at chance level for the purposes of the experiment. However, although the task did involve a forced choice between the presence and absence of a contour, the question at issue was whether certain features of contours make them more or less detectable. That question could still have been answered with the excluded data: among the contours these subjects did detect, were the features that the authors predicted would lead to more reliable detection over-represented? Given the weak and noisy effects, it's possible that these data would have undermined the story; their removal is, at any rate, highly improper.
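
      To make that suggestion concrete, here is a minimal sketch (mine, not the authors') of the check that could have been run on the excluded observers. The file name and the column names (contour_present, response, predicted_easy) are hypothetical placeholders for whatever trial-level records the authors keep.

      ```python
      # A minimal sketch (not the authors' analysis) of what the excluded
      # observers' data could still show: does hit rate track the features
      # predicted to make contours easier to detect, even below the cutoff?
      # The file name and column names are hypothetical.
      import pandas as pd

      trials = pd.read_csv("excluded_subjects_trials.csv")  # hypothetical trial-level records

      # Contour-present trials only; a "hit" is a correct "present" response.
      present = trials[trials["contour_present"] == 1].copy()
      present["hit"] = (present["response"] == "present").astype(int)

      # Hit rate per excluded subject, split by whether the shape is predicted
      # to be easy (1) or hard (0) to detect.
      hit_rates = (present
                   .groupby(["subject", "predicted_easy"])["hit"]
                   .mean()
                   .unstack("predicted_easy"))
      print(hit_rates)
      print("mean advantage for predicted-easy shapes:", (hit_rates[1] - hit_rates[0]).mean())
      ```

      Even two observers' worth of such a table would show whether the predicted ordering survives below the 55% cutoff; if it does not, the exclusion is doing real work.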

      Based on the data that were used, we are told, as part of the “basic analysis,” that “subjects were significantly better at detecting leaf shapes than animal shapes.” So what? There was no prediction and no experimental question regarding the relative detectability of leaf vs animal shapes. And although this is the first result reported, the authors make no attempt to explain it.

      Let me take a stab at it. Looking at their sample forms, I notice that the leaf shapes are fronto-parallel and symmetrical (or very nearly so), while the animal forms are not: some are side views, some three-quarter views, and none except a fish shape is bilaterally symmetrical. The fish's symmetry is about the horizontal axis, which tends not to be salient for human viewers. This clear and obvious shape factor is directly pertinent to the authors' discussion of local vs global shape properties, yet it is never discussed. Despite these suggestive results, the authors neither test nor refer to symmetry in their artificial-shape condition. (Symmetry is a confound in their Experiment 2, because their least “complex” shape is also the one that is bilaterally symmetrical.)
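
      Symmetry, moreover, is straightforward to quantify, so it is not a factor that had to be left informal. Below is one illustrative (and deliberately crude) measure, not taken from the paper: reflect the contour about a vertical axis through its centroid and see how far the mirrored points fall from the original outline. The contour is assumed to be given as an (N, 2) array of boundary points.

      ```python
      # An illustrative asymmetry score (not the authors' measure): reflect a
      # closed contour about the vertical axis through its centroid and take
      # the mean distance from each mirrored point to the original outline,
      # normalised by the shape's mean radius. Near 0 = highly symmetric.
      import numpy as np
      from scipy.spatial import cKDTree

      def vertical_asymmetry(points: np.ndarray) -> float:
          pts = points - points.mean(axis=0)          # centre on the centroid
          mirrored = pts * np.array([-1.0, 1.0])      # flip x: mirror about the vertical axis
          dists, _ = cKDTree(pts).query(mirrored)     # nearest original point to each mirrored point
          scale = np.linalg.norm(pts, axis=1).mean()  # mean radius, for scale invariance
          return float(dists.mean() / scale)

      # A symmetric diamond scores ~0; a skewed quadrilateral scores higher.
      diamond = np.array([[0, 1], [1, 0], [0, -1], [-1, 0]], dtype=float)
      skewed = np.array([[0, 1], [2, 0], [0, -1], [-0.5, 0]], dtype=float)
      print(vertical_asymmetry(diamond), vertical_asymmetry(skewed))
      ```

      A fuller measure would search over axis orientations (and so would also register the fish's horizontal axis), but even this crude score should separate the near-symmetric leaf silhouettes from most of the animal silhouettes, which is the confound at issue.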

      Fortunately, this oversight doesn't prevent the authors from detecting the well-known fact that the detection of shapes (i.e., the perceptual organization of the visual field) depends on global properties, and that more “complex” (i.e., more discontinuous) shapes are less salient than less complex ones. (Because the shapes are presented in a highly unnatural setting, a field of noise, it is not clear what these particular results imply for more typical conditions. The authors are aware of this and describe their results as showing that “subjects' detection of closed contours in noise is impaired, slightly but reliably...”)

      Despite the prominence of the term, it is not clear that the authors have tested the role of “closed contours,” because they have not included an “open-contour” condition. And, in fact, they don't actually claim to have specifically tested the role of complexity in closed contours: “Our results show that when observers seek a closed contour amid noise, they are influenced by the complexity of the bounded shape.” Given their experimental parameters, there is no difference between this statement and the statement: “Our results show that when observers seek a closed contour amid noise, they are influenced by the complexity of the CONTOUR.” The role of contour complexity was the subject of another recent paper by these authors (http://jov.arvojournals.org/article.aspx?articleid=2291654). As far as the analysis goes, the data offered in the present paper do not appear to allow a differentiation between closed and open contours. The paper is thus effectively a repeat of the previous one.

      The conclusions are trivial in the sense that they corroborate one of the most well-corroborated claims in visual perception: “Our results corroborate the Gestalt view that a complete understanding of contours is not, in and of itself, sufficient to understand whole shapes [I would note that a line (a long skinny object) also has a shape].” So what does the study offer? “The quantification of the complexity of whole shapes...” Given the unnatural task difficulty, the noisy data, the small effect sizes, and the unexplained results, it's not likely that the present effort has any value in this direction. The authors don't dare say that it does, concluding (very) modestly that “...we hope that more attention to the problem of representing whole shapes, and specifically to skeletal representations, will spur the development of a more comprehensive account.” These hopes are not supported by this study.

      Finally, although the authors pay lip service to “global shape,” their models are fundamentally local “and-sum” approaches; their “features” are not truly global properties, such as symmetry or compactness.
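
      For concreteness, compactness is the kind of genuinely global descriptor meant here: the isoperimetric quotient 4πA/P², which equals 1 for a circle, shrinks as the outline grows more convoluted, and depends on the whole closed contour rather than on any local segment. A minimal sketch (mine, not the paper's):

      ```python
      # Compactness (isoperimetric quotient) of a closed polygonal contour:
      # 4*pi*Area / Perimeter^2. It is 1 for a circle and drops toward 0 as
      # the outline grows more convoluted; it is a property of the whole
      # closed outline, not of any contour fragment.
      import numpy as np

      def compactness(points: np.ndarray) -> float:
          x, y = points[:, 0], points[:, 1]
          # Shoelace formula for the area of the closed polygon.
          area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
          # Perimeter: sum of edge lengths, closing the contour.
          perim = np.linalg.norm(np.roll(points, -1, axis=0) - points, axis=1).sum()
          return float(4 * np.pi * area / perim ** 2)

      theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      circle = np.c_[np.cos(theta), np.sin(theta)]
      square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
      print(compactness(circle))  # ~1.0
      print(compactness(square))  # pi/4, about 0.785
      ```

      Because such a score is a ratio of area to squared perimeter, it depends nonlinearly on the whole outline and cannot be decomposed into independent local contributions, which is precisely the contrast with an “and-sum” model that I am drawing.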

      References that should have been included:

      On shape skeletons:

      1. Arnheim, Rudolf. Art and Visual Perception: A Psychology of the Creative Eye. University of California Press, 1954.

      2. Firestone, Chaz, and Brian J. Scholl. "'Please Tap the Shape, Anywhere You Like': Shape Skeletons in Human Vision Revealed by an Exceedingly Simple Measure." Psychological Science (2014). doi:10.1177/0956797613507584.

      On quantification of global shape features:

      Articles by Pizlo and colleagues presenting computational models of shape.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
