2 Matching Annotations
  1. Jul 2018
    1. On 2015 Nov 22, Lydia Maniatis commented:

      Another issue that deserves consideration is the distinction between perceptual salience and "detection." The visual system "sees" the data, but the percept is a selective description of those data. The so-called laws of perceptual organisation are an attempt to determine how the data are exploited in constructing a (for practical purposes) veridical representation of the world.

      "Detection" is not the right paradigm for describing the contents of perception, any more than "discovery" is the right description of the scientific process (with respect to discovering (I can't help it) the laws of nature. The phenomena of subjective contours and amodal completions should, on the basis of a detection paradigm, be errors, failure to detect the absence of edges/surfaces.

      Conversely, as the Gestaltists have amply shown, the mere presence of a contour in the stimulus does not mean it will be perceived. But this does not mean it was not "detected."

      We can demonstrate this by noting that a change in a distant part of the stimulus may make a previously unseen "contour" salient. A very simple example: if we arrange four dots so that they mark the four corners of a square, we will tend to perceive a square shape, i.e. we will mentally connect the dots to form a square. We will not tend to connect the dots diagonally. But if we remove one of the "corners," one of these diagonal connections becomes salient, as we will typically perceive a triangle shape.

      It's the same with science. Natural laws aren't discovered, they're created with the goal of making them match the facts as closely as possible. Just as in the example above, a chance observation can cause an entire theoretical structure to rearrange.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 03, Lydia Maniatis commented:

      As is the case with “Bayesians” in general, the authors here perform a sleight of hand: they make a highly predictable prediction, to which they attach some type of “Bayesian” rationale; the prediction is borne out; and they claim (invalidly) that their “Bayesian framework” has been validated.

      The authors' proposition is that increasing complexity of a "contour" (via increasing bending) decreases its detectability. We are repeatedly told that this is a well-known fact:

      “A number of recent studies have demonstrated that...detection performance declines with larger turning angles” (p. 1).

      “Of course...it is well-known that contour curvature decreases detectability...” (p. 5).

      “Contour curvature is well known to influence human detection of contours” (p. 12).

      Lo and behold, the authors' results show that “subjects are sensitive to the complexity of the target contour's shape, as expected under the Bayesian framework and the assumed generative model. This finding gives strong prima facie support for the Bayesian approach...” (p. 8). Thus, our predictable prediction comes out as expected and is treated as corroboration of our fundamentally post-hoc model. This train of thought is analogous to my claiming that invisible angels push the sun up every morning, and considering the coming of the dawn as prima facie evidence for my theological framework. But in fact, my angel hypothesis can only be corroborated by testing its unique component: the existence of invisible angels, which happens to be vague and untestable. Similarly, the results of this study can only validate the “Bayesian framework” by clarifying and testing its uniquely “Bayesian” assumptions (whatever these may be), not by straightforwardly modeling "well-known" facts.
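
      To see why the prediction is essentially built in, consider a minimal sketch of the kind of smoothness prior such generative models typically place on turning angles. This is my own illustration, not the authors' model: the Gaussian form and the spread value are assumptions. Under any prior of this general shape, a contour that bends more receives lower probability by construction, so reduced "detectability" with increased bending is guaranteed before any data are collected.

      ```python
      import numpy as np

      # Minimal "smoothness prior" on a contour's turning angles (illustrative only;
      # the paper's actual generative model may differ in form and parameters).
      SIGMA = np.deg2rad(15.0)  # assumed prior spread of turning angles

      def log_prior(turning_angles_deg):
          """Log-probability of a chain of turning angles under the smoothness prior."""
          a = np.deg2rad(np.asarray(turning_angles_deg, dtype=float))
          return float(np.sum(-0.5 * (a / SIGMA) ** 2 - 0.5 * np.log(2 * np.pi * SIGMA ** 2)))

      straight = [0, 0, 0, 0, 0]      # nearly straight contour
      bendy = [30, -25, 40, -35, 30]  # strongly bending contour

      print(log_prior(straight))  # higher log-probability
      print(log_prior(bendy))     # lower log-probability: less "detectable" by construction
      ```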

      In addition, the authors frame their question in a way that doesn't reflect a real perceptual problem, but is amenable to their “model.” They state that “The detection of coherent objects among noisy backgrounds is an essential function of perceptual organization, allowing the visual system to distinguish discrete, whole forms from random clutter” (p. 1). As I look around me, all I see are discrete, whole forms. None is set against a background of noise. Nor are there long, skinny objects (“open contours”). The problem is framed this way because it corresponds nicely to the authors' “hammer.” The task is an awkward one, not a natural one, akin to exploring motor patterns by tying subjects' legs together. It is not an obvious candidate for modeling natural perceptual processes; we are told that conditions were set so that, in a pilot study, subjects performed near 75% “correct.” Results were, predictably, mixed.

      The claims made are easy to falsify. Specifically, the authors suggest that subjects' performance in detecting the “contours” presented is limited by “noisy” encoding of turning angles, “depressing their performance relative to an ideal observer in possession of perfect image data” (p. 14). The definition of contours as chains of points of identical luminance in a “noisy” background, as well as the reference to “perfect image data,” reveals a certain naiveté on the part of the authors vis-à-vis basic principles of visual perception. Their model would obviously fail to predict the performance of subjects shown images that produce illusory contours or amodal completions, for example. It would not even predict grouping of points with gaps between them – for example, a straight contour composed of identical black pixels alternating with white ones.
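
      For concreteness, the kind of result being modeled can be reproduced with a toy simulation. The sketch below is my own construction, not the authors' stimuli or code: the angle spreads for "contour" and "clutter" chains and the amount of internal noise are assumptions. An ideal observer classifies a chain of turning angles via a likelihood ratio computed on the true angles; a second observer receives the same angles corrupted by internal noise, and its proportion correct is lower. That a noisy observer does worse than a noiseless one is essentially all the predicted effect amounts to, and nothing in such a simulation bears on illusory contours, amodal completion, or grouping across gaps.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy stimulus assumptions (not the authors' displays): a "contour" is a chain of
      # turning angles with small spread; a "clutter" chain has large spread.
      N_ANGLES = 6
      SIGMA_TARGET = np.deg2rad(15.0)    # spread of turning angles in a target contour
      SIGMA_CLUTTER = np.deg2rad(60.0)   # spread of turning angles in a random chain
      SIGMA_INTERNAL = np.deg2rad(20.0)  # hypothesised internal encoding noise

      def log_likelihood_ratio(angles):
          """Log likelihood ratio of 'contour' vs 'clutter' given measured angles."""
          ll_target = np.sum(-0.5 * (angles / SIGMA_TARGET) ** 2) - N_ANGLES * np.log(SIGMA_TARGET)
          ll_clutter = np.sum(-0.5 * (angles / SIGMA_CLUTTER) ** 2) - N_ANGLES * np.log(SIGMA_CLUTTER)
          return ll_target - ll_clutter

      def proportion_correct(internal_noise, n_trials=20000):
          correct = 0
          for _ in range(n_trials):
              is_contour = rng.random() < 0.5
              sigma = SIGMA_TARGET if is_contour else SIGMA_CLUTTER
              true_angles = rng.normal(0.0, sigma, N_ANGLES)
              measured = true_angles + rng.normal(0.0, internal_noise, N_ANGLES)
              says_contour = log_likelihood_ratio(measured) > 0
              correct += int(says_contour == is_contour)
          return correct / n_trials

      print("ideal observer:", proportion_correct(0.0))             # near-perfect with these spreads
      print("noisy observer:", proportion_correct(SIGMA_INTERNAL))  # lower accuracy, as "predicted"
      ```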

      Finally, one of the authors (Feldman) has repeatedly and strenuously argued against the “frequentist” definition of probability (e.g. Feldman, 2015), contrasting it with the belief-based “Bayesian” definition. Here, he inscrutably cross-breeds “frequentism” and “Bayesianism,” referring, for example, to a “Bernoulli Bayesian decision problem” (p. 13). Thus, it is not clear where, for Bayesians, “frequentism” ends and belief begins.

      References

      Feldman, J. (2014). Bayesian models of perception: A tutorial introduction. In J. Wagemans (Ed.), The Oxford Handbook of Perceptual Organization. Oxford University Press.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
