1 Matching Annotation
  1. Jul 2018
    1. On 2016 Apr 21, Lydia Maniatis commented:

      The authors of this article never test the notion in which they are interested (whether percepts are influenced by “undersampling”), but merely assume that it is the case and crunch the data accordingly. Given that the rationale for the “undersampling” account is very tenuous to begin with, running the numbers seems moot. (I explain this assessment of the study below.)

      The authors propose that perception is affected by something called “under-sampling” and that the locus of this effect is in the retina. It is clear that this proposal is speculative, but we are not given even a thumbnail sketch of arguments in its favor, only told that “Image sampling by cone photoreceptors is frequently cited as the neural limit to resolution acuity for central vision (Green, 1970; Williams, 1985a), whereas ganglion cells have been cited as the limiting array in peripheral vision (Anderson, 1996; Anderson, Drasdo, & Thompson, 1995; [etc.]).” Frequency of citation is, however, not an argument. As I've noted in many comments on vision publications, it is, unfortunately, very often used as a substitute. At any rate, it should be clear that the idea is speculative and that if there is a coherent rationale for it, it is to be found elsewhere. I would ask the authors where such a rationale is to be found.

      The applicability of the sampling notion is apparently rather narrow: “According to the sampling theory of visual resolution, when other limiting factors are avoided ... resolution acuity for sinusoidal gratings is set by the spatial density of neural sampling elements.” (Five of the references offered in support of this claim date from before 1960, and of these, two are from the 1800s, so their relevance for this technical claim seems doubtful.) In light of this statement, I would ask the authors why the notion of under-sampling is supposed to apply especially to sinusoidal gratings. Why do the authors say that “Double lines, double dots, geometrical figures, and letters are traditional stimuli for measuring acuity across the visual field (Aubert & Förster, 1857; Genter, Kandel, & Bedell, 1981; Weymouth, 1958) but resolution of these stimuli is not necessarily a sampling-limited task”? How do they come to the conclusion that resolution of these stimuli is “not necessarily a sampling-limited task,” especially given that the very existence of sampling limitations is what they are investigating? When would tasks with such very simple figures be predicted to be “sampling limited,” and when not?
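
      To make concrete what the quoted claim amounts to, here is a minimal sketch of the standard Nyquist argument in Python; the sample spacing and grating frequencies are hypothetical values chosen only for illustration, not values from the article:

      ```python
      import numpy as np

      # Hypothetical one-dimensional "mosaic": sampling elements spaced
      # 0.01 deg apart, i.e. 100 samples/deg and a Nyquist limit of 50 c/deg.
      spacing_deg = 0.01
      sampling_rate = 1.0 / spacing_deg          # 100 samples/deg
      f_nyquist = sampling_rate / 2.0            # 50 cycles/deg

      x = np.arange(0.0, 1.0, spacing_deg)       # sample positions across 1 deg

      f_grating = 70.0                           # grating above the Nyquist limit
      f_alias = sampling_rate - f_grating        # 30 cycles/deg: the predicted alias

      # The samples of the 70 c/deg grating are identical to the samples of a
      # 30 c/deg grating, so a system limited only by this sampling array
      # cannot distinguish the two.
      samples_high = np.cos(2 * np.pi * f_grating * x)
      samples_low = np.cos(2 * np.pi * f_alias * x)
      print("Nyquist limit:", f_nyquist, "c/deg")
      print(np.allclose(samples_high, samples_low))   # prints: True
      ```

      The point of this sketch is only to show what “set by the spatial density of neural sampling elements” would mean quantitatively; whether perception actually behaves this way is precisely what, in my view, the authors assume rather than test.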

      The explanation for the authors' actual choice of stimuli is also thin: “we used sinusoidal gratings because they provide the simplest, most direct link to the sampling theory of visual resolution for the purpose of demarcating the neural bandwidth and local anisotropy of veridical perception.” In other words, “sampling theory” has apparently been tailored to specific stimuli, and no rationale is given for extending it beyond these. But does “sampling theory” itself provide a rationale for privileging sinusoidal gratings, and where might this rationale be found?

      In fact, the broad underlying rationale for this study has a common flaw that makes it invalid on its face. This is the notion that particular stimuli can produce percepts that tap specific “low-level” neural sensitivities, and can reveal these to the investigator. So our perception of sinusoidal gratings, for example, is supposed to tell us about the properties of retinal ganglion cells, or V1 neurons, etc. As Teller (1984) has noted, such arguments lack face validity. Low-level neurons are the basis of all percepts, and any claim for a direct link between their properties and a particular percept carries with it the burden of explaining how and why these mechanisms do, or do not, influence, or “muck up,” all percepts. The fact that the effect of a neuron or neural population is contingent on the interaction of its activity with feedback and feedforward mechanisms of extraordinary complexity, and that percepts are highly inferential and go well beyond “the information given,” makes arguments linking percepts to neural function (in the sense that the former alone can reveal the latter) highly untenable.

      The problem persists into the discussion, which begins: “This study measured the highest spatial frequency of a sinusoidal grating stimulus that is perceived veridically at selected locations in the visual field. For gratings just beyond this resolution limit, the stimulus always remained visible but was misperceived as an alias that WE ATTRIBUTE [caps mine] to under-sampling by the retinal mosaic of neurons.” To the end, the presumption of under-sampling appears never to be tested, merely assumed. The complex computations performed on the data generated carry no weight in corroborating a hypothesis that was not tested, does not seem to possess the level of specificity needed to be testable, and lacks face validity.
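
      For reference, the prediction being invoked in that passage is the textbook aliasing relation; with a hypothetical sample spacing s (not a value from the article) and samples taken at positions x_n = n·s,

      $$
      f_N = \frac{1}{2s}, \qquad
      \cos\!\bigl(2\pi (f_N + \Delta)\, x_n\bigr) = \cos\!\bigl(2\pi (f_N - \Delta)\, x_n\bigr)
      \quad \text{for } x_n = n s,\; 0 < \Delta < f_N,
      $$

      so a grating just beyond the limit, at f_N + Δ, yields exactly the same samples as a lower-frequency grating at f_N − Δ; that lower-frequency grating is the “alias” the authors attribute to under-sampling. The relation itself is uncontroversial; my objection is that its applicability to the percept is assumed rather than demonstrated.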

      Similarly, the authors seem to me to be confusing assumption and corroborated fact. They first tell us that aliasing is “defined as the misperception of scenes caused by insufficient density of sampling elements, [and] is more likely for peripheral than for central vision because the density of retinal neurons declines with eccentricity...” They then state that “perceptual aliasing is the proof that resolution is sampling-limited.” If aliasing is defined on the basis of assumed under-sampling, then it is difficult to see how aliasing can also be the proof of under-sampling. That is, if the aliasing/under-sampling link is presumptive, then aliasing cannot, without explicit theoretical arguments, be offered as proof of under-sampling. If the under-sampling notion is false, then, by the proposed definition, aliasing is non-existent.

      Given that the sampling notion is speculative, I'm puzzled as to how the authors can state with confidence that they use a “methodology that ensures sampling-limited performance.” Again, if the assumption that “under-sampling” affects perception is itself what is being tested, how can the methodology be assumed to ensure “sampling-limited performance”? The claims seem circular.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
