  1. Jul 2018
    1. On 2017 May 25, Lydia Maniatis commented:

      Comment 2: Below are some of the assumptions entailed by the sixty-year-old "signal detection theory" (SDT), as described by Nevin (1969) in a review of Green and Swets (1966), the founding text of SDT.

      "Signal detection theory [has proposed] an indirectly derived measure of sensitivity...This measure is defined as the separation...between a pair of hypothesized normal density functions representing the internally observed effects of signal plus noise, an noise alone."

      In other words, for any image an investigator might present, the nervous system of the observer generates a pair of probability functions related to the presence or absence of a feature of that image that the investigator has in mind and which he/she has instructed the observer to watch for. The observer perceives this feature on the basis of some form of knowledge of these functions. These functions have no perceptual correlate, nor is the observer aware of them, nor is there any explanation of how or why they would be represented at the neural level.
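
      To make concrete what is being assumed, here is a minimal sketch of the two hypothesized density functions and the derived sensitivity measure; the means and variance below are illustrative assumptions, not values from the paper.

      ```python
      import numpy as np
      from scipy.stats import norm

      # Illustrative (assumed) parameters: the internal effect of noise alone
      # is taken to be N(0, 1), and of signal plus noise N(1, 1).
      mu_noise, mu_signal, sigma = 0.0, 1.0, 1.0

      # The "indirectly derived measure of sensitivity" is the separation
      # between the two hypothesized normal densities, in units of their
      # common standard deviation.
      d_prime = (mu_signal - mu_noise) / sigma  # = 1.0

      # The two density functions themselves, evaluated on a grid.
      x = np.linspace(-4.0, 5.0, 1001)
      noise_density = norm.pdf(x, mu_noise, sigma)    # noise alone
      signal_density = norm.pdf(x, mu_signal, sigma)  # signal plus noise
      ```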

      "The subject's pre-experimental biases, his expectations based on instructions and the a priori probability of signal, and the effects of the consequences of responding, are all subsumed under the parameter beta. The subject is assumed to transform his observations into a likelihood ratio, which is the ratio of the probability density of an observation if a signal is present to the probability density of that observation in the absence of signal. He is assumed, further, to partition the likelihood ratio continuum so that one response occurs if the likelihood ratio exceeds beta, and the other if it is less than beta."

      Wow. None of these assumptions have any relationship to perceptual experience. Are they in the least plausible, or in any conceivable way testable? They underlie much of the data collection in contemporary vision science. They are dutifully taught by instructors; learning such material clearly requires that students set aside any critical thinking instincts.

      The chief impetus behind SDT seems to have been a desire for mathematical neatness, rather than for the achievement of insight and discovery.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 23, Lydia Maniatis commented:

      "For over 60 years, signal detection theory has been used to analyze detection and discrimination tasks. Typically, sensory data are assumed to be Gaussian with equal variances but different means for signal-absent and signal-present trials. To decide, the observer compares the noisy sensory data to a fixed decision criterion. Performance is summarized by d0 (discriminability) and c (decision criterion) based on measured hit and false-alarm rates."

      What should be noted about the above excerpt is the way in which a statement of historical fact is offered as a substitute for a rationale. What scientist could argue with a 60-year-old practice?

      The "typical" assumptions are not credible, but if the authors believe in them, it would be great if they could propose a way to test them, as well as work out arguments against the seemingly insurmountable objections to treating neurons as detectors, objections raise by, for example, Teller (1984).

      While they are at it, they might explain what they mean by "sensory data." Are they referring to the reaction of a single photoreceptor when struck by a photon of a particular wavelength and intensity? Or to one of the infinitely variable combinations of photon intensities/wavelengths hitting the entire retina at any given moment - combinations which mediate what is perceived at any local point in the visual field? How do we get a Gaussian distribution when every passing state of the whole retina, and even parts of it, is more than likely unique? When, with eyes open, is the visual system in a "signal-absent" state?

      There is clearly a perfect confusion here between the "decision" made by the visual process that produces the conscious percept and the decision made by the conscious observer, who is trying to recall and compare percepts presented under suboptimal conditions (very brief presentation times) and decide whether they conform to an extrinsic criterion. (What is the logic of the brief presentation? And why muddy the waters with forced choices? I suspect it's to ensure the necessary "noisiness" of results.)

      "For 5 out of 10 observers in the covert-criterion task, the exponentially weighted movingaverage model fit the best. Of the remaining five observers, one was fit equally well by the exponentially weighted moving-average and the limited-memory models, one was fit best by the Bayesian selection, exponentially weighted moving-average, and the reinforcement learning models, one was fit best by the Bayesian selection and the reinforcement learning models, one was fit best by the exponentially weighted moving-average and reinforcement learning models, and one was best fit by the reinforcement learning model. At the group level, the exceedance probability for the exponentially weighted moving-average is very high (慸ponential = .95) suggesting that given the group data, it is a more likely model than the alternatives (Table 1). In the overt-criterion task, the exponentially weighted moving-average model fit best for 5 out of 10 observers. Of the remaining five observers, one was fit equally well by the exponentially weighted moving-average and the reinforcement learning models, two were fit best by the reinforcement-learning model, and two were fit best by the limited-memory model. At the group level, the exceedance probability for the exponentially weighted moving-average model (慸ponential = .78) is higher than the alternatives suggesting that it is more likely given the group data (Table 2)."

      Note how, in the modern conception of vision science practice, failure is not an option; the criterion is simply which of a number of arbitrary models "fits best" overall. Inconsistency with experiment is not cause to reject a "model," as long as other models did worse, or did as well but in fewer cases.

      What is the aim here?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
