Matching Annotations
  1. Jul 2018
    1. On 2016 Oct 21, Lydia Maniatis commented:

      According to Kingdom: "A longstanding issue in vision research concerns whether the internal noise involved in contrast transduction is fixed or variable in relation to contrast magnitude."
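
      For readers unfamiliar with the quoted claim, the fixed-versus-variable distinction can be sketched in a few lines. This is a toy illustration of the generic idea only, not Kingdom's model; the transducer form, exponent, and noise constant are all my own assumptions:

```python
import random

def internal_response(c, noise="fixed", k=0.1, exponent=2.0):
    """Toy contrast transducer: r = c**exponent plus Gaussian "internal noise".

    noise="fixed":    constant sigma k, independent of contrast.
    noise="variable": sigma proportional to the response (multiplicative
                      noise), so variability grows with contrast magnitude.
    The transducer form and all constants are illustrative assumptions.
    """
    r = c ** exponent
    sigma = k if noise == "fixed" else k * r
    return r + random.gauss(0.0, sigma)
```

      Under the "variable" option the spread of responses grows with contrast; under "fixed" it does not. That is the entire content of the distinction the quoted sentence treats as a longstanding issue.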

      This statement is precisely analogous to saying: A longstanding problem in chemistry is whether phlogiston is evenly or unevenly distributed in relation to object density.

      The notion of "internal noise" is crude, lumping together every element of the visual process between light hitting the retina and the conscious percept. It is flatly inconsistent with perceptual experience, which is in no way "noisy," yet most proponents of this view would have us accept that the conscious percept directly reflects "low-level" and noisy spiking activity of individual or sets of neurons. In any event, no attempt has ever been made to corroborate the noise assumption.

      It is not even clear what the criteria for corroboration would be at the physiological level. It would presumably have to be shown that identical "sensory inputs" produce a range and distribution of neural responses, with that range and distribution being somewhat predictable; however, "inputs" to brain activity don't come only from the external receptor organs, no matter how well we might be able to control these. Even if we could (inconceivably) control inputs perfectly, and even if we were able to say that (as is often claimed) neural responses at V1 are noisy, we would have to explain why this noise doesn't affect the conscious percept (which, again, is very stable) and yet is supposedly detectable on the basis of conscious experience. Graham (1992; 2011) has adopted the hypothesis that under certain conditions the brain becomes "transparent," so that activities at lower levels of the processing hierarchy act directly on the percept. It should be reasonably clear that such a view isn't worth entertaining, but if one wants to entertain it, there are massive theoretical difficulties to overcome. It seems to imply that feedback and feedforward processes are for some reason frozen and that some alternative, direct pathway to consciousness exists, all while other pathways are still active (because the inference generally applies to a discontinuity on a screen in a room, all of which are maintained in perception).

      Not surprisingly, given the concept's vagueness, the case for "internal noise" has never been credibly made; yet it is widely accepted.

      Those who simply accept the internal-noise assumption "measure internal noise" by analyzing simple "detection and discrimination" datasets on the basis of multiple layers of untested, untestable, or empirically untenable assumptions, rolled into "computational models" that include multiple indispensable free parameters. (For a detailed examination of this technique, see the PubPeer comments on Pelli (1985).)
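
      A toy illustration of why such "measurements" hinge on the model: below, detection data are generated from one assumed transducer, and the noise parameter is then fit under two different transducer assumptions; the recovered "internal noise" changes with the assumption. Everything here (the transducer form, the 2AFC link function, all values) is my own illustrative assumption, not the procedure of Kingdom or Pelli:

```python
import math

def pc_2afc(c, sigma, exponent):
    """Predicted 2AFC proportion correct: Phi(d'/sqrt(2)), d' = c**exponent / sigma."""
    dprime = (c ** exponent) / sigma
    return 0.5 * (1.0 + math.erf(dprime / 2.0))  # Phi(x/sqrt(2)) = (1 + erf(x/2)) / 2

# "Observed" data, generated with an exponent-2 transducer and true sigma = 0.1:
contrasts = [0.05, 0.1, 0.2, 0.4]
data = [pc_2afc(c, sigma=0.1, exponent=2.0) for c in contrasts]

def fit_sigma(assumed_exponent):
    """Grid-search the sigma minimizing squared error under an assumed transducer."""
    grid = [s / 1000.0 for s in range(1, 500)]
    return min(grid, key=lambda s: sum(
        (pc_2afc(c, s, assumed_exponent) - d) ** 2 for c, d in zip(contrasts, data)))
```

      Fitting with the generating exponent recovers sigma = 0.1, while fitting the very same data with a linear transducer "measures" a substantially different internal noise: the free parameter absorbs whatever the assumed transducer gets wrong.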

      In the absence of clear and explicit assumptions, relevant confounds remain unspecified, and tests, as here, are always ad hoc: they hinge on particular datasets and count up "successes" as though, added together, these could outweigh unexplained failures. But failures are dispositive, of course, when the aim is a general explanation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
