1 Matching Annotation
  1. Jul 2018
    1. On 2017 Feb 12, Romain Brette commented:

      From the perspective of a computational neuroscientist, I believe a very important point is made here. Models are judged on their ability to account for experimental data, so the critical question is: what counts as relevant data? The data currently used to constrain models in systems neuroscience are most often neural responses to stereotypical stimuli, together with results from behavioral experiments using well-controlled but non-ecological tasks, for example, conditioned responses to variations in a single dimension of a stimulus.

      In sound localization, for example, one of the four examples in this essay, a relevant problem for a predator or a prey is to locate the source of a sound, i.e., absolute localization. But models such as the recent model mentioned in the essay (which is influential but not universally accepted) have been proposed on the basis of their performance in discriminating between identical sounds played at slightly different angles, a common experimental paradigm. Focusing on this paradigm leads to models that maximize sensitivity but perform very poorly on the more ecologically relevant task of absolute localization, which casts doubt on the models (Brette R, 2010; Goodman DF, 2013). Unfortunately, the available set of relevant behavioral data is incomplete (e.g., what is the precision of sound localization with physical sound sources in ecological environments, and are orienting responses invariant to non-spatial aspects of sounds?). I therefore sympathize with the essay's statement that more appropriate behavioral work should be done.

      In other words, a good model should not only explain laboratory data: it should also work, i.e., explain how the animal manages to do what it does. It is good to be reminded of this crucial epistemological point.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
