4 Matching Annotations
  1. Jul 2018
    1. On 2016 May 19, Lydia Maniatis commented:

      Below is my reply to the author. My polemic tone notwithstanding, I appreciate his taking the time to respond. I've interspersed my responses with his text.

      Author: A researcher I highly respect once told me that a good review paper is one that engages and stimulates the reader to think critically and broadly about a particular phenomenon. In this sense I appreciate the commentary by Prof. Maniatis, which suggests the review succeeded in stimulating critical thought in at least one distinguished reader. And I will add that, though my initial reaction was that Prof. Maniatis' commentary is a polemic, it is clear that my critic takes the issues very seriously and raises some important research questions suggesting future experimental work.

      Me: My commentary is a polemic, if by that you mean it raises serious objections. I'm not recommending future work along the same lines; I'm saying the rationale for such work is vague to non-existent, not least because it conflicts with known facts. (My ongoing comments on Ariely (2001), which this article and many others treat as "seminal," as well as the other comments I've cited here, may make this clearer.) Disagreement with the facts is supposed to be disqualifying in science, unless and until theoretical alignment can be achieved. Avoiding the (easy) possibility of falsification by choosing the route that Runeson describes (quoted in my second comment on Dube and Sekuler) is not the same thing as subjecting a hypothesis to serious tests. Inconclusive tests, and the avoidance of critical discussion that would point out logical inconsistencies and inconsistencies with the phenomena, ensure that more work is always needed.

      Author: Nonetheless, the response, roughly a third of which seems to revolve around a passing reference to work by Koffka that has little to no bearing on the main points and conclusions of the review (and which misses the point of the reference to Koffka)...

      Me: If I've missed the point, then please let me know what I've missed. I consider the mistake that I flagged serious because it implies that the work of the Gestaltists supports the work being discussed here, when in fact the opposite is true.

      Author:...contains a number of misinterpretations of the points made in the review. I take responsibility for any lack of clarity that may have produced this. I will detail a couple of examples that seem most directly related to the review (discussion of modeling methods, which don't fit algorithms as Prof. Maniatis stated but use algorithms to fit models, has to do with standard practice in the field itself and not the review).

      Me: Standard practice isn't necessarily good practice.

      Author: Encoding and retrieval of statistical information about stimuli, such as the average diameter of circles in a set of circles with different diameters, may or may not involve direct "perception" of the average in the sense used by Prof. Maniatis. The relevant experiments, I suspect, have yet to be conducted.

      Me: Encoding and retrieval of statistical information about stimuli, such as the average diameter of circles, may or may not actually happen. Scientific hypotheses are indeed guesses, but to be worthy of testing there needs to be a rationale and a clear articulation of the associated assumptions. Relevant experiments presuppose that the idea has been developed enough to specify, for the purpose of testing, what those assumptions are. If investigators, after decades, haven't even decided whether direct perception is involved (which it clearly isn't – it's the nature of direct perception to be self-evident), then what have they been doing?

      Author: For this reason, "perceptual" may not be the best term, and several different terms for the effects we have described are in use (ensemble representation, statistical summary representation, etc.). In my prior work (Dubé et al., 2014) I have discussed conceptual difficulties related to this term, and in my current work I favor "statistical summary representation" for this reason. However, the findings detailed in the review are indisputable.

      Me: I dispute them, partly along the lines of Runeson. I think when we look on a case-by-case basis, we find serious problems of method and/or misrepresentations in the interpretation.

      Author: There is a clear consensus in the literature that participants can accurately recall the average.

      Me: It's interesting that experimenters jump to the recall stage but skip the (presumably less challenging) perception stage. Why are observers being forced to recall what they are supposed to be perceiving?

      Author: If they can accurately recall it, they must have encoded and stored it. There is no question as to whether such memories exist. I just returned from VSS at which there were around 50 presentations on the topic of summary statistical representation, according to one talk, and the special issue of JoV in which our review appeared was devoted entirely to summary statistical representation. Clearly a decent number of scientists remains convinced that the effects exist!

      Me: The number of proponents is not an argument. I've criticized some of these authors' work, and when there's a response, it's not very convincing.

      Author: The final comment in the review, which Prof. Maniatis takes as our own admission that the existence of statistical representation is questionable, was meant to be somewhat tongue-in-cheek. How can the effects that have been attributed to remembered averages be due to memory for fine details of individual items when several studies, including the seminal one by Ariely (2001), demonstrate memory for the average despite chance performance on memory tests of the individual items from which the average was computed?

      Me: I'm in the process of commenting on Ariely (2001). His methods and interpretations are questionable and his arguments are full of inaccuracies and inconsistencies. It is an extremely casual, not a seminal, study.

      Author: It is in no way a statement that the effects don't exist (or even that we suspect they don't), even if taken at face value, and as I have detailed there is a quite large amount of empirical evidence to contradict the philosophical position of Prof. Maniatis. I will not detail all of these studies here, since a review detailing them already exists: Dubé and Sekuler (2015).

      Me: There are often other ways to interpret performances that have been attributed to some kind of mental calculation. The brain can use rules of thumb, as Gigerenzer has discussed. One example is how baseball players can catch a ball without subconsciously doing the complex math that some thought was required. When you (a) ignore falsifying facts and (b) don't consider alternative interpretations, you will have no doubts.
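      To make the rule-of-thumb point concrete, here is a toy simulation (in Python, with entirely hypothetical numbers) of the kind of heuristic Gigerenzer describes for the outfielder, often called the gaze heuristic or optical-acceleration cancellation: the fielder never solves the projectile equations, but simply runs so as to null the acceleration of the ball's image.

      # Toy simulation of the gaze heuristic (optical-acceleration
      # cancellation). All numbers are hypothetical.
      dt, g = 0.02, 9.81
      bx, by, vbx, vby = 0.0, 1.5, 17.0, 15.0   # ball position/velocity (m, m/s)
      fx, speed = 60.0, 4.0                      # fielder position and run speed
      tans = []

      while by > 0:
          # The world's physics (the fielder never computes this):
          bx += vbx * dt
          by += vby * dt
          vby -= g * dt
          # All the fielder senses: the tangent of the gaze angle to the ball.
          tans.append(by / max(fx - bx, 1e-6))
          if len(tans) >= 3:
              # "Optical acceleration" = second difference of the gaze tangent.
              acc = tans[-1] - 2 * tans[-2] + tans[-3]
              # Heuristic: image accelerating upward means the ball will land
              # behind you (back up); decelerating means in front (run in).
              fx += speed * dt if acc > 0 else -speed * dt

      print(f"ball landed at x = {bx:.1f} m; fielder ended at x = {fx:.1f} m")

      The point of the sketch is that a sign check on one sensed quantity substitutes for the "complex math"; nothing in the loop computes a trajectory.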

      Author: In my view, the conceptual nuances involved in discussion of summary statistical representation are suggestive of a need for more concrete, computational modeling, less verbal theorizing, and more neural data in this area.

      Me: If by verbal theorizing you mean critical discussion and exchange of ideas, I would say that more is desperately needed. The conceptual problems aren't "nuances," they're huge. Useful data collection presupposes clear theories; otherwise it's a waste of time, money, and people. (As Darwin said, if you don't have a hypothesis, you might as well count the stones on Brighton Beach.) The normalized view (espoused by journal editors) that critical discussion is the enemy of progress is convenient, but unscientific and wasteful.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 18, Chad Dube commented:

      A researcher I highly respect once told me that a good review paper is one that engages and stimulates the reader to think critically and broadly about a particular phenomenon. In this sense I appreciate the commentary by Prof. Maniatis, which suggests the review succeeded in stimulating critical thought in at least one distinguished reader. And I will add that, though my initial reaction was that Prof. Maniatis' commentary is a polemic, it is clear that my critic takes the issues very seriously and raises some important research questions suggesting future experimental work. Nonetheless, the response, roughly a third of which seems to revolve around a passing reference to work by Koffka that has little to no bearing on the main points and conclusions of the review (and which misses the point of the reference to Koffka), contains a number of misinterpretations of the points made in the review. I take responsibility for any lack of clarity that may have produced this.

      I will detail a couple of examples that seem most directly related to the review (discussion of modeling methods, which don't fit algorithms as Prof. Maniatis stated but use algorithms to fit models, has to do with standard practice in the field itself and not the review).
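      To make that distinction concrete: a model is a set of equations with free parameters, while the fitting algorithm is only the search routine that finds the parameter values that best account for the data. Below is a minimal sketch in Python, with entirely hypothetical data and a deliberately generic linear-Gaussian model (it corresponds to no specific model in the review):

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical data: true mean diameter presented on each trial,
      # and the observer's reported mean (arbitrary units).
      true_means = np.array([2.0, 3.5, 5.0, 6.5, 8.0])
      reports = np.array([2.3, 3.6, 5.4, 6.4, 8.5])

      # The MODEL: report = bias + gain * true_mean + Gaussian noise(sigma).
      # Its free parameters are bias, gain, and sigma.
      def neg_log_likelihood(params):
          bias, gain, log_sigma = params
          sigma = np.exp(log_sigma)  # log parameterization keeps sigma > 0
          residuals = reports - (bias + gain * true_means)
          return np.sum(0.5 * (residuals / sigma) ** 2 + np.log(sigma))

      # The ALGORITHM (Nelder-Mead simplex search) fits the model by
      # searching parameter space for the maximum-likelihood values.
      fit = minimize(neg_log_likelihood, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
      bias, gain, sigma = fit.x[0], fit.x[1], np.exp(fit.x[2])
      print(f"bias = {bias:.2f}, gain = {gain:.2f}, sigma = {sigma:.2f}")

      Swapping Nelder-Mead for gradient descent would change the algorithm but not the model; changing the equation relating reports to stimuli would change the model, whatever algorithm is used to fit it.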

      Encoding and retrieval of statistical information about stimuli, such as the average diameter of circles in a set of circles with different diameters, may or may not involve direct "perception" of the average in the sense used by Prof. Maniatis. The relevant experiments, I suspect, have yet to be conducted. For this reason, "perceptual" may not be the best term, and several different terms for the effects we have described are in use (ensemble representation, statistical summary representation, etc.). In my prior work (Dubé et al., 2014) I have discussed conceptual difficulties related to this term, and in my current work I favor "statistical summary representation" for this reason. However, the findings detailed in the review are indisputable. There is a clear consensus in the literature that participants can accurately recall the average. If they can accurately recall it, they must have encoded and stored it. There is no question as to whether such memories exist. I just returned from VSS at which there were around 50 presentations on the topic of summary statistical representation, according to one talk, and the special issue of JoV in which our review appeared was devoted entirely to summary statistical representation. Clearly a decent number of scientists remains convinced that the effects exist!

      The final comment in the review, which Prof. Maniatis takes as our own admission that the existence of statistical representation is questionable, was meant to be somewhat tongue-in-cheek. How can the effects that have been attributed to remembered averages be due to memory for fine details of individual items when several studies, including the seminal one by Ariely (2001), demonstrate memory for the average despite chance performance on memory tests of the individual items from which the average was computed? It is in no way a statement that the effects don't exist (or even that we suspect they don't), even if taken at face value, and as I have detailed there is a quite large amount of empirical evidence to contradict the philosophical position of Prof. Maniatis. I will not detail all of these studies here, since a review detailing them already exists: Dubé and Sekuler (2015).
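      The statistical logic behind that dissociation can be sketched in a few lines, with entirely hypothetical numbers: when each item's memory trace carries independent noise, the noise largely cancels in the average of the traces, so a recalled mean can remain accurate even while recognition of any single item sits near chance.

      import numpy as np

      rng = np.random.default_rng(0)
      n_trials, n_items, noise_sd = 10000, 8, 2.0  # hypothetical values

      mean_errors, item_hits = [], 0
      for _ in range(n_trials):
          items = rng.uniform(2.0, 6.0, n_items)               # diameters (a.u.)
          traces = items + rng.normal(0.0, noise_sd, n_items)  # noisy traces
          mean_errors.append(abs(traces.mean() - items.mean()))
          # 2AFC recognition of one item against a foil 0.5 a.u. larger
          old, foil = items[0], items[0] + 0.5
          item_hits += abs(traces[0] - old) < abs(traces[0] - foil)

      # Independent noise shrinks by 1/sqrt(n_items) in the mean of the traces,
      # so mean recall stays fairly accurate while item recognition is near chance.
      print(f"mean-recall error: {np.mean(mean_errors):.2f} a.u.")
      print(f"single-item 2AFC accuracy: {item_hits / n_trials:.2f}")

      The 1/sqrt(n) shrinkage of independent noise is the entire effect in this sketch; it makes no commitment as to the mechanism that does the pooling.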

      In my view, the conceptual nuances involved in discussion of summary statistical representation are suggestive of a need for more concrete, computational modeling, less verbal theorizing, and more neural data in this area.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 May 17, Lydia Maniatis commented:

      (For some reason the system's not allowing me to edit at the moment: The following is the edited version of the earlier comment). I have a number of issues with this paper, among them that it refers to "perceptual averaging" as though it actually exists, before admitting, at the very end, that it might not. It's an odd situation; can an experience be perceptual in the absence of something being actually perceived? (Who are you going to believe, the investigators or your lying eyes?) Apparently, the authors and others they cite have never personally perceived perceptual averages (if they had there would be little doubt of their perceptual existence), but they have purportedly generated a great deal of evidence that other humans do experience such percepts. If, on the other hand, they mean to refer to some type of blindsight, then it is still the case that the term perceptual is not quite right, since the striking thing about blindsight is precisely the lack of a percept.

      I've addressed the averaging issue and its existence in other comments, including a comment here on Bauer (2015) and Solomon, May and Tyler (2016).

      Another problem I have is that the authors refer to perception in terms of "signal detection," in which the main problem is what to discard, as though "features" were an intrinsic part of the proximal stimulus pre-organization, and we just have to decide what to get rid of or to "summarise." I've criticised the signal detection model in various comments, including one on Allard and Faubert (2014). Such descriptions completely miss the point, which is that "features" such as shape and even color are not simply given in the stimulus, which consists of disconnected photons striking the retina. The failure of contemporary "signal detection" and "statistical summary" proponents to understand the fundamental problems of perception, and the tendency to essentially oversimplify the problems, puts them in the same boat with the structuralists, behaviourists and psychophysicists of yore. Of course, what is not simple are the fussy algorithms that are constantly being fitted and adjusted to the data, but this is a technical, not a conceptual, complexity. If these activities had heuristic value, rather than being exercises in post hoc data-fitting, then we wouldn't be in a position where Bauer (2015), in a review of related research up to 2000, can state that some people continue to doubt the very existence of the assumed processes, or where the present authors could say pretty much the same thing: "Future work will be required to evaluate such claims fully, determining whether what appear to be the effects of perceptual averages might in the end reduce to the effects of memory for fine details of individual stimuli." Even though he's gotten a lot of flak for it, Helmholtz's "unconscious inferences" are basically a description of unconscious processes. Unperceived percepts, on the other hand, are simply paradoxical.

      To the authors' failure to appreciate the subtle but fundamental problems the Gestaltists were addressing is added a misrepresentation of the Gestaltists themselves, who are described as early precursors of the authors' approach: "As Haberman and Whitney (2012) have noted, the idea that the visual system extracts summary statistics at the expense of individual features is far from a new one, going back at least as far as the writings of first-generation Gestalt psychologists (e.g., Koffka, 1935, pp. 270, 273). Despite the idea's long history and the recent increase in efforts to understand the statistical representations generated by the visual system, the basic structural mechanisms and functional significance of such summary statistics remain unclear." The reference to Koffka is completely off: in the relevant passages he is discussing binocular vision and the matching or "summarising" of two retinal images to produce the perception of a single one in depth. The Gestaltists would have been acutely sensitive to the failure of visual statisticians to understand that the problem is not extraction and reaction but construction; they argued extensively against past incarnations of these fallacies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 May 17, Lydia Maniatis commented:

      I have a number of issues with this paper, among them that it refers to "perceptual averaging" as though it actually exists, before admitting, at the very end, that it might not. It's an odd situation; can an experience be perceptual in the absence of something being actually perceived? (Who are you going to believe, the investigators or your lying eyes?) Apparently, the authors and others they cite have never personally perceived perceptual averages (if they had there would be little doubt of their perceptual existence), but they have purportedly generated a great deal of evidence that other humans do. If, on the other hand, they mean to refer to some type of blindsight, then it is still the case that the term perceptual is not quite right, since the striking thing about blindsight is precisely the lack of a percept.

      I've addressed the averaging issue and its existence in other comments, including a comment here on Bauer (2015) and Solomon, May and Tyler (2016).

      Another problem I have is that the authors refer to perception in terms of "signal detection," in which the main problem is what to discard, as though "features" were an intrinsic part of the proximal stimulus pre-organization, and we just have to decide what to get rid of or to "summarise." I've criticised the signal detection model in various comments, including one on Allard and Faubert (2014). Such descriptions completely miss the point, which is that "features" such as shape and even color are not simply given in the stimulus, which consists of disconnected photons striking the retina. The failure of contemporary "signal detection" and "statistical summary" proponents to understand the fundamental problems of perception, and the tendency to essentially oversimplify the problems, puts them in the same boat with the structuralists, behaviourists and psychophysicists of yore. Of course, what is not simple are the fussy algorithms that are constantly being fitted and adjusted to the data, but this is a technical, not a conceptual, complexity. If these activities had heuristic value, rather than being exercises in post hoc data-fitting, then we wouldn't be in a position where Bauer (2015), in a review of related research up to 2000, can state that some people continue to doubt the very existence of the assumed processes, or where the present authors could say pretty much the same thing: "Future work will be required to evaluate such claims fully, determining whether what appear to be the effects of perceptual averages might in the end reduce to the effects of memory for fine details of individual stimuli." Even though he's gotten a lot of flak for it, Helmholtz's "unconscious inferences" are basically a description of unconscious processes. Unperceived percepts, on the other hand, are simply paradoxical.

      To the authors' failure to appreciate the subtle but fundamental problems the Gestaltists were addressing is added a misrepresentation of the Gestaltists themselves, who are described as early precursors of the authors' approach: "As Haberman and Whitney (2012) have noted, the idea that the visual system extracts summary statistics at the expense of individual features is far from a new one, going back at least as far as the writings of first-generation Gestalt psychologists (e.g., Koffka, 1935, pp. 270, 273). Despite the idea's long history and the recent increase in efforts to understand the statistical representations generated by the visual system, the basic structural mechanisms and functional significance of such summary statistics remain unclear." The reference to Koffka is completely off: in the relevant passages he is discussing binocular vision and the matching or "summarising" of two retinal images to produce the perception of a single one in depth. The Gestaltists would have been acutely sensitive to the inadequacies and contradictions in the arguments in the current literature; they argued extensively against past incarnations of the same fallacies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
