1 Matching Annotation
  1. Jul 2018
    1. On 2017 Mar 24, Dorothy V M Bishop commented:

      It is a pleasure to see this paper, which has the potential to transform the field of ERP research by setting new standards for reproducibility.

      I have one suggestion to add to those already given for reducing the false discovery rate in this field: include dummy conditions in which no effect is anticipated. This is exactly what the authors did in their demonstration example, but it can also be built into an experiment. We started to do this in our research on mismatch negativity (MMN), inspired by a study by McGee et al. (1997); they worked at a time when it was not unusual for the MMN to be identified by 'experts', and they showed that experts were prone to identify MMNs even when the standard and deviant stimuli were identical. We found this approach, inclusion of a 'dummy' mismatch, invaluable when attempting to study the MMN in individuals (Bishop and Hardiman, 2010). It was particularly helpful, for instance, when validating an approach for identifying time periods of significant mismatch in the waveform, as the sketch below illustrates.
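
      To make the logic concrete, here is a minimal sketch in Python (simulated data, arbitrary numbers and an illustrative run-length criterion, not our actual analysis pipeline): because the 'standard' and 'dummy deviant' trials come from physically identical stimuli, any interval the procedure flags is, by construction, a false positive.

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_trials, n_samples = 200, 300   # e.g. 600 ms epochs sampled at 500 Hz
      noise_sd = 10.0                  # arbitrary single-trial noise level (microvolts)

      # 'Standard' and 'dummy deviant' single-trial epochs recorded to identical stimuli:
      # the true difference wave is zero everywhere, so anything flagged is a false alarm.
      standard = rng.normal(0.0, noise_sd, (n_trials, n_samples))
      dummy = rng.normal(0.0, noise_sd, (n_trials, n_samples))

      # Point-wise t-tests between the two trial sets, uncorrected for multiple comparisons.
      t_vals, p_vals = stats.ttest_ind(dummy, standard, axis=0)

      # A lenient but common heuristic: declare a mismatch if there is a run of at least
      # `min_run` consecutive samples with p < .05.
      min_run = 10
      run, longest = 0, 0
      for significant in (p_vals < 0.05):
          run = run + 1 if significant else 0
          longest = max(longest, run)

      print(f"Longest 'significant' run in the dummy condition: {longest} samples")
      print("False alarm" if longest >= min_run else "No mismatch flagged, as it should be")
      ```

      With a dummy condition in hand, the detection procedure can be tuned (for example, by raising the run-length criterion) until its false alarm rate on the dummy contrast is acceptably low, before it is applied to the real standard versus deviant comparison.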

      Another suggestion is that the field could start to work more collaboratively to address these issues. As the authors note, replication is the best way to confirm that one has a real effect. Sometimes it may be possible to use an existing dataset to replicate a result, but data sharing is not yet the norm in the field; journals could change that by requiring deposition of the data underlying published papers. More generally, if journals and/or funders started to require replications before work could be published, we might see more reciprocal arrangements, whereby groups agree to replicate each other's findings. Years ago, when I suggested this, I remember some people saying that you could not expect findings to replicate because every lab had a different system for data acquisition and processing. But if our findings are specific to the lab that collected the data, then surely we have a problem.

      Finally, I have one request, which is that the authors make their simulation script available. My own experience is that working with simulations is the best way to persuade people that the problems the authors have highlighted are real and not just statistical quibbles, and we need to encourage researchers in this area to become familiar with this approach.
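
      To give a flavour of the kind of simulation I mean, here is a toy sketch (not the authors' script; the numbers are arbitrary and the candidate windows are treated as independent): when pure-noise 'experiments' are analysed by testing several time windows and reporting whichever gives the smallest p value, the nominal 5% error rate inflates to roughly 40%.

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_experiments = 2000
      n_subjects = 20
      n_windows = 10    # candidate analysis windows, treated as independent for simplicity

      false_positives = 0
      for _ in range(n_experiments):
          # Per-subject condition differences in each window; the true effect is zero.
          diffs = rng.normal(0.0, 1.0, (n_subjects, n_windows))
          # The analyst tests every window and reports whichever one looks best.
          p_vals = stats.ttest_1samp(diffs, 0.0, axis=0).pvalue
          if p_vals.min() < 0.05:
              false_positives += 1

      print(f"Nominal alpha: 0.05; realised false positive rate: "
            f"{false_positives / n_experiments:.2f}")   # close to 0.40 with 10 windows
      ```

      In real ERP data neighbouring windows are correlated, so the exact figure will differ, but the direction of the bias is the same, and allowing a post hoc choice of electrodes, filters or components as well only multiplies it.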

      Bishop, D. V. M., & Hardiman, M. J. (2010). Measurement of mismatch negativity in individuals: a study using single-trial analysis. Psychophysiology, 47, 697-705. doi:10.1111/j.1469-8986.2009.00970.x

      McGee, T., Kraus, N., & Nicol, T. (1997). Is it really a mismatch negativity? An assessment of methods for determining response validity in individual subjects. Electroencephalography and Clinical Neurophysiology, 104, 359-368.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
