Reviewer #1 (Public review):
The authors investigate the function and neural circuitry of reentrant signals in the visual cortex. Recurrent signaling is thought to be necessary for common types of perceptual experience that are defined by long-range relationships or prior expectations. Contour illusions - where perceptual objects are implied by stimulus characteristics - are a good example of this. The perception of these illusions is thought to emerge as recurrent signals from higher cortical areas feed back onto the early visual cortex, telling the early visual cortex that it should be seeing object contours where none are actually present.
The authors test the involvement of reentrant cortical activity in this kind of perception using a drug challenge. Reentrance in the visual cortex is thought to rely on NMDAR-mediated glutamate signaling. The authors accordingly employ an NMDAR antagonist to block this mechanism, looking for the effect of this manipulation on visually evoked activity recorded in EEG.
The motivating hypothesis for the paper is that NMDAR antagonism should stop recurrent activity, and that this should degrade the neural activity supporting the perception of a contour illusion, but not other types of visual experience. Results in fact show the opposite. Rather than degrading cortical activity evoked by the illusion, memantine makes it more likely that machine-learning classification of EEG will correctly infer the presence of the illusion.
On the face of it, this is confusing, and the paper currently does not entirely resolve this confusion. But there are relatively easy ways to improve this. The authors would be well served by entertaining more possible outcomes in the introduction - there is good reason to expect a positive effect of memantine on perceptual brain activity, and I provide details on this below. The authors also need to further emphasize that the directional expectations that motivated E1 were, of course, revised after the results from this experiment emerged. The authors presumably at least entertained the notion that E2 would reproduce E1 - meaning that E2 was motivated by a priori expectations that were ultimately met by the data.
I broadly find the paper interesting, graceful, and creative. The hypotheses are clear and compelling, the techniques for both the manipulation of brain state and the observation of its impact are cutting-edge and well suited, and the paper draws clear and convincing conclusions that follow necessarily from the results. The work sits at the very interesting crux of systems neuroscience, neuroimaging, and pharmacology. I believe the paper can be improved in revision, but my suggestions largely concern presentation and nuance of interpretation.
(1) I miss some treatment of the lack of a behavioural correlate. What does it mean that memantine benefits EEG classification accuracy without improving behavioural performance? One possibility here is that there is an improvement in response latency rather than perceptual sensitivity. Is there any hint of that in the RT results? Or in some sort of combined measure of RT and accuracy?
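A standard combined measure of this kind is the inverse efficiency score (IES; mean correct RT divided by proportion correct; Townsend & Ashby, 1983). A minimal sketch of such a check, assuming a hypothetical trial-level data frame - the file and column names are placeholders, not the authors' data:

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with columns
# 'drug' ('memantine'/'placebo'), 'rt' (response time in seconds),
# and 'correct' (0/1). All names here are assumptions.
trials = pd.read_csv("behaviour.csv")

def inverse_efficiency(df):
    # IES = mean RT on correct trials / proportion correct;
    # lower values indicate more efficient performance.
    mean_rt = df.loc[df["correct"] == 1, "rt"].mean()
    return mean_rt / df["correct"].mean()

# Compare drug conditions: a latency benefit without an accuracy
# change would still lower IES under memantine.
print(trials.groupby("drug").apply(inverse_efficiency))
```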
(2) An explanation is missing of why memantine impacts the decoding of the illusion but not of collinearity. At a systems level, how would this work? How would NMDAR antagonism selectively impact long-range connectivity but not lateral connectivity? Is this supported by our understanding of laminar connectivity and neurochemistry in the visual cortex?
(3) The motivating idea for the paper is that the NMDAR antagonist might disrupt the modulation of the AMPAR-mediated glutamate signal. This is in line with the motivating logic for Self et al., 2012, where NMDAR and AMPAR efficacy in macaque V1 was manipulated via microinfusion. But this logic seems to conflict with a broader understanding of NMDAR antagonism. NMDAR antagonism appears to generally have the net effect of increasing glutamate (and ACh) in the cortex through a selective effect on inhibitory GABAergic cells (e.g., Olney, Newcomer, & Farber, 1999). Memantine, in particular, has a specific impact on extrasynaptic NMDARs (in contrast to ketamine; Milnerwood et al., 2010, Neuron), and this type of receptor is prominent in GABAergic cells (e.g., Yao et al., 2022, JoN). The effect of NMDAR antagonists on GABAergic cells generally appears to be much stronger than the effect on glutamatergic cells (at least in the hippocampus; e.g., Grunze et al., 1996).
All of this means that it is reasonable to expect that memantine might benefit visually evoked activity. This idea is raised in the General Discussion of the paper, based on a literature separate from the one I mention above. But all of this could be better spelled out earlier in the paper, so that the result observed can be interpreted by the reader in this broader context.
To my mind, the challenging task is for the authors to explain why memantine causes an increase in EEG decoding, where microinfusion of an NMDAR antagonist into V1 reduced the neural signal in Self et al., 2012. This might be as simple as the change in drug: memantine's specific efficacy on extrasynaptic NMDA receptors might not be shared with whatever NMDAR antagonist was used in Self et al., 2012. Ketamine and memantine are already known to differ in this way.
(4) The paper's proposal is that the effect of memantine is mediated by an impact on the efficacy of reentrant signaling in the visual cortex. But perhaps the best-known impact of NMDAR manipulation is on LTP, in the hippocampus particularly but also more broadly. Perception and identification of the Kanizsa illusion may be sensitive to learning (e.g., Maertens & Pollmann, 2005; Gellatly, 1982; Rubin, Nakayama, & Shapley, 1997); what argues against an account of the results based on an effect on perceptual learning? Generally, the paper proposes a very specific mechanism through which the drug influences perception. This is motivated by results from Self et al., 2012, where an NMDAR antagonist was infused into V1. But oral memantine will, of course, have a whole-brain effect, and some of these effects are well characterized and - on the surface - appear as potential sources of change in illusion perception. The paper needs some treatment of the known ancillary effects of diffuse NMDAR antagonism to convince the reader that the account provided is better than the other possibilities.
(5) The cross-decoding approach to data analysis concerns me a little. The approach adopted here is to train models on a localizer task - in this case, a task where participants matched a Kanizsa figure to a target template (E1) or discriminated one of the three relevant stimulus features (E2). The resulting model was subsequently employed to classify the stimuli seen during separate tasks - an AB task in E1 and a feature discrimination task in E2. This scheme makes the localizer task very important. If models built from this task have any bias, this will taint classifier accuracy in the analysis of the experimental data. My concern is that the emergence of the Kanizsa illusion in the localizer task was probably quite salient relative to changes in stimulus rotation or collinearity. If the model was better at detecting the illusion to begin with, the data pattern - where the drug manipulation impacts classification in this condition but not in other conditions - may simply reflect model insensitivity to non-illusion features. A simple diagnostic is sketched below.
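One way to probe this would be to cross-validate within the localizer itself and compare per-feature decoding accuracy before any cross-decoding is attempted; markedly unequal baselines would suggest the model is biased toward the salient illusion. A minimal sketch, assuming epoched EEG as a trials × channels × time array - the variable names and stand-in data are mine, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in localizer data: 200 epochs, 64 channels, 100 time points,
# flattened to trials x features. In practice X would come from the
# recorded localizer epochs.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64, 100)).reshape(200, -1)
labels = {"illusion": rng.integers(0, 2, 200),
          "rotation": rng.integers(0, 2, 200),
          "collinearity": rng.integers(0, 2, 200)}

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for feature, y in labels.items():
    # 5-fold cross-validated accuracy, separately per stimulus feature
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{feature}: {acc.mean():.2f}")
```

If illusion decoding already dominates here, the condition-specific drug effect in the main task becomes correspondingly harder to interpret.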
I am also vaguely worried by manipulations implemented in the main task that do not appear in the localizer - the use of RSVP in E1 and the manipulation of the base rate and staircasing in E2. This all starts to introduce the possibility that the localizer and experimental data simply do not correspond, and that this generates low classification accuracy in the experimental results and ineffective classification in some conditions (i.e., when stimuli are masked; would collinearity decoding in the unmasked condition potentially differ if classification accuracy were not at floor? See Figure 3c upper, Figure 5c lower).
What is the motivation for the use of localizer validation at all? The same hypotheses can be tested using within-experiment cross-validation, rather than validation of a model built on localizer data. The argument may be that this kind of modelling will necessarily employ a smaller dataset, but, while true, this effect can be minimized at the expense of computational cost: many-fold cross-validation will mean that the vast majority of the data contributes to model building in each instance.
It would be compelling if the results were to reproduce when classification was validated in this way. This kind of analysis would fit very well into the supplementary material.
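Concretely, such within-experiment validation might look like the following - a sketch under assumed variable names, where the task epochs, labels, and fold count are placeholders rather than the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in main-task data: epochs plus illusion-present/absent labels.
rng = np.random.default_rng(1)
X_task = rng.standard_normal((300, 64, 100)).reshape(300, -1)
y_task = rng.integers(0, 2, 300)

# With 20 folds, 95% of trials train the model on each iteration,
# so little data is sacrificed relative to localizer-based training.
cv = StratifiedKFold(n_splits=20, shuffle=True, random_state=1)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X_task, y_task, cv=cv)
print(f"within-task decoding accuracy: {scores.mean():.2f}")
```

Run separately per drug session, this removes any dependence on the localizer model while using nearly all of the experimental data for training.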