- Nov 2020
-
www.biorxiv.org
-
Reviewer #2:
In the manuscript by Gore et al., the authors show evidence that MMP9 is a key regulator of synaptic and neuronal plasticity in Xenopus tadpoles. Importantly, they demonstrate a role for MMP9 in valproic acid (VPA)-induced disruptions in the development of synaptic connectivity, a finding that may have particular relevance to autism spectrum disorder (ASD), as prenatal exposure to VPA leads to a higher risk for the disorder. Specifically, the authors show that the hyperconnectivity induced by VPA is mimicked by overexpression of MMP9 and reduced by MMP9 knockdown and pharmacological inhibition, suggesting a causal link. The experiments appear to be well executed, analyzed appropriately, and are beautifully presented. I have only a few suggestions for improvement of the manuscript and list a few points of clarification that the authors should address.
1) The authors refer to microarray data as the rationale for pursuing the role for MMP9 in VPA-induced hyperconnectivity. How many other MMPs or proteases with documented roles in development are similarly upregulated? The authors should say how other possible candidate genes did or did not change, perhaps presenting the list with data in a table (at least other MMPs and proteases). If others have changed, the authors should discuss their data in that context.
2) Please cite the microarray study(ies?).
3) In a related issue, the authors should comment on the specificity of SB-3CT, particularly with regard to other MMPs or proteases that may or may not have been found to be upregulated in the microarray experiment.
4) Results, first paragraph: although it is in the methods, please state briefly the timing of the VPA exposure and the age/stage at which the experiments were performed. Within the methods, please give an approximate age in days after hatching for readers who are not tadpole experts.
5) The finding obtained with a small number of MMP9-overexpressing cells is fascinating. Have the authors stained the tissue for MMP9 after VPA?
6) Do the authors have data on the intrinsic cell properties (input resistance, capacitance, etc.)? If so, they should include those data either in the Supplemental Information or in the text. These factors could absolutely influence hyperconnectivity or measurements of the synaptic properties, so at the very least the authors should discuss their findings in the context of the findings of James et al.
Minor Comments:
1) Page 15: 'basaly low' may be better worded as 'low at baseline'.
2) The color-coding is very useful and facilitates communicating the results. The yellow on Figure 5, however, is really too light. Consider another color.
-
Reviewer #1:
This study builds on previous work showing that exposure to valproic acid (VPA), which is used to model autism spectrum disorders, produces excess local synaptic connectivity, increased seizure susceptibility, abnormal social behavior, and increased MMP-9 mRNA expression in Xenopus tadpoles. VPA is an interesting compound that is also used as an antimanic and mood-stabilizing agent in the treatment of bipolar disorder, although its therapeutic targets in the treatment of mania, and the targets through which it acts as a model of neurodevelopmental disorders, have remained elusive. The authors validate that VPA-exposed tadpoles have increased MMP9 mRNA expression and then test whether the increased levels of MMP9 mediate the effects of VPA in the tadpole model. The authors report that overexpression of MMP-9 increases spontaneous synaptic activity and network connectivity, whereas pharmacological inhibition and genetic knockdown with antisense oligos rescue the VPA-induced effects, and they then tie the findings to experience-dependent synaptic reorganization.
1) What is the exact nature of the "increased connectivity"? Is there an increase in synapse numbers or solely an increase in dendritic complexity coupled with functional plasticity? The authors should document the properties of mEPSCs and mIPSCs recorded in TTX to isolate synaptic properties. Coupling this "mini" analysis to quantification of synapse numbers will address whether the changes are solely due to structural plasticity or also due to a functional potentiation of transmission. These experiments should at least be conducted in the MMP-9 overexpression, VPA treatment, and VPA treatment + MMP-9 loss-of-function cases to validate the basic premise that there is increased connectivity.
2) It is unclear why the authors focused on MMP-9 compared to other genes dysregulated by VPA. This point should be further discussed.
3) How does VPA alter MMP-9 levels? Is this through an HDAC-dependent mechanism? Granted, VPA has been proposed to work through a variety of mechanisms, including HDAC inhibition.
4) Does SB-3CT rescue the expression levels of MMP-9?
5) How does increased MMP-9 produce the synaptic and behavioral effects? What is the downstream target (a specific receptor?) that would produce the broad changes in synaptic and behavioral phenotypes? Or is this a rather non-specific effect on the extracellular matrix? Based on years of data on MMP-9 function, its impact on "structural plasticity" in general terms is not surprising, but further mechanistic details and specific targets would help move this field forward.
-
-
www.biorxiv.org
-
Reviewer #2:
General assessment:
The study investigated transient coupling between EEG and fMRI during resting state in 15 elderly participants using the previously established Hidden Markov Model approach. Key findings include: 1) deviations of the hemodynamic response function (HDR) in higher-order versus sensory brain networks, 2) power-law scaling of the duration and relative frequency of states, 3) associations between state duration and HDR alterations, and 4) cross-sectional associations between HDR alterations, white matter signal anomalies, and memory performance.
The work is rigorously designed and very well presented. The findings are potentially of strong significance to several neuroscience communities.
Major concerns:
My enthusiasm was only somewhat dampened by methodological issues related to the sample size for cross-sectional inference and by missed opportunities for a more specific analysis of the EEG.
1) A statistical power analysis was conducted prior to data collection, which is very laudable. Nevertheless, n = 15 is a very small sample for cross-sectional inference and commonly leads to false positives despite large observed effect sizes and small p-values (it can easily take up to 200 samples to reliably establish that a true correlation is zero). On the other hand, the within-subject results are far better posed from a statistical point of view and hence more strongly supported by the data.
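To illustrate the point, a small self-contained simulation (not based on the manuscript's data) shows how large sample correlations become by chance alone at n = 15 compared with n = 200 when the true correlation is zero:

```python
# Illustrative simulation: sampling variability of Pearson correlations under
# a true correlation of zero, at n = 15 versus n = 200.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 10_000

for n in (15, 200):
    r_values = []
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)          # independent of x, so true r = 0
        r_values.append(np.corrcoef(x, y)[0, 1])
    abs_r = np.abs(r_values)
    print(f"n = {n:3d}: 95th percentile of |r| = {np.quantile(abs_r, 0.95):.2f}, "
          f"P(|r| > 0.5) = {(abs_r > 0.5).mean():.3f}")
```

At n = 15 the 95th percentile of |r| lands around 0.5, i.e. sizable correlations arise from pure noise, whereas at n = 200 chance correlations of that magnitude essentially never occur.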
Recommendations:
The issue should be addressed non-defensively in a clearly identified section or paragraph of the discussion. The sample size should also be mentioned in the abstract.
The authors could put more emphasis on the participants as replication units for the observations. For a theoretical perspective, the work by Smith and Little may be of help here: https://link.springer.com/article/10.3758/s13423-018-1451-8. In terms of methods, more emphasis should be put on demonstrating representativeness, for example using prevalence statistics (see e.g. Donhauser, Florin & Baillet, https://doi.org/10.1371/journal.pcbi.1005990).
The supplements should display the most important findings for each subject to reveal the representativeness of the group averages.
For the state-duration analysis (boxplots), linear mixed-effects models (varying-slope models) may be an interesting option to inject additional uncertainty into the estimates and to allow partial pooling through shrinkage of subject-level effects (see the sketch below).
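A minimal sketch of such a varying-slope model using statsmodels, assuming a hypothetical long-format table with columns duration, state and subject (the file name and column names are placeholders, not the authors' pipeline):

```python
# Sketch of a varying-slope linear mixed-effects model for state durations.
# Random intercepts and random state slopes per subject yield partial pooling
# (shrinkage) of subject-level effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_durations_long.csv")   # placeholder long-format table

model = smf.mixedlm(
    "duration ~ C(state)",        # fixed effect of HMM state on duration
    data=df,
    groups=df["subject"],         # one random-effects group per participant
    re_formula="~C(state)",       # random slope of state within subject
)
result = model.fit(method="lbfgs")
print(result.summary())
```

With many states the full random-slope structure can be slow to fit; a reduced random-effects structure (random intercepts only) would be a reasonable fallback.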
Show more raw signals / topographies to build trust in the input data. It could be worthwhile to show topographic displays for the main reported states in their characteristic frequencies. See also the next concern.
2) The authors seem to have missed an important opportunity to pinpoint the characteristic drivers in terms of EEG frequency bands. The current analysis is based on broadband signals between 4 and 30 Hz, which seems atypical and reduces the specificity of the analysis. Analyzing the spectral drivers of the different states would not only enrich the results in terms of EEG but also provide a more nuanced interpretation. Are the VisN and DAN states potentially related to changes in alpha power, perhaps induced by spontaneous opening and closing of the eyes? What is the most characteristic spectral signature of the DMN state? And so on.
Recommendations:
Display the power spectrum indexed by state, ideally for each subject. This would allow inspecting the modulation of the power spectra by state and would reveal the characteristic spectral signatures without a full re-analysis (see the sketch below).
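A rough sketch of such a state-conditioned spectrum with Welch's method, assuming a sensor-space data array and a per-sample HMM state sequence (all file and variable names are placeholders):

```python
# Sketch: power spectra conditioned on HMM state.
import numpy as np
from scipy.signal import welch

fs = 250.0                                      # assumed sampling rate (Hz)
eeg = np.load("eeg_sensor_data.npy")            # placeholder, shape (n_channels, n_samples)
state_seq = np.load("hmm_state_sequence.npy")   # placeholder, shape (n_samples,)

for k in np.unique(state_seq):
    # Crude version: concatenate all samples assigned to state k. In practice,
    # spectra would be computed per contiguous state visit and then averaged.
    segment = eeg[:, state_seq == k]
    freqs, psd = welch(segment, fs=fs, nperseg=int(2 * fs))   # ~2-s windows
    mean_psd = psd.mean(axis=0)                               # average over channels
    print(f"state {k}: peak frequency {freqs[np.argmax(mean_psd)]:.1f} Hz")
```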
Repeat the essential analyses after bandpass filtering in the alpha or beta range. For example, if the main results look very similar after filtering to 8-12 Hz, one can conclude that most observations are related to alpha-band power.
While artifacts have been removed using ICA and the network states do not look like source-localized EOG artifacts, some of the spectral changes, e.g. in DAN/VisN, might be attributed to transient visual deprivation. This could be investigated by performing a control analysis regressing the EOG channel amplitudes against the HMM states (see the sketch below). These results could also enhance the discussion regarding activation/deactivation.
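One possible form of this control, sketched under the assumption that a per-sample EOG amplitude envelope and the HMM state sequence are available (placeholder names and files):

```python
# Sketch: does EOG amplitude predict occupancy of a given HMM state?
import numpy as np
import statsmodels.api as sm

eog = np.load("eog_amplitude.npy")              # placeholder envelope, shape (n_samples,)
state_seq = np.load("hmm_state_sequence.npy")   # placeholder, shape (n_samples,)

for k in np.unique(state_seq):
    y = (state_seq == k).astype(int)            # 1 whenever state k is active
    X = sm.add_constant(eog)
    fit = sm.Logit(y, X).fit(disp=0)
    # Note: these p-values ignore temporal autocorrelation and are only for orientation.
    print(f"state {k}: EOG coefficient = {fit.params[1]:+.3f}, p = {fit.pvalues[1]:.3g}")
```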
-
Reviewer #1:
This manuscript uses simultaneous fMRI-EEG to understand the haemodynamic correlates of electrophysiological markers of brain network dynamics. The approach is both interesting and innovative. Many different analyses are conducted, but the manuscript is in general quite hard to follow. There are grammatical errors throughout, sentences/paragraphs are long and dense, and they often use vague/imprecise language or rely on (often) undefined jargon. For example, sentences such as the following example are very difficult to decipher and are found throughout the manuscript: "if replicated, an association between high positive BOLD responsiveness and a DAN electrophysiological state, characterized by low amplitude (i.e., desynchronized) activity deviating from energetically optimal spontaneous patterns, would be consistent with prior evidence that the DMN and DAN represent alternate regimes of intrinsic brain function". As a result, the reader has to work very hard to follow what has been done and to understand the key messages of the paper.
Much is made of a potential power-law scaling of lifetime/interval times as being indicative of critical dynamics. A power-law distribution does not guarantee criticality, and could arise through other properties. Moreover, to accurately determine whether the proposed power-law is indicative of a scale-free system, the empirical property must be assessed over several orders of magnitude. It appears that only the 25-250 ms range was considered here.
The KS statistic is used to quantify the distance between the empirical and power-law distributions, which is then used as a marker of the degree of criticality. It is unclear that this metric is appropriate, given that transitions in and out of criticality can be highly non-linear. Moreover, the physiological significance of having some networks in a critical state while others are not is unclear. Each network is part of a broader system (i.e., the brain). How should one interpret apparent gradations of criticality in different parts of the system?
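For what it is worth, whether the empirical distribution genuinely favors a power law over plausible alternatives can be probed with an explicit model comparison, for example using the powerlaw package of Alstott et al. (a minimal sketch; the lifetime array is a placeholder):

```python
# Sketch: fit a power law to state lifetimes and compare it against a
# lognormal alternative instead of relying on the KS distance alone.
import numpy as np
import powerlaw

lifetimes_ms = np.load("state_lifetimes_ms.npy")   # placeholder array of durations (ms)

fit = powerlaw.Fit(lifetimes_ms)                   # xmin estimated from the data
print(f"estimated xmin = {fit.xmin:.1f} ms, alpha = {fit.power_law.alpha:.2f}")

# Likelihood-ratio test: R > 0 favours the power law, p gives its significance.
R, p = fit.distribution_compare("power_law", "lognormal")
print(f"power law vs lognormal: R = {R:.2f}, p = {p:.3f}")
```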
The sample size is small. I appreciate the complexity of the experimental paradigm, but the correlations do not appear to be robust. The scatterplots mask this to some extent by overlaying results from different brain regions, but close inspection suggests that the correlations in Fig 6 are driven by 2-3 observations with negative BOLD responses, the correlations in Fig 7A-B are driven by two subjects with positive WMSA volume, and Fig 7B is driven by 3 or so subjects with negative power-law fit values (indeed, x~0 in this plot is associated with a wide range of recall scores). Some correction for multiple comparisons is also required given the number of tests performed.
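Such a correction could be as simple as a Benjamini-Hochberg FDR procedure over the full set of reported tests (a sketch with placeholder p-values, not values taken from the manuscript):

```python
# Sketch: Benjamini-Hochberg FDR correction across all reported correlation tests.
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.034, 0.048, 0.003, 0.021]      # placeholder p-values
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for p, pa, r in zip(p_values, p_adj, reject):
    print(f"p = {p:.3f} -> adjusted p = {pa:.3f}, significant: {r}")
```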
Figure 1 - panel labels would make it much easier to understand what is shown in this figure relative to the caption.
Figure 2 - the aDMN does not look like the DMN at all; it is just the frontal lobe. Similarly, the putative DAN is not the DAN but rather the lateral and medial parietal cortex and cuneus.
P6, Line 11 - please define "simulation testing"
-
Summary: All reviewers appreciated the technical innovation of the work, but they also shared concerns about the robustness of some of the analyses, results, and content of the manuscript.
-
-
www.biorxiv.org
-
Author Response:
Summary: This is an interesting topic, and these findings are potentially of theoretical significance for the field of sleep and memory consolidation, as well as potentially of practical importance. However, reviewers raised potential issues with the methods and interpretation. Specifically, reviewers were not confident that the paper reveals major new mechanistic insights that will make a major impact on a broad range of fields.
Reviewer #1:
This work claims to show that learning of word associations during sleep can impair learning of similar material during wakefulness. The effect of sleep on learning depended on whether slow-wave sleep peaks were present during the presentation of that material during sleep. This is an interesting finding, but I have a lot of questions about the methods that temper my enthusiasm.
We thank the reviewer for the careful reading of our manuscript and for the helpful comments. Most of the issues that were raised concern the clarity of writing. We will remove sleep-specific jargon where possible and will add relevant theoretical background and methodological details in the revised version of our manuscript.
1) The proposed mechanism doesn't make sense to me: "synaptic down-scaling of hippocampal and neocortical language-related neurons, which were then too saturated for further potentiation required for the wake-relearning of the same vocabulary". Also lines 105-122. What is 'synaptic down-scaling'? What are 'language related neurons'? How were they 'saturated'? What is 'deficient synaptic renormalization'? How can the authors be sure that there are 'neurons that generated the sleep- and ensuing wake-learning of ... semantic associations'? How can inferences about neuronal mechanisms (i.e. mechanisms within neurons) be drawn from what is a behavioural study?
We will improve the writing of our manuscript and will add formal definitions of the key concepts (synaptic down-scaling / renormalization, synaptic saturation, …) to clarify the proposed mechanism. We are also open to discussing alternative explanations.
We admit that there is no way of truly knowing whether there were specific neurons or neuronal networks that encoded the semantic associations for word pairs that were played during sleep or during ensuing wakefulness. However, the behavioural data of the implicit memory test and the recall test suggest that participants formed memories for the word pairs played during sleep and during learning in the waking state. These memories must be represented in the brain – most likely in the hippocampus and in cortical regions involved in the processing of language. Indeed, our previous report suggested that successful retrieval of sleep-played semantic associations recruited hippocampus and language sites (Züst et al., 2019, Curr. Biol.).
2) On line 54 the authors say "Here, we present additional data from a subset of participants of our previous study in whom we investigated how sleep-formed memories interact with wake-learning." It isn't clear what criteria were used to choose this 'subset of participants'. Were they chosen randomly? Why were only a subset chosen, anyway?
The dataset we reported in Current Biology (Züst et al., 2019) consisted of two samples. Participants of the first sample stayed in the sleep laboratory after waking and performed the implicit memory test and the wake-learning task there. Participants of the second sample were escorted to the MR centre after waking to perform the implicit memory test in the MR scanner; these participants did not perform a wake-learning task. Therefore, we could not include them in the study of wake-learning. Nevertheless, we do include ALL data of the first sample. We will clarify this in the revised version of our manuscript.
3) The authors do not appear to have checked whether their nappers had explicit memory of the word pairs that had been presented. Why was this not checked, and couldn't explicit memory explain the implicit memory traces described in lines 66-70 (guessing would be above chance if the participants actually remembered the associations).
Previous work from our own group (Ruch et al, 2014) as well as from other groups (Andrillon & Kouider, 2016; Cox et al., 2014; Arzi et al., 2012) clearly suggests that sleep-played sounds and words are not remembered consciously after waking up. This is why we administered an implicit memory test following waking. We only asked participants at the end of the experiment – i.e. after they had completed the wake-learning task – whether they had noticed or heard something unusual or unexpected during sleep. This first question was followed by the second question of whether participants had heard words during sleep. All participants denied having heard anything during sleep. This suggests that participants had no explicit memory for the sleep-played vocabulary. We will mention this in the revised version of our manuscript.
Importantly, if participants had explicit memory for the sleep-played vocabulary, the finding that these memories suppress conscious re-learning of the same or similar content during subsequent wakefulness would contradict previous findings demonstrating that repeated learning improves retention.
Reviewer #2:
This paper reports on a very interesting and potentially highly important finding - that so-called "sleep learning" does not improve relearning of the same material during wake, but instead paradoxically hinders it. The effect of stimulus presentation during sleep on re-learning was modulated by sleep physiology, namely the number of slow wave peaks that coincide with presentation of the second word in a word pair over repeated presentations. These findings are of theoretical significance for the field of sleep and memory consolidation, as well as of practical importance.
We appreciate the reviewer’s enthusiasm for our work and are grateful for the detailed and helpful comments.
Concerns and recommendations:
1) The authors' results suggest that "sleep learning" leads to an impairment in subsequent wake learning. The authors suggest that this result is due to stimulus-driven interference in synaptic downscaling in hippocampal and language-related networks engaged in the learning of semantic associations, which then leads to saturation of the involved neurons and impairment of subsequent learning. Although at first the findings seem counter-intuitive, I find this explanation to be extremely interesting. Given this explanation, it would be interesting to look at the relationship between implicit learning (as measured on the size judgment task) and subsequent explicit wake-relearning. If this proposed mechanism is correct, then at the trial level one would expect that trials with better evidence of implicit learning (i.e. those that were judged "correctly" on the size judgment task) should show poorer explicit relearning and recall. This analysis would make an interesting addition to the paper, and could possibly strengthen the authors' interpretation.
The main findings did not change when we controlled for implicit memory performance. Most importantly, the reported interaction between re-learning Condition (congruent vs. incongruent translations) and the number of slow-wave peaks during sleep (0-1 vs. 2-4) remained significant when we included implicit memory retrieval as predictor. Furthermore, this interaction was not mediated by implicit retrieval performance (no significant 3-way interaction).
We decided against reporting these analyses in the manuscript because including performance in the implicit memory test as additional predictor reduced the trial count to critically low levels in some conditions, making significance testing unreliable.
2) In some cases, a null result is reported and a claim is based on the null result (for example, the finding that wake-learning of new semantic associations in the incongruent condition was not diminished). Where relevant, it would be a good idea to report Bayes factors to quantify evidence for the null.
We will include Bayes Factors for our post-hoc analyses in the revised version of our manuscript.
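For reference, a Bayes factor for a paired post-hoc comparison could be obtained, for example, with the pingouin package (a minimal sketch; the arrays are placeholders, not the study's data):

```python
# Sketch: Bayes factor for a paired comparison, e.g. recall in the incongruent
# condition versus baseline.
import numpy as np
import pingouin as pg

incongruent = np.load("recall_incongruent.npy")   # placeholder per-participant scores
baseline = np.load("recall_baseline.npy")         # placeholder per-participant scores

res = pg.ttest(incongruent, baseline, paired=True)
print(res[["T", "p-val", "BF10"]])   # BF10 < 1/3 is commonly read as evidence for the null
```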
3) The authors report that they "further identified and excluded from all data analyses the two most consistently small-rated and the two most consistently large-rated foreign words in each word lists based on participants' ratings of these words in the baseline condition in the implicit memory test." Although I realize that the same approach was applied in their original 2019 paper, this decision point seems a bit arbitrary, particularly in the context of the current study where the focus is on explicit relearning and recall, rather than implicit size judgments. As a reader, I wonder whether the results hold when all words are included in the analysis.
We wanted the analyses to be consistent with the original report in Current Biology (Züst et al, 2019). Nevertheless, for this revision, we will include all learning material in the analyses. Note that the changes in the overall pattern of results are minuscule and the message remains the same when stereotypical/biased words are included vs. excluded.
4) In the main analysis examining interactions between test run, condition (congruent/incongruent) and number of peak-associated stimulations during sleep (0-1 versus 3-4), baseline trials (i.e. new words that were not presented during sleep) are excluded. As such, the interactions shown in the main results figure (Figure D) are a bit misleading and confusing, as they appear to reflect comparisons relative to the baseline trials (rather than a direct comparison between congruent and incongruent trials, as was done in the analysis). It also looks like the data in the "new" condition are just replicated four times over the four panels of the figure. I recommend reconstructing the figure so that a direct visual comparison can be made between the number of peaks within the congruent and incongruent trials. This change would allow the figure to more accurately reflect the statistical analyses and results that are reported in the manuscript.
We will update the figure with a panel that presents the results for all conditions on the same axes. This will facilitate direct comparisons between all conditions.
5) In addition to the main analysis, the authors report that they also separately compared the conscious recall of congruent and incongruent pairs that were never or once vs. repeatedly associated with slow-wave peaks with the conscious recall in the baseline condition. Given that four separate analyses were carried out, some correction for multiple comparisons should be done. It is unclear whether this was done as it does not seem to be reported.
We will clarify where and how we corrected for multiple comparisons in the revised version of the manuscript.
-