- Jan 2021
This impressive manuscript describes a comprehensive, multifaceted analysis of the morphological and molecular changes that accompany photosynthetic establishment during seedling de-etiolation. Morphological data, focusing in particular on the photosynthetic thylakoid membranes, are derived using transmission electron microscopy (TEM), serial block face scanning electron microscopy (SBF-SEM), and confocal microscopy, while quantitative molecular data on the abundances of proteins and lipids are derived using mass spectrometry and western blotting. The various data are acquired over a time course between 0 h and 96 h post-illumination, and with a high level of temporal resolution. The data allow the authors to develop a mathematical model for the expansion of the surface area of thylakoids (reaching 500 times the surface area of the cotyledon leaf), which matches well with experimental observations from the SBF-SEM analysis for earlier, but not later, stages of de-etiolation. Moreover, the data point to a two-phase organization of the de-etiolation process, with the first phase ("Structure Establishment") characterized by thylakoid assembly and photosynthetic establishment, and the second phase ("Chloroplast Proliferation") characterized by chloroplast division and cell expansion.
The data are of a high standard, and the depth and breadth of analysis in a single, unified study is unprecedented. While it is arguable that there are few major, completely novel insights reported here (indeed, in the Discussion, the authors very helpfully point out how many of the parameters they have measured are consistent with data reported elsewhere by others), this should not detract from the overall value of the study; a major and unique strength here is that all of the data have been acquired together and so are directly comparable. I have no doubt that this dataset will be extremely interesting to many researchers, and prove to be an invaluable resource for the plant science community. Consequently, I am sure that it will attract many citations.
I have a few specific comments that I would like the authors to consider carefully, as follows.
1) Figure 3. The 3D reconstructions are undoubtedly useful for deriving quantitative data, as they enable the derivation of thylakoid surface area data to verify the mathematical model. However, it is very difficult to see anything clearly in the images shown in the Figure. I wonder if the authors can make the images clearer, and then also point to and describe some of the key features. The videos do help a bit, but even these are not that clear.
2) Page 9, second paragraph. It is here that the "two phases" model is first proposed. I really could not see a clear basis for proposing this model here, using the data that had been presented thus far. As I see it (and based on the way the two phases are described in the Discussion), one can't really propose this model until after the chloroplast number and cell size data have been presented.
Moreover, the description of the second phase here ("and a second phase...") seems a bit inconsistent with the statement in the paragraph above that thylakoid surface area increases dramatically between T4 and T24, and much less between T24 and T96.
3) Figure 6, and the related supplementary figure. Loading controls are missing here, and should be added. Also, it is stated that a number of proteins (PsbA, PsbD, PsbO, Lhcb2) are "detectable" at T0 (line 348, page 11). To me, they look UNdetectable.
4) Dividing chloroplasts. On page 13, lines 412-413, it is stated that the volume of dividing chloroplasts was measured, and we are referred to Figures 8E and 4B in support of this statement. However, it is not explained how this was done; a clearer and more specific explanation is needed. Did the authors seek out dumbbell-shaped organelles and quantify those? If so, images are needed to illustrate this point. And, I don't see anything relevant in Fig. 4B - this callout apparently belongs in the following sentence. The statement that the average size of dividing chloroplasts was higher than that of all chloroplasts (lines 413-414) is not really surprising if the authors were measuring organelles just on the point of becoming two organelles.
5) Page 13, beginning of modelling section. The motivation for this section needs to be better introduced. When I first read it, I could not understand why the authors wished to again "determine the thylakoid membrane surface area", as this had already been discussed earlier in the manuscript.
Also related to the modelling: did the authors take into account the existence of appressed membranes when calculating the surface area exposed to the stroma (lines 431-432)? And, assuming it is clearly established that there is a 1:1 relationship between these proteins and the relevant complexes (lines 441-443), perhaps this should be stated and the relevant literature cited.
The study investigates key components of the entorhinal circuits through which signals from the hippocampus are relayed to the neocortex. The question addressed is important, but the stated claim that layer 5b (L5b) to layer 5a (L5a) connections mediate hippocampal-cortical outputs in LEC but not MEC appears to be an over-interpretation of the data. First, the experiments do not test hippocampal to L5a connections, but instead look at L5b to L5a connections. Second, the data provide evidence that there are L5b to L5a projections in both LEC and MEC, which contradicts the claim made in the title. These projections do appear denser in LEC under the experimental conditions used, but possible technical explanations for the difference are not carefully addressed. If these technical concerns were addressed, and the conclusions modified appropriately, then I think this study could be very important for the field and would nicely complement recent work from several labs that collectively suggests that information processing in deep layers of MEC is more complex than has been appreciated (e.g. Sürmeli et al. 2015, Ohara et al. 2018, Wozny et al. 2018, Rozov et al. 2020).
Major Concerns:
1) An impressive component of the study is the introduction of a new mouse line that labels neurons in layer 5b of MEC and LEC. However, in each area the line appears to label only a subset (30-50%) of the principal cell population. It's unclear whether the unlabelled neurons have similar connectivity to the labelled neurons. If the unlabelled neurons are a distinct subpopulation then it's difficult to see how the experiments presented could support the conclusion that L5b does not project to L5a; perhaps there is a projection mediated by the unlabelled neurons? I don't think the authors need to include experiments to investigate the unlabelled population, but given that the labelling is incomplete they should be more cautious about generalising from data obtained with the line.
2) For experiments using the AAV conditionally expressing oChIEF-citrine, the extent to which the injections are specific to LEC/MEC is unclear. This is a particular concern for injections into LEC, where the possibility that perirhinal or postrhinal cortex are also labelled needs to be carefully considered. For example, in Figure 3D it appears the virus has spread to the perirhinal cortex. If this is the case then axonal projections/responses could originate there rather than from L5b of LEC. I suggest excluding any experiments where there is any suggestion of expression outside LEC/MEC, or where this cannot be ruled out through verification of the labelling. Alternatively, one might include control experiments in which the AAV is targeted to the perirhinal and postrhinal cortex. Similar concerns should be addressed for injections that target the MEC to rule out spread to the pre/parasubiculum.
3) It appears likely from the biocytin fills shown that the apical dendrites of some of the recorded L5a neurons have been cut (e.g. Figure 4A, Figure 4-Supplement 1D, neuron v). Where the apical dendrite is clearly intact and undamaged, synaptic responses to activation of L5b neurons are quite clear (e.g. Figure 4-Supplement 1D, neuron x). Given that axons of L5b cells branch extensively in L3, it is possible that any synapses they make with L5a neurons would be on their apical dendrites within L3. It therefore seems important to restrict the analysis only to L5a neurons with intact apical dendrites; a reasonable criterion would be that the dendrite extends through L3 at a reasonable distance (> 30 μm?) below the surface of the slice.
4) Throughout the manuscript, the data are over-interpreted. Here are some examples:
The title over-extrapolates from the results and should be changed. A more accurate title would be along the lines of "Evidence that L5b to L5a connections are more effective in lateral compared to medial entorhinal cortex".
"the conclusion that the dorsal parts of MEC lack the canonical hippocampal-cortical output system" seems over-stated given the evidence (see comments above).
Discussion, para 1, "Our key finding is that LEC and MEC are strikingly different with respect to the hippocampal-cortical pathway mediated by LV neurons, in that we obtained electrophysiological evidence for the presence of this postulated crucial circuit in LEC, but not in MEC". This is misleading as there is also evidence for L5b to L5a connections in MEC, although this projection may be relatively weak. Recent work by Rozov et al. demonstrating a projection from intermediate hippocampus to L5a provides good evidence for an alternative model in which MEC does relay hippocampal outputs. This needs to be considered.
5) What proportion of responses are mono-synaptic? How was this tested?
- Nov 2020
This is a longitudinal aging study of the physiological changes in a specific Drosophila neural circuit that participates in flight and escape responses. To date, there have been few examples of longitudinal aging studies examining the vulnerability or resilience of neurophysiology at the resolution presented in this study. The analyses have revealed different trajectories for individual neural components of the studied behaviors during aging. The study also reveals different sensitivities of neural components to stressors that are known to alter lifespan (temperature, oxidative stress). The study is well-written and the experiments are performed at a high level. A concern is that the study is highly descriptive and provides little mechanistic explanation for the observed differences in the vulnerability or resilience of neural functions. In addition, the authors present little evidence other than lifespan to support their interpretation of the effects of the experimental conditions at the cellular level.
1) Overall, the study is highly descriptive and there is a lack of experiments aimed at understanding the cellular effects of aging on neural function.
2) There is a lack of supporting data or discussion about the expected cellular mechanisms of the high-temperature manipulations or SOD mutants. While it is true that both of these manipulations shorten lifespan, their relationship to the natural process of aging remains controversial. The ability to extend the resilience of the neural components studied by a manipulation that extends lifespan (e.g. diet, insulin signaling mutants) would be very supportive.
3) The data from the current study demonstrate that the major effect of SOD mutants on neural function and mortality exists in newly eclosed animals, suggesting significant issues during development in SOD mutants. This complicates the comparison of this condition to the other conditions, or even considering it a manipulation of aging. The authors should also consider showing that the effects on neural function in SOD mutants are mimicked by other conditions that alter ROS more acutely, such as paraquat exposure, or test mutations in insulin signaling (i.e. chico), which have been shown to increase antioxidant expression.
4) The authors contend that the changes in neural function, particularly with regard to seizure susceptibility, provide indices for age progression. It is unclear to this reviewer how the neural functions described in this study, including the appearance of seizures, contribute to the lifespan of the flies. One could imagine that changes in flight distance or escape response could contribute to lifespan in the wild, but do changes in flight, jump response, or seizure susceptibility have any bearing on the lifespan of flies in vials? Why would seizure susceptibility be predictive of mortality? In addition, the assays presented here utilize experimental conditions (intense whole-head stimulation) that are seemingly non-physiological, so it is unclear what the declines represent in a normally aging fly. The authors need to discuss this.
5) There are no experiments aimed at understanding the cellular or molecular nature of the functional declines presented.
The authors show that neonatal LPS (nLPS) treatment is associated with downregulated PFC levels of ATPase phospholipid transporting 8A2 (ATP8A2); this downregulation is associated with elevated IFN in serum and PFC and is blocked by an IFN-blocking antibody. Antibody treatment marginally antagonized the effects of nLPS on depressive-like behavior, but was ineffective when females alone were examined.
This paper adds to a long list of publications reporting alterations in a number of diverse signaling molecules after nLPS treatment. Strengths are that it is generally well done, with appropriate attention to experimental design (e.g. litter effects) and statistical treatment. However, while the downregulation of ATP8A2 is indisputable, a major weakness is that there is no functional relationship revealed between this and any subsequent behavioral, anatomical or physiological alterations. While the possible role of IFN in causing the increased depressive-like behavior is of some interest, the data here are not convincing. Furthermore, while other work has reported extensively on sex-specific alterations in behavior after nLPS, the behavioral analysis here (FST, TST) is rather limited.
1) There is little justification for reverting to the non-alpha-corrected LSD test when the Tukey does not show significance.
2) The extensive literature on the effects of nLPS is only superficially reviewed.
3) The direct involvement of ATP8A2 in any behavioral or functional outcomes should be tested.
4) How does IFN cause downregulation of ATP8A2?
5) Other behavioral alterations should be tested, such as the open field test, which is less stressful than the FST or TST.
In this study, Mackay and colleagues show that resting calcium levels are increased in axons of neurons derived from YAC128 mice, a Huntington Disease model expressing full-length mutant Huntingtin with 128 CAG-repeats in a yeast artificial chromosome. This increase in baseline calcium signaling is due to a continuous leak of calcium from the ER that leads to increased spontaneous neurotransmission and reduced evoked neurotransmission. Overall, the manuscript thoroughly documents a clear example of inverse regulation of spontaneous and evoked glutamate release in a well-established monogenic neurological disease model. Moreover, the authors link this observation to dysregulation of calcium release/leak from the presynaptic endoplasmic reticulum. I have some relatively minor comments that may help improve this work.
1) While the authors nicely document and interrogate the relationship between resting axonal calcium signals and spontaneous release, the impact of dysfunctional ER calcium signaling on evoked release is not causally linked. For instance, it would be nice to show that buffering excess baseline calcium (EGTA-AM?) can equilibrate the difference in evoked release phenotype between wild type and YAC128 neurons.
2) Figure 7: The authors state that evoked glutamate release is reduced in YAC128 neurons; can they show this directly, e.g. with a bar graph of the absolute values of iGluSnFR amplitudes?
3) Minor: Figure panels are labeled with small letters in the figures but with capital letters in the main text.
In this study, Deng and colleagues have sought to assess the neural correlates of individual differences in responsiveness variability across wakefulness and moderate levels of propofol-induced anaesthesia. In addition to resting state scanning and an auditory story task, the participants underwent behavioural assessments including memory recall and a target detection task. Furthermore, the auditory story task was independently rated by a separate group for its "suspensefulness". Focusing the analysis on three major large-scale brain networks, the authors' group-level results first indicated significant differences in the between-network interactions of the chosen networks across wakefulness and sedation, specifically in the narrative condition. Furthermore, during the same condition, there was reduced cross-subject correlation under sedation relative to wakefulness, centred mainly on the sensorimotor brain regions. Moreover, based on the responses in the target detection task, the participants were grouped into fast and slow responders, which then showed significant differences in gray matter volume, as well as connectivity differences within the executive control network in the wakeful auditory story task condition.
Overall, this is a well-written manuscript. However, my initial enthusiasm about the question of interest was hampered by major theoretical and methodological concerns related to this study. Below I outline these points in the hopes that they improve this study and its outcomes.
First and foremost, the authors state that their major interest in this study was to assess individual differences in sedation-induced response variability and its potential brain bases. Despite the attractiveness of this topic, which is undoubtedly of interest both to the academic community and the general public, I do not believe that the current study design would allow the authors to answer this question. First of all, although I completely appreciate the difficulty in recruiting participants to take part in such pharmacological studies, I do not think that a group of 17 participants is enough to be able to assess "individual differences". For this to work, there has to be a large enough sample based on adequate power calculations, keeping in mind all the spurious false positive effects that are generated by pharmacological interventions and their downstream effects on connectivity estimates (e.g. motion, global signal etc.). Second, though it is perfectly valid to carry out the initial within-group connectivity and whole-brain activity analyses for the task/rest (which I believe are the only statistically and experimentally sound sections), following these results, the authors mainly carry out multiple exploratory analyses that aim to infer what happened to 3 non-respondent participants (or 6 slow responders). This to me is closer to a case study than to an experimental study with proper statistics. Overall, this fast/slow responder split only comes as an afterthought and does not seem to be the main intention behind the study. This is at odds with the major goal stated in the introduction that the main aim of the authors was to assess inter-individual differences. As such, I do not think that the analyses highlighted by the authors provide enough evidence to support their claims. More detailed points are provided below:
• The introduction is well-written, citing as much of the relevant literature on this topic as possible. Having said that, I am not really convinced about the justification for selecting the dorsal attention, executive control and default mode networks as the sole focus of the authors' analysis. Although it is true that there is a strong a priori basis that these associative networks play an important role in maintaining consciousness, the references that the authors refer to are equally biased in focusing their analyses on specific higher-order networks, creating a circular argument. In light of evidence highlighting the importance of sensorimotor networks in this context, as well as the balance in their interactions with associative cortices, I would argue that a whole-brain approach would be better suited. Furthermore, as indicated by the whole-brain analysis during the auditory story task, most alterations were centered on the primary somatosensory regions. This is at odds with the justification of the authors on focusing their connectivity analyses solely on associative brain networks.
• Given the wide age range (and its potential influence on the obtained results), it would be great for the authors to provide the mean and standard deviation of age within groups, and whether the groups were age-matched (though the range seems similar).
• The authors state that only the reaction time was measured in the auditory target detection task, but later in the results section they mention "omissions". Given that such omissions might be strongly indicative of unresponsiveness/sleep, it is unclear how one can interpret the observed brain-based effects solely from the perspective of reduced information processing (especially when the data was collected under eyes-closed conditions).
• The authors provide a thorough description of the sedation administration procedures, which is excellent. Nevertheless, I was wondering whether the blood plasma propofol concentrations could be used to explain some of the individual differences in the results, or even as a nuisance regressor to show that the effects were not simply driven by this factor.
• I failed to find any information in the methods section as to why/how the authors have decided on a mean-split of the participants to fast/slow responders. Given the already small sample size, further reducing degrees of freedom by a split of 11 versus 6 participants makes it very problematic in terms of any statistical tests that can be carried out.
• Line 441 - Results should not be reported if they did not reach statistical significance.
• Line 448 - For the two analyses on this page the authors indicate that, although in the wakefulness condition there was significant brain activity that correlated with (not "predicted") the task stimulus, no significant effects were observed in the sedation condition. This absence of evidence should not then be taken as evidence of absence. In other words, such lack of evidence can be explained by a variety of factors not attributable to the effect of sedation on brain activity (e.g. simply by the fact that the participants were not paying attention to the story or were falling asleep).
• Line 484 - I do not think it is acceptable/justifiable to carry out post-hoc tests when there was no significant difference in the main ANOVA.
• Line 503 - I am not really sure about the justification behind the assessment of gray matter volume. Besides the issues related to small sample size, the observed differences in functional connectivity may then simply be due to differences in the quality of the data that can be extracted from the defined ROIs in a subset of participants. Was this analysis corrected for age (as a continuous variable)? In any case, as far as I am aware, there is no simple relationship between gray matter volume and functional connectivity (i.e. greater/smaller gray matter volume does not necessarily mean greater/smaller functional connectivity). Hence, one cannot draw the conclusion that: "These results lend support to the functional connectivity results above, and together they strongly suggest that connectivity within the ECN, and especially the frontal aspect of this network, underlies individual differences in behavioural responsiveness under moderate anaesthesia."
• Line 509 - Again, I am not really sure about the justification behind the analysis carried out here. The authors state that the ROIs that were found in the gray matter volume analysis overlapped with a priori ROIs which they suggest explain differences observed in functional connectivity. They then select a subset of these ROIs and again show that there are differences in connectivity. This seems rather circular.
• The authors state that "Rather, only the functional connectivity within the ECN during the wakeful narrative condition differentiated the participants' responsiveness level, with significantly stronger ECN connectivity in the fast, relative to slow responders." I apologise if I am missing something, but I do not see any evidence for such a strong claim. All that the authors have found was that there were significant functional connectivity differences in the executive control network in the wakefulness condition between fast and slow responders (which was defined and grouped by the authors themselves), with no significant effect of condition or state. I fail to understand why this one result from a multitude of exploratory analyses that were conducted was picked out as the "main finding" when one cannot make any inferences about its direct relation to sedation.
• Overall, I would urge the authors to re-think their analysis strategy and the corresponding discussion of their results.
This paper contributes to the large number of papers currently posted on BioRxiv showing that the N protein of SARS CoV2 can undergo liquid-liquid phase separation on its own and in the presence of RNA, and that this behavior can be modulated by phosphorylation. The work here is somewhat different from much of the other work in that the authors have generated the N protein from mammalian cells. The authors have also examined the effects of known drugs on the phase separation process. Given the importance of coronavirus it is imperative to get out information on its biology. But it is also imperative that the information be correct, interpreted with appropriate caution, and of sufficient depth to be valuable to others in the field and not potentially misdirect future research and clinical efforts. In this respect, I think the authors need to clean up some of their experiments and pull back on some of their claims, as I detail below.
1) In general, the authors' use of size, number and morphology of droplets to assess the effects of small molecules in figure 4 is problematic. The authors should be measuring the effects of the compounds on the phase separation threshold concentration (of N+RNA or of salt) to see whether the compounds stabilize or destabilize the droplets. Changes in size, number and morphology can be due to many factors, many of which are unlikely to be relevant to viral assembly.
For example, the authors report that nelfinavir mesylate and LDK378 produced fewer but larger droplets, and conclude that these compounds could disrupt virion assembly. This is problematic for two reasons. Most importantly, it is almost impossible to interpret what fewer larger droplets means. Are they nucleating more slowly and/or growing more rapidly? Are they more viscous and thus less disrupted by handling? Are they denser and thus settling more rapidly? Has the thermodynamic threshold to phase separation changed? Secondarily, because of these uncertainties, it is an overinterpretation to state based on the data that these compounds could act by disrupting virion assembly.
The class II molecules, which increase both size and number of droplets, are probably more relevant, since concomitant increases in both probably mean that the threshold concentration for LLPS has decreased, and thus the compound has stabilized the droplets.
The changes in morphology induced by the class III molecules are also hard to interpret. Does the change reflect greater adhesion to and spreading on the slide surface (probably irrelevant to drug action)? Or changes in droplet dynamics--slowed fusion or increased viscosity? What does it mean that nilotinib causes the morphology of N+RNA condensates to become filamentous, and could this same effect account for the (modest) decrease in N protein foci in cells upon drug treatment?
I honestly am concerned that the authors conclude the paper urging the use of nilotinib in clinical trials, and of drug effects on phase separation as a proxy for vRNP formation, based on these very thin data.
2) In Figure 1 (and beyond), it is not good practice to use fractional areas of droplets that have settled to a slide surface to quantify droplet formation in LLPS experiments. Droplets fall to the slide surface at different rates depending on their sizes, which in turn depend on many factors, some biochemical (the relative rates of nucleation and growth; density; all of which can vary with buffer conditions) and some technical (exactly how the sample was handled). Turbidity, which also is imperfect, is nevertheless a more reliable measure; so is microscopic assessment of the presence or absence of droplets. The authors should provide at least some additional measure in these initial experiments to show the numbers obtained from the fractional area are qualitatively correct.
3) In figure 1C, the dissolution with salt is not a measure of liquid-like properties, as claimed at the bottom of page 3. The authors should look for evidence of droplet fusion, spherical shape (for droplets larger than the diffraction limit) and rapid exchange with solvent.
4) The claims on page 4 that the condensates formed with viral RNA fragments are gel-like should be supported with some measure of dynamics, and not simply the shape of the objects that settle to the slide surface.
5) In the CLMS experiments, how do the authors know that the changes observed are due to LLPS per se and not to differences in structure induced by differences in salt? It seems like additional controls are warranted to make this claim. Relatedly, the authors should state/examine whether higher salt affects dimerization of the dimerization domain.
6) The analogy made on page 4 between the asymmetric structures observed upon mixing N and viral RNA fragments to the strings of vRNPs observed by cryoEM is quite a stretch. The vRNPs are 15 nm in diameter. The structures observed here are vastly larger. Such associated but non-fused droplets are often observed for solidifying phase separating systems. The superficial similarity of connected particles between the cellular vRNPs and the structures here is, in my opinion, unlikely to be meaningful.
In the manuscript by Gore et al., the authors show evidence that MMP9 is a key regulator of synaptic and neuronal plasticity in Xenopus tadpoles. Importantly, they demonstrate a role for MMP9 in valproic acid-induced disruptions in the development of synaptic connectivity, a finding that may have particular relevance to autism spectrum disorder (ASD), as prenatal exposure to VPA leads to a higher risk for the disorder. Specifically, the authors show that hyper-connectivity induced by VPA is mimicked by overexpression of MMP9 and reduced by MMP9 knockdown and pharmacological inhibition, suggesting a causal link. The experiments appear to be well executed, analyzed appropriately, and are beautifully presented. I have only a few suggestions for improvement of the manuscript and list a few points of clarification that the authors should address.
1) The authors refer to microarray data as the rationale for pursuing the role for MMP9 in VPA-induced hyperconnectivity. How many other MMPs or proteases with documented roles in development are similarly upregulated? The authors should say how other possible candidate genes did or did not change, perhaps presenting the list with data in a table (at least other MMPs and proteases). If others have changed, the authors should discuss their data in that context.
2) Please cite the microarray study(ies?).
3) In a related issue, the authors should comment on the specificity of the SB-3CT, particularly with regard to other MMPs or proteases that may/may not have been found to be upregulated in the microarray experiment.
4) Results, first paragraph: although it is in the methods, please state briefly the timing of the VPA exposure and the age/stage at which the experiments were performed. Within the methods, please give an approximate age in days after hatching for the non-tadpole experts.
5) The finding regarding the small number of MMP9-overexpressing cells is fascinating. Have the authors stained the tissue for MMP9 after VPA?
6) Do the authors have data on intrinsic cell properties (input resistance, capacitance, etc.)? If so, they should include those data either in the Supplemental Information or in the text. These factors could absolutely influence hyperconnectivity or measurements of the synaptic properties, so at a minimum the authors should discuss their findings in the context of those of James et al.
1) Page 15: 'basaly low' may be better worded as 'low at baseline'.
2) The color-coding is very useful and facilitates communicating the results. The yellow on Figure 5, however, is really too light. Consider another color.
Lang et al. investigate and document the role of myeloid-intrinsic circadian cycling in the host response to, and progression of, endotoxemia in the mouse LPS model. As a principal finding, Lang et al. report that disruption of the cell-intrinsic myeloid circadian clock by myeloid-specific knockout of either CLOCK or BMAL1 does not prevent circadian patterns of morbidity and mortality in endotoxemic mice. On the basis of these and other findings, from endotoxemia experiments in mice kept in the dark and the observation of circadian cytokine production in CLOCK KO animals, the authors conclude that the myeloid responses critical to endotoxemia are not governed by the local cell-intrinsic clock. Moreover, they conclude that the source of circadian timing and pacemaking critical for the host response to endotoxemia must lie outside the myeloid compartment. Finally, the authors also report a general (non-circadian) reduction in susceptibility of mice devoid of myeloid CLOCK or BMAL1, which they take as evidence that myeloid circadian cycling is important in the host response to endotoxemia, yet does not dictate the circadian pattern in mortality and cytokine responses.
The paper is well conceived, the experiments are elegant and well executed, the statistics are appropriate, and the ethics statements are in order. The conclusions of this study, as summarized above, are important and will be of much interest to readers from the circadian field and beyond, including sepsis and inflammation researchers. To me, there is one major flaw in the argumentative line of this story: the study relies on the assumption that the systemic cytokine response produced by myeloid cells is paramount and central to the course and intensity of endotoxemia. While this is assumed by many, rigorous proof of this connection and its causality is still lacking (most evidence is correlative in nature). In fact, there is a growing body of recent experimental evidence arguing against a prominent role of myeloid cells in the cytokine storm. Overall, I would like to raise the following points and suggestions.
• As mentioned, a weakness of this paper is the assumption that systemic cytokine levels produced by myeloid cells are center stage in endotoxemic shock (e.g. see line 164). However, recent evidence has shown that over 90% of most systemically released cytokines in sepsis are produced by non-myeloid cells, as demonstrated e.g. through the use of humanized mice, which allow (human) cytokines produced by blood cells to be discriminated from (murine) cytokines produced by parenchyma (see e.g. PMID: 31297113). (Interestingly, there is one major exception to that rule, namely TNFa.) In this light, it is not surprising that circadian cytokine patterns do not change in myeloid CLOCK/BMAL1 KO mice. Likewise, if myeloid-produced cytokines are not critical drivers, the same applies to the observation that the circadian mortality pattern is preserved in those mice. I recommend that the authors discuss this alternative explanation more critically in the paper. In fact, this line of argument would be consistent with the concept that the source of the circadian susceptibility/mortality in endotoxemia resides in a non-myeloid cell compartment, which is essentially the major finding of this manuscript.
• Intro (lines 51-54): the authors describe a single scenario as the mechanism of sepsis-associated organ failure. This appears too one-sided and absolute to me; many more hypotheses and models exist. It would be good to mention this and/or tone down the wording.
• Very much analogous to light/dark cycles, ambient temperature has been shown to have a strong impact on mortality from endotoxemia (e.g. PMID: 31016449). Did the authors keep their animals under thermostatically controlled ambient conditions? Please describe and discuss this in the text.
• Fig. 2C: the large difference in mortality in the control LysM-Cre line looks somewhat worrying to me. Could this be a consequence of well-known Cre off-target activities? Did the authors check for this, e.g. by sequencing myeloid cells or by using additional control mouse strains?
• Line 320: Bmal1flox/flox (Bmal-flox) or Clockflox/flox (Clock-flox) mice were bred with LysM-Cre mice to target Bmal1 or Clock. I suggest showing a prototypical genotyping result, perhaps as a supplemental figure.
• Line 365: the authors state that mice that did not show signs of disease were excluded. What proportion of mice (%) did not react to LPS? It would be useful to state this number in the methods section.
• It is not fully clear to me whether males, females, or both were used for the principal experiments; please specify. If females were used, please describe how the estrous cycle was taken into account.
The study investigated transient coupling between EEG and fMRI during the resting state in 15 elderly participants using the previously established Hidden Markov Model approach. Key findings include: 1) deviations of the hemodynamic response (HDR) in higher-order versus sensory brain networks, 2) power-law scaling of the duration and relative frequency of states, 3) associations between state duration and HDR alterations, and 4) cross-sectional associations between HDR alterations, white-matter signal anomalies, and memory performance.
The work is rigorously designed and very well presented. The findings are potentially of strong significance to several neuroscience communities.
My enthusiasm was only somewhat dampened by methodological issues related to the sample size for cross-sectional inference and by missed opportunities for a more specific analysis of the EEG.
1) A statistical power analysis was conducted prior to data collection, which is very laudable. Nevertheless, n=15 is a very small sample for cross-sectional inference and commonly leads to false positives despite large observed effect sizes and small p-values (it can easily take up to 200 samples to establish that a correlation is truly zero). The within-subject results, on the other hand, are far better posed from a statistical point of view and hence more strongly supported by the data.
This issue should be addressed non-defensively in a clearly identified section or paragraph of the discussion. The sample size should also be mentioned in the abstract.
The authors could put more emphasis on the participants as replication units for the observations. For a theoretical perspective, the work by Smith and Little may be of help here: https://link.springer.com/article/10.3758/s13423-018-1451-8. In terms of methods, more emphasis should be put on demonstrating representativeness, for example using prevalence statistics (see e.g. Donnhäuser, Florin & Baillet, https://doi.org/10.1371/journal.pcbi.1005990).
The supplements should display the most important findings for each subject, to reveal how representative the group averages are.
For the state-duration analysis (boxplots), linear mixed-effects models (varying-slope models) may be an interesting option to inject additional uncertainty into the estimates and allow partial pooling through shrinkage of subject-level effects.
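The varying-slope suggestion above could be sketched roughly as follows; this is a minimal, hypothetical illustration on simulated data (the column names, effect sizes, and the choice of statsmodels are my assumptions, not the authors' pipeline):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate per-visit state durations for 15 subjects with subject-level
# variation in both baseline duration and the effect of "state".
rng = np.random.default_rng(0)
rows = []
for subj in range(15):
    subj_offset = rng.normal(0, 0.2)           # random intercept
    subj_slope = rng.normal(0, 0.03)           # random slope
    for state in range(6):
        for _ in range(20):                    # repeated state visits
            dur = 1.0 + (0.1 + subj_slope) * state + subj_offset \
                  + rng.normal(0, 0.3)
            rows.append({"subject": subj, "state": state, "duration": dur})
df = pd.DataFrame(rows)

# Varying-intercept, varying-slope model grouped by subject:
# subject-level slopes are shrunk toward the population estimate.
model = smf.mixedlm("duration ~ state", df, groups=df["subject"],
                    re_formula="~state")
fit = model.fit()
print(fit.params["state"])  # population-level (partially pooled) slope
```

Random effects per subject are available via `fit.random_effects`, which makes the shrinkage of subject-level estimates directly inspectable.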
Show more raw signals and topographies to build trust in the input data. It could be worthwhile to show topographic displays for the main reported states at their characteristic frequencies. See also the next concern.
2) The authors seem to have missed an important opportunity to pinpoint the characteristic drivers in terms of EEG frequency bands. The current analysis is based on broadband signals between 4 and 30 Hz, which seems atypical and reduces the specificity of the analysis. Analyzing the spectral drivers of the different states would not only enrich the results in terms of EEG but also support a more nuanced interpretation. Are the VisN and DAN states potentially related to changes in alpha power, perhaps induced by spontaneous opening and closing of the eyes? What is the most characteristic spectral signature of the DMN state? Etc.
Display the power spectrum indexed by state, ideally for each subject. This would allow inspection of the modulation of the power spectra by state and reveal the characteristic spectral signatures without re-analysis.
Repeat the essential analyses after bandpass filtering in the alpha or beta range. For example, if the main results look very similar after filtering to 8-12 Hz, one can conclude that most observations are related to alpha-band power.
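The filtering step of this control analysis might look like the following sketch; the sampling rate and signal are illustrative stand-ins, not the study's data:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Toy EEG: an alpha (10 Hz) plus a beta (25 Hz) component.
eeg = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 25 * t)

# Zero-phase 8-12 Hz bandpass (second-order sections for stability).
sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
alpha_only = sosfiltfilt(sos, eeg)

# The 25 Hz component is strongly attenuated; what remains is essentially
# the 10 Hz component (std of a unit sine is ~0.71).
print(np.std(alpha_only))
```

Re-running the HMM pipeline on `alpha_only` (versus the 4-30 Hz broadband input) would directly test whether the reported states are alpha-driven.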
While artifacts have been removed using ICA and the network states do not look like source-localized EOG artifacts, some of the spectral changes, e.g. in DAN/VisN, might be attributable to transient visual deprivation. This could be investigated with a control analysis regressing the EOG-channel amplitudes against the HMM states. These results could also enhance the discussion regarding activation/deactivation.
In this paper, Fiscella and colleagues report the results of behavioral experiments on auditory perception in healthy participants. The paper is clearly written, and the stimulus manipulations are well thought out and executed.
In the first experiment, audiovisual speech perception was examined in 15 participants. Participants identified keywords in English sentences while viewing faces that were either dynamic or still, and either upright or rotated. To make the task more difficult, two irrelevant masking streams (one audiobook with a male talker, one audiobook with a female talker) were added to the auditory speech at different signal-to-noise ratios for a total of three simultaneous speech streams.
The results of the first experiment were that both the visual face and the auditory voice influenced accuracy. Seeing the moving face of the talker resulted in higher accuracy than a static face, while seeing an upright moving face was better than a 90-degree rotated face which was better than an inverted moving face. In the auditory domain, performance was better when the masking streams were less loud.
In the second experiment, 23 participants identified pitch modulations in auditory speech. The task of the participants was considerably more complicated than in the first experiment. First, participants had to learn an association between visual faces and auditory voices. Then, on each trial, they were presented with a static face which cued them which auditory voice to attend to. Then, both target and distracter voices were presented, and participants searched for pitch modulations only in the target voice. At the same time, audiobook masking streams were presented, for a total of 4 simultaneous speech streams. In addition, participants were assigned a visual task, consisting of searching for a pink dot on the mouth of the visually-presented face. The visual face matched either the target voice or the distracter voice, and the face was either upright or inverted.
The result of the second experiment was that participants were somewhat more accurate (by 7%) at identifying pitch modulations when the visual face matched the target voice than when it did not.
As I understand it, the main claim of the manuscript is as follows: For sentence comprehension in Experiment 1, both face matching (measured as the contrast of dynamic face vs. static face) and face rotation were influential. For pitch modulation in Experiment 2, only face matching (measured as the contrast of target-stream vs. distracter-stream face) was influential. This claim is summarized in the abstract as "Although we replicated previous findings that temporal coherence induces binding, there was no evidence for a role of linguistic cues in binding. Our results suggest that temporal cues improve speech processing through binding and linguistic cues benefit listeners through late integration."
The claim for Experiment 2 is that face rotation was not influential. However, the authors provide no evidence to support this assertion, other than visual inspection (page 15, line 235): "However, there was no difference in the benefit due to the target face between the upright and inverted condition, and therefore no benefit of the upright face (Figure 2C)."
In fact, the data provided suggests that the opposite may be true, as the improvement for upright faces (t=6.6) was larger than the improvement for inverted faces (t=3.9). An appropriate analysis to test this assertion would be to construct a linear mixed-effects model with fixed factors of face inversion and face matching, and then examine the interaction between these factors.
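The suggested inversion-by-matching model could look something like the sketch below; the data are simulated (a 7% matching benefit, no true interaction), and all names and the use of statsmodels are placeholders for whatever the authors' trial structure requires:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 23 participants x 2 (inversion) x 2 (matching) x 30 trials;
# accuracy is treated as continuous here for simplicity.
rng = np.random.default_rng(1)
rows = []
for subj in range(23):
    base = 0.80 + rng.normal(0, 0.03)          # subject baseline
    for inverted in (0, 1):
        for matched in (0, 1):
            for _ in range(30):
                acc = base + 0.07 * matched + rng.normal(0, 0.10)
                rows.append({"subject": subj, "inverted": inverted,
                             "matched": matched, "accuracy": acc})
df = pd.DataFrame(rows)

# Fixed effects for inversion, matching, and their interaction;
# random intercept per participant.
model = smf.mixedlm("accuracy ~ inverted * matched", df,
                    groups=df["subject"])
fit = model.fit()
# The term of interest for the claim is the interaction coefficient:
print(fit.params["inverted:matched"], fit.pvalues["inverted:matched"])
```

A non-significant `inverted:matched` coefficient would be the relevant (though, as discussed below, not conclusive) evidence that inversion did not modulate the matching benefit.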
However, even if this analysis were conducted and the interaction were non-significant, that would not necessarily be strong support for the claim. As the saying goes, "absence of evidence is not evidence of absence". The problem here is that the effect is rather small (7% for face matching). Finding significant effects of face inversion within the range of the 7% face-matching effect is difficult, but would likely be possible given a larger sample size, assuming that the effect sizes found with the current sample (t = 6.6 vs. t = 3.9) hold.
In contrast, in experiment 1, the range is very large (improvement from ~40% for the static face to ~90% for dynamic face) making it much easier to find a significant effect of inversion.
One null model would be to assume that the proportional difference in accuracy due to inversion is similar for speech perception and pitch modulation (within the face-matching effect) and to predict the difference accordingly. In Experiment 1, inverting the face at 0 dB reduced accuracy from ~90% to ~80%, a ~10% relative decrease. Applying this to the 7% range found in Experiment 2 predicts an inverted-face benefit of ~6.3% vs. 7%. The authors could perform a power calculation to determine the sample size necessary to detect an effect of this magnitude.
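As a rough illustration of such a power calculation (the assumed trial-to-trial SD below is purely hypothetical and should be replaced by the observed variability):

```python
from statsmodels.stats.power import TTestPower

# Effect of interest: 7% vs. ~6.3% benefit = 0.7 percentage points.
assumed_sd = 0.10                 # ASSUMED SD of the per-subject benefit
effect = 0.007 / assumed_sd       # Cohen's d = 0.07

# Sample size for a paired/one-sample t-test at 80% power, alpha = .05.
n = TTestPower().solve_power(effect_size=effect, nobs=None,
                             alpha=0.05, power=0.8,
                             alternative="two-sided")
print(round(n))  # participants needed under these assumptions
```

With a d of this order the required n runs into the thousands, which makes concrete how weakly the current sample constrains the inversion null.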
When reporting the results of linear mixed-effects models or other regression models, it is important to report the magnitude of each effect, i.e. the actual values of the model coefficients. This allows readers to understand the relative amplitude of different factors on a common scale. For Experiment 1, the only values provided are statistical significance levels, which are not good measures of effect size.
The duration of the pitch modulations in Experiment 2 is not clear. It would help the reader to provide a supplemental figure showing the speech envelopes of the 4 simultaneous speech streams and the location and duration of the pitch modulations in the target and distracter streams.
If the pitch modulations were brief, it should be possible to calculate reaction time as an additional dependent measure. If the pitch modulations in the target and distracter streams occurred at different times, this would also allow more accurate categorization of the responses as correct or incorrect by creation of a response window. For instance, if a pitch modulation occurred in both streams and the participant responded "yes", then the timing of the pitch modulation and the response could dissociate a false-positive to the distractor stream pitch modulation from the target stream pitch modulation.
It is not clear from the Methods, but it seems that the results shown are only for trials in which a single distracter was presented in the target stream. A standard analysis would be to use signal detection theory to examine response patterns across all of the different conditions.
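The core of the signal-detection analysis suggested above is straightforward; the counts in this sketch are invented for illustration:

```python
from scipy.stats import norm

# Hypothetical response counts across all conditions.
hits, misses = 45, 15            # "yes" / "no" on modulation-present trials
fas, crs = 12, 48                # "yes" / "no" on modulation-absent trials

hit_rate = hits / (hits + misses)        # 0.75
fa_rate = fas / (fas + crs)              # 0.20

# Sensitivity and response criterion (Gaussian equal-variance SDT).
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(d_prime, criterion)
```

Computing d' and criterion per condition would separate genuine sensitivity differences from shifts in response bias, which raw accuracy conflates.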
In selective attention experiments, the stimulus is usually identical between conditions while only the task instructions vary. The stimulus and task are both different between experiments 1 and 2, making it difficult to claim that "linguistic" vs. "temporal" is the only difference between the experiments.
At a more conceptual level, it seems problematic to assume that inverting the face dissociates linguistic from temporal processing. For instance, a computer face-recognition algorithm whose only job was to measure the timing of mouth movements (temporal processing) might operate by first identifying the face via eyes, nose, and mouth in vertical order. Inverting the face would disrupt the algorithm, and hence "temporal processing", invalidating the assumption that face inversion is a pure manipulation of "linguistic processing".
This paper reports on a very interesting and potentially highly important finding - that so-called "sleep learning" does not improve relearning of the same material during wake, but instead paradoxically hinders it. The effect of stimulus presentation during sleep on re-learning was modulated by sleep physiology, namely the number of slow wave peaks that coincide with presentation of the second word in a word pair over repeated presentations. These findings are of theoretical significance for the field of sleep and memory consolidation, as well as of practical importance.
Concerns and recommendations:
1) The authors' results suggest that "sleep learning" leads to an impairment in subsequent wake learning. The authors suggest that this result is due to stimulus-driven interference in synaptic downscaling in hippocampal and language-related networks engaged in the learning of semantic associations, which then leads to saturation of the involved neurons and impairment of subsequent learning. Although at first the findings seem counter-intuitive, I find this explanation to be extremely interesting. Given this explanation, it would be interesting to look at the relationship between implicit learning (as measured on the size judgment task) and subsequent explicit wake-relearning. If this proposed mechanism is correct, then at the trial level one would expect that trials with better evidence of implicit learning (i.e. those that were judged "correctly" on the size judgment task) should show poorer explicit relearning and recall. This analysis would make an interesting addition to the paper, and could possibly strengthen the authors' interpretation.
2) In some cases, a null result is reported and a claim is based on the null result (for example, the finding that wake-learning of new semantic associations in the incongruent condition was not diminished). Where relevant, it would be a good idea to report Bayes factors to quantify evidence for the null.
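One accessible way to compute such a Bayes factor is the BIC approximation (Wagenmakers, 2007), BF01 ≈ exp((BIC_alt − BIC_null)/2); the data below are simulated stand-ins for the relevant difference scores:

```python
import numpy as np

# Simulated per-subject difference scores under a true null effect.
rng = np.random.default_rng(2)
scores = rng.normal(0.0, 1.0, size=40)

n = len(scores)
# Null model: mean fixed at 0.  Alternative: mean estimated freely.
rss_null = np.sum(scores ** 2)
rss_alt = np.sum((scores - scores.mean()) ** 2)
bic_null = n * np.log(rss_null / n)
bic_alt = n * np.log(rss_alt / n) + np.log(n)   # one extra parameter

bf01 = np.exp((bic_alt - bic_null) / 2)
print(bf01)  # BF01 > 1 favours the null (no effect)
```

Dedicated packages (e.g. JZS Bayes factors) give better-calibrated values, but even this approximation lets readers see whether a null result reflects genuine evidence for the null or merely insensitivity of the data.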
3) The authors report that they "further identified and excluded from all data analyses the two most consistently small-rated and the two most consistently large-rated foreign words in each word list based on participants' ratings of these words in the baseline condition in the implicit memory test." Although I realize that the same approach was applied in their original 2019 paper, this decision seems a bit arbitrary, particularly in the context of the current study, where the focus is on explicit relearning and recall rather than implicit size judgments. As a reader, I wonder whether the results hold when all words are included in the analysis.
4) In the main analysis examining interactions between test run, condition (congruent/incongruent) and number of peak-associated stimulations during sleep (0-1 versus 3-4), baseline trials (i.e. new words that were not presented during sleep) are excluded. As such, the interactions shown in the main results figure (Figure D) are a bit misleading and confusing, as they appear to reflect comparisons relative to the baseline trials (rather than a direct comparison between congruent and incongruent trials, as was done in the analysis). It also looks as if the data for the "new" condition are simply replicated four times over the four panels of the figure. I recommend reconstructing the figure so that a direct visual comparison can be made between the numbers of peaks within the congruent and incongruent trials. This change would allow the figure to reflect more accurately the statistical analyses and results reported in the manuscript.
5) In addition to the main analysis, the authors report that they also separately compared the conscious recall of congruent and incongruent pairs that were never or once vs. repeatedly associated with slow-wave peaks against conscious recall in the baseline condition. Given that four separate analyses were carried out, some correction for multiple comparisons should be applied. It is unclear whether this was done, as it does not seem to be reported.
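For four comparisons, a standard Holm correction would suffice; the p-values in this sketch are placeholders, not the manuscript's values:

```python
from statsmodels.stats.multitest import multipletests

# Illustrative uncorrected p-values for the four separate comparisons.
pvals = [0.012, 0.034, 0.21, 0.048]

# Holm step-down correction controls the family-wise error rate and is
# uniformly more powerful than plain Bonferroni.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(list(p_adj), list(reject))
```

Reporting both the raw and the Holm-adjusted p-values would make it transparent which of the four effects survive correction.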
This paper presents a clever application of the well-known Simultaneous Localization and Mapping (SLAM) model (plus replay) to the neuroscience of navigation. The authors capture aspects of the EC-HPC relationship that are often not captured within a single paper or model. Here, online prediction error between the EC and HPC systems in the model triggers offline probabilistic inference, i.e. the fast propagation of traveling waves enabling neural message passing between place and grid cells representing non-local states. The authors thus model how such replay - the fast offline propagation of traveling waves passing messages between EC and HPC - leads to inference, and they explain the function of coordinated EC-HPC replay. I enjoyed reading the paper and the supplementary material.
First, I'd like to say that I am impressed by this paper. Second, I see my job as a reviewer merely to give suggestions to help improve the accessibility and clarity of the present manuscript. This could help the reader appreciate a beautiful application of SLAM to HPC-EC interactions as well as the novelty of the present approach in bringing in a number of HPC-EC properties together in one model.
1) The introduction is rather brief and lacks the citations standard for this field. This is understandable, as it may be due to earlier versions having been prepared for NeurIPS. It would be helpful if the authors added a bit more background to the introduction so readers can orient themselves and localize this paper in the larger map of the field. It would be especially helpful to do this not only in the intro but throughout the text, even where the authors have already cited papers elsewhere, since they are elegantly bringing together a number of different neuroscientific concepts and findings, such as replay, structures, offline traveling waves, propagation speed, shifter cells, etc. A bigger-picture introduction will help prepare the reader for all the relevant pieces that are gradually unfolded later.
It would be especially helpful to offer an overall summary of the main aspects of the HPC-EC literature on navigation that will appear later. This will frontload the larger, and in my opinion clever, narrative of the paper, in which replay, memory, and probabilistic models meet to capture aspects of the literature not previously addressed.
2) SLAM (simultaneous localization and mapping) models are used broadly in mobile phones, robotics, automotive applications, and drones. The authors do not introduce SLAM to the reader, and SLAM (even in broad strokes) may not be familiar to potential readers. Even for neuroscientists who are familiar with SLAM, it may not be clear from the paper which aspects are directly similar to other existing models and which are novel in capturing HPC/EC findings. I would strongly encourage an entire section dedicated to SLAM, perhaps even a simple figure or diagram of the broader algorithm. It would be especially helpful if the authors could clarify how their structure-replay approach extends existing offline SLAM approaches. This would make the novel contributions of the present paper shine for both the biology and ML audiences.
Providing this big picture will make it easier for the reader to connect the known aspects of SLAM with the clever account of traveling waves and other HPC-EC interactions, which are largely overlooked in contemporary models of HPC-EC representations of space and structure. It is perhaps also worth mentioning RatSLAM, another bio-inspired version of SLAM, and the place cell/hippocampus inspiration behind SLAM.
D. Ball, S. Heath, J. Wiles, G. Wyeth, P. Corke, M. Milford, "OpenRatSLAM: an open source brain-based SLAM system", Autonomous Robots, 34(3), 149-176, 2013.
3) At first glance, there may appear to be many moving parts in the paper. For the average neuroscience reader this may be puzzling, or require going back and forth, with some working-memory overload, to put the pieces together. My suggestion is to include a table of biological/neural functions and the equivalent components of the present model. Such a guide will allow the reader to see the big picture - and the value of the authors' hard work - at a glance, and then examine each section more closely with the bigger picture in mind. I believe this will only increase the clarity and accessibility of the manuscript.
4) The authors could perhaps spend a little more time comparing previous modeling attempts at capturing HPC-EC phenomena, noting the caveats of previous models as well as the advantages and caveats of their own. This could be in the discussion, or earlier, but it would help localize the reader in this space a bit better.
5) Perhaps the authors could briefly clarify where Euclidean vs. non-Euclidean representations would be expected of the model, and whether it can accommodate >2D maps, e.g. in bats, or non-spatial HPC-EC interactions.
6) The discussion could also be improved by synthesizing the old and the new: the significant contributions of this paper and its modifications to SLAM, as well as a big-picture summary of the various phenomena that come together in HPC-EC interactions, e.g. via traveling waves.
The authors present a survey of the bacterial community in the Cam River (Cambridgeshire, UK) using one of the latest DNA sequencing technologies (Oxford Nanopore) with a targeted sequencing approach. The work consisted of a test of the sequencing and analysis method, benchmarking several programs on mock data to decide which was best suited for their analysis.
After selecting the best tool, they provide family-level taxonomic profiling of the microbial community along the Cam River across a 4-month time window. In addition to the general and local snapshots of the bacterial composition, they correlate some physicochemical parameters with the abundance shifts of some taxa.
Finally, they report the presence of 55 potentially pathogenic bacterial genera that were further studied using a phylogenetic analysis.
Page 6. There is a "data not shown" comment in the text:
"Benchmarking of the classification tools on one aquatic sample further confirmed Minimap2's reliable performance in a complex bacterial community, although other tools such as SPINGO (Allard, Ryan, Jeffery, & Claesson, 2015), MAPseq (Matias Rodrigues, Schmidt, Tackmann, & von Mering, 2017), or IDTAXA (Murali et al., 2018) also produced highly concordant results despite variations in speed and memory usage (data not shown)."
Nowadays, there is no good reason for not showing data. If the speed and memory usage were not recorded, it is advisable to rerun the analysis and quantify these variables rather than mentioning them without reporting them. Otherwise, what are the reasons for not showing the results?
Figure 2 is too dense and crowded. As a result, the individual panels are too small and the message they should deliver is lost. That also makes the legend very long. I suggest moving some of the panels, perhaps b), c), and d), to separate supplementary figures.
Figure 3 has the same problem. I think there is too much information, some of which could be moved to the supplementary material.
In addition to Figure 4, it would be important to test whether the differences over time in the family-level contributions to the PCA components are statistically significant. Panel B depicts the most evident difference in variance, but what about other taxa that may not be very abundant yet differ over time? You could use the fitFeatureModel function from the metagenomeSeq R package with an adjusted-P threshold of 0.05 to validate abundance differences in addition to your analysis.
Page 12-13. In the paragraph:
"Using multiple sequence alignments between nanopore reads and pathogenic species references, we further resolved the phylogenies of three common potentially pathogenic genera occurring in our river samples, Legionella, Salmonella and Pseudomonas (Figure 7a-c; Material and Methods). While Legionella and Salmonella diversities presented negligible levels of known harmful species, a cluster of reads in downstream sections indicated a low abundance of the opportunistic, environmental pathogen Pseudomonas aeruginosa (Figure 7c). We also found significant variations in relative abundances of the Leptospira genus, which was recently described to be enriched in wastewater effluents in Germany (Numberger et al., 2019) (Figure 7d)."
Here it is important to mention the relative abundances in the samples. Please discuss that the presence of pathogen DNA in a sample has to be confirmed by other microbiological methods to establish whether viable organisms are present. Finding pathogen DNA is definitely a warning sign, but since the characterization is only at the genus level, further investigation using whole-metagenome shotgun sequencing or isolation would be important.
This phrase is used in the abstract, introduction, and discussion, although not written in exactly the same way each time:
"Using an inexpensive, easily adaptable and scalable framework based on nanopore sequencing..."
I would not use the term "inexpensive", since it is relative. It should also be discussed that, although the platform is technically convenient in some respects compared with other sequencers, there are still protocol steps that require reagents and equipment similar or identical to those needed for other sequencing platforms. Common bottlenecks such as DNA extraction methods, sample preservation, and the presence of inhibitory compounds should probably also be mentioned and stressed.
Page 15: "This might help to establish this family as an indicator for bacterial community shifts along with water temperature fluctuations."
Temperature might not be the main factor driving the shift; other factors that were not measured could also contribute. Several parameters related to water quality were not measured (COD, organic matter, PO4, etc.).
"A number of experimental intricacies should be addressed towards future nanopore freshwater sequencing studies with our approach, mostly by scrutinising water DNA extraction yields, PCR biases and molar imbalances in barcode multiplexing (Figure 3a; Supplementary Figure 5)."
Here you could elaborate further on challenges such as those mentioned in my previous comment.