  1. Nov 2020
    1. Reviewer #2:

Lang et al. investigate and document the role of myeloid-endogenous circadian cycling in the host response to and progression of endotoxemia in the mouse LPS model. As a principal finding, Lang et al. report that disruption of the cell-intrinsic myeloid circadian clock by myeloid-specific knockdown of either CLOCK or BMAL1 does not prevent circadian patterns of morbidity and mortality in endotoxemic mice. On the basis of these and other findings, from endotoxemia experiments in mice kept in the dark and from the observation of circadian cytokine production in CLOCK KO animals, the authors conclude that myeloid responses critical to endotoxemia are not governed by the local cell-intrinsic clock. Moreover, they conclude that the source of the circadian timing and pacemaking that is critical for the host response to endotoxemia must lie outside the myeloid compartment. Finally, the authors also report a general (non-circadian) reduced susceptibility of mice devoid of myeloid CLOCK or BMAL1, which they take as proof that myeloid circadian cycling is important in the host response to endotoxemia, yet does not dictate the circadian pattern in mortality and cytokine responses.

The paper is well conceived, the experiments are very elegant and well carried out, the statistics are appropriate, and the ethics statements are in order. The conclusions of this study, as summarized above, are important and will be of much interest to readers from the circadian field and beyond, including sepsis and inflammation researchers. To me, there is one major flaw in the line of argument of this story, as the study relies on the assumption that the systemic cytokine response provided by myeloid cells is paramount and central to the course and intensity of endotoxemia. While this is assumed by many, a rigorous proof of this connection and its causality is still lacking (most evidence is correlative in nature). As a matter of fact, there is an increasing body of more recent experimental evidence that argues against a prominent role of myeloid cells in the cytokine storm. Overall, I would like to raise the following points and suggestions.

      Major Points:

• As mentioned, a weakness of this paper is that it assumes systemic cytokine levels produced by myeloid cells take center stage in endotoxemic shock (e.g. see line 164). However, recent evidence has shown that over 90% of most systemically released cytokines in sepsis are produced by non-myeloid cells, as shown e.g. by the use of humanized mice, which allow one to discriminate (human) cytokines produced by blood cells from (murine) cytokines produced by the parenchyma (see e.g. PMID: 31297113). (Interestingly, there is one major exception to that rule, and that is TNFa.) Considering this, it is not surprising that circadian cytokine levels do not change in myeloid CLOCK/BMAL1 KO mice. Also, assuming that myeloid-produced cytokines are not critical drivers, the same applies to the observation that the circadian mortality pattern is preserved in those mice. I recommend that the authors more critically discuss this alternative explanation in the paper. In fact, this line of argument would be in line with the concept that the source of the circadian susceptibility/mortality in endotoxemia resides in a non-myeloid cell compartment, which is essentially the major finding of this manuscript.

• Intro (lines 51-54): the authors describe one scenario as the mechanism of sepsis-associated organ failure. This appears too one-sided and absolute to me; many more hypotheses and models exist. It would be good to mention that and/or tone down the wording.

• Analogous to light/dark cycles, ambient temperature has been shown to have a strong impact on mortality from endotoxemia (e.g. PMID: 31016449). Did the authors keep their animals under thermostatted ambient conditions? Please describe and discuss this in the text.

• Fig. 2C: The large difference in mortality in the control LysM-Cre line looks somewhat worrying to me. Could this be a consequence of well-known Cre off-target activities? Did the authors check this, e.g. by sequencing myeloid cells or by using control mouse strains?

• Line 320: Bmal1^flox/flox (Bmal-flox) [48] or Clock^flox/flox (Clock-flox) [38] were bred with LysM-Cre to target Bmal1. I suggest showing a prototypical genotyping result, perhaps as a supplemental figure.

• Line 365: the authors state that mice that did not show signs of disease were excluded. What proportion of mice (%) did not react to LPS? It would be useful to state this number in the methods section.

• It is not fully clear to me whether males, females, or both were used for the principal experiments; please specify. If females were used, please describe how the estrous cycle was taken into account.

    1. Reviewer #2:

      General assessment:

The study investigated transient coupling between EEG and fMRI during resting state in 15 elderly participants using the previously established Hidden Markov Model approach. Key findings include: 1) deviations of the hemodynamic response function (HDR) in higher-order versus sensory brain networks, 2) power-law scaling of the duration and relative frequency of states, 3) associations between state duration and HDR alterations, and 4) cross-sectional associations between HDR alterations, white matter signal anomalies and memory performance.

      The work is rigorously designed and very well presented. The findings are potentially of strong significance to several neuroscience communities.

      Major concerns:

My enthusiasm was only somewhat dampened by methodological issues related to the sample size for cross-sectional inference and by missed opportunities for a more specific analysis of the EEG.

1) A statistical power analysis was conducted prior to data collection, which is very laudable. Nevertheless, n=15 is a very small sample for cross-sectional inference and commonly leads to false positives despite large observed effect sizes and small p-values (it can easily take up to 200 samples for correlation estimates to stabilize). On the other hand, the within-subject results are far better posed statistically and hence more strongly supported by the data.

      Recommendations:

      • The issue should be non-defensively addressed in a well-identified section or paragraph inside the discussion. The sample size should be mentioned in the abstract too.

• The authors could put more emphasis on the participants as replication units for observations. For a theoretical perspective, the work by Smith and Little may be of help here: https://link.springer.com/article/10.3758/s13423-018-1451-8. In terms of methods, more emphasis should be put on demonstrating representativeness, for example using prevalence statistics (see e.g. Donhauser, Florin & Baillet, https://doi.org/10.1371/journal.pcbi.1005990).

• The supplementary material should display the most important findings for each subject, to reveal how representative the group averages are.

• For the state duration analysis (boxplots), linear mixed-effects models (varying-slope models) may be an interesting option to inject additional uncertainty into the estimates and allow for partial pooling through shrinkage of subject-level effects; see the sketch after this list.

• Show more raw signals / topographies to build trust in the input data. It could be worthwhile to show topographic displays for the main states, reported in their characteristic frequencies. See also the next concern.
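A minimal sketch of the varying-slope model suggested above, using statsmodels on a synthetic long-format table; the column names (subject, state, duration) and all numbers are illustrative placeholders, not quantities from the study:

```python
# Sketch of a varying-slope (random-slope) mixed model for HMM state durations.
# The data frame below is a synthetic placeholder with one row per state visit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_visits = 15, 200
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),
    "state": rng.integers(0, 3, n_subj * n_visits),   # 3 hypothetical states
})
df["duration"] = rng.gamma(2.0, 50.0, len(df)) + 10.0 * df["state"]  # fake, in ms

# Fixed effect of state with subject-level random intercepts and slopes,
# giving partial pooling (shrinkage) of subject-level effects.
model = smf.mixedlm("duration ~ C(state)", df, groups=df["subject"],
                    re_formula="~C(state)")
print(model.fit(method="lbfgs").summary())
```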

2) The authors seem to have missed an important opportunity to pinpoint the characteristic drivers in terms of EEG frequency bands. The current analysis is based on broadband signals between 4 and 30 Hz, which seems atypical and reduces the specificity of the analysis. Analyzing the spectral drivers of the different states would not only enrich the results in terms of EEG but also provide a more nuanced interpretation. Are the VisN and DAN states potentially related to changes in alpha power, potentially induced by spontaneous opening and closing of the eyes? What is the most characteristic spectral signature of the DMN state? Etc.

      Recommendations:

      • Display the power spectrum indexed by state, ideally for each subject. This would allow inspecting modulation of the power spectra by the state and reveal the characteristic spectral signature without re-analysis.

• Repeat essential analyses after bandpass filtering in the alpha or beta range. For example, if the main results look very similar after filtering at 8-12 Hz, one can conclude that most observations are related to alpha-band power; see the sketch after this list.

• While artifacts have been removed using ICA and the network states do not look like source-localized EOG artifacts, some of the spectral changes, e.g. in DAN/VisN, might be attributed to transient visual deprivation. This could be investigated by performing a control analysis regressing the EOG channel amplitudes against the HMM states. These results could also enhance the discussion regarding activation/deactivation.
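A minimal sketch of the suggested alpha-band control re-analysis, using a zero-phase Butterworth band-pass on placeholder data; the sampling rate, array shape and filter order are assumptions, not parameters of the study:

```python
# Zero-phase band-pass filter of the EEG in the alpha band (8-12 Hz),
# assuming eeg is an (n_channels, n_samples) array sampled at fs Hz.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0                                   # assumed sampling rate, Hz
eeg = np.random.randn(64, 60 * int(fs))      # placeholder data, 64 channels x 60 s

sos = butter(4, [8.0, 12.0], btype="bandpass", fs=fs, output="sos")
eeg_alpha = sosfiltfilt(sos, eeg, axis=-1)

# The HMM / state analysis would then be repeated on eeg_alpha; if the main
# results persist, they are likely driven by alpha-band power.
```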

    1. Reviewer #2:

      In this paper, Fiscella and colleagues report the results of behavioral experiments on auditory perception in healthy participants. The paper is clearly written, and the stimulus manipulations are well thought out and executed.

      In the first experiment, audiovisual speech perception was examined in 15 participants. Participants identified keywords in English sentences while viewing faces that were either dynamic or still, and either upright or rotated. To make the task more difficult, two irrelevant masking streams (one audiobook with a male talker, one audiobook with a female talker) were added to the auditory speech at different signal-to-noise ratios for a total of three simultaneous speech streams.

The results of the first experiment were that both the visual face and the auditory voice influenced accuracy. Seeing the moving face of the talker resulted in higher accuracy than a static face, while an upright moving face was better than a 90-degree rotated face, which in turn was better than an inverted moving face. In the auditory domain, performance was better when the masking streams were quieter.

      In the second experiment, 23 participants identified pitch modulations in auditory speech. The task of the participants was considerably more complicated than in the first experiment. First, participants had to learn an association between visual faces and auditory voices. Then, on each trial, they were presented with a static face which cued them which auditory voice to attend to. Then, both target and distracter voices were presented, and participants searched for pitch modulations only in the target voice. At the same time, audiobook masking streams were presented, for a total of 4 simultaneous speech streams. In addition, participants were assigned a visual task, consisting of searching for a pink dot on the mouth of the visually-presented face. The visual face matched either the target voice or the distracter voice, and the face was either upright or inverted.

The results of the second experiment were that participants were somewhat more accurate (by 7%) at identifying pitch modulations when the visual face matched the target voice than when it did not.

      As I understand it, the main claim of the manuscript is as follows: For sentence comprehension in Experiment 1, both face matching (measured as the contrast of dynamic face vs. static face) and face rotation were influential. For pitch modulation in Experiment 2, only face matching (measured as the contrast of target-stream vs. distracter-stream face) was influential. This claim is summarized in the abstract as "Although we replicated previous findings that temporal coherence induces binding, there was no evidence for a role of linguistic cues in binding. Our results suggest that temporal cues improve speech processing through binding and linguistic cues benefit listeners through late integration."

      The claim for Experiment 2 is that face rotation was not influential. However, the authors provide no evidence to support this assertion, other than visual inspection (page 15, line 235): "However, there was no difference in the benefit due to the target face between the upright and inverted condition, and therefore no benefit of the upright face (Figure 2C)."

      In fact, the data provided suggests that the opposite may be true, as the improvement for upright faces (t=6.6) was larger than the improvement for inverted faces (t=3.9). An appropriate analysis to test this assertion would be to construct a linear mixed-effects model with fixed factors of face inversion and face matching, and then examine the interaction between these factors.
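A sketch of such an analysis is given below, using a synthetic trial table with illustrative column names; a linear probability model is used for simplicity, although a logistic mixed model would be more principled for binary accuracy data:

```python
# Mixed-effects model testing the face inversion x face matching interaction.
# The trial table below is a synthetic placeholder (all names illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 23, 80
trials = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "inversion": rng.integers(0, 2, n_subj * n_trials),  # 0 = upright, 1 = inverted
    "matching": rng.integers(0, 2, n_subj * n_trials),   # 0 = distracter, 1 = target
})
p = 0.70 + 0.07 * trials["matching"] - 0.02 * trials["inversion"] * trials["matching"]
trials["correct"] = rng.binomial(1, p)

model = smf.mixedlm("correct ~ inversion * matching", trials,
                    groups=trials["subject"])
result = model.fit()
print(result.summary())   # the inversion:matching term tests whether the
                          # matching benefit differs for upright vs inverted faces
```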

      However, even if this analysis was conducted and the interaction was non-significant, that would not necessarily be strong support for the claim. As the canard has it, "absence of evidence is not evidence of absence". The problem here is that the effect is rather small (7% for face matching). Trying to find significant differences of face inversion within the range of the 7% effect of face matching is difficult but would likely be possible given a larger sample size, assuming that the effect size found with the current sample size holds (t = 6.6 vs. t = 3.9).

      In contrast, in experiment 1, the range is very large (improvement from ~40% for the static face to ~90% for dynamic face) making it much easier to find a significant effect of inversion.

      One null model would be to assume that the proportional difference in accuracy due to inversion is similar for speech perception and pitch modulation (within the face matching effect) and predict the difference. In experiment 1, inverting the face at 0 dB reduced accuracy from ~90% to ~80%, a ~10% decrease. Applying this to the 7% range found in Experiment 2 would predict that inverted accuracy would be ~6.3% vs. 7%. The authors could perform a power calculation to determine the necessary sample size to detect an effect of this magnitude.
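A sketch of such a power calculation is given below, assuming a paired within-subject design; the standardized effect size (built from the predicted ~0.7-percentage-point inversion effect against an assumed SD of paired differences) is purely illustrative:

```python
# Required sample size for a paired comparison of the face-inversion effect,
# assuming a small within-subject effect. Cohen's d here is an assumption:
# a 0.7-point difference against, say, a 2-point SD of paired differences.
from statsmodels.stats.power import TTestPower

d_assumed = 0.7 / 2.0   # hypothetical standardized effect size (d ~ 0.35)
n_required = TTestPower().solve_power(effect_size=d_assumed,
                                      alpha=0.05, power=0.80,
                                      alternative="two-sided")
print(f"participants needed: {n_required:.0f}")
```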

      Other Comments

When reporting the effects of linear mixed-effects models or other regression models, it is important to report the magnitude of the effect, measured as the actual values of the model coefficients. This allows readers to understand the relative amplitude of different factors on a common scale. For Experiment 1, the only values provided are statistical significance levels, which are not good measures of effect size.

The duration of the pitch modulations in Experiment 2 is not clear. It would help the reader to provide a supplemental figure showing the speech envelope of the 4 simultaneous speech streams and the location and duration of the pitch modulations in the target and distracter streams.

      If the pitch modulations were brief, it should be possible to calculate reaction time as an additional dependent measure. If the pitch modulations in the target and distracter streams occurred at different times, this would also allow more accurate categorization of the responses as correct or incorrect by creation of a response window. For instance, if a pitch modulation occurred in both streams and the participant responded "yes", then the timing of the pitch modulation and the response could dissociate a false-positive to the distractor stream pitch modulation from the target stream pitch modulation.

      It is not clear from the Methods, but it seems that the results shown are only for trials in which a single distracter was presented in the target stream. A standard analysis would be to use signal detection theory to examine response patterns across all of the different conditions.

      In selective attention experiments, the stimulus is usually identical between conditions while only the task instructions vary. The stimulus and task are both different between experiments 1 and 2, making it difficult to claim that "linguistic" vs. "temporal" is the only difference between the experiments.

At a more conceptual level, it seems problematic to assume that inverting the face dissociates linguistic from temporal processing. For instance, a computer face recognition algorithm whose only job was to measure the timing of mouth movements (temporal processing) might operate by first identifying the face using the eye-nose-mouth vertical order. Inverting the face would disrupt the algorithm and hence "temporal processing", invalidating the assumption that face inversion is a pure manipulation of "linguistic processing".

    1. Reviewer #2:

      This paper reports on a very interesting and potentially highly important finding - that so-called "sleep learning" does not improve relearning of the same material during wake, but instead paradoxically hinders it. The effect of stimulus presentation during sleep on re-learning was modulated by sleep physiology, namely the number of slow wave peaks that coincide with presentation of the second word in a word pair over repeated presentations. These findings are of theoretical significance for the field of sleep and memory consolidation, as well as of practical importance.

      Concerns and recommendations:

      1) The authors' results suggest that "sleep learning" leads to an impairment in subsequent wake learning. The authors suggest that this result is due to stimulus-driven interference in synaptic downscaling in hippocampal and language-related networks engaged in the learning of semantic associations, which then leads to saturation of the involved neurons and impairment of subsequent learning. Although at first the findings seem counter-intuitive, I find this explanation to be extremely interesting. Given this explanation, it would be interesting to look at the relationship between implicit learning (as measured on the size judgment task) and subsequent explicit wake-relearning. If this proposed mechanism is correct, then at the trial level one would expect that trials with better evidence of implicit learning (i.e. those that were judged "correctly" on the size judgment task) should show poorer explicit relearning and recall. This analysis would make an interesting addition to the paper, and could possibly strengthen the authors' interpretation.

      2) In some cases, a null result is reported and a claim is based on the null result (for example, the finding that wake-learning of new semantic associations in the incongruent condition was not diminished). Where relevant, it would be a good idea to report Bayes factors to quantify evidence for the null.
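One simple option, sketched below, is the BIC approximation to the Bayes factor (Wagenmakers, 2007); a full JZS Bayes factor would be the more standard choice, and the data frame, column names and comparison here are synthetic placeholders rather than the study's design:

```python
# BIC approximation to the Bayes factor in favour of the null (Wagenmakers, 2007):
# BF01 ~= exp((BIC_alternative - BIC_null) / 2). Synthetic placeholder data below.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "condition": np.repeat(["congruent", "incongruent"], 30),
    "recall": rng.normal(0.45, 0.15, 60),     # hypothetical recall scores
})

null_model = smf.ols("recall ~ 1", data).fit()
alt_model = smf.ols("recall ~ C(condition)", data).fit()

bf01 = np.exp((alt_model.bic - null_model.bic) / 2.0)
print(f"BF01 (evidence for the null over a condition effect) = {bf01:.2f}")
```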

      3) The authors report that they "further identified and excluded from all data analyses the two most consistently small-rated and the two most consistently large-rated foreign words in each word lists based on participants' ratings of these words in the baseline condition in the implicit memory test." Although I realize that the same approach was applied in their original 2019 paper, this decision point seems a bit arbitrary, particularly in the context of the current study where the focus is on explicit relearning and recall, rather than implicit size judgments. As a reader, I wonder whether the results hold when all words are included in the analysis.

4) In the main analysis examining interactions between test run, condition (congruent/incongruent) and number of peak-associated stimulations during sleep (0-1 versus 3-4), baseline trials (i.e. new words that were not presented during sleep) are excluded. As such, the interactions shown in the main results figure (Figure D) are a bit misleading and confusing, as they appear to reflect comparisons relative to the baseline trials (rather than a direct comparison between congruent and incongruent trials, as was done in the analysis). It also looks like the data in the "new" condition are simply replicated four times across the four panels of the figure. I recommend reconstructing the figure so that a direct visual comparison can be made between the numbers of peaks within the congruent and incongruent trials. This change would allow the figure to more accurately reflect the statistical analyses and results that are reported in the manuscript.

      5) In addition to the main analysis, the authors report that they also separately compared the conscious recall of congruent and incongruent pairs that were never or once vs. repeatedly associated with slow-wave peaks with the conscious recall in the baseline condition. Given that four separate analyses were carried out, some correction for multiple comparisons should be done. It is unclear whether this was done as it does not seem to be reported.
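A minimal sketch of such a correction across the four planned comparisons, using the Holm procedure; the p-values below are placeholders, not values from the manuscript:

```python
# Holm correction across the four pairwise comparisons against baseline.
# The p-values below are placeholders, not values from the manuscript.
from statsmodels.stats.multitest import multipletests

pvals = [0.012, 0.034, 0.21, 0.048]   # hypothetical raw p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(list(zip(pvals, p_adj.round(3), reject)))
```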

    1. Reviewer #2:

This paper presents a clever application of the well-known Simultaneous Localization and Mapping (SLAM) model (+ replay) to the neuroscience of navigation. The authors capture aspects of the relationship between EC and HPC that are often not captured within one paper/model. Here, online prediction error between the EC/HPC systems in the model triggers offline probabilistic inference, or the fast propagation of traveling waves enabling neural message passing between place and grid cells representing non-local states. The authors thus model how such replay - i.e. fast propagation of offline traveling waves passing messages between EC/HP - leads to inference and explains the function of coordinated EC-HP replay. I enjoyed reading the paper and the supplementary material.

      First, I'd like to say that I am impressed by this paper. Second, I see my job as a reviewer merely to give suggestions to help improve the accessibility and clarity of the present manuscript. This could help the reader appreciate a beautiful application of SLAM to HPC-EC interactions as well as the novelty of the present approach in bringing in a number of HPC-EC properties together in one model.

1) The introduction is rather brief and lacks citations standard for this field. This is understandable, as it may be due to earlier versions having been prepared for NeurIPS. It would be helpful if the authors added a bit more background to the introduction so readers can orient themselves and localize this paper in the larger map of the field. It would be especially helpful to repeat this process not only in the intro but throughout the text, even if the authors have already cited papers elsewhere, since the authors are elegantly bringing together a number of different neuroscientific concepts and findings, such as replay, structures, offline traveling waves, propagation speed, shifter cells, etc. A bigger-picture intro will help the reader be prepared for all the relevant pieces that are later gradually unfolded.

It would be especially helpful to offer an overall summary of the main aspects of the HPC-EC literature in relation to navigation that will later appear. This will frontload the larger, and in my opinion clever, narrative of the paper, where replay, memory, and probabilistic models meet to capture aspects of the literature not previously addressed.

      2) The SLAM (simultaneous localization and mapping) model is used broadly in mobile phones, robotics, automotive, and drones. The authors do not introduce SLAM to the reader, and SLAM (in broad strokes) may not be familiar to potential readers. Even for neuroscientists who may be familiar with SLAM, it may not be clear from the paper which aspects of it are directly similar to existing other models and which aspects are novel in terms of capturing HPC/EC findings. I would strongly encourage an entire section dedicated to SLAM, perhaps even a simple figure or diagram of the broader algorithm. It would be especially helpful if the authors could clarify how their structure replay approach extends existing offline SLAM approaches. This would make the novel approaches in the present paper shine for both bio & ML audiences.

Providing this big picture will make it easier for the reader to connect aspects of SLAM that are known with the clever account of traveling waves and other HPC-EC interactions, which are largely overlooked in contemporary HPC-EC models of space and structures. It is perhaps also worth mentioning RatSLAM, which is another bio-inspired version of SLAM, and the place cell/hippocampus inspiration for SLAM.

      D Ball, S Heath, J Wiles, G Wyeth, P Corke, M Milford, "OpenRatSLAM: an open source brain-based SLAM system", in Autonomous Robots, 34 (3), 149-176, 2013

3) At first glance, it may appear that there are many moving parts in the paper. To the average neuroscience reader, this may be puzzling, or require going back and forth with some working memory overload to put the pieces together. My suggestion is to include a table of biological/neural functions and the equivalent components of the present model. This guide will allow the reader to see the big picture - and the value of the authors' hard work - at one glance, and to look at each section more closely with the bigger picture in mind. I believe this will only increase the clarity and accessibility of the manuscript.

4) The authors could perhaps spend a little more time comparing previous modeling attempts at capturing HPC-EC phenomena, walking through various models and noting their caveats, as well as the advantages and caveats of their own model. This could be in the discussion, or earlier, but it would help localize the reader in this space a bit better.

5) Perhaps the authors could briefly clarify where merely Euclidean vs. non-Euclidean representations would be expected of the model, and whether it can accommodate >2D maps, e.g. in bats or in nonspatial interactions of HPC-EC.

      6) The discussion could also be improved by synthesizing the old and the new, the significant contribution of this paper and modifications to SLAM, as well as a big picture summary of the various phenomena that come together in the HPC-EC interactions, e.g. via traveling waves.

    1. Reviewer #2:

The authors present a survey of the bacterial community in the Cam River (Cambridgeshire, UK) using one of the latest DNA sequencing technologies (Oxford Nanopore) with a targeted sequencing approach. The work consisted of a test of the sequencing and analysis method, benchmarking several programs on mock data to decide which was best suited for their analysis.

After selecting the best tool, they provide family-level taxonomic profiling of the microbial community along the Cam River over a 4-month window of time. In addition to the general and local snapshots of the bacterial composition, they correlate some physicochemical parameters with abundance shifts of some taxa.

      Finally, they report the presence of 55 potentially pathogenic bacterial genera that were further studied using a phylogenetic analysis.

      Comments:

      Page 6. There is a "data not shown" comment in the text:

      "Benchmarking of the classification tools on one aquatic sample further confirmed Minimap2's reliable performance in a complex bacterial community, although other tools such as SPINGO (Allard, Ryan, Jeffery, & Claesson, 2015), MAPseq (Matias Rodrigues, Schmidt, Tackmann, & von Mering, 2017), or IDTAXA (Murali et al., 2018) also produced highly concordant results despite variations in speed and memory usage (data not shown)."

Nowadays, there is no reason for not showing data. In case the speed and memory usage were not recorded, it is advisable to rerun the analysis and quantify these variables, rather than mentioning them without reporting them.

Otherwise, what are the reasons for not showing these results?

Figure 2 is too dense and crowded. In the end, all panels are too tiny and the message they should deliver is lost. That also makes the figure legend very long. I would suggest moving some of the figure panels, maybe b), c) and d), to separate supplementary figures.

Figure 3 has the same problem. I think there is too much information that could be moved to the supplementary material.

In addition to Figure 4, it would be important to test whether the differences over time in the family-level PCA component contributions are statistically significant. Panel B depicts the most evident variance difference, but what about other taxa that might not be very abundant yet differ over time? You could use the fitFeatureModel function from the metagenomeSeq R library with an adjusted-P threshold of 0.05 to validate abundance differences in addition to your analysis.
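fitFeatureModel is an R/Bioconductor function; purely as an illustration of the same idea (not the authors' pipeline and not metagenomeSeq), a rough alternative in Python could test each family across months nonparametrically with FDR correction. All names and values below are placeholders:

```python
# Alternative sketch: per-family Kruskal-Wallis test of relative abundance
# across sampling months, with Benjamini-Hochberg FDR correction.
# The table below is a synthetic placeholder.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
families = ["FamilyA", "FamilyB", "FamilyC"]            # illustrative names
abund = pd.DataFrame({"month": np.repeat(["Apr", "Jun", "Aug"], 12)})
for fam in families:
    abund[fam] = rng.dirichlet(np.ones(3), len(abund))[:, 0]   # fake rel. abundances

pvals = [kruskal(*[g[fam].values for _, g in abund.groupby("month")]).pvalue
         for fam in families]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(dict(zip(families, zip(p_adj.round(3), reject))))
```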

      Page 12-13. In the paragraph:

      "Using multiple sequence alignments between nanopore reads and pathogenic species references, we further resolved the phylogenies of three common potentially pathogenic genera occurring in our river samples, Legionella, Salmonella and Pseudomonas (Figure 7a-c; Material and Methods). While Legionella and Salmonella diversities presented negligible levels of known harmful species, a cluster of reads in downstream sections indicated a low abundance of the opportunistic, environmental pathogen Pseudomonas aeruginosa (Figure 7c). We also found significant variations in relative abundances of the Leptospira genus, which was recently described to be enriched in wastewater effluents in Germany (Numberger et al., 2019) (Figure 7d)."

Here it is important to mention the relative abundance in the sample. Please discuss that the presence of pathogen DNA in a sample has to be confirmed by other microbiological methods to validate whether viable organisms are present. Finding pathogen DNA is definitely a big warning sign, but since it is characterized only at the genus level, further investigation using whole-metagenome shotgun sequencing or isolation would also be important.

This phrase is used in the abstract, introduction and discussion, although not written exactly the same way each time:

      "Using an inexpensive, easily adaptable and scalable framework based on nanopore sequencing..."

I wouldn't use the term "inexpensive", since it is relative. Also, it should be discussed that although the platform is technically convenient in some aspects compared to other sequencers, there are still protocol steps that need certain reagents and equipment similar or identical to those needed for other sequencing platforms. Common bottlenecks such as DNA extraction methods, sample preservation and the presence of inhibitory compounds should probably be mentioned and stressed.

      Page 15: "This might help to establish this family as an indicator for bacterial community shifts along with water temperature fluctuations."

Temperature might not be the main factor behind the shift; other factors that were not measured could also contribute. Several parameters related to water quality (COD, organic matter, PO4, etc.) were not measured.

      "A number of experimental intricacies should be addressed towards future nanopore freshwater sequencing studies with our approach, mostly by scrutinising water DNA extraction yields, PCR biases and molar imbalances in barcode multiplexing (Figure 3a; Supplementary Figure 5)."

      Here you could elaborate more on the challenges like those mentioned in my previous comment.

1. Converting Angular components into Svelte is largely a mechanical process. For the most part, each Angular template feature has a direct equivalent in Svelte. Some things are simpler and some are more complex, but overall it's pretty easy to do.
    1. Reviewer #2:

In this work Chatzikalymniou et al. use models of the hippocampus of different complexities to understand the emergence and robustness of intra-hippocampal theta rhythms. They use a segment of a highly detailed model as a bridge to leverage insights from a minimal model of spiking point neurons to the level of the full hippocampus. This is an interesting approach, as the minimal model is more amenable to analysis and to probing the parameter space, while the detailed model is potentially closer to experiment yet difficult and costly to explore.

The study of network problems is very demanding: there are no good ways to address the robustness of realistic models, and the parameter space makes brute-force approaches impractical. The angle of attack proposed here is interesting. While this is surely not the only approach tenable, it is sensible, justified, and actually implemented. The amount of work that went into this project is clear. I essentially accept the proposed reasoning and the hypotheses put forward. The few remarks I have are rather minor, but I think they merit a response.

      1) l. 528-530 "This is particularly noticeable in Figure 9D where theta rhythms are present and can be seen to be due to the PYR cell population firing in bursts of theta frequency. Even more, we notice that the pattern of the input current to the PYR cells isn't theta-paced or periodic (see Figure 10Bi)."

This is a loose statement. When you look at the raw LFP, theta is also not apparent (e.g. Figure 9Ei or Fi). What happens once you look at the spectrum of the activity shown in Figure 10Bi? Do you see theta or not?

      2) l. 562 "This implies that the different E-I balances in the segment model that allow LFP theta rhythms to emerge are not all consistent with the experimental data, and by extension, the biological system."

This is speculative. We do not know how generic the results of Amilhon et al. are. They showed what you can find experimentally, not what you cannot find experimentally. I agree with the statement from l. 581, though: "Thus, from the perspective of the experiments of Amilhon et al. (2015) theta rhythm generation via a case a type pathway seems more biologically realistic ..."

      3) There are several problems with access to code and data provided in the manuscript.

l. 986, 1113 - the osf.io links do not give access
l. 1027 - the bitbucket of bezaire does not allow access
l. 1030 - the simtracker link is down
l. 1129, 1141 - the github link does not exist (private repo?)

      4) l. 1017 - Afferent inputs from CA3 and EC are also included in the form of Poisson-distributed spiking units from artificial CA3 and EC cells.

It is not obvious whether Poisson inputs are adequate here - did you check the statistics of the inputs? Any references? Different input statistics may induce specific correlations which might affect the size of the fluctuations of the input current. I do not think this would be a significant effect here unless the departure from Poisson is highly significant. Any comments might be useful.
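One quick way to gauge how much the input statistics matter, sketched below with purely illustrative numbers (not parameters of the model), is to compare the fluctuations of the summed afferent drive for Poisson versus more regular (gamma-renewal) spike trains at the same mean rate:

```python
# Compare fluctuations of the summed afferent drive for Poisson vs gamma
# (more regular) input spike trains at the same mean rate. All numbers are
# illustrative, not taken from the model.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, rate, duration, bin_size = 100, 20.0, 50.0, 0.2  # count, Hz, s, s
n_bins = int(duration / bin_size)

def binned_drive(isi_sampler):
    """Sum spike counts per time bin over all afferent inputs."""
    counts = np.zeros(n_bins)
    for _ in range(n_inputs):
        t = np.cumsum(isi_sampler(int(rate * duration * 2)))
        counts += np.histogram(t[t < duration], bins=n_bins,
                               range=(0.0, duration))[0]
    return counts

poisson = binned_drive(lambda n: rng.exponential(1.0 / rate, n))
regular = binned_drive(lambda n: rng.gamma(4.0, 1.0 / (4.0 * rate), n))  # CV = 0.5

for name, c in [("Poisson", poisson), ("gamma (more regular)", regular)]:
    print(f"{name}: Fano factor of summed drive = {c.var() / c.mean():.2f}")
```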

      5) l. 909 - "Euler integration method is used to integrate the cell equations with a timestep of 0.1 msec."

      This seems dangerous. Is the computation so costly that more advanced integration is not viable?
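A simple sanity check, sketched below on a generic passive membrane equation with illustrative parameters (not those of the model), is to compare forward Euler at dt = 0.1 ms against an adaptive Runge-Kutta reference:

```python
# Compare forward Euler (dt = 0.1 ms) against an adaptive Runge-Kutta solver
# on a simple passive membrane equation dV/dt = (-(V - E_L) + R*I(t)) / tau.
# Parameters are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

tau, E_L, R = 10.0, -65.0, 1.0                        # ms, mV, nominal resistance
I = lambda t: 10.0 * np.sin(2 * np.pi * t / 125.0)    # ~8 Hz drive

def dVdt(t, V):
    return (-(V - E_L) + R * I(t)) / tau

# Forward Euler, dt = 0.1 ms
dt, T = 0.1, 1000.0
t_euler = np.arange(0.0, T, dt)
V_euler = np.empty_like(t_euler)
V_euler[0] = E_L
for i in range(1, len(t_euler)):
    V_euler[i] = V_euler[i - 1] + dt * dVdt(t_euler[i - 1], V_euler[i - 1])

# Reference: adaptive RK45 with tight tolerances
sol = solve_ivp(dVdt, (0.0, T), [E_L], t_eval=t_euler, rtol=1e-8, atol=1e-10)

print("max |Euler - RK45| (mV):", np.max(np.abs(V_euler - sol.y[0])))
```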

    1. Reviewer #2:

This manuscript addresses the question of how axons vs. dendrites are lost, by live imaging of the cortex of rTg4510 tau transgenic mice. Overall, this manuscript is well done and well written, and confirms previous findings. However, there are a number of key controls missing from the experimental data (please see below). Statistical analyses are satisfactory (with some caveats, please see below).

Figures 1+2 replicate previous findings in rTg4510 (Crimins et al., 2012; Jackson et al., 2017; Kopeikina et al., 2013), as do Figures 3+4 (Ramsden et al., 2005; SantaCruz et al., 2005; Spires et al., 2006; Crimins et al., 2012; Kopeikina et al., 2013; Helboe et al., 2017; Jackson et al., 2017). The novelty here lies in the differing patterns of bouton and spine turnover shortly before axons and dendrites, respectively, are lost, a finding uniquely enabled by 2-photon imaging. Thus, the findings in Fig. 5/6 should be highlighted and solidified. Further, the manuscript lacks mechanistic insight.

      It is not clear how the authors ensure that the perceived loss of spines/boutons/dendrites/axons is not due to bleaching or loss of the GFP signal. Please validate loss of spines/boutons and actual synapses using fixed tissue imaging or electron microscopy on a separate cohort of mice.

Did the authors control for gliosis after the repeated imaging (very soon after viral injection and cranial window implantation at the same site)? Could it be that the repeated imaging itself, on damaged tissue, induces blebbing of the already more vulnerable spines in the tau mice? Iba1 and GFAP staining with and without doxycycline administration should be included in the supplemental material, along with quantification of the stained area. Transgenic mice without manipulation (viral injection/cranial window/2P imaging) should also act as a control to ensure no gliosis is observed.

rTg4510 transgene insertion: Gamache et al. recently showed that the integration sites of both the CaMKIIα-tTA and MAPT-P301L transgenes impact the expression of endogenous mouse genes. The disruption of the Fgf14 gene in particular contributes to the pathological phenotype of these mice, making it difficult to directly ascribe the phenotypes seen in the manuscript to MAPT-P301L transgene overexpression. Although this limitation is acknowledged in the discussion, the T2 mice employed in that paper (Gamache et al., 2019) would be suitable controls to better evaluate the contribution of tauP301L alone to the neuropathology and disease progression observed in the authors' experiments, at least for fixed-tissue synapse imaging.

    1. Reviewer #2:

The paper titled "Brain Network Reconfiguration for Narrative and Argumentative Thought" sought to uncover the common neural processing sequences (time-locked activations and deactivations; inter-subject correlations and inter-subject functional connectivity) underlying narrative and argumentative thought. In particular, the study aimed to provide evidence that would help adjudicate between two current theories: the Content-Dependent Hypothesis (narrative ≠ argumentative) and the Content-Independent Hypothesis (narrative = argumentative). In order to assess these possibilities, they tested participants in an fMRI scanner as they listened to validated narrative and argumentative texts. Each text condition was directly compared to resting state and to scrambled versions of the texts. Across a range of interesting analyses that focus on how each participant's brain synchronized with other participants' brains throughout the same narrative and argumentative texts, they primarily found support for the content-dependent hypothesis, with a few differences and commonalities across text conditions. Relative to the scrambled conditions, listening to narrative texts was more associated with default mode activity across participants, whereas listening to argumentative texts only activated a common network of superior fronto-parietal control regions and language regions. Argumentative texts did not differ much from scrambled versions of the same text. These patterns reveal themselves in both the ISC and ISFC data. Overall, I feel this paper is really well written and presents a novel approach to distinguishing the neural processes between similar, but different, types of thought. At times the manuscript loses touch with its primary brain coordination metrics (ISC and ISFC), describing the findings more like a GLM or functional connectivity study.

      Comments:

      Introduction:

      1) The introduction is very clearly written and uses a wonderful variety of sentence structure. Well done!

2) While the writing is beautiful, a few sentences are less easy to comprehend than others. For example, the use of "outstands" in line 36 is a bit difficult to parse on first read. Consider simplifying the language somewhat.

      3) There seems to be an opportunity to discuss this work and its findings in a broad context of narrative or argumentative self-generated internal thought (not based on listening to texts). For instance, I think there could be a few sentences tying this work to studies of autobiographical memory retrieval or mind wandering (for argumentation perhaps studies of the cognitive and neural processes behind complex decision making). This is captured to some extent in the introduction and discussion, but I think it could go further with citations beyond those just associated with listening to various types of text.

      4) Appreciate the thorough discussion of hypotheses and background.

      5) It is not necessary, but it might be interesting to show some basic functional connectivity analyses of the individual participant activations in supplemental analyses (no ISC or ISFC).

      Methods:

1) Please clarify how the ISFC analysis can be directional in any way. Does unidirectional mean that you are just taking one value for each pairwise connection Cij?

      Results:

      1) To what extent is there a concern that participants would still try to stitch together the scrambled narratives even if they are less coherent? Was this even possible given the nature of the stimuli?

2) In line 125 and throughout, the authors should consistently remind the reader that 'engagement' in this case means that there were consistent and correlated increases in the BOLD response across participants. This differs in some ways from task engagement in event-related GLM studies.

3) The language throughout should reflect consistent involvement across participants at particular time points in the narrative vs. the argumentative texts.

4) It seems like the argumentative condition is more similar to the scrambled conditions in many ways. Might it be that argumentative texts are just less coherent and structured than narrative texts?

      5) It seems clear that the neural processing of argumentative texts (64 distinct edges) were very different from the narrative texts (2348 distinct edges), but that the current contrasts did not clearly and consistently distinguish argumentative thought from the scrambled argument conditions. A discussion of the analyses that might be necessary to better elucidate the dynamics of processing for argumentative thought would be helpful.

      Discussion:

      1) Were there any neural differences between the narrative vs argument scrambled-texts? This might reveal any differences in the processing of the scrambled texts for each condition and might help shine light on features of the scrambled argument condition that contributed to the overall lack of distinction relative to the narrative vs scrambled narrative conditions.

2) Throughout, the ISC and ISFC findings are conflated with univariate or GLM results from prior studies. Please compare and contrast how ISC and ISFC findings might relate to univariate or GLM findings early in the discussion.

      3) Related to point 2 in the introduction, please also cite studies from autobiographical memory retrieval studies that also show the frontoparietal control system working as information is iteratively accumulated and updated over long temporal windows (St. Jacques et al., 2011; Inman et al., 2018; Daselaar et al., 2008).

4) Please reconsider how the ISC findings are discussed as 'activation'. While the BOLD activity of these areas is certainly coordinated across participants at similar points in the text, I feel the term activation fits best with studies that convolve the brain activity with an HRF. In particular, from what I understand of ISC, a common decrease in BOLD activity across participants at the same time in a read text would also lead to 'activation' of that area in an ISC analysis. This seems counterintuitive. The 2nd paragraph of the discussion describes ISC and ISFC well in terms of what they show across a sample (synchronization of fluctuations in BOLD activity across participants for the same stimuli). "Activity" may capture this, but please consider some more nuanced ways to refer to these ISC and ISFC findings.

      Figures:

1) Please double-check the box plots in Figure 1a for Scene Construction. Another method of displaying these Likert rating data might be helpful. While I appreciate the attempt to display the individual data points, the simple main points get somewhat obscured by all of the information in the graph.

      2) Overall, I appreciate the attention to detail in all of the figures and the completeness of the data visualization with several useful supplemental figures.

    1. Reviewer #2:

      General assessment:

This manuscript presents an improved methodology for extracting distinct early auditory evoked potentials from the EEG response to continuous natural speech, including a novel method for obtaining simultaneous responses from different frequency bands. It is a clever approach and the first results are promising, but more rigorous evaluation of the method and more critical evaluation of the results are needed. It could provide a valuable tool for investigating the effect of corticofugal modulation of the early auditory pathway during speech processing. However, the claims made for its use in investigating speech encoding or in clinical diagnosis seem too speculative and unspecific.

      General comments:

      1) Despite repeated claims, I don't think a convincing case is made here that this method can provide insight on how speech is processed in the early auditory pathway. The response is essentially a click-like response elicited by the glottal pulses in the stimulus; it averages out information related to dynamic variations in envelope and pitch that are essential for speech perception; at the same time, it is highly sensitive to sound features that do not affect speech perception. What reason is there to assume that these responses contain information that is specific or informative about speech processing?

2) Similarly, the claim that the methodology can be used as a clinical application is not convincing. It is not made clear what pathology these responses can detect that current ABR methods cannot, or why. As explained in the Discussion, the response size is inherently smaller than that of standard ABRs because of the higher repetition rate of the glottal pulses, and the response may depend on more complex neural interactions that would be difficult to quantify. Do these features not make them less suitable for clinical use?

3) It needs to be rigorously confirmed that the earliest responses are not contaminated or influenced by responses from later sources. There seems to be some coherent activity or offset in the baseline (pre 0 ms), in particular with the lower filter cut-off. One way to test this might be to simulate a simple response by filtering and time-shifting the stimulus waveforms, adding these up plus realistic noise, and applying the deconvolution to see whether the input is accurately reproduced. It might also be useful to see how the response latencies and amplitudes correlate with those of conventional click responses, and how they depend on stimulus level.
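A minimal version of the suggested simulation might look as follows; all signals, rates and the regularization constant are synthetic assumptions, not the authors' pipeline:

```python
# Simulate a known response kernel, convolve it with a pulse-train regressor,
# add noise, and test whether regularized FFT-based deconvolution recovers it.
# All signals here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 10_000, 60.0                       # Hz, s
n = int(fs * dur)

# Pulse-train regressor: irregular ~100-200 Hz rate, roughly glottal-pulse-like
t_pulse = np.cumsum(rng.uniform(1 / 200, 1 / 100, size=int(dur * 200)))
pulses = np.zeros(n)
pulses[(t_pulse[t_pulse < dur] * fs).astype(int)] = 1.0

# Known kernel: damped oscillation peaking a few ms after each pulse
t_k = np.arange(0, 0.03, 1 / fs)
kernel = np.exp(-t_k / 0.005) * np.sin(2 * np.pi * 500 * t_k)

eeg = np.convolve(pulses, kernel)[:n] + 0.5 * rng.standard_normal(n)

# Regularized frequency-domain deconvolution
P, Y = np.fft.rfft(pulses), np.fft.rfft(eeg)
lam = 1e-2 * np.mean(np.abs(P) ** 2)          # small regularization term
kernel_hat = np.fft.irfft(Y * np.conj(P) / (np.abs(P) ** 2 + lam), n=n)[:len(t_k)]

err = np.max(np.abs(kernel_hat - kernel)) / np.max(np.abs(kernel))
print(f"relative recovery error: {err:.2f}")
```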

4) The multiband responses show a variation of latency with frequency band that indicates a degree of cochlear frequency specificity. The latency functions reported here look similar to those obtained by Don et al. 1993 for derived-band click responses, but the actual numbers for the frequency-dependent delays (as estimated by eye from Figures 4, 6 and 7) seem shorter than those reported for wave V at 65 dB SPL (Don et al. 1993, Table II). The latency function would be better fitted with an exponential, as in Strelcyk et al. 2009 (equation 1), than with a quadratic function; the fitted exponent could then be directly compared to their reported value.
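A sketch of such a comparison is given below; the latency values are placeholders, and the exponential form is only an assumed stand-in for equation 1 of Strelcyk et al. 2009, which should be used in practice:

```python
# Fit band-wise wave V latencies with a quadratic and with an exponential-type
# function, and compare residuals. Latency values are placeholders, and the
# exponential form is illustrative (use Strelcyk et al. 2009, Eq. 1, in practice).
import numpy as np
from scipy.optimize import curve_fit

cf = np.array([500.0, 1000.0, 2000.0, 4000.0, 8000.0])   # band centre freqs, Hz
lat = np.array([9.3, 8.1, 7.2, 6.6, 6.3])                # latencies, ms (fake)

def expo(f, a, b, c):
    return a + b * np.exp(-f / c)

p_quad = np.polyfit(np.log2(cf), lat, deg=2)
p_expo, _ = curve_fit(expo, cf, lat, p0=[6.0, 4.0, 1000.0], maxfev=10000)

rss_quad = np.sum((lat - np.polyval(p_quad, np.log2(cf))) ** 2)
rss_expo = np.sum((lat - expo(cf, *p_expo)) ** 2)
print(f"RSS quadratic: {rss_quad:.3f}, RSS exponential: {rss_expo:.3f}")
print("fitted exponential parameters (a, b, c):", p_expo)
```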

5) The fact that differences between narrators lead to changes in the ABR response is to be expected, and was already reported in Maddox and Lee 2018. I don't understand why it needs to be examined and discussed at such length here. The space devoted to discussing the recording time also seems very long. Neither the abstract nor the introduction refers to these topics, and they seem to be side issues that could be summarised and discussed much more briefly.

      L142-144. Is it possible to apply the pulse train regressor to the unaltered speech response? If so, does this improve the response, i.e. make it look more similar to the peaky speech response? It would be interesting to know whether improvement is due to the changed regressor or the stimulus modification or both.

L208-211. What causes the difference between the effect of high-pass filtering and that of subtracting the common response? If they serve the same purpose but have different results, this raises the question of which is more appropriate.

      L244. This seems a misinterpretation. The similarity between broadband and summated multiband responses indicates that the band filtered components in the multiband stimulus elicited responses that add linearly in the broadband response. It does not imply that the responses to the different bands originate from non-overlapping cochlear frequency regions.

      L339-342. Is this measure of SNR appropriate, when the baseline is artificially constructed by deconvolution and filtering? Perhaps noise level could be assessed by applying the deconvolution to a silent recording instead? It might also be useful to have a measure of the replicability of the response.

    1. Reviewer #2:

      General assessment of the work:

      In this manuscript Higgs and colleagues test the hypothesis that imprinted gene expression is enriched in the brain, and that identifying specific brain regions of enrichment will aid in uncovering physiological roles for imprinted pathways. The authors claim that the hypothesis that imprinted genes are enriched in key brain functions has never been formally/systematically tested. Moreover, they suggest that their analysis represents an unbiased systems-biology approach to this question.

In our assessment the authors fail to meet these criteria on several major grounds. Firstly, there are multiple instances of methodological bias in their analysis (detailed below). Secondly, the authors claim that their findings are validated by similar test results in 'matched' datasets. However, throughout, the authors appear to have avoided identifying the individual imprinted genes that are enriched in their analysis (they can be found in a minimally annotated supplementary file). Because of this, it is impossible to judge to what extent there is agreement between matched datasets and between levels of the analysis. For these reasons the analysis appears arbitrary rather than systematic, and lacks rigor. Consequently, we do not feel that the work of Higgs and colleagues goes beyond previous systematic reports of imprinting in the brain (for example, Gregg 2010 and Babak 2015 in the ms reference list).

      Numbered summary of substantive concerns:

      1) Imprinted genes that were identified as enriched are not clearly named or listed

-The authors use two or more independent datasets at each level to "strengthen any conclusions with convergent findings" (p4 ln96). By this the authors mean that both datasets pass the F-test criteria for enrichment. However, they should show which imprinted genes are allocated to each region and clearly present the overlap. Are the same genes enriched in the two datasets? Similarly, are the genes enriched in, e.g., the hypothalamus the same genes that are enriched in the ARC?

      -The authors discuss how their main aim of identifying expression "hotspots" helps inform imprinted gene function in the brain. An analysis of the actual genes is therefore crucial (and the assumed next step after identifying the location of enrichment).

      -The authors allocate parental expression enrichment to the brain regions but do not state why they do this analysis.

      -Are imprinted genes in the same cluster co-expressed, as might be expected?

2) Selection of datasets needs to be more clearly explained (i.e. explicit selection criteria are needed)

-Their stated reason for selection, "to create a hierarchical sequence of data analysis", suggests that there could be potential bias in their selection based on previous knowledge of IG action in the brain.

-Explicit selection criteria would explain the level of similarity between datasets, which is important before datasets are systematically analyzed.

      3) The study is more like a set of independent analyses of individual datasets (rather than one systematic/meta-analysis)

      -Each dataset was individually processed (filtered and normalized) following the original authors' procedure, rather than processing all the raw datasets the same way.

      -"A consistent filter, to keep all genes expressed in at least 20 cells or (when possible) with at least 50 reads" (p7 ln115), our emphasis - which filter was used? This should be consistent throughout.

      -Two different cut-offs were used to identify genes with upregulated expression, making the identification of enriched genes arbitrary (p7 para2).

-Some datasets contain tissues from various time-points and sexes, but there is no clarification of whether all the data were included in the analysis (e.g. the Ximerakis et al. dataset was originally an analysis of young and old mouse brains). This is particularly difficult to interpret when embryonic data are likened to adult data, which are in no way equivalent.

      -The cell-type and tissue-type identities were supplied by the dataset authors, based on their original clustering methods. This can be variable, particularly at the sub-population level.

      4) These differences make it hard to draw connections between the findings from each dataset

-In some levels, the authors compare two datasets for a "convergence" of IG over-expression. Yet the above differences between datasets and analyses make them difficult to compare (e.g. the comparison of hypothalamic neuronal subtypes with enriched IG expression between two datasets in level 3.a.2 is quite speculative).

      -More generally, the authors draw connections between their findings from each level, but the lack of consistency between analyses may not justify these connections.

      5) Hence, the study does not lead to a definitive set of findings that is new to the field

      -The above reasons suggest that this is not an objective set of data about IG expression in the brain, but rather evidence of certain hotspots for targeted analysis. However, these hotspots were already known.

      -A systematic analysis of raw data using fewer datasets, that then includes and discusses the imprinted genes, may lead to novel findings and a paper with a clearer narrative.

    1. Reviewer #2:

The authors describe the development and use of a D-serine sensor based on a periplasmic ligand-binding protein (DalS) from Salmonella enterica in conjunction with a FRET readout between enhanced cyan fluorescent protein and Venus fluorescent protein. They rationally identify point mutations in the binding pocket that make the binding protein somewhat more selective for D-serine over glycine and D-alanine. Ligand docking into the binding site, as well as algorithms for increasing stability, identified further mutants with higher thermostability and higher affinity for D-serine. The combined computational efforts led to a sensor with higher affinity for D-serine (Kd = ~7 µM), but one that also retained affinity for the native ligands D-alanine (Kd = ~13 µM) and glycine (Kd = ~40 µM). Molecular simulations were then used to explain how remote mutations identified in the thermostability screen could lead to the observed alteration of ligand affinity. Finally, D-SerFS was tested in 2P imaging in hippocampal slices and in anesthetized mice, using biotin-streptavidin to anchor the exogenously applied purified protein sensor to the brain tissue and pipetting on saturating concentrations of D-serine.

      Although presented as the development of a sensor for biology, this work primarily focuses on the application of existing protein engineering techniques to alter the ligand affinity and specificity of a ligand-binding protein domain. The authors are somewhat successful in improving specificity for the desired ligand, but much context is lacking. For any such engineering effort, the end goals should be laid out as explicitly as possible. What sorts of biological signals do they desire to measure? On what length scale? On what time scale? What is known about the concentrations of the analyte and potential competing factors in the tissue? Since the authors do not demonstrate the imaging of any physiological signals with their sensor and do not discuss in detail the nature of the signals they aim to see, the reader is unable to evaluate what effect (if any) all of their protein engineering work had on their progress toward the goal of imaging D-serine signals in tissue.

      As a paper describing a combination of protein engineering approaches to alter the ligand affinity and specificity of one protein, it is a relatively complete work. In its current form, as an attempt to present a new fluorescent biosensor for imaging biology, it is strongly lacking. I would suggest the authors rework the story to focus exclusively on the protein engineering, or continue to work on the sensor and imaging until they are able to use it to image some biology.

      Additional Major Points:

      1) There is no discussion of why the authors chose to use non-specific chemical labeling of the tissue with NHS-biotin to anchor their sensor vs. genetic techniques to get cell-type specific expression and localization. There is no high-resolution imaging demonstrating that the sensor is localized where they intended.

      2) Why does the fluorescence of both the CFP and the YFP decrease upon addition of ligand (see e.g. Supplementary Figure 2)? Were these samples at the same concentration? Is this really a FRET sensor, or more of an intensiometric sensor? Is this also true with 2P excitation? How does the Venus fluorescence change when Venus is excited directly? Perhaps fluorescence lifetime measurements could help clarify what is happening.

      3) How reproducible are the spectral differences between LSQED and LSQED-T197Y? Only one trace for each is shown in Supplementary Figure 2 and the differences are very small, but the authors use these data to draw conclusions about the protein open-closed equilibrium.

      4) The first three mutations described are arrived upon by aligning DalS (which is more specific for D-Ala) with the NMDA receptor (which binds D-Ser). The authors then mutate two of the ligand pocket positions of DalS to the same amino acid found in NMDAR, but mutate the third position to glutamine instead of valine. I really can't understand why they don't even test Y148V if their goal is a sensor that hopefully detects D-Ser similar to the native NMDAR. I'm sure most readers will have the same confusion.

  2. Oct 2020
    1. Start with your objective: Before writing, choose an objective to focus your thinking.
    2. Our writing process: The goal of your first draft isn’t to say things well. Save that for rewriting. Your first draft is for generating ideas: brainstorm talking points, then connect dots between those points to learn what you’re really trying to say. This works best when you’re exploring ideas that most interest you. The more self-indulgent you are, the better your article.
    1. It happened in 2000, when Gore had more popular votes than Bush yet fewer electoral votes, but that was the first time since 1888.

      It happened again in 2016.

    1. Disciplined by understanding,one abandons both good and evil deeds;so arm yourself for discipline—discipline is skill in actions.

      It is not enough merely to understand discipline, because discipline is a skill that must be shown through action.

    2. People will tellof your undying shame,and for a man of honorshame is worse than death

      In this statement Krishna explains that taking part in the battle is more honorable than death, because if Arjuna does not participate, the shame will stay with him for the rest of his life.

    3. Death is certain for anyone born,and birth is certain for the dead;

      In my view, Indian people believe in life after death, but what does the adjective "certain" mean when applied to both birth and death?

    4. decisively—Which is better?I am your pupil.Teach me what I seek!

      Arjuna seems confused by Krishna's speech. Although he does not agree with Krishna on some points, why does he keep asking for his guidance?

    5. Krishna, how can I fightagainst Bhishma and Dronawith arrowswhen they deserve my worship?

      Arjuna believes that fighting against Bhishma and Drona would be a great sin, because they do not deserve to be killed; rather, they deserve his worship.

  3. moodle.southwestern.edu moodle.southwestern.edu
    1. unbiased

      The Republican Party will never stop claiming the media is biased, so I am surprised they are claiming they have resolved this issue. I would think they would want to keep acknowledging it as an issue.

  4. moodle.southwestern.edu moodle.southwestern.edu
    1. "The President has been regulating to death a free market economy" - it's interesting how much this preamble throws Trump under the bus

    2. "our enemies no longer fear us and our friends no longer trust us" - I guess the Democrats and Republicans agree on this.

    3. "This platform is optimistic because the American people are optimistic." This is completely unsupported by everything stated before it.

    4. "covenant" "Creator" "God-given natural resources" "prepared to deal with evil in the world" show religious tone

    1. Friends and foes alike neither admire nor fear President Trump’s leadership

      I feel like there are countries who fear his leadership.

    2. The challenges before us—the worst public health crisis in a century, the worst economic downturn since the Great Depression, the worst period of global upheaval in a generation, the urgent global crisis posed by climate change, the intolerable racial injustice that still stains the fabric of our nation—will test America’s character like never before.

      I know that we are making history, but it doesn't exactly feel like it. The election feels like a joke. There is a stark difference between what came out of Roosevelt's mouth and what comes out of either presidential candidate's mouth. Now it is a matter of choosing the lesser of two evils rather than a heroic leader to help our country achieve greatness.

    3. a more perfect union

      I feel like this goal has been abandoned.

    1. Bess, J. L., & Dee, J. R. (2008). Understanding college and university organization: Theories for effective policy and practice, Volume 1 (1st ed.). Stylus.

  5. Sep 2020
    1. So how should we think about federalism in the age of coronavirus? The answer is to emphasize the importance of building social solidarity — the belief in a shared fate for all Americans that transcends state or regional identities

      What keeps Americans from having social solidarity?

    2. institutional antagonism will prevent the concentration of power, encourages individualist mentalities that lead to self-interested actions and erode national unity.

      What makes Americans so individualistic? What is different about Taiwan’s society that made their people more selfless?

    1. “We’re changing federalism from the idea of shared expertise in different policy areas into partisan stakes in the ground that are meant to obstruct opponents,” Robertson says.

      This is so true of the Trump Administration's "alternative facts"; it is as though we will soon be living in the dystopian novel Brave New World.

    2. “The coronavirus response is actually sort of a perfect measuring stick of our transition to our contemporary, very polarized model of federalism.”

      I want to reference the Netflix documentary The Social Dilemma. The documentary argues that the reason politics has become so polarized is social media: everyone is operating off of a different set of facts.

    3. He has threatened to withhold federal funds from school districts that don’t open for in-person instruction.

      Is it within Trump's rights to do this?

    1. It could create incentives for action by conditioning a portion of funds going to states in any future relief packages on states’ adherence to the measures

      Why did this not happen? I feel like it isn't the federalist system in general that is failing us; it's the leaders of the system. Why did Congress not make a playbook and create incentives for states to follow it? This reminds me of how the drinking age became 21 in every state through conditions attached to highway funding.

    2. Lacking strong federal leadership to guide a uniform response, the United States quickly fulfilled the World Health Organization’s prediction that it would become the new epicenter of Covid-19.

      I wonder whether, if a Democrat had been in office when COVID hit, we would have had stronger federal leadership. Would we even be in this state of emergency if someone who believed in the facts of science had been in office? I have trouble believing that there is nothing the president could have done to prevent COVID from getting this far out of hand.

    3. subject to constitutionally protected individual rights such as due process, equal protection, and freedom of travel and association

      I didn't know that the right to travel and to associate with whomever we choose is constitutionally protected. I wouldn't have thought the government could control who leaves their house or whom they spend time with anyway. I guess this shows how right last week's article about uninformed citizens was.

    4. Strong, decisive national action is therefore imperative.

      I could not agree with this statement more. I think that if the US had some kind of national healthcare program, the coronavirus would be much more under control.

    1. His speech is fire, his breath is death

      What does the writer mean by the fire in his speech?

    1. RRID:ZDB-ALT-101018-2

      DOI: 10.1016/j.celrep.2020.03.024

      Resource: (ZFIN Cat# ZDB-ALT-101018-2,RRID:ZFIN_ZDB-ALT-101018-2)

      Curator: @ethanbadger

      SciCrunch record: RRID:ZFIN_ZDB-ALT-101018-2

    2. RRID:ZDB-ALT-050503-2

      DOI: 10.1016/j.celrep.2020.03.024

      Resource: (ZFIN Cat# ZDB-ALT-050503-2,RRID:ZFIN_ZDB-ALT-050503-2)

      Curator: @ethanbadger

      SciCrunch record: RRID:ZFIN_ZDB-ALT-050503-2

    1. RRID:ZFIN_ZDB-GENO-100820-2

      DOI: 10.7554/eLife.44431

      Resource: (ZFIN Cat# ZDB-GENO-100820-2,RRID:ZFIN_ZDB-GENO-100820-2)

      Curator: @evieth

      SciCrunch record: RRID:ZFIN_ZDB-GENO-100820-2

    2. RRID:ZFIN_ZDB-ALT-060322-2

      DOI: 10.7554/eLife.44431

      Resource: (ZFIN Cat# ZDB-ALT-060322-2,RRID:ZFIN_ZDB-ALT-060322-2)

      Curator: @evieth

      SciCrunch record: RRID:ZFIN_ZDB-ALT-060322-2

    1. ZFIN: ZDB-ALT-120117-2

      DOI: 10.1016/j.cub.2020.04.020

      Resource: (ZFIN Cat# ZDB-ALT-120117-2,RRID:ZFIN_ZDB-ALT-120117-2)

      Curator: @ethanbadger

      SciCrunch record: RRID:ZFIN_ZDB-ALT-120117-2

    2. ZFIN: ZDB-ALT-070118-2

      DOI: 10.1016/j.cub.2020.04.020

      Resource: (ZFIN Cat# ZDB-ALT-070118-2,RRID:ZFIN_ZDB-ALT-070118-2)

      Curator: @ethanbadger

      SciCrunch record: RRID:ZFIN_ZDB-ALT-070118-2

    1. ZFIN ID: ZDB-ALT-130724-2

      DOI: 10.7554/eLife.53995

      Resource: (ZFIN Cat# ZDB-ALT-130724-2,RRID:ZFIN_ZDB-ALT-130724-2)

      Curator: @evieth

      SciCrunch record: RRID:ZFIN_ZDB-ALT-130724-2

      Curator comments: ZFIN Cat# ZDB-ALT-130724-2

  6. Aug 2020
    1. Zhu, F.-C., Guan, X.-H., Li, Y.-H., Huang, J.-Y., Jiang, T., Hou, L.-H., Li, J.-X., Yang, B.-F., Wang, L., Wang, W.-J., Wu, S.-P., Wang, Z., Wu, X.-H., Xu, J.-J., Zhang, Z., Jia, S.-Y., Wang, B.-S., Hu, Y., Liu, J.-J., … Chen, W. (2020). Immunogenicity and safety of a recombinant adenovirus type-5-vectored COVID-19 vaccine in healthy adults aged 18 years or older: A randomised, double-blind, placebo-controlled, phase 2 trial. The Lancet, 0(0). https://doi.org/10.1016/S0140-6736(20)31605-6

    1. Amanat, F., White, K. M., Miorin, L., Strohmeier, S., McMahon, M., Meade, P., Liu, W.-C., Albrecht, R. A., Simon, V., Martinez‐Sobrido, L., Moran, T., García‐Sastre, A., & Krammer, F. (2020). An In Vitro Microneutralization Assay for SARS-CoV-2 Serology and Drug Screening. Current Protocols in Microbiology, 58(1), e108. https://doi.org/10.1002/cpmc.108

  7. Jul 2020
  8. Jun 2020
    1. Oh, and of course, there’s the fact that “sit on the ground” is mapped to the same control as “strangle the nearest person”, which can apparently lead to some pretty robust brainstorming sessions.

      I love this!

  9. May 2020
    1. Grifoni, A., Weiskopf, D., Ramirez, S. I., Mateus, J., Dan, J. M., Moderbacher, C. R., Rawlings, S. A., Sutherland, A., Premkumar, L., Jadi, R. S., Marrama, D., de Silva, A. M., Frazier, A., Carlin, A., Greenbaum, J. A., Peters, B., Krammer, F., Smith, D. M., Crotty, S., & Sette, A. (2020). Targets of T cell responses to SARS-CoV-2 coronavirus in humans with COVID-19 disease and unexposed individuals. Cell, S0092867420306103. https://doi.org/10.1016/j.cell.2020.05.015

    1. A GOOD to VERY GOOD converter in many respects, but unfortunately the resulting DOCX file CANNOT be paginated the way I want (no matter how hard I tried, I could NOT format the page to A4 with the margins set to „normal” = 2.5 cm)

    1. A GOOD to VERY GOOD converter in many respects, but unfortunately the resulting DOCX file CANNOT be paginated the way I want (no matter how hard I tried, I could NOT format the page to A4 with the margins set to „normal” = 2.5 cm)

    1. Regular Expression Functions. There are three regular-expression functions that operate on strings: matches() tests if a regular expression matches a string; replace() uses regular expressions to replace portions of a string; tokenize() returns a sequence of strings formed by breaking a supplied input string at any separator that matches a given regular expression.

      Test question: how many regular-expression functions are there in XSLT? (A short sketch of all three follows below.)
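
      A minimal XSLT 2.0 sketch of the three functions inside a stylesheet; the input strings and the template name are illustrative assumptions, not taken from the quoted text:

        <xsl:template name="regex-demo">
          <!-- matches(): true() because the string contains digits -->
          <xsl:if test="matches('line 42', '\d+')">has a number</xsl:if>
          <!-- replace(): rewrites the separators, yielding 2003/11/20 -->
          <xsl:value-of select="replace('2003-11-20', '-', '/')"/>
          <!-- tokenize(): splits into the sequence ('red', 'green', 'blue') -->
          <xsl:for-each select="tokenize('red, green, blue', ',\s*')">
            <item><xsl:value-of select="."/></item>
          </xsl:for-each>
        </xsl:template>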

    2. position()

      The position function returns a number equal to the context position from the expression evaluation context.
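
      A small illustrative fragment; the items/item element names are assumptions made up for the example:

        <!-- writes "1: ...", "2: ...", ... using the context position -->
        <xsl:for-each select="items/item">
          <xsl:value-of select="position()"/>
          <xsl:text>: </xsl:text>
          <xsl:value-of select="."/>
        </xsl:for-each>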

    3. What’s the difference between xsl:value-of, xsl:copy-of, and xsl:sequence? xsl:value-of always creates a text node. xsl:copy-of always creates a copy. xsl:sequence returns the nodes selected, subject possibly to atomization. Sequences can be extended with xsl:sequence.

      What’s the difference between xsl:value-of, xsl:copy-of, and xsl:sequence?
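
      A rough sketch of the difference, assuming a hypothetical /doc/chapter element:

        <xsl:variable name="ch" select="/doc/chapter[1]"/>
        <!-- value-of: creates a new text node holding the chapter's string value -->
        <xsl:value-of select="$ch"/>
        <!-- copy-of: creates a deep copy of the chapter element and its contents -->
        <xsl:copy-of select="$ch"/>
        <!-- sequence: returns the existing node itself (no copy), possibly atomized
             depending on the declared type of the receiving construct -->
        <xsl:sequence select="$ch"/>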

    4. <xsl:variable name="date" select="xs:date('2003-11-20')"/>

      How do you declare a date variable in XSLT 2?
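
      The quoted line only works if the xs prefix is bound to the XML Schema namespace; a minimal sketch of the surrounding stylesheet:

        <xsl:stylesheet version="2.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <!-- xs:date() builds a typed value, so component extraction
               such as year-from-date() becomes available -->
          <xsl:variable name="date" select="xs:date('2003-11-20')"/>
          <xsl:variable name="year" select="year-from-date($date)"/>
        </xsl:stylesheet>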

    5. Types. XSLT 2.0 allows you to declare: the type of variables; the return type of templates; the type of sequences (constructed with xsl:sequence); the return type of (user-declared) functions; and both the type and required type of parameters.

      What are the types that one can declare in XSLT 2?
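
      A short sketch of the as attribute on each construct; the my prefix and the chapter element are assumptions for illustration (my would need to be bound to a user-defined namespace):

        <!-- typed variable -->
        <xsl:variable name="count" select="42" as="xs:integer"/>

        <!-- typed template parameter with a default value -->
        <xsl:template match="chapter">
          <xsl:param name="depth" as="xs:integer" select="1"/>
          <xsl:value-of select="$depth"/>
        </xsl:template>

        <!-- user-declared function with a declared return type -->
        <xsl:function name="my:double" as="xs:integer">
          <xsl:param name="n" as="xs:integer"/>
          <xsl:sequence select="$n * 2"/>
        </xsl:function>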

  10. Apr 2020
    1. The excluded person is the one who has no success seducing in real life and falls into the trap of dating sites, thinking that at last he will be able to score. But if you cannot seduce in real life, you cannot seduce on dating sites either. There are a thousand and one ways to define the criteria of seduction: beauty, humor, intelligence, a cool job, money… But the excluded person has none of that. And he gets rejected even more than in real life, because in real life he might hit on a woman once a week, whereas on a site you can talk to 200 people and get 200 rejections! Exclusion feels multiplied tenfold by all the rejection. And you can spot them on a site by the bitterness that shows through either in their profiles or in what they say: "Girls, stop snubbing me, come talk to me." Or people who start the conversation already defeated. They know they are excluded, and they keep that situation going every day.

      The author's subjectivity matters here: seduction is a very broad lexical field in which everyone is free to judge in their own way what they perceive and feel, just like the feeling of exclusion, which has singular intrinsic values of its own.

    1. The brain's "multitasking" mode is thus almost constant. J.-P. Lachaux evokes the dilemma of the "gold prospector" who is working his small vein while being tempted to go and see whether there is something better further on. This dilemma between exploitation (continuing the work at hand) and exploration (going to look elsewhere) is our daily lot.

      Here we have the second argument: it is entirely normal to be distracted; it is a trait common to everyone. We must nevertheless be careful not to keep drifting from one distraction to the next.

    1. This plunge is carried out through a methodological pirouette that led me to take an interest in the discourse journalists hold about their own practices, based on a corpus made up of journalism manuals and memoirs published by journalists.

      The accuracy of the information used to build this account, and the impartiality of those who provided it, should be verified.

  11. Mar 2020
    1. Spend time in nature. Many studies show that nature has a calming effect on the nervous system, strengthens the immune system, lowers blood pressure and even boosts the visual capacity that is strained by too much time staring at a screen.

      L'argument de passer du temps dans la nature est mis en lien avec des études sur l'effet de la nature sur le corps. Or quand on clique sur le lien de l'étude, on se retrouve sur un site populaire avec des faits scientifiques. On se questionne dans un premier temps sur la véracité de "ces études" avant de réfléchir à celle de l'argument dans ce raisonnement épistémique abduction.

    2. Studies conducted in the United States and Europe report that 38% of the global population suffers from Internet addiction disorder (IAD), also called cyberaddiction. One of the causes put forward to explain this addiction is a physical alteration of the brain at the structural level. Internet use affects parts of the prefrontal cortex associated with remembering details and with the ability to plan and prioritize tasks, thereby making us unable to set priorities in our lives. As a result, spending time online becomes the priority, and the tasks of daily life come second.

      Her argument about the physical, structural alteration of the brain is relevant. She persuades the reader through the chaining of ideas, making her argumentation a rhetorical reasoning of the logos type. By putting forward "studies" that are hard to track down when you click on the link, she poses a "hypothesis" that is a generality (prioritizing time spent online to the detriment of everyday tasks) and that reinforces her point of view. Yet she passes over a great deal of information about addiction and its mechanisms in general, and about cyberaddiction in particular, such as the vulnerability factors (individual, environmental, and those directly linked to the Internet). Her sources are weak and are used solely to serve her point of view. She tries to generalize addiction, which is a pathology, so that readers can identify with it and buy into her argument. The lack of precision and of scientific backing on this point weakens her article.

    3. Does this scenario sound familiar? According to a study by Microsoft, the human attention span has dropped from 12 to 8 seconds in ten years. The cause? The omnipresence of screens. A study from the University of California, Irvine shows that working while constantly interrupted raises stress levels, because we tend to work faster to make up for lost time. Today, one person in four checks their smartphone every 30 minutes, and 25% of Millennials consult theirs more than a hundred times a day.

      L'auteure cite des études scientifiques avec les liens de celle-ci pour argumenter sa première partie. Elle enchaîne et présente les différents études sans réellement les mettre en lien entre elles. On peut considérer qu'il s'agit d'un raisonnement épistémique de type inductif où les faits vérifiés (de l'étude) permettent d'élaborer une conclusion générale émise juste après. On peut néanmoins mettre en avant le raccourci que fait l'auteure en citant ces différentes études une par une pour finir par poser sa conclusion, regroupant les conclusions des différentes études sans lien apparent entre elles.