Author Response
Reviewer #2 (Public Review):
Here, a simple model of cerebellar computation is used to study the dependence of task performance on input type: it is demonstrated that task performance and optimal representations are highly dependent on task and stimulus type. This challenges many standard models, which use simple random stimuli and conclude that the granular layer is required to provide a sparse representation. This is a useful contribution to our understanding of cerebellar circuits, though, in common with many models of this type, the neural dynamics and circuit architecture are not very specific to the cerebellum: the model includes the feedforward structure and the high dimension of the granule layer, but little else. This paper has the virtue of including tasks that are more realistic, but by the paper’s own admission, the same model can be applied to the electrosensory lateral line lobe, and it could, though it is not mentioned in the paper, be applied to the dentate gyrus and large pyramidal cells of CA3. The discussion does not include specific elements related to, for example, the dynamics of the Purkinje cells or the role of Golgi cells, and, in a way, the demonstration that the model can encompass different tasks and stimulus types is an indication of how abstract the model is. Nonetheless, it is useful and interesting to see a generalization of what has become a standard paradigm for discussing cerebellar function.
We appreciate the Reviewer’s positive comments. Regarding the simplifications of our model, we agree that we have taken a modeling approach that abstracts away certain details to permit comparisons across systems. We now include an in-depth discussion of our simplifying assumptions (Assumptions & Extensions section in the Discussion) and have further noted the possibility that other biophysical mechanisms we have not accounted for may also underlie differences across systems.
Our results predict that qualitative differences in the coding levels of cerebellum-like systems, across brain regions or across species, reflect an optimization to distinct tasks (Figure 7). However, it is also possible that differences in coding level arise from other physiological differences between systems.
Reviewer #3 (Public Review):
1) The paper by Xie et al is a modelling study of the mossy fiber-to-granule cell-to-Purkinje cell network, reporting that the optimal type of representation in the cerebellar granule cell layer depends on the type of task. The paper stresses that the findings indicate a higher overall bias towards dense representations than stated in the literature, but it appears the authors have missed parts of the literature that already reported on this. While the modelling and analysis appear mathematically solid, the model lacks many known constraints of the cerebellar circuitry, which makes the applicability of the findings to the biological counterpart somewhat limited.
We thank the Reviewer for suggesting additional references to include in our manuscript, and for encouraging us to extend our model toward greater biological plausibility and more critically discuss simplifying assumptions we have made. We respond to both the comment about previous literature and about applicability to cerebellar circuitry in detail below.
2) I have some concerns with the novelty of the main conclusion, here from the abstract: ‘Here, we generalize theories of cerebellar learning to determine the optimal granule cell representation for tasks beyond random stimulus discrimination, including continuous input-output transformations as required for smooth motor control. We show that for such tasks, the optimal granule cell representation is substantially denser than predicted by classic theories.’ Stated like this, this has in principle already been shown, for example: Spanne and Jörntell (2013) Processing of multi-dimensional sensorimotor information in the spinal and cerebellar neuronal circuitry: a new hypothesis. PLoS Comput Biol. 9(3):e1002979. Indeed, even the 2 DoF arm movement control that is used in the present paper as an application was used in this previous paper, with similar conclusions with respect to the advantage of continuous input-output transformations and dense coding. Thus, already from the beginning of this paper, the novelty aspect of this paper is questionable. Even the conclusion in the last paragraph of the Introduction: ‘We show that, when learning input-output mappings for motor control tasks, the optimal granule cell representation is much denser than predicted by previous analyses.’ was in principle already shown by this previous paper.
We thank the Reviewer for drawing our attention to Spanne and Jörntell (2013). Our study shares certain similarities with this work, including the consideration of tasks with smooth input-output mappings, such as learning the dynamics of a two-joint arm. However, our study differs substantially, most notably in that we focus on parametrically varying the degree of sparsity in the granule cell layer to determine the circumstances under which dense versus sparse coding is optimal. To the best of our ability, we can find no result in Spanne and Jörntell (2013) that indicates the performance of a network as a function of average coding level. Instead, Spanne and Jörntell (2013) propose that inhibition from Golgi cells produces heterogeneity in coding level which can improve performance, which is an interesting but complementary finding to ours. We therefore do not believe that the quantitative computations of optimal coding level that we present are redundant with the results of this previous study. We also note that a key contribution of our study is a mathematical analysis of the inductive bias of networks with different coding levels, which supports our conclusions.
We have included a discussion of Spanne and Jörntell (2013) and (2015) in the revised version of our manuscript:
"Other studies have considered tasks with smooth input-output mappings and low-dimensional inputs, finding that heterogeneous Golgi cell inhibition can improve performance by diversifying individual granule cell thresholds (Spanne and Jörntell, 2013). Extending our model to include heterogeneous thresholds is an interesting direction for future work. Another proposal states that dense coding may improve generalization (Spanne and Jörntell, 2015). Our theory reveals that whether or not dense coding is beneficial depends on the task."
3) However, the present paper does add several more specific investigations/characterizations that were not previously explored. Many of the main figures report interesting new model results. However, the model is implemented in a highly generic fashion. Consequently, the model relates better to general neural network theory than to specific interpretations of the function of the cerebellar neuronal circuitry. One good example is the findings reported in Figure 2. These represent an interesting extension to the main conclusion, but they are also partly arbitrary, as the type of mossy fiber input described in the random categorization task has not been observed in the mammalian cerebellum under behavior in vivo, whereas in contrast, the type of input for the motor control task does resemble mossy fiber input recorded under behavior (van Kan et al 1993).
We agree that the tasks we consider in Figure 2 are simplified compared to those that we consider elsewhere in the paper. The choice of random mossy fiber input was made to provide a comparison to previous modeling studies that also use random input as a benchmark (Marr 1969, Albus 1971, Brunel 2004, Babadi and Sompolinsky 2014, Billings 2014, Litwin-Kumar et al., 2017). This baseline permits us to specifically evaluate the effects of low-dimensional inputs (Figure 2) and richer input-output mappings (Figure 2, Figure 7). We agree with the Reviewer that the random and uncorrelated mossy fiber activity that has been extensively used in previous studies is almost certainly an unrealistic idealization of in vivo neural activity—this is a motivating factor for our study, which relaxes this assumption and examines the consequences. To provide additional context, we have updated the following paragraph in the main text Results section:
"A typical assumption in computational theories of the cerebellar cortex is that inputs are randomly distributed in a high-dimensional space (Marr, 1969; Albus, 1971; Brunel et al., 2004; Babadi and Sompolinsky, 2014; Billings et al., 2014; Litwin-Kumar et al., 2017). While this may be a reasonable simplification in some cases, many tasks, including cerebellum-dependent tasks, are likely best described as being encoded by a low-dimensional set of variables. For example, the cerebellum is often hypothesized to learn a forward model for motor control (Wolpert et al., 1998), which uses sensory input and motor efference to predict an effector’s future state. Mossy fiber activity recorded in monkeys correlates with position and velocity during natural movement (van Kan et al., 1993). Sources of motor efference copies include motor cortex, whose population activity lies on a low-dimensional manifold (Wagner et al., 2019; Huang et al., 2013; Churchland et al., 2010; Yu et al., 2009). We begin by modeling the low dimensionality of inputs and later consider more specific tasks."
4) The overall conclusion states: ‘Our results....suggest that optimal cerebellar representations are task-dependent.’ This is not a particularly strong or specific conclusion. One could interpret this statement as simply saying: ‘if I construct an arbitrary neural network, with arbitrary intrinsic properties in neurons and synapses, I can get outputs that depend on the intensity of the input that I provide to that network.’ Further, the last sentence of the Introduction states: ‘More broadly, we show that the sparsity of a neural code has a task-dependent influence on learning...’ This is very general and unspecific, and would likely not come as a surprise to anyone interested in the analysis of neural networks. It doesn’t pinpoint any specific biological problem but just says that if I change the density of the input to a [generic] network, then the learning will be impacted in one way or another.
We agree with the Reviewer that our conclusions are quite general, and we have removed the final sentence as we agree it was unspecific. However, we disagree with the Reviewer’s paraphrasing of our results.
First, we do not select arbitrary intrinsic properties of neurons and synapses. Rather, we construct a simplified model with a key quantity, the neuronal threshold, that we vary parametrically in order to assess the effect of the resulting changes in the representation on performance. Second, we do not vary the intensity/density of inputs provided to the network – this is fixed throughout our study for all key comparisons we perform. Instead, we vary the density (coding level) of the expansion layer representation and quantify its effect on inductive bias and generalization. Finally, our study’s key contribution is an explanation of the heterogeneity in average coding level observed across behaviors and cerebellum-like systems. We go beyond the empirical statement that there is a dependence of performance on the parameter that we vary by developing an analytical theory. Our theory describes the performance of the class of networks that we study and the properties of learning tasks that determine the optimal expansion layer representation.
To clarify our main contributions, we have updated the final paragraph of the Introduction. We have also removed the sentence that the Reviewer objects to, as it was less specific than the other points we make here.
"We propose that these differences can be explained by the capacity of representations with different levels of sparsity to support learning of different tasks. We show that the optimal level of sparsity depends on the structure of the input-output relationship of a task. When learning input-output mappings for motor control tasks, the optimal granule cell representation is much denser than predicted by previous analyses. To explain this result, we develop an analytic theory that predicts the performance of cerebellum-like circuits for arbitrary learning tasks. The theory describes how properties of cerebellar architecture and activity control these networks’ inductive bias: the tendency of a network toward learning particular types of input-output mappings (Sollich, 1998; Jacot et al., 2018; Bordelon et al., 2020; Canatar et al., 2021; Simon et al., 2021). The theory shows that inductive bias, rather than the dimension of the representation alone, is necessary to explain learning performance across tasks. It also suggests that cerebellar regions specialized for different functions may adjust the sparsity of their granule cell representations depending on the task."
5) The interpretation of the distribution of the mossy fiber inputs to the granule cells, which would have a crucial impact on the results of a study like this, is likely incorrect. First, unlike the papers that the authors cite, there are many studies indicating that there is a topographic organization in the mossy fiber termination, such that mossy fibers from the same inputs, representing similar types of information, are regionally co-localized in the granule cell layer. Hence, there is no support for the model assumption that there is a predominantly random termination of mossy fibers of different origins. This risks invalidating the comparisons that the authors are making, such as those in Figure 3. This is a list of example papers; there are more: van Kan, Gibson and Houk (1993) Movement-related inputs to intermediate cerebellum of the monkey. Journal of Neurophysiology. Garwicz et al (1998) Cutaneous receptive fields and topography of mossy fibres and climbing fibres projecting to cat cerebellar C3 zone. The Journal of Physiology. Brown and Bower (2001) Congruence of mossy fiber and climbing fiber tactile projections in the lateral hemispheres of the rat cerebellum. The Journal of Comparative Neurology. Na, Sugihara, Shinoda (2019) The entire trajectories of single pontocerebellar axons and their lobular and longitudinal terminal distribution patterns in multiple aldolase C-positive compartments of the rat cerebellar cortex. The Journal of Comparative Neurology.
6) The nature of the mossy fiber-granule cell re-coding is also reviewed here: Gilbert and Miall (2022) How and Why the Cerebellum Recodes Input Signals: An Alternative to Machine Learning. The Neuroscientist. Further, considering the re-coding idea, the following paper shows that detailed information, as it is provided by mossy fibers, is transmitted through the granule cells without any evidence of re-coding: Jörntell and Ekerot (2006) Journal of Neuroscience; and this paper shows that these granule inputs are powerfully transmitted to the molecular layer even in a decerebrated animal (i.e. where only the ascending sensory pathways remain): Jörntell and Ekerot 2002, Neuron.
We agree that there is strong evidence for a topographic organization in mossy fiber to granule cell connectivity at the microzonal level. We thank the Reviewer for pointing us to specific examples. We acknowledge that our simplified model does not capture the structure of connectivity observed in these studies.
However, the focus of our model is on cerebellar neurons presynaptic to a single Purkinje cell. Random or disordered distribution of inputs at this local scale is compatible with topographic organization at the microzonal scale. Furthermore, while there is evidence of structured connections at the local scale, models with random connectivity are able to reproduce the dimensionality of granule cell activity within a small margin of error (Nguyen et al., 2022). Finally, our finding that dense codes are optimal for learning slowly varying tasks is consistent with evidence for the lack of re-coding – for such tasks, re-coding may be absent because it is not required.
We have dedicated a section to this issue in the Assumptions and Extensions portion of our Discussion:
"Another key assumption concerning the granule cells is that they sample mossy fiber inputs randomly, as is typically assumed in Marr-Albus models (Marr, 1969; Albus, 1971; Litwin-Kumar et al., 2017; Cayco-Gajic et al., 2017). Other studies instead argue that granule cells sample from mossy fibers with highly similar receptive fields (Garwicz et al., 1998; Brown and Bower, 2001; Jörntell and Ekerot, 2006) defined by the tuning of mossy fiber and climbing fiber inputs to cerebellar microzones (Apps et al., 2018). This has led to an alternative hypothesis that granule cells serve to relay similarly tuned mossy fiber inputs and enhance their signal-to-noise ratio (Jörntell and Ekerot, 2006; Gilbert and Miall, 2022) rather than to re-encode inputs. Another hypothesis is that granule cells enable Purkinje cells to learn piece-wise linear approximations of nonlinear functions (Spanne and Jörntell, 2013). However, several recent studies support the existence of heterogeneous connectivity and selectivity of granule cells to multiple distinct inputs at the local scale (Huang et al., 2013; Ishikawa et al., 2015). Furthermore, the deviation of the predicted dimension in models constrained by electron-microscopy data as compared to randomly wired models is modest (Nguyen et al., 2022). Thus, topographically organized connectivity at the macroscopic scale may coexist with disordered connectivity at the local scale, allowing granule cells presynaptic to an individual Purkinje cell to sample heterogeneous combinations of the subset of sensorimotor signals relevant to the tasks that Purkinje cell participates in. Finally, we note that the optimality of dense codes for learning slowly varying tasks in our theory suggests that observations of a lack of mixing (Jörntell and Ekerot, 2002) for such tasks are compatible with Marr-Albus models, as in this case nonlinear mixing is not required."
7) I could not find any description of the neuron model used in this paper, so I assume that the neurons are just modelled as linear summators with a threshold (in fact, Figure 5 mentions inhibition, but this appears to be just one big lump inhibition, which basically is an incorrect implementation). In reality, granule cells of course do have specific properties that can impact the input-output transformation, PARTICULARLY with respect to the comparison of sparse versus dense coding, because the low-pass filtering of input that occurs in granule cells (and other neurons), as well as their spike firing stochasticity (Saarinen et al (2008). Stochastic differential equation model for cerebellar granule cell excitability. PLoS Comput. Biol. 4:e1000004), will profoundly complicate these comparisons and make them less straightforward than what is portrayed in this paper. There are also several other factors that would be present in the biological setting but are lacking here, which makes it doubtful how much information about biological performance this modelling study provides: What are the types of activity patterns of the inputs? What are the learning rules? What is the topography? What is the impact of Purkinje cell outputs downstream? The Purkinje cell output does not have any direct action; it acts on the deep cerebellar nuclear neurons, which in turn act on a complex sensorimotor circuitry to exert their effect, hence predictive coding could only become interpretable after the PC output has been added to the activity in those circuits. Where is the differentiated Golgi cell inhibition?
Thank you for these critiques. We have made numerous edits to improve the presentation of the details of our model in the main text of the manuscript. Indeed, granule cells in the main text are modeled as linear sums of mossy fiber inputs with a threshold-linear activation function. A more detailed description of the model for granule cells can now be found in Equation 1 in the Results section:
"The activity of neurons in the expansion layer is given by: h = φ(J^eff x − θ), (1) where φ is a rectified linear activation function φ(u) = max(u, 0) applied element-wise. Our results also hold for other threshold-polynomial activation functions. The scalar threshold θ is shared across neurons and controls the coding level, which we denote by f, defined as the average fraction of neurons in the expansion layer that are active."
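To make this concrete, here is a minimal numerical sketch of Equation 1; the network sizes, weight statistics, and threshold value below are illustrative choices, not the parameters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, P = 50, 1000, 200   # mossy fibers, expansion-layer neurons, input patterns
theta = 1.0               # shared scalar threshold (illustrative value)

# Random effective weights and random input patterns.
J_eff = rng.standard_normal((M, N)) / np.sqrt(N)
X = rng.standard_normal((N, P))

# Equation 1: h = phi(J_eff x - theta), with phi(u) = max(u, 0) element-wise.
H = np.maximum(J_eff @ X - theta, 0.0)

# Coding level f: average fraction of expansion-layer neurons active per pattern.
f = np.mean(H > 0)
```

Raising θ sparsifies the code (smaller f) and lowering it densifies the code, which is the single parameter varied in the comparisons described above.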
Most of our analyses use the firing rate model we describe above, but several Supplemental Figures show extensions to this model. As we mention in the Discussion, our results do not depend on the specific choice of nonlinearity (Figure 2-figure supplement 2). We have also considered the possibility that the stochastic nature of granule cell spikes could impact our measures of coding level. In Figure 7-figure supplement 1, we test the robustness of our main conclusion using a spiking model in which granule cell spikes are generated with Poisson statistics. When measuring coding level in a population of spiking neurons, a key question is over what time window the Purkinje cell integrates spikes. For several choices of integration time window, we show that dense coding remains optimal for learning smooth tasks. However, we agree with the Reviewer that there are other biological details our model does not address. For example, our spiking model does not capture some of the properties the Saarinen et al. (2008) model captures, including random sub-threshold oscillations and clusters of spikes. Modeling biophysical phenomena at this scale is beyond the scope of our study. We have added this reference to the relevant section of the Discussion:
"We also note that coding level is most easily defined when neurons are modeled as rate, rather than spiking units. To investigate the consistency of our results under a spiking code, we implemented a model in which granule cell spiking exhibits Poisson variability and quantify coding level as the fraction of neurons that have nonzero spike counts (Figure 7-figure supplement 1; Figure 7C). In general, increased spike count leads to improved performance as noise associated with spiking variability is reduced. Granule cells have been shown to exhibit reliable burst responses to mossy fiber stimulation (Chadderton et al., 2004), motivating models using deterministic responses or sub-Poisson spiking variability. However, further work is needed to quantitatively compare variability in model and experiment and to account for more complex biophysical properties of granule cells (Saarinen et al., 2008)."
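The measurement issue described above can be illustrated with a small sketch that computes coding level from Poisson spike counts for two hypothetical integration windows; the rate distribution and window lengths are made-up values, not fits to data:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 1000                          # granule-like neurons
rates = rng.gamma(2.0, 5.0, M)    # hypothetical firing rates in Hz

def spiking_coding_level(rates, window, rng):
    """Fraction of neurons with at least one Poisson spike in the window (seconds)."""
    counts = rng.poisson(rates * window)
    return np.mean(counts > 0)

# A longer readout integration window counts more neurons as "active",
# so the measured coding level depends on the window choice.
f_10ms = spiking_coding_level(rates, 0.010, rng)
f_100ms = spiking_coding_level(rates, 0.100, rng)
```

This is why the robustness check described above must be repeated for several integration windows rather than a single one.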
A second concern the Reviewer raises is our implementation of Golgi cell inhibition as a homogeneous rather than heterogeneous input onto granule cells. In simplified models, adding heterogeneous inhibition does not dramatically change the qualitative properties of the expansion layer representation, in particular the dimensionality of the representation (Billings et al., 2014, Cayco-Gajic et al., 2017, Litwin-Kumar et al., 2017). We have added a section about inhibition to our Discussion:
"We also have not explicitly modeled inhibitory input provided by Golgi cells, instead assuming such input can be modeled as a change in effective threshold, as in previous studies (Billings et al., 2014; Cayco-Gajic et al., 2017; Litwin-Kumar et al., 2017). This is appropriate when considering the dimension of the granule cell representation (Litwin-Kumar et al., 2017), but more work is needed to extend our model to the case of heterogeneous inhibition."
Regarding the mossy fiber inputs, as we state in response to paragraph 3, we agree with the Reviewer that the random and uncorrelated mossy fiber activity that has been used in previous studies is an unrealistic idealization of in vivo neural activity. One of the motivations for our model was to relax this assumption and examine the consequences: we introduce correlations in the mossy fiber activity by projecting low-dimensional patterns into the mossy fiber layer (Figure 1B):
"A typical assumption in computational theories of the cerebellar cortex is that inputs are randomly distributed in a high-dimensional space (Marr, 1969; Albus, 1971; Brunel et al., 2004; Babadi and Sompolinsky, 2014; Billings et al., 2014; Litwin-Kumar et al., 2017). While this may be a reasonable simplification in some cases, many tasks, including cerebellum-dependent tasks, are likely best described as being encoded by a low-dimensional set of variables. For example, the cerebellum is often hypothesized to learn a forward model for motor control (Wolpert et al., 1998), which uses sensory input and motor efference to predict an effector’s future state. Mossy fiber activity recorded in monkeys correlates with position and velocity during natural movement (van Kan et al., 1993). Sources of motor efference copies include motor cortex, whose population activity lies on a low-dimensional manifold (Wagner et al., 2019; Huang et al., 2013; Churchland et al., 2010; Yu et al., 2009). We begin by modeling the low dimensionality of inputs and later consider more specific tasks.
We therefore assume that the inputs to our model lie on a D-dimensional subspace embedded in the N-dimensional input space, where D is typically much smaller than N (Figure 1B). We refer to this subspace as the “task subspace” (Figure 1C)."
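One simple way to realize this construction is a fixed random projection of D task variables into the N-dimensional mossy fiber layer; the sizes below are made up for illustration, and the paper's embedding may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

D, N, P = 3, 100, 500     # task dimension, mossy fibers, patterns (illustrative)

# Task variables live in a D-dimensional space...
x_task = rng.standard_normal((D, P))

# ...and are embedded by a fixed matrix A, so mossy fiber activity spans only
# a D-dimensional "task subspace" of the N-dimensional input space.
A = rng.standard_normal((N, D))
X = A @ x_task

rank = np.linalg.matrix_rank(X)   # at most D, despite ambient dimension N
```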
The Reviewer also mentions the learning rule at granule cell to Purkinje cell synapses. We agree that considering online, climbing-fiber-dependent learning is an important generalization. We therefore added a new supplemental figure investigating whether we would still see a difference in optimal coding levels across tasks if online learning were used instead of the least squares solution (Figure 7-figure supplement 2). Indeed, we observed a similar task dependence as we saw in Figure 2F. We have added a new paragraph in the Discussion under Assumptions and Extensions describing our rationale and approach in detail:
"For the Purkinje cells, our model assumes that their responses to granule cell input can be modeled as an optimal linear readout. Our model therefore provides an upper bound to linear readout performance, a standard benchmark for the quality of a neural representation that does not require assumptions on the nature of climbing fiber-mediated plasticity, which is still debated. Electrophysiological studies have argued in favor of a linear approximation (Brunel et al., 2004). To improve the biological applicability of our model, we implemented an online climbing fiber-mediated learning rule and found that optimal coding levels are still task-dependent (Figure 7-figure supplement 2). We also note that although we model several timing-dependent tasks (Figure 7), our learning rule does not exploit temporal information, and we assume that temporal dynamics of granule cell responses are largely inherited from mossy fibers. Integrating temporal information into our model is an interesting direction for future investigation."
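The comparison described in this paragraph can be sketched in a toy form: a least-squares readout versus a simple error-driven (delta-rule) update, the latter used here only as a crude stand-in for climbing-fiber-mediated plasticity. All sizes, the learning rate, and the epoch count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

M, P = 200, 100                      # expansion neurons, training patterns
H = rng.standard_normal((P, M))      # expansion-layer activity (illustrative)
y = rng.standard_normal(P)           # target readout (Purkinje-like) output

# Optimal linear readout: least-squares weights (the upper bound used above).
w_ls, *_ = np.linalg.lstsq(H, y, rcond=None)

# Online delta rule: after each pattern, nudge weights against the error.
w = np.zeros(M)
eta = 0.002
for _ in range(500):                 # training epochs
    for mu in rng.permutation(P):
        err = y[mu] - H[mu] @ w
        w += eta * err * H[mu]

# The online rule approaches the least-squares fit on the training patterns.
mse_ls = np.mean((H @ w_ls - y) ** 2)
mse_online = np.mean((H @ w - y) ** 2)
```

In this underdetermined setting (more neurons than patterns) both readouts can fit the training targets, so the least-squares solution serves as the benchmark the online rule converges toward.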
Finally, regarding the function of the Purkinje cell, our model defines a learning task as a mapping from inputs to target activity in the Purkinje cell and is thus agnostic to the cell’s downstream effects. We clarify this point when introducing the definition of a learning task:
"In our model, a learning task is defined by a mapping from task variables x to an output f(x), representing a target change in activity of a readout neuron, for example a Purkinje cell. The limited scope of this definition implies our results should not strongly depend on the influence of the readout neuron on downstream circuits."
8) The problem of these, in my impression, generic, arbitrary settings of the neurons and the network in the model becomes obvious here: ‘In contrast to the dense activity in cerebellar granule cells, odor responses in Kenyon cells, the analogs of granule cells in the Drosophila mushroom body, are sparse...’ How can this system be interpreted as an analogy to granule cells in the mammalian cerebellum when the model does not address the specifics lined up above? I.e. the ‘inductive bias’ that the authors speak of, defined as ‘the tendency of a network toward learning particular types of input-output mappings’, would be highly dependent on the specifics of the network model.
We agree with the Reviewer that our model makes several simplifying assumptions for mathematical tractability. However, we note that our study is not the first to draw analogies between cerebellum-like systems, including the mushroom body (Bell et al., 2008; Farris, 2011). All the systems we study feature a sparsely connected, expanded granule-like layer that sends parallel fiber axons onto densely connected downstream neurons known to exhibit powerful synaptic plasticity, thus motivating the key architectural assumptions of our model. We have constrained anatomical parameters of the model using data where available (Table 1). However, we agree with the Reviewer that when making comparisons across species there is always a possibility that differences are due to physiological mechanisms we have not fully understood or captured with a model. As such, we can only present a hypothesis for these differences. We have modified our Discussion section on this topic to clearly state this.
"Our results predict that qualitative differences in the coding levels of cerebellum-like systems, across brain regions or across species, reflect an optimization to distinct tasks (Figure 7). However, it is also possible that differences in coding level arise from other physiological differences between systems."
9) More detailed comments: Abstract: ‘In these models [Marr-Albus], granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli.’ Yes, I would agree with the first part, but I contest the second part of this statement. I think what is true for sparse coding is that the learning of random stimuli will be faster, as in a perceptron, but not necessarily better. As the sparsification essentially removes information, it could be argued that the quality of the learning is poorer. So from that perspective, it is not optimal. The authors need to specify from what perspective they consider sparse representations optimal for learning.
This is an important point that we would like to clarify. It is not the case that sparse coding simply speeds up learning. In our study and many related works (Barak et al. 2013; Babadi and Sompolinsky 2014; Litwin-Kumar et al. 2017), learning performance is measured based on the generalization ability of the network – the ability to predict correct labels for previously unseen inputs. As our study and previous studies show, sparse codes are optimal in the sense that they minimize generalization error, independent of any effect on learning speed. To communicate this more effectively, we have added the following sentence to the first paragraph of the Introduction:
"Sparsity affects both learning speed (Cayco-Gajic et al., 2017) and generalization, the ability to predict correct labels for previously unseen inputs (Barak et al., 2013; Babadi and Sompolinsky, 2014; Litwin-Kumar et al., 2017)."
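The generalization measure referred to here can be illustrated with a toy version of the setup: a thresholded random-features readout classifying held-out inputs labeled by a hypothetical linear "teacher" (random test labels would be unpredictable by construction). All sizes, the threshold, and the ridge penalty are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

N, M = 10, 800                       # input and expansion dimensions
P_train, P_test = 200, 500
theta = 1.0                          # expansion-layer threshold

J = rng.standard_normal((M, N)) / np.sqrt(N)

def expand(X):
    # Threshold-linear expansion-layer code, as in the model described above.
    return np.maximum(J @ X - theta, 0.0)

# Inputs with labels from a fixed "teacher" direction v, so held-out labels
# are in principle predictable from the training data.
v = rng.standard_normal(N)
X_tr = rng.standard_normal((N, P_train))
X_te = rng.standard_normal((N, P_test))
y_tr, y_te = np.sign(v @ X_tr), np.sign(v @ X_te)

H_tr, H_te = expand(X_tr), expand(X_te)

# Ridge-regularized linear readout trained on the expansion-layer code.
K = H_tr.T @ H_tr + 1.0 * np.eye(P_train)
alpha = np.linalg.solve(K, y_tr)
w = H_tr @ alpha

# Generalization error: fraction of incorrect labels on held-out inputs.
gen_err = np.mean(np.sign(H_te.T @ w) != y_te)
```

Here generalization error is the quantity minimized when a coding level is called "optimal", independent of how fast the weights were learned.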
10) Introduction: ‘Indeed, several recent studies have reported dense activity in cerebellar granule cells in response to sensory stimulation or during motor control tasks (Knogler et al., 2017; Wagner et al., 2017; Giovannucci et al., 2017; Badura and De Zeeuw, 2017; Wagner et al., 2019), at odds with classic theories (Marr, 1969; Albus, 1971).’ In fact, this was precisely the issue that was addressed already by Jörntell and Ekerot (2006) Journal of Neuroscience. The conclusion was that these actual recordings of granule cells in vivo provided essentially no support for the assumptions in the Marr-Albus theories.
In our reading, the main finding of Jörntell and Ekerot (2006) is that individual granule cells are activated by mossy fibers with overlapping receptive fields driven by a single type of somatosensory input. However, there is also evidence of nonlinear mixed selectivity in granule cells in support of the re-coding hypothesis (Huang et al., 2013; Ishikawa et al., 2015). Jörntell and Ekerot (2006) also suggest that the granule cell layer shares a similar topographic organization with mossy fibers, organized into microzones. The existence of topographic organization does not invalidate Marr-Albus theories. As we have suggested earlier, a local combinatorial expansion can coexist with a global topographic organization.
We have described these considerations in the Assumptions and Extensions portion of the Discussion:
"Another key assumption concerning the granule cells is that they sample mossy fiber inputs randomly, as is typically assumed in Marr-Albus models (Marr, 1969; Albus, 1971; Litwin-Kumar et al., 2017; Cayco-Gajic et al., 2017). Other studies instead argue that granule cells sample from mossy fibers with highly similar receptive fields (Garwicz et al., 1998; Brown and Bower, 2001; Jörntell and Ekerot, 2006) defined by the tuning of mossy fiber and climbing fiber inputs to cerebellar microzones (Apps et al., 2018). This has led to an alternative hypothesis that granule cells serve to relay similarly tuned mossy fiber inputs and enhance their signal-to-noise ratio (Jörntell and Ekerot, 2006; Gilbert and Chris Miall, 2022) rather than to re-encode inputs. Another hypothesis is that granule cells enable Purkinje cells to learn piece-wise linear approximations of nonlinear functions (Spanne and Jörntell, 2013). However, several recent studies support the existence of heterogeneous connectivity and selectivity of granule cells to multiple distinct inputs at the local scale (Huang et al., 2013; Ishikawa et al., 2015). Furthermore, the deviation of the predicted dimension in models constrained by electron-microscopy data as compared to randomly wired models is modest (Nguyen et al., 2022). Thus, topographically organized connectivity at the macroscopic scale may coexist with disordered connectivity at the local scale, allowing granule cells presynaptic to an individual Purkinje cell to sample heterogeneous combinations of the subset of sensorimotor signals relevant to the tasks that Purkinje cell participates in. Finally, we note that the optimality of dense codes for learning slowly varying tasks in our theory suggests that observations of a lack of mixing (Jörntell and Ekerot, 2002) for such tasks are compatible with Marr-Albus models, as in this case nonlinear mixing is not required."
We have also included the Jörntell and Ekerot (2006) study as a citation in the Introduction:
"Indeed, several recent studies have reported dense activity in cerebellar granule cells in response to sensory stimulation or during motor control tasks (Jörntell and Ekerot, 2006; Knogler et al., 2017; Wagner et al., 2017; Giovannucci et al., 2017; Badura and De Zeeuw, 2017; Wagner et al., 2019), at odds with classic theories (Marr, 1969; Albus, 1971)."
11) Results: 1st para: There is no information about how the granule cells are modelled.
We agree that this information should have been more readily available. We now more completely describe the model in the main text. Our model for granule cells can be found in Equation 1 in the Results section and also the Methods (Network Model):
"The activity of neurons in the expansion layer is given by: h = φ(J^eff x − θ), (2)
where φ is a rectified linear activation function φ(u) = max(u,0) applied element-wise. Our results also hold for other threshold-polynomial activation functions. The scalar threshold θ is shared across neurons and controls the coding level, which we denote by f, defined as the average fraction of neurons in the expansion layer that are active."
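The quoted model is simple enough to sketch in a few lines of numpy. The layer sizes, random Gaussian weights, and threshold values below are illustrative choices, not values from the paper; the sketch only shows how the shared threshold θ moves the coding level f:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 50, 1000        # input (mossy fiber) and expansion (granule) layer sizes; illustrative
J_eff = rng.standard_normal((M, N)) / np.sqrt(N)  # hypothetical random effective weights
x = rng.standard_normal(N)                        # one input pattern

def expansion_layer(x, J_eff, theta):
    """h = phi(J_eff @ x - theta), phi a rectified linear function applied element-wise."""
    return np.maximum(J_eff @ x - theta, 0.0)

# The shared threshold theta controls the coding level f:
# the average fraction of active expansion-layer neurons.
for theta in (0.0, 0.5, 1.0):
    h = expansion_layer(x, J_eff, theta)
    f = np.mean(h > 0)
    print(f"theta={theta:.1f}  coding level f={f:.2f}")
```

Raising θ sparsifies the representation (smaller f), which is the knob varied throughout the paper's analysis.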
12) 2nd para: ‘A typical assumption in computational theories of the cerebellar cortex is that inputs are randomly distributed in a high-dimensional space.’ Yes, I agree, and this is in fact in conflict with the known topographical organization in the cerebellar cortex (see broader comment above). Mossy fiber inputs coding for closely related inputs are co-localized in the cerebellar cortex. I think for this model to be of interest from the point of view of the mammalian cerebellar cortex, it would need to pay more attention to this organizational feature.
As we discuss in our response to paragraphs 5 and 6, we see the random distribution assumption at the local scale (inputs presynaptic to a single Purkinje cell) as being compatible with topographic organization occurring at the microzone scale. Furthermore, as discussed earlier, we specifically model low-dimensional input as opposed to the random and high-dimensional inputs typically studied in prior models.
"A typical assumption in computational theories of the cerebellar cortex is that inputs are randomly distributed in a high-dimensional space (Marr, 1969; Albus, 1971; Brunel et al., 2004; Babadi and Sompolinsky, 2014; Billings et al., 2014; Litwin-Kumar et al., 2017). While this may be a reasonable simplification in some cases, many tasks, including cerebellum-dependent tasks, are likely best described as being encoded by a low-dimensional set of variables. For example, the cerebellum is often hypothesized to learn a forward model for motor control (Wolpert et al., 1998), which uses sensory input and motor efference to predict an effector’s future state. Mossy fiber activity recorded in monkeys correlates with position and velocity during natural movement (van Kan et al., 1993). Sources of motor efference copies include motor cortex, whose population activity lies on a low-dimensional manifold (Wagner et al., 2019; Huang et al., 2013; Churchland et al., 2010; Yu et al., 2009). We begin by modeling the low dimensionality of inputs and later consider more specific tasks. We therefore assume that the inputs to our model lie on a D-dimensional subspace embedded in the N-dimensional input space, where D is typically much smaller than N (Figure 1B). We refer to this subspace as the “task subspace” (Figure 1C)."
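The task-subspace construction quoted above can be illustrated with a small numpy sketch (the dimensions N, D and the number of patterns P are arbitrary choices for illustration): task variables are drawn in a D-dimensional latent space and embedded into the N-dimensional input space via an orthonormal basis, so the resulting data matrix has rank D.

```python
import numpy as np

rng = np.random.default_rng(1)

N, D, P = 100, 5, 500   # ambient input dim, task-subspace dim, number of inputs (illustrative)

# A random D-dimensional subspace of the N-dimensional input space.
A, _ = np.linalg.qr(rng.standard_normal((N, D)))   # orthonormal basis, N x D

latent = rng.standard_normal((P, D))   # low-dimensional task variables
X = latent @ A.T                       # embedded inputs, P x N

# All variance is confined to D dimensions of the N-dimensional space:
rank = np.linalg.matrix_rank(X)
print(rank)   # → 5
```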
References
Albus, J.S. (1971). A theory of cerebellar function. Mathematical Biosciences 10, 25–61.
Apps, R., et al. (2018). Cerebellar Modules and Their Role as Operational Cerebellar Processing Units. Cerebellum 17, 654–682.
Babadi, B. and Sompolinsky, H. (2014). Sparseness and expansion in sensory representations. Neuron 83, 1213–1226.
Badura, A. and De Zeeuw, C.I. (2017). Cerebellar granule cells: dense, rich and evolving representations. Current Biology 27, R415–R418.
Barak, O., Rigotti, M., and Fusi, S. (2013). The sparseness of mixed selectivity neurons controls the generalization–discrimination trade-off. Journal of Neuroscience 33, 3844–3856.
Bell, C.C., Han, V., and Sawtell, N.B. (2008). Cerebellum-like structures and their implications for cerebellar function. Annual Review of Neuroscience 31, 1–24.
Billings, G., Piasini, E., Lőrincz, A., Nusser, Z., and Silver, R.A. (2014). Network structure within the cerebellar input layer enables lossless sparse encoding. Neuron 83, 960–974.
Bordelon, B., Canatar, A., and Pehlevan, C. (2020). Spectrum dependent learning curves in kernel regression and wide neural networks. International Conference on Machine Learning 1024–1034.
Brown, I.E. and Bower, J.M. (2001). Congruence of mossy fiber and climbing fiber tactile projections in the lateral hemispheres of the rat cerebellum. Journal of Comparative Neurology 429, 59–70.
Brunel, N., Hakim, V., Isope, P., Nadal, J.P., and Barbour, B. (2004). Optimal information storage and the distribution of synaptic weights: perceptron versus Purkinje cell. Neuron 43, 745–757.
Canatar, A., Bordelon, B., and Pehlevan, C. (2021). Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nature Communications 12, 1–12.
Cayco-Gajic, N.A., Clopath, C., and Silver, R.A. (2017). Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks. Nature Communications 8, 1–11.
Chadderton, P., Margrie, T.W., and Häusser, M. (2004). Integration of quanta in cerebellar granule cells during sensory processing. Nature 428, 856–860.
Churchland, M.M., et al. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience 13, 369–378.
Farris, S.M. (2011). Are mushroom bodies cerebellum-like structures? Arthropod structure & development 40, 368–379.
Garwicz, M., Jörntell, H., and Ekerot, C.F. (1998). Cutaneous receptive fields and topography of mossy fibres and climbing fibres projecting to cat cerebellar C3 zone. The Journal of Physiology 512 (Pt 1), 277–293.
Gilbert, M. and Chris Miall, R. (2022). How and Why the Cerebellum Recodes Input Signals: An Alternative to Machine Learning. The Neuroscientist 28, 206–221.
Giovannucci, A., et al. (2017). Cerebellar granule cells acquire a widespread predictive feedback signal during motor learning. Nature Neuroscience 20, 727–734.
Huang, C.C., et al. (2013). Convergence of pontine and proprioceptive streams onto multimodal cerebellar granule cells. eLife 2, e00400.
Ishikawa, T., Shimuta, M., and Häusser, M. (2015). Multimodal sensory integration in single cerebellar granule cells in vivo. eLife 4, e12916.
Jacot, A., Gabriel, F., and Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems 31.
Jörntell, H. and Ekerot, C.F. (2002). Reciprocal Bidirectional Plasticity of Parallel Fiber Receptive Fields in Cerebellar Purkinje Cells and Their Afferent Interneurons. Neuron 34, 797–806.
Jörntell, H. and Ekerot, C.F. (2006). Properties of Somatosensory Synaptic Integration in Cerebellar Granule Cells In Vivo. Journal of Neuroscience 26, 11786–11797.
Knogler, L.D., Markov, D.A., Dragomir, E.I., Štih, V., and Portugues, R. (2017). Sensorimotor representations in cerebellar granule cells in larval zebrafish are dense, spatially organized, and non-temporally patterned. Current Biology 27, 1288–1302.
Litwin-Kumar, A., Harris, K.D., Axel, R., Sompolinsky, H., and Abbott, L.F. (2017). Optimal degrees of synaptic connectivity. Neuron 93, 1153–1164.
Marr, D. (1969). A theory of cerebellar cortex. Journal of Physiology 202, 437–470.
Nguyen, T.M., et al. (2022). Structured cerebellar connectivity supports resilient pattern separation. Nature 1–7.
Saarinen, A., Linne, M.L., and Yli-Harja, O. (2008). Stochastic Differential Equation Model for Cerebellar Granule Cell Excitability. PLOS Computational Biology 4, e1000004.
Simon, J.B., Dickens, M., and DeWeese, M.R. (2021). A theory of the inductive bias and generalization of kernel regression and wide neural networks. arXiv: 2110.03922.
Sollich, P. (1998). Learning curves for Gaussian processes. Advances in Neural Information Processing Systems 11.
Spanne, A. and Jörntell, H. (2013). Processing of Multi-dimensional Sensorimotor Information in the Spinal and Cerebellar Neuronal Circuitry: A New Hypothesis. PLOS Computational Biology 9, e1002979.
Spanne, A. and Jörntell, H. (2015). Questioning the role of sparse coding in the brain. Trends in Neurosciences 38, 417–427.
van Kan, P.L., Gibson, A.R., and Houk, J.C. (1993). Movement-related inputs to intermediate cerebellum of the monkey. Journal of Neurophysiology 69, 74–94.
Wagner, M.J., Kim, T.H., Savall, J., Schnitzer, M.J., and Luo, L. (2017). Cerebellar granule cells encode the expectation of reward. Nature 544, 96–100.
Wagner, M.J., et al. (2019). Shared cortex-cerebellum dynamics in the execution and learning of a motor task. Cell 177, 669–682.e24.
Wolpert, D.M., Miall, R.C., and Kawato, M. (1998). Internal models in the cerebellum. Trends in Cognitive Sciences 2, 338–347.
Yu, B.M., et al. (2009). Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology 102, 614–635.
docker compose --profile download up --build
On Windows (WSL), sudo must be added before the command.
Tissue clearing is currently revolutionizing neuroanatomy by enabling organ-level imaging with cellular resolution. However, currently available tools for data analysis require a significant time investment for training and adaptation to each laboratory’s use case, which limits productivity. Here, we present FriendlyClearMap, an integrated toolset that makes ClearMap1 and ClearMap2’s CellMap pipeline easier to use, extends its functions, and provides Docker Images from which it can be run with minimal time investment. We also provide detailed tutorials for each step of the pipeline. For more precise alignment, we add a landmark-based atlas registration to ClearMap’s functions as well as include young mouse reference atlases for developmental studies. We provide alternative cell segmentation methods besides ClearMap’s threshold-based approach: Ilastik’s Pixel Classification, importing segmentations from commercial image analysis packages, and even manual annotations. Finally, we integrate BrainRender, a recently released tool for advanced 3D visualization of the annotated cells. As a proof-of-principle, we use FriendlyClearMap to quantify the distribution of the three main GABAergic interneuron subclasses (Parvalbumin+, Somatostatin+, and VIP+) in the mouse fore- and midbrain. For PV+ neurons, we provide an additional dataset with adolescent vs. adult PV+ neuron density, showcasing the use for developmental studies. When combined with the analysis pipeline outlined above, our toolkit improves on the state-of-the-art packages by extending their function and making them easier to deploy at scale.
This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giad035), which carries out open, named peer review. These reviews are published under a CC-BY 4.0 license and were as follows:
Reviewer Chris Armit
This Technical Note paper describes "FriendlyClearMap: An optimized toolkit for mouse brain mapping and analysis".
Whereas the core concept of a data analysis tool to assist in spatial mapping of cleared mouse tissues is perfectly reasonable, there are multiple issues with the documentation that render this toolkit very difficult to use. I detail below some of the issues I have encountered.
GitHub repository
The installation instructions are missing from the following GitHub repository: https://github.com/MoritzNegwer/FriendlyClearMap-scripts
The closest reference I could find to installation instructions is the following: "Please see the Appendices 1-3 of our <X_upcoming> publication for detailed instructions on how to use the pipelines. <X_protocols.io goes here>"
Step-by-step installation instructions should be included in the GitHub repository. In addition, the authors should add the protocols.io links to their GitHub repository.
Protocols.io
The installation instructions are missing from the following protocols.io links:
Run Clearmap 1 docker: dx.doi.org/10.17504/protocols.io.eq2lynnkrvx9/v1
Run Clearmap 2 docker: dx.doi.org/10.17504/protocols.io.yxmvmn9pbg3p/v1
Both of these protocols include the following instruction: "Then, download the docker container from our repository: XXX docker container goes here"
In the documentation, the authors need to unambiguously refer to the specific Docker container that a user needs to install for each software tool.
Test Data
I could not find the test data in the form of image stacks that would be needed to test the FriendlyClearMap protocols. Figure 1 refers to 16-bit TIFF image stacks, and I presume these to be the input data needed for the image analysis pipelines described in the manuscript. The authors should provide details of the test imaging dataset, including links if necessary to where the image stack data can be downloaded, in the 'Data Availability' section of the manuscript.
Platform / Operating Systems
In the 'Data Availability' section of the manuscript, the authors specify that the Operating Systems are "platform-independent". However, the protocols.io documents list a set of requirements for Windows and LINUX, but not for MacOS. The authors should provide installation instructions and system requirements for MacOS.
I reject this manuscript on the grounds that, due to lack of appropriate documentation and installation instructions, the software tool is too difficult to use and therefore has extremely low reuse potential.
This thread is locked.
Yet another example of why it's dumb for Microsoft to lock Community threads. This is in the Bing search results as the top article for my issue with 1,911 views. Since 2011 though, there have been new developments! The new Media Player app in Windows 10 natively supports Zune playlist files! Since the thread is locked, I can't put this news in a place where others following my same search path will find it.
Guess that's why it makes sense to use Hypothes.is 🤷♂️
With the Apple Lisa being the first GUI-based personal computer, I can see a few things that are similar to today: icons, little images that one can select, with describing text; windows with close buttons and other features like that. I think it probably flopped because of the price. Lisa, 1983.
We still need to know what your graphics card is! If it's Intel and doesn't have GL drivers on Windows, then your issue is a dupe of the fact that we need to use ANGLE in your case (known issue).
CONSIDERATION
I tested this on my HP TouchSmart tx2 laptop which runs Windows 7 and that machine loads the sites such as https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/demos/google/particles/index.html fine.
ISSUE_REPRODUCTION
Seen while running Mozilla/5.0 (Windows; Windows NT 6.1; WOW64; rv:2.0b3) Gecko/20100805 Firefox/4.0b3. I tried in 32-bit mode and the same thing happens.
STR:
1. Enable WebGL with the about:config pref
2. Load the site in the URL, or one of the demos from http://www.ambiera.com/copperlicht/demos.html
Expected: it would work as it does on Windows XP and Vista using the same build.
Actual: I get an error saying "This demo requires a WebGL-enabled browser".
I have seen Bug 560975 and I do not have those prefs set in about:config since I am using a new profile.
PROBLEM_DESCRIPTION
Author Response
Reviewer #2 (Public Review):
1) The authors in reality do not analyze oscillations themselves in this manuscript but only the power of signals filtered at determined frequency bands. This is particularly misleading when the authors talk about "spindles". Spindles are classically defined as a thalamico-cortical phenomenon, not recorded from hippocampus LFPs. Thus, the fact that you filter the signal in the same frequency range matching cortical spindles does not mean you are analyzing spindles. The terminology, therefore, is misleading. I would recommend the authors to change spindles to "beta", which at least has been reported in the hippocampus, although in very particular behavioral circumstances. However, one must note that the presence of power in such bands does not guarantee one is recording from these oscillations. For example, the "fast gamma" band might be related to what is defined as fast gamma nested in theta, but it might also be related to ripples in sleep recordings. The increase of "spindle" power in sleep here is probably related to 1/f components arising from the large irregular activity of slow wave sleep local field potentials. The authors should avoid these conceptual confusions in the manuscript, or show that these band power time courses are in fact matching the oscillations they refer to (for example, their spindle band is in fact reflecting increased spindle occurrence).
We thank the reviewer for allowing us to clarify this subject. We completely agree with the concerns raised in the comments. To avoid any confusion, we have replaced the word ‘spindle’ with ‘beta’ throughout the manuscript.
2) The shuffling procedure to control for the occupancy difference between awake and sleep does not seem to be sufficient. From what I understand, this shuffling is not controlling for the autocorrelation of each band which would be the main source of bias to be accounted for in this instance. Thus, time shifts for each band would be more appropriate. Further, the controls for trial durations should be created using consecutive windows. If you randomly sample sleep bins from distant time points you are not effectively controlling for the difference in duration between trial types. Finally, it is not clear from the text if the UMAP is recomputed for each duration-matched control. This would be a rigorous control as it would remove the potential bias arising from the unbalance between awake and sleep data points, which could bias the subspace to be more detailed for the LFP sleep features. It is very likely the results will hold after these controls, given it is not surprising that sleep is a more diverse state than awake, but it would be good practice to have more rigorous controls to formalize these conclusions.
We are grateful to the reviewer for suggesting an alternative analysis. Following this suggestion, we created surrogate datasets by time-shifting each band and obtained their respective UMAP projections (see modified Figure 2D). Additionally, as suggested, for the duration-matched controls we selected consecutive windows rather than random points (Figure 2 – figure supplement 1C). UMAP projections were obtained for each duration-matched control and occupancy was computed. The text in the Methods section has been modified to describe this analysis. As expected, the results were identical.
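For readers unfamiliar with this control, a minimal numpy sketch of the circular time-shift surrogate (toy data only; the actual band-power matrices and the UMAP step are not reproduced here): each band's power time series is rolled by an independent random offset, which preserves its autocorrelation structure while destroying the temporal alignment across bands.

```python
import numpy as np

rng = np.random.default_rng(2)

def time_shift_surrogate(band_power, rng):
    """Circularly shift each band's power time series by an independent random
    offset. Each band keeps its own autocorrelation, but the alignment across
    bands is destroyed. band_power: array of shape (n_bands, n_timebins)."""
    n_bands, n_bins = band_power.shape
    shifts = rng.integers(1, n_bins, size=n_bands)
    return np.stack([np.roll(band_power[b], shifts[b]) for b in range(n_bands)])

# Toy example: 6 frequency bands, 1000 time bins of power values
bp = np.abs(rng.standard_normal((6, 1000)))
surrogate = time_shift_surrogate(bp, rng)

# Each band's value distribution is unchanged; only its phase relative
# to the other bands is randomized.
assert np.allclose(np.sort(surrogate, axis=1), np.sort(bp, axis=1))
```

The surrogate matrices would then be projected with UMAP in the same way as the real data to assess how much of the state-space structure survives the decoupling.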
3) Lots of the observations made from the state space approach presented in this manuscript lack any physiological interpretation. For example, Figure 4F suggests a shift in the state space from Sleep1 to Sleep2. The authors comment there is a change in density but they do not make an effort to explain what the change means in terms of brain dynamics. It seems that the spectral patterns are shifting away from the Delta X Spindle region (concluding this by looking at Fig4B) which could be potentially interesting if analyzed in depth. What is the state space revealing about the brain here? It would be important to interpret the changes revealed by this method otherwise what are we learning about the brain from these analyses? This is similar to the results presented in Figure 5, which are merely descriptions of what is seen in the correlation matrix space. It seems potentially interesting that non-REM seems to be split into two clusters in the UMAP space. What does it mean for REM that delta band power in pyramidal and lm layers is anti-correlated to the power within the mid to fast gamma range? What do the transition probabilities shown in Figures 6B and C suggest about hippocampal functioning? The authors just state there are "changes" but they don't characterize these systematically in terms of biology. Overall, the abstract multivariate representation of the neural data shown here could potentially reveal novel dynamics across the awake-sleep cycle, but in the current form of this manuscript, the observations never leave the abstract level.
We thank the reviewer for allowing us to clarify this aspect of the manuscript. We have now edited the main text to include considerations on the biological relevance of the findings of Figures 4, 5 and 6.
Additions to Figure 4: In particular, non-REM states in sleep2 tended to concentrate in a region of increased power in the delta and beta bands, which could be the result of increased interactions with cortical activity modulated in the same range. It is also likely that such an effect was induced by exposure to relevant behavioral experience. In fact, changes in the density of individual oscillations after learning have been reported using traditional analytical methods and are thought to support memory consolidation (Bakker et al., 2015; Eschenko et al., 2008, 2006). Nevertheless, while traditional methods provide information about individual components, the novel approach used here provides additional information about the combinatorial shift in the dynamics of network oscillations after learning or exploration. It thus provides a basis for identifying how coordinated activity among different oscillations supports memory consolidation processes, such as those occurring during non-REM sleep after exploration, which cannot be elucidated using traditional analytical methods.
Additions to Figure 5: Gamma segregation and delta decoupling offer a picture of hippocampal REM sleep as being more akin to awake locomotion (with the major difference of a stronger medium gamma presence), while also suggesting a substantial independence from cortical slow oscillations. On the other hand, the across-scale coherence of non-REM sleep is consistent with this sleep stage being dominated by brain-wide collective fluctuations engaging oscillations at every range. Distinct cross-frequency coupling among various individual pairs of oscillations, such as theta-gamma and delta-gamma, has already been reported (Bandarabadi et al., 2019; Clemens et al., 2009; Hammer et al., 2021; Scheffzük et al., 2011). However, computing cross-frequency coupling on the state space provides additional information on how multiple oscillations, obtained from distinct CA1 hippocampal layers (stratum pyramidale, stratum radiatum and stratum lacunosum-moleculare), are coupled with each other during distinct states of sleep and wakefulness. Furthermore, projecting the correlation matrices onto a 2D plane provides a compact tool to visualize the cross-frequency interactions among various hippocampal oscillations. Altogether, this approach reveals the complex nature of the coupling dynamics occurring in the hippocampus during distinct behavioral states.
Additions to Figure 6: We found that transitions from REM to REM sleep and from non-REM to non-REM sleep (intra-state transitions) are more vulnerable to plasticity after exploration than inter-state transitions (such as non-REM to REM, REM to intermediate, etc.) (Fig 6E, F). These changes in intra-state transitions were beyond randomness (Fig S9 E, F), indicating specificity in the plastic changes in state transitions after exploration. In particular, while the average REM period duration is unaltered after exploration (Fig 4G), the REM temporal structure is reorganized. In fact, the increased probability of REM-to-REM transitions indicates a significant prolongation of REM bout duration. Similarly, the increase in non-REM-to-non-REM transition probability reflects an increased duration of non-REM bouts. Therefore, environment exploration was accompanied by an increased separation between REM and non-REM periods, possibly as a response to increased computational demands. More generally, the network state space allows us to characterize state transitions in the hippocampus and how they are affected by novel experience or learning. By observing the state transition patterns, this analytical framework allows us to detect and identify state-specific changes in hippocampal oscillatory dynamics, beyond the possibilities offered by more traditional univariate and bivariate methods. We next investigated how fast the network flows on the state space and assessed whether the speed is uniform or exhibits specific region-dependent characteristics.
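The transition probabilities discussed above are, in essence, first-order Markov estimates from a sequence of per-bin state labels. A minimal sketch with hypothetical integer labels (the encoding 0 = non-REM, 1 = REM, 2 = intermediate and the toy sequence are illustrative, not from the paper):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized first-order transition probabilities estimated
    from a sequence of integer state labels (one label per time bin)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no outgoing transitions stay all-zero instead of dividing by 0.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Toy sequence of per-bin labels: 0 = non-REM, 1 = REM, 2 = intermediate
seq = [0, 0, 0, 1, 1, 0, 2, 0, 0, 1]
T = transition_matrix(seq, 3)
print(T[0])   # outgoing probabilities from non-REM; T[0, 0] is the "stay" probability
```

Diagonal entries (T[i, i]) correspond to the intra-state transitions discussed above; off-diagonal entries are the inter-state transitions.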
Reviewer #3 (Public Review):
1) My primary concern is to provide clear evidence that this approach will provide key insights of high physiological significance, especially for readers who may think the traditional approaches are advantageous (for example due to their simplicity). I think the authors' findings of distinct sleep state signatures or altered organization of the NLG3-KO mouse could serve this purpose. However, right now the physiological significance of these results is unclear. For example, do these sleep state signatures predict later behavior performance, or is altered organization related to other functional impairments in the disease model? Do neurons with distinct sleep state signatures form distinct ensembles and code for related information?
We are thankful to the reviewer for raising a very interesting line of questioning regarding sleep signatures and distinct ensembles. In this study, we show that sleep state signatures can predict how individual cells may participate in information processing during open field exploration. However, further analyses exploring the recruitment of neuronal ensembles are in preparation for another manuscript and are beyond the scope of this article.
We have further modified the description of the results (as also suggested by other reviewers) to highlight the key advantages of this approach over traditional methods.
Regarding functional impairment: as described in the manuscript, the altered organization in the animal model of autism could possibly be due to alterations in cellular and synaptic mechanisms such as those described in previous reports (Modi et al 2019, Foldy et al 2013).
2) For cells with different mean firing rates during exploration: is that because they are putative fast-spiking interneurons and pyramidal cells? From the reported mean firing rates, I think some of these cells are interneurons. Since mean firing rates are well known to vary with cell type, this should be addressed. For example, the sleep state signatures may be distinct for different putative pyramidal cells and interneurons. This would be somewhat expected considering prior work that has shown different cell types have different oscillatory coupling characteristics. I think it would be more interesting to determine if pyramidal cells had distinct sleep state signatures and, if so, whether pyramidal cells from the same sleep state signature have similar properties, like coding for similar things or commonly firing together in an ensemble, though the number of cells in Fig. 8 may be limited for this analysis. The authors could use the hc-11 data in addition, which was also tested in this work.
We thank the reviewer for suggesting this additional analysis to better describe the data. To this end, we have added an additional figure to the supplementary data (analysis of the hc-11 dataset: Figure 8 – figure supplement 3) to demonstrate that interneurons and pyramidal cells have distinct sleep signatures. These findings are in agreement with the dataset presented in Figure 8D, E.
As shown in the manuscript, spatial firing (sparsity) has large variability for cells having similar network signatures (Fig 8E). Thus, additional parameters besides oscillations may be involved in cell encoding. Different network state spaces will need to be explored in future studies to further understand this phenomenon in detail.
We agree that investigating neuronal ensembles and the state space is an interesting direction to follow. In another study (in preparation), we are investigating in detail the recruitment of neuronal ensembles by the oscillatory state space. Thus, those findings are beyond the scope of this introductory article.
3) Example traces are needed to show how LFPs change over the state-space. Example traces should be included for key parts of the state-space in Figures 2 and 3.
We thank the reviewer for this key insight on data representation. Example traces of how LFP varies on the state space have been added (see Figure 4 – figure supplement 1).
4) What is the primary rationale for 200ms time bins? Is this time scale sufficient to capture the slow dynamics of delta rhythm (1-5Hz) with a maximum of 1s duration?
The time scale of binning depends on the scale of investigation. We also replicated the results with different time bins (such as 50 ms and 1 s), and the results were identical. For delta rhythms, with 200 ms time bins, the dynamics will be captured across multiple bins. Additionally, the binned power time series are smoothed before obtaining the projections.
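A minimal sketch of the binning described above (the sampling rate, smoothing window, and toy data are illustrative choices, not the authors' exact pipeline): power is averaged within consecutive 200 ms bins and lightly smoothed, and a 1 Hz delta cycle (1 s) spans five such bins, so even the slowest rhythm of interest is represented across multiple samples.

```python
import numpy as np

fs = 1000                           # sampling rate in Hz (illustrative)
bin_ms = 200
bin_len = int(fs * bin_ms / 1000)   # samples per 200 ms bin

rng = np.random.default_rng(3)
power = np.abs(rng.standard_normal(fs * 10))   # 10 s of band power, toy data

# Average power within consecutive, non-overlapping 200 ms bins
n_bins = len(power) // bin_len
binned = power[: n_bins * bin_len].reshape(n_bins, bin_len).mean(axis=1)

# Simple moving-average smoothing of the binned series (3-bin window)
kernel = np.ones(3) / 3
smoothed = np.convolve(binned, kernel, mode="same")

# A 1 Hz delta cycle (1 s) spans 5 consecutive 200 ms bins
print(len(binned), 1000 // bin_ms)   # 50 bins for 10 s; 5 bins per second
```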
5) Since oscillatory frequency and power are highly associated with running speed, how does speed vary over the state space? Is the relationship between speed and the state space similar to the results of previous studies for theta (Slawinska and Kasicki, Brain Res 1998; Maurer et al, Hippocampus 2005) and gamma oscillations (Ahmed and Mehta, J Neurosci 2012; Kemere et al, PLOS ONE 2013), or does it provide novel insights?
We thank the reviewer for highlighting this crucial link between oscillation and locomotion. While various articles have focused on individual oscillations, the combinatorial effects of multiple oscillations from multiple brain areas in regulating the speed of the animal during exploration are definitely worth exploring with this novel approach. This set of results will be presented in another study, currently in preparation.
6) The separation of 9 states (Fig. 6ABC) seems arbitrary, given that state 1 (bin 1) is never visited. I suggest plotting the density distribution of the data in Fig. 2A or Fig. 6A to better determine how many states there are within the state space. For example, five peaks in such a density plot might suggest five states. Alternately, clustering methods could be useful to determine the number of states.
We thank the reviewer for this useful suggestion. We agree that additional clustering methods can be used to identify non-canonical sleep states. These are currently being explored in our lab and will be part of future studies. As for this dataset, the density plots are available in Figure 4E, which indicates how the states are distributed across each part of the state space.
7) The results in Fig. 4G are very interesting and suggest more variation of sub-states during non REM periods in sleep1 than in sleep2. What might explain this difference? Was it associated with more frequent ripple events occurring in sleep2?
The reviewer is right to look for the source of the decreased state variability in sleep2. Considering the distribution of relative frequency power in the state space, the higher concentration in sleep2 corresponds to higher content in the slower delta and spindle frequency bands, rather than the higher frequencies of SWRs. This result can be interpreted in light of enhanced cortical activity (which is known to heavily recruit those bands) and possibly of enhanced cortical-hippocampal communication following relevant behavioral experience. It is also necessary to mention that with our recording setup we cannot completely rule out the effects of volume conduction, and thus we cannot exclude that the increase in the delta and spindle bands in the hippocampus is a spurious effect of purely cortical frequency modulations.
8) The state transition results in Fig. 6 are confusing because they include two fundamentally different timescales: fast transitions between oscillatory states and slow dynamics of sleep states. I recommend clarifying the description in the results and the figure caption. Furthermore, how can an animal transition between the same sleep state (Fig. 6EF)? Would they both be in a single sleep state?
The transitions capture the fast oscillatory timescales (as they are investigated over a timeframe of 1 second). The sleep stages (REM, non-REM, etc.) are used as labels from which the states originate in the state space. This allows us to characterize fast oscillatory dynamics within the various sleep stages.
Regarding same-state transitions: an increase in same-state transition probability corresponds to prolongation of that particular state, thereby altering the temporal structure of a given sleep stage.
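The same-state transition argument can be illustrated with a toy empirical transition matrix (a hypothetical sketch, not the manuscript's code): the diagonal entries directly measure how long a state tends to persist.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Empirical Markov transition matrix from a sequence of state labels.

    Self-transitions (the diagonal) measure state persistence:
    a higher P[i, i] means prolonged occupancy of state i.
    """
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid division by zero for unvisited states
    return counts / row_sums

# Toy sequence: state 0 persists across bins, state 1 does not.
seq = [0, 0, 0, 0, 1, 0, 0, 0, 1, 0]
P = transition_matrix(seq, 2)
print(P[0, 0])  # high self-transition probability for the "sticky" state 0
```

In this toy sequence, state 0 recurs in runs, so P[0, 0] dominates its row, which is exactly the signature of a prolonged state.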
Reviewer #2 (Public Review):
Modi and colleagues describe a multivariate framework to analyze local field potentials, which is specifically applied to CA1 data in this work. Multivariate approaches are welcome in the field and the effort of the authors should be appreciated. However, I found the analyses presented here are too superficial and do not seem to bring new insights into hippocampal dynamics. Further, some surrogate methods used are not necessarily controlling for confounding variables. These concerns are further detailed below.
1. The authors in reality do not analyze oscillations themselves in this manuscript but only the power of signals filtered at determined frequency bands. This is particularly misleading when the authors talk about "spindles". Spindles are classically defined as a thalamo-cortical phenomenon, not recorded from hippocampal LFPs. Thus, the fact that you filter the signal in the same frequency range matching cortical spindles does not mean you are analyzing spindles. The terminology, therefore, is misleading. I would recommend the authors change "spindles" to "beta", which at least has been reported in the hippocampus, although in very particular behavioral circumstances. However, one must note that the presence of power in such bands does not guarantee one is recording from these oscillations. For example, the "fast gamma" band might be related to what is defined as fast gamma nested in theta, but it might also be related to ripples in sleep recordings. The increase of "spindle" power in sleep here is probably related to 1/f components arising from the large irregular activity of slow-wave sleep local field potentials. The authors should avoid these conceptual confusions in the manuscript, or show that these band power time courses are in fact matching the oscillations they refer to (for example, that their spindle band in fact reflects increased spindle occurrence).
2. The shuffling procedure to control for the occupancy difference between awake and sleep does not seem to be sufficient. From what I understand, this shuffling does not control for the autocorrelation of each band, which would be the main source of bias to be accounted for in this instance. Thus, time shifts for each band would be more appropriate. Further, the controls for trial durations should be created using consecutive windows. If you randomly sample sleep bins from distant time points you are not effectively controlling for the difference in duration between trial types. Finally, it is not clear from the text if the UMAP is recomputed for each duration-matched control. This would be a rigorous control, as it would remove the potential bias arising from the imbalance between awake and sleep data points, which could bias the subspace to be more detailed for the sleep LFP features. It is very likely the results will hold after these controls, given it is not surprising that sleep is a more diverse state than awake, but it would be good practice to have more rigorous controls to formalize these conclusions.
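The per-band time shifts suggested here could be sketched as circular rotations of each band's power series (an illustrative toy implementation with hypothetical names, not code from the manuscript): a rotation preserves each band's autocorrelation, the bias of concern, while destroying the cross-band alignment that defines states.

```python
import numpy as np

def circular_shift_surrogate(band_powers, rng):
    """Independently circular-shift each band's power time series.

    band_powers : (n_bands, n_time) array of binned band power.
    Each band keeps its own autocorrelation structure (up to the wrap
    point), but the alignment across bands is randomized.
    """
    n_bands, n_time = band_powers.shape
    out = np.empty_like(band_powers)
    for i in range(n_bands):
        shift = rng.integers(1, n_time)  # high-exclusive: 1..n_time-1
        out[i] = np.roll(band_powers[i], shift)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 1000))  # toy: 4 bands x 1000 time bins
surr = circular_shift_surrogate(x, rng)
# Same values within each band, different alignment across bands.
print(np.allclose(np.sort(surr[0]), np.sort(x[0])))  # True
```

Recomputing the UMAP (or other embedding) on each such surrogate would then give a null distribution for the occupancy statistics.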
3. Lots of the observations made from the state space approach presented in this manuscript lack any physiological interpretation. For example, Figure 4F suggests a shift in the state space from Sleep1 to Sleep2. The authors comment there is a change in density but they do not make an effort to explain what the change means in terms of brain dynamics. It seems that the spectral patterns are shifting away from the Delta X Spindle region (concluding this by looking at Fig4B) which could be potentially interesting if analyzed in depth. What is the state space revealing about the brain here? It would be important to interpret the changes revealed by this method otherwise what are we learning about the brain from these analyses? This is similar to the results presented in Figure 5, which are merely descriptions of what is seen in the correlation matrix space. It seems potentially interesting that non-REM seems to be split into two clusters in the UMAP space. What does it mean for REM that delta band power in pyramidal and lm layers is anti-correlated to the power within the mid to fast gamma range? What do the transition probabilities shown in Figures 6B and C suggest about hippocampal functioning? The authors just state there are "changes" but they don't characterize these systematically in terms of biology. Overall, the abstract multivariate representation of the neural data shown here could potentially reveal novel dynamics across the awake-sleep cycle, but in the current form of this manuscript, the observations never leave the abstract level.
Reviewer #3 (Public Review):
In their manuscript, Schneider et al. aim to develop voyAGEr, a web-based tool that enables the exploration of gene expression changes over age in a tissue- and sex-specific manner. The authors achieved this goal by calculating the significance of gene expression alterations within a sliding window using their unique algorithm, Shifting Age Range Pipeline for Linear Modelling (ShARP-LM), together with tissue-level summaries that assess the significance of the proportion of differentially expressed genes per window and pathway enrichments that demonstrate biological relevance. Furthermore, the authors examined the enrichment of cell types, pathways, and diseases by defining co-expressed gene modules in four selected tissues. voyAGEr was developed as a discovery tool, providing researchers with easy access to the vast amount of transcriptome data from the GTEx project. Overall, the research design is unique and well-performed, with interesting results that provide useful resources for the field of human genetics of aging. I have a few questions and comments, which I hope the authors can address.
1. In the gene-centric analyses section of the results, to improve this manuscript and database, linear regression tests accounting for the entire age range should be added. The authors' algorithm, ShARP-LM, tests locally within a 16-year window, which gives it lower power than a linear regression test over the whole age range. I suspect that the power reduction is strongest in the younger age range, since a larger number of GTEx donors are concentrated at older ages. By adding the results from the lm tests, readers would gain more insight and evidence into how significantly their genes of interest change with age.
2. In line with the ShARP-LM test results, it is not clear which criterion was used to define the significant genes for the subsequent enrichment analyses. I assume that the criterion is P < 0.05, but it should be clearly noted. Additionally, the authors should apply adjusted p-values for multiple-test correction. The ideal criterion is an adjusted P < 0.05. However, if none or only a handful of genes were found to be significant, the authors could relax the criteria, such as using an unadjusted P < 0.01 or 0.05.
3. In the gene-centric analyses section, the authors should provide a full list of donor conditions and a summary table of conditions as supplementary material.
4. The tissue-specific assessment section has poor sub-titles. Every title has to contain information.
5. I have an issue understanding the meaning of the NES from GSEA in the tissue-specific assessment section. The authors performed GSEA for the DEGs against the background genes ordered by t-statistics (from positive to negative) calculated from the linear model. I understand the p-value was two-tailed, meaning that both positive and negative NES are meaningful, as they represent up-regulated (positive coefficient) and down-regulated (negative coefficient) expression with age, respectively, within a window. However, in the GSEA section of the Methods, the authors did not fully elaborate on this directionality but stated, "The NES for each pathway was used in subsequent analyses as a metric of its over- or down-representation in the Peak". The authors should clearly elaborate on how to interpret the NES from their results.
6. In the Modules of co-expressed genes section, the authors did not explain how or why they selected the four tissues: brain, skeletal muscle, heart (left ventricle), and whole blood. This should be elaborated on.
7. In the Modules of co-expressed genes section, the authors did not provide an explanation of the "diseases-manual" sub-tab of the "Pathway" tab of the voyAGEr tool. It would be helpful for readers to understand how the candidate disease list was prepared and what the results represent.
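The sliding-window regression at issue in points 1-2 can be sketched as follows. This is a hypothetical toy illustration of the idea (windowed ordinary least squares with a slope t-statistic per window), not ShARP-LM itself; the function name, toy data, and parameters are invented.

```python
import numpy as np

def sliding_window_lm(age, expr, window=16.0, step=1.0):
    """Fit expr ~ age by OLS inside successive age windows.

    Returns (window_centers, slopes, t_stats). The reviewer's suggested
    companion analysis is the same fit with a single window spanning
    the whole age range.
    """
    centers, slopes, tstats = [], [], []
    start = age.min()
    while start + window <= age.max():
        m = (age >= start) & (age < start + window)
        n = int(m.sum())
        if n >= 3:
            x, y = age[m], expr[m]
            xc = x - x.mean()
            slope = (xc @ (y - y.mean())) / (xc @ xc)
            resid = y - (y.mean() + slope * xc)
            se = np.sqrt((resid @ resid) / (n - 2) / (xc @ xc))
            centers.append(start + window / 2)
            slopes.append(slope)
            tstats.append(slope / se)
        start += step
    return np.array(centers), np.array(slopes), np.array(tstats)

rng = np.random.default_rng(1)
age = rng.uniform(20, 70, 300)
expr = 0.05 * age + rng.normal(0, 0.5, 300)  # toy gene increasing with age
c, s, tv = sliding_window_lm(age, expr)
print(np.mean(s > 0))  # most windows recover the positive trend
```

Because each window uses only a fraction of the donors, the per-window t-statistics are smaller than a whole-range fit would give, which is the power concern in point 1; the resulting per-window p-values are also the ones that would need multiple-test correction (point 2).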
Reviewer #1 (Public Review):
We conclude that this descriptive study has some strengths; additionally, we propose several ways to increase its potential impact and to strengthen some of its claims. This study describes the remodeling of Merkel cells and their innervating sensory axons in the skin. By using transgenic mouse lines in which these cells were genetically fluorescently labeled, the authors performed a series of analyses mostly focusing on the number and location of Merkel cells and the sensory axons that innervate them.
One of the major strengths of the study is the establishment of intravital imaging techniques to investigate the dynamic simultaneous behavior of Merkel cells and their innervation during homeostasis and hair regeneration. However, how the findings integrate into the existing knowledge of skin development is unfortunately only partially addressed.
To the best of our understanding, a few technical limitations of the study define its major weaknesses. First, Merkel cell loss is dramatic, and it is unclear whether this reduction is part of a developmentally controlled reduction in cell number, or whether additional cells are expected to be integrated into the system; longer imaging windows might help here. Second, the depilation agent might be too aggressive and lead to cell death, so better controls might be suggested. Similarly, ablating Merkel cells throughout development might cause developmental issues that could mask the proposed homeostasis analyses; a controlled adult-specific ablation might be suggested. Finally, the TrkC-based transgenic mouse is expected to be heterozygous - could that be an issue? Either better controls, or a textual treatment of this topic, are advised.
All in all, we think this study has the potential to establish a high resolution description of Merkel cells - sensory axon dynamic interactions. We hope that the authors will be encouraged to improve the paper based on our comments, something that will likely improve its potential significance and impact.
Reviewer #1 (Public Review):
This manuscript reports the results of a clever chemogenetic manipulation study designed to probe how stimulation (excitation and inhibition) of D1-expressing spiny projection neurons in the striatum (direct pathway) influences hemodynamic characteristics of local and global regions in the brain in mice. Stimulating in the dorsal caudate, the authors found alteration of the local hemodynamic BOLD responses in adjacent areas of the striatum, regions of the thalamus known to project back to the striatum, and more unimodal cortical regions. The authors also observed a decrease in functional connectivity between several cortical regions and the striatum with direct pathway excitation, and the opposite effect with direct pathway inhibition. Put together the results begin to paint a picture of the macroscopic signatures of direct pathway stimulation (and inhibition) that could be used to help infer global BOLD patterns in task-related experiments.
Overall, this appears to be a timely study. The rise of papers using chemo/optogenetic methods to probe the underlying mechanisms of the BOLD response is crucial for the neuroimaging field to continue moving forward. The methods are, for the most part, rigorous and clear. There are, however, a few open issues that require addressing in order for readers to reach the same conclusion as the authors.
MAJOR CONCERNS
1. Classifier as an evaluation method.
The authors largely rely on a support vector machine (SVM) classifier to predict whether BOLD dynamics within atlas-defined regions reflect stimulation-on or stimulation-off windows. While in one way this is a conservative method for evaluating stimulation effects in the resting BOLD fluctuations, the authors largely report their findings as accuracies of the classifier. Figures 3-5 largely only report model accuracy effects, but we get no sense as to what exactly is happening to the BOLD dynamics in each region. The autocorrelation analysis (Fig. 6) somewhat tries to get at this, but only for a subset of regions and the results are largely unclear (see comment below). As a result, a key goal of the study is left largely unaddressed for the reader: i.e. how do intra-region BOLD dynamics change with direct pathway stimulation? The study needs more effort put into this descriptive level of analysis to complement the rigorous classifier analysis.
Also, the classifier method itself seems highly parameterized. The hctsa method returns 7702 features for each time series. It is unclear exactly how many were in the final set used to run the classification, but even if half of the features were removed, it would still make the classification problem highly overparameterized (e.g., 23 and 25 observations against thousands of features for the excitation and inhibition classifiers respectively). Assuming the authors used cross-validation correctly (which we need more information to support), the risk of inflated classification performance is mitigated. However, we need the details to be able to vet that the bias-variance tradeoff was resolved effectively in this model. In addition, it would be nice to know the features that loaded highly on the final model to resolve the questions about what changes in the local BOLD dynamics from excitation and inhibition of the direct pathway.
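The leakage concern raised here can be made concrete: with thousands of features and a few dozen observations, feature selection must be re-run inside every cross-validation fold, or accuracy is inflated. The toy sketch below (hypothetical names; a nearest-centroid rule stands in for the SVM, and mean-difference ranking stands in for whatever selection the authors used) shows the properly nested version staying near chance on pure-noise data.

```python
import numpy as np

def nested_feature_selection_cv(X, y, k=10):
    """Leave-one-out CV with feature selection INSIDE each fold.

    The top-k features (by absolute class-mean difference) are re-chosen
    from the training fold only; the held-out sample never influences
    which features are used to classify it.
    """
    n = len(y)
    correct = 0
    for i in range(n):
        tr = np.arange(n) != i
        Xtr, ytr = X[tr], y[tr]
        # Rank features on the training fold only (no leakage).
        d = np.abs(Xtr[ytr == 0].mean(0) - Xtr[ytr == 1].mean(0))
        feats = np.argsort(d)[-k:]
        c0 = Xtr[ytr == 0][:, feats].mean(0)
        c1 = Xtr[ytr == 1][:, feats].mean(0)
        xt = X[i, feats]
        pred = 0 if np.linalg.norm(xt - c0) < np.linalg.norm(xt - c1) else 1
        correct += int(pred == y[i])
    return correct / n

# Pure noise, far more features than observations (as with hctsa's
# ~7700 features and ~24 runs): nested CV should hover near chance.
rng = np.random.default_rng(2)
X = rng.standard_normal((24, 5000))
y = np.array([0, 1] * 12)
acc = nested_feature_selection_cv(X, y)
print(acc)
```

Running the same experiment with features selected on the full data before cross-validation would typically yield spuriously high accuracy on this noise, which is exactly the bias-variance bookkeeping the review asks the authors to document.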
2. Local stimulation effects in the striatum
Figure 3 is quite confusing. The classifier is supposed to predict stimulation (excitation or inhibition) on vs. stimulation off (control) periods. This would predict a single number (balanced prediction accuracy) per striatal nucleus. Yet the heat maps shown in this figure show classification accuracies for both stimulation and control conditions. Where do the two numbers come from? Also, given the extremely limited short-range lateral connectivity in the striatum, why are the only stimulation effects observed not in the subnucleus being stimulated (Cpl,dm,cd), but in adjacent subnuclei (CPre, CPivmv), and *only* for excitation conditions? This lack of direct change in BOLD dynamics at the stimulated site seems important and largely ignored.
3. Autocorrelation findings.
The one attempt to characterize what happens in the intra-region BOLD dynamics is the autocorrelation analysis reported on page 11 and in Figure 6. However, this analysis a) only focuses on the thalamic nuclei (not also the cortical regions and the single striatal site shown to exhibit stimulation effects) and b) only focuses on a few time series measures. Why this limited focus of both target (thalamic nuclei) and measure (first lag of the autocorrelation)? There are many measures for characterizing the temporal characteristics of autocorrelated series from the hctsa analysis. This selective focus seems both narrow and incomplete.
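The lag-1 autocorrelation at issue, and the broader lag profile the review asks for, can be sketched as follows (a hypothetical illustration with toy signals, not the paper's hctsa pipeline):

```python
import numpy as np

def autocorr(x, max_lag=10):
    """Sample autocorrelation of a time series at lags 1..max_lag.

    The first entry (AC1) is the single summary used in the paper's
    Figure 6; the full lag profile is one simple way to characterize
    temporal structure more completely.
    """
    x = np.asarray(x, float)
    x = x - x.mean()
    denom = x @ x
    return np.array([(x[:-k] @ x[k:]) / denom for k in range(1, max_lag + 1)])

# A slowly varying (heavily smoothed) signal has AC1 near 1;
# white noise has AC1 near 0.
rng = np.random.default_rng(3)
noise = rng.standard_normal(5000)
slow = np.convolve(noise, np.ones(50) / 50, mode="valid")
print(autocorr(noise)[0], autocorr(slow)[0])
```

Reporting the lag profile (or several hctsa temporal features) per region, rather than AC1 alone, would directly address the narrowness concern.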
4. Connectivity results
The changes in functional connectivity as a result of direct pathway stimulation (excitation and inhibition) are both fascinating and limited. There is a clear excitation/inhibition difference in effects, as shown in Figure 7 B-C. However, Figure 7B suggests something different than the change results shown in Figure 7C. It appears that the application of clozapine increases functional connectivity in the control mice (black line Fig. 7B). This effect is exaggerated in the inhibition condition, but (most importantly) direct pathway excitation does not really reveal a significant change in the BOLD connectivity patterns. Now this does not change the authors' overall conclusions (connectivity is suppressed with direct pathway excitation relative to control mice), but the nuance of what is happening in the control mice is important for interpretation purposes: direct pathway excitation does not necessarily decrease functional connectivity but does not express the increase in connectivity observed from the application of clozapine. This needs to be elaborated more.
Along the same lines, there is an interesting disconnect between the intra-region results and the inter-region (connectivity) results. It is clear that resting BOLD dynamics in thalamic nuclei that project back to the striatum, as well as more unimodal cortical areas, change from direct pathway stimulation in the dorsal caudate. Yet, only one cortical region (MOp) with significant functional connectivity changes overlaps with the set of nuclei that exhibit intra-region BOLD changes. This suggests that local BOLD dynamics and global connectivity are largely disconnected effects. Yet this seems to be largely ignored in the current work. It would be nice to see more analysis, and discussion, of the intra-region and inter-region stimulation effects.
Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.
Learn more at Review Commons
We are grateful for the thoughtful comments and suggestions from the reviewers which we feel have resulted in a manuscript that is both clearer to the reader and more rigorous. We have addressed the suggested revisions point by point below:
Reviewer #1 Major comments:
Figure 2E. This shows the residence time of Cse4 without Ndc10 association. How does this compare to the residence time on mutant CEN3 (Supplemental Figure 1). It looks like Cse4 still binds to CEN3 with some specificity even in the absence of Ndc10. Does this suggest that Cse4 has some intrinsic ability to recognize CEN3? Alternatively, Ndc10 is still required for Cse4 binding but is below detection in the Cse4-alone residence lifetimes. Ideally, the authors would compare this with Cse4 binding in an Ndc10 mutant.
We thank the reviewer for this interesting question. As suggested, we analyzed Cse4 behavior on the mutant CDEIIImut CEN3 DNA, which does not stably recruit Ndc10 (Figure 1C), using the real-time colocalization assay. Although overall Cse4 associations were reduced, we still observed transient interactions, consistent with the possibility that Cse4 has some intrinsic ability to recognize CEN3. Kaplan-Meier analysis showed that the lifetimes of Cse4 colocalizations on CDEIIImut CEN3 DNA were significantly reduced compared to CEN DNA (Figure EV1G, H). We have added these points to the text (p. 10, lines 29-31 and p. 11, lines 1-8).
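For illustration, the Kaplan-Meier estimate underlying such lifetime comparisons can be sketched as follows. This is a hypothetical minimal implementation on toy exponential lifetimes, not the actual analysis code; it supports right-censoring (e.g., residences cut off by the end of the movie).

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve for colocalization lifetimes.

    durations : residence times (s).
    observed  : 1 if the departure was seen, 0 if censored
                (e.g., the recording ended first).
    Returns event times and survival probability just after each event.
    """
    order = np.argsort(durations)
    t = np.asarray(durations, float)[order]
    d = np.asarray(observed)[order]
    surv, times, probs = 1.0, [], []
    n_at_risk = len(t)
    for i in range(len(t)):
        if d[i]:
            surv *= 1 - 1 / n_at_risk  # step down only at observed events
            times.append(t[i])
            probs.append(surv)
        n_at_risk -= 1  # censored residences leave the risk set silently
    return np.array(times), np.array(probs)

# Toy comparison: shorter lifetimes on the mutant DNA shift the curve down.
rng = np.random.default_rng(4)
wt = rng.exponential(60, 200)    # toy lifetimes on CEN DNA
mut = rng.exponential(20, 200)   # toy lifetimes on mutant DNA
_, s_wt = kaplan_meier(wt, np.ones(200))
_, s_mut = kaplan_meier(mut, np.ones(200))
print(np.median(wt) > np.median(mut))  # True
```

A log-rank test (or comparison of median survival) would then quantify whether the two curves differ significantly, as reported for CEN vs. CDEIIImut DNA.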
Figure 3 explores the very interesting relationship between Scm3 dynamics and Cse4 binding but I feel that the data is not fully flushed out. A key finding is that Cse4 can potentially bind to CEN DNA prior to engaging with Scm3 to be incorporated. This predicts that Cse4 should bind first and then colocalize with Scm3. Can this be detected in the timing of the traces? How often does Scm3 bind to already prebound Cse4 and does this correlate with Cse4 residing longer?
We performed new and more rigorous analyses of the data to address these questions in the revised manuscript. After our analysis to calculate ternary Scm3 off-rates, we analyzed the relative timing of these ternary residences and found that Cse4 can indeed bind to CEN DNA prior to Scm3, and these events do correlate with Cse4 residing longer. Complete analysis of the binding order of Cse4 and Scm3 ternary residences revealed that Scm3 bound to CEN3 DNA prior to Cse4 more often than Cse4 preceded Scm3 (46% vs 34% of ternary residences), with the remaining 20% arriving simultaneously (Figure EV2E). Despite this difference in prevalence, the median lifetimes of the Scm3-first and Scm3-last contexts were similar to each other (Figure EV2F), and both were significantly stabilized when compared to non-ternary residences. These results highlight a potential mechanism whereby Scm3 catalyzes stable Cse4 incorporation at centromeric DNA, rather than delivering it to the centromere, regardless of the order of arrival. These data are now reported and discussed in the revision (p. 12, line 11-p. 13, line 10).
A related and perhaps even more interesting point is whether Scm3 is involved in "loading" of Cse4. If so, then one would expect that once Cse4 is assembled into nucleosomes it should be stable, even if Scm3 leaves. Can the authors extract this from the data? Alternatively, it is possible that Scm3 remains associated to Cse4 to maintain the nucleosome which would imply a more extended role for Scm3 apart from assembly alone. It would be interesting if information on this can be extracted from the data.
Using our updated analysis of ternary Scm3/Cse4 residences, we feel we have addressed this possibility indirectly in a couple of ways. First, we found that in instances where ternary Scm3/Cse4 complexes are formed, Scm3 co-occupied the CEN DNA for an average of 56.0% of the total Cse4 residence time, which is distinctly shorter than the 84% of the total Cse4 residence that was occupied by Ndc10 in Cse4/Ndc10 ternary residences. These findings are consistent with a Scm3-turnover mechanism that occurs after ternary complex formation with Cse4, as we found that Cse4 off-rates were significantly reduced after Scm3 association (Figure 3D).
Second, further analysis of Scm3 residence behavior revealed no significant stabilization of Scm3 after ternary Cse4/Scm3 complex formation versus non-ternary Scm3 residences, in either off-rates (33 hr-1 vs 32 hr-1, Figure EV2C) or median lifetimes (45 s vs 52 s, Figure EV2D). These results indicate that Scm3 association is not stabilized like Cse4 after ternary complex formation and point to a potential catalytic role in Cse4 nucleosome formation, leaving a stable Cse4 nucleosome after turnover. We reported these findings in the revised results section (p. 12, lines 11-16) as well as briefly within the discussion.
Even in the presence of Scm3 and CCAN components, Cse4 appears to have a limited lifetime in the in vitro assay compared to in vivo stability. The authors should speculate on whether activities exist in their extract that actively disassemble nucleosomes. Perhaps ATP could be depleted to inactivate remodellers?
This is an excellent suggestion that we addressed with a new experiment. We performed an endpoint localization experiment with lysate containing fluorescently labeled Ndc10 and Cse4, then removed the lysate and incubated the slide for 24 hr in imaging buffer at RT. Strikingly, the proteins were maintained at the CEN DNA with high total protein colocalization (~75% retention) (shown in Figure EV1B, C). These data suggest that the lysates may contain negative regulatory factors, and we have added this point in the revised text (p. 9, lines 6-25).
We were not able to address whether the removal of ATP stabilized the proteins because we previously found that ATP depletion of the lysates completely abrogates kinetochore assembly in extracts. We will need to eventually dissect the role of remodelers in future work using a different approach.
For Figure 6, it is not clear why AT-tract mutants of CDEII are labeled as genetically stable and genetically unstable. This is confusing, as the "genetically stable" ones show a more than 10-fold increase in chromosome loss rates. Perhaps these can be renamed "unstable" and "very unstable" or "weak" and "strong" mutants, just to make clear that these classes are both poorer than wild type.
We had deferred to the nomenclature used in the previous study (Baker and Rogers, 2005), which initially characterized these mutants. To avoid this confusion, we have renamed these mutants "unstable" and "very unstable" as suggested, to make it clearer that none of these synthetic mutants fully recapitulate WT CEN3 behavior.
Finally, it would be wonderful to include data to assess whether a full Cse4 nucleosome is assembled or a partial nucleosome e.g. just Cse4/H4 tetrasomes. This could be done by tracking the accumulation of H2A or H2B at the CEN3. This would give further insight into what step Scm3 catalyses.
This is a very interesting suggestion that we were not able to directly address. Epitope tagging of these histone proteins in Saccharomyces cerevisiae with endogenous fluorophores has proved challenging due to gene duplication, overall sensitivity to histone levels within the cells, and disruption of histone function by epitope tagging. We hope to find a workable method in the future to address this question directly.
Minor comments:
Typo on page 5, line 1 "nucleosom" missing an e.
We have corrected this in the revised text.
Kaplan-Meyer should be spelled Kaplan-Meier
We have corrected this in the revised text.
The term "censored" is mentioned across many figures but comes up just once in the methods, where it is not clearly explained. Perhaps this could be clarified in the legend.
We have now provided a clear explanation of the term “censored” in the text on p. 28, lines 25-27. It has also been added to the figure legends and reported in the Statistical tests section of the methods section to address this point.
The abstract states that Cse4 can arrive at the centromere without its chaperone. More likely, Cse4 is in complex with other chaperones that may allow it to bind. Perhaps the abstract can be modified to read "Cse4 can arrive at the centromere without its dedicated centromere-specific chaperone Scm3..."
We updated the abstract to reflect this point in the revised text.
Related to this point, the discussion states the possibility that Cse4 can initially bind to CEN3 via other more general chaperones. However, it should be acknowledged that transient Cse4 binding in their assay may simply occur through mass action due to high concentrations of CEN3 DNA. In vivo, this transient binding may not be that relevant.
We acknowledge this potential caveat in the discussion section (p. 20 lines 15-18), although we feel this is somewhat unlikely due to our observation of significantly reduced Cse4 binding on CDEIIImut DNA despite DNA concentrations being similar to previous assays (Figure EV3A). We speculate that some of this transient behavior is at least in part driven by two major factors: negative regulatory factors within our cellular extracts that counter nucleosome formation (as explored in Figure EV1B-C) and photostability of the endogenous fluorophores used within the study (Figure EV1D-E). These points were highlighted within the second paragraph of page 9.
Reviewer #2 Major comments:
- Figure S1A-D seem like some of the most compelling data in the paper to bolster the rigor of their experimental setup. There appears to be plenty of space to include these data in the main figure set in Figure 1 after panel D. The authors would be well served to consider moving S1A-D somewhere in the main figure set.
We appreciate the helpful feedback on the importance of the data found in Supplemental Figure 1 and have now incorporated it into Figure 1 within the main text as suggested.
The authors conclude that Ndc10 recruits HJURP(Scm3) to the yeast point centromeres. If this is the case, can the TIRFM assay measure ternary residence lifetimes of complexes between Ndc10/HJURP(Scm3)/CEN DNA?
We made the conclusion that Ndc10 recruits Scm3 based on previous publications showing this requirement in vivo. We have now attempted to address this in our assay indirectly by monitoring Scm3 behavior on the CDEIIImut CEN3 DNA that lacks Ndc10. Surprisingly, we found that Scm3 interacted similarly with CDEIIImut CEN3 DNA and actually showed an increase in median lifetimes vs. CEN3 DNA (Figure EV2B), suggesting its intrinsic DNA-binding activity may be the primary driver of its CEN DNA binding and that stable Cse4 association is required for its turnover. These data suggest that Ndc10 is not driving Scm3 interaction (or targeting) to CEN3. We are grateful to the reviewer for pointing this out and have adjusted our conclusions in the revised manuscript (p. 12, line 26 to p. 13, line 10).
Throughout the manuscript short- and long-term residence lifespans are mentioned, referencing the figures containing lines with lengths depicting residence times. This is a qualitative reference to short and long residences. Can the authors provide a graph for short-term (<300 s) and long-term (>300 s) residence life-spans for CENP-A alone, CENP-A/Ndc10, and CENP-A/HJURP on CEN3 DNA? Or some figure similar to Figure 3C, but reporting the proportion of short-term vs long-term residence?
We typically used Kaplan-Meier survival function estimates to compare binding behavior but agree that quantification of residences within these contexts may be easier for the reader to follow. We have therefore quantified short-term (<300 s) and long-term (>300 s) residences as suggested and added them as a panel (F) to Figure EV1 and as panel (E) in Figure 3.
The choice of CCAN components for analysis in Fig. 5 is interesting, but many readers may be curious why Mif2 wasn't selected for disruption, since it has such a cozy placement with CENP-A and CEN DNA. Can the choice not to include Mif2 mutants/degrons be mentioned/justified in the text (unless, even better yet, they choose to address the Mif2 role directly in new experimentation)?
We relied on structural models to choose CCAN proteins that are in close proximity to the DNA. Because Mif2 is not in these structures, we did not include it in our studies. We have explained this in the revised text (p. 16 lines 14-17) and agree it is an interesting future area of study.
Minor comments:
- Are these whole cell extracts (WCE) DNA-free? I'm curious if there is any competition from endogenous DNA from the yeast cellular extract.
The extracts are not DNA-free so it is likely there is some competition from endogenous DNA. We have avoided enzymatically removing the DNA since the TIRF assay depends on the integrity of DNA.
In relation to Mif2 and comment #4 above, do the authors make any connection to their results with synthetic nucleosome sequences not being conducive to yeast centromere formation with the prior observation (Allu et al 2019) using recombinant components that the human version of Mif2 more easily saturates its binding sites on CENP-A nucleosomes when they are assembled with natural centromere DNA rather than the Widom 601 sequence?
We did not speculate on the role of Mif2 and stability of synthetic nucleosome sequences. This is an interesting point but the differences between the yeast and human systems combined with the fact we have not yet started to study Mif2 made it seem too premature to include in this manuscript.
Providing a gel (or other measure) of the DNA templates (750, 250, and 80 bp) used in TIRFM assay would be nice to show to confirm the designed size of the pre-tethered DNA.
We agree this is a helpful control and we have now included it as a panel in Figure EV5B.
- Some of the references to figures/figure panels in the main text do not match the figures (Discussion, pg 16, paragraphs 1 & 2; pg 18, paragraph 1).
We have updated the mentioned references to point to the correct figures in both sections of the discussion.
Reviewer #3 Major comments: - Statistics are highly recommended for all the data in the ms.
We have included log-rank analysis for instances where two survival functions were plotted together, where appropriate. P-values for all of these analyses are reported in the appropriate figure legends.
- At what rate is data collected in the TIRFM setup? For clarity for the reader, it is important to provide imaging details for the time-lapse. What is the impact of photobleaching on the stability of the fluorophore signal? Please clarify.
This is a helpful suggestion and we have now included imaging details (like time intervals for each channel) when the real-time assay is introduced in the results section (p. 8, lines 10-12). We have also provided additional details for the photobleaching estimates and how these might censor data in turn (p. 9, lines 14-25). Although photobleaching is a primary limitation of the time-lapse assays, we point out that it is appropriate to compare protein behavior under identical imaging parameters within differing contexts. We also noted that we typically compared time-lapse behavior (which is affected by photobleaching) with endpoint assays to ensure consistent behavior.
- The power of single-molecule techniques is precisely that such data can be made quantitative. Indeed, the Kaplan-Meier graphs do show nice quantitative results. Unfortunately, few quantitative measurements are reported in the text. In fact, the Kaplan-Meier graphs can be interpreted in a quantitative manner, such as the probability of residency time. Although in most cases the statistical significance between two conditions can be expected, this is not formally calculated and shown. What is the 50% survival time of Cse4 alone or with Ndc10, for instance? This manuscript would greatly benefit from a quantitative approach to the data, including a summary table of the results of the various conditions tested. Please add this important table.
We initially put the quantitative data in the figure legends and omitted it from the main text for simplicity, but we appreciate the Reviewer's point. We note that we performed log-rank tests on all Kaplan-Meier analyses that are plotted on the same graph to provide statistical differences where applicable and have included all P-values in the figure legends. In response to the suggestion, we have now also included a table (Table 1) that contains the median survival times for all proteins tested, as well as the median survival times for the differing contexts tested, for quick reference and easier comparison for the reader.
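For readers less familiar with these survival statistics, the quantities discussed here (a Kaplan-Meier survival curve built from right-censored residence lifetimes, and the median survival time read off it) can be sketched in a few lines of Python. This is a generic textbook estimator with invented toy numbers, not the authors' actual analysis pipeline:

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate for right-censored lifetimes.

    durations: lifetime of each residence (s); observed: True if the
    departure was seen, False if the residence was censored (e.g. the
    movie ended or the fluorophore photobleached first).
    """
    at_risk = len(durations)
    survival = 1.0
    curve = []  # (time, S(t)) recorded at each observed departure
    for t, seen in sorted(zip(durations, observed)):
        if seen:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1  # censored residences simply leave the risk set
    return curve

def median_survival(curve):
    """First time at which the survival estimate drops to <= 0.5."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached within the observation window

# toy residence lifetimes (s); False marks residences censored at movie end
times    = [30, 45, 60, 90, 120, 150, 200, 300]
observed = [True, True, True, True, False, True, True, False]
curve = kaplan_meier(times, observed)
print(median_survival(curve))  # -> 90
```

A log-rank comparison of two such curves, as reported in the figure legends, tests whether the underlying survival functions differ and is built on the same censoring bookkeeping.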
- This reviewer would expect that the endpoint (90 min) would roughly reflect the occupancy results from time-lapse (45 min) experiments. Based on the data presented in Figures 1, 2, S1-3, this does not appear to be the case. 50% of Cse4-GFP and 70% Ndc10-mCherry colocalized with CEN3 DNA in the endpoint experiment, whereas in Fig 2C, ~30 and ~52 traces were positive for Cse4-GFP and Ndc10-mCherry, resp. with the former having drastically lower residency survival compared to Ndc10-mCherry. If indeed, 50% of Cse4-GFP makes it to the endpoint, about 50% of all traces should reach the end of the 45 minutes time-lapse window. Only about 1/3 of all positive Cse4-GFP traces can be seen at the end of the 45 min window. Could this be due to technical issues of photostability of GFP? Why does the colocalization of Cse4 signal with the DNA signal disappear so readily? Are Cse4 so unstable? Is the same true for canonical H3 nucleosomes? This unlikely true for nucleosomes in cells.
This is a valid concern, and we appreciate the thoughtful critique. The inconsistency noted between the initial endpoint colocalization and those reported later in the paper is mainly due to the difference between yeast strains carrying Cse4 tagged alone in comparison to multiple kinetochore proteins with tags that exhibit mild genetic interactions. This point is now explained in the revised text (p. 8, line 29 to p. 9, line 3).
Photostability is also a factor in the live imaging experiments compared to the endpoint localization assays. However, our photobleaching estimates suggest that the Cse4 lifetimes are not limited by photobleaching (Figure EV1D, E) so we do not believe that accounts for the differences between experiments and it is mainly the presence of multiple epitope tags.
In regard to why Cse4 is not more stable, Reviewer 1 had the same question so we performed an experiment to address whether the lysate contains negative regulatory factors. We found that Cse4 is stable once the lysate is removed (Figure EV1B, C), consistent with the idea that there are factors that disrupt it in the lysate. We discuss these potential reasons for transient association in the revised text (p. 9, lines 4-25).
It should also be noted that there are clear differences in nucleosome formation in reconstitutions and within our extracts, as evidenced by the Widom-601 DNA data (Figure 6D). This was not necessarily unexpected, as extracts are a much more complex medium, but we are encouraged by the fact that, once formed, these Cse4-containing particles on CEN DNA are perhaps more stable than their reconstituted counterparts, which have so far been unsuitable for structural studies.
Along the same lines, in Suppl Fig 3 there is a disconnect between residency survival and endpoint colocalization on either CEN3, CEN7, or CEN9. What could be the underlying mechanism behind the discordance of the endpoint results and time-lapse results? Could this be the result of experimental differences?
We are grateful that this discrepancy was highlighted to us, as upon closer examination we discovered that endpoint colocalization analysis had not been correctly updated in the figure to include data from equivalent genetic backgrounds as the CEN3 and CEN9 assays. Updating the figure in Appendix Figure S2 to include this data remedied this discrepancy.
- What fraction of particles show colocalization between Cse4-GFP and Ndc10-mCherry? What fraction of occupancy time show colocalization between Cse4-GFP and Ndc10-mCherry? Altogether, understanding the limitation and benefits of endpoint analysis and time-lapse analysis in this particular experimental set-up is important to be able to interpret the results. Please clarify these points.
We have now added particle numbers to all survival estimate plots which makes it much easier for the reader to interpret the proportion of Cse4 residences that are ternary vs. non-ternary in instances where off-rates were quantified and Kaplan-Meier analysis was performed on the resulting lifetimes. We determined that for ternary Cse4-Ndc10 residences, Cse4 and Ndc10 co-occupied the CEN DNA an average of ~84% of the total Cse4 residence.
- Page 9, third sentence of third paragraph it is stated that the "results suggests that Scm3 helps promote more stable binding of Cse4 ...". This is indeed one possible explanation of the results, and this possibility is tested by overexpressing Psh1 or Scm3 by endpoint colocalization analysis. 1) Taking the concerns regarding the endpoint vs time-lapse results into account, wouldn't it be more accurate to compare either time-lapse results against each other or endpoint results? 2) Alternatively, more stable Cse4 particles are able to recruit Scm3, simply because of the increased binding opportunity of a more stable particle. In this scenario, just the residency time of Cse4 alone is the predicting factor in likelihood to associate with Scm3. To test the latter possibility, Cse4 stability would need to be altered. Please consider this experiment- should be relatively easy with the right mutant of either CSE4 or CDEII (see Luger or Wu papers).
We appreciate the points raised here and addressed both as follows. For point (1), we altered the text in the third paragraph of the section, The conserved chaperone Scm3^HJURP is a limiting cofactor that promotes stable association of Cse4^CENP-A with the centromere, to make it clearer to the reader that in the experiments presented in Figure EV4, endpoint analysis results were only compared to each other, and likewise time-lapse experiments were only compared to each other for each genetic background. While the results were consistent between experiments, we did not directly compare results from one to the results of another; instead, we used both assays to characterize Cse4^CENP-A behavior more completely in differing contexts.
To test the alternative hypothesis proposed in point (2), we sought to avoid potential selection bias by utilizing off-rate analysis, which enabled us to separate the portions of Cse4 residences that occurred prior to ternary association with either Ndc10 or Scm3. This unbiased approach allowed us to compare Cse4 residence lifetimes pre and post ternary association and we found that there were still significant differences in off-rates and median lifetimes of the associated ternary and non-ternary residences using this updated analysis. We thank the reviewer for helping to guide us towards this more robust analysis.
Based on the recommendation in point (2), we also sought to directly compare the behavior of Cse4 and Scm3 on the "Very Unstable" CDEII mutants described in the section, DNA-composition of centromeres contributes to genetic stability through Cse4^CENP-A recruitment. In this case, equivalent extracts were used and Cse4 stability was altered directly via the DNA template. When the off-rates of ternary^Scm3 Cse4 residences were compared, we found a significant increase in off-rates of Cse4 on the "Very Unstable" CDEII mutant CEN DNA (Appendix Figure S3B) compared to WT CEN DNA. If the alternative hypothesis proposed in point (2) were true, we would expect this reduction in median lifetime to correlate with a lower proportion of Cse4-Scm3 ternary association, but quantification yielded proportions that, while varied, were not on average lower than the proportion of Cse4-Scm3 association on CEN3 DNA (0.23 vs 0.31, Appendix Figure S3A). This finding, taken together with the fact that it would be difficult for us to propose an alternative hypothesis that explains the results outlined in Figure EV4, supports our hypothesis that Scm3 helps promote more stable binding of Cse4 and that this stabilization is directly influenced by DNA sequence composition.
- In Figure 1C and Supplemental Figure 5B, there appear to be foci that are CEN3-ATTO-647 positive but Cse4-GFP negative, and vice versa. It seems logical that there are DNA molecules that didn't reconstitute Cse4 nucleosomes. But how can there be Cse4-GFP positive foci without a positive DNA signal? Is it possible that unlabeled DNA makes it onto the flow chamber? If so, can this unlabeled DNA be visualized with Sytox Orange, for instance, to confirm that no spurious Cse4 deposition occurred? Please clarify.
It is unlikely that random associations will colocalize with the labeled DNA, based on control assays (Supp. Figure 1C) that show this occurs rarely.
- On page 10, at the end of the first paragraph, growth phenotype of pGAL-SCM3 and pGAL-PSH1 mutants were tested. On GAL plates, pGAL-PSH1 shows reduced growth, but not pGAL-SCM3. Wouldn't a more accurate conclusion be that Psh1 is limiting stable centromeric nucleosome formation, instead of Scm3?
The growth defects on galactose don't necessarily mean that a factor is limiting in cells. Instead, they report on whether changing the relative amounts of the complex leads to phenotypes in cells, which could be the result of many causes that would require characterization of the phenotypes to understand. In this case, we presume that Psh1 titrates Scm3 away from Cse4 to prevent nucleosome formation in vivo. However, we have not directly tested this, so we just concluded that the relative levels of the complexes are important for cell growth.
- In the section where DNA was tethered at either one or both ends, an important control is missing. How does such a set-up impact nucleosome formation in general? Can H3 nucleosomes form on random DNA that is tethered at either one or both ends? Does this potentially affect the unwrapping potential/topology of AT-tract DNA? Please comment.
This is an interesting point and one that we hope to explore further in the future but was beyond the scope of this paper. We suspect that restrictions via tethering would also limit canonical nucleosome formation on random DNA. We envision that unwrapping may be affected as well and hope to explore this via other, potentially better suited techniques like optical tweezers.
Minor comments: - Censored data points are not explained in the text.
A brief explanation of censoring was added to the figure legends, and we have now provided a clear explanation of the term "censored" in the text on p. 28, lines 25-27. It has also been reported in the Statistical tests section of the methods to address this point.
- Number of particles tested should be reported in the main and supplemental figures, not just the legends for those readers who first skim the manuscript before deciding to read it.
We added these values to all Kaplan-Meier plots in all figures (Main, Expanded View, and Appendix).
- Typo on page 5, first line: "nucleosom" should be "nucleosome".
We fixed this in the text.
- Typo on page 9, second line: sentence is missing something "... is required for Scm3-dependent ..."
We fixed this in the text.
- It is unclear how the difference in Supplemental figure 5D was calculated.
We included log-rank-generated P-values as well as a description in the figure legend of Figure EV4.
- Figure 4C: why are there more Ndc10-mCherry foci observed in double tethered constructs vs single tethered constructs?
There can be variation in DNA density between slides, particularly with non-dye-labeled DNA template. We updated figure panel C to include a representative image with similar Ndc10 density.
- For the display of individual traces as shown in Fig 2B, 3A, 4E, and 5E, it might be more informative if the z-normalized signal and the binary read-out are shown in separate windows to better appreciate how the z-normalized signal was interpreted.
Due to spacing limits within figures we attempted to accommodate this by reducing the thickness of the binary read-out and ensured that the raw data traces were overlaid for easier interpretation by the reader.
- Page 17, fifth line of the second paragraph, it is stated that a conserved feature of centromeres is their AT-richness. This is most certainly true for the majority of species studied thus far, but bovine centromeres for instance are about 54% GC rich. Indeed, Melters et al 2013 Genome Biol showed that in certain clades centromeres can be comprised of GC-rich sequences. It might be worthwhile to nuance this statement.
We have updated the text to reflect that AT-rich DNA is widely conserved but not a universal feature of centromeres.
- Page 17, last paragraph. Work by Karolin Luger and Carl Wu is cited in relationship to AT-rich DNA being unfavorable for canonical nucleosome deposition. A citation is missing here: Stormberg & Lyubchenko 2022 IJMS 23(19): 11385. Also, the first person to show that AT-tracts affect nucleosome positioning are Andrew Travers and Drew. This landmark work should be cited.
We thank the Reviewer for noticing this and have added the appropriate citations.
- Page 18, 9th line from the top, it is stated that yeast centromeres are sensitive to negative genetic drift. This reviewer is of the understanding that genetic drift is a statistical fluctuation of allele frequency, which can result in either gain or loss of specific alleles. Population size is a major factor in the potential power of genetic drift. The smaller a population, the greater the effect. Budding yeast is found in large numbers, which would mean that drift only has limited predicted impact. Maybe the authors meant to use the term purifying selection?
We appreciate this clarification and agree with the reviewer; we have updated the manuscript to cite purifying selection, rather than genetic drift, at centromeres.
Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.
The manuscript "Single molecule visualization of native centromeric nucleosome formation reveals coordinated deposition by kinetochore proteins and centromere DNA sequence" by Popchock and colleagues describes a new high-throughput single-molecule technique that combines both in vitro and in vivo sample sources. Budding yeast centromeres are genetically defined, which makes them ideal for studying short DNA segments at the single-molecule level. By flowing in whole cell lysates, Cse4 nucleosomes can form under near-physiological conditions. Two analytical experiments were performed: endpoint and time-lapse. In the former case, nucleosomes were allowed to form within 90 minutes; in the latter case, nucleosome formation was tracked for up to 45 minutes. In addition, well-described genetic mutants were used to assess the stability of Cse4 nucleosomes, as well as different DNA sequences (this we particularly liked; well done). Overall, this is a very interesting technique with potential to be useful for studying any DNA-based effect, ranging from DNA repair to kinetochore assembly. This is strong and impactful work, and it demonstrates the potential this kind of microscopy has for solving kinetic problems in the field. We think it is worthy of publication once the technical and experimental concerns are addressed, which would elevate the ms significantly.
Major comments:
Statistics are highly recommended for all the data in the ms.
Minor comments
This study developed an in vitro imaging technique that allows native proteins from whole cell lysates to associate with a specific DNA sequence that is fixed to a surface. By labeling proteins with specific fluorophore tags, colocalization provides insightful proximity data. By creating mutants, the assembly or disassembly of protein complexes on native or mutated DNAs can be tracked in real time. In a way, this is a huge leap forward from the gel-shift EMSA assays that have traditionally been used for comparative kinetics in biochemistry.
This makes this technique ideal for studying DNA-binding complexes and, potentially, even RNA-binding complexes. This study shows both the importance of using genetic mutants and of testing the effects of the fixed DNA sequence. As many individual fixed DNA molecules can be tracked at once, it allows for high-throughput analysis, similar to the powerful DNA curtain work from Eric Greene's lab. Overall, this is a promising new single-molecule technique that combines in vitro and ex vivo sample sources, and it will likely appeal to a broad range of molecular and biophysics scientists.
Scrollable windows can cause some confusion, leading to difficulties in understanding and errors in sentence breaks when using TTS.
the room is empty, but for sunlight pouring through two high windows in the back wall. The room is solemn, even forbidding.
The room is pierced by sunlight coming from two back windows, highlighting a dramatic contrast between the natural joy and promise of spring outside and the stark, gloomy, oppressive, and intimidating interior ("solemn, even forbidding").
Joint Public Review:
In the current paper, Jones et al. describe a new framework, named coccinella, for real-time high-throughput behavioral analysis aimed at reducing the cost of analyzing behavior. In the setup used here, each fly is confined to a small circular arena and able to walk around on an agar bed spiked with nutrients or a pharmacological agent. The new framework, built on the researchers' previously developed platform Ethoscope, relies on relatively low-cost Raspberry Pi video cameras to acquire images at ~0.5 Hz and pull out, in real time, the maximal velocity (parameter extraction) during 10-second windows from each video. Thus, the program produces a text file rather than voluminous videos, which would require storage facilities for large amounts of video data, a prohibitive step for many behavioral analyses. The maximal velocity time-series is then fed to an algorithm called Highly Comparative Time-Series Classification (HCTSA) (itself based on a large number of feature extraction algorithms) developed by other researchers. HCTSA identifies statistically salient features in the time-series, which are then passed on to a type of linear classifier algorithm called a support vector machine (SVM). In cases where such analyses are sufficient for characterizing the behaviors of interest, this system performs as well as other state-of-the-art systems used in behavioral analysis (e.g., DeepLabCut).
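The parameter-extraction step described above, reducing each 10-second window of tracking to a single maximal velocity, can be sketched roughly as follows; the function name and data layout are illustrative assumptions, not coccinella's actual interface:

```python
import math

def max_velocity_per_window(track, window=10.0):
    """Reduce a (t, x, y) trajectory to one maximal speed per time window.

    track: list of (time_s, x, y) samples, time-sorted. Returns a list of
    per-window maxima -- the compact time series that replaces raw video.
    """
    maxima = {}
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order frames
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        w = int(t1 // window)  # index of the window this sample falls in
        maxima[w] = max(maxima.get(w, 0.0), speed)
    return [maxima[w] for w in sorted(maxima)]

# toy trajectory sampled at ~0.5 Hz (one frame every 2 s)
track = [(0, 0, 0), (2, 1, 0), (4, 1, 2), (6, 5, 2), (8, 5, 5),
         (10, 5, 5), (12, 9, 5)]
print(max_velocity_per_window(track))  # -> [2.0, 2.0]
```

Storing only these per-window maxima is what turns hours of video into a small text file.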
In a pharmacobehavior paradigm testing different chemicals, the authors show that coccinella can identify specific compounds as effectively as other more time- and resource-consuming systems. The new paradigm should be of interest to researchers involved in drug screens, and more generally, in high-throughput analysis focused on gross locomotor defects in fruit flies, such as identification of sleep phenotypes. By extracting/saving only the maximal velocity from video clips, the method is fast. However, the rapidity of the platform comes at a cost: loss of information on subtle but important behavioral alterations. When seeking subtle modifications in animal behavior, solutions like DeepLabCut, which are admittedly slower but far superior in terms of the level of detail they yield, would be more appropriate.
The manuscript reads well, and it is scientifically solid.
1- The fact that Coccinella runs on Ethoscopes, an open source hardware platform described by the same group, is very useful because the relevant publication describes Ethoscope in detail. However, the current version of the paper does not offer details or alternatives for users that would like to test the framework, but do not have an Ethoscope. Would it be possible to overcome this barrier and have coccinella run with any video data (and, thus, potentially be used to analyze data obtained from other animal models)?
2- Readers who want background on the analytical approaches that the platform relies on following maximal velocity extraction will have to consult the original publications. In particular, the current manuscript does not provide much information on Highly Comparative Time-Series Classification (HCTSA) or SVM; this may be reasonable because the methods were developed earlier by others. While some readers may find that the lack of details increases the manuscript's readability, others may be left wanting to see more discussion of these not-so-trivial approaches. In addition, it is worth noting that the same authors who published the HCTSA method also described a shorter version named catch22, which runs faster with similar output. Thus, explaining in more detail how HCTSA operates, considering that it is a relatively new method, would make the method more convincing.
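As a rough illustration of what such a feature-based classification pipeline does, the sketch below computes a tiny feature vector (a stand-in for HCTSA's thousands of features, or catch22's 22) and classifies a trace by its nearest class centroid (a crude stand-in for the downstream SVM). All names and toy data are invented for illustration:

```python
def features(ts):
    """Toy summary features of a 1-D time series: mean, standard
    deviation, and lag-1 autocorrelation."""
    n = len(ts)
    mean = sum(ts) / n
    var = sum((v - mean) ** 2 for v in ts) / n
    ac1 = 0.0
    if var > 0:
        ac1 = sum((ts[i] - mean) * (ts[i + 1] - mean)
                  for i in range(n - 1)) / ((n - 1) * var)
    return [mean, var ** 0.5, ac1]

def nearest_centroid(train, labels, ts):
    """Classify ts by distance to the mean feature vector of each class."""
    cents = {}
    for series, lab in zip(train, labels):
        cents.setdefault(lab, []).append(features(series))
    best = None
    for lab, vecs in cents.items():
        c = [sum(col) / len(col) for col in zip(*vecs)]
        d = sum((a - b) ** 2 for a, b in zip(features(ts), c))
        if best is None or d < best[0]:
            best = (d, lab)
    return best[1]

# toy data: "active" flies have high-variance velocity traces
quiet  = [[0.1, 0.2, 0.1, 0.2, 0.1, 0.2]] * 3
active = [[0.1, 2.0, 0.3, 1.8, 0.2, 2.1]] * 3
train = quiet + active
labels = ["quiet"] * 3 + ["active"] * 3
print(nearest_centroid(train, labels, [0.2, 1.9, 0.1, 2.2, 0.3, 1.7]))
```

The real pipeline differs mainly in scale: HCTSA extracts a far richer feature set, and the SVM learns a weighted decision boundary rather than comparing raw centroid distances.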
How to use rime-ice (雾凇拼音) on Windows
Rime input method tutorial #todo
Use software like BleachBit (Linux, Windows), BCWipe, DeleteOnClick, and Eraser (Windows), or Permanent Eraser or 'secure empty trash' (Mac) to securely delete data.
Note that some conventional secure-deletion methods do not translate well from HDDs to SSDs: data on an SSD deleted with software designed for HDDs may not actually be physically overwritten, because SSDs often 'pretend' to behave like HDDs to the operating system for legacy compatibility while doing something different from what they report.
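On Linux, for example, GNU coreutils' `shred` implements this overwrite-then-delete approach (shown as a sketch; per the caveat above, the overwrite is only trustworthy on HDDs, since SSD wear-leveling and copy-on-write filesystems can leave stale copies of the blocks elsewhere):

```shell
# Overwrite the file 3 times, finish with a pass of zeros to hide the
# shredding, then unlink it. Do not rely on this on SSDs.
shred --iterations=3 --zero --remove secret.txt
```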
There were a lot of rules for the desk: no phones, no watches, nothing to take pictures with. No paper, no pens, nothing to take notes with. A few times, we saw drones flying outside our windows. Supposedly spies were trying to film what was going on in the company. Everyone was instructed to lower the curtains when that happened. Once, a journalist was outside the building. We were told not to leave the building and not to talk to this journalist. For the company, the journalists were like enemies.
Quote of a digital worker's words. Source (following note 12): "Die Zeit, Feuilleton Ausgabe no. 14, 30 March 2023—a joint cultural supplement project on artificial intelligence by Marie Serah Ebcinoglu, Eike Kühn, Jan Lichte, Peter Neumann, Hanno Rauterberg, Malin Schulz, Hito Steyerl and Tobias Timm. This interview was conducted by Tobias Timm."
Author Response
Reviewer #1 (Public Review):
The authors present a study of visuo-motor coupling primarily using wide-field calcium imaging to measure activity across the dorsal visual cortex. They used different mouse lines or systemically injected viral vectors to allow imaging of calcium activity from specific cell-types with a particular focus on a mouse-line that expresses GCaMP in layer 5 IT (intratelencephalic) neurons. They examined the question of how the neural response to predictable visual input, as a consequence of self-motion, differed from responses to unpredictable input. They identify layer 5 IT cells as having a different response pattern to other cell-types/layers in that they show differences in their response to closed-loop (i.e. predictable) vs open-loop (i.e. unpredictable) stimulation whereas other cell-types showed similar activity patterns between these two conditions. They analyze the latencies of responses to visuomotor prediction errors obtained by briefly pausing the display while the mouse is running, causing a negative prediction error, or by presenting an unpredicted visual input causing a positive prediction error. They suggest that neural responses related to these prediction errors originate in V1; however, I would caution against over-interpretation of this finding, as judging the latency of slow calcium responses in wide-field signals is very challenging and this result was not statistically compared between areas.
Surprisingly, they find that presentation of a visual grating actually decreases the responses of L5 IT cells in V1. They interpret their results within a predictive coding framework that the last author has previously proposed. The response pattern of the L5 IT cells leads them to propose that these cells may act as 'internal representation' neurons that carry a representation of the brain's model of its environment, though this is rather speculative. They subsequently examine the responses of these cells to anti-psychotic drugs (e.g. clozapine) with the reasoning that a leading theory of schizophrenia is a disturbance of the brain's internal model and/or a failure to correctly predict the sensory consequences of self-movement. They find that anti-psychotic drugs strongly enhance responses of L5 IT cells to locomotion while having little effect on other cell-types. Finally, they suggest that anti-psychotics reduce long-range correlations between (predominantly) L5 cells and reduce the propagation of prediction errors to higher visual areas and suggest this may be a mechanism by which these drugs reduce hallucinations/psychosis.
This is a large study containing a screening of many mouse-lines/expression profiles using wide-field calcium imaging. Wide-field imaging has its caveats, including a broad point-spread function of the signal and susceptibility to hemodynamic artifacts, which can make interpretation of results difficult. The authors acknowledge these problems and directly address the hemodynamic occlusion problem. It was reassuring to see supplementary 2-photon imaging of soma to complement this data-set, even though this is rather briefly described in the paper.
We will expand on the discussion of caveats as suggested.
Overall the paper's strengths are its identification of a very different response profile in the L5 IT cells compared to other layers/cell-types, which suggests an important role for these cells in handling the integration of self-motion-generated sensory predictions with sensory input. The interpretation of the responses to anti-psychotic drugs is more speculative, but the result appears robust and provides an interesting basis for further studies of this effect with more specific recording techniques and possibly behavioral measures.
Reviewer #2 (Public Review):
Summary:
This work investigates the effects of various antipsychotic drugs on cortical responses during visuomotor integration. Using wide-field calcium imaging in a virtual reality setup, the researchers compare neuronal responses to self-generated movement during locomotion-congruent (closed loop) or locomotion-incongruent (open loop) visual stimulation. Moreover, they probe responses to unexpected visual events (halt of visual flow, sudden-onset drifting grating). The researchers find that, in contrast to a variety of excitatory and inhibitory cell types, genetically defined layer 5 excitatory neurons distinguish between the closed and the open loop condition and exhibit activity patterns in visual cortex in response to unexpected events, consistent with unsigned prediction error coding. Motivated by the idea that prediction error coding is aberrant in psychosis, the authors then inject the antipsychotic drug clozapine, and observe that this intervention specifically affects closed loop responses of layer 5 excitatory neurons, blunting the distinction between the open and closed loop conditions. Clozapine also leads to a decrease in long-range correlations of L5 activity between different brain regions, and similar effects are observed for two other antipsychotics, aripiprazole and haloperidol, but not for the stimulant amphetamine. The authors suggest that altered prediction error coding in layer 5 excitatory neurons due to reduced long-range correlations in L5 neurons might be a major effect of antipsychotic drugs and speculate that this might serve as a new biomarker for drug development.
Strengths:
- Relevant and interesting research question:
The distinction between expected and unexpected stimuli is blunted in psychosis but the neural mechanisms remain unclear. Therefore, it is critical to understand whether and how antipsychotic drugs used to treat psychosis affect cortical responses to expected and unexpected stimuli. This study provides important insights into this question by identifying a specific cortical cell type and long-range interactions as potential targets. The authors identify layer 5 excitatory neurons as a site where functional effects of antipsychotic drugs manifest. This is particularly interesting as these deep layer neurons have been proposed to play a crucial role in computing the integration of predictions, which is thought to be disrupted in psychosis. This work therefore has the potential to guide future investigations on psychosis and predictive coding towards these layer 5 neurons, and ultimately improve our understanding of the neural basis of psychotic symptoms.
- Broad investigation of different cell types and cortical regions:
One of the major strengths of this study is its quasi-systematic approach to cell types and cortical regions. By analysing a wide range of genetically defined excitatory and inhibitory cell types, the authors were able to identify layer 5 excitatory neurons as exhibiting the strongest responses to unexpected vs. expected stimuli and being the most affected by antipsychotic drugs. Hence, this quasi-systematic approach provides valuable insights into the functional effects of antipsychotic drugs on the brain, and can guide future investigations towards the mechanisms by which these medications affect cortical neurons.
- Bridging theory with experiments
Another strength of this study is its theoretical framework, which is grounded in the predictive coding theory. The authors use this theory as a guiding principle to motivate their experimental approach connecting visual responses in different layers with psychosis and antipsychotic drugs. This integration of theory and experimentation is a powerful approach to tie together the various findings the authors present and to contribute to the development of a coherent model of how the brain processes visual information both in health and in disease.
Weaknesses:
- Unclear relevance for psychosis research
From the study, it remains unclear whether the findings might indeed be able to normalise altered predictive coding in psychosis. Psychosis is characterised by a blunted distinction between predicted and unpredicted stimuli. The results of this study indicate that antipsychotic drugs further blunt the distinction between predicted and unpredicted stimuli, which would suggest that antipsychotic drugs would deteriorate rather than ameliorate the predictive coding deficit found in psychosis. However, these findings were based on observations in wild-type mice at baseline. Given that antipsychotics are thought to have little effect in health but potent antipsychotic effects in psychosis, it seems possible that the presented results might be different in a condition modelling a psychotic state, for example after a dopamine-agonistic or an NMDA-antagonistic challenge. Therefore, future work in models of psychotic states is needed to further investigate the translational relevance of these findings.
We fully agree that it is unclear how the effects of antipsychotics in mice relate to the drug effects that would be observed in schizophrenic patients. It is also correct that the reduction of the difference between closed and open loop locomotion onset response in L5 IT neurons (Figure 4) is not what we would have expected to find under the assumption that psychosis is characterized by a blunted distinction between predicted and unpredicted stimuli. We are not sure how to interpret this finding. However, it is probably important to note that the difference is only reduced when using a normalized comparison. Looking just at the subtraction of the two curves, the difference between closed and open loop locomotion onset responses remains unchanged before and after antipsychotic drug injection. The finding of a decorrelation of layer 5 activity, however, is easier to interpret under the assumption that layer 5 functions as an internal representation. If speech hallucinations, for example, are the consequence of a spurious activation of internal representations in speech processing areas of cortex, then antipsychotics might reduce the probability of these spurious activation events by reducing the lateral influence between layer 5 neurons in different cortical areas.
We do indeed plan to address the question of how antipsychotics influence cortical processing in mouse models of schizophrenia in the future.
- Incomplete testing of predictive coding interpretation
While the investigation of neuronal responses to different visual flow stimuli is interesting, it remains open whether these responses indeed reflect internal representations in the framework of predictive coding. While the responses are consistent with internal representation as defined by the researchers, i.e., unsigned prediction error signals, an alternative interpretation might be that the responses simply reflect sensory bottom-up signals that are more related to some low-level stimulus characteristics than to prediction errors.
This is correct – we will expand on the discussion of this point in the manuscript.
Moreover, this interpretational uncertainty is compounded by the fact that the experimental paradigms used were not suited to testing whether behaviour is impacted as a function of the visual stimulation, which makes it difficult to assess what the internal representation of the animal actually was. For these reasons, the observed effects might reflect simple bottom-up sensory processing alterations and not necessarily have any functional consequences. While this potential alternative explanation does not detract from the value of the study, future work would be needed to explain the effect of antipsychotic drugs on responses to visual flow. For example, experimental designs that systematically vary the predictive strength of coupled events or that include a behavioural readout might be better suited to drawing conclusions about whether antipsychotic drugs indeed alter internal representations.
We agree that much additional work will be necessary to identify internal representation neurons. However, it is difficult to envision how behavioral output could be used to make inferences about internal representations in sensory areas of cortex. In humans, for example, there is evidence that internal representations in visual cortex and behavioral output are not always directly related: binocular rivalry activates representations of both stimuli shown in visual cortex, while the conscious experience that drives behavioral output is only of one of the two stimuli. Hence, we would assume that the internal representation in visual cortex does not necessarily relate to behavioral output.
- Methodological constraints of experimental design
While the study findings provide valuable insights into the potential effects of antipsychotic drugs, it is important to acknowledge that there may be some methodological constraints that could impact the interpretation of the results. More specifically, the experimental design does not include a negative control condition or different doses. These conditions would help to ensure that the observed effects are not due to unspecific effects related to injection-induced stress or time, and not confined to a narrow dose range that might or might not reflect therapeutic doses used in humans. Hence, future work is needed to confirm that the observed effects indeed represent specific drug effects that are relevant to antipsychotic action.
We agree that both dosages and a broader spectrum of non-antipsychotic compounds will need to be investigated. We are in the process of building a screening pipeline to perform exactly these types of experiments. We would however argue that the paper already includes a control condition in the form of the amphetamine data (Figure 7). While it is possible that amphetamine might have an effect that exactly cancels out potential i.p. injection- or stress-induced changes, we would argue it is more probable that these changes had no measurable effect on Tlx3 positive L5 IT neuron calcium activity per se. We will provide additional evidence that time or injection stress alone do not result in the observed effects.
Conclusion:
Overall, the results support the idea that antipsychotic drugs affect neural responses to predicted and unpredicted stimuli in deep layers of cortex. Although some future work is required to establish whether this observation can indeed be explained by a drug-specific effect on predictive coding, the study provides important insights into the neural underpinnings of visual processing and antipsychotic drugs, which is expected to guide future investigations on the predictive coding hypothesis of psychosis. This will be of broad interest to neuroscientists working on predictive coding in health and in disease.
Reviewer #3 (Public Review):
The study examines how different cell types in various regions of the mouse dorsal cortex respond to visuomotor integration and how antipsychotic drugs impact these responses. Specifically, in contrast to most cell types, the authors found that activity in Layer 5 intratelencephalic neurons (Tlx3+) and Layer 6 neurons (Ntsr1+) differentiated between open loop and closed loop visuomotor conditions. Focussing on Layer 5 neurons, they found that the activity of these neurons also differentiated between negative and positive prediction errors during visuomotor integration. The authors further demonstrated that antipsychotic drugs reduced the correlation of Layer 5 neuronal activity across regions of the cortex, and impaired the propagation of visuomotor mismatch responses (specifically, negative prediction errors) across Layer 5 neurons of the cortex, suggesting a decoupling of long-range cortical interactions.
The data when taken as a whole demonstrate that visuomotor integration in deeper cortical layers is different than in superficial layers and is more susceptible to disruption by antipsychotics. Whilst it is already known that deep layers integrate information differently from superficial layers, this study provides more specific insight into these differences. Moreover, this study provides a first step into understanding the potential mechanism by which antipsychotics may exert their effect.
Whilst the paper has several strengths, the robustness of its conclusions is limited by its questionable statistical analyses. A summary of the paper's strengths and weaknesses follows.
Strengths:
The authors perform an extensive investigation of how different cortical cell types (including Layer 2/3, 4, 5, and 6 excitatory neurons, as well as PV, VIP, and SST inhibitory interneurons) in different cortical areas (including primary and secondary visual areas as well as motor and premotor areas) respond to visuomotor integration. This investigation provides strong support to the idea that deep layer neurons are indeed unique in their computational properties. This large data set will be of considerable interest to neuroscientists interested in cortical processing.
The authors also provide several lines of evidence that visuomotor information is differentially integrated in deep vs. superficial layers. They show that this is true across experimental paradigms of visuomotor processing (open loop, closed loop, mismatch, drifting grating conditions) and experimental manipulations, with the demonstration that Layer 5 visuomotor integration is more sensitive to disruption by the antipsychotic drug clozapine, compared with cortex as a whole.
The study further uses multiple drugs (clozapine, aripiprazole and haloperidol) to bolster its conclusion that antipsychotic drugs disrupt correlated cortical activity in Layer 5 neurons, and further demonstrates that this disruption is specific to antipsychotics, as the psychostimulant amphetamine shows no such effect.
In widefield calcium imaging experiments, the authors effectively control for the impact of hemodynamic occlusions in their results, and try to minimize this impact using a crystal skull preparation, which performs better than traditional glass windows. Moreover, they examine key findings in widefield calcium imaging experiments with two-photon imaging.
Weaknesses:
A critical weakness of the paper is its statistical analysis. The study does not use mice as its independent unit for statistical comparisons but rather relies on other definitions, without appropriate justification, which results in an inflation of sample sizes.
We will expand on both analyses and justifications throughout.
For example, in Figure 1, independent samples are defined as locomotion onsets, leading to sample sizes of approx. 400-2000 despite only using 6 mice for the experiment. This is only justified if the data from locomotion onsets within a mouse is actually statistically independent, which the authors do not test for, and which seems unlikely. With such inflated sample sizes, it becomes more likely to find spurious differences between groups as significant. It also remains unclear how many locomotion onsets come from each mouse; the results could be dominated by a small subset of mice with the most locomotion onsets. The more disciplined approach to statistical analysis of the dataset is to average the data associated with locomotion onsets within a mouse, and then use the mouse as an independent unit for statistical comparison. A second example, for instance, is in Figure 2L, where the independent statistical unit is defined as cortical regions instead of mice, with the left and right hemispheres counting as independent samples; again this is not justified. Is the activity of cortical regions within a mouse and across cortical hemispheres really statistically independent? The problem is apparent throughout the manuscript and for each data set collected.
This may partially be a misunderstanding. Figures 1F-1K indeed use locomotion onsets as a unit, but there were no statistical comparisons. In these Figures we were addressing the question of whether locomotion onsets in closed loop differ from those in open loop. Thus, we quantify variability as a unit of locomotion onsets. The question of mouse-to-mouse variability of this analysis is a slightly different one. We did include the same analysis (for visual cortex) with the variability calculated across mice as Figure S2. We will expand this supplementary figure with the equivalent data of Figure 3 to further address this concern.
For Figure 1L (we assume the reviewer means Figure 1L, not Figure 2L), the unit we used for analysis was cortical area. We will update and improve the analysis. This was indeed not optimal, and we will replace the statistical testing with hierarchical bootstrap (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7906290/) to account for nested data.
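For readers unfamiliar with the hierarchical bootstrap linked above, the core idea is to resample at every level of nesting: draw mice with replacement first, then draw trials with replacement within each resampled mouse. A minimal sketch (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def hierarchical_bootstrap_mean(data_by_mouse, n_boot=1000, rng=None):
    """Bootstrap distribution of the grand mean for trials nested
    within mice: resample mice, then trials within each mouse."""
    rng = np.random.default_rng(rng)
    mice = list(data_by_mouse)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        # Level 1: resample mice with replacement.
        sampled_mice = rng.choice(mice, size=len(mice), replace=True)
        mouse_means = []
        for m in sampled_mice:
            # Level 2: resample trials within each sampled mouse.
            trials = np.asarray(data_by_mouse[m])
            resampled = rng.choice(trials, size=len(trials), replace=True)
            mouse_means.append(resampled.mean())
        boot_means[b] = np.mean(mouse_means)
    return boot_means
```

Percentiles of the returned distribution then give confidence intervals that respect the mouse-level clustering, instead of treating every locomotion onset as an independent sample.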
An additional statistical issue is that it is unclear if the authors are correcting for the use of multiple statistical tests (as in for example Figure 1L and Figure 2B,D). In general, the use of statistics by the authors is not justified in the text.
We will update and improve the analysis shown in Figure 1L.
In Figures 2B and 2D, we think adding family-wise error correction would be slightly misleading. We could add a correction – our conclusions would remain unchanged almost independent of the choice of correction (most of the significant p values are infinitesimally small, see Table S1). However, our interpretation is not focusing on one particular comparison (of many possible comparisons) that is significant - all comparisons between closed and open loop data points were significant in the L5 IT recordings and none of them were significant in the recordings in C57BL/6 mice that expressed GCaMP brain-wide.
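For context, a step-down family-wise correction of the kind discussed here (Holm-Bonferroni) is cheap to apply and, as the authors argue, would leave infinitesimally small p-values significant under essentially any choice. A stdlib-only sketch (names illustrative):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down procedure: test the smallest p-value against
    alpha/m, the next against alpha/(m-1), and so on, stopping at the
    first failure. Returns which hypotheses are rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # step-down: all larger p-values also fail
    return rejected
```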
Finally, it is important to note that whilst the study demonstrates that antipsychotics may selectively impact visuomotor integration in L5 neurons, it does not show that this effect is necessary or sufficient for the action of antipsychotics; though this is likely beyond the scope of the study it is something for readers to keep in mind.
We fully agree; it is still unclear how the effects we observe in our work relate to the treatment-relevant effects in patients. We will expand on this point in the discussion.
Have you seen mobile phone lock screens where the user is required to draw a specific pattern onto a grid of dots? How about the Windows 8 picture password feature? These are examples of behavior-based authentication factors.
Behavior factors seem like an artificial distinction, at least based on these examples. These would be better classified as Knowledge factors. Drawing a pattern that you've memorized is conceptually no different than typing a code. Or should I point out that typing a code is also a behavior? You have to press your fingers in a certain location on your keyboard and in a certain order.
A really good Windows no-code is still very important, though, because three-quarters of all PCs still run Windows.
It is important that it work on Windows, but three-quarters of all PCs do not run Windows. Three-quarters of all traditional laptops and desktops? Sure, but most personal computers these days are mobile phones.
Jumbo frame settings:
The frame size mainly affects the camera's Packet Size; MVS automatically adjusts the Packet Size according to the PC's jumbo frame setting. We usually set the jumbo frame to 9K, i.e. 9014 bytes. Jumbo frame setting path: Network and Sharing Center > Local Area Connection > Properties > Configure > Advanced > Jumbo Frame (the property may be labeled Jumbo Packet, Large Packet, Jumbo Frames, Jumbo Mtu, MTU, etc.). If this is not set, the camera is prone to dropping packets.
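A quick way to confirm that jumbo frames are actually in effect along the whole path to the camera is a non-fragmenting ping with a jumbo-sized payload, run from the Windows PC (the camera IP below is a placeholder; 8972 bytes = 9000-byte MTU minus 28 bytes of IP/ICMP headers):

```shell
:: If this fails while a normal "ping 192.168.1.64" succeeds, jumbo
:: frames are not enabled end-to-end (NIC, switch, or camera side).
ping -f -l 8972 192.168.1.64
```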
Environment setup:
III. Disable the Windows firewall. Steps: Control Panel > Windows Firewall > Turn Windows Firewall on or off. (1) Confirm that the firewall is turned off for every network profile. (2) On some PCs the Windows firewall service is not enabled after OS installation; first enable the firewall service under Task Manager > Services, then turn it off via the steps above; only then is the firewall truly off. IV. Confirm a gigabit network environment. When using the camera, make sure the network link speed is gigabit: (1) the PC's NIC must be a gigabit card and must actually be running at gigabit speed; (2) use Cat 5e or Cat 6 (or better) gigabit network cables. VI. Uninstall antivirus and security software. VII. Ground the camera's metal mounting structure, or ground the industrial PC's power supply/chassis, to avoid electrostatic interference.
Photoelectric trigger wiring: (from left to right, ports 3 and 6.) By default, the blue and purple wires map to line0. Whether the photoelectric sensor is NPN or PNP by default determines the corresponding wiring scheme.
Code-reading adjustment:
Aperture adjustment • Different camera positions and depth-of-field requirements call for different aperture values: a large depth of field corresponds to a small aperture, and a shallow depth of field to a large aperture. Recommended value: between F8 and F16, with the locking screw tightened, as shown in the figure below. 3. Code-reading adjustment workflow. TIPS: The aperture position must be adjusted precisely; follow the position shown in the illustration exactly, as even a slight deviation will affect subsequent tuning.
The Hikvision ID6000-series code-reading camera.
Author Response
Reviewer #1 (Public Review):
In this interesting manuscript, Nasser et al explore long-term patterns of behavior and individuality in C. elegans following early-life nutritional stress. Using a rigorous, highly quantitative, high-throughput approach, they track patterns of motor behavior in many individual nematodes from L1 to young adulthood. Interestingly, they find that early-life food deprivation leads to decreased activity in young larvae and adults, but that activity between these times, during L2-L4, is largely unaffected. Further, they show that this "buffering" of stress requires dopamine signaling, as L2-L4 activity is significantly reduced by early-life starvation in cat-2 mutants. The paper also provides evidence that serotonin signaling has a role in modulating sensitivity to stress in L1 larvae and adults, but the size of these effects is modest. To evaluate patterns of individuality, the authors use principal components analysis to find that three temporal patterns of activity account for much of the variation in the data. While the paper refers to these as "individuality types," it may be more reasonable to think of these as "dimensions of individuality." Further, they provide evidence that stress may alter the strength and/or features of these dimensions. Though the circuit mechanisms underlying individuality and stress-induced changes in behavior remain unknown, this paper lays an important foundation for evaluating these questions. As the authors note, the behaviors studied here represent only a small fraction of the behavioral repertoire of this system. As such, the findings here are an interesting and very promising entry point for a deeper understanding of behavioral individuality, particularly because of the cellular/synaptic-level analysis that is possible in this system. This paper should be of interest to those studying C. elegans behavior and also more generally to those interested in behavioral plasticity and individuality.
We thank the reviewer for finding our results interesting.
Reviewer #2 (Public Review):
This paper set out to understand the impact of early life stress on the behavior and individuality of animals, and how that impact might be amplified or masked by neuromodulation. To do so, the authors built on a previously established assay (Stern et al 2017) to measure the roaming fraction and speed of individuals. This technique allowed the authors to assess the effects of early life starvation on behavior across the entire developmental trajectory of the individual. Combining this with strains carrying mutant neuromodulatory systems enabled the authors to produce a rich dataset for analyzing the complicated interactions between behavior, starvation intensity, developmental time, individuality, and neuromodulatory systems.
The richness of this dataset - 2 behavioral measures continuous across 5 developmental stages, 3 different neuromodulatory conditions (with the dopamine system subject to decomposition by receptor types) and 4 different levels of starvation, with ~50-500 individuals in each condition - underlies the strength of this paper. This dataset enabled the authors to convincingly demonstrate that starvation triggers a behavioral effect in L1 and adult animals that is largely masked in intermediate stages, and that this effect becomes larger with increased severity of starvation. Furthermore, they convincingly show that the masking of the effect of starvation in L2-L4 animals depends on dopaminergic systems. The richness of the dataset also allowed a careful analysis of individuality, though only neuromodulatory mutants convincingly manipulated individuality, recapitulating earlier research. Nonetheless, a few caveats exist on some of their findings and conclusions:
We thank the reviewer for the constructive comments. In the revised manuscript we include additional analyses and textual changes as detailed below, to address the points raised.
1) Lack of quantitative analysis for effects within developmental stages. In making the argument for buffered effects of starvation on behavior during periods of larval development, the authors make claims regarding the temporal structure of behavior within specific stages. However, no formal analysis is performed, and the traces are provided without confidence intervals, making it difficult to judge the significance of potential deviations between starvation conditions.
In the revised manuscript, we include additional analyses of roaming fraction effects across shorter developmental windows, showing within-stage differences in behavioral patterns following starvation (Figure 1 - figure supplement 1E; Figure 3 - figure supplement 1C). In addition, we further temper and rewrite our conclusions to clearly describe these effects (now: “…while 1 day of early starvation modified within-stage temporal behavioral structures by shifting roaming activity peaks to later time-windows during the L2 and L3 stages…” on p. 4 and “Interestingly, during the L2 intermediate stage the effects on roaming activity patterns were more pronounced during earlier time-windows of the stage…” on p. 8).
2) Incorrect inference that differences in significance demonstrate significant differences. The authors claim that there is an increase in PC1 inter-individual variation in tph-1 individuals; however, a difference in significance is not evidence of a significant difference between conditions (see Nieuwenhuis et al. 2011). This undermines claims about an interaction of starvation, neuromodulators, and individuality.
In the revised manuscript we now provide a direct comparison of PCs inter-individual variances between starved and unstarved populations, demonstrating significant differences in inter-individual variation in specific PC individuality dimensions following stress (Figure 6 and Figure 6 - figure supplement 1). These results include the increase in PC1 inter-individual variation in tph-1 mutants following 3 and 4 days of starvation (Figure 6A,E).
3) Sensitivity of analysis to baseline effects and assumptions of additive/proportional effects. The neuromodulatory and stress conditions in this paper have a mixture of effects on baseline activity and differences from baseline. The authors normalize to the roaming fraction without starvation, making the reasonable assumption that the effect due to starvation is proportional to baseline, rather than additive. This confound is most visible in the adult subpanel of figure 5d, where an ~2-3 fold difference in relative roaming due to starvation is clearly noted; however, this comes from a baseline roaming fraction in tph-1 animals that is ~2-fold higher, suggesting that the effect could plausibly be comparable in absolute terms.
Unavoidably, any such assumptions about the expected interaction between multiple effects will be a gross simplification in complicated nonlinear systems, and the data are largely shown with sufficient clarity to allow readers to draw their own conclusions. However, some of the interpretations in the paper lean heavily on the assumption that the data support a direct interpretation (e.g. "neuronal mechanisms actively buffer behavioral alterations at specific development times") rather than an indirect one (e.g. that serotonin reduces the baseline roaming fraction, which makes a fixed-size effect more noticeable). Parsing the difference requires either more detailed mechanistic study or careful characterization of the effect of different baselines on the sensitivity of behavior to perturbation. Barring that, it is worth noting that many of these interactions may be due to differences in biological and experimental sensitivity to change under different conditions, rather than a direct interaction of stress and neuromodulatory processes or evidence of differing neuromodulatory activity at different stages of development.
In the revised manuscript we added a discussion of the potential complicated interactions between neuromodulation and stress, altering baseline levels and deviations from baseline. We also discuss the interpretation of the results in the context of non-linear systems in which sensitivity of the behavioral response to underlying variations may be modified by specific neuromodulatory and environmental perturbations, without assuming direct differences in neuromodulatory states over development or across individuals (p. 16).
Reviewer #3 (Public Review):
In this study, Nasser et al. aim to understand how early-life experience affects 1) developmental behavior trajectory and 2) individuality. They use early life starvation and longitudinal recording of C. elegans locomotion across development as a model to address these questions. They focus on one specific behavioral response (roaming vs. dwelling) and demonstrate that early life (right after embryo hatching) starvation reduces roaming in the first larval (L1) and adult stages. However, roaming/dwelling behavior during mid-larval stages (L2 through L4) is buffered from early life starvation. Using dopamine and serotonin biosynthesis null mutant animals, they demonstrated that dopamine is important for the buffering/protection of behavioral responses to starvation in mid-larval stages, while in contrast, serotonin contributes to early-life starvation's effects on reduced roaming in the L1 and adult stages. While the technique and analysis approaches used are mostly solid and support many of the conclusions made in the manuscript for part 1), there are some technical limitations (e.g., whether the method has sufficient resolution to analyze the behaviors of younger animals) and confounding factors (e.g., size of the animal) that the authors do not yet sufficiently address and that can affect interpretation of the results. Additionally, much of the study is descriptive and lacks deep mechanistic insight. Furthermore, the focus on a single behavioral parameter (dwelling vs. roaming) limits the broad applicability of the study's conclusions. Lastly, the manuscript does not provide a clear presentation or analysis to address part 2), the question of how early life experience affects individuality.
We thank the reviewer for these important comments. As described below, in the revised manuscript we include new analyses (following extraction of size data), showing behavioral modifications across different conditions/genotypes also in size-matched individuals (within the same size range) (Figure 1 - figure supplement 1F; Figure 3 - figure supplement 1D,E; Figure 5 - figure supplement 1B,D). We also made edits to the text to describe these results (Methods p. 21 and Results section). In addition, while we can detect behavioral changes using our imaging method even in young L1 worms across conditions and genotypes (described in Stern et al. 2017 and this manuscript), as the reviewer correctly pointed out, we may miss some milder behavioral effects due to lower spatial imaging resolution in younger worms. We now refer to this spatial resolution limitation in the revised manuscript (Discussion). Lastly, in the revised manuscript we added clearer and more direct analyses of changes in inter-individual variation in multiple PC dimensions following early stress, by directly comparing variation between starved and unstarved individuals within the mutant and wild-type populations (Figure 6; Figure 6 - figure supplement 1). These analyses show significant changes in inter-individual variation within specific PC individuality dimensions following early stress. We also made textual changes throughout the manuscript to increase the clarity of presentation of these results.
Background
This work has been peer reviewed in GigaScience (see paper https://doi.org/10.1093/gigascience/giad025), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:
Reviewer 1: Weilong Guo, PhD
Patrick König and colleagues have built a web application for the interactive query, visualization and analysis of genomic diversity data, supporting population structure analysis on specific genetic elements, and data export. The application can also be easily embedded as a plugin in existing web applications. According to its documentation, this application can be easily installed from pip, Docker and conda, which would be useful for population genomic studies. There are still several concerns about this manuscript.
Major concerns:
As for the SNP visualization function, only a very limited number of SNPs can be read on the webpage, and there is no "zoom in" or "zoom out" function (it is suggested to add such or similar functions). Although the application can export almost all the SNP sites of a whole VCF file, it is far from user-friendly. It is suggested to add a track of chromosomes showing the genomic windows under query, allowing the cursor to select or adjust the genomic regions (UCSC-browser style), which is necessary for an intuitive user experience.
The BLAST function could serve as a useful entry point. But what is the starting position of the query sequence when it is mapped to the minus strand? The authors should explain this more clearly on the website.
The authors mentioned that their application converts the inputted VCF file into Zarr format. Thus, more performance evaluation should be provided to show the advantages of this strategy (rather than using the VCF file directly).
The authors should also compare their application with other similar existing web applications, such as CanvasDB, Gigwa, SNiPlay and SnpHub, to highlight its advantages and improvements.
Minor concerns:
The analysis functions are still insufficient. It is suggested that commonly used analysis tools or methods, such as haplotype analysis, STRUCTURE analysis, distribution of nucleotide diversity and selective sweep analysis, also be supported.
Ref. 22 is incomplete.
A long line of boys carrying crates of striped tulips, and of yellow and red roses, defiled in front of him, threading their way through the huge jade-green piles of vegetables. Under the portico, with its gray sun-bleached pillars, loitered a troop of draggled bareheaded girls, waiting for the auction to be over.
From LAWLER 216: Added in the typescript.
ZABROUSKI: After these sentences, Wilde revised the rest of the paragraph in 1891: "Others crowded round the swinging doors of the coffee-house in the piazza. The heavy cart-horses slipped and stamped upon the rough stones, shaking their bells and trappings. Some of the drivers were lying asleep on a pile of sacks. Iris-necked and pink-footed, the pigeons ran about picking up seeds. After a little while, he hailed a hansom and drove home. For a few moments he loitered upon the doorstep, looking round at the silent square, with its blank, close-shuttered windows and its staring blinds. The sky was pure opal now, and the roofs of the houses glistened like silver against it. From some chimney opposite a thin wreath of smoke was rising. It curled, a violet riband, through the nacre-coloured air."
Armed with all this knowledge, we realise that we can construct an almost unlimited number of different path strings that all refer to the same directory.
See below the number of ways to define the same path on Windows
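A quick way to see this is with Python's `pathlib`, which models Windows path semantics on any OS; the directory used here is a made-up example:

```python
from pathlib import PureWindowsPath

# Four spellings of one hypothetical directory. PureWindowsPath accepts
# both separator styles, collapses single-dot segments, and compares
# case-insensitively, so all four are equal.
spellings = [
    r"C:\Users\demo\Documents",    # classic backslashes
    "C:/Users/demo/Documents",     # forward slashes are accepted too
    r"C:\Users\demo\.\Documents",  # "." segments are collapsed
    "c:/users/DEMO/documents",     # case does not matter on Windows
]
unique = {PureWindowsPath(s) for s in spellings}
print(len(unique))  # 1 -- every spelling refers to the same directory
```

Note that `..` segments are deliberately not collapsed by `pathlib`, since resolving them correctly can depend on symlinks on a real filesystem.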
UNC stands for Universal Naming Convention and describes paths that start with \\, commonly used to refer to network drives. The first segment after the \\ is the host, which can be either a named server or an IP address
UNC paths on Windows
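As a sketch of how such paths decompose (`server` and `share` are placeholder names), `pathlib` treats the host-plus-share prefix of a UNC path as the drive:

```python
from pathlib import PureWindowsPath

# Hypothetical UNC path; the host + share pair plays the role of a drive.
p = PureWindowsPath(r"\\server\share\projects\report.txt")
print(p.drive)  # \\server\share
print(p.root)   # \
print(p.name)   # report.txt
```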
On any Unix-derived system, a path is an admirably simple thing: if it starts with a /, it’s a path. Not so on Windows
Paths on Windows are much more complex
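The contrast shows up directly in `pathlib`: a bare leading slash is absolute on POSIX, while Windows also requires a drive or UNC share (the paths below are invented):

```python
from pathlib import PurePosixPath, PureWindowsPath

# On POSIX a leading "/" is enough; on Windows a rooted path without a
# drive letter is still not fully absolute.
print(PurePosixPath("/tmp/data").is_absolute())      # True
print(PureWindowsPath("/tmp/data").is_absolute())    # False: no drive
print(PureWindowsPath("C:/tmp/data").is_absolute())  # True: drive + root
```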
One of the benefits of Python is that it is an open source language, which holds true for the absolute majority of important packages as well. This allows for easy installation of the language and required packages on all major operating systems, such as macOS, Windows, and Linux. There are only a few major packages that are required for the code of this book and finance in general in addition to a basic Python interpreter.
ok
st days, the assault of the city eclipses its promise: When the water in the building has stopped running, when even in her best dress she cannot help but wonder if she smells like the outhouse or if it is obvious that her bloomers are tattered, when she is so hungry that the aroma of bean soup wafting from the settlement kitchen makes her mouth water, she takes to the streets, as if in search of the real city and not this poor imitation. The old black ladies perched in their windows shouted, "Girl, where you headed?" Each new deprivation raises doubts about when freedom is going to come; if the question pounding inside her head – Can I live? – is one to which she could ever give a certain answer, or only repeat in anticipation of something better than this, bear the pain of it and the hope of it, the beauty and the promise.
The quote raises questions about the nature of freedom and when it will truly arrive for the woman, as she wonders if she will ever be able to answer the question, "Can I live?" This uncertainty and the longing for a more genuine sense of freedom underscore the complex nature of American womanhood, as women throughout history have grappled with questions of freedom, rights, and equality.
eventlog Chain
A fingerprint can be created based on the following information about your device: What operating system (iOS, Android, Windows, Linux, etc.) is used. What browser and browser version are used. What content (plug-ins, fonts, add-ons) has been installed. The location (determined by device location settings or the IP address). Its time zone settings (which can be adjusted automatically by the network provider).
This is important information for everybody to be aware of. Even if you think something you're doing online is a secret, it isn't, and you need to be careful.
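The principle can be sketched in a few lines of Python (the attribute values below are invented, and real fingerprinting runs in the browser): hashing the combined attributes yields a stable identifier.

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    # Serialize the attributes deterministically, then hash them.
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Invented example attributes of the kinds listed above.
device = {
    "os": "Windows",
    "browser": "Firefox 124",
    "plugins": ["PDF Viewer"],
    "timezone": "Europe/Berlin",
}
print(fingerprint(device)[:12])  # stable as long as the attributes don't change
```

Change any one attribute (say, the time zone) and the identifier changes completely, which is why such fingerprints can track a device across sites without cookies.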
German-language tutorials
Hello, Mr. Baumgartner,
Personally, I no longer consider this strictly necessary.
At least as long as DeepL (AI-based, at https://www.deepl.com/translator) continues to allow 3,000 characters per single translation in the free version. At a reading speed of roughly 1,200 characters/min., that is about 2.5 minutes. I tested it with lorem ipsum, and also by pasting into a Word file; it really does help. Definitely for the Obsidian help files. And in many cases you really only need the translation as a supplement anyway.
Besides the browser version, there are also iOS and Android apps, Windows and Mac apps, and extensions for a few browsers.
I have the Android app; translations can also be saved there.
B. Frenzel
The only window in the room was a small one, high up on one wall. The Taliban had ordered all windows
If I were required to live in such a way, I would feel so sad, then angry, and maybe try to find a way to show my anger
mongod --dbpath="D:\data\db"
that is wonderful!!!
darius kazemi defines a bot ⧉ as 'a computer that attempts to talk to humans through technology that was designed for humans to talk to humans'. this definition sits well with me, when trying to identify just what is so creepy about accidentally talking on the phone to a robot without immediately realising. it's the uncanny valley effect of being unsure if something is human or not, manufactured or natural. just this week, louis vuitton stores have unveiled 'hyperrealistic' robot versions of yayoi kusama, painting their windows, in a move some have noted 'feels morbid' ⧉ (and many have described as 'creepy'). the rise of LLMs like GPT3 hits on this same kind of uncanny valley. they have become almost indistinguishable from humans, requiring us to imagine means of devising a 'reverse turing test' as described by maggie appleton in order to tell them apart.
Language itself is the technology that was meant for humans to talk to humans. People complain about social media sites' bot populaces. If sex spam bots can degrade the Tumblr experience and crypto spam bots can degrade the Twitter experience, will these new bots degrade the language experience?
imagined “Americas”
I never thought of talk shows as windows into imagined Americas before, but it is very interesting to think about
Simulated distributions of weights. A greater opportunity for lineage sorting (i - iii) biases the distribution toward the topology that matches the demographic history. Incomplete lineage sorting yields genealogies that are a better fit to one of the discordant trees, but the distribution is always symmetrical between the left and right half triangles. Additional factors, including gene flow, create a bias toward one of the discordant genealogies (panels iv - vi).
I'm super intrigued by the use/utility of simulation here. I know that these simulations were conducted using msprime, but I can't help but wonder about the inclusion of varying intensities of positive selection on loci in these simulations. This of course would increase the complexity of parameter space significantly with regards to potential simulations, but seems explicitly tied to the hypotheses being tested here.
More generally, it seems to me that this framework could lend itself nicely to the application of machine or deep learning approaches for assignment of genomic windows to alternative evolutionary histories (e.g. migration, selection), as well as potentially even parameter inference, such as through the use of a CNN. Obviously this is outside of the scope of the current paper, but I'm just curious as to whether this is a thought/space you all have explored?
At the centre was an octagonal pavilion which, on the first floor, consisted of only a single room, the king’s salon; on every side large windows looked out onto seven cages (the eighth side was reserved for the entrance), containing different species of animals. By Bentham’s time, this menagerie had disappeared.
The mention of the Menagerie in Bentham's work may be intended as a metaphor for the way in which power operates through the control and display of bodies, whether they be animal or human.
Bentham’s Panopticon is the architectural figure of this composition. We know the principle on which it was based: at the periphery, an annular building; at the centre, a tower; this tower is pierced with wide windows that open onto the inner side of the ring; the peripheric building is divided into cells, each of which extends the whole width of the building; they have two windows, one on the inside, corresponding to the windows of the tower; the other, on the outside, allows the light to cross the cell from one end to the other.
The idea behind the Panopticon was to create a prison design that would allow a single watchman to observe all of the prisoners without the prisoners knowing when they were being watched.
Every day, too, the syndic goes into the street for which he is responsible; stops before each house: gets all the inhabitants to appear at the windows (those who live overlooking the courtyard will be allocated a window looking onto the street at which no one but they may show themselves); he calls each of them by name
He’s like a teacher taking roll during a fire drill lol
Five or six days after the beginning of the quarantine, the process of purifying the houses one by one is begun. All the inhabitants are made to leave; in each room “the furniture and goods” are raised from the ground or suspended from the air; perfume is poured around the room; after carefully sealing the windows, doors and even the keyholes with wax, the perfume is set alight
Another parallel: just like when Covid was here, many people were spraying down houses when someone in their house was sick, and when they did, they did not leave a spot uncleaned or untouched.
I borrowed a friend’s Tesla 3 yesterday. About 5 minutes into the ride, the windshield started fogging up. I couldn’t find the defroster on the large control screen Teslas are so famous for. In desperation, I tapped the CAR icon but that took me to the settings screen which ended up being a dead end. Frustrated, I opened the side windows to clear the windshield. While pushing every button on the steering wheel, I accidentally discovered the voice control and was finally able to get the defroster working. It was such an odd experience I tweeted about how frustrating it was:
This shows that with Tesla coming out with all this new tech, some people are still not familiar with it.
Impersonation is a security concept implemented in Windows NT that allows a server application to temporarily "be" the client in terms of access to secure objects.
Open the YouTube video you want to watch and press Ctrl+M. This keyboard shortcut makes YouTube hide the progress bar even if you haven't paused the video.
To capture a YouTube video screenshot without the player controls, use the Hyde browser extension and the keystroke Ctrl+M to hide/unhide the controls.
Further, one can use the Windows Snipping Tool via Win+Shift+S to take a screenshot to the clipboard for saving and editing.
https://youtubedownload.minitool.com/youtube/how-to-hide-youtube-bar.html
Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.
Learn more at Review Commons
General comments:
We thank the reviewers for recognizing the importance of our work and for their supportive and insightful comments.
Our planned revisions focus on addressing all the comments, and especially on further elucidating the molecular mechanism underpinning our observations, their consequences for cell phenotypes, and reproducing our observations in an additional cell line. Our revision plan is in many cases backed up by preliminary data.
Our submitted manuscript demonstrated that DNMT3B’s recruitment to H3K9me3-marked heterochromatin was mediated by the N-terminal region of DNMT3B. Data generated since submission suggest that DNMT3B binds indirectly to H3K9me3 nucleosomes through an interaction mediated by a putative HP1 motif in its N-terminal region.
Specifically, we have found that DNMT3B can pull down HP1a and H3K9me3 from cell extracts and that this interaction is abrogated when we remove the N-terminal region of DNMT3B (revision plan, figure 1a). Using purified proteins in vitro, we have shown binding of DNMT3B to HP1a that is dependent on the presence of DNMT3B’s N-terminus suggesting that the interaction with HP1a is direct and that this mediates DNMT3B’s recruitment to H3K9me3 (revision plan, figure 1b). Alphafold multimer modelling identified that DNMT3B's N-terminus binds the interface of a HP1 dimeric chromoshadow domain through a putative HP1 motif. Two point mutations in this motif ablate DNMT3B’s interaction with HP1a in vitro (revision plan, figure 1b - DNMT3B L166S I168N).
We propose to further characterize DNMT3B’s interaction with HP1a in vitro and determine the significance of these observations in cells by microscopy in a revised manuscript. Together with the other proposed experiments and analyses, we believe the extra detail regarding the molecular mechanisms through which DNMT3B is recruited to H3K9me3 heterochromatin will help address the reviewer’s comments.
Point by point response:
We have reproduced the reviewer’s comments in their entirety and highlighted them in blue italics.
Reviewer #1 (Evidence, reproducibility and clarity (Required)): Summary:
This paper by Francesca Taglini, Duncan Sproul, and their coworkers, examines the mechanisms of DNA methylation in a human cancer cell line. They use the human colorectal cancer line HCT116, which has been very widely used to look at epigenetics in cancer, and to dissect the contribution of different proteins and chromatin marks to DNA methylation.
The authors focus on the role of the de novo methyltransferase DNMT3B. It has been shown in ES cells in 2015 that its PWWP domain directs it to H3K36me3, typically found in gene bodies. More recently, the authors showed similar conclusions in colorectal cancer (Masalmeh Nat Comm 2021). Here they examine, more specifically, the role of the PWWP. The conclusions are described below.
Major comments:
1-I feel that this paper has several messages that are somewhat muddled. The main message, as expressed in the title and in the model, is that the PWWP domain of DNMT3B actively drags the protein to H3K36me3-marked regions. Inactivation of this domain by a point mutation, or removal of the Nter altogether, causes DNMT3B to relocate to other genomic regions that are H3K9me3-rich, and that see their DNA methylation increase in the mutant conditions. This first message is clear.
We thank the reviewer for their positive comments on our observations. However, we note that our results suggest that removal of the N-terminal region has a different effect to point mutations in the PWWP domain. The data we present suggest that the N-terminus facilitates recruitment to H3K9me3 regions.
The second message has to do with ICF. A mutant form of DNMT3B bearing a mutation found in ICF, S270P, is actually unstable and, therefore, does not go to H3K9me3 regions. I feel that here the authors go on a tangent that distracts from message #1. This could be moved to the supp data. At any rate, HCT116 are not a good model for ICF. In addition, a previous paper has looked at the S270P mutant, and it did not seem unstable in their hands (Park J Mol Med 2008, PMID: 18762900). So I really feel the authors do not do themselves a favor with this ICF angle.
While we agree with the reviewer that HCT116 cells as a cancer cell line are not a good model for ICF1 syndrome, our observation that S270P destabilizes DNMT3B is important to consider in the context of this disease. In addition, the S270P mutant was reported to abrogate the interaction between DNMT3B and H3K36me3 (Baubec et al 2015 Nature PMID: 25607372) making it important to compare it to the other mutations we examine. In our revised version of the manuscript, we propose to move these data to the supplementary materials and add a statement to the discussion noting the caveat that HCT116 cells are likely not to model many aspects of ICF1.
With regard to the differences between our results and that of Park et al, we note that stability of the S270P mutant was not assessed in that study whereas we directly assess stability in vitro and in cells. We propose to add discussion of this previous study to the revised manuscript.
2-I feel that some major confounders exist that endanger the conclusions of the work. The most worrisome one, in my opinion, is the amount of WT or mutant DNMT3B in the cells. It is clear in figure 4C that the WT rescue construct is expressed much more than the W263A mutant (around 3 or 4 times more). Unless I am mistaken, we are never shown how the level of exogenous rescue protein compares to the level of DNMT3B in WT cells. This bothers me a lot. If the level is too low, we may have partial rescue. If it is too high, we might have artifactual effects of all that DNMT3B. I would also like to see the absolute DNA methylation values (determined by WGBS) compared to the value found in WT. From figure S1A, it looks like WT is around 80% methylation, and 3BKO is around 77% or so. I wonder if the rescue lines may actually have more methylation than WT?
The rescue cell lines do express DNMT3B to a greater level than observed endogenously. In our manuscript we controlled for this effect by generating the knock-in W263A cells and, as reported in the manuscript, we observe similar effects to the rescue cells (manuscript, figure 2d) suggesting that our observations are not driven by the overexpression.
We also expressed ectopic DNMT3B from a weaker promoter (EF1a) in DNMT3B KO cells but did not include these data in the submitted manuscript. We have previously shown that this promoter expresses DNMT3B at lower levels than the CAG promoter used in the submitted manuscript (Masalmeh et al 2021 Nature Communications PMID: 33514701). Bisulfite PCR of representative non-repetitive loci within heterochromatic H3K9me3 domains show that we observe similar gains of methylation with DNMT3BW263A (revision plan, figure 2).
Revision plan figure 2. Expression of DNMT3BW263A from a weaker promoter leads to increased DNA methylation at selected H3K9me3 loci. Barplot of mean methylation by BS-PCR at H3K9me3 loci alongside the H3K4me3-marked BRCA2 promoter in DNMT3B mutant cells where DNMT3B is expressed from the EF1a promoter. P-values are from two-sided Wilcoxon rank sum tests.
To reinforce that our conclusions are not solely a result of the level of DNMT3B expression, we propose to include these data in the revised manuscript.
The reviewer is also correct that by WGBS, the rescue cell lines have higher levels of overall DNA methylation than HCT116 cells. We will note this in the revised manuscript and include HCT116 cells in a revised version of Figure S1e.
3-I guess the unarticulated assumption is that the gain of DNA methylation seen at H3K9me3 region upon expression of a mutant DNMT3B is due to DNMT3B itself. But we do not know this for sure, unless the authors test a double mutant (PWWP inactive, no catalytic activity). I am not necessarily asking that they do it, but minimally they should mention this caveat.
The hypothesis that the gains in DNA methylation at H3K9me3 loci result from the direct catalytic activity of DNMT3B is supported by our observation that a catalytically dead DNMT3B does not remethylate heterochromatin (manuscript, figures 1d and e). However, we acknowledge that we have not formally shown that the additional DNA methylation seen with DNMT3BW263A is a direct result of its catalytic activity. We will conduct an analysis of the effect of catalytically dead DNMT3BW263A on DNA methylation at Satellite II and selected H3K9me3 loci and include this in the revised manuscript.
4-I am confused as to why the authors look at different genomic regions in different figures. In figure 1 we are looking at a portion of the "left" arm of chr 16. But in figure 2B, we now look at a portion of the "right" arm of the same chromosome, which has a large 8-Mb block of H3K9me3, and is surprisingly lowly methylated in the 3BKO. This seems quite odd, and I wonder if there is a possible artifact, for instance mapping bias, deletion, or amplification in HCT116. Showing the coverage along with the methylation values would eliminate some of these concerns.
By choosing different regions of the genome for different figures, we intended to reassure the reader that our results were not specific to any one region of the genome. In the revised manuscript, we propose to display a consistent genomic region between these figures.
With regard to the low levels of DNA methylation in H3K9me3 domains in DNMT3B KO cells, H3K9me3 domains are partially methylated domains which have reduced methylation in HCT116 cells (see page 5 of the manuscript):
… we found that hidden Markov model defined H3K9me3 domains significantly overlapped with extended domains of overall reduced methylation termed partially methylated domains (PMDs) defined in our HCT116 WGBS (Jaccard = 0.575, p = 1.07×10⁻⁶, Fisher's test).
These domains lose further DNA methylation in DNMT3B KO cells leading to the low methylation level noted by the reviewer. The methylation percentages calculated from WGBS are based on the ratio of methylated to total reads. Thus, a lack of coverage generates errors from division by zero rather than the low values observed in this domain in DNMT3B KO cells.
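To illustrate this point, the calculation can be sketched as below (a minimal illustration with hypothetical read counts, not the actual WGBS pipeline): the methylation percentage at a locus is the ratio of methylated to total reads, so zero coverage yields an undefined value (a division-by-zero error) rather than a low percentage.

```python
def methylation_percent(methylated_reads: int, total_reads: int) -> float:
    """Percent methylation at a locus from bisulfite-sequencing read counts."""
    # With no coverage the ratio is undefined; it cannot produce a low value.
    if total_reads == 0:
        raise ZeroDivisionError("no coverage at this locus: methylation undefined")
    return 100.0 * methylated_reads / total_reads

# Hypothetical counts: low but real coverage still gives a genuinely low value
print(methylation_percent(1, 5))  # 20.0
```

A locus with reduced but non-zero coverage therefore reports a true low methylation value, whereas an uncovered locus would simply be dropped from the analysis.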
We include a modified version of figure 2b from the manuscript below, which adds coverage tracks for the three cell lines (revision plan, figure 3). Although WGBS coverage is slightly reduced in H3K9me3 domains, reads are still present and overall coverage is comparable between the different cell lines.
While we could potentially include the coverage tracks in revised versions of figures, we note that doing so for multiple cell lines would make these figures excessively cluttered, and the resulting shrinkage of the other tracks would likely make the differences in DNA methylation difficult to observe in these figure panels.
Minor comments:
1-The WGBS coverage is not very high, around 2.5X on average, occasionally 2X. I don't believe this affects the findings, as the authors look at large H3K9me3 regions. But the info in table S2 was hard to find and it is important. I would make it more accessible.
In the revised manuscript we will specify the mean coverage in the text to ensure this is clearer.
2-It would be nice to have a drawing showing exactly what part of the Nter was removed.
We will add this in the figure in the revised manuscript.
3-Some figures could be clearer. I was not always sure when we were looking at a CRISPR mutant clone (W263A) versus a piggyBac rescue.
In the revised manuscript we will clarify in the figure labels to ensure it is clear which data were generated using CRISPR clones.
4-unless I am mistaken, all the ChIP-seq data (H3K9me3, H3K36me3 etc) come from WT cells. It is not 100% certain that they remain the same in the 3BKO, is it? This should be discussed.
We performed ChIP-seq on both HCT116 and 3BKO cell lines and used ChIP-seq data from the 3BKO cell line for the rescue experiments where DNMT3Bs were expressed in 3BKO cells. We will ensure this is clearer in the revised version.
Reviewer #1 (Significance (Required)):
Strengths:
The experiments are for the most part well done and well interpreted (save for the limitations mentioned above). The techniques are appropriate and well mastered by the team. The paper is well written, the figures are nice. The authors know the field well, which translates into a good intro and a good discussion. The bioinformatics are convincing.
Limitations:
All the work is done in a single cancer cell line. One might assume the conclusions will hold in other systems, but there is no certainty at this point.
We acknowledge this limitation. To demonstrate that our results are applicable beyond HCT116 cells, we will include analysis of experiments on an independent cell line in the revised manuscript.
HCT116 are not the best model system to study ICF, which mostly affects lymphocytes.
At present, I feel that the biological relevance of the findings is fairly unclear. The authors report what happens when DNMT3B has no functional PWWP domain. I am convinced by their conclusions, but what do they tell us, biologically? Are there, for instance, mutant forms of DNMT3B expressed in disease that have a mutant PWWP? Are there isoforms expressed during development or in certain cell types that do not have a PWWP? In these cell types, does the distribution of DNA methylation agree with what the authors predict?
As stated in response to point 1, although we acknowledge the limitations of HCT116 cells as a model of ICF, we believe our finding that the S270P mutation results in unstable DNMT3B is still important to consider for ICF syndrome.
We are not aware of reports of mutations affecting the residues of DNMT3B’s PWWP domain we have studied. Our preliminary analysis suggests that although mutations in DNMT3B’s PWWP domain are frequent, residues in the aromatic cage such as W263 and D266 are absent from the gnomAD catalogue (Karczewski et al 2020 Nature, PMID: 32461654). This suggests that they are incompatible with healthy human development.
A number of different DNMT3B splice isoforms have been reported. These include ΔDNMT3B4, which lacks the PWWP domain and a portion of the N-terminal region (Wang et al 2006 International Journal of Oncology, PMID: 16773201). ΔDNMT3B4 is proposed to be expressed in non-small cell lung cancer (Wang et al 2006 Cancer Research, PMID: 16951144).
We will include analysis of gnomAD and discussion of these points in the revised manuscript.
In its present state, I feel the appeal of the findings is towards a semi-specialized audience, that is interested in aberrant DNA methylation in cancer and other diseases. This is not a small audience, by the way.
We thank the reviewer for their comments and the suggestion that our findings are of interest to a cross-section of researchers.
Reviewer #2 (Evidence, reproducibility and clarity (Required)):
Note, we have added numbers to the comments made by reviewer 2 to aid cross-referencing.
In this manuscript, Taglini et al., describe an increased activity of DNMT3B at H3K9me3-marked regions in HCT cells. They first identify that DNA methylation at K9me3-marked regions is strongly reduced in absence of DNMT3B. Next, the authors re-express DNMT3B and DNMT3B mutant variants in the DNMT3B-KO HCT cells and assess DNA methylation by WGBS where they identify a strong preference for re-methylation of K9me3 sites. Based on genome-wide binding maps for DNMT3B, including the mutant variants, they address how the localization of DNMT3B relates to the observed changes in methylation.
Major points:
- The authors show increased reduction of mCG at H3K9me3 (and K27me3) sites in absence of DNMT3B. This is based on correlating delta %mCG with histone modifications in 2kb bins. I find this approach to not fully support the major claim. First, the correlation coefficients are very small (-0.124 for K9me3 and -0.175 for K27me3), and just marginally better compared to, for example, K36me3 that does not seem to have any influence on mCG according to Sup Fig S1b. While I agree that mCG seems more reduced at K9me3 in absence of DNMT3B (e.g. in Fig 1a), is there a better way to visualize the global effect? The delta mCG boxplots based on bins are not ideal (this applies to many figures using the same approach in the current manuscript).
Our choice to examine the global effects using correlations in windows across the genome was motivated by similar analyses in previous studies (for example: Baubec et al. 2015 Nature, PMID: 25607372; Weinberg et al. 2021 Nature, PMID: 33986537; Neri et al. 2017 Nature, PMID: 28225755). These global analyses result in modest correlation coefficients because the vast majority of genomic windows are negative for any given mark. For this reason, we included specific analyses of H3K36me3, H3K9me3 and H3K27me3 domains in the manuscript (e.g. manuscript figures 1b, c and d) which reinforce the conclusions drawn from our global analyses.
However, we acknowledge that while our data support a specific activity at H3K9me3 marked heterochromatin, these are not the only changes in DNMT3B KO cells as DNMTs are promiscuous enzymes that are localized to multiple genomic regions. We will add discussion of this point to the revised manuscript.
2. Second, the calculation based on delta mCpG does not allow one to see how much methylation was initially there. For example, S1b shows a median decrease of ~10% in K9me3 and ~7-8% in H3K4me3. What does this mean given that the starting methylation for both marks is completely different?
Following this point, the authors mention that mCG is already low at K9me3 domains in HCT cells (compared to other sites in the genome). I am curious if this may influence the accelerated loss of methylation in absence of DNMT3B? Any comments on this?
The observation that there is a greater loss at H3K9me3 domains than at H3K27me3-only domains, which also have low DNA methylation levels in HCT116, argues that the losses are not solely driven by the lower initial level of methylation in H3K9me3 domains. Our analyses later in the manuscript also support a specific activity at H3K9me3. In addition, we propose to reinforce this point with further data exploring how DNMT3B interacts with HP1α (see general comments, revision plan figure 1).
However, we acknowledge the possibility that part of the loss seen at H3K9me3 domains in DNMT3B KO cells could be a result of their low initial level of methylation. In the revised manuscript we propose to include discussion of this possibility.
3. One issue is the lack of correlation in DNMT3B binding to H3K9me3 sites in WT cells (Fig 3). How does this explain the requirement for DNMT3B for maintenance of methylation at H3K9me3? While some of the tested mutants show some weak increase at K9me3 sites, these are not comparable to the strong binding preferences observed at K36me3 for the wt or delta N-term version.
Using ChIP-seq we cannot say that DNMT3BWT does not bind at H3K9me3, only that it binds there at a lower level than at K36me3-marked loci. The normalized DNMT3BWT signal at H3K9me3 domains is higher than the background signal from DNMT3B KO cells (manuscript figure 3d), supporting the hypothesis that DNMT3BWT localizes to H3K9me3. This hypothesis is also supported by the observation that the correlation between DNMT3BΔN and H3K9me3 is reduced compared to that of DNMT3BWT (manuscript figure 6c compared to figure 3c).
There are several reasons why the apparent enrichment of DNMT3B at H3K9me3 may appear weaker than at H3K36me3 by ChIP-seq. Previous work has suggested that formaldehyde crosslinking fails to capture transient interactions in cells (Schmiedeberg et al. 2009 PLoS One, PMID: 19247482). H3K9me3-marked heterochromatin is also resistant to sonication (Becker et al. 2017 Molecular Cell, PMID: 29272703), and this could further affect our ability to detect DNMT3B in these regions using ChIP-seq. Our new data also suggest that DNMT3B binds to H3K9me3 indirectly through HP1α (see general comments, revision plan figure 1), and this may lead to weaker ChIP-seq enrichment at H3K9me3 compared to the direct interaction with H3K36me3 through DNMT3B's PWWP domain.
We propose to add discussion of these issues to the revised manuscript.
4. Following the above comment, what about other methyltransferases in HCT cells? Could DNMT1 or DNMT3A function be altered in absence of DNMT3B, and the observed methylation changes could be indirectly influenced by DNMT3B? The authors could create a DNMT-TKO HCT cell line and re-introduce DNMT3B in this background and measure methylation to exclude that DNMT1 or DNMT3A could have an influence. In this case, only H3K9me3 should gain DNA methylation.
As discussed in response to reviewer 1 (point 3), we propose to examine the changes in DNA methylation upon expression of catalytically dead DNMT3BW263A to further strengthen the evidence that DNMT3BW263A is directly responsible for the increased DNA methylation at H3K9me3-marked loci.
5. DNMT3B lacking N-terminal shows reduced K9me3 methylation & some localization by imaging. While the presented experiments show some support for this conclusion, I suggest re-introducing a W263A mutant lacking the N-terminal part and measuring changes in DNA methylation at H3K9. This should help to test the requirement for the N-terminal regions and further indicate which protein part (PWWP or N-term) is more important in regulating the balance between K9me3 and K36me3.
We have performed this experiment and the data are shown in manuscript figure S6c and d. The results of these experiments show that DNMT3BΔN+W263A cells showed less methylation at H3K9me3 loci than DNMT3BW263A cells, supporting a role for the N-terminus in recruiting DNMT3B to H3K9me3-marked heterochromatin. In the revised version, we will ensure that these data are more clearly indicated.
In the first paragraph of the discussion, the authors state: "Our results demonstrate that DNMT3B is recruited to and methylates heterochromatin in a PWWP-independent manner that is facilitated by its N-terminal region." The same statement is found in the abstract. This contradicts the ChIP-seq results that do not indicate a recruitment of DNMT3B to heterochromatin, and the N-terminal deletions are not fully supporting a role in this targeting since there is no localization to K9me3 to begin with. While changes in methylation are observed, it remains to be determined if this is indeed through direct DNMT3B delocalization or indirectly through influencing the remaining DNMTs.
As discussed above, there are several potential reasons why the DNMT3B ChIP-seq signal at H3K9me3 is weak (reviewer 2, point 3). The additional experiments we propose to include in the revised manuscript could reinforce this statement by clarifying whether DNMT3B is directly responsible for methylating H3K9me3-marked regions (reviewer 1, point 3) and by delineating the role of the putative HP1α motif in DNMT3B's N-terminal region (general comments, revision plan figure 1).
Reviewer #2 (Significance (Required)):
Advance: detailed analysis of DNMT3B mutants in relation to K9me3. Builds on previous studies. Audience: specialised audience.
We thank the reviewer for their insights.
Reviewer #3 (Evidence, reproducibility and clarity (Required)):
- In this work, Taglini et al. examine how the de novo DNA methyltransferase DNMT3B localizes to constitutive heterochromatin marked by the repressive histone modification H3K9me3. The authors utilize a previously generated DNMT3B KO colorectal carcinoma cell line, HCT116, to study recruitment and activity of DNMT3B at constitutive H3K9me3 heterochromatin. The authors noted a preferential decrease of DNA methylation (DNAme) at regions of the genome marked with H3K9me3 in DNMT3B KOs. The authors then rescued the deficiency through overexpression of WT and catalytic dead DNMT3A/B and confirmed that DNA methylation increases at H3K9me3+ regions with WT DNMT3B, but not with the catalytically inactive mutant nor DNMT3A. To examine which protein domains may be mediating DNMT3B's recruitment to H3K9me3 regions, the authors designed a series of mutants, primarily focusing on the PWWP domain, which normally recognizes H3K36me3. In the PWWP mutants, DNMT3B binding to the genome is altered, showing depletion at some H3K36me3-marked regions and gain at H3K9me3 heterochromatin, which coincides with a DNAme increase at satellites. In contrast, the clinically relevant ICF1 mutation S270P shows DNMT3B protein destabilization and no such loss of DNAme at heterochromatin. Finally, the authors truncated the N-terminal portion of DNMT3B and saw that this region of the protein is necessary for heterochromatin localization and subsequent DNAme of H3K9me3+ regions.
The experiments are well done with extensive controls, and the results are interesting and convincing. The structure of the manuscript could be improved for clarity and flow - for example, the PWWP mutations and truncations should be mentioned and compared together. I also found the section on ICF1 mutant to be out-of-place.
As described above (reviewer 1, point 1), we propose to move these data to the supplementary materials in the revised manuscript.
More emphasis should be placed on the N- terminal mutant as this region seems to be critical to heterochromatin recruitment, and this may address whether the interaction to H3K9me3 is direct or indirect.
As described above (general comments), the revised manuscript will include experiments clarifying the nature of DNMT3B's interaction with H3K9me3. Our preliminary data support the idea that it is an indirect interaction mediated through HP1α (revision plan figure 1).
Finally, while the epigenetic crosstalk is well-examined in this work, I would strongly urge the authors to add RNA-seq data to determine the transcriptional consequence of such chromatin disruptions (e.g. are repetitive sequences up-regulated in DNMT3B KOs?).
As suggested by the reviewer, we propose to generate and analyse RNA-seq data in the revised manuscript to understand the impact of DNMT3B on transcriptional programs.
Comments
1. A potential caveat to the study is the use of a single cell line - colorectal cancer cell HCT116 - to draw major conclusions on the function of DNMT3B. It is worth noting the Baubec et al. study examining DNMT3B recruitment to H3K36me3 was mainly performed in murine embryonic stem cells (mESCs). It would greatly strengthen the study if the authors could perform similar type of data analysis on an independent DNMT3B KO cell line. For example, does DNMT3B localize to H3K9me3 regions in WT mESCs?
As described above in response to reviewer 1, we will include analysis in an additional cell line in the revised manuscript to demonstrate that our results are generalizable beyond HCT116 cells.
2. Did the PWWP mutant W263A show the expected loss of DNAme at H3K36me3-marked regions? In other words, was there evidence of DNAme redistribution in loss at H3K36me3+ regions and inappropriate gain at H3K9me3+ regions? Please perform intersection analysis of DMRs with other epigenomic marks (e.g. H3K27me3, H3K36me3, CpG shores) in the PWWP mutants.
Our analysis of DNMT3B KO cells (manuscript figure S1d) shows that losses of DNA methylation in these cells are not correlated with H3K36me3 in gene bodies, suggesting that DNMT3A and DNMT1 are sufficient to maintain methylation at these regions in DNMT3B KO cells. To clarify this point for the DNMT3BW263A knock-in clones, in the revised manuscript we will directly examine whether these cells show loss of methylation at H3K36me3-marked gene bodies in a similar analysis and add discussion of these results.
The study would also be strengthened greatly by the addition of biochemical studies to confirm direct loss of binding, and possibly gain of H3K9me3 binding, in the DNMT3B PWWP mutants.
As detailed above (general comments, revision plan figure 1), our data suggest that DNMT3B interacts indirectly with H3K9me3 through an HP1 motif in its N-terminal region. We will undertake further biochemical studies of this interaction, which will be included in the revised manuscript. Specifically, we will use EMSAs with synthetic nucleosomes to clarify the degree to which the HP1α interaction is responsible for binding of DNMT3B to H3K9me3-modified nucleosomes.
We also propose to undertake in vitro biochemical characterization of the effect of DNMT3B PWWP mutations on interaction with H3K36me3 using synthetic nucleosomes. However, we note that in the manuscript we have shown similar effects using two independent point mutations that are predicted to affect H3K36me3 binding (W263A and D266A) and deletion of the entire PWWP domain.
3. Examining the tracks in Figure 3A,B, the PWWP mutants showed almost indiscriminate increase across the genome, and not specifically to H3K9me3-marked regions. Would ask the authors to speculate as to why the ChIP-seq of DNMT3B mutants do not recapitulate the heterochromatin co-localization shown by immunofluorescence.
As discussed in response to reviewer 2 (point 3) we believe that the weak DNMT3B ChIP-seq signal at H3K9me3 loci is likely due to the nature of the interaction that DNMT3B has with chromatin in these regions. We will add discussion of these points to the revised manuscript.
4. It's a shame that the ICF1 mutation S270P was not characterized to the same extent as the PWWP mutants. Would consider adding WGBS for this clinically relevant mutation.
We have shown that this mutant does not produce stable protein in vitro or in our cells and we observe little difference in DNA methylation at selected loci. As WGBS is expensive, we believe that carrying out this experiment is not an efficient use of limited research resources.
5. Figure 7 - please draw in the ICF1 and the N-terminal mutations in the model figure. Also provide legends.
We will modify the manuscript to include these details in the revised manuscript.
Reviewer #3 (Significance (Required)):
This is an interesting study on a timely subject. It will be of interest to multiple fields, from epigenetics to development and cancer. My expertise is in cancer, epigenetics, development.
We thank the reviewer for highlighting the broad interest of our study.
Author Response
Reviewer #1 (Public Review):
During the height of the COVID-19 pandemic, there was widespread concern about the lowered protection that screening programs within the cancer area could offer. Not only were programs halted for some periods because of a lack of staff or concern about the spread of SARS-CoV-2; when screening activities were upheld, participation decreased, and follow-up of positive test results was delayed. Mariam El-Zein and coworkers have addressed this concern in the context of cervical screening in Canada, one of the rather few countries in the world with a well-organized, population-based, although regionalized, cervical screening program.
Comment 1: Despite the existence of screening registries, they chose to do this in the form of an internet survey of different professional groups within the chain of care in cervical screening and colposcopy. The reason for taking this "soft data" approach is somewhat diffuse.
We are happy to provide a counterargument to the reviewer’s concern about the “soft data” approach. Our unit – McGill’s Division of Cancer Epidemiology – is a major stakeholder in policymaking and cervical screening guideline development in Canada. It is one of the components in a McGill Task Force on COVID-19 and Cancer that has been widely engaged in assessing the pandemic’s impact on the entire spectrum of cancer control and care (examples: PMID: 33669102, PMID: 34843106). Canada is a country of continental size, and during the pandemic even travel between provinces was interrupted. It is only via a web-based survey that one could have captured the required information. We took advantage of our unit’s credibility and stature to secure a substantial response to the survey, which elicited a high level of detail.
The survey instrument was thoughtfully developed with input from Canadian experts who are active in the field of cervical cancer prevention and involved in clinical care, in order to comprehensively formulate informative questions (and practical, reasonable responses) underpinning each of the themes covered. Of note, some of these coinvestigators, having executive roles in relevant clinical professional bodies, advised our team on the logistics of circulating the survey to members. The administration of the survey was coordinated with the pertinent societies. Our aim was to provide an overall portrait across Canada of the extent of the harms to cervical cancer screening and treatment processes at the beginning of the COVID-19 pandemic (specifically a snapshot from mid-March to mid-August 2020), as perceived by professional groups in multiple health disciplines.
Indeed, as the reviewer mentioned, there are fully (i.e., for Saskatchewan) and partially (i.e., for British Columbia, Alberta, Manitoba, Ontario) organized cervical cancer screening programs in Canada in addition to opportunistic programs (i.e., for North West Territories, Yukon, Nunavut, Quebec). The Canadian Partnership Against Cancer also collects information on cervical cancer screening programs and/or strategies across Canada. Using data from these different sources enables a quantitative assessment of the impact of the pandemic on cervical cancer screening, but this was not the research methodology used; the survey approach was our research strategy as we attempted to collect responses from all provinces and territories, regardless of the different screening programs and modalities implemented across the country, and including regions that do not have an official screening program.
Since the effects of the COVID-19 pandemic will stay with us for years to come, our research team is also examining – using a “hard data” approach via administrative healthcare datasets – the long-term effects that will accrue on cervical cancer morbidity and mortality from the interruptions and delays in screening processes and other activities in the process of care. A discussion of this is, however, beyond the scope and objectives of our manuscript.
No modifications were made in the manuscript to address this comment.
Comment 2: The authors claim they want to "capture modifications". However, the suggestions that come from this study are limited and are submitted for publication 2 years after the survey, when the height of the pandemic has long since passed and its burden on the screening program has largely disappeared. The value of the study would have been greater if either the conclusions had been communicated almost directly, or if the survey had been done later, to sum up the total effect of the pandemic on the Canadian cervical screening program.
We appreciate this comment. As part of our commitment to transparency, we now plainly acknowledge that considerable time (1.5 years) has elapsed between the time the survey data were available (March 2021) and manuscript submission (September 2022) for publication in the special issue, curated by eLife, on the impact of the COVID-19 pandemic on cancer prevention, control, care and survivorship. However, we also argue that this lag time is reasonable given the undertaking of data management, analysis, and reporting of a large amount of data, including the synthesis of replies to open-ended questions. We also took this opportunity to expose two graduate students to the research process.
Changes made: Page 15, Lines 437-440.
In terms of assessing the total effect of the pandemic on the Canadian cervical screening program, this work is in progress, but not within the current manuscript. The PubMed references mentioned above show examples of directions we are taking. Also, as mentioned in our response to comment 1, we will use data from administrative healthcare datasets (medical and drug claims, hospitalization data, death registry data) and hospital cancer registries (clinical characteristics such as cancer stage, grade, and biomarkers) on cancer patients diagnosed in Quebec between 2010 and 2026. Using these datasets, we intend to compare the pre- and post-pandemic eras in order to analyze changes in patterns of cancer care, cancer prognosis, and survival, including shifts in stage at diagnosis.
Comment 3: Another major problem with this study is the coverage. The results of persistent efforts to secure a large uptake are somewhat depressing, although this is not expressed by the authors. 510 professionals filled out the survey partially or in total. 10 professions were targeted. The authors make no attempt to assess the coverage or the validity of the sample. They state the method used does not make that possible. But the number of family practitioners, colposcopists, cytotechnicians, etc. involved in the program should be roughly known, and the proportion of those who answered the survey could have been calculated. My guess is that it is far below 10%.
There were no extensive additional efforts to increase the participation rate, apart from follow-up reminder emails to complete the survey, which is the standard practice followed by the societies that administered the survey to their constituents. We respectfully disagree with the reviewer concerning coverage being a major limitation, particularly in view of the difficulty in general of securing a high response rate in a survey such as ours, at a time like the middle of the pandemic. Although the classic non-response rate appears seemingly easy to compute, information on the "population of interest" (i.e., the number of professionals approached, in addition to the advertisement of the survey on social media platforms) is not available to estimate the extent of non-response. Even if the response rate is below 10% as suggested by the Reviewer, our survey and findings should be considered on their merits; the target population was involved in the survey design to ensure the validity of coverage of the questions along the continuum of care in cervical cancer screening and treatment. In addition, we followed the Checklist for Reporting Results of Internet E-surveys to inform the design, conduct, and reporting of our survey research.
Changes made: Page 14, Lines 421-425.
Comment 4: The national distribution seems skewed despite the authors boosting its pan-Canadian character. I am just faintly familiar with the Canadian regions, but, as an example, only 2 replies from Quebec must question the national validity of this survey.
We apologize for this typographical error in Table 1; many cells were accidentally shifted down (the last couple of provinces had the wrong numbers). There were actually 21 survey respondents from the province of Quebec. This has now been corrected.
Changes made: Page 19.
Comment 5: The result section is dominated by quantitative data from the responses to the 61 questions. All questions and their answers are tabulated. As there is no way to assess the selection bias of the answers, these quantitative results have no real value from an epidemiological standpoint.
Indeed, we opted to provide the reader with descriptive results on all the questions and sub-questions that were asked, with explicit annotation to each question number and clear reference to the formulated question by appending the full survey instrument to the manuscript. We designed the survey as a descriptive and not an analytical study, contrary to traditional epidemiology studies that investigate a specific exposure-outcome relationship.
Changes made: Page 12, Lines 366-368.
In the spirit of other papers in the special issue on COVID-19 and cancer, curated by eLife, we measured the impact of the pandemic on the process of care like many other eLife articles did. The eLife collection is a snapshot of a period when not only was cancer control disrupted, but the ability to conduct valid research was also severely curtailed. The reviewer will likely agree that our paper is not the only one to suffer from these methodological shortcomings. Yet, taken together, the gestalt value of the eLife collection will inform epidemiologic modellers for the next long while on how this period affected cancer control. We are happy to contribute with this paper a few more pieces of the puzzle, adding to that which eLife published for many other jurisdictions.
Comment 6: The replies to the open-ended questions are summarized in a table and in the text. The main conclusion of the content analysis of the answers to the direct questions, and one of the main conclusions of the study, is that the majority favors HPV self-sampling in light of the pandemic. However, this not-surprising view is taken by only 80 responders while almost as many (n=60) had no knowledge about HPV self-sampling.
Another aim of our survey was to identify the windows of opportunity that were created by the pandemic and pinpoint positive aspects that could enable the transformation of cervical cancer screening (i.e., HPV primary based screening and HPV self-sampling). We found that 33% of respondents were of the opinion that the pandemic context could facilitate the implementation of self-sampling and that 50.1% were in favor of the implementation of this new screening practice (described in Results Theme 1: Screening Practice and Supplementary Table 5).
Changes made: Page 4, Lines 93-97.
The reviewer is correct that in the open-ended sub-question of Question 23, “Are you in favor of the implementation of HPV self-sampling as an alternative screening method in your clinical practice?”, 60 respondents justified their answer to the nominal question by their lack of familiarity with HPV self-sampling, compared to 80 who shared positive comments. However, we would like to draw the reviewer’s attention to the responses to the nominal part of the question in Supplementary Table 5. Of those who answered “Maybe”, 47.1% said that they were not familiar enough to express a favorable or unfavorable opinion. We would also like to draw the reviewer’s attention to the results of our cross-tabulation of profession and the question of relevance (described in Results Theme 1: Screening Practice). The lack of familiarity with novel screening practices such as self-sampling can be explained by the fact that most (75.0%) of those who expressed these views were primary healthcare professionals, not secondary and tertiary specialists.
Changes made: Page 12, Lines 344-346
Comment 7: The authors conclude that their study identified the need for recommendations and strategies and building resilience in the screening system. No one would dispute the need, but the additional weight this study adds, unfortunately, is low, from a scientific standpoint.
Although no one would dispute the need, as the reviewer suggests, as epidemiologists we still needed to collect this empirical evidence. We urge the reviewer to consider that this article contributes to a more complete picture of the collective process of discovering the impact of the pandemic, initiated by eLife’s special issue.
No modifications were made in the manuscript to address this comment.
Comment 8: The conclusion I draw from this study is that the authors have done a good job in identifying some possible areas within the Canadian screening programs where the SARS-Cov2 pandemic had negative effects and received some support for that in a survey. Furthermore, they listed a few actions that could be taken to alleviate the vulnerability of the program in a future similar situation, and received limited support for that. No more, no less.
We thank the Reviewer for the positive feedback provided in the first part of the comment. As for the rest, we believe we have addressed above the reviewer’s concerns.
Reviewer #2 (Public Review):
The study aimed to provide information on the extent to which the COVID-19 pandemic impacted cervical cancer (CC) screening and treatment in 3 Canadian provinces. The survey methodology is appropriate, and the results provide detailed descriptive statistics by province and type of practice. The results support the authors' conclusions. This evidence together with data gathered from other national surveys may provide baseline data on the impact of the pandemic on CC outcomes such as late-stage diagnoses and CC treatment outcomes due to these delays.
We are flattered by the Reviewer’s overall assessment of our manuscript.
Comment: This study relies mostly on descriptive statistics and open-ended questions that provide details about what CC screening and treatment procedures were delayed. It is unclear how the reader would use the results to affect current or future practice.
As mentioned in our reply above to a similar comment raised by reviewer 1, our overarching aim was to portray in a purely descriptive manner the negative and positive impacts of the COVID-19 pandemic on cervical cancer screening-related activities, as perceived by healthcare professionals. Please refer to arguments above.
Changes made: Page 12, Lines 366-368; Page 15, Lines 437-440.
The strongest methods widely available are those that support the WebAuthn secure authentication standard. These methods include physical security keys, as well as personal devices that support technologies such as Windows Hello or Face ID/Touch ID.
White students need to hear those perspectives as well, just as straight and cisgender students need to read LGBTQ+ stories. This is because students need not just mirrors but also windows into other cultures, as Dr. Rudine Sims Bishop notes in her essay “Mirrors, Windows and Sliding Glass Doors.”
“The prevailing method of writing Windows programs in 1990 was the raw Win32 API. That meant the 'C' Language WndProc(), giant switch case statements to handle WM_PAINT messages. Basically, all the stuff taught in the thick Charles Petzold book. This was a very tedious and complex type of programming. It was not friendly to a corporate ‘enterprise apps' type of programming,”
Researchers GES and S-EB reviewed each video trial (regardless of citizen scientists’ reports to the daily survey) using video-viewing software (e.g., Windows Media Player), indicating at what timestamp the participating cat sat on/in or near a stimulus over the course of the trial. Cats were considered “participant” if they sat or stood within the contours of a stimulus with all limbs for at least three seconds.
This part of the article is difficult to understand; I think I would need more context about what relevance the type of software used has in order to see why this detail was included.
Then enter values for the following:
primary-host, backup-host – hostnames or IP addresses of the hosts that will run the Config Servers
primary-port, backup-port – TCP ports for the Config Servers; enter 45000 for both
Clarification is needed:
For environments with 3 config servers, 1 primary and only 1 backup host must be specified.
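As a sketch, the resulting settings might be represented and sanity-checked as below. The host names are hypothetical, and the `validate` helper is not part of the product; only the parameter names and the 45000 port value come from the text above.

```python
# Hypothetical Config Server settings; host names are made up for
# illustration, and only port 45000 is taken from the instructions above.
config = {
    "primary-host": "cfg-primary.example.com",
    "backup-host": "cfg-backup.example.com",
    "primary-port": 45000,   # TCP port for the primary Config Server
    "backup-port": 45000,    # the text says to enter 45000 for both
}

def validate(cfg):
    """Check that both hosts are set and both ports carry the expected 45000."""
    hosts_ok = bool(cfg["primary-host"]) and bool(cfg["backup-host"])
    ports_ok = cfg["primary-port"] == cfg["backup-port"] == 45000
    return hosts_ok and ports_ok
```

A check like this would catch the clarified constraint above (exactly one primary and one backup host must be given) before the servers are started.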
Try the Tool on Different Devices and Browsers
Test the tool on the operating system(s) (e.g., Windows, iOS, Android) your students will use, either at home or in class, to access it. For web-based digital tools, test whether the tool works on different browsers (e.g., Firefox, Safari, Chrome, Edge).
I never thought about this before, and I have never tried it. I had not considered that there might be restrictions for some devices and browsers.
Does the tool or app provide user instruction? Is the instruction easy to follow, multimodal, and interactive?
Is the interface clear (simple, no distracting information or elements), easy to navigate, and ad-free?
Can students collaborate and share their work in the tool?
For Apps: Does the app have Windows, iOS, and/or Android versions?
For Digital Tools: Is the tool website responsive on mobile phones? Can it be used on multiple browsers?
Does the tool or app provide free diverse characters/icons? Does it support multiple languages?
What is the cost of the tool? Does it require any extra hardware?
I think these are very important and I want to highlight them for others as well as keep them around for myself for when I need to come back to look at them!
Another great feature of ARC Browser is the option to split windows.
This sounds a little bit like Vivaldi :).
Author Response
Reviewer #1 (Public Review):
This is thorough, quantitative microbial ecology research on one of the most important problems of species coexistence in infection biology. The intermediate disturbance hypothesis is supported once again, and they show unsurprisingly that nutrition matters for their ratio of coexistence, but more specifically as a novel function of the ratio of metabolic fueling to reproductive rate, which the authors term absolute growth. I like this study for its care and completeness even though the results are fairly intuitive to those in the field of cystic fibrosis microbial ecology.
We would like to thank the reviewer for acknowledging the importance, care, and completeness of our original manuscript. We have continued to employ our standards of rigor for this revision.
Reviewer #2 (Public Review):
The authors present a manuscript that addresses an important topic of bacterial co-existence, specifically modeling infection-relevant scenarios to determine how two highly antibiotic-resistant pathogens will develop over time. Understanding how such organisms can persist and tolerate therapeutic interventions has important consequences for the design of future treatment strategies.
We would like to thank the reviewer for acknowledging the importance of our work.
A major strength of this paper is the methodical approach taken to assess the dynamics between the two bacterial species. Using carbon sources to regulate growth to test different community structures provides a level of control to be able to directly assess the impact of one dominant pathogen over another.
The modeling aspect of this manuscript provides a basis for testing other disturbances and/or the impact of additional incoming pathogens. This could easily be applied to other infection settings where multiple microbes are observed (for example, viral/bacterial interactions in the lung).
Thank you for acknowledging the rigor in our experimental and modeling approaches.
The authors clearly show that by altering the growth rate and metabolism of various carbon sources, population structure can be modified, with one out-competing the other. Both modeling and experimental approaches support this.
The exploration of the role of virulence factors is less clear, for example how strains unable to produce virulence factors are impacted in regard to their overall growth and whether S. aureus is able to sense virulence factors without transcriptional assays here. Although the hypothesis is strong, the experimental data does not fully support this conclusion.
In addressing your comments below, we hope that we have increased your confidence in our hypotheses presented in our manuscript as it pertains to the involvement of virulence factors.
Spatial disturbance has a significant impact on community structure. Although using one approach to assess this, it is not clear if the spatial structure is impacted without the comparable microscopy evaluation.
We have indeed acknowledged this shortcoming in our revised manuscript. In the discussion, we write:
“While we did not explicitly quantify spatial organization experimentally owing to technical limitations of our microplate reader and microscope setups, in theory, co-culture in an undisturbed condition should facilitate the creation of spatial organization.”
In fact, we would really like to be able to track the position of each bacterium during shaking events. However, the plate reader cannot accommodate a microscope setup. While we could remove the plate from the plate reader and transport it to the microscope (two floors down), we cannot be certain that the positions of the bacteria would not be altered during transport. We have thought about fixing the bacteria in place prior to transport; however, the injection of liquid for the purposes of fixation would likely alter the positioning of the bacteria. Thus, we chose a modeling approach using an agent-based model that is parameterized based on our experimental approach. Accordingly, we agree that this is a limitation of our current study. We hope that acknowledging this limitation in the discussion sits well with the reviewer.
Overall this paper highlights the use of modeling approaches in combination with wet lab experiments to predict microbial interactions in changing environments.
Reviewer #3 (Public Review):
This is an intriguing manuscript with a rigorous experimental and computational methodology looking at the interaction of Pseudomonas aeruginosa (Pa) and Staphylococcus aureus (Sa). These two pathogens frequently co-habit infections but in standard liquid media often show a winner-take-all outcome. This study seeks to be mechanistically predictive as to the outcome of the co-culture based on the addition of specific carbon sources as filtered through the lens of metabolic efficiency or, as the authors term - absolute growth. Overall, the study is sound, but there are some specific caveats that I would like to present:
We would like to thank the reviewer for acknowledging the rigor of our work.
1) The study undersells the knowledge in the literature of what allows or prohibits the stability of the Pa and Sa co-cultures. While most of the correct papers are cited, the outcomes of those studies are downplayed in favor of the current predictive study. While the current study is indeed more "predictive", it strays exceedingly far from an infection-relevant media, whereas other studies show reasonable co-existence in host-relevant media.
We have addressed this comment two different ways. First, we have included an entire paragraph in the discussion that acknowledges previous work and how our results fit into previous findings. We write:
“Given the clinical importance of co-infection with both P. aeruginosa and S. aureus, multiple previous studies have identified mechanisms of co-existence. Indeed, long-term co-existence of both species can result in physiological changes that reduce their competitive interactions. Strains of P. aeruginosa isolated from patients that enter into a mucoid state show reduced production of siderophores, pyocyanin, rhamnolipids and HQNO, which facilitates the survival of S. aureus [23, 24]. These strains can also overproduce the polysaccharide alginate, which in itself is sufficient to decrease the production of these virulence factors. Moreover, exogenously supplied alginate can reduce the production of pyoverdine and expression from the PQS quorum sensing system, which is responsible for the production of HQNO [25]. Changes in the physiology of S. aureus can also facilitate co-existence. Strains of S. aureus isolated from patients with cystic fibrosis show multiple changes in the abundance of proteins including superoxide dismutase, the GroEL chaperone protein, and multiple surface-associated proteins [26]. Interestingly, the majority of proteins that show changes in abundance in S. aureus are related to central metabolism, which is consistent with our findings demonstrating that metabolism can influence the co-existence of both species. While it is unclear as to how long-term co-culture would affect the ratio of absolute growth, our findings provide an additional mechanism that can determine the co-existence of these bacterial species.”
Second, as noted in our response in the ‘essential revisions’ section, we have tested the relationship between the final density ratio and the absolute growth ratio in SCFM medium, which we believe is host relevant. Our findings were fully consistent with the trends that we saw in our original submission. This data is presented in Fig. 3 and Figure 5 – figure supplement 3.
2) The major weakness in the ability of this study to be extrapolatable to infection conditions is the basal media selected for this analysis. The authors choose TSB, which is an incredibly rich media from the start, and proceed to alter only 11% of the available carbon (per mass) with their carbon source manipulations. This suggests an underappreciation for the amino acid metabolism routes of these two pathogens that are taking advantage of the roughly 89% of carbon as amino acid content in the TSB components of tryptone and soytone (17g and 3g, respectively vs the 2.5g carbon source). There are a few major issues with this basal formulation:
a) Comparison to all extant literature on Pa - The media historically used to assess Pa include (rich) LB, BHI, MH; (minimal) MOPS, M63, M9; (host-associated) ASM, SCFM, SCFM2, Serum, and DMEM. TSB is not a historically evaluated formulation for Pa (though it is often for non-mammalian pathogenic Pseudomonads and environmental species). Thus, this study is not inherently integrated into the Pa literature and presents an offshoot study for which a direct connection to extant literature is difficult. Explicitly testing these predictions in the most minimal media possible and then in a host-relevant model would be optimal.
We would truly like to thank the reviewer for their rigor in reviewing our manuscript. We, admittedly, overlooked how amino acids might be influencing the growth of P. aeruginosa in TSB medium. We originally chose TSB medium as previous studies that have examined the co-culture of S. aureus and P. aeruginosa, or their mechanisms of interaction, have used this medium (e.g., [29-34]).
To address this comment directly, we grew co-cultures in AMM minimal medium. This medium, to our knowledge, is the only minimal medium that allows growth of S. aureus. We, and others, have not reported growth of S. aureus in M9 or MOPS minimal medium despite the addition of components such as casamino acids and increases in the concentration of thiamine.
While AMM as reported is quite complex relative to media such as MOPS and M9, we removed several vitamins (nicotinic acid, thiamine, calcium pantothenate, biotin), decreased the concentration of some salts, used a low concentration of casamino acids (0.01%), and used a higher concentration of carbon source (0.04%). In doing so, we hoped to reduce any ‘background effect’ of media components and thus absolute growth could be driven more by carbon source.
Importantly, in using AMM medium, we continue to find a strong and significant relationship between the final density ratio and the absolute growth ratio. This data is presented in Figure 3 and described in a standalone paragraph in the Results, along with our findings using SCFM.
b) TSB is not remotely host-relevant. The Whiteley lab has done monumental work evaluating in vitro models that mimic human infection (scrupulously matching transcriptomes) and TSB is about as far as you can get. Thus, the ability to extrapolate from the current work to infection without testing in host-relevant media is limited.
As noted above, we repeated our core experimental analysis in SCFM. The results are fully consistent with our original submission. This data is presented in Figure 3 and in Figure 5 - figure supplement 3.
c) The experimental situation has a component that is both good and bad- O2 tension. By overlaying with mineral oil, the authors immediately bias Staph (a more versatile fermenter) to success, whereas Pa deals with most of these carbon sources better at body level or higher O2 levels. The benefit of this is that many of the infection sites in which these two species co-occur are low in O2.
This was an interesting observation that we have partially addressed experimentally and acknowledged in the discussion.
First, we acknowledged the limitations of our experimental approach as it pertains to O2 levels in the discussion as follows:
“We note that our findings may be relevant to infections occurring in both high and low O2 environments. While P. aeruginosa is limited in its ability to perform fermentation [35], we have provided evidence that the absolute growth ratio can affect community composition in both aerobic (Figures 2-5) and more anaerobic environments (Figure 2 - figure supplement 1, panel H). The limited ability of P. aeruginosa to grow in anaerobic environments was apparent in SCFM, as we could not obtain reliable or robustly quantifiable growth of this bacterium when succinate or α-ketoglutarate was provided as a carbon source.”
Second, we tested the effect of placing mineral oil over top of the co-culture experiments, thus increasing the anaerobic nature of the environment. We found that, in general, as the ratio of absolute growth increased, so did the dominance of P. aeruginosa in the growth medium. This new data is presented in Figure 2 - figure supplement 1, panel H.
Taken together, we hope that these two modifications meet the Reviewer’s expectations.
d) Some of the tested metabolites are osmotically active (sucrose), while others are not (acetate), confounding the interpretation of what absolute metabolism means in the context of this study since the concentrations of all tested metabolites vary from above to below physiologic-dependent on the metabolite. A much better approach would have been to vary a single metabolite or combination to alter 'absolute metabolism' and test whether the stability of the co-culture held.
e) The manuscript never goes into the fact that for some of these "the carbon source" sources, they are catabolite repressed compared to the basal TSB amino acids (or not). Both organisms show exquisite catabolite repression control, yet this is not addressed at all within the text of the manuscript. Since this response in both organisms is sensitive to relative proportions of the various C-sources, failure to vary C-sources or compare utilization compared to the massive excess tryptone and soytone in the media makes the 'absolute metabolism' difficult to interpret.
To address comments d and e, and to acknowledge the potential limitations of our findings, we have included the following in the discussion. In this paragraph, we acknowledge the osmotic activity of the different carbon sources and preferential consumption of amino acids in TSB medium.
“One drawback of our approach in using different carbon sources to manipulate absolute growth is that some carbon sources are osmotically active, whereas others are not, which could have additional physiological effects on the bacteria beyond changing growth and metabolism. Moreover, both species of bacteria have different carbon source preferences; as noted above, S. aureus tends to prefer carbon sources such as glucose [36] whereas P. aeruginosa prefers organic and amino acids [37]. Given the carbon source preferences of each species, in a complex medium such as TSB, there is the potential that P. aeruginosa consumes amino acids prior to consuming the supplied carbon source. This is perhaps less of a concern in AMM medium or SCFM, where the concentration of amino acids and additional nutrient components is reduced as compared to TSB medium. Along this line, it is certainly worth investigating how each nutrient component and its ordered utilization by both species contributes to changes in absolute growth. Minor or transient changes in absolute growth owing to preferential nutrient consumption may provide windows of opportunity for one species to increase its relative density over the other.”
f) The authors left out the 'favorite' sources of Pa that are known to be relevant in vivo - the TCA intermediates: citrate, succinate, fumarate (and directly relevant to host-pathogen interactions, itaconate)
We have included the analysis of succinate as a carbon source in both TSB medium (Figs. 1 and 2) and AMM medium (Fig. 3). However, we could not achieve a reliable or quantifiable growth rate of P. aeruginosa in SCFM medium supplemented with succinate in our experimental setup. Accordingly, this carbon source was not used in SCFM.
3) Statistics: Most of the experiments presented are comparisons in which there are more than two experimental groups and the t-tests employed therefore need to be corrected for multiple comparisons. The standard way to do this is to employ an ANOVA with the appropriate multiple-comparison-corrected post-test. These appear to be appropriate for Dunnett's post-testing but the comparator group is not directly defined within the figure legends. Multiple comparison testing is critical for this analysis, as the H0 is that all are the same - the more samples potentially pulled from the same distribution will result in a higher likelihood that one or more will appear as from a distinct population (i.e. H0 rejected). Multiple comparisons correct for this and are absolutely critical for the evaluation of the data presented in this manuscript.
We have addressed this comment two different ways.
First, where there was a clear control group, we performed either a Dunnett's test (for normally distributed data) or a Dunn's test (for non-parametric data sets) following an ANOVA or a Kruskal-Wallis test, respectively. These tests were applied to the data presented in Figures 2B and 5H (top and bottom panels) and in Figure 2 - figure supplement 1, panels K-L.
Second, we did not broadly perform multiple comparisons across all data sets, because that approach would test the significance of relationships that are not relevant to the central premise of the manuscript. For example, a multiple comparison for Figure 1B would test the growth rate on every carbon source against every other carbon source, whereas we are only interested in whether S. aureus or P. aeruginosa grows faster than the other. Nevertheless, we do understand the need for corrected P values to reduce the occurrence of Type I errors. To accomplish this, we applied the Benjamini-Hochberg procedure [38] with an 8.5% false discovery rate to all P values in the manuscript, including those that tested the distribution of data. This reduced the P value indicating significance to < 0.0472. We have updated all claims and indications of significance in the figures based on this adjusted P value.
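For readers unfamiliar with it, the step-up procedure described above can be sketched as follows. This is a minimal illustration with made-up P values, not the manuscript's actual data or code; the 8.5% rate mirrors the text.

```python
# Minimal sketch of the Benjamini-Hochberg step-up procedure.

def benjamini_hochberg(p_values, q=0.085):
    """Return the largest significant P value and the indices of significant tests.

    q is the target false discovery rate (8.5% here, mirroring the text).
    """
    m = len(p_values)
    # Sort P values ascending while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    threshold = 0.0
    for rank, idx in enumerate(order, start=1):
        # The largest rank k with p_(k) <= (k/m) * q defines the cutoff.
        if p_values[idx] <= (rank / m) * q:
            threshold = p_values[idx]
    significant = [i for i in range(m) if p_values[i] <= threshold]
    return threshold, significant

# Hypothetical P values for illustration only.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.916]
cutoff, hits = benjamini_hochberg(pvals)
```

With a full set of manuscript P values in place of `pvals`, the cutoff returned by such a procedure is what yields a data-dependent significance threshold like the < 0.0472 reported above.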
4) The authors missed including Alves et Maddocks 2018 in relation to priority effects and other contributing factors to stable Pa/Sa co-culture.
We have indeed included this manuscript and its findings in the introduction where we write:
“While S. aureus can initially aid in the establishment of the P. aeruginosa population [8], production of N-acetylglucosamine from S. aureus augments…..”
Create a directory symbolic link
When introduced, symbolic links were used only for directory links; support was later extended to all files.
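The note above concerns directory symbolic links and their later extension to files. A minimal sketch of both kinds (all paths are hypothetical and created in a temporary directory):

```python
# Minimal sketch: create a directory symbolic link, then a file symbolic
# link, mirroring the note that symlinks applied first to directories and
# were later extended to all files. Paths are hypothetical.
import os
import tempfile

base = tempfile.mkdtemp()
target_dir = os.path.join(base, "real_dir")
os.mkdir(target_dir)

link_dir = os.path.join(base, "dir_link")
# target_is_directory matters on Windows, where directory and file
# symbolic links are distinct kinds of reparse point.
os.symlink(target_dir, link_dir, target_is_directory=True)

target_file = os.path.join(base, "real_file.txt")
with open(target_file, "w") as fh:
    fh.write("hello")

link_file = os.path.join(base, "file_link.txt")
os.symlink(target_file, link_file)
```

On Windows, creating symbolic links may additionally require administrator rights or Developer Mode; on POSIX systems the `target_is_directory` flag is ignored.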
There dwell the Sea King and his subjects. We must not imagine that there is nothing at the bottom of the sea but bare yellow sand. No, indeed; the most singular flowers and plants grow there; the leaves and stems of which are so pliant, that the slightest agitation of the water causes them to stir as if they had life. Fishes, both large and small, glide between the branches, as birds fly among the trees here upon land. In the deepest spot of all, stands the castle of the Sea King. Its walls are built of coral, and the long, gothic windows are of the clearest amber. The roof is formed of shells, that open and close as the water flows over them. Their appearance is very beautiful, for in each lies a glittering pearl, which would be fit for the diadem of a queen.
This is beautiful imagery.
Author Response:
Reviewer #1 (Public Review):
Roberts et al have developed a tool called "XTABLE" for the analysis of publicly available transcriptomic datasets of premalignant lesions (PML) of lung squamous cell carcinoma (LUSC). Detection of PMLs has clinical implications and can aid in the prevention of deaths by LUSC. Hence efforts such as this will be of benefit to the scientific community in better understanding the biology of PMLs.
The authors have curated four studies that have profiled the transcriptomes of PMLs at different stages. While three of them are microarray-based studies, one study has profiled the transcriptome with RNA-seq. XTABLE fetches these datasets and performs analysis in an R shiny app (a graphical user interface). The tool has multiple functionalities to cover a wide range of transcriptomic analyses, including differential expression, signature identification, and immune cell type deconvolution.
The authors have also included three chromosomal instability (CIN) signatures from the literature based on gene expression profiles. They showed one of the CIN signatures to be a good predictor of progression. However, this signature performed well in only one study. The authors have further utilised the tool XTABLE to identify the signalling pathways in LUSC important for its developmental stages. They found the activation of squamous differentiation and PI3K/Akt pathways to play a role in the transition from low- to high-grade PMLs.
The authors have developed user-friendly software to analyse publicly available gene expression data from premalignant lesions of lung cancer. This would help researchers to quickly analyse the data and improve our understanding of such lesions. This would pave the way to improve early detection of PMLs to prevent lung cancer.
Strengths:
1. XTABLE is a nicely packaged application that can be used by researchers with very little computational knowledge.
2. The tool is easy to download and execute. The documentation is extensive both in the article and on the GitLab page.
3. The tool is user-friendly, and the tabs are intuitively designed for successive steps of analysis of the transcriptome data.
4. The authors have properly elaborated on the biological interest in investigating PMLs and their clinical significance.
Weaknesses:
The article is focused on the development and the utility of the tool XTABLE. While the tool is nicely developed, the need for a tool focussing only on the investigation of PMLs is not justified. Several shiny apps and online tools exist to perform transcriptomic analysis of published datasets. To list a few examples - i) http://ge-lab.org/idep/ ; ii) http://www.uusmb.unam.mx/ideamex/ ; iii) RNfuzzyApp (Haering et al., 2021); iv) DEGenR (https://doi.org/10.5281/zenodo.4815134); v) TCC-GUI (Su et al., 2019). While some of these are specific to RNA-seq, there are plenty of such shiny apps to perform both RNA-seq and microarray data analysis. Any of these tools could also be used easily for the analysis of the four curated datasets presented in this article. The authors could have elaborated on the availability of other tools for such analysis and provided an explanation of the necessity of XTABLE. Since 3 of the 4 datasets they curated are from microarray technology, another good example of a user-friendly tool is NCBI GEO2R. This is integrated with the NCBI GEO database, and the user doesn't need to download the data or run any tools. iDEP-READS (http://bioinformatics.sdstate.edu/reads/) provide an online user-friendly tool to download and analyse data from publicly available datasets. Another such example is GEO2Enrichr (https://maayanlab.cloud/g2e/). These tools have been designed for non-bioinformatic researchers that don't involve downloading datasets or installing/running other tools.
Two of these tools (IDEP and TCC-GUI) were covered in a literature review of 20 Shiny apps performed two years ago, prior to the start of work on XTABLE. Three of the suggested tools (IDEP, RNFuzzyApp, TCC-GUI) process only RNA-seq datasets. IDEAMEX appears to be for RNA-seq data only and is severely limited in its downstream analysis capabilities. DEGenR appears to handle microarray datasets and features an option to retrieve data directly from GEO. However, it appears to be based on GEO2R (with additional downstream analyses), it automatically log-transforms already log-transformed data, and, unlike GEO2R, it offers no option to skip the log-transformation. A refreshed literature search focusing on microarray datasets highlighted three additional tools: iGEAK, which hasn't been updated in three years and seems to have compatibility issues on new Windows and Mac machines; sMAP, an upcoming Shiny app for microarray data published in bioRxiv on 29 May 2022; and MAAP, which has the same issue of log-transforming already log-transformed data. iDEP-READS does not list the datasets used in XTABLE. GEO2Enrichr appears to require the counts table and experimental design in one file, performs a "characteristic direction" DEG test, and outputs enriched pathways. These apps require not just downloading the datasets but also reformatting and renaming the expression data files and creating additional files to set up the DEG analysis, which is not practical for the number of samples we have (122, 63, 33, 448), even if these apps handled microarray data. XTABLE also incorporates AUC metrics, which are appropriate given the number of samples in each dataset, and a DEG test known for adequately controlling the FDR, neither of which is seen in other apps, as well as an emphasis on individual gene results and interrogation.
A new paragraph in the discussion section (lines 361-370) addresses the potential use of existing applications instead of XTABLE.
Secondly, XTABLE doesn't provide a solution to integrate the four datasets incorporated in the tool. One can only analyse one dataset at a time with XTABLE. The differences in terms of methodology and study design within these four datasets have been elaborated on in the article. However, attempts to integrate them were lacking.
We repeatedly considered different strategies for integrating the analysis of the four datasets, and we always reached the conclusion that integration would offer little advantage and might even be counterproductive.
Integration can occur at multiple levels. One possibility is to carry out the same analysis (e.g. expression of a given gene in two groups of samples) in all datasets. Since the design and methodologies of the four studies differ substantially (different stages, different definitions of progression status, etc.), a single stratification for all datasets is not possible. Moreover, interrogating the four datasets simultaneously would slow the analysis without offering a compensating advantage. Another possibility is the integration of results in the same output, for instance a single chart with the expression of a given gene in multiple subgroups of the four datasets. Because of the differences in design, we think the results from each cohort should be kept separate and then compared with similar analyses from the other datasets. Scientifically, this is the best way to proceed, as it avoids confusion.
Nevertheless, XTABLE allows the export of data for further analysis. The user can use this option to integrate data using other applications or statistical packages.
We do understand how attractive integration of the four datasets is, and we seriously considered it. But there is a fine balance between user-friendliness, flexibility, and scientific rigour, and we think that XTABLE achieves this balance. Increasing the integration of datasets might lead to errors and wrong conclusions due to biological and methodological differences between studies. We believe that comparing analyses obtained independently from the four cohorts is the most sensible way to proceed.
We propose to discuss these aspects accordingly.
The integrative analysis of two or more datasets has been discussed in a new paragraph (lines 382-391).
The tool also lacks the flexibility for users to add more datasets. This would be helpful when there are more datasets of PMLs available publicly.
This was also a permanent topic of discussion while designing XTABLE. Creating a tool that could be used to analyse other cohorts of precancerous lesions, while maintaining the ease of use, was certainly a challenge. We had to adapt XTABLE to the characteristics of each of the four databases: specific stratification criteria, different nomenclatures for the different sample types, etc. Designing a Shiny app that can be adapted to other present or future datasets without the need to change the code is simply not practical.
The flexibility that these other Shiny apps provide for analysing any RNA-seq dataset requires the contrasts used for the differentially expressed gene analysis to be defined manually. IDEP requires an experimental design file whose sample names must exactly match those in the counts file, and its pre-processing visualisation is limited to the first 100 samples. RNFuzzyApp is similar, but we could not format the experimental design file in a way that did not crash the app upon upload. TCC-GUI requires all sample names to be renamed to the contrast group plus the replicate number. In short, apps that allow datasets to be uploaded have no practical or easy way to set up the DEG analysis for more than a couple of dozen samples.
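To illustrate the bookkeeping these apps push onto the user, here is a minimal sketch of checking a counts table against an experimental-design file and generating "group_replicate" sample labels of the kind described above. The file layouts and column names here are hypothetical illustrations, not any specific app's actual format:

```python
import csv
import io

def check_design(counts_csv: str, design_csv: str):
    """Check that every sample column in a counts table has a matching row
    in a design table, and build '<group>_<replicate>' labels of the kind
    some apps require samples to be renamed to (hypothetical layouts)."""
    # First row of the counts table: gene-id column followed by sample names
    counts_samples = next(csv.reader(io.StringIO(counts_csv)))[1:]
    design = {row["sample"]: row["group"]
              for row in csv.DictReader(io.StringIO(design_csv))}
    missing = [s for s in counts_samples if s not in design]
    reps, labels = {}, {}
    for s in counts_samples:
        if s in design:
            group = design[s]
            reps[group] = reps.get(group, 0) + 1
            labels[s] = f"{group}_{reps[group]}"
    return missing, labels
```

Doing this kind of matching and renaming by hand for the 122-448 samples per dataset is exactly the overhead described above.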
Future versions of XTABLE can be updated, upon request, to include additional curated PML datasets that would enhance hypothesis generation. Importantly, the code is freely available and can be modified by other scientists to add their cohorts of interest, although we agree that a high level of coding expertise will be needed. We propose to add these considerations to the text.
The possibilities for expanding XTABLE to new databases are discussed in lines 392-398.
Understanding the biology of PML progression would require a multi-omics approach. XTABLE analyses transcriptome data and lacks integration of other omics data. The authors mention the availability of data from whole exome, methylation, etc from the four studies they have selected. However, apart from the CIN scores, they haven't integrated any of the other layers of omics data available.
Only one dataset (GSE108104) contains whole-exome sequencing and methylation data. We considered that a multi-omics approach in XTABLE would result in an overcomplicated application. As far as early detection and biomarker discovery are concerned, transcriptomic data are the most interesting layer.
Also discussed in lines 382-391
Lastly, the authors could have elaborated on the limitations of the tool and their analysis in the discussion.
We propose to raise these limitations accordingly in the discussion.
See above.
Reviewer #2 (Public Review):
In this manuscript, Roberts et al. present XTABLE, a tool to integrate, visualise and extract new insights from published datasets in the field of preinvasive lung cancer lesions. This approach is critical and to be highly commended; whilst the Cancer Genome Atlas provided many insights into cancer biology it was the development of accessible visualisation tools such as cbioportal that democratised this knowledge and allowed researchers around the world to interrogate their genes and pathways of interest. XTABLE is trying to do this in the preinvasive space and should certainly be commended as such. We are also very impressed by the transparency of the approach; it is quite simple to download and run XTABLE from their Gitlab account, in which all data acquisition and analysis code can be easily interrogated.
We would however strongly advocate deploying XTABLE to a web-accessible server so that researchers without experience in R and git can utilise it. We found it a little buggy running locally and cannot be sure whether this is due to our setup or the code itself. Some issues clearly need development; Progeny analysis brings up a warning "Not working for GSE109743 on the server and not sure why". GSEA analysis does not seem to work at all, raising an error "Length information for genome hg38 and gene ID ensGene is not available". In such relatively complex software, some such errors can be overlooked, as long as the authors have a clear process for responding to them, for example using Gitlab issue reporting. Some acknowledgement that this is an ongoing development would be helpful.
We thank the reviewer for these comments. We will inspect the code to address those warnings, implement a system for issue reporting, and add the acknowledgements suggested by the reviewer. Regarding the deployment of XTABLE to a web-accessible server, this could present a challenge in the long term, as computing resources would need to be allocated for years and there is an economic cost involved.
The code has been inspected to remove the warning and errors pointed out by the reviewer.
The authors discuss some very important differences between the datasets in the text. Most notably they differ in endpoints and in the presence of laser capture. We would advocate including some warning text within the XTABLE application to explain these. For example, the "persistent/progressive" endpoint used in Beane et al (next biopsy is the same or higher grade) is not the same as the "progressive" endpoint in Teixeira et al (next biopsy is cancer); samples defined as "persistent/progressive" may never progress to cancer. This may not be immediately obvious to a user of XTABLE who wishes to compare progressive and regressive lesions. Similarly, the use of laser capture is important; the authors state that not using laser capture has the advantage of capturing microenvironment signals, but differentiating between intra-lesional and stromal signals is important, as shown in the Mascaux and Pennycuick papers. The authors cannot do much about the different study designs, but as the goal is to make these data more accessible, we think some brief description of these issues within the app would help to prevent non-expert users from drawing incorrect conclusions.
The authors themselves illustrate this clearly in their analysis of CIN signatures in progression potential. They observe that there is a much clearer progressive/regressive signal in GSE108124 compared to GSE114489 and GSE109743. This does not seem at all surprising, since the first study used a much stricter definition of progression - these samples are all about to become cancer whereas "progressive" samples in GSE109743 may never become cancer - and are much enriched for CIN signals due to laser capture. Their discussion states "CIN scores as a predictor of progression might be limited to microdissected samples and CIS lesions"; you cannot really claim this when "progression" in the two cohorts has such a different meaning. To their credit, the authors do explain these issues but they really should be clearly spelled out within the app.
This is a very good point. We will add warning text about the differences between studies regarding the definition of progression potential and the differences in sample processing (LCM or not), so that the user is permanently aware of the differences between cohorts.
A new tab (Dataset) has been added with a table of the methodologies used in each study and the differences in progression-status definitions. Additionally, we emphasized these differences in the main text of the manuscript (lines 296-300 and 403-409).
We are not sure we agree with their analysis of CDK4/Cyclin-D1 and E2F expression in early lesions. The authors claim these are inhibited by CDKN2A and therefore are markers of CDKN2A loss of function. But these genes are markers of proliferation and can be driven by a range of proliferative processes. Histologically, low-grade metaplasias and dysplasias all represent proliferative epithelium when compared to normal control, but most never become cancer. It is too much of a leap to say that these are influenced by CDKN2A because that gene is inactivated in LUSC; do the authors have any evidence that this gene is altered at the genomic level in low-grade lesions?
We are grateful for this comment. There is currently no evidence that CDKN2A mutations occur in low-grade lesions and, therefore, we cannot argue that the CDK4/Cyclin-D1 and E2F expression signatures are the result of CDKN2A inactivation in low-grade lesions. We propose to modify the text to introduce these caveats to our conclusion and make our interpretations more accurate.
We have modified the discussion (lines 443-454) to address the interpretation of our results regarding the connection between CDKN2A inactivation and the CDK4/cyclin-D1 and E2F signatures. We now focus our conclusions on the pathway itself and we mention Cyclin-D1 and CDKN2A alterations as a potential modulator of the changes in the pathway, but leaving the discussion open to other drivers.
Overall this tool is an important step forwards in the field. Whilst we are a little unconvinced by some of their biological interpretations, and the tool itself has a few bugs, this effort to make complex data more accessible will be greatly enabling for researchers and so should be commended. In the future, we would like to see additional molecular data integrated into this app, for example, the whole genome and methylation data mentioned in line 153. However, we think this is an excellent start to combining these datasets.
1-click Beautiful Screenshots on Windows
Interesting workflow here for taking a screenshot quickly, saving it as a file, saving it to clipboard, and sharing it to various services.
The aim of this study was to determine the exposure of employees to these trace elements, and to quantify the related pollution in the Museum. Measurements were carried out in different locations (exhibition rooms, storage rooms, windows ...) in the Museum of Rouen over two years (2013-2014).
Important study and one that I had never heard of before.
Many of the buildings on the main street are vacant, pocked by broken windows boarded up with plywood
Unfortunately a common sight in these small mining towns after their boom in the 20th century. They are often left mostly abandoned and deserted.
Sure, this is totally possible. The key is that all PE-format images (the Windows format for executable binaries, including DLLs and EXEs) have headers that contain attributes and other information about the binary itself. Microsoft's toolchain always sets fields in that header that indicate the version of the tools used to build it. So you can just dump that header and examine those fields to find out what you want to know. While there are third-party applications that can extract and pretty-print this information for you, the easiest way to get at it if you have any version of Visual Studio or the Windows SDK installed is dumpbin. Open a Visual Studio Command Prompt, type dumpbin /headers <path to your DLL>, and press Enter. You'll get a big list of header data; don't let it intimidate you, you're only interested in a couple of fields. Scroll up to the top. For a DLL, you'll see that the file type is a DLL (obviously). The first property in the "FILE HEADER VALUES" section is also sometimes interesting: it tells you whether the DLL is for a 32-bit or 64-bit machine. Then look under the next section, "OPTIONAL LINKER VALUES", for the "linker version" field. This, as I mentioned, is filled in by all Microsoft linkers with the version of the linker used to create the binary. The format is major.minor, so 14.00 is Visual Studio 2015, 10.00 is Visual Studio 2010, etc. There's a table of these version numbers on Wikipedia (the column you want is labeled "Version Number" here, since you want the version of the linker, not the version of the compiler, cl.exe). Other potentially interesting fields here are the "operating system version" and/or "subsystem version"—these will tell you which version of Windows that the binary was built to target. For example, 10.00 means Windows 10, 5.01 means Windows XP, and so on. Again, see Wikipedia for a table of Windows version numbers.
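As a cross-platform complement to dumpbin, the same header fields can be read directly, since they sit at fixed offsets in the PE headers. Here is a stdlib-only Python sketch of the layout described above (DOS header → "PE\0\0" signature → 20-byte COFF file header → optional header); it is a minimal illustration, not a full PE parser:

```python
import struct

def linker_version(data: bytes):
    """Return (machine, major_linker, minor_linker) from a PE image --
    the fields `dumpbin /headers` prints as the machine type and the
    'linker version'. Minimal sketch: no validation beyond the signature."""
    # The DOS header stores e_lfanew, the offset of the "PE\0\0" signature, at 0x3C
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("not a PE image")
    # The COFF file header follows the signature; Machine is its first field
    (machine,) = struct.unpack_from("<H", data, e_lfanew + 4)
    # The optional header starts after the 20-byte COFF header:
    # Magic (2 bytes), then MajorLinkerVersion and MinorLinkerVersion (1 byte each)
    opt = e_lfanew + 4 + 20
    major, minor = struct.unpack_from("<BB", data, opt + 2)
    return machine, major, minor
```

A result of `(0x8664, 14, 0)` would correspond to an x64 binary built with a 14.00 linker (Visual Studio 2015), matching the example in the answer above.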
Check VC++ compiler version
Author Response
Reviewer #1 (Public Review):
Demographic inference is a notoriously difficult problem in population genetics, especially for non-model systems in which key population genetic parameters are often unknown and where the reality is always a lot more complex than the model. In this study, Rose et al. provided an elegant solution to these challenges in their analysis of the evolutionary history of human specialization in Ae. aegypti mosquitoes. They first applied state-of-the-art statistical phasing methods to obtain haplotype information in previously published mosquito sequences. Using this phased data, they conducted cross-coalescent and isolation-with-migration analyses, and they innovatively took advantage of a known historical event, i.e., the spread of Ae. aegypti to South America, to infer the key model parameters of generation time and mutation rate. With these parameters, they were able to confirm a previous hypothesis, which suggests that human specialists evolved at the end of the African Humid Period around 5,000 years ago when Ae. aegypti mosquitoes in the Sahel region had to adapt to human-derived water storage as their breeding sites during intense dry seasons. The authors further carried out an ancestry tract length analysis, showing that human specialists have recently introgressed into Ae. aegypti population in West African cities in the past 20-40 years, likely driven by rapid urbanization in these cities.
Given all the complexities and uncertainties in the system, the authors have done an outstanding job of coming up with well-informed research questions and hypotheses, carrying out the analyses most appropriate to their questions, and presenting their findings in a clear and compelling fashion. Their results reveal the deep connections between mosquito evolution and past climate change as well as human history and demonstrate that future mosquito control strategies should take these important interactions into account, especially in the face of ongoing climate change and urbanization. Methodologically, the analytical approach presented in this paper will be of broad interest to population geneticists working on demographic inference in a diversity of non-model organisms.
In my opinion, the only major aspect that this paper can still benefit from is more explicit and in-depth communication and discussion about the assumptions made in the analyses and the uncertainties of the results. There is currently one short paragraph on this in the discussion section, but I think several other assumptions and sources of uncertainties could be included, and a few of them may benefit from some quantitative sensitivity analyses. To be clear, I don't think that most of these will have a huge impact on the main results, but some explicit clarification from the authors would be useful.
Below are some examples:
Thank you very much for your kind words and your feedback! We have expanded our discussion of assumptions and uncertainties – we have responded to each point below:
1) Phasing accuracy: statistical phasing is a relatively new tool for non-model species, and it is unclear from the manuscript how accurate it is given the sample size, sequencing depth, population structure, genetic diversity, and levels of linkage disequilibrium in the study system. If authors would like to inspire broader adoption of this workflow, it would be very helpful if they could also briefly discuss the key characteristics of a study system that could make phasing successful/difficult, and how sensitive cross-coalescent analyses are to phasing accuracy.
We agree that this is an important topic to expand on. We have clarified as follows:
Results, Page 4, last paragraph: “Over 95% of prephase calls had maximal HAPCUT2 phred-scaled quality scores of 100 and prephase blocks (i.e. local haplotypes) were 728bp long on average (interquartile range 199-1009bp). We then used SHAPEIT4.2 to assemble the prephase blocks into chromosome-level haplotypes, using statistical linkage patterns present across our panel of 389 individuals (25).”
Discussion, Page 8, last paragraph: “Overall linkage disequilibrium is relatively low in Ae. aegypti, dropping off quickly over a few kilobases and reaching half its maximum value within about 50kb (37); this is likely sufficient for assembling shorter, high-confidence prephase blocks into longer haplotypes in many cases. However, phase-switch errors may be common across longer distances – potentially affecting inferences in the most recent time windows. Nevertheless, the similar results we obtain using different proxy populations (and thus different input haplotype structures) for human-specialist and generalist lineages (see Figure S1) suggest that our results are robust to potential mistakes in long-range haplotype phasing.”
Discussion, Page 9, paragraph 2: “Here, we take advantage of a continent-wide set of genomes, combined with read-based prephasing and population-wide statistical phasing to develop a phasing panel that should enable future studies in Ae. aegypti with a lower barrier to entry. The same approach may work for other study organisms with similar population genomic properties; high levels of diversity are helpful for prephasing and at least moderate levels of linkage disequilibrium are important for the assembly of prephase blocks.”
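For readers unfamiliar with the Phred-scaled scores quoted above, the standard convention is Q = -10·log10(P_error), so a score of 100 corresponds to an error probability of 1e-10. A one-line sketch of the conversion:

```python
def phred_to_error_prob(q: float) -> float:
    """Standard Phred convention: Q = -10 * log10(P_error)."""
    return 10 ** (-q / 10)
```

Under this scale, the "maximal quality scores of 100" reported for the prephase calls correspond to a vanishingly small per-call error probability.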
2) Estimation of mutation rate and generation time: the estimation of these important parameters is based on the assumption that they should maximize the overlap between the distribution of estimated migration rate and the number of enslaved people crossing the Atlantic, but how reasonable is this assumption, and how much would the violation of this assumption affect the main result? Particularly, in the MSMC-IM paper (Wang et al. 2020, Fig 2A), even with a simulated clean split scenario, the estimated migration rate would have a wide distribution with a lot of uncertainty on both sides, so I believe that the exact meaning and limitations of such estimated migration rate over time should be clarified. This discussion would also be very helpful to readers who are thinking about using similar methods in their studies. Furthermore, the authors have taken 15 generations per year as their chosen generation time and based their mutation rate estimates on this assumption, but how much will the violation of this assumption affect the result?
This is a great point. We have expanded our discussion of how this assumption affects our conclusions (see Discussion page 9, first paragraph): “Furthermore, we chose a scaling factor that maximized overlap between the peak of estimated Ae. aegypti migration and the peak of the Atlantic Slave Trade (Fig. 2B). If we instead consider alternative scenarios where peak migration occurred at the very beginning of the slave trade era, around 1500, then our inferred mutation rate would be lower (about 2.4e-9, assuming 15 generations per year), pushing back the split of human-specialist lineages to about 10,000 years before present. This scenario seems less plausible, in part because our isolation-with-migration analyses suggest a gradual onset of migration between continents rather than a single, early-pulse model. It would also make it harder to explain the timing of the bottleneck we see in invasive populations; the first signs of this bottleneck occur at the beginning of the slave trade (~500 years ago) with our current calibration (Fig. S1A), but would be pushed to a pre-trade date in this alternative scenario. We can also consider a scenario in which peak Ae. aegypti migration occurred more recently, perhaps around 1850, corresponding to increased global shipping traffic outside the slave trade alone. In this case, our inferred mutation rate would be higher (or generation time lower), and the split of human-specialist lineages would be placed at about 3,000 years ago. Overall, the best match between the existing literature and our data corresponds to our main estimates, but alternative scenarios could gain support if future research finds evidence for a different time course of invasion than is suggested by the epidemiological literature.”
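The inverse scaling invoked in this response can be made explicit: MSMC-style times come out in units of expected mutations per site, so calendar time is scaled_time / (mutation rate) / (generations per year), and a lower assumed rate pushes the same event further back. The sketch below is our reading of that unit convention, not code from the paper; the numbers are those quoted in the response:

```python
def scaled_to_years(scaled_time: float, mu: float, gens_per_year: float) -> float:
    """Convert a mutation-scaled time (expected mutations per site) to
    calendar years: dividing by mu gives generations, then dividing by
    generations/year gives years."""
    return scaled_time / mu / gens_per_year

# A split placed at ~5,000 years under mu = 4.85e-9 and 15 generations/year
scaled = 5000 * 15 * 4.85e-9
# Re-dating the same scaled time under the lower rate of 2.4e-9
alternative = scaled_to_years(scaled, 2.4e-9, 15)
```

Roughly halving the assumed mutation rate roughly doubles the date, recovering the ~10,000-year figure discussed above.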
We have slightly expanded our description of calibration in Results, page 5, last paragraph: “The fact that we see good overlap between the two distributions (yellow–white color) across a wide range of reasonable mutation rates and generation times for Ae. aegypti is consistent with our understanding of the species’ recent history and supports our approach. For example, if we take the common literature value of 15 generations per year (0.067 years per generation) (17, 20), the de novo mutation rate that maximizes correspondence between the two datasets is 4.85x10-9 (black dot in Figure 2A, used in Figure 2B), which is on the order of values documented in other insects. We chose to carry forward this calibrated scaling factor (corresponding to any combination of mutation rate and generation time found along the line in Figure 2A) into subsequent analyses.”
We have also expanded on the uncertainty of our analyses (see Discussion page 8, last paragraph): “First, the temporal resolution of our inferences is relatively low, and both previously published simulations (39) and our own bootstrap replicates (Figure 2B–D, grey lines) suggest relatively wide bounds for the precise timing of events.”
3) The effect of selection: all analyses in this paper assume that no selection is at play, and the authors have excluded loci previously found to be under selection from these analyses, but how effective is this? In the ancestry tract length analysis, in particular, the authors have found that the human-specialist ancestry tends to concentrate in key genomic regions and suggested that selection could explain this, but doesn't this mean that excluding known loci under selection was insufficient? If selection has indeed played an important role at a genome-wide level, how would it affect the main results (qualitatively)?
We have clarified that we excluded those loci from our timing estimates for both MSMC and ancestry tract analyses, but then re-ran the ancestry tract analysis with all regions included to visualize and assess how tracts were distributed along chromosomes. See Methods, page 12, paragraph 2: “Since selection associated with adaptation to urban habitats could shape lengths of admixture tracts, we masked regions previously identified as under selection between human-specialists and generalists when estimating admixture timing—namely, the outlier regions in (2). However, we used an unmasked analysis to determine and visualize the genome-wide distribution of ancestries (Fig. 3).”
We have also added additional discussion of the expected effects of selection on our analyses (see Discussion, page 9, last paragraph): “Positive selection during adaptive introgression can increase tract lengths and make admixture appear to be more recent than it actually is. For this reason, we masked regions of the genome thought to underlie adaptation to human habitats before running our analysis. Nevertheless, if selection has acted outside these regions, admixture may be somewhat older than we estimate.”
Rather than words comes the thought of high windows: The sun-comprehending glass, And beyond it, the deep blue air, that shows Nothing, and is nowhere, and is endless.
Poor Larkin, his pessimistic outlook on the universe doesn't take him much farther than the sun's comprehension of glass, which he sees as both nothing and perpetually nowhere.
About hell and that, or having to hide What you think of the priest. He And his lot will all go down the long slide Like free bloody birds. And immediately
Those individuals whose imaginations are enthralled by emotions become nothing more than flabbergasted bananas. The evidence is that they become entranced even when they have no proof to support the things they believe in. This demonstrates that Larkin has animosity against Christians since, in his view, the cross represents nothing more than a pointless symbol.
To happiness, endlessly. I wonder if Anyone looked at me, forty years back, And thought, That’ll be the life; No God any more, or sweating in the dark
The nihilistic perspective on the world that Larkin has is enough to make me throw up into his mouth. When Larkin says "or sweating in the dark," I find it quite prophetic of him since that is precisely what he is going to be doing for the rest of his precious everlasting life. I don't find that amusing.
Everyone old has dreamed of all their lives— Bonds and gestures pushed to one side Like an outdated combine harvester, And everyone young going down the long slide
Larkin believes that dreams are what keep the creative imagination prisoner, or in bondage; he adds the gestures of people who wanted to bring damage or offer compassion. This vision depicts the harvest season, with field workers cutting down the crop.
When I see a couple of kids And guess he’s fucking her and she’s Taking pills or wearing a diaphragm, I know this is paradise
Larkin has a very archaic view of the world. His paradise has become a couple of kids fucking, taking pills and birth control. What a loser.
High Windows
High Windows is written in 5 quatrains. Lines 6 and 8 begin a watered-down rhyme scheme with the words side and slide. Lines 10 and 12 rhyme with the words back and dark. Lines 13 and 15 rhyme with the words hide and slide. In lines 17 and 19, the words windows and shows both have the "ows." Lines 18 and 20 rhyme with the words glass and endless.
Turning the Quicker search feature into your own personal Windows launcher
But the one Windows ships with is good enough for me
Presently we leave these yards and houses behind, we pass a factory with boarded-up windows, a lumberyard whose high wooden gates are locked for the night. Then the town falls away in a defeated jumble of sheds and small junkyards, the sidewalk gives up and we are walking on a sandy path with burdocks, plantains, humble nameless weeds all around. We enter a vacant lot, a kind of park really, for it is kept clear of junk and there is one bench with a slat missing on the back, a place to sit and look at the water. Which is generally grey in the evening, under a lightly overcast sky, no sunsets, the horizon dim. A very quiet, washing noise on the stones of the beach. Further along, towards the main part of town, there is a stretch of sand, a water slide, floats bobbing around the safe swimming area, a life guard’s rickety throne. Also a long dark green building, like a roofed verandah, called the Pavilion, full of farmers and their wives, in stiff good clothes, on Sundays. That is the part of the town we used to know when we lived at Dungannon and came here three or four times a summer, to the Lake. That, and the docks where we would go and look at the grain boats, ancient, rusty, wallowing, making us wonder how they got past the breakwater let alone to Fort William.
The narrator describes the lake area as a "defeated jumble of sheds and small junkyards, the sidewalk gives up and we are walking on a sandy path with burdocks, plantains, humble nameless weeds all around." It sounds like an almost deserted area, as he tells us they walk onto a vacant lot. Although his description is vivid, Lake Huron sounds less desirable when he describes "The Pavilion"... "full of farmers in stiff good clothes on Sundays."
Then he made the journey to Enugu and found another miracle waiting for him. It was unbelievable. He rubbed his eyes and looked again and it was still standing there before him. But, needless to say, even that monumental blessing must be accounted also totally inferior to the five heads in the family. This newest miracle was his little house in Ogui Overside. Indeed nothing puzzles God! Only two houses away a huge concrete edifice some wealthy contractor had put up just before the war was a mountain of rubble. And here was Jonathan’s little zinc house of no regrets built with mud blocks quite intact! Of course the doors and windows were missing and five sheets off the roof.
Blessed once again, Jonathan is given back his home nearly all intact after the war. For survivors like the Iwegbu family, the small things like the bike and the house begin to add up, and it is this humble house they find still standing that seems miraculous.
Author Response
Reviewer #1 (Public Review):
- The statistical procedures used are not completely described and may not be appropriate.
We revised the text in Methods and Results sections to give more details about the methods used.
- As only two levels of delay were tested, it is not possible to directly test whether the subjective discounting function is hyperbolic or exponential and hence whether the delay is encoded subjectively or objectively.
We agree with the reviewer. A higher number of task parameters may offer a better resolution to evaluate the discounting functions. Fortunately, this does not affect our main results.
- The task has several variable interval lengths (hold in: 1.2-2.8 s, short delay: 1.8-2.3 s, long delay: 3.5-4s) that frustrate interpretation. The distribution of these delays is not described, for example as it reads it seems possible that some long delay rewards are delivered with shorter latency between cue and reward than some short delay rewards (1.2 + 3.5 = 4.7s vs. 2.8+2.3 = 5.1 s).
We revised the text to address that ambiguity. In the new version of the manuscript, we describe short versus long delays considering the total delay intervals between instruction cue onset and reward delivery [short delay (3.5-5.6s) and long delay (5.2-7.3s)]. Within each delay category, individual delays were distributed in a gaussian fashion such that the two delay ranges overlapped for 9% of trials. These details are now described in the revised Methods section (pg. 22).
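The overlapping delay distributions described in this response can be sketched numerically. The means and SDs below are illustrative assumptions (midpoint of each range, SD set to a quarter of the range), not the study's actual parameters:

```python
import random

def sample_delay(lo: float, hi: float, rng: random.Random) -> float:
    """Draw a delay from a Gaussian centred on the range midpoint
    (assumed SD = a quarter of the range), clipped to [lo, hi]."""
    mid, sd = (lo + hi) / 2, (hi - lo) / 4
    return min(max(rng.gauss(mid, sd), lo), hi)

rng = random.Random(0)
short = [sample_delay(3.5, 5.6, rng) for _ in range(10_000)]
long_ = [sample_delay(5.2, 7.3, rng) for _ in range(10_000)]
# Fraction of all trials landing in the 5.2-5.6 s window where the two ranges overlap
overlap = sum(5.2 <= d <= 5.6 for d in short + long_) / 20_000
```

Under these illustrative parameters, roughly a tenth of draws fall in the overlap window, in the same ballpark as the 9% of trials reported in the revised Methods.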
- The authors have not considered that if the delay value is encoded, then the value, both objectively and subjectively, may be changing as the delay elapses. The variation of these task intervals may have an effect on the value of the delay.
In the present study, we report a dynamic integration between the desirability of the expected reward and the imposed delay to reward delivery across the waiting period. Our results (e.g. see Fig. 6) do not fit with simple linear (or logarithmic) effects corresponding to continuous regular changes as the delay elapses. We found different types of interactions (Discounting± and Compounding±) at different periods of the hold period and in different single units. We did not find a way to model all these types of interactions with this type of approach.
Reviewer #2 (Public Review):
- Plots of "rejection rate" (trials where the monkeys failed to wait until the rewards) as a function of delay and reward size seem to indicate that the monkeys understood the visual cue. The rejection rates were very low (less than 4% for almost all conditions) which indicates that the monkeys did not have a hard time inhibiting their behavior. It also meant that the authors could not compare trials where the monkeys successfully waited with trials where they failed to wait. This missing comparison weakens the link between the neurophysiological observations and the conclusions the authors made about the signals they observed.
Here, our main goal was to describe the dynamic STN signals engaged during the waiting period without studying action-related activities. In the discussion (pg. 20), we clearly wrote ‘Further research is needed to determine whether the neural signals identified here causally drive animals’ behavior or rather just participate to reflect or evaluate the current situation.’ Consequently, our conclusions were already tempered by that point.
In addition, we address the same limitation by writing (pg. 20): “An important avenue for future research will be to determine how STN signals, such as those described here, change when animals run out of patience and finally decide to stop waiting. To do this, however, smaller reward sizes and longer delays might be used to promote more escape behaviors during the delay interval.”
- The authors examined the STN activity aligned to the start of the delay and also aligned to the reward. Most of the "delay encoding" in the STN activity was observed near the end of the waiting period. The trouble with the analysis is that a neuron that responded with exactly the same response on short and long trials could appear to be modulated by delay. This is easiest to see with a diagram, but it should be easy to imagine a neural response that quickly rose at the time of instruction and then decayed slowly over the course of 2 seconds. For long trials, the neuron's activity would have returned to baseline, but for short trials, the activity would still be above baseline. As such, it is not clear how much the STN neurons were truly modulated by delay.
We agree with the reviewers. Our original analyses using two time windows had the potential to introduce biases in the detection of neuronal activities modulated by the delay. To overcome this issue, we modified the time frame of all of our analyses (neuronal activity, eye position, EMG). Now, the revised version of the manuscript only reports activities across a single time window aligned to the time of instruction cue delivery (i.e., -1 to 3.5s relative to instruction cue onset). This time frame corresponds to the minimum possible interval between instruction cues and reward delivery. We have revised all of the figures and re-calculated all of the statistics using that one analysis window. Despite these major modifications, our key findings were not changed substantially. We found the same pattern in STN activities, with a strong encoding of reward (48% of neurons) preceding a late encoding of delay (39% of neurons). We also updated the text in the Methods and Results sections to reflect the revised analyses.
- Another concern is the presence of eye movement variables in the regressions that determine whether a neuron is reward or delay encoding. If the task variables modulated eye movements (which would not be surprising) and if the STN activity also modulated eye movements, then, even if task variables did not directly modulate STN activity, the regression would indicate that it did. This is commonly known as "collider bias". This is, unfortunately, a common flaw in neuroscience papers.
Because the presence of eye variables did not influence how neurons were selected by the GLM, we do not think it likely that our analysis was susceptible to “collider bias”. Nonetheless, to control for that possibility directly, we have now repeated the GLM analyses with eye movement variables excluded. Results are shown in a new figure (Fig.4 – supplementary 1). Exclusion of eye parameters produced results that are very similar to those from the GLM that included eye parameters (differences <3 degrees). We have added text to the manuscript describing this added control analysis.
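The "collider bias" the reviewer describes can be illustrated with a minimal simulation (hypothetical variables, not the study's data): if eye movements are driven by both the task variable and neural activity, conditioning on them in a regression induces a spurious task coefficient even when activity is truly independent of the task.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)              # task variable (e.g. delay cue)
y = rng.normal(size=n)              # neural activity, truly independent of x
e = x + y + rng.normal(0, 0.5, n)   # eye movement driven by both: the collider

# Regress y on x alone: the x coefficient is ~0, as it should be
b_alone = np.linalg.lstsq(np.c_[np.ones(n), x], y, rcond=None)[0][1]

# Regress y on x AND the collider e: x now picks up a spurious negative weight
b_collide = np.linalg.lstsq(np.c_[np.ones(n), x, e], y, rcond=None)[0][1]
print(b_alone, b_collide)
```

Excluding the collider from the model, as the authors do in their control analysis, is exactly the right remedy when the concern is coefficient bias rather than predictive power.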
This dissolution of the division between the sacred (stained glass) and the profane (graffiti) invites us to reflect on what texts are sanctioned, what urban literacies allowed, what voices heard, which windows broken, which souls saved (and which buffed).
but it also relies on the "deserving" and "undeserving" distinction of graffiti. Some graffiti artists are making art that, if it were not graffiti, would be acceptable to the wider public. But not every graffiti artist is. So is only "good" graffiti allowed?
‘The stained glass windows that we have – some of them are very historic, over a hundred years old – and that medium of stained glass spoke very clearly to the people of the past, and this,’ he continues, gesturing to the wall of graffiti behind him (see Figure 6.6), ‘speaks to people of the present and the future’.
I do love that this shows how humans are the same: we created wall art then, and we create wall art now.
Reviewer #2 (Public Review):
The present study aimed to demonstrate the utility of brain signal decoding for the differentiation of asynchronous motor signs in Parkinson's disease (PD). To this end, thirty-one PD patients undergoing deep brain stimulation electrode implantation were recruited to participate in an intraoperative motor task. Task performance was compared to extra-operative experiments in healthy subjects. Neural activity and movement traces were segmented into 7-second windows, and tremor and slowness measures were attributed to each window. To integrate the two symptom domains, an additional decoding state termed effective motor control was introduced, which represented the absence of symptoms. Support vector machine regression, trained on individual recording sessions within subjects, was used as the model of choice. All decoding targets from each neurophysiological modality reached significant prediction performances. This represents an important milestone in the current state of research towards machine learning-based intelligent adaptive deep brain stimulation.
Strengths
1. The present analysis is among the first to demonstrate the potential utility of brain signal decoding for the differentiation of asynchronous motor symptoms in Parkinson's disease. In the future, such approaches may be adopted in clinical brain-computer interfaces that can adapt stimulation in real time to concurrent therapeutic demand.
2. The effort from the research team and patients to acquire this important dataset is commendable. The time pressure in the operation room combined with the current trend of asleep surgery for deep brain stimulation makes such data very rare.
3. No relevant difference in decoding performance was found for subthalamic micro vs. macroelectrode recordings. This has practical significance because current sensing-enabled deep brain stimulation implants only allow for macro-recordings, which according to this study has no severe disadvantage over microelectrode recordings for movement decoding. Note that this question could only be answered in the intraoperative setting, which on the other hand can have disadvantages further described below.
4. Beyond the subthalamic nucleus, the authors corroborate the superiority of electrocorticography over subthalamic activity for movement and symptom decoding in Parkinson's disease. This provides further evidence that additional sensing electrodes may complement the subthalamic signals for adaptive deep brain stimulation.
5. Finally, the idea of decoding the presence of an effective motor state is creative and may inspire future developments in adaptive stimulation control algorithms.
Weaknesses
(Note that I take more words for weaknesses, not because they outweigh the strengths, but because I want to justify my criticism in more detail)
1. One inherent limitation of this study is the intraoperative setting, which demands the patients' skull be fixed to the stereotactic frame. This setting is not naturalistic per se and likely comes with additional perturbations in the brain states that are recorded. Thus, the generalization to real-world scenarios is limited. Given the unique opportunity to record invasive brain signals in humans, this limitation has to be accepted and should be taken into account for the interpretation of the results. As mentioned in the strengths, this is currently the only setting that allows for a comparison of micro- and macroelectrode recordings for brain signal decoding.
2. Similarly, the medication state is defined by the intraoperative scenario, as deep brain stimulation implantations are performed after the withdrawal of dopaminergic medication in the so-called dopaminergic OFF state. In this state, PD symptoms are aggravated, which is used clinically to provide a more reliable assessment of deep brain stimulation-induced symptom alleviation. This may also lead to an overestimation of decoding performances as the difference between the absence and presence of PD motor signs in the dopaminergic medication ON state during activities of daily living could be more nuanced.
3. The task design is very interesting as it allows for a continuous definition of symptom severity and motor performance. The comparison to healthy subjects demonstrated clearly higher tremor scores in PD but no significant differences in movement velocity (depicted as trending but p>0.2). This is somewhat unexpected as slowness of movement, also called bradykinesia, is a defining symptom of Parkinson's disease (PD). By definition, this symptom is present in all PD patients, also indicated in the clinical scores shown in the present study. Action tremor, i.e. the presence of tremulous muscle activity during motor performance, is comparatively rare. To support the clinical relevance of the movement tremor observed during the task, the authors show a correlation with the "resting tremor" score from the clinical assessment. It is unclear to me why resting, instead of action tremor scores are shown, as both are part of the clinical assessment (Unified Parkinson's disease rating scale - UPDRS part III). Ultimately, even though resting tremor is significantly more common in Parkinson's disease, not all patients of the current cohort had resting tremor (as indicated in the clinical score correlation). Thus, it remains somewhat puzzling how precisely the 3-8 Hz activity actually captures tremor vs. motor noise or inaccuracy. A more fine-grained analysis comparing patients with clinically diagnosed action tremor (as defined by preoperative UPDRS assessment) and those without tremor could have helped to support the clinical claims on symptom-specific decoding. On the other hand, the lack of a significant difference in the slowness of movement in the patient cohort relative to healthy controls questions the ability of the task to capture this symptom. Here, I am not sure whether the normalization procedure may have an influence on the comparability.
Finally, movement velocity is an easy target that is distributed across a spectrum, so despite the lack of a significant difference in the healthy cohort, I am relatively confident that the decoding of movement slowness in the present cohort is clinically meaningful.
4. Overall, the pathophysiological framework is well placed in the current state of literature, while almost the entire field of brain signal decoding for adaptive deep brain stimulation was neglected. Successful decoding to address Parkinson's and essential tremor (another disorder with more common action tremor) was achieved by multiple groups in impactful studies representing more naturalistic extraoperative or fully embedded settings (Hirschmann et al., 2017, He et al., 2021, Opri et al., 2021). Additionally, other symptoms, like gait disturbances have been the target of machine learning analyses more recently (Louie et al., 2022 and Thenaisie et al., 2022). Here, the manuscript appears to avoid a discussion of the present endeavour in comparison to the current state of the field. One of our own studies has provided the first demonstration of the superiority of electrocorticography over subthalamic LFP for movement decoding, which I am happy to see replicated for the first time in the present manuscript. Importantly, the referenced study showed modality-dependent model performances, with gradient-boosted decision trees performing significantly better than linear models for electrocorticography, while Wiener filters have been repeatedly shown to perform well for subthalamic local field potentials (e.g. see Shah et al., 2018 IEEE Trans Neural Syst Rehabil Eng). The present study does not compare different machine learning architectures. Thus, decoding performances could potentially be further improved with more refined computational approaches. A more thorough overview of the literature from the many laboratories that are invested in this research across the globe would have improved the interpretation with respect to the broader impacts of the present manuscript.
5. The authors also present analyses of the spatial localization of relative decoding performances. They demonstrate higher tremor decoding performance in the dorsolateral subthalamic nucleus and higher decoding performance for the slowness of movement in the more central and ventral subthalamic regions. The authors interpret this as potential evidence to support clinical decision-making for optimized stimulation control of these symptoms at the respective locations. This is overly speculative and currently not backed by the data. First of all, the results only show the contrast of tremor vs. slowness of movement and not each individually. Thus, the spatial peak with each symptom domain could be very similar, e.g. in the dorsolateral STN, but a reversal of the difference only occurs at relatively low performances, e.g. in the ventral STN. Thus, showing both spatial distributions individually could be more informative. However, the claim that this could also be used to adjust stimulation location to alleviate the respective target symptoms is by no means backed by the data and remains an interesting speculation.
6. Finally, as in many brain signal decoding studies, the presented decoding performances are relatively low. The authors decided to present linear correlation metrics as Pearson's r values. These values are by definition higher than the commonly chosen Coefficient of determination or R² that provides a more interpretable performance metric. The amount of variance in the symptom scores that could be explained by the models ranged between 10% and 30% at a temporal resolution of 7 seconds. Moreover, the validity of the linear score is not entirely clear as Pearson's r can be heavily biased by non-normal distributions which were not assessed or at least not reported for the performance evaluation. These considerations do not severely limit the validity of the results themselves, as the authors have convincingly shown that significant decoding performances are possible and other studies in this field range in similar performance ranks. However, this point should remind us that a short-term clinical adoption of such methods is not yet in sight and further research is warranted. Before machine learning-based clinical computer interfaces can reach the clinical routine, the field has to work on more refined methods. In my opinion, the field will have to provide robust decoding performances with R² > 0.8 without patient-specific training to get into the realm of widespread clinical adoption.
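The reviewer's point that Pearson's r overstates explained variance is easy to demonstrate with synthetic data (a hypothetical decoder with a systematic scale bias; none of these numbers come from the study). A prediction that is well correlated with the target but badly scaled yields a high r yet a much lower R²:

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(0, 1, 1000)                    # "true" symptom scores
y_hat = 0.3 * y + rng.normal(0, 0.2, 1000)    # correlated but badly scaled predictions

# Pearson correlation between target and prediction
r = np.corrcoef(y, y_hat)[0, 1]

# Coefficient of determination: 1 - residual variance / total variance
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"Pearson r = {r:.2f}, R^2 = {r2:.2f}")
```

Here r is above 0.8 while R² sits below 0.6, because R² penalizes the scale mismatch that the correlation coefficient ignores; this is why R² is the more conservative and interpretable metric for decoding performance.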
Reviewer #1 (Public Review):
Temporal patterning allows a neural stem cell to generate different neural identities through the course of development. While this relationship has been demonstrated in many instances of stem cells and/or neurons, it is unclear how birth order translates to target specification. In this manuscript, the authors use live imaging and new tools generated from single-cell RNA sequencing data to address this issue.
They find that neurons born within a given time window (at the resolution of early > middle > late) innervate together, and distinctly from those born in different temporal windows, though the specifics of the innervations differ between neural stem cell lineages. They also find that neurons achieve this by extending their dendrites in exploratory directions and selectively stabilising the ones in the appropriate direction. This process likely occurs at sub-second timescales. Finally, they also demonstrate that embryonic-born (larval-specific) neurons that remodel to integrate into adult-specific circuitry simultaneously perform pruning and dendrite extension to integrate into the circuitry at the appropriate time.
This is a valuable description of how developmental programmes imparted to neurons at the time of their birth might translate to their targeting and connectivity. It lays down a framework for understanding defects in these processes.
after insurrectionists smashed several ground floor windows of the Capitol, the only ones out of 658 they somehow knew were not reinforced, that allowed rioters to pour inside;
It's crazy how easy this all seemed. Like how others mentioned, it does seem like it was premeditated, inside help. If it was always this easy, people would have struck before. But why now?
Connection: I agree with my classmate who mentioned this is more than just a citizen attack. I believe the government, the staff from the White House I would say, had something to do with this and were, in fact, involved, because I don't think there is any way citizens would know exactly which window, or how they were just not reinforced. The White House just needs maximum MAXIMUM security. It shouldn't be this easy for citizens to break in, especially during elections.
This fact really surprised me because when I was watching this on the news I was in shock when I heard they broke through a window to get into the Capitol building. As previously stated, there was a website called TheDonald.win and it probably included the information on which window to go through. I'm assuming this information was from someone with power in the government, so it was definitely more than just citizen involvement.
FxFiles by Functionland
There doesn't seem to be a web version; only mobile and a Windows app. Personally, I use Linux or the web on the desktop.
Ten Research-Based Steps for Effective Group Work

Emacs on Windows, by John D. Cook
I borrowed a friend’s Tesla 3 yesterday. About 5 minutes into the ride, the windshield started fogging up. I couldn’t find the defroster on the large control screen Teslas are so famous for. In desperation, I tapped the CAR icon but that took me to the settings screen which ended up being a dead end. Frustrated, I opened the side windows to clear the windshield. While pushing every button on the steering wheel, I accidentally discovered the voice control and was finally able to get the defroster working.
This shows that even though Tesla is coming out with all of these upgrades and technological advancements, not all people are familiar with everything that happens, and it shows how people have to catch up to today's world.
Charting the circadian cycle makes it possible to find the best windows of time for brainwork. The circadian graph below was generated with SleepChart using a log of free-running sleep. In the graph, two yellow bands indicate the optimum time for brainwork. The exact timing may differ for each individual; however, the blocks can be easily determined:
Morning block: soon after waking, after morning coffee, or after breakfast. The best brain slot may last 2-4 hours. If you are sleepy in the morning see: Natural creativity cycle.
Evening block: soon after siesta. The second-best brain slot may last 2-4 hours as well. If you do not nap, you may not fully benefit from that block (see: Power nap). If you are sleepy during this slot see: Best time for napping.
When is the best time for coffee? Within 20 minutes of waking in the morning or after the siesta. When is the best time for brainwork? The 5 hours after waking and the 4 hours after the siesta. When is the best time for a nap? About 7 hours after natural waking.
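The timing rules summarized above can be turned into a trivial scheduler. This is only a sketch: the 3-hour default block length is an assumption within the stated 2-4 hour range, and the function name is hypothetical.

```python
from datetime import datetime, timedelta

def brainwork_windows(wake: datetime, nap_end: datetime,
                      block_hours: float = 3.0):
    """Compute the two 'brainwork blocks' described above: one starting
    soon after waking, one soon after the siesta. The 3-hour default
    block length is an assumption (the text says 2-4 h)."""
    morning = (wake, wake + timedelta(hours=block_hours))
    evening = (nap_end, nap_end + timedelta(hours=block_hours))
    return morning, evening

# Example: wake at 07:00, siesta ends at 14:30
m, e = brainwork_windows(datetime(2024, 1, 1, 7, 0),
                         datetime(2024, 1, 1, 14, 30))
```

Given a free-running sleep log, the same two anchor times (waking and end of siesta) are all that is needed to place both blocks.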
Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.
Learn more at Review Commons
Revision Plan
Manuscript number: RC-2022-01765
Corresponding author(s): Dr. Huiqing Zhou (Radboud University)
1. General Statements [optional]
We would like to thank the editor and reviewers for their constructive comments and suggestions for improving the manuscript. Here we address the comments point-by-point using the template of the revision plan.
2. Description of the planned revisions
Reviewer #1:
- While the study is complete and well described, a strong conclusion, including validation of the role of some TFs such as FOSL2 through knockout experiments in model organisms or cell culture, will elevate the paper more (optional). *
To address this point, we will perform siRNA knockdown experiments of TFs identified in our study, including FOSL2, in primary LSCs, and examine the transcriptional consequences of knocking down these TFs by RNA-seq or RT-qPCR analyses.
Reviewer #2:
- The findings provide an overview of transcriptional regulators and targets in two essential tissues. This is a valuable tool for future discoveries regarding the processes governing cell differentiation and those involved in the disease mechanism of the cornea. Although the presented predictions are interesting, what is missing is an examination of the functional significance of the findings. *
Indeed, we fully agree with the reviewer that additional functional examinations are important and relevant and would strengthen the manuscript. We propose to do the following functional analyses to further demonstrate the importance of the key TFs.
Immunostaining of the key TFs in the human cornea and in LSCs and KCs.
- As described above, we will perform siRNA experiments for key TFs identified in our study, followed by RNA-seq or RT-qPCR analysis, to assess the transcriptional program controlled by these key TFs.
The gene regulatory network controlled by these tested TFs will be analysed, to examine the interplay of these TFs in transcriptional regulation and in cell fate determination.
Reviewer #2:
- Also, the findings indicate an interaction between FOXL2 and other TFs is important for maintaining the corneal epithelium. These interesting predictions indicate an important role for FoxL2 in corneal function. It would be important to verify these predictions by experimental studies, for example, by presenting the association of FOXL2 with the predicted co-factors and presenting data on the effect of the identified mutation on FOXL2 transcriptional activity. *
We assume that Reviewer #2 refers to FOSL2 instead of FOXL2. We agree with this reviewer’s suggestion to functionally address the importance of FOSL2 in the cornea. To answer this, we plan to perform FOSL2 staining and FOSL2 siRNA knockdown in LSCs, followed by RNA-seq, as described above. This will demonstrate the importance of FOSL2 in LSCs and in the cornea, and will identify the affected downstream gene networks.
Regarding the clinical effect of the specific FOSL2 variant reported in our study, we agree that functional validation would strengthen our work even more. We believe that the main message of the study is the use of integrative omics analyses to uncover new transcription factors involved in corneal and limbal fates, and to highlight new candidate genes in corneal disease. Therefore, we feel that the disease mechanism behind the specific FOSL2 variant, albeit interesting, is beyond the scope of this study. Nonetheless, we reinforced the pathogenicity of the variant with various in silico prediction platforms (supplementary table 9). Interestingly, a recent study reported that FOSL2 truncating mutations are involved in a new syndrome with ectodermal defects and cataract. This is in line with our findings that FOSL2 is an important shared TF in both LSCs and KCs, and strengthens the predicted role of FOSL2 in the epithelium of the eye and associated diseases. We have included additional discussion of this study in the Discussion (lines 662-668).
3. Description of the revisions that have already been incorporated in the transferred manuscript
Reviewer #2:
In Figure 1, the authors compare the transcriptome and epigenome (ATAC-seq and histone modifications) of basal KCs from skin donors and cultured LSCs established from limbal biopsies. The authors should clarify the source of the cells in the published studies - specifically, why more data were needed and if these were comparable to their datasets.
We have included the cell sources and culture conditions from the published studies and added additional columns in supplementary tables 1, 2 and 3. Briefly, publicly available LSC samples were extracted from post-mortem corneas and cultured in DMEM/F12 or KSFM.
Regarding the questions on the necessity of incorporating more data, our reasoning was two-fold. First of all, we have taken an integrated approach to perform our analysis, using both our own and publicly available datasets. We see this as a strength, as the most important differences between cell types that determine cell fates should be consistent across cells generated from different donors and labs. Second, we chose to generate more data in our own lab in order to ensure comparisons without the influence of technical differences between publicly available datasets. We include text about this approach in the Discussion (lines 578-580). Furthermore, to show that the datasets we used are consistent and can be integrated, we have performed a PCA correlation analysis for the RNA-seq analysis (supplementary figures 1A & 1B, lines 834-837), and added a Spearman correlation analysis for the ATAC-seq datasets (supplementary figure 5F, lines 818-820). Both indicated clear biological signal similarities between cell types across different labs and techniques.
Reviewer #2:
3. Next, they compared the transcriptomes of LSCs and KCs to the transcription profile of LSCs from two aniridia patients and a control. They need to specify the stage of the donors' disease and provide details on the control samples.
Both aniridia samples were from patients at stage 4 on the Lagali scale (Lagali, N. et al. (2020) ‘Early phenotypic features of aniridia-associated keratopathy and association with PAX6 coding mutations’, The Ocular Surface, 18(1), pp. 130–140). We have included this information in Material and Methods (lines 738-739). For healthy control cells, no information regarding the stage and gender is available, as they are from anonymous individuals. We added more information on the aniridia and control samples regarding the culture conditions and passage numbers (lines 747-750).
Reviewer #2:
- In addition, and as indicated above, when combining published datasets, one should clarify whether the methods of collecting/growing the cells and the disease stage are comparable. This is important with samples from aniridia as it is unclear if the patient LSCs survive the isolation or if other cells take over.*
Healthy LSCs used in the direct comparison with aniridia LSCs were grown using the same expansion and culture conditions. Furthermore, the methods of culture, extraction and expansion between the earlier published aniridia cell data and our data are exactly the same (lines 735-750). As described above, we have included the cell sources from the aniridia samples, and added additional columns in supplementary table 1.
Reviewer #2:
- It would be valuable to the community if the presented data were also provided online in a web tool so that CRE activity or gene expression could be easily examined.*
We have expanded the UCSC track hub to highlight the identified variable CRE elements. This will enable a searchable tool for differential CREs close to genes of interest between KCs and LSCs. We have furthermore added a sentence to explicitly mention the presence of this track hub in the result section (lines 248-249).
4. Description of analyses that authors prefer not to carry out
Reviewer #1:
- By identifying differential cis regulatory elements in two cells, they identify TFs that are associated with overexpression or repression of genes of interest. However, this approach of relying solely upon nearest genes of CREs is very cursory and the authors could have used methods such as Activity-by-Contact to establish CREs and their target genes and then assessed their correlation with expression (optional). *
Activity-by-Contact incorporation would be an exciting inclusion in the data analysis, and for the next step in GRN modelling. However, this is out of the scope of this manuscript. In addition, we would like to point out that we did not solely rely on the method of mapping CREs to the nearest genes. Instead, for H3K27ac and ATAC-seq signals, our analysis uses a weighted TSS distance method, within windows of up to 100kb, similar to the method used by ANANSE and other published gene regulatory network tools. For H3K4me3 and H3K27me3 marks, which correlate far better with the expression of the closest genes, we use a window of 2kb at the closest gene.
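The weighted TSS-distance idea described above can be sketched as follows. This is an illustrative approximation only: the exponential decay constant, the functional form, and the function name are assumptions, not ANANSE's exact weighting scheme.

```python
import numpy as np

def weight_cres_to_gene(tss_pos, cre_positions, cre_signals,
                        max_dist=100_000, decay=25_000):
    """Sum CRE signal (e.g. H3K27ac or ATAC-seq) near a TSS, down-weighted
    by genomic distance with an exponential decay and cut off beyond
    max_dist. Illustrative only: decay constant and functional form are
    assumptions, not the exact published weighting."""
    d = np.abs(np.asarray(cre_positions) - tss_pos)
    w = np.exp(-d / decay) * (d <= max_dist)
    return float(np.sum(w * np.asarray(cre_signals)))

# Toy example: a TSS at position 1,000,000 with three nearby CREs;
# the last CRE lies > 100 kb away and contributes nothing.
score = weight_cres_to_gene(1_000_000,
                            [995_000, 1_050_000, 1_200_000],
                            [10.0, 5.0, 8.0])
```

The key property is that a strong CRE 5 kb from the TSS dominates the gene's score, while an equally strong CRE 50 kb away contributes far less, which is how distance-weighted schemes avoid the all-or-nothing behaviour of nearest-gene assignment.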
Reviewer #2:
When ATAC-seq was combined with histone modification analyses, about a third of these regions showed different characteristics, inferring tissue-specific activities. These data may be valuable for identifying tissue-specific cis-regulatory elements (CREs) for the key TFs. This, however, remains to be examined experimentally.
It is not fully clear to us what ‘experimental examination’ this reviewer refers to, whether to test these tissue-specific CREs individually or globally. We agree that it is important to test tissue-specific CREs experimentally to examine their function, e.g., which genes they regulate, and which role they play in tissue-specificity. However, this is out of the scope of this manuscript.
Reviewer #1 (Evidence, reproducibility and clarity (Required)):
Multi-omics analyses identify transcription factor interplay in corneal epithelial fate determination and disease. The authors of this manuscript describe their work on identifying transcriptional regulators and their interplay in the cornea using limbal stem cells and in the epidermis using keratinocytes. This is a very well written and well described comprehensive manuscript. The authors performed various analyses which were in line with the logical workflow of the research question. The authors begin by first identifying differential gene expression signals for the two tissues along with enriched biological processes using GSEA and PROGENy. The strength of this manuscript also includes the usage of epigenetic data to determine the cell fate and its drivers. The authors study various epigenetic assays and correlate them with expression levels of TFs and genes to identify regulatory patterns that mark differences between LSCs and KCs. By identifying differential cis regulatory elements in the two cell types, they identify TFs that are associated with overexpression or repression of genes of interest. However, this approach of relying solely upon the nearest genes of CREs is very cursory and the authors could have used methods such as Activity-by-Contact to establish CREs and their target genes and then assessed their correlation with expression (optional). In logical progression, the authors use gene regulatory networks to compare LSCs and KCs with embryonic stem cells (ESCs) to identify the most influential TFs that differentiate them. Along with identifying key TFs, they also identify a TF regulatory hierarchy to find TFs that regulate other TFs in a context-specific manner. They identify "p63, FOSL2, EHF, TFAP2A, KLF5, RUNX1, CEBPD, and FOXC1 are among the shared epithelial TFs for both LSCs and KCs. PAX6, SMAD3, OTX1, ELF3, and PPARD are LSC specific TFs for the LSC fate, and HOXA9, IRX4, CEBPA, and GATA3 were identified as KC specific TFs."
And "p63, KLF4 and TFAP2A can potentially co-regulate PAX6 in LSCs." To compare in vitro findings with in vivo results, they also generate single-cell data and identify specific TFs that may play a pathobiological role in disease development and progression. While the study is complete and well described, a strong conclusion, including validation of the role of some TFs, such as FOSL2, through knockout experiments in model organisms or cell culture, would elevate the paper further (optional). This study merits publication in a high-quality journal (IF: 10-15).
Reviewer #1 (Significance (Required)):
The study is very significant, and not only in the context of corneal disease biology. An imbalance in the interplay of TFs is often envisaged as underlying disease development, but very few detailed analyses of this kind are undertaken. This study is performed very well, and the methods are described clearly. Appropriate statistical methods are used where required.
Reviewer #2 (Evidence, reproducibility and clarity (Required)):
The study by Smits et al. presents detailed multi-omics (transcripts, ATAC-seq, histone marks) analyses comparing human limbal stem cells (LSCs) to skin keratinocytes (KCs). The authors compared these two cell types because they have a shared origin in the epidermal progenitors and because LSC diseases occasionally accompany a transition to KC-like phenotypes. The authors analyzed the "omics" data using several bioinformatic analysis tools. Their analyses resulted in a detailed list of the critical transcription factors (TFs) and their gene regulatory networks shared between the two lineages and those unique to either LSCs or KCs.
The findings provide an overview of transcriptional regulators and targets in two essential tissues. This is a valuable tool for future discoveries regarding the processes governing cell differentiation and those involved in the disease mechanism of the cornea. Although the presented predictions are interesting, what is missing is an examination of the functional significance of the findings. Also, as detailed below, there is a need to clarify the source of the cells used for the different analyses.
Comments and suggestions: 1. In Figure 1, the authors compare the transcriptome and epigenome (ATAC-seq and histone modifications) of basal KCs from skin donors and cultured LSCs established from limbal biopsies. The authors should clarify the source of the cells in the published studies - specifically, why more data were needed and whether these were comparable to their datasets. 2. Next, they compared the transcriptomes of LSCs and KCs to the transcription profile of LSCs from two aniridia patients and a control. They need to specify the stage of the donors' disease and provide details on the control samples. In addition, and as indicated above, when combining published datasets, one should clarify whether the methods of collecting/growing the cells and the disease stage are comparable. This is important with samples from aniridia, as it is unclear whether the patient LSCs survive the isolation or whether other cells take over. The finding that LSC genes are reduced in aniridic LSCs may suggest that the cells resemble KCs, although specific KC genes are not elevated. 3. Figure 2: The authors characterized the regulatory regions in the two cell types based on ATAC-seq and histone marks. Based on ATAC-seq, 80% of the open regions were shared between the two lineages. When ATAC-seq was combined with histone modification analyses, about a third of these regions showed different characteristics, suggesting tissue-specific activities. These data may be valuable for identifying tissue-specific cis-regulatory elements (CREs) for the key TFs. This, however, remains to be examined experimentally. 4. It would be valuable to the community if the presented data were also provided online in a web tool so that CRE activity or gene expression could be easily examined. 5. Using motif predictions, the authors point to the TF families that likely control the differential CREs (Figure 3). 
Next, the authors constructed the gene regulatory network based on the ANANSE pipeline, which integrates CRE and TF motif predictions with the expression of TFs and their target genes. To gain further insight into the shared gene regulatory networks, they compared each to similar data from embryonic stem cells. Their analysis further suggests shared TFs regulating each other and some of the tissue-specific transcription factors. Differential gene expression of the TFs was partially validated by analyzing available single-cell data (Figure 4). 6. In the final section of the study, the authors aimed to identify TFs in LSCs that are relevant to corneal disease. They examined whether the LSC TFs are bound to genes associated with LSC deficiency and inherited corneal diseases. To accomplish this task, the authors incorporated single-cell data on corneal gene expression and available datasets on genetic analyses of families. Through this analysis, they identified a mutation in FOSL2 that may be causing corneal opacity in the carriers. The findings also indicate that an interaction between FOSL2 and other TFs is important for maintaining the corneal epithelium. These interesting predictions indicate an important role for FOSL2 in corneal function. It would be important to verify these predictions with experimental studies, for example by demonstrating the association of FOSL2 with the predicted co-factors and presenting data on the effect of the identified mutation on FOSL2 transcriptional activity.
Reviewer #2 (Significance (Required)):
The analysis provides an overview of transcriptional regulators and targets in two essential tissues. This is a valuable tool for future discoveries regarding the processes governing cell differentiation and those involved in the disease mechanisms of the cornea. The results predict a role for FOSL2 in corneal function.
Select your user profile and press F2 to rename. Enter a new name for your user profile (it must match the user name entered in the Registry Editor). Click away and then click Continue to save the changes.
This may not be possible. Windows does not allow renaming the active user's folder. Instead, restart Windows: you will be logged in with a temporary user profile and folder. Now rename the former user folder to the new name and change the user name again in regedit. With the next startup, everything should be set up properly. You will only be logged in correctly if the username defined in regedit matches a folder in the Users folder containing all the necessary files.
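For the regedit step above, the profile path lives under the ProfileList key. A minimal sketch from an elevated command prompt (the SID shown is truncated and illustrative; the new folder name is a placeholder):

```bat
:: List the profile entries to find the SID whose ProfileImagePath
:: points at the old user folder
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath

:: Point that profile at the renamed folder
:: (SID "S-1-5-21-...-1001" and "C:\Users\NewName" are illustrative)
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-...-1001" /v ProfileImagePath /t REG_EXPAND_SZ /d "C:\Users\NewName" /f
```

ProfileImagePath is stored as REG_EXPAND_SZ, so the `/t` flag should match; back up the key before editing.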
Together we seek the best outcome for all people who use the web across many devices.
The best possible outcome for everyone likely includes a world where MS open sourced (at least as much as they could have of) Trident/EdgeHTML—even if the plan still involved abandoning it.
Automatically disable Windows Firewall. If you select this option, Veeam Backup & Replication will disable Windows Firewall for the verified VM.
Add a note here that new AppGroups have "Disable firewall" turned on by default.
For the Microsoft Windows authentication mode, you can specify credentials for the account on the Credentials tab in the application group or SureBackup job settings. For the SQL Server authentication mode, you must pass credentials of the account as arguments to the script. You can do this using the command-line interface, the UI, or a .bat file. Important: we recommend using a .bat file to pass credentials and to execute the script. To pass credentials using a .bat file: 1. Create a .bat file. It must contain the path to the Microsoft SQL Checker script, arguments for the %log_path% and the %vm_ip%, a username, and a password. For example: cscript "C:\Program Files\Veeam\Backup and Replication\Backup\Veeam.Backup.SqlChecker.vbs" %1 %2 username password 2. In the application group or SureBackup job settings, select to use a custom script. 3. Specify the path to the .bat file. 4. Provide the %log_path% and the %vm_ip% in the Arguments field.
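The steps above can be sketched as a .bat file (the account name and password below are placeholders for the SQL Server credentials; the script path is the one given in the text):

```bat
@echo off
:: Veeam passes %log_path% and %vm_ip% as %1 and %2 when they are
:: listed in the Arguments field of the custom-script settings.
cscript "C:\Program Files\Veeam\Backup and Replication\Backup\Veeam.Backup.SqlChecker.vbs" %1 %2 sql_user sql_password
```

Point the custom-script setting at this .bat file and put %log_path% and %vm_ip% in the Arguments field; the credentials then stay in the file rather than in the UI.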
This page should be similar to https://helpcenter.veeam.com/archive/backup/110/vsphere/custom_verification_scripts.html. We MUST recommend NOT passing sensitive info via the UI in that way.
The men overwhelmed law and order. They pulled down road signs. They smashed windows of the congested streetcars. They toppled telephone booths and lit newspaper kiosks on fire. They heaved bricks from a nearby construction site through the Forum windows. When one young man was arrested and taken into a police car, the protestors began rocking the car, and the police officer feared they would flip it.
Reminiscent of George Floyd riots
Abstract: Recent technological
This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac088), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:
Reviewer name: Kamil S. Jaron
Assembling a genome using short reads quite often yields a mixed bag of scaffolds representing uncollapsed haplotypes, collapsed haplotypes (i.e. the desired haploid genome representation), and collapsed duplicates. While there is individual software for collapsing uncollapsed haplotypes (e.g. HaploMerger2 or Redundans), there is no established workflow or standard for quality control of finished assemblies. Naranjo-Ortiz et al. describe a pipeline attempting to provide one.
The Karyon pipeline is a workflow for assembling haploid reference genomes while evaluating the ploidy levels of all scaffolds, using GATK for variant calling and nQuire as a statistical method for estimating ploidy from allelic coverage support. I appreciated that the pipeline promotes some good habits, such as comparing k-mer spectra with the genome assembly (via KAT) and treating contamination (via Blobtools). Nearly all components of the pipeline are established tools, but the authors also propose karyon plots: diagnostic plots for quality control of assemblies.
The most interesting and novel one I have seen is a plot of SNP density versus coverage. Such a plot might be helpful in identifying various changes of ploidy level specific to a subset of chromosomes, as the authors demonstrated on several fungal genomes (Mucorales). I attempted to run the pipeline and ran into several technical issues. The authors helped me overcome the major ones (documented here: https://github.com/Gabaldonlab/karyon/issues/1), and I managed to generate a karyon plot for the genome of a male hexapod with an X0 sex determination system. I chose it because we know the karyotype well, and I suspected the X chromosome would nicely pop up in the karyon plot.
To my surprise, although I know the scaffold coverages are strongly bimodal, I got only a single coverage peak in the karyon plot, oddly somewhere between the expected haploid and diploid coverages. It is possible I messed something up, but I would like the authors to demonstrate the tool on a genome with a known karyotype. I would propose using a male of a species with an XY or X0 sex determination system; although not aneuploidy sensu stricto, it is most likely the most common within-genome ploidy variation among metazoans. I would also propose that the authors improve the modularity of the pipeline. At my request, the authors added a lightweight installation for users interested in the diagnostic plots after the assembly step, but the inputs are expected in a specific yet undocumented format, which makes modular use rather hard. At the least, the documentation of the formats should improve, but in general I think the pipeline could be made friendlier to people interested in only some smaller bits (I am happy to provide the authors with the data I used).
Although I quite enjoyed reading the manuscript and the manual afterwards, I do think there is a lot of room for improvement. One major point is that there is no formal description of the only truly innovative bit of this pipeline: the karyon plots. There is a nice philosophical overview, but the karyon plots themselves are not explained, which makes reading the showcase study much harder. Perhaps a scheme showing the plot and annotating what is expected where would help. Furthermore, the authors performed a likelihood analysis of ploidy using nQuire, but they do not discuss it at all in the results section. I wonder: what fraction of the assembly did the analysis find most likely to be aneuploid, for the subset of strains suspected to be aneuploids? Is a 1,000-base sliding window big enough to carry enough signal to produce reliable assignments? In my experience, windows of this size are hard to assign ploidy to, but I usually do such analyses using coverage, not SNP support.
However, I would like to praise the authors for the fungal showcases; I think they are a nice piece of genomics work, investigating and considering both biological and technical aspects appropriately. Finally, a smaller comment: the introduction could be a bit more to the point. Some of the sections felt out of place, perhaps even unnecessary (see minor comments below). More specific and minor comments are listed below. Kamil S. Jaron
Minor manuscript comments: I gave this manuscript a lot of thought, so I would like to share with you what I have figured out. However, I recognise that the writing comments listed below are largely a matter of personal preference. I hope they will be useful for you, but it is nothing I would insist on as a reviewer. l56: An unnecessary book citation. It's not a primary source for that statement, and if the reference was meant as "further reading", it would perhaps be better to cite a recent review available online rather than a book. l65 - 66: Is the "lower error rate" still a true statement? I don't think it is; error rates of HiFi reads are similar to or even lower than those of short reads (though I do agree there is still plenty of use for short reads). l68 - 72: I don't think you really need the confusing statement "which are mainly influenced by the number of different k-mers"; the problems of short-read assembly are well explained below. However, I actually did not understand why the whole paragraph l76 - 88 was important. I would expect an introduction to cover the approaches people have used so far to overcome problems of ploidy and heterozygosity in assemblies. l176 - 177: "Ploidy can be easily estimated with cytogenetic techniques" - I don't think this statement is universally true. There are many groups where cytogenetics is extremely hard (like the notoriously difficult nematodes), or species that are not cultivated in the lab; for those it's much easier to do an NGS analysis. You actually contradict this "easily" right in the next sentence. l191: the first author of nQuire is not Weib, but Weiß. The same typo is in the reference list. l222 - 223 and l69 - 70 explain what a k-mer is twice. l266 - 267: This statement or the list does not contain references to the publications sequencing the original genomes. I am not sure, but when possible it is good to credit the original authors for the sequencing efforts. l302: REF instead of a reference. l303: What is an "important fraction"? 
l304: How can you make such a conclusion? Did you try to remove the contamination and redo the assembly step? Did the assembly improve? I am not sure it's so important for the manuscript, but I would tone down this statement ("could be caused by" sounds more appropriate). l310: "B9738 is haploid" - are you talking about the genome or the assembly? How could you tell the difference between a homozygous diploid and a haploid genome? If there is a biological reason why a homozygous diploid is unlikely, it should be mentioned. l342: How does fig 7 show 3% heterozygosity? How was the heterozygosity measured? Also, the karyon plot actually shows that the majority of the genome is extremely homozygous and that all heterozygosity lies in windows with spuriously high coverage. What do you think the haploid / diploid sequencing coverage is in this case? l343 - 345: I don't think these statements are appropriately justified. The analysis presented did not convincingly show that the genome is triploid or a heterozygous diploid. l350: I think citing SRA is rather unnecessary. l358: what "model"? How could one reproduce the analysis / where can the model be found? l378 - 379: Does Karyon analyse ploidy variation "during" the assembly process? Although the process is integrated into a streamlined pipeline, there are plenty of approaches to detect karyotype changes in assemblies, from nQuire, which Karyon uses, to all the sex-chromosome analyses, such as https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002078.
Method/manual comments:
Scaffold length plots have no label on the x axis. As the plots are called distributions, I would expect frequency or probability on the y axis and scaffold length on the x axis. Furthermore, plotting my own data resulted in a linear plot with a very overscaled y axis. The "scaffold versus coverage" plot does not have axis labels either; I would also call it scaffold length vs coverage instead. I also found the position of the illustrating picture in the manual a bit confusing (it should probably come before the header of the next plot).
Variation vs. coverage is the main plot. It does look like a useful visualisation idea. Do I understand correctly that it is just the number of SNPs versus coverage? I am confused: I thought the SNP calling is done on the reference individual, yet in the description you also talk about homozygous variants. What are those? Mismapped reads? Misassembled references?
I also wonder about "3. Diffuse cloud across both X and Y axes." I would naturally imagine that collapsed paralogs would show a pattern similar to the example plot: a smear towards both higher coverage and higher SNP density. I guess this is a more general comment: would you expect any different signature for collapsed paralogs versus higher ploidy levels? Should paralogy not be considered more explicitly as a factor?
Shattered windows and the sound of drums
"Shattered windows and the sound of drums" connotes violence by the persona while he ruled the people. Instead of being peaceful, the persona resorts to forceful measures to get things done. This shows that the persona is evil and despicable.
peered from the windows, shades drawn
Stalker?
Typing the to bach and other accented characters in Windows 7, Vista and XP by Meurig Williams
I know I had it in there somewhere, but can't find it this morning.
To bach or circumflex on a Welsh keyboard is: ctrl-alt ^ + letter
Markdown editor and organizer for Windows, Mac and Linux
Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.
Different cause but when things like this happen after sporting events when teams win or lose it is almost condoned by society when in fact it is a riot
Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.
Man they take their hockey seriously! I love the people's loyalty.
Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.
Wow, This is insane! This is obviously bigger than anger for the suspension.
Gyuri Lajos: Still, congratulations on the very nice summaries of the frameworks. Svelte is rightly coming up.
Gyuri Lajos: https://hypothes.is/a/EmQxSqvcEe2ivIMtMs4kww Unlike React, it does not need to be refactored every year or so. If you cannot get the concept right (state management is a broken concept), it is not even clear what it is that you really want to do; hence the constant rework. As Alan Kay said about the browser, it is a broken wheel; it is not even clear what it is. Why is it that the most popular frameworks perform worse? And remember, since the number of people working in the field doubles every 5 years or so (Uncle Bob Martin), half of all programmers have less than 5 years of experience; popularity = pop culture and cargo cult. To be fair, Java got to be the top language because, although it failed (the first time) for set-top boxes, it did have a huge marketing budget; it tried and eventually failed in the browser as applets, then escaped to the server side, eclipsed by JavaScript for good reasons. Android saved it, but it still leaves a lot to be desired. Back in the day there was a great project for Windows Mobile called Ewe with a great UI (not using platform libraries, but doing its own drawing of UI elements). It was about 2 MB 20 years ago and worked interchangeably on mobile and desktop Windows. So there.
Gyuri Lajos: Do check out AppRun; and "todo up" is not enough. https://indylab-2022.fission.app/hyp?apprun&user=gyuri
My comments
In 1911 the Triangle Shirtwaist Factory in Manhattan caught fire. The doors of the factory had been chained shut to prevent employees from taking unauthorized breaks (the managers who held the keys saved themselves, but left over two hundred women behind). A rickety fire ladder on the side of the building collapsed immediately. Women lined the rooftop and windows of the ten-story building and jumped, landing in a “mangled, bloody pulp.”
"We hold that employment of women over 48 hours per week as a normal should be prohibited"
Instruction: Well-divided, text-based instructions. Step-by-step tutoring, which is easy to follow.
Interface: Clear UI, easy to understand. Elements are easy to manipulate with a mouse. WYSIWYG.
Access: Windows, iOS, Android, mobile app.
Language Support: Multi-language interface.
Cost and Devices: Extra templates and graphics for premium members ($12.95 per month).
Diverse Character/Icon Selection: Canva uses Pixabay and Pexels to provide some (but not many) diverse and multicultural images. On the other hand, there are limitations in the diversity of icons offered.
Canva is what I ultimately decided to use for this module's project. It was not an easy decision since I wanted to try out VYOND as was suggested in the project description. Ultimately, it came down to cost and features. I also cross-shopped Moovly and Powtoon, but at the free education tier which is all that my budget allows, I found that Canva provided the most features and was the easiest to use for what I had in mind.
Having used it, I can attest that the instruction and interface are easy to follow and that overall, Canva does provide an excellent user experience.
Wikipedia:Citing sources (project page)

For information on citing footnotes within Wikipedia articles, see Help:Footnotes and Wikipedia:Inline citation. For information on citing Wikipedia articles in works outside Wikipedia, see Wikipedia:Citing Wikipedia. "WP:CITE" and "WP:REF" redirect here. For the "citation needed" information page, see WP:CITENEED. For the reference desk, see WP:REFD.

This page documents an English Wikipedia content guideline. It is a generally accepted standard that editors should attempt to follow, though it is best treated with common sense, and occasional exceptions may apply. Any substantive edit to this page should reflect consensus. When in doubt, discuss first on the talk page. Shortcuts: WP:CS, WP:CITE, WP:REFS

This page in a nutshell: Cite reliable sources. You can add a citation by selecting from the "Cite" drop-down menu at the top of the editing box. In markup, you can add a citation manually using ref tags. More detailed and useful ways of citing sources are described below. New here? Welcome! There is a simplified version of this page at Help:Referencing for beginners.

A citation, also called a reference,[note 1] uniquely identifies a source of information, e.g.:
Ritter, R. M. (2003). The Oxford Style Manual. Oxford University Press, p. 1. ISBN 978-0-19-860564-5.

Wikipedia's verifiability policy requires inline citations for any material challenged or likely to be challenged, and for all quotations, anywhere in article space.

A citation or reference in an article usually has two parts. In the first part, each section of text that is either based on, or quoted from, an outside source is marked with an inline citation; the inline citation will be a superscript footnote number. The second necessary part of the citation or reference is the list of full references, which provides complete, formatted detail about the source, so that anyone reading the article can find it and verify it.
This page explains how to place and format both parts of the citation. Each article should use one citation method or style throughout. If an article already has citations, preserve consistency by using that method, or seek consensus on the talk page before changing it (the principle is reviewed at § Variation in citation methods). While you should try to write citations correctly, what matters most is that you provide enough information to identify the source; others will improve the formatting if needed. See "Help:Referencing for beginners" for a brief introduction on how to place references in Wikipedia articles, and "Citation templates in the Visual Editor" for the graphical way of including citations in Wikipedia.

Inline citations
Shortcuts: WP:INCITE, WP:INLINECITE. Further information: Wikipedia:Inline citation
Inline citations allow the reader to associate a given piece of material in an article with the specific reliable source(s) that support it. Inline citations are added using footnotes (long or short). This section describes how to add either type, and also how to create a list of full bibliographic citations to support shortened footnotes.
The first editor to add footnotes to an article must create a section where those citations appear.

Footnotes
See also: Help:Footnotes
How to create the list of citations. Shortcut: WP:REFLIST
This section, if needed, is usually titled "Notes" or "References" and is placed at or near the bottom of the article. For more about the order and titles of sections at the end of an article (which may also include "Further reading" and "External links" sections), see Wikipedia:Footers.

With some exceptions discussed below, citations appear in a single section containing only the tag or the template, for example: <references /> or {{Reflist}}

== References ==
{{Reflist}}

The footnotes will then automatically be listed under that section heading. Each numbered footnote marker in the text is a clickable link to the corresponding footnote, and each footnote contains a caret linking back to the corresponding point in the text.

Shortcut: WP:ASL
Never display a citation list in a scrolling list or a scroll box, because of issues with readability, browser compatibility, accessibility, printing, and site mirroring.[note 2]

If an article contains a list of general references, it is usually placed in its own section, titled (for example) "References", which usually comes immediately after the section(s) listing footnotes, if any. (If the general references section is called "References", the citations section is usually called "Notes".)

How to place an inline citation using ref tags
Shortcut: WP:CITEFOOT. Further information: Footnotes: the basics
To create a footnote, use the <ref>...</ref> syntax at the appropriate place in the article text, for example:
Justice is a human invention.<ref>Rawls, John. ''A Theory of Justice''. Harvard University Press, 1971, p. 1.</ref> It ...

will be displayed as something like:

Justice is a human invention.[1] It ...

The list of footnotes (where the citation text is actually displayed) also needs to be generated; for that, see the previous section.

As in the example above, citation markers are normally placed after adjacent punctuation such as periods (full stops) and commas; for exceptions, see WP:Manual of Style § Punctuation and footnotes. Note also that no space is added before the citation marker. Citations should not be placed within, or on the same line as, section headings.

The citation should be added close to the material it supports, offering text-source integrity. If a word or phrase is particularly contentious, an inline citation may be added next to that word or phrase within the sentence, but it is usually sufficient to add the citation to the end of the clause, sentence, or paragraph, so long as it remains clear which source supports which part of the text.
Keeping citations separate from explanatory footnotes
See also: Wikipedia:Manual of Style/Layout § Notes and references, and Help:Explanatory notes. Shortcut: WP:EXPLNOTESECT
If an article contains both footnoted citations and other (explanatory) footnotes, they may (but need not) be divided into two separate lists using footnote groups. The explanatory footnotes and the citations are then placed in separate sections, called (for example) "Notes" and "References" respectively.

Another way to separate explanatory footnotes from footnoted references is to use {{efn}} for the explanatory footnotes. The advantage of this system is that the content of an explanatory footnote can then itself be cited with footnoted references. When explanatory footnotes and footnoted references are not kept in separate lists, {{refn}} may be used for explanatory footnotes that contain footnoted citations.

Avoiding clutter
Shortcuts: WP:CLUTTER, WP:INLINECLUTTER, WP:INLINECITECLUTTER
Inline references can significantly bloat the wikitext in the edit window and can become difficult and confusing to manage. There are two main methods of avoiding clutter in the edit window:

use list-defined references, collecting the full citation code inside the reference list template {{reflist}} and inserting the citations in the text with shortened reference tags such as <ref name="Smith 2001, p99" />; or
insert short citations (see below), which then refer to a full list of the source texts.
As with other citation formats, articles should not undergo large-scale conversion between formats without consensus.

Note, however, that references defined in the reference list template can no longer be edited with the VisualEditor.
Repeated citations
Further information: Footnotes: using a source more than once
To use the same inline citation or footnote several times, you can use the named references feature: choose a name to identify the inline citation and type <ref name="name">text of the citation</ref>. Thereafter, the same named reference may be reused any number of times, before or after the defining use, by typing the previously used reference name like this: <ref name="name" />. The slash before the > indicates that the tag is self-closing, and the </ref> used to close other references must not additionally be used.
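As a minimal wikitext sketch (the reference name and the source details are illustrative), a named reference defined once and then reused looks like this:

```wikitext
The Sun is pretty big.<ref name="Miller 2005 p23">Miller, Edward (2005). ''The Sun''. Academic Press, p. 23.</ref>
The Sun is also quite hot.<ref name="Miller 2005 p23" />

== References ==
{{Reflist}}
```

Both footnote markers in the rendered article point to the same numbered entry in the reference list.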
The text of the name can be almost anything, apart from being completely numeric. If spaces are used in the text of the name, the text must be placed within double quotation marks; placing all named references within double quotation marks may help future editors who do not know that rule. To aid page maintenance, it is recommended that the text of the name have a connection to the inline citation or footnote, for example "author year page": <ref name="Smith 2005 p94">text of the citation</ref>

Use straight quotation marks (") to enclose the reference name; do not use curly quotation marks (“ ”). Curly marks are treated as just another character, not as delimiters. The page will display an error if one quote style is used when a reference is first named and another style is used in a repeat reference, or if a mixed style is used in a repeat reference.
Citing different pages of the same source
Shortcuts: WP:IBID, WP:OPCIT. Further information: Help:References and page numbers
When an article cites many different pages of the same source, most Wikipedia editors use one of the following options to avoid the redundancy of many big, nearly identical full citations:

named references combined with a list of page numbers, using the |pages= parameter of the {{cite}} templates (the most common option, but potentially confusing with a large number of pages);
named references in conjunction with the {{rp}} or {{r}} templates to specify the page;
short citations.
The use of ibid., id., or similar abbreviations is discouraged, as these may break as new references are added (op. cit. is less problematic in that it should refer explicitly to a citation contained in the article; however, not all readers are familiar with the meaning of these terms). If ibid. is used extensively, tag the article with the {{ibid}} template.

Duplicate citations
Shortcuts: WP:DUPCITES, WP:DUPREF
Combine exactly duplicated full citations, in keeping with the existing citation style (if any). Here "exactly duplicated" means having the same content, not necessarily the same string ("The New York Times" is the same as "NY Times"; differing access dates do not matter). Do not discourage editors, especially inexperienced ones, from adding duplicate citations when use of a source is appropriate, because a duplicate citation is better than no citation. But any editor should feel free to combine them, and doing so is Wikipedia best practice.

Citations of different pages or parts of the same source can also be combined (preserving the distinct parts of the citations), as described in the previous section. Any method consistent with the existing citation style (if any) may be used, or consensus may be sought to change the existing style.

Finding duplicate citations by inspecting the reference list is difficult. Some tools can help:
AutoWikiBrowser (AWB) will identify and (usually) correct exact duplicates between <ref>...</ref> tags; see its documentation.
The URL Extractor for Web Pages and Text can identify web citations that have exactly the same URL but may otherwise differ. Sometimes citations of the same web page are followed by differing, unimportant tracking parameters (?utm..., #ixzz...) and will not be listed as duplicates.
Step 1: Enter the URL of the Wikipedia article, then click "Load".
Step 2: Tick "Only display duplicate URL addresses" (untick "Remove duplicate addresses").
Optional: tick the radio button "do not display", tick the box at the start of its line, and enter web.archive.org,wikipedia,wikimedia,wikiquote,wikidata in the box.
Step 3: Click Extract.
Duplicates will then be listed and must be merged manually. False positives are frequent; archive URLs in particular are troublesome, because they contain the original URL and show up as duplicates. The optional part of step 2 eliminates archive URLs, but unfortunately the duplicates list then includes web.archive.org archived pages. The wiki* URLs are less of a problem, as they can be ignored.

Short citations
Shortcuts: WP:CITESHORT, WP:SFN. Main page: Help:Shortened footnotes
Some Wikipedia articles use short citations, which give summary information about the source together with a page number, as in <ref>Smith 2010, p. 1.</ref>. These are used together with full citations, which give full details of the sources but without page numbers and are listed in a separate "References" section.

Short citation forms in use include author-date referencing (APA style, Harvard style, or Chicago style) and author-title or author-page referencing (MLA style or Chicago style). As before, the list of footnotes is automatically generated in a "Notes" or "Footnotes" section, which immediately precedes the "References" section containing the full citations of the sources. Short citations can be written manually or with the {{sfn}} or {{harvnb}} templates or the {{r}} referencing template. (Note that templates should not be added without consensus to an article that already uses a consistent referencing style.) Short and full citations may be linked so that the reader can click on the short note to find full information about the source. See the template documentation for details and solutions to common problems; see "Wikilinks to full references" for variations with and without templates, and these examples for a worked set.

This is how short citations look in the edit box:
The Sun is pretty big,<ref>Miller 2005, p. 23.</ref> but the Moon is not so big.<ref>Brown 2006, p. 46.</ref> The Sun is also quite hot.<ref>Miller 2005, p. 34.</ref>
== Notes == {{reflist}}
== References == * Brown, Rebecca (2006). "Size of the Moon", ''Scientific American'', 51 (78). * Miller, Edward (2005). ''The Sun''. Academic Press.

This is how they look in the article:

The Sun is pretty big,[1] but the Moon is not so big.[2] The Sun is also quite hot.[3]

Notes
^ Miller 2005, p. 23.
^ Brown 2006, p. 46.
^ Miller 2005, p. 34.

References
Brown, Rebecca (2006). "Size of the Moon", Scientific American, 51 (78).
Miller, Edward (2005). The Sun. Academic Press.

Shortened notes using titles rather than publication dates would look like this in the article:

Notes
^ Miller, The Sun, p. 23.
^ Brown, "Size of the Moon", p. 46.
^ Miller, The Sun, p. 34.

When using manual links, it is easy to introduce errors such as duplicate anchors and unused references. The script User:Trappist the monk/HarvErrors will show many related errors; duplicate anchors can be found with the W3C Markup Validation Service.
Parenthetical referencing
Shortcut: WP:PAREN
Since September 2020, inline parenthetical referencing has been deprecated on Wikipedia. This includes short citations in parentheses placed within the article text itself, such as (Smith 2010, p. 1). It does not affect short citations that use <ref> tags, which are not inline parenthetical references; see the section on short citations above for that method. As part of the deprecation process in existing articles, discussion of how best to convert inline parenthetical citations into currently accepted formats should be held if there is objection to a particular method.
This is no longer in use:
☒ The Sun is pretty big (Miller 2005, p. 1), but the Moon is not so big (Brown 2006, p. 2). The Sun is also quite hot (Miller 2005, p. 3).

References
Brown, R. (2006). "Size of the Moon", Scientific American, 51 (78).
Miller, E. (2005). The Sun. Academic Press.

Citation style
Shortcut: WP:CITESTYLE
While citations should aim to provide the information listed above, Wikipedia does not have a single house style, though citations within any given article should follow a consistent style. A number of citation styles exist, including those described in the Wikipedia articles on Citation, APA style, ASA style, MLA style, The Chicago Manual of Style, author-date referencing, the Vancouver system, and Bluebook.

Although nearly any consistent style may be used, avoid all-numeric date formats other than YYYY-MM-DD, because of the ambiguity over which number is the month and which the day. For example, 2002-06-11 may be used, but not 11/06/2002. The YYYY-MM-DD format should in any case be limited to Gregorian calendar dates where the year is after 1582. The format YYYY-MM (e.g. 2002-06) is not used, because it is easily confused with a range of years.

For more information on capitalization of cited works, see Wikipedia:Manual of Style/Capital letters § All caps and small caps.
Variation in citation methods
Shortcuts: WP:CITEVAR, WP:WHENINROME
Editors should not attempt to change an article's established citation style merely on grounds of personal preference, or to make it match other articles, without first seeking consensus for the change.[note 3]

As with spelling differences, unless there is consensus to the contrary, it is normal practice to defer to the style used by the first major contributor or adopted by the consensus of editors already working on the page. If the article you are editing is already using a particular citation style, you should follow it; if you believe it is inappropriate for the needs of the article, seek consensus for a change on the talk page. If you are the first contributor to add citations to an article, you may choose whichever style you think best for the article. However, as of September 5, 2020, inline parenthetical referencing is a deprecated citation style on the English Wikipedia.

If all or most of the citations in an article consist of bare URLs, or otherwise fail to provide needed bibliographic data (such as the name of the source, the title of the article or web page consulted, the author if known, the publication date if known, and the page numbers where relevant), then that does not count as a "consistent citation style" and can be changed freely to insert such data. The data provided should be sufficient to uniquely identify the source, allow readers to find it, and allow readers to initially evaluate the source without retrieving it.

To be avoided
When an article is already consistent, avoid:
switching between major citation styles, or replacing the preferred style of one academic discipline with another's, except when moving away from deprecated styles such as parenthetical referencing;
adding citation templates to an article that already uses a consistent system without templates, or removing citation templates from an article that uses them consistently;
changing where references are defined, e.g., moving reference definitions from the reference list into the prose, or from the prose into the reference list.

Generally considered helpful
The following are standard practice:
improving existing citations by adding missing information, such as replacing bare URLs with full bibliographic citations: an improvement because it aids verifiability and fights link rot;
replacing some or all general references with inline citations: an improvement because it gives the reader more verifiable information and helps maintain text-source integrity;
imposing one style on an article with inconsistent citation styles (e.g., some citations in footnotes and others as parenthetical references): an improvement because it makes the citations easier to understand and edit;
fixing errors in citation coding, including incorrectly used template parameters and <ref> markup problems: an improvement because it helps the citations be parsed correctly;
combining duplicate citations (see § Duplicate citations above);
converting parenthetical referencing to an acceptable citation style;
replacing opaque named-reference names with conventional ones, e.g., "Einstein-1905" instead of ":27".

Handling links in citations
As noted above in the section "What information to include", it is helpful to include hyperlinks to source material when available. Here we note some issues concerning those links.
Avoid embedded links
Shortcut: WP:CS:EMBED
Embedded links to external websites should not be used as a form of inline citation, because they are highly susceptible to link rot. Wikipedia allowed this in its early years, for example by adding a link after a sentence like this: [http://media.guardian.co.uk/site/story/0,14173,1601858,00.html], which renders as [1]. This is no longer recommended. Raw links are not recommended in lieu of properly written-out citations, even if placed between ref tags, like this: <ref>[http://media.guardian.co.uk/site/story/0,14173,1601858,00.html]</ref>. Since any citation that accurately identifies the source is better than none, do not revert the good-faith addition of partial citations; they should be considered temporary and replaced with fuller, properly formatted citations as soon as possible.

Embedded links should never be used to place external links in article content, for example: "Apple Inc. announced their latest product...".

Convenience links
Further information: Wikipedia:Copyrights § Linking to copyrighted works, and Help:Citation Style 1 § Online sources. Shortcut: WP:CONLINK
A convenience link is a link to a copy of a source on a web page provided by someone other than the original publisher or author. For example, a copy of a newspaper article no longer available on the newspaper's website may be hosted elsewhere. When offering convenience links, it is important to be reasonably certain that the convenience copy is a true copy of the original, without changes or inappropriate commentary, and that it does not infringe the original publisher's copyright. Accuracy can be assumed when the hosting website appears reliable.

For academic sources, the convenience link is typically a reprint provided by an open-access repository, such as the library or institutional repository of the author's university. Such green open-access links are generally preferable to paywalled or otherwise commercial and non-free sources.

Where several sites host a copy of the material, the site selected as the convenience link should be the one whose general content appears most in line with Wikipedia:Neutral point of view and Wikipedia:Verifiability.
指示可用性 捷径 WP:INDICATEAVAIL 如果您的资源无法在线获得,则应在信誉良好的图书馆、档案馆或馆藏中提供。如果没有外部链接的引文被质疑为不可用,则以下任何一项都足以证明该材料是合理可用的(尽管不一定可靠):提供 ISBN 或 OCLC 编号;链接到有关来源(作品、作者或出版商)的既定维基百科文章;或直接在讨论页上引用材料,简要地和上下文。
来源链接 捷径 WP:源链接 对于以硬拷贝、缩微形式和/或在线形式提供的来源,在大多数情况下,省略您阅读的源。虽然引用作者、标题、版本(第 1、第 2 等)和类似信息很有用,但引用 ProQuest、EBSCOhost 或 JSTOR 等数据库通常并不重要(请参阅学术数据库和搜索引擎列表)或链接到需要订阅或第三方登录的此类数据库。您提供的基本书目信息应该足以在任何具有来源的数据库中搜索源。请勿添加嵌入了部分密码的 URL。但是,您可以提供 DOI、ISBN 或其他统一标识符(如果有)。如果发布者提供了指向来源或其摘要的链接,而该链接不需要付款或第三方登录即可访问,您可以提供该链接的URL。如果来源仅在线存在,即使访问受到限制,也要提供链接(请参阅WP:PAYWALL)。
Preventing and repairing dead links
See also: Wikipedia:Link rot and Help:Archiving a source
Shortcut: WP:DEADREF
To help prevent dead links, persistent identifiers are available for some sources. Some journal articles have a digital object identifier (DOI); some online newspapers and blogs, as well as Wikipedia itself, have stable permalinks. When permanent links are not available, consider archiving the cited document while writing the article; on-demand web archiving services such as the Wayback Machine (https://web.archive.org/save) or archive.today (https://archive.today) are fairly easy to use (see pre-emptive archiving).
Do not delete a citation merely because the URL is not working. Dead links should be repaired or replaced if possible. If you encounter a dead URL being used as a reliable source to support article content, follow these steps before deleting it:
1. Confirm the status: First, check the link to confirm that it is dead and not merely temporarily down. Search the website to see whether it has been rearranged. The online service "Is It Down Right Now?" can help determine whether a site is down, along with anything known about the outage.
2. Check for a changed URL on the same site: Pages frequently move to different locations on the same site as they become archived content rather than news. The site's error page may have a search box; alternatively, the "site:" keyword can be used in search engines such as Google and DuckDuckGo, for example: site:en.wikipedia.org "New Zealand police vehicle markings and livery".
3. Check web archives: Many web archiving services exist (for a full list, see Wikipedia:List of web archives on Wikipedia); link to their archived copy of the URL's content, if one exists. Examples:
- The Internet Archive has billions of archived web pages. See Wikipedia:Using the Wayback Machine.
- archive.today: see Wikipedia:Using archive.today.
- WebCite has billions of archived web pages; see Wikipedia:Using WebCite. However, WebCite has not accepted new archiving requests since July 2019, and previously archived pages have been inaccessible since October 2021.
- The UK Government Web Archive (https://www.nationalarchives.gov.uk/webarchive/) preserves 1,500 UK central government websites.
- The Mementos interface lets you search several archive services with a single request, using the Memento protocol. Unfortunately, the Mementos web interface strips any parameters passed with the URL, so it is unlikely to work properly if the URL contains a "?". When manually entering a URL into the Mementos interface, the most common change needed is replacing "?" with "%3F". While this change alone is not sufficient in all cases, it works in most. The bookmarklets in the table below encode the URL correctly so that the search works.
If several archive dates are available, try to use the one most likely to show the page content that the editor who entered the reference saw on the |access-date=. If that parameter is not specified, the article's revision history can be searched to determine when the link was added to the article.
For most citation templates, the archive location is entered with the |archive-url= and |archive-date= parameters. With |url-status=dead, the main link switches to the archive link, preserving the original link location for reference. If the web page now points to an entirely different website, set |url-status=usurped to hide the original website link in the citation.
Note: some archives currently delay the availability of links for roughly 18 months until they are made public. Editors should therefore wait roughly 24 months after a link is first tagged as dead before declaring that no web archive exists. A dead URL pointing to a reliable source should normally be tagged with {{dead link|date=February 2023}}, so that the time at which the link died can be estimated.
Bookmarklets for common archive sites, for checking archives of the current page:
- Archive.org: javascript:void(window.open('https://web.archive.org/web/*/'+location.href))
- archive.today / archive.is: javascript:void(window.open('https://archive.today/'+location.href))
- Mementos interface: javascript:void(window.open('https://www.webarchive.org.uk/mementos/search/'+encodeURIComponent(location.href)+'?referrer='+encodeURIComponent(document.referrer)))
4. Remove convenience links: If the material was published on paper (e.g., an academic journal, newspaper article, magazine, or book), the dead URL is not needed. Simply remove it and leave the remainder of the citation intact.
5. Find a replacement source: Search the web for quoted text, the article title, and parts of the URL. Consider contacting the website or person that originally published the reference and asking them to republish it. Ask other editors for help finding the reference elsewhere, including the user who added it. Find a different source that says essentially the same thing as the reference in question.
6. Remove hopelessly lost web-only sources: If the source material does not exist offline, there is no archived version of the web page (be sure to wait roughly 24 months), and you cannot find another copy of the material, then the dead citation should be removed and the material it supported should be regarded as unverified if there is no other supporting citation. If it is material that policy specifically requires to have an inline citation, consider tagging it with {{citation needed}}. It may be appropriate for you to move the citation to the talk page with an explanation, and to notify the editor who added the now-dead link.

Text-source integrity
Shortcuts: WP:TSI, WP:INTEGRITY
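The archive bookmarklets above simply prepend an archive service's lookup prefix to the current page's URL, percent-encoding characters such as "?". The same construction can be sketched in Python; the helper name is invented for illustration, and the prefixes are transcribed from the bookmarklets above:

```python
from urllib.parse import quote

# Archive-service lookup prefixes, mirroring the bookmarklets listed above.
ARCHIVE_PREFIXES = {
    "wayback": "https://web.archive.org/web/*/",
    "archive.today": "https://archive.today/",
}

def archive_lookup_url(service: str, url: str) -> str:
    """Build a lookup URL for archived copies of `url`.

    Query-string characters are percent-encoded (e.g. '?' becomes '%3F')
    so the archive service does not misread them as its own parameters.
    """
    return ARCHIVE_PREFIXES[service] + quote(url, safe=":/")
```

Unlike the raw JavaScript bookmarklets, this version encodes "?" and "=", which is the manual fix the Mementos note above recommends.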
"WP:INTEGRITY" redirects here. For the WikiProject Integrity, see WP:WPINTEGRITY.
When using inline citations, it is important to maintain text-source integrity. The point of an inline citation is to allow readers and other editors to see which part of the material a citation supports; that point is lost if the citation is not clearly placed. The distance between material and its source is a matter of editorial judgment, but adding text without clearly placing its source may lead to allegations of original research, violations of the sourcing policy, and even plagiarism.
Keeping citations close
Editors should exercise caution when rearranging or inserting material to ensure that text-source relationships are maintained. References should not be moved merely to keep footnotes in the chronological order in which they appear in the article, and should not be moved if doing so might break the text-source relationship.
If a sentence or paragraph is footnoted with a source, adding new material to that sentence or paragraph that is not supported by the existing source, without a source for the new text, is highly misleading if it appears that the cited source supports it. When inserting new text into a paragraph, make sure it is supported by the existing or a new source. For example, when editing text that originally read
The Sun is pretty big.[1]
Notes
^ Miller, Edward. The Sun. Academic Press, 2005, p. 1.
an edit that does not falsely imply that the new material is supported by the same citation is:
The Sun is pretty big.[1] The Sun is also quite hot.[2]
Notes
^ Miller, Edward. The Sun. Academic Press, 2005, p. 1.
^ Smith, John. The Sun's Heat. Academic Press, 2005, p. 2.
Do not add other facts or assertions into a fully cited paragraph or sentence:
☒
The Sun is pretty big, but the Moon is not so big.[1] The Sun is also quite hot.[2]
Notes
^ Miller, Edward. The Sun. Academic Press, 2005, p. 1.
^ Smith, John. The Sun's Heat. Academic Press, 2005, p. 2.
Instead, include a source supporting the new information. There are several ways this can be written, including:
✓
The Sun is pretty big,[1] but the Moon is not so big.[2] The Sun is also quite hot.[3]
Notes
^ Miller, Edward. The Sun. Academic Press, 2005, p. 1.
^ Brown, Rebecca. "Size of the Moon", Scientific American, 51(78):46.
^ Smith, John. The Sun's Heat. Academic Press, 2005, p. 2.

Bundling citations
Shortcuts: WP:CITEBUNDLE, WP:BUNDLING
Main page: Help:Citation merging
See also: Help:Shortened footnotes § Bundling citations, and Wikipedia:Citation overkill
Sometimes an article is more readable if multiple citations are bundled into a single footnote. For example, when there are several sources for a given sentence and each source applies to the entire sentence, the sources can be placed at the end of the sentence, like this.[4][5][6][7] Or they can be bundled into one footnote at the end of the sentence or paragraph, like this.[4]
Bundling is also useful if the sources each support a different portion of the preceding text, or if the sources all support the same text. Bundling has several advantages:
- It helps readers and other editors see at a glance which source supports which point, maintaining text-source integrity;
- It avoids the visual clutter of multiple clickable footnotes inside a sentence or paragraph;
- It avoids the confusion of multiple sources listed separately after the sentence, with no indication of which source to check for each part of the text, like this;[1][2][3][4]
- It makes it less likely that inline citations will be inadvertently moved when text is rearranged, because the footnote states clearly which source supports which point.
To concatenate multiple citations for the same content, a semicolon (or another character appropriate to the article's style) may be used. Alternatively, use one of the templates listed on the disambiguation page Template:Multiple references.
The Sun is pretty big, bright, and hot.[1]
Notes
Semicolons
^ Miller, Edward. The Sun. Academic Press, 2005, p. 1; Brown, Rebecca. "The Solar System", Scientific American, 51(78):46; Smith, John. The Earth's Star. Academic Press, 2005, p. 2.
For multiple citations in a single footnote, each referring to a particular statement, several layouts are available, as illustrated below. Only a single layout should be used in a given article.
The Sun is pretty big, but the Moon is not so big. The Sun is also quite hot.[1]
Notes
Bullets
^ For the Sun's size, see Miller, Edward. The Sun. Academic Press, 2005, p. 1.
For the Moon's size, see Brown, Rebecca. "Size of the Moon", Scientific American, 51(78):46.
For the Sun's heat, see Smith, John. The Sun's Heat. Academic Press, 2005, p. 2.
Line breaks
^ For the Sun's size, see Miller, Edward. The Sun. Academic Press, 2005, p. 1.
For the Moon's size, see Brown, Rebecca. "Size of the Moon", Scientific American, 51(78):46.
For the Sun's heat, see Smith, John. The Sun's Heat. Academic Press, 2005, p. 2.
Paragraph
^ For the Sun's size, see Miller, Edward. The Sun. Academic Press, 2005, p. 1. For the Moon's size, see Brown, Rebecca. "Size of the Moon", Scientific American, 51(78):46. For the Sun's heat, see Smith, John. The Sun's Heat. Academic Press, 2005, p. 2.
However, separating list items with line breaks violates WP:Accessibility § Nobreaks: "Do not separate list items with line breaks (<br>)." {{Unbulleted list citebundle}} is made specifically for this purpose; {{Unbulleted list}} is likewise available.

In-text attribution
Shortcut: WP:INTEXT
Further information: Wikipedia:Neutral point of view § Attributing and specifying biased statements, and Wikipedia:Manual of Style § Points of view
In-text attribution is the attribution of material to its source inside a sentence, in addition to an inline citation after the sentence. In-text attribution should be used with direct speech (a source's words between quotation marks or as a block quotation); indirect speech (a source's words modified without quotation marks); and close paraphrasing. It can also be used when loosely summarizing a source's position in your own words, and it should always be used for biased statements of opinion. It avoids inadvertent plagiarism and helps the reader see where a position is coming from. An inline citation should follow the attribution, usually at the end of the sentence or paragraph in question.
For example:
☒ To make a fair decision, the parties must consider the matter as if behind a veil of ignorance.[2]
✓ John Rawls argues that, to make a fair decision, parties must consider the matter as if behind a veil of ignorance.[2]
✓ John Rawls argues that, to make a fair decision, parties must consider the matter as if "situated behind a veil of ignorance".[2]
When using in-text attribution, make sure it does not lead to an inadvertent violation of neutrality. For example, the following implies parity between the sources, without making clear that Darwin's position is the majority view:
☒ Charles Darwin says that human beings evolved through natural selection, but John Smith writes that we arrived here in pods from Mars.
✓ Human beings evolved through natural selection, as explained in Charles Darwin's The Descent of Man, and Selection in Relation to Sex.
Apart from neutrality issues, in-text attribution can be misleading in other ways. The sentence below suggests that The New York Times alone made this important discovery:
☒ According to The New York Times, the Sun will set in the west this evening.
✓ The Sun sets in the west each evening.
It is usually better not to clutter the text with information best left to the reference; interested readers can click on the reference to find the publishing journal:
☒ In an article published in The Lancet in 2012, researchers announced the discovery of the new tissue type.[3]
✓ Researchers first published the discovery of the new tissue type in 2012.[3]
Simple facts such as this one can be cited inline to reliable sources to aid readers, but the text itself is usually best left as a plain statement without in-text attribution:
✓ By mass, oxygen is the third most abundant element in the universe, after hydrogen and helium.[4]
General references
Shortcut: WP:GENREF
A general reference is a citation to a reliable source that supports content, but is not linked to any particular text in the article through an inline citation. General references are usually listed at the end of the article in a "References" section, and are usually sorted by the last name of the author or editor. General reference sections are most likely to be found in underdeveloped articles, especially when all article content is supported by a single source. The disadvantage of general references is that text-source integrity is lost unless the article is very short. They are often reworked by later editors into inline citations.
The appearance of general reference sections is the same as that given in the sections on short citations and parenthetical references above. If both cited and uncited references exist, their distinction can be highlighted with separate section names, e.g., "References" and "General references".
Dealing with unsourced material
Shortcuts: WP:NOCITE, WP:BLPCITE
If an article has no references at all, then:
- If the entire article is patent nonsense, tag it for speedy deletion under criterion G1.
- If the article is a biography of a living person, it can be tagged with {{subst:prod blp}} to propose deletion. If it is a biography of a living person and an attack page, it should be tagged for speedy deletion under criterion G10, which will blank the page.
- If the article does not fall into either of the above categories, consider finding references yourself, or leaving a comment on the article's talk page or the article creator's talk page. You can also tag the article with the {{unreferenced}} template and consider nominating it for deletion.
For individual claims in an article that are not supported by references:
- If the article is a biography of a living person, any contentious material must be removed immediately: see Biographies of living persons. If the unreferenced material is seriously inappropriate, it may need to be hidden from view, in which case request administrator assistance.
- If the added material appears to be false or an expression of opinion, remove it and inform the editor who added the unsourced material; the {{uw-unsourced1}} template can be placed on their talk page.
- In any other case, consider finding a reference yourself, or leaving a comment on the article's talk page or on the talk page of the editor who added the unsourced material. You can place a {{citation needed}} or {{dubious}} tag on the added text.

Citation templates and tools
Shortcut: WP:CITECONSENSUS
Further information: Wikipedia:Citation templates and Help:Citation tools
For a comparison of citations using templates with citations written by hand, see Wikipedia:Citing sources/Example edits for different methods § Footnotes.
Citation templates can be used to format citations in a consistent way. The use of citation templates is neither encouraged nor discouraged: an article should not be switched between templated and non-templated citations without good reason and consensus; see "Variation in citation methods" above.
If citation templates are used in an article, the parameters should be accurate. It is inappropriate to set parameters to false values so that the template renders as if written in some style other than the one the template normally produces (e.g., MLA style).
Metadata
Citations may be accompanied by metadata, though this is not mandatory. Most citation templates on Wikipedia use the COinS standard. Such metadata allows browser plugins and other automated software to make citation data accessible to the user, for instance by providing links to the user's library's online copies of the cited works. In articles that format citations manually, metadata can be added by hand in a span, according to the COinS specification.
Citation generation tools
Shortcut: WP:CITEGENERATORS
- The Wikipedia VisualEditor helps users format, insert, and edit sources given nothing more than a DOI, URL, ISBN, etc.
- User:Ark25/RefScript, a JavaScript bookmarklet: creates references with one click; works for many newspapers.
- User:V111P/js/WebRef, a script or bookmarklet automating the filling-in of the {{cite web}} template. You use the script on the page you want to cite.
- User:Badgettrg, a biomedical citation maker. Uses a PubMed ID (PMID), DOI, PMCID, or NCT. Adds links to ACP Journal Club and Evidence-Based Medicine reviews if they exist.
- WP:ReFill – adds titles to bare-URL references, plus other cleanup.
- Template:Ref info, which can aid in evaluating what citation style was used to write the article.
Citoid-based:
- Cite templates in the VisualEditor.
- User:Salix alba/Citoid, a client for the mw:citoid server, which generates Citation Style 1 templates from a URL.
Reference tags:
- Reference tags for numbered object identifiers.
- Reference tags for The New York Times.
- Wikipedia DOI and Google Books Citation Maker.
Hosted on tools.wmflabs.org:
- Wikipedia:refToolbar 2.0, used in the source editor.
- Citation bot.
- Yadkard: a web-based tool for generating shortened footnotes and citations from Google Books URLs, DOIs, or ISBNs; it also supports some news websites.
- Wikipedia template filling – generates Vancouver-style citations from a PMID (PubMed ID).
Programming tools
See: Help:Citation tools § Tools
- Wikicite is a free program that helps editors create citations for their Wikipedia contributions using citation templates. It is written in Visual Basic .NET, so it suits only users with the .NET Framework installed on Windows or, on other platforms, the alternative Mono framework. Wikicite and its source code are freely available; see the developer's page for further details.
- User:Richiez has tools to automatically handle citations for a whole article at once. They convert occurrences of {{pmid XXXX}} or {{isbn XXXX}} to properly formatted footnotes or Harvard-style references. Written in Ruby; requires a working installation with the basic libraries.
- pubmed2wikipedia.xsl, an XSL stylesheet transforming the XML output of PubMed into Wikipedia refs.
Reference management software
Reference management software can output formatted citations in several styles, including BibTeX, RIS, or Wikipedia citation template styles.
- Comparison of reference management software – a side-by-side comparison of various reference management packages.
- Wikipedia:Citing sources with Zotero – an essay on using Zotero to quickly add citations to articles. Zotero (by the Roy Rosenzweig Center for History and New Media; license: Affero GPL) is open-source software with a local reference database that can be synchronized between several computers via the online database (free up to 300 MB).
- EndNote (by Thomson Reuters; license: proprietary)
- Mendeley (by Elsevier; license: proprietary)
- Paperpile (by Paperpile, LLC; license: proprietary)
- Papers (by Springer; license: proprietary)
See also
How to cite
- Wikipedia:References dos and don'ts – a concise summary of some of this page's most important guidelines
- Help:Referencing for beginners – a simple, practical how-to for getting started
- Help:How to mine a source – a how-to on getting the most information out of cited material
- Wikipedia:Verification methods – examples of the most common ways citations are used in Wikipedia entries
- Wikipedia:Improving referencing efforts – an essay on why references are important
- Wikipedia:Citation templates – a full list of styles for citing various kinds of material
- Wikipedia:Citing sources/Example edits for different methods – comparative edit-mode representations showing different citation methods and techniques
- Wikipedia:Citing sources/Further considerations – further considerations for citing sources
- Wikipedia:Inline citation – further information on inline citations
- Wikipedia:Nesting footnotes – a how-to on "nesting" footnotes
- Wikipedia:Manual of Style/Layout § Further reading – information on the "Further reading" section
- Wikipedia:External links – information on the "External links" section
- Wikipedia:Plagiarism § Public-domain sources – guideline covering the incorporation of public-domain material
- Wikipedia:Scientific citation guidelines – guidelines for scientific and mathematical articles
- Wikipedia:WikiProject Resource Exchange/Shared Resources – a project guide to finding resources
- MediaWiki:Extension:Cite – details of the software that supports the <ref> parser hook
Citation problems
- Template:Irrelevant citation – an inline template used to mark a source as irrelevant to the material
- Template:More citations needed – a template added to articles (or sections) with insufficient citations
- Template:Text-source – a template added to articles (or sections) whose text-source integrity is questioned
- Wikipedia:Citation needed – instructions for the template that marks statements needing citations
- Wikipedia:Citation overkill – why too many citations for one fact can be a bad thing
- Wikipedia:Copyright problems – if text has been inappropriately copied verbatim
- Wikipedia:Link rot – a guideline on preventing link rot
- Wikipedia:You don't need to cite that the sky is blue – an essay advising: don't cite information that is already obvious
- Wikipedia:You do need to cite that the sky is blue – an essay advising: just because something is obvious to you doesn't mean it's obvious to everyone
- Wikipedia:Video links – an essay discussing citations that link to YouTube and other user-submitted video sites
- Wikipedia:WikiProject Citation cleanup – a group of people devoted to cleaning up citations
- Wikipedia:Reference databases – an essay/proposal
Changing citation style formats
WP:CITEVAR
Notes
Words like citation and reference are used interchangeably on the English Wikipedia. On talk pages, where the language can be more informal, and in edit summaries or templates where space is a consideration, reference is often abbreviated to ref, with the plural refs. A footnote may refer specifically to a citation or to explanatory text using the ref-tag format; an endnote refers specifically to a citation placed at the end of the page. See also: Wikipedia:Glossary.
For more details on why scrolling reference lists should not be used, see the July 2007 discussion.
The Arbitration Committee ruled in 2006: "Wikipedia does not mandate styles in many different areas; these include (but are not limited to) American vs. British spelling, date formats, and citation style. Where Wikipedia does not mandate a specific style, editors should not attempt to convert Wikipedia to their own preferred style, nor should they edit articles for the sole purpose of converting them to their preferred style, or removing examples of, or references to, styles which they dislike."
Further reading
- "Online style guides". New Hart's Rules: The Oxford Style Guide. Oxford, UK: Oxford University Press. 2016. ISBN 978-0198767251.
- The Chicago Manual of Style (17th ed.). Chicago: University of Chicago Press. 2017. ISBN 978-0226287058.
- "Academic writing: citing sources". Writers Workshop. University of Illinois.
- "Citation style guides and management tools". LibGuides.
- "Citing: help and how-to". Concordia University Library.
- "Citation help". Subject guides. University of Iowa.
- "Citation style guides". Journalism resources. University of Iowa.
- "Library: citing sources and citation generators". Capital Community College.
- "Research and citation resources". Online Writing Lab. Purdue University.
- "The Writer's Handbook: documentation". The Writing Center. University of Wisconsin–Madison.
- "ACS style guide". Research guides. University of Wisconsin–Madison.
- "Sample formatted references for authors of journal articles". MEDLINE and PubMed: resources guide. U.S. National Library of Medicine. Retrieved 26 April 2018.
External links
Wikimedia Commons has media related to Citation needed.
- "reFill". Toolforge. WP:ReFill. – a semi-automated tool to expand bare references
- Wikipedia Editing Basics: Citing Sources (Part 1) (YouTube). Wikimedia Foundation.
- Wikipedia Editing Basics: Citing Sources (Part 2) (YouTube). Wikimedia Foundation.
Reviewer #2 (Public Review):
Most neuronal computations require keeping track of the inputs over temporal windows that exceed the typical time scales of single neurons. A standard and relatively well-understood way of obtaining time scales longer than those of the "microscopic" elements (here, the single neurons) is to have appropriate recurrent synaptic connectivity. Another possibility is to have a transient, input-dependent modulation of some neuronal and/or synaptic properties, with the appropriate time scale. Indeed, there is ample experimental evidence that both neurons and synapses modify their dynamics on multiple time scales, depending on the previous history of activation. There is, however, little understanding of the computational implications of these modifications, in particular for short-term memory.
Here, the authors have investigated the suitability of a class of transient synaptic modulations for storing and processing information over short time scales. They use a purely feed-forward network architecture so that "synaptic modulation" is the only mechanism available for temporarily storing the information. The network is called the Multi-Plasticity Network (MPN), in reference to the fact that the synaptic connectivity being transiently modulated is adjusted via standard supervised learning. They find that, in a series of integration-based tasks of varying difficulty, the MPN exhibits performances that are comparable with those of (trained) recurrent neuronal networks (RNNs). Interestingly, the MPN consistently outperforms the RNNs when only the read-out is being learned, that is, in a minimal-training condition.
The conclusions of the paper are convincingly supported by the careful numerical experiments and the analysis performed by the authors, mostly to compare the performances of the MPN against various RNN architectures. The results are intriguing from a "classic" neuroscience perspective, providing a computational point of view to rationalize the various synaptic dynamics observed experimentally on largely different time scales, and are of certain interest to the machine learning community.
On the other hand, the general principle appears (perhaps naively) very general: any stimulus-dependent, sufficiently long-lived change in neuronal/synaptic properties is a potential memory buffer. For instance, one might wonder whether some non-associative form of synaptic plasticity (unlike the Hebbian-like form studied in the paper), such as short-term synaptic plasticity which depends only on the pre-synaptic activity (and is better motivated experimentally), would be equally effective. Or, for that matter, one might wonder whether just neuronal adaptation, in the hidden layer, for instance, would be sufficient. In this sense, a weakness of this work is that there is little attempt at understanding when and how the proposed mechanism fails.
“capitalism presented the best chances for the preservation of individual freedom and creative leadership in a bureaucratic world”
It's more like: capitalism is a room with a small window that lets in light, and socialism is a room with no windows at all.
The version of the target host must be equal to or later than the version of the source host:
the target host must support the configuration version of the replica VM.
The source and target Microsoft Hyper-V hosts that you select for the replication process must be compatible. For more information, see Supported Platforms for VM Replica Types.
I think the better option is to add a link to Microsoft's documentation about the VM configuration versions supported by different Hyper-V versions, or to add the table from this article and update it with every release.
Supported VM configuration versions for long-term servicing hosts
The following table lists the VM configuration versions for hosts running a long-term servicing version of Windows.
| Hyper-V host                     | 10.0 | 9.3 | 9.2 | 9.1 | 9.0 | 8.3 | 8.2 | 8.1 | 8.0 | 7.1 | 7.0 | 6.2 | 5.0 |
|----------------------------------|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Windows Server 2022              | ✔    | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✖   | ✖   | ✖   | ✖   |
| Windows 10 Enterprise LTSC 2021  | ✖    | ✖   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✖   | ✖   | ✖   | ✖   |
| Windows Server 2019              | ✖    | ✖   | ✖   | ✖   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   |
| Windows 10 Enterprise LTSC 2019  | ✖    | ✖   | ✖   | ✖   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   | ✔   |
| Windows Server 2016              | ✖    | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✔   | ✔   | ✔   | ✔   | ✔   |
| Windows 10 Enterprise 2016 LTSB  | ✖    | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✔   | ✔   | ✔   | ✔   | ✔   |
| Windows 10 Enterprise 2015 LTSB  | ✖    | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✔   | ✔   |
| Windows Server 2012 R2           | ✖    | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✔   |
| Windows 8.1                      | ✖    | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✖   | ✔   |
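The compatibility rule discussed here can be sketched as a lookup against the table above. This is only an illustrative helper, not an official Microsoft or Veeam API; the function name is invented, and only a subset of the table's rows is transcribed:

```python
# Supported VM configuration versions per Hyper-V host, transcribed from the
# table above (server rows only, for brevity).
SUPPORTED_CONFIG_VERSIONS = {
    "Windows Server 2022": {"10.0", "9.3", "9.2", "9.1", "9.0", "8.3", "8.2", "8.1", "8.0"},
    "Windows Server 2019": {"9.0", "8.3", "8.2", "8.1", "8.0", "7.1", "7.0", "6.2", "5.0"},
    "Windows Server 2016": {"8.0", "7.1", "7.0", "6.2", "5.0"},
    "Windows Server 2012 R2": {"5.0"},
}

def can_replicate(replica_vm_version: str, target_host: str) -> bool:
    """Return True if the target host supports the replica VM's configuration version."""
    return replica_vm_version in SUPPORTED_CONFIG_VERSIONS.get(target_host, set())
```

For example, a replica VM at configuration version 10.0 cannot land on a Windows Server 2019 host, while version 9.0 is accepted by both 2019 and 2022 hosts, which matches the "equal to or later" rule stated above.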
Replica from backup is not supported if you replicate data between Windows Server Hyper-V 2008 R2 hosts.
Not relevant: Windows Server 2008 R2 is no longer supported as a Hyper-V host.
Author Response
Reviewer #1 (Public Review):
The article from Dumoux et al. shows the use of plasma-based focused ion beams for volume imaging on cryo-preserved samples. This exciting application can potentially increase the throughput and quality of the data acquired through serial FIB-SEM tomography on cryo-preserved and unstained biological samples. The article is well-written, and it is easy to follow. I like the structure and the experimental description, but I miss some points in the analyses, without which the conclusions are not adequately supported.
The authors state the following: "the application of serial FIB/SEM imaging of non-stained cryogenic biological samples is limited due to low contrast, curtaining, and charging artefacts. We address these challenges using a cryogenic plasma FIB/SEM (cryo-pFIB/SEM)".
Reading the article, I do not find that the challenges are addressed; it appears that some of these are evaluated when the samples are prepared using plasma-based beams. To support the fact that charging, contrast, and curtaining are addressed, a comparison should be made with the current state of the art, or it is otherwise impossible to determine whether these systems bring any advantage.
Charging is an issue that is not described in detail, nor has it been adequately analysed. The effect of using plasma beams is independent of the presented algorithm for charging suppression, which is purely image processing based, although very interesting. Given that the focus of the work is on introducing the benefit of using plasma ion beams (from the title) and given that a great deal of data is presented on the effect of the multiple ion sources, one would expect to have comparable images acquired after the surfaces have been prepared with the different beams. This should also be compared against the current state-of-the-art (gallium) to provide a baseline for different beams' benefits. I realise that this requires access to another microscope and that this also imposes controls on the detector responses on each instrument to have a normalised analysis. Still, it also provides the opportunity to quantify the benefits of each instrumentation.
We have provided a response to the charging comments outlined here in the main rebuttal above. The SEM used in this study was selected for its optimal performance at low electron voltages due to its immersion field. The low-kV capability is of particular interest for charging (crossover energy). It is possible that the interaction of the sample surface with chemically inert or reactive ion species could change the surface potential (either positively or negatively). However, Vero cells imaged during serial pFIB/SEM with nitrogen plasma still exhibit charging, as they do with the argon plasma we routinely used, suggesting that charging is independent of the ion beam.
Regarding Gallium, this would require prolonged access to another very bespoke microscope for a like-for-like comparison, and indeed there are studies (e.g. Schertel et al. 2013 and Scher et al, 2021) that show SEM data of cryogenic sample surfaces milled with gallium. Therefore, we consider such a study outside of the scope of this manuscript.
The curtaining scores. This is a good way to explain the problem, though a few aspects need to be validated. For example, curtains appear over time when milling, and it would be useful to understand how different sources behave over time in FIB/SEM tomography sessions. The score is currently done from individual windows milled, which gives a good indication of the performance. However, it would make sense to check that the behaviour remains identical in an imaging setting and with the moving milling windows (or lines). This will show the counteracting effect to the redeposition and etching effect reported when imaging with the E-beam the milled face.
Please see our response in the main rebuttal points.
No detail about the milling resolution has been reported. Since different currents and beams have different cross-sections, it is expected to affect the z-resolution achievable during an imaging session. It would be useful to have a description of the beam cross-sections at the various conditions used and how or whether these interfere with the preparation.
Please see our response in the main rebuttal points.
Contrast. No analysis of plasma FIBs' benefits on image contrast compared to the current state of the art has been provided. Measuring contrast is complex, especially when this value can change in response to the detector settings. Still, attempts can be made to quantify it through the FRC and through the analysis of the image MTF (amplitude and fall off), given that membranes are the only most prominent and visible features in cryoFIB/SEM images of biological samples.
We agree that measuring contrast is complex, and therefore the following parameters as stated on page 6, line 6 to 7 were kept consistent throughout data collection: voltage, current, line integration, exposure, detectors voltage offset and gain. We also decided to keep constant or vary the working distance (focus) in Figure 4 and compared the FRC as well as the contrast. As discussed above, a like-for-like comparison with the state of the art (gallium) is not currently possible, making this experiment/analysis outside the scope of this manuscript.
Figure S4 points out that electrons that hit the sample at normal incidence give better signal/contrast or imaging quality than when the sample is imaged at a tilt. This fact is expected to significantly affect large areas as the collection efficiency will vary across the sample, particularly as regions get further away from the optimal location. The dynamic focusing option available on all SEM will compensate for the focal change but not the collection efficiency. Even though this is a fact, the authors show a loss of resolution, which is not explained by the tilt itself. In particular, the generation of secondary electrons is known to increase with the increased tilt, and to consider that the curtains (that are the prominent feature on the surface) are running along the tilt direction, it would be expected to see no contrast difference between the background and the edge of each curtain as the generation of secondary electrons will increase with tilt for both the edges and the background. Therefore, the contrast should be invariant, at least on the curtains.
Looking at the images presented in the figure, they appear astigmatic and not properly focused when imaged at a tilt. As evidence of this claim, the cellular features do not measure the same, and the sharpness of the edge of the curtains is gone when tilted. This experience comes from improper astigmatism correction, which in turn, in scanning systems, leads to the impossibility of focusing. The tilt correction provides not only dynamic focusing but also corrects for the anisotropy in the sampling due to the tilt. If all imaging is set up correctly, the two images should show the imaged features with the exact sizes regardless of the resolution (which, in the presented case, is sufficient), and the sharpness of the curtain edges should be invariant regardless of the tilt, at least while or where in focus. Only at that point, the comparison will be fair.
Please see our response in the main rebuttal points.
Finally, the resolution measurements presented in the last supplementary figures have no impact or relation to the use of plasma FIB/SEM. It is an effect related to the imaging conditions used in the SEM regardless of the ion beam nature. The distribution of the resolution within images appears predominantly linked to local charging and the local sample composition (from fig8). Given the focus is aimed at introducing or presenting the use of the plasma-based beams the results should be presented in that optic in mind with a comparison between beams.
This figure is intended to show the absence of degradation in image quality over the dataset. As the stage moves during imaging at 90°, focus could in principle be lost over a longer data acquisition session. However, this figure demonstrates that focus is well maintained throughout the acquisition. We also considered potential beam-damage accumulation, which does not appear to be detectable with our method.
Reviewer #2 (Public Review):
The authors present a manuscript highlighting recent advancements in cryo-focused ion beam/scanning electron microscopy (cryo-FIB) using plasma ion sources as an alternative to positively-charged gallium sources for cryo-FIB milling and volumetric SEM (cryo-FIB/SEM) imaging. The authors benchmark several sources of plasma and determine argon gas is the most suitable source for reducing undesirable curtaining effects during milling. The authors demonstrate that milling with an argon source enables volumetric imaging of vitrified cells and tissue with sufficient contrast to glean biological insight into the spatial localization of organelles and large macromolecular complexes in both vitrified human cells and in high-pressure frozen mouse brain tissue slices. The authors also show that altering the sample angle from 52 to 90 degrees relative to the SEM beam enhances the contrast and resolution of biological features imaged within the vitrified samples. Importantly, the authors also demonstrate that serial milling with argon and nitrogen plasma sources does not appear to significantly affect the resolution of the SEM images, suggesting that resolution does not vary over an acquisition series. Finally, the authors test and apply a neural network-based approach for mitigating image artifacts caused by charging due to SEM imaging of biological features with high lipid content, such as lipid droplets in yeast, thereby increasing the clarity and interpretability of images of samples susceptible to charging.
Strengths and Weaknesses:
The authors do a fantastic job demonstrating the utility of plasma sources for increased contrast of biological features for cryo-FIB/SEM images. However, they do not specifically address the lingering question of whether or not it is possible to use this plasma source cryo-FIB/SEM volumetric imaging for the specific application of localizing features for downstream cryo-ET imaging and structural analyses. As a reader, I was left wondering whether this technique is ideally suited solely for volumetric imaging of cryogenic samples, or if it can be incorporated as a step in the cellular cryo-ET workflow for localization and perhaps structure determination. Another biorxiv paper (doi.org/10.1101/2022.08.01.502333) from the same group establishes a plasma cryo-FIB milling workflow to generate lamella of sufficient quality to elucidate sub-nanometer reconstructions of cellular ribosomes. However, I anticipate the real impact on the field will be from the synergistic benefits of combining both approaches of volumetric cryo-FIB/SEM imaging to localize regions of interest and cryo-ET imaging for high-resolution structural analyses.
Additional experiments were undertaken to demonstrate that serial cryo pFIB/SEM can be used in a variety of correlative imaging workflows, including follow-on cryo-ET. However, we have yet to carefully determine the consequences of such imaging modalities for downstream high-spatial-frequency information, e.g., for subvolume averaging. The role of SEM imaging, ion beam damage, etc. has yet to be analysed or optimised in detail. This work is outside the scope of this manuscript.
Another weakness is the lack of demonstration that the contrast gained from plasma cryo-FIB/SEM is sufficient to apply neural network-based approaches for automated segmentation of biological features. The ability to image vitrified samples with enhanced contrast is huge, but our interpretation of these reconstructions is still fundamentally limited in our ability to efficiently analyze subcellular architecture.
We have demonstrated that segmentation of subcellular features, such as mitochondria, within a serial pFIB/SEM dataset of heart tissue can be automated using SuRVoS2, a neural-network-based automated segmentation tool. These comparisons are included in an additional figure (Figure 11).
Reviewer #1 (Public Review):
The article from Dumoux et al. shows the use of plasma-based focused ion beams for volume imaging on cryo-preserved samples. This exciting application can potentially increase the throughput and quality of the data acquired through serial FIB-SEM tomography on cryo-preserved and unstained biological samples. The article is well-written, and it is easy to follow. I like the structure and the experimental description, but I miss some points in the analyses, without which the conclusions are not adequately supported.
The authors state the following:<br /> "the application of serial FIB/SEM imaging of non-stained cryogenic biological samples is limited due to low contrast, curtaining, and charging artefacts. We address these challenges using a cryogenic plasma FIB/SEM (cryo-pFIB/SEM)".<br /> Reading the article, I do not find that the challenges are addressed; it appears that some of these are evaluated when the samples are prepared using plasma-based beams. To support the fact that charging, contrast, and curtaining are addressed, a comparison should be made with the current state of the art, or it is otherwise impossible to determine whether these systems bring any advantage.
Charging is an issue that is not described in detail, nor has it been adequately analysed. The effect of using plasma beams is independent of the presented algorithm for charging suppression, which is purely image processing based, although very interesting. Given that the focus of the work is on introducing the benefit of using plasma ion beams (from the title) and given that a great deal of data is presented on the effect of the multiple ion sources, one would expect to have comparable images acquired after the surfaces have been prepared with the different beams. This should also be compared against the current state-of-the-art (gallium) to provide a baseline for different beams' benefits. I realise that this requires access to another microscope and that this also imposes controls on the detector responses on each instrument to have a normalised analysis. Still, it also provides the opportunity to quantify the benefits of each instrumentation.
The curtaining scores. This is a good way to explain the problem, though a few aspects need to be validated. For example, curtains appear over time when milling, and it would be useful to understand how different sources behave over time in FIB/SEM tomography sessions. The score is currently done from individual windows milled, which gives a good indication of the performance. However, it would make sense to check that the behaviour remains identical in an imaging setting and with the moving milling windows (or lines). This will show the counteracting effect to the redeposition and etching effect reported when imaging with the E-beam the milled face.
No detail about the milling resolution has been reported. Since different currents and beams have different cross-sections, it is expected to affect the z-resolution achievable during an imaging session. It would be useful to have a description of the beam cross-sections at the various conditions used and how or whether these interfere with the preparation.
Contrast. No analysis of plasma FIBs' benefits on image contrast compared to the current state of the art has been provided. Measuring contrast is complex, especially when this value can change in response to the detector settings. Still, attempts can be made to quantify it through the FRC and through the analysis of the image MTF (amplitude and fall off), given that membranes are the only most prominent and visible features in cryoFIB/SEM images of biological samples.
Figure S4 points out that electrons that hit the sample at normal incidence give better signal/contrast or imaging quality than when the sample is imaged at a tilt. This fact is expected to significantly affect large areas as the collection efficiency will vary across the sample, particularly as regions get further away from the optimal location. The dynamic focusing option available on all SEM will compensate for the focal change but not the collection efficiency. Even though this is a fact, the authors show a loss of resolution, which is not explained by the tilt itself. In particular, the generation of secondary electrons is known to increase with the increased tilt, and to consider that the curtains (that are the prominent feature on the surface) are running along the tilt direction, it would be expected to see no contrast difference between the background and the edge of each curtain as the generation of secondary electrons will increase with tilt for both the edges and the background. Therefore, the contrast should be invariant, at least on the curtains.
Looking at the images presented in the figure, they appear astigmatic and not properly focused when imaged at a tilt. As evidence of this claim, the cellular features do not measure the same, and the sharpness of the edge of the curtains is gone when tilted. This experience comes from improper astigmatism correction, which in turn, in scanning systems, leads to the impossibility of focusing. The tilt correction provides not only dynamic focusing but also corrects for the anisotropy in the sampling due to the tilt. If all imaging is set up correctly, the two images should show the imaged features with the exact sizes regardless of the resolution (which, in the presented case, is sufficient), and the sharpness of the curtain edges should be invariant regardless of the tilt, at least while or where in focus. Only at that point, the comparison will be fair.
Finally, the resolution measurements presented in the last supplementary figures have no impact on, or relation to, the use of plasma FIB/SEM. They reflect the imaging conditions used in the SEM regardless of the nature of the ion beam. The distribution of resolution within images appears predominantly linked to local charging and the local sample composition (from Fig. 8). Given that the focus is on introducing or presenting the use of plasma-based beams, the results should be presented with that perspective in mind, with a comparison between beams.
I use ShareX on Windows and I don't think there's any better tool, so I searched for "run sharex on linux" and there is indeed a guide - https://github.com/ShareX/ShareX/issues/6531 - maybe you can get it to work? I believe it can do all of the things you want: certainly area capture, remembered area capture, and fullscreen capture, all bound to different hotkeys. Mine saves with the name set to the timestamp, but you can probably configure it to use an incrementing index. It's incredibly full-featured. I also have hotkeys for "capture current pixel's hex code" and "measure bounded box in pixels." When you take a capture you can also annotate it, including showing labeled steps. After capture you can do one or more of: save locally (to one or more places), upload (to one or more hosts), copy to clipboard, etc. That includes pastebin if you have text saved to your clipboard, so I use this for that also.
ShareX is indeed the only excellent screenshot tool of its kind.
Women lined the rooftop and windows of the ten-story building and jumped, landing in a “mangled, bloody pulp.”
This must have been a terrible sight. It reminds me of the people who were trapped in the World Trade Center towers on 9/11.
A dozen times in the course of the day Maria and her mother opened the window to feel the softness of the air
Definitely can relate - favorite time of the year, when you can open the windows of the house and feel the wonderful breeze
In one of my other classes we are learning about stories being a mirror or a window. A lot of the time they're windows, but there need to be more stories that provide mirrors, so that students can see stories that represent them, not just hear stories about representation for other people.
At the end of the poem, Dickinson says “the windows failed - and then I could not see to see”, referring to closing her eyes when she died. Through personification, she compares the windows ‘failing’ to the closing of her eyes in death.
There interposed a Fly - With Blue - uncertain - stumbling Buzz - Between the light - and me - And then the Windows failed - and then I could not see to see -
After reading this part again and again, I feel like the speaker is unable to move on, as if some last bit of unfinished business is keeping them still in the casket with their body. Or it could possibly be a reference to how brittle the path to heaven is: one wrong mistake and you could be rejected. It's most likely the first rather than the second.
How to use DISM command tool to repair Windows 10 image
These steps are mostly the same for Windows 11 too. One just needs to download the Windows 11 Image from Microsoft and use that instead of the Windows 10 one.
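For reference, the usual repair sequence (as commonly documented for Windows 10/11; run from an elevated Command Prompt) looks like this:

```shell
:: Check, scan, then repair the Windows component store with DISM,
:: and finish with a System File Checker pass.
DISM /Online /Cleanup-Image /CheckHealth
DISM /Online /Cleanup-Image /ScanHealth
DISM /Online /Cleanup-Image /RestoreHealth
sfc /scannow
```

`RestoreHealth` downloads replacement files from Windows Update by default; an offline image can be supplied with `/Source:` if the machine has no connectivity.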
"If our intelligence is so effective, then why did the plan for the special military operation fail completely?" he asked during the show's segment. "And there was no plan B at all. Does this indicate that our intelligence is effective or very ineffective?"
This is quite a damning indictment of the Russian war effort, from a Russian state TV channel no less. I wonder if this man will be unexpectedly falling from any fifth-story windows in the near future. That seems to be a common fate for Russian journalists and political dissidents.
G Suite (suite of tools for productivity and collaboration). Instruction: Step-by-step text-based instruction. Separate instruction for Windows, iOS, Android. Interface: Concise layout with rich information. Icons with a
G Suite is an awesome tool to use. My school uses Google Docs, Google Sheets, and others when trying to get a lot of people's ideas onto one document. It works on virtually all devices, especially since all you need is internet access. It is also free, which is a huge factor for schools, since funding can be limited. It is much easier to share and collaborate with than Microsoft Word or Excel.
Windows client: a UWP app for RSS tracking
Supports the Fever API
Session - Pomodoro Timer Alternatives Session - Pomodoro Timer is described as 'Session helps you be more productive by guiding you to work in sessions, track your time, and remind you to rest' and is a Pomodoro Timer in the office & productivity category. There are more than 25 alternatives to Session - Pomodoro Timer for a variety of platforms, including Mac, Windows, Linux, Online / Web-based and Android. The best alternative is Super Productivity, which is both free and Open Source. Other great apps like Session - Pomodoro Timer are Gnome Pomodoro, YAPA-2, Pomofocus and Pomotodo. Session - Pomodoro Timer alternatives are mainly Pomodoro Timers but may also be Task Management Tools or Todo List Managers. Filter by these if you want a narrower list of alternatives or are looking for a specific functionality of Session - Pomodoro Timer.
A Session alternative
TextLocator is an open-source full-text search tool for quickly finding text content in local files. It supports common formats such as Word, Excel, PPT, PDF, and TXT, as well as archives and code files.
Worth a try!
To me the World Wide Web is an unfortunate presence
= World Wide Web - deal with it Inland Revenue Service - unfortunate presence - all-right for shop windows
native compilation support
emacs: build with native compilation on windows
relative time windows
interesting idea - not relevant to our study
protesters direct their anger at buildings that represent a political system that continues to dehumanize Black bodies by placing more interest in buildings and corporations than in equity and social justice.
One memory I have from the early stages of the pandemic is seeing news and media outlets describing the destruction of physical property during "riots" related to BLM. I always thought that it was unsettling how much media attention things like broken windows, shoplifting, etc. garnered, specifically because these inanimate objects representing the capitalist society we live in were being mourned and advocated for more than the lives of the Black and Brown people who were literally dying in the streets due to police brutality. So, I am interested in the fact that this is directly referenced in this timeline. What are the implications of the systemic dehumanization of Black and Brown bodies, and of the value placed on buildings and spaces, for the U.S. education system? How might this relate to the ways in which we (collectively) tend to view schools in terms of their physicality rather than the students who make them up? I would be curious to explore this idea more.
scaffolding by providing links between academic concepts and the experiences that are familiar to students. In addition to providing “mirrors” reflecting students' familiar world, teachers provide “windows” into the history, traditions, and experiences of other cultures and groups
What a great way to get students engaged and interested in what they're learning about
“windows” into the cultural heritage and experiences of others
I like this window analogy, too! Seeing other cultures as "normal" will allow students to empathize and better understand those around them who may seem "different."
Author Response:
We would like to thank both reviewers and editors for their time and effort in reviewing our work, and the thoughtful suggestions made.
Reviewer #1 (Public Review):
[…] The experiments are well-designed and carefully conducted. The conclusions of this work are in general well supported by the data. There are a couple of points that need to be addressed or tested.
1) It is unclear how LC phasic stimulation used in this study gates cortical plasticity without altering cellular responses (at least at the calcium imaging level). As the authors mentioned that Polack et al 2013 showed a significant effect of NE blockers on membrane potential and firing rate in V1 layer 2/3 neurons during locomotion, it would be useful to test the effect of LC silencing (coupled to mismatch training) on both cellular response and cortical plasticity, or to apply NE antagonists in V1 in addition to LC optical stimulation. The latter experiment will also address which neuromodulator mediates plasticity, given that LC could co-release other modulators such as dopamine (Takeuchi et al. 2016 and Kempadoo et al. 2016). An LC silencing experiment would establish a causal effect more convincingly than the activation experiment.
Regarding the question of how phasic stimulation could alter plasticity without affecting the response sizes or activity in general, we believe there are possibilities supported by previous literature. It has been shown that catecholamines can gate plasticity by acting on eligibility traces at synapses (He et al., 2015; Hong et al., 2022). In addition, all catecholamine receptors are metabotropic and influence intracellular signaling cascades, e.g., via adenylyl cyclase and phospholipases. Catecholamines can gate LTP and LTD via these signaling pathways in vitro (Seol et al., 2007). Both of these influences on plasticity at the molecular level do not necessitate or predict an effect on calcium activity levels. We will expand on this in the discussion of the revised manuscript.
While a loss of function experiment could add additional corroborating evidence that LC output is required for the plasticity seen, we did not perform loss-of-function experiments for three reasons:
The effects of artificial activity changes around the physiological set point are likely not linear for increases and decreases. The problem with a loss-of-function experiment here is that neuromodulators like noradrenaline affect general aspects of neuronal function. This is apparent in Polack et al., 2013: during the pharmacological blocking experiment, the membrane hyperpolarizes, membrane variance becomes very low, and the cells are effectively silenced (Figure 7 of (Polack et al., 2013)), demonstrating an immediate impact on neuronal function when noradrenaline receptor activation is presumably taken below physiological/waking levels. In light of this, if we reduce LC output/noradrenergic receptor activation and find that plasticity is prevented, this could be the result of a direct influence on the plasticity process, or the result of a disruption of another aspect of neuronal function, like synaptic transmission or spiking. We would therefore challenge the reviewer's statement that a loss-of-function experiment would establish a causal effect more convincingly than the gain-of-function experiment that we performed.
The loss-of-function experiment is technically more difficult both in implementation and interpretation. Control mice show no sign of plasticity in locomotion modulation index (LMI) on the 10-minute timescale (Figure 4J), thus we would not expect to see any effect when blocking plasticity in this experiment. We would need to use dark-rearing and coupled training of mice in the VR across development to elicit the relevant plasticity ((Attinger et al., 2017); manuscript Figure 5). We would then need to silence LC activity across days of VR experience to prevent the expected physiological levels of plasticity. Applying NE antagonists in V1 over the entire period of development seems very difficult. This would leave optogenetically silencing axons locally, which, in addition to the problems of doing this acutely (Mahn et al., 2016; Raimondo et al., 2012), has not been demonstrated to work chronically over the duration of weeks. Thus, a negative result in this experiment would be difficult to interpret, and likely uninformative: we would not be able to distinguish whether the experimental approach did not work or whether local LC silencing does nothing to plasticity.
Note that pharmacologically blocking noradrenaline receptors during LC stimulation in the plasticity experiment is also particularly challenging: they would need to be blocked throughout the entire 15-minute duration of the experiment with no changes in the concentration of antagonist between the ‘before’ and ‘after’ phases, since the block itself is likely to affect the response size, as seen in Polack et al., 2013, creating a confound for plasticity-related changes in response size. Thus, we make no claim about which particular neuromodulator released by the LC is causing the plasticity.
There are several loss-of-function experiments reported in the literature using different developmental plasticity paradigms alongside pharmacological or genetic knockout approaches. These experiments show that chronic suppression of noradrenergic receptor activity prevents ocular dominance plasticity and auditory plasticity (Kasamatsu and Pettigrew, 1976; Shepard et al., 2015). Almost absent from the literature, however, are convincing gain-of-function plasticity experiments.
Overall, we feel that loss-of-function experiments may be a possible direction for future work but, given the technical difficulty and – in our opinion – limited benefit that these experiments would provide in light of the evidence already presented for the claims we make, we have chosen not to perform them at this time. Note that we already discuss some of the problems with loss-of-function experiments in the discussion.
2) The cortical responses to NE often exhibit an inverted U-curve, with higher or lower doses of NE showing more inhibitory effects. It is unclear how responses induced by optical LC stimulation compare or interact with the physiological activation of the LC during the mismatch. Since the authors only used one frequency stimulation pattern, some discussion or additional tests with a frequency range would be helpful.
This is correct: we do not know how the artificial activation of LC axons relates to physiological activation, e.g. under mismatch. The stimulation strength is intrinsically consistent in our study in the sense that the stimulation level used to test for changes in neuronal activity is similar to that used to probe for plasticity effects. We suspect that the artificial activation results in much stronger LC activity than seen during mismatch responses, given that no sign of the plasticity in LMI seen in high ChrimsonR mice occurs in low ChrimsonR or control mice (Figure 4J). Note that our conclusions do not rely on the assumption that the stimulation is matched to physiological levels of activation during the visuomotor mismatches that we assayed. The hypothesis that we put forward is that increasing levels of activation of the LC (reflecting increasing rates or amplitudes of prediction errors across the brain) will result in increased levels of plasticity. We know that LC axons can reach levels of activity far higher than those seen during visuomotor mismatches, for instance during air puff responses, which constitute a form of positive prediction error (unexpected tactile input) (Figures 2C and S1C). The visuomotor mismatches used in this study were only used to demonstrate that LC activity is consistent with prediction error signaling. We will expand on these points in the discussion as suggested.
Reviewer #2 (Public Review):
[…] The study provides very compelling data on a timely and fascinating topic in neuroscience. The authors carefully designed experiments and corresponding controls to exclude any confounding factors in the interpretation of neuronal activity in LC axons and cortical neurons. The quality of the data and the rigor of the analysis are important strengths of the study. I believe this study will have an important contribution to the field of system neuroscience by shedding new light on the role of a key neuromodulator. The results provide strong support for the claims of the study. However, I also believe that some results could have been strengthened by providing additional analyses and experimental controls. These points are discussed below.
Calcium signals in LC axons tend to respond with pupil dilation, air puffs, and locomotion as the authors reported. A more quantitative analysis such as a GLM model could help understand the relative contribution (and temporal relationship) of these variables in explaining calcium signals. This could also help compare signals obtained in the sensory and motor cortical domains. Indeed, the comparison in Figure 2 seems a bit incomplete since only "posterior versus anterior" comparisons have been performed and not within-group comparisons. I believe it is hard to properly assess differences or similarities between calcium signal amplitude measured in different mice and cranial windows as they are subject to important variability (caused by different levels of viral expression for instance). The authors should at the very least provide a full statistical comparison between/within groups through a GLM model that would provide a more systematic quantification.
We will implement an improved analysis in the revised version of the manuscript.
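As a sketch of the kind of GLM the reviewer proposes (entirely synthetic data and made-up weights, purely to illustrate the analysis, not the study's actual pipeline), one can regress the axonal calcium signal on the behavioural regressors and read off their relative contributions:

```python
import numpy as np

# Synthetic behavioural regressors standing in for the real measurements:
# pupil diameter, air-puff event train, and locomotion speed.
rng = np.random.default_rng(0)
n = 1000
pupil = rng.standard_normal(n)
airpuff = (rng.random(n) < 0.02).astype(float)
speed = np.abs(rng.standard_normal(n))

# Synthetic "calcium signal" built from known (made-up) weights plus noise
calcium = 0.8 * pupil + 1.5 * airpuff + 0.3 * speed + 0.1 * rng.standard_normal(n)

# Ordinary least-squares GLM with an intercept column
X = np.column_stack([np.ones(n), pupil, airpuff, speed])
beta, *_ = np.linalg.lstsq(X, calcium, rcond=None)
# beta recovers weights close to (0, 0.8, 1.5, 0.3)
```

In practice one would also include temporally shifted copies of each regressor to capture the calcium-indicator kernel, and compare nested models to quantify each variable's unique contribution.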
Previous studies using stimulation of the locus coeruleus or local iontophoresis of norepinephrine in sensory cortices have shown robust response modulations (see McBurney-Lin et al., 2019, https://doi.org/10.1016/j.neubiorev.2019.06.009 for a review). The weak modulations observed in this study seem at odds with these reports. Given that the density of ChrimsonR-expressing axons varies across mice and that there are no direct measurements of their activation (besides pupil dilation), it is difficult to appreciate how they impact the local network. How does the density of ChrimsonR-expressing axons compare to the actual density of LC axons in V1? The authors could further discuss this point.
In terms of estimating the percentage of cortical axons labelled based on our axon density measurements: we refer to cortical LC axonal immunostaining in the literature to make this comparison. In motor cortex, an average axon density of 0.07 µm/µm2 has been reported (Yin et al., 2021), and 0.09 µm/µm2 in prefrontal cortex (Sakakibara et al., 2021). Density of LC axons varies by cortical area, with higher density in motor cortex and medial areas than sensory areas (Agster et al., 2013): V1 axon density is roughly 70% of that in cingulate cortex (adjacent to motor and prefrontal cortices) (Nomura et al., 2014). So, we approximate a maximum average axon density in V1 of approximately 0.056 µm/µm2. Because these published measurements were made from images taken of tissue volumes with larger z-depth (~ 10 µm) than our reported measurements (~ 1 µm), they appear much larger than the ranges reported in our manuscript (0.002 to 0.007 µm/µm2). We repeated the measurements in our data using images of volumes with 10 µm z-depth, and find that the axon density in high ChrimsonR-expressing mice ranges from 0.012 to 0.039 µm/µm2. This corresponds to between 20% and 70% of the density we would expect based on previous work. Note that this is a potentially significant underestimate, and should therefore be used as a lower bound: analyses in the literature use images from immunostaining, where the signal-to-background ratio is very high. In contrast, we did not transcardially perfuse our mice, leading to significant background (especially in the pia/L1, where axon density is high - (Agster et al., 2013; Nomura et al., 2014)), and the intensity of the tdTomato is not especially high. We are therefore likely missing some narrow, dim, and superficial fibers in our analysis.
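The arithmetic above can be reproduced directly (numbers taken from the paragraph; the averaging of the motor and prefrontal values as a proxy for cingulate density is our reading of the estimate, so treat this as an illustrative back-of-the-envelope check only):

```python
# Axon densities (µm of axon per µm² of image) as cited above.
motor_density = 0.07   # Yin et al., 2021 (motor cortex)
pfc_density = 0.09     # Sakakibara et al., 2021 (prefrontal cortex)
v1_fraction = 0.70     # V1 ≈ 70% of cingulate density (Nomura et al., 2014)

# Expected maximum average LC axon density in V1
expected_v1 = v1_fraction * (motor_density + pfc_density) / 2
print(f"{expected_v1:.3f}")  # 0.056

# Range measured in this study at 10 µm z-depth, as a fraction of expected
measured_low, measured_high = 0.012, 0.039
print(f"{measured_low / expected_v1:.0%} to {measured_high / expected_v1:.0%}")
# 21% to 70% (quoted as roughly 20% to 70% above)
```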
We also can quantify how our variance in axonal labelling affects our results: For the dataset in Figure 3, there doesn’t appear to be any correlation between the level of expression and the effect of stimulating the axons on the mismatch or visual flow responses for each animal (Figure R1: https://imgur.com/gallery/Yl60hnT), while there is a significant correlation between the level of expression and the pupil dilation, consistent with the dataset shown in Figure 4. Thus, even in the most highly expressing mice, there is no clear effect on average response size at the level of the population. We will add these correlations to the revised manuscript.
To our knowledge, there has not yet been any similar experiment reported utilizing local LC axonal optogenetic stimulation while recording cortical responses, so when comparing our results to those in the literature, there are several important methodological differences to keep in mind. The vast majority of the work demonstrating an effect of LC output/noradrenaline on responses in the cortex has been done using unit recordings, and while results are mixed, these have most often demonstrated a suppressive effect on spontaneous and/or evoked activity in the cortex (McBurney-Lin et al., 2019). In contrast to these studies, we do not see a major effect of LC stimulation either on baseline or evoked calcium activity (Figure 3), and, if anything, we see a minor potentiation of transient visual flow onset responses (see also Figure R2). There could be several reasons why our stimulation does not have the same effect as these older studies:
Recording location: Unit recordings are often strongly biased toward highly active neurons (Margrie et al., 2002) and deeper layers of the cortex, while we are imaging from layer 2/3 – a layer notorious for sparse activity. In one of the few papers to record from superficial layers, it has been demonstrated that deeper layers in V1 are affected differently by LC stimulation than more superficial ones (Sato et al., 1989), with suppression more common in superficial layers. Thus, some differences between our results and those in the majority of the literature could simply be due to recording depth and the sampling bias of unit recordings.
Stimulation method: Most previous studies have manipulated LC output/noradrenaline levels by either iontophoretically applying noradrenergic receptor agonists, or by electrically stimulating the LC. Arguably, even though our optogenetic stimulation is still artificial, it represents a more physiologically relevant activation compared to iontophoresis, since the LC releases a number of neuromodulators including dopamine, and these will be released in a more physiological manner in the spatial domain and in terms of neuromodulator concentration. Electrical stimulation of the LC as used by previous studies differs from our optogenetic method in that LC axons will be stimulated across much wider regions of the brain (affecting both the cortex and many of its inputs), and it is not clear whether the cause of cortical response changes is cortical or subcortical. In addition, electrical LC stimulation is not cell type specific.
Temporal features of stimulation: Few previous studies had the same level of temporal control over manipulating LC output that we had using optogenetics. Given that electrical stimulation generates electrical artifacts, coincident stimulation during the stimulus was not used in previous studies. Instead, the LC is often repeatedly or tonically stimulated, sometimes for many seconds, prior to the stimulus being presented. Iontophoresis also does not have the same temporal specificity and will lead to tonically raised receptor activity over a time course determined by washout times.
State specificity: Most previous studies have been performed under anesthesia – which is known to impact noradrenaline levels and LC activity (Müller et al., 2011). Thus, the acute effects of LC stimulation are likely not comparable between anesthesia and in the awake animal.
Due to these differences, it is hard to infer why our results differ from those of other papers. The study with the methodology most similar to ours is (Vazey et al., 2018), which used optogenetic stimulation directly in the mouse LC while recording spiking in deep layers of the somatosensory cortex with extracellular electrodes. Like us, they found that phasic optogenetic stimulation alone did not alter baseline spiking activity (Figure 2F of Vazey et al., 2018), and they found that in layers 5 and 6, short-latency transient responses to foot touch were potentiated and recruited by simultaneous LC stimulation. While this finding appears more overt than the small modulations we see, it is qualitatively not so dissimilar from our finding that transient responses appear to be slightly potentiated when visual flow begins (Figure R2). Differences in the degree of the effect may be due to differences in the layers recorded, the proportion of the LC recruited, or the fact that anesthesia was used in Vazey et al., 2018.
Note that we only used one set of stimulation parameters for optogenetic stimulation, and it is always possible that using different parameters would result in different effects. We will add a discussion on the topic to the revised manuscript.
In the analysis performed in Figure 3, it seems that red light stimulations used to drive ChrimsonR also have an indirect impact on V1 neurons through the retina. Indeed, figure 3D shows a similar response profile for ChrimsonR and control with calcium signals increasing at laser onset (ON response) and offset (OFF response). With that in mind, it is hard to interpret the results shown in Figure 3E-F without seeing the average calcium time course for Control mice. Are the responses following visual flow caused by LC activation or additional visual inputs? The authors should provide additional information to clarify this result.
This is a good point. When we plot the average difference between the stimulus response alone and the optogenetic stimulation + stimulus response, we do indeed find that there is a transient increase in response at the visual flow onset (and the offset of mismatch, which is where visual flow resumes), and this is only seen in ChrimsonR-expressing mice (Figure R2: https://imgur.com/gallery/cqN2Khd). We therefore believe that these enhanced transients at visual flow onset could be due to the effect of ChrimsonR stimulation, and indeed previous studies have shown that LC stimulation can reduce the onset latency and latency jitter of afferent-evoked activity (Devilbiss and Waterhouse, 2004; Lecas, 2004), an effect which could mediate the differences we see. We will add this analysis to the revised manuscript.
Some aspects of the described plasticity process remained unanswered. It is not clear over which time scale the locomotion modulation index changes and how many optogenetic stimulations are necessary or sufficient to saturate this index. Some of these questions could be addressed with the dataset of Figure 3 by measuring this index over different epochs of the imaging session (from early to late) to estimate the dynamics of the ongoing plasticity process (in comparison to control mice). Also, is there any behavioural consequence of plasticity/update of functional representation in V1? If plasticity gated by repeated LC activations reproduced visuomotor responses observed in mice that were exposed to visual stimulation only in the virtual environment, then I would expect to see a change in the locomotion behaviour (such as a change in speed distribution) as a result of the repeated LC stimulation. This would provide more compelling evidence for changes in internal models for visuomotor coupling in relation to its behavioural relevance. An experiment that could confirm the existence of the LC-gated learning process would be to change the gain of the visuomotor coupling and see if mice adapt faster with LC optogenetic activation compared to control mice with no ChrimsonR expression. Authors should discuss how they imagine the behavioural manifestation of this artificially-induced learning process in V1.
Regarding the question of plasticity time course: Unfortunately, owing to the paradigm used in Figure 3, the time course of the plasticity will not be quantifiable from this experiment. This is because in the first 10 minutes, the mouse is in closed loop visuomotor VR experience, undergoing optogenetic stimulation (this is the time period in which we record mismatches). We then shift to the open loop session to quantify the effect of optogenetic stimulation on visual flow responses. Since the plasticity is presumably happening during the closed loop phase, and we have no read-out of the plasticity during this phase (we do not have uncoupled visual flow onsets to quantify LMI in closed loop), it is not possible to track the plasticity over time.
Regarding the behavioral relevance of the plasticity: The type of plasticity we describe here is consistent with predictive, visuomotor plasticity in the form of a learned suppression of responses to self-generated visual feedback during movement. Intuitive purposes of this type of plasticity would be 1) to enable better detection of external moving objects by suppressing the predictable (and therefore redundant) self-generated visual motion and 2) to better detect changes in the geometry of the world (near objects have a larger visuomotor gain than far objects). In our paradigm, we have no intuitive read-out of the mouse's perception of these things, and it is not clear to us that they would be reflected in locomotion speed, which does not differ between groups (manuscript Figure S5). Instead, we would need to turn to other paradigms for a clear behavioral read-out of predictive forms of sensorimotor learning: for instance, sensorimotor learning paradigms in the VR (such as those used in (Heindorf et al., 2018; Leinweber et al., 2017)), or novel paradigms that reinforce the mouse for detecting changes in the gain of the VR, or moving objects in the VR, using LC stimulation during the learning phase to assess if this improves acquisition. This is certainly a direction for future work. In the case of a positive effect, however, the link between the precise form of plasticity we quantify in this manuscript and the effect on the behavior would remain indirect, so we see this as beyond the scope of the manuscript. We will add a discussion on this topic to the revised manuscript.
Finally, control mice used as a comparison to mice expressing ChrimsonR in Figure 3 were not injected with a control viral vector expressing a fluorescent protein alone. Although it is unlikely that the procedure of injection could cause the results observed, it would have been a better control for the interpretation of the results.
We agree that this indeed would have been a better control. However, we believe that this is fortunately not a major problem for the interpretation of our results for two reasons:
The control and ChrimsonR expressing mice do not show major differences in the effect of optogenetic LC stimulation at the level of the calcium responses for all results in Figure 3, with the exception of the locomotion modulation indices (Figure 3I). Therefore, in terms of response size, there is no major effect compared to control animals that could be caused by the injection procedure, apart from marginally increased transient responses to visual flow onset – and, as the reviewer notes, it is difficult to see how the injection procedure would cause this effect.
The effect on locomotion modulation index (Figure 3I) was replicated with another set of mice in Figure 4C, for which we did have a form of injected control (‘Low ChrimsonR’), which did not show the same plasticity in locomotion modulation index (Figure 4E). We therefore know that at least the injection itself is not resulting in the plasticity effect seen.
References:
Agster, K.L., Mejias-Aponte, C.A., Clark, B.D., Waterhouse, B.D., 2013. Evidence for a regional specificity in the density and distribution of noradrenergic varicosities in rat cortex. Journal of Comparative Neurology 521, 2195–2207. https://doi.org/10.1002/cne.23270
Attinger, A., Wang, B., Keller, G.B., 2017. Visuomotor Coupling Shapes the Functional Development of Mouse Visual Cortex. Cell 169, 1291-1302.e14. https://doi.org/10.1016/j.cell.2017.05.023
Devilbiss, D.M., Waterhouse, B.D., 2004. The Effects of Tonic Locus Ceruleus Output on Sensory-Evoked Responses of Ventral Posterior Medial Thalamic and Barrel Field Cortical Neurons in the Awake Rat. J. Neurosci. 24, 10773–10785. https://doi.org/10.1523/JNEUROSCI.1573-04.2004
He, K., Huertas, M., Hong, S.Z., Tie, X., Hell, J.W., Shouval, H., Kirkwood, A., 2015. Distinct Eligibility Traces for LTP and LTD in Cortical Synapses. Neuron 88, 528–538. https://doi.org/10.1016/j.neuron.2015.09.037
Heindorf, M., Arber, S., Keller, G.B., 2018. Mouse Motor Cortex Coordinates the Behavioral Response to Unpredicted Sensory Feedback. Neuron. https://doi.org/10.1016/j.neuron.2018.07.046
Hong, S.Z., Mesik, L., Grossman, C.D., Cohen, J.Y., Lee, B., Severin, D., Lee, H.-K., Hell, J.W., Kirkwood, A., 2022. Norepinephrine potentiates and serotonin depresses visual cortical responses by transforming eligibility traces. Nat Commun 13, 3202. https://doi.org/10.1038/s41467-022-30827-1
Kasamatsu, T., Pettigrew, J.D., 1976. Depletion of brain catecholamines: failure of ocular dominance shift after monocular occlusion in kittens. Science 194, 206–209. https://doi.org/10.1126/science.959850
Lecas, J.-C., 2004. Locus coeruleus activation shortens synaptic drive while decreasing spike latency and jitter in sensorimotor cortex. Implications for neuronal integration. European Journal of Neuroscience 19, 2519–2530. https://doi.org/10.1111/j.0953-816X.2004.03341.x
Leinweber, M., Ward, D.R., Sobczak, J.M., Attinger, A., Keller, G.B., 2017. A Sensorimotor Circuit in Mouse Cortex for Visual Flow Predictions. Neuron 95, 1420-1432.e5. https://doi.org/10.1016/j.neuron.2017.08.036
Mahn, M., Prigge, M., Ron, S., Levy, R., Yizhar, O., 2016. Biophysical constraints of optogenetic inhibition at presynaptic terminals. Nat Neurosci 19, 554–556. https://doi.org/10.1038/nn.4266
Margrie, T.W., Brecht, M., Sakmann, B., 2002. In vivo, low-resistance, whole-cell recordings from neurons in the anaesthetized and awake mammalian brain. Pflugers Arch. 444, 491–498. https://doi.org/10.1007/s00424-002-0831-z
McBurney-Lin, J., Lu, J., Zuo, Y., Yang, H., 2019. Locus coeruleus-norepinephrine modulation of sensory processing and perception: A focused review. Neurosci Biobehav Rev 105, 190–199. https://doi.org/10.1016/j.neubiorev.2019.06.009
Müller, C.P., Pum, M.E., Amato, D., Schüttler, J., Huston, J.P., De Souza Silva, M.A., 2011. The in vivo neurochemistry of the brain during general anesthesia. Journal of Neurochemistry 119, 419–446. https://doi.org/10.1111/j.1471-4159.2011.07445.x
Nomura, S., Bouhadana, M., Morel, C., Faure, P., Cauli, B., Lambolez, B., Hepp, R., 2014. Noradrenalin and dopamine receptors both control cAMP-PKA signaling throughout the cerebral cortex. Front Cell Neurosci 8. https://doi.org/10.3389/fncel.2014.00247
Polack, P.-O., Friedman, J., Golshani, P., 2013. Cellular mechanisms of brain-state-dependent gain modulation in visual cortex. Nat Neurosci 16, 1331–1339. https://doi.org/10.1038/nn.3464
Raimondo, J.V., Kay, L., Ellender, T.J., Akerman, C.J., 2012. Optogenetic silencing strategies differ in their effects on inhibitory synaptic transmission. Nat Neurosci 15, 1102–1104. https://doi.org/10.1038/nn.3143
Sakakibara, Y., Hirota, Y., Ibaraki, K., Takei, K., Chikamatsu, S., Tsubokawa, Y., Saito, T., Saido, T.C., Sekiya, M., Iijima, K.M., n.d. Widespread Reduced Density of Noradrenergic Locus Coeruleus Axons in the App Knock-In Mouse Model of Amyloid-β Amyloidosis. J Alzheimers Dis 82, 1513–1530. https://doi.org/10.3233/JAD-210385
Sato, H., Fox, K., Daw, N.W., 1989. Effect of electrical stimulation of locus coeruleus on the activity of neurons in the cat visual cortex. Journal of Neurophysiology. https://doi.org/10.1152/jn.1989.62.4.946
Seol, G.H., Ziburkus, J., Huang, S., Song, L., Kim, I.T., Takamiya, K., Huganir, R.L., Lee, H.-K., Kirkwood, A., 2007. Neuromodulators control the polarity of spike-timing-dependent synaptic plasticity. Neuron 55, 919–929. https://doi.org/10.1016/j.neuron.2007.08.013
Shepard, K.N., Liles, L.C., Weinshenker, D., Liu, R.C., 2015. Norepinephrine is necessary for experience-dependent plasticity in the developing mouse auditory cortex. J Neurosci 35, 2432–2437. https://doi.org/10.1523/JNEUROSCI.0532-14.2015
Vazey, E.M., Moorman, D.E., Aston-Jones, G., 2018. Phasic locus coeruleus activity regulates cortical encoding of salience information. Proceedings of the National Academy of Sciences 115, E9439–E9448. https://doi.org/10.1073/pnas.1803716115
Yin, X., Jones, N., Yang, J., Asraoui, N., Mathieu, M.-E., Cai, L., Chen, S.X., 2021. Delayed motor learning in a 16p11.2 deletion mouse model of autism is rescued by locus coeruleus activation. Nat Neurosci 24, 646–657. https://doi.org/10.1038/s41593-021-00815-7
Reviewer #2 (Public Review):
The work presented by Jordan and Keller aims at understanding the role of noradrenergic neuromodulation in the cortex of mice exploring a visual virtual environment. The authors hypothesized that norepinephrine released by Locus Coeruleus (LC) neurons in cortical circuits gates the plasticity of internal models following visuomotor prediction errors. To test this hypothesis, they devised clever experiments that allowed them to manipulate visual flow with respect to locomotion to create prediction errors in visuomotor coupling, and to measure the related signals in LC axons innervating the cortex using two-photon calcium imaging. They observed calcium responses proportional to absolute prediction errors that were non-specifically broadcast across the dorsal cortex. To understand how these signals contribute to computations performed by V1 neurons in layer 2/3, the authors activated LC noradrenergic inputs using optogenetic stimulation while imaging calcium responses in cortical neurons. Although LC activation had little impact on evoked activity related to visuomotor prediction errors, the authors observed changes in the effect of locomotion on visually evoked activity after repeated LC axon activation that were absent in control mice. Using a clever paradigm in which the locomotion modulation index was measured in the same neurons before and after optogenetic manipulation, they confirmed that this plasticity depended on the density of LC axons activated, the visual flow associated with running, and the concurrent visuomotor coupling during LC activation. Based on a similar dependence of the locomotion modulation index on speed observed in mice that develop with visuomotor experience only in the virtual environment, the authors concluded that changes in locomotion modulation index are the result of experience-dependent plasticity occurring at a much faster rate during optogenetic stimulation of LC axons.
The study provides very compelling data on a timely and fascinating topic in neuroscience. The authors carefully designed experiments and corresponding controls to exclude confounding factors in the interpretation of neuronal activity in LC axons and cortical neurons. The quality of the data and the rigor of the analysis are important strengths of the study. I believe this study will make an important contribution to the field of systems neuroscience by shedding new light on the role of a key neuromodulator. The results provide strong support for the claims of the study. However, I also believe that some results could have been strengthened by providing additional analyses and experimental controls. These points are discussed below.
Calcium signals in LC axons tend to covary with pupil dilation, air puffs, and locomotion, as the authors reported. A more quantitative analysis, such as a GLM, could help understand the relative contribution (and temporal relationship) of these variables in explaining the calcium signals. This could also help compare signals obtained in the sensory and motor cortical domains. Indeed, the comparison in Figure 2 seems a bit incomplete, since only "posterior versus anterior" comparisons have been performed and not within-group comparisons. I believe it is hard to properly assess differences or similarities between calcium signal amplitudes measured in different mice and cranial windows, as they are subject to important variability (caused by different levels of viral expression, for instance). The authors should at the very least provide a full statistical comparison between/within groups through a GLM that would provide a more systematic quantification.
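For concreteness, the kind of regression suggested here could be set up roughly as below. This is only an illustrative sketch on synthetic traces: all variable names, lag counts, and coefficients are made up, and an ordinary least-squares (Gaussian) GLM with time-lagged predictors stands in for whatever model family the authors would actually choose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame traces (arbitrary units); in the real analysis these
# would be the recorded pupil diameter, an air-puff regressor, running speed,
# and LC-axon dF/F, all sampled on the imaging clock.
T = 2000
pupil = rng.normal(size=T)
airpuff = (rng.random(T) < 0.01).astype(float)
speed = np.abs(rng.normal(size=T))
dff = 0.8 * pupil + 1.5 * airpuff + 0.3 * speed + rng.normal(scale=0.5, size=T)

def lagged(x, lags):
    """Stack time-shifted copies of x so the GLM can capture temporal offsets.
    (np.roll wraps around at the edges; acceptable for a sketch.)"""
    return np.stack([np.roll(x, k) for k in lags], axis=1)

lags = range(0, 5)  # predictors may lead the calcium signal by a few frames
X = np.hstack([lagged(pupil, lags), lagged(airpuff, lags), lagged(speed, lags)])
X = np.hstack([np.ones((T, 1)), X])  # intercept

# Gaussian GLM == ordinary least squares
beta, *_ = np.linalg.lstsq(X, dff, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((dff - pred) ** 2) / np.sum((dff - dff.mean()) ** 2)
```

Comparing per-predictor coefficient groups (or partial R^2 from refitting with one predictor dropped) would then quantify the relative contribution of each behavioral variable, per cortical region.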
Previous studies using stimulation of the locus coeruleus or local iontophoresis of norepinephrine in sensory cortices have shown robust response modulations (see McBurney-Lin et al., 2019, https://doi.org/10.1016/j.neubiorev.2019.06.009 for a review). The weak modulations observed in this study seem at odds with these reports. Given that the density of ChrimsonR-expressing axons varies across mice and that there is no direct measurement of their activation (besides pupil dilation), it is difficult to appreciate how they impact the local network. How does the density of ChrimsonR-expressing axons compare to the actual density of LC axons in V1? The authors could further discuss this point.
In the analysis performed in Figure 3, it seems that the red light stimulation used to drive ChrimsonR also has an indirect impact on V1 neurons through the retina. Indeed, Figure 3D shows a similar response profile for ChrimsonR and control mice, with calcium signals increasing at laser onset (ON response) and offset (OFF response). With that in mind, it is hard to interpret the results shown in Figure 3E-F without seeing the average calcium time course for control mice. Are the responses following visual flow caused by LC activation or by additional visual inputs? The authors should provide additional information to clarify this result.
Some aspects of the described plasticity process remain unaddressed. It is not clear over which time scale the locomotion modulation index changes, or how many optogenetic stimulations are necessary or sufficient to saturate this index. Some of these questions could be addressed with the dataset of Figure 3 by measuring this index over different epochs of the imaging session (from early to late) to estimate the dynamics of the ongoing plasticity process (in comparison to control mice). Also, is there any behavioural consequence of the plasticity/update of functional representations in V1? If plasticity gated by repeated LC activation reproduces the visuomotor responses observed in mice that were exposed to visual stimulation only in the virtual environment, then I would expect to see a change in locomotion behaviour (such as a change in speed distribution) as a result of the repeated LC stimulation. This would provide more compelling evidence for changes in internal models for visuomotor coupling in relation to their behavioural relevance. An experiment that could confirm the existence of the LC-gated learning process would be to change the gain of the visuomotor coupling and see if mice adapt faster with LC optogenetic activation compared to control mice with no ChrimsonR expression. The authors should discuss how they imagine the behavioural manifestation of this artificially induced learning process in V1.
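The suggested early-versus-late comparison of the locomotion modulation index could be sketched as below. The data are entirely synthetic, and the LMI definition used here — the common (running − sitting)/(running + sitting) contrast — is an assumption; it may differ in detail from the manuscript's exact definition.

```python
import numpy as np

rng = np.random.default_rng(1)

def locomotion_modulation_index(resp_running, resp_sitting):
    """A common LMI definition: (running - sitting) / (running + sitting).
    Bounded in (-1, 1) for strictly positive responses."""
    return (resp_running - resp_sitting) / (resp_running + resp_sitting)

# Hypothetical visually evoked responses of 100 neurons, with the session
# split into early and late epochs; ongoing plasticity would appear as a
# systematic shift of the LMI distribution from early to late.
n = 100
early_run, early_sit = rng.gamma(4.0, 1.0, n), rng.gamma(4.0, 1.0, n)
late_run, late_sit = rng.gamma(3.0, 1.0, n), rng.gamma(4.0, 1.0, n)

lmi_early = locomotion_modulation_index(early_run, early_sit)
lmi_late = locomotion_modulation_index(late_run, late_sit)
shift = np.median(lmi_late) - np.median(lmi_early)
```

Computing `shift` separately for ChrimsonR and control mice, over finer-grained epochs, would trace the time course of the plasticity the reviewer asks about.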
Finally, control mice used as a comparison to mice expressing ChrimsonR in Figure 3 were not injected with a control viral vector expressing a fluorescent protein alone. Although it is unlikely that the procedure of injection could cause the results observed, it would have been a better control for the interpretation of the results.
Author Response
Reviewer #1 (Public Review):
This article is aimed at constructing a recurrent network model of the population dynamics observed in the monkey primary motor cortex before and during reaching. The authors approach the problem from a representational viewpoint, by (i) focusing on a simple center-out reaching task where each reach is predominantly characterised by its direction, and (ii) using the machinery of continuous attractor models to construct network dynamics capable of holding stable representations of that angle. Importantly, M1 activity in this task exhibits a number of peculiarities that have pushed the authors to develop important methodological innovations which, to me, give the paper most of its appeal. In particular, M1 neurons have dramatically different tuning to reach direction in the movement preparation and execution epochs, and that fact motivated the introduction of a continuous attractor model incorporating (i) two distinct maps of direction selectivity and (ii) distinct degrees of participation of each neuron in each map. I anticipate that such models will become highly relevant as neuroscientists increasingly appreciate the highly heterogeneous, and stable-yet-non-stationary nature of neural representations in the sensory and cognitive domains.
As far as modelling M1 is concerned, however, the paper could be considerably strengthened by a more thorough comparison between the proposed attractor model and the (few) other existing models of M1 (even if these comparisons are not favourable they will be informative nonetheless). For example, the model of Kao et al (2021) seems to capture all that the present model captures (orthogonality between preparatory and movement-related subspaces, rotational dynamics, tuned thalamic inputs mostly during preparation) but also does well at matching the temporal structure of single-neuron and population responses (shown e.g. through canonical correlation analysis). In particular, it is not clear to me how the symmetric structure of connectivity within each map would enable the production of temporally rich responses as observed in M1. If it doesn't, the model remains interesting, as feedforward connectivity between more than two maps (reflecting the encoding of many more kinematic variables) or other mechanisms (such as proprioceptive feedback) could well explain away the observed temporal complexity of neural responses. Investigating such alternative explanations would of course be beyond the scope of this paper, but it is arguably important for the readers to know where the model stands in the current literature.
Below is a summary of my view on the main strengths and weaknesses of the paper:
1) From a theoretical perspective, this is a great paper that makes an interesting use of the multi-map attractor model of Romani & Tsodyks (2010), motivated by the change in angular tuning configuration from the preparatory epoch to the movement execution epoch. Continuous attractor models of angular tuning are often criticised for being implausibly homogeneous/symmetrical; here, the authors address this limitation by incorporating an extra dimension to each map, namely the degree of participation of each neuron (the distribution of which is directly extracted from data). This extension of the classical ring model seems long overdue! Another nice thing is the direct use of data for constraining the model's coupling parameters; specifically, the authors adjust the model's parameters in such a way as to match the temporal evolution of a number of "order parameters" that are explicitly manifested (i.e. observable) in the population recordings.
I believe the main weakness of this continuous attractor approach is that it – perhaps unduly – binarises the configuration of angular tuning. Specifically, it assumes that while angular tuning switches at movement onset, it is otherwise constant within each epoch (preparation and execution). I commend the authors for carefully motivating this in Figure 2 (2e in particular), by showing that the circular variance of the distribution of preferred directions is higher across prep & move than within either prep or move. While this justifies a binary "two-map model" to first order, the analysis nevertheless shows that preferred directions do change, especially within the preparatory epoch. Perhaps the authors could do some bootstrapping to assess whether the observed dispersion of PDs within sub-periods of the delay epoch is within the noise floor imposed by the finite number of trials used to estimate tuning curves. If it is, then this considerably strengthens the model; otherwise, the authors should say that the binarisation reflects an approximation made for analytical tractability, and discuss any important implications.
We thank the reviewer for the suggested analysis. We have included this new analysis in Fig. S1.
First of all, in Fig 2e of the previous version of the manuscript, we considered three time windows during preparation and two time windows during movement execution. We now use a shorter time window of 160 ms, so that we can fit three time windows within either epoch. The results do not change qualitatively, and the results of the bootstrap analysis below do not depend on the definition of this time window.
The bootstrap analysis is described in detail in the second paragraph of the Methods section (“Preparatory and movement-related epochs of motion”). The bootstrap distribution is generated by resampling trials with replacement (keeping the number of trials per condition the same as in the data), while shuffling the temporal windows in time within epochs. For example: for condition 1, we have 43 trials in the data. In one trial of the bootstrap distribution for condition 1, each of the 3 time windows of the delay period is chosen at random (with replacement) from the possible 43×3 windows in the data. The analysis shows that the median variance of preferred directions in the data is significantly larger than that of the bootstrap samples.
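The resampling scheme described above can be sketched as follows. The preferred-direction (PD) estimates here are synthetic von Mises draws, and the windowing details are simplified relative to the actual Methods; only the logic — pool windows within an epoch, resample with replacement, recompute the circular variance of per-window PDs — is kept.

```python
import numpy as np

rng = np.random.default_rng(2)

def circular_variance(angles):
    """1 - |mean resultant vector|: 0 if all angles coincide, ~1 if uniform."""
    return 1.0 - np.abs(np.mean(np.exp(1j * angles)))

# Hypothetical PD estimates (radians) for one condition: 43 trials x 3
# delay-period windows, mirroring the 43-trial example in the text.
n_trials, n_windows = 43, 3
pds = rng.vonmises(mu=0.0, kappa=4.0, size=(n_trials, n_windows))

def bootstrap_cv(pds, n_boot=1000):
    """Resample trials with replacement while shuffling windows in time:
    each pseudo-window entry is drawn from the pooled trial x window set."""
    n_trials, n_windows = pds.shape
    flat = pds.reshape(-1)
    out = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(flat, size=n_trials * n_windows, replace=True)
        windows = sample.reshape(n_trials, n_windows)
        # PD per window = circular mean of the resampled trials
        pd_per_window = np.angle(np.mean(np.exp(1j * windows), axis=0))
        out[b] = circular_variance(pd_per_window)
    return out

boot = bootstrap_cv(pds)
observed = circular_variance(np.angle(np.mean(np.exp(1j * pds), axis=0)))
p = np.mean(boot >= observed)  # fraction of null samples at least as dispersed
```

An observed circular variance exceeding most of the bootstrap distribution (small `p` for the opposite tail) would indicate real within-epoch PD changes beyond the finite-trial noise floor.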
This suggests that neurons do change their preferred direction within epochs, but these changes are smaller in magnitude than those that occur between epochs. We explicitly comment on this in the Methods, and in the main text we point out that considering only two epochs is a simplifying assumption; as such, it can be thought of as a first step towards building a more complete model that captures tuning dynamics within both the preparatory and execution epochs. Note, however, that this simple framework is enough for the model to recapitulate neuronal activity to a large extent, both at the level of single units and at the population level.
2) While it is great to constrain the model parameters using the data, there is a glaring "issue" here which I believe is both a weakness and a strength of the approach. The model has a lot of freedom in the external inputs, which leads to relatively severe parameter degeneracies. The authors are entirely forthright about this: they even dedicate a whole section to explaining that depending on the way the cost function is set up, the fit can land the model in very different regimes, yielding very different conclusions. The problem is that I eventually could not decide what to make of the paper's main results about the inferred external inputs, and indeed what to make of the main claim of the abstract. It would be great if the authors could discuss these issues more thoroughly than they currently do, and in particular argue more strongly for the reasons that might lead one to favour the solutions of Fig 6d/g over that of Fig 6a. On the other hand, I see the proposed model as an interesting playground that will probably enable a more thorough investigation of input degeneracies in RNN models. Several research groups are currently grappling with this; in particular, the authors of LFADS (Pandarinath et al, 2018) and other follow-up approaches (e.g. Schimel et al, 2022) make a big deal of being able to use data to simultaneously learn the dynamics of a neural circuit and infer any external inputs that drive those dynamics, but everyone knows that this is a generally ill-posed problem (see also the discussion in Malonis et al 2021, which the authors cite). As far as I know, it is not yet clear what form of regularisation/prior might best improve identifiability. While Bachschmid-Romano et al. do not go very far in dissecting this problem, the model they propose is low-dimensional and more amenable to analytical calculations, such that it provides a valuable playground for future work on this topic.
We agree with the reviewer that the problem of disambiguating between feedforward and recurrent connections from observation of the state of the recurrent units alone is a degenerate problem in general.
By explicitly looking for solutions that minimize the role of external inputs in driving the dynamics, we argued that the solutions of Fig 4d/g are preferable to the one of Fig 4a because they are based on local computations implemented through shorter-range connections, rather than on incoming connections from upstream areas; as such, they likely require less metabolic energy.
In the new version of the paper, we discuss this issue more explicitly:
Degeneracy of solutions. We considered the case where parameters are inferred by minimizing a cost function that equals the reconstruction error only (this corresponds to very large values of the parameter α in the cost function). Figure 4—figure supplement 2 shows that after minimizing the reconstruction error, the cost function is flat in a large region of the order-parameter space. We also added Figure 5—figure supplement 5 to show that the dynamics of the feedforward network look almost indistinguishable from those of the recurrent network (Fig. 5), although the average canonical correlation coefficient is a bit lower for the purely feedforward case.
Breaking the degeneracy of solutions. We added Figure 4—figure supplement 1 to show that for a wide range of the parameter α, all solutions cluster in a small region of parameter space. Solutions are found both above and below the bifurcation line. Note that all solutions are such that the parameters j_A and j_B are close to the bifurcation line that separates the region where tuned network activity requires tuned external input from the region where tuned network activity can be sustained autonomously. Furthermore, the weight of recurrent connections within map B (j_B) is much stronger than the corresponding weight for map A (j_A). Hence, we observe that external inputs play a stronger role in shaping the dynamics during motor preparation than during execution, while recurrent inputs dominate the total inputs during movement execution, for a broad range of values of α. This prediction needs to be tested experimentally, although it is in line with the results of ref. 39, as we explain in the Discussion, section “Interplay between external and recurrent currents”, last paragraph.
3) As an addition to the motor control literature, this paper's main strengths lie in the model capturing the orthogonality between preparatory and movement-related activity subspaces (Elsayed et al 2016), which few models do. However, one might argue that the model is in fact half hand-crafted for this purpose, and half-tuned to neural data, in such a way that it is almost bound to exhibit the phenomenon. Thus, some form of broader model cross-validation would be nice: what else does the model capture about the data that did not explicitly inspire/determine its construction? As a starting point, I would suggest that the authors apply the type of CCA-based analysis originally performed by Sussillo et al (2015), and compare qualitatively to both Sussillo et al. (2015) and Kao et al (2021). Also, as every recorded monkey M1 neuron can be characterized by its coordinates in the 4-dimensional space of angular tuning, it should be straightforward to identify the closest model neuron; it would be very compelling to show side-by-side comparisons of single-neuron response timecourses in model and monkey (i.e., extend the comparison of Fig S6 to the temporal domain).
We thank the reviewer for these suggestions. We have added the following comparisons:
● A CCA-based analysis (Fig 5.a) shows that the performance of our model is qualitatively comparable to that of Sussillo et al. (2015) and Kao et al. (2021) at generating realistic motor cortical activity (average canonical correlation ρ = 0.77 during movement preparation and 0.82 during movement execution).
● For each of the 141 neurons in the data, we selected the corresponding unit in the model that is closest in the eta- and theta-parameter space:
a) A side-by-side comparison of the time course of responses shows a good qualitative agreement (Fig 5.c).
b) We successfully trained a linear decoder to read out the responses of these 141 neurons from simulations and output the trial-averaged EMG activity recorded from a monkey performing the same task (Fig 5.b).
c) Figure 5—figure supplement 4 shows that the simulated data exhibits sequential activity, as does the recorded data.
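For readers unfamiliar with the CCA-based comparison referred to above, a minimal sketch follows. The data are synthetic population trajectories (not the recordings or simulations discussed here); canonical correlations are computed via the standard QR + SVD route, and their mean plays the role of the reported summary statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

def canonical_correlations(X, Y):
    """Canonical correlations between two data matrices (time x dims).
    After centering, the singular values of Qx^T Qy are the cosines of the
    principal angles between the two column spaces."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Hypothetical trial-averaged trajectories: "recorded" and "simulated"
# populations (T x 10 PCs) sharing three latent dimensions plus noise.
T = 500
latent = rng.normal(size=(T, 3))
data = latent @ rng.normal(size=(3, 10)) + 0.5 * rng.normal(size=(T, 10))
model = latent @ rng.normal(size=(3, 10)) + 0.5 * rng.normal(size=(T, 10))

rho = canonical_correlations(data, model)
mean_rho = float(rho.mean())  # the kind of average canonical correlation reported
```

High leading correlations with a quickly decaying tail indicate that the two populations share a low-dimensional common structure.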
In our simulations, the temporal variability in single-neuron responses is due to the temporal evolution of the inferred external inputs and to noise, implemented by an Ornstein-Uhlenbeck (OU) process added to the total inputs. Another source of variability could be introduced in the synaptic connectivity: one could add a Gaussian random variable to each synaptic efficacy, for example. We checked that this simple extension of our model is able to reproduce the dynamics of the order parameters seen in the data. A full characterization of this extended model is beyond the scope of our paper.
4) The paper's clarity could be improved.
We thank the reviewer for this feedback. We have significantly rewritten most sections of the paper to improve clarity.
Reviewer #2 (Public Review):
The authors study M1 cortical recordings in two non-human primates performing straight delayed center-out reaches to one of 8 peripheral targets. They build a model for the data with the goal of investigating the interplay of inferred external inputs and recurrent synaptic connectivity, and their contributions to the encoding of preferred movement direction during the movement preparation and execution epochs. The model assumes neurons encode movement direction via a cosine tuning that can be different during the preparation and execution epochs. As a result, each neuron in the model is described by four main properties: its preferred direction in the cosine tuning during the preparation (denoted by θ_A) and execution (denoted by θ_B) epochs, and the strength of its encoding of the movement direction during the preparation (denoted by η_A) and execution (denoted by η_B) epochs. The authors assume that the activity in the neurons has been generated by a recurrent network that can have different inputs during the preparation and execution epochs. In the model, these inputs can be either internal to the network or external. The authors fit the model to real data by optimizing a loss that combines, via a hyperparameter α, the reconstruction of the cosine tunings with a cost to discourage/encourage the use of external inputs to explain the data. They study the solutions that would be obtained for various values of α. The authors conclude that during the preparatory epoch, external inputs seem to be more important for reproducing the neurons' cosine tunings to movement directions, whereas during movement execution external inputs seem to be untuned to movement direction, with the movement direction instead being encoded in the direction-specific recurrent connections of the network.
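The four-parameter tuning described above amounts to the following schematic (the parameter values are illustrative only, not taken from the paper):

```python
import numpy as np

def rate(theta_move, theta_A, theta_B, eta_A, eta_B, epoch):
    """Directional modulation of one model neuron for a reach at angle
    theta_move, under the two-map cosine-tuning parameterization summarized
    above: map A (theta_A, eta_A) governs preparation, map B (theta_B, eta_B)
    governs execution."""
    if epoch == "preparation":
        return eta_A * np.cos(theta_move - theta_A)
    return eta_B * np.cos(theta_move - theta_B)

# A neuron preferring 0 rad while preparing but pi/2 rad while moving,
# participating more strongly in the execution map:
r_prep = rate(0.0, theta_A=0.0, theta_B=np.pi / 2, eta_A=0.5, eta_B=1.0,
              epoch="preparation")
r_move = rate(0.0, theta_A=0.0, theta_B=np.pi / 2, eta_A=0.5, eta_B=1.0,
              epoch="execution")
```

For a rightward reach (0 rad), this neuron is strongly modulated during preparation but nearly unmodulated during execution, illustrating how tuning can switch between epochs.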
Major:
1) Fundamentally, without actually simultaneously recording the activity of upstream regions, it should not be possible to rule out that the seemingly recurrent connections in the M1 activity are actually due to external inputs to M1. I think it should be acknowledged in the discussion that the inferred external inputs here depend on the assumptions of the model and provide hypotheses to be validated in future experiments that actually record from upstream regions. To convey with an example why I think it is critical to simultaneously record from upstream regions to confirm these conclusions, consider two alternative scenarios: I) The recorded neurons in M1 have some recurrent connections that generate a pattern of activity that, based on the modeling, seems to be recurrent. II) The exact same activity has been recorded from the same M1 neurons, but these neurons have absolutely no recurrent connections themselves, and are instead activated via purely feed-forward connections from some upstream region; that upstream region has recurrent connections and is generating the recurrent-like activity that is later echoed in M1. These two scenarios can produce the exact same M1 data, so they should not be distinguishable purely on the basis of the M1 data. To distinguish them, one would need to simultaneously record from upstream regions to see whether the same recurrent-like patterns seen in M1 were already generated in an upstream region. I think acknowledging this major limitation, and discussing the need to eventually confirm the conclusions of this modeling study with actual simultaneous recordings from upstream regions, is critical.
We agree with the reviewer that it is not possible to rule out the hypothesis that motor cortical activity is purely generated by feedforward connectivity.
In the new version of the paper, we discuss more explicitly the fact that neural activity can be fully explained by feedforward inputs, and we added Figure 5—figure supplement 5 to show that the dynamics of the feedforward network look almost indistinguishable from those of the recurrent network (Fig. 5), provided their parameters are appropriately tuned. Notice, however, that a canonical correlation analysis comparing the recorded activity with the simulated activity shows that the average canonical correlation coefficient is slightly lower for the case of a purely feedforward network (Fig. 5.a vs Fig. S12.a).
A summary of our approach is:
We observe that both a purely feedforward and a recurrent network can reproduce the temporal course of the recordings equally well (see also our answer to question 5 below);
We point out that a solution that would save metabolic energy consumption is one where the activity is generated by recurrent currents (with shorter range local connections) rather than by feedforward inputs from upstream regions (long-range connections).
We study the solution that best reproduces the recorded activity and minimizes inputs from upstream regions.
In the Discussion, we included the Reviewer’s observation that our hypothesis needs to be tested by simultaneous recordings of M1 and upstream regions, as well as by measurements of synaptic strength between motor cortical neurons. See the second paragraph of page 14: “Our prediction (…) will be necessary to rule out alternative explanations”. That said, we think that the results of reference [51] are consistent with ours.
One last point we would like to stress is that external inputs drive the network's dynamics at all times, even in the solution that we argue would save metabolic energy consumption: untuned inputs are present throughout the whole course of the motor action, including during movement execution, and they determine the precise temporal pattern of neurons' firing rates.
2) The ring network model used in this work implicitly relies on the assumption that cosine-tuning models are good representations of the recorded M1 neuronal activity. However, this assumption is not quantitatively validated in the data. Given that all conclusions depend on this, it would be important to provide some goodness of fit measure for the cosine tuning models to quantify how well the neurons' directional preferences are explained by cosine tunings. For example, reporting a histogram of the cosine tuning fit error over all neurons in Fig 2 would be helpful (currently example fits are shown only for a few neurons in Fig. 2 (a), (b), and Figure S6(b)). This would help quantitatively justify the modeling choice.
We thank the reviewer for this observation. Fig. S2.e-f shows the R^2 coefficient of the cosine fit; in particular, we show that the R^2 of the cosine fit strongly correlates with the variables \eta, which represent the degree of participation of single units in the recurrent currents. Units with higher \eta (the ones that contribute more to the recurrent currents) are the ones whose tuning curves more closely resemble a cosine. However, the plot also shows that the R^2 coefficient of the cosine fit is quite low for many cells. To show that a model with cosine tuning can yield this result, we repeated the same analysis on the units in our simulated network. In our simulations, all neurons receive a stochastic input mimicking the large fluctuations around mean inputs that are expected to occur in vivo. We selected the 141 units whose activity most strongly resembled the activity of the 141 recorded neurons (see figure caption for details). We then looked at the tuning curves of these 141 units from simulations, and calculated the R^2 coefficient of the cosine fit. Figure 5—figure supplement 2.c shows that the result agrees well with the data: the R^2 coefficient is quite low for many neurons, and correlates with the variable \eta. To summarize, a model that assumes cosine tuning, but also incorporates noise in the dynamics, reproduces well the R^2 coefficients of the cosine fits of tuning curves from data. We added the paragraph “Cosine tuning” in the Discussion to comment on this point.
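For concreteness, a toy sketch of the per-neuron cosine-fit R^2 (hypothetical rates and parameter values, not the recorded data): the cosine model r(d) = b0 + a cos d + c sin d is linear in its parameters, so an ordinary least-squares fit suffices, and the fitted preferred direction is atan2(c, a).

```python
import numpy as np

rng = np.random.default_rng(1)
directions = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # 8 reach directions

# Hypothetical cosine-tuned unit with preferred direction pd, plus noise
pd, baseline, gain, noise = 1.2, 10.0, 4.0, 1.0
rates = baseline + gain * np.cos(directions - pd) + noise * rng.normal(size=8)

# Least-squares cosine fit r(d) = b0 + a*cos(d) + c*sin(d)
X = np.column_stack([np.ones(8), np.cos(directions), np.sin(directions)])
coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
fit = X @ coef

# R^2 of the cosine fit, the per-neuron quantity reported in Fig. S2.e-f
ss_res = np.sum((rates - fit) ** 2)
ss_tot = np.sum((rates - rates.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
pd_hat = np.arctan2(coef[2], coef[1]) % (2 * np.pi)  # fitted preferred direction
```

Running the same computation on noisy simulated units is what produces the spread of low R^2 values discussed above.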
3) The authors explain that the two-cylinder model that they use has "distinct but correlated" maps A and B during the preparation and movement. This is hard to see in the formulation. It would be helpful if the authors could expand in the Results on what they mean by "correlation" between the maps and which part of the model enforces the correlation.
We thank the reviewer for this comment. By correlation, we meant the correlation between neural activity during the preparatory and movement-related temporal intervals. In the model, the correlation between the vectors θA and θB induces correlation in the preparatory and movement-related activity patterns. To make the paper easier to read, we are not mentioning this concept in the Results; in the Discussion, we explicitly refer to it in the following two paragraphs:
“A strong correlation between the selectivity properties of the preparatory and movement-related epochs will produce strongly correlated patterns of activity in these two intervals and a strong overlap between the respective PCA subspaces.” (Discussion, section Orthogonal spaces dedicated to movement preparation and execution)
“The correlation between the vectors θA and θB” (Discussion, section Interplay between external and recurrent currents)
4) The authors note that a key innovation in the model formulation here is the addition of participation strength parameters (η_A, η_B) to prior two-cylinder models to represent the degree of neuron's participation in the encoding of the circular variable in either map. The authors state that this is critical for explaining the cosine tunings well: "We have discussed how the presence of this dimension is key to having tuning curves whose shape resembles the one computed from data, and decreases the level of orthogonality between the subspaces dedicated to the preparatory and movement-related activity". However, I am not sure where this is discussed. To me, it seems like to show that an additional parameter is necessary to explain the data well, one would need to compare fit to data between the model with that parameter and a model without that parameter. I don't think such a comparison was provided in the paper. It is important to show such a comparison to quantitatively show the benefit of the novel element of the model.
We thank the reviewer for this comment.
● The key observation is that without the parameters eta_A, eta_B, the temporal evolution of all neurons in the network is the same (only the noise term added to the dynamics is different). To show this, we have compared the temporal evolution of the firing rates of single neurons of the model with data. Fig. 5.c shows a comparison between the time course of single-neuron firing rates from data and simulations (good agreement), while Figure 6—figure supplement 2.a shows the same comparison for a model in which all neurons have the same value of the eta_A, eta_B parameters (worse agreement: the range of firing rates is the same for all neurons). In summary, the parameters eta_A, eta_B introduce the variability in the coupling strengths that is necessary to generate heterogeneity in neuronal responses.
● At the end of section “PCA subspaces dedicated to movement preparation and execution”, we refer to (Figure 6—figure supplement 2).c, showing that a model with eta_A=1=eta_B for all neurons yields less orthogonal subspaces.
5) The model parameters are fitted by minimizing a total cost that is a weighted average of two costs as E_tot = α E_rec + E_ext, with the hyperparameter α determining how the two costs are combined. The selection of α is key in determining how much the model relies on external inputs to explain the cosine tunings in the data. As such, the conclusions of the paper rely on a clear justification of the selection of α and a clear discussion of its effect. Otherwise, all conclusions can be arbitrary confounds of this selection and thus unreliable. Most importantly, I think there should be a quantitative fit to data measure that is reported for different scenarios to allow comparison between them (also see comment 2). For example, when arguing that α should be "chosen so that the two terms have equal magnitude after minimization", this would be convincing if somehow that selection results in a better fit to the neural data compared with other values of α. If all such selections of α have a similar fit to neural data, then how can the authors argue that some are more appropriate than others? This is critical since small changes in alpha can lead to completely different conclusions (Fig. 6, see my next two comments).
All the points raised in questions 5 to 8 are interrelated, and we address them below, after Major issue 8.
6) The authors seem to select alpha based on the following: "The hyperparameter α was chosen so that the two terms have equal magnitude after minimization (see Fig. S4 for details)". Why is this the appropriate choice? The authors explain that this will lead to the behavior of the model being close to the "bifurcation surface". But why is that the appropriate choice? Does it result in a better fit to neural data compared with other choices of α? It is critical to clarify and justify as again all conclusions hinge on this choice.
7) Fig 6 shows example solutions for 2 close values of α, and how even slight changes in the selection of α can change the conclusions. In Fig. 6 (d-e-f), α is chosen as the default approach such that the two terms E_rec and E_ext have equal magnitude. Here, as the authors note, during movement execution tuned external inputs are zero. In contrast, in Fig. 6 (g-h-i), α is chosen so that the E_rec term has a "slightly larger weight" than the E_ext term so that there is less penalty for using large external inputs. This leads to a different conclusion whereby "a small input tuned to θ_B is present during movement execution". Is one value of α a better fit to neural data? Otherwise, how do the authors justify key conclusions such as the following, which seems to be based on the first choice of α shown in Fig. 6 (d-e-f): "...observed patterns of covariance are shaped by external inputs that are tuned to neurons' preferred directions during movement preparation, and they are dominated by strong direction-specific recurrent connectivity during movement execution".
8) It would be informative to see the extreme case of very large and very small α. For example, if α is very large such that external inputs are practically not penalized, would the model rely purely on external inputs (rather than recurrent inputs) to explain the tuning curves? This would be an example of the hypothetical scenario mentioned in my first comment. Would this result in a worse fit to neural data?
We agree with the reviewer that it is crucial to discuss how the choice of the parameter alpha affects the results, and we have strived to improve this discussion in the revised manuscript.
I. When we looked for the coupling parameters that best explain the data, without introducing a metabolic cost, we found multiple solutions that were equally good (see Figure 4—figure supplement 2 and our answer to question (1) above). These included the solution with all couplings set to zero (j_s^B = j_s^A = j_a = 0), as well as many solutions with different values of synaptic couplings parameters. The solution with the strongest couplings is close to the bifurcation line, in the area where j_s^B > j_s^A.
II. We then introduced a metabolic cost to break the degeneracy between these different solutions. The cost function we minimized contains two terms; their relative strength is modulated by alpha. The case of very small alpha (i.e., only minimizing external input) yields a very poor reconstruction of neural dynamics and is not interesting. The case of very large alpha reduces to case (I) above. We added Figure 4—figure supplement 1 to show the results for intermediate values of alpha: large enough to yield a good reconstruction of neural dynamics, yet small enough to ensure that we find a unique solution. For these intermediate values of alpha, the two terms of the cost function have comparable magnitudes. Although slight changes in the selection of alpha do change whether the solutions are above or below the bifurcation surface, Figure 4—figure supplement 1 shows that all solutions are close to the bifurcation surface. In particular, the value of j_s^B is close to its critical value, while we never find solutions where j_s^A is close to its critical value, i.e., solutions in the lower-right region of the plot in Figure 4—figure supplement 1. The critical value of j_s^B is the one above which no tuned external inputs are necessary to sustain the observed activity during movement execution. For values of j_s^B close to the bifurcation line but below it (for example, Fig. 4g), the inferred tuned inputs during movement execution are still much weaker than the untuned ones. Also, the inferred direction-specific couplings are strong and amplify the weak external inputs tuned to map B, so they still play a major role in shaping the observed dynamics during movement execution.
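The degeneracy-breaking logic can be caricatured in one dimension (toy Python, invented numbers; not the manuscript's fitting procedure): a scalar rate x_{t+1} = j·x_t + u must hold at a target x* = 1, E_rec is the reconstruction error, E_ext = u^2, and j = 1 plays the role of the bifurcation surface.

```python
import numpy as np

T = 30  # toy number of time steps

def e_rec(j, u):
    """Mean squared deviation of x_{t+1} = j*x_t + u from the target x* = 1."""
    x, err = 1.0, 0.0
    for _ in range(T):
        err += (x - 1.0) ** 2
        x = j * x + u
    return err / T

# Degeneracy without the input cost: every pair (j, u = 1 - j)
# reconstructs the target exactly, so E_rec alone cannot select j.
assert all(np.isclose(e_rec(j, 1 - j), 0.0) for j in (0.0, 0.5, 0.9))

# Adding E_ext = u^2 breaks the degeneracy: among the exact solutions,
# the one with minimal external input has the largest coupling, i.e. j
# is pushed toward its critical value j = 1 (the bifurcation), where
# the activity becomes self-sustained and u -> 0.
js = np.linspace(0.0, 0.99, 100)

def e_tot(j, alpha=1.0):
    u = 1 - j  # exact-reconstruction input for this coupling
    return alpha * e_rec(j, u) + u ** 2

j_opt = js[np.argmin([e_tot(j) for j in js])]  # largest j on the grid
```

This mirrors points I and II above: without E_ext the solution set is degenerate; with it, the selected coupling sits next to the bifurcation.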
We have accordingly rewritten the abstract, introduction, and conclusions of the paper. Instead of focusing on only one solution for a particular value of alpha, we now discuss all solutions and their implications.
9) The authors argue in the discussion that "the addition of an external input strength-minimization constraint breaks the degeneracy of the space of solutions, leading to a solution where synaptic couplings depend on the tuning properties of the pre- and post-synaptic neurons, in such a way that in the absence of a tuned input, neural activity is localized in map B". In other words, the use of the E_ext term, apparently reduces "degeneracy" of the solution. This was not clear to me and I'm not sure where it is explained. This is also related to α because if alpha goes toward very large values, it would be like the E_ext term is removed, so it seems like the authors are saying that the solution becomes degenerate if alpha grows very large. This should be clarified.
We thank the reviewer for pointing this out. By degeneracy of solutions, we mean that the model can explain the data equally well for different choices of the recurrent coupling parameters (j_s^A, j_s^B, j_a). In other words, if we look for the coupling parameters that best explain the data, there are many equivalent solutions. When we introduce the E_ext term in the cost function, we then find one unique solution for each choice of alpha. So by “breaking the degeneracy”, we mean going from a scenario with many equally valid solutions to one with a single solution. We added this explanation to the paper, along with an explanation of how our conclusions depend on the choice of alpha.
10) How do the authors justify setting Φ_A = Φ_B in equation (5)? In other words, how is the last assumption in the following sentence justified: "To model the data, we assumed that the neurons are responding both to recurrent inputs and to fluctuating external inputs that can be either homogeneous or tuned to θ_A; θ_B, with a peak at constant location Φ_A = Φ_B ≡ Φ". Does this mean that the preferred direction for a given neuron is the same during preparation and movement epochs? If so, how is this consistent with the not-so-high correlation between the preferred directions of the two epochs shown in Fig. 2 c, which is reported to have a circular correlation coefficient of 0.4?
We would like to stress the important distinction between the parameters \theta and the parameters Φ. While the parameters \theta_A and \theta_B represent the preferred direction of single neurons during preparatory and execution epochs, respectively, the parameters Φ_A, Φ_B represent the direction of motion that is encoded at the population level during these two epochs. The mean-field analysis shows that Φ_A = Φ_B, even though single neurons change their preferred direction from one epoch to the next. We added a more extensive explanation of the order parameters in the Results section.
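The θ/Φ distinction can be illustrated with a standard population-vector readout (toy Python with invented numbers; a stand-in for the mean-field order parameter, not the paper's exact definition): each neuron has its own preferred direction θ_i, while Φ is the phase of the population vector, i.e., the direction encoded at the population level.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
theta = rng.uniform(0, 2 * np.pi, N)  # single-neuron preferred directions

# Activity bump localized around a population-level direction Phi = 1.0:
# neurons fire more the closer their preferred direction is to Phi.
Phi_true = 1.0
rates = 1.0 + np.cos(theta - Phi_true) + 0.1 * rng.normal(size=N)

# Order parameter: the phase of the population vector sum_i r_i e^{i theta_i}.
# This population-level quantity can remain the same across epochs even
# if each neuron's preferred direction theta_i changes between maps.
z = np.sum(rates * np.exp(1j * theta))
Phi_hat = np.angle(z) % (2 * np.pi)
```

Here Φ is a property of the whole activity pattern, which is why Φ_A = Φ_B is compatible with a modest circular correlation between single-neuron preferred directions across epochs.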
Reviewer #3 (Public Review):
In this work, Bachschmid-Romano et al. propose a novel model of the motor cortex, in which the evolution of neural activity throughout movement preparation and execution is determined by the kinematic tuning of individual neurons. Using analytic methods and numerical simulations, the authors find that their networks share some of the features found in empirical neural data (e.g., orthogonal preparatory and execution-related activity). While the possibility of a simple connectivity rule that explains large features of empirical data is intriguing and would be highly relevant to the motor control field, I found it difficult to assess this work because of the modeling choices made by the authors and how the results were presented in the context of prior studies.
Overall, it was not clear to me why Bachschmid-Romano et al. couched their models within a cosine-tuning framework and whether their results could apply more generally to more realistic models of the motor cortex. Under cosine-tuning models (or kinematic encoding models, more generally), the role of the motor cortex is to represent movement parameters so that they can presumably be read out by downstream structures. Within such a framework, the question of how the motor cortex maintains a stable representation of movement direction throughout movement preparation and execution when the tuning properties of individual neurons change dramatically between epochs is highly relevant. However, prior work has demonstrated that kinematic encoding models provide a poor fit for empirical data. Specifically, simple encoding models (and the more elaborate extensions [e.g., Inoue, et al., 2018]) cannot explain the complexity of single-neuron responses (Churchland and Shenoy, 2007), and do not readily produce the population-level signals observed in the motor cortex (Michaels, Dann, and Scherberger, 2016) and cannot be extended to more complex movements (Russo, et al., 2018).
In both the Introduction and Discussion, the authors heavily cite an alternative to kinematic encoding models, the dynamical systems framework. Here, the correlations between kinematics and neural activity in the motor cortex are largely epiphenomenal. The motor cortex does not 'represent' anything; its role is to generate patterns of muscle activity. While the authors explicitly acknowledge the shortcomings of encoding models ('Extension to modeling richer movements', Discussion) and claim that their proposed model can be extended to 'more realistic scenarios', they neither demonstrate that their models can produce patterns of muscle activity nor that their model generates realistic patterns of neural activity. The authors should either fully characterize the activity in their networks and make the argument that their models better provide a better fit to empirical data than alternative models or demonstrate that more realistic computations can be explained by the proposed framework.
Major Comments
1) In the present manuscript, it is unclear whether the authors are arguing that representing movement direction is a critical computation that the motor cortex performs, and the proposed models are accurate models of the motor cortex, or if directional coding is being used as a 'proof of concept' that demonstrates how specific, population-level computations can be explained by the tuning of individual neurons.
If the authors are arguing the former, then they need to demonstrate that their models generate activity similar to what is observed in the motor cortex (e.g., realistic PSTHs and population-level signals). Presently, the manuscript only shows tuning curves for six example neurons (Fig. S6) and a single jPC plane (Fig. S8). Regarding the latter, the authors should note that Michaels et al. (2016) demonstrated that representational models can produce rotations that are superficially similar to empirical data, yet are not dependent on maintaining an underlying condition structure (unlike the rotations observed in the motor cortex).
If the authors are arguing the latter - and they seem to be, based on the final section of the Discussion - then they need to demonstrate that their proposed framework can be extended to what they call 'more realistic scenarios'. For example, could this framework be extended to a network that produces patterns of muscle activity?
We thank the reviewer for raising these issues.
Is our model a kinematic encoding model or a dynamical system?
Our model is a dynamical system, as can be seen by inspecting equations (1,2). The main difference between our model and recently proposed dynamical system models of motor cortex is that the synaptic connectivity matrix in our model is built from the tuning properties of neurons, instead of being trained using supervised learning techniques (we come back to this important difference below). Since the network’s connectivity and external input depend on the neurons’ tuning to the direction of motion (eq 5-6), kinematic parameters emerge from the dynamic interaction between recurrent and feedforward currents, as specified by equations (1-6). Thus, kinematic parameters can be decoded from population activity.
While in kinematic encoding models neurons’ firing rates are a function of parameters of the movement, we constrained the parameters of our model by requiring the model to reproduce the dynamics of a few order parameters, which are low-dimensional measures of the activity of recorded neurons. Our model is fitted to neural data, not to the parameters of the movement.
Although we observed that a linear decoder of the network’s activity can reproduce patterns of muscle activity without decoding any kinematic parameter (see below), discussing whether tuning in M1 plays a computational role in controlling muscle activity is outside the scope of our work. Rather, the scope of our paper is to discuss how a specific connectivity structure can generate the observed patterns of neural activity, and which connectivity structure requires minimal external inputs to sustain the dynamics. In our approach, the correlations between kinematics and neural activity in the motor cortex are not merely epiphenomenal, but emerge from a specific structure of the connectivity that has likely been shaped by Hebbian-like learning mechanisms.
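The "dynamical system with tuning-derived connectivity" idea can be sketched with a generic ring attractor (toy Python; illustrative coupling and transfer function, not eqs. 5-6 verbatim): the connectivity is built from preferred directions, J_ij ∝ cos(θ_i − θ_j), a transient tuned input cues a direction, and strong recurrence then sustains the tuned activity with no tuned input.

```python
import numpy as np

N, dt, tau = 300, 0.1, 1.0
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred directions

# Connectivity built from tuning properties: J_ij ~ cos(theta_i - theta_j).
# j0 = 6 is an illustrative supercritical coupling for this toy network.
j0 = 6.0
J = (j0 / N) * np.cos(theta[:, None] - theta[None, :])

phi = lambda h: np.clip(h, 0.0, 1.0)  # saturating threshold-linear transfer

# A tuned input cues direction 1.0 early on, then is removed; the bump
# of activity persists via recurrence alone (self-sustained dynamics).
r = np.zeros(N)
for step in range(2000):
    I_ext = np.cos(theta - 1.0) if step < 500 else 0.0
    h = J @ r + I_ext
    r += (dt / tau) * (-r + phi(h))

# Population vector: the bump stays localized at the cued direction,
# so kinematic parameters can be decoded from the population activity.
Phi_hat = np.angle(np.sum(r * np.exp(1j * theta))) % (2 * np.pi)
```

The point of the sketch is that the direction is an emergent property of recurrent dynamics seeded by inputs, not a quantity the neurons are made to "encode" by construction.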
Can the model generate realistic PSTHs and patterns of muscle activity? Yes, it can. As suggested, we have added the following comparisons:
● A CCA-based analysis (Fig. 5.a) shows that the performance of our model is qualitatively comparable to that of Sussillo et al. (2015) and Kao et al. (2021) at generating realistic motor cortical activity (average canonical correlation ρ = 0.77 for motor preparation, 0.82 for motor execution).
● For each of the 141 neurons in the data, we selected the corresponding most similar unit in the model (the closest unit in the eta/theta parameter space, i.e., the one with the smallest Euclidean distance in the space defined by (\theta_A, \theta_B, \eta_A, \eta_B)). A side-by-side comparison of the time course of responses (Fig. 5.c) shows good qualitative agreement.
● We successfully trained a linear decoder to read the responses of these 141 units from simulations and output trial-averaged EMG activity recorded from a monkey performing the same task (Fig 5.b).
● The model displays sequential activity and rotational dynamics (Fig. S10) without the need to introduce neuron-specific latencies (Michaels, Dann, and Scherberger, 2016).
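For readers unfamiliar with the CCA comparison behind the first bullet, here is a self-contained sketch on surrogate data (toy Python; not the recordings, and the numbers are invented): the Björck–Golub route, QR-orthonormalizing each dataset and taking the singular values of Qx^T Qy, is one standard way to obtain canonical correlation coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200  # time points

# Surrogate "recorded" and "simulated" latent trajectories that share part
# of their dynamics (stand-ins for PCA-reduced M1 data and model activity).
shared = rng.normal(size=(T, 3))
X = np.hstack([shared, rng.normal(size=(T, 2))])                            # "recordings"
Y = np.hstack([shared @ rng.normal(size=(3, 3)), rng.normal(size=(T, 2))])  # "model"

def avg_canonical_corr(X, Y):
    # Bjorck-Golub CCA: after centering and QR-orthonormalization,
    # the singular values of Qx^T Qy are the canonical correlations.
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return rho.mean()

score = avg_canonical_corr(X, Y)                         # high: shared latent structure
chance = avg_canonical_corr(X, rng.normal(size=(T, 5)))  # baseline: independent data
```

An average canonical correlation well above the independent-data baseline is the kind of evidence summarized by the ρ values quoted above.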
Can our model explain the complexity of single-neuron tuning?
We have shown that our model captures the heterogeneity of neural responses. Yet, it has been shown that neurons’ tuning properties depend on many features of movement. For example, the current version of the model does not describe the dependence of tuning on speed (Churchland and Shenoy, 2007). However, our model could be extended to incorporate it. Preliminary results suggest that in a network model in which neurons differ by the degree of symmetry of their synaptic connectivity, the speed of neural trajectories can be modulated by external inputs preferentially targeting neurons that are asymmetrically connected. In our model, all connections are a sum of a symmetric and an asymmetric term. We could extend our model to incorporate variability in the degree of symmetry of the connections, and we speculate that in such a model tuning would depend on the speed of movement, for appropriate forms of external inputs. We leave this study to future work.
Can our model explain neural activity underlying more complex trajectories? When limb trajectories are more complex than simple reaches (Russo, et al., 2018), single-neuron activity displays intricate response patterns. Our work could be extended to model more complex movements in several ways. A simplifying assumption we made is that the task can be clearly separated into a preparatory phase and a movement-related phase. A possible extension is one where the motor action is composed of a sequence of epochs, corresponding to a sequence of maps in our model. It will be interesting to study the role of asymmetric connections in storing a sequence of maps. Such a network model could be used to study the storage of motor motifs in the motor cortex (Logiaco et al., 2021); external inputs could then combine these building blocks to compose complex actions.
In summary, we proposed a simple model that can explain recordings during a straight-reaching task. It provides a scaffold upon which we can build more sophisticated models to explain the activity underlying more complex tasks. We point out that a similar limitation is present in modeling approaches where a network is trained to perform specific neural or muscle activity. The question of whether/how trained recurrent networks can generalize is not yet solved, although currently under investigation (e.g., Dubreuil et al 2022; Driscoll et al 2022).
What is the advantage of the present model, compared to an RNN trained to output specific neural/muscle activity?
Its simplicity. Our model is a low-rank recurrent neural network: the structure of the connectivity matrix is simple enough to allow for analytical tractability of the dynamics. The model can be used to test specific hypotheses on the relationship between network connectivity, external inputs and neural dynamics, and to test hypotheses on the learning mechanisms that may lead to the emergence of a given connectivity structure. The model is also helpful to illustrate the problem of degeneracy of network models. An interesting future direction would be to compare the connectivity matrices of trained RNNs and our model.
We addressed these points in the Discussion, in sections: “Representational vs dynamical system approaches” and “Extension to modeling activity underlying more complex tasks.”
2) Related to the above point, the authors claim in the Abstract that their models 'recapitulate the temporal evolution of single-unit activity', yet the only evidence they present is the tuning curves of six example units. Similarly, the authors should more fully characterize the population-level signals in their networks. The inferred inputs (Fig. 6) indeed seem reasonable, yet I'm not sure how surprising this result is. Weren't the authors guaranteed to infer a large, condition-invariant input during movement and condition-specific input during preparation simply because of the shape of the order parameters estimated from the data (Fig. 6c, thin traces)?
We thank the reviewer for this comment. Regarding the first part of the question: we added new plots with more comparisons between the activity of our model and neural recordings (see the answer above referring to Fig 5).
Regarding the second part: It is true that the shape of the latent variables that we measure from data constrains the solution that we find. However, a “condition-invariant input during movement and condition-specific input during preparation” is not the only scenario compatible with the data. Let’s take a step back and focus on the parameters that we are inferring from data. We are inferring both the strength of external inputs and the coupling parameters. This is done in a two-step inference procedure: we start from a random guess of the coupling parameters, then we infer the strength of the external inputs, and finally we compute the cost function, which depends on all parameters. This is done iteratively, by moving in the space of the coupling parameters; for each point in the space of the coupling parameters, there is one possible configuration of external inputs. The space of the coupling parameters is shown in Fig. 4.a, for example (see also Fig. S4). The solutions that we find do not trivially follow from the shape of the latent variables. For example, one possible solution could be a large parameter j_s^A and a small parameter j_s^B, which corresponds to a point in the lower-right region of the parameter space in Fig. 4.a (Fig. S4). The resulting external input would be a strong condition-specific external input during movement execution, but a condition-invariant input during movement preparation: the model is such that, for example, exciting for a short time interval a few neurons whose preferred direction corresponds to the direction of motion would be enough to “set the direction of motion” for the network; the pattern of tuned activity could be sustained during the whole delay period thanks to the strong recurrent connections j_s^A. We could not rule out this solution by simply looking at the shape of the latent variables. However, it is a solution we have never observed.
We only found solutions in the region where j_s^B is large and close to its critical value. This implies the presence of condition-specific inputs during the whole delay period, and condition-invariant external inputs that dominate over condition-specific ones during movement execution.
3) In the Abstract and Discussion (first paragraph), the authors highlight that the preparatory and execution-related spaces in the empirical data and their models are not completely orthogonal, suggesting that this near-orthogonality serves an important mechanistic purpose. However, networks have no problem transferring activity between completely orthogonal subspaces. For example, the generator model in Fig. 8 of Elsayed, et al. (2016) is constrained to use completely orthogonal preparatory and execution-related subspaces. As the authors point out in the Discussion, such a strategy only works because the motor cortex received a large input just before movement (Kaufman et al., 2016).
We thank the reviewer for this observation. We would like to stress the fact that we are not claiming that having an overlap between subspaces is necessary to transfer activity. Instead, our model shows that a small overlap between the maps can be exploited by the network to transfer activity between subspaces without requiring direction-specific external inputs right before movement execution. A solution where activity is transferred through feedforward inputs is also possible. Indeed, one of the observations of our work (which we highlight more in the new version of the paper) is that by looking at motor cortical activity only, we are not able to distinguish between the activity generated by a feedforward network, and one generated by a recurrent one. However, we argue that a solution where external inputs are minimized can be favorable from a metabolic point of view, as it requires fewer signals to be transmitted through long-range connections. This informs our cost function, and yields a solution where activity is transferred through recurrent connections, by exploiting the small correlation between subspaces.
Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.
We thank the reviewers for their thorough and insightful evaluations of our manuscript and for their constructive feedback, which have significantly improved the quality of our manuscript. We were pleased to read that all three reviewers found our work novel, interesting, and relevant. In this revised manuscript, we have done our best to address all of the points raised by the reviewers by performing new experiments and revising sections of the text, as requested.
Reviewer #1 (Evidence, reproducibility and clarity):
In this manuscript, the authors show that extracellular Mtb aggregates can cause macrophage killing in a close-contact-dependent but phagocytosis-independent manner. They showed that Mtb aggregates can induce plasma membrane perturbations and cytoplasmic Ca2+ influx with live cell microscopy. Next, the authors show that the type of cell death initiated by extracellular aggregates is pyroptosis, and they partially suppressed cell death with pyroptosis inhibitors. They also identified that PDIM, EsxA/EsxB and EspB all have a role in uptake-independent killing of macrophages, even though their impact varies with respect to membrane perturbation and Ca2+ influx. Finally, they used a small molecule inhibitor, BTP15, to inhibit the effect of ESX-1 during the contact of the extracellular Mtb aggregates with the macrophages, and they observed a substantial decrease in membrane perturbation and macrophage killing.
The work describes a very interesting mechanism by which Mtb can kill macrophages that is possibly relevant in the context of infection.
- In general, there are two main issues with the experiments and the interpretation: the lack of quantitative analysis showing that in a population of macrophages the ones that are in contact with the aggregates die whereas the ones that are not in contact remain alive. This is currently not shown, and it should be added in figure 1.
All our data are based on the visual inspection and annotation of time-lapse microscopy image series, from which it is conclusive that death happens more often among cells in contact with Mtb aggregates (see movies S3 and S6 for representative examples). However, we acknowledge the reviewer’s suggestion that quantitative data supporting this observation might help to convey this conclusion more effectively. Therefore, we have quantified the percentage of dead cells in: I) macrophages in uninfected controls; II) macrophages that establish contact with an Mtb aggregate; III) bystander macrophages that never contact an Mtb aggregate despite being in the same sample as the infected cells, in experiments with (figure 1D) or without (figure 1Q) cytochalasin D treatment. These data have been incorporated as two additional plots in figure 1 in the revised manuscript. We find that uninfected and bystander cells have similar survival probabilities over the time-course of an experiment, whereas most of the cells that physically interact with Mtb aggregates die by the end of the experiment. To further validate these observations, we have also plotted the lifespans of infected cells vs. bystander cells without (figure S3A) and with (figure S3B) cytochalasin D treatment. In these plots, the lifespan of an individual cell is represented by a line; the fraction of the line coloured in black corresponds to the time spent as bystander and the fraction of the line in magenta corresponds to the time spent in contact with an Mtb aggregate. We hope that these new data convincingly show that bystander cells (black lines) survive longer compared to cells that interact with Mtb aggregates (black-magenta lines).
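As a minimal sketch of how such per-group death percentages could be computed from manual annotations, assuming a simple annotation record per cell (the `Cell` structure and all numbers below are hypothetical, not the paper’s data):

```python
from dataclasses import dataclass

@dataclass
class Cell:
    group: str   # "uninfected", "contact", or "bystander" (hypothetical labels)
    died: bool   # whether death was annotated during the imaging experiment

def percent_dead(cells, group):
    """Percentage of annotated cells in `group` that died by the end of imaging."""
    members = [c for c in cells if c.group == group]
    if not members:
        return 0.0
    return 100.0 * sum(c.died for c in members) / len(members)

# toy annotation table: 20 cells per group with made-up outcomes
cells = (
    [Cell("uninfected", False)] * 18 + [Cell("uninfected", True)] * 2 +
    [Cell("bystander", False)] * 17 + [Cell("bystander", True)] * 3 +
    [Cell("contact", True)] * 16 + [Cell("contact", False)] * 4
)

for g in ("uninfected", "bystander", "contact"):
    print(g, percent_dead(cells, g))
```

With annotations of this shape, the three bars of the new figure 1 plots would simply be the three `percent_dead` values.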
- The second is the cell death mode, as the markers used are very different and considering different outcomes (e.g., apoptosis vs. necrosis) are relevant for the infection it is unclear what is being measured here and the impact on bacterial replication.
As the reviewer points out, it has previously been shown that different cell death pathways can affect viability and propagation of intracellular bacteria (1, 2). Since in our experiments we are specifically analyzing extracellular bacteria, we cannot directly comment on how cell death affects intracellular bacterial replication. However, to address the reviewer’s comment, we have included additional data in figure S13A of the revised manuscript showing that specific inhibitors of cell death do not affect the growth or replication of extracellular Mtb. These results suggest that while these molecules do not affect Mtb growth per se, the suppression of these specific death pathways also does not significantly affect the microenvironment to alter Mtb growth (i.e., access to nutrients or molecules released by dead cells). In addition, we have included new data in figure S12 demonstrating the responsiveness of our isolated macrophages to the various cocktails of molecules typically used to induce apoptosis, pyroptosis, or necroptosis.
The authors are showing that infection with Mtb aggregates increases the rate of macrophage killing, but how does this impact infection dissemination and replication of the bacterial aggregates? Is it beneficial for the aggregates? Did the authors check the growth rate of Mtb along with cytochalasin D?
A previous study has shown that phagocytosis of Mtb aggregates leads to macrophage death more efficiently than phagocytosis of a similar number of individual bacteria (3). It has also been shown that Mtb growing on the debris of dead host macrophages forms cytotoxic aggregates that kill the newly interacting macrophages (3). These observations suggest a model in which host cell death induced by Mtb aggregates supports faster extracellular growth and propagation of infection (3). This study was cited in the Introduction section of our manuscript, and our data support these observations. In the revised manuscript, we show that single Mtb bacilli or Mtb aggregates induce macrophage death in a dose-dependent manner (figure S7A,B); however, bacterial aggregates kill more efficiently when compared to similar numbers of non-aggregated bacilli (figure S7A,B). We also show that infection with Mtb aggregates leads to faster bacterial propagation compared to infection with similar numbers of individual bacteria (figure S7C,D). These observations, combined with our data showing that Mtb aggregation also enhances uptake-independent killing of macrophages (figure 2), suggest that Mtb aggregates induce rapid host cell death, allowing the bacteria to escape intracellular stresses, grow faster outside host cells (figure S1B), and propagate to other cells. To address the reviewer’s concern whether cytochalasin D affects Mtb growth, the revised manuscript includes additional data confirming that cytochalasin D does not affect the growth of Mtb aggregates (figure S6).
- How did the authors quantify the interactions of Mtb with macrophages in Figure 1D?
The interactions of Mtb with macrophages were quantified through manual annotation of the time-lapse microscopy image series. If the Mtb aggregates disaggregated upon interaction with the macrophage, resulting in redistribution of smaller aggregates of bacteria, we categorized them as “fragmented”. On the other hand, if the aggregates remained clustered, we categorized them as “not fragmented”. Representative snapshots of these two patterns are presented in figure 1E and 1F and we have included additional representative examples in movies S4 and S5 of the revised manuscript. These interactions are quantified and plotted in figure 1N of the revised manuscript (figure 1D in the original version).
- Is one example of SEM enough to conclude that the fragmentation state of the mycobacteria discriminates whether the bacteria are intracellularly or extracellularly localised? Can the authors use an alternative quantitative method to confirm the localization of the bacteria, e.g. quantification by 3D imaging of these two phenotypes with a cytoskeleton marker (or maybe even with tdTomato-expressing BMDMs)?
In the revised manuscript, we provide additional examples of correlative time-lapse microscopy and SEM images (supplementary figure S5). As suggested by the reviewer, in the revised manuscript we further validate these conclusions using an alternative approach based on correlative time-lapse microscopy followed by confocal 3D imaging. After time-lapse imaging, we fixed the samples and labelled the plasma membrane of the macrophages with a fluorescent anti-CD45 antibody to define the cell boundaries and identify bacteria that are intracellular vs. extracellular. Representative images obtained using this approach have been added to figure 1 and additional examples are shown in supplementary figure S4 of the revised manuscript. The acquisition, processing, and analysis of these 3D images are time-consuming and prevent us from performing an exhaustive quantitative analysis. However, we are confident in our conclusions, since in all of the cells that we analyzed we found that aggregates that are not fragmented within 6 hours of stable interaction with macrophages are visible on the outer side of the plasma membrane.
- How do we know if the cell is lysed at 30 h in Supplementary Figure 1? Did the authors use a marker to detect cell lysis, or is it based just on observation from the live cell imaging? The supplementary movies are actually not very informative, as there are many ongoing events and it is hard to visualise what the authors claim. A marker of cell death should be used in the movies.
In this study, we used brightfield time-lapse microscopy images to identify cell death. Dying macrophages rapidly change shape, lose membrane integrity, and stop moving. Moreover, the intracellular structures and bacteria also stop moving at the time of death of the host cell. While these events can be difficult to distinguish by examining individual snapshots, they are readily identifiable by careful frame-by-frame examination of time-lapse microscopy image series. To exemplify this process, in the revised manuscript we show in supplementary figure S2A how we identify macrophage cell death events. We also include Draq7 (a live cell-impermeable dye commonly used to identify dead cells by flow cytometry and microscopy) in the growth medium during time-lapse imaging in order to label dead macrophages. The timing of staining validates and confirms our strategy of using brightfield time-lapse images to define the time-of-death of individual cells. To further assist readers, in the revised manuscript we provide the time-lapse microscopy movie used to generate this figure (movie S4). Similar images and movies have also been added for cells treated with cytochalasin D (figure S2B; movie S7). As suggested by the reviewer, we also replaced figure S1A with a new figure that shows a representative example of an Mtb intracellular microcolony that, upon death of the host macrophage, grows and forms a large extracellular aggregate on the debris of the dead cell (Draq7-positive). Movie S2 was used to generate this figure. Finally, we replaced figures 1E,F with new figures incorporating the Draq7 staining to label macrophage cell death and we include the time-lapse microscopy movies used to generate these figures (movies S4, S5).
- Total macrophage killing after contact in Figure 1L takes around 12 hours, whereas macrophage death after contact under cytochalasin D treatment in Figure 1M takes even longer than 24 hours. The viability at 12 hours in Figure 1M is similar to the survival upon contact with fragmented Mtb in Figure 1L; why is there a difference in timing with respect to macrophage killing?
We thank the reviewer for this interesting observation. Indeed, we find that macrophages treated with cytochalasin D do take longer to die upon establishing stable interaction with Mtb aggregates in comparison to untreated cells. Although we do not have a clear explanation for this difference in timing, we speculate that by inhibiting actin polymerization and consequently cell motility, cytochalasin D might slow the expansion of the macrophage plasma membrane and the establishment of a larger interface of contact between the cell and the bacterial aggregate, which could influence the timing of cell death.
- Did the authors perform statistical tests for Figure 1D and Figure 1N? p-values should be added.
Figure 1D (figure 1N in the revised manuscript) shows the percentage of interactions between macrophages and Mtb aggregates that do or do not lead to fragmentation of the aggregate. Each dot represents the percentage of these events in one experimental replicate. We included this plot to show that reproducibly in all our replicates approximately 20% of the interactions do not lead to fragmentation of the aggregate. Since the purpose of this plot is not to compare the “fragmented” and “non-fragmented” populations but rather to highlight the reproducibility of the phenomenon, we do not think it would be appropriate to add a p-value. However, figure 1N (figure 1Q in the revised manuscript) has been updated and modified to include statistical analysis and a p-value.
- In Figure 3, do the indicated observations happen in all the macrophages that are in contact with aggregates? This is unclear and critical to support the conclusions. Do all the macrophages that are in contact with Mtb aggregates become Annexin V-positive? Supplementary Figure 2 contains some information regarding this question, but it will be important to show it as a percentage.
In response to the reviewer’s suggestion, we have modified the figure to include quantitation of Annexin-V staining. Approximately 75% of the macrophages that interact with an Mtb aggregate show detectable local Annexin V-positive membrane domains at the site of contact with the aggregate during a typical 60 hour-long experiment. Since most of the macrophages show local Annexin V-positive membrane domains within the first 12 hours upon contact with an Mtb aggregate (figure 3C), we used this criterion for comparison of different conditions or strains (for example, those shown in figure 6F). In addition, we added figure 3D, which shows the behaviour of 105 macrophages upon contact with Mtb aggregates in a typical experiment. In this plot, each line represents the lifespan of an individual cell; the fraction of the line in black represents the time spent as bystander, the fraction of the line in magenta represents the time spent interacting with an Mtb aggregate, and the fraction in green represents the time upon formation of local Annexin V-positive membrane domains at the site of contact with the Mtb aggregate. We believe that this additional information further supports our conclusions that most of the cells in contact with an Mtb aggregate show local Annexin V-positive membrane domains and that cells that show this pattern die faster than cells that do not develop local Annexin V-positive membrane domains.
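The per-cell lifespan lines described above amount to splitting each observed lifespan into colored segments at the per-cell event times. A minimal sketch of that bookkeeping, assuming hypothetical event times in hours (the function name and values are illustrative, not from the paper’s analysis code):

```python
def lifespan_segments(t_end, t_contact=None, t_annexin=None):
    """Split one cell's observed lifespan [0, t_end] into colored segments:
    black = bystander, magenta = in contact with an Mtb aggregate,
    green = after local Annexin V-positive membrane domains appear."""
    breakpoints = [0.0]
    colors = []
    if t_contact is not None and t_contact < t_end:
        breakpoints.append(t_contact)
        colors.append("black")            # bystander phase before contact
        if t_annexin is not None and t_annexin < t_end:
            breakpoints.append(t_annexin)
            colors.append("magenta")      # contact phase before Annexin V foci
            colors.append("green")        # after local Annexin V domains form
        else:
            colors.append("magenta")
    else:
        colors.append("black")            # bystander for the whole observation
    breakpoints.append(t_end)
    return list(zip(breakpoints[:-1], breakpoints[1:], colors))

# a cell contacting an aggregate at 10 h, showing Annexin V foci at 18 h, dying at 30 h
print(lifespan_segments(30.0, t_contact=10.0, t_annexin=18.0))
```

Drawing each returned `(start, end, color)` tuple as a horizontal bar, one row per cell, reproduces the layout of a figure 3D-style plot.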
- Did the authors try to stain Mtb aggregates alone with Annexin-V as a control over the duration of the imaging?
We thank the reviewer for suggesting this control. In supplementary figure S8C of the revised manuscript, we include a representative example of a time-lapse microscopy image series showing Mtb aggregates that never interact with a live macrophage although they are adjacent to a dead cell. As observed in the Annexin V fluorescence images (yellow), these Mtb aggregates never become Annexin-V positive during the course of the experiment (60 hours).
- In Figure 4, did the authors continue to image the cells interacting with Mtb aggregates that do not die after Ca2+ accumulation in Supplementary Figure 3D? Do these cells recover from the plasma membrane perturbation? Did the authors consider using another marker for plasma membrane perturbation together with BAPTA?
Unfortunately, we are not able to image macrophages stained with Oregon Green 488 BAPTA-1 AM for more than 36 hours because they lose fluorescence over time, possibly due to partial dye degradation or secretion. Another issue is that macrophages do not establish synchronous interaction with Mtb aggregates (figure 3D; figure S3B). In order to pool together results from many cells, we analyze all the cells that interact with Mtb within the first 20 hours and we define as timepoint 0 the time at which each individual cell establishes interaction with the bacteria. To compare similar time windows for each cell, we use fluorescence values measured at 16 hours post-interaction with bacteria as a readout. This time window is sufficient to observe formation of local Annexin V plasma membrane domains and death in a relevant number of macrophages (figure 1P; figure 3D). Not all of the contacted cells die within the timeframe of our experiments; however, we believe that if we imaged cells that accumulate Ca2+ for longer durations, we would find that all such cells eventually die. This assumption is consistent with the observation that calcium chelation reduces inflammasome activation and death in macrophages in contact with Mtb aggregates (figure 5D; figure 4E).
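The pooling strategy above (align each cell to its own contact time, then read out fluorescence 16 h later) can be sketched as follows; the data structures and trace values are hypothetical, for illustration only:

```python
def readout_post_contact(traces, contact_times, window=16.0):
    """Align each cell's fluorescence trace to its own contact time and
    read out the value `window` hours post-interaction.
    traces: dict cell_id -> list of (time_h, fluorescence) samples
    contact_times: dict cell_id -> time (h) of first stable contact"""
    values = {}
    for cell, samples in traces.items():
        t0 = contact_times[cell]
        # pick the sample closest to t0 + window
        t, f = min(samples, key=lambda s: abs(s[0] - (t0 + window)))
        values[cell] = f
    return values

traces = {
    "cell_a": [(t, 100 + 5 * t) for t in range(0, 37)],  # hypothetical rising trace
    "cell_b": [(t, 100.0) for t in range(0, 37)],        # hypothetical flat trace
}
contact_times = {"cell_a": 4.0, "cell_b": 12.0}
print(readout_post_contact(traces, contact_times))
```

The per-cell values returned this way are what would be pooled across cells for comparison between conditions.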
With respect to the reviewer’s query whether cells recover from plasma membrane perturbation, in our time-lapse microscopy experiments, we observe that when macrophages form local Annexin V-positive plasma membrane domains at the site of contact with Mtb aggregates, they never revert to an Annexin V-negative status afterwards (figure 3D; movie S7; movie S8). Our SEM data show that Mtb aggregates colocalizing with Annexin V-positive domains are not partially covered by intact membrane, in contrast to those associated with Annexin V-negative macrophages, although they do present vesicles and membrane debris on their surface (figure 3F,G). In the revised manuscript, we include additional fluorescence microscopy images showing that Annexin V-positive foci colocalize with markers for the macrophages’ plasma membrane (figure S8A,B) as well as with more distal areas of the bacterial aggregates, where we do not observe any positive plasma membrane staining (figure S8B). Similarly, although Mtb aggregates that are never in contact with macrophages never become Annexin V-positive (figure S8C), we see that upon macrophage death, aggregates in contact with dead cells retain some Annexin V-positive material on their surface (figure S8C; movie S8). Vesicle budding and shedding is a common ESCRT III-mediated membrane repair strategy that allows removal of damaged portions of the plasma membrane and wound resealing (4). Thus, we think that in our experiments the Annexin V-positive foci might represent both damaged membrane areas and released macrophage plasma membrane vesicles that stick to the hydrophobic surface of the bacterial aggregates. This means that the time of appearance of Annexin V-positive domains marks the time when the macrophage membrane experiences a damaging event. Interestingly, we do not observe a gradual increase in fluorescence intensity of the Annexin V-positive domains, but rather multiple single intensity peaks over time (movie S8). This might suggest that multiple discrete damaging events occur over time.
- In Figure 5D-G it will be important that the authors include dots for each macrophage event for the contact conditions as well, as was done for the bystander condition.
We apologize for using a too-pale shade of magenta in the earlier version of the manuscript, which apparently made the dots in these figures hard to visualize. In the revised manuscript, we use a darker shade of magenta to show the dots corresponding to the fluorescence values of the macrophages in contact with Mtb aggregates.
- How did the authors discriminate between the macrophages that are in contact or not with Mtb aggregates after the staining with Casp-1, pRIP3 and pMLKL? Do the aggregates stay in contact even after the staining procedures? Representative images of the labelling should be included in this figure.
Before fixation, we make sure to remove the medium gently to avoid disrupting the interactions between cells and bacteria. This step most likely removes the floating bacterial aggregates that are not in stable contact with cells but apparently does not detach aggregates that stably interact with cells. Our correlative time-lapse microscopy and immunofluorescence images (figure 1; figure S4), as well as our correlative time-lapse microscopy and SEM images (figure S5; figure 3F,G), confirm that Mtb aggregates that interact with cells during time-lapse imaging are retained on the surface of those cells upon fixation and processing for immunofluorescence or electron microscopy. As we can observe in figure 5B (cell indicated by the white arrow), Mtb aggregates are retained on the debris of dead cells. In figure 5 we distinguish between “in contact” macrophages and “bystander” macrophages by inspecting brightfield images showing the cells and the respective fluorescence images corresponding to the bacteria. If the body of a macrophage identified in the brightfield image overlaps with a bacterial aggregate identified in the fluorescence channel, we define the macrophage as “in contact”; otherwise, it is annotated as “bystander”. We provide representative images in figure S12 and we clarify the definition of “in contact” and “bystander” in the figure legend of figure 5.
- The labelling of Figure 5H needs to be corrected both in the text and in the figure legend.
We thank the reviewer for bringing our attention to this error, which has been corrected in the revised manuscript.
- Pyroptosis inhibitors did reduce the percentage of cell death, but did they also reduce the number of Annexin V-positive domains? This is important, as Annexin V is a marker of apoptosis and the outcome for Mtb is very different.
As pointed out by the reviewer, Annexin V staining is often used as a marker for apoptosis. Typically, apoptotic cells stain positive for Annexin V but negative for other membrane-impermeable markers such as propidium iodide, because they expose phosphatidylserine (bound by Annexin V) on the outer leaflet of the plasma membrane without losing plasma membrane integrity (5). Apoptotic cells often look round and their plasma membrane is stained homogeneously by fluorescently labelled Annexin V (5). In our experiments, we observe that macrophages in contact with Mtb aggregates become Annexin V-positive; however, this happens only at the site of contact with the bacteria (figure 3A; movie S7). Only when cells die and get stained by membrane-impermeable dyes such as Draq7 do they also get stained with Annexin V over the entire membrane debris. We thus use Annexin V staining as a marker for membrane perturbation rather than for cell death. If we were using Annexin V as a marker for cell death, we would expect to see a reduction in Annexin V-positive cells in samples treated with pyroptosis inhibitors. In these samples, we do observe a reduced percentage of cell death in comparison to untreated controls; however, we still observe a comparable percentage of macrophages that stain positive for Annexin V locally, i.e., at the site of contact with bacterial aggregates (supplementary figure S13B). In line with this observation, treated vs. untreated macrophages in contact with Mtb aggregates accumulate similar levels of intracellular calcium. These observations are consistent with our model suggesting that contact with Mtb aggregates induces membrane perturbation, calcium accumulation, inflammasome activation, and pyroptosis in contacted macrophages. Since the death inhibitors used in our study specifically target pyroptosis effectors, we do not expect them to affect upstream events such as membrane perturbation and calcium accumulation.
- The sections for Figure 6 are well described but relatively long, with too many details; it would be helpful to the reader if the authors could combine the sections under one header.
We agree that the text linked to figure 6 is long. We tried to make these sections as concise as possible; however, we are concerned that combining all of the sections under a single header might be at the expense of clarity. Thus, unless the reviewer objects, we would prefer to maintain the use of multiple headers.
- Figure 6F does not have a statistical test and p-value; it will be important to include the statistical test in the legend and the p-values in the figure.
As recommended by the reviewer, we have analyzed the results in figure 6F by using a one-way ANOVA test and we have added the calculated p-values to the figure.
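For reference, the one-way ANOVA applied to figure 6F reduces to the ratio of between-group to within-group mean squares. A minimal pure-Python sketch with made-up measurements (the values below are illustrative, not the paper’s data; in practice the p-value is then obtained from the F distribution with (k-1, n-k) degrees of freedom, e.g. via `scipy.stats.f.sf`):

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand = sum(sum(g) for g in groups) / n  # grand mean of all observations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical per-replicate killing percentages for three strains
wt   = [78, 82, 80, 79, 81, 80]
mut1 = [41, 39, 40, 42, 38, 40]
mut2 = [77, 79, 78, 80, 76, 78]

print(one_way_anova_F(wt, mut1, mut2))  # a large F indicates group differences
```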
Reviewer #1 (Significance):
Based on the literature, Mtb infection and replication can trigger different types of cell death, and most studies have addressed cell death only as an outcome of intracellular replication. This study shows another form of host cell death, associated only with extracellular bacterial aggregates that are in contact with macrophages. Plasma membrane damage initiating pyroptosis has been defined in: "Plasma membrane damage causes NLRP3 activation and pyroptosis during Mycobacterium tuberculosis infection" by K.S. Beckwith et al. (2020). However, the effect of extracellular bacteria on plasma membrane damage was not addressed before, and this paper is addressing an important observation with respect to Mtb evasion and dissemination. These observations represent a novel and interesting aspect of the induction of macrophage cell death by Mtb and are potentially relevant for the disease. If the authors consider the comments listed above, this manuscript will be a novel and relevant addition to the field of host-pathogen interactions in tuberculosis.
We thank the reviewer for their perspective and their positive comments about our work.
Reviewer #2 (Evidence, reproducibility and clarity):
In this work, Toniolo and coworkers use single-cell time-lapse fluorescence microscopy to show that extracellular aggregates of Mycobacterium tuberculosis can evade phagocytosis by killing macrophages in a contact-dependent but uptake-independent manner. The authors further show that this process is dependent on the functionality of the ESX-1 type VII secretion system and the presence of mycobacterial phthiocerol dimycocerosate (PDIM). In essence the authors show that M. tuberculosis can induce macrophage death from the outside of the cell, and dissect the different players that are involved in the process.
Major comments:
- I was intrigued by all the different findings of this work, which was done using bone marrow-derived murine macrophages. However, my first question to the authors is how they imagine this process taking place in an in vivo situation. Do they have evidence that these mycobacterial clumps may form during the initial infection process in the lungs? It would be important to provide more insight and discussion on this question in order to see how relevant the described details are inside the host organism.
Formation of Mtb aggregates in tuberculosis lesions has been documented in several animal models (6, 7) and in humans (8–11). While it is unclear whether mycobacterial aggregates form during the earliest stages of infection, extracellular bacterial aggregates have been observed in animal models of infection within the first month post-infection, and they are often associated with necrotic foci. Moreover, masses of Mtb growing as pellicle-like aggregates are often observed on the surface of cavities in human tuberculosis patients. These observations confirm that Mtb aggregates can form during a tuberculosis infection and that a significant fraction of bacteria are extracellular during different stages of infection. As we observe that macrophages undergo contact-dependent uptake-independent death also in the absence of cytochalasin D in vitro, we assume that this may also happen in vivo when Mtb aggregates are formed or released outside host cells. This process may promote bacterial propagation at early stages of infection as well as at later stages when necrotic granulomas and cavities are formed.
In the revised manuscript we present and discuss our observations in the context of the in vivo phenotypes reported in the scientific literature and we include additional references showing that extracellular Mtb aggregates are often observed in vivo. We also propose this concept already in the Introduction section to better link our observations to possible in vivo scenarios.
Minor comments:
- Line 91: Here the authors list the different forms of cell death that are induced by Mtb infection, and it would be important to add apoptosis as a reported mechanism as well (References: PMID: 23848406, PMID: 28095608).
As suggested by the reviewer, in the revised manuscript we have modified the Introduction section to include apoptosis as a Mtb-induced mechanism of macrophage death and we have cited the two publications recommended by the reviewer.
- Line 95: The secretion of EspE was mainly described in M. marinum while in members of the M. tuberculosis complex no virulence phenotype was reported to the best of my knowledge.
In agreement with the reviewer’s comment, we have modified the sentence and removed EspE from the list of virulence factors.
- Lines 98: In the cited papers it is described that PDIM is required for phagosomal damage/rupture, however, the methods used there do not allow to specifically report about translocation. The wording should be adapted.
We thank the reviewer for this insightful comment and we have modified the text accordingly.
- Line 206: Here it is described that in Figure 3A the BMDMs were expressing tdTomato fluorescence and the bacteria GFP, and the same is also repeated in the Figure legend of Fig3A. However, on the images, BMDMs are shown green and bacterial clumps purple (as also indicated in the description directly on the images) Please check and explain/correct this discrepancy.
We apologize that the color scheme in figure 3A is confusing. In this figure we used tdTomato-expressing BMDMs and GFP-expressing bacteria; however, we have pseudo-colored the fluorescence images for the sake of consistency with the other figures in the manuscript, which always show bacteria in magenta. We have clarified this point in the figure legend of the revised manuscript.
- Line 304: Here the authors could mention that this finding is similar to results found previously in reference PMID: 28095608 and opposite to the results reported previously in PMID: 28505176.
As recommended by the reviewer, we have added a sentence comparing our results with previous studies and we have cited the two references suggested by the reviewer.
- Line 321: It should be mentioned that CFP10 (EsxB) can also be secreted without its EsxA partner (under certain circumstances, i.e. when the EspACD operon is not expressed due to a phoP regulatory mutation (PMID: 28706226)). However, in Figure S7 an EspA deletion mutant shows loss of EsxB secretion. This should be checked, and it should be discussed how the data here compare with data and strains published previously.
We thank the reviewer for pointing out this interesting point. Our proteomics data revealed that both our esxA and espA mutants abolish secretion of both EsxA and EsxB, in line with previously published data (12–14). We do not know why the espA mutant behaves differently from the MTBVAC strain concerning secretion of EsxA and EsxB (although we note that regulatory mutations may have complex pleiotropic effects). In the revised manuscript, we have modified this section to include references highlighting that secretion of these proteins may be uncoupled in some circumstances.
- The finding that EspB can substitute for the loss of virulence due to loss of EsxA/ESAT-6 secretion is astonishing and also differs from previous observations on strains H37Ra and MTBVAC, two attenuated strains that have no or very little EsxA secretion due to a regulation defect of the espACD operon (PMID: 18282096; PMID: 28706226). How does the hypothesis put forward by the authors match with these previously published data?
We thank the reviewer for this interesting comment. We would like to clarify that we are not claiming that EspB and EsxA are in general redundant and that EspB can substitute for EsxA as a virulence factor. In our experiments we show that EspB can induce contact-dependent uptake-independent death in macrophages in contact with Mtb aggregates in vitro even in the absence of EsxA; however, the precise role of EspB during infection in mice or humans remains to be elucidated and is outside the scope of this manuscript. A previous study comparing Mtb ESX-1 mutants with different secretion patterns linked EspB secretion to Mtb virulence in vivo (14); however, the behavior of an isogenic espB deletion strain in vivo was not reported. An M. marinum espB mutant was shown to have reduced virulence; however, in contrast to Mtb, deletion of espB also affects secretion of EsxA in this organism (15). As the reviewer points out, the Mtb strains H37Ra and MTBVAC do not secrete EsxA due to a mutated phoP gene. Previous literature has shown that espB expression is also dependent on PhoP (16). We thus speculate that these strains might behave similarly to our espA espB mutant strain in the context of contact-dependent uptake-independent induction of macrophage death, although we think that this point is outside the scope of our manuscript.
- In the same context, it is worth noting that the authors report on EsxA/EsxB secretion in the paragraph between lines 310-330; however, looking at the Western blots of figure S7, there is no blot showing results using an antibody against EsxA. Given the previously published results that EsxA/EsxB secretion may also be disconnected (PMID: 28706226), the wording of the text in this paragraph should be adapted or results from Western blots using EsxA antibodies should be added.
We agree with the reviewer’s comment. Unfortunately, we currently do not have access to a good antibody for EsxA. A commercial monoclonal antibody that was previously available for immunoblotting has been discontinued. We tried several other antibodies that were previously shown to work in M. marinum, but none of these antibodies were effective in M. tuberculosis. We agree that analysing secretion of EsxB alone might not be sufficient to support claims about EsxA secretion. For this reason, we performed quantitative proteome analysis of the secretome in all of the relevant mutant strains. In our revised manuscript, we are careful to make sure that whenever we refer to EsxA/EsxB secretion we always provide proteomics data to support our conclusions.
- Line 395: Here the authors refer to BTP15, a small molecule that a previous study showed to inhibit EsxA secretion at higher concentrations (1.5 uM and above). However, no effect on the expression of EsxA was described for that compound in reference PMID: 25299337. Thus the corresponding sentence in line 395 needs to be adapted accordingly.
We thank the reviewer for noticing this error, which we have corrected in the revised manuscript.
- Moreover, most concentrations of the compounds used are reported in uM, except for BTP15. It would be easier for the reader if the concentration used for BTP15 could also be reported in uM.
As suggested by the reviewer, in the revised manuscript we report the concentration of BTP15 in μM.
- Line 475: The comment on the pore-forming activity has to be made with caution, as recombinant EsxA produced from E. coli cultures has been shown to often retain detergent (PMID: 28119503) that may be responsible for the pore-forming activity of recombinant EsxA observed in quite a few studies, whereas EsxA purified from M. tuberculosis cultures did not retain the detergent but still showed membranolytic activity. This point should be clarified and discussed, and the wording adapted, as EsxA is not a classical pore-forming toxin but exerts its membrane-lysing activity together with other partners (PDIM) in a yet unknown way upon cell contact.
We thank the reviewer for this comment. In the revised manuscript, we have modified the text accordingly and included the suggested reference.
Reviewer #2 (Significance):
The findings in this work extend the current knowledge on cell infection by M. tuberculosis in a significant way and put extracellular M. tuberculosis clumps in a new context. These data obtained by single-cell time-lapse fluorescence microscopy also need to be discussed with regard to their relevance for the in vivo situation inside the host organism.
As suggested by the reviewer, in the revised manuscript we discuss additional examples from the literature showing that Mtb aggregates can form during infection and that many bacteria are extracellular and associated with necrotic foci during different stages of the disease in animal models of infection and in human patients. We believe that these previously published observations support the in vivo relevance of the process we observe in vitro.
Reviewer #3 (Evidence, reproducibility and clarity):
This is an excellent study distinguished by the volume of observations, rigor of analysis and clarity of presentation. The results are novel, biologically interesting and pathophysiologically important. The ability of aggregated M. tuberculosis to kill macrophages has been reported, but the understanding was that proliferation of Mtb within macrophages killed them. Here, the authors observe that macrophages are susceptible to pyroptotic death triggered by contact with extracellular Mtb aggregates, and that this is not recapitulated by contact with a comparable number of Mtb as single bacilli. The authors go some way to tracing the mechanism and uncover a complex inter-dependence on PDIM and on components of the mycobacterial ESX-1 secretory system.
The following comments will hopefully help improve the study further.
Major points
- The chief measurement in this study is death of individual macrophages as judged by the observer in videomicroscopy. However, the criteria for calling a macrophage "dead" are not defined with any morphological detail, beyond noting that the cell stops moving and lyses. Of course a cell will stop moving if it has lysed, but don't some, if not most, cells stop moving before they lyse? If so, lysis alone would seem to be the time-point marker for cell death. Yet from the images in Fig 1E and F, I cannot tell that the cells called "dead" have lysed. Watching the videos, the time of lysis is not clear to me. Eventually, shrunken cell bodies are obvious, but it is not clear whether these are residua of cells that had been said to "lyse" at an earlier time.
In this study, we used brightfield time-lapse microscopy images to identify cell death. Dying macrophages rapidly change shape, lose membrane integrity, and stop moving. Moreover, the intracellular structures and bacteria also stop moving at the time of death of the host cell. While these events can be difficult to distinguish by examining individual snapshots, they are readily identifiable by careful frame-by-frame examination of time-lapse microscopy image series. To exemplify this process, in the revised manuscript we show in supplementary figure S2A how we identify macrophage cell death events. We also include Draq7 (a live cell-impermeable dye commonly used to identify dead cells by flow cytometry and microscopy) in the growth medium during time-lapse imaging in order to label dead macrophages. The timing of staining validates and confirms our strategy of using brightfield time-lapse images to define the time-of-death of individual cells. To further assist readers, in the revised manuscript we provide the time-lapse microscopy movie used to generate this figure (movie S4). Similar images and movies have also been added for cells treated with cytochalasin D (figure S2B; movie S7). As suggested by the reviewer, we also replaced figures 1E,F with new figures incorporating the Draq7 staining to label macrophage cell death and we include the time-lapse microscopy movies used to generate these figures (movies S4, S5).
- The use of BTP15 as a specific inhibitor of ESX-1 is problematic. The source of the compound is not stated.
The BTP15 molecule was kindly provided by Prof. Stewart Cole, the corresponding author of the article describing the identification of this compound and its effect on Esx-1 secretion (17). We have included this information in the Materials and Methods section.
- The concentration used, 20 ug/mL, is well above the reported IC50 (1.2 uM) for its presumed target, a mycobacterial histidine kinase, and above the concentrations (0.3-0.6 uM) reported to inhibit Mtb's secretion of EsxA almost completely. It is concerning that the concentrations that were reported to work so well on the whole cell are lower than the IC50 for the presumed target, because uptake into Mtb and intrabacterial metabolism will typically lead to a lower potency for an inhibitor against the whole bacterium than against the isolated enzyme; and because 50% inhibition of an enzyme rarely gives a functional effect as complete as what is shown in the cited reference. In other words, it is not clear that the histidine kinase is the functionally relevant target of BTP15 in Mtb. The original report did not consider BTP15's possible effect on mammalian cells and the present authors likewise do not take that into consideration with respect to possible effects on the macrophages. No concentration-response or time course experiments with BTP15 are presented. Most important, unless I missed it, there is apparently no demonstration that the compound inhibited ESX-1-dependent secretion in the present authors' hands, no matter by what mechanism. Without this, I am reluctant to accept that the results with BTP15 demonstrate a dependence of extracellular-aggregate-induced macrophage death on ESX-1-mediated secretion from Mtb. I would recommend that the authors either provide a direct demonstration of BTP15's effect on ESX-1 dependent secretion at concentrations near those that worked on whole cells in the original report, or drop the BTP15 studies from the paper. That said, the genetic experiments remain unequivocal, so the paper's conclusions would not be affected.
We agree with the reviewer that in the original version of our manuscript we did not provide direct evidence demonstrating that BTP15 inhibits ESX-1 secretion and that it does not affect the host cells. We addressed the first issue by quantifying (by Western blot) the secretion of EsxB and EspB in Mtb cultures treated with different concentrations of BTP15. We show that BTP15 reduces secretion of these two proteins in a dose-dependent manner. These data have been included in figures S21A-B of the revised manuscript. In line with this observation, we also show that BTP15 reduces uptake-independent killing of macrophages by Mtb aggregates in a dose-dependent manner (figure 6H). To show that the dose-dependent effect observed in macrophages does not depend on a direct effect of BTP15 on the host cells, we treated Mtb with different concentrations of BTP15 for 48 hours and removed the compound by washing the bacteria prior to infection. We observe that Mtb aggregates that have been treated with BTP15 show reduced uptake-independent killing of macrophages, even when bacteria have been pre-treated and the small molecule is not present during the incubation with the cells (figure S21C). We hope that these additional results provide clear evidence that BTP15 reduces Mtb-mediated contact-dependent uptake-independent killing of macrophages by inhibiting ESX-1 secretion, consistent with our genetic data. We think these results are important because they provide a chemical validation of our genetic data. To the best of our knowledge, BTP15 is the only available compound known to inhibit ESX-1 secretion, and in the revised manuscript we confirm that this compound has the previously described effect on Mtb also in our hands. Unfortunately, we had to use concentrations higher than those previously reported to inhibit ESX-1 secretion in order to achieve the observed effects. 
As we had access only to prediluted aliquots that had been stored for a long time, we cannot rule out the possibility that the compound might have undergone partial degradation during storage.
- The experiments, or at least the discussion, could consider what may distinguish single Mtb cells from aggregated Mtb in some way relevant to the present observations. The authors seem to assume that all the Mtb cells in their preparations are biochemically equivalent and that their distribution into single-cell or aggregate subpopulations is stochastic. What if it is deterministic instead? For example, what if these two subpopulations are defined by differential expression of PDIM, so that the greater macrophage-killing effect of aggregates than single cells in equivalent numbers reflects a greater amount of PDIM in the aggregates, rather than some sort of valency-of-contact effect? The authors could compare the PDIM-to-DNA ratio in the single cell and aggregated subpopulations, or at least discuss this possibility.
We thank the reviewer for proposing this extremely interesting idea. In the revised manuscript, we have added a discussion of this point (lines 487-489) and we have floated various possible explanations. However, we believe that experimental dissection of the underlying mechanism could be a very lengthy undertaking and we hope that the reviewer will agree that this is outside the scope of the current manuscript.
Minor points
- Some of the experiments compare "low", "medium" and "high" numbers of Mtb, but I could not find a definition of these numbers.
We apologize for this oversight. In the revised manuscript, we have clarified the definition of these gates in the figure 2 legend.
- There seem to be no positive or negative controls for any of the antibodies used for cell staining (anti-cleaved caspase-1, anti-phospho-RIP3, anti-phospho-MLKL).
As recommended by the reviewer, the revised manuscript includes controls for all of the antibodies used for immunostaining. In figure S12 we provide representative immunostaining images and fluorescence quantification of uninfected untreated macrophages (negative controls) and of uninfected macrophages treated with cocktails of molecules typically used to induce apoptosis, pyroptosis, or necroptosis (positive controls).
Reviewer #3 (Significance):
The results are novel, biologically interesting and pathophysiologically important.
We thank the reviewer for their appreciation of our findings.
References
1. H. Gan, et al., Mycobacterium tuberculosis blocks crosslinking of annexin-1 and apoptotic envelope formation on infected macrophages to maintain virulence. Nature Immunology 9, 1189–1197 (2008).
2. M. Divangahi, et al., Mycobacterium tuberculosis evades macrophage defenses by inhibiting plasma membrane repair. Nature Immunology 10, 899–906 (2009).
3. D. Mahamed, et al., Intracellular growth of Mycobacterium tuberculosis after macrophage cell death leads to serial killing of host cells. eLife 6, e22028 (2017).
4. A. J. Jimenez, et al., ESCRT Machinery Is Required for Plasma Membrane Repair. Science 343, 1247136 (2014).
5. M. van Engeland, L. J. W. Nieland, F. C. S. Ramaekers, B. Schutte, C. P. M. Reutelingsperger, Annexin V-Affinity assay: A review on an apoptosis detection system based on phosphatidylserine exposure. Cytometry 31, 1–9 (1998).
6. D. R. Hoff, et al., Location of Intra- and Extracellular M. tuberculosis Populations in Lungs of Mice and Guinea Pigs during Disease Progression and after Drug Treatment. PLOS ONE 6, e17550 (2011).
7. S. M. Irwin, et al., Presence of multiple lesion types with vastly different microenvironments in C3HeB/FeJ mice following aerosol infection with Mycobacterium tuberculosis. Disease Models & Mechanisms 8, 591–602 (2015).
8. G. Kaplan, et al., Mycobacterium tuberculosis Growth at the Cavity Surface: a Microenvironment with Failed Immunity. Infection and Immunity 71, 7099–7108 (2003).
9. J. Timm, et al., A Multidrug-Resistant, acr1-Deficient Clinical Isolate of Mycobacterium tuberculosis Is Unimpaired for Replication in Macrophages. The Journal of Infectious Diseases 193, 1703–1710 (2006).
10. R. L. Hunter, Pathology of post-primary tuberculosis of the lung: An illustrated critical review. Tuberculosis 91, 497–509 (2011).
11. G. Wells, et al., Micro-Computed Tomography Analysis of the Human Tuberculous Lung Reveals Remarkable Heterogeneity in Three-dimensional Granuloma Morphology. Am J Respir Crit Care Med 204, 583–595 (2021).
12. S. A. Stanley, S. Raghavan, W. W. Hwang, J. S. Cox, Acute infection and macrophage subversion by Mycobacterium tuberculosis require a specialized secretion system. Proc Natl Acad Sci USA 100, 13001 (2003).
13. S. M. Fortune, et al., Mutually dependent secretion of proteins required for mycobacterial virulence. Proc Natl Acad Sci USA 102, 10676 (2005).
14. J. M. Chen, et al., Mycobacterium tuberculosis EspB binds phospholipids and mediates EsxA-independent virulence. Mol Microbiol 89, 1154–1166 (2013).
15. L.-Y. Gao, et al., A mycobacterial virulence gene cluster extending RD1 is required for cytolysis, bacterial spreading and ESAT-6 secretion. Mol Microbiol 53, 1677–1693 (2004).
16. V. Anil Kumar, et al., EspR-dependent ESAT-6 Protein Secretion of Mycobacterium tuberculosis Requires the Presence of Virulence Regulator PhoP. Journal of Biological Chemistry 291, 19018–19030 (2016).
17. J. Rybniker, et al., Anticytolytic Screen Identifies Inhibitors of Mycobacterial Virulence Protein Secretion. Cell Host & Microbe 16, 538–548 (2014).
Our modern design hinges on crisp minimalism
RIP the old windows logo
Up above, stars twinkle in a cloudless sky, while at ground level electric lamps shine through the windows of nearby houses.
This reminds me of the painting Starry Night by Vincent van Gogh.
It turns out I'm really not a gamer; over the years I've rarely played any of the big online or single-player games, with the exception of Minecraft. I'm a longtime MC player and really love this game, and in my freshman year I bought a legitimate copy of Minecraft. But the Java edition of Minecraft is notorious for poor optimization: even though the hardware requirements aren't high, once you've played for a while and your save files grow, stuttering is inevitable, and my thin-and-light laptop gradually started to struggle.
I also used to play Minecraft on Ubuntu, and it really does run much more smoothly there than on Windows.
Jiao said that winter is the season during which respiratory and other diseases interact, often impacting the elderly; thus relatively large proportions of elderly people have been victims of the current wave of the epidemic. This reminds us to focus more on elderly patients and try our best to save lives.
So then why reopen during the winter? Given that winter is both the season when people are most likely to be in poorly ventilated indoor spaces (too cold to open windows) and a season when people travel a great deal, the timing of China's reopening could not have been worse.
We have one of the most powerful languages for manipulating everything in the browser (ECMAScript/JavaScript) at our disposal, except for manipulating the browser itself! Some browsers are trying to address this (e.g. http://conkeror.org/ -- Emacs-styled tiled windows in a feature branch!) and I will be supporting them in whatever ways I can. What we need is the bash/emacs/vim of browsers -- e.g. coding changes to your browser (Emacs-style) without requiring recompiling and rebuilding.
That was what pre-WebExtensions Firefox was. Mozilla Corp killed it.
See Yegge's remarks on The Pinocchio Problem:
The very best plug-in systems are powerful enough to build the entire application in its own plug-in system. This has been the core philosophy behind both Emacs and Eclipse. There's a minimal bootstrap layer, which as we will see functions as the system's hardware, and the rest of the system, to the greatest extent possible (as dictated by performance, usually), is written in the extension language.
Firefox has a plugin system. It's a real piece of crap, but it has one, and one thing you'll quickly discover if you build a plug-in system is that there will always be a few crazed programmers who learn to use it and push it to its limits. This may fool you into thinking you have a good plug-in system, but in reality it has to be both easy to use and possible to use without rebooting the system; Firefox breaks both of these cardinal rules, so it's in an unstable state: either it'll get fixed, or something better will come along and everyone will switch to that.
Something better didn't come along, but people switched anyway—because they more or less had to, since Mozilla abandoned what they were switching from.
At the time of writing, there is an open bug in Snakemake (version 5.8.2) on Windows systems that prevents requesting specific files from the command line when those files are in a subdirectory.
When running on Windows using Git Bash and Anaconda, the previous code will not work. Multiline strings containing multiple shell commands are not executed correctly. The simplest workaround is to add &&\ to the end of all lines except the last inside the multiline shell command:
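For illustration, here is a minimal hypothetical Snakemake rule using that workaround (the rule name, paths, and commands are made up; only the trailing `&&\` on each non-final line is the point):

```
rule make_results:
    output:
        "results/out.txt"
    shell:
        """
        mkdir -p results &&\
        echo "step one" > {output} &&\
        echo "step two" >> {output}
        """
```

Chaining the commands with `&&` makes the whole block behave as a single shell invocation, so it executes the same way on Git Bash as it would on Linux, and a failure in any step still aborts the rule.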
--annex all
The --annex flag doesn't seem to work for Windows 11 and Ubuntu 22.04 (WSL2)
On an Intel/AMD PC or Mac, docker pull will pull the linux/amd64 image. On a newer Mac using M1/M2/Apple-silicon chips, docker pull will pull the linux/arm64/v8 image.
This is the reason for all the M1 Docker issues.
In order to meet its build-once-run-everywhere promise, Docker typically runs on Linux. Since macOS is not Linux, on macOS this is done by running a virtual machine in the background, and the Docker images run inside that virtual machine. So whether you're on a Mac, Linux, or Windows, you'll typically be running Linux Docker images.
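If you need a specific architecture rather than the auto-selected one, `docker pull` accepts a `--platform` flag; a sketch (the image name is just an example):

```shell
# Request the x86-64 variant explicitly (runs under emulation on Apple silicon):
docker pull --platform linux/amd64 python:3.11

# Inspect which architecture the local image actually is:
docker image inspect python:3.11 --format '{{.Architecture}}'
```

Pinning the platform this way makes builds reproducible across Intel and Apple-silicon machines, at the cost of slower emulated execution on the non-native host.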
Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.
Reviewer #1 (Evidence, reproducibility and clarity (Required)):
Marta Sanvicente-García and colleagues developed a comprehensive and versatile genome editing web application tool and a nextflow pipeline to support gene editing experimental design and analysis.
The manuscript is well written and all data are clearly shown.
While I did not test it extensively, the software seems to work well and I have no reason to doubt the authors' claims.
I usually prefer ready-to-use web applications like outknocker; they are in general easier for rookies to use (it would be good if the authors could cite it, since it is very well implemented), but the nextflow implementation is well suited anyway.
We have been able to analyze the testing dataset that they provide, but we have tried to run it with our dataset and we have not been able to obtain results. We have also tried to run it with the testing datasets of CRISPRnano and CRISPResso2 without obtaining results. In all cases, the error message was: “No reads mapping to the reference sequence were found.”
Few minor points:
Regarding the methods to assess whether the genome editing is working or not, I would definitely include High Resolution Melt Analysis, which is by far the fastest and probably more sensitive amongst the others.
Following the Reviewer 1 suggestion, we have added this technique in the introduction: “Another genotyping method that has been successfully used to evaluate genome editing is high-resolution melting analysis (HRMA) [REFERENCE]. This is a simple and efficient real-time polymerase chain reaction-based technique.”
Another point that would be important to tackle is that these pipelines often do not define the system they are working with (e.g. diploid vs. haploid, etc.). This changes the number of reads needed to unambiguously call the detected genotypes and to perform the downstream analysis (the CRISPRnano authors mentioned this point).
In the introduction, it is already stated: "it is capable of analyzing edited bulk cell populations as well as individual clones". In addition, following this suggestion we have added a recommended sample coverage to the help page of the CRISPR-A web application and to the documentation of the nextflow pipeline to orient users.
I am also wondering whether the name CRISPR-A is appropriate since someone could confuse it with CRISPRa.
CRISPR-A is an abbreviation for CRISPR-Analytics. Even if it is true that it can be pronounced in the same way as CRISPRa screening libraries, it is spelled differently and would be easily differentiated by context.
CROSS-CONSULTATION COMMENTS
Reviewer 2 did excellent work and raised important concerns about the software that need to be addressed carefully.
In the meantime we had more time to test the software and can confirm some of the findings of Reviewer 1:
1) We spent hours running (unsuccessfully) CRISPR-A on Nextflow. The software does not seem to run properly.
2) No manual or instruction can be found on both their repositories (https://bitbucket.org/synbiolab/crispr-a_nextflow/
https://bitbucket.org/synbiolab/crispr-a_figures/)
We have added a readme.md file to both repositories and we hope that with the new documentation the software can be downloaded and run easily. We have also added an example test to the CRISPR-A nextflow pipeline to facilitate testing of the software. Currently, the software is implemented in DSL1 instead of DSL2, making it impossible to run with the latest version of nextflow. We are planning to make the update soon, but we want to do it while moving the pipeline to the crisprseq nf-core pipeline to follow better standards and make it fully reproducible and reusable.
Few more points to be considered:
We have added more details in the methods section “Library prep and Illumina sequencing with Unique Molecular Identifiers (UMIs)” to clarify the process and used terminology: “Uni-Molecular Identifiers are added through a 2 cycles PCR, called UMI tagging, to ensure that each identifier comes just from one molecule. Barcodes to demultiplex by sample are added later, after the UMI tagging, in the early and late PCR.”
We had already explained the computational pipeline through which these UMIs are clustered together to obtain a consensus of the amplified sequences in “CRISPR-A gene editing analysis pipeline” section in methods:
“An adapted version of the extract_umis.py script from the pipeline_umi_amplicon pipeline (distributed by ONT, https://github.com/nanoporetech/pipeline-umi-amplicon) is used to get UMI sequences from the reads when the three-PCR experimental protocol is applied. Then vsearch⁴⁸ is used to cluster UMI sequences. UMIs are polished using minimap2³² and racon⁴⁹ and consensus sequences are obtained using minialign (https://github.com/ocxtal/minialign) and medaka (https://github.com/nanoporetech/medaka).”
We have also added the following to the “CRISPR-A gene editing analysis pipeline” methods section to help readers understand the differences between the barcodes that can be used: “In the case of pooled samples, the demultiplexing of the samples has to be done before running the CRISPR-A analysis pipeline, using the appropriate software for the sequencing platform used. The resulting FASTQ files are the main input of the pipeline.”
Then, SQK-LSK109 from Oxford Nanopore is followed through the steps specified in methods: “The Custom PCR UMI (with SQK-LSK109), version CPU_9107_v109_revA_09Oct2020 (Nanopore Protocol) was followed from UMI tagging step to the late PCR and clean-up step.”
Finally, we want to highlight that, as can be seen in methods as well as in discussion, UMIs are used to group sequences that have been amplified from the same genome and not to identify different samples: “Precision has been enhanced in CRISPR-A through three different approaches. [...] We also removed indels in noisy positions when the consensus of clusterized sequences by UMI are used after filtering by UBS.” As well as in results (Fig. 5C).
We have increased the font size in Figure 5.
We have added a human-classified dataset for the final benchmarking, and we can see that for all examined samples CRISPR-A has an accuracy higher than 0.9. As shown in the figure with manually curated data, CRISPR-A shows good results in noisy samples using the empirical noise removal algorithm, without the need to filter by edition windows.
As can be seen in figure 2A, minimap is one of the alignment methods that gives the best results for the aim of the pipeline. In addition, we have tuned its parameters (Figure 2B) for better detection of CRISPR-based long deletions, which can be more difficult to report in a single open gap of the alignment.
Proper documentation indicating the configuration requirements for installation has been added to the readme.md of the repository.
Principal Component Analysis is used to reduce the number of dimensions in a dataset and to help understand the effect of the explanatory variables, detect trends or samples that are labeled in incorrect groups, simplify data visualization, etc. Even though PC4 explains less variability than PC2 or PC3, it helps us to understand and better decipher the effect of the 4 analyzed parameters, even if the differences are not big. We have decided to include the other PCs as a supplementary figure to show this.
One more drawback is that the software seems to only support single FASTQ uploading (or we cannot see the option to add more FASTQ).
In the case of paired-end reads instead of single-end reads, these can be selected in the web application at the beginning by answering the question “How should we analyze your reads? Type of Analysis: Single-end Reads; Paired-end Reads”. In the case of the pipeline, the documentation now explains how to indicate whether the data is paired-end or single-end; it has to be set in the “input” and “r2file” configuration variables.
In the case of multiple samples, and therefore multiple FASTQ files, there is a button to add more samples in the web application. In the pipeline, multiple samples can be analyzed in a single run by putting them all together in a folder and indicating it with the variable “input”.
Since people usually analyze more than one clone at a time (we usually analyze 96 clones together), this would mean that I have to upload each one of them manually.
All files can be added to the same folder and analyzed in a single run using the nextflow pipeline. The web application has a limit of ten samples, which can be added by clicking the “Add more” button.
Also, the software works with Illumina data in our hands but not with ONT data (this applies to the webserver; the docker does not work at all).
This should be clarified in the manuscript.
If a FASTQ is uploaded to CRISPR-A, the analysis can be run even though we haven't specifically optimized the tool for long-read sequencing platforms. We have checked the performance of CRISPR-A with the CRISPRnano nanopore testing dataset and the analysis succeeded. See results here: https://synbio.upf.edu/crispr-a/RUNS/tmp_1118819937/.
Summary of the results:
| Sample | CRISPRnano | CRISPR-A |
| --- | --- | --- |
| rep_3_test_800 | 42.60% (-1del); 12.72% (-10del) | 71% (-1del); 16% (-10del); – 36 (logo) |
| rep_3_test_400 | 37.50% (-1del); 15.63% (-10del) | 65% (-1del); 28% (-10del); – 38 (logo) |
| rep_1_test_200 | 39.29% (-1del); 8.33% (-17del) | 10del; 17del; 1del |
| rep_1_test_400 | 80.11% (-17del) | del17; del20; del18; del16; del16 |
| rep_0_test_400 | 80.11% (-17del) | del17; del20; del18; del16; del16 |
| rep_0_test_200 | 71.91% (-17del) | del17; del18 |
As we can see from these examples, CRISPR-A reports all indels in general without classifying them as edits or noise. Since nanopore data has a high number of indels arising from sequencing errors, the percentages reported by CRISPR-A are not accurate. Even so, CRISPR-A reports more diverse outcomes, which are probably edits, than CRISPRnano.
Therefore, we have added the following text in results:
“Even though single-molecule sequencing (e.g. PacBio, Nanopore) can be analyzed by CRISPR-A, targeted sequencing-by-synthesis data is required for precise quantification.”
Reviewer #1 (Significance (Required)):
As I mentioned above, I think this could be useful software for people who are screening genome-edited cells. Since CRISPR is widely used, I assume that the audience is broad.
There are many other software tools that perform similarly to CRISPR-A, but this one seems to add a few more things and to be more precise. It is hard to verify everything the authors claim, since that requires a lot of testing and time, and the reviewing period is just two weeks. But 1) I have no reason to doubt the authors, and 2) the software works.
Broad audience (people using CRISPR)
Genetics, Genome Engineering, software development (we develop a very similar software), genetic compensation, stem cell biology
Reviewer #2 (Evidence, reproducibility and clarity (Required)):
Summary:
CRISPR-Analytics, abbreviated as CRISPR-A, is a web application implementing a tool for analyzing editing experiments. The tool can analyze various experiment types - single cleavage experiments, base editing, prime editing, and HDR. The required data for the analysis consists of NGS raw data or simulated data, in fastq, protospacer sequence and cut site. Amplicon sequence is also needed in cases where the amplified genome is absent from the genome reference list. The tool pipeline is implemented in NextFlow and has an interactive web application for visualizing the results of the analysis, including embedding the results into an IGV browser.
The authors developed a gene editing simulation mechanism that enables the user to assess an experiment design and to predict expected outcomes. Simulated data was generated by SimGE over primary T-cells. The parameters and distributions were also fitted for 3 cell lines to make it more generalized (Hek293, K562, and HCT116). The process simulated CRISPR-CAS9 activity and the resulting insertions, deletions, and substitutions. The simulation results are then compared to the experimental results. The authors report the Jensen-Shannon (JS) divergence between the results. The exact distributions that served as input to the JS are not well defined in the manuscript (see below).
To clarify the used distributions in the JS divergence calculation, we have changed the following piece of text in section “Simulations evaluation” of methods:
“Afterward, we tested the performance on the fifth fold, generating the simulated sequences with the same target and gRNA as the samples that belong to the fifth fold, in order to calculate the distance between these. The final validation, with the mean parameters of the different training iterations, was performed on a testing data set that was not used in the training. Validation was done with samples that had never taken part in the training process. The Jensen distance is used to compare the characterization of real samples and simulated samples, since this is the explored distance that best differentiates replicates among samples. In order to obtain the different distributions, the T-cell data, including 1,521 unique cut sites, was split into different datasets based on the different classes: deletions, insertions and substitutions. For each of these classes, giving as input the datasets with only that class, we obtained the distribution for size and then for position of indels. The same was done for the other three cell lines: K562, HEK293 and HCT116, which included 96 unique cut sites, with three replicates each. The whole datasets (with 1,521 and 96 unique cut sites) were split into five folds (four for training and one for testing) plus a validation set, in order to train and validate the simulator. Using the parameters obtained during the training-test iterations (the average value of the 5 iterations), we generate simulated sequences with the same target and gRNA as the samples assigned to the test subset, to calculate the Jensen-Shannon (JS) divergence between the simulated and real samples of that subset. Finally, the same was performed for validation. The input for the distance calculations were the distributions of the classes in the generated simulated subset and its real equivalent (same target and gRNA).”
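As a concrete illustration, the JS divergence between a real and a simulated indel-size histogram can be computed as follows. This is a minimal pure-Python sketch, not the manuscript's code; the two histograms are invented for illustration.

```python
import math

def js_divergence(p_counts, q_counts):
    """Jensen-Shannon divergence (natural log) between two count
    distributions, e.g. indel-size histograms of a real and a
    simulated sample. Counts are normalized to probabilities first."""
    keys = set(p_counts) | set(q_counts)
    p_tot = sum(p_counts.values())
    q_tot = sum(q_counts.values())
    p = {k: p_counts.get(k, 0) / p_tot for k in keys}
    q = {k: q_counts.get(k, 0) / q_tot for k in keys}

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(a[k] * math.log(a[k] / b[k]) for k in keys if a[k] > 0)

    m = {k: 0.5 * (p[k] + q[k]) for k in keys}  # mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical deletion-size histograms (size -> read count)
real = {1: 120, 2: 60, 3: 30, 5: 10}
sim  = {1: 110, 2: 70, 3: 25, 5: 15}
d = js_divergence(real, sim)
```

Note that the "Jensen distance" mentioned in the quoted text is conventionally the square root of this divergence; the divergence itself is symmetric and bounded by ln 2.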
The authors also report an investigation of different alignment approaches and how they may affect the resulting characterization of editing activity.
The authors examine three different approaches to increase what they call "edit quantification accuracy" (aka, in a different place - "precise allele counts determination" - what is this???): (1) spike-in controls (2) UMI's and (3) using mock to denoise the results. See below for our comments about these approaches.
Moreover, the authors developed an empirical model to reduce noise in the detection of editing activity. This is done by using mock (control), and by normalization and alignment of reads with indels, with the notion and observation that indels that are far from the cut site tend to classify as noise.
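As a rough illustration of the mock-based denoising idea described above (a deliberate simplification, not CRISPR-A's actual empirical model, which also normalizes and realigns reads), one can subtract the per-indel background frequency observed in the mock from the treated sample:

```python
def subtract_mock_noise(tx_indels, mock_indels, tx_depth, mock_depth):
    """Subtract per-indel background frequencies observed in the mock
    (control) sample from the treated (Tx) sample, clipping at zero.
    Keys are indel descriptors, e.g. (position relative to cut site,
    signed size); values are read counts. Purely illustrative."""
    corrected = {}
    for indel, count in tx_indels.items():
        tx_freq = count / tx_depth
        mock_freq = mock_indels.get(indel, 0) / mock_depth
        corrected[indel] = max(tx_freq - mock_freq, 0.0)
    return corrected

# Hypothetical counts: a 1-bp insertion at the cut site (real edit) and
# a 2-bp deletion 17 bp away (mostly present in the mock, likely noise)
tx = {(-1, 1): 50, (17, -2): 40}
mock = {(17, -2): 35}
rates = subtract_mock_noise(tx, mock, tx_depth=1000, mock_depth=1000)
```

This captures the observation quoted above that indels far from the cut site, which recur in the mock, are largely removed, while on-target edits survive the subtraction.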
The authors then perform a comparison between 6 different tools, in the context of determining and quantifying editing activities. One important comparison approach uses manually curated data. However - the description of how this dataset was created is far from being sufficiently clear. The comparison is also performed for HDR experiment type, which can be compared only to 2 other tools.
We have replaced "alleles" with "editing outcomes" in the section title, “Three different approaches to increase precise editing outcomes counts determination”, trying to be clearer.
There is already a section in methods “Manual curation of 30 edited samples” explaining how the manual curation was done.
We see the potential contribution aspects of the paper to be the following:
Major comments:
Upon attempting to run an analysis from the web interface (https://synbio.upf.edu/crispr-a) and using the fastq of Tx and mock (control), the human genome and the gRNA sequence provided as input for the protospacer field, our run was not successful. In fact, the site crashed with no interpretable error message from CRISPR-A. We have improved the error handling, together with the explanations in the help page, where you will find a video. Hopefully these improvements will avoid unexpected crashes.
Moreover, there should be clearer context. There is no information regarding the type of experiments that can be analyzed with the tool. We figure it is multiplex PCR and NGS, but can the tool also be used for GUIDESeq, Capture, CircleSeq etc.? The experiments that can be analyzed are specified in Results: “CRISPR-A analyzes a great variety of experiments with minimal input. Single cleavage experiments, base editing (BE), prime editing (PE), predicted off-target sites or homology directed repair (HDR) can be analyzed without the need of specifying the experimental approach.” We have also specified this in the Nextflow pipeline documentation as well as in the web application help page.
No off-target analysis; only on-target. The accuracy of the tool allows checking whether edits at predicted off-target sites are produced, which is an off-target analysis with some restrictions, since only variants of the predicted off-target sites are assessed. Translocations or other structural off-targets will not be detected by CRISPR-A, since the input data analyzed by this tool are demultiplexed amplicon or targeted sequencing samples.
No translocations and long/complex deletions. The nature of the input data does not allow us to detect these. There are other tools, like CRISPECTOR, available for this kind of analysis. We have added this to Supplementary Table 1.
We view the use of a mock experiment as control as a must for any sound attempt to measure edit activity. This is even more so when off-target events need to be assessed (any rigorous application of GE, certainly any application aiming for clinical or crop engineering purposes). We therefore think that all investigation of other approaches should be put in this context. We agree with the necessity of using negative controls to assess editing. For that reason we have included the possibility of using mocks in the quantification. In addition, there are few tools that include this functionality.
It's a nice feature to have simulated data; however, it is not a good approach to rely on it. As can be seen in the manuscript, we highlight the support that simulations can give without pretending that simulated data can substitute for experimental data. Simulated data has been useful in the development and benchmarking of CRISPR-A, but we are aware of the limitations of simulations. Here are some examples from the manuscript explaining how simulated data has been or can be used:
“Analytical tools, and simulations are needed to help in the experimental design.”
“simulations to help in design or benchmarking”
“We developed CRISPR-A, a gene editing analyzer that can provide simulations to assess experimental design and outcomes prediction.”
“Gene editing simulations obtained with SimGE were used to develop the edits calling algorithm as well as for benchmarking CRISPR-A with other tools that have similar applications.”
Even though simulated data has been useful for the development and benchmarking of CRISPR-A, we have also used real data and human-validated data.
In p7 the authors indicate the implementation of three approaches to improve quantification. They should be clear as to the fact that many other tools and experimental protocols also use these approaches; for example, ampliCan, CRISPResso2 and CRISPECTOR all take into account a mock experiment run in parallel to the treatment. Although on page 7 (Results) we do not mention the other tools that also use mocks for noise correction, we detail this information in Supplementary Table 1. CRISPResso2 was not included since it can run mocks in parallel but only to compare results qualitatively, i.e. there is no noise reduction in its pipeline. It has now been added to the table.
Figure1: ○ The figure certainly provides what seems to be a positive indication of the simulations approach being close to measured results. Much more details are needed, however, to fully understand the results.
We have added more details.
○ Squema = scheme ??
We have changed the word “squema” to “diagram”.
○ What was the clustering approach?
As is said in the caption of Figure 1, the clustering is hierarchical: “hierarchical clustering of real samples and their simulations from validation data set.” We have also added that “The clustering distance used is the JS divergence between the two subsets.”
○ What is the input to the JS calculation? What is the dimension of the distributions compared? These details need to be precisely provided.
Each distribution has two dimensions: sizes and counts, or positions and counts.
As said before, to clarify the distributions used in the JS divergence calculation, we have revised the same piece of text in the “Simulations evaluation” section of Methods, quoted in full in our response to the summary above.
○ What clustering/aggregation approach did the authors use here (average dist, min dist, dist of centers?)
Hierarchical clustering.
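For readers unfamiliar with the procedure, agglomerative (hierarchical) clustering with average linkage over a precomputed pairwise distance matrix can be sketched as follows. This is a toy stand-in, not the implementation used for Figure 1C; the three-sample distance matrix is invented, mimicking a real sample, its simulation, and an unrelated sample.

```python
def average_linkage(dist, n):
    """Agglomerative clustering with average linkage over a precomputed
    pairwise distance dict {(i, j): d} with i < j, for n leaf samples.
    Returns the sequence of cluster merges (pairs of cluster ids)."""
    clusters = {i: [i] for i in range(n)}
    merges = []

    def d(a, b):
        return dist[(a, b)] if a < b else dist[(b, a)]

    while len(clusters) > 1:
        ids = sorted(clusters)
        # find the pair of clusters with the smallest average distance
        best = min(
            ((a, b) for i, a in enumerate(ids) for b in ids[i + 1:]),
            key=lambda ab: sum(d(x, y)
                               for x in clusters[ab[0]]
                               for y in clusters[ab[1]])
                / (len(clusters[ab[0]]) * len(clusters[ab[1]])),
        )
        a, b = best
        clusters[a] = clusters[a] + clusters.pop(b)
        merges.append((a, b))
    return merges

# 0 = real sample, 1 = its simulation, 2 = an unrelated sample;
# distances play the role of JS divergences between samples
dist = {(0, 1): 0.02, (0, 2): 0.30, (1, 2): 0.28}
order = average_linkage(dist, 3)
```

With these distances, the real sample and its simulation merge first, which is the behavior highlighted in Figure 1C for replicate pairs.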
○ 5 pairs were selected out of how many? Call that number K.
We have 100 samples in the validation set. Following the suggestion of indicating the total number of samples in the testing set, we have added this information to the figure caption.
○ What does the order of the samples in 1C mean? Is 98_real closer to 22_sim than to 98_sim? If so then state it. If not - what is the meaning of the order? Furthermore - how often, over K choose 2 pairs does this mis-matching occur for the CRISPR-A simulator??
Exactly, it is a hierarchical clustering, where samples are sorted by JS divergence. It was already stated in Results: “In addition, on top of comparing the distance between the experimental sample and the simulated, we have included two experimental samples, SRR7737722 and SRR7737698, which are replicates. These two and their simulated samples show a low distance between them and a higher distance with other samples.” As well as in the Figure 1 caption: “For instance, SRR7737722 and SRR7737698, which cluster together, are the real sample and its simulated sample for two replicates.” Then, since these samples are replicates, their simulations come from the same input, and it is expected to find a low distance between these two real samples as well as between both of them and their simulations. We have stated this in the discussion.
"From the characterized data we obtained the probability distribution of each class" (page 3) - How is this done? How many guides? How many replicates? What is a class? Where do you elaborate on it? How do you obtain the distributions? More details of the methods need to be provided. Added in Methods.
The 96 samples used for development here - where are they taken from? This should be indicated the first time these samples are mentioned, namely at the bottom of P6. Added: “The 96 samples, from these cell lines, are obtained from the public dataset BioProject PRJNA326019.”
CRISPECTOR is not mentioned in the comparison in the section "CRISPR-A effectively calls indels in simulated and edited samples" (Table S2). Is there a specific reason for having left it out? CRISPECTOR, as well as ampliCan, is not in Table S2, since this table shows detailed data from Figure 2. CRISPECTOR is compared with CRISPR-A in Figure 5, where the different approaches to enhance precision, like using a negative control, are explored.
In the section "Improved discovery and characterization of template-based alleles or objective modifications" - part of the analysis was made over simulated data and then over real data. The authors state "it is difficult to explain the origin of these differences...". Thus, needs to be investigated in more detail ... :) (P5) Moreover - the performance over real data is, at the end of the day, the more interesting one for comparison purposes. We have added this sample to the human-validated dataset to better understand what was happening in this case, and the results and pertinent discussion have been added to the manuscript: “CRISPResso2 detects 2% more reads classified as WT. This 2% corresponds to the percentage classified as indels by CRISPR-A. In total, the percentage difference between CRISPResso2 and CRISPR-A for the template-based class is 0.6%, higher in CRISPR-A. The CRISPR-A percentage is closer to the ground truth data than that of CRISPResso2.”
We found no explanation of "spike-in"/"spike experimental data" across the entire article. There is some general language about lengths but the scheme is still totally unclear. We have indicated in the Methods section where we describe the spike-in controls.
Description of the 96 gRNAs? Is this data from REF26? If so - where do you state this? If so - how do the methods described herein avoid the unique characteristics of the data of REF26? We have added the reference: “The 96 samples, from these cell lines, are obtained from a public dataset BioProject PRJNA326019.” In addition, there are other sources of data, simulations and now even human validated data.
"distance between the percentage of microhomology mediated end-joining deletions of samples with the same target was calculated and the mean of all these distances was used to reduce the information of the 96 different targets to a single one." (P6) What is the exact calculation used? Which distance? How was clustering performed? What is the connection to gene expression? The distance used was the Euclidean distance, and the clustering was performed using hierarchical clustering. We have added this information to the manuscript. Regarding the connection to gene expression, we are exploring the correlation of two phenotypes: the gene expression of the proteins differentially related to the NHEJ and MMEJ pathways, and the gene editing landscape (indel patterns that are related to MMEJ and those that are more prone to be generated by NHEJ). We have tried to improve this explanation in the manuscript.
"we have fitted a linear model to transform the indels count depending on its difference in relation to the reference amplicon" (P7) - needs more explanation. Is this part of the pipeline? We have explained better how we fitted the linear model in Methods: “A linear regression model was fitted to obtain the parameters of Equation 1 using spike-in controls experimental data (original count, observed count and size of the change in the synthetic molecules). We have used the lm function from R. Parameter m in Equation 1 is equivalent to the obtained coefficient estimate of x, which was 0.156, and n is the intercept (n=10).”
The model is optionally used as part of the pipeline as explained at the end of section “CRISPR-A gene editing analysis pipeline” to correct amplification biases due to differences in amplicon size. Then, what is part of the pipeline is the use of this model to make the transformation of counts from the observed counts to the predicted original counts. This is done with Equation 1 and can be found in the pipeline (VC_parser-cigar.R).
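Since the exact form of Equation 1 is given only in the manuscript, the following is only a generic sketch of fitting a linear model y = m·x + n by ordinary least squares, the same kind of fit as R's lm(y ~ x). The spike-in data points below are invented so that they lie exactly on the reported line (m = 0.156, n = 10); they are not the manuscript's measurements.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + n.
    Returns the slope m and intercept n."""
    k = len(xs)
    mx = sum(xs) / k
    my = sum(ys) / k
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    n = my - m * mx
    return m, n

# Hypothetical spike-in observations: x = size difference of the
# synthetic molecule vs. the reference amplicon, y = count bias,
# constructed to sit on y = 0.156*x + 10
sizes = [0, 10, 20, 40]
bias  = [10.0, 11.56, 13.12, 16.24]
m, n = fit_line(sizes, bias)
```

Once fitted, applying the inverse of such a transformation to the observed counts is what corrects the amplification bias due to differences in amplicon size.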
What is it "...manually curated data set"? (page 8) This is explained in “Manual curation of 30 edited samples” in methods.
Section "CRISPR-A empiric model removes more noise than other approaches" - with what data were the comparisons performed? Moreover, how were the comparison criteria selected (efficiency and sensitivity)? The literature already used several approaches to compare data analysis tools for editing experiments. See for example ampliCan, Crispresso (1 and 2) and CRISPECTOR. Maybe the authors should follow similar lines. The data used in this comparison comes from the reference 26:“26. van Overbeek, M. et al. DNA Repair Profiling Reveals Nonrandom Outcomes at Cas9-Mediated Breaks. Mol. Cell 63, 633–646 (2016).” We have added it to the manuscript.
The values of efficiency and sensitivity were not used directly for the comparison. We first wanted to evaluate our own algorithm. For that, we obtained the values of efficiency and sensitivity for the previously mentioned dataset. These values were chosen to give an idea of, first, how much noise the algorithm is able to detect and, second, how much of it can be reduced after the Tx vs. Mock process. That established a framework of comparison in which we can then directly compare the editing percentages reported by the different tools.
Regarding the approaches used to compare data analysis tools for editing experiments, we are going to explain why we haven’t followed similar lines or how we have now included it:
In the case of ampliCan, the comparison that they do is with a synthetic dataset with introduced errors:
"synthetic benchmarking previously used to assess these tools (Lindsay et al. 2016), in which experiments were contaminated with simulated off-target reads that resemble the real on-target reads but have a mismatch rate of 30% per base pair".
In CRISPResso2, they benchmarked the efficiency against an inhouse dataset but this dataset is not published. Finally, for the benchmarking of CRISPECTOR, a manual curated dataset is used as a standard: "Assessment of such classification requires the use of a gold standard dataset of validated editing rates. In this analysis, we define the validated percent indels as the value determined through a detailed human investigation of the individual raw alignment results". In this sense, we have added a human validated dataset to do something similar to complement the analysis that we had already done.
In the end, we consider that simulated or synthetic datasets, such as those used by ampliCan or CRISPResso2, do not capture the complete landscape of confounding events that can be detrimental to the analysis results. Similar limitations are found in the use of a gold-standard dataset of validated editing rates, since the number of reads or samples that can be validated by humans is limited, validation being time-consuming. In addition, humans can also make errors and have biases. Even so, we have found it very valuable to add a human-validated dataset to complete our exploration.
“With CRISPECTOR, although it provides extensive statistics and information about the indels, it is not possible to track the reads along its pipeline; thus we cannot know which have been corrected and which have not.”
Section "CRISPR-A noise subtraction pipeline" describes a pretty naive method for noise subtraction (P12). Should be rigorously compared, for Tx vs Mock experiments, to CRISPECTOR and to CRISPResso2. In the section "CRISPR-A empiric model removes more noise than other approaches", we perform an exhaustive comparison with a dataset that contains 288 mock files vs. 864 Tx files. This can be better appreciated in the now-included Supplementary Figure 13A. CRISPResso2 was intentionally left out since its pipeline does not use a model to reduce noise but other approaches, like reducing the quantification window.
"recalculated using a size bias correction model based on spike-in controls empiric data.." (P14). Where is the formula? The formula comes from Equation 1. Now it is correctly referenced.
Section "Noise subtraction comparison with ampliCan and CRISPECTOR" - fake mock was generated for comparison. We consider the avoidance of a Mock control in experiments designed to measure editing activity to not be best practice. It is OK to support this approach in CRISPR-A. However - the comparison to tools that predominantly work using a Mock control (including ampliCan and CRISPECTOR) should be done with actual Mock. Not with fake Mock .... (P15) We understand the reviewer's concern on this point, as the use of a "fake" mock may not be the best practice for general comparisons. Nevertheless, what we wanted to compare here is the difference in the editing percentages with and without a mock. Since CRISPECTOR requires a mock to run on-target data, the only way to replicate the "no mock" condition was to use a synthetic file with the same characteristics as the treated files in terms of depth, but with no editing/noise events, to avoid any correction outside this framework. The other run was made with the 288 real mocks. This was an ad hoc solution for CRISPECTOR; with ampliCan we used only real mocks, since it allows runs without a mock for on-target analysis.
We changed the word “fake” to “synthetic” in the Noise subtraction comparison with ampliCan and CRISPECTOR section:
“As for CRISPECTOR, since it requires a mock file to perform on-target analysis, synthetic mock files were generated”.
Minor comments:
“Even though not all of them have the same missing functionalities, as can be seen in Supplementary Table 1, CRISPR-A is the only tool that can identify the amplicon reference in a reference genome, correct errors through UMI clustering and sequence consensus, correct quantification errors due to differences in amplicon size, and include interactive plots and a genome browser representation of the alignment.”
"Same parameters and probability distributions were fitted for three other cell lines: Hek293, K562, and HCT116 [26], to make SimGE more generalizable and increase its applicability" (page 3) - how was it fitted? It was fitted in the same way as the T-cell samples, as specified in Methods. We have added more detail to Methods explaining how SimGE is built.
What is the "nature of modification"? (P5) We have changed "nature" to "type" for better understanding.
In the section "CRISPR-A effectively calls indels in simulated and edited samples" (P5) towards the end, the authors write that the CRISPR-A algorithm did not give good results for a few examples. They then state that this was corrected and then yielded good results. There is no explanation of what correction was done, if it was implemented in the code and how to avoid/detect it in further cases. The problem was that the reference sequence used was too short. There is no modification in the CRISPR-A code; we have just used the whole amplicon reference sequence obtained with the amplicon reference identification functionality of CRISPR-A. We have tried to explain it better in the manuscript: “Once the reference sequence used is the one corresponding to the whole reference amplicon, obtained with the CRISPR-A amplicon sequence discovery function, CRISPR-A shows a perfect editing profile”
Cell culture, transfection, and electroporation - explanation only for HEK293, what about the others? (P15) We had already explained it for HEK293 and for C2C12, which are the experiments done by us. In the case of the analysis of the three cell lines and 96 targets, we reference the source of the data, as this data was not produced in our lab.
Typos and unclear wording: ○ "obtention" (P8) → changed to "obtaining"
○ "mico" >> micro (P 7,10) → changed
○ "Squema" >> scheme (Fig.1) → changed
○ "decombuled" (P10) → changed to "separated"
○ "empiric" >> empirical (P8 and other places) → changed
○ "Delins" (P14) → this is not a typo; it is used to indicate that a deletion and an insertion have taken place (http://varnomen.hgvs.org/recommendations/DNA/variant/delins/)
○ "performancer" (P9) → changed to "performance"
○ Change word across all article - "edition" to "editing" → changed. In the case of "edition windows", it has been changed to "quantification windows".
○ "...has enough precision to find" (P6) not related to "results" section → We have moved to discussion.
■ No CRISPECTOR in the analysis
It is not included because for on-target analysis this tool requires a mock control sample. For this reason, it is compared in Figure 5D, where samples using negative controls are compared, and in Figure 5E where all tools and their different analysis options are compared.
■ It is simulated data only
Yes, it is. Comparison with real data is done in Figure 2D and 2E. And now we have also added ground-truth data to our comparisons, obtained from human validation of the classification of more than 3,000 different reads.
■ It is not violin plot as mentioned in the description
It is a violin plot, but in general there is not much dispersion of the data points making the density curves flat.
○ Fig 3A - Is it significant? Yes, it is. We have added this information in the caption of the figure.
○ Fig. 4:
■ A
Each row/column is a vector of 96 guides? No, as it is said in the caption of the figure, it is the “mean between the distances calculated for each of the 96 different targets.”
How is the replicate number decided? Is it a different experiment by date? What is separating between experiments? Rep numbers? All this information can be found in the already-referenced paper from which this dataset comes.
■ B - Differential expression:
We have realized that the caption was not correct: the explanation for Fig. 4B was missing, and all the following explanations were shifted to the previous letter. We have corrected it.
How? did you measure RNA? It is already stated in methods that RNAseq data was obtained from SRA database and the analysis was done using nf-core/rnaseq pipeline: “RNAseq differential expression analysis of samples from BioProject PRJNA208620 and PRJNA304717 was performed using nf-core/rnaseq pipeline⁵².”
Is the observed data in the figure sufficiently strong in terms of P-value? Yes, as is highlighted in the plot with ** and ***. We have also added the p-values in the caption of the figure.
Where is the third cell-line? As mentioned in the text, we have chosen only the cell lines that show the largest differences in the percentage of MMEJ: “HCT116 than in K562, which are the cell lines with the major and minor ratios of MMEJ compared with NHEJ, respectively”.
○ Fig.13 - There is no A and B as mentioned in the text
We thank the reviewer for the observation as we mistakenly uploaded the wrong figure. We corrected it.
Reviewer #2 (Significance (Required)):
We repeat the aspects of contribution, as listed in the first part of the review, and comment about significance:
Significant engineering contribution. Nonetheless, we were not able to run the analysis. So - needs to be checked.
Hopefully, now that the documentation is properly added to the repository, it will be easier to run analyses.
The option to simulate an experiment to assess it is a nice feature and can help experiment design
An important methodology contribution
Identification of amplicons when not provided as input
Not important in the context of multiplex PCR and NGS measurement assays, as amplicons will be known. Not clear what other contexts the authors were aiming at.
It is useful to save time: there is no need to look for the sequence of each amplicon and add it as input. Also, it can help to detect unspecific amplification, since all amplicons of the same genome can be retrieved from the amplicon discovery process. In addition, we have already found one example where this avoids getting incorrect results: “Once the reference sequence used is the one corresponding to the whole reference amplicon, obtained with the CRISPR-A amplicon sequence discovery function, CRISPR-A shows a perfect editing profile”. We have added this to the discussion of the manuscript.
Importance/significance needs to be demonstrated
Figure 3 shows the results of template-based and substitution detection. CRISPR-A is a versatile and agnostic tool for gene editing analysis. This means that it is prepared for the analysis of gene editing by future tools, since the cut site or other elements of experiment design are not required. In addition, it has been shown that when a mock is used its performance is comparable to filtering by quantification windows, avoiding the loss of edits when the cut site is shifted.
Significant contribution. However - the methods need to be much better explained and the results better described in order for this to be useful to the community.
We have made an effort to try to be more clear in the description of the results.
Mildly significant technical contribution. However - only addresses on-target. Also addressing off-target would have been significant.
The use of UMIs is something that has never been done before in this context. Sequencing biases are usually not taken into account, and editing percentages are reported as observed. Being able to differentiate between different molecules at the beginning of the amplification allows higher precision, avoiding under- or overestimation of each of the species in a bulk of cells.
In the case of off-targets, the analysis can certainly be done by sequencing the predicted off-target sites. In addition, there are other methods, like GUIDE-seq, that can be used to discover off-targets, but this kind of data is out of the scope of CRISPR-A. Even so, we are aware of the importance of being able to analyse off-targets in the context of a broad analysis platform, and we will take this into consideration when participating in the building of the crisprseq pipeline from nf-core.
As stated - interesting.
The glass can be replaced. But those broken windows are a symbol of a misdirected, angry younger generation.
This reminds me of the news coverage during the 2020-2021 racial unrest following George Floyd's murder. While a lot of people saw the demonstrations and looting as unnecessary and violent, they can also be perceived in the manner that DeBerry had: all of these actions are just a visceral reaction to centuries of brutality and enslavement; what is that compared to weeks and days of retaliation?
Believe it or not, opening a window can actually disrupt that airflow and create sideways spread. If you’re going to open windows, install a baffle (basically a vent) to direct outside air downward.
Airflow must be upwards, or not at all!
Unlike Android's unpolished support for x86, Bergstrom promised a real push for quality with RISC-V, saying, "We need to do all of the work to move from a prototype and something that runs to something that's really singing—that's showing off the best-in-class processors that [RISC-V International Chairman Krste Asanović] was mentioning in the previous talk."
What about Android apps on Windows?
forced to adopt interoperability, the opening of their APIs so that third-party software could more easily interact with the Windows ecosystem
= interoperability
= API
net send * MESSAGE
A command that sends MESSAGE to each computer in the network.
Table 3.5 Practices of Teachers Who Are Effective Classroom Managers

1. Analysis. The teacher carefully analyzes the rules and procedures that need to be in place so that students can learn effectively in the classroom setting.
2. Description. The teacher states the rules and procedures in simple, clear language so that students can understand them easily.
3. Teaching. The teacher systematically teaches the rules and procedures at the start of the school year or when beginning a new course with new students.
4. Monitoring. The teacher continuously monitors students' compliance with the rules and procedures, and also keeps careful records of students' academic work.
5. Physical arrangement of classroom and supplies.
   a. Visibility. Students should be able to see the instructional displays, and the teacher needs a clear view of instruction areas, students' work areas, and learning centers.
   b. Accessibility. High-traffic areas (areas for group work, pencil sharpener, door to the hall) should be kept accessible.
   c. Distractibility. Seating arrangements that can compete with the teacher for students' attention (e.g., students facing the windows to the playground or the door to the hall, or facing each other rather than the teacher) should be minimized.
6. Supplies. The teacher takes care to secure an adequate supply of textbooks and materials for the students in the classroom.

Source: Evertson, C. M. (1987). Managing classrooms: A framework for teachers. In Berliner, D. C., & Rosenshine, B. V. (Eds.), Talks to teachers (pp. 52-74). New York: Random House.

Table 3.6 Classroom Tasks and Situations for Which a Teacher Needs Rules and Procedures

1. Seat assignment in the classroom
2. Start and end of class (e.g., "Be in your seat and ready to work when the bell rings.")
3. Handing in of assignments, materials, etc.
4. Permissible activities if a student completes seatwork early
5. Leaving the room while class is in session
6. Standards for the form and neatness of one's desk, notebooks, assignments, etc.
7. Supplies and materials to be brought to class
8. Signals for seeking help or indicating a willingness to answer a teacher question addressed to the class as a whole
9. Acceptable noise level in the room
10. Acceptability of verbal and physical aggression
11. Moving around the room to sharpen pencils, get materials, etc.
12. Storage of materials, hats, boots, etc., in the classroom
13. Consumption of food and gum
14. Selection of classroom helpers
15. Late assignments and make-up work

Source: Doyle, W. (1986). Classroom organization and management. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 392-431). New York: Macmillan.

…instruction. Eye contact, physical proximity to the misbehaving student, or "the look" are examples of such interventions. In the words of Walter Doyle, "successful interventions tend to have a private and fleeting quality that does not interrupt the flow of events." Other techniques are also effective in managing student misbehavior. Discussion of these techniques, as well as comprehensive models of classroom discipline, is available in various sources.
I coach/mentor a variety of teachers. Some have only been teaching a few years, while the rest have been teaching for 10+ years. With one particular experienced teacher in mind, this long annotation is a reminder that, regardless of experience, classroom management is something that needs to be put in place from the start.
As a coach, I will copy this and have teachers/mentees I coach self-reflect and assess how they are doing in these specific categories as we move into the new year.
as a developer, writing against the Win32 APIs allowed your software to run on over 90 percent of the computers in the world
(Something else that has changed in the intervening years; most computers in the world—or a plurality of them, at least—are now running Android, not Windows, but Win32 is useless on Android. It's no help on iOS, either.)