  1. Dec 2022
    1. I don’t like to look out of the windows even—there are so many of those creeping women, and they creep so fast. I wonder if they all come out of that wallpaper as I did?

      This passage marks the exact moment the woman sees her full identity, when she finally makes the connection she has been avoiding: the woman behind the pattern was herself; she had been the one creeping all along.

    1. Translated by zalas as part of al|together 2005, io[ChristmasEve] is a novel game made in the KiriKiri engine. The game received an update a year after the release of the translation patch which caused the patched game to crash immediately after starting, so my friend at the group Re*define has made a fix for it, making the game playable from start to finish. The fix consists of removing malfunctioning lines calling for an apparently nonexistent object called options2Menu. The XP3 archive was worked on with insani's xp3tools-20060708 available at http://insani.org/tools/. The fix has been tested on Windows XP and Windows 10.

      Thanks to Re*Define and Kaisernet, the original 2005 English-language translation of io [Christmas Eve], a freeware Japanese visual novel written in KiriKiri, is playable in English. The 2005 patch ceased working because the Japanese game received a minor update in 2006. Kaisernet notes that "[t]he fix has been tested on Windows XP and Windows 10." I personally tested it on Linux, running the patched game under WINE, and found that it works without issue.

    1. Steam emulator for GNU/Linux and Windows that emulates steam online features. Lets you play games that use the steam multiplayer apis on a LAN without steam or an internet connection.

      An interesting project: it makes it possible to run some Steam games that would ordinarily require Steam to be running without Steam at all.

    1. I was on the shuttle at 1 a.m. The compartment was dark with no lights on internally. When the shuttle drove past the reservoir, I looked out the windows. The blue lake looked black and I could only recognize it by the little flashes of moonlight reflected on the surface of the lake.

      These are concrete details about the environment of one "sad night" when I experienced depressive emotions. I included them because of what we learned in class: concrete details help readers inhabit the story and understand the author.

    1. The men overwhelmed law and order. They pulled down road signs. They smashed windows of the congested streetcars. They toppled telephone booths and lit newspaper kiosks on fire. They heaved bricks from a nearby construction site through the Forum windows. When one young man was arrested and taken into a police car, the protestors began rocking the car, and the police officer feared they would flip it. He told his driver, “When both back wheels touch the ground, gun it!”

      Sports on their own provoke an emotional response; I can imagine that adding a political dimension on top of that made this situation explosive.

    2. The mob — for by now it had become a mob — headed eastward down St. Catherine Street’s shopping district. They shattered display windows and carried away what they could. They crashed windows of banks and the post office. They terrified patrons of a restaurant and bar with the objects they flung through windows. They pulled cabbies from their taxis and beat them. Twelve policemen and 25 civilians suffered injuries.

      What is wrong with these people? I get sports, but man this is scary behavior.

    3. The next night nobody threw galoshes, nobody broke any more windows, nobody stopped streetcars

      He may not have meant to start a revolution for the French people, but he definitely had the standing and the platform to put an end to it. He became the voice of the people.

    4. Reeve held Richard himself responsible: “Why should Richard, for whom the game is made to order, take tantrums like a spoiled child and incite a lot of crack-pots such as the tear-gas bomb thrower at the Forum and the fools who broke windows and took after streetcars last night in Montreal?”

      I have my own opinions about professional athletes and their in-game tantrums, but I suspect that, on top of an uncontrolled temper, Richard suffered from CTE; the more I read this, the more that suspicion is supported. It makes for a terrible situation.

    1. Author Response

      Reviewer #1 (Public Review):

      The authors succeeded in fitting their Jansen-Rit model parameters to accurately reproduce individual TEPs. This is a major success already and the first study of this kind to the best of my knowledge. Then the authors make use of this fitted model to introduce virtual lesions in specific time windows after stimulation to analyze which of the response waveforms are local and which come from recurrent circles inside the network. The methodological steps are nicely explained. The authors use a novel parameter fitting method that proves very successful. They use completely openly available data sets and publish their code in a manner that makes reproduction easy. I really enjoyed reading this paper and suspect its methodology to set a new landmark in the field of brain stimulation simulation. The conclusions of the authors are well supported by their results, however, some analysis steps should be clarified, which are specified in the essential revisions.

      We are delighted and flattered by the Reviewer’s positive evaluation of our work, and appreciation of our efforts to maximize its reproducibility. We wish also to thank the Reviewer for their compelling and interesting points, which we have addressed in full, and we believe further enhance the quality of the paper. Thanks again!

      Reviewer #2 (Public Review):

      Here the authors tackle the problem of identifying which parts of a TMS-evoked response are local to the stimulation site versus driven by reverberant activity from other regions. To do this they use a dataset of EEG recorded simultaneously with TMS pulses, and examine virtual lesions of a network of neural masses fitted to the data. The fitting uses a very recent model inversion method developed by the authors, able to fit time series directly rather than just summary statistics thereof. And it apparently works rather well indeed, at least after the first ~50 ms post-stimulus. I expect many readers will be keen to try this fitting method in their own work.

      We are delighted by the Reviewer’s appreciation and consideration of our paper. We have addressed the comments and revisions requested following the flow suggested by the Reviewer’s comments. We would take this opportunity to kindly thank the Reviewer for his/her contribution and for helping us to improve the manuscript.

      Reviewer #3 (Public Review):

      The manuscript is very well written and the graphics are quite iconic. Moreover, the hypothesis is clear and the rationale is very convincing. Overall, the paper has the potential of being of paramount importance for the TMS-EEG community because it provides a valuable tool for a proper interpretation of several previously published TMS-EEG results.

      Unfortunately, in my opinion, the dataset used to train and validate the method does not support the implication and interpretation of the results. Indeed, as clearly visible from most of the figures and mentioned by the authors of the database, the data contains residual sensory artifacts (auditory or somatosensory) that can completely bias the authors' interpretation of the re-entrant activity.

      We are most grateful to the Reviewer for their positive evaluation of our manuscript. We also sincerely appreciate all the comments and suggestions raised, and for contributing their evident expertise with TMS-EEG data towards the constructive improvement of this research. We hope the Reviewer will appreciate our efforts made to address their excellent points, and are pleased with the resultant strengthening of the paper.

    1. Docker Desktop is a free, easy-to-install, downstream application for a Mac or Windows environment. The application lets you build and share containerized applications and microservices. Docker consists of Docker Engine, Docker Compose, Docker CLI client, Docker Content Trust,  Kubernetes, and Credential Helper.
    1. Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.

      This is scary to think about. Those passionate fans had to have been in so much fear too. Sports can bring out the worst in people.

    2. Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.

      It's unfortunate that this still happens to this day. The few can ruin it for the many.

    3. Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.

      I can't imagine this taking place.

    4. Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.

      Scary and crazy, out of control... witnessing that would be awful.

    5. Out on the street, the largest riot since Conscription was passed in 1944 (bringing in the draft for the final year of the Second World War) broke out along a seven-block length of Rue Ste. Catherine, featuring overturned cars, smashed windows, a shot fired from somewhere and 137 arrests.

      They were upset because of the game suspensions. Sports have always had that effect on fan bases; they will act out whether things go their way or not. A good example of this is the Philadelphia Eagles fan base.

    1. Author Response:

      We thank the three reviewers for their thoughtful comments and constructive critique.

      Reviewer #1 (Public Review):

      Hu et al. present findings that extend the understanding of the cellular and synaptic basis of fast network oscillations in the sensory cortex. They developed the ex vivo model system to study synaptic mechanisms of ultrafast (>400Hz) network oscillation ("ripplets") elicited in layer 4 (L4) of the barrel cortex in the mouse brain slice by optogenetically activating thalamocortical axon terminals at L4, which mimic the thalamic transmission of somatosensory information to the cortex. This model allowed them to reproduce extracellular ripplet oscillations in the slice preparation and investigate the temporal relationship of cellular and synaptic response in fast-spiking (FS) inhibitory interneurons and regular spiking (RS) with extracellular ripplet oscillations to common excitatory inputs at these cells. FS cells show precisely timed firing of spike bursts at ripplet frequency, and these spikes are highly synchronized with neighboring FS cells. Moreover, the phase-locked temporal relationship between the ripplets and responses of FS and RS cells, although different phases, to thalamocortical activation are found to closely coincide with EPSCs in RS cells, which suggests that common excitatory inputs to FS and RS cells and their synaptic connectivity are essential to generate reverberating network activity as ripplet oscillations. Additionally, they show that spikes of FS cells in layer 5 (L5) reduced in the slice with a cut between L4 and L5, proposing that recurrent excitation from L4 excitatory cells induced by thalamocortical optogenetic stimulation is necessary to drive FS spike bursts in layer 5 (L5).

      Overall, this study helps extend our knowledge of the synaptic mechanisms of ultrafast oscillations in the sensory cortex. However, it would have been nice if the authors had utilized various methodologies and systems.

      Although the overall findings are interesting, the conclusion of the study could have been strengthened according to the following points:

      1. The authors investigate the temporal relationship between ripplets and FS and RS cells' response elicited by optogenetic activation of TC axon terminals, which is mainly supported by phase-locked responses of FS and RS cells with local ripplets oscillations to optogenetic activation. They also show highly synchronized FS-FS firing by eliminating electrical gap-junction and inhibitory synaptic connections to this synchrony. Based on these findings, the authors suggest that common excitatory inputs to FS and RS cells in L4 would be essential to generate these local ripplets. However, it interferes with the ability to follow the logical flow for binding other findings of phase-locking responses of FS and RS cells in ripplet oscillations in L4.

      We understand the reviewer’s issue with the logical flow of our argument. We will address this concern by textual changes and/or by rearranging the order of the presentation and figures.

      2. The authors suggest that the optogenetic activation of TC axon terminal elicits local ripplet oscillations via synchronized spike burst of FS inhibitory interneurons and alternating EPSC-IPSC of RS cells in phase-locked with ripplets in L4 barrel cortex, which would be generated by following common excitatory inputs from the local circuits to these cells at the ripple frequency. Thus they intend to investigate the source of these excitatory inputs at this local network of L4 by suppressing the firing of L4 RS cells. However, they show FS spike bursts in L5B, instead of L4, due to the technical limitations of their experimental setup, as described in the manuscript. Although L5 FS spike bursts decrease after cutting the L4/L5 boundary, supposedly inhibiting excitatory input from L4 as depicted in Fig 6D in the author's manuscript, the interpretation of data seems overly extended because it does not necessarily represent cellular and synaptic activities which are phase-locked with the ripplets observed in L4.

      We have not studied network oscillation in layer 5 at the same level of detail we have studied layer 4; however the oscillations in both layers are phase locked. We will show this as supplemental data in the revised manuscript.

      3. Authors suggested a circuit model. It would be recommended that the authors try to perform in silico analysis using the suggested model to explore the function of thalamocortical axons on the fast-spiking and regular-spiking neurons to support their circuit model.

      We agree that a computational model of the layer 4 network, demonstrating ripplets in silico, would enhance our understanding of this re-discovered ultrafast oscillation. Moreover, such a model would also help constrain the allowable parameter space of other, existing models of layer 4 or of the complete cortical column, as the ability of these existing models to recreate ripplets in response to strong, synchronous thalamocortical activation could now be used as a stringent test of the assumptions underlying these models. We hope to reproduce ripplets in silico, within an experimentally constrained parameter space, in a near future study.

      Reviewer #2 (Public Review):

      This manuscript studied potential cellular mechanisms that generate ultrafast oscillations (250-600Hz) in the cortex. These oscillations correlate with sensory stimulation and might be relevant for the perception of relevant sensory inputs. The authors combined ex-vivo whole-cell patch-clamp recordings, local field potential (LFP) recordings, and optogenetic stimulation of thalamocortical afferents. In a technical tour de force, they recorded pairs of fast-spiking (FS)-FS and FS-regular-spiking (RS) neurons in the cortex and correlated their activity with the LFP signal.

      Optogenetic activation of thalamic afferents generated ripple-like extracellular waveforms in the cortex, which the authors referred to as ripplets. The timing of the peaks and troughs within these ripplets was consistent across slices and animals. Activation of thalamic inputs induced precisely timed FS spike bursts and RS spikes, which were phase-locked to the ripplet oscillation. The authors described the sequences of RS and FS neuron discharge and how they phase-locked to the ripplet, providing a model for the cellular mechanism generating the ripplet.

      The manuscript is well-written and guides the reader step by step into the detailed analysis of the timing of ripplets and cellular discharges. The authors appropriately cite the known literature about ultrafast oscillations and carefully compare the novel ripplets to the well-known hippocampal ripples. The methods used (ex-vivo patch-clamp and LFP) were appropriate to study the cellular mechanisms underlying the ripplets.

      Overall, this manuscript develops means for studying the role of cortical ultrafast oscillations and proposes a coherent model for the cellular mechanism underlying these cortical ultrafast oscillations.

      We thank the reviewer for his supportive comments.

      Reviewer #3 (Public Review):

      In this study, Hu et al. aimed to identify the neuronal basis of ultrafast network oscillations in S1 layer 4 and 5 evoked by the optogenetic activation of thalamocortical afferents in vitro. Although earlier in vivo demonstration of this short-lived (~25 ms) oscillation is sparse and its significance in detecting salient stimuli is not known the available publications clearly show that the phenomenon is consistently present in the sensory systems of several species including humans.

      In this study using optogenetic activation of thalamocortical (TC) fibers as a proxy for a strong sensory stimulus the in vitro model accurately captures the in vivo phenomenon. The authors measure the features of oscillatory LFP signals together with the intracellular activity of fast-spiking (FS) interneurons in layer 4 and 5 as well as in layer 4 regular spiking (RS) cells. They accurately measure the coherence of intra- and extracellular activity and convincingly demonstrate the synchronous firing of FS cells and antiphase firing of RS and FS cells relative to the field oscillation.

      Major points:

      1) The authors conclude the FS cell network has a primary role in setting the frequency of the oscillation. While these data are highly plausible and entirely consistent with the literature only correlational not causal results are shown thus direct demonstration of the critical role of GABAergic mechanisms is missing.

      We find that blocking fast inhibition (by puffing a gabazine solution locally) converts ripplets into long-duration paroxysmal events with high-frequency firing of both RS and FS cells. While we do not think that this experiment is diagnostic in distinguishing between competing models (in all models fast inhibition is a necessary component), we will add these experiments as supplemental material.

      2) The authors put a strong emphasis on the role of RS-RS interactions in maintaining the oscillation once it was launched by a TC activity. Its direct demonstration, however, is not presented. The alternative scenario is that TC excitation provides a tonic excitatory background drive (or envelope) for interacting FS cells which then impose ultrafast, synchronized IPSPs on RS cells. Similar to the RS-RS drive in this scenario RS cells can also only fire in the "windows of opportunity" which explains their antiphase activity relative to FS cells, but RS cells themselves do not participate in the maintenance of oscillation. Distinguishing between these two scenarios is critical to assess the potential impact of ultrafast oscillation in sensory transmission. If TC inputs are critical the magnitude of thalamic activity will set the threshold for the oscillation if RS-RS interactions are important intracortical operation will build up the activity in a graded manner.

      Earlier theoretical studies (e.g Brunel and Wang, 2003; Geisler et al., 2005) strongly suggested that even in the case of the much slower hippocampal ripples (below 200 Hz) phasic activation of local excitatory cells cannot operate at these frequencies. Indeed, rise time, propagation, and integration of EPSPs can likely not take place in the millisecond (or submillisecond) range required for efficient RS-RS interactions. The alternative scenario (tonic excitatory background coupled with FS-FS interactions) on the other hand has been clearly demonstrated in the case of the CA3 ripples in the hippocampus (Schlingloff et al., 2014. J.Nsci).

      The Schlingloff et al. study is important, and we actually think that their results, and many of their conclusions, are consistent with our own. We agree with these authors that “…PV cells are essential for the initiation and maintenance of sharp waves and the generation of ripple oscillations”, that “…perisomatic inhibition enforces ripple synchrony by phase-locking firing during SWRs”, and also that “…neuronal coupling via gap junctions is not essential in ripple synchronization”. We also agree that “The tonic excitatory ‘envelope’ arising from the buildup of activity of PCs drives the firing of PV cells”, as far as initiation of ripples in CA3 is concerned. In our model system, thalamocortical excitation serves the same role, of initiating the oscillation. However I do not see how the data of Schlingloff et al. support the conclusion that (in the legend to their Fig. 11) “…there is no cycle-by-cycle reciprocal interaction between the PCs and the PV [interneurons]”, or the implication that FS cells function as independent pacemakers “…because of their reciprocal inhibition”, as their FINO model suggests. The Schlingloff et al. data clearly show cycle-by-cycle alternations of EPSCs and IPSCs (their Fig. 1C, D, as well as their Fig. 7B), as we show in our Fig. 5A. These phasic EPSCs, occurring at ripple frequency, by necessity originate from pyramidal cells synchronized (as a population) to the ripple oscillation, as indeed shown in their multi-unit recordings. This precise, phasic (and clearly not “tonic”) excitatory drive cannot be uncoupled from the ripple (or ripplet) oscillation, and therefore cannot be dismissed as a factor driving the oscillation.

      The strongest evidence the Schlingloff et al. study provides that FS cells synchronize independently of excitatory cells – and then impose this oscillation on the excitatory cells - is in their demonstration of ripples generated by prolonged direct optogenetic stimulation of PV cells, in the presence of glutamatergic blockers (their Fig. 6). However this manipulation worked only in some of their slices, and the oscillations only lasted as long as the light stimulus and therefore were exogenously driven rather than network driven. They do not show intracellular responses from either inhibitory or excitatory cells, nor multi-unit activity, during this manipulation, so it is difficult to know if excitatory cells were indeed entrained to the same frequency, as the FINO model posits. Nevertheless this is a very interesting experiment which we plan to attempt in our own model system in a future study.

      When the properties of the ultrafast oscillation were tested as various stimulation strengths (Figure 2) weaker stimulation resulted in less precise timing. If TC input is indeed required only to launch the oscillation not to maintain it, this is not expected since once a critical number of RS cells were involved to start the activity their rhythmicity should no longer depend on the magnitude of the initial input. On the other hand, if the entire transient oscillation depends on TC excitation weaker input would result in less precise firing.

      Our interpretation for the lesser spike precision with a weaker optogenetic stimulation is that fewer FS cells fired upon the initial thalamocortical volley, and therefore a weaker IPSP wavefront was propagated to RS cells allowing for a wider “window of opportunity” for RS firing,  and this loss of synchrony then propagated from cycle to cycle. This interpretation will be added in the revised manuscript.

      3) The experiments indicating the spread of phasic activity from L4 RS to L5 FS cells can not be accepted as fully conclusive. The horizontal cut not only severed the L4 RS to L5 FS connections but also many TC inputs to the L5 FS apical dendrites as well as the axons of L4 FS cells to L5 FS cells both of which can be pivotal in the translaminar spread.

      FS cells do not have apical dendrites so we assume the reviewer meant to say “L5 RS apical dendrites”; however if the cut reduced the excitability of L5 RS cells, that only strengthens our conclusion that RS firing is required for maintaining the oscillation. While the cut could have also disrupted L4 FS to L5 FS connections, we are not aware of any evidence that such inter-laminar connections exist. On the other hand, the Pluta et al. 2015 study shows very robust excitatory connections between L4 RS and L5 FS cells.  

      Having said that, we agree with the reviewer (indeed, with all three reviewers) that the L4/L5 cut experiments are not conclusive, and we will make this clear in our discussion in the revised manuscript. We plan to do a more conclusive test of our model by using a transgenic line to express inhibitory opsins specifically in L4. This will require expressing ChR2 in the thalamus by virus injection and a careful comparison of ripplets between the two models; we therefore reserve these experiments to a future study.

    1. It was nursery first and then playroom and gymnasium, I should judge; for the windows are barred for little children, and there are rings and things in the walls.

      Because of the way the narrator says that John takes care of her to the point that he, "is very careful and loving, and hardly lets me stir without special direction", I feel like it's fitting that she's in a nursery because she's treated like a child and John's the overbearing mother. By the windows being "barred for little children" it suggests that the narrator is imprisoned.

    1. type 2 hypervisor is hosted, running as software on the O/S, which in turn runs on the physical hardware. This form of hypervisor is typically used to run multiple operating systems on one personal computer, such as to enable the user to boot into either Windows or Linux.
    1. Author Response

      Reviewer #1 (Public Review):

      This paper describes the results of a MEG study where participants listened to classical MIDI music. The authors then use lagged linear regression (with 5-fold cross-validation) to predict the response of the MEG signal using (1) note onsets (2) several additional acoustic features (3) a measure of note surprise computed from one of several models. The authors find that the surprise regressors predict additional variance above and beyond that already predicted by the other note onset and acoustic features (the "baseline" model), which serves as a replication of a recent study by Di Liberto.

      They compute note surprisal using four models (1) a hand-crafted Bayesian model designed to reflect some of the dominant statistical properties of Western music (Temperley) (2) an ngram model trained on one musical piece (IDyOM stm) (3) an n-gram model trained on a much larger corpus (IDyOM ltm) (4) a transformer DNN trained on a mix of polyphonic and monophonic music (MT). For each model, they train the model using varying amounts of context.
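      As a toy illustration of the n-gram idea (an invented sketch, far simpler than IDyOM; the note names, alphabet, and add-one smoothing are all assumptions), surprisal can be computed as -log2 of a note's probability given the k preceding notes, estimated from the piece heard so far:

```python
# Invented sketch of n-gram note surprisal (not IDyOM): count, in the
# piece heard so far, how often the current context was followed by
# each note, with add-one (Laplace) smoothing over the pitch alphabet.
import math

def ngram_surprisal(seq, i, k, alphabet):
    """Surprisal in bits of seq[i] given up to k preceding notes."""
    ctx = tuple(seq[max(0, i - k):i])
    c = len(ctx)
    matches = continuations = 0
    for j in range(c, i):                # scan only the history seq[:i]
        if tuple(seq[j - c:j]) == ctx:
            matches += 1
            if seq[j] == seq[i]:
                continuations += 1
    p = (continuations + 1) / (matches + len(alphabet))
    return -math.log2(p)

# After hearing C-D alternate four times, D-after-C is expected...
expected = ngram_surprisal(list("CDCDCDCDCD"), 9, 1, {"C", "D"})
# ...while a repeated C in the same position is surprising.
unexpected = ngram_surprisal(list("CDCDCDCDCC"), 9, 1, {"C", "D"})
```

      With an alternating pattern, the learned continuation comes out far less surprising than an unexpected repeat; this contrast is the kind of quantity the compared models produce, note by note.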

      They find that the transformer model (MT) and the short-term n-gram model (IDyOM stm) give the best neural prediction accuracy, both of which give ~3% improvement in predicted correlation values relative to their baseline model. In addition, they find that for all models, the prediction scores are maximal for contexts of ~2-7 notes. These neural results do not appear to reflect the overall accuracy of the models tested, since the short-term n-gram model outperforms the long-term n-gram model and the music transformer's accuracy improves substantially with additional context beyond 7 notes. The authors replicate all these findings in a separate EEG experiment from the Di Liberto paper.

      Overall, this is a clean, nicely-conducted study. However, the conclusions do not follow from the results for two main reasons:

      1) Different features of natural stimuli are almost always correlated with each other to some extent, and as a consequence, a feature (e.g., surprise) can predict the neural response even if it doesn't drive that response. The standard approach to dealing with this problem, taken here, is to test if a feature improves the prediction accuracy of a model above and beyond that of a baseline model (using cross-validation to avoid over-fitting). If the feature improves prediction accuracy, then one can conclude that the feature contributes additional, unique variance. However, there are two key problems: (1) the space of possible features to control for is vast, and there will almost always be uncontrolled-for features; (2) the relationship between the relevant control features and the neural response could be nonlinear. As a consequence, if some new feature (here surprise) contributes a little bit of additional variance, this could easily reflect additional uncontrolled features or some nonlinear relationship that was not captured by the linear model. This problem becomes more acute the smaller the effect size, since even a small inaccuracy in the control model could explain the resulting finding. This problem is not specific to this study but is a problem nonetheless.

      We understand the reviewer’s point and agree that it indeed applies not exclusively to the present study, but likely to many studies in this field and beyond. We disagree, however, that it constitutes a problem per se. We maintain that the approach of adding a feature, observing that it increases cross-validated prediction performance, and concluding that therefore the feature is relevant, is a valid one. Indeed, it is possible and even likely that not all relevant features (or non-linear transformations thereof) will be present in the control/baseline model. If a to-be-tested feature increases predictive performance and therefore explains relevant variance, then that means that part of what drives the neural response is non-trivially related to the to-be-tested feature. The true underlying relationship may not be linear, and later work may uncover more complex relationships that subsume the earlier discovery, but the original conclusion remains justified.
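      The model-comparison logic at issue can be sketched as follows (a minimal sketch with synthetic data, not the authors' pipeline; the variable names and the use of ordinary least squares are assumptions):

```python
# Minimal sketch of the cross-validated feature test: a regressor
# "counts" if adding it to the baseline features improves prediction
# accuracy on held-out data. Synthetic data, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
baseline = rng.normal(size=(n, 3))   # stand-ins for onset/acoustic regressors
surprise = rng.normal(size=n)        # the to-be-tested regressor
y = baseline @ [1.0, 0.5, -0.5] + 0.4 * surprise + rng.normal(size=n)

def cv_correlation(X, y, k=5):
    """Mean Pearson r between held-out data and OLS predictions."""
    idx = np.arange(len(y))
    rs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        rs.append(np.corrcoef(X[fold] @ w, y[fold])[0, 1])
    return float(np.mean(rs))

r_base = cv_correlation(baseline, y)
r_full = cv_correlation(np.column_stack([baseline, surprise]), y)
# Here r_full > r_base, because surprise contributes unique variance.
```

      The reviewer's caveat, in these terms, is that such an improvement shows the tested feature explains unique variance relative to this particular baseline, not that the brain necessarily tracks that feature itself.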

      Importantly, we wish to emphasize that the key conclusions of our study primarily rest upon comparisons between regression models that are by design equally complex, such as surprise-according-to-MT versus surprise-according-to-IDyOM, and comparisons across different context lengths. We maintain that the comparison with the Baseline model is also important, but even taking the reviewer’s worry here into account, the comparison between different equally-complex regression models should not suffer from it to the same extent as a model-versus-baseline comparison.

      2) The authors make a distinction between "Gestalt-like principles" and "statistical learning" but they never define what is meant by this distinction. The Temperley model encodes a variety of important statistics of Western music, including statistics such as keys that are unlikely to reflect generic Gestalt principles. The Temperley model builds in some additional structure such as the notion of a key, which the n-gram and transformer models must learn from scratch. In general, the models being compared differ in so many ways that it is hard to conclude much about what is driving the observed differences in prediction accuracy, particularly given the small effect sizes. The context manipulation is more controlled, and the fact that neural prediction accuracy dissociates from the model performance is potentially interesting. However, I am not confident that the authors have a good neural index of surprise for the reasons described above, and this limits the conclusions that can be drawn from this manipulation.

      First of all, we would like to apologize for any unclarity regarding the distinction between Gestalt-like and statistical models. We take Gestalt-like models to be those that explain music perception as following a restricted set of rules, such as that adjacent notes tend to be close in pitch. In contrast, as the reviewer correctly points out, statistical learning models have no such a priori principles and must learn similar or other principles from scratch. Importantly, the distinction between these two classes of models is not one we make for the first time in the context of music perception. Gestalt-like models have a long tradition in musicology and the study of music cognition dating back to (Meyer, 1957). The Implication-Realization model developed by Eugene Narmour (Narmour, 1990, 1992; Schellenberg, 1997) is another example of a rule-based theory of music listening, which influenced the model by David Temperley that we applied in the present study as the most recent influential Gestalt model of melodic expectations. Concurrently with the development of Gestalt-like models, a second strand of research framed music listening in light of information theory and statistical learning (Bharucha, 1987; Cohen, 1962; Conklin & Witten, 1995; Pearce & Wiggins, 2012). Previous work has made the same distinction and compared models of music along the same axis (Krumhansl, 2015; Morgan et al., 2019a; Temperley, 2014). We have updated the manuscript to elaborate on this distinction and highlight that it is not uncommon.

      Second, we emphasize that we compare the models directly in terms of their predictive performance, both of upcoming musical notes and of neural responses. This predictive performance is not dependent on the internal details of any particular model; e.g. in principle it would be possible to include a “human expert” model where we ask professional composers to predict upcoming notes given a previous context. Because the relevant comparison metric is independent of model details, we believe comparing the models is justified. Again, this is in line with previously published work in music (Morgan et al., 2019a), language (Heilbron et al., 2022; Schmitt et al., 2021; Wilcox et al., 2020), and other domains (Planton et al., 2021). Such work compares different models in how well they align with human statistical expectations by assessing how well different models explain predictability/surprise effects in behavioral and/or brain responses.

      Third, regarding the doubts on the neural index of surprise used: we respond to this concern below, after reviewer 1’s first point to which the present comment refers (the referred-to comment was not included in the “essential revisions” here).

      Reviewer #2 (Public Review):

      This manuscript focuses on the basis of musical expectations/predictions, both in terms of the basis of the rules by which these are generated, and the neural signatures of surprise elicited by violation of these predictions.

      Expectation generation models directly compared were gestalt-like, n-gram, and a recently-developed Music Transformer model. Both shorter and longer temporal windows of sampling were also compared, with striking differences in performance between models.

      Surprise (defined as per convention as negative log prior probability of the current note) responses were assessed in the form of evoked response time series, recorded separately with both MEG and EEG (the latter in a previously recorded freely available dataset). M/EEG data correlated best with surprise derived from musical models that emphasised long-term learned experiences over short-term statistical regularities for rule learning. Conversely, the best performance was obtained when models were applied to only the most recent few notes, rather than longer stimulus histories.

      Uncertainty was also computed as an independent variable, defined as entropy, and equivalent to the expected surprise of the upcoming note (sum of the probability of each value times surprise associated with that note value). Uncertainty did not improve predictive performance on M/EEG data, so was judged not to have distinct neural correlates in this study.
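      The two quantities described here can be written out concretely (a sketch; the note names and probabilities below are invented, not taken from any model in the study):

```python
# Surprisal of the observed note and entropy (uncertainty) of the
# prediction, with entropy defined as expected surprisal: the sum over
# notes of probability times that note's surprisal. Hypothetical values.
import math

p_next = {"C4": 0.5, "E4": 0.25, "G4": 0.2, "B4": 0.05}

def surprisal(p):
    """Negative log probability, in bits."""
    return -math.log2(p)

entropy = sum(p * surprisal(p) for p in p_next.values())
s_observed = surprisal(p_next["E4"])   # hearing E4 yields 2.0 bits
```

      Surprisal is defined only once the note is heard, whereas entropy is available before the note arrives, which is why the two can in principle have distinct neural correlates.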

      The paradigm used was listening to naturalistic musical melodies.

      A time-resolved multiple regression analysis was used, incorporating a number of binary and continuous variables to capture note onsets, contextual factors, and outlier events, in addition to the statistical regressors of interest derived from the compared models.

      Regression data were subjected to non-parametric spatiotemporal cluster analysis, with weights from significant clusters projected into scalp space as planar gradiometers and into source space as two equivalent current dipoles per cluster.

      General comments:

      The research questions are sound, with a clear precedent of similar positive findings, but numerous unanswered questions and unexplored avenues.

      I think there are at least two good reasons to study this kind of statistical response with music: firstly that it is relevant to the music itself; secondly, because the statistical rules of music are at least partially separable from lower-level processes such as neural adaptation.

      Whilst some of the underlying theory and implementation of the musical theory are beyond my expertise, the choice, implementation, fitting, and comparison of statistical models of music seem robust and meticulous.

      The MEG and EEG data processing is also in line with accepted best practice and meticulously performed.

      The manuscript is very well-written and free from grammatical or other minor errors.

      The discussion strikes a brilliant balance of clearly laying out the interim conclusions and advances, whilst being open about caveats and limitations.

      Overall, the manuscript presents a range of highly interesting findings which will appeal to a broad audience, based on rigorous experimental work, meticulous analysis, and fair and clear reporting.

      We thank the reviewer for their detailed and positive evaluation of our manuscript.

      Reviewer #3 (Public Review):

      The authors compare several models of musical prediction, both in their accuracy and in their ability to explain neural data from MEG and EEG experiments. The results allow both methodological advances, by introducing models that improve on the current state of the art, and theoretical advances, by inferring the effects of long- and short-term exposure on prediction. The results are clear and the interpretation is for the most part well reasoned.

      At the same time, there are important aspects to consider. First, the authors may overstate the advancement of the Music Transformer with the present stimuli, as its increase in performance requires a considerably longer context than the other models. Secondly, the Baseline model, to which the other models are compared, does not contain any pitch information on which these models operate. As such, it's unclear if the advancements of these models come from the new information they are based on or from the operations they perform on this information, as claimed. Lastly, the source analysis yields some surprising results that don't fit with previous literature. For example, the authors show that onsets to notes are encoded in Broca's area, whereas they would more likely be expected in the primary auditory cortex. While this issue is not discussed by the authors, it may put the rest of the source analysis into question.

      While these issues are serious ones, the work still makes important advancements for the field and I commend the authors on a remarkably clear and straightforward text advancing the modeling of predictions in continuous sequences.

      We thank the reviewer for their compliments.

    1. who sang out of their windows in despair

      It sounds like hope is lost, and the person is singing about trying to enjoy some of the joys in life without the pain while battling this hopelessness.

    2. who were expelled from the academies for crazy & publishing obscene odes on the windows of the skull,

      Could be standing up for what's right and going against the academic system, but in return they were punished for it.

    1. The building has a story to tell. Originally designed as a private members club by Frederick Gould and Giles Gilbert Scott, designer of the iconic British red telephone box and architect of Bankside Power Station, now the Tate Modern, The Gilbert and One Lackington have been returned to their former glory. Close attention has been paid to the unique and original features of these magnificent heritage buildings. The Gilbert and One Lackington bring out the distinctive, characterful – surprising – nature of the original buildings, and introduce new standards of daylight, exterior space, clean air via natural ventilation from opening windows and sustainability.

      History:: private members club

    1. 'Some people think that Women's Suffrage means breaking windows and spoiling other people's property. This is a great mistake. Only a small number of women do these violent actions': the suffragists

      Reaction: A suffragist who believes in peace was trying to make themselves different from a suffragette. They didn't want to be counted as one of them.

    1. Microsoft totally fucked up when they took aim at Netscape. It wasn’t Netscape that was a threat to Windows as an application platform, it was the web itself.

      I dunno about this assessment...

      They knew, and they tried.

      They just eventually stopped trying, because they beat Netscape and Sun.

    2. This isn’t about being “Mac-like” — it applies equally to Windows and open source desktop platforms.

      Yeah, but they're basically cribbing the paradigm introduced by the Lisa and popularized by the Mac in 1984. (At least that used to be the case—before Chrome introduced the hamburger menu and everyone else followed suit, Microsoft's attempt to change things up with the ribbon notwithstanding.)

    1. Author Response

      Reviewer #1 (Public Review):

      This is a very interesting paper trying to quantify excess deaths due to the COVID-19 pandemic in the USA. The paper is roughly divided into two main sections. In the first section, the authors estimate age and cause-specific excess mortality. In the second section, using their excess mortality estimates, the authors attempt to disentangle the impact of SARS-CoV-2 infection (direct impact) vs. the impact of NPIs on this excess mortality (indirect impact). I have some concerns, particularly with respect to the second section.

      The model used to estimate excess mortality is quite clear. The authors adjust the baseline model to account for low influenza circulation (and deaths) during the COVID-19 pandemic, to avoid underestimating the number of deaths caused by COVID-19. While this makes sense if the authors are trying to estimate the total number of deaths caused by COVID-19, I'm not sure it needs to be accounted for if the authors want to estimate excess/added deaths. A counterfactual scenario would've included influenza. It also raises the question of whether (conceptually) they should be adjusting for other causes of deaths that may have also decreased during the pandemic. The authors briefly acknowledge this in the discussion ("we can't account for changes in baseline respiratory mortality due to depressed circulation of endemic pathogens other than influenza") but my comment goes beyond respiratory diseases. Analyses of excess mortality from other settings have suggested, for example, decreased deaths due to fewer traffic accidents (not in the US) or due to decreased air pollution, and not accounting for these would also lead to an underestimate of the total deaths caused by COVID-19. I understand that it is not feasible to account for all potential factors, so I wonder if they should focus on reporting excess deaths as compared to a counterfactual with influenza.

      Thanks. We think it is helpful to “single out” influenza as it causes major fluctuations in mortality from multiple causes in regular years and is a useful reference to contrast the pandemic impact. But the reviewer’s point is well taken. We have clarified our assumptions about the meaning of the baseline in this analysis (methods p 5), discussed the depressed circulation of other pathogens in depth, and mentioned air pollution (p 12-13). We have also slightly reworked our comparison between COVID19 and influenza so that excess mortality estimates are comparable and now cover periods of the same duration (Nov 2017-Mar 2018 for flu and Nov 2020-Mar 2021 for COVID19, see Figure S11).

      The second section, trying to estimate direct vs. indirect effects is also very interesting. However, more details are required about the regression model used and, importantly, what the assumptions and limitations of the approach are. Specifically:

      • Please provide a bit more information on the regression used for direct vs. indirect effects. I'd like to see explicit discussion of the assumptions and limitations of the approach but also of the stringency index used. Does this model include an intercept? Was the association between stringency index and excess deaths assumed to be linear? Or were different functional forms considered? It is also not clear how well the model fits the data.

      Thanks for these comments which helped us improve this section. We have provided more details about the stringency index in methods (it captures the “sum” of interventions), described the model in methods and supplement, and discussed limitations in the caveats section, especially regarding the effectiveness of these interventions (p13). We had tried different linear models with and without intercepts but elected to use models with intercepts so as not to overly constrain the relationship between interventions, COVID19 activity and excess mortality. These models also incorporate lags in the predictors that are determined by cross-correlation analysis (as detailed in supplement). In the revised version, we now use GAM models, where the relationships between excess mortality and predictors do not have to be linear. We can do so since we were able to add several weeks of data (the regression is now based on 96 pandemic weeks from March 1, 2020 to January 1, 2022). The models are described in detail in supplement p 4-5, and we now specify that they have intercepts. We have also provided additional plots of model fits in main text and supplement (Figures 4 and S16-19).
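      The lag-selection step mentioned here can be illustrated as follows (synthetic data and invented numbers; a sketch of the general idea, not the authors' code):

```python
# Sketch of picking a predictor lag by cross-correlation: try a range
# of weekly lags and keep the one that maximizes the correlation
# between the lagged predictor and excess mortality. Synthetic data.
import numpy as np

def best_lag(predictor, outcome, max_lag=8):
    """Lag (in weeks) at which predictor best correlates with outcome."""
    def r(lag):
        if lag == 0:
            return np.corrcoef(predictor, outcome)[0, 1]
        return np.corrcoef(predictor[:-lag], outcome[lag:])[0, 1]
    return max(range(max_lag + 1), key=r)

rng = np.random.default_rng(0)
predictor = rng.normal(size=96)      # e.g. weekly COVID19 activity
outcome = np.roll(predictor, 2)      # mortality follows two weeks later
outcome[:2] = rng.normal(size=2)     # pad the wrapped-around start
outcome = outcome + 0.1 * rng.normal(size=96)   # observation noise
lag = best_lag(predictor, outcome)   # recovers the 2-week lag here
```

      The selected lag is then applied to the predictor before it enters the regression, so the model relates mortality to the predictor's value from the appropriate number of weeks earlier.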

      • Related to the above, please provide more details on how the results of the regressions were translated into the results presented. The main text reports percentages, but the methods only briefly explain how numbers of direct deaths were calculated, and the supplementary tables report coefficients. It is not clear if these estimates of direct and indirect deaths were somehow constrained to add up to the total number of excess deaths, but it doesn't seem like it since point estimates cross 100% in some cases.

      As discussed in response to one of the editor’s questions, estimates are not constrained to 100%. We have provided more details in the supplement on how we estimate the direct impact of the pandemic. Briefly, we calculate expected deaths in the GAM model with all predictors set to their observed values, and again with the COVID19 predictor set to zero. The direct impact is the difference between the two predictions, divided by the predictions of the full model.
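      The counterfactual calculation described here can be sketched as follows (invented coefficients and a simple linear stand-in for the fitted GAM; none of these numbers come from the paper):

```python
# Direct-impact counterfactual: predict excess deaths with all
# predictors at observed values, again with the COVID19 predictor set
# to zero, and take the relative difference. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
weeks = 96
covid = rng.uniform(0, 100, size=weeks)  # weekly COVID19 activity
npi = rng.uniform(0, 100, size=weeks)    # stringency index

def predict_excess(covid, npi):
    """Linear stand-in for the fitted model's prediction function."""
    return 50.0 + 8.0 * covid - 2.0 * npi   # hypothetical coefficients

pred_full = predict_excess(covid, npi)
pred_no_covid = predict_excess(np.zeros(weeks), npi)
direct_share = (pred_full - pred_no_covid).sum() / pred_full.sum()
# direct_share is not constrained to 1: it exceeds 100% whenever the
# zero-COVID counterfactual predicts negative excess deaths overall.
```

      This mirrors why estimates above 100% are possible: the ratio is taken against the full-model prediction, with no constraint that the components sum to the total.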

      We note that while some of the estimates derived from the GAM model exceed 100% (and are similar to the linear model estimates presented in the initial analysis, before revision), these estimates echo the findings from a more empirical analysis, in which we compare all-cause excess deaths with official COVID19 deaths tallies. There, in the two oldest age groups, we find more official COVID19 deaths than estimated by the excess mortality models. Hence both analyses point to an underestimation of the direct burden of COVID19 by the excess mortality approach, specific to the oldest age groups. We return to this point in depth in the discussion (p 12-13) and consider the possible effects of harvesting, depressed circulation of non-SARS pathogens, and inaccurate coding of official statistics (as pointed out by reviewer #3).

      • Please discuss the potential limitations of using the stringency index to quantify NPIs.

      Several limitations have been added to caveats (p 13); major issues include aggregation of multiple interventions into a single index, which does not consider the actual implementation nor the effect of interventions. The index is solely based on mandates in place in different locations and time periods. We also assume that the effectiveness of these interventions, for a given level of stringency, does not change over time.

      • When estimating direct and indirect effects, the paper assumes that the estimated parameter is time-invariant? Indirect effects might have changed over the course of the epidemic by factors not necessarily captured by the stringency index used, particularly since the index doesn't take into account the implementation of the measures. Have the authors tested this assumption?

      This is an interesting point, which we have explored further. The non-linear relationships we find between NPIs and chronic condition excess mortality may suggest that the reviewer is right. We discuss the role of NPIs in the results section much more deeply than we were previously (bottom of p8).

      “At lower levels of interventions (Oxford index between 0 and 50), representing the early stages of the lockdown in March 2020, excess mortality rose with interventions. Later in the pandemic, increased interventions were estimated to have a beneficial effect on excess mortality, driven by comparison between the period when interventions were strengthened in response to increasing COVID19 activity in late 2020 (Oxford index above 60) to the period when interventions were relaxed in 2021 (Oxford index between 50 and 60).”

      We cannot run an analysis over different time windows because NPI and time are highly conflated (for instance NPI rise from 0-50% in the very early part of the lockdown period, and then stays above 50% for the rest of the study, so we cannot compare the effect of a 25% level in 2020 and 2021). We have added this limitation in the caveat section p.13.

      • The authors state "In contrast, the indirect impact of the pandemic measured by the intervention term was highest in youngest age groups, decreased with age, and lost significance in individuals above 65 years" - I'm not entirely sure of where this statement comes from? For example Table S3 suggests that the indirect effect (multivariate or univariate) is higher in 25-64 yo than in <25s? The same table also suggests negative impacts (protective effects?) in >75s in the multivariate model. Please clarify.

      There are fewer deaths in the under 25 yo, which is why the coefficients were lower overall in Table S3. Yet we find that the proportion of variance explained by interventions is higher in the under 25 yrs than in the 25-44 yrs.

      We have now changed our modeling strategy to use GAM so Table S3 is no longer relevant, but the main conclusion remains valid: interventions explain a larger relative portion of excess mortality in the under 25 yrs than in the other age groups, and than other covariates do. The NPI term is now significant in all groups (although the relative contribution of NPI still declines with age, as in the prior analysis), so we have rephrased this sentence: “In contrast, the relative contribution of indirect effects, via the intervention variable, was highest in youngest age groups and decreased with age”.

      • How do the authors interpret "Percents of excess deaths" over 100%? Similarly, I don't fully understand how to interpret "The upper bound of the 95% confidence interval for heart diseases was above 100% (158%), suggesting that for every excess death from heart disease estimated by our model, up to 1.58 death from heart disease could be directly linked to SARS-CoV-2 infection.

      We have rephrased this section although the overall conclusions remain unchanged. GAM estimates of the direct COVID19 impact are statistically significantly above 100% in those 85 yo and over, suggesting that our excess mortality approach is too conservative and does not estimate enough COVID19 excess deaths in this age group. We draw a similar conclusion from a more empirical analysis, in which we compare all-cause excess death estimates with official COVID19 deaths tallies. In this analysis, we find more official COVID19 deaths than estimated by the excess mortality models in the two oldest age groups (point estimates above 100% in the 75-84 and 85+ yrs). Hence both analyses point to an underestimation of the direct burden of COVID19 in the oldest age groups by excess mortality approaches.

      Rephrased results section bottom of p.9: “We estimate that the direct contribution of COVID-19 to excess mortality increases with age, from negative and non-statistically significant in individuals under 25 yrs to over 100% in those over 85 years, echoing the gradient seen in official statistics (Table 4). It is also worth noting that our excess mortality estimates may be too conservative (too high) as we did not account for missed circulation of endemic pathogens. This could explain why our estimates of direct COVID-19 contribution exceed 100% in the oldest age group.“

      We return to this point in depth in the discussion and consider the possible effects of harvesting and depressed circulation of non-SARS pathogens (p 12-13).

      • Table 3: The signs of the point estimate vs CI for vehicle accidents are inconsistent.

      Thanks, this was a typo. It should have been 4300 (-700, 9300) excess deaths from accidents. This has been updated with more recent data.

      Reviewer #3 (Public Review):

      Authors examine mortality data in the US and use time-series approaches to estimate excess mortality during the COVID-19 pandemic.

      Major comments:

      I would encourage authors to discuss the two different concepts of excess mortality:

      (#1) what deaths were caused, directly or indirectly, by the pandemic. This is what the authors have aimed to assess, and I have no major concerns with the methodology.

      (#2) how many additional deaths occurred during the pandemic, compared to what would have been expected in the absence of a pandemic. For such an analysis I think expected annual influenza deaths should be added back to the baseline (or subtracted from the excess)? Some of the discussion seems to relate more to an impression of #2 rather than #1 but I would be interested in the authors' thoughts.

      We have added more details about the approach, in particular why we think that #1 is the proper analysis here (see Methods, p. 5). Given the sheer magnitude of COVID-19 excess deaths (over 1 million excess deaths by the end of our study), adding back influenza deaths (up to 52,000 deaths in a recent severe season with a mismatched vaccine, such as 2017-18) would not make a large difference. We have also provided a more direct comparison of the impacts of influenza and COVID-19.

      1. Authors estimate fewer excess COVID deaths in the elderly than there were confirmed deaths (Table 3). Could this be an indication of some confirmed deaths being "deaths with COVID" rather than "deaths from COVID"? I'm not sure how to interpret the %s in the final column when they exceed 100%. The authors suggested a harvesting effect but I would suggest "deaths with COVID" might be a more likely explanation? This issue can be a limitation of confirmed-death data.

      This is a good point. We have added a comment along these lines in the discussion, in the middle of p. 12. Still, we think harvesting and/or the depressed circulation of endemic pathogens, which would have inflated our baseline, are more likely explanations for these findings. This is because we find similar estimates (exceeding 100%) in GAM models that ignore official statistics and instead rely on COVID-19 case data or COVID-19 hospital occupancy data, which suggests that mechanisms beyond the coding of official mortality statistics are at play.

      Yet, as more detailed official statistics become available, a tabulation of confirmed deaths by presence of a primary vs secondary COVID (U07) code may be revealing and get more directly at the reviewer’s question.

    1. Marullus Performance, Lines 32-55

       Wherefore rejoice? What conquest brings he home?
       What tributaries follow him to Rome,
       To grace in captive bonds his chariot-wheels?
       You blocks, you stones, you worse than senseless things!
       O you hard hearts, you cruel men of Rome,
       Knew you not Pompey? Many a time and oft
       Have you climb'd up to walls and battlements,
       To towers and windows, yea, to chimney-tops,
       Your infants in your arms, and there have sat
       The livelong day, with patient expectation,
       To see great Pompey pass the streets of Rome.
       And when you saw his chariot but appear,
       Have you not made an universal shout,
       That Tiber trembled underneath her banks,
       To hear the replication of your sounds
       Made in her concave shores?
       And do you now put on your best attire?
       And do you now cull out a holiday?
       And do you now strew flowers in his way
       That comes in triumph over Pompey's blood?
       Be gone!
       Run to your houses, fall upon your knees,
       Pray to the gods to intermit the plague
       That needs must light on this ingratitude.

      Irrational response

    1. To download python, go to https://www.python.org/ and then click on the link for either Mac, Windows, or Linux depending on your computer

      Maybe a link to our own opinionated Python binaries would be good to have here?

    2. To download R, go to https://cloud.r-project.org and then click on the link for either Mac, Windows, or Linux depending on your computer

      Maybe a link to our own opinionated R binaries would be good to have here?
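      Before grabbing installers from either site, it can be worth checking what is already on the machine. A minimal sketch for a Unix-like shell (the Windows equivalents are `py --version` and `R --version` in a terminal):

      ```shell
      # Check for existing Python and R installs before downloading anything.
      python3 --version
      if command -v R >/dev/null 2>&1; then
        R --version | head -n 1
      else
        echo "R not found; download it from https://cloud.r-project.org"
      fi
      ```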

  2. www.msys2.org
    1. MSYS2 is a collection of tools and libraries providing you with an easy-to-use environment for building, installing and running native Windows software.

      msys2

      The C/C++ extension does not include a C++ compiler or debugger. You will need to install these tools or use those already installed on your computer.
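      For example, inside an MSYS2 UCRT64 terminal the compiler and debugger that the VS Code C/C++ extension expects can be installed with pacman (package name as documented on www.msys2.org; this is a Windows-only setup fragment, not runnable elsewhere):

      ```shell
      # Run inside an MSYS2 UCRT64 terminal on Windows: installs GCC, G++ and GDB
      # for building native Windows software.
      pacman -S --needed base-devel mingw-w64-ucrt-x86_64-toolchain
      # Verify the tools are on PATH afterwards:
      gcc --version
      gdb --version
      ```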

  3. Nov 2022
    1. Some, though certainly not all, English-speaking writers, such as Ted Reeve of The Ottawa Citizen, exonerated Campbell for doing “his duty as he saw it and in the good heart of him, turned up at the match, full square, and faced the affronts of the half-wits, as a gentleman should … a big salute to the president.” Reeve held Richard himself responsible: “Why should Richard, for whom the game is made to order, take tantrums like a spoiled child and incite a lot of crack-pots such as the tear-gas bomb thrower at the Forum and the fools who broke windows and took after streetcars last night in Montreal?”

      The English blame the French, the French blame the English.

    1. xemacs is never a good choice use emacs.As for Windows and Lisp the comecial versions are ok.ACL rules if you can afford it.Unix belongs in a museum :)-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

      They've always been like this : )

    1. Reviewer #2 (Public Review):

      Fuhrman et al. explore a fascinating system to study the evolution and genetic architecture of ecological adaptation in marine midges. They use a number of approaches including analyses of whole genome sequences and QTL mapping to explore population structure and the loci associated with the timing and mode of reproduction. I have some concerns about the analyses and interpretations which I outline below.

      1) My primary concern is in the design and interpretation of the QTL analysis. The QTL approach used here has low power, both due to the sample size and the number of markers used (it looks like ~8 per chromosome). The authors use an analysis of the sex determining locus as a "control" but because of the complete heritability of this trait in most systems it is more of a straw man to me. The authors conclude that the architecture of the trait is polygenic based on this, but we are missing key information to evaluate this.

      2) There are some issues with the presentation and interpretation of the population genetic analyses. Many assumptions are made about whether introgression or ILS occurred and there are statements that are not accurate about it being "impossible" to distinguish between these scenarios.

      3) Some of the analyses associated with ecological adaptation that follow on the QTL results struck me as ad hoc and with the potential to lead to spurious results. I am not familiar with the BayPass approach, but since it is the approach that explicitly accounts for population structure, it seems the one that would be most appropriate for the authors to focus on in a revised manuscript. The use of phylogenetic windows that associate with ecotype is concerning to me, as given the level of ILS and gene flow that appears to be present in this system, it would be very challenging to distinguish signal from noise.

      4) There were issues with the GO analysis that should be addressed. Because the gene universe used for GO enrichment is a subset of the full gene set, GO enrichment results will be biased. This will mostly lead to false positives (i.e. overrepresentation of a GO category due to evaluating a subset of genes that fall in that category).

      This opens a new window with that URL, sets the focus to that window, and, as soon as the 'load' event is triggered, executes the code in the function. It only works with a page in the same domain.

      load url and run script
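      A minimal sketch of the described behavior (the helper name `openAndOnLoad` is ours; these are browser-only APIs, so the code must run inside a page, and `w.document` is reachable only for same-origin URLs):

      ```javascript
      // Open a URL in a new window, focus it, and run code once it has loaded.
      // Browser-only: window.open and the 'load' event are browser APIs.
      function openAndOnLoad(url, callback) {
        const w = window.open(url);                    // open the new window
        w.focus();                                     // shift focus to it
        w.addEventListener('load', () => callback(w)); // fires once the page loads
        return w;
      }

      // Usage (same-origin page only; otherwise w.document access throws):
      // openAndOnLoad('/help.html', (w) => console.log(w.document.title));
      ```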

    1. In recent years, China has been able to limit and control Hollywood movies’ share of China’s annual total box office to around 40% through specific protectionist regulations and policies such as import quotas, blackout periods, short promotion windows, and stacked-up releases

      protection against too much globalization

    1. And so the church is finished-a beautiful stone church, with pictures on the walls and coloured glass in the windows

      There are few churches that still have this same original look; most of those that do are cathedrals. There are some beauties up in the Cleveland/Lakewood area.

    1. Author Response

      Reviewer #1 (Public Review):

      This fMRI study investigated how memories are updated after reinterpreting past events. Participants watched a movie and subsequently recalled individual scenes from that movie. Importantly, the movie ends with a twist that changes the interpretation of earlier scenes in the movie. One group of participants watched the movie with the twist at the end, one group did not get to see the twist, and a third group was already informed about this twist before watching the movie. Analyses compared the similarity of activity patterns to (encoded or recalled) events across participants within regions of the default mode network (DMN). The design allowed for multiple relevant comparisons, confirming the prediction that activity patterns in DMN regions reflect the (re)interpretation of the movie (during movie viewing and/or during recall).

      The study is well-designed and executed. The inclusion of multiple analyses involving distinct comparisons strengthens the evidence for the role of the DMN in memory updating.

      The following points may be relevant to consider:

      1) The cross-participant pattern analysis method used here is not standard, with such analyses typically done within participants (or across participants, but after aligning representational spaces). Considering individual variability in functional organization, the method is likely only sensitive to coarse-scale patterns (e.g., anterior vs posterior parts of an ROI). This is not necessarily a weakness but is relevant when interpreting the results.

      We agree with the reviewer that functional misalignment might have worked against us here. We designed this study as a natural successor to our previous work, in which we captured reliable and multimodal scene-specific cross-participant pattern similarity during encoding and recall in standard space. In this revised version, we provide further evidence on how scene content is captured and how it influences our results. Nonetheless, we agree with your comment and have added the following section to the discussion to encourage considering this point when interpreting the results.

      "Moreover, our current method relies on averaging spatially-coarse activity patterns across subjects (and time points within an event). Future extensions of this work may benefit from using functional alignment methods (Haxby et al 2020, Chen et al 2015) to capture more fine-grained event representations which are shared across participants."

      2) Unlike previous work, analyses are not testing for scene-specific information. Rather, each scene is treated separately to establish between-group differences, and results are averaged across scenes. This raises the question of whether the patterns reflect scene-specific information or generic group differences. For example, knowing the twist may increase overall engagement, both when viewing the movie (spoiled group) and when recalling it (spoiled group + twist group). The DMN may be particularly sensitive to such differences in overall engagement.

      You have brought up great points. We addressed them in two ways: (1) We ran a univariate analysis in each DMN ROI to look at the role of overall regional-average response magnitude in our results. We did not observe a significant effect of group or an interaction between group and condition. (2) We ran a scene-specificity analysis in a new Results section entitled “The role of scene content” (Figure 4). This section is focused on comparing interaction index (Figure 2C), as an indicator of memory updating, under different manipulations. Interaction index reflects the reversal of neural similarity during encoding and recall. Our results suggest that we don’t see the same effects if we shuffle the scene labels and recompute the pattern similarity analyses. Please see added text and figures below:

      "To test whether our reported results were mainly driven by the similarities and differences in multivariate spatial patterns of neural representations, as opposed to by univariate regional-average response magnitudes, we ran a univariate analysis in each ROI. This analysis revealed no significant effect of group (“spoiled”, “twist”, “no-twist”) or interaction between group and condition (movie, recall) (Table 1, see Methods for details).

      Next, to determine whether scene-specific neural event representations—as opposed to coarser differences in general mental state across all scenes with similar interpretations—drive our observed pISC differences, we shuffled the labels of critical scenes within each group before calculating and comparing pISC across groups. By repeating this procedure 1000 times and recalculating the interaction index at each iteration, we constructed a null distribution of interaction indices for shuffled critical scenes (light magenta distributions in Figure 4B). In 12 out of 24 DMN regions, interaction indices were statistically significant based on the shuffled-scene distribution (p < .025, FDR controlled at q < .05). All of these 12 regions were among the ROIs that showed meaningful effects in our original analysis (Figure 2C). Regions with significant scene-specific interaction effects are marked as blue dots with black borders in Figure 4B. Overall, the findings from this analysis confirm that our results are driven by changes to scene-specific representations."
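      The logic of that shuffled-scene null distribution can be sketched with synthetic data (everything below is a toy: made-up pattern matrices and a generic per-scene pattern correlation standing in for pISC, not the authors' pipeline):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n_scenes, n_voxels = 7, 50

      # Two groups whose per-scene activity patterns share scene-specific structure.
      shared = rng.normal(size=(n_scenes, n_voxels))
      group_a = shared + 0.5 * rng.normal(size=(n_scenes, n_voxels))
      group_b = shared + 0.5 * rng.normal(size=(n_scenes, n_voxels))

      def mean_pattern_similarity(x, y):
          # Mean across scenes of the per-scene spatial pattern correlation.
          return np.mean([np.corrcoef(xi, yi)[0, 1] for xi, yi in zip(x, y)])

      observed = mean_pattern_similarity(group_a, group_b)

      # Null distribution: shuffle scene labels in one group before correlating,
      # and repeat (the paper uses 1000 iterations).
      null = np.array([
          mean_pattern_similarity(group_a[rng.permutation(n_scenes)], group_b)
          for _ in range(1000)
      ])
      p_value = (np.sum(null >= observed) + 1) / (null.size + 1)
      # Scene-specific structure survives only under correct labels, so the
      # observed similarity sits in the far right tail of the null distribution.
      ```

      If the group differences were driven by a coarse, scene-general state rather than scene-specific representations, shuffling the labels would barely move the statistic and the observed value would fall inside the null.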

      3) The study does not reveal what the DMN represents about the movie, such that its activity changes after knowing the twist. The Discussion briefly mentions that it may reflect the state of the observer, related to the belief about the identity of the doctor. This suggests a link to the theory of mind/mentalizing, but this is not made explicit. Alternatively, the DMN may be involved in the conflict (or switching) between the two interpretations.

      Great points. We added to the discussion about the role of the mentalizing network, and in particular the temporo-parietal cortex. Regarding your last point, we think our whole-brain findings outside the DMN (ACC and dlPFC) might relate to it. We discuss these further in the paper.

      "We performed two targeted analyses to look for evidence of memory updating across encoding and recall: the interaction analysis (Figure 2C) and the encoding-recall analysis (Figure 3). We hypothesized that a shift in direction of pISC difference would occur when neural representations during recall in the “twist” group start to reflect the Ghost interpretation. The interaction analysis probed this shift indirectly by taking into account the effects of both encoding-encoding and recall-recall analyses. Unlike the interaction analysis, in the encoding-recall analysis, we directly compared neural event representations during encoding and recall. Interestingly, all regions exhibiting an effect across the two encoding-recall analyses, excluding left anterior temporal cortex, were present in the interaction results. Among these regions, the left angular gyrus/TPJ exhibited an effect across all three analyses. As a core hub in the mentalizing network, temporo-parietal cortex has been implicated in theory of mind through perspective-taking, rationalizing the mental state of someone else, and modeling the attentional state of others (Frith and Frith 2006, Guterstam et. al 2021, Saxe and Kanwisher 2003). The motivations behind some actions of the main character in the movie heavily depend on whether the viewer perceives them as a Doctor or a Ghost, and participants may focus on this during both encoding and recall. We speculate that neural event representations in AG/TPJ in the current experiment may be related to mentalizing about the main character’s actions. Under this interpretation, the updated event representations during recall following the twist would be more closely aligned to the “spoiled” encoding representations, as a consequence of memory updating in the “twist” group.

      In our whole brain analysis, these regions did not have significant interaction effects, which suggests that the effects were isolated to encoding. In the whole-brain analysis, we also observed a significant encoding-encoding and interaction effects in anterior cingulate cortex, as well as recall-recall and interaction effects in dlPFC. These results suggest that both the "spoiled" manipulation and the "twist" may recruit top-down control and conflict monitoring processes during naturalistic viewing and recall."

      4) The design has many naturalistic aspects, but it is also different from real life in that the critical twist involves a ghost. Furthermore, all results are based on one movie with a specific plot twist. It is thus not clear whether similar results would be obtained with other and more naturalistic plot twists.

      We added this as a limitation of the study.

      "Our findings provide further insight into the functional role of the DMN. However, these results have been obtained using only one movie. While naturalistic paradigms better capture the complexity of real life and provide greater ecological generalizability than highly-controlled experimental stimuli and tasks (Nastase et al., 2020), they are still limited by the properties of the particular naturalistic stimulus used. For example, this movie—including the twist itself—hinges on suspension of disbelief about the existence of ghosts. Future work is needed to extend our findings about updating event memories to a broader class of naturalistic stimuli: for example, movies with different kinds of (non-supernatural) plot twists, spoken stories with twist endings, or using autobiographical real-life situations where new information (e.g. discovering a longtime friend has lied about something important) triggers re-evaluation of the past (e.g. reinterpreting their friend’s previous actions)."

      5) Only 7 scenes (out of 18) were included in the analysis. It is not clear if/how the results depend on the selection of these 7 scenes.

      Thank you for bringing this up. These scenes were pre-selected for the analyses, as they are the only scenes rated high by our independent raters (not study participants) on “twist influence”, meaning that knowing the twist could dramatically change their interpretation. So we had a priori reasons to hypothesize that the effect would be strong in these scenes. To address your point, we report results including all 18 scenes in a new Results section entitled “The role of scene content” and in Figure 4A. While the effect was weaker when all scenes were included, it was still apparent in this conservative analysis. As expected, however, including only the 7 critical scenes produces stronger results than including all scenes or the non-critical scenes (all minus critical scenes). Please see “The role of scene content” in Results and Figure 4 for more detailed information.

      "The role of scene content

      In the prior analyses, we focused on “critical scenes”, selected based on ratings from four raters who quantified the influence of the twist on the interpretation of each scene (see Methods). An independent post-experiment analysis of the verbal recall behavior of the fMRI participants yielded “twist scores” that were also highest for these scenes; that is, the expected and perceived effect of twist information on recall behavior were found to match. In our next analysis, we asked whether the neural event representations reflect these differences in the twist-related content of the scenes. In other words, are the “critical scenes” with highly twist-dependent interpretations truly critical for our observed effects?

      To answer this question, we re-ran our main encoding-encoding and recall-recall pISC analysis in each DMN ROI (Figure 2-3). We calculated interaction indices (Figure 2C) first by including all scenes, and second by including only the 11 non-critical scenes. To better compare the effect of including different subsets of scenes to our original results, in Figure 4 we show the results in 15 ROIs that exhibited meaningful effects in our main analyses (Figure 2C). Figure 4A demonstrates that “critical scenes” yielded higher interaction indices compared to all scenes or non-critical scenes across all ROIs. The interaction score across all DMN ROIs was significantly higher in “critical scenes” than all scenes (t(23) = 7.19, p = 2.53 x 10-7) and non-critical scenes (t(23) = 7.3, p = 1.95 x 10-7). These results show that critical scenes are indeed responsible for the observed pISC differences across groups."

      Reviewer #2 (Public Review):

      In this manuscript titled "Here's the twist: How the brain updates the representations of naturalistic events as our understanding of the past changes", the authors reported a study that examined how new information (manipulated as a twist at the end of a movie) changes the neural representations in the default mode network (DMN) during the recall of prior knowledge. Three groups of participants were compared - one group experienced the twist at the end, one group never experienced the twist, and one group received a spoiler at the beginning. At retrieval, participants received snippets of 18 scenes of the movie as cues and were asked to freely describe the events of each scene and to provide the most accurate interpretation of the scene, given the information they gathered throughout watching.

      All three groups were highly accurate in the recall of content. The groups that experienced the twist at the end as well as at the beginning as a spoiler showed a higher twist score (the extent to which twist information was incorporated into the recall), while seemingly also keeping the interpretation without the twist ("Doctor representation") intact. Neurally, several regions in the DMN showed significant interaction effects in their neural similarity patterns (based on intersubject pattern correlation), indicating a change in interpretation between encoding and recall in the twist group uniquely, presumably reflecting memory updating.

      Several points that I think should be addressed to strengthen the manuscript:

      1) The results from encoding-retrieval similarity analysis (particularly the one depicted in Figure 3B) don't match the results from encoding/retrieval interaction (particularly those shown in Figure 2C). While they were certainly based on different comparisons, I would think that both analyses were set up to test for memory updating. Can the authors comment on this divergence in results?

      Thank you for your comment. Except for one ROI, the other two regions in Figure 3B are present in the interaction analysis (Figure 2C). The ROI at the frontal pole might be hard to see from this angle, but it in fact shows a large effect size in the interaction analysis. So we do not see a major divergence between these two results. Taking into account the recall-recall results, however, we agree that there seems to be some inhomogeneity. We discuss this further in the discussion.

      "We performed two targeted analyses to look for evidence of memory updating across encoding and recall: the interaction analysis (Figure 2C) and the encoding-recall analysis (Figure 3). We hypothesized that a shift in direction of pISC difference would occur when neural representations during recall in the “twist” group start to reflect the Ghost interpretation. The interaction analysis probed this shift indirectly by taking into account the effects of both encoding-encoding and recall-recall analyses. Unlike the interaction analysis, in the encoding-recall analysis, we directly compared neural event representations during encoding and recall. Interestingly, all regions exhibiting an effect across the two encoding-recall analyses, excluding left anterior temporal cortex, were present in the interaction results. Among these regions, the left angular gyrus/TPJ exhibited an effect across all three analyses. As a core hub in the mentalizing network, temporo-parietal cortex has been implicated in theory of mind through perspective-taking, rationalizing the mental state of someone else, and modeling the attentional state of others (Frith and Frith 2006, Guterstam et. al 2021, Saxe and Kanwisher 2003). The motivations behind some actions of the main character in the movie heavily depend on whether the viewer perceives them as a Doctor or a Ghost, and participants may focus on this during both encoding and recall. We speculate that neural event representations in AG/TPJ in the current experiment may be related to mentalizing about the main character’s actions. Under this interpretation, the updated event representations during recall following the twist would be more closely aligned to the “spoiled” encoding representations, as a consequence of memory updating in the “twist” group.

      Our findings are consistent with the view that DMN synthesizes incoming information with one’s prior beliefs and memories (Yeshurun et al 2021). We add to this framework by providing evidence for the involvement of DMN regions in updating prior beliefs in light of new knowledge. Across our different encoding and recall analyses, we observe memory updating effects in a varied subset of DMN regions that do not cleanly map onto a specific subsystem of DMN (Robin and Moscovitch 2017, Ranganath and Ritchey 2012, Ritchey and Cooper 2020). Rather than being divergent, these results might be reflecting inherent differences between the processes of encoding and recall of naturalistic events. It has been proposed that neural representations corresponding to encoding of events are systematically transformed during recall of those events (Chen et al 2017, Favila et al 2020, Musz and Chen 2022). While we provide evidence for reinstatement of memories in DMN, our findings also support a transformation of neural representation during recall, as encoding-recall results were weaker in some areas than recall-recall findings. This transformation could affect how different regions and sub-systems of DMN represent memories, and suggests that the concerted activity of multiple subsystems and neural mechanisms might be at play during encoding, recall and successful updating of naturalistic event memories."

      2) The recall task was self-paced. Can reaction time information be provided on how long participants needed to recall? Did this differ across groups? Presumably in the twist group and spoiled group participants might have needed a longer time to incorporate both the original and twist interpretation.

      This is an interesting idea. Unfortunately, we could not measure this accurately because our recall cues were snippets from the beginning of each scene, of different lengths (selected based on content), and updating could begin at any point during those snippets (we would not know when). We will consider this point in future related designs.

      How was the length difference across events taken into consideration in the beta estimates?

      The event lengths were used as event durations in the GLM.
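      A minimal sketch of what using event lengths as durations in a GLM means in practice (synthetic timings, a textbook double-gamma HRF, numpy-only; this is our illustration, not the authors' pipeline):

      ```python
      import math
      import numpy as np

      TR = 1.5                          # repetition time in seconds (assumed)
      n_scans = 200
      onsets    = [10.0, 60.0, 120.0]   # illustrative event onsets (seconds)
      durations = [8.0, 20.0, 14.0]     # per-event durations: the "event lengths"

      # Boxcar that is 1 while an event is on screen, 0 otherwise.
      t = np.arange(n_scans) * TR
      boxcar = np.zeros(n_scans)
      for onset, dur in zip(onsets, durations):
          boxcar[(t >= onset) & (t < onset + dur)] = 1.0

      def double_gamma_hrf(ts):
          # Textbook double-gamma hemodynamic response (peak ~5 s, undershoot ~15 s).
          def g(ts, shape, scale):
              return ts ** (shape - 1) * np.exp(-ts / scale) / (math.gamma(shape) * scale ** shape)
          return g(ts, 6.0, 1.0) - g(ts, 16.0, 1.0) / 6.0

      hrf = double_gamma_hrf(np.arange(0.0, 32.0, TR))
      regressor = np.convolve(boxcar, hrf)[:n_scans]   # duration-weighted predictor

      # Simulate a voxel responding with amplitude 2 and fit the GLM by least
      # squares; the duration information is baked into the regressor itself.
      rng = np.random.default_rng(0)
      y = 2.0 * regressor + rng.normal(0.0, 0.1, n_scans)
      X = np.column_stack([regressor, np.ones(n_scans)])
      betas, *_ = np.linalg.lstsq(X, y, rcond=None)    # betas[0] recovers ~2.0
      ```

      Longer events produce wider boxcars and hence larger predicted responses, so variable durations are absorbed by the design matrix rather than biasing the beta estimates.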

      Also, is there an order effect, such that one type of interpretation tended to be recalled first?

      This is hard to measure, as this only occurs in a subset of scenes. But we assume it happens in other people's brains as well.

      This is indeed hard to measure as you mentioned. We will provide the transcripts when sharing the data and hopefully this will facilitate future text-analysis work on this dataset to answer interesting questions like this.

      3) The correlation analysis between neural pattern change and behavioral twist score is based on a small sample size and does not seem to be well suited to test the postulation of the authors, namely that some participants may hold both interpretations in their memory. Interestingly, the twist score of the spoiled group was similar to the twist group, indicating participants in this group might have held both interpretations as well. Could this observation be leveraged, for example by combining both groups (hence better powered with larger sample size), in order to relate individual differences in neural similarity patterns and behavioral tendency to hold both interpretations?

      Even though both groups showed signs of holding both interpretations in mind, the process happening in their brain during the recall is different. In particular, we do not expect to see any updating effect in the spoiled group. So it wouldn’t seem accurate to combine these groups to test the effect of incomplete updating.

      4) Several regions within the DMN were significant across the analysis steps, specifically the angular gyrus, middle temporal cortex, and medial PFC. Can the authors provide more insights on how these widely distributed regions may act together to enable memory updating? The discussion on the main findings is largely at a rather superficial level about DMN, or focuses specifically on vmPFC, but neglects the distributed regions that presumably function interactively

      Thanks for bringing this up. We added text to the discussion in response to this very valid point. Please see the added text in our response to your first point. One more snippet was added to the discussion about this:

      "In addition to mPFC, right precuneus and parts of temporal cortex exhibited significantly higher pattern similarity in the “twist” and “spoiled” groups who recalled the movie with the same interpretation. Precuneus is a core region in the posterior medial network, which is hypothesized to be involved in constructing and applying situation models (Ranganath and Ritchey 2012). Our findings support a role for precuneus in deploying interpretation-specific situation models when retrieving event memories. In particular, we suggest that the posterior medial network may encode a shift in the situation model of the “twist” group in order to accommodate the new Ghost interpretation.

      We performed two targeted analyses to look for evidence of memory updating across encoding and recall: the interaction analysis (Figure 2C) and the encoding-recall analysis (Figure 3). We hypothesized that a shift in direction of pISC difference would occur when neural representations during recall in the “twist” group start to reflect the Ghost interpretation. The interaction analysis probed this shift indirectly by taking into account the effects of both encoding-encoding and recall-recall analyses. Unlike the interaction analysis, in the encoding-recall analysis, we directly compared neural event representations during encoding and recall. Interestingly, all regions exhibiting an effect across the two encoding-recall analyses, excluding left anterior temporal cortex, were present in the interaction results. Among these regions, the left angular gyrus/TPJ exhibited an effect across all three analyses. As a core hub in the mentalizing network, temporo-parietal cortex has been implicated in theory of mind through perspective-taking, rationalizing the mental state of someone else, and modeling the attentional state of others (Frith and Frith 2006, Guterstam et. al 2021, Saxe and Kanwisher 2003). The motivations behind some actions of the main character in the movie heavily depend on whether the viewer perceives them as a Doctor or a Ghost, and participants may focus on this during both encoding and recall. We speculate that neural event representations in AG/TPJ in the current experiment may be related to mentalizing about the main character’s actions. Under this interpretation, the updated event representations during recall following the twist would be more closely aligned to the “spoiled” encoding representations, as a consequence of memory updating in the “twist” group.

      Our findings are consistent with the view that DMN synthesizes incoming information with one’s prior beliefs and memories (Yeshurun et al 2021). We add to this framework by providing evidence for the involvement of DMN regions in updating prior beliefs in light of new knowledge. Across our different encoding and recall analyses, we observe memory updating effects in a varied subset of DMN regions that do not cleanly map onto a specific subsystem of DMN (Robin and Moscovitch 2017, Ranganath and Ritchey 2012, Ritchey and Cooper 2020). Rather than being divergent, these results might be reflecting inherent differences between the processes of encoding and recall of naturalistic events. It has been proposed that neural representations corresponding to encoding of events are systematically transformed during recall of those events (Chen et al 2017, Favila et al 2020, Musz and Chen 2022). While we provide evidence for reinstatement of memories in DMN, our findings also support a transformation of neural representation during recall, as encoding-recall results were weaker in some areas than recall-recall findings. This transformation could affect how different regions and sub-systems of DMN represent memories, and suggests that the concerted activity of multiple subsystems and neural mechanisms might be at play during encoding, recall and successful updating of naturalistic event memories."

      Reviewer #3 (Public Review):

      Zadbood and colleagues investigated the way key information used to update interpretations of events alter patterns of activity in the brain. This was cleverly done by the use of "The Sixth Sense," a film featuring a famous "twist ending," which fundamentally alters the way the events in the film are understood. Participants were assigned to three groups: (1) a Spoiled group, in which the twist was revealed at the outset, (2) a Twist group, who experienced the film as normal, and (3) a No-Twist group, in which the twist was removed. Participants were scanned while watching the movie and while performing cued recall of specific scenes. Verbal recall was scored based on recall success, and evidence for descriptive bias toward two ways of understanding the events (specifically, whether a particular character was or was not a ghost). Importantly, this allowed the authors to show that the Twist group updated their interpretation. The authors focused on regions of the Default Mode Network (DMN) based on prior studies showing responsiveness to naturalistic memory paradigms in these areas and analyzed the fMRI data using intersubject pattern similarity analysis. Regions of the DMN carried patterns indicative of story interpretation. That is, encoding similarity was greater between the Twist and No-Twist groups than in the Spoiled group, and retrieval similarity was greater between the Twist and Spoiled groups than in the No-Twist group. The Spoiled group also showed greater pattern similarity with the Twist group's recall than the No-Twist group's recall. The authors also report a weaker effect of greater pattern similarity between the Spoiled group's encoding and the Twist group's recall than between the Twist group's own encoding and recall. Together, the data all converge on the point that one's interpretation of an event is an important determinant of the way it is represented in the brain.

      This is a really nice experiment, with straightforward predictions and analyses that support the claims being made. The results build directly on a prior study by this research group showing how interpretational differences in a narrative drive distinct neural representations (Yeshurun et al., 2017), but extend an understanding of how these interpretational differences might work retrospectively. I do not have any serious concerns or problems with the manuscript, the data, or the analyses. However I have a few points to raise that, if addressed, would make for a stronger paper in my opinion.

      1) My most substantive comment is that I did not find the interpretive framework to be very clear with respect to the brain regions involved. The basic effects the authors report strongly support their claims, but the particular contributions to the field might be stronger if the interpretations could be made more strongly or more specifically. In other words: the DMN is involved in updating interpretations, but how should we now think about the role of the DMN and its constituent regions as a result of this study? There are a number of ideas briefly presented about what the DMN might be doing, but it just did not feel very coherent at times. I will break this down into a few more specific points:

      While many of us would agree that the DMN is likely to be involved in the phenomena at hand, I did not find that the paper communicated the logic for singularly focusing on this subset of regions very compellingly. The authors note a few studies whose main results are found in DMN regions, but I think that this could stand to be unpacked in a more theoretically interesting way in the Introduction.

      Relatedly, I found the summary/description of regional effects in the Discussion to be a bit unsatisfying. The various pattern similarity comparisons yielded results that were actually quite nonoverlapping among DMN regions, which was not really unpacked. To be clear, it is not a 'problem' that the regional effects varied from comparison to comparison, but I do think that a more theoretical exploration of what this could mean would strengthen the paper. To the authors' credit, they describe mPFC effects through the lens of schemas, but this stands in contrast to many other regions which do not receive much consideration.

      Finally, although there is evidence that regions of the DMN act in a coordinated way under some circumstances, there is also ample evidence for distinct regional contributions to cognitive processes, memory being just one of them (e.g., Cooper & Ritchey, 2020; Robin & Moscovitch, 2017; Ranganath & Ritchey, 2012). The authors themselves introduce the idea of temporal receptive windows in a cortical hierarchy, and while DMN regions do appear to show slower temporal drift than sensory areas, those studies show regional differences in pattern stability across time even within DMN regions. Simply put, it is worth considering whether it is ideal to treat the DMN as a singular unit.

      Thank you for your helpful comments. We added text to the introduction and discussion to address your point:

      "Introduction:

      The brain’s default mode network (DMN)—comprising the posterior medial cortex, medial prefrontal cortex, temporoparietal junction, and parts of anterior temporal cortex—was originally described as an intrinsic or “task-negative” network, activated when participants are not engaged with external stimuli (Raichle et al. 2001, Buckner et al 2008). This observation led to a large body of work showing that the DMN is an important hub for supporting internally driven tasks such as memory retrieval, imagination, future planning, theory of mind, and creating and updating situation models (Svoboda et al. 2006; Addis et al. 2007; Hassabis and Maguire 2007, 2009; Schacter et al. 2007; Szpunar et al. 2007; Spreng et al. 2009, Koster-Hale & Saxe 2013, Ranganath and Ritchey 2012). However, it is not fully understood how this network contributes to these varying functions, and in particular—the focus of the present study—memory processes. Activation of this network during “offline” periods has been proposed to play a role in the consolidation of memories through replay (Kaefer et al 2022). Interestingly, prior work has also shown that the DMN is reliably engaged during “online” processing (encoding) of continuous rich dynamic stimuli such as movies and audio stories (Stephens et al 2013, Hasson et al 2008). Regions in this network have been shown to have long “temporal receptive windows” (Hasson et al 2008; Lerner et al., 2011; Chang et al., 2022), meaning that they integrate and retain high-level information that accumulates over the course of extended timescales (e.g. scenes in movies, paragraphs in text) to support comprehension. This combination of processing characteristics suggests that the DMN integrates past and new knowledge, as regions in this network have access to incoming sensory input, recent active memories, and remote long-term memories or semantic knowledge (Yeshurun et al 2021, Hasson et al 2015).
These integration processes feature in many of the “constructive” processes attributed to DMN such as imagination, future planning, mentalizing, and updating situation models (Schacter and Addis 2007, Ranganath and Ritchey 2012). Notably, constructive processes are highly relevant to real-world memory updating, which involves selecting and combining the relevant parts of old and new memories. Recent work has shown that neural patterns during encoding and recall of naturalistic stimuli (movies) are reliably similar across participants in this network (Chen et al. 2017; Oedekoven et al., 2017; Zadbood et al., 2017; see Bird 2020 for a review of recent naturalistic studies on memory), and the DMN displays distinct neural activity when listening to the same story with different perspectives (Yeshurun et al 2017). Building on this foundation of prior work on the DMN, we asked whether we could find neural evidence for the retroactive influence of new knowledge on past memories."

      "Discussion :

      In addition to mPFC, right precuneus and parts of temporal cortex exhibited significantly higher pattern similarity in the “twist” and “spoiled” groups who recalled the movie with the same interpretation. Precuneus is a core region in the posterior medial network, which is hypothesized to be involved in constructing and applying situation models (Ranganath and Ritchey 2012). Our findings support a role for precuneus in deploying interpretation-specific situation models when retrieving event memories. In particular, we suggest that the posterior medial network may encode a shift in the situation model of the “twist” group in order to accommodate the new Ghost interpretation.

      We performed two targeted analyses to look for evidence of memory updating across encoding and recall: the interaction analysis (Figure 2C) and the encoding-recall analysis (Figure 3). We hypothesized that a shift in direction of pISC difference would occur when neural representations during recall in the “twist” group start to reflect the Ghost interpretation. The interaction analysis probed this shift indirectly by taking into account the effects of both encoding-encoding and recall-recall analyses. Unlike the interaction analysis, in the encoding-recall analysis, we directly compared neural event representations during encoding and recall. Interestingly, all regions exhibiting an effect across the two encoding-recall analyses, excluding left anterior temporal cortex, were present in the interaction results. Among these regions, the left angular gyrus/TPJ exhibited an effect across all three analyses. As a core hub in the mentalizing network, temporo-parietal cortex has been implicated in theory of mind through perspective-taking, rationalizing the mental state of someone else, and modeling the attentional state of others (Frith and Frith 2006, Guterstam et al. 2021, Saxe and Kanwisher 2003). The motivations behind some actions of the main character in the movie heavily depend on whether the viewer perceives them as a Doctor or a Ghost, and participants may focus on this during both encoding and recall. We speculate that neural event representations in AG/TPJ in the current experiment may be related to mentalizing about the main character’s actions. Under this interpretation, the updated event representations during recall following the twist would be more closely aligned to the “spoiled” encoding representations, as a consequence of memory updating in the “twist” group.

      Our findings are consistent with the view that DMN synthesizes incoming information with one’s prior beliefs and memories (Yeshurun et al 2021). We add to this framework by providing evidence for the involvement of DMN regions in updating prior beliefs in light of new knowledge. Across our different encoding and recall analyses, we observe memory updating effects in a varied subset of DMN regions that do not cleanly map onto a specific subsystem of DMN (Robin and Moscovitch 2017, Ranganath and Ritchey 2012, Ritchey and Cooper 2020). Rather than being divergent, these results might be reflecting inherent differences between the processes of encoding and recall of naturalistic events. It has been proposed that neural representations corresponding to encoding of events are systematically transformed during recall of those events (Chen et al 2017, Favila et al 2020, Musz and Chen 2022). While we provide evidence for reinstatement of memories in DMN, our findings also support a transformation of neural representation during recall, as encoding-recall results were weaker in some areas than recall-recall findings. This transformation could affect how different regions and sub-systems of DMN represent memories, and suggests that the concerted activity of multiple subsystems and neural mechanisms might be at play during encoding, recall and successful updating of naturalistic event memories."

      2) I think that some direct comparison to regions outside the DMN would speak to whether the DMN is truly unique in carrying the key representations being discussed here. I was reluctant to suggest this because I think that the authors are justified in expecting that DMN regions would show the effects in question. However, there really is no "null" comparison here wherein a set of regions not expected to show these effects (e.g., a somatosensory network, or the frontoparietal network) in fact do not show them. There are not really controls or key differences being hypothesized across different conditions or regions. Rather, we have a set of regions that may or may not show pattern similarity differences to varying degrees, which feels very exploratory. The inclusion of some principled control comparisons, etc. would bolster these findings. The authors do include a whole-brain analysis in Supplementary Figure 1, which indeed produced many DMN regions. However, notably, regions outside the DMN such as the primary visual cortex and mid-cingulate cortex appear to show significant effects (which, based on the color bar, might actually be stronger than effects seen in the DMN). Given the specificity of the language in the paper in terms of the DMN, I think that some direct regional or network-level comparison is needed.

      In the original submission, we included additional analyses for visual and somatosensory networks, which we hypothesized would serve as control networks. Following your comment, in the revision, we added a separate section (included below) more thoroughly examining these analyses. We also added text to the results and discussion to explain our interpretation of these findings.

      "Changes in neural representations beyond DMN

      We focused our core analyses on regions of the default mode network. Prior work has shown that multimodal neural representations of naturalistic events (e.g. movie scenes) are similar across encoding (movie-watching or story-listening) and verbal recall of the same events in the DMN (Chen et al., 2017; Zadbood et al., 2017). Therefore, in the current work we hypothesized that retrospective changes in the neural representations of events as the narrative interpretation shifts would be observed in the DMN. We did not, for example, expect to observe such effects in lower-level sensory regions, where neural activity differs dramatically for movie-viewing and verbal recall. To be thorough, we ran the same set of analyses we performed in the DMN (Figure 2-3) in regions of the visual and somatomotor networks extracted from the same atlas parcellation (Schaefer et al., 2018). Our results revealed larger overall differences in DMN than in the visual and somatomotor networks for the key comparisons discussed previously (Figure S2). In particular, the only regions showing significant differences in pISC in recall-recall and encoding-recall comparisons (p < 0.01, uncorrected) were located in the DMN. We did not observe a notable difference between DMN and the two other networks when comparing recall “twist” to movie “spoiled” and recall “twist” to movie “twist” (RG – MG > RG – MD), which is consistent with the weak effect in the original comparison (Figure 3B). In the encoding-encoding comparison, several ROIs from the visual and somatomotor networks showed relatively strong effects as well (see Discussion).

      In addition, we qualitatively reproduced our results by performing an ROI-based whole brain analysis (Figure S3, p < 0.01 uncorrected). This analysis confirmed the importance of DMN regions for updating neural event representations. However, strong differences in pISC in the hypothesized direction were also observed in a handful of other non-DMN regions, including ROIs partly overlapping with anterior cingulate cortex and dorsolateral prefrontal cortex (see Discussion)."

      "Discussion:

      While our main goal in this paper was to examine how neural representations of naturalistic events change in the DMN, we also examined the visual and somatomotor networks. Aside from the encoding-encoding analysis in which some visual and somatomotor regions showed stronger similarity between two groups with the same interpretation of the movie, we did not find any regions with significant effects in these two networks in the other analyses. Unlike the recall phase where each participant has their unique utterance with their own choice of words and concepts to describe the movie, the encoding (movie-watching) stimulus is identical across all groups. Therefore, the effects observed during encoding-encoding analysis in sensory regions could reflect similarity in perception of the movie guided by similar attentional state while watching scenes with the same interpretation (e.g. similarity in gaze location, paying attention to certain dialogues, or small body movements while watching the movie with the same Doctor or Ghost interpretations). In our whole brain analysis, these regions did not have significant interaction effects, which suggests that the effects were isolated to encoding. In the whole-brain analysis, we also observed significant encoding-encoding and interaction effects in anterior cingulate cortex, as well as recall-recall and interaction effects in dlPFC. These results suggest that both the "spoiled" manipulation and the "twist" may recruit top-down control and conflict monitoring processes during naturalistic viewing and recall."

      3) If I understand correctly, the main analyses of the fMRI data were limited to across-group comparisons of "critical scenes" that were maximally affected by the twist at the end of the movie. In other words, the analyses focused on the scenes whose interpretation hinged on the "doctor" versus "ghost" interpretation. I would be interested in seeing a comparison of "critical" scenes directly against scenes where the interpretation did not change with the twist. This "critical" versus "non-critical" contrast would be a strong confirmatory analysis that could further bolster the authors' claims, but on the other hand, it would be interesting to know whether the overall story interpretation led to any differences in neural patterns assigned to scenes that would not be expected to depend on differences in interpretation. (As a final note, such a comparison might provide additional analytical leverage for exploring the effect described in Figure 3B, which did not survive correction for multiple comparisons.)

      This is a helpful suggestion, and we’ve added an analysis addressing your comment. We found that the interaction index capturing the difference between the three groups was stronger for the critical scenes than for the non-critical scenes for almost all DMN ROIs.

      "The role of scene content

      In the prior analyses, we focused on “critical scenes”, selected based on ratings from four raters who quantified the influence of the twist on the interpretation of each scene (see Methods). An independent post-experiment analysis of the verbal recall behavior of the fMRI participants yielded “twist scores” that were also highest for these scenes; that is, the expected and perceived effect of twist information on recall behavior were found to match. In our next analysis, we asked whether the neural event representations reflect these differences in the twist-related content of the scenes. In other words, are the “critical scenes” with highly twist-dependent interpretations truly critical for our observed effects?

      To answer this question, we re-ran our main encoding-encoding and recall-recall pISC analysis in each DMN ROI (Figure 2-3). We calculated interaction indices (Figure 2C) first by including all scenes, and second by including only the 11 non-critical scenes. To better compare the effect of including different subsets of scenes to our original results, in Figure 4 we show the results in 15 ROIs that exhibited meaningful effects in our main analyses (Figure 2C). Figure 4A demonstrates that “critical scenes” yielded higher interaction indices compared to all scenes or non-critical scenes across all ROIs. The interaction score across all DMN ROIs was significantly higher in “critical scenes” than all scenes (t(23) = 7.19, p = 2.53 × 10⁻⁷) and non-critical scenes (t(23) = 7.3, p = 1.95 × 10⁻⁷). These results show that critical scenes are indeed responsible for the observed pISC differences across groups."
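As a rough illustration of the interaction logic described above, the pISC comparison can be sketched in Python on simulated data. This is a toy sketch, not the study's actual pipeline: the `pisc` helper, group sizes, noise level, and the two interpretation "templates" are all hypothetical simulation choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def pisc(group_a, group_b):
    # Pattern intersubject correlation: correlate each subject's scene
    # pattern in group A with the mean pattern of group B, then average.
    mean_b = group_b.mean(axis=0)
    return float(np.mean([np.corrcoef(s, mean_b)[0, 1] for s in group_a]))

def simulate_group(template, n_subjects=10, noise=1.0):
    # Each subject's pattern = shared interpretation template + idiosyncratic noise.
    return template + noise * rng.standard_normal((n_subjects, template.size))

# Two latent interpretation templates for one "critical scene"
doctor = rng.standard_normal(100)
ghost = rng.standard_normal(100)

# Encoding: "twist" viewers share the Doctor interpretation with "no-twist";
# recall: after the reveal, "twist" recall shifts toward the Ghost interpretation.
enc_twist = simulate_group(doctor)
enc_spoiled = simulate_group(ghost)
enc_no_twist = simulate_group(doctor)
rec_twist = simulate_group(ghost)
rec_spoiled = simulate_group(ghost)
rec_no_twist = simulate_group(doctor)

enc_diff = pisc(enc_twist, enc_no_twist) - pisc(enc_twist, enc_spoiled)
rec_diff = pisc(rec_twist, rec_spoiled) - pisc(rec_twist, rec_no_twist)
interaction_index = enc_diff + rec_diff  # positive when the predicted flip occurs
```

Under this simulation the encoding difference and recall difference point in opposite "alignment" directions for the twist group, so the combined interaction index comes out positive, mirroring the sign logic of the paper's Figure 2C comparison.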

      4) I appreciate the code being made available and that the neuroimaging data will be made available soon. I would also appreciate it if the authors made the movie stimulus and behavioral data available. The movie stimulus itself is of interest because it was edited down, and it would be nice for readers to be able to see which scenes were included.

      Unfortunately, due to copyright, we cannot share the movie stimulus outright. However, we will share the timing of the cuts used, as well as the time-stamped transcripts of verbal recall.

      To sum up, I think that this is a great experiment with a lot of strengths. The design is fairly clean (especially for a movie stimulus), the analyses are well reasoned, and the data are clear. The only weaknesses I would suggest addressing are with regards to how the DMN is being described and evaluated, and the communication of how this work informs the field on a theoretical level.

    1. I came to this page looking for a way to disable news stories in Windows 11 Widgets. I attempted one of the solutions (Disable Interests From Widgets To Turn Off News Feeds) but News recommendations still appeared.

      Since I mainly wanted the Widget enabled for a calendar view, I decided against using Widgets altogether and settled for using the calendar in the notifications bar.

      Another alternative I considered was to have 4 static Widgets pinned to obscure any news articles in the feed. However, unless one uses Windows 11 Insider build 25211 or later, the Widgets panel will pop up on mouse hover.

    1. anditails · 1 yr. agoDell Pro Support Engineer (3rd party)You don't need Support Assist on Windows 11. Enable the "Optional Updates" and it'll do all the drivers through Windows Update.It's fast, too. Far quicker than Support Assist!

      Someone recommending avoiding Dell SupportAssist on Windows 11. I came across this because I was trying to see if there was a way to update SA in order to ensure the driver iqvw64e.sys was removed. Related to the problem here. Uninstalling SupportAssist resolved the aforementioned problem, since a recursive file search through the C drive afterward failed to find the driver iqvw64e.sys.

      Based on other comments in this thread, it seems best to let Windows Update handle the drivers. I will no longer use Dell SA and will utilize "Optional Updates" to handle drivers.

      Currently, the only perceived benefit from SA is automating support ticket submissions if the product is under warranty. My last IT support experience with Dell was positive (they did the best they could), but they didn't know much about sysadmin matters on Windows (they weren't very helpful in resolving the issue without losing all files and installed software).

    1. The correct answer here is to uninstall the intel network driver completely because it is not supported anymore. Support Information for Intel® PROSet and Intel® Advanced...Let Kernel isolation on. saying home users should not care about safety is just a stupid way of thinking. installing bad drivers is a way to spread malware with ease. This should be the "marked solution" to this thread.And I would also add a link to the Intel® Driver & Support Assistant (Intel® DSA) to easily install the latest official driver. Thank you BjornVermeulen for pointing out the support info from Intel.

      I came here looking for a way to resolve an error "A driver cannot load on this device" for the driver "iqvw64e.sys". This error popped up after I enabled "memory integrity" in Windows 11.

      Note that "some malware camouflages itself as iqvw64e.sys" source.

      This driver is associated with Intel network connections software, and gets removed by uninstalling the software per this reddit comment in r/sysadmin. This error is probably because Intel won't support Intel PROSet & Intel Advanced Network Services on Windows 11. The driver is likely a holdover from my Windows 10 OS before I upgraded it to Windows 11. The driver is probably unneeded since other Intel drivers are available.

      The accepted answer in this Microsoft Q&A forum seems silly (just disable memory integrity), so I kept reading and found the highlighted response which quoted a more sensible answer (get rid of bad drivers). Later in the replies, someone asks what's the most efficient way to remove the driver and someone else states

      I found the solution to this problem. After digging for the source of this file, I came across this article. File.net description of iqvw64e.sys. According to the article, this driver can be removed by uninstalling "Intel(R) Network Connections". Sure enough, I went to Control Panel, uninstalled the recommended app, rebooted, and voila! No more error. As for the value of that application, I have no idea. I am however happy to be rid of this error.

      This didn't work for my case since "Intel(R) Network Connections" wasn't installed. I couldn't find iqvw64e.sys in the expected location of C:\Windows\System32\drivers. Perhaps it was removed when memory integrity was enabled?

      Presently this looks like a non-issue, and the warning can be disregarded in the future.
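For reference, the kind of recursive file search mentioned above can be sketched in Python. This is a generic, hypothetical helper (the driver filename comes from the thread); on a real Windows machine you would run it against `"C:\\"` from an elevated prompt.

```python
import os

def find_file(root, name="iqvw64e.sys"):
    # Walk the tree under `root`, silently skipping unreadable directories,
    # and collect every path whose filename matches `name`. The comparison
    # is case-insensitive, since Windows filesystems are case-insensitive.
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda err: None):
        for filename in filenames:
            if filename.lower() == name.lower():
                hits.append(os.path.join(dirpath, filename))
    return hits
```

An empty result from `find_file("C:\\")`, as in my case, suggests the driver file really is gone.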

    1. The computing laboratory used had all the services needed for the study. It had Internet access, 30 Intel 5 computers for personalized usage and suitable environment for learning. In fact, it had big windows and enough space for students to move around the class. One class completely used the digital material (100%) and the other class used it partially (60%)

      Resources: The resources used in this study are technological, since it uses a laboratory experimental design.

    1. Create a ritual When you repeat a sequence of actions frequently enough, your brain will start associating them together. Many writers have a small ritual that they perform every time they sit down to write. It may be as simple as brewing a cup of tea, going to the library or coffee shop, playing a specific playlist or arranging the windows on their screen in a certain way.

      .c1

    1. Baths of Diocletian
      • 'Present' for the people of Rome from those higher up in the hierarchy
      • Similar symmetrical layouts
      • Large windows facilitate lighting
      • Directionality towards an altar
      • Decorated with iconography


    1. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them,

      The downfall of computing was when they invented and enforced copyright on systems created in the open.

      As long as there is a licence, we all lose!

      Then you can privatize 20 years of development in the commons called Linux.

      If you can't beat them, run it inside a proprietary operating system. Better still, rely on emulators for old Windows so you no longer need to worry about backward compatibility.

    1. Holy mackerel, when I saw the subject line of this topic I thought about Zoot – which I have not thought about in many months, and not for many years before that. Zoot was my introduction to this sort of “everything bucket” app. I also tried Info Select – which is also on Windows and may be an answer to @Claude’s question, assuming it’s still updated – and then to DevonThink and Evernote. My introduction to Zoot was an article by journalist James Fallows, of all people. He is the former editor-in-chief of The Atlantic, and reports mainly on public policy and politics. I wonder if he is still using Zoot? Three more probable options: Microsoft OneNote will be the most accessible to most Windows users. It doesn’t get you the search and “see also” of DevonThink. Obsidian and Roam Research take a different approach to the content-organization problems than DevonThink/OneNote/Evernote do. They rely on links and backlinks, like a personal Wikipedia. But they achieve the same goal of organizing information. They have search. AFAIK there’s nothing comparable to “see also,” but users report the same kind of serendipitous connections just by following the links they themselves made in the past. Another liability of Roam and Obsidian compared with DT: DT supports pretty much any kind of document that your computer can read, whereas Obsidian only supports Markdown, PDF, and images. I’m not as familiar with Roam, but I believe it has the same limitations. P.S. Partial answer to my own question: Fallows comes up in this forum as a person who advocated DT in a 2005 NYTimes article about “everything bucket” apps.

      From a discussion on DEVONthink alternatives for Windows users.

    2. I work primarily on Windows, but I support my kids who primarily use Mac for their college education. I have used DT on Mac, IPOS, IOS for about a year. On Windows, I have been using Kinook’s UltraRecall (UR) for the past 15 years. It is both a knowledge outliner and document manager. Built on top of a sql lite database. You can use just life DT and way way more. Of course, there is no mobile companion for UR. The MS Windows echo system in this regard is at least 12 years behind.

      Reference for UltraRecall (UR) being the most DEVONthink-like Windows alternative. There is no mobile companion for UR. Look into pairing this with Obsidian.

    1. Louis Burki 6 months ago (edited) I have make some changes to make it work, because I had a similar error. First, I have add a ":" before the "=" in the Text variable at the beginning of the script. Now it looks like that: "Text:=". Then I have put double quotes around (**your snippets**) so now it looks like this "(***your snippets***)". Then, I also changed the sort line to make it look that: Text:= sort(Text). And now it works as intended. Also, be careful not to remove the pipe symbol in your snippets.

      Someone giving a troubleshooting solution for Joe Glines' AutoHotkey script that inserts text from a list of the user's choosing. The problem another user had was including it in their main script file, but this was resolved with Louis Burki's answer.

    1. Switch between keyboard layouts or input methods

      You can enter text with different keyboard layouts or input methods by switching between them. There are a few different ways to switch between keyboard layouts or input methods:

      • On a hardware keyboard, press and hold the Windows logo key, and then press the Spacebar to cycle through your input methods.
      • If you have a touchscreen, you can switch your touch keyboard layout by tapping or clicking the keyboard icon, and then tapping or clicking the keyboard layout you want to switch to. [Image: Language abbreviation button in the touch keyboard]
      • On the desktop taskbar, tap or click the language abbreviation in the notification area at the far right of the taskbar, and then tap or click the keyboard layout or input method you want to switch to. [Image: Language abbreviation button in the desktop taskbar]

    1. Another AutoHotkey user here! 😃

      On my machine, Mehul's solution with the 1ms delay takes noticeably longer to actually insert some text. I found the following solution which inserts it in a less "chunky" manner. Adjust the 1ms till it works for your setup :)

      ; This worked
      ::dx::{Sleep 1}DevExpress
      ; For longer hotstrings, I needed more
      ::azerty::{Sleep 60}DevExpress

      With 250ms pretty much any length of hotstring expanded correctly. I answered this Stackoverflow question with details from this issue. The same bug might have been present in an earlier version of VSCode: microsoft/vscode#1934. They make mention of this commit fixing it. Unfortunately it's a rather large commit :( microsoft/vscode@a1bd50f Commit msg: "Fixes #1168: Read synchronously from textarea"

      The problem has to do with the backspace remapping. Take the following autohotkey hotstring:

      ::tada::🎉

      This will make typing "tada" followed by one of the "EndingChars" (space, tab, comma, dot, ...) expand to the 🎉 emoji. What you see visually happening on the screen is that Autohotkey does this by first sending a backspace to the editor 4 times (length of hotstring "tada") and then inserting the replacement text (🎉).

      What happens when this (pretty fantastic) extension is active is that the first x characters get deleted, then the replacement text gets inserted, and then the remaining (hotstring length - x) characters get deleted. But because the cursor is now at the end of the replacement text... which gets chewed on 😃

      I'll have to learn how to debug the IDE itself or always add a {Sleep 250} to my hotstrings...

      Solution to the AutoHotkey text replacement bug: just add a sleep parameter. Adding {Sleep 250} should generally work.

    1. But the room was awfully stuffy. There were a lot of those horrible, strong-smelling flowers about everywhere, and she had actually a bunch of them round her neck. I feared that the heavy odour would be too much for the dear child in her weak state, so I took them all away and opened a bit of the window to let in a little fresh air. You will be pleased with her, I am sure.”

      Oh NOOOOOOO; although I would be remiss not to mention that this was a common treatment (up until very recently, and even now it is considered by some to be beneficial): getting some fresh air. Opening the windows when it was stuffy was thought to rid the place of sickness, which is true to an extent, though it is rooted in the idea of miasma. Miasma was basically the idea that sickness traveled through the air by way of smells, stuffiness, and things like that. Again, it's not completely wrong, but it's also not germ theory.

    1. With a long slow stride, limping a little from his blistered feet, Bud walked down Broadway, past empty lots where tin cans glittered among grass and sumach bushes and ragweed, between ranks of billboards and Bull Durham signs, past shanties and abandoned squatters’ shacks, past gulches heaped with wheelscarred rubbishpiles where dumpcarts were dumping ashes and clinkers, past knobs of gray outcrop where steamdrills continually tapped and nibbled, past excavations out of which wagons full of rock and clay toiled up plank roads to the street, until he was walking on new sidewalks along a row of yellow brick apartment houses, looking in the windows of grocery stores, Chinese laundries, lunchrooms, flower and vegetable shops, tailors’, delicatessens. Passing under a scaffolding in front of a new building, he caught the eye of an old man who sat on the edge of the sidewalk trimming oil lamps. Bud stood beside him, hitching up his pants; cleared his throat:

      NYC street scenes, 5th Ave, 1913 https://www.loc.gov/resource/cph.3c19546/ Fifth Ave., New York City, with two buses on street.

    1. The Subtle Knife of the book's title is a knife that is capable of cutting windows between worlds.

      = gloss - a knife capable of cutting windows between worlds

    1. I hate Hyndland. You'll find its like in any large city. Green leafy suburbs, two cars, children at public school and boredom, boredom, boredom. Petty respectability up front, intricate cruelties behind closed doors. Most of the town houses have been turned into small apartments. The McKindless residence was the largest building in the street and the only one still intact. I parked and sat for a while looking at it. It dominated the road, a dark, sober façade intersected by three rows of darkened windows.

      Description of Hyndland

  4. Oct 2022
    1. he biggest motivation to use the compositor extensively is stitching together diverse visual sources, particularly video, 3D, and various UI embeddings including web and “native” controls. If you want a video playback window to scroll seamlessly and other UI elements to blend with it, there is essentially no other game in town.

      Pro of coupling to Wayland: can use expressive free WM animation! Con: Nobody will ever be able to try your system on Windows or MacOS

    1. NOTE: If you are looking to add multiple Live Linux distributions, System Diagnostic Tools, Antivirus Utilities, and Windows Installers, you should use YUMI Multiboot Software, instead

      281022 193603 6-301 R15. SL<br /> o Point for READ

    1. @route @twalpole as a community I think we're super grateful for your work on a CDP alternative to chromedriver/selenium, poltergeist etc. I do think collaboration could be very valuable though, although it would likely mean abandoning one of the projects and teaming up on the other; you both obviously have very deep knowledge of CDP and therefore would get a load more done than any of us "end users" trying to wade in there. The status for us on our Rails project is that Apparition fails with a ton of errors, they all seem related to handling timing events (accept_prompt doesn't work, opening new windows seems problematic etc etc etc) whereas Cuprite only fails with a cookie gem we're using (easily fixed) and doesn't support drag_to yet. So to me Cuprite seems more complete, but I don't know much about the internals.
    1. The change in how turnstile jumping will be prosecuted comes at a time when the city's reliance on Broken Windows policing is under fire because of its impact on New York's low-income non-white community

      Crime has a significant effect on the entire New York City community, but especially on the low income community. Many NYC officials prioritize minimizing the effect of the law on criminals over minimizing the effect of criminals on law-abiding citizens.

    1. Share WiFi from Android to Windows 1. Open up your Windows PC, and connect to the WiFi hotspot that says DIRECT-Android, and enter the password that you saw in the app in step 2. If you open up the browser now, you won't get internet access even though you are connected to the network. To fix that, you need to set up a proxy IP.

      Todo

    1. Author Response

      Reviewer #1 (Public Review):

      This paper introduces a new statistical framework to study cellular lineages and traits. Several new measures are introduced to infer selection strength from individual lineages. The key observation is that one can simply relate cumulants of a fitness landscape to population growth, and all of this can be simply computed from one generating function, that can be inferred from data. This formalism is then applied to experimental cell lineage data.

      I think this is a very interesting and clever paper. However, in its current form the paper is very hard to read, with very few explanations beyond the mathematical observations/definitions, which makes it almost unreadable for people outside of the field in my opinion. Some more intuitive explanations should be given for a broader audience, on all aspects : definitions of fitness « landscape », selection strength(s), connections between cumulants and other properties (including skewness) etc... There are many new definitions given with names reminiscent of classical concepts in evolutionary theory, but the connection is not always obvious. It would be great to better explain with very simple, intuitive examples, what they mean, beyond maths, possibly with simple examples. Some of this might be obvious to population geneticists, and in fact some explanations made in discussion are more illuminating, but earlier would be much better. I give more specific comments below.

      We thank the reviewer for calling our attention to the lack of accessible explanations on the significant terms and quantities in this framework. Following the suggestion in the comments below, we added Box 1, providing intuitive and plain explanations on the terms of fitness, fitness landscape, selection, selection strength, and cumulants. In each section, we explain the standard usage of these terms in evolutionary biology and clarify the similarities and differences in this framework. We also added a figure to Box 1 and provided a schematic explanation of the relationships among chronological and retrospective distributions, fitness landscapes, and selection strength. We believe that these explanations and a figure would better clarify the meanings and functions of these quantities.

      Major comments :

      1) The authors give names to several functions; for instance, before equation (1) they mention « fitness landscape », then describe « net fitness », which allows the authors to define « fitness cumulants ». Later on, a « selection » is defined. Those terms might mean different things for different authors depending on the context, to the point they are sometimes almost confusing. For instance, why is h a « landscape »? For me, a landscape is kind of like a potential, and I really do not see how this is connected to h. « Fitness cumulants » is particularly jargonic. There are also two kinds of selection strengths, which is very confusing. I would recommend that the authors make a glossary of the terms, explain intuitively what they mean and maybe connect them to standard definitions.

      We appreciate the suggestion of making a glossary of the terms. Following the suggestion, we added Box 1 to provide intuitive and plain explanations of the terms used in this framework.

      In Box 1, we explain why we called h(x) a fitness landscape, referring to its standard usage in evolutionary biology. In evolutionary biology, fitness landscapes (also called adaptive landscapes) are visual representations of relationships between reproductive abilities (fitness) and genotypes. The height of landscapes corresponds to fitness. Since constructing "genotype space" is usually difficult, fitness is often mapped on an allele frequency or phenotype (trait) space to depict a "landscape." Fitness landscapes introduced in our framework are analogous to those in evolutionary biology in that fitness differences are mapped on trait spaces. Although fitness landscapes in evolutionary biology are usually metaphorical or conceptual tools for understanding evolutionary processes, the landscapes in our framework are directly measurable from division count and trait dynamics on cellular lineages.

      We also explain "selection" and "selection strength" in Box 1. As pointed out, we define three kinds of selection strength measures. These three measures share a similar property of reporting the overall correlations between traits and fitness. However, they also have critical differences regarding additional selection effects they represent: S_KL^((1)) for growth rate gain, S_KL^((2)) for additional loss of growth rate under perturbations, and their difference S_KL^((2))-S_KL^((1)) for the effect of selection on fitness variance. We restructured the sections in Results and clarified these important meanings of the different selection strength measures.

      We removed the term "fitness cumulants" as this is non-general and might cause confusion to readers. We now rephrased this more precisely as "cumulants of a fitness landscape (with respect to chronological distribution)." Besides, we added a general explanation of "cumulants" to Box 1 and clarified what first, second, and third-order cumulants represent about distributions.

      2) Along the same line, it would be good to give more intuitive explanations of the different functions introduced. For instance, I find (2) more intuitive than (1) to define h. I think some more intuition on what the authors call selection strengths would be super useful. In Table 1 selection strengths are related to Kullback-Leibler divergence (which does not seem to be defined); it would be good to better explain this.

      In addition to Box 1, we included more intuitive explanations on fitness landscapes and selection strength where they first appear in the Theoretical background section. As pointed out, descriptions of the linkage between the selection strength measures and Kullback-Leibler divergence were only in the Supplemental Information in the original manuscript. We now explicitly show this linkage where we first define the selection strength.

      Following this comment, we also changed the definition of a fitness landscape from the original one to h(x) := τΛ + ln(Q_rs(x)/Q_cl(x)) (Eq. 1), using the chronological and retrospective distributions introduced in the preceding paragraph. This definition is mathematically equivalent to the previous one, but we believe it is more intuitive.
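      This definition can be checked numerically. Below is a minimal sketch (all trait values and probabilities are invented for illustration; only the formula h(x) := τΛ + ln(Q_rs(x)/Q_cl(x)) and the Kullback-Leibler form of a selection strength measure come from the text):

      ```python
      import numpy as np

      # Hypothetical discrete trait values x with chronological (before selection)
      # and retrospective (after selection) lineage distributions.
      x = np.array([0.0, 1.0, 2.0, 3.0])
      q_cl = np.array([0.4, 0.3, 0.2, 0.1])   # chronological distribution Q_cl(x)
      q_rs = np.array([0.1, 0.2, 0.3, 0.4])   # retrospective distribution Q_rs(x)

      tau_lambda = 1.0  # tau * Lambda: observation time times population growth rate

      # Fitness landscape: h(x) = tau*Lambda + ln(Q_rs(x) / Q_cl(x))  (Eq. 1)
      h = tau_lambda + np.log(q_rs / q_cl)

      # A KL-type selection strength: divergence of the retrospective from the
      # chronological distribution, D_KL(Q_rs || Q_cl).
      s_kl = np.sum(q_rs * np.log(q_rs / q_cl))
      ```

      With these made-up numbers, h increases with x (the trait correlates with fitness) and s_kl is strictly positive, i.e., selection is acting on this trait.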

      3) It seems to me the authors implicitly assume that, along a lineage, one would have almost stationary phenotypes (e.g. constant division rate) . However, one could imagine very different situations, for instance the division rates could depend on interactions with other cells in the growing population, and thus change with time along a lineage. One could also have some strong random components of division rate over time . I am wondering how those more complex cases would impact the results and the discussion

      We thank the reviewer for pointing out our insufficient explanation of an essential feature of this framework. As we now explain in the "Examples of biological questions" section (L62-65) and Discussion (L492-493), this framework does not assume stationary phenotypes (traits) on cellular lineages. On the contrary, we developed this framework so that one can quantify fitness and selection strength even for non-stationary phenotypes (traits) due to factors such as non-constant environments and inherent stochasticity.

      In fact, if traits are stationary in cellular lineages, this framework becomes essentially identical to the individual-based evolutionary biology framework (see ref. 26, for example). Our framework assumes a cell lineage as a unit of selection and any measurable quantities along cellular lineages as lineage traits, whether they are stationary or non-stationary. Therefore, our framework can evaluate fitness landscapes and selection strength without explicitly taking the environmental conditions around cells into account. This means that h(x) and S[X] in this framework extract the correlations between the traits of interest and division counts among various factors that could potentially influence division counts. On the other hand, this framework has a limitation due to this design: it cannot say anything about the influence of factors such as non-quantified traits and potential variations in environmental conditions. We now explain these important points explicitly in the revised manuscript (L493-496).

      Likewise, stochasticity in division rate does affect division count distributions, and its influence appears as differences in the selection strength of division count S[D]. As stated in the text, S[D] sets the maximum bound for the selection strength of any lineage trait (L143-145). Therefore, S_rel[X] := S[X]/S[D] reports the relative strength of the correlation between the trait X and lineage fitness at a given level of S[D] in each condition.

      To clarify the influence of stochasticity in division rate, we present a cell population model in which cells divide stochastically according to generation time (interdivision time) distributions in Appendix 2 (we moved this section from the Supplemental Information with modifications). We can confirm from this model that the shapes of generation time distributions influence the selection strength S[D]. Importantly, one can understand from this model that stochasticity in generation times constantly introduces selection to cell populations and modulates the growth rate and selection strength even in the long-term limit. We now clarify this important point in the Discussion (L519-526).

      4) « Therefore, in contrast to a common assumption that selection necessarily decreases fitness variance, here we show that under certain conditions selection can increase fitness variance among cellular ». This is a super interesting statement, but there is such a lack of explanations and intuition here that it is obscure to me what actually happens here.

      When a decrease in fitness variance by selection is mentioned in evolutionary biology, an upper bound and inheritance of fitness across the generations of individuals are usually assumed. In such circumstances, selection drives the fitness distribution toward the maximum value, and the selection eventually causes fitness variance to decrease. However, even in this process, a decrease is not assured for every step; whether selection reduces fitness variance at each step depends on the fitness distribution at that time.

      In our argument, we compared fitness variances between chronological and retrospective distributions. We showed both theoretically and experimentally that there are cases where the variances of the retrospective distributions (distributions after selection) become larger than those of the chronological distributions (distributions before selection). The direction of variance change depends on the shape of chronological distributions, primarily on the skewness of the distributions (positive skew for increasing the variance and negative skew for decreasing the variance). The direction of variance changes can also be probed by the difference between the two selection strength measures S_KL^((2))-S_KL^((1)). Notably, we can demonstrate that there are cases where retrospective fitness variances are larger than chronological fitness variances even in the long-term limit, as shown by a cell population model in Appendix 2.

      We now explain what kind of situations are usually premised when reduction of fitness variance is mentioned and clarify that, in our framework, we compare the fitness variances between chronological and retrospective distributions (L542-548). We also explain that a selection effect on fitness variance generally depends on fitness distribution and that a larger fitness variance in retrospective distribution is possible even in the long-term limit (L548-557).
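      The direction of the variance change can be illustrated numerically. In this toy sketch (the fitness values and probabilities are invented for illustration, not taken from the paper), a positively skewed chronological fitness distribution yields a retrospective distribution with larger variance:

      ```python
      import numpy as np

      # Hypothetical positively skewed chronological distribution of lineage fitness h.
      h = np.array([0.0, 0.5, 1.0, 3.0])          # fitness values, long right tail
      q_cl = np.array([0.4, 0.3, 0.2, 0.1])       # chronological probabilities

      # The retrospective distribution reweights each lineage by e^h (selection).
      w = q_cl * np.exp(h)
      q_rs = w / w.sum()

      def var(p, x):
          # Variance of x under the distribution p.
          m = np.sum(p * x)
          return np.sum(p * (x - m) ** 2)

      var_cl = var(q_cl, h)   # fitness variance before selection
      var_rs = var(q_rs, h)   # fitness variance after selection
      # With this positive skew, var_rs exceeds var_cl: selection increased
      # the fitness variance.
      ```

      Flipping the sign of the skew (a long left tail) makes the retrospective variance smaller than the chronological one, matching the stated dependence on skewness.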

      Reviewer #2 (Public Review):

      The paper addresses a fundamental question: how do phenotypic variations among lineages relate to the growth rate of a population. A mathematical framework is presented which focuses on lineage traits, i.e. the value of a quantitative trait averaged over a cell lineage, thus defining a fitness landscape h(x). Several measures of selection strengths are introduced, whose relationships are clarified through the introduction of the cumulant generating function of h(x). These relationships are illustrated in analytical mathematical models and examined in the context of experimental data. It is found that higher than third order cumulants are negligible when cells are in early exponential phase but not when they are regrowing from a stationary phase.

      The framework is elegant and its independence from mechanistic models appealing. The statistical approach is broadly applicable to lineage data, which are becoming increasingly available, and can for instance be used to identify the conditions under which specific traits are subject to selection.

      We appreciate the reviewer for the positive evaluation. We will reply to your specific comments below.

      Reviewer #3 (Public Review):

      In this work the authors have constructed a useful mathematical framework to delineate contributions leading to differences in lineages of populations of cells. In principle, the framework is widely applicable to exponentially growing populations. An attractive feature is that the framework is not tailored to particular growth models or environmental conditions. I expect it will be valuable for systems where contributions from phenotypic heterogeneity overwhelm contributions from intrinsic stochasticity in cellular dynamics.

      I am generally very positive about this work. Nevertheless, a few specific concerns:

      1) In here, lineages are considered fitter if they have more division events. But this consideration neglects inherent stochasticity in division events. Even in a completely homogeneous population, the number of division events differs between lineages due to intrinsic stochasticity, but applying the methods discussed in this manuscript may lead to falsely assigning different fitness levels to different lineages. The reason why (despite having different numbers of division events) these lineages ought to be assigned the same fitness level is that future generations of these cells will have identical statistics, in contrast with those of cells that are phenotypically different. Extending the idea to heterogeneous populations, the actual difference in fitness levels may be significantly different from what is obtained from the mathematical framework presented here, depending on the level of inherent stochasticity.

      We thank the reviewer for the comment on the point of which our explanation was insufficient in the original manuscript. Intrinsic stochasticity in interdivision time (generation time) is, in fact, critical for selection. For example, if a cell divides with a generation time shorter than the average due to stochasticity, this cell is likely to have more descendant cells in the future population on average than the other cells born at the same timing, even if the descendants follow identical statistics. Therefore, the properties of intrinsic stochasticity, including shapes of generation time distributions and transgenerational correlations, significantly affect the overall selection strength S_KL^((1)) [D] (and also S_KL^((2)) [D]). We now explain this important point in the Results section, referring to the analytical model in Appendix 2 (L327-334), and also in Discussion (L519-524).

      Importantly, even when cell division processes seem purely stochastic, different states in some traits might underlie these variations in generation times. In such cases, evaluating h(x) and S_rel[X] can still unravel the correlations between the trait values and fitness. In particular, the relative selection strength S_rel[X] := S_KL^((1))[X]/S_KL^((1))[D] extracts the correlation of the trait values at a given level of division count heterogeneity in each condition. We now clarify this important aspect of the framework in Discussion (L524-526).

      When a cell population is composed of heterogeneous subpopulations, each of which follows a distinct statistical rule, our framework evaluates the combined effects of the heterogeneous rules and the inherent stochasticity of each subpopulation. Untangling these two contributions is generally challenging unless we have appropriate markers for distinguishing the subpopulations. However, when the subpopulations follow significantly distinct statistics, the division count distribution should become skewed or multimodal, and the difference between the two selection strength measures S_KL^((2))[D] - S_KL^((1))[D] can suggest the existence of such subpopulations. Therefore, detailed analyses using all the selection strength measures and the fitness landscapes can provide insights into cell populations' internal structures and selection.

      We now explain the effect of inherent stochasticity in generation times (L327-334 and L519-524) and discuss how we can probe the existence of subpopulations based on the selection strength measures (L508-512). Please also refer to our reply to the comment 3 of reviewer #1.

      2) In one of the sections the authors mention having performed analytical calculations for a cellular population in which cells divide with gamma-distributed uncorrelated interdivision times. It's unclear if 1) within specific sub-populations, cells within the sub-population divide with the same division time, and the distribution of division times is due to the diverse distribution of sub-populations; or 2) if there are no such sub-populations and all cells stochastically choose division times from the same distribution irrespective of their past lineage. If the latter, then I do not see the need for a lineage-based mathematical formulation when the problem can be dealt with in much simpler traditional ways which do not keep track of lineages.

      We dealt with the situation of 2) in this model. As noted by the reviewer, we can calculate the chronological and retrospective mean fitness and the population growth rate by a simpler individual-based age-structured population model (see ref. 10, for example). However, applying this framework to this model can clarify the utility of the cumulant generating function, the meaning of the differences between these fitness measures, and the effect of statistical properties of intrinsic stochasticity on long-term growth rate and selection. Therefore, we kept this model in Appendix 2 (the section is moved from Supplemental Information) with additional clarification of our motivation for analysis and the implication of the results.

      3) The analytical calculations provided seem to be exact only for trajectories of almost infinite duration (or in practice, duration much greater than typical interdivision time). For example, if the observation time is of the order of division time, this would create significant artifacts / artificial bias in the weights of lineages depending on whether the cell was able to divide within the observation time or not. Thus, the results claiming that contributions of higher order cumulants become significant in the regrowth from a late stationary phase are questionable, especially since authors note that 90% of cells showed no divisions within the observation time.

      We thank the reviewer for an insightful comment. It is true that the duration of observation influences the results. In the regrowing experiments with E. coli, we aimed to compare the two cell populations regrowing from different stages of the stationary phase. Therefore, it is appropriate to fix the time windows between the two conditions. Even though a significant fraction of cell lineages remains undivided, the regrowing cells already divide several times within this time window. Therefore, the results are valid if we compare and discuss the selection levels in this time scale. However, clarification of the selection in the longer time scales requires a more detailed characterization of lag time distributions under both conditions.

      We now clarify the range of validity of the results and the limitations on prediction for the long-term selection without knowing the details of the lag time distributions in Discussion (L536-539).

    1. “Are you alive, or not? Is there nothing in your head?”

      Moment of zombie-ness: The questioning of something if they are between life and death. Eyes are often noted as the windows to the soul and they give off a lot of different emotions. "Dead" eyes, eyes that "light up", "kind" eyes, etc.

    1. For the record, this article worked for me. I started experiencing the no-refresh issue a few weeks ago. No other solutions I found seemed to work, but this worked! How to Fix it When the Desktop/File Explorer Won't Automatically Refresh in Windows 10/8/7 | scootercomputers. Basically do the following: delete all the files inside %AppData%\Microsoft\windows\recent\automaticdestinations, delete all the files inside %AppData%\Microsoft\windows\recent\customdestinations, then restart your PC.

      This solution seems to work for fixing the File Explorer refresh problem.
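      The same cleanup can be sketched in Python (the function names and the dry_run flag are mine, not from the article; it assumes the standard %AppData% layout the article names):

      ```python
      import os
      from pathlib import Path

      def jump_list_dirs(appdata: str) -> list:
          """Return the two Recent-items folders named in the fix."""
          base = Path(appdata) / "Microsoft" / "Windows" / "Recent"
          return [base / "AutomaticDestinations", base / "CustomDestinations"]

      def clear_jump_lists(appdata: str = os.environ.get("APPDATA", ""),
                           dry_run: bool = True):
          """Delete every file in both folders; with dry_run=True, only list them."""
          touched = []
          for d in jump_list_dirs(appdata):
              if d.is_dir():
                  for f in d.iterdir():
                      if f.is_file():
                          touched.append(f)
                          if not dry_run:
                              f.unlink()
          return touched  # restart the PC afterwards, as the article says
      ```

      Running clear_jump_lists(dry_run=False) performs the deletion; the default dry run only lists what would be removed, which is a safer first step on someone else's machine.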

    1. Years of grappling with the ripple effects of Broken Windows policing have shown us that arrests are not the way to deal with minor offenses, like riding your bike on the sidewalk, having an open container of alcohol, smoking marijuana, or jumping a turnstile. An uptick in enforcement would reverse the recent positive trend of fewer fare evasion arrests. Through October, police have made 5,236 arrests for fare evasion. That is still 5,236 arrests too many, but it represents a 66 percent drop compared to the same period last year.

      Not prosecuting crimes is a positive trend, apparently. This disregards how NYC transformed itself in the 90s and 00s under the leadership of Mayors Giuliani and Bloomberg, and how that success was maintained at least when former Mayor de Blasio wisely chose William Bratton as NYPD Commissioner.

    1. learn enough about the culture of a society so that you understand the way it operates, and then to look for windows of opportunity to alter the way the game is played, so that you can introduce, consistent with the culture and beliefs, new rules and norms

      "Window of opportunity" is a popular concept in political science used to describe how policymakers promote their policies at specific times

    1. Inside Edition's treatment of the subject also calls to mind the notorious "broken windows" theory of policing, which posits that evidence of unaddressed minor criminal activity signals to would-be criminals that cops will tolerate more serious crimes, too—and therefore that cracking down on things like turnstile jumping, graffiti, and public urination will prevent such crimes from occurring in the first place. The broken-windows theory was pioneered by former New York City Transit Police commissioner Bill Bratton in the early 1990s, and became the city's dominant law-enforcement philosophy after newly elected mayor Rudy Giuliani promoted Bratton to NYPD commissioner in 1993. There is, in other words, a gross history in New York City associated with the stigmatization of fare beating; it will probably not surprise you to learn that although the efficacy of broken-windows policing is, at best, debatable, its discriminatory impact on low-income people and communities of color is not.

      Broken windows policing is the "notorious" theory which drove New York City's revival under former Mayor Giuliani in the 1990s.

    1. very easy to ruin the MBR sector of the drive, making it impossible to boot up again. Then you'll either need to create a recovery USB drive with Windows or Linux and try to repair the MBR, or completely wipe the drive and reinstall the operating system

      todo

    1. In Europe glass was used in windows and lanterns, but the Chinese used paper for this purpose.

      On the choice of window materials, Europe used glass while China used paper.

    1. My Content: A popular feature for business customers, now available for personal use, My Content is a central location to view and access all your content – created by you or shared with you – regardless of where it’s stored. Available next month to all users on the web and Windows.

      Maybe this is what I've been waiting for for years: search integration across all Microsoft products.

    1. A presentist mediascape may prevent the construction of false and misleading narratives by elites who mean us no good, but it also tends to leave everyone looking for direction and responding or overresponding to every bump in the road

      Social media sites such as Facebook are terrible for this. Post a comment someone does not like and before you know it your windows are put through.

    1. What fixed my issue was: remove node-sass from package.json, run npm install, then install it again at the latest version via npm install --save-dev node-sass. If you find this helpful, I assume you just need to upgrade your node-sass to the latest version, because the older release uses an older node-gyp.
    1. "I thought WSL ran as root in Windows" ... ABSOLUTELY NOT! Do you think we're crazy? ;) When opened normally, your Bash instances are launched with standard Windows user rights. If you want to edit your Windows hosts file, you must do so from an elevated Bash instance ... though only do this with enormous care - any other script you run in the same elevated Bash Console will also get admin rights to the rest of your machine!!
    1. Descriptive statistics were used to describe the prevalence of food insecurity and participants' characteristics. The chi-square test of independence was used to determine the bivariate associations of food insecurity and sociodemographic variables. Whenever the number in any cell was < 5 in a 2 × 2 contingency table, Fisher's exact test was used. The difference between food-secure and food-insecure students on health-related parameters was analyzed using the independent t-test for data that passed the normality test and the Mann–Whitney U test for those that did not. To model the association of health and academic outcomes (i.e., BMI, perceived stress, disordered eating behaviors, sleep quality, and self-reported GPA) and food security status, multiple logistic regressions were used. These models were adjusted for variables found to be significant in the bivariate analyses (i.e., Pell grant status, parental education, place of residence, and meal plan status) and variables known to affect outcome measures (age, sex, university, and employment status) based on previous literature [6, 19, 43, 44]. Results from these regression models were reported as odds ratios and 95% confidence intervals. All analyses were conducted using IBM SPSS Statistics for Windows, version 24 (Armonk, NY). Statistical significance was determined at P < 0.05.

      discussion of steps taken for data analysis
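      The decision rules described above (chi-square unless a 2 × 2 cell count falls below 5, in which case Fisher's exact test; independent t-test unless normality fails, in which case Mann–Whitney U) can be sketched in Python with SciPy in place of SPSS. The function names, the choice of Shapiro–Wilk as the normality test, and the thresholds are illustrative assumptions, not the study's actual code:

```python
import numpy as np
from scipy import stats

def assoc_test(table):
    """Bivariate association test: chi-square test of independence,
    falling back to Fisher's exact test when any cell of a 2x2
    contingency table holds fewer than 5 observations."""
    table = np.asarray(table)
    if table.shape == (2, 2) and (table < 5).any():
        _, p = stats.fisher_exact(table)
        return "fisher", p
    chi2, p, _, _ = stats.chi2_contingency(table)
    return "chi2", p

def group_diff(secure, insecure, alpha=0.05):
    """Food-secure vs food-insecure comparison: independent t-test if
    both samples pass a Shapiro-Wilk normality check (an assumed choice
    of normality test), Mann-Whitney U otherwise."""
    normal = (stats.shapiro(secure).pvalue > alpha
              and stats.shapiro(insecure).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(secure, insecure).pvalue
    return "mann-whitney", stats.mannwhitneyu(secure, insecure).pvalue
```

      The paragraph's logistic-regression step would sit on top of this, with the covariates it lists entered as adjustment terms.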

    1. Author Response

      Reviewer #1 (Public Review):

      Figures 2 through 6. There is no description of the relationship between the findings and the anatomical location of the electrodes (other than distal versus local). Perhaps the non-uniform distribution of electrodes makes these analyses more complicated and such questions might have minimal if any statistical power. But how should we think about the claims in Figures 2-6 in relationship to the hippocampus, amygdala, entorhinal cortex, and parahippocampal gyrus? As one example question out of many, is Figure 2C revealing results for local pairs in all medial temporal lobe areas or any one area in particular? I won't spell out every single anatomical question. But essentially every figure is associated with an anatomical question that is not described in the results.

      To address the reviewer’s point we now report the distribution of spike-LFP pairs across anatomical regions for each of Figures 2-6. The results split by anatomical regions are reported in Figure 2 – figure supplement 7, Figure 3 – figure supplement 7, Figure 4 – figure supplement 1, Figure 5 – figure supplement 2, and Figure 6 – figure supplement 3. We also calculated a non-parametric Kruskal-Wallis test to statistically examine the effect of anatomical region on the results shown in each figure. Generally, these new results show that the effects are similar across regions, apart from two exceptions (i.e. Figure 4 – supplement 1; and Figure 5 – supplement 2). However, we would like to stress that these results should be taken with a huge grain of salt because, as the reviewer correctly points out, the electrodes were not evenly distributed across regions (i.e. ~75% of observations pertain to the hippocampus) or across patients. This leads to sometimes very low numbers of observations per region, and it is difficult to disentangle whether any apparent differences are driven by regional differences or by differences between patients. Detailed results are reported below.

      Manuscript lines 207-212: “In the above analysis all MTL regions were pooled together to allow for sufficient statistical power. Results separated by anatomical region are reported in Figure 2 – figure supplement 7 for the interested reader. However, these results should be interpreted with caution because electrodes were not evenly distributed across regions and patients making it difficult to disentangle whether any apparent differences are driven by actual anatomical differences, or idiosyncratic differences between patients.”

      Manuscript lines 255-258: “Finally, we report the distal spike-LFP results separated by anatomical region in Figure 3 – figure supplement 7, which did not reveal any apparent differences in the memory related modulation of theta spike-LFP coupling between regions.”

      Manuscript lines 264-266: “PSI results separated by anatomical regions are reported in Figure 4 – figure supplement 1, which revealed that the PSI results were mostly driven by within regional coupling.”

      Manuscript lines 399-303: “We also analyzed whether the memory-dependent effects of cross-frequency coupling differ between anatomical regions (see Figure 5 – figure supplement 2). This analysis revealed that the results were mostly driven by the hippocampus, however we urge caution in interpreting this effect due to the large sampling imbalance across regions.”

      Manuscript lines 343-346: “As for the above analysis we also investigated any apparent differences in co-firing between anatomical regions. These results are reported in Figure 6 – figure supplement 3 and show that the earlier co-firing for hits compared to misses was approximately equivalent across regions.”
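      The per-figure region check described above can be sketched as a Kruskal-Wallis test over region-grouped observations. This is a hypothetical reconstruction with SciPy; the region labels, the minimum-count filter, and the function name are assumptions, not the authors' code:

```python
import numpy as np
from scipy import stats

def region_effect(values_by_region, min_n=5):
    """Kruskal-Wallis H-test for an effect of anatomical region on a
    per-pair measure (e.g. peak SFC frequency). Sparsely sampled
    regions (fewer than min_n observations) are dropped first, since
    electrodes were not evenly distributed across regions."""
    groups = [np.asarray(v, dtype=float)
              for v in values_by_region.values() if len(v) >= min_n]
    if len(groups) < 2:
        return None  # not enough well-sampled regions to compare
    h, p = stats.kruskal(*groups)
    return h, p
```

      As the response stresses, a non-significant H here cannot fully separate regional effects from patient effects when one region dominates the sample.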

      Figure 1

      1A. I assume that image positions are randomized during a cued recall?

      Yes, that was the case. We now added that information in the methods section.

      Manuscript lines 526: “Image positions on the screen were randomized for each trial.”

      What was the correlation between subjects' indication of how many images they thought they remembered and their actual performance?

      We did not log how many images the patients thought they remembered. Specifically, if the patients answered that they remembered at least one image, then they were shown the selection screen where they could select the appropriate images. Therefore, we cannot perform this analysis. We report this now in the methods section. However, albeit interesting, the results of such an analysis would not affect the main conclusions of our manuscript.

      Manuscript lines 523-524: “The experimental script did not log how many images the patient indicated that they thought to remember.”

      1B. Chance is shown for hits but not misses. I assume that hits are defined as both images correct and misses as either 0 or 1 image correct. Then a chance for misses is 1-chance for hits = 5/6. It would be nice to mark this in the figure.

      Done as suggested (see Figure 1).

      The authors report that both incorrect was 11.9%. By chance, both incorrect should be the same as both correct, hence also 1/6 probability, hence the probability of both incorrect seems quite close to chance levels, right?

      Yes, that is correct, however, across sessions the proportion of full misses (i.e. both incorrect) was significantly below chance (t(39)=-1.9214; p<0.05). Nevertheless, the proportion of fully forgotten trials appears to be higher than expected purely by chance. This is likely driven by a tendency of participants to either fully remember an episode, or completely forget it, as demonstrated previously in behavioural work (Joensen et al., 2020; JEP Gen.). We report this now in the manuscript.

      Manuscript lines 132-136: “Across sessions the proportion of full misses (i.e. both incorrect) was significantly below chance (t39=-1.92; p<0.05). However, the proportion of fully forgotten trials appears to be higher than expected purely by chance. This is likely driven by a tendency of participants to either fully remember an episode, or completely forget it, as demonstrated previously in behavioral work (25).”
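      The chance levels discussed here follow from simple combinatorics, assuming the cued-recall test requires selecting the 2 studied images out of 4 candidates (an assumed task layout, chosen because it reproduces the quoted 1/6 chance level):

```python
from math import comb

def chance_levels(n_options=4, n_targets=2):
    """Guessing probabilities when picking n_targets images out of
    n_options candidates. n_options=4, n_targets=2 is an assumed
    layout that reproduces the 1/6 chance level in Figure 1."""
    total = comb(n_options, n_targets)      # equally likely selections
    p_hit = 1 / total                       # both images correct
    p_miss = 1 - p_hit                      # 0 or 1 image correct
    # both incorrect: both picks drawn from the non-target images
    p_both_wrong = comb(n_options - n_targets, n_targets) / total
    return p_hit, p_miss, p_both_wrong
```

      With the defaults this gives p_hit = 1/6, p_miss = 5/6, and p_both_wrong = 1/6, matching the reviewer's point that "both incorrect" has the same 1/6 chance level as "both correct".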

      1C. How does the number of electrodes relate to the number of units recorded in each area?

      The distribution of neurons per region is shown in the new Figure 1D (see above). It approximately matches the distribution of electrodes per region, except for the Amygdala where slightly more neurons were recorded. This is because of one patient (P08) who had two electrodes in the left and right Amygdala and who contributed a lot of sessions (i.e. 9 sessions, compared to an average of 4.44 per patient).

      Line 152. The authors state that neural firing during encoding was not modulated by memory for the time window of interest. This is slightly surprising given that other studies have shown a correlation between firing rates and memory performance (see Zheng et al Nature Neuroscience 2022 for a recent example). The task here is different from those in other studies, but is there any speculation as to potential differences? What makes firing rates during encoding correlate with subsequent memory in one task and not in another? And why is the interval from 2-3 seconds more interesting than the intervals after 3 seconds where the authors do report changes in firing rates associated with subsequent performance? Is there any reason to think that the interval from 2-3 seconds is where memories are encoded as opposed to the interval after 3 seconds?

      Zheng et al. used a movie-based memory paradigm where they manipulated transitions between scenes to identify event cells and boundary cells. They show that boundary cells, which made up 7.24% of all recorded MTL cells, but not event cells (6.2% of all MTL cells), modulate their firing rate around an event depending on later memory. There are quite a few differences between Zheng et al’s study and our study that need to be considered. Most importantly, we did not perform a complex movie-based memory paradigm as in Zheng et al. and therefore cannot identify boundary cells, which would be expected to show the memory-dependent firing rate modulation. This alone could contribute to the fact that no significant differences in firing rates in the first second following stimulus onset were observed. Such an absence of a difference in neural firing depending on later memory is not unprecedented. In their seminal paper, Rutishauser et al. (2010; Nature) report no significant differences in firing rates (0-1 seconds after stimulus onset, which is similar to our 2-3 seconds time window) between later remembered and later forgotten images. This finding is also in line with Jutras & Buffalo (2009; J Neurosci) who likewise show no significant difference in firing rates of hippocampal neurons during encoding of remembered and forgotten images.

      The 2-3 seconds time interval, which corresponds to 0-1 seconds after the onset of the two associate images, is special because it marks the earliest time point where memory formation can start, therefore allowing us to investigate these very early neural processes that set the stage for later memory-forming processes. While speculative, these early processes likely capture the initial sweep of information transfer into the MTL memory system which arguably is reflected in the timing of spikes relative to LFPs. It is conceivable that these initial network dynamics reflect attentional processes, which act as a gate keeper to the hippocampus (Moscovitch, 2008; Can J Exp Psychol) and thereby set the stage for later memory forming processes. This interpretation would be consistent with studies in macaques showing that attention increases spike-LFP coupling, whilst not affecting firing rates (Fries et al., 2004; Science). We modified the discussion section to address this issue.

      Manuscript lines 468-474: “Interestingly, these early modulations of neural synchronization by memory encoding were observed in the absence of modulations of firing rates, which is consistent with previous results in humans (16) and macaques (12), but contrasts with (43). Studies in macaques showed that attention increases spike-LFP coupling whilst not affecting firing rates (44). It is therefore conceivable that these initial network dynamics reflect attentional processes, which act as a gate keeper to the hippocampus and thereby set the stage for later memory forming processes (45).”

      Lines 154-157 and relationship to the subsequent analyses. These lines mention in passing differences in power in low-frequency bands and high-frequency bands. To what extent are subsequent results (especially Figures 3 and 4) related to this observation? That is, are the changes in spike-field coherence, correlated with, or perhaps even dictated by, the changes in power in the corresponding frequency bands?

      To address this question we repeated the analysis that we performed for SFC for power in those channels whose LFP was locally coupled to spikes in gamma, and distally coupled to spikes in theta. Furthermore, we correlated the difference in peak frequency between hits and misses between power and SFC. If power dictated the effects seen in SFC, then we would expect similar memory effects in power as in SFC, that is, an increase of peak frequency for hits compared to misses for gamma and theta. Furthermore, we would expect to find a correlation between the peak frequency differences in power and SFC. None of these scenarios were confirmed by the data. These results are now reported in Figure 2 – figure supplement 5 for gamma, and Figure 3 – figure supplement 5 for theta.

      Manuscript lines 195-199: “We also tested whether a similar shift in peak gamma frequency as observed for spike-LFP coupling is present in LFP power, and whether memory-related differences in peak gamma spike-LFP are correlated with differences in peak gamma power (Figure 2 – figure supplement 5). Both analyses showed no effects, suggesting that the effects in spike-LFP coupling were not coupled to, or driven by similar changes in LFP power.”

      Manuscript lines 248-253: “As for gamma, we also tested whether a similar shift in peak theta frequency is present in LFP power, and whether there is a correlation between the memory-related differences in peak theta spike-LFP and peak theta power (Figure 3 – figure supplement 5). Both analyses showed no effects, suggesting that the effects in spike-LFP coupling were not coupled to, or driven by similar changes in LFP power.”
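      The control analysis described here reduces to two steps: find each spectrum's peak frequency within a band, then correlate the hit-minus-miss shifts in spike-LFP coupling with the corresponding shifts in power across pairs. A hypothetical sketch, where the band limits and function names are assumptions:

```python
import numpy as np
from scipy.stats import pearsonr

def peak_frequency(freqs, spectrum, band=(30.0, 100.0)):
    """Peak frequency of a spectrum (SFC or LFP power) within a band;
    the 30-100 Hz gamma band is an assumed choice for illustration."""
    freqs = np.asarray(freqs)
    spectrum = np.asarray(spectrum)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

def shift_correlation(sfc_shifts, power_shifts):
    """Correlate hit-minus-miss peak-frequency shifts in spike-LFP
    coupling with the corresponding shifts in power; a null result is
    what argues against power driving the SFC effect."""
    return pearsonr(sfc_shifts, power_shifts)
```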

      Do local interactions include spike-field coherence measurements from the same microwire (i.e., spikes and LFPs from the same microwire)?

      Yes, they do. Out of the 53 local spike-LFP couplings found for the gamma frequency range, 11 (20.75%) were from pairs where the spikes and LFPs were measured on the same microwire. We assume that the reviewer is asking this question because of a concern that spike interpolation may introduce artifacts which may influence the spectrograms and consequently the spike-LFP coupling measures. This was also pointed out by Reviewer #2. To address this concern, we split the data based on whether the spike and LFP providing channels were the same or different. The results show that (i) the spectrogram of SFC is highly similar between the two datasets, with a prominent gamma peak present in both and no significant differences between the two; (ii) restricting the analysis to those data where the LFP and spike providing channels are different replicated the main finding of faster gamma peak frequencies for hits compared to misses; and (iii) limiting the SFC analysis further to only ‘silent’ channels, i.e. channels where no SUA/MUA activity was present at all, also replicated the main finding of faster gamma peak frequencies for hits compared to misses.

      These analyses suggest that the SFC results were not driven by spike interpolation artefacts.

      Manuscript lines 199-203: “To rule out concerns about possible artifacts introduced by spike interpolation we repeated the above analysis for spike-LFP pairs where the spike and LFP providing channels are the same or different, and for ‘silent’ LFP channels (i.e. channels where no SUA/MUA activity was detected; see Figure 2 – figure supplement 6).”
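      A minimal sketch of the kind of spike removal at issue, assuming linear interpolation over a fixed window around each spike time (the window length and function name are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

def interpolate_spikes(lfp, spike_idx, half_win=25):
    """Replace the LFP around each spike with a straight line between
    the samples flanking a +/- half_win sample window. This is the
    style of spike removal whose residual broadband artifacts the
    reviewers ask about; half_win is an assumed parameter."""
    clean = np.asarray(lfp, dtype=float).copy()
    for s in spike_idx:
        lo = max(s - half_win, 0)
        hi = min(s + half_win, len(clean) - 1)
        # endpoints are read before the segment is overwritten
        clean[lo:hi + 1] = np.linspace(clean[lo], clean[hi], hi - lo + 1)
    return clean
```

      Comparing SFC between channels that did and did not need this step, as in the supplement, is one way to check for interpolation artifacts.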

      60 Hz. It has always troubled me deeply when results peak at 60 Hz. This is seen in multiple places in the manuscript; e.g., Figures 2B, 2E. What are the odds that engineers choosing the frequency for AC currents would choose the exact same frequency that evolution dictated for interactions of brain signals? This is certainly not the only study that reports interesting observations peaking at 60 Hz. One strong line of argument to suggest that this is not line noise is the difference between conditions. For example, in Figure 2B, there is a difference between local and distal interactions. It is hard for me to imagine why line noise would reveal any such difference. Still ...

      The frequency for AC currents in Europe is 50 Hz, not 60 Hz as in the US. Therefore, our SFC effects are well outside the range of the notch.

      Figure 6. I was very excited about Figure 6, which is one of the most novel aspects of this study. In addition to the anatomical questions about this figure noted above, I would like to know more. What is the width of the Gaussian envelope?

      The width of the Gaussian window used in the original results was 25 ms. We chose this time window because in our view it represents a good balance between integrating over a long-enough time window, and thus allowing for some jitter in neural firing between pairs of neurons, whilst still being temporally specific. Finding the right balance here is not trivial because a too short time window underestimates co-firing, and a too long time window may not provide the temporal specificity necessary to detect co-firing lags (Cohen & Kohn, 2011; Nat Neurosci). To test whether this choice critically affected our results, we repeated the analysis for different window sizes, i.e. 15, 35, and 45 ms. The results show that the pattern of results did not change, with hits showing earlier peaks in co-firing compared to misses. Critically, the difference in co-firing peaks was significant for all window sizes, except for the shortest one, which presumably is due to the increase in noise because of the smaller time window over which spikes are integrated. These issues are now mentioned in the methods section, and the results for the different window sizes are reported in Figure 6 – figure supplement 4.

      Manuscript lines 346-347: “The co-firing analyses were replicated with different smoothing parameters (see Figure 6 – figure supplement 4).”

      Manuscript lines 894-898: “We chose this time window because it should represent a good balance between integrating over a long-enough time window and thus allowing for some jitter in neural firing between pairs of neurons, whilst still being temporally specific (57). To test whether this choice critically affected our results, we repeated the analysis for different window sizes, i.e. 15, 35, and 45 ms (see Figure 6 – figure supplement 4).”
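      The co-firing measure discussed above (Gaussian-smoothed spike trains, cross-correlated to find the lag of maximal co-firing) can be sketched as follows. The sampling rate, array format, and function name are assumptions for illustration, and the text's 25 ms "width" is treated here as the Gaussian sigma, which is also an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import correlate

def cofiring_lag(spikes_up, spikes_down, sigma_ms=25.0, fs=1000):
    """Cross-correlate Gaussian-smoothed spike trains (binary arrays
    sampled at fs Hz) and return the lag, in ms, of maximal
    co-firing. Positive lag = down-stream unit fires later."""
    sigma = sigma_ms * fs / 1000.0                    # ms -> samples
    a = gaussian_filter1d(np.asarray(spikes_up, float), sigma)
    b = gaussian_filter1d(np.asarray(spikes_down, float), sigma)
    xc = correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(b))             # in samples
    return lags[np.argmax(xc)] * 1000.0 / fs          # back to ms
```

      A wider sigma tolerates more spike-time jitter but blurs nearby lags, which is the trade-off the response describes.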

      Are these units on the same or different microwires?

      All units used for the analysis shown in Figure 6 come from different microwires. This was naturally the case because the putative up-stream neuron was distally coupled to the theta LFP, and the putative down-stream neuron was locally coupled to gamma at this same theta LFP electrode. This information is listed in Figure 6 – source data 1 which lists the locations and electrode IDs for all neuron pairs shown in figure 6.

      How do the spike latencies reported here depend on the firing rates of the two units?

      To address this question we first tested whether firing rates (averaged across the putative up-stream and down-stream neurons) differ between hits and misses. If they do, this would be suggestive of a dependency of the spike latency differences between hits and misses on firing rates. No such difference was observed (p>0.3). Second, we correlated the differences between hits and misses in Co-firing peak latencies with the differences in firing rates. Again, no significant correlation was observed (R=-0.06; p>0.7), suggesting that firing rates had no influence on the observed differences in co-firing latencies. These control analyses are now reported in the main text.

      Manuscript lines 347-350: “No significant differences in firing rates between hits and misses were found (p>0.3), and no correlation between firing rates and the co-firing latencies was obtained (R=-0.06; p>0.7), suggesting that firing rates had no influence on the observed co-firing differences between hits and misses.”

      What do these results look like for other pairs that are not putative upstream/downstream pairs?

      As we reported in the original manuscript in lines 352-355, we did not find a memory dependent effect on co-firing latencies if we select neuron pairs solely on the basis of distal theta SFC. Within this analysis the distally theta coupled neuron would be the up-stream neuron and the neuron recorded at the site where the theta LFP is coupled would be the down-stream neuron. This null result suggests that the down-stream neurons need to be coupled to a local gamma rhythm in order for the memory effect on co-firing latencies to emerge. However, within this previous analysis there is still a notion of up-stream and down-stream neurons because neuron pairs were selected based on distal theta phase coupling. We therefore repeated this analysis for all pairs of neurons in a completely unconstrained fashion, such that all possible pairs of neurons that were recorded from different electrodes were entered into the co-firing analysis. This analysis also revealed no difference in co-firing lags, neither for positive lags nor for negative lags. Instead, what this analysis showed is a tendency for hits to show a higher occurrence of simultaneous or near-simultaneous firing, which is in line with Hebbian learning. These results are now reported in Figure 6 – figure supplement 1.

      Manuscript lines 333-335: “In addition, a completely unconstrained co-firing analysis where all possible pairings of units were considered also showed no systematic difference in co-firing lags between hits and misses (Figure 6 – figure supplement 1).”

      Reviewer #2 (Public Review):

      Roux et al. investigated the temporal relationship between spike field coherence (SFC) of locally and distally coupled units in the hippocampus of epilepsy patients to successful and unsuccessful memory encoding and retrieval. They show that SFC to faster theta and gamma oscillations accompany hits (successful memory encoding and retrieval) and that the timing of the SFC between local and distal units for hits comports well with synaptic plasticity rules. The task and data analyses appear to be rigorously done.

      Strengths: The manuscript extends previous work in the human medial temporal lobe which has shown that greater SFC accompanies improved memory strength. The cross-regional analyses are interesting and necessary to invoke plasticity mechanisms. They deploy a number of contemporary analyses to disentangle the question they are addressing. Furthermore, their analyses address limitations or confounds that can arise from various sources like sample size, firing rates, and signal processing issues.

      Weaknesses:

      Methodological:

      The SFC coherence measures are dependent in part on extracting LFPs derived from the same or potentially other electrodes that are contaminated by spikes, as well as multiunit activity. In the methods, they cite a spike removal approach. Firstly, the incomplete removal or substitution of a signal with a signal that has a semblance to what might have been there if no spike was present can introduce broadband signal time-locked to the spike and create spurious SFC. Can the authors confirm that such an artifact is not present in their analyses? Secondly, how did they deal with the removal of the multiunit activity? It would be suspected that the removal of such activity, in light of refractory period violations, might be more difficult than for well-isolated units, and introduce artifacts and broadband power, again spuriously elevating SFC. Conversely, the lack of removal of multiunit activity would almost surely introduce significant broadband power. One way around this might be, since it is uncommon to have units on all 8 of the BF microwires, to exclude the microwire(s) with the units when extracting the LFP to avoid the need to perform spike removal.

      The reviewer raises a valid concern which we address as follows. Firstly, an artefact introduced into SFC by linear interpolation would be a problem for those local SFCs where the spike providing channel and the LFP providing channel are identical. Out of the 53 local spike-LFP couplings found for the gamma frequency range, only 11 (20.75%) were from pairs where the spikes and LFPs came from the same microwire. It is unlikely that this minority of data would have driven the results. Furthermore, it is unlikely that the interpolation would introduce a frequency shift of SFC that is memory dependent, because the interpolation is more likely to cause a general increase in broadband SFC (as opposed to having a frequency band specific effect). However, to address this concern, we split the data based on whether the spike and LFP providing channels were the same or different. The results show that (i) the spectrogram of SFC is highly similar between the two datasets, with a prominent gamma peak present in both and no significant differences between the two; and (ii) restricting the analysis to those data where the LFP and spike providing channels are different replicated the main finding of faster gamma peak frequencies for hits compared to misses.

      Secondly, we followed the reviewer’s suggestion and repeated the SFC analysis for ‘silent’ microwires, i.e. microwires where no single or multi-units were detected. This analysis replicated the same memory effects as observed in the analysis with all microwires. Specifically, we found an increase in the local gamma peak SFC frequency for hits compared to misses, as well as an increase in distal theta peak SFC frequency for hits compared to misses. These results are reported in the main manuscript and in Figure 2 – figure supplement 6 for gamma, and figure 3 – figure supplement 6 for theta.

      Manuscript lines 199-203: “To rule out concerns about possible artifacts introduced by spike interpolation we repeated the above analysis for spike-LFP pairs where the spike and LFP providing channels are the same or different, and for ‘silent’ LFP channels (i.e. channels where no SUA/MUA activity was detected; see Figure 2 – figure supplement 6).”

      Manuscript lines 253-255: “We also repeated the above analysis for spike-LFP pairs by only using ‘silent’ LFP channels (i.e. channels where no SUA/MUA activity was detected; see Figure 3 – figure supplement 6) to address possible concerns about artefacts introduced by spike interpolation.”

      In a number of analyses the spike train is convolved with a Gaussian in places with a window length of 250ms and in others 25ms. It is suspected that windows of varying lengths would induce "oscillations" of different frequencies, and would thus generate results biased towards the window length used. Can the authors justify their choices where these values are used, and/or provide some sensitivity analyses to show that the results are somewhat independent of the window length of the Gaussian used to convolve with the times series.

      The different choices in window length for the Gaussian convolution reflect the different needs of the two analyses where these convolutions were applied. In one analysis we wanted to get a smooth estimate of spike densities that we can average across trials, similar to a peri-stimulus spike histogram. For this analysis we used a window length of 250 ms, which we found appropriate to yield a good balance between retaining smooth time courses and still being temporally sensitive. Importantly, for the statistical analysis of the firing rates, spike densities were averaged in much larger time windows than 250 ms (i.e. 1 – 2 seconds), therefore our choice of window length for spike densities would not have any bearing on the averaged firing rate analysis.

      In the other analysis, which is more central for our manuscript, we used a cross-correlation between spike trains to estimate co-firing lags in the range of milliseconds. Therefore, this analysis necessitated a much higher temporal precision. We used a Gaussian window with a width of 25 ms because it represents a good balance between integrating over a long-enough time window and thus allowing for some jitter in neural firing between pairs of neurons, whilst still being temporally specific. Finding the right balance here is not trivial because a too short time window would be prone to noise and underestimate co-firing, whereas a too long time window may not provide the temporal specificity necessary to detect co-firing lags (Cohen and Kohn, 2011; Nat Neurosci). To test whether this choice critically affected our results, we repeated the analysis for different window sizes, i.e. 15, 35, and 45 ms. The results show that the basic pattern of results did not change, with hits showing earlier peaks in co-firing compared to misses. Critically, the difference in co-firing peaks was significant for all window sizes, except for the shortest one which is likely due to the increase in noise because of the smaller time window over which spikes are integrated. These issues are now mentioned in the methods section, and the results for the different window sizes are reported in Figure 6 – figure supplement 4.

      Manuscript lines 346-347: “The co-firing analyses were replicated with different smoothing parameters (see Figure 6 – figure supplement 4).”

      Manuscript lines 894-898: “We chose this time window because it should represent a good balance between integrating over a long-enough time window and thus allowing for some jitter in neural firing between pairs of neurons, whilst still being temporally specific (57). To test whether this choice critically affected our results, we repeated the analysis for different window sizes, i.e. 15, 35, and 45 ms (see Figure 6 – figure supplement 4).”

      Conceptual:

      The co-firing analyses are very interesting and novel. In table S1 are listed locally and distally coupled neurons. There are some pairs, for example, where the distally coupled neuron is in EC and the downstream one in the hippo, and then there is a pair that is the opposite of this (dist: hippo, local EC). There appear to be a number of such "reversals"; despite the delay between these two regions one would assume them to be similar in sign and magnitude given the units are in the same two regions. It seems surprising that in two identical regions of the hippo the flow of information or "causality" could be reversed, when/if one assumes information flows through the system from EC to hippo. This seems unusual and hard to reconcile given what is known about how information flows through the MTL system.

      The reviewer is correct that the spike co-firing analysis suggests a bi-directional flow of information between the hippocampus and surrounding MTL regions (e.g. entorhinal cortex; see Figure 6 – figure supplement 3). However, this bi-directional flow of information is not incompatible with neuroanatomy and the memory literature. The entorhinal cortex serves as an interface between the hippocampus and the neocortex with superficial layers providing input into the hippocampus (via the perforant pathway), and the deeper layers receiving output from the hippocampus (van Strien et al., 2009; Nat Rev Neurosci). Therefore, on a purely anatomical basis we can expect to see a bi-directional flow of information between the hippocampus and the entorhinal cortex, albeit in different layers. Importantly, reversals as shown in our Figure 6 – source data 1 involved different microwires and therefore different neurons (i.e. the entorhinal unit in row 1 was recorded from microwire 3, whereas the entorhinal unit in row 2 was recorded from microwire 8). It is conceivable that these two neurons correspond to different layers of the entorhinal cortex and therefore reflect input vs. output paths. Moreover, studies in humans demonstrated that successful encoding of memories depends not only on the input from the entorhinal cortex into the hippocampus, but also on the output of the hippocampal system into the entorhinal cortex, and indeed on the dynamic recurrent interaction between these input and output paths (Maass et al. 2014; Nat Comms; Koster et al., 2018; Neuron). Our bi-directional couplings between hippocampal and surrounding MTL regions (such as the EC) are in line with these findings. We have added a discussion of this issue in the discussion section.

      Manuscript lines 447-452: “Notably, the neural co-firing analysis indicates a bidirectional flow of information between the hippocampus and surrounding MTL areas, such as the entorhinal cortex (see Figure 6 – figure supplement 3; Figure 6 – source data 1). This result parallels other studies in humans showing that successful encoding of memories depends not only on the input from surrounding MTL areas into the hippocampus, but also on the output of the hippocampal system into those areas, and indeed on the dynamic recurrent interaction between these input and output paths (43, 44).”

Full-Text Search Step 1: To add a comment by searching the full text of your library, first place the cursor where you’d like the text to appear… In this case, we’ve inserted a blank comment into a Google Doc (pro tip: use ALT-CMD-M on a Mac or CTRL-ALT-M on Windows).

      test test

I had to do 13 steps just to get to the point where I can start writing. To make matters worse, I often compulsively close terminal windows after I am done. This is a process that gets repeated every single time I feel like writing. It’s actually a huge barrier to entry. Sure, they are trivial tasks, but they take up a whole lot of mental energy. Living the good life is not all about money; it’s about avoiding stupid things like having to perform a tedious 30-second routine just to get started working on what you want to do.
    1. Note that fixing a deleted ESP requires re-creating not just the ESP itself, but the boot loader(s) that it used to contain. Such a repair will require the use of an emergency boot disk, like an Ubuntu disk in its "try before installing" mode. You'll also need to restore all the boot loader files. In the case of a Windows/Ubuntu dual-boot, this means recovering both OS's boot loaders. To simplify this task, I strongly recommend backing up the ESP. A file-level backup (using tar, cp, or similar tools) should be sufficient.
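As a sketch of such a file-level backup, here is the idea in Python's `tarfile` (standing in for the `tar` command; it assumes the ESP is already mounted, e.g. at `/boot/efi`):

```python
import tarfile

def backup_esp(mount_point="/boot/efi", archive="esp-backup.tar.gz"):
    """File-level backup of the EFI System Partition contents.

    Assumes the ESP is already mounted at `mount_point` (the usual
    location on Ubuntu is /boot/efi). Restoring is the reverse:
    extract the archive onto a freshly created, FAT-formatted ESP.
    """
    with tarfile.open(archive, "w:gz") as tar:
        # arcname="." stores paths relative to the ESP root, so the
        # archive can be extracted directly onto a new partition
        tar.add(mount_point, arcname=".")
    return archive
```

The equivalent one-liner with the `tar` command itself would archive the mount point's contents the same way; the point is that a plain file copy of the ESP is enough, since FAT carries no special metadata worth imaging.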
Yep, the easiest thing is to create a second EFI partition: https://pop.system76.com/docs/dual-booting-windows/ It is still possible to encrypt your popOS ext4 partition using LUKS and LVM, though this won't just work out of the box.
There he created new stained-glass windows of Christ's life, death, resurrection, and ascension. West's experience reflected where Jesus stood as a visual presence in revolutionary America: portrayals of him came from outsiders, and if one wanted to gain notoriety for it, one pretty much had to leave.

Do visual representations allow people to feel closer because they can see a physical appearance?

    1. The meanings of words and the kinds ofsense a sentence makes are rarely stable.The transparent theory of language assumes the opposite— that wordsare more like clear windows opening to a meaning that can be separatedfrom language. It also assumes that the meanings of words are obvious andself-evident.

      I chose this quote, as well as the following few sentences, because words and their meanings are all relative. Many words do, in fact, have multiple meanings which vary depending on context. Their meaning is not so fixed as to claim that it exists beyond the realm of language, but rather, the system of language is in place to assign meanings to words.

    1. This is what I am thinking of, years later, standing in front of my classroom when the alarms go off. My students ask What do we do. I want to say, I do not know. I say Get away from the windows.

      climax

    1. Author Response

      Reviewer #1 (Public Review):

      Dotov et al. took joint drumming as a model of human collective dynamics. They tested interpersonal synchronization across progressively larger groups composed of 1, 2, 4 and 8 individuals. They conducted several analyses, generally showing that the stability of group coordination increases with group numerosity. They also propose a model that nicely mirrors some of the results.

      The manuscript is very clear and very well written. The introduction covers a lot of relevant literature, including animal models that are very relevant in this field but often ignored by human studies. The methods cover a wide range of distinct analyses, including modelling, giving a comprehensive overview of the data. There are a few small technical differences across the experiments conducted with small vs. large groups, but I think this is to some extent unavoidable (yet, future studies might attempt to improve this). Furthermore, the currently adopted model accounts well for behaviors where all individuals produce a similar output and therefore are "equally important". However, it might be interesting to test to what extent this can be generalized to situations where each individual produces a distinct sound (as in a small orchestra) and therefore might selectively adapt to (more clearly) distinguishable individuals.

      We agree that this is important. We discuss this in a new section (4.1) at the end of the discussion. We suggest that heterogeneity makes it possible for other modes of organization to compete with the attractive tendency towards the global average. We also point out that factors such as individual skill, task difficulty, delays, and selective attention enable such heterogeneity in the ensemble.

      Similarly, it would be interesting to test to what extent the current results (and model) can be generalized to interactions that more strongly rely on predictive behavior (as there is not much to predict here given that all participants have to drum at a stable, non-changing tempo).

We can only speculate that the present results are less relevant to interactions that rely strongly on predictive behavior, as behaviour in our simple task could be modeled well by our hybrid single-oscillator Kuramoto model. We inserted the idea that the presence of a group rhythm can diminish the demands for individuals to predict each other’s notes at the end of paragraph 1, page 27.

      An important implication of this study is that some well-known behaviors typically studied in dyadic interaction might be less prominent when group numerosity increases. I am specifically referring to "speeding up" (also termed "joint rushing") and "tap-by-tap error correction" (Wolf et al., 2019 and Konvalinka et al., 2010, also cited in the manuscript, are two recent examples). I am not sure whether this depends on how the data is analyzed (e.g. averaging the behavior of multiple drummers), yet this might be an important take-home message.

      Thank you for the suggestion. We edited to emphasize that the relevant part of the analysis of the drumming data was performed at the individual level and using the same methods as typically done in dyadic tapping (first sentences of Section 2.7.2). Speeding up was the only variable where we used group-averages. For consistency, and to avoid confusion, in the present version we re-did the stats (the changed statistical parameters are highlighted) and figures using the individual data points and we did not observe major changes.

      I am confident that this study will have a significant impact on the field, bringing more researchers close to the study of large groups, and generally bridging the gap between human and animal studies of collective behavior.

      Reviewer #2 (Public Review):

      In this manuscript Dotov et al. study how individuals in a group adjust their rhythms and maintain synchrony while drumming. The authors recognize correctly that most investigation of rhythm interaction examines pairs (dyads) rather than larger groups despite the ubiquity of group situations and interactions in human as well as non-human animals. Their study is both empirical, using human drummers, and modeling, evaluating how well variations of the Kuramoto coupled-oscillator describe timing of grouped drummers. Based on temporal analyses of drumming in groups of different sizes, it is concluded that this coupled oscillator model provides a 'good fit' to the data and that each individual in a group responds to the collective stimulus generated by all neighbors, the 'mean field'.

I have concerns about 1) the overall analysis and testing in the study and about 2) specific aspects of the model and how it relates to human cognition. Because the study is largely empirical, it would be most critical for the authors to propose two - or more - alternative hypotheses for achieving and maintaining synchrony in a group. Ideally, these alternatives would have different predictions, which could be tested by appropriate analyses of drummer timing. For example, in non-human animals, where the problem of rhythm interaction in groups has been examined more thoroughly than in humans, many acoustic species organize their timing by attending largely to a few nearby neighbors and ignoring the rest. Such 'selective attention' is known to occur in species where dyads (and triads) keep time with a Kuramoto oscillator, but the overall timing of the group does not arise from individual responses to the mean field. Can this alternative be evaluated in the drumming data? Would this alternative fit the drumming data as well as, or better than, the mean field, 'wisdom of the crowd' model?

      These are very important points. The present paper is restricted to a simple task where participants are instructed to synchronize with each other. However, we now more explicitly acknowledge the limitations of our study and include a new section, “Beyond the group average” at the end of the Discussion that is dedicated to this issue and discussed other organizing tendencies that are particularly relevant in larger and more diverse ensembles. In the context of the present task, the relative difference between local and global interactions was likely negligible because of the small differences in timing, from 4 to 16 ms, between the closest and most distant pairs.

      It will be interesting in future studies to introduce acoustic heterogeneity by varying the timbre of the instruments, for example. In the present study, the instruments had the same timbre with narrowly varying fundamental frequencies (117-129 Hz in the duets/quartets and 249-284 Hz in the octets), a situation that encourages integration of all the acoustic information. We do point out that the present approach needs to be expanded to be able to account for competitive pressure and selective attention.

      The well-known Vicsek model (discussed briefly in paragraph 2, page 15), related to the Kuramoto under certain assumptions, can account for a variety of dynamic behaviors in flocking animals. The ability for selective attention in the form of a heterogeneous coupling matrix, combined with the existence of competitive pressure in the form of negative coupling terms can result in spontaneous formation of clusters and spatiotemporal patterns of movement. This is consistent with prior research in chorusing animals (insects and anurans). Large musical ensembles also involve groupings of instruments such as separate sections that change their relative loudness across time. Typically these are not spontaneous but composed and conducted, yet they may satisfy the same constraints.

      We also pointed out that we see these as complementary organizing principles. Even in the Vicsek model, there is a notion of a ‘local order parameter’ whereby individuals are coupled to a group average within a narrow interaction radius. The relative importance of other organization tendencies depends on the layout of the acoustic environment and the competitive and collaborative aspects of the task. Hence, parameters such as delay and individual heterogeneity could act as symmetry breaking terms that enable different stabilities from the basic global group synchrony.

A second concern arises from relying on a hybrid, continuous-pulsed version of the Kuramoto coupled oscillator. If the human drummers in the test could only hear but not see their neighbors, this hybrid model would seem appropriate: each drummer only receives sensory input at the exact moment when a neighbor's drumstick strikes the drum. But the drummers see as well as hear their neighbors, and they may be receiving a considerable amount of information on their neighbors' rhythms throughout the drum cycle. Can this potential problem be addressed? In general, more attention should be paid to the cognitive aspects of the experiment: What exactly do the individual drummers perceive, and how might they perceive the 'mean field'?

This is all very relevant. We instructed participants to focus on X’s in the centers of their drums and not look at their peers (edited to mention this at the end of Section 2.4, page 9). Additionally, the pattern of results for tempo change, cross-correlations, and variability in the dyadic condition was consistent with previous studies that involved purely auditory tapping tasks (emphasized at the beginning of paragraph 2, page 26). The best way to address this limitation would be to repeat the study and block the visual contact among participants, as well as include a condition emphasizing visual contact.

It is beyond the scope of the present paper to make model-based predictions of effects of coupling and information availability, but this should be done in future work. For the present paper, we now include a simulation involving continuous coupling (end of Section 2.9.2, page 16, and Supplementary Figure 8A), which fails to reproduce the results for variability; those results are well captured by the hybrid continuous-pulsed model we developed (see the Supplementary Materials).
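For intuition, the mean-field coupling at the heart of the Kuramoto model discussed above can be sketched in a few lines. This is a plain, continuously coupled version with illustrative parameters, not the authors' hybrid continuous-pulsed model: each oscillator is pulled toward the instantaneous group average, and for coupling above the critical value the phase coherence r approaches 1.

```python
import numpy as np

def kuramoto_order(n=8, k=2.0, sd_omega=0.2, dt=0.01, steps=4000, seed=1):
    """Euler-integrate n Kuramoto phase oscillators coupled to the
    group mean field; returns the final phase coherence r in [0, 1].
    All parameter values are illustrative, not fitted to the data."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(2 * np.pi, sd_omega, n)   # natural frequencies (rad/s)
    theta = rng.uniform(0, 2 * np.pi, n)         # random initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()            # mean field r * exp(i * psi)
        # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i),
        # equivalent to (K/n) * sum_j sin(theta_j - theta_i)
        theta += dt * (omega + k * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())

print(round(kuramoto_order(), 2))  # coupling well above threshold: r near 1
```

Setting `k` to 0 leaves the oscillators drifting at their own tempos; the contrast between the two regimes is the basic attractive tendency toward the group average that the manuscript builds on.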

      Reviewer #3 (Public Review):

      The contribution provides approaches to understanding group behaviour using drumming as a case of collective dynamics. The experimental design is interestingly complemented with the novel application of several methods established in different disciplines. The key strengths of the contribution seem to be concentrated in 1) the combination of theoretical and methodological elements brought from the application of methods from neurosciences and psychology and 2) the methodological diversity and creative debate brought to the study of musical performance, including here the object of study, which looks at group drumming as a cultural trait in many societies.

      Even though the experimental design and object of study do not represent an original approach, the proposed procedures and the analytical approaches shed light on elements poorly addressed in music studies. The performers' relationships, feedbacks, differences between solo and ensemble performance and interpersonal organization convey novel ideas to the field and most probably new insights to the methodological part.

It must be mentioned that the authors accepted the challenge of leaving the nauseatic no-frills dyadic tests and tapping experiments in the direction of more culturally comprehensive (and complex) setups. This represents a very important strength of the paper and greatly improves the communication with performers and music studies, which have been affected by the poor impact of predictable non-musical experimental tasks (that can easily generate statistically significant measurements). More specifically, the originality of the experiment-analysis approach provided a novel framework to observe how the axis from individual to collective unfolds in interaction patterns. In particular, the emergence of mutual prediction in large groups is quite interesting, although similar results might be found elsewhere.

      Thank you for these comments.

      On another side, important issues regarding the literature review, experimental design and assumptions should be addressed.

      I miss an important part of the literature that reports similar experiments under the thematic framework of musical expressivity/expression, groove, microtiming and timing studies. From the participatory discrepancies proposed in 1980's Keil (1987) to the work of Benadon et al (2018), Guy Madison, colleagues and others, this literature presents formidable studies that could help understand how timing and interactions are structured and conceptualized in the music studies and by musicians and experts. (I declare that I have no recent collaborations with the authors I mentioned throughout the text and that I don't feel comfortable suggesting my own contributions to the field). This is important because there are important ontological concerns in applying methods from sciences to cultural performances.

      Thank you for the suggestions. We included a brief discussion in the newly added “Beyond the group average” section at the end of the Discussion, specifically the first paragraph, pages 27-8. We think that expressive timing naturally fits in continuation with the other reviewers’ concerns about how much the idea of the group average generalizes to real musical situations. By design and instruction, we stripped individual expression from the present task. Specific cultural contexts and performance styles may want to escape or at least expressively tackle this constraint of our task, and we believe that now that we have established the mean field as one factor affecting group behaviour, further studies can take on the challenge of developing models that make predictions in more complex situations closer to real musical interactions – and testing those models empirically.

One ontological issue is that cultural phenomena differ from, for example, animal behaviour. For example, the authors consider timing and synchrony in a way that does not comply with cultural concepts: p.4 "Here we consider a musical task in which timing consistency and synchrony is crucial". A large part of the literature mentioned above and evidence found in ethnographic literature indicate that the ability to modulate timing and synchrony-asynchrony elements is part of explicit cultural processes of meaning formation (see, for example, Lucas, Glaura and Clayton, Martin and Leante, Laura (2011) 'Inter-group entrainment in Afro-Brazilian Congado ritual.', Empirical Musicology Review, 6 (2), pp. 75-102). Without these idiosyncrasies, what you listen to can't be considered a musical task in context and lacks basic expressivity elements that represent musical meaning on different levels (see, for example, Swanwick's work about layers/levels of musical discourse formation).

Indeed, this is an important issue. We often use cultural phenomena merely as motivation but do not dive into the relevant details. Here, in addition to the previous discussion, we now reiterate that the tendency towards the group average is one organizing tendency but there are additional ones, enabled by individual heterogeneity and context. For example, marching bands and chanting crowds probably impose different constraints than individual artistic expression by skillful musicians.

Such plain ideas about the ontology of musical activities (e.g. that musical practice is oriented by precision or synchrony) generate superficial constructs such as precision priority, dance synchrony, imaginary internal oscillators, strict predictive motor planning that are not present in cultural reports, excepting some cultures of classical European music based on notation and shaped by industrial models. The lack of proper cultural framing of the drumming task might also have induced the authors to instruct the participants to minimize "temporal variability" (musical timing) and maintain the rate of the stimulus (musical tempo), even though these limiting tasks mostly form part of musical training in only some societies (examples of social drumming in non-western societies barely represent isochronous tempo or timing in any linguistic or conceptual way). The authors should examine how this instruction impacts the validity of results that describe the variability, since it was affected by imposed conditions and might have limited the observed behaviour. The reporting of the results in the graphs must also allow the diagnosis of the effect of timing in such small time-frame windows of action.

      We agree totally. We made changes and tried to be more specific about the cultural framing, delineating contexts where the present ideas are more relevant and where they are less relevant, or at least incomplete (the bottom of page 3, and pages 27-8).

    1. . I sit at truck-stop diners, drinking cup after cup of coffee. I have something sweet. Pancakes. Or pie. Or cake. Then more coffee until I can bear to go back out again and devour the miles. Windows open and the road screaming past.

      This paragraph has a very poetic sort of flow to it. The author has us move through one of these long drives smoothly. The short fragmented segments referencing different sweets at the diner interrupt that flow with short staccato beats before returning to the norm after.

  5. Sep 2022
    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

In this section we list all the comments made by the three referees and our corresponding actions.

      Regarding Reviewer #1:

1. On "mechanical control": The authors show changes in circadian power fraction with changes in YAP and with cytoskeletal inhibitors, but there are no properly-controlled experiments that directly perturb mechanics. The authors show a correlation between YAP nuclear/cytoplasmic ratio and circadian power, but YAP N/C alone is not a readout of mechanotransduction, per se. The authors have shown two different experiments where cells are cultured on a stiff (30kPa) substrate and soft substrate (300Pa), but they do not show a direct comparison of YAP nuclear localization and circadian power under these two conditions in the same experiment. Direct, controlled perturbation of mechanical cues is necessary to support the title's use of the phrase "mechanical control."

      We agree with the referee that further mechanical perturbations could strengthen our conclusions. In our original manuscript we directly controlled the mechanical environment by culturing cells on substrates of 300Pa and 30kPa in stiffness. These differences in stiffness were not sufficient to drive changes in circadian power fraction and YAP localisation, as depicted in Fig. 3C (we note that the direct comparison requested by the referee is shown in that figure). We hypothesise that this negative result is due to a very low “rigidity threshold” or to secretion of extracellular matrix that stiffens the initially soft substrate. In any case, we plan to strengthen the “mechanical control” message of our paper with one or more of the below experiments:

A) We will measure circadian power fraction and YAP localisation in even more extreme stiffness/adhesion conditions, using 300 Pa and 30 kPa polyacrylamide gels with a different fibronectin coating protocol, as described in Elósegui-Artola et al., 2017. This allows a much finer control of the concentration of fibronectin coated, so we can reach low enough levels to compromise cell adhesion to the substrate and cross below the threshold that would lead to cytosolic localisation of YAP. We will perform this experiment in presence of the FUD peptide, which inhibits matrix deposition (Tomasini-Johansson et al., 2001; this peptide has already been tested in our lab).

B) We will use the approach described in Fig. 2E to compare the circadian power fraction in cells spread in stadium-shaped islands of 2,400 µm² and 1,200 µm². Oakes et al., 2014 already showed that traction forces exerted by 3T3 fibroblasts depend on the size of the spread area of the cells, so we expect differences in mechanotransduction that should affect YAP localisation and, if our hypothesis is correct, the RevVNP circadian oscillations.

      C) We will abolish the physical connection between the actin cytoskeleton and the nucleus by disrupting the LINC complex via the overexpression of a dominant negative (DN) nesprin-1 KASH domain (Lombardi et al., 2011). The plasmid designed for the inducible overexpression of the DN KASH domain, originally tested in NIH3T3 cells (Mayer et al., 2019), is available in our lab and has been used to prove that uncoupling cytoskeleton and nucleus leads to nuclear YAP decrease in single cells (Kechagia at al., 2022). We will aim to increase the circadian power fraction in low density cells upon the overexpression of the DN KASH domain.

      Elosegui-Artola A, Andreu I, Beedle AEM, Lezamiz A, Uroz M, Kosmalska AJ, Oria R, Kechagia JZ, Rico-Lastres P, Le Roux AL, et al (2017) Force Triggers YAP Nuclear Entry by Regulating Transport across Nuclear Pores. Cell 171: 1397-1410.e14

      Kechagia Z, Sáez P, Gómez-González M, Zamarbide M, Andreu I, Koorman T, Beedle AEM, Derksen PWB, Trepat X, Arroyo M, et al (2022) The laminin-keratin link shields the nucleus from mechanical deformation and signalling Cell Biology

      Lombardi ML, Jaalouk DE, Shanahan CM, Burke B, Roux KJ & Lammerding J (2011) The Interaction between Nesprins and Sun Proteins at the Nuclear Envelope Is Critical for Force Transmission between the Nucleus and Cytoskeleton*. Journal of Biological Chemistry 286: 26743–26753

      Mayer CR, Arsenovic PT, Bathula K, Denis KB & Conway DE (2019) Characterization of 3D Printed Stretching Devices for Imaging Force Transmission in Live-Cells. Cel Mol Bioeng 12: 289–300

      Oakes PW, Banerjee S, Marchetti MC & Gardel ML (2014) Geometry regulates traction stresses in adherent cells. Biophysical Journal 107: 825–833

      Tomasini-Johansson BR, Kaufman NR, Ensenberger MG, Ozeri V, Hanski E & Mosher DF (2001) A 49-Residue Peptide from Adhesin F1 of Streptococcus pyogenes Inhibits Fibronectin Matrix Assembly*. Journal of Biological Chemistry 276: 23430–23439

      2. On "via YAP/TAZ": In addition to above, it is necessary to show that the changes in Circadian power fraction induced by mechanical cues in fact require YAP/TAZ signaling. Thus, an experiment comparing soft (300Pa) substrate with Stiff (30kPa) substrate in the presence or absence of YAP/TAZ is necessary to state that YAP and TAZ are the mechanistic mediators of mechanical cues on the clock.

We are currently generating a YAP1/TAZ double mutant via CRISPR knockout and shRNA silencing. We plan to use this cell line in conditions where YAP is prominently nuclear (low density on stiff substrates) with the purpose of rescuing the RevVNP circadian power fraction.

      1. While the TEAD-binding domain mutant experiment is elegant, to claim that TEAD is the transcriptional mediator, it must be demonstrated that this mutant indeed fails to induce TEAD-mediated transcription. This could be simply executed by demonstrating that the CCD mutant expresses reduced CTGF and Cyr61 (for example), compared to the 5SA, under these conditions. Further, endogenous YAP is still active and available to bind to TEAD in this system, which should be discussed.

      We plan to carry out quantitative real-time PCR of CTGF and Cyr61 in all the YAP mutants and the control. Regarding the presence of endogenous YAP, we will clarify in the text that a) the overexpression of the different YAP mutants was done in high-density conditions, where endogenous YAP is significantly less localised in the nucleus, and that b) the levels of the exogenous YAP are much higher (we already have western blots showing this).

1. In Figure 3a: The cell perimeter needs to be shown either by actin staining or by brightfield images. The manual marking of cell boundaries is insufficient, specifically because the drugs used in this experiment affect the cytoskeleton. It would be very helpful to see this via actin staining or at least with brightfield images.

      The cell perimeter was drawn based on the cytosolic YAP immunostaining, whose levels are high enough to infer the cell shape (higher resolution images can be attached if necessary). As stated in the manuscript, the YAP nuclear-to-cytosolic ratio is calculated using two adjacent areas of identical size, one inside the nucleus and the other one just outside (see Materials and Methods/Immunostainings), so the exact cell shape is irrelevant for this particular quantification.
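The quantification described here reduces to comparing mean staining intensities in two equal-sized regions, which can be sketched in a few lines (hypothetical masks and toy values; not the authors' analysis code):

```python
import numpy as np

def yap_nc_ratio(img, nuc_mask, cyto_mask):
    """Nuclear-to-cytosolic YAP ratio from mean staining intensity in
    two adjacent, equal-sized regions: one inside the nucleus and one
    just outside it. Note the exact cell outline is never needed."""
    return img[nuc_mask].mean() / img[cyto_mask].mean()

# toy 2x2 image: nuclear signal (top row) twice the cytosolic signal
img = np.array([[2.0, 2.0], [1.0, 1.0]])
nuc = np.array([[True, True], [False, False]])
print(yap_nc_ratio(img, nuc, ~nuc))  # prints 2.0
```

This makes the rebuttal's point concrete: since only the two region masks enter the computation, the precision of the drawn cell perimeter is irrelevant to the reported ratio.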

      Regarding Reviewer #2:

      Effects on the circadian clock

1. The authors use the fluorescent reporter created by Nagoshi from sections of the Rev-erbα gene. This reporter is widely used to estimate relative circadian timing in individual cells, but it does not provide direct information on circadian clock activity. In other words, while Rev-erbα rhythmic expression is driven by the clock, it is not known whether less-rhythmic or non-rhythmic expression, or a change in expression level, of Rev-erbα affects the core clock. For example, it has been shown that Rev-erbα knock-down cells are rhythmic as long as Rev-erbβ is present. Thus, one major shortcoming of the current version of the manuscript is the missing dissection between Rev-erbα rhythmicity/expression and the circadian clock. More concretely, it remains unclear whether the change in Rev-erbα expression is a direct effect or caused by a defective clock. Since the authors presume a direct effect of YAP/TAZ on Rev-erbα expression, the former is likely. If that is the case, the data could be interpreted as showing that (missing) mechanical stimuli can lead to nuclear YAP/TAZ, which raises the level of Rev-erbα (and maybe interferes with its rhythmic accumulation). Beyond Rev-erbα expression, there may or may not be an effect on the circadian clock (core clock, CCGs). With the current version we do not know, since the authors do not look beyond Rev-erbα expression. Thus, the claims on the circadian clock or circadian rhythms in their cells are not supported in this version of the manuscript. The current version is still very interesting and provides insights into Rev-erbα modulation, but additional work would be needed to show links with the core clock machinery. For this the authors could show the influence (or at least correlation) of the YAP/TAZ/REV-ERBα phenotype on the oscillations of core clock genes or clock-controlled genes, either through the use of alternative (ideally constitutive) reporters (e.g. PER2, BMAL1, fluorescent or LUC), and/or by analyzing RNA/protein of core clock genes or output genes. This would not be necessary for all experiments, but at least for some where it is possible (e.g. experiments with drug perturbations). Otherwise, any claim like "YAP/TAZ perturbs the circadian clock ..." or "the circadian clock deregulation in nuclear YAP-enriched cells" is potentially flawed and has to be removed/reformulated.

We agree with the reviewer. In order to understand if the core clock is affected, beyond REV-ERBA, by YAP/TAZ expression and localisation, we plan to perform the two experimental approaches explained below. For both of them we will use high-density cells with and without YAP-5SA overexpression, since the other conditions (drugs, micropatterned cells, low density) may not render enough cells for analytical approaches that are not based on fluorescence microscopy (real-time qPCR or luminescence recordings). Also, the potential results obtained with YAP-5SA overexpression will be more informative regarding the causality between YAP and the circadian clock than those using the other conditions described in the manuscript.

      1. We will use NIH3T3 bmal1::luc cells (already generated in our lab with the pABpuro-BluF plasmid; https://www.addgene.org/46824/) and an adapted microscopy-based system to track bioluminescence. We will need to give our cells a synchronisation shock since the single-cell signal with this reporter is too low and noisy to perform single-cell tracking.
2. We will check during 48 hours, every 4 hours, the mRNA levels of Bmal1, Clock, Cry1, Per2, Yap1 and Rev-erbα via quantitative real-time PCR. As in A), we will need to synchronise our cells prior to RNA collection. In case the expression of the other components of the clock is not affected by YAP-5SA overexpression, we will modify the message of our manuscript to emphasize the role of REV-ERBA. As the referee mentions (and we thank them for that comment), finding that the modulation of Rev-erbα is mechano-sensitive and dependent on YAP/TAZ signalling would still be very relevant, given the role of this factor in metabolism, inflammation, mitochondrial activity, or Alzheimer’s disease, as discussed in lines 231-235 in the manuscript.
      2. The authors aim to discard the possibility of paracrine signals by showing no increase in circadian power fraction of cells growing in low density with conditioned medium (Figure 2D). A paracrine signal coming from an oscillatory system is likely to oscillate, and in that case I do not see how growing cells in constant conditioned medium can discard the effects of an oscillatory paracrine signal. I believe the elegant experiment shown in Figure 2E more precisely addresses this issue.

      The reviewer is right in the sense that paracrine coupling of circadian oscillators would require a circadian paracrine signal, as shown in Finger et al., 2021, and that we provide sufficient experimental evidence of a mechanics-driven rather than paracrine-driven control of the RevVNP circadian oscillations. Specifically, by using micropatterning (Fig. 2E) and gap closure (Fig. 2A) we show that cells sharing the same medium, and hence the same paracrine signals, are able to display acute differences in RevVNP expression. The experiment with conditioned medium, a traditional technique used in papers in the field such as Noguchi et al., 2013, was performed to rule out the possibility that secreted factors, even if not circadian, could ultimately impact the circadian clock of low-density cells. We will rephrase the manuscript to stress this reasoning.

      Finger AM, Jäschke S, del Olmo M, Hurwitz R, Granada AE, Herzel H & Kramer A (2021) Intercellular coupling between peripheral circadian oscillators by TGF-β signaling. Science Advances 7

      Noguchi T, Wang LL & Welsh DK (2013) Fibroblast PER2 circadian rhythmicity depends on cell density. Journal of Biological Rhythms 28: 183–192

      Data analysis methodology:

      1. Single-cell circadian recordings like the ones analyzed here are characterized by noisy amplitude and non-sinusoidal waveforms with fluctuating period (Bieler et al., 2014; Feillet et al., 2014). The authors interpolate, smooth, detrend and normalize their data; operations that are known to introduce spectral artifacts that can mislead the interpretation of the power spectrum. Moreover, the time-series pre-processing operations described by the authors in the methods section are incomplete, and the authors should more explicitly describe all their operations with the exact methods applied, filter parameters and time-window sizes (if applicable). To validate their pre-processing steps the authors could provide their time-series analysis pipeline code and/or provide a few examples of raw versus pre-processed data together with their respective spectra before and after pre-processing. In addition, the authors could provide their raw trace signal data together with the corresponding post-processed signal data as plain text files.

      In our response to the reviewers, we will address this point exactly as requested. We will rewrite our methods section to better explain our analysis pipeline, clarifying that we do not apply detrending, that we only rarely resort to interpolation of missing points, and stating the specifics of the standard low-pass filter we apply. We will then strengthen Supplementary Figure 1 with more examples of raw and processed data, and will provide raw trace signal data together with the corresponding processed data to illustrate our approach.
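
      To make this concrete, the gap-filling and low-pass filtering described above can be sketched as follows. This is a minimal illustration in Python; the filter order, cutoff and sampling interval are placeholders, not the exact parameters of the study's pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trace(t, y, dt_hours=0.5, cutoff_hours=6.0):
    """Fill isolated missing points and low-pass filter one single-cell trace.

    t: sample times in hours on a uniform grid; y: intensities, possibly
    containing NaNs. The 6 h cutoff and filter order 2 are illustrative
    placeholders, not the study's actual parameters.
    """
    y = np.asarray(y, dtype=float).copy()
    # Rarely needed: linearly interpolate isolated missing samples.
    nans = np.isnan(y)
    if nans.any():
        y[nans] = np.interp(t[nans], t[~nans], y[~nans])
    # Standard low-pass Butterworth filter; filtfilt applies it
    # forward and backward so no phase shift is introduced.
    nyquist = 0.5 / dt_hours                 # cycles per hour
    wn = (1.0 / cutoff_hours) / nyquist      # normalized cutoff frequency
    b, a = butter(2, wn)
    return filtfilt(b, a, y)
```

      A filter of this kind attenuates high-frequency imaging noise while leaving the ~24 h component essentially untouched, which is why it does not bias the circadian band of the spectrum.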

      2. The authors rely on Fourier analysis and a reasonable self-made definition of circadian strength named "circadian power fraction". Using a stationarity-based method on noisy non-stationary data can lead to inaccurate spectral power estimations. As the current version of the manuscript does not provide any alternative/complementary analysis method, nor is any raw signal data available, it is unclear if their analysis appropriately represents the circadian power. The authors could consider implementing complementary data-analysis strategies to validate their conclusions. Fortunately, there are multiple suitable data analysis strategies already available that are designed exactly for this kind of data (e.g. Price et al., 2008; Leise et al., 2012; Leise, 2013; Bieler et al., 2014; Mönke et al., 2020). This time-series analysis step is crucial, as all main results of this manuscript rely on the authors' self-made definition of circadian power. This is particularly important as there is no standardized method in the circadian field to estimate circadian rhythmicity and/or circadian power of single-cell traces.

      We will take this point into consideration by running a complementary analysis of our data with one of the methods recommended by the reviewer. Our choice is pyBOAT, as presented in Mönke et al. (2020), because on first inspection its implementation of the wavelet method appears to be the most suitable for our dataset type. If we find that our time-series are too short for these methods we will use the RAIN algorithm (Thaben and Westermark, 2014) instead.

      Mönke G, Sorgenfrei FA, Schmal C & Granada AE (2020) Optimal time frequency analysis for biological data - pyBOAT. bioRxiv

      Thaben PF & Westermark PO (2014) Detecting Rhythms in Time Series with RAIN. J Biol Rhythms 29: 391–400

      3. The authors mainly show circadian power fraction and analyze rhythmicity scores/powers. Is there a chance that a rise in the basal expression level of Rev-erbα is reducing the rhythmicity score? Or, to phrase it otherwise, the absolute amplitude may remain the same, but the relative amplitude may be reduced? Would that affect the FT analysis power score? To clarify this the authors could provide an analysis of the relative amplitude in addition to the circadian intensity (as in Fig. 1C).

      Our analysis pipeline subtracts the mean signal from each cell's intensity-time trace, and then divides each trace by its standard deviation. This procedure eliminates any bias due to the basal expression level of Rev-erbα. We will address this point by clarifying the methods section and providing examples in Supplementary Figure 1 of raw data with high and low basal levels, showing their pre- and post-processed spectra.
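
      The reason this normalization removes basal-level bias can be sketched in a few lines. This is illustrative only: the 20-28 h band and the FFT details below are assumptions for the example, not necessarily the exact definition used in the manuscript:

```python
import numpy as np

def circadian_power_fraction(y, dt_hours=0.5, band_hours=(20.0, 28.0)):
    """Fraction of non-DC spectral power falling in the circadian band.

    The trace is z-scored (mean subtracted, divided by the standard
    deviation) before the FFT, so the metric is insensitive to the
    basal expression level and to the overall signal amplitude.
    The 20-28 h band is an illustrative choice, not the paper's.
    """
    y = np.asarray(y, dtype=float)
    z = (y - y.mean()) / y.std()
    power = np.abs(np.fft.rfft(z)) ** 2
    freqs = np.fft.rfftfreq(len(z), d=dt_hours)   # cycles per hour
    power[0] = 0.0                                # discard residual DC term
    in_band = (freqs >= 1.0 / band_hours[1]) & (freqs <= 1.0 / band_hours[0])
    return power[in_band].sum() / power.sum()
```

      Adding any constant offset to `y` leaves the z-scored trace, and hence the power fraction, unchanged, which directly addresses the reviewer's concern about raised basal levels.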

      Minor points by text-line:

      1. YAP and TAZ should be introduced to the reader during the introduction.
      2. "by set a of proteins".
      3. Here the authors probably meant that cells were not reset nor entrained during the experiment.
      4. "..expression depends on..". This is a correlation; no proof of causation has been shown up to this point. This is an overstatement.
      5. Using the term "provoked" suggests a causal relationship not shown.
      6. Similarly, the last sentence "This result established.... is caused..". Again, this is an overstatement as only correlation is shown.
      7. According to their description the authors are not using any image-preprocessing steps, e.g. background subtraction or other filters. Is this correct?
      8. It is not clear what image metric the authors are using for the single-cell signals, e.g. integrated nuclear intensity or mean/median nuclear intensity. I am not familiar with TrackMate, but it might be possible to export and share with the readers the image-analysis pipeline used, which would clarify any questions about image processing and signal extraction.

      We thank the reviewer for pointing out all these minor points. We will address each one of them to make the paper clearer.

      Regarding Reviewer #3:

      The authors state in lines 163-165: 'This striking anticorrelation reveals that the robustness of the Rev-erbα circadian expression depends on the nucleocytoplasmic transport of YAP and its mechanosensitive regulation'. Although interesting, the data in figure 3 to which this statement refers is, as the authors identify, correlative, rather than causative. I would strongly suggest altering this statement to better reflect the data.

      We will modify the text to eliminate this overstatement.

      It looks to me as though all experiments were carried out in the same clonal reporter 3T3 line. To avoid possible issues with founder effects, I would ask that the authors repeat the initial experiment in figure 1B, and the associated analysis as in 1C-E with a different clonal 3T3 line. Hopefully this will not be very arduous, as the methods suggest that multiple clonal 3T3 reporter lines were made initially. With time to defrost, plate, record and analyse the data, I would hope that this would not take more than six weeks maximum.

      We will perform the experiments regarding the cell density effect on the RevVNP oscillations (Fig. 1) in another clonal 3T3 line as the reviewer suggests. We have already initiated the experimental repeats with the alternative clone.

      I would note that the custom software used for analysis does not appear to be generally available. I would assume that the authors would make this available upon request.

      We will extend the explanation of our method as suggested by Reviewer #2 and make the code available to the community.

      Experiments appear to have been adequately replicated in terms of n. However, the robustness of these findings would be supported though use of a different clonal reporter line, as discussed above.

      We will solve this problem as stated above.

      Statistical analysis is generally appropriate. I would suggest including statistical analysis in figures 3B and S4B to demonstrate that the pharmacological treatments are indeed having a statistically significant effect on the MAL and YAP nuclear/cytoplasmic ratio.

      We will perform the corresponding statistical analysis on those data.

      For Figure 4, it is not stated which statistical tests have been used, with only P values given in table S1. Please state which test has been used.

      We will specify the statistical test used in the figure legend.

      Furthermore, it would be valuable to see if it is possible to perform statistical analysis looking at the populations should in Figure 4A, to either support or refute the statement made in Line 189-90 that 'we overexpressed 5SA-S94A-YAP, a mutant version of YAP unable to interact with TEAD and observed that the cells recovered, to a large extent, both the RevVNP circadian power fraction and the REV-ERBα basal levels displayed by the wild-type high-density population'

      The p-values corresponding to that dataset are reported in Table S1, but we will move them to the figure legend so that the extent of the differences between the YAP mutants and the control becomes more noticeable. This also applies to the next comment of the reviewer.

      Additionally, it is a little unclear to me why exact p values are reported in table S1. It seems that they might be better placed in the relevant figure legend.

      Minor comments:

      Although the authors took good care to try to ensure that there was minimal phase synchrony between cells, it would be good to see some analysis to confirm that these efforts were successful. This is of particular concern, given that many things that commonly happen during cell handling, such as temperature change and media change, even with conditioned media, can act to synchronise cells. Hopefully, this information should be available from your existing analysis.

      All our experiments except the gap closure ones (which imply an unavoidable medium shock when the gasket in which the cells are cultured to achieve high density is removed) are carried out in a similar way (see Materials and Methods). This approach avoids the typical shock of serum, dexamethasone, or other hormones, because we want to exclude biochemical signalling that could mask the "pure" effect of mechanics on the pathways that affect the circadian clock. In any case, a certain level of synchrony should not affect our analysis, since it is single cell-based and considers not the phase but the strength of the circadian frequency. Nevertheless, as requested by the reviewer, we will analyze the phase signal and report the results if relevant to the project.

      It would be informative to see both phase and period analysis for the data shown in figure 2C. Do cells at the edge show differences in relative synchrony following the removal of the PDMS barrier and Rev-erba induction? Is there a period difference between cells at the edge and those that remain confluent?

      We agree with the referee that the “shock” received by the cells at the edge should work as a reset of their circadian phase and we have tried to analyse this effect. However, there are technical limitations that make this analysis difficult, mainly the short duration of the experiment and the fact that these cells transition very fast, upon gap closure, from a non-circadian to a circadian behaviour. We will attempt to better report this interesting effect by using the WAVECLOCK (Price et al., 2008) or the pyBOAT method (Mönke et al., 2020), suggested by Reviewer #2, which are designed to analyse non-stationary data.
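
      A simple way to visualise such a transition, pending the proper wavelet analysis, is a sliding-window version of the power fraction. The sketch below is a crude stand-in for pyBOAT's wavelet spectra, shown only to illustrate the idea; the window length and circadian band are placeholder values:

```python
import numpy as np

def sliding_circadian_power(y, dt_hours=0.5, window_hours=48.0,
                            band_hours=(20.0, 28.0)):
    """Time-resolved circadian power fraction via a sliding-window FFT.

    Each window is z-scored and the fraction of spectral power inside
    the circadian band is recorded, so a transition from non-circadian
    to circadian behaviour appears as a rise of this fraction in time.
    Window length and band limits are illustrative placeholders.
    """
    y = np.asarray(y, dtype=float)
    n = int(window_hours / dt_hours)
    freqs = np.fft.rfftfreq(n, d=dt_hours)        # cycles per hour
    in_band = (freqs >= 1.0 / band_hours[1]) & (freqs <= 1.0 / band_hours[0])
    fractions = []
    for start in range(0, len(y) - n + 1, n // 4):   # 75 % window overlap
        w = y[start:start + n]
        z = (w - w.mean()) / (w.std() + 1e-12)       # guard flat windows
        p = np.abs(np.fft.rfft(z)) ** 2
        p[0] = 0.0                                   # drop residual DC term
        fractions.append(p[in_band].sum() / p.sum())
    return np.array(fractions)
```

      On a synthetic trace that switches from noise to a 24 h oscillation halfway through, the returned fractions rise from near zero to near one, mimicking the onset of circadian behaviour after gap closure.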

      Mönke G, Sorgenfrei FA, Schmal C & Granada AE (2020) Optimal time frequency analysis for biological data - pyBOAT. bioRxiv

      Price TS, Baggs JE, Curtis AM, Fitzgerald GA & Hogenesch JB (2008) WAVECLOCK: wavelet analysis of circadian oscillation. Bioinformatics 24: 2794–2795

      Figure 2B - the text states that those cells far from the edge oscillate robustly throughout the experiment, but this is not easy to see from this kymograph due to the dynamic range. Is there another way of presenting this that might make it easier to confirm?

      We will calculate the circadian power fraction of the “bulk” cells as we do for the other conditions described in the manuscript. We can also show examples of individual traces if the average shown in Fig. 2C or the kymograph in Fig. 2B are not clear enough.

      Figure 1D-E - the text provides periodicity for the high-density cells, but not the low density ones. Could you provide periodicity for both populations - do they differ?

      We will represent in more detail the results of the frequency analysis on the low-density cells so that the diversity of periods (frequencies) in this condition becomes more evident.

      Figure S3 - it is interesting to note the difference in population rhythmicity between the bulk and edge data here, which is not seen so clearly in cells without thymidine. Could the authors comment on this?

      We agree with the referee that there is an obvious difference in RevVNP expression (mainly in the edge cells but also in the bulk) between the experiments with and without thymidine. We hypothesise this is due to the pronounced decrease in cell divisions in the presence of thymidine, which considerably slows down the gap closure and impacts the density of the entire cell population. We will comment on this effect in the manuscript.

      Line 148 - it is unclear here what is meant by 'the onset of circadian oscillations'. Could you rephrase this for clarity?

      We will change that sentence.

      Line 173 - a few words to highlight that Lats is a kinase and the function of YAP phosphorylation by Lats would aid clarity here. Similarly, explanation of the functional difference between the protein with 4 Serine to alanine mutations and 5 mutations and why both of these mutants were used would be helpful.

      We will clarify this point following the reviewer’s suggestion.

      Line 174 - for accuracy, this should perhaps read 'fibroblast circadian clock', as this work is only in 3T3 cells, and therefore may not apply more generally.

      We will implement this change.

      Line 202 - could you expand to explain the existing limitations of studying cell signalling cascades in synchronised cells? This is not clear to me. Thanks.

      We will discuss the signalling effects caused by 50% serum shocks and other traditional ways to synchronise the cells as requested by the reviewer.

      Figures 1D and 4B - the choice of colour range used in these kymographs is skewed towards the warmer colours, making it quite hard to discern differences between the groups. I would suggest using the cooler colour range for a greater proportion of the data set, to make rhythmicity, or lack of it, clearer to see.

      We will invest further effort into finding the optimal colour map and range for our datasets.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      Summary:

      Abenza et al. investigate an important question of how the physical environment affects the properties of the individual circadian clocks. The authors utilize a set of clever experiments, pharmacological manipulations and data analysis techniques to unveil a potential role of YAP/TAZ in the circadian clock.

      Major comments:

      Effects on the circadian clock

      1. The authors use the fluorescent reporter created by Nagoshi from sections of the Rev-erbα gene. This reporter is widely used to estimate relative circadian timing in individual cells, but it does not provide direct information on circadian clock activity. In other words, while Rev-erbα rhythmic expression is driven by the clock, it is not known whether less-rhythmic or non-rhythmic expression, or a change in expression level, of Rev-erbα affects the core clock. For example, it has been shown that Rev-erbα knock-down cells are rhythmic as long as Rev-erbβ is present. Thus, one major shortcoming of the current version of the manuscript is the missing dissection between Rev-erbα rhythmicity/expression and the circadian clock. More concretely, it remains unclear whether the change in Rev-erbα expression is a direct effect or caused by a defective clock. Since the authors presume a direct effect of YAP/TAZ on Rev-erbα expression, the former is likely. If that is the case, the data could be interpreted as showing that (missing) mechanical stimuli can lead to nuclear YAP/TAZ, which raises the level of Rev-erbα (and maybe interferes with its rhythmic accumulation). Beyond Rev-erbα expression, there may or may not be an effect on the circadian clock (core clock, CCGs). With the current version we do not know, since the authors do not look beyond Rev-erbα expression. Thus, the circadian clock or circadian rhythms in these cells are not studied in this version of the manuscript. The current version is still very interesting and provides insights into Rev-erbα modulation, but additional work would be needed to show links with the core clock machinery. For this the authors could show the influence (or at least correlation) of the YAP/TAZ/REV-ERBA phenotype on the oscillations of core clock genes or clock-controlled genes. Either through the use of alternative (ideally constitutive) reporters (e.g. PER2, BMAL1, fluorescent or LUC), and/or by analyzing RNA/protein of core clock genes or output genes. This would not be necessary for all experiments, but at least for some where it's possible (e.g. experiments with drug perturbations). Otherwise, any claim like "YAP/TAZ perturbs the circadian clock ..." or "the circadian clock deregulation in nuclear YAP-enriched cells" is potentially flawed and has to be removed/reformulated.

      2. The authors aim to discard the possibility of paracrine signals by showing no increase in circadian power fraction of cells growing in low density with conditioned medium (Figure 2D). A paracrine signal coming from an oscillatory system is likely to oscillate, and in that case I do not see how growing cells in constant conditioned medium can discard the effects of an oscillatory paracrine signal. I believe the elegant experiment shown in Figure 2E more precisely addresses this issue.

      Data analysis methodology:

      1. Single-cell circadian recordings like the ones analyzed here are characterized by noisy amplitude and non-sinusoidal waveforms with fluctuating period (Bieler et al., 2014; Feillet et al., 2014). The authors interpolate, smooth, detrend and normalize their data; operations that are known to introduce spectral artifacts that can mislead the interpretation of the power spectrum. Moreover, the time-series pre-processing operations described by the authors in the methods section are incomplete, and the authors should more explicitly describe all their operations with the exact methods applied, filter parameters and time-window sizes (if applicable). To validate their pre-processing steps the authors could provide their time-series analysis pipeline code and/or provide a few examples of raw versus pre-processed data together with their respective spectra before and after pre-processing. In addition, the authors could provide their raw trace signal data together with the corresponding post-processed signal data as plain text files.

      2. The authors rely on Fourier analysis and a reasonable self-made definition of circadian strength named "circadian power fraction". Using a stationarity-based method on noisy non-stationary data can lead to inaccurate spectral power estimations. As the current version of the manuscript does not provide any alternative/complementary analysis method, nor is any raw signal data available, it is unclear if their analysis appropriately represents the circadian power. The authors could consider implementing complementary data-analysis strategies to validate their conclusions. Fortunately, there are multiple suitable data analysis strategies already available that are designed exactly for this kind of data (e.g. Price et al., 2008; Leise et al., 2012; Leise, 2013; Bieler et al., 2014; Mönke et al., 2020). This time-series analysis step is crucial, as all main results of this manuscript rely on the authors' self-made definition of circadian power. This is particularly important as there is no standardized method in the circadian field to estimate circadian rhythmicity and/or circadian power of single-cell traces.

      3. The authors mainly show circadian power fraction and analyze rhythmicity scores/powers. Is there a chance that a rise in the basal expression level of Rev-erbα is reducing the rhythmicity score? Or, to phrase it otherwise, the absolute amplitude may remain the same, but the relative amplitude may be reduced? Would that affect the FT analysis power score? To clarify this the authors could provide an analysis of the relative amplitude in addition to the circadian intensity (as in Fig. 1C).

      Minor points by text-line:

      1. YAP and TAZ should be introduced to the reader during introduction.
      2. by set a of proteins.
      3. Here the authors probably meant that cells were not reset nor entrained during the experiment.
      4. "..expression depends on..". This is a correlation; no proof of causation has been shown up to this point. This is an overstatement.
      5. Using the term "provoked" suggests a causal relationship not shown.
      6. Similarly last sentence "This result established.... is caused..". Again, this is an overstatement as only correlation is shown.
      7. According to their description the authors are not using any image-preprocessing steps, e.g. background subtraction or other filters. Is this correct?
      8. It is not clear what image metric for the single-cell signals are the authors using, eg. integrated nuclear intensity or mean/median nuclear intensity. I am not familiar with TrackMate but it might be possible to export and share with the readers the image-analysis pipeline used which would clarify any questions about image processing and signal extraction.

      References:

      1. Bieler, J, Cannavo, R, Gustafson, K, Gobet, C, Gatfield, D, and Naef, F (2014). Robust synchronization of coupled circadian and cell cycle oscillators in single mammalian cells. Mol Syst Biol 10, 739.

      2. Leise, TL (2013). Wavelet analysis of circadian and ultradian behavioral rhythms. J Circadian Rhythms 11, 5.

      3. Leise, TL, Wang, CW, Gitis, PJ, and Welsh, DK (2012). Persistent Cell-Autonomous Circadian Oscillations in Fibroblasts Revealed by Six-Week Single-Cell Imaging of PER2::LUC Bioluminescence. PLoS One 7, 1-10.

      4. Mönke, G, Sorgenfrei, F, Schmal, C, and Granada, A (2020). Optimal time frequency analysis for biological data - pyBOAT. BioRxiv 179, 985-986.

      5. Price, TS, Baggs, JE, Curtis, AM, FitzGerald, GA, and Hogenesch, JB (2008). WAVECLOCK: wavelet analysis of circadian oscillation. Bioinformatics 24, 2794-2795.

      Significance

      I believe this manuscript is highly significant for both the circadian and the mechanobiology fields. Readers from single-cell signalling studies will also be very interested in this work.

      To my knowledge the discussed link has not been studied before at single cell level, which as the authors show can provide multiple new insights.

      I do work with similar single-cell signals, have broad expertise in microscopy, image analysis methods, time series analysis, and the circadian clock mechanisms but very little experience in mechanobiology.

    1. These time windows are not usually enforced by the system(s) but instead are often a convention, documented elsewhere.

      ...and because it's a matter of documentation, it adds to the onboarding effort, further slowing down development.

    1. The mitigationist strategy was expressed most sharply in the full reopening of schools, which by early 2021 had been definitively proven to be centers of viral transmission. Extraordinary claims were made that if schools required masks, opened windows, improved ventilation, or some combination thereof, they would be safe havens from infection. In reality, reopening schools before the pandemic was contained proved disastrous throughout the world, including in districts with more stringent mitigation measures,

      Schools with stringent masking requirements had far lower rates of transmission, and these weren't even requirements for good masks. In addition, efforts in terms of filtration or ventilation were few and far between, aside from opening windows.

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      Summary:

      Wang et al. present an evaluation of a new generation of time-of-flight-based mass spectrometer that improves the fraction of ions actually used for detection of peptide analytes, thus boosting the sensitivity of the ZenoTOF 7600 system when compared to the same instrument with the duty-cycle-enhancing Zeno trap module disabled, and also, in some of the comparisons, to the previous-generation instrument of the same vendor.

      The authors position the MS acquisition technique as particularly suitable in combination with medium- (micro-) and high- ('analytical'-) flow and throughput methods, where higher flow rates (vs. conventional nanoflow LC-MS) allow rapid sample turnover and high throughput yet limit the efficiency of electrospray and ion transfer into the MS system, which is thus in dire need of enhanced sensitivity for detection. The competency of such an MS system for very low input materials, as encountered e.g. in emerging single-cell proteomic workflows that typically employ nanoflow chromatography, was thus not part of the study.

      Accordingly, a medium-throughput (micro-flow) and a very-high-throughput ('analytical'-flow) LC method were screened on the three MS (parameter) setups using human cell lysate digests typically utilized in such technical evaluations. Commendably, the authors further extended their analysis (for the new instrument) across additional sample types of clinical and broader biological interest, spanning different levels of complexity and dynamic range of the contained protein analytes.

      In addition, the authors also performed a controlled ratio 2-species mixture experiment which allows detailed benchmarking of proteome coverage as well as the quality of protein quantification in a known differential comparison for the medium throughput (micro-flow) method.

      The data quite convincingly demonstrate an increased sensitivity of the instrument, based on similar identification performance in DIA bottom-up proteomics from ca. 3- to 8-fold lower input peptide mass. However, I see a number of shortcomings, mainly in the presentation and in part in the completeness of the work, with specific comments below.

      Major comments:

      • Are the key conclusions convincing?

        • The concluded 10x sensitivity increase overstates the observed numbers (5- to 8-fold). In addition, the authors should at least discuss changes other than the Zeno trap incurred in the Zeno SWATH vs non-Zeno SWATH DIA setups, particularly changes in accumulation times per m/z range, with Zeno SWATH accumulating ~42 % longer per cycle spanning the same m/z range (85 vs 60 windows with 11 ms per window) in the micro-flow method set and ~18 % longer in the high-flow method set (same window number but 13 ms vs. 11 ms dwell time per window). This should be discussed as one of the optimizations/factors contributing to the increased sensitivity observed in Zeno SWATH measurements vs conventional SWATH. On that note, it was unclear to me when and where the 40-variable-window SWATH method mentioned in the methods was used and where the settings can be found.
        • Since injected material is a critical parameter here, it would be good if it was mentioned also with the key conclusion on the increased number of confidently quantified peptides in microflow (based on the 2-species controlled quantity experiment).
        • The conclusion 'increasing protein identification numbers through the use of analytical-flow-rate chromatography' does not capture the observed data; the use of analytical flow rates does not itself increase protein identification numbers. Rather, the enhanced sensitivity enables high protein identification numbers / proteome coverage to be maintained despite, or concurrent with, analytical-flow chromatography.
        • In titration curve experiments like these, probing proteome coverage from relatively small sample amounts, special care
        • Should the authors qualify some of their claims as preliminary or speculative, or remove them altogether?

        • 'Zeno SWATH increases protein identification in complex samples 5- to 10-fold when compared to current SWATH acquisition methods on the same instrument' - At no point is this shown; the data show a 5- to 8-fold decrease in required input amounts (an increase in sensitivity), not a multiplication of protein identification rates by that factor.

        • Would additional experiments be essential to support the claims of the paper? Request additional experiments only where necessary for the paper as it is, and do not ask authors to open new lines of experimentation.

        • Figure 1f, Supplemental Figure 1b, Figure 3 and Supplemental Figure 3 lack data for the Zeno SWATH method's performance at higher concentrations. Given that there is a clear, continuous trend of significant enhancement of proteomic depth in the highest 3 concentrations sampled by the Zeno SWATH method, I lack an assessment of the upper limit of proteome coverage achievable by the new platform when input material is not limited, or at least an explanation of why injecting more is not advisable on the ZenoTOF 7600 system. It is clear that the region of interest is the lower loads, where sensitivity gains are most pronounced, but with the strong trend in IDs per ng injected in the sampled range and the discrepant range sampled by the non-Zeno method, I feel there is a gap in the dataset, and the upper ceiling of proteome coverage could be mapped out more thoroughly (at least for human cell lysate and possibly human plasma, where trends appear most (log2-)linear).

        • Similarly, unless constrained for technical or practical reasons, I would suggest finding the ceiling for achievable proteome depth in analytical flow (4, 8 ug?)
        • Are the suggested experiments realistic in terms of time and resources? It would help if you could add an estimated cost and time investment for substantial experiments.

        • All of these should be re-injections of existing samples on these MS setups, and a minor effort given instrument availability (<1 week) and rapid re-analysis via DIA-NN.

        • Are the data and the methods presented in such a way that they can be reproduced?

        • The raw data have not been deposited to a public repository. Reproducibility of the study would benefit significantly by raw data (including search results and spectral libraries with log files of creation) upload/sharing e.g. via ProteomeXchange/PRIDE.

        • If any particular software or firmware versions on the hardware are required to perform these measurements on the ZenoTOF as on the market today, these versions and prospective release dates should be included, or the accessibility of these settings commented on.
        • Are the experiments adequately replicated and statistical analysis adequate?

        • Figure 3 and Supplemental Figure 3 need clarification in the legend as to the nature and origin of the ID numbers (mean? number of replicates? add error bars if possible)

        • The usage of DIA-NN for data analysis is somewhat unclear, in particular in Methods/Spectral libraries: "For the analysis of plasma samples, a project-independent public spectral library [29] was used as described previously [15]. The Human UniProt [30] isoform sequence database (UP000005640, 19 October 2021) was used to annotate the library and the processing was performed using the MBR mode in DIA-NN." The authors should clarify in a revised version whether the reported identification numbers stem from a two-pass or single-pass analysis, i.e. whether the match-between-runs (MBR) feature implemented since DIA-NN v1.8 was enabled, and whether all runs spanning different injection amounts were co-analyzed, with the data re-queried against a targeted library containing precursors identified in the high-load samples during first-pass analysis and then queried in the low-load samples. In other words, are the low-load IDs independent of the high-load IDs? If not (i.e. the different loads were co-analyzed with MBR), what proteome coverage do the low sample loads reach bona fide, without the 'guidance' of high-load IDs?

      Side note: Turning this around, could a high-load injection e.g. from a pool of limited-amount samples serve as a guiding element in a MBR-enabled analysis of a large cohort with limited sample amounts available per biological condition?

      Minor comments:

      • Specific experimental issues that are easily addressable.

        • The authors assess the impact on the dynamic range of identification by comparing ID sets against an external dataset with presumed cellular concentration values. I would in addition suggest comparing the dynamic range of the quantitative values observed in the available data, which should provide a direct assessment of the dynamic range of quantification of the two methods.
        • Are prior studies referenced appropriately?

        • The statement that conventional DIA methods rely on nanoflow chromatography (p3, paragraph 3) is not accurate, as there are previous implementations of data-independent acquisition MS with microflow separations, in part the group's own work, referred to later in the text.

      o Vowinckel, J. et al. Cost-effective generation of precise label-free quantitative proteomes in high-throughput by microLC and data-independent acquisition. Sci. Rep. 8, 4346 (2018)

      o Bruderer, R. et al. Analysis of 1508 plasma samples by capillary-flow data-independent acquisition profiles proteomics of weight loss and maintenance. Mol. Cell Proteom. 18, 1242-1254 (2019).

      It is correct that most early implementations of DIA-MS utilized nanoflow separations owing to sensitivity and proteome coverage, but DIA as such is a chromatography-flow-speed-agnostic principle, and the concept of combining microflow LC with DIA is not new, yet powerful, as demonstrated previously by the authors and others, and once again here.

      - P.3 paragraph 3: 'Moreover, the increased sensitivity of DIA methods has facilitated applications in large-scale proteomics, including system-biology studies in various model organisms, disease states, and species [5-9]' - Include Ref 4, where improved sensitivity of DIA was demonstrated (at proteomic breadth).

        • Are the text and figures clear and accurate?

      -   Text and Figures need to be edited for typos, language, and clarity/accuracy.
      
      1. Abstract: 'Zeno SWATH increases protein identification in complex samples<br /> 5- to 10-fold when compared to current SWATH acquisition methods on the same instrument' - At no point is this shown; the data show a 5-8-fold drop in required input amounts (an increase in sensitivity), not a multiplication of protein identification rates.
      2. P. 4 paragraph 3: Use terms 'consensus' or 'shared' identifications or similar to refer to the proteins identified in all 3 replicates, rather than 'reproducible', when discussing the reproducibility of peptide and protein quantification (in contrast to the reproducibility of identification).
      3. P.3 paragraph 2 'selects and fragments multiple charge ions' -> multiply charged (?)
      4. P. 4 p. 1 'leading to under-detection' please clarify (leading to partial ion usage and limited sensitivity?)
      5. P. 6 paragraph 3 'The gain in identification number of Zeno SWATH versus SWATH is mostly explained by an increased dynamic range: i.e. more low-abundance proteins are detected' - Reformulate/clarify: Is increased dynamic range of identifications against external quantities an explanation or perhaps simply the increased sensitivity with improved duty cycle?
      6. Term 'active gradient' unclear. An inactive gradient is isocratic flow. Omit 'active'. Isocratic/other portions are overhead.
      7. Figure 1: the panel a) iteration scheme a-d) is redundant with the rest of the figure; use an alternative lettering scheme within panel a). Panel a) further contains illegibly small fonts and should be edited for legibility
      8. Revisit y-axis labels. Example: Fig. 1f) 'Precursors Identificaiton' -> Precursors identified/Precursor identifications. Correct throughout manuscript
      9. ID bar graphs in all Figures: the shade of grey used for cumulative IDs is not properly visible; suggest an alternative color scheme or adding a black outline to the bars
      10. Figure 1 e) legend 'along gradient length' -> gradient time / retention time
      11. Figure 1 d) too small, trend lines mentioned in text invisible in graph. Boxplots very small.
      12. Three different terms are used for the high-throughput method (analytical-flow, high-flow, and another one); please align where possible for clarity (i.e. choose 2 names for the 2 methods throughout the manuscript)
      • Do you have suggestions that would help the authors improve the presentation of their data and conclusions?

        • The authors may consider adding a short explanation of the term 'dynamic range coverage of identification' to contrast it with a direct assessment of the dynamic range of the quantitative values observed in this study.
        • 2-species controlled experiment: The discrepancy of observed vs. true mixing ratios suggests the data were scaled during the analysis, which, with these mixture ratios, tends to distort the accuracy (i.e. it generates an offset of observed from true ratios; that is very likely not a pipetting error on a log scale). In other words, you may want to evaluate the raw quantitative ratios (without any normalization/scaling applied), which should be more reflective of the true/manual pipetting ratios, in light of the incompatibility of the normalization strategy with certain species-mix scenarios (compare Supplementary Figure 1 a). Note to the editor(s): This will not affect the clear benefit of Zeno trap usage demonstrated by the 2-species benchmark as is, but can be considered a minor yet recommended improvement (thus listed here).
        • The 2-species controlled experiment can reveal more information than currently extracted, and I would recommend showing Zeno SWATH and SWATH xy scatters, including count-scaled density distributions of the observed ratios, side by side. This would give a deeper understanding of the large impact of the Zeno SWATH method. Also, I believe I have not seen any instrument to date deliver precise quantification over as broad a dynamic range as surmisable from Fig. 1d), which might be worthwhile highlighting.
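The scaling distortion described above can be illustrated with a toy calculation. All intensities below are purely hypothetical, not the authors' data; the point is only that total-signal normalization of a two-species mixture pulls the observed spike-in ratio away from the true one:

```python
# Hypothetical two-proteome mix: human peptides constant between
# samples A and B, yeast peptides spiked at a true 4:1 ratio (A:B).
human_a = [100.0] * 8
human_b = [100.0] * 8
yeast_a = [40.0] * 2   # true A:B ratio for yeast = 4.0
yeast_b = [10.0] * 2

sample_a = human_a + yeast_a
sample_b = human_b + yeast_b

# Total-signal (sum) normalization, as often applied by default.
def sum_normalize(intensities):
    total = sum(intensities)
    return [x / total for x in intensities]

norm_a = sum_normalize(sample_a)
norm_b = sum_normalize(sample_b)

# The observed yeast ratio after normalization is compressed below 4.0,
# because the yeast spike itself shifts the totals being equalized.
observed = norm_a[-1] / norm_b[-1]
print(observed)  # ~3.73 instead of the true 4.0
```

Evaluating the raw, unscaled ratios avoids this offset, which is the recommendation made in the bullet above.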

      Significance

      Wang et al. describe a technical advance in ion usage and sensitivity based on an ion-trap device that stores and focuses ions for TOF-based bottom-up proteomics measurements. The study demonstrates improved sensitivity relative to previous-generation instrumentation and also explores the impact of the specific trap device relative to the general improvements of the remaining MS system. The work outlines a route towards high-coverage proteomics at very high throughput and robustness, as desirable in clinical proteomics and prospective personalized-medicine approaches. While not all sample types of interest are limited to the amounts where the strongest improvements are seen in the presented data, large-scale studies across expansive cohorts will likely be rendered more practical and realistic by the reduced instrument contamination at reduced loads, and further applications beyond those discussed in the manuscript will become feasible on the newer-generation instrument.

      The improved ZenoTOF system and SWATH method follow a series of innovations in mass spectrometry instrumentation, most notably and most closely related, the drastic improvement of ion utilization by storage, e.g. in a trapped ion mobility device placed earlier in the ion stream, where, beyond an accumulation-based boost in sensitivity, ion mobility is assessed as a further biophysical property in addition to the conventional m/z, as reviewed recently (doi: 10.1016/j.mcpro.2021.100138). While these developments have culminated in, and been targeted at, low-flow, ultra-high-sensitivity applications such as single-cell proteomics, the present study takes a different angle towards higher-throughput measurements from sample amounts significantly larger than single cells, yet significantly lower than the historically required amounts that were prohibitive for a range of applications, now easier to accomplish thanks to this and related work by the authors and others. The presented research appears of broad relevance and interest to the scientific community interested in protein abundance pattern analysis, in particular in larger (clinical) cohorts. Furthermore, the performance metrics on proteomic depth from human cell lysate digests will likely allow researchers with analytical quests other than those exemplified in the manuscript to extrapolate the suitability of the ZenoTOF and Zeno SWATH for their respective analytical targets.

      Reviewer Field of expertise/background:

      Quantitative proteomics. DIA mass spectrometry method & algorithm development & heavy usage. Protein Biochemistry. Molecular Biology.

    1. These windows also add a phase shift to the light, altering the measured signal. This shift has to be corrected for by measuring a known sample in the environmental cell.

      The window effect depends on the absolute phase (type of substrate: oxide or metal); thus the sample's substrate has to be used. For non-ideal windows it also depends on the ambient. From my point of view, the window effect is best determined with the correct reference substrate within the ambient of the subsequent sample measurement.

    1. Reviewer #3 (Public Review):

      In this work, Dumrongprechachan et al. impressively expanded their earlier work on the identification of cell type-specific subcellular proteomes from mouse brain by APEX2 proximity labeling. Instead of using viral expression of APEX2, the authors now created a Cre-dependent APEX2 reporter mouse line using CRISPR knock-in, which can be combined with multiple Cre-driver lines for proteomic applications. Using this novel tool in combination with sophisticated mass spectrometry and elegant bioinformatics, they mapped the temporal dynamics of the axonal proteome in corticostriatal projections (instead of only identifying a static cell type- and compartment-specific proteome) together with its phosphorylation status (instead of only looking at protein abundance). The data will provide a valuable resource on developmental trajectories at the proteomic and phosphoproteomic level, and will allow for pathway- and phosphosite-centric systems-level analyses as exemplified by the identification of proline-directed protein kinases as major regulators of corticostriatal projection development.

      Strengths:<br /> The key tool developed in this work is the APEX2 reporter mouse line as it enables capturing of early postnatal time points, which was not possible before due to the time window of 2-4 weeks required for viral APEX expression. Thus, this tool puts the authors into position to access the temporal dynamics of the developing axon at time points spanning from neonate (as early as P5) to young adult (P50). Within this complex experimental design, the authors even managed to introduce a crucial compartment control at least for the time point P18, in which APEX expression is restricted to nucleus and soma upon viral expression. The resulting resource will be of high value as the data are derived from advanced mass spectrometric methods and stringent data handling. Examples of this high level of scrutiny include the use of MS3 methodology for the acquisition of TMT data to address the ratio distortion issues typically seen with isobaric labeling and thereby increase the quantification accuracy and the limitation to proteins quantified in all biological replicates.

      Weaknesses:<br /> As to sample preparation for mass spectrometry, the authors follow the interesting concept of first enriching the phosphopeptides from the pool of TMT-labeled tryptic peptides and then using the unbound fraction from that step for further peptide fractionation, followed by mass spectrometric protein quantification. While this strategy sounds very straightforward in principle, one would expect that the phosphopeptide enrichment comes with an unspecific loss of other peptides in general, and with a semi-specific loss of acidic peptides in particular. Was this potential issue investigated by comparison with samples that were fractionated directly, without prior phosphopeptide enrichment? In other words: the rationale for this sequential procedure is compelling (quantification of both protein and phosphopeptide abundance from the same, limited sample), but what is the price for it in terms of peptide loss?

      The APEX2 reporter mouse line is a novel tool with broad applicability for proximity labeling approaches and, understandably, the authors advertise its advantages, mainly via the suitability for short temporal windows. However, the discussion on the limitations of the approach falls short. The authors should make clear that the APEX method in general is limited to ex vivo approaches such as the acute brain slices used here due to the limitation that potentially toxic reagents (i.e. low membrane-permeable biotin-phenol and H2O2) have to be delivered to the target tissue. Although treatment with H2O2 is rather short, undesired oxidative stress signaling may have to be taken into account, particularly when protein phosphorylation rather than protein abundance is assessed. It would also be interesting to discuss the pros and cons of perfusing the mice prior to preparation of brain slices; e.g., in the context of removal of catalases/endogenous peroxidases or potential for substrate delivery (like recently shown in heart, doi: 10.1038/s41586-020-1947-z). Another issue with the Discussion is that the authors do not properly reflect the involvement of proline-directed kinases in the development of corticostriatal projections, which stands in contrast to the fact that they sell this as one of their major findings throughout the manuscript, including the Abstract.

    1. troubled, confused

      Eliot uses feminine language which connects to Baudelaire's sexualization and mutilation of women. Baudelaire creates an image of a murdered woman who gets used by her male killer. This disturbing scene is contrasted with the feminine imagery used. Baudelaire refers to the "sequined fabrics", "perfumed dresses" and "bouquets". Despite this, the imagery remains sexual as the furniture is referred to as "voluptuous". Eliot mimics this imagery through the emphasis on "strange synthetic perfumes" and "coloured glass". However, Eliot's use of feminine imagery feels uncomfortable next to the beautiful description of a woman sitting on a throne with glittering jewels because the feminine tone isn't used to sexualize the woman, but rather to objectify her. While Baudelaire used words like "voluptuous", Eliot highlights the money and wealth surrounding this woman. She is linked to items like antique windows and shiny tables.

    1. Segregated waiting rooms in bus and train stations were required, as well as water fountains, restrooms, building entrances, elevators, cemeteries, even amusement-park cashier windows.

      segregation

    1. I took along my son, who had never had any fresh water up his nose and who had seen lily pads only from train windows. On the journey over to the lake I began to wonder what it would be like. I wondered how time would have marred this unique, this holy spot--the coves and streams, the hills that the sun set behind, the camps and the paths behind the camps. I was sure that the tarred road would have found it out and I wondered in what other ways it would be desolated. It is strange how much you can remember about places like that once you allow your mind to return into the grooves which lead back. You remember one thing, and that suddenly reminds you of another thing. I guess I remembered clearest of all the early mornings, when the lake was cool and motionless, remembered how the bedroom smelled of the lumber it was made of and of the wet woods whose scent entered through the screen. The partitions in the camp were thin and did not extend clear to the top of the rooms, and as I was always the first up I would dress softly so as not to wake the others, and sneak out into the sweet outdoors and start out in the canoe, keeping close along the shore in the long shadows of the pines. I remembered being very careful never to rub my paddle against the gunwale for fear of disturbing the stillness of the cathedral.

    1. About MarkdownPad Author MarkdownPad is developed by Evan Wondrasek, a software engineer based out of Minneapolis, MN. For updates, follow @evanw on Twitter. Built Exclusively for Windows MarkdownPad is designed for Windows and uses the .NET 4 and Windows Presentation Foundation 4 frameworks (that means it's extra shiny on the inside).
      • about

    Tags

    Annotators

    URL

    1. This also allows us to offer a new executable (gpdf, or gpdfwin??.exe on Windows) which is purely for PDF input. For this release, those new binaries are not included in the “install” make targets, nor in the Windows installers
      • WHEN AVAILABLE?
    1. GNU Emacs, which is a sort of hybrid between Windows Notepad, a monolithic-kernel operating system, and the International Space Station. It’s a bit tricky to explain, but in a nutshell, Emacs is a platform written in 1976 (yes, almost half a century ago) for writing software to make you more productive, masquerading as a text editor.
    1. I think that woman gets out in the daytime! And I’ll tell you why—privately—I’ve seen her! I can see her out of every one of my windows! It is the same woman, I know, for she is always creeping, and most women do not creep by daylight.

      Is this why she is always up at night?

    2. But he is right enough about the beds and windows and things. It is as airy and comfortable a room as any one need wish, and, of course, I would not be so silly as to make him uncomfortable just for a whim. I’m really getting quite fond of the big room, all but that horrid paper.

      He has once again invalidated her feelings and convinced her that he was right.

    3. I often wonder if I could see her out of all the windows at once. But, turn as fast as I can, I can only see out of one at one time. And though I always see her she may be able to creep faster than I can turn! I have watched her sometimes away off in the open country, creeping as fast as a cloud shadow in a high wind

      she's getting a little hysterical lol

    4. But there is something else about that paper—the smell! I noticed it the moment we came into the room, but with so much air and sun it was not bad. Now we have had a week of fog and rain, and whether the windows are open or not, the smell is here. It creeps all over the house. I find it hovering in the dining-room, skulking in the parlor, hiding in the hall, lying in wait for me on the stairs. It gets into my hair

      the smell and color has been getting on everything in sight

    1. At the centre was an octagonal pavilion which, on the first floor, consisted of only a single room, the king's salon; on every side large windows looked out onto seven cages (the eighth side was reserved for the entrance), containing different species of animals.

      Brings up the question of the motives behind these innovations. What drives innovation? Is it a fondness for having power?

    2. whether the syndics have carried out their tasks, whether the inhabitants have anything to complain of; they 'observe their actions'. Every day, too, the syndic goes into the street for which he is responsible; stops before each house: gets all the inhabitants to appear at the windows (those who live overlooking the courtyard will be allocated a window looking onto the street at which no one but they may show themselves); he calls each of them by name; informs himself as to the state of each and every one of them—'in which respect the inhabitants will be compelled to speak the truth under pain of death'; if someone does not appear at the window, the syndic must ask why: 'In this way he will find out easily enough whether dead or sick are being concealed.' Everyone locked up in his cage, everyone at his window, answering to his name and showing himself when asked—it is the great review of the living and the dead.

      The syndic kind of serves a god-like, mythical role as the determiners of who is considered living or deceased. Like a god, those locked in "cages" are compliant to and must speak the truth to him or face condemnation.

    3. whether the syndics have carried out their tasks, whether the inhabitants have anything to complain of; they 'observe their actions'. Every day, too, the syndic goes into the street for which he is responsible; stops before each house: gets all the inhabitants to appear at the windows (those who live overlooking the courtyard will be allocated a window looking onto the street at which no one but they may show themselves); he calls each of them by name; informs himself as to the state of each and every one of them—'in which respect the inhabitants will be compelled to speak the truth under pain of death'; if someone does not appear at the window, the syndic must ask why: 'In this way he will find out easily enough whether dead or sick are being concealed.' Everyone locked up in his cage, everyone at his window, answering to his name and showing himself when asked—it is the great review of the living and the dead.

      There's an interesting, maybe subtle, contrast here between care and control. While citizens are confined to their homes and forced under a quasi-martial law, the Syndic still endeavors to hear the complaints of the people. By listening to each citizen, these laws seem centered around the idea of community, co-existence, and transparency. It differs from how the Covid-19 policy in the US alienated the voices of scientists and average people, reducing transparency.

    1. pon a carpet of the same material and hue. But {{1842-01: , }} in this chamber only, the color of the windows failed to correspond with the decorations

      From the description, it almost has "cartoon vibes" in describing these large color blocks as apartments.

    2. gigantic clock of ebony

      it feels as though this will have some connection with the red death like the windows do, maybe something like the time someone has left to live (a half hour?) when they become sick

    3. Gothic window looked out upon a closed corridor which pursued the windings of the suite. These windows were of stained glass whose color varied in accordance with the prevailing hue of the decorations of the chamber into which it opened.

      Gothic art/architecture of the time may have had a major impact on the dark themes of Poe's work.

    4. Gothic window looked out upon a closed corridor which pursued the windings of the suite. These windows were of stained glass whose color varied in accordance with the prevailing hue of the decorations of the chamber into which it opened.

      Here Poe seems to be explaining what stained glass is to his audience, as it was a new development which completely changed Gothic architecture.

    1. Windows has a USB selective suspend feature. With it, any device connected to the system over USB is put into a low-power mode if the system detects no activity on it for a set period of time, in order to save energy.
      • PASO 2
      • control panel, power, advanced
      • USB: disabled
    2. To disable the power-saving mode, press the shortcut "Windows + X" and select Device Manager from the list. Next, expand the "Human Interface Devices (HID)" section. There, in the list of drivers, find your Bluetooth keyboard or mouse, right-click it, and select "Properties".
      • PASO 1
      • DEVICES: power tab: [ ] unchecked
    1. simply search for a word and Miro will highlight all relevant sticky notes that contain that word. From there, you can press ENTER to bring the widgets from the search result into one view. If there are many sticky notes, you can also press CMD + A (Mac) or CTRL + A (Windows) to select them all and pull them together using the Smart Align trick mentioned earlier.

      .c4

    1. upon the vacant eye-like windows

      Yet another reference to the "eye" as a symbol and ever present object within Poe's work, yet again used in a dark description as the narrator views this undesirable house.

    2. the vacant and eye-like windows.
      • Has a fascination with eyes as seen in his other works as well
      • Poe personifies The House of Usher...could reflect Roderick himself
    3. the ghastly tree-stems, and the vacant and eye-like windows

      Poe seems to be obsessed with these ideas of the tree trunks and windows since it's repeated twice in this paragraph, as if he is trying to personify them and bring them to life as their own characters.

    1. for the windows are barred for little children, and there are rings and things in the walls.

      The fact that her husband puts her in a room with barred windows shows what he thinks of her. She is a prisoner here, not really a wife or equal partner

    2. Up and down and sideways they crawl, and those absurd, unblinking eyes are everywhere. There is one place where two breadths didn’t match, and the eyes go all up and down the line, one a little higher than the other.

      I think he is keeping her in this room to make her go even crazier and continue a cycle of abuse and control. The room being a nursery makes her feel weak and unintelligent, and it has an iron gate and bars on the windows.

  6. moodle.lynchburg.edu moodle.lynchburg.edu
    1. The crazy foundation stones still marked the former site of my poor little cabin, and not far away, on six weary boulders, perched a jaunty board house, perhaps twenty by thirty feet, with three windows and a door that locked.

      KP: Not only did the students of the school move on, but the school itself was "moving on" as well.

    1. For users who are not used to Markdown and split views, the editing experience of the note-taking app Obsidian [1: desktop version; the plugins mentioned in this article have only been tested by the author on Windows] is not good: in the same view, what you see is not what you get; tables and Front Matter are full of meaningless markup; list operations are unintuitive... To improve Obsidian, some users turn to third-party editors such as Typora, but that means losing wiki links, a core feature.
      • [ ] This is very inspiring
    1. if we want to deliver a thousand terawatt hours a year using these systems you could use 142 coal-fired power stations, or 30,000 solar PV arrays, or 12,309 wind turbine arrays of average size, where each array is like 10 wind turbines; this is where we're getting the extra numbers from. So each of these sites will have to be built and constructed and maintained, and then when they wear out they need to be decommissioned. So renewables have a much lower energy return on energy invested ratio than fossil fuels, and the truth is they may not be strong enough to power the next industrial era. So gas and hydro power generation has to balance with demand; supply and demand have to balance, otherwise the grid will age

      !- for : EROI, energy density - lower energy density = more plants
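The equivalences quoted in the transcript can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, using only the speaker's own numbers (the per-station figures are derived, not independent data):

```python
# 1000 TWh/year delivered by 142 coal-fired power stations.
twh_per_year = 1000.0
coal_stations = 142

twh_per_station = twh_per_year / coal_stations   # ~7.0 TWh per station per year
hours_per_year = 8766                            # average year, incl. leap years

# Convert annual energy per station to average continuous power output.
# 1 TWh = 1e6 MWh, so TWh / hours gives power in units of 1e6 MW.
avg_power_mw = twh_per_station * 1e6 / hours_per_year

print(round(twh_per_station, 1), round(avg_power_mw))
```

An average output of roughly 800 MW per station is consistent with a large coal plant, which suggests the speaker's 142-station figure is at least internally plausible.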

    1. Use Multiple Desktops on One Screen With the Virtual Desktop Feature in Windows 10 Take advantage of the virtual desktop feature in Windows 10 to organize your virtual space while working on a bunch of different things at once.

  7. Aug 2022
    1. Installing R and RStudio on Windows 10 https://www.youtube.com/watch?v=VLWaED9jTiA

      Will R and RStudio only be availabe through windows or does it work with apple/mac as well?

    1. The System unattended sleep timeout power setting is the idle timeout before the system returns to a low power sleep state after waking unattended.
      • I DONT UNDERSTAND
    1. While I'm still trying to re-create this and derive the fundamental root fix, the issue seems to have resolved itself, through the following registry change:https://www.tenforums.com/tutorials/72133-add-system-unattended-sleep-timeout-power-options-windows.html
      • TEST it
    2. Microsoft shifted to a model whereby they stopped doing detailed testing of their products (including Windows) and instead are much more reliant on a combination of Agile-style automated testing and "pre-release" testing by users,
      • OK
    3. However: when a system is set to "never sleep" and then goes to sleep / standby - in my case terminating a critical backup - it seems as though the "light testing" model has deep flaws. As far as I can see the "never sleep" option (by default, without implementing any of the 11 points you raise) still has the system going to sleep / standby and being unresponsive. The removal of the "unattended sleep timeout" and behaviour of -actually- going to sleep I would characterise as a bug. At a guess the bug goes unfixed due to the "most people use Word" paragraph above.Similarly - Windows Update seems to ignore the impact of forced reboots losing work, or to start up machines that are in Hibernate mode while in backpacks (as I have had Windows do). This seems an attitudinal change from Microsoft with regards to customer perception of the product.
      • I AGREE!
      • My W10 mini PC (no battery, plugged in) went to sleep WHILE running a batch job calling REST APIs over the internet
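For reference, the hidden "System unattended sleep timeout" setting discussed in this thread can reportedly also be exposed and changed from an elevated command prompt with powercfg, instead of the registry edit linked above. A sketch, assuming the GUID Microsoft documents for this setting; verify it on your own machine with `powercfg /query` before relying on it:

```shell
:: Unhide "System unattended sleep timeout" in Power Options (run as admin).
:: GUID assumed from Microsoft's documented power-setting identifiers.
powercfg -attributes SUB_SLEEP 7bc4a2f9-d8fc-4469-b07b-33eb785aaca0 -ATTRIB_HIDE

:: Set the unattended sleep timeout to 0 (never) for AC power,
:: then re-apply the active scheme so the change takes effect.
powercfg /setacvalueindex SCHEME_CURRENT SUB_SLEEP 7bc4a2f9-d8fc-4469-b07b-33eb785aaca0 0
powercfg /setactive SCHEME_CURRENT
```

Note that, as reported in this thread, Windows updates may silently reset power settings, so the value is worth re-checking after feature updates.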
    1. Dan Ingalls implemented the first opaque overlapping windows to let users see more code and other objects on the screen

      This is interesting context. I wonder if that need has gone away with large screens or if we're not using it the way it was originally intended. My intuition is that auto-layout is generally better but for smaller pieces of data ad hoc overlaps seem fine.

    1. The wonderful folks at Microsoft have designed this latest update to reset every damn time! In my case - it resets to 5 minutes for BOTH battery and for plugged in!!!!
      • WONDERFUL FOLKS!
      • MILLENIALS???
    2. LouisCornell Replied on August 7, 2016 Had the exact same problem.  I had my power settings set to "Maximize" performance, and some Windows update changed the power settings.  My computer started "going to sleep" without my telling it to.
      • ME TOO!!!
      • 2022-08-28 W10
    1. The fire crackled up the stairs. It fed upon Picassos and Matisses in the upper halls, like delicacies, baking off the oily flesh, tenderly crisping the canvases into black shavings. Now the fire lay in beds, stood in windows, changed the colors of drapes

      It seems as though the fire is now being personified and characterized a bit. It's like the fire is now, briefly, a resident of the house.

    2. "Who goes there? What's the password?" and, getting no answer from lonely foxes and whining cats, it had shut up its windows and drawn shades in an old-maidenly preoccupation with self-protection which bordered on a mechanical paranoia

      The house starts to be characterized here. It seems to be portrayed as lonely and defensive. It's as though the house knows what has happened and that there is no family.

    1. Who Can Name the Bigger Number? by Scott Aaronson In an old joke, two noblemen vie to name the bigger number. The first, after ruminating for hours, triumphantly announces "Eighty-three!" The second, mightily impressed, replies "You win." A biggest number contest is clearly pointless when the contestants take turns. But what if the contestants write down their numbers simultaneously, neither aware of the other’s? To introduce a talk on "Big Numbers," I invite two audience volunteers to try exactly this. I tell them the rules: You have fifteen seconds. Using standard math notation, English words, or both, name a single whole number—not an infinity—on a blank index card. Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature. So contestants can’t say "the number of sand grains in the Sahara," because sand drifts in and out of the Sahara regularly. Nor can they say "my opponent’s number plus one," or "the biggest number anyone’s ever thought of plus one"—again, these are ill-defined, given what our reasonable mathematician has available. Within the rules, the contestant who names the bigger number wins. Are you ready? Get set. Go. The contest’s results are never quite what I’d hope. Once, a seventh-grade boy filled his card with a string of successive 9’s. Like many other big-number tyros, he sought to maximize his number by stuffing a 9 into every place value. Had he chosen easy-to-write 1’s rather than curvaceous 9’s, his number could have been millions of times bigger. He still would have been decimated, though, by the girl he was up against, who wrote a string of 9’s followed by the superscript 999. Aha! An exponential: a number multiplied by itself 999 times.
Noticing this innovation, I declared the girl’s victory without bothering to count the 9’s on the cards. And yet the girl’s number could have been much bigger still, had she stacked the mighty exponential more than once. Take 9^9^9, for example. This behemoth, equal to 9^387,420,489, has 369,693,100 digits. By comparison, the number of elementary particles in the observable universe has a meager 85 digits, give or take. Three 9’s, when stacked exponentially, already lift us incomprehensibly beyond all the matter we can observe—by a factor of about 10^369,693,015. And we’ve said nothing of 9^9^9^9 or 9^9^9^9^9. Place value, exponentials, stacked exponentials: each can express boundlessly big numbers, and in this sense they’re all equivalent. But the notational systems differ dramatically in the numbers they can express concisely. That’s what the fifteen-second time limit illustrates. It takes the same amount of time to write 9999, 9^999, and 9^9^9^9—yet the first number is quotidian, the second astronomical, and the third hyper-mega astronomical. The key to the biggest number contest is not swift penmanship, but rather a potent paradigm for concisely capturing the gargantuan. Such paradigms are historical rarities. We find a flurry in antiquity, another flurry in the twentieth century, and nothing much in between. But when a new way to express big numbers concisely does emerge, it’s often a byproduct of a major scientific revolution: systematized mathematics, formal logic, computer science. Revolutions this momentous, as any Kuhnian could tell you, only happen under the right social conditions. Thus is the story of big numbers a story of human progress. And herein lies a parallel with another mathematical story. In his remarkable and underappreciated book A History of π, Petr Beckmann argues that the ratio of circumference to diameter is "a quaint little mirror of the history of man."
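The digit counts quoted above are easy to verify without ever materializing the number itself; here is a short Python check (my sketch, not part of the essay), which counts digits of 9^(9^9) via its base-10 logarithm:

```python
import math

# 9^(9^9) = 9^387,420,489 is far too large to print, but its decimal
# digit count is floor(387,420,489 * log10(9)) + 1.
exponent = 9 ** 9                              # 387,420,489
digits = math.floor(exponent * math.log10(9)) + 1
print(digits)                                  # prints 369693100, i.e. the 369,693,100 digits cited above

# Orders of magnitude separating it from the ~10^85 elementary
# particles in the observable universe:
print(digits - 85)                             # prints 369693015
```

The same logarithm trick works for any stacked exponential whose top exponent still fits in a machine word; only the final stack level needs to be handled symbolically.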
In the rare societies where science and reason found refuge—the early Athens of Anaxagoras and Hippias, the Alexandria of Eratosthenes and Euclid, the seventeenth-century England of Newton and Wallis—mathematicians made tremendous strides in calculating π. In Rome and medieval Europe, by contrast, knowledge of π stagnated. Crude approximations such as the Babylonians’ 25/8 held sway. This same pattern holds, I think, for big numbers. Curiosity and openness lead to fascination with big numbers, and to the buoyant view that no quantity, whether of the number of stars in the galaxy or the number of possible bridge hands, is too immense for the mind to enumerate. Conversely, ignorance and irrationality lead to fatalism concerning big numbers. Historian Ilan Vardi cites the ancient Greek term sand-hundred, colloquially meaning zillion; as well as a passage from Pindar’s Olympic Ode II asserting that "sand escapes counting."

But sand doesn’t escape counting, as Archimedes recognized in the third century B.C. Here’s how he began The Sand-Reckoner, a sort of pop-science article addressed to the King of Syracuse: There are some ... who think that the number of the sand is infinite in multitude ... again there are some who, without regarding it as infinite, yet think that no number has been named which is great enough to exceed its multitude ... But I will try to show you [numbers that] exceed not only the number of the mass of sand equal in magnitude to the earth ... but also that of a mass equal in magnitude to the universe. This Archimedes proceeded to do, essentially by using the ancient Greek term myriad, meaning ten thousand, as a base for exponentials. Adopting a prescient cosmological model of Aristarchus, in which the "sphere of the fixed stars" is vastly greater than the sphere in which the Earth revolves around the sun, Archimedes obtained an upper bound of 10^63 on the number of sand grains needed to fill the universe.
(Supposedly 10^63 is the biggest number with a lexicographically standard American name: vigintillion. But the staid vigintillion had better keep vigil lest it be encroached upon by the more whimsically-named googol, or 10^100, and googolplex, or 10^10^100.) Vast though it was, of course, 10^63 wasn’t to be enshrined as the all-time biggest number. Six centuries later, Diophantus developed a simpler notation for exponentials, allowing him to surpass it. Then, in the Middle Ages, the rise of Arabic numerals and place value made it easy to stack exponentials higher still. But Archimedes’ paradigm for expressing big numbers wasn’t fundamentally surpassed until the twentieth century. And even today, exponentials dominate popular discussion of the immense. Consider, for example, the oft-repeated legend of the Grand Vizier in Persia who invented chess. The King, so the legend goes, was delighted with the new game, and invited the Vizier to name his own reward. The Vizier replied that, being a modest man, he desired only one grain of wheat on the first square of a chessboard, two grains on the second, four on the third, and so on, with twice as many grains on each square as on the last. The innumerate King agreed, not realizing that the total number of grains on all 64 squares would be 2^64−1, or 18.6 quintillion—equivalent to the world’s present wheat production for 150 years. Fittingly, this same exponential growth is what makes chess itself so difficult. There are only about 35 legal choices for each chess move, but the choices multiply exponentially to yield something like 10^50 possible board positions—too many for even a computer to search exhaustively. That’s why it took until 1997 for a computer, Deep Blue, to defeat the human world chess champion. And in Go, which has a 19-by-19 board and over 10^150 possible positions, even an amateur human can still rout the world’s top-ranked computer programs. Exponential growth plagues computers in other guises as well.
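The Vizier's total is a textbook geometric series, and small enough for Python's arbitrary-precision integers to check directly (my sketch, not the essay's):

```python
# One grain on the first square, doubling on each of the 64 squares:
# the total is the geometric series 1 + 2 + 4 + ... + 2^63 = 2^64 - 1.
total = sum(2 ** i for i in range(64))
assert total == 2 ** 64 - 1
print(f"{total:,}")        # prints 18,446,744,073,709,551,615 grains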
The traveling salesman problem asks for the shortest route connecting a set of cities, given the distances between each pair of cities. The rub is that the number of possible routes grows exponentially with the number of cities. When there are, say, a hundred cities, there are about 10^158 possible routes, and, although various shortcuts are possible, no known computer algorithm is fundamentally better than checking each route one by one. The traveling salesman problem belongs to a class called NP-complete, which includes hundreds of other problems of practical interest. (NP stands for the technical term ‘Nondeterministic Polynomial-Time.’) It’s known that if there’s an efficient algorithm for any NP-complete problem, then there are efficient algorithms for all of them. Here ‘efficient’ means using an amount of time proportional to at most the problem size raised to some fixed power—for example, the number of cities cubed. It’s conjectured, however, that no efficient algorithm for NP-complete problems exists. Proving this conjecture, called P ≠ NP, has been a great unsolved problem of computer science for thirty years. Although computers will probably never solve NP-complete problems efficiently, there’s more hope for another grail of computer science: replicating human intelligence. The human brain has roughly a hundred billion neurons linked by a hundred trillion synapses. And though the function of an individual neuron is only partially understood, it’s thought that each neuron fires electrical impulses according to relatively simple rules up to a thousand times each second. So what we have is a highly interconnected computer capable of maybe 10^14 operations per second; by comparison, the world’s fastest parallel supercomputer, the 9200-Pentium Pro teraflops machine at Sandia National Labs, can perform 10^12 operations per second. Contrary to popular belief, gray mush is not only hard-wired for intelligence: it surpasses silicon even in raw computational power.
But this is unlikely to remain true for long. The reason is Moore’s Law, which, in its 1990’s formulation, states that the amount of information storable on a silicon chip grows exponentially, doubling roughly once every two years. Moore’s Law will eventually play out, as microchip components reach the atomic scale and conventional lithography falters. But radical new technologies, such as optical computers, DNA computers, or even quantum computers, could conceivably usurp silicon’s place. Exponential growth in computing power can’t continue forever, but it may continue long enough for computers—at least in processing power—to surpass human brains. To prognosticators of artificial intelligence, Moore’s Law is a glorious herald of exponential growth. But exponentials have a drearier side as well. The human population recently passed six billion and is doubling about once every forty years. At this exponential rate, if an average person weighs seventy kilograms, then by the year 3750 the entire Earth will be composed of human flesh. But before you invest in deodorant, realize that the population will stop increasing long before this—either because of famine, epidemic disease, global warming, mass species extinctions, unbreathable air, or, entering the speculative realm, birth control. It’s not hard to fathom why physicist Albert Bartlett asserted "the greatest shortcoming of the human race" to be "our inability to understand the exponential function." Or why Carl Sagan advised us to "never underestimate an exponential." In his book Billions & Billions, Sagan gave some other depressing consequences of exponential growth. At an inflation rate of five percent a year, a dollar is worth only thirty-seven cents after twenty years. If a uranium nucleus emits two neutrons, both of which collide with other uranium nuclei, causing them to emit two neutrons, and so forth—well, did I mention nuclear holocaust as a possible end to population growth? 
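Sagan's inflation figure, and the essay's year-3750 flourish, both reduce to one-line arithmetic. A quick Python check (my sketch; I assume annual compounding, and that the forty-year doubling clock starts from six billion people in the year 2000):

```python
# Real value of $1 after 20 years of 5% annual inflation,
# truncated to whole cents.
value = 1 / 1.05 ** 20
print(int(value * 100))        # prints 37 -- Sagan's "thirty-seven cents"

# Population doubling every forty years from six billion: by 3750
# that is (3750 - 2000) / 40 ~= 44 doublings, which already exceeds
# the Earth's mass (~6e24 kg) divided by 70 kg per person.
doublings = (3750 - 2000) / 40
print(6e9 * 2 ** doublings)    # ~8.9e22 people
```

The point of both computations is the same as the essay's: a fixed doubling time overwhelms any fixed resource on a short timescale.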
Exponentials are familiar, relevant, intimately connected to the physical world and to human hopes and fears. Using the notational systems I’ll discuss next, we can concisely name numbers that make exponentials picayune by comparison, that subjectively speaking exceed 9^9^9 as much as the latter exceeds 9. But these new systems may seem more abstruse than exponentials. In his essay "On Number Numbness," Douglas Hofstadter leads his readers to the precipice of these systems, but then avers: If we were to continue our discussion just one zillisecond longer, we would find ourselves smack-dab in the middle of the theory of recursive functions and algorithmic complexity, and that would be too abstract. So let’s drop the topic right here. But to drop the topic is to forfeit, not only the biggest number contest, but any hope of understanding how stronger paradigms lead to vaster numbers. And so we arrive in the early twentieth century, when a school of mathematicians called the formalists sought to place all of mathematics on a rigorous axiomatic basis. A key question for the formalists was what the word ‘computable’ means. That is, how do we tell whether a sequence of numbers can be listed by a definite, mechanical procedure? Some mathematicians thought that ‘computable’ coincided with a technical notion called ‘primitive recursive.’ But in 1928 Wilhelm Ackermann disproved them by constructing a sequence of numbers that’s clearly computable, yet grows too quickly to be primitive recursive. Ackermann’s idea was to create an endless procession of arithmetic operations, each more powerful than the last. First comes addition. Second comes multiplication, which we can think of as repeated addition: for example, 5×3 means 5 added to itself 3 times, or 5+5+5 = 15. Third comes exponentiation, which we can think of as repeated multiplication. Fourth comes ... what? Well, we have to invent a weird new operation, for repeated exponentiation.
The mathematician Rudy Rucker calls it ‘tetration.’ For example, ‘5 tetrated to the 3’ means 5 raised to its own power 3 times, or 5^5^5 = 5^3,125, a number with 2,185 digits. We can go on. Fifth comes repeated tetration: shall we call it ‘pentation’? Sixth comes repeated pentation: ‘hexation’? The operations continue infinitely, with each one standing on its predecessor to peer even higher into the firmament of big numbers. If each operation were a candy flavor, then the Ackermann sequence would be the sampler pack, mixing one number of each flavor. First in the sequence is 1+1, or (don’t hold your breath) 2. Second is 2×2, or 4. Third is 3 raised to the 3rd power, or 27. Hey, these numbers aren’t so big! Fee. Fi. Fo. Fum. Fourth is 4 tetrated to the 4, or 4^4^4^4, which has 10^154 digits. If you’re planning to write this number out, better start now. Fifth is 5 pentated to the 5, or a tower of 5’s with ‘5 pentated to the 4’ numerals in the stack. This number is too colossal to describe in any ordinary terms. And the numbers just get bigger from there. Wielding the Ackermann sequence, we can clobber unschooled opponents in the biggest-number contest. But we need to be careful, since there are several definitions of the Ackermann sequence, not all identical. Under the fifteen-second time limit, here’s what I might write to avoid ambiguity: A(111)—Ackermann seq—A(1)=1+1, A(2)=2×2, A(3)=3^3, etc Recondite as it seems, the Ackermann sequence does have some applications. A problem in an area called Ramsey theory asks for the minimum dimension of a hypercube satisfying a certain property. The true dimension is thought to be 6, but the lowest dimension anyone’s been able to prove is so huge that it can only be expressed using the same ‘weird arithmetic’ that underlies the Ackermann sequence. Indeed, the Guinness Book of World Records once listed this dimension as the biggest number ever used in a mathematical proof.
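The endless procession of operations can be sketched as a toy hyperoperation function in Python (my illustration; Ackermann's actual 1928 definition differs in its details, which is exactly why the essay's index card spells out A(1), A(2), A(3) explicitly):

```python
def hyper(rank, a, b):
    """Hyperoperations: rank 1 is addition, 2 multiplication,
    3 exponentiation, 4 tetration, 5 pentation, and so on."""
    if rank == 1:
        return a + b
    if rank == 2:
        return a * b
    if rank == 3:
        return a ** b
    if b == 1:
        return a
    # The rank-n operation is the rank-(n-1) operation repeated b times.
    return hyper(rank - 1, a, hyper(rank, a, b - 1))

def ackermann_sample(n):
    """The essay's 'sampler pack': the n-th operation applied to n and n."""
    return hyper(n, n, n)

print([ackermann_sample(n) for n in range(1, 4)])   # prints [2, 4, 27]
print(len(str(hyper(4, 5, 3))))                     # prints 2185: digits of 5 tetrated to the 3
```

The fourth sample, `hyper(4, 4, 4)`, is the 10^154-digit number from the text; the sketch above will not finish computing it, which is rather the essay's point.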
(Another contender for the title once was Skewes’ number, about 10^10^10^34, which arises in the study of how prime numbers are distributed. The famous mathematician G. H. Hardy quipped that Skewes’ was "the largest number which has ever served any definite purpose in mathematics.") What’s more, Ackermann’s briskly-rising cavalcade performs an occasional cameo in computer science. For example, in the analysis of a data structure called ‘Union-Find,’ a term gets multiplied by the inverse of the Ackermann sequence—meaning, for each whole number X, the first number N such that the Nth Ackermann number is bigger than X. The inverse grows as slowly as Ackermann’s original sequence grows quickly; for all practical purposes, the inverse is at most 4.

Ackermann numbers are pretty big, but they’re not yet big enough. The quest for still bigger numbers takes us back to the formalists. After Ackermann demonstrated that ‘primitive recursive’ isn’t what we mean by ‘computable,’ the question still stood: what do we mean by ‘computable’? In 1936, Alonzo Church and Alan Turing independently answered this question. While Church answered using a logical formalism called the lambda calculus, Turing answered using an idealized computing machine—the Turing machine—that, in essence, is equivalent to every Compaq, Dell, Macintosh, and Cray in the modern world. Turing’s paper describing his machine, "On Computable Numbers," is rightly celebrated as the founding document of computer science. "Computing," said Turing, is normally done by writing certain symbols on paper. We may suppose this paper to be divided into squares like a child’s arithmetic book. In elementary arithmetic the 2-dimensional character of the paper is sometimes used. But such use is always avoidable, and I think it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, on a tape divided into squares.
Turing continued to explicate his machine using ingenious reasoning from first principles. The tape, said Turing, extends infinitely in both directions, since a theoretical machine ought not be constrained by physical limits on resources. Furthermore, there’s a symbol written on each square of the tape, like the ‘1’s and ‘0’s in a modern computer’s memory. But how are the symbols manipulated? Well, there’s a ‘tape head’ moving back and forth along the tape, examining one square at a time, writing and erasing symbols according to definite rules. The rules are the tape head’s program: change them, and you change what the tape head does. Turing’s august insight was that we can program the tape head to carry out any computation. Turing machines can add, multiply, extract cube roots, sort, search, spell-check, parse, play Tic-Tac-Toe, list the Ackermann sequence. If we represented keyboard input, monitor output, and so forth as symbols on the tape, we could even run Windows on a Turing machine. But there’s a problem. Set a tape head loose on a sequence of symbols, and it might stop eventually, or it might run forever—like the fabled programmer who gets stuck in the shower because the instructions on the shampoo bottle read "lather, rinse, repeat." If the machine’s going to run forever, it’d be nice to know this in advance, so that we don’t spend an eternity waiting for it to finish. But how can we determine, in a finite amount of time, whether something will go on endlessly? If you bet a friend that your watch will never stop ticking, when could you declare victory? But maybe there’s some ingenious program that can examine other programs and tell us, infallibly, whether they’ll ever stop running. We just haven’t thought of it yet. Nope. Turing proved that this problem, called the Halting Problem, is unsolvable by Turing machines. The proof is a beautiful example of self-reference. 
It formalizes an old argument about why you can never have perfect introspection: because if you could, then you could determine what you were going to do ten seconds from now, and then do something else. Turing imagined that there was a special machine that could solve the Halting Problem. Then he showed how we could have this machine analyze itself, in such a way that it has to halt if it runs forever, and run forever if it halts. Like a hound that finally catches its tail and devours itself, the mythical machine vanishes in a fury of contradiction. (That’s the sort of thing you don’t say in a research paper.)

"Very nice," you say (or perhaps you say, "not nice at all"). "But what does all this have to do with big numbers?" Aha! The connection wasn’t published until May of 1962. Then, in the Bell System Technical Journal, nestled between pragmatically-minded papers on "Multiport Structures" and "Waveguide Pressure Seals," appeared the modestly titled "On Non-Computable Functions" by Tibor Rado. In this paper, Rado introduced the biggest numbers anyone had ever imagined. His idea was simple. Just as we can classify words by how many letters they contain, we can classify Turing machines by how many rules they have in the tape head. Some machines have only one rule, others have two rules, still others have three rules, and so on. But for each fixed whole number N, just as there are only finitely many distinct words with N letters, so too are there only finitely many distinct machines with N rules. Among these machines, some halt and others run forever when started on a blank tape. Of the ones that halt, asked Rado, what’s the maximum number of steps that any machine takes before it halts? (Actually, Rado asked mainly about the maximum number of symbols any machine can write on the tape before halting. But the maximum number of steps, which Rado called S(n), has the same basic properties and is easier to reason about.)
Rado called this maximum the Nth "Busy Beaver" number. (Ah yes, the early 1960’s were a more innocent age.) He visualized each Turing machine as a beaver bustling busily along the tape, writing and erasing symbols. The challenge, then, is to find the busiest beaver with exactly N rules, albeit not an infinitely busy one. We can interpret this challenge as one of finding the "most complicated" computer program N bits long: the one that does the most amount of stuff, but not an infinite amount. Now, suppose we knew the Nth Busy Beaver number, which we’ll call BB(N). Then we could decide whether any Turing machine with N rules halts on a blank tape. We’d just have to run the machine: if it halts, fine; but if it doesn’t halt within BB(N) steps, then we know it never will halt, since BB(N) is the maximum number of steps it could make before halting. Similarly, if you knew that all mortals died before age 200, then if Sally lived to be 200, you could conclude that Sally was immortal. So no Turing machine can list the Busy Beaver numbers—for if it could, it could solve the Halting Problem, which we already know is impossible. But here’s a curious fact. Suppose we could name a number greater than the Nth Busy Beaver number BB(N). Call this number D for dam, since like a beaver dam, it’s a roof for the Busy Beaver below. With D in hand, computing BB(N) itself becomes easy: we just need to simulate all the Turing machines with N rules. The ones that haven’t halted within D steps—the ones that bash through the dam’s roof—never will halt. So we can list exactly which machines halt, and among these, the maximum number of steps that any machine takes before it halts is BB(N). Conclusion? The sequence of Busy Beaver numbers, BB(1), BB(2), and so on, grows faster than any computable sequence. Faster than exponentials, stacked exponentials, the Ackermann sequence, you name it. 
Because if a Turing machine could compute a sequence that grows faster than Busy Beaver, then it could use that sequence to obtain the D‘s—the beaver dams. And with those D’s, it could list the Busy Beaver numbers, which (sound familiar?) we already know is impossible. The Busy Beaver sequence is non-computable, solely because it grows stupendously fast—too fast for any computer to keep up with it, even in principle. This means that no computer program could list all the Busy Beavers one by one. It doesn’t mean that specific Busy Beavers need remain eternally unknowable. And in fact, pinning them down has been a computer science pastime ever since Rado published his article. It’s easy to verify that BB(1), the first Busy Beaver number, is 1. That’s because if a one-rule Turing machine doesn’t halt after the very first step, it’ll just keep moving along the tape endlessly. There’s no room for any more complex behavior. With two rules we can do more, and a little grunt work will ascertain that BB(2) is 6. Six steps. What about the third Busy Beaver? In 1965 Rado, together with Shen Lin, proved that BB(3) is 21. The task was an arduous one, requiring human analysis of many machines to prove that they don’t halt—since, remember, there’s no algorithm for listing the Busy Beaver numbers. Next, in 1983, Allan Brady proved that BB(4) is 107. Unimpressed so far? Well, as with the Ackermann sequence, don’t be fooled by the first few numbers. In 1984, A.K. Dewdney devoted a Scientific American column to Busy Beavers, which inspired amateur mathematician George Uhing to build a special-purpose device for simulating Turing machines. The device, which cost Uhing less than $100, found a five-rule machine that runs for 2,133,492 steps before halting—establishing that BB(5) must be at least as high. Then, in 1989, Heiner Marxen and Jürgen Buntrock discovered that BB(5) is at least 47,176,870. 
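Rado's definitions are concrete enough to check by brute force for tiny machines. Below is a minimal Turing-machine simulator of my own (using the standard two-symbol, state-based formulation, where the essay's "rules" correspond to states), run on the known two-state champion from the literature:

```python
def run_tm(rules, max_steps=10_000):
    """Run a 2-symbol Turing machine on a blank (all-zero) tape.
    rules maps (state, symbol) -> (symbol_to_write, 'L' or 'R', next_state);
    the machine halts on entering state 'H'. Returns the step count,
    or None if the machine is still running after max_steps."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H':
        if steps == max_steps:
            return None
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
    return steps

# The known 2-state champion: it halts after exactly 6 steps,
# matching BB(2) = 6 from the text.
champion = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H'),
}
print(run_tm(champion))         # prints 6
```

Finding the champion is the easy half; establishing that BB(2) = 6 also means showing that every other two-state machine either halts sooner or never halts at all, which is the part that becomes arduous (and then impossible in general) as N grows.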
To this day, BB(5) hasn’t been pinned down precisely, and it could turn out to be much higher still. As for BB(6), Marxen and Buntrock set another record in 1997 by proving that it’s at least 8,690,333,381,690,951. A formidable accomplishment, yet Marxen, Buntrock, and the other Busy Beaver hunters are merely wading along the shores of the unknowable. Humanity may never know the value of BB(6) for certain, let alone that of BB(7) or any higher number in the sequence. Indeed, already the top five and six-rule contenders elude us: we can’t explain how they ‘work’ in human terms. If creativity imbues their design, it’s not because humans put it there. One way to understand this is that even small Turing machines can encode profound mathematical problems. Take Goldbach’s conjecture, that every even number 4 or higher is a sum of two prime numbers: 10=7+3, 18=13+5. The conjecture has resisted proof since 1742. Yet we could design a Turing machine with, oh, let’s say 100 rules, that tests each even number to see whether it’s a sum of two primes, and halts when and if it finds a counterexample to the conjecture. Then knowing BB(100), we could in principle run this machine for BB(100) steps, decide whether it halts, and thereby resolve Goldbach’s conjecture. We need not venture far in the sequence to enter the lair of basilisks. But as Rado stressed, even if we can’t list the Busy Beaver numbers, they’re perfectly well-defined mathematically. If you ever challenge a friend to the biggest number contest, I suggest you write something like this: BB(11111)—Busy Beaver shift #—1, 6, 21, etc If your friend doesn’t know about Turing machines or anything similar, but only about, say, Ackermann numbers, then you’ll win the contest. You’ll still win even if you grant your friend a handicap, and allow him the entire lifetime of the universe to write his number. The key to the biggest number contest is a potent paradigm, and Turing’s theory of computation is potent indeed. 
But what if your friend knows about Turing machines as well? Is there a notational system for big numbers more powerful than even Busy Beavers? Suppose we could endow a Turing machine with a magical ability to solve the Halting Problem. What would we get? We’d get a ‘super Turing machine’: one with abilities beyond those of any ordinary machine. But now, how hard is it to decide whether a super machine halts? Hmm. It turns out that not even super machines can solve this ‘super Halting Problem’, for the same reason that ordinary machines can’t solve the ordinary Halting Problem. To solve the Halting Problem for super machines, we’d need an even more powerful machine: a ‘super duper machine.’ And to solve the Halting Problem for super duper machines, we’d need a ‘super duper pooper machine.’ And so on endlessly. This infinite hierarchy of ever more powerful machines was formalized by the logician Stephen Kleene in 1943 (although he didn’t use the term ‘super duper pooper’). Imagine a novel, which is imbedded in a longer novel, which itself is imbedded in an even longer novel, and so on ad infinitum. Within each novel, the characters can debate the literary merits of any of the sub-novels. But, by analogy with classes of machines that can’t analyze themselves, the characters can never critique the novel that they themselves are in. (This, I think, jibes with our ordinary experience of novels.) To fully understand some reality, we need to go outside of that reality. This is the essence of Kleene’s hierarchy: that to solve the Halting Problem for some class of machines, we need a yet more powerful class of machines. And there’s no escape. Suppose a Turing machine had a magical ability to solve the Halting Problem, and the super Halting Problem, and the super duper Halting Problem, and the super duper pooper Halting Problem, and so on endlessly. Surely this would be the Queen of Turing machines? Not quite.
As soon as we want to decide whether a ‘Queen of Turing machines’ halts, we need a still more powerful machine: an ‘Empress of Turing machines.’ And Kleene’s hierarchy continues. But how’s this relevant to big numbers? Well, each level of Kleene’s hierarchy generates a faster-growing Busy Beaver sequence than do all the previous levels. Indeed, each level’s sequence grows so rapidly that it can only be computed by a higher level. For example, define BB2(N) to be the maximum number of steps a super machine with N rules can make before halting. If this super Busy Beaver sequence were computable by super machines, then those machines could solve the super Halting Problem, which we know is impossible. So the super Busy Beaver numbers grow too rapidly to be computed, even if we could compute the ordinary Busy Beaver numbers. You might think that now, in the biggest-number contest, you could obliterate even an opponent who uses the Busy Beaver sequence by writing something like this: BB2(11111). But not quite. The problem is that I’ve never seen these "higher-level Busy Beavers" defined anywhere, probably because, to people who know computability theory, they’re a fairly obvious extension of the ordinary Busy Beaver numbers. So our reasonable modern mathematician wouldn’t know what number you were naming. If you want to use higher-level Busy Beavers in the biggest number contest, here’s what I suggest. First, publish a paper formalizing the concept in some obscure, low-prestige journal. Then, during the contest, cite the paper on your index card. To exceed higher-level Busy Beavers, we’d presumably need some new computational model surpassing even Turing machines. I can’t imagine what such a model would look like. Yet somehow I doubt that the story of notational systems for big numbers is over. Perhaps someday humans will be able concisely to name numbers that make Busy Beaver 100 seem as puerile and amusingly small as our nobleman’s eighty-three. 
Or if we’ll never name such numbers, perhaps other civilizations will. Is a biggest number contest afoot throughout the galaxy?

You might wonder why we can’t transcend the whole parade of paradigms, and name numbers by a system that encompasses and surpasses them all. Suppose you wrote the following in the biggest number contest:

The biggest whole number nameable with 1,000 characters of English text

Surely this number exists. Using 1,000 characters, we can name only finitely many numbers, and among these numbers there has to be a biggest. And yet we’ve made no reference to how the number’s named. The English text could invoke Ackermann numbers, or Busy Beavers, or higher-level Busy Beavers, or even some yet more sweeping concept that nobody’s thought of yet. So unless our opponent uses the same ploy, we’ve got him licked. What a brilliant idea! Why didn’t we think of this earlier?

Unfortunately it doesn’t work. We might as well have written

One plus the biggest whole number nameable with 1,000 characters of English text

This number takes at least 1,001 characters to name. Yet we’ve just named it with only 80 characters! Like a snake that swallows itself whole, our colossal number dissolves in a tumult of contradiction. What gives?

The paradox I’ve just described was first published by Bertrand Russell, who attributed it to a librarian named G. G. Berry. The Berry Paradox arises not from mathematics, but from the ambiguity inherent in the English language. There’s no surefire way to convert an English phrase into the number it names (or to decide whether it names a number at all), which is why I invoked a "reasonable modern mathematician" in the rules for the biggest number contest. To circumvent the Berry Paradox, we need to name numbers using a precise, mathematical notational system, such as Turing machines—which is exactly the idea behind the Busy Beaver sequence.
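Rado's definition is in fact precise enough that the very smallest Busy Beaver values can be found by sheer enumeration. Here is a minimal Python sketch (my own illustration, not part of the essay) that brute-forces the shift function S(n): it enumerates every n-state, 2-symbol Turing machine with an added halt state and returns the most steps any halting machine makes. The `step_limit` cutoff is an assumption; it happens to be safe for n ≤ 2, where every machine that survives it never halts.

```python
from itertools import product

def busy_beaver_steps(n_states, step_limit=200):
    """Brute-force Rado's shift function S(n): the maximum number of
    steps any halting n-state, 2-symbol Turing machine takes."""
    HALT = n_states  # states 0..n-1 are working states; n means "halt"
    # Each (state, symbol) table entry is (symbol_to_write, move, next_state).
    actions = list(product((0, 1), (-1, +1), range(n_states + 1)))
    best = 0
    # A machine is one action for each of the 2 * n_states (state, symbol) pairs.
    for table in product(actions, repeat=2 * n_states):
        tape, head, state, steps = {}, 0, 0, 0
        while state != HALT and steps < step_limit:
            write, move, nxt = table[2 * state + tape.get(head, 0)]
            tape[head] = write   # blank cells read as 0
            head += move
            state = nxt
            steps += 1
        if state == HALT:        # only halting machines count
            best = max(best, steps)
    return best
```

This reproduces Rado's published values S(1) = 1 and S(2) = 6; by five or six states the same brute force becomes hopeless, which is rather the point of the sequence.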
So in short, there’s no wily language trick by which to surpass Archimedes, Ackermann, Turing, and Rado, no royal road to big numbers.

You might also wonder why we can’t use infinity in the contest. The answer is, for the same reason why we can’t use a rocket car in a bike race. Infinity is fascinating and elegant, but it’s not a whole number. Nor can we ‘subtract from infinity’ to yield a whole number. Infinity minus 17 is still infinity, whereas infinity minus infinity is undefined: it could be 0, 38, or even infinity again. Actually I should speak of infinities, plural. For in the late nineteenth century, Georg Cantor proved that there are different levels of infinity: for example, the infinity of points on a line is greater than the infinity of whole numbers. What’s more, just as there’s no biggest number, so too is there no biggest infinity. But the quest for big infinities is more abstruse than the quest for big numbers. And it involves, not a succession of paradigms, but essentially one: Cantor’s.

So here we are, at the frontier of big number knowledge. As Euclid’s disciple supposedly asked, "what is the use of all this?" We’ve seen that progress in notational systems for big numbers mirrors progress in broader realms: mathematics, logic, computer science. And yet, though a mirror reflects reality, it doesn’t necessarily influence it. Even within mathematics, big numbers are often considered trivialities, their study an idle amusement with no broader implications. I want to argue a contrary view: that understanding big numbers is a key to understanding the world.

Imagine trying to explain the Turing machine to Archimedes. The genius of Syracuse listens patiently as you discuss the papyrus tape extending infinitely in both directions, the time steps, states, input and output sequences. At last he explodes. "Foolishness!" he declares (or the ancient Greek equivalent). "All you’ve given me is an elaborate definition, with no value outside of itself."
How do you respond? Archimedes has never heard of computers, those cantankerous devices that, twenty-three centuries from his time, will transact the world’s affairs. So you can’t claim practical application. Nor can you appeal to Hilbert and the formalist program, since Archimedes hasn’t heard of those either. But then it hits you: the Busy Beaver sequence. You define the sequence for Archimedes, convince him that BB(1000) is more than his 10^63 grains of sand filling the universe, more even than 10^63 raised to its own power 10^63 times. You defy him to name a bigger number without invoking Turing machines or some equivalent. And as he ponders this challenge, the power of the Turing machine concept dawns on him. Though his intuition may never apprehend the Busy Beaver numbers, his reason compels him to acknowledge their immensity.

Big numbers have a way of imbuing abstract notions with reality. Indeed, one could define science as reason’s attempt to compensate for our inability to perceive big numbers. If we could run at 280,000,000 meters per second, there’d be no need for a special theory of relativity: it’d be obvious to everyone that the faster we go, the heavier and squatter we get, and the faster time elapses in the rest of the world. If we could live for 70,000,000 years, there’d be no theory of evolution, and certainly no creationism: we could watch speciation and adaptation with our eyes, instead of painstakingly reconstructing events from fossils and DNA. If we could bake bread at 20,000,000 degrees Kelvin, nuclear fusion would be not the esoteric domain of physicists but ordinary household knowledge. But we can’t do any of these things, and so we have science, to deduce about the gargantuan what we, with our infinitesimal faculties, will never sense.

If people fear big numbers, is it any wonder that they fear science as well and turn for solace to the comforting smallness of mysticism? But do people fear big numbers? Certainly they do.
I’ve met people who don’t know the difference between a million and a billion, and don’t care. We play a lottery with ‘six ways to win!,’ overlooking the twenty million ways to lose. We yawn at six billion tons of carbon dioxide released into the atmosphere each year, and speak of ‘sustainable development’ in the jaws of exponential growth. Such cases, it seems to me, transcend arithmetical ignorance and represent a basic unwillingness to grapple with the immense.

Whence the cowering before big numbers, then? Does it have a biological origin? In 1999, a group led by neuropsychologist Stanislas Dehaene reported evidence in Science that two separate brain systems contribute to mathematical thinking. The group trained Russian-English bilinguals to solve a set of problems, including two-digit addition, base-eight addition, cube roots, and logarithms. Some subjects were trained in Russian, others in English. When the subjects were then asked to solve problems approximately—to choose the closer of two estimates—they performed equally well in both languages. But when asked to solve problems exactly, they performed better in the language of their training. What’s more, brain-imaging evidence showed that the subjects’ parietal lobes, involved in spatial reasoning, were more active during approximation problems; while the left inferior frontal lobes, involved in verbal reasoning, were more active during exact calculation problems. Studies of patients with brain lesions paint the same picture: those with parietal lesions sometimes can’t decide whether 9 is closer to 10 or to 5, but remember the multiplication table; whereas those with left-hemispheric lesions sometimes can’t decide whether 2+2 is 3 or 4, but know that the answer is closer to 3 than to 9.

Dehaene et al. conjecture that humans represent numbers in two ways. For approximate reckoning we use a ‘mental number line,’ which evolved long ago and which we likely share with other animals.
But for exact computation we use numerical symbols, which evolved recently and which, being language-dependent, are unique to humans. This hypothesis neatly explains the experiment’s findings: the reason subjects performed better in the language of their training for exact computation but not for approximation problems is that the former call upon the verbally-oriented left inferior frontal lobes, and the latter upon the spatially-oriented parietal lobes.

If Dehaene et al.’s hypothesis is correct, then which representation do we use for big numbers? Surely the symbolic one—for nobody’s mental number line could be long enough to contain 5 pentated to the 5, or BB(1000). And here, I suspect, is the problem. When thinking about 3, 4, or 7, we’re guided by our spatial intuition, honed over millions of years of perceiving 3 gazelles, 4 mates, 7 members of a hostile clan. But when thinking about BB(1000), we have only language, that evolutionary neophyte, to rely upon. The usual neural pathways for representing numbers lead to dead ends. And this, perhaps, is why people are afraid of big numbers.

Could early intervention mitigate our big number phobia? What if second-grade math teachers took an hour-long hiatus from stultifying busywork to ask their students, "How do you name really, really big numbers?" And then told them about exponentials and stacked exponentials, tetration and the Ackermann sequence, maybe even Busy Beavers: a cornucopia of numbers vaster than any they’d ever conceived, and ideas stretching the bounds of their imaginations.

Who can name the bigger number? Whoever has the deeper paradigm. Are you ready? Get set. Go.

References

Petr Beckmann, A History of Pi, Golem Press, 1971.

Allan H. Brady, "The Determination of the Value of Rado’s Noncomputable Function Sigma(k) for Four-State Turing Machines," Mathematics of Computation, vol. 40, no. 162, April 1983, pp. 647-665.

Gregory J. Chaitin, "The Berry Paradox," Complexity, vol. 1, no. 1, 1995, pp. 26-30. At http://www.umcs.maine.edu/~chaitin/unm2.html.

A. K. Dewdney, The New Turing Omnibus: 66 Excursions in Computer Science, W. H. Freeman, 1993.

S. Dehaene, E. Spelke, P. Pinel, R. Stanescu, and S. Tsivkin, "Sources of Mathematical Thinking: Behavioral and Brain-Imaging Evidence," Science, vol. 284, no. 5416, May 7, 1999, pp. 970-974.

Douglas Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books, 1985. Chapter 6, "On Number Numbness," pp. 115-135.

Robert Kanigel, The Man Who Knew Infinity: A Life of the Genius Ramanujan, Washington Square Press, 1991.

Stephen C. Kleene, "Recursive predicates and quantifiers," Transactions of the American Mathematical Society, vol. 53, 1943, pp. 41-74.

Donald E. Knuth, Selected Papers on Computer Science, CSLI Publications, 1996. Chapter 2, "Mathematics and Computer Science: Coping with Finiteness," pp. 31-57.

Dexter C. Kozen, Automata and Computability, Springer-Verlag, 1997.

———, The Design and Analysis of Algorithms, Springer-Verlag, 1991.

Shen Lin and Tibor Rado, "Computer studies of Turing machine problems," Journal of the Association for Computing Machinery, vol. 12, no. 2, April 1965, pp. 196-212.

Heiner Marxen, Busy Beaver, at http://www.drb.insel.de/~heiner/BB/.

——— and Jürgen Buntrock, "Attacking the Busy Beaver 5," Bulletin of the European Association for Theoretical Computer Science, no. 40, February 1990, pp. 247-251.

Tibor Rado, "On Non-Computable Functions," Bell System Technical Journal, vol. XLI, no. 2, May 1962, pp. 877-884.

Rudy Rucker, Infinity and the Mind, Princeton University Press, 1995.

Carl Sagan, Billions & Billions, Random House, 1997.

Michael Somos, "Busy Beaver Turing Machine." At http://grail.cba.csuohio.edu/~somos/bb.html.

Alan Turing, "On computable numbers, with an application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, Series 2, vol. 42, pp. 230-265, 1936. Reprinted in Martin Davis (ed.), The Undecidable, Raven, 1965.

Ilan Vardi, "Archimedes, the Sand Reckoner," at http://www.ihes.fr/~ilan/sand_reckoner.ps.

Eric W. Weisstein, CRC Concise Encyclopedia of Mathematics, CRC Press, 1999. Entry on "Large Number" at http://www.treasure-troves.com/math/LargeNumber.html.

      Why do we even care about big numbers? Is there any use?

    1. A growing body of research reveals that looking at their eyes may be a neglected and powerful way to do so. The phrases “the eyes are the window to the soul” and “I can see it in your eyes” certainly sound poetic. Many singers, songwriters and writers have capitalized on it. But it turns out that the eyes really might be the windows to the soul. And here’s the great thing about eyes: even if people don’t want you to know how they feel, they can’t change how their eyes behave. So how does this work? The first thing to look for is changes in pupil size. A famous study published in 1960 suggests that how wide or narrow pupils are reflects how information is processed, and how relevant it is. In their experiment, the two experimental psychologists Hess and Polt of the University of Chicago asked male and female participants to look at semi-nude pictures of both sexes. Female participants’ pupil sizes increased in response to viewing men, and male participants’ pupils increased in response to viewing women.

      Changes in emotional state can be seen in the eyes. Two psychologists asked people to look at semi-nude pictures; upon viewing the opposite sex, both male and female participants' pupils dilated.

    1. https://github.com/sajjad2881/NewSyntopicon

      Someone's creating a new digitally linked version of the Syntopicon as text files for Obsidian (and potentially other platforms). Looks like it's partial at best and will need a lot of editing work to become whole.

      found by way of

      Has anyone made a hypermedia rendition of the Syntopicon, i.e. with transcluded windows or "parallel pages" into the indexed texts?<br><br>Many of Adler's Great Books are public domain, so it wouldn't require *so* titanic a copyright issue… pic.twitter.com/UmWiyn5aBC

      — Andy Matuschak (@andy_matuschak) August 17, 2022
    1. Open a link in a new window Hold Shift and click the link

      !- do how : open link in a new window - chrome now gives you a searchable reverse-chronological listing of all open tabs - now that I use desktops on Windows it makes sense to open links in a new window - so when you look at the current desktop you can see a thumbnail of all your windows

      • do how : thumbview of tabs in chrome
    1. Author Response

      Reviewer #1 (Public Review):

      “This study investigates the dynamics of brain network connectivity during sustained experimental pain in healthy human participants. To this end, capsaicin was applied to the tongues of two cohorts of participants (discovery cohort, N=48; replication cohort, N=74). This procedure resulted in pain for several minutes. During sustained pain, pain avoidance/intensity ratings and fMRI scans were obtained. The analyses (i) compare the pain state with a resting state, (ii) assess the dynamics of brain networks during sustained pain, and (iii) aim to predict pain based on the dynamics of brain networks. To this end, the analyses focus on community structures of time-evolving networks. The results show that sustained pain is associated with the emergence of a brain network including somatomotor, frontoparietal, basal ganglia and thalamic brain areas. The somatomotor area of the tongue is particularly involved in that network while this area is decoupled from other parts of the somatomotor cortex. Moreover, the network configuration changes over time with the frontoparietal network decoupling from the somatomotor network. Frontoparietal-cerebellar connections were predictive of decreases of pain. Together, the findings provide novel and convincing insights into the dynamics of brain network during sustained pain.

      Strengths

      • The brain mechanisms of sustained pain is a timely and relevant topic with potential clinical implications.

      • Assessing the dynamics of sustained pain and relating it to the dynamics of brain networks is a timely and promising approach to further the understanding of the brain mechanisms of pain.

      • The study includes discovery and replication cohorts and pursues a cutting-edge analysis strategy.

      • The manuscript is very well-written and the results are visualized in an exemplary manner including a graphical outline and summary of the findings.”

      We thank the reviewer for the thoughtful summarization and evaluation of our study.

      “Weaknesses

      • It remains unclear whether the changes of brain networks over time simply reflect the duration of sustained pain or whether they essentially reflect different levels of pain intensity/avoidance.”

      We appreciate the editor and reviewer’s comment on this issue. With the current experimental paradigm, it is difficult to dissociate the pain duration from the level of pain because the delivery of oral capsaicin commonly induces initial bursting and then a gradual decrease of pain over time. That is, the pain duration is correlated with the pain intensity in our task.

      However, when we examined the time-course of the ratings at each individual level (as shown in Figure S2), the time duration explained 53.7% of the rating variance, R2 = 0.537 ± 0.315 (mean ± standard deviation). In addition, if we constrain the beta coefficient of the time duration to be negative (i.e., ratings should decrease over time), the explained variance decreases to 48.2%, R2 = 0.482 ± 0.457, leaving us enough variance (i.e., greater than 50%) for examining the distinct effects of time duration and ratings on the patterns of functional brain reorganization.
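The per-subject "variance explained by time" figure above can be made concrete with a small sketch. This is a hypothetical reconstruction, not the authors' actual code: the function name and the simple linear-trend model are assumptions, and the authors' constrained variant (forcing the slope to be negative) is omitted.

```python
import numpy as np

def variance_explained_by_time(ratings):
    """R^2 of a per-subject linear regression of pain ratings on time.
    ratings: a 1-D sequence sampled at equal time intervals."""
    ratings = np.asarray(ratings, dtype=float)
    t = np.arange(len(ratings), dtype=float)
    # Ordinary least-squares fit of ratings against time (degree-1 polynomial).
    slope, intercept = np.polyfit(t, ratings, 1)
    fitted = slope * t + intercept
    ss_res = np.sum((ratings - fitted) ** 2)   # residual sum of squares
    ss_tot = np.sum((ratings - ratings.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Averaging this quantity over subjects would give a group-level figure of the kind reported (e.g., R² = 0.537 ± 0.315 in the response above).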

      Indeed, the two main analyses included in the manuscript—consensus community detection and predictive modeling—were designed to examine those two aspects of the task, i.e., time duration and pain avoidance ratings, respectively. First, through the consensus community detection analysis, we examined the community structure that changes over time, i.e., across the early, middle, and late periods (as shown in Figure 3). We then developed predictive models of pain avoidance ratings in the second main analysis (as shown in Figure 5).

      Though it is still a caveat that we cannot fully dissociate the effects of time duration versus pain ratings, we could interpret the first set of results to be more about time duration, while the second set of results is more about pain ratings.

      We have now added a description of the implications of predictive modeling for isolating the effects of pain ratings, as well as a discussion of the caveats of the current experimental design and relevant future directions.

      Revisions to the main manuscript:

      p. 25: Moreover, developing models to directly predict the pain ratings is helpful to complement the group-level analysis, because the changes in consensus community structure over the early, middle, and late periods only indirectly reflect the different levels of pain.

      p. 27: This study also had some limitations. First, with the current experimental paradigm, it is difficult to dissociate the pain duration from the level of pain because the delivery of oral capsaicin commonly induces initial bursting and then a gradual decrease of pain over time. Though we aimed to model the effects of pain duration and pain avoidance ratings with our two primary analyses, i.e., consensus community detection and predictive modeling, we cannot fully dissociate the impact of time duration versus pain ratings.

      “• Although the manuscript is very well-written it might benefit from an even clearer and simpler explanation of what the consensus community structure and the underlying module allegiance measure assesses.”

      We thank you for the suggestion. We have now added additional (but simple) descriptions of the module allegiance and consensus community detection methods.

      Revisions to the main manuscript:

      pp. 8-9: Here, the consensus community means the group-level representative structures of the distinct community partitions of individuals. To determine the consensus community across different individuals and times, we first obtained the module allegiance (Bassett et al., 2011) from the community assignment of each individual. Module allegiance assesses how much a pair of nodes is likely to be affiliated with the same community label, and is defined as a matrix T whose element Tij is 1 when nodes i and j are assigned to the same community and 0 when assigned to different communities. This conversion of the categorical community assignments to the continuous module allegiance values allows group-level summarization of different community structures of individuals.

      p. 14: Here, high module allegiance indicates the voxels of two regions are likely to be in the same community affiliation, and vice versa.
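The module allegiance definition quoted above reduces to a few lines of NumPy. The sketch below is an illustration of that definition only (our hypothetical names, not the authors' released code): allegiance is the fraction of partitions in which each pair of nodes shares a community label.

```python
import numpy as np

def module_allegiance(partitions):
    """partitions: (n_partitions, n_nodes) integer community labels, one
    row per subject/layer. Returns the (n_nodes, n_nodes) matrix T whose
    entry T[i, j] is the fraction of partitions assigning nodes i and j
    to the same community."""
    partitions = np.asarray(partitions)
    n_partitions, n_nodes = partitions.shape
    T = np.zeros((n_nodes, n_nodes))
    for labels in partitions:
        # Add 1 wherever nodes i and j carry the same community label.
        T += labels[:, None] == labels[None, :]
    return T / n_partitions
```

Averaging the binary same-community indicator in this way converts categorical community assignments into a continuous matrix, which can then itself be clustered to obtain group-level consensus communities.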

      “• The added value of the assessment of the dynamics of brain networks remains unclear. Specifically, it is unclear whether the current analysis of brain networks dynamics allows for a clearer distinction between and prediction of pain and no-pain states than other measures of static or dynamic brain activity or static measures of brain connectivity.”

      The main goal (and thus, the added value) of the current study was to provide a “mechanistic” understanding of the brain processes of sustained pain, rather than the “prediction.” Even though we included the results from the predictive modeling, as in Figures 4-6, our focus was more on the interpretation of the model to quantitatively examine the functional changes in the brain, not on the maximization of the prediction performance.

      Indeed, maximizing the prediction performance was the main goal of our previous study (Lee et al., 2021), in which we developed a predictive model of sustained pain based on the patterns of dynamic functional connectivity. The model showed better prediction performances compared to the current study, but it was challenging to interpret the model because of the high dimensionality of the model and its features. In addition, functional connectivity itself provides only limited insight into how functional brain networks are structured and reconfigured over time.

      In this sense, the multi-layer community detection method has several advantages to achieving our goal. First, the community detection analysis allows us to summarize the complex, high-dimensional whole-brain connectivity patterns into neurobiologically interpretable subsystems. Second, the multi-layer community detection method allows us to study the temporal changes in community structure by connecting the same nodes across different time points.

      Now we added a description of the rationale behind the choice of the multi-layer community detection analysis over the conventional functional connectivity methods, and the added value of our study.

      Revisions to the main manuscript:

      p. 3: In this study, we examined the reconfiguration of whole-brain functional networks underlying the natural fluctuation in sustained pain to provide a mechanistic understanding of the brain responses to sustained pain.

      p. 7: In this study, we used this approach to examine the temporal changes of brain network structures during sustained pain, which cannot be done with conventional functional connectivity-based analyses (Lee et al., 2021).

      p. 27: However, the previous model provides a limited level of mechanistic understanding because of the high dimensionality of the model and its features. In addition, functional connectivity itself provides only limited insight into how functional brain networks are structured and reconfigured over time.

      Reviewer #2 (Public Review):

      “The Authors J-J Lee et al., investigated cortical and subcortical brain networks and their organization in communities over time during evoked tonic pain. The paper is well-written, and the findings are interesting and relevant for the field. Interestingly, other than confirming well known phenomena (e.g., segregation within the primary somatomotor cortex) the Authors identified an emerging "pain supersystem" during the initial increase of pain, in which subcortical and frontoparietal regions, usually more segregated, showed more interactions with the primary somatomotor cortex. Decrease of pain was instead associated to a reconfiguration of the networks that sees subcortical and frontoparietal regions connected with areas of the cerebellum. The main novelty of the proposed analysis, lies in the resulting high performances of the classifier, that shows how this interesting link between frontoparietal network and subcortical regions with the cerebellum, is predictive of pain decrease. In summary, the main strengths of the present manuscript are:

      • Inclusion of subcortical regions: most of the recent papers using the Shaefer parcellation in ~200 brain areas [1], do not consider subcortical areas, ignoring possible relevant responses and behaviors of those regions. Not only the Authors smartly addressed this issue, but most of their results showed how subcortical regions played a key role in the networks reconfiguration over time during evoked sustained pain.

      • Robust classification results: high accuracy obtained on training dataset (internal validation), using a leave-one-out approach, and on the available independent test dataset (external validation) of relatively large sample size (N=74).

      • Clarity in the description of aim and sub-aims and exhaustive presentation of the obtained results helped by appropriate illustrations and figures (I suggest less wording in some of them).

      • Availability of continuous behavioral outcome (track ball).”

      We appreciate the reviewer’s summary and positive evaluations.

      “Even though the results are mostly cohesive with previous literature, some of the results need to be discussed in relationship to recently published papers on the same topic as well as justifying some of the non-standard methodological procedures adding appropriate citations (or more detailed comments). The Authors do not touch upon the concept of temporal summation of pain, historically associated with tonic pain, especially when the study is finalized to better understanding brain mechanisms in chronic pain populations (chronic pain patients often exhibit increased temporal summation of pain [2]). I would suggest starting from the paper recently published by Cheng et al. that also shares most of the methodological pipeline [3] to highlight similarities and novelties and deepen the comparison with the associated literature.”

      We thank the reviewer and editor for the comment on this important topic. Temporal summation of pain indicates a progressively increased sensation of pain during prolonged noxious stimulation (Price, Hu, Dubner, & Gracely, 1977), and has been suggested as a hallmark of chronic pain disorders including fibromyalgia (Cheng et al., 2022; Price et al., 2002). In a recent study by Cheng et al. (2022), the authors induced tonic pain using constantly high cuff pressure and examined whether the participants experienced increased pain in the late period compared to the early period of pain. In contrast, in our experimental paradigm, the capsaicin liquid initially delivered into the oral cavity is gradually washed out by saliva, and thus overall pain intensity was decreasing over time, not increasing (Figure 1B). Therefore, the temporal summation of pain may occur in a limited period (e.g., the early period of the run), but it is difficult to examine its effect systematically in our study.

      However, it is notable that Cheng et al.’s results overlap with our findings. For example, Cheng et al. reported the intra-network segregation within the somatomotor network and the inter-network integration between the somatomotor and other networks during the temporal summation of pressure pain in patients with fibromyalgia, which were similar to the findings we reported in Figure S9 and Figure 4. Although it is unclear whether these results reflect the temporal summation of pain, these network-level features shared across the two studies are likely to be an essential component of the sustained pain processes in the brain.

      Now we added a comment on the temporal summation of pain in the main manuscript.

      Revisions to the main manuscript (p. 26):

      Interestingly, a recent fMRI study on the temporal summation of pain in fibromyalgia patients reported results similar to ours (Cheng et al., 2022), including the intra-network dissociation within the somatomotor network and the inter-network integration between the somatomotor and other networks during pain. Although we cannot directly examine whether the temporal summation of pain gave rise to these network-level changes due to the limitation of our experimental paradigm, these consistent findings between the two studies may suggest that our findings could be generalized to clinical conditions.

      We thank the reviewer and editor for the information about this recent publication. Cheng et al. (2022) was not published at the time we wrote the manuscript, and we were surprised that Cheng et al. shares many aspects with our study, e.g., both used multilayer community detection and also reported similar findings, as described above.

      However, there were some differences between the two studies as well.

      First, the focus of our study was on the brain dynamics during the natural time-course of sustained pain from its initiation to remission in healthy participants, whereas the focus of Cheng et al. was on the temporal summation phenomenon of pain (TSP) and the enhanced TSP in patients with fibromyalgia. Because of this difference in research focus, our study and Cheng et al. provide many nonoverlapping results and insights. For example, our study paid particular attention to the coping mechanisms of the brain (e.g., the network-level changes in the subcortical and frontoparietal network regions) and the brain systems that are correlated with the natural decrease of pain (e.g., the cerebellum in Figure 5). In contrast, Cheng et al. (2022) identified the brain connectivity and network features important for the increased TSP in fibromyalgia patients.

      Second, our great interest was in identifying and visualizing the fine-grained spatiotemporal patterns of functional brain network changes over the period of sustained pain. To utilize fine-grained brain activity information, we conducted our main analyses at a voxel-level resolution and on the native brain space, such as in Figures 2-3 and Figures S5, S7, and S8. With this fine-grained spatiotemporal mapping, we were able to identify small, but important voxel-level dynamics.

      We now cited Cheng et al. (2022) in multiple places and revised the manuscript accordingly.

      Revisions to the main manuscript (p. 26):

      Interestingly, a recent fMRI study on the temporal summation of pain in fibromyalgia patients reported results similar to ours (Cheng et al., 2022), including the intra-network dissociation within the somatomotor network and the inter-network integration between the somatomotor and other networks during pain. Although we cannot directly examine whether the temporal summation of pain gave rise to these network-level changes due to the limitation of our experimental paradigm, these consistent findings between the two studies may suggest that our findings could be generalized to clinical conditions.

      “Here the main significant weaknesses of the study:

      • The data analysis is entirely conducted on young healthy subjects. This is not a limitation per se, but the conclusion about offering new insights into understanding mechanisms at the basis of chronic pain is too far from the results. Centralization of pain is very different from summation and habituation, especially if all the subjects in the study consistently rated increased and decreased pain in the same way (it never happens in chronic pain patients). A similar pipeline has been actually applied to chronic pain patients (fibromyalgia and chronic back pain) [3,4]. Discussing the results of the present paper in relationship to those, could offer a more robust way to connect the Authors' results to networks behavior in pathological brains.”

      We are grateful for the opportunity to discuss the clinical implications of our study. First of all, we agree with the reviewer and editor that we cannot make a definitive claim about chronic pain with the current study, and we have thus revised the last sentence of the abstract to tone down our claim.

      Revisions to the main manuscript (p. 2, in the abstract):

      This study provides new insights into how multiple brain systems dynamically interact to construct and modulate pain experience, advancing our mechanistic understanding of sustained pain.

      However, as we noted above in E-4, some of our findings were consistent with the findings from a previous clinical study (Cheng et al., 2022), suggesting the potential to generalize our study to clinical pain conditions. In addition, we previously reported that a predictive model of sustained pain derived from healthy participants performed better at predicting the pain severity of chronic pain patients than the model derived directly from chronic pain patients (Lee et al., 2021), highlighting the advantage of the “component process approach.”

      The component process approach aims to develop brain-based biomarkers for basic component processes first, which can then serve as intermediate features for the modeling of multiple clinical conditions (Woo, Chang, Lindquist, & Wager, 2017). This has been one of the core ideas of the Research Domain Criteria (RDoC) (Insel et al., 2010) and the Hierarchical Taxonomy of Psychopathology (HiTOP) (Kotov et al., 2017). If the clinical pain of a patient group is modeled as a whole, it becomes unclear what is being modeled because of the multidimensional and heterogeneous nature of clinical pain (Melzack, 1999) as well as other co-occurring health conditions (e.g., mental health issues, medication use, etc.). The component process approach, in contrast, can specify which components are being modeled and is relatively free from heterogeneity and comorbidity issues because it experimentally manipulates the specific component of interest in healthy participants.

      The current study was conducted on healthy young adults based on the component process approach. We used oral capsaicin to experimentally induce sustained pain, which unfolds over protracted time periods and has been suggested to reflect some of the essential features of clinical pain (Rainville, Feine, Bushnell, & Duncan, 1992; Stohler & Kowalski, 1999). Therefore, the detailed characterization of the brain processes of sustained pain will be able to serve as an intermediate feature of multiple clinical conditions in future studies.

      We have now added a discussion of the clinical generalizability issue to the discussion section.

      Revisions to the main manuscript:

      pp. 25-26: An interesting future direction would be to examine whether the current results can be generalized to clinical pain. Experimental tonic pain has been known to share similar characteristics with clinical pain (Rainville et al., 1992; Stohler & Kowalski, 1999). In addition, in a recent study, we showed that an fMRI connectivity-based signature for capsaicin-induced orofacial tonic pain can be generalized to chronic back pain (Lee et al., 2021). Therefore, a detailed characterization of the brain responses to sustained pain has the potential to provide useful information about clinical pain.

      p. 26: Interestingly, a recent fMRI study on the temporal summation of pain in fibromyalgia patients reported results similar to ours (Cheng et al., 2022), including the intra-network dissociation within the somatomotor network and the inter-network integration between the somatomotor and other networks during pain. Although we cannot directly examine whether the temporal summation of pain gave rise to these network-level changes due to the limitation of our experimental paradigm, these consistent findings between the two studies may suggest that our findings could be generalized to clinical conditions.

      “Vice versa, the behavioral measure used to assess evoked pain perception (avoidance ratings), has been developed for chronic pain patients and never validated on healthy controls5. It might not be an appropriate measure considering the total absence of pain variability in the reported responses over forty-eight subjects6,7.”

      We acknowledge that pain avoidance measures are not fully validated in the healthy population. Nevertheless, we used this measure in this study for the following two main reasons, which we believe outweigh the limitations.

      First, a pain avoidance rating provides an integrative measure that can reflect the multi-dimensional aspects of sustained pain. One of the essential functions of pain is to avoid harmful situations and promote survival, and the avoidance motivation induced by pain comprises not only sensory-discriminative but also cognitive components, including learning, valuation, and contexts (Melzack, 1999). According to the fear-avoidance model (Vlaeyen & Linton, 2012), if the pain-induced avoidance motivation is not resolved for a long time and becomes maladaptively associated with innocuous environments, chronic pain is likely to develop, underscoring the importance and clinical relevance of pain avoidance measures. In addition, our experimental design is particularly suitable for the use of avoidance ratings because oral capsaicin stimulation is accompanied by an urge to avoid the painful sensation that cannot be immediately resolved, similar to chronic pain. Moreover, capsaicin is sometimes experienced as intense but less aversive (or even appetitive), e.g., by spicy food cravers (Stevenson & Yeomans, 1993). In such cases, avoidance ratings can provide a more reasonable measure of pain than intensity ratings.

      Second, the avoidance measure provides a common scale on which we can compare different types of aversive experiences, allowing us to conduct specificity tests for a predictive model of pain. For example, a recent study successfully compared the brain representations of two types of pain and two types of aversive, but non-painful experiences (e.g., aversive auditory and visual experiences) using the same avoidance measure (Ceko, Kragel, Woo, Lopez-Sola, & Wager, 2022). These comparisons were possible because the avoidance measure provided one common scale for all the aversive experiences regardless of their types of stimuli.

      To provide a better justification for the use of the avoidance measure, we have now included the specificity test results of our pain predictive models. More specifically, we tested our module allegiance-based SVM and PCR models of pain on the aversive taste and aversive odor conditions (Figure S13).

      Despite these advantages, the use of avoidance ratings without thorough validation is a limitation of the current study, and thus future studies need to examine the psychometric properties of the avoidance rating, e.g., by examining the relationships among pain intensity, unpleasantness, and avoidance measures. However, the current study showed that the predictive models derived from pain avoidance ratings (Study 1) could be used to predict pain intensity ratings (Study 2). In addition, the overall time-course of pain avoidance ratings in Study 1 was similar to the time-course of pain intensity ratings in Study 2, providing some supporting evidence for the convergent validity of the pain avoidance measure.

      As to the following comment, “It might not be an appropriate measure considering the total absence of pain variability in the reported responses over forty-eight subjects,” there is evidence that the low between-individual variability of ratings is due to the characteristics of our experimental design, not to our use of the avoidance measure. As we discussed in more detail in our response to E-1, our experimental procedure based on capsaicin liquid commonly induces an initial burst of painful sensation and a subsequent gradual relief in most participants (Figure 1B, left). A similar time-course pattern of ratings was observed in Study 2 (Figure 1B, right), which used the pain “intensity” rating, not the pain avoidance rating. In addition, previous studies with a similar experimental design (i.e., intra-oral capsaicin application) (Berry & Simons, 2020; Lu, Baad-Hansen, List, Zhang, & Svensson, 2013; Ngom, Dubray, Woda, & Dallel, 2001) also showed a similar time-course of pain ratings with low between-individual variability regardless of the rating type (e.g., VAS or irritation intensity), confirming that this observation is not unique to the pain avoidance rating.

      We have now added descriptions of the small between-individual variability of pain ratings and the use of avoidance ratings.

      Revisions to the main manuscript:

      pp. 5-7: Note that the overall trend of pain ratings over time was similar across participants because of the characteristics of our experimental design, as has also been observed in previous studies that used oral capsaicin (Berry & Simons, 2020; Lu et al., 2013; Ngom et al., 2001). However, also note that the individual time-courses of pain ratings were not entirely the same (Figures S2 and S3).

      p. 26: However, there are also differences between the characteristics of capsaicin-induced tonic pain versus clinical pain. For example, clinical pain continuously fluctuates over time in an idiosyncratic pattern (Apkarian, Krauss, Fredrickson, & Szeverenyi, 2001), whereas capsaicin-induced tonic pain showed a similar time-course pattern across the participants—i.e., increasing rapidly and then decreasing gradually (Figure 1B). This typical time-course of pain ratings has been reported in previous studies that used oral capsaicin (Berry & Simons, 2020; Lu et al., 2013; Ngom et al., 2001).

      pp. 26-27: Note that Study 1 used a pain avoidance measure that has not yet been fully validated in healthy participants. However, we chose to use the pain avoidance measure because it can provide integrative information on the multi-dimensional aspects of pain (Melzack, 1999; Waddell, Newton, Henderson, Somerville, & Main, 1993). It also has a clinical implication considering that maladaptive associations of pain avoidance with innocuous environments have been suggested as a putative mechanism of the transition to chronic pain (Vlaeyen & Linton, 2012). Lastly, the avoidance measure can provide a common scale across different modalities of aversive experience, allowing us to compare their distinct brain representations (Ceko et al., 2022) or test the specificity of their predictive models (Lee et al., 2021) (Figure S13). Although the psychometric properties of the pain avoidance measure should be a topic of future investigation, we expect that the pain avoidance measure would have a high level of convergent validity with pain intensity given the observed similarity between pain avoidance (Study 1) and pain intensity (Study 2) in their temporal profiles. The generalizability of our PCR model across Studies 1 and 2 also supports this speculation. However, there would also be situations in which pain avoidance is dissociated from pain intensity. For example, capsaicin can be experienced as intense but less aversive, or even appetitive, in some contexts, such as cravings for spicy food (Stevenson & Yeomans, 1993). In addition, the gradual rise of avoidance ratings during the late period of the control condition in Study 1 would not be observed if an intensity measure were used. Future studies need to examine the relationship between pain avoidance and other pain assessments and the advantages of using the pain avoidance measure.

      “• The dynamic measure employed by the Authors is better described from the term "windowed functional connectivity". It is often considered a measure of dynamic functional connectivity and it gives information about fluctuations of the connectivity patterns over time. Nevertheless, the entire focus of the paper, including the title, is on dynamic networks, which inaccurately leads one to think of time-varying measures with higher temporal resolution (either updating for every acquired time point, as the Authors did in their previous publication on the same dataset4, or sliding windows involving weighting or tapering8,9). This allows one to follow network reorganization over time without averaging 2-min intervals in which several different brain mechanisms might play an important role3,10,11. In summary, the assumption of constant response throughout 2-min periods of tonic pain and the use of Pearson correlations do not mirror the idea of dynamic analysis expressed by the Authors in title and introduction. I would suggest removing "dynamic" from the title, reduce the emphasis on this concept, address possible confounds introduced by the choice of long windows and rephrase the aim of the study in terms of brain network reconfiguration over the main phases of tonic pain experience.”

      We have now removed the word ‘dynamic’ from many places in the manuscript, including the title. In addition, we have added a brief discussion of why we chose to use long, non-overlapping windows for the connectivity calculation.

      Revisions to the main manuscript (p. 8):

      Although the long duration of the time window without overlaps may obscure the fine-grained temporal dynamics in functional connectivity patterns, we chose to use this long time window based on previous literature (Bassett et al., 2011; Robinson, Atlas, & Wager, 2015), which also used long time windows to obtain more reliable estimates of network structures and their transitions.
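      As an illustration of the non-overlapping windowed connectivity described above, here is a minimal sketch with toy data. The 120-sample window assumes a 1-s sampling interval and 2-min windows; these numbers and all variable names are illustrative assumptions, not the study's actual acquisition parameters or code.

```python
import numpy as np

def windowed_connectivity(ts, win_len):
    """Pearson correlation matrices over consecutive,
    non-overlapping windows. ts has shape (time, regions)."""
    n_win = ts.shape[0] // win_len
    return np.stack([
        np.corrcoef(ts[i * win_len:(i + 1) * win_len].T)
        for i in range(n_win)
    ])

# Toy data: a 10-min scan sampled once per second, 5 regions,
# 2-min (120-sample) windows -> 5 connectivity matrices.
rng = np.random.default_rng(0)
ts = rng.standard_normal((600, 5))
conn = windowed_connectivity(ts, win_len=120)

assert conn.shape == (5, 5, 5)          # one 5x5 matrix per window
assert np.allclose(conn[0].diagonal(), 1.0)
```

      Longer windows trade temporal resolution for more stable correlation estimates, which is the rationale given in the revision above.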

      “• Procedure chosen for evoking sustained pain. To the best of my knowledge, capsaicin sauce on the tongue is not a validated tonic pain procedure. In favor of this argument is the absence of inter-subject variability in the behavioral results showed in the paper, very unusual for response to painful stimulations. The procedure is well described by the Authors, and some precautions like letting the liquid drying before the start of the scan, have helped reducing confounds. Despite this, the measures in figure 1B suggest that the intensity of the painful stimulation is not constant as expected for sustained pain (probably the effect washes out with the saliva). In this case, the first six-minute interval requires particular attention because it encapsulates the real tonic pain phase, and the following ones require more appropriate labels. Ideally the Author should cite previous studies showing that tongue evoked pain elicits a very specific behavioral response (summation, habituation/decrease of pain, absence of pain perception). If those works are missing, this response need to be treated as a funding rather than an obvious point.”

      We have addressed this comment. Moreover, we found previous studies that experimentally induced tonic pain through the application of capsaicin on the tongue (Berry & Simons, 2020; Boudreau, Wang, Svensson, Sessle, & Arendt-Nielsen, 2009; Green, 1991; Ngom et al., 2001), indicating that our experimental procedure is in line with the previous literature.

      Reviewer #3 (Public Review ):

      “In their manuscript, Lee and colleagues explore the dynamics of the functional community structure of the brain (as measured with fMRI) during sustained experimental pain and provide several potentially highly valuable insights into, and evaluate the predictive capacity of, the underlying dynamic processes. The applied methodology is novel but, at the same time, straightforward and has solid foundations. The findings are very interesting and, potentially, of high scientific impact as they may significantly push the boundaries of our understanding of the dynamic neural processes during sustained pain, with a (somewhat limited) potential for clinical translation.

      However (Major Issue 1), after reading the current manuscript version, not all of my doubts have been dissolved regrading the specificity of the results to pain. Moreover (Major Issue 2), some of the results (specifically, those related to the group level analysis of community differences) do not seem to be underpinned with a proper statistical inference in the current version of the manuscript and, therefore, their presentation and discussion may not be proportional to the degree of evidence. Next to these Major Issues (detailed below), some other, minor clarifications might also be needed before publications. These are detailed below or in the private part of the review ("Recommendations for the authors").

      Despite these issues, this is, in general, a high quality work with a high level of novelty and - after addressing the issues - it has a very high potential for becoming an important contribution (and a very interesting read) to the pain-research community and beyond.”

      We appreciate the reviewer’s thoughtful comments. We have revised the manuscript to address the Reviewer’s major concerns, as described below.

      “Major Issue 1:

      The main issue with the manuscript is that it remains somewhat unclear, how specific the results are to pain.

      Differences between the control resting state and the capsaicin trials might be - at least partially - driven by other factors, like:

      • motion artifacts

      • saliency, attention, axiety, etc.

      Differences between stages over the time-course might, additionally, be driven by scanner drifts (to which the applied approach might be less sensitive, but the possibility is still there ) or other gradual processes, e.g. shifts in arousal, attention shifts, alertness, etc.

      All the above factors might emerge as confounding bias in both of the predictive models.

      This problem should be thoroughly discussed, and at least the following extra analyses are recommended, in order to attenuate concerns related to the overall specificity and neurobiological validity of the results:

      • reporting of, and testing for motion estimates (mean, max, median framewise displacement or anything similar)

      • examining whether these factors might, at least partially, drive the predictive models.

      • e.g. applying the PCR model on the resting state data and verifying of the predicted timecourse is flat (no inverse U-shape, that is characteristic to all capsaicin trials).

      Not using the additional sessions (bitter taste, aversive odor, phasic heat) feels like a missed opportunity, as they could also be very helpful in addressing this issue.”

      We thank the reviewer for this comment on the important issue regarding the specificity of our results and the potential influences of noise. The effects of head motion and physiological confounds are particularly relevant to pain studies because pain involves substantial physiological changes and often causes head motion. To address the related concerns of specificity, we conducted additional analyses assessing the independence of our predictive models (i.e., SVM and PCR models) from head movement and physiology variables and the specificity of our models to pain versus non-painful aversive conditions (i.e., bitter taste and aversive odor) in Study 1.

      First, we examined the overall changes of framewise displacement (FD) (Power, Barnes, Snyder, Schlaggar, & Petersen, 2012), heart rate (HR), and respiratory rate (RR) in the capsaicin condition (Figure S11). For the univariate comparison between the capsaicin vs. control conditions (Figure S11A), the results showed that, as expected, the capsaicin condition caused significant changes in head motion and autonomic responses. The mean FD and HR were significantly higher, and the RR was lower in the capsaicin condition compared to the control condition (FD: t47 = 5.30, P = 2.98 × 10-6; HR: t43 = 4.98, P = 1.10 × 10-5; RR: t43 = -1.91, P = 0.063, paired t-test). In addition, the increased motion and autonomic responses were more prominent in the early period of pain (Figure S11B). The 10-binned (2 mins per time-bin) FD and HR showed a decreasing trend while the RR showed an increasing trend over time in the capsaicin condition. The comparisons between the early (1-3 bins, 0-6 min) vs. late (8-10 bins, 14-20 min) periods of the capsaicin condition showed significant differences both for FD and HR (FD: t47 = 6.45, P = 8.12 × 10-8; HR: t43 = 6.52, P = 6.41 × 10-8; RR: t43 = -1.61, P = 0.11, paired t-test). These results suggest that while participants were experiencing capsaicin tonic pain, particularly during the early period, head motion and heart rate were increased, while breathing was slowed down. Note that we needed to exclude 4 participants’ data in this analysis due to technical issues with the physiological data acquisition.

      Next, we examined whether the changes in head motion and physiological responses influenced our predictive model performance (Figure S12). We first regressed out the mean FD, HR, and RR (concatenated across conditions and participants as we trained the SVM model) from the predicted values of the SVM model with leave-one-subject-out cross-validation (2 conditions × 44 participants = 88) and then calculated the classification accuracy again (Figure S12A). The results showed that the SVM model showed a reduced, but still significant classification accuracy for the capsaicin versus control conditions in a forced-choice test (n = 44, accuracy = 89%, P = 1.41 × 10-7, binomial test, two-tailed). We also did the same analysis for the PCR model (10 time-bins × 44 participants = 440) and the PCR model also showed a significant prediction performance (n = 44, mean prediction-outcome correlation r = 0.20, P = 0.003, bootstrap test, two-tailed, mean squared error = 0.159 ± 0.022 [mean ± s.e.m.]) (Figure S12B). These results suggest that our SVM and PCR models capture unique variance in tonic pain above and beyond the head movement and physiological changes.
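      The confound-removal step described above (regressing nuisance variables out of model predictions before re-evaluating performance) can be sketched as follows. The data are synthetic and the names are illustrative; this is not the study's actual code.

```python
import numpy as np

def residualize(y, confounds):
    """Regress the confound columns (plus an intercept) out of y
    via ordinary least squares and return the residuals."""
    X = np.column_stack([np.ones(len(y)), confounds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Synthetic stand-ins: one predicted value and one (FD, HR, RR)
# triplet per scan, for 2 conditions x 44 participants = 88 scans.
rng = np.random.default_rng(0)
pred = rng.standard_normal(88)
confounds = rng.standard_normal((88, 3))

resid = residualize(pred, confounds)

# The residuals are orthogonal to every confound column, so any
# predictive signal that survives cannot be confound-driven.
assert np.allclose(confounds.T @ resid, 0.0, atol=1e-8)
```

      Re-computing classification accuracy or prediction-outcome correlations on such residuals, as done above, tests whether the model predicts pain beyond the nuisance variables.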

      Lastly, we examined the specificity of our predictive models to pain, by testing the models on the non-painful but aversive conditions including the bitter taste (induced by quinine) and aversive odor (induced by fermented skate) conditions (Figure S13). All the model responses were obtained using leave-one-participant-out cross-validation. The results showed that the overall model responses of the SVM model for the bitter taste and aversive odor conditions were higher than those for the control condition but lower than the capsaicin condition (Figure S13A). Classification accuracies for comparing capsaicin vs. bitter taste and capsaicin vs. aversive odor were all significant (for capsaicin vs. bitter taste, accuracy = 79%, P = 6.17 × 10-5, binomial test, two-tailed, Figure S13C; for capsaicin vs. aversive odor, accuracy = 83%, P = 3.31 × 10-6, binomial test, two-tailed, Figure S13E), supporting the specificity of our SVM model of pain. Similarly, the model responses of the PCR model for the bitter taste and aversive odor conditions were lower than the capsaicin condition, and their temporal trajectories were less steep and fluctuating compared to the capsaicin condition (Figure S13B). The time-course of the model responses for the control condition was flatter than all other conditions and did not show the inverted U-shape. Furthermore, the model responses of the bitter taste and aversive odor conditions did not show the significant correlations with the actual avoidance ratings (bitter taste: mean prediction-outcome correlation r = 0.05, P = 0.41, bootstrap test, two-tailed, mean squared error = 0.036 ± 0.006 [mean ± s.e.m.], Figure S13D; aversive odor: mean prediction-outcome correlation r = 0.12, P = 0.06, bootstrap test, two-tailed, mean squared error = 0.044 ± 0.004 [mean ± s.e.m.], Figure S13F), suggesting the specificity of PCR model to pain.
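      The forced-choice accuracies above are tested against chance with a two-tailed binomial test. A self-contained sketch of an exact version of that test, using illustrative counts (e.g., 35 of 44 participants correct, roughly 79% accuracy; the exact p-value may differ from the reported one depending on the test variant used):

```python
from math import comb

def binom_two_tailed(k, n, p=0.5):
    """Exact two-tailed binomial p-value: sum the probabilities of
    all outcomes no more likely than the observed count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-12))

# E.g., 35 of 44 forced choices correct (~79% accuracy).
p_value = binom_two_tailed(35, 44)
assert 0 < p_value < 0.001  # far above the 50% chance level
```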

      Overall, we have provided evidence that our models can predict pain ratings above and beyond head motion and physiological changes and that the models are more responsive to pain than to non-painful aversive conditions.

      We have now added descriptions of the specificity tests to the main manuscript and the Supplementary Information.

      Revisions to the main manuscript (p. 20):

      Specificity of the module allegiance-based predictive models
      To examine whether the predictive models were specific to pain and whether the prediction performances were influenced by confounding variables such as head motion and physiological changes, we conducted additional analyses as shown in Figures S11-S13. The SVM and PCR models showed significant prediction performances even after controlling for head motion (i.e., framewise displacement) and physiological responses (i.e., heart rate and respiratory rate) (Figures S11 and S12), and they responded more weakly to the non-painful but aversive conditions, including the bitter taste and aversive odor conditions (Figure S13), supporting the specificity of our predictive models to pain. For details, please see Supplementary Results.

      Revisions to the Supplementary Information (pp. 2-4):

      Specificity analysis (Figures S11-13)
      To examine whether the predictive models (i.e., the SVM and PCR models) were specific to pain and not influenced by confounding noise, we conducted an additional specificity analysis assessing the independence of the models from head movement and physiological variables and the specificity of the models to pain versus non-painful aversive conditions (i.e., bitter taste and aversive odor) in Study 1. First, we examined the overall changes in framewise displacement (FD) (Power et al., 2012), heart rate (HR), and respiratory rate (RR) during sustained pain (Figure S11). The univariate comparison between the capsaicin vs. control conditions (Figure S11A) showed that, as expected, the capsaicin condition caused significant changes in motion and autonomic responses. The mean FD and HR were significantly higher, and the RR was lower, in the capsaicin condition compared to the control condition (FD: t47 = 5.30, P = 2.98 × 10-6; HR: t43 = 4.98, P = 1.10 × 10-5; RR: t43 = -1.91, P = 0.063, paired t-test). Regarding the temporal changes in the movement and physiological variables (Figure S11B), the increased motion and autonomic responses were more prominent in the early period of pain. The 10-binned (2 mins per time-bin) FD and HR showed a decreasing trend, while the RR showed an increasing trend, over time in the capsaicin condition. Additional univariate comparisons between the early (1-3 bins, 0-6 min) vs. late (8-10 bins, 14-20 min) periods of the capsaicin condition showed significant differences for FD and HR (FD: t47 = 6.45, P = 8.12 × 10-8; HR: t43 = 6.52, P = 6.41 × 10-8; RR: t43 = -1.61, P = 0.11, paired t-test). This suggests that while participants were experiencing tonic pain, particularly in the early period, motion and heart rate were increased while breathing was slowed. Note that we needed to exclude 4 participants’ data due to technical issues with the physiological data acquisition.
      Next, we examined whether the head movement and physiological responses were the main drivers of our predictive models (Figure S12). For the original signature responses from the SVM model (2 conditions × 44 participants = 88), we regressed out the mean FD, HR, and RR (concatenated across conditions and participants as the SVM model was trained) and recalculated the classification accuracy (Figure S12A). Although the signature responses were controlled for the movement and physiological variables, the SVM model still showed a high classification accuracy for the capsaicin versus control conditions in a forced-choice test (n = 44, accuracy = 89%, P = 1.41 × 10-7, binomial test, two-tailed). Similarly, for the original signature responses from the PCR model (10 time-bins × 44 participants = 440), we regressed out the 10-binned FD, HR, and RR (concatenated across time-bins and participants as the PCR model was trained) and recalculated the within-individual prediction-outcome correlation (Figure S12B). Again, the PCR model showed significant predictive performance (n = 44, mean prediction-outcome correlation r = 0.20, P = 0.003, bootstrap test, two-tailed, mean squared error = 0.159 ± 0.022 [mean ± s.e.m.]) while controlling for the movement and physiological variables. These results suggest that our SVM and PCR models capture unique variance in tonic pain above and beyond the head movement and physiological changes.
      Lastly, we examined the specificity of our predictive models to pain by testing the models on the non-painful but tonic aversive conditions, including bitter taste (induced by quinine) and aversive odor (induced by fermented skate) (Figure S13). All the signature responses were obtained using leave-one-participant-out cross-validation. The overall signature responses of the SVM model for the bitter taste and aversive odor conditions were higher than those for the control condition but lower than those for the capsaicin condition (Figure S13A). The classification accuracies for capsaicin vs. bitter taste and capsaicin vs. aversive odor were both significant (capsaicin vs. bitter taste: accuracy = 79%, P = 6.17 × 10-5, binomial test, two-tailed, Figure S13C; capsaicin vs. aversive odor: accuracy = 83%, P = 3.31 × 10-6, binomial test, two-tailed, Figure S13E), suggesting the specificity of the SVM model to pain. Similarly, the temporal trajectories of the signature responses of the PCR model for the bitter taste and aversive odor conditions did not overlap with that of the capsaicin condition (Figure S13B). Furthermore, the signature responses for the bitter taste and aversive odor conditions did not show a significant relationship with the actual avoidance ratings (bitter taste: mean prediction-outcome correlation r = 0.05, P = 0.41, bootstrap test, two-tailed, mean squared error = 0.036 ± 0.006 [mean ± s.e.m.], Figure S13D; aversive odor: mean prediction-outcome correlation r = 0.12, P = 0.06, bootstrap test, two-tailed, mean squared error = 0.044 ± 0.004 [mean ± s.e.m.], Figure S13F), suggesting the specificity of the PCR model to pain. Overall, we have provided evidence that the module allegiance-based models can predict pain ratings above and beyond the movement and physiological changes and are more responsive to pain than to non-painful aversive conditions, supporting the specificity of our results to pain.

      “Major Issue 2:

      Another important issue with the manuscript is the (apparent) lack of statistical inference when analyzing the differences in the group-level consensus community structures (both when comparing capsaicin to control and when analysing changes over the time-course of the capsaicin-challenge).

      Although I agree that the observed changes seem biologically plausible and fit very well to previous results, without proper statistical inference we can't determine, how likely such differences are to emerge just by chance.

      This makes all results on Figs. 2 and 3, and points 1, 4 and 5 in the discussion partially or fully speculative or weakly underpinned, comprising a large proportion of the current version of the manuscript.

      Let me note, that this issue only affects part of the results and the remaining - more solid - results may already provide a substantial scientific contribution (which might already be sufficient to be eligible for publication in eLife, in my opinion).

      Therefore I see two main ways of handling Major Issue 2:

      • enhancing (or clarifying potential misunderstandings regarding) the methodology (see my concrete, and hopefully feasible, suggestions in the "private part" of the review),

      • de-weighting the presentation and the discussion of the related results.

      I believe there are many ways to test the significance of these differences. I highlight two possible, permutation testing-based ideas.

      Idea 1: permuting the labels ctr-capsaicin, or early-mid-late, repeating the analysis, constructing the proper null distribution of e.g. the community size changes and obtain the p-values. Idea 2: "trace back" communities to the individual level and do (nonparametric) statistical inference there.”

      We appreciate this important comment. We did not conduct statistical inference when comparing the group-level consensus community affiliations of the different conditions (Figure 2) or different phases (Figure 3) because of the difficulty in matching the community affiliation values of the networks to be compared.

      For example, let us assume that 800 out of 1,000 voxels of community #1 and 1,000 out of 4,000 voxels of community #2 in the control condition are commonly affiliated with the same community #3 in the capsaicin condition. To compare the community affiliations between the two conditions, we should first match the community label of the capsaicin condition (i.e., #3) to that of the control condition (i.e., #1 or #2), and here a dilemma occurs: if we prioritize the proportion of overlapping voxels for the matching, the common community should be labeled #1, whereas if we prioritize the number of overlapping voxels, its label should be #2. Although both choices look reasonable, neither is a perfect solution.

      As the example above illustrates, it is impossible to exactly match the community affiliations of different networks. We must choose an imperfect criterion for the matching procedure, which essentially affects the comparison of network structures. This was the main reason that we limited our results in Figures 2-3 to a qualitative description based on visual inspection. Moreover, the group-level consensus community structures in Figures 2-3 are not a simple group statistic like a sample mean; they were obtained from multiple steps of analysis, including permutation-based thresholding and unsupervised clustering, which could further complicate the interpretation of statistical tests.

      Alternatively, there is a slightly different but more rigorous approach to the comparison of community structures, namely the Phi-test (Alexander-Bloch et al., 2012; Lerman-Sinkoff & Barch, 2016). Instead of directly using the community labels, this method converts the community label of each voxel into a list of module allegiance values between the seed voxel and all voxels of the brain (i.e., 1 if the seed and target voxels have the same community label and 0 otherwise). This allows quantitative comparisons of voxel-level community profiles between different conditions without arbitrary matching of the community labels. We adopted this Phi-test for our analyses to examine whether the regional community affiliation pattern is significantly different between (i) the capsaicin vs. control conditions and (ii) the early vs. late periods of pain (Figure S6), which correspond to the main findings of Figures 2 and 3 in our manuscript, respectively.

      More specifically, to compare the group-level consensus community structures between the capsaicin vs. control conditions and the early vs. late periods, we first obtained a seed-based module allegiance map for each voxel (i.e., using each voxel as a seed). Then, for each voxel, we calculated a correlation coefficient of the module allegiance values between the two conditions. This correlation coefficient can serve as an estimate of the voxel-level similarity of the consensus community profile. Because module allegiance is a binary variable, these correlation values are Phi coefficients. A small Phi coefficient means that the spatial pattern of brain regions that share a community affiliation with the given voxel differs between the two conditions. For example, if a voxel is connected to the somatomotor-dominant community during the capsaicin condition and to the default-mode-dominant community during the control condition, the brain regions that share a community label with that voxel will be very different, and thus the Phi coefficient will be small. Moreover, the Phi coefficient can be small even if a voxel is assigned the same (matched) community label in both conditions, when the spatial pattern of that community differs between conditions.
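
      As an illustration of this conversion, the Phi coefficient between two community partitions for a given seed voxel can be sketched as follows. The eight-voxel labelings are toy data invented for the example, not values from our study.

```python
import numpy as np

def seed_allegiance(labels, seed):
    # Binary allegiance vector: 1 where a voxel shares the seed
    # voxel's community label, 0 otherwise.
    return (labels == labels[seed]).astype(float)

def phi_coefficient(labels_a, labels_b, seed):
    # Phi coefficient = Pearson correlation between the two binary
    # allegiance vectors computed for the same seed voxel.
    a = seed_allegiance(labels_a, seed)
    b = seed_allegiance(labels_b, seed)
    return np.corrcoef(a, b)[0, 1]

# Toy community labels for 8 voxels under two conditions.
cond1 = np.array([1, 1, 1, 2, 2, 3, 3, 3])
cond2 = np.array([1, 1, 2, 2, 2, 3, 3, 3])
print(phi_coefficient(cond1, cond2, seed=0))
```

      Note that the comparison never requires matching community labels across conditions: only the binary "same community as the seed" pattern enters the correlation.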

      To calculate the statistical significance of the Phi coefficient, we conducted permutation tests, in which we randomly shuffled the condition labels within each participant and obtained the group-level consensus community structure for each shuffled condition. Then, we calculated the voxel-level correlations of the module allegiance values between the two shuffled conditions. We repeated this procedure 1,000 times to generate the null distribution of the Phi coefficients, and calculated the proportion of null samples that have a smaller Phi coefficient (i.e., a more dissimilar regional community structure) than the non-shuffled original data.
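
      The mechanics of this permutation procedure can be sketched as follows. This is an illustrative toy, not our analysis code: the subject-by-voxel label matrices are synthetic, and the per-voxel modal label is a crude stand-in for the full consensus-clustering pipeline (permutation-based thresholding plus unsupervised clustering); the toy is too small to yield a meaningful P-value.

```python
import numpy as np

rng = np.random.default_rng(0)

def consensus(labels):
    # Modal community label per voxel across subjects -- a simple
    # stand-in for the full consensus-clustering pipeline.
    return np.array([np.bincount(col).argmax() for col in labels.T])

def phi(cons_a, cons_b, seed):
    # Phi coefficient between the two seed-based allegiance vectors.
    x = (cons_a == cons_a[seed]).astype(float)
    y = (cons_b == cons_b[seed]).astype(float)
    return np.corrcoef(x, y)[0, 1]

# Toy data: 10 subjects x 20 voxels per condition, labels in {1, 2, 3}.
n_sub, n_vox = 10, 20
base = np.repeat([1, 2], n_vox // 2)
cond_a = np.tile(base, (n_sub, 1))
cond_b = cond_a.copy()
cond_b[:, 8:12] = 3                   # these voxels switch community
seed = 9

obs = phi(consensus(cond_a), consensus(cond_b), seed)

# Null distribution: shuffle the condition labels within each subject,
# rebuild both consensus structures, and recompute the Phi coefficient.
null = np.empty(1000)
for i in range(1000):
    swap = rng.random(n_sub) < 0.5    # per-subject condition shuffle
    a = np.where(swap[:, None], cond_b, cond_a)
    b = np.where(swap[:, None], cond_a, cond_b)
    null[i] = phi(consensus(a), consensus(b), seed)

# One-tailed P-value: proportion of null samples at least as
# dissimilar (i.e., with a Phi coefficient no larger than observed).
p_value = np.mean(null <= obs)
```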

      Results showed multiple voxels with statistical significance (permutation tests with 1,000 iterations, one-tailed) in the areas where the community affiliations of the two contrasted conditions were different (Figure S6). For example, the frontoparietal and subcortical regions for capsaicin vs. control (cf. Figure 2), and the frontoparietal, subcortical, brainstem, and cerebellar regions for the early vs. late period of pain (cf. Figure 3) contain voxels that survived thresholding at FDR-corrected q < 0.05, supporting the robustness of our main results.

      In particular, the somatomotor and insular cortices showed statistical significance in the permutation test, which may reflect the large changes in other areas that connect to the somatomotor and insular cortices across conditions. Statistical significance was also observed in the visual cortex, which was unexpected. Our interpretation is that the spatial distribution of the visual network community is highly stable across conditions, so that the permutation-based null distribution of Phi coefficients was very narrow; therefore, even a small change in community structure could reach statistical significance.

      We have now added descriptions of the permutation tests.

      Revisions to the main manuscript:

      p. 9: Permutation tests confirmed that the community assignment in the frontoparietal and subcortical regions showed significant changes between the capsaicin versus control conditions (Figure S6A).

      p. 13: Permutation tests further confirmed that the community assignment in the frontoparietal, subcortical, and brainstem regions showed significant changes between the early versus late period of pain (Figure S6B).

      pp. 36-37: Permutation tests for regional differences in community structures. To test the statistical significance of the voxel-level differences of the consensus community structures (Figures 2 and 3), we performed the following Phi-test (Alexander-Bloch et al., 2012; Lerman-Sinkoff & Barch, 2016). First, for each given voxel, we compared the community label of the voxel to the community labels of all voxels, generating a list of voxel-seed module allegiance values that allows quantitative comparison of voxel-level community profiles (e.g., [1, 0, 1, 1, 0, 0, ...], whose elements are equal to 1 if the seed and target voxels were assigned to the same community and 0 otherwise). Next, a correlation coefficient was calculated between the module allegiance values of the two brain community structures being compared (i.e., capsaicin versus control, and early versus late). This correlation coefficient is an estimate of the regional similarity of community profiles (here, the correlation coefficient is the Phi coefficient because module allegiance is a binary variable). To estimate the statistical significance of the Phi coefficient, we performed permutation tests, in which we randomly shuffled the condition labels and then obtained the group-level consensus community structures from the shuffled data. Then, the Phi coefficient between the module allegiance values of the two shuffled consensus community structures was calculated. We repeated this procedure 1,000 times to generate the null distribution of the Phi coefficient for each voxel. Lastly, we examined the probability of observing a smaller Phi coefficient (i.e., a more dissimilar community profile) than the one from the non-shuffled original data, which corresponds to the P-value of the permutation test. All P-values were one-tailed, as the hypothesis of this permutation test is unidirectional.

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      Sfaxi et al. significantly extended current knowledge of the mechanism and function of co-transcriptional RNA cleavage (CoTC). First, the authors focused on the TP53 gene to study CoTC. They showed that TP53 transcripts are cleaved independently of PCF11 and Pol II CTD Ser2 phosphorylation in UV-treated cells, concluding that RNA cleavage at the polyadenylation site (PAS) of the TP53 gene occurs post-transcriptionally. This strongly suggests that TP53 gene transcription termination is regulated by CoTC. They also showed the biological importance of CoTC for TP53 gene expression. Depletion of the potential TP53 CoTC genomic region impaired the mRNA and protein levels of p53 and its target p21. This deregulated G1-S phase progression in the cell cycle following UV treatment. Finally, they extended these findings to other genes using a novel screening approach for CoTC.

      Minor comments:

      1. P4, Second paragraph, Another model~; Two more papers need to be cited. "Dye and Proudfoot 2001 Cell" "West et al., 2008 Mol Cell"
      2. Figure 3B; The authors should explain more about foci detected by probes A and B.
      3. P10, Last line, strong decrease~; (Figures 4E and ~) -> (Figure 4E, right panels, and~)
      4. Figure 5C; The author should show the entire image of the RNA-seq reads in the gene region, but not just in the windows described in Figure 5A. Also, TP53 and GAPDH genes need to be shown for the controls.

      Referees cross-commenting

      I totally agree with Reviewer #2.

      Significance

      Overall this paper is well described and written. In my view, this will bring important information to the transcription termination field.

      Note, I am not sure that the authors need to include the generality of the CoTC in Figures 5 and 6 since their RNA-seq analysis and its validation for biological functions are incomplete. Therefore I feel that focusing on the TP53 gene may enhance the impact of this paper.

    1. I would like to express my huge concern regarding the withdrawal of support for the SMB 1.0 network protocol in Windows 11 and future versions of the Microsoft OS, as there are many, many users who need this communication protocol, especially home users: there are hundreds of thousands of products running the embedded Linux operating system that still use the SMB 1.0 protocol, and many devices, such as media players and NAS units, have been discontinued, with their manufacturers no longer updating the firmware.
    1. With Windows 10 version 1511, support for SMBv1 and thus NetBIOS device discovery was disabled by default. Depending on the actual edition, later versions of Windows starting from version 1709 ("Fall Creators Update") do not allow the installation of the SMBv1 client anymore. This causes hosts running Samba not to be listed in the Explorer's "Network (Neighborhood)" views.


    1. Reviewer #1 (Public Review):

      The authors analyze the roles of BRC-1 and SMC-5 in C. elegans meiosis, taking advantage of specific assays to distinguish DSB repair pathways: an inter-sister assay (ICR) (Mos1-induced DSB), an inter-homolog assay (IH) (Mos1-induced DSB), an SCE assay based on EdU labelling of sister chromatids, and other assays such as radiation sensitivity. In addition, owing to the controlled timing of DSB induction, by recovering progeny at specific time points the authors evaluate the properties of cells at leptotene-mid pachytene or at late pachytene-diplotene. The authors also take advantage of SNPs in the ICR assay to measure conversion tract length.

      The main findings are:

      - Intersister crossovers are increased in brc-1 and smc-5.
      - Intersister non-crossovers are increased in smc-5.
      - Interhomolog recombination is increased in both brc-1 and smc-5 for late prophase cells.
      - Increased mutation rate in brc-1.
      - Shorter non-crossover conversion tracts (ICR assay) in brc-1.
      - TMEJ involved in DSB repair in the brc-1 smc-5 double mutant.
      - Independent localization of BRC-1 and SMC-5.

      Having assays for specific events allows gaining more direct information on the DSB repair phenotypes of these mutants. The conversion tract assay provides the most convincing and clear data, which fit well with the role of BRC-1 in end resection. However, the results of the ICR and IH assays, while interesting, do not fit with previous observations on the roles of BRC-1 and SMC-5 based on analysis of meiotic phenotypes, RAD-51 foci, and diakinesis; these discrepancies should be addressed.

      The experimental approach has some issues that should be addressed: i) the two main windows (inter-homolog and non-inter-homolog) are defined based on meiotic progression in the wild type. The timing in the mutants and upon Mos1 induction (which could also affect the timing of meiotic progression) should be determined. In particular, the increase of interhomolog events in brc-1 is left without a validated interpretation. ii) Potentially, the phenotypes observed in the ICR and IH assays (but not EdU) may be specific to Mos1-induced DSBs and may not apply to Spo11-induced breaks. iii) The use of the EdU assay could be clarified; it seems that the interpretation of configurations is challenging, thus potentially leading to selection bias among diakinesis.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Manuscript number: RC-2022-01528

      Corresponding author(s): Elena Taverna and Tanja Vogel

      1. General Statements [optional]

      We thank the reviewers for their comments and the points they raised. We think that what we have been asked is doable, and we are confident we will manage to address all points in a satisfactory manner.

      2. Description of the planned revisions

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      Reviewer’s comment: The manuscript investigated the role of DOT1L during neurogenesis, especially focusing on the earlier commitment from APs. Using a tissue culture method with single-cell tracing, they found that the inhibition of DOT1L results in delamination of APs and promotes neuronal differentiation. Furthermore, using single-cell RNA-seq, they sought possible mechanisms and changes in cellular state, and found a new, transient cellular state. Among differentially expressed genes, they focused on microcephaly-related genes and found possible links between epigenetic changes caused by DOT1L inhibition and epigenetic inhibition by PRC2. Based on these findings, they suggested that DOT1L could regulate neural fate commitment through epigenetic regulation. Overall, it is well written, and the possible links from epigenetic to metabolic regulation are interesting. However, there are several issues across the manuscript.

      Response to Reviewer and planned revision:

      We thank reviewer 1 for her/his comments and constructive criticism.

      We hope the revision plan will address the points raised by the reviewer in a satisfactory manner.

      Major issues:

      Reviewer’s comment: 1) It is not clear whether the degree of H3K79 methylation (or of other histone modifications) changes during development, and whether DOT1L is responsible for those changes. It is necessary to show the changes in histone modifications as well as the levels of DOT1L from APs to BPs and neurons, and to what extent the treatment with EPZ changes the degree of histone methylation.

      Response to Reviewer and planned revision:

      • As for the level of DOT1L protein: we tried several commercially available antibodies, but none of them works in the mouse, even after multiple attempts and optimization. Unfortunately, we will therefore not be able to provide this piece of information.

      • As for the level of DOT1L mRNA: we will provide information on the DOT1L mRNA level in APs, BPs, and neurons by using scRNA-seq data from E12, E14, and E16 WT cerebral cortex.

      • As for the levels of H3K79 methylation: we did not intend to claim that histone methylation is responsible for the reported fate transition. We will edit the text to avoid any possible confusion. If it is deemed necessary to address the point raised by the reviewer, we have three options, which we list here in order of priority and ease of execution on our side:

      • immunofluorescence with an Ab against H3K79me2 using CON and EPZ-treated hemispheres.

      • FACS sort APs, BPs and neurons from CON and EPZ-treated hemispheres, followed by immunoblot for H3K79me2 to assess the H3K79me2 levels. As for the FACS sorting, we will use a combinatorial sorting in the lab on either a TUBB3-GFP or a GFP-reporter line using EOMES-driven mouse lines. This strategy has already been employed in the lab by Florio et al., 2015 and we will use it with minor modifications.
      • scCut&Tag for H3K79me2 from CON and EPZ-treated hemispheres. This option entails a collaboration with the Gonzalo Castelo-Branco lab in Sweden and might therefore require additional time to be established and carried out.

      Reviewer’s comment:

      Furthermore, the study mainly used pharmacological bath application. DOT1L has anti-mitotic effect, thus it is not clear whether the effect is coming from the inhibition of transmethylation activity.

      Response to Reviewer and planned revision:

      In a previous work, we used a genetic model (the DOT1L KO mouse), which showed microcephaly (Franz et al. 2019). For this study, we wanted to fill a gap in knowledge by understanding whether the DOT1L effect is mediated by its enzymatic activity. For this reason, we chose to use pharmacological inhibition with EPZ, whose effect on DOT1L activity has been extensively reported and documented in the literature (EPZ is a drug currently in phase 3 clinical studies).

      The stringent focus of this study on pharmacological inhibition is thus a step toward understanding the specific roles DOT1L can play, both as a scaffold and as an enzyme.

      Here, we concentrate on the enzymatic function; the scaffolding function is beyond the scope of this specific study. We can further discuss and elaborate on the rationale behind this in the revised manuscript.

      Reviewer’s comment:

      In addition, the study assumed that the effect of EPZ is cell-autonomous. However, if EPZ treatment can change the metabolic state of a cell, it would be possible that the observed effects were non-cell-autonomous. It would be important to address whether this effect arises in a cell-autonomous manner by other means, using focal shRNA-KD by IUE.

      Response to Reviewer and planned revision:

      We did not claim that the effect of EPZ is cell-autonomous; we are actually open on this point, as we consider both explanations potentially valid. We will edit the text to avoid any possible confusion about what we assume and what we do not.

      As a general consideration, it is entirely possible that the effects are non-cell autonomous. We will comment and elaborate on that in the revised manuscript.

      If the reviewer/journal considers this a point that must be addressed experimentally, then we will proceed as follows:

      • DOT1L shRNA-KD via in utero electroporation, followed by either
      • in situ hybridization for ASNS to check if ASNS transcript is increased upon DOT1L shRNA-KD compared to CON
      • FACS sorting of the positive electroporated cells (CON and DOT1L shRNA-KD), followed by qPCR to assess the levels of ASNS
      • If the reviewer wants us to check for a more downstream effect on fate, then we will immuno-stain the DOT1L shRNA-KD and CON samples with TUBB3 and/or TBR1 antibodies (as already done in the present version of the manuscript).

      Reviewer’s comment: 2) The possible changes in cell division and differentiation were found by a very nice single-cell tracing system. However, changes in division modes occurring in targeted APs, such as angles of mitotic division and the expression of mitotic markers, were not addressed. This information is critical to understand the mechanisms underlying the observed phenotypes: delamination, differentiation, and fate commitment.

      Response to Reviewer and planned revision:

      Previous effects of DOT1L manipulation on the mitotic spindle were observed in a previous paper, using DOT1L KO mouse (Franz et al. 2019). Considering that in our experiments we do use a pharmacological inhibition, we will address this point by quantifying the spindle angle in CON and EPZ-treated cortical hemispheres.

      We will co-stain for DAPI to visualize the DNA/chromosomes, and for phalloidin (filamentous actin counterstain) that allows for a precise visualization of the apical surface and of the cell contour, as it stains the cell cortex.

      Of note, the protocols we are referring to are already established in the lab, based on published work from the Huttner lab (Taverna et al, 2012; Kosodo et al, 2005).

      Reviewer’s comment: 3) The scRNA-seq analysis indicated interesting results but did not fully explain the observed histological results. In fact, in the single-cell RNA-seq, the authors claimed that cells in the TTS are increased after EPZ treatment, and these are more similar to APs. However, in the histological data, they found that EPZ treatment increased neuronal differentiation. These data conflict; thus, I wonder whether the "neurons" in the histology data are actually neurons. Using several other markers simultaneously, it would be important to check the cellular state in histology upon inhibition/KD of DOT1L.

      Response to Reviewer and planned revision:

      The reviewer’s comment is valid, and we indeed found that TTS cells are an intermediate state between APs and neurons in terms of transcriptional profile. This is the reason why we called this cell cluster the transient transcriptional state (TTS).

      We plan to address this point by staining for TBR1 and/or CTIP2 in CON and EPZ-treated hemispheres and to expand with this EOMES and SOX2 co-staining.

      Minor issues:

      Reviewer’s comment: Figure 1 - It is not clear whether the delaminated cells are APs, BPs, or some transient cells (Sox2+ Tubb3+??). It is important to use several cell-type-specific and cell cycle markers simultaneously to characterize the cell-type-specific identity of the analysed cells by staining. This applies to Fig 1B, D, E, F, G, as well as Figs 2 and 3.

      Response to Reviewer and planned revision:

      We will address this point by using a combinatorial staining scheme for several fate markers such as TUBB3, EOMES and SOX2, as suggested by the reviewer.

      Reviewer’s comment: - Please provide higher magnification images of labelled cells (Fig 1H)

      Response to Reviewer and planned revision:

      In the revised manuscript, we will provide higher magnification for the staining.

      Reviewer’s comment: - Please provide clarification on the criteria of Tis21-GFP+ signal thresholding.

      Response to Reviewer and planned revision:

      In the revised manuscript, we will provide a clarification on the criteria of Tis21-GFP+ signal thresholding.

      Reviewer’s comment: - Splitting the GFP signal between ventricular and abventricular does not convincingly support the "more basal and/or differentiated" states after EPZ treatment.

      Response to Reviewer and planned revision:

      We will provide a clarification regarding this point.

      Reviewer’s comment: - Please explain the presence of Tis21-GFP+ cells at the apical VZ.

      Response to Reviewer and planned revision:

      The presence of Tis21-GFP+ cells at the apical VZ has been extensively reported in the literature since the first paper by Haubensak et al. on the generation of the Tis21-GFP line. In a nutshell, Tis21-GFP+ cells are present throughout the VZ (therefore also in its apical portion), as neurogenic, Tis21-GFP-positive cells undergo mitosis at the apical surface. Indeed, the presence of Tis21-GFP signal has been extensively used by the Huttner lab and collaborators to score apical neurogenic mitoses. In addition, since APs undergo interkinetic nuclear migration, it follows that Tis21-GFP+ nuclei are present throughout the entire VZ.

      In the revised manuscript, we will explain this point and cite additional literature.

      Reviewer’s comment: - Order the legends in same order as the bars.

      Response to Reviewer and planned revision:

      We will follow reviewers’ recommendation and order the legends accordingly.

      Reviewer’s comment: Figure 2 -Fig 2B) The difference between CON and EPZ apical contacts is not clear and does not match with the graph in Fig 2E.

      Response to Reviewer and planned revision:

      We will explain Fig. 2B in more detail and provide additional images in the revised manuscript.

      Reviewer’s comment: -Supp Fig 2 - are these injected slices cultured in control conditions? Please include this in the text and figure/figure legend

      Response to Reviewer and planned revision:

      In the revised manuscript, the text will be changed to address this point and provide clearer info.

      Reviewer’s comment: Fig 2C) The EPZ-treated DxA555+ cells exhibit a morphological change of cell shape. Is this a phenotype? Please comment on the image shown for the EPZ treatment panel.

      Response to Reviewer and planned revision:

      We thank the reviewer for having raised this point.

      The change in morphology might be a consequence of delamination and/or of cell fate change. In the revised manuscript, we will certainly comment further on this very relevant point and expand the discussion accordingly.

      Reviewer’s comment: Fig 2F - 2G) Data presented on EOMES+ and TUBB3+ % are counterintuitive. The authors claimed that TUBB3+ cells are increased and neuronal differentiation is promoted. However, no changes in EOMES+ are observed. What is the explanation? Did the author check the double positive cells? These could be TSS cells?

      Response to Reviewer and planned revision:

      We thank the reviewer to have raised this point.

      As envisioned by the reviewer, we suspect that the counterintuitive data might be due to TTS cells, which, based on our scRNA-seq data, express several cell-type-specific markers at the same time. It is possible that, since the EPZ treatment is 24 h long, cells (like those of the TTS cluster) have no time to completely eliminate the EOMES protein. If that were the case, then we would expect to still detect (as we indeed do) EOMES immunoreactivity.

      To address this point, we will:

      • analyze scRNA-seq data and check which is the extent of co-expression of Eomes and Tubb3 mRNAs in the TTS population.
      • Check for EOMES and TUBB3 double-positive cells in the microinjection experiment.

      Reviewer’s comment: Figures 2 and 3) The number of pairs analyzed for EPZ is twice that of Con for comparison of the parameters taken into account. Please include the n of each graph in the figure legend of the specific panel if it is not the same for all panels in that figure (i.e., for Figure 3).

      Response to Reviewer and planned revision:

      We will revise the text accordingly.

      Reviewer’s comment:

      Figure 3) The data indicated that the number of daughter cell pairs in EPZ samples is almost double that of Control. Is this a phenotype? Were more daughter cells observed in EPZ-treated samples from the same number of injections, or was the number of injected cells different?

      Response to Reviewer and planned revision:

      For technical reasons, we indeed performed a higher number of injections in EPZ-treated slices. We think this is the main reason behind the difference in numbers.

      If the reason were biological, one would expect to see the same trend in the IUE experiments, but this is actually not the case. This suggests that the reason behind the difference is mainly technical.

      Reviewer’s comment: Figure 4)

      • Please clarify whether the single-cell transcriptomic analysis was performed only once and, if so, how statistical testing to compare the cell proportions was carried out with only one batch (Fig 4G).

      Response to Reviewer and planned revision:

      As for the scRNAseq on microinjected cells:

      The scRNA-seq analysis was done once, using cells pooled from 3 different microinjection experiments performed on 3 different days.

      As for the scRNAseq on IUE cells:

      The scRNA-seq analysis was done once, using cells pooled from 2-3 different IUE experiments performed on 3 different days.

      For all scRNA-seq experiments, statistical testing is achieved by intrasample comparisons according to established bioinformatics pipelines. We will explain this point better in the revised manuscript.

      Reviewer’s comment: Figures 4 and 5) - The figures do not support the statement regarding APs' neurogenic potential upon DOT1L inhibition. The TTS transcriptomic profile resembles progenitors more than neurons. Please comment on the TTS neurogenic capacity, taking into account the provided GO and RNA-seq analyses.

      Response to Reviewer and planned revision:

      We thank Reviewer 1 for raising this point. It is indeed true that TTS cells resemble APs more than neurons (as indicated in Figs. S5B, C). We took this to indicate that these cells are transient and therefore still maintain some AP features. Interestingly, TTS cells downregulate cell division markers, suggesting a restriction of proliferative potential, as one would expect for cells with increased neurogenic potential. We will discuss this point in the revised manuscript.

      Reviewer’s comment: - Please provide GO analysis for APs and BPs.

      Response to Reviewer and planned revision:

      Following the reviewer’s suggestion, we will incorporate a more careful and in-depth analysis in the revised version of the manuscript.

      Reviewer’s comment: - Reconstruct figure 5A by listing genes in the same order in both Con and EPZ and prioritize EPZ-Con differences instead of cell-cell differences.

      Response to Reviewer and planned revision:

      We will revise Figure 5A based on the reviewer’s comment.

      Reviewer’s comment:

      Moreover, the genes presented in the heatmap are not the same in the two conditions (i.e., NEUROG1 is present in EPZ but absent in Con). Please justify.

      Response to Reviewer and planned revision:

      This observation reflects the different activities of transcription factor networks in the control and EPZ conditions. They are not supposed to be the same, as the cell states are altered and different TFs are expressed and active upon treatment in the various cell types. In the revised manuscript, we will justify this point.

      Reviewer’s comment: Fig 5D)

      • Please explain why binding of EZH2 on the promoter of Asns is strongly reduced in comparison to a mild significant reduction of H3K79me/H3K27me3 in EPZ compared to Control.

      Response to Reviewer and planned revision:

      Several explanations are possible.

      First, the variation can be due to batch effects.

      Second, the acute reduction of EZH2 might not be directly accompanied by a reduction of the histone mark, which is removed either by dilution through cell division or by demethylases. Both processes of removing the mark might be slower than the loss of EZH2 from the respective site.

      Based on the reviewer’s comment, we will explain this point in the revised manuscript.


      Reviewer’s comment:

      Also, is the change directly mediated by DOT1L?

      Please test whether DOT1L can bind the promoter of Asns.

      Response to Reviewer and planned revision:

      To address this relevant issue we will proceed with the following protocol:

      • electroporate a tagged version of DOT1L into ESCs
      • select ESCs and differentiate them into NPC_48h.
      • treat NPC with DMSO (Con) or EPZ
      • harvest CON and EPZ-treated NPC
      • perform ChIP-qPCR for DOT1L at the Asns promoter

      Reviewer’s comment: Please provide the expression patterns of DOT1L and Asns during neuronal differentiation.

      Response to Reviewer and planned revision:

      As for Dot1l:

      Dot1l expression was shown by ISH from E12.5 to E18.5 in Franz et al. 2019.

      As for Asns:

      We will provide E14.5 in situ staining of Asns in the developing mouse brain using the GenePaint database (see Figure below).

      We will also show immunostainings for ASNS at mid-neurogenesis, provided that an antibody against ASNS works in mouse tissue.

      Other General comments:

      Reviewer’s comment: Please Indicate VZ, SVZ and CP on the side of the pictures/ with dot lines in the pictures both for primary figures and supplementary.

      Response to Reviewer and planned revision:

      We will revise the figures accordingly.

      Reviewer’s comment: - The Results and figures sometimes do not support the statement made by the authors

      Response to Reviewer and planned revision:

      We will carefully check on this and eliminate any overinterpretation or non-supported statements from the text.

      Reviewer’s comment: - Schemes are not informative/explanatory enough, i.e. time windows of treatment and sample collection, culture conditions details.

      Response to Reviewer and planned revision:

      We will revise the schemes to include more details. In particular, we plan to add a supplementary figure with a detailed visual description of the protocol, to match the detailed description presented in the materials and methods.

      Reviewer’s comment: - A more extensive characterization of TTS cells in terms of differentiation progression and integration would be enlightening

      Response to Reviewer and planned revision:

      In general, we face two main challenges when studying the TTS population: one is the lack of a specific marker gene for TTS, the other is the relatively small size of the TTS subpopulation.

      For these reasons, our ability to carry out an in-depth analysis of this cell state is limited.

      Considering the reviewer’s comment, in the revised manuscript we will expand the analysis and characterization of the differentiation potential of TTS using RNA velocity trajectory analysis.

      We can also expand the discussion on this point.

      Reviewer’s comment: - Picture quality can be improved, provide high magnification images.

      Response to Reviewer and planned revision:

      We will revise the figures to include higher magnification images.

      Reviewer #1 (Significance (Required)):

      Reviewer’s comment: The study could be important for the specific field of neural development. It aims to understand mutations in the respective genes and brain malformations. If the link between epigenetic and metabolic changes is clearly shown, it will be interesting. However, the current manuscript is still rather descriptive, and clear mechanistic insights were not provided. The study has potential, and additional data will strengthen its value.

      Response to Reviewer and planned revision:

      We will address the direct impact of DOT1L and H3K79me2 on the Asns gene locus during the revision (see the rationale of the experimental strategy also in the revision plan above). We hope we will thus provide a mechanistic link between epigenetics and altered metabolome.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      Reviewer’s comment: Appiah et al. present a concise manuscript that provides details and possible mechanisms of their previous work (Franz et al., 2019; Ferrari et al., 2020). The study uses diverse lines of investigation to arrive at most conclusions. However, as interesting as the data is, we find that at the present state, it is not sufficient to prove that, indeed, the asparagine metabolism is regulated by DOTL1/PRC2 crosstalk. The neurogenic shift presented in the first part of the paper is not comprehensive and, therefore, not very convincing. The quality of images provided in the main and supplementary data is less than ideal. Additional data analysis and interpretation of the scRNA seq data may be needed. The authors finally conclude with rescue experiments done in culture and in-vivo, which we believe is the stand-out part of this study. Overall the manuscript has some interesting observations that are often over-interpreted with less supporting data. The manuscript reads well but requires additional data and changes in the claims/interpretation to be suited for publication.

      Response to Reviewer and planned revision:

      In the revised manuscript, we hope we will address the comments and concerns raised by the reviewer in a satisfactory manner.

      Comments

      Reviewer’s comment: 1) Abstract: Is this statement correct: "DOT1L inhibition led to increased neurogenesis driven by a shift from asymmetric self-renewing to symmetric neurogenic divisions of APs."? APs undergo symmetric division for self-renewal and asymmetric neurogenic divisions.

      Response to Reviewer and planned revision:

      Based on the current literature (cf. Huttner and Kriegstein), APs undergo:

      • symmetric proliferative division at early stages of neurogenesis
      • asymmetric self-renewing division at mid-neurogenesis, generating an AP and a BP. This division is also described as neurogenic, as it produces a BP, which is a step further than an AP in terms of neurogenic potential.
      • symmetric consumptive division at late neurogenesis

      To avoid any possible confusion, we will re-phrase the sentence to include the adjective “consumptive” and specify the composition of the progeny.

      In the revised manuscript, the sentence will read as follows:

      "DOT1L inhibition led to increased neurogenesis driven by a shift of APs from asymmetric self-renewing (generating one AP and one BP) to symmetric consumptive divisions (generating two neurons)"

      Reviewer’s comment: All the data is based on treatments with EPZ (DOTL1 inhibitor), yet no information is shown to support its targeted activity in this system. A proof of principle in the chosen experimental system is missing; for instance, examining the activity or protein level of DOTL1 and decreased methylation of the target(s) is essential.

      Response to Reviewer and planned revision:

      EPZ is a well-characterized drug that has been used previously in our lab and by others.

      As for our lab, information regarding the inhibitor and its activity and efficiency in inhibiting DOT1L-mediated H3K79me2 was shown in Franz et al., Supplementary Fig. S6D, E.

      In the present manuscript, an additional confirmation that EPZ targets DOT1L with regard to its H3K79me2 activity is shown in Fig. 5D.

      We will refer to this information more explicitly in the revised manuscript.

      Reviewer’s comment: 2) Figure 1: The scoring of centrosomes and cilia is insufficient to conclude delamination and increase in basal fates. The effect could be on ciliogenesis or centrosome tethering to the apical end-feet of the AP, and other possible explanations for this observation also exist. The images are too small; larger images or graphic representations could be helpful in addition to the data.

      Response to Reviewer and planned revision:

      We did not intend to claim that the change in centrosome location demonstrates delamination, only that it suggests delamination. This criterion has been used extensively as a proxy for delamination by several labs working on the cell biology of neurogenesis, such as the Huttner and Götz labs. If the issue persists, we can re-phrase the text referring to Figure 1 more cautiously, to highlight that the data only suggest delamination.

      Reviewer’s comment:

      To make a statement regarding delamination, I would like to see either the dynamics of delamination (organotypic slices images), staining with BP markers, or morphological changes of AP (staining that will reveal loss of adherence) or comparable data to support the observation. In my opinion Supp. Figure 1 is insufficient; the single image is not convincing; I would like to see 3D reconstruction and better-quality images.

      Response to Reviewer and planned revision:

      We can certainly provide better images and co-stain with relevant markers.

      We think it is beyond the scope of the manuscript embarking in live imaging as we are not studying the dynamics of delamination per se.

      Reviewer’s comment: Tis21 data (1H), again of low quality, is only a single piece of evidence and the conclusion "suggesting that the acquisition of a basal fate was paralleled by a switch to neurogenesis" is premature. I think other cell cycle exit reporters, Fucci markers, pHis, BrdU, NeuroD, or Tbr2 reporters (Li et al., 2020, (Haydar and Sestan labs)) to name a few, are necessary to establish the conclusions. The authors should show other markers such as PAX6, EOMES, or other upper-layer markers upon cell cycle exit in the SVZ/CP. These additional experiments will assist in cell fate analysis.

      Response to Reviewer and planned revision:

      We completely understand the points raised by the reviewer, and we plan to address them by co-staining with PAX6/SOX2, PH3 and/or EOMES.

      We think establishing the Fucci or EOMES mouse systems is beyond the scope of the manuscript. In addition, given the present circumstances of all labs involved, it would be logistically unattainable (see also comments in the section below).

      We think the co-staining scheme and plan will be informative enough to satisfactorily address the concerns raised by the reviewer.

      Reviewer’s comment: 2) Figure 2: The microinjection experiments are elegant; the images, however, do not complement the experiment. The images of the microinjected cells seem not to be reconstructed from z-stacked optical slices, so often, processes are not continuous (panel B, for example); therefore, it is not clear if an apical process is indeed missing or just not seen.

      Response to Reviewer and planned revision:

      The mentioned images are reconstructed from continuous Z-stacks, as we always do given the type of data. We can provide better reconstructions and/or additional images.

      Reviewer’s comment:

      The data analysis should include other parameters; BrdU staining could have given information on cell cycle exit, PAX6, SOX2, and EOMES on the location of the cells in the VZ/sVZ. The quality of images showing EOMES and TUBB3 staining is so low that it makes the reader doubt the validity of the quantifications. "Taken together, these data suggest that the inhibition of DOT1L might favor the acquisition of a neuronal over BP cell fate" This interpretation should be subjected to more investigations. It is possible that this treatment just accelerates the AP-> BP -> Neuronal fate. The author's claim needs to be backed by additional experiments or be changed.

      Response to Reviewer and planned revision:

      To address this point, we will include in the revised manuscript staining and co-staining with PAX6, SOX2 (see also response above) and provide a BrdU labeling experiment.

      Reviewer’s comment: 3) Figure 3: The experiment concept and its performance are impressive, yet the data is insufficient. The images in A that are supposed to be representative show two cells; their location is not clear, and the expression of GFP is not clear; in fact, both pairs seem to be GFP negative (not clear what is the threshold for background). Staining with anti-GFP and a second method to follow neurogenesis is necessary.

      Response to Reviewer and planned revision:

      We did use different staining methods and schemes to follow neurogenesis. As specified above, we will deepen our analysis by using additional markers, such as TBR1.

      Reviewer’s comment: 4) On page 9, lines 8-10, the authors claim that their number of cells was "sufficient" for single-cell analysis; the numbers are

      Response to Reviewer and planned revision:

      In the revised manuscript, we will include an analysis of how many cells are needed to identify clusters of the 6 cell types in this paradigm, based, for example, on the algorithms developed in Treppner et al. 2021.

      Reviewer’s comment: 5) The authors use Seurat and RaceID without their appropriate citations in the first mention during the results. The authors also stop immediately after DEG analysis along with clustering. The authors could analyze their RNA-seq data with a trajectory; to say the least, the identification/characterization of TTS and neurons as Neurons I, II, and III are insufficient. There could be multiple ways to show the "fate" of cells in the isolated FACS, which the authors have missed.

      Response to Reviewer and planned revision:

      We will include the respective citations in a revised manuscript. We already provide differentiation trajectories, but will include other methods, such as scVelo or FateID, to extend the trajectory analyses. We kindly ask the reviewer to also refer to the comments above regarding the TTS cluster characterization as part of our effort to provide a better picture of the different clusters.

      Reviewer’s comment: 6) The authors detected candidates like Fgfr3, Nr2f1, Ofd1, and Mme as part of their treated (different approaches) datasets (from their DEG analysis). They correctly cite Huang et al., 2020 but fail to give us a sense of the consequences of these gene dysregulations. The authors can also validate if these proteins are expressed in their treated cells.

      Response to Reviewer and planned revision:

      In the revised manuscript we will comment on the function of the four genes mentioned.

      In addition, we will validate the expression of these genes at the protein and transcript level through immunostainings (provided that antibodies work in our system) or smFISH, respectively.

      Reviewer’s comment: 7) The authors list a few GO terms (page 10, lines 1-10) and associate them with reduced proliferation; they must cite relevant studies. The authors can also add supplementary data showing which genes in their data correspond to these GO terms.

      Response to Reviewer and planned revision:

      We thank the reviewer for pointing out the missing citations.

      We of course agree on the need to add them, and we will do so in the revised manuscript.

      Reviewer’s comment: 8) On Page 11, lines 3-7, the authors describe their method to arrive at the 17 targets with TF activity from the previous analysis. Can the authors describe the method used to correlate the two? The reviewer understands this could be MEME analysis or analysis of earlier datasets of Ferrari et al. 2020. But it must be explicitly stated, and a few examples in supplementary need to be exemplified as this analysis is key to discovering the three metabolic genes.

      Response to Reviewer and planned revision:

      In the revised manuscript, we will clarify the exact analysis that resulted in the identification of the 17 target genes, using a specific tool for gene network analysis that is based on our scRNA-seq data alone, not on the Ferrari et al. 2020 data set.

      3. Description of the revisions that have already been incorporated in the transferred manuscript

      n/a

      4. Description of analyses that authors prefer not to carry out

      Reviewer’s comment: Tis21 data (1H), again of low quality, is only a single piece of evidence and the conclusion "suggesting that the acquisition of a basal fate was paralleled by a switch to neurogenesis" is premature. I think other cell cycle exit reporters, Fucci markers, pHis, BrdU, NeuroD, or Tbr2 reporters (Li et al., 2020, (Haydar and Sestan labs)) to name a few, are necessary to establish the conclusions. The authors should show other markers such as PAX6, EOMES, or other upper-layer markers upon cell cycle exit in the SVZ/CP. These additional experiments will assist in cell fate analysis.

      Response to Reviewer and planned revision:

      As pointed out above, we think establishing the Fucci or EOMES mouse systems is beyond the scope of the manuscript, as they would not provide more information than we will obtain from systematic and extensive co-staining experiments. In addition, all labs involved are facing logistic issues (animal house not ready yet, construction works, etc.) that make importing and setting up the colony unattainable for the next 6-10 months. If the reviewer and/or the editorial board consider this a major point compromising the entire revision, we kindly ask to be contacted again so that we can discuss the issue and arrive at a shared conclusion.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      The manuscript investigated the role of DOT1L during neurogenesis, especially focusing on the earlier commitment from APs. Using a tissue culture method with single-cell tracing, the authors found that inhibition of DOT1L results in delamination of APs and promotes neuronal differentiation. Furthermore, using single-cell RNA-seq, they sought possible mechanisms and changes in cellular state, and found a new transient cellular state. Among differentially expressed genes, they focused on microcephaly-related genes and found possible links between epigenetic changes caused by DOT1L inhibition and epigenetic inhibition by PRC2. Based on these findings, they suggest that DOT1L could regulate neural fate commitment through epigenetic regulation. Overall, the manuscript is well written and the possible links from epigenetic to metabolic regulation are interesting. However, there are several issues across the manuscript.

      Major issues:

      1. It is not clear whether the degree of H3K79 methylation (or of other histone marks) changes during development, and whether DOT1L is responsible for those changes. It is necessary to show the changes in histone modifications as well as the levels of DOT1L from APs to BPs and neurons, and to what extent EPZ treatment changes the degree of histone methylation. Furthermore, the study mainly used pharmacological bath application. DOT1L has an anti-mitotic effect, thus it is not clear whether the effect comes from inhibition of its transmethylation activity. In addition, the study assumed that the effect of EPZ is cell autonomous. However, if EPZ treatment can change the metabolic state in a cell, it is possible that the observed effects were non-cell autonomous. It would be important to address whether this effect arises in a cell-autonomous manner by other means, such as focal shRNA-KD by IUE.
      2. The possible changes in cell division and differentiation were found with a very nice single-cell tracing system. However, changes in division modes occurring in targeted APs, such as the angles of mitotic division and the expression of mitotic markers, were not addressed. This information is critical to understand the mechanisms underlying the observed phenotypes: delamination, differentiation and fate commitment.
      3. The scRNA-seq analysis indicated interesting results, but does not fully explain the results observed in histology. In fact, in the single-cell RNA-seq the authors claim that cells in TTS, which are more similar to APs, are increased after EPZ treatment. However, in the histological data, they found that EPZ treatment increased neuronal differentiation. These data conflict, thus I wonder whether the "neurons" in the histology data are actually neurons. Using several other markers simultaneously, it would be important to check the cellular state in histology upon inhibition/KD of DOT1L.

      Minor issues:

      Figure 1

      • It is not clear whether delaminated cells are APs, BPs or some transient cells (Sox2+ Tubb3+??). It is important to use several cell type-specific and cell cycle markers simultaneously to characterize the cell-type specific identity of the analysed cells by staining. This applies to Fig. 1B, D, E, F, G, as well as Fig. 2 and 3.

      • Please provide higher magnification images of labelled cells (Fig 1H)

      • Please provide clarification on the criteria of Tis21-GFP+ signal thresholding.
      • Splitting the GFP signal between ventricular and abventricular does not convincingly support the "more basal and/or differentiated" states after EPZ treatment.
      • Please explain the presence of Tis21-GFP+ cells at the apical VZ.
      • Order the legends in the same order as the bars.

      Figure 2

      • Fig 2B) The difference between CON and EPZ apical contacts is not clear and does not match the graph in Fig 2E.

      • Supp Fig 2 - are these injected slices cultured in control conditions? Please include this in the text and figure/figure legend

      Fig 2C) The EPZ-treated DxA555+ cells exhibit a morphological change of cell shape. Is this a phenotype? Please comment on the image shown for the EPZ treatment panel.

      Fig 2F - 2G) The data presented on EOMES+ and TUBB3+ % are counterintuitive. The authors claimed that TUBB3+ cells are increased and neuronal differentiation is promoted. However, no changes in EOMES+ are observed. What is the explanation? Did the authors check for double positive cells? Could these be TSS cells?

      Figure 2 and Figure 3) The number of pairs analyzed for EPZ is twice that of Con for comparison of the parameters taken into account. Please include the n of each graph in the figure legend of the specific panel if it is not the same for all panels in that figure (i.e. for figure 3).

      Figure 3)

      • The data indicated that the number of daughter cell pairs in EPZ samples is almost double that of Control. Is this the phenotype? Were more daughter cells observed in EPZ-treated samples from the same number of injections, or was the number of injected cells different?

      Figure 4)

      • Please clarify if the single cell transcriptomic analysis has been performed only once, and if yes, how statistical testing to compare the cell proportions is carried out with only one batch.

      Fig 4G)

      Figure 4 and 5)

      • Figures are not supportive of the statement regarding APs' neurogenic potential upon DOT1L inhibition. TSS transcriptomic profile resembles more progenitors than neurons. Please comment on TSS neurogenic capacity taking into account the provided GO and RNAseq.
      • Please provide GO analysis for APs and BPs.

      Figure 5)

      • Reconstruct figure 5A by listing genes in the same order in both Con and EPZ, and prioritize EPZ-Con differences instead of cell-cell differences. Moreover, the genes presented in the heatmap are not the same in the two conditions (i.e. NEUROG1 is present in EPZ but absent in Con). Please justify.

      Fig 5D)

      • Please explain why binding of EZH2 on the promoter of Asns is strongly reduced in comparison to a mild significant reduction of H3K79me/H3K27me3 in EPZ compared to Control. Also, is the change directly mediated by DOT1L? Please test whether DOT1L can bind the promoter of Asns.

      Please provide the expression patterns of DOT1L and Asns during neuronal differentiation.

      Other General comments:

      • Please indicate VZ, SVZ and CP on the side of the pictures / with dotted lines in the pictures, both for primary figures and supplementary.
      • The Results and figures sometimes do not support the statements made by the authors.
      • Schemes are not informative/explanatory enough, i.e. time windows of treatment and sample collection, culture condition details.
      • A more extensive characterization of TTS cells in terms of differentiation progression and integration would be enlightening.
      • Picture quality can be improved; provide high magnification images.

      Significance

      The study could be important for the specific field of neural development. It aims to understand mutations in the respective genes and brain malformations. If the link between epigenetic and metabolic changes is clearly shown, it will be interesting. However, the current manuscript is still rather descriptive, and clear mechanistic insights were not provided. The study has potential, and additional data will strengthen its value.

    1. Author Response

      Reviewer #1 (Public Review):

      This manuscript reports a new analytical method (rhapsodi) to impute genotypes on human gamete data. The authors characterize the specificity and sensitivity of the approach and benchmark it against the current tool to analyze gamete data. rhapsodi is more efficient and versatile than the current approach, and thus represents an important technical feat. The last analysis of the manuscript is a reanalysis of the SpermSeq dataset, a massive sequencing effort to characterize recombination in human sperm haplotype data. rhapsodi fails to find any deviations from random segregation and challenges the notion that there are distorters in the human genome. In general, the manuscript represents an important technical piece, but the results could be better contextualized to give a perspective on what the implications of the findings are for our understanding of human recombination and segregation distortion.

      Thank you for appreciating the technical importance of our work for improving the analysis of transmission distortion (TD) based on low-coverage single-cell sequencing data from gametes. We agree that the results (in regard to the method performance, statistical power, and implications for human TD) should be better contextualized, which we address in a point-by-point manner below.

      Reviewer #2 (Public Review):

      This paper describes a new and powerful method of inferring gametic haplotypes using low-coverage sperm sequencing data, rhapsodi. It is a highly useful tool, and the authors demonstrate its robustness using simulations and comparisons to the current gold standard, Hapi. The authors also use the results of rhapsodi on a sample of low-coverage human sperm sequencing data to assess the evidence for moderate transmission distortion (TD), a pattern that previous studies using pedigrees have sought to identify without replicable success. The work's main strength lies in the method the authors have developed and their clear and thorough description and validation of its use. The rhapsodi method clearly performs substantially better than Hapi in several relevant use cases, and in some instances it is usable when Hapi would fail to run or require unreasonable resources. This study, then, provides a highly useful tool to researchers wishing to phase donor haplotypes, infer gamete genotypes, and estimate rough locations of recombination breakpoints using Sperm-seq data.

      Thank you for engaging with our method and for noting its use cases and performance.

      A major limitation is the lack of consideration of strong TD. Under this scenario, there may be "allelic dropout" in the low-coverage Sperm-seq data; without information on the parental genotype from somatic cells, over-transmission of one allele would appear to be absence of the alternate allele (i.e., the donor would be erroneously inferred to be homozygous). Some known examples of TD in other species are extremely strong; e.g., the SD locus in Drosophila can cause distortion as strong as k=0.99. Such cases seem highly likely to be missed using Sperm-seq + rhapsodi, and a lack of power to detect them would influence both ability to observe individual cases of TD as well as the authors' test for a global signal of biased transmission. Since the provided simulations only include scenarios up to 70% transmission of one allele, the paper does not address this potential limitation.

      The authors claim that their work conclusively excludes the presence of ongoing TD in their sample of human males, which, if they are from the same populations as former studies, may provide additional evidence against ongoing TD in these human populations. However, whereas earlier studies were only highly powered for extremely strong TD, the current method appears to be highest powered for intermediate levels of TD, strong enough to generate differences from binomial expectations, but not so strong that one allele might be missing in the low coverage pool of sperm serving as input to rhapsodi. This claim, then, may be better framed as a lack of evidence for TD of intermediate strength in current samples, rather than the strict adherence to Mendelian transmission indicated in the title.

      This is an interesting and important point, and we agree that extreme TD would produce apparent tracts of homozygosity across the sample of sperm genomes. Without external knowledge of heterozygous sites in the donor genome, such SNPs would be unobserved within the sperm sequencing data. To address this possibility, we performed additional simulations of very strong TD (transmission rate, k = 0.99; Figure 4-figure supplement 3; lines 416-434; lines 1062-1083). These simulations demonstrate that despite the homozygosity of the causal SNP, recombination in flanking regions recovers heterozygosity but still manifests extreme and detectable TD. Specifically, across 2,200 simulations (100 independent simulations x 22 chromosomes; k = 0.99) with parameters matching a typical Sperm-seq donor, we identified the TD signature in all 2,200 cases (Power = 1) despite homozygosity (and thus filtering) of the causal SNP in 89% of cases (1958 / 2200). This high power also holds for donor samples with higher and lower coverages (Power = 1 in both cases).

      In summary, even though it is the case that the causal SNP and nearby flanking SNPs “drop out” of the data, recombination occurs as one extends out from these regions in both directions, and very strong signals (well beyond genome-wide significance thresholds) are detectable within these heterozygous regions. While we cannot attribute the signal to the true causal SNP, this limitation is not unique to our study, but is a general limitation of any study design (including pedigree and pooled sequencing studies) that must contend with linkage disequilibrium.
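      The logic of this argument can be sketched with a toy simulation (standard-library Python only; the number of sperm per donor and the recombination fraction below are illustrative assumptions, not values taken from the manuscript). Even when the causal SNP is invisible because every gamete carries the driving allele, a linked heterozygous SNP at recombination fraction r is transmitted at rate k(1-r) + (1-k)r, which for strong distortion still deviates massively from the Mendelian 50:50 expectation:

```python
import math
import random

def simulate_linked_snp(n_sperm, k, rec_frac, seed=0):
    """Simulate allele counts at a SNP linked to a hypothetical distorter.

    k: transmission rate of the driving allele at the causal locus.
    rec_frac: recombination fraction between the causal locus and the SNP.
    Returns the count of the driving-haplotype allele among n_sperm gametes.
    """
    rng = random.Random(seed)
    count = 0
    for _ in range(n_sperm):
        causal = rng.random() < k          # driving allele transmitted?
        recomb = rng.random() < rec_frac   # crossover between the two loci?
        linked = causal != recomb          # allele observed at the linked SNP
        count += linked
    return count

def z_score(count, n):
    """Deviation from the Mendelian 50:50 expectation, in standard deviations."""
    return (count - 0.5 * n) / math.sqrt(n * 0.25)

n = 969  # sperm per donor: an assumed round figure, not the study's value
c = simulate_linked_snp(n, k=0.99, rec_frac=0.1)
# the z-score lands far beyond any genome-wide significance threshold
print(c, round(z_score(c, n), 1))
```

This mirrors the argument above: the signal "drops out" only at the causal SNP itself, while flanking heterozygous regions retain a detectable, extreme distortion signature.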

      Nevertheless, as highlighted by Reviewer 3, the use of the term “strict” in the title may be too subjective. TD of 5% or less could be considered strong from a population genetic perspective, but undetectable based on binomial variance and our stringent multiple testing corrections. We have therefore removed the word “strict” from the title and moderated the adjectives we use when describing the strength of detectable TD throughout the paper. We also enumerate various forms of TD that would be undetectable based on our study design in the Discussion (lines 581-586; lines 603-638).
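      A back-of-the-envelope sketch of this detectability argument (the gamete and SNP counts below are assumed round numbers for illustration, not the study's actual figures) estimates the smallest transmission rate k a per-SNP binomial test can detect after Bonferroni correction, using the normal approximation with the null variance throughout:

```python
import math

def normal_quantile(p):
    """Inverse standard-normal CDF via bisection on erf (stdlib only)."""
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def min_detectable_k(n_gametes, n_tests, alpha=0.05, power_z=0.84):
    """Rough minimum transmission rate k detectable by a two-sided binomial
    test (normal approximation) after Bonferroni correction.

    power_z = 0.84 targets ~80% power; null variance is used for both
    hypotheses, a standard rough approximation.
    """
    a = alpha / n_tests                       # Bonferroni-corrected alpha
    z_crit = normal_quantile(1 - a / 2)       # critical z, two-sided
    delta = (z_crit + power_z) * math.sqrt(0.25 / n_gametes)
    return 0.5 + delta

# e.g. ~1,000 gametes per donor and ~150,000 tested SNPs (assumed inputs):
# roughly k ≈ 0.59 is needed, so a 5% deviation (k = 0.55) goes undetected
print(round(min_detectable_k(1000, 150_000), 3))
```

Under these assumptions the test is well powered only for intermediate-to-strong distortion, consistent with the reframing adopted above.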

      Reviewer #3 (Public Review):

      The authors reanalyze an existing dataset of single-cell Sperm-seq data to search for signals of transmission distortion. They develop an improved genotype imputation method and use this approach to phase donors and characterize the landscape of ancestry across each sperm genome. Using these data, the authors determined that there are no regions in any of the male donors' genomes that display a significant excess of TD. The main biological claim of the paper is that there is a strict adherence to Mendelian transmission ratios in human males.

      The computational approaches for accurately phasing and reconstructing haplotypes in individually lightly sequenced gametes is a potentially useful advance that I expect may be valuable for geneticists analyzing similar datasets. The quality of software documentation and usability is high. I have concerns about the appropriateness of the comparisons selected for this approach and the algorithm does not appear particularly novel.

      I have no doubt about the authors' basic conclusion that there are no strong male TD loci in the male donors examined. However, I find their statements about "strict adherence to Mendelian ratios" and many references to strong statistical power to be oversold. The power of this study is still quite limited relative to the strength of TD that we would expect to find in human populations.

      Thank you for your comments and for engaging with our manuscript so closely. We agree that additional discussion of statistical power, the strength of TD that can be detected, and the uses of our software are necessary, and these changes have substantially strengthened our revised manuscript.

      Major Concerns:

      There are really two distinct papers here. One is about improved imputation and crossover analysis from sperm-seq data and one is about TD. The bulk of the methodological development is a rework of the approach for genotype imputation and haplotype phasing in Sperm-seq. Yet, the major conclusions are focused on a scan for TD. I am left wondering if analyzing these data using the original method in the Bell et al paper would have produced different conclusions about either? If not, is there a systematic bias such that one would find an excess of false detections of TD? Phasing slightly more markers is not a particularly compelling link between these sections because even fairly sparsely distributed markers that are correctly phased would certainly be fine in a scan for TD within a single individual due to linkage. If this cannot be shown I wonder if this work would be better split into two manuscripts with one more technical paper describing the differences in recombination maps associated with rhapsodi and the other as a brief report stating that strong TD is probably uncommon in human males.

      While we agree that there are two important aspects of our study, we feel that the combination of a generalizable method as well as an application to test an important biological hypothesis is a strength of our work.

      For additional context, Dr. Bell is a co-author on our study and collaborated with us in part based on the motivation to build a reproducible software toolkit for similar analyses. Bell et al. (2020) did not implement their method as generalizable software, but rather as a set of analysis scripts tested only with their data and computing environment. Unlike our method (rhapsodi) and the comparison approach (Hapi), those scripts were not written as user-friendly software and are therefore less likely to be used by the research community.

      It is not surprising that rhapsodi outperforms Hapi since Hapi was designed for a very different quantity of samples and sequencing depths. I appreciate the authors' point that Hapi performed better than other methods in comparisons run by the Hapi authors. However, they were looking at very few gametes (10 or so, I believe). For that reason, this comparison is not appropriate to address the application to the datasets used in this paper. The authors should include an analysis comparing rhapsodi against hapcut2, PHMM and other methods that are appropriate for the full scale and sequencing depth of the data. Additionally, the original Bell paper used a phasing + HMM approach of some kind for exactly this data. Why wasn't that approach considered as a point of comparison?

While your point is well taken, we do not believe that a direct comparison between rhapsodi and PHMM would provide additional insight. In the publication describing PHMM (Hou et al. 2013), the algorithm was designed for datasets containing lower numbers of cells (11-41) sequenced to higher coverage per cell (0.4-0.9×) relative to the data analyzed by rhapsodi. PHMM is therefore, like Hapi, optimized for a narrower range of parameters than rhapsodi. Across this range of parameters, Hapi uniformly performs better than PHMM. Other tools such as hapcut2 may be designed to work with lower coverages and higher cell numbers than PHMM and Hapi, but are designed for use exclusively with diploid genomes. rhapsodi is therefore the first haploid phasing tool that can work with large numbers of low-coverage cells, and no existing software operates in the same niche. While the parameter spaces of Hapi and rhapsodi only partially overlap, Hapi remains the most appropriate point of comparison.

      In addition to the point about analysis scripts versus a generalizable software package, we note two major differences between the steps employed in Bell et al. 2020 and rhapsodi’s method:

      1) For phasing, Bell et al. (2020) used Hapcut2 in an “off-label” way that required artificial assignment of alleles from the same sperm cell to the same “read” for input. This approach ignores the positional information that was already encoded in the alignment and may not take full advantage of the co-inheritance patterns of the SNP alleles. The phasing method implemented in rhapsodi is a principled approach tailored to the structure of the input data and knowledge of the biological process of meiosis.

      2) For crossover discovery, Bell et al. (2020) handled genotype error by encoding an “error” state in the HMM. In our method, we assign gamete-level genotypes via HMM-based imputation prior to detecting recombination breakpoints. We believe dealing with the error prior to crossover discovery is a simpler approach that better leverages the strengths of HMMs.
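To illustrate the idea behind point 2 (this is a schematic sketch, not the rhapsodi implementation), a two-state Viterbi decoding can impute which parental haplotype a gamete carries at each SNP, absorbing isolated genotyping errors in the emission model rather than in an explicit error state. The error rate eps and switch probability rho below are arbitrary example values.

```python
import math

def viterbi_haplotype(obs, hapA, hapB, eps=0.005, rho=1e-4):
    """Impute the parental haplotype (0 = hapA, 1 = hapB) carried by a
    gamete at each SNP via two-state Viterbi decoding.
    obs: observed gamete alleles (0/1, or None where no read covers the SNP).
    hapA, hapB: phased donor haplotypes at the same SNPs.
    eps: per-site genotyping error rate (illustrative value).
    rho: per-interval haplotype switch (crossover) probability (illustrative)."""
    haps = (hapA, hapB)

    def emit(state, i):
        if obs[i] is None:
            return 0.0  # log(1): a missing observation is uninformative
        return math.log(1 - eps) if obs[i] == haps[state][i] else math.log(eps)

    stay, switch = math.log(1 - rho), math.log(rho)
    V = [[emit(0, 0), emit(1, 0)]]   # log-probabilities at SNP 0
    back = []                        # backpointers for traceback
    for i in range(1, len(obs)):
        row, ptr = [], []
        for s in (0, 1):
            cands = [V[-1][t] + (stay if t == s else switch) for t in (0, 1)]
            best = max((0, 1), key=lambda t: cands[t])
            row.append(cands[best] + emit(s, i))
            ptr.append(best)
        V.append(row)
        back.append(ptr)
    path = [max((0, 1), key=lambda s: V[-1][s])]
    for ptr in reversed(back):       # trace the best state sequence backwards
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

A single mismatched allele is cheaper to explain as a genotyping error (cost log eps) than as two haplotype switches (cost 2 log rho), so isolated errors do not produce spurious crossovers, while a sustained run of alleles from the other haplotype is decoded as a genuine crossover.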

      With respect to the method for imputation, no comparison is made to known recombination maps nor do the authors make any comparison across the maps derived from each donor. Reporting an improved method without it motivating novel biological conclusions is not compelling in itself. I suggest the authors expand that analysis to consider these are related questions. E.g., are there males whose recombination maps differ in specific regions? Are those associated with known major chromosomal abnormalities? Is this map consistent with estimates from LD, pedigrees, Bell et al?

      We agree that evaluating the inferred crossover landscape in relation to published maps would be useful as a technical evaluation of our method, though we respectfully disagree with the suggestion to expand the scope of the manuscript to the analysis of inter-individual variability in the crossover landscape—topics that were the main focus of Bell et al. (2020). The distinction between our work and that study was addressed in our responses to previous comments.

To address the suggestion to compare to existing maps, we counted the number of inferred recombination events within each 1 Mbp genomic bin, pooling across the donors. We compared this result with a published male-specific recombination map inferred from trio sequencing data (Halldorsson et al. 2019) and observed a strong correlation with our map (R = 0.9; Figure 5-figure supplement 5). We have incorporated this in “Results: Application to data from human sperm” (lines 372-377; lines 385-391) and note the potential biological and technical reasons for the observed discrepancies (lines 391-399). One technical contributor to the modest discrepancies appears to be the sample sequencing depth of coverage. Rather than pooling the number of inferred recombination events for each bin across all donors, we repeated the correlation analysis in a donor-specific manner. We then fit a linear regression model with the sample-specific sequencing depths of coverage as the predictor and the sample-specific correlations as the response variable. We found that the sample-specific correlation with the deCODE map was positively associated with depth of coverage (lines 391-399).
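The map comparison described above reduces to binning crossover midpoints and correlating the binned counts against a reference map. A minimal stdlib-only sketch (our illustration, not the analysis code) might look like:

```python
import math
from collections import Counter

def bin_crossovers(midpoints, bin_size=1_000_000):
    """Count inferred crossover midpoints per genomic bin (default 1 Mbp)."""
    return Counter(int(pos // bin_size) for pos in midpoints)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length vectors, e.g. per-bin
    crossover counts vs. a published recombination map."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

Repeating the correlation per donor and regressing the per-donor R on sequencing depth of coverage then follows the same pattern with an ordinary least-squares fit.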

      Most of the validations presented are based on simulated data. This is fine and has some advantages, but real data imposes challenges that these analyses do not address. My understanding is that the Bell et al. (2020) paper includes a donor with a phased diploid genome. A comparison of rhapsodi's phasing accuracy against that genome should be included.

Bell et al. 2020 included only sperm donors with previously unknown genomes, and phased their genomes via the sperm sequencing data. They validated their phasing approach in two ways: 1) via simulated data and 2) by comparing to the phase generated by Eagle (Loh et al. 2016, Nat. Genet.) for one donor genome, specifically comparing the phase of neighboring sites phased with both approaches. Importantly, such population-based approaches achieve only local phasing of common variation, as opposed to the chromosome-scale phasing achieved via gamete sequencing. Nevertheless, we acknowledge that real data exhibit features that are not captured by simulated data. We tried to capture the most significant potential contributors from real data (e.g., genotyping errors) in our simulations. Our newly added comparisons to the Halldorsson et al. (2019) map help address this concern (Figure 5-figure supplement 5).

      The main biological conclusion about a "strict adherence to Mendelian expectations across sperm genomes" is an overstatement. Statistical power of this study is still limited relative to the strength of TD that would be expected within human populations. One reason is the multiple testing correction. Another is that 1000-3000 draws from a binomial distribution with expected p = 0.5 is just not sufficient to overcome binomial sampling variance. In light of this concern and the central conclusion of this paper, the authors' discussion of power is inadequate. The main text really should contain explicit discussion of the required genotype ratio skew for TD in each donor to be detected with good power. Given previous pedigree studies, it is not surprising that no significant TD was discovered that exceeded the necessary ~10% effect sizes to be detectable. Recent, much more powerful analyses in mice, Drosophila and plants, indicate that strong TD is probably uncommon and even weak effects can be detected but are uncommon.

      Thank you for these detailed suggestions regarding statistical power. Our manuscript is greatly improved by these updates to the power analysis and our comparison to alternative methods for investigating TD.

      Specifically, we added additional simulations of TD at different rates (including very strong TD, as also noted in response to Reviewer 1) to demonstrate the range in which our study would be able to detect TD in this sample, considering the burden of multiple testing (Figure 4-figure supplement 3).

We added to the section titled “Results: Statistical power to detect moderate and strong TD” a statement about the strength of TD that would be detectable within the Sperm-seq dataset (lines 400-415). Briefly, the 25 donors have an average of 1711 gametes each (range 969-3377). Based on this sample size, we have Power = 0.681 to detect deviations of 0.07 (i.e., 57% transmission of one allele in a single donor) and Power = 0.912 to detect deviations of 0.08, accounting for multiple hypothesis testing across the genome and across donors (p-value threshold = 1.78 × 10⁻⁷). For an individual with 950 gametes, we have Power = 0.637 to detect deviations of 0.09 and Power = 0.84 to detect deviations of 0.1.
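Power figures of this kind follow from an exact two-sided binomial test against the multiple-testing-corrected threshold. The stdlib-only helper below is our own sketch of that calculation (not the manuscript's code); the sample sizes and alpha are taken from the text above.

```python
import math

def binom_sf(k, n, p):
    """Survival function P(X >= k) for X ~ Binomial(n, p), summed in log space
    to avoid overflow of the binomial coefficients."""
    total = 0.0
    for x in range(k, n + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
                    + x * math.log(p) + (n - x) * math.log(1 - p))
        total += math.exp(log_term)
    return total

def td_power(n_gametes, deviation, alpha=1.78e-7):
    """Power of a two-sided exact binomial test of H0: transmission rate = 0.5
    against an alternative rate of 0.5 + deviation, at significance level alpha
    (here the multiple-testing-corrected threshold quoted in the text)."""
    # find the smallest upper critical value whose two-sided size is <= alpha
    c = n_gametes // 2
    while 2 * binom_sf(c, n_gametes, 0.5) > alpha:
        c += 1
    p1 = 0.5 + deviation
    # reject when X >= c (upper tail) or X <= n - c (symmetric lower tail)
    return binom_sf(c, n_gametes, p1) + (1.0 - binom_sf(n_gametes - c + 1, n_gametes, p1))
```

For an average donor (1711 gametes), this reproduces the qualitative picture above: moderate power for a 7% deviation, high power for an 8% deviation.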

      Based on these calculations, we agree that the term “strict” is subjective and may be considered an over-statement depending on the point of comparison, and we have modified the title accordingly.

      This manuscript would benefit from a much clearer examination of statistical power and a detailed comparison of the power of this approach vs pedigree-based analyses as well as bulk gamete sequencing approaches. Although the authors are correct that all scans for TD in human genomes have been pedigree or single-cell based, more powerful alternatives are known. These are based on sequencing pools of individuals or gametes (e.g., Wei et al. 2017, Corbett-Detig et al. 2019). Each of those studies has been able to identify signatures of segregation distortion below the thresholds required for significance in this study. These and related works should be acknowledged in both the introduction and discussion. Although I appreciate that the ability to phase the genome in a single experiment may be appealing, phasing diploid genomes via hi-c omni-c is straightforward and the advantages in statistical power suggest that approaches using pools of gametes are preferable for well-powered scans for TD.

      Thank you for your suggestions regarding contextualizing the statistical power of single-gamete sequencing-based approaches. Our steps to address these comments have strengthened our manuscript and made the paper more applicable to future research.

The single-cell nature of the low-coverage (~0.01×) Sperm-seq data allowed us to augment our sample size 100-fold at each SNP in a way that is not possible with a pooled sequencing approach. Pooled sequencing methods may augment statistical power for detecting TD by 1) combining information from nearby SNPs and 2) assuming different sperm are sampled at each site. This approach has relied on external knowledge of haplotypes (e.g., obtained through sequencing of inbred strains of Drosophila). This permits aggregation of alleles supporting one haplotype or the other across adjacent SNPs, which can increase statistical power. The same statistical test for TD cannot be applied to bulk sequencing data from human sperm (e.g., Bruess et al. 2019, Yang et al. 2021) without external knowledge of the parental haplotypes. One potential approach for circumventing this issue would be local phasing using patterns of LD from a reference panel, but this would limit the analysis to common SNPs within relatively small windows that can be adequately phased with such methods.

It is not immediately obvious that pooled sequencing studies have greater power for discovering TD than single-cell studies. None of the pooled sequencing studies mentioned by the reviewer performed similarly exhaustive power analyses, and those power analyses that were performed involved systems with different levels of heterozygosity, genome sizes, and sample sizes of donor individuals. All of these factors affect the multiple testing burden, making direct comparison to a study in humans impossible. Given these considerations, we believe that an in-depth analysis of the statistical power of pooled sequencing approaches for discovering TD in humans lies outside the scope of our study.

      We have nevertheless updated our manuscript to discuss the strengths of pooled sequencing methods as an approach for investigating TD, citing relevant studies in both the Introduction (lines 37-46) and Discussion (lines 508-529; lines 557-580). We acknowledge that these methods have been successfully applied in other species (e.g., Wei et al. 2017, Corbett-Detig et al. 2019) and their potential to improve statistical power. We note the steps that would be necessary for making these methods applicable for TD scans in humans as new datasets are produced.

      We added a general power analysis of pedigree studies (Figure 4-figure supplement 4A) to illustrate the large sample sizes necessary to detect weak TD. To demonstrate the large sample size required for a pedigree study to achieve strong statistical power, we plot the number of informative transmissions of each SNP in the two pedigrees from Meyer et al. 2012 for which data was publicly accessible (Figure 4-figure supplement 4B).

      Importantly, in a single-gamete sequencing study, the number of informative transmissions is equal to the number of genotyped gametes for all heterozygous SNPs. In a pedigree-based study, the number of informative transmissions varies across SNPs, as not all parent-offspring trios will include one or more parent heterozygous for a given SNP. For example, the Hardy-Weinberg expected proportion of heterozygous parents for a common SNP with an allele frequency of 0.5 is 2pq = 0.5. Meanwhile, variants at lower frequencies will possess smaller proportions of heterozygotes, thus capturing fewer informative transmissions and limiting statistical power. One implication of this distinction is that pedigree-based studies rely on distorter alleles that act across multiple families, effectively restricting such scans to variants that are common in the population. This contrasts with single-gamete sequencing studies, which provide equal power for detecting TD involving common and rare alleles, provided that they are heterozygous in the sampled donor individual. We note this in the Discussion (lines 508-529).
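The contrast drawn above between study designs can be sketched numerically (an illustration of the argument, with hypothetical sample sizes):

```python
def het_fraction(allele_freq):
    """Hardy-Weinberg heterozygote frequency 2pq at a biallelic SNP."""
    p = allele_freq
    return 2 * p * (1 - p)

def expected_informative_transmissions(study, n, allele_freq=None):
    """Expected informative transmissions at one SNP.
    study='gamete': n = genotyped gametes from a donor heterozygous at
    the SNP -- every gamete is an informative transmission.
    study='pedigree': n = parent-offspring transmissions sampled from
    the population -- only transmissions from heterozygous parents count."""
    if study == "gamete":
        return n
    if study == "pedigree":
        return n * het_fraction(allele_freq)
    raise ValueError(study)

# A donor with 1711 gametes gives 1711 informative transmissions at
# every heterozygous SNP; a pedigree study with 1711 transmissions
# gives only ~856 at a common SNP (p = 0.5) and ~163 at p = 0.05.
```

The steep drop for low-frequency alleles is what effectively restricts pedigree-based scans to common distorter variants.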

      As noted by the reviewer, single-cell sequencing allows both phasing and examination of TD in a single study, allowing the investigation of meiotic recombination and its potential relationship with TD and fertility profiles. We have added text in the Conclusion (lines 659-693) to address this important point. Because of this study design, we are uniquely positioned to detect TD caused by any rare alleles we do capture; this contrasts with pedigree-based studies, where a distorter would need to be acting across multiple families to be detectable (thus restricting these scans to common variants). We have noted this in the Discussion (lines 521-529).

    1. The custom title bar has been a success on Windows, but the customer response on Linux suggests otherwise. Based on feedback, we have decided to make this setting opt-in on Linux and leave the native title bar as the default. The custom title bar provides many benefits including great theming support and better accessibility through keyboard navigation and screen readers. Unfortunately, these benefits do not translate as well to the Linux platform. Linux has a variety of desktop environments and window managers that can make the VS Code theming look foreign to users.
    1. Author Response

      Reviewer 2 (Public Review):

1) The periodic components of the simulated power did not overlap as is often seen in empirical data, they were confined to 1-40 Hz (e.g. no gamma activity was simulated), and the simulations did not include a knee in the aperiodic component. This means that it is unclear whether SPRiNT would work as well in more complex or excessively noisy datasets. The non-sinusoidal waveform shape of the periodic component in the rodent data reiterates this concern.

      We are grateful that the Reviewer raised these important considerations about the practical value of SPRiNT in more complex data scenarios.

      We wish to clarify that in the simulations reported, although two simultaneous periodic components would not share the same centre frequency, a substantial number of realizations of the simulations made these components overlap with centre frequencies separated by less than 5 Hz (6% of all simultaneously simulated peaks; n = 8166). We now provide an example of two overlapping spectral peaks in the revised version of Figure 3 – figure supplement 1C.

      In preparing the revised manuscript, we also studied how the spectral overlap of periodic components would determine the peak detection rate: we found that the peak detection rate increases with the separation between two consecutive peaks along the frequency spectrum, but that it is independent of the presence of other peaks if they are at least 8 Hz apart from each other (Figure 3 – figure supplement 1D).

      As correctly mentioned by the Reviewer, the original synthesized data did not comprise components beyond a maximum frequency of 40 Hz, nor did they include a knee in their aperiodic component. In the revised manuscript, we now report new results obtained from the analysis of 1000 synthesized time series that comprise two periodic components (including one periodic component between 30-80 Hz) and a knee in their aperiodic component (Figure 3 – figure supplement 2). The relevant additions to the Methods section are pasted below:

"We also simulated 1000 time series with aperiodic activity featuring a static knee (Figure 3 – figure supplement 2). Aperiodic exponents were initialized between 0.8-2.2 Hz⁻¹. Aperiodic offsets were initialized between -8.1 and -1.5 a.u., and knee frequencies were set between 0 and 30 Hz. Within the 12-36 s time segment of the simulated time series (onset randomized), the aperiodic exponent and offset underwent a linear shift of random magnitude in the range of -0.5 to 0.5 Hz⁻¹ and -1 to 1 a.u., respectively. The duration of the linear shift was randomly selected for each simulated time series between 1 and 20 s; the knee frequency was constant for each simulated time series. We added two oscillatory (rhythmic) components (amplitude: 0.6-1.6 a.u.; standard deviation: 1-2 Hz) with respective peak centre frequencies between 3-30 Hz and between 30-80 Hz, subject to the constraint of a minimum peak separation of at least 2.5 peak standard deviations. The onset of each periodic component was randomly assigned between 5-25 s, with an offset between 35-55 s. (Lines 773 to 784)"

      We analyzed these data with SPRiNT within the 1-100 Hz frequency range. These new results indicate that SPRiNT performs in a satisfactory manner on data with components distributed over a broader frequency range, with a knee in their aperiodic component.

      Below are the related edits to the revised Results section:

"SPRiNT failed to converge to aperiodic exponent fits in the range [-5, 5] Hz⁻¹ only on rare occasions (<2% of all time points). We removed these data points from further analysis. The simulated aperiodic exponents and offsets were recovered with MAEs of 0.22 and 0.42, respectively; static knee frequencies were recovered with a MAE of 3.55 × 10⁴ (inflated by large outliers in absolute error; median absolute error = 11.72). Overall, SPRiNT detected the peaks of the simulated periodic components with 56% sensitivity and 99% specificity. The spectral parameters of periodic components were recovered with equivalent performance in the lower (3-30 Hz) and higher (30-80 Hz) frequency ranges, respectively: MAEs for centre frequency (0.32, resp. 0.32), amplitude (0.27, resp. 0.22), and standard deviation (0.35, resp. 0.29). (Lines 244 to 252)"
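For reference, the spectral model underlying these simulations (the specparam parameterization: an aperiodic component offset − log10(knee + f^exponent) in log10 power, plus Gaussian peaks) can be sketched as follows; the parameter values in the test are arbitrary examples within the simulated ranges, not values from the study.

```python
import math

def aperiodic_log_power(f, offset, exponent, knee=0.0):
    """Aperiodic component in log10 power: offset - log10(knee + f**exponent)."""
    return offset - math.log10(knee + f ** exponent)

def peak_log_power(f, centre, amplitude, sd):
    """Gaussian periodic component in log10 power."""
    return amplitude * math.exp(-((f - centre) ** 2) / (2 * sd ** 2))

def model_log_spectrum(freqs, offset, exponent, knee, peaks):
    """Model log10 spectrum over freqs; peaks = [(centre, amplitude, sd), ...]."""
    return [aperiodic_log_power(f, offset, exponent, knee)
            + sum(peak_log_power(f, c, a, s) for c, a, s in peaks)
            for f in freqs]
```

Because the knee enters inside the log10 term, it flattens the spectrum below the knee frequency, which is what makes the knee and exponent parameters partially degenerate and harder to fit jointly, consistent with the degraded knee recovery reported above.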

      We also now discuss possible limitations in the Discussion:

      "Finally, SPRiNT’s performances were slightly degraded when spectrograms comprised an aperiodic knee (Figure 3 – figure supplement 2). This is due to the specific challenge of estimating knee parameters. Nevertheless, the spectral knee frequency is related to intrinsic neuronal timescales and cortical microarchitecture (Gao et al., 2021), which are expected to be stable properties within each individual and across a given recording. Thus, we recommend estimating (and reporting) aperiodic knee frequencies from the power spectrum of the data with specparam, and specifying the estimated value as a SPRiNT parameter. (Lines 480 to 486)"

The Reviewer’s point on non-sinusoidal waveform shapes is also well taken, but we would like to emphasize that such waveforms challenge all current methods, including but not specific to SPRiNT or specparam (Donoghue et al., 2021). Indeed, SPRiNT and specparam perform a parametric decomposition of the spectrally transformed data, regardless of whether periodic components of a true sinusoidal nature are present. Non-sinusoidal periodic time series, such as the sawtooth waveforms observed in the rodent data analyzed in the manuscript, comprise spectral peaks as harmonic components (here, of a theta-band fundamental rhythm). For this reason, we opted to focus our analyses and discussion of these data on the temporal dynamics of their aperiodic components.

      2) Furthermore, the SPRiNT and specparam parameters were fixed and arbitrary, and it is unclear how robust the current results are with respect to changes in these parameters.

      Here too, we appreciate the Reviewer’s insight and concern.

      We explored a subset of the simulations with SPRiNT using alternative settings for STFT (Figure 2 – figure supplement 3) and observed overall satisfactory performances. We now report the relevant results in an addition to the Supplemental Materials, as pasted below:

      "SPRiNT settings for higher temporal resolution (time range: 1-59 s, in 0.25 s steps; frequency range: 1-40 Hz, in 1 Hz steps) provided slightly larger estimation errors of exponent (MAE = 0.15) and offset (MAE = 0.20) relative to original settings (exponent, offset MAE = 0.11, 0.14, respectively). Alpha peaks were recovered with slightly lower sensitivity (98% at time bins with maximum peak amplitude; original 99%) and specificity (9% spurious detections; original 4%), and with greater errors in centre frequency (MAE = 0.43), amplitude (MAE = 0.24), and bandwidth (MAE = 0.53) compared to original settings (centre frequency, amplitude, bandwidth MAE = 0.33, 0.20, 0.42, respectively). Down-chirping beta oscillations were detected with lower sensitivity (93% sensitivity at time bins with maximum peak amplitude, original 98%; 86% specificity, original 98%), and with greater errors in centre frequency (MAE = 0.57), amplitude (MAE = 0.22), and bandwidth (MAE = 0.57) compared to original settings (centre frequency, amplitude, bandwidth MAE = 0.43, 0.17, 0.48, respectively). SPRiNT settings for higher frequency resolution (time range: 2-58 s, in 0.5 s steps; frequency range: 1-40 Hz, in 0.5 Hz steps) provided comparable estimation errors of exponent (MAE = 0.13) and offset (MAE = 0.16) relative to original settings (exponent, offset MAE = 0.11, 0.20, respectively). Alpha peaks were recovered with similar sensitivity (99% at time bins with maximum peak amplitude; original 99%) but lower specificity (21% spurious detections; original 4%), and with comparable errors in centre frequency (MAE = 0.35), amplitude (MAE = 0.23), and bandwidth (MAE = 0.41) to original settings (centre frequency, amplitude, bandwidth MAE = 0.33, 0.20, 0.42, respectively). 
Down-chirping beta oscillations were detected with comparable sensitivity (99% sensitivity at time bins with maximum peak amplitude, original 98%) but lower specificity (78%, original 98%), and with greater errors in centre frequency (MAE = 0.50), amplitude (MAE = 0.21), and bandwidth (MAE = 0.59) relative to original settings (centre frequency, amplitude, bandwidth MAE = 0.43, 0.17, 0.48, respectively). (Lines 1190 to 1213)"

We now provide in the Discussion practical recommendations for setting the method’s parameters, which depend on the specific objectives of a given study. We also present the rationale for the settings used in this manuscript as guidance for future users. We believe these specific recommendations add practical value to the manuscript.

Reviewer 3 (Public Review):

      1) Based on the simulated data, SPRiNT seems to be very efficient and robust, and it is also superior to the wavelet-specparam approach. However, while the simulations are very extensive, I find that they are constructed in a manner that may induce biases as the comparison is conducted between SPRiNT and a single, fixed wavelet-based approach. Like any spectral analysis technique, wavelets possess their own trade-off between temporal and frequency resolutions. As the wavelet analyses are conducted using a fixed set of parameters, it may be that some of the differences between the methods stem from how well they are suited for detecting the simulated activity that is constructed using a certain standard deviation of their oscillatory frequencies. It would be valuable to evaluate whether changing the wavelet-analysis parameters or the width of the simulated oscillations would change how the alternative methods compare. It is of course clear that the STFT based approach would remain computationally superior, but it would be interesting to see whether the other differences would remain as robust after the above more detailed evaluation of the methods. Related to the method comparison, it also appears that the outlier removal within SPRiNT markedly improves the quantification of the periodic components. This matter could be discussed more within the manuscript.

      We appreciate the concerns expressed by this Reviewer regarding our choice of wavelet parameters.

To respond to the concerns expressed, we have performed new analyses with the wavelet-specparam approach using a range of alternative time-frequency resolutions: a FWHM of 2 s at 1 Hz, and a FWHM of 4 s at 1 Hz (Figure 2 – Figure supplement 2).

The observed changes remain qualitatively moderate, and the performances remain below those obtained with SPRiNT. The new results are displayed in Figure 2 – figure supplement 2 and described in the following revisions to Supplemental Materials:

      "Wavelet settings of finer resolution in time and coarser in frequency (time range: 3-57 s, in 0.005 s steps; central frequency = 1 Hz, FWHM = 2 s; frequency range: 1-40 Hz, in 1 Hz steps) yielded lower estimation errors of exponent (MAE = 0.12) and offset (MAE = 0.35) compared to original settings (exponent, offset MAE = 0.19, 0.78). Alpha peaks were recovered with higher sensitivity (97% at time bins with maximum peak amplitude, original 95%) and specificity (32% spurious detections, original 47%), although with greater errors in centre frequency (MAE = 0.61), amplitude (MAE = 0.25), and bandwidth (MAE = 0.94) compared to original settings (centre frequency, amplitude, bandwidth MAE = 0.41, 0.24, 0.64, respectively). Down-chirping beta oscillations were detected with lower sensitivity (29% sensitivity at time bins with maximum peak amplitude, original 62%) but higher specificity (97%, original 90%), and with greater errors in centre frequency (MAE = 0.63), amplitude (MAE = 0.17), and bandwidth (MAE = 1.59) relative to original settings (centre frequency, amplitude, bandwidth MAE = 0.58, 0.16, 1.05, respectively). When wavelet settings prioritized resolution in frequency over time (time range: 4-56 s, in 0.005 s steps; central frequency = 1 Hz, FWHM = 4 s; frequency range: 1-40 Hz, in 1 Hz steps) relative to original settings, the errors in estimates of exponent (MAE = 0.16) and offset (MAE = 0.47) parameters were reduced (original exponent, offset MAE = 0.19, 0.78, respectively). Alpha peaks were recovered with higher sensitivity (99% at time bins with maximum peak amplitude, original 95%) and similar specificity (46% spurious detections, original 47%), although with larger errors in centre frequency (MAE = 0.33), amplitude (MAE = 0.20), and bandwidth (MAE = 0.43) compared to original settings (centre frequency, amplitude, bandwidth MAE = 0.41, 0.24, 0.64, respectively). 
In contrast, down-chirping beta oscillations were detected with slightly higher sensitivity (79% at time bins with maximum peak amplitude, original 62%) and specificity (91%, original 90%), and with lower errors on centre frequency (MAE = 0.37), amplitude (MAE = 0.14), and bandwidth (MAE = 0.71) compared to original settings (centre frequency, amplitude, bandwidth MAE = 0.58, 0.16, 1.05, respectively). (Lines 1155 to 1179)"

      We now discuss the outlier peak removal process and its benefits/drawbacks more extensively in the revised Discussion. The relevant section is pasted below:

      "SPRiNT’s optional outlier peak removal procedure increases the specificity of detected spectral peaks by emphasizing the detection of periodic components that develop over time. This feature is controlled by threshold parameters that can be adjusted along the time and frequency dimensions. So far, we found that applying a semi-conservative threshold for outlier removal (i.e., if less than 3 more peaks are detected within 2.5 Hz and 3 s around a given peak of the spectrogram) reduced the false detection rate by 50%, without affecting the true detection rate substantially (a <5% reduction; Figure 3 and Figure 3 – figure supplement 3). Setting these threshold parameters too conservatively would reduce the sensitivity of peak detection. (Lines 487 to 494)"
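The semi-conservative thresholding rule quoted above can be sketched in a few lines. The 2.5 Hz / 3 s neighbourhood and the minimum of 3 neighbouring peaks come from the text; the peak representation and function name are illustrative and not SPRiNT's actual code:

```python
def remove_outlier_peaks(peaks, freq_thresh=2.5, time_thresh=3.0, min_neighbors=3):
    """Drop peaks with fewer than `min_neighbors` other peaks nearby.

    `peaks` is a list of (time_s, freq_hz) tuples. A peak is kept only if
    at least `min_neighbors` other peaks fall within +/- time_thresh
    seconds and +/- freq_thresh Hz of it, so only periodic components
    that persist over time survive.
    """
    kept = []
    for i, (t, f) in enumerate(peaks):
        n_near = sum(
            1 for j, (t2, f2) in enumerate(peaks)
            if j != i and abs(t2 - t) <= time_thresh and abs(f2 - f) <= freq_thresh
        )
        if n_near >= min_neighbors:
            kept.append((t, f))
    return kept
```

Raising `min_neighbors` or shrinking the neighbourhood makes the rule more conservative, which is the sensitivity cost the quoted paragraph warns about.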

2) As for the investigation of real data, there are a few aspects that in my opinion could be investigated more thoroughly. Based on the findings it appears that the fine-grained time-resolved parametrization yields added value, especially in eyes-open rest, where the fluctuation of alpha center frequency dissociates the different age groups, whereas the other time-resolved findings are not as unambiguously supportive of the need for fine-grained time-resolved analysis. Regarding the first point (fluctuation of alpha center frequency), the finding that the amount of fluctuation within the alpha frequency is distinct across age groups is very interesting. On the methodological side, an open question is whether SPRiNT is required for making this observation. That is, is this effect observed only when applying the specparam-based parametrization (and outlier removal) after STFT, or would the same observation have been made simply by estimating the fluctuations directly from the STFT-based spectral estimates? As for using SPRiNT to determine the properties of aperiodic activity, presently it is not clear whether the approach yields added value compared to the more direct use of specparam. That is, the present findings show that the mean aperiodic slope dissociates both different age groups and resting-state conditions (eyes-open vs. -closed). It would be appropriate to test whether the same observation would be made by using specparam in the more standard way, by first obtaining one spectral estimate across the whole one-minute time window and then parametrizing this estimate. This type of testing would yield insights into whether, for the present data and possibly for other questions, there is a difference between SPRiNT, which builds on dynamic but noisier spectral estimates and allows outlier removal, and the standard approach, which benefits from more stable spectral estimates.
As for the rodent movement data, the evidence is clear that the aperiodic exponent differs between resting and movement states. However, the fundamental meaning of the change in the exponent at transition points is not explored. Does this change simply reflect the speed of the animal/amount of movement that changes across the period before and after rest and movement onsets? That is, does the transition curve align with the movement curve, or does it represent something more complex? This aspect could be evaluated and discussed more extensively. Together, the above additional evaluations would be beneficial for determining whether there is value in looking at aperiodic activity in a time-resolved manner, and whether a fine-grained analysis is needed or a more static analysis that takes tasks/states into account would fare equally well or even better.

We appreciate all concerns raised here by the Reviewer. We intended to report that age-related changes of spectral features in healthy aging (Cellier et al., 2021; Donoghue et al., 2020; Hill et al., 2022; Ostlund et al., 2022; Schaworonkow & Voytek, 2021) can be replicated using summary statistics of SPRiNT outcomes. Our intention was not to showcase these effects as novel. To clarify our purpose and the novelty of the proposed approach, we have revised Figure 4 accordingly and now emphasize the genuinely novel aspects of our findings from the time-resolved parameterization of the spectrogram.

We further investigated the benefits of using SPRiNT to detect age-related changes in the temporal variability of alpha-peak frequency. Using STFT, we replicated the same trend, whereby older individuals exhibit greater temporal variability of alpha-peak frequency. One asset of the SPRiNT approach is the interpretability of the effect, because SPRiNT detects genuine peak components in the spectrogram and corrects their parameters for possible confounds from concurrent aperiodic components. In contrast, individual alpha-peak frequency derived from STFT is based on instantaneous fluctuations of signal power in the alpha band, regardless of the actual presence of a periodic component.
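The STFT-based estimate discussed here, which tracks the power maximum in the alpha band whether or not a genuine periodic component is present, can be sketched as follows (window length, band limits, and function names are illustrative choices, not the paper's analysis pipeline):

```python
import numpy as np

def stft_alpha_peak_freq(signal, fs, win_s=1.0, alpha=(8.0, 12.0)):
    """Per-window alpha 'peak' frequency: the argmax of STFT power
    within the alpha band. Note this returns a value for every window,
    even when no genuine periodic component is present."""
    n_win = int(win_s * fs)
    n_segs = len(signal) // n_win
    freqs = np.fft.rfftfreq(n_win, 1 / fs)
    band = (freqs >= alpha[0]) & (freqs <= alpha[1])
    peaks = []
    for k in range(n_segs):
        seg = signal[k * n_win:(k + 1) * n_win] * np.hanning(n_win)
        power = np.abs(np.fft.rfft(seg))**2
        peaks.append(freqs[band][np.argmax(power[band])])
    return np.array(peaks)  # temporal variability: np.std(peaks)
```

The standard deviation of the returned series is the kind of variability measure that could be compared across age groups without spectral parameterization.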

      As for apparent discrepancies between the SPRiNT and specparam outcomes, we found that only the specparam-derived alpha amplitude, not SPRiNT’s, was predictive of age group. Please see our response to Reviewer 1’s first comment for a detailed interpretation of this outcome.

      Concerning the rodent data, we followed this Reviewer’s suggestion of determining whether aperiodic exponent was related to movement speed at the transitions between movement and rest (and vice versa). Indeed, we found that variability in aperiodic exponent proximal to transitions between movement and rest was partially explained by instantaneous movement speed (see Figure 5 – figure supplement 3). Below, we have revised the Results and Discussion sections accordingly:

"We tested whether changes in aperiodic exponent proximal to transitions of movement and rest were related to movement speed and found a negative linear association in both subjects for both transition types (EC012 transitions to rest: β = -9.6×10⁻³, SE = 4.7×10⁻⁴, 95% CI [-1.1×10⁻², -8.6×10⁻³], p < 0.001, R² = 0.29; EC012 transitions to movement: β = -7.3×10⁻³, SE = 4.3×10⁻⁴, 95% CI [-8.1×10⁻³, -6.4×10⁻³], p < 0.001, R² = 0.18; EC013 transitions to rest: β = -1.1×10⁻², SE = 2.3×10⁻⁴, 95% CI [-1.2×10⁻², -1.1×10⁻²], p < 0.001, R² = 0.32; EC013 transitions to movement: β = -1.2×10⁻², SE = 3.2×10⁻⁴, 95% CI [-1.3×10⁻², -1.2×10⁻²], p < 0.001, R² = 0.26; Figure 5 – figure supplement 3). (Lines 403-410)"

"Changes in aperiodic exponent were partially explained by movement speed (Figure 5 – figure supplement 3), which could reflect increased processing demands from additional spatial information entering entorhinal cortex (Keene et al., 2017) or increased activity in cells encoding speed directly (Iwase et al., 2020). (Lines 556-560)"

    1. Author Response

      Reviewer #1 (Public Review):

      “The authors suggest that they uncovered two distinct phases of how the posterior axial identity is controlled; the first involving TBXT/Wnt to generate posterior 'uncommitted progenitors', which then go on to generate NCCs, and the second involving FGF to impart posterior axial identity onto CNS/spinal cord cells.”

      Based on our new data we have slightly modified our model: (i) TBXT controls posterior axial identity acquisition in NMP precursors and both their trunk NC and CNS spinal cord derivatives; (ii) this early, TBXT-driven posteriorisation phase appears to be WNT dependent; (iii) a subsequent TBXT/WNT-independent phase of Hox cluster regulation occurring during the transition of NMPs towards their NC/spinal cord derivatives is controlled predominantly by FGF signalling. This model is shown in Figure 9 in the revised manuscript.

      “I am not convinced that their data show this; it is equally possible that NMPs are heterogeneous and the effects observed simply reflect a differential response of cells or selection. Since the authors largely analyse their data by qPCR it is difficult to disentangle this.”

We believe that the inclusion of new data defining the emergence of NMP derivatives at the single cell level through analysis of key trunk lineage-specific markers (HOXC9, SOX10, SOX1, SOX2) via immunostaining and image analysis/flow cytometry (see Figure 3-figure supplement 1, Figure 4C-D, Figure 5-figure supplement 1, Figure 7D-E in revised manuscript) should address the reviewer’s point. See also our response to the editorial comments above. It should be noted that the vast majority of day 3 hESC-derived NMPs (>95%) are positive for TBXT protein expression based on antibody staining, and thus the starting population for the generation of trunk NC/spinal cord progenitors can be considered largely homogeneous with respect to the expression of this transcription factor.

      “The authors include some expression data in mouse to support their in vitro findings. However, these need to be explained and integrated better.”

We hope that breaking down Figure 4 and the related text into two parts has improved the integration of the in vivo data in the revised version of the manuscript.

      Reviewer #2 (Public Review):

“The fact that the regimes are distinct makes the comparisons of neural crest versus spinal cord difficult to interpret, as the cells have been exposed for different amounts of time to WNT and FGF when they assess the Hox code in neural crest or spinal cord cells. Especially because the spinal cord induction protocol involves four additional days of culture with FGF and CHIR, and the cells after seven days are not mature neural progenitors.”

      To address this point, we employed “neutral”, extrinsic signal-free culture conditions that drive NMPs towards a mixture of early pre-neural spinal cord progenitors and mutually exclusive SOX1+HOXC9+ CNS spinal cord and SOX10+HOXC9+ NC populations. This facilitated the effective assessment of cell fate and posterior axial identity acquisition simultaneously in both NMP-derived spinal cord and NC cells, during discrete time windows of TBXT knockdown (Figure 4 in revised manuscript). For details see our response above.

“Likewise, the authors have previously shown that such a treatment induces the expression of dorsal neural tube/early neural crest markers”.

Although we have no evidence of SOX10 expression in cultures generated from NMPs following WNT and FGF agonist treatment for 4 days, indicating an absence of definitive NC cells, we opted to remove the “CNS” references when describing this cell population to accommodate the possibility that it may be NC-potent, given its previously described dorsal neural tube/early NC character (Cooper et al, 2022; Wind et al., 2021).

      “It would be good to see some quality controls on the percentages of neural crest progenitors or spinal cord neural progenitors that they get in each signalling regime. Can the authors separate neural progenitor cells and neural crest cells (for example by FACS sorting with specific markers) to confirm the cell-type specific expression of the HOX genes in these experiments?”.

      As mentioned above, we have now included immunostaining data quantifying thoroughly the induction of trunk SOX1+HOXC9+ CNS spinal cord and SOX10+HOXC9+ NC cells under different culture conditions/TBXT levels (see Figure 4C-D, Figure 5-figure supplement 1, Figure 7 and Figure 7-figure supplement 1).

      “In the neural crest differentiation protocol, there is a slight, non-significant upregulation of neural progenitor markers following TBXT knockdown, can the authors quantify the percentage of neural cells in their cultures to see how much of the observed effect is specific to neural crest cells?”

We have quantified the emergence of SOX1+ CNS spinal cord progenitor cells in NMP-derived trunk NC cultures using both FACS/intracellular staining and immunostaining/image analysis, but their numbers are too small (2-3% of total cells, with no statistically significant difference between control and TBXT knockdown cells; see Figure 3-figure supplement 1) to extract any meaningful conclusions on the effect of TBXT depletion on them. However, quantification of SOX1+HOXC9+ cells generated from NMPs upon culture in “neutral” basal conditions revealed that TBXT depletion results in a decrease in their number, in addition to its established impact on trunk NC (see Figure 4C-D in revised manuscript).

“Previous work from the lab showed that a 3-day FGF/CHIR treatment of hESCs followed by a two-day incubation on basal medium is sufficient to induce neural progenitors that express Hox genes of posterior identity (PMID: 25157815). Can the authors draw the same conclusions for the spinal cord cells with this protocol if they deplete TBXT during the first three days and assay the cells at day 7 on basal medium, or if they deplete TBXT during the last four days of the protocol? The comparison of the 3-day FGF/CHIR regime followed by basal medium treatment versus the continuous FGF/CHIR for a 7-day period may help clarify the temporal and cell-type specific effects of the HOX code via TBXT/FGF on the neural crest and/or spinal cord cells”.

      We have carried out this experiment as suggested by the reviewer (Figure 4C-D/line numbers 226-256 in the revised manuscript), for details see our responses above.

      “In their data, it seems that anterior HOX genes (PG1-5) as well as other posterior HOX (PG6-9) are expressed in wild-type posterior neural crest and early spinal cord cells. Can HOX genes that mark posterior cranial, vagal or trunk identities be co-expressed in trunk neural crest or spinal cord cells? Is it possible that the differentiations generate cells that have different axial identities? I wonder if this interpretation comes from the normalization. Perhaps the authors could clarify if the levels of expression of the 3' Hox genes are higher or lower than 5' Hox genes in their differentiations”.

      Co-expression of HOX paralogous group (PG) (1-5) and (6-9) transcripts does occur in the posterior part of the mouse embryo around E9.5, both in the NMP-containing tailbud region (Gouti et al, 2017) as well as in differentiated posterior neural/neural crest cells e.g. for Hoxb1 expression in E9.5 mouse embryos see (Arenkiel et al, 2003; Glaser et al, 2006); for Hoxc9 expression see (Bel et al, 1998). Thus, the presence of HOXPG(1-5) transcripts in HOXC9+ trunk NC cells is not surprising and in line with what has been reported previously in other studies describing the generation of posterior NC/spinal cord cell types from hESC/NMPs (Frith et al., 2018; Hackland et al, 2019; Lippmann et al, 2015; Mouilleau et al, 2021). Alternatively, the simultaneous detection of transcripts belonging to both HOXPG(1-5) and HOXPG(6-9) could indicate the co-emergence of a separate population of posterior cranial/cardiac/vagal NC cells during trunk NC differentiation. Moreover, the detection of HOX transcripts does not always correlate with corresponding protein positivity (Faustino Martins et al, 2020) pointing to the existence of post-transcriptional/-translational mechanisms controlling HOX protein expression. Unfortunately, we have not identified reliable (in our hands) antibodies against HOXPG(1-5) members that we can use together with HOXC9 in order to distinguish between these possibilities.

“In the experiments where the authors assess if TBXT binds directly or indirectly to the HOX clusters, the authors compare pluripotent cells with hNMPs. This data confirms that TBXT acts as an activator in hNMPs and that it binds to regions in the HOX clusters. Do the HOX regions overlap with known enhancers for the HOX genes for neural crest or spinal cord?”

      We have included new ATAC-seq data mapping chromatin accessibility in day 8 trunk NC cells generated from TBXT-depleted and control hESC-derived NMPs. These data, combined with the ATAC-seq and TBXT ChIP-seq analyses from day 3 hESC-derived NMPs, indicate that TBXT controls chromatin accessibility in trunk NC-specific enhancers within HOX clusters, both directly through genomic binding, and indirectly possibly by influencing expression of other key transcriptional regulators such as CDX2. For details see Figure 8-figure supplement 2 and Appendix Table S9 and line numbers 458-482 in the revised manuscript.

      “As they see distinct temporal phases of TBXT activity on spinal cord progenitors versus neural crest cells, the authors should test if there are changes in accessibility or TBXT binding in neural crest and spinal cord cells in the HOX locus and/or genome-wide. This comparison may help identify cell-type specific TBXT targets (perhaps acting with distinct coactivators) that are key in the two distinct phases of posterior axial identity control”.

As mentioned above, we have added new ATAC-seq data from analysis of trunk NC cells derived from TBXT knockdown shRNA hESC-derived NMPs in the presence and absence of Tet. These data can be found in Figure 8-figure supplement 2 and Appendix Table S9 in the revised manuscript. As expected, ATAC-seq analysis of pre-neural CNS spinal cord progenitors generated from TBXT knockdown shRNA hESC-derived NMPs in the presence and absence of Tet showed no significant differences in chromatin accessibility between the two conditions, consistent with our gene expression data (Figure 6 in revised manuscript). These data were not included in the new manuscript version but they are publicly available as part of our revised GEO submission (GSE184227). Mapping of TBXT genomic binding in NMP-derived trunk NC cells/spinal cord progenitors is not feasible due to the very low/absent expression of TBXT protein in these cell populations. See also our response to the editor’s suggestions.

      “In the experiments where the authors examine the signalling pathway dependence of HOX expression during the transition in the neural crest differentiation protocol, it appears that CHIR/LDN treatment induces the highest levels of HOX expression (FIG 3F). Also, there is an increased expression of SOX1 while SOX10 expression is not detected "pointing to a role for BMP signalling in steering NMPs/dorsal pre-neural progenitors toward a NC fate in agreement with previous observations". The results may indicate that WNT and BMP inhibition may induce HOX gene expression in neural cells irrespective of FGF. How do the authors interpret this? How does it affect their final model where FGF (and not WNT) drives the expression of HOX genes in late pre-neural spinal cord progenitors?”.

Based on our data and published work, we speculate that during the transition of hESC-derived NMPs towards trunk NC cells, cultures still exhibit autocrine and/or paracrine FGF signalling even in the absence of exogenous FGF agonist supplementation. This is supported by previous reports showing the expression of the active, phosphorylated version of the FGF effector ERK1/2 in differentiating pluripotent stem cells cultured in FGF-free media (Diaz-Cuadros et al, 2020; Stavridis et al, 2007; Ying et al, 2003). This endogenous FGF activity is probably sufficient for the maintenance of HOX gene expression in these cells, while exogenous BMP signalling stimulation is required for the induction of a NC fate. Given the reported antagonism between these two pathways during early neural/NC induction (Anderson et al, 2016; Marchal et al, 2009), treatment with the BMP inhibitor LDN193189 results in FGF signalling potentiation, which in turn leads to increased HOX gene expression and a switch toward a CNS neurectodermal fate at the expense of NC. Further work is needed to mechanistically dissect this hypothesis, which is beyond the scope of this manuscript.

      “The identity of the cells in the inhibition of WNT or FGF treatments during the final four days towards spinal cord cells experiments is unclear. It would be very useful if the authors could characterize what cell types emerge after the treatments. In principle, I would expect that these treatments would generate different progenitor types (FGF inhibition may presumably give rise to mesoderm cells, whereas WNT inhibited may be pre-neural). Why would the authors expect these different cell types to have similar levels of expression of WNT targets or Hox genes?”

The inclusion of the new immunostaining data and the quantification of the proportions of SOX2+HOXC9+ cells emerging upon various WNT/FGF inhibitor treatments (Figure 7D-E in revised manuscript) has now enabled us to define the role of these signalling pathways in controlling HOX gene expression specifically in pre-neural spinal cord progenitors, thus confirming our conclusions from the qPCR data without any bias introduced by contaminating, non-neural HOXC9+ cells.

    2. Reviewer #2 (Public Review):

How cell types acquire regional identity during embryonic development remains largely unknown. In this manuscript, Gogolou et al. study the role of TBXT in the establishment of posterior identity of neural crest and spinal cord cells derived from human neuromesodermal progenitors (hNMPs). Previously, it was shown that activation of human pluripotent stem cells with FGF and Wnt/β-catenin establishes a progressive and full colinear HOX activation in human axial progenitors in vitro. In this manuscript, the authors confirm these findings and show unexpected temporally restricted and cell-specific modes of acquisition of posterior identities in neural crest and spinal cord cells. Specifically, they find that TBXT depletion impedes posterior identity acquisition in neural crest cells whereas it does not impact spinal cord regionalization. Instead, they find that FGF is the main driver for spinal cord axial patterning.

      This work addresses a very important question and the results they provide support their final model. The work opens up new interpretations on how cells define their axial identity and sets the ground to investigate how TBXT cooperates with other transcription factors to establish posterior identities prior to the acquisition of a neural crest or spinal cord fate. Further, this mechanistic insight may help explain the impairment of neural crest specification and HOX dysregulation in neural tube defects.

A rigorous characterization of the cell types that are generated during this differentiation under various signaling regimes is essential to separate the cell-specific effect of WNT and FGF on the HOX code versus the temporally restricted windows in which this can happen. In their experiments towards neural crest or spinal cord differentiation, the starting hNMP population is homogeneous and it emerges after a 3-day treatment of the cells with FGF and CHIR. Then, the authors use specific signalling regimes for the generation of neural crest or spinal cord cells. The fact that the regimes are distinct makes the comparisons of neural crest versus spinal cord difficult to interpret, as the cells have been exposed for different amounts of time to WNT and FGF when they assess the Hox code in neural crest or spinal cord cells. Especially because the spinal cord induction protocol involves four additional days of culture with FGF and CHIR, and the cells after seven days are not mature neural progenitors. Likewise, the authors have previously shown that such a treatment induces the expression of dorsal neural tube/early neural crest markers (PMID: 33658223).

      - It would be good to see some quality controls on the percentages of neural crest progenitors or spinal cord neural progenitors that they get in each signalling regime. Can the authors separate neural progenitor cells and neural crest cells (for example by FACS sorting with specific markers) to confirm the cell-type specific expression of the HOX genes in these experiments?

      - In the neural crest differentiation protocol, there is a slight, non-significant upregulation of neural progenitor markers following TBXT knockdown, can the authors quantify the percentage of neural cells in their cultures to see how much of the observed effect is specific to neural crest cells?

- Previous work from the lab showed that a 3-day FGF/CHIR treatment of hESCs followed by a two-day incubation on basal medium is sufficient to induce neural progenitors that express Hox genes of posterior identity (PMID: 25157815). Can the authors draw the same conclusions for the spinal cord cells with this protocol if they deplete TBXT during the first three days and assay the cells at day 7 on basal medium, or if they deplete TBXT during the last four days of the protocol? The comparison of the 3-day FGF/CHIR regime followed by basal medium treatment versus the continuous FGF/CHIR for a 7-day period may help clarify the temporal and cell-type specific effects of the HOX code via TBXT/FGF on the neural crest and/or spinal cord cells.

      - In their data, it seems that anterior HOX genes (PG1-5) as well as other posterior HOX (PG6-9) are expressed in wild-type posterior neural crest and early spinal cord cells. Can HOX genes that mark posterior cranial, vagal or trunk identities be co-expressed in trunk neural crest or spinal cord cells? Is it possible that the differentiations generate cells that have different axial identities? I wonder if this interpretation comes from the normalization. Perhaps the authors could clarify if the levels of expression of the 3' Hox genes are higher or lower than 5' Hox genes in their differentiations.

- In the experiments where the authors assess if TBXT binds directly or indirectly to the HOX clusters, the authors compare pluripotent cells with hNMPs. This data confirms that TBXT acts as an activator in hNMPs and that it binds to regions in the HOX clusters. Do the HOX regions overlap with known enhancers for the HOX genes for neural crest or spinal cord?

      - As they see distinct temporal phases of TBXT activity on spinal cord progenitors versus neural crest cells, the authors should test if there are changes in accessibility or TBXT binding in neural crest and spinal cord cells in the HOX locus and/or genome-wide. This comparison may help identify cell-type specific TBXT targets (perhaps acting with distinct co-activators) that are key in the two distinct phases of posterior axial identity control.

      - In the experiments where the authors examine the signalling pathway dependence of HOX expression during the transition in the neural crest differentiation protocol, it appears that CHIR/LDN treatment induces the highest levels of HOX expression (FIG 3F). Also, there is an increased expression of SOX1 while SOX10 expression is not detected "pointing to a role for BMP signalling in steering NMPs/dorsal pre-neural progenitors toward a NC fate in agreement with previous observations". The results may indicate that WNT and BMP inhibition may induce HOX gene expression in neural cells irrespective of FGF. How do the authors interpret this? How does it affect their final model where FGF (and not WNT) drives the expression of HOX genes in late pre-neural spinal cord progenitors?

      - The identity of the cells in the inhibition of WNT or FGF treatments during the final four days towards spinal cord cells experiments is unclear. It would be very useful if the authors could characterize what cell types emerge after the treatments. In principle, I would expect that these treatments would generate different progenitor types (FGF inhibition may presumably give rise to mesoderm cells, whereas WNT inhibited may be pre-neural). Why would the authors expect these different cell types to have similar levels of expression of WNT targets or Hox genes?

  8. www.janeausten.pludhlab.org
    1. Reviewer #2 (Public Review):

      Roux et al. investigated the temporal relationship between spike field coherence (SFC) of locally and distally coupled units in the hippocampus of epilepsy patients to successful and unsuccessful memory encoding and retrieval. They show that SFC to faster theta and gamma oscillations accompany hits (successful memory encoding and retrieval) and that the timing of the SFC between local and distal units for hits comports well with synaptic plasticity rules. The task and data analyses appear to be rigorously done.

      Strengths:

The manuscript extends previous work in the human medial temporal lobe which has shown that greater SFC accompanies improved memory strength. The cross-regional analyses are interesting and necessary to invoke plasticity mechanisms. They deploy a number of contemporary analyses to disentangle the question they are addressing. Furthermore, their analyses address limitations or confounds that can arise from various sources such as sample size, firing rates, and signal processing issues.

      Weaknesses:

Methodological:<br /> The SFC coherence measures depend in part on extracting LFPs from the same, or potentially other, electrodes that are contaminated by spikes, as well as by multiunit activity. In the methods, they cite a spike removal approach. Firstly, the incomplete removal of a spike, or its substitution with a signal resembling what might have been there had no spike been present, can introduce broadband signal time-locked to the spike and create spurious SFC. Can the authors confirm that such an artifact is not present in their analyses? Secondly, how did they deal with the removal of multiunit activity? Removal of such activity, in light of refractory period violations, might be more difficult than for well-isolated units, and could introduce artifacts and broadband power, again spuriously elevating SFC. Conversely, not removing multiunit activity would seem certain to introduce significant broadband power. One way around this might be, since it is uncommon to have units on all 8 of the BF microwires, to exclude the microwire(s) with the units when extracting the LFP, avoiding the need to perform spike removal.

In a number of analyses the spike train is convolved with a Gaussian, in places with a window length of 250 ms and in others 25 ms. It is suspected that windows of varying lengths would induce "oscillations" of different frequencies, and would thus generate results biased towards the window length used. Can the authors justify their choices where these values are used, and/or provide some sensitivity analyses to show that the results are somewhat independent of the window length of the Gaussian used to convolve with the time series?
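The sensitivity analysis requested here amounts to smoothing the same spike train with Gaussian kernels of different widths and comparing the resulting rate estimates; a minimal sketch (kernel truncation at ±4 SD and the normalisation to Hz are illustrative choices, not the paper's method):

```python
import numpy as np

def smooth_spike_train(spike_times_s, duration_s, fs, sigma_s):
    """Instantaneous firing-rate estimate (Hz): bin spikes at 1/fs
    resolution (at most one spike per bin in this sketch) and convolve
    with a Gaussian kernel of SD sigma_s seconds."""
    n = int(duration_s * fs)
    train = np.zeros(n)
    idx = (np.asarray(spike_times_s) * fs).astype(int)
    train[idx[idx < n]] = 1.0
    t = np.arange(-4 * sigma_s, 4 * sigma_s, 1 / fs)
    kernel = np.exp(-t**2 / (2 * sigma_s**2))
    kernel /= kernel.sum() / fs  # discrete integral = 1 -> output in Hz
    return np.convolve(train, kernel, mode="same")
```

Running this with, say, sigma values spanning 25 ms to 250 ms and recomputing the downstream statistic at each width is one way to show the results do not hinge on the window length.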

Conceptual:<br /> The co-firing analyses are very interesting and novel. In Table S1 are listed locally and distally coupled neurons. There are some pairs, for example, where the distally coupled neuron is in EC and the downstream one in the hippocampus, and then there is a pair that is the opposite of this (dist: hippo, local: EC). There appear to be a number of such "reversals"; despite the delay between these two regions, one would assume them to be similar in sign and magnitude given the units are in the same two regions. It seems surprising that in two identical regions of the hippocampus the flow of information or "causality" could be reversed, when/if one assumes information flows through the system from EC to hippocampus. This seems unusual and hard to reconcile given what is known about how information flows through the MTL system.

One more new piece of software: ExTab (official site title: "ExTab官网|多标签文件管理器", a multi-tab file manager). It should be a lightweight tool; the official site does not state which systems it supports. As of today (2021-05-23) the software still has some bugs; to give feedback, you can post in its Baidu Tieba forum or QQ group.

      windows

  9. icla2022.jonreeve.com icla2022.jonreeve.com
    1. The career of our play brought us through the dark muddy lanes behind the houses where we ran the gauntlet of the rough tribes from the cottages, to the back doors of the dark dripping gardens where odours arose from the ashpits, to the dark odorous stables where a coachman smoothed and combed the horse or shook music from the buckled harness. When we returned to the street light from the kitchen windows had filled the areas. If my uncle was seen turning the corner we hid in the shadow until we had seen him safely housed

      I find this section interesting because of its repeated mentioning of the words “dark” and “odors.” There is also an immediate reference to “light” and “shadow” afterwards. This reminds me of a passage in the previous reading. It seems like the author likes using light and darkness and the contrast between them.