10,000 Matching Annotations
  1. Sep 2025
    1. Reviewer #1 (Public review):

      Summary:

      Weiss and co-authors presented a versatile probabilistic tool. aTrack helps in classifying tracking behaviors and understanding important parameters for different types of single-particle motion: Brownian, Confined, or Directed motion. The tool can be used further to analyze populations of tracks and the number of motion states. This is a stand-alone software package, making it user-friendly for a broad group of researchers.

      Strengths:

      This manuscript presents a novel method for trajectory analysis.

      Comments on revisions:

      The authors have strengthened and improved the manuscript.

    2. Reviewer #2 (Public review):

      Summary:

      The authors present a software package "aTrack" for identification of motion types and parameter estimation in single-particle tracking data. The software is based on maximum likelihood estimation of the time-series data given an assumed motion model and likelihood ratio tests for model selection. They characterized the performance of the software mostly on simulated data and showed that it is applicable to experimental data.

      Strengths:

      Although many tools exist in the single-particle tracking (SPT) field, this particular software package is developed using an innovative mathematical model and a probabilistic approach. It also provides inference of motion types, which is critical for answering biological questions in SPT experiments.

      (1) The authors adopt a novel mathematical framework, which is unique in the SPT field.

      (2) The authors have validated their method extensively using simulated tracks and compared to existing methods when appropriate.

      (3) The code is freely available.

      Weaknesses:

      The authors did a good job during the revision of addressing most of the weaknesses in my (as well as the other reviewers') first round of review. Nevertheless, the following issue is still not fully addressed.

      The hypothesis testing method presented here lacks a rigorous statistical foundation. The authors improved on this point after the revision, but in their newly added SI section "Statistical Test", they only justified their choices using "hand-waving" arguments (i.e., there is not a single reference to proper statistical textbooks or earlier works in this important section). I understand that sometimes mathematical rigor comes later, after some intuition-guided choices of critical parameters seem to work, but I nevertheless need to point it out as a remaining weakness.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review): 

      Summary: 

      Weiss and co-authors presented a versatile probabilistic tool. aTrack helps in classifying tracking behaviors and understanding important parameters for different types of single-particle motion: Brownian, Confined, or Directed motion. The tool can be used further to analyze populations of tracks and the number of motion states. This is a stand-alone software package, making it user-friendly for a broad group of researchers. 

      Strengths: 

      This manuscript presents a novel method for trajectory analysis. 

      Weaknesses: 

      (1) In the results section, is there any reason to choose the specific range of track length for determining the type of motion? The starting value is fine, and would be short enough, but do the authors have anything to report about how much is too long for the model? 

      We chose to test this range of track lengths (five to hundreds of steps) to cover the broad range of scenarios arising from single proteins or fluorophores to brighter objects with more labels. While there is no upper limit per se, the computation time of our method scales linearly with track length; 100 time points take ~2 minutes to run on a standard consumer-level desktop CPU. We have added the following sentence to note how the time cost grows with trajectory length:  

      “The recurrent formula enables our model computation time to scale linearly with the number of time points.”
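      To illustrate why such a recurrence yields linear scaling, here is a minimal sketch (our own illustration, not aTrack's actual implementation) of the exact log-likelihood of a 1D Brownian track with localization error, computed in O(N) with a Kalman-filter-style recursion; the function name and the flat-prior initialization are assumptions made for this example:

```python
import math

def brownian_loglik(y, d2, s2):
    """Exact log-likelihood of a 1D Brownian track with localization error,
    via a Kalman-filter recursion: O(N) in the number of time points.
    d2: per-step diffusion variance; s2: localization-error variance."""
    # Flat prior on the first true position: the first observation alone
    # carries no information about the parameters.
    mu, P = y[0], s2          # posterior mean/variance of the hidden position
    loglik = 0.0
    for obs in y[1:]:
        P_pred = P + d2       # predict: diffusion adds variance
        S = P_pred + s2       # innovation variance
        resid = obs - mu
        loglik += -0.5 * (math.log(2 * math.pi * S) + resid**2 / S)
        K = P_pred / S        # Kalman gain
        mu += K * resid       # update hidden-position estimate
        P = (1 - K) * P_pred
    return loglik
```

      Each time point costs a constant number of operations, so doubling the track length roughly doubles the runtime.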

      (2) Robustness to model mismatches is a very important section that the authors have uplifted diligently. Understanding where and how the model is limited is important. For example, the authors mentioned the limitation of trajectory length, do the authors have any information on the trajectory length range at which this method works accurately? This would be of interest to readers who would like to apply this method to their own data. 

      We agree that limitations are important to estimate, and trajectory length is an important consideration when choosing how to analyze a dataset. We report the categorization certainty, i.e. the likelihood differences, for a range of track lengths (Fig. 2a,c; Fig. 3c-d; and Fig. 4c,g).

      For example, here are the key plots from Fig. 2 quantifying the relative likelihoods; the light areas represent the range of useful likelihood ratios, and being within the light region is necessary for reliable categorization.

      We only performed analysis up to track lengths of 600 time steps, but parameter estimation and significance can only improve when increasing the track length, as long as the model assumptions are verified. The broader limitations and future opportunities for new methods are now expanded upon in the discussion, for example switching between states, and model and state ambiguities (bound vs. very slow diffusion vs. very slow directed motion).

      (3) aTrack extracts certain parameters from the trajectories to determine the motion types. However, it is not very clear how certain parameters are calculated. For example, is the diffusion coefficient D calculated from fitting, and how is the confinement factor defined and estimated, with equations? This information will help the readers to understand the principles of this algorithm.

      We apologize for the confusion. All the model parameters are fit using the maximum likelihood approach. To make this point clearer in the manuscript, we have made three changes:

      (1) We modified the following sentence to replace “determined” with "fit”:

      “Finally, Maximum Likelihood Estimation (MLE) is used to fit the underlying parameter value”

      (2) We added the following sentence in the main text :

      “In our model, the velocity is the characteristic parameter of directed motion and the confinement factor represents the force within a potential well. More precisely, the confinement factor $l$ is defined such that at each time step the particle position is updated by $l$ times the distance particle/potential well center (see the Methods section for more details).”.

      (3) We have added a new section in the methods, called Fitting Method, where we have added the explanation below:

      “For the pure Brownian model, the parameters are the diffusion coefficient and the localization error. For the confinement model, the parameters are the diffusion coefficient, the localization error, the confinement factor, and the diffusion coefficient of the potential well. For the directed model, the parameters are the diffusion coefficient, the localization error, the initial velocity and the acceleration variance.

      These parameters are estimated using the maximum likelihood approach, which consists of finding the parameters that maximize the likelihood. We perform this fitting step using gradient descent via a TensorFlow model. All the estimates presented in this article are obtained from a single set of initial parameters to demonstrate that the convergence capacity of aTrack is robust to the initial parameter values.”
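      As an illustration of the confinement update rule quoted above, a toy 1D generator for the confined model might look like this (a sketch under our reading of the text, not aTrack's simulator; the function and parameter names are ours):

```python
import random

def simulate_confined_track(n_steps, d, loc_error, l, d_well, seed=0):
    """Toy 1D generator for the confined-motion model described above
    (illustrative sketch only). At each step the particle diffuses with
    step std d, then moves by a fraction l of its distance to the
    potential-well center; the well center itself diffuses with step
    std d_well. Observed positions add an independent localization error."""
    rng = random.Random(seed)
    x, c = 0.0, 0.0                 # hidden particle and well-center positions
    observed = []
    for _ in range(n_steps):
        x += rng.gauss(0.0, d)      # Brownian step
        x += l * (c - x)            # pull toward the well center by factor l
        c += rng.gauss(0.0, d_well) # the well itself drifts
        observed.append(x + rng.gauss(0.0, loc_error))
    return observed
```

      Note that with l = 1 the particle is reset onto the well center at every step (fully bound), while l = 0 recovers pure Brownian motion.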

      (4) The authors mentioned the scenario where a particle may experience several types of motion simultaneously. How are these motions simulated, and what do they mean in terms of motion types? Are they mixed motion (a particle switches motion types in the same trajectory) or do they simply present features of several motion types? It is not intuitive to the readers that a particle can be diffusive (Brownian) and directed at the same time. 

      In the text, we present an example where one can observe this type of motion to help the reader understand when this type of motion can be met: “Sometimes, particles undergo diffusion and directed motion simultaneously, for example, particles diffusing in a flowing medium (Qian 1991).”

      This is simulated by adding two terms to the hidden position variable before adding a localization-error term to create the observed variable. In the analysis, this manifests as non-zero values for both the diffusion coefficient and the linear velocity; see, for example, Figure 4g and the associated text, where a single particle moves with a directed component and a Brownian diffusion component at each step.

      We did not simulate transitions between types of motion. Switching is not treated by this current model; however, this limitation is described in the discussion and our team and others are currently working on addressing this challenge.
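      The two-term construction described above can be sketched as follows (an illustrative Python sketch, not aTrack's actual simulation code; names are ours):

```python
import random

def simulate_brownian_plus_directed(n_steps, d, velocity, loc_error, seed=0):
    """Sketch of a track that is simultaneously Brownian and directed:
    the hidden position receives both a diffusive term and a
    constant-velocity term at every step, and the observed position adds
    an independent localization error."""
    rng = random.Random(seed)
    x = 0.0
    observed = []
    for _ in range(n_steps):
        x += rng.gauss(0.0, d) + velocity   # two terms on the hidden variable
        observed.append(x + rng.gauss(0.0, loc_error))
    return observed
```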

      Reviewer #2 (Public Review): 

      Summary: 

      The authors present a software package "aTrack" for identification of motion types and parameter estimation in single-particle tracking data. The software is based on maximum likelihood estimation of the time-series data given an assumed motion model and likelihood ratio tests for model selection. They characterized the performance of the software mostly on simulated data and showed that it is applicable to experimental data. 

      Strengths: 

      A potential advantage of the presented method is its wide applicability to different motion types. 

      Weaknesses: 

      (1) There has been a lot of similar work in this field. Even though the authors included many relevant citations in the introduction, it is still not clear what this work uniquely offers. Is it the first time that direct MLE of the time-series data was developed? Suggestions to improve would include (a) better wording in the introduction section, (b) comparing to other popular methods (based on MSD, step-size statistics (Spot-On, eLife 2018;7:e33125), for example) using the simulated dataset generated by the authors, (c) comparing to other methods using data set in challenges/competitions (Nat. Comm (2021) 12:6253).  

      We thank the reviewer for this suggestion and agree that the explanation of the innovative aspects of our method in the introduction was not clear enough. We have now modified the introduction to better explain what is improved here compared to previous approaches.

      “The main innovations of this model are: 1) it uses analytical recurrence formulas to perform the integration step for complex motion, improving speed and accuracy; 2) it handles both confined and directed motion; 3) anomalous parameters, such as the center of the potential well and the velocity vector are allowed to change through time to better represent tracks with changing directed motion or confinement area; and lastly 4) for a given track or set of tracks, aTrack can determine whether tracks can be statistically categorized as confined or directed, and the parameters that best describe their behavior, for example, diffusion coefficient, radius of confinement, and speed of directed motion.”

      Regarding alternatives, we compare our method in the text to the best-performing algorithm of the 2021 Anomalous Diffusion (AnDi) Challenge mentioned by the reviewer in Figure 6 (RANDI; Argun et al., arXiv, 2021; Muñoz-Gil et al., Nat. Commun., 2021). Notably, both methods performed similarly on fBm, but ours was more robust in cases where there were small differences between the process underlying the data and the model assumptions, a likely scenario in real datasets. Regarding Spot-On, it was not mentioned because it only deals with multiple populations of Brownian diffusers, preventing a quantitative comparison.

      (2) The hypothesis testing method presented here has a number of issues: first, there is no definition of the test statistic. Usually, the test statistic is defined given a specific (Type I and/or Type II) error rate. There is also no discussion of the specificity and sensitivity of the testing results (i.e., what is the probability of misidentifying a Brownian trajectory as directed? etc.).

      We now explain our statistical approach and how to perform hypothesis testing with our metric in a new supplementary section, Statistical test. 

      We use the likelihood ratio as a more conservative alternative to the p-value. In Fig S2, we show that our metric is an upper bound of the p-value and can be used to perform hypothesis testing with a chosen type I error rate. 

      Relatedly, it is not clear what Figure 2e (and other similar plots) means, as the likelihood ratio is small throughout the parameter space. Also, for likelihood ratio tests, the authors need to discuss how model complexity affects the testing outcome (as more complex models tend to be more "likely" for the data) and also how the likelihood function is normalized (normalization is not an issue for MLE but critical for ratio tests). 

      We present the likelihood ratio as an upper bound of the p-value. Therefore, we can reject the null hypothesis if it is smaller than a given threshold, e.g. 0.05; this threshold should be decreased if multiple tests are performed. The color scale we show in the figure is meant to highlight the working range (light) and the ambiguous range (dark) of the method.

      As the reviewer mentions, we expect the alternative hypothesis to result in higher likelihoods than the simpler null hypothesis for null-hypothesis tracks, but, as seen in Fig. S2, the likelihood ratio of a dataset corresponding to the null hypothesis is strongly skewed toward its upper limit of 1. This means that for most tracks, the likelihood is not (or only slightly) affected by the model complexity. The likelihoods of all the models are normalized so that their integrals over the data equal 1/A, with A the area of the field of view, which is independent of the model complexity.
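      As a sketch of this decision rule (our paraphrase of the response; `lr_test` and its signature are not part of aTrack), rejecting the null hypothesis when the likelihood ratio falls below the corrected threshold could look like:

```python
import math

def lr_test(loglik_null, loglik_alt, alpha=0.05, n_tests=1):
    """Treat the likelihood ratio L_null / L_alt as an upper bound on the
    p-value and reject the null when it falls below the multiple-testing
    corrected threshold alpha / n_tests. Returns (reject, ratio)."""
    # Ratio is in [0, 1] whenever the alternative fits at least as well.
    ratio = math.exp(loglik_null - loglik_alt)
    return ratio < alpha / n_tests, ratio
```

      For example, with 10 tests performed, the per-test threshold drops from 0.05 to 0.005, so borderline ratios no longer lead to rejection.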

      (3) Relating to the mathematical foundation (Figure 1b). The measured positions are drawn as direct arrows from the real position states: this infers instantaneous localization. In reality, there is motion blur which introduces a correlation of the measured locations. Motion blur is known to introduce bias in SPT analysis, how does it affect the method here? 

      The reviewer raises an important point, as our model does not explicitly consider motion blur. We have now added a paragraph that presents how our model performs in case of motion blur in the section called Robustness to model mismatches. This section and the corresponding new Supplemental Fig. S7 demonstrate that the estimated diffusion length is accurate so long as the static localization error is higher than the dynamic localization error. If the dynamic localization error is higher, our model systematically underestimates the diffusion length by a factor of (2/3)^0.5 ≈ 0.82, which can be corrected for with an added post-processing step.  
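      The post-processing correction mentioned here amounts to a one-line rescaling (a sketch under the stated assumption that the dynamic localization error dominates; the function name is ours):

```python
def correct_for_motion_blur(estimated_diffusion_length):
    """Undo the systematic (2/3)**0.5 underestimation of the diffusion
    length when the dynamic localization error dominates (see text)."""
    return estimated_diffusion_length / (2.0 / 3.0) ** 0.5
```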

      (4) The authors did not go through the interpretation of the figures. This may be a matter of style, but I find the figures ambiguous to interpret at times.  

      We thank the reviewer for their feedback on improving readability. To avoid overly repetitive and lengthy sections of text, we have opted for a concise approach. This allows us to present closely related panels at the same point in the text, while not ignoring important variations and tests. Considering this and the other reviewers' feedback, we have added more information and interpretation throughout our manuscript to improve interpretability.

      (5) It is not clear to me how the classification of the 5 motion types was accomplished. 

      We have modified the specific text related to this figure to describe an illustrative example of how one could use aTrack on a dataset where little is known: first, we present the method to determine the number of states; second, we verify that the parameter estimates correspond to the different states.  

      Classifying individual tracks is possible. While not done in the section corresponding to Fig. 5, it is done in Fig. 7 and in a new supplementary plot, Fig. S9b (shown below). In brief, our method accomplishes this by computing the likelihood of each track given each state: the probability that a given track is in state k equals the likelihood of the track given that state divided by the sum of the likelihoods over all states. 
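      The per-track state assignment described here reduces to normalizing the per-state likelihoods; a minimal sketch (function name ours, not aTrack's API), computed from log-likelihoods in a numerically stable way:

```python
import math

def state_probabilities(logliks):
    """Probability that a track is in state k: its likelihood under state k
    divided by the sum of its likelihoods over all states. Subtracting the
    max log-likelihood first avoids underflow for very negative values."""
    m = max(logliks)
    weights = [math.exp(ll - m) for ll in logliks]
    total = sum(weights)
    return [w / total for w in weights]
```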

      (6) Figure 3. Caption: what is ((d_{est}-0.1)/0.1)? Also panel labeled as "d" should be "e". 

      Thank you for bringing these errors to our attention, the panel and caption have been corrected.

      Reviewer #3 (Public Review): 

      Summary: 

      In this work, Simon et al present a new computational tool to assess non-Brownian single-particle dynamics (aTrack). The authors provide a solid groundwork to determine the motion type of single trajectories via an analytical integration of multiple hidden variables, specifically accounting for localization uncertainty, directed/confined motion parameters, and, very novel, allowing for the evolution of the directed/confined motion parameters over time. This last step is, to the best of my knowledge, conceptually new and could prove very useful for the field in the future. The authors then use this groundwork to determine the motion type and its corresponding parameter values via a series of likelihood tests. This accounts for obtaining the motion type which is statistically most likely to be occurring (with Brownian motion as null hypothesis). Throughout the manuscript, aTrack is rigorously tested, and the limits of the methods are fully explored and clearly visualised. The authors conclude with allowing the characterization of multiple states in a single experiment with good accuracy and explore this in various experimental settings. Overall, the method is fundamentally strong, well-characterised, and tested, and will be of general interest to the single-particle-tracking field. 

      Strengths: 

      (1) The use of likelihood ratios gives a strong statistical relevance to the methodology. There is a sharp decrease in likelihood ratio between e.g. confinement of 0.00 and 0.05 and velocity of 0.0 and 0.002 (figure 2c), which clearly shows the strength of the method - being able to determine 2nm/timepoint directed movement with 20 nm loc. error and 100 nm/timepoint diffusion is very impressive. 

      We apologize for the confusion; the directed tracks in Fig. 2 have no Brownian-motion component, i.e. D = 0. We have made this clearer in the main text. Specifically, this section of the text refers to a track in linear motion with 2 nm displacements per step. With 70 time points (69 steps), a single particle that has moved 138 nm in total, with a localization error of 20 nm (95% uncertainty range of 80 nm), can be statistically distinguished from slow diffusive motion.

      In Fig. 4g, we explore the capabilities of our method to detect if a diffusive particle also has a directed motion component. 

      (2) Allowing the hidden variables of confinement and directed motion to change during a trajectory (i.e. the q factor) is very interesting and allows for new interpretations of data. The quantifications of these variables are, to me, surprisingly accurate, but well-determined. 

      (3) The software is well-documented, easy to install, and easy to use. 

      Weaknesses: 

      (1) The aTrack principle is limited to the motions incorporated by the authors, with, as far as I can see, no way to add new analytical non-Brownian motion. For instance, being able to add a dynamical state-switching model (i.e. quick on/off switching between mobile and non-mobile, for instance, repeatable DNA binding of a protein), could be of interest. I don't believe this necessarily has to be incorporated by the authors, but it might be of interest to provide instructions on how to expand aTrack.  

      We agree that handling dynamic state switching is very useful and highlight this potential future direction in the discussion. The revised text reads:

      “An important limitation of our approach is that it presumes that a given track follows a unique underlying model with fixed parameters. In biological systems, particles often transition from one motion type to another; for example, a diffusive particle can bind to a static substrate or molecular motor (46). In such cases, or in cases of significant mislinkings, our model is not suitable. However, this limitation can be alleviated by implicitly allowing state transitions with a hidden Markov Model (15) or alternatives such as change-point approaches (30, 47, 48), and spatial approaches (49).”

      (2) The experimental data does not very convincingly show the usefulness of aTrack. The authors mention that SPBs are directed in mitosis and not in interphase. This can be quantified and studied by microscopy analysis of individual cells and confirming the aTrack direction model based on this, but this is not performed. Similarly, the size of a confinement spot in optical tweezers can be changed by changing the power of the optical tweezer, and this would far more strongly show the quantitative power of aTrack. 

      We agree with the reviewer and have revised the biological experiment section significantly to better illustrate the potential of aTrack in various use cases.

      Now, we show an experiment to quantify the effect of LatA, an actin inhibitor, on the fraction of directed tracks obtained with aTrack. We find that LatA significantly decreases directed motion while a LatA-resistant mutant is not affected (Fig. 7a-c).

      As suggested by the reviewer, we have expanded the optical tweezer experiment by varying the laser power. As expected, increasing the laser power decreases the confinement radius.

      (3) The software has a very strict limit on the number of data points per trajectory, which is a user input. Shorter trajectories are discarded, while longer trajectories are cut off to the set length. It is not explained why this is necessary, and I feel it deletes a lot of useful data without clear benefit (in experimental conditions).

      We thank the reviewer for this recommendation; we have now modified the architecture of our model to enable users to consider tracks of multiple lengths. Note that the computation time is proportional to the longest track length times the number of tracks.  

      Reviewer #2 (Recommendations For The Authors): 

      Develop a better mathematical foundation for the likelihood ratio tests. 

      We added more explanation of the likelihood ratio tests and their interpretation in a new supplementary section entitled Statistical test to address this recommendation. 

      Place this work in clearer contexts. 

      We have now revised the introduction to better contextualize this work.

      Improve manuscript clarity. 

      Based on reviewer feedback and input from others, we have addressed this point throughout the article to improve readability.

      Make the code available. 

      The code is available at https://github.com/FrancoisSimon/aTrack and now includes code for track generation.

      Reviewer #3 (Recommendations For The Authors): 

      (1) I believe the underlying model presented in Figure 1 is of substantial impact, especially when considering it as a simulation tool. I would suggest the authors make their method also available as a simulator (as far as I can tell, this is not explicitly done in their code repository, although logically the code required for the simulator should already be in the codebase somewhere). 

      Thank you for this suggestion; the simulation scripts are now in the GitHub repository together with the rest of the analysis method: https://github.com/FrancoisSimon/aTrack

      (2) The authors should explore and/or discuss the effects of wrong trajectory linking to their method. Throughout the text, fully correct trajectory linking is assumed and assessed, while in real experiments, it is often the case that trajectory linking is wrong, e.g. due to blinking emitters, imaging artefacts, high-density localizations, etc etc. This would have a major impact on the accuracy of trajectories, and it is extremely relevant to explore how this is translated to the output of aTrack. 

      As the reviewer notes, our current model does not account for track mislinking. This limits the method to data with lower fluorophore densities, which is the typical use case for SPT. We have added a brief description of this issue to the discussion of limitations.  

      (3) aTrack only supports 2D-tracking, but I don't believe there is a conceptual reason not to have this expanded to three dimensions. 

      The stand-alone software is currently limited to 2D tracks, however, the aTrack Python package works for any number of dimensions (i.e. 1-3). Note that since the current implementation assumes a single localization error for all axes, more modifications may be required for some types of 3D tracking. See https://github.com/FrancoisSimon/aTrack for more details about aTrack implementations.

      (4) Crucial information is missing in the experimental demonstrations. Especially in the NP-bacteria dataset, I miss scalebars, and information on the number of tracks. It is not explained why 5 different states are obtained - especially because I would naively expect three states: immobile NPs (e.g. stuck to glass), diffusing NPs, and NPs attached to bacteria, and thus directed. Figure 7e shows three diffusive states (why more than one?), no immobile states (why?), and two directed states (why?). 

      We thank the reviewer for pointing out these issues. We have now added scale bars and more experimental details to the figure and text, as well as modifying the plot to more clearly distinguish the directed nanoparticles that are attached to cells from the diffusive nanoparticles.  

      Likely, our focal plane was too high to see the particles stuck on the glass. The multiple diffusive states may be caused by different sizes of nanoparticle complexes; the multiple directed states may be caused by the fact that the directed motion of the cell-attached nanoparticles occasionally shows drastic changes of orientation. We have also clarified in the text how multiple states can help handle a heterogeneous population, as was shown by Prindle et al. 2022, Microbiol Spectr. The characterization and phenotyping of microbial populations by nanoparticle tracking was published in Zapata et al. 2022, Nanoscale. 

      (5) I don't think I agree that 'robustness to model mismatches' is a good thing. Very crudely, the fact that aTrack finds fractional Brownian motion to be normal Brownian motion is technically a downside - and this should be especially carefully positioned if (in the future) a fractional Brownian motion model would be added to aTrack. I think that the author's point can be better tested by e.g. widely varying simulated vs fitted loc precision/diffusion coefficient (which are somewhat interchangeable).

      In this context, our intention in describing robustness to “model mismatches” refers to classifying subdiffusion as subdiffusive (and superdiffusion as superdiffusive) irrespective of the exact physics of the motion, that is, to use aTrack the way MSD analysis is often deployed. This is important in real-world applications, where simple mathematical models cannot perfectly represent real tracks of greater complexity. 

      Inevitably, some fraction of tracks with pure Brownian motion may appear to match a fractional Brownian motion, and thus statistical tests are needed to determine whether this is significant. In general, aTrack finds fBm to be normal Brownian motion only when the anomalous coefficient is near 1, i.e. when the two models are indeed the same. When analysing fBm tracks with anomalous coefficients of 0.5 or 1.5, aTrack finds that these tracks are better explained by our confined diffusion model or directed motion model, respectively (please see Fig. 6a, copied below). 

      To better clarify our objective, the section now has a brief introduction that reads:

      “One of the most important features of a method is its robustness to deviations from its assumptions. Indeed, experimental tracking data will inevitably not match the model assumptions to some degree, and models need to be resilient to these small deviations.”  

      Smaller points: 

      (1) It is not clear what a biological example is of rotational diffusion. 

      We modified the text to better explain the use of rotational diffusion.

      (2) The text in the section on experimental data should be expanded and clarified, there currently are multiple 'floating sentences' that stop halfway, and it does not clearly describe the biological relevance and observed findings.  

      We thank the reviewer for pointing out this issue. We have reworked the experimental section to better and more clearly explain the biological relevance of the findings.

      (3) Caption of figure 3: 'd' should be 'e'. 

      (4) Caption of Figure 7: log-likelihood should be Lconfined - Lbrownian, I believe. 

      (5) Equation number missing in SI first sentence. 

      (6) Supplementary Figure 1, top panel: the axis should be Lc-Lb instead of Ld-Lb. 

      We have made these corrections, thank you for bringing them to our attention.

    1. Is it a good poem?

      I was about to ask what makes a poem good but then I continued reading and saw this insane checklist… This is the type of stuff that drives people away from writing (in general)!

    1. eLife Assessment

      The paper reports valuable findings about the mechanism of regulation of the heat shock response in plants that acts as a brake to prevent hyperactivation of the stress response, which have theoretical or practical implications for a subfield. The study presented by the authors provides solid methods, data, and analysis that broadly support the claims. This report presents helpful information regarding newly identified spliced HSF forms in Arabidopsis and highlights key insights into heat stress and plant growth.

    2. Reviewer #2 (Public review):

      Summary:

      The authors report that the Arabidopsis short HSFs S-HsfA2, S-HsfA4c, and S-HsfB1 confer extreme heat tolerance. They have truncated DNA-binding domains that bind to a new heat-regulated element. Focusing on short HsfA2, the authors highlight the molecular mechanism by which S-HSFs prevent HSR hyperactivation via negative regulation of HSP17.6B. The S-HsfA2 protein binds to the DNA-binding domain of HsfA2, thus preventing its binding to HSEs, eventually attenuating HsfA2-activated HSP17.6B promoter activity. This report adds insights to our understanding of heat tolerance and plant growth.

      Strengths:

      (1) The manuscript presents ample experiments to support the claim.

      (2) The manuscript covers a robust number of experiments and provides specific figures and graphs in support of their claims.

      (3) The authors have chosen to focus on stress tolerance in a changing environment.

      (4) The authors have summarized the probable mechanism using a figure.

      Weaknesses:

      Quite minimal:

      (1) Fig. 3. the EMSA to reveal binding

      (2) Alignment of supplementary figures 6-7.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      In the present work, Chen et al. investigate the role of short heat shock factors (S-HSF), generated through alternative splicing, in the regulation of the heat shock response (HSR). The authors focus on S-HsfA2, an HSFA2 splice variant containing a truncated DNA-binding domain (tDBD) and a known transcriptional-repressor leucin-rich domain (LRD). The authors found a two-fold effect of S-HsfA2 on gene expression. On the one hand, the specific binding of S-HsfA2 to the heat-regulated element (HRE), a novel type of heat shock element (HSE), represses gene expression. This mechanism was also shown for other S-HSFs, including HsfA4c and HsfB1. On the other hand, S-HsfA2 is shown to interact with the canonical HsfA2, as well as with a handful of other HSFs, and this interaction prevents HsfA2 from activating gene expression. The authors also identified potential S-HsfA2 targets and selected one, HSP17.6B, to investigate the role of the truncated HSF in the HSR. They conclude that S-HsfA2-mediated transcriptional repression of HSP17.6B helps avoid hyperactivation of the HSR by counteracting the action of the canonical HsfA2.

      The manuscript is well written and the reported findings are, overall, solid. The described results are likely to open new avenues in the plant stress research field, as several new molecular players are identified. Chen et al. use a combination of appropriate approaches to address the scientific questions posed. However, in some cases, the data are inadequately presented or insufficient to fully support the claims made. As such, the manuscript would highly benefit from tackling the following issues:

(1) While the authors report the survival phenotypes of several independent lines, thereby strengthening the conclusions drawn, they do not specify whether the presented percentages are averages of multiple replicates or if they correspond to a single repetition. The number of times the experiment was repeated should be reported. In addition, Figure 7c lacks the quantification of the hsp17.6b-1 mutant phenotype, which is the background of the knock-in lines. This is an essential control for this experiment.

      For the seedling survival rates and gene expression levels, we added statistical analysis based on at least two independent experiments. Figure 6E of the revised manuscript shows the phenotypes of the WT, hsp17.6b-1, HSP17.6B-KI, and HSP17.6B-OE plants and the statistical analysis of their seedling survival rates after heat exposure.

      (2) In Figure 1c, the transcript levels of HsfA2 splice variants are not evident, as the authors only show the quantification of the truncated variant. Moreover, similar to the phenotypes discussed above, it is unclear whether the reported values are averages and, if so, what is the error associated with the measurements. This information could explain the differences observed in the rosette phenotypes of the S-HsfA2-KD lines. Similarly, the gene expression quantification presented in Figures 4 and 5, as well as the GUS protein quantification of Figure 3F, also lacks this crucial information.

RT‒qPCR analysis of the expression of these genes from at least two independent experiments was performed. We also added this missing information to the figure legends.

      (3) The quality of the main figures is low, which in some cases prevents proper visualization of the data presented. This is particularly critical for the quantification of the phenotypes shown in Figure 1b and for the fluorescence images in Figures 4f and 5b. Also, Figure 9b lacks essential information describing the components of the performed experiments.

      We apologize; owing to the limitations of equipment and technology, we will attempt to obtain high-quality images in the future. A detailed description of Figure 9b is provided in the methods section.

      (4) Mutants with low levels of S-HsfA2 yield smaller plants than the corresponding wild type. This appears contradictory, given that the proposed role of this truncated HSF is to counteract the growth repression induced by the canonical HSF. What would be a plausible explanation for this observation? Was this phenomenon observed with any of the other tested S-HSFs?

We found that the constitutive expression of S-HsfA2 inhibits Arabidopsis growth. Considering this, Arabidopsis plants do not produce S-HsfA2 under normal conditions, avoiding growth inhibition. However, under heat stress, Arabidopsis plants generate S-HsfA2, which contributes to heat tolerance and growth balance. In the revised manuscript, we provided supporting data indicating that constitutive expression of S-HsfA4c-GFP or S-HsfB1-RFP confers extreme heat stress tolerance on Arabidopsis but inhibits root growth (Supplemental Figure S8). Therefore, this phenomenon is also observed for S-HsfA4c-GFP and S-HsfB1-RFP.

      (5) In some cases, the authors make statements that are not supported by the results:<br /> (i) the claim that only the truncated variant expression is changed in the knock-down lines is not supported by Figure 1c;

      In three S-HsfA2-KD lines, RT‒PCR splicing analysis revealed that HsfA2-II but not HsfA2-III is easily detected. In the revised manuscript, we added RT‒qPCR analysis, and the results revealed that the abundance of HsfA2-III and HsfA2-II but not that of the full-length HsfA2 mRNA significantly decreased under extreme heat (Figure 1C). Considering that HsfA2-III but not HsfA2-II is a predominant splice variant under extreme heat (Liu et al., 2013), S-HsfA2-KD may lead to the knockdown of alternative HsfA2 splicing transcripts, especially HsfA2-III.

      (ii) the increase in GUS signal in Figure 3a could also result from local protein production;

      We included this possibility in the results analysis.

      (iii) in Figure 6b, the deletion of the HRE abolishes heat responsiveness, rather than merely altering the level of response; and

In the revised manuscript, we added new data concerning the roles of HREs and HSEs in the response of the HSP17.6B promoter to heat stress (Figure 6A). These results suggest that the HRE and HSE elements are responsible for the response of the HSP17.6B promoter to heat stress: the HRE negatively regulates the HSP17.6B promoter at 37°C, whereas the HSE positively regulates it at 42°C.

      (iv) the phenotypes in Figure 8b are not clear enough to conclude that HSP17.6B overexpressors exhibit a dwarf but heat-tolerant phenotype.

When grown in soil, the HSP17.6B-OE seedlings presented a dwarf phenotype compared with the WT control. Heat stress resulted in browning of the WT leaves, but the leaves of the HSP17.6B-OE plants remained green, suggesting that the HSP17.6B-OE seedlings also presented a heat-tolerant phenotype in the soil. These results are qualitative rather than quantitative experimental data; therefore, the conclusions were adjusted in the abstract and results sections.

      Reviewer #2 (Public review):

      Summary:

The authors report that Arabidopsis short HSFs S-HsfA2, S-HsfA4c, and S-HsfB1 confer extreme heat tolerance. They have truncated DNA binding domains that bind to a new heat-regulated element. Considering Short HSFA2, the authors have highlighted the molecular mechanism by which S-HSFs prevent HSR hyperactivation via negative regulation of HSP17.6B. The S-HsfA2 protein binds to the DNA binding domain of HsfA2, thus preventing its binding to HSEs, eventually attenuating HsfA2-activated HSP17.6B promoter activity. This report adds insights to our understanding of heat tolerance and plant growth.

      Strengths:

(1) The manuscript presents ample experiments to support the claim.

      (2) The manuscript covers a robust number of experiments and provides specific figures and graphs in support of their claim.

      (3) The authors have chosen a topic to focus on stress tolerance in a changing environment.

      Weaknesses:

(1) One S-HsfA2 represents all the other S-HSFs, S-HsfA4c and S-HsfB1. S-HSFs can be functionally different; regulation may be positive or negative. Maybe the other S-HSFs positively regulate height and are suppressed by the activity of other S-HSFs.

In this study, we used S-HsfA2, S-HsfA4c, and S-HsfB1 data to support the view that “splice variants of HSFs generate new plant HSFs”. We also noted that S-HsfA2 cannot represent a traditional S-HSF. S-HsfA4c and S-HsfB1 may have functions different from those of S-HsfA2 because of their different C-terminal motifs or domains. Different S-HSFs might participate in the same biological process, such as heat tolerance, through the coregulation of downstream genes. We added this information to the discussion section.

(2) Previous reports on gene regulation by HSFs can highlight the mechanism.

      In the introduction section, we included these references concerning HSFs and S-HSFs.

      (3) The Materials and Methods section could be rearranged so that it is based on the correct flow of the procedure performed by the authors.

The materials and methods and results sections are arranged in a logical order.

      (4) Graphical representation could explain the days after sowing data, to provide information regarding plant growth.

      The days after sowing (DAS) for the age of the Arabidopsis seedlings are stated in the Materials and Methods section and figure legends.

      (5) Clear images concerning GFP and RFP data could be used.

      We provided high-quality images of S-HsfA2-GFP and the GFP control (Figure 3 in the revised manuscript).

      Reviewing Editor comments:

      The EMSA shown in Figures 2, 3, 4, and 5, which are critical to support the manuscript's claims, are of poor quality, without any repeats to support. In addition, there is not much information about how these EMSA were done. I suggest including better EMSA in a new version of this manuscript.

Thank you for your suggestion. We added the missing information, including the detailed EMSA method and the number of experimental repeats, to the methods section and figure legends. We also provide high-quality images of HRE probes binding to nuclear proteins (Figure 4E).

Reviewer #1 (Recommendations for the authors):

      (1) The paper is overall well-written, but it could greatly benefit from reorganizing the results subsections. Currently, there are entire subsections dedicated to supplementary figures (e.g., lines 177-191) and main figures split into different subsections (e.g., lines 237-246). It is recommended to organize all the information related to a main figure into a single subsection and to incorporate the description of the corresponding supplementary figures. This would imply a general reorganization of the figures, moving some information to the supplementary data (for instance, the data in Figure 4 could be supplementary to Figure 5) and vice versa (Supplementary Figure 4 should be incorporated into main Figure 2, as it presents very important results). Also, Figures 7 and 8 would be better presented if merged into a single figure/subsection.

      Thank you for your suggestion. We have merged some figures into a single figure according to the main information. In the current version, there are 8 main figures, which includes a new figure.

      (2) Survival phenotypes vary widely, making reliable statistical analysis challenging. The chlorophyll and fresh weight quantifications presented in figures such as Figure 5 appear to effectively describe the phenomenon and allow for statistical comparisons. Figures 1 and 7 would benefit from including these measurements if the variability in survival percentages is too high to calculate statistical differences reliably. Also, in Figure 8, all chlorophyll measurements should be normalized to fresh weight rather than seedling number due to the dwarfism observed in the overexpressor lines.

      Thank you for pointing out your concerns. We added statistical analysis based on at least two independent experiments, including Figures 1 and 7, to the original manuscript. In Figure 8 in the original manuscript, chlorophyll measurements were normalized to fresh weight.

      (3) Typos: in Figure 3a it should be "min" not "mim"; in Supplementary Figure 3, the GFP and merge images are swapped.

      We apologize for these errors, and we have corrected them. Supplementary Figure 3 was replaced with new images and was included in Figure 3 in the revised manuscript.

Reviewer #2 (Recommendations for the authors):

(1) The abstract states "How this process is prevented to ensure proper plant growth has not been determined." The authors can be the first to do this, by adding graphical data on the height difference between HsfA2 Arabidopsis and wild-type Arabidopsis.

Thank you; we agree. We have added this information to the new working model (Figure 8).

      (2) The authors claim that Arabidopsis S-HsfA2, S-HsfA4c, and S-HsfB1; but have used S-HsfA2 to understand the action. The mechanisms being unknown for S-HsfA4c, and S-HsfB1 cannot be represented by S-HsfA2 to represent the mechanism.

Thank you for your valuable comments. In this study, we used S-HsfA2, S-HsfA4c, and S-HsfB1 data to support the view that “splice variants of HSFs generate new plant HSFs”. We also noted that S-HsfA2 cannot represent a traditional S-HSF because S-HsfA4c and S-HsfB1 may have functions different from those of S-HsfA2. Therefore, we deleted “representative S-HSF” from the revised manuscript. In the future, we will conduct in-depth research on the relevant mechanisms of S-HsfA4c and S-HsfB1 under your guidance.

(3) The authors could include which of the HSFs reported by other researchers to interact with other Arabidopsis genes are positively or negatively regulated in heat response, growth, or the balance between them.

      In the introduction section, we included these genes. AtHsfA2, AtHsfA3, and BhHsf1 confer heat tolerance in Arabidopsis but also result in a dwarf phenotype in plants (Ogawa et al., 2007; Yoshida et al., 2008; Zhu et al., 2009).

(4) The authors have started from the subsection plant materials and growth conditions. It is unclear where the authors obtained these HSF mutant Arabidopsis lines. Is this a continuation of some other work? As a reader, I am utterly confused by the arrangement of the materials and methods section.

      We apologize for the lack of detailed information in the Materials and methods section. These mutants were purchased from AraShare (Fuzhou, China) and verified via PCR and RT‒qPCR. We added the missing information.

      (5) Is the DAS - Days After Sowing - represented as a graph or table? This will add data to the plant growth section to clearly state the difference between the mutants and the wild-type.

      In this study, the age of the Arabidopsis seedlings was calculated as days after sowing (DAS), as stated in the Materials and Methods section and figure legends.

(6) Heat stress treatment after GUS staining looks absurd. Should it not follow after plant materials and growth conditions, which should ideally be after the plant transformation and cloning section? The initial step is definitely about plasmid construction. Kindly rearrange.

      Thank you for your valuable suggestions. We have rearranged the logical order of the materials and methods.

      (7) The expression of GFP and RFP was not clearly seen in the images. This could be because of the poor resolution of the images added.

      We obtained high-quality images of S-HsfA2-GFP (Figure 3 in the revised manuscript).

      (8) We live in an age where it is widely known that genes are not functioning independently but are coregulated and coregulate other proteins. The authors can address the role of these spliced variants on gene regulation and compare them with the HSFs.

We agree with your suggestion. In this study, HSP17.6B was identified as a direct target gene of S-HsfA2 and HsfA2, which can partly explain the role of S-HsfA2 in heat resistance and growth balance. However, the mechanism by which S-HsfA2 regulates heat tolerance and growth balance may not be limited to HSP17.6B. On the basis of the current data, we propose that the putative S-HsfA2-DREB2A-HsfA3 module might be associated with the roles of S-HsfA2 in heat tolerance and growth balance. Please refer to the discussion section for a detailed explanation.

      (9) Regulatory elements can be validated in relation to their interaction with proven HSFs.

      Supplemental Figure S3 shows that His6-HsfA2 failed to bind to the HRE in vitro.

      (10) The authors seem to be biased toward heat stress and have not worked enough on plant growth. Biochemical data and images on plant growth could be added to bring out the novelty of this manuscript.

      Thank you for your suggestion. We added new data indicating that, compared with the wild-type control, S-HsfA2-GFP, S-HsfA4c-GFP, or S-HsfB1-GFP overexpression inhibited root length (Supplemental Figure 8).

      (11) Line 251 on page 11 of the submitted manuscript says that the s-Hsfs were previously identified by Liu et al. (2013) yet in the abstract the authors claim that these s-HsFs are NEW kinds of HSF with a unique truncated DNA-binding domain (tDBD) that binds a NEW heat-regulated element (HRE).

      In our previous report, several S-HSFs, including S-HsfA2, S-HsfA4, S-HsfB1, and S-HsfB2a, were identified primarily in Arabidopsis (Liu et al., 2013). In this study, we further characterized S-HsfA2, S-HsfA4, and S-HsfB1 and revealed several features of S-HSFs. Therefore, we claim that these S-HSFs are new kinds of HSFs.

      (12) What are these NEW kinds of HRE? Which genes have these HRE? Was an in silico study conducted to study it or can any reports can be cited?

      HREs, i.e., heat-regulated elements, are newly identified heat-responsive elements in this study. The sequences of HREs are partially related to traditional heat shock elements (HSEs). Because we did not identify the essential nucleic acids required for t-DBD binding to the HRE, we did not perform an in silico study.

      (13) S-HSFs may interact with existing HSFs. Have the authors thought in this direction? It can have a role in positively regulating other sHSFs or regulating multiple expressing genes related to plant growth and other functions. This needs to be explored.

      Thank you for this point. Given that the overexpression of Arabidopsis HsfA2 or HsfA3 inhibits growth under nonstress conditions, we discussed this direction from the perspective of the interaction of S-HsfA2 with HsfA2 or HsfA3 in the revised manuscript.

      (14) The authors need to concentrate on the presentation and arrangement of both their materials and methods and result section and write them in a systematic manner (or following a workflow).

      The materials, methods and results sections are arranged in logical order.

      (15) The authors have used references in the results section which can be added to the discussion section to make it more accurate.

      Thank you for your suggestions. We have moved some references to the discussion section, but the necessary references remain in the results section.

    1. To answer this question, music education company Skoove and data experts DataPulse Research analyzed Spotify's Top 200 weekly charts across 73 countries for over a year, tracking whether each nation streams its own artists or international acts. This revealed how much each nation's Top 200 features its own artists versus international acts.

      I feel this reads better:

      "To explore this, music education platform Skoove teamed up with data experts DataPulse Research to analyze Spotify’s Top 200 weekly charts across 73 countries for over a year. The result shows which countries streamed local artists versus international acts, revealing how much each of their 'Top 200' is dominated by homegrown talent compared to global performers."

      Also, can "for over a year" be more specific by writing in weeks, since it's a weekly chart?

    2. Music carries the essence of a nation's culture. When local artists gain traction, they become sources of community pride. Yet streaming data reveals a striking divide: while India overwhelmingly supports homegrown talent, other countries flood their charts with international hits. Why do some nations fiercely protect their musical identity while others embrace global sounds?

      I rewrote this paragraph as follows:

      "Music embodies the essence of a nation's culture. When local artists break out, they become sources of pride for their communities. Yet, streaming data reveals a striking divide: while India overwhelmingly supports its homegrown talent, other countries flood their charts with international hits. Why do some nations passionately protect their musical identity while others embrace global sounds?"

    1. Did the "abode of Islam" not fit in with the idea of nation states and the growing number of secular empires that were emerging in the geographical region now known as the "Middle East"? Possibly, the European map makers and political leaders of the time did not like the idea of a growing and prosperous "abode of Islam". Especially since the European churches had undergone a lot of recent reforms and schisms during that time.

    1. Anyway, after all that, if we put it all together again, we get this:

While images cannot be included in these annotations, it is important to note how this site uses images to emphasize the text and set the general tone. The images are important for the text that references them, but they also help establish the site's general, casual tone.

    2. McMansion Hell

More on the overall structure of the digital format: the blog-like structure of the website allows for constant updating, with readers being presented with the most recent post. For non-digital literature, this kind of structure is harder to achieve, with things like newsletters or a weekly newspaper column being the closest alternatives, both of which can require more connections or resources than just making a website.

    1. In flipped classrooms, students wouldn’t use ChatGPT to conjure up a whole essay. Instead, they’d use it as a tool to generate critically examined building blocks of essays. It would be similar to how students in advanced math classes are allowed to use calculators to solve complex equations without replicating tedious, previously mastered steps.

This seems to raise another issue: it replaces truly creative ideas from humans. Instead of coming up with their own answers and thoughts in school, students are using AI in place of their own thinking and just fleshing out the AI's repeated ideas.

1. Course learning becomes not just something to be learned, repeated in a test, but can be applied to our experiences, both professionally and personally.

      This is the definition

    1. Eastern nations, such as India and Saudi Arabia, will be particularly important to consumer businesses, given their pent-up demand and willingness to spend

With global trade uncertainty due to the U.S. tariffs, it's becoming more important for trade to focus on the European and Asian markets.

    1. Vibes are all-important. This is the reason I hear — and empathize with — the most. Some photographers, especially younger ones, have grown disillusioned with the computational perfection that our phone cameras produce. Instead, they yearn for the grit, harsh lighting and chromatic peculiarities that come from aging camera sensors.

      While phone cameras often produce higher-quality images in terms of sharpness and convenience, I believe they feel less real. This is a good example of why images from a dedicated camera can feel more authentic and why some people say they even look better despite technically having lower resolution or fewer features than your phone.

      From both a physics and practical standpoint, the larger lens of a dedicated camera captures more light, which makes it perform better in low-light situations. Personally, I prefer the experience of using a camera over a phone. It feels more intentional and personal, even if it's less convenient.

1. These service industries profited from the mining boom: as failed prospectors found, the rush itself often generated more wealth than the mines. The $25.5 million in gold that left Colorado in the first seven years after the Pikes Peak gold strike, for example, was less than half of what speculators had invested in the fever.

The mining boom often enriched service industries, as the rush generated massive speculation. In Colorado, the gold extracted was less than half of what investors poured into it.

    2. At the end of 1921, the new president, Warren G. Harding, commuted Debs’ sentence to time served. We will return to resistance to World War I in a later chapter.

Why did President Harding commute Eugene Debs' sentence in 1921, and what does this suggest about changing attitudes toward World War I dissent?

    3. The Socialist Party of America (SPA), founded in 1901, carried on the American third-party political tradition. Socialist mayors were elected in thirty-three cities and towns, from Berkeley, California, to Schenectady, New York, and two socialists from Wisconsin and New York won congressional seats.

      How did the Socialist Party of America gain influence in the early 1900s and what does their electoral success reveal about public sentiment at the time?

    4. , despairing farmers faced low crop prices and found few politicians on their side.

It's odd that people let farmers struggle with falling crop prices, leaving them in economic despair. With little political support, they felt abandoned by the government.

1. And of course there is the sky, and there is also the Heart of Sky. This is the name of the god, as it is spoken.

The Heart of Sky seems to be referring to a god that controls the sky.

2. SOVEREIGN PLUMED SERPENT: Here he is seated, holding a snake in his hand. On his back he wears a quetzal bird, with its head behind his, its wings at the level of his shoulders, and its tail hanging down to the ground. From the Dresden Codex.

The snake in his hand, the bird on his back, and its tail down to the ground represent wind, knowledge, and creation? I believe so.

3. Whatever might be is simply not there: only murmurs, ripples, in the dark, in the night. Only the Maker, Modeler alone, Sovereign Plumed Serpent, the Bearers, Begetters are in the water, a glittering light. They are there, they are enclosed in quetzal feathers, in blue-green.

      Could it be that when mankind or life began on Earth, it caused a "ripple" or "imbalance" to affect the natural world?

4. pooled water, only the calm sea, only it alone is pooled.

Upon further research and analysis, this statement describes the world before creation, where the sky and sea were calm or motionless.

5. THIS IS THE ACCOUNT, here it is: Now it still ripples, now it still murmurs, ripples, it still sighs, still hums, and it is empty under the sky. Here follow the first words, the first eloquence: There is not yet one person, one animal, bird, fish, crab, tree, rock, hollow, canyon, meadow, forest. Only the sky alone is there; the face of the earth is not clear. Only the sea alone is pooled under all the sky; there is nothing whatever gathered together. It is at rest; not a single thing stirs. It is held back, kept at rest under the sky.

The statement might mean that nothing is truly valuable until it's brought together. "There is nothing whatever gathered together. It is at rest; not a single thing stirs. It is held back, kept at rest under the sky." Does rest indicate that the sky protects everything, or makes sure everything stays in balance and doesn't stray away?

6. in the sky, on the earth, the four sides, the four corners, as it is said, by the Maker, Modeler, mother-father of life, of humankind, giver of breath, giver of heart, bearer, upbringer in the light that lasts of those born in the light, begotten in the light; worrier, knower of everything, whatever there is: sky-earth, lake-sea.

The 'Maker' seems to be molding life into the world and bringing life to everything they create. Mother and father: could the gods be seen as gender-fluid or non-binary?

7. the fourfold siding, fourfold cornering, measuring, fourfold staking, halving the cord, stretching the cord

Four sides, four corners: the four corners of the globe may be what they are referring to, i.e., the North, South, East, and West Hemispheres.

8. a place to see it, a Council Book, a place to see "The Light That Came from Beside the Sea," the account of "Our Place in the Shadows," a place to see "The Dawn of Life,"

      Is the text stating that life began from a shadow and brought life into the world through light? The sea creates the illusion that the sun rises from the water at dawn and sets into the sea at dusk.

9. They accounted for everything—and did it, too—as enlightened beings, in enlightened words. We shall write about this now amid the preaching of God, in Christendom now.

To the Mayans, their gods created everything and accounted for every small detail of the natural world, as well as what lies beyond.

10. the Maker, Modeler, named Bearer, Begetter, Hunahpu Possum, Hunahpu Coyote, Great White Peccary, Coati, Sovereign Plumed Serpent, Heart of the Lake, Heart of the Sea, plate shaper, bowl shaper, as they are called, also named, also described as the midwife, matchmaker named Xpiyacoc, Xmucane, defender, protector, twice a midwife, twice a matchmaker,

Names of divine entities present at the beginning of creation. Some of the gods have more than one name.

11. And here we shall take up the demonstration, revelation, and account of how things were put in shadow and brought to light by

      Seems as if the Quiché Mayans sought to explain the world and its natural laws.

    1. The true measure of learning is not the time and energy you put in. It’s the knowledge and skills you take out.

this is slightly more nuanced in a writing class. Some of the knowledge and skills you take out of it are about effort.

    2. At the same time, it’s our responsibility to tell students who burn the midnight oil that although their B– might not have fully reflected their dedication, it speaks volumes about their sleep deprivation.

      this speaks to the need for college students to understand how the rest of their life impacts the results they see in class assignments

    3. If there wasn’t a time limit, the higher people scored on grit, the more likely they were to keep banging away at a task they were never going to accomplish.

      we need to figure out the right times and ways to allow students independent time to work things out while also providing tools, strategies, and other forms of guidance so that they are actively working towards a goal and not simply spinning their wheels in the name of grit

    4. We’ve taught a generation of kids that their worth is defined primarily by their work ethic.

      this is something I struggle with as an educator and writer. How to encourage dedication and effort, without an overreliance on the idea of "hard work"

    5. In surveys, two-thirds of college students say that “trying hard” should be a factor in their grades, and a third think they should get at least a B just for showing up at (most) classes.

      how can we work on bridging the gap between encouraging effort and growth and emphasizing the need for excellence?

    6. “My grade doesn’t reflect the effort I put into this course.”

it sounds like there is a backlash to the growth mindset, which is central to this argument

    1. What do you anticipate will be the most difficult part of completing college? ________________________________________________________

      keeping up with the work and studying.

    1. I will tell the secret to you, to you, only to you. Come closer. This song is a cry for help: Help me! Only you, only you can, you are unique

"I will tell the secret to you": she is referencing the song. That is what the sirens tell their listeners; it makes them feel special and that they have to do something. They listen every time, which is ultimately what gets them killed.

    1. But the true nature of Maya society, the meaning of its hieroglyphics, and the chronicle of its history remained unknown to scholars for centuries after the Spaniards discovered the ancient Maya building sites.

      I wonder why it still remains a mystery

    2. They began to build ceremonial centers, and by 200 ce these had developed into cities containing temples, pyramids, palaces, courts for playing ball, and plazas.

An early community. I wonder if they also had an HOA, maybe in the form of some tax.

    3. features the Hero Twins, Hunahpu and Xbalanque, who were transformed into, respectively, the Sun and the Moon

I wonder if other cultures throughout time represent the sun and moon similarly to the Mayans. From what I can think of, the Egyptian gods are similar.

    4. In 2009 archaeologist Richard Hansen discovered two 8-metre- (26-foot-) long panels carved in stucco from the pre-Classic Mayan site of El Mirador, Guatemala, that depict aspects of the Popol Vuh. The panels—which date to about 300 bce, some 500 years before the Classic-period fluorescence of Mayan culture—attested to the antiquity of the Popol Vuh.

      That's amazing. Let's assume the 8-metre (26-foot) stucco panel is 4 ft x 26 ft, or 104 sq ft. If the average weight for a square foot of stucco is 10 lb, one panel would have weighed 1,040 lb (given that the stucco used does weigh 10 lb per square foot)

    5. It chronicles the creation of humankind, the actions of the gods, the origin and history of the K’iche’ people, and the chronology of their kings down to 1550.

      The Popol Vuh seems to be the Mayans' "Bible" equivalent

  2. physerver.hamilton.edu
    1. Second, since the charge on the drop was multiplied more than four times without changing at all the value of G, or the value of e1, the observations prove conclusively that in the case of drops like this, the drag which the air exerts upon the drop is independent of whether the drop is charged or uncharged.

      Clear, logical inference.

    2. How completely the error arising from evaporation, convection currents, or any sort of disturbances in the air, are eliminated, is shown by the constancy during all this time in the value of the velocity under gravity.

      I did not even think that there would be so many potential sources of error to consider.

    1. stunningly beautiful and has much intrinsic value

      wouldn't you say the value is that of human art, art history, and culture, rather than just intrinsic value?

    1. confront the reality that conservation may be expensive and stop deceiving ourselves and partners in conservation with hopes that win-win solutions can always be found.

      right, but how do you justify spending money (an inherently human thing) on protecting nature rather than protecting people (i.e. social reforms)

    2. make ecosystem services the foundation of our conservation strategies is to imply — intentionally or otherwise — that nature is only worth conserving when it is, or can be made, profitable

      I understand this argument, but I disagree; if done properly, nature is worth conserving for humans and for itself

    1. eLife Assessment

      This manuscript reports the development and characterization of iGABASnFR2, a genetically encoded GABA sensor that demonstrates substantially improved performance compared to its predecessor, iGABASnFR1. The work is comprehensive and methodologically rigorous, combining high-throughput mutagenesis, functional screening, structural analysis, biophysical characterization, and in vivo validation. The significance of the findings is fundamental, and the supporting evidence is compelling. iGABASnFR2 represents a notable advance in GABA sensor engineering, enabling enhanced imaging of GABA transmission both in brain slices and in vivo, and constitutes a timely, technically robust addition to the molecular toolkit for neuroscience research.

    2. Reviewer #1 (Public review):

      Summary:

      This manuscript by Kolb and Hasseman et al. introduces a significantly improved GABA sensor, building on the pioneering work of the Janelia team. Given GABA's role as the main inhibitory neurotransmitter and the historical lack of effective optical tools for real-time in vivo GABA dynamics, this development is particularly impactful. The new sensor boasts an enhanced signal-to-noise ratio (SNR) and appropriate kinetics for detecting GABA dynamics in both in vitro and in vivo settings. The study is well-presented, with convincing and high-quality data, making this tool a valuable asset for future research into GABAergic signaling.

      Strengths:

      The core strength of this work lies in its significant advancement of GABA sensing technology. The authors have successfully developed a sensor with higher SNR and suitable kinetics, enabling the detection of GABA dynamics both in vitro and in vivo. This addresses a critical gap in neuroscience research, offering a much-needed optical tool for understanding the most important inhibitory neurotransmitter. The clear representation of the work and the convincing, high-quality data further bolster the manuscript's strengths, indicating the sensor's reliability and potential utility. We anticipate this tool will be invaluable for further investigation of GABAergic signaling.

      Weaknesses:

      Despite the notable progress, a key limitation is that the current generation of GABA sensors, including the one presented here, still exhibits inferior performance compared to state-of-the-art glutamate sensors. While this work is a substantial leap forward, it highlights that further improvements in GABA sensors would still be highly beneficial for the field to match the capabilities seen with glutamate sensors.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript presents the development and characterization of iGABASnFR2, a genetically encoded GABA sensor with markedly improved performance over its predecessor, iGABASnFR1. The study is comprehensive and methodologically rigorous, integrating high-throughput mutagenesis, functional screening, structural analysis, biophysical characterization, and in vivo validation. iGABASnFR2 represents a significant advancement in GABA sensor engineering and application in imaging GABA transmission in slice and in vivo. This is a timely and technically strong contribution to the molecular toolkit for neuroscience.

      Strengths:

      The authors apply a well-established sensor optimization pipeline and iterative engineering strategy from single-site to combinatorial mutants to engineer iGABASnFR2. The development of both positive and negative going variants (iGABASnFR2 and iGABASnFR2n) offers experimental flexibility. The structure and interpretation of the key mutations provide insights into the working mechanism of the sensor, which also suggest optimization strategies. Although individual improvements in intrinsic properties are incremental, their combined effect yields clear functional gains, enabling detection of direction-selective GABA release in the retina and volume-transmitted GABA signaling in somatosensory cortex, which were challenging or missed using iGABASnFR1.

      Weaknesses:

      With minor revisions and clarifications, especially regarding membrane trafficking, this manuscript will be a valuable resource for probing inhibitory transmission.

    1. We recognize the natural impatience of people who feel that their hopes are slow in being realized. But we are convinced that these demonstrations are unwise and untimely.

      We get that you are impatient. Your actions are unwise and untimely.

    1. eLife Assessment

      This useful study characterises motor and somatosensory cortex neural activity during naturalistic eating and drinking tongue movement in nonhuman primates. The data, which include both electrophysiology and nerve block manipulations, will be of value to neuroscientists and neural engineers interested in tongue use. Although the current analyses provide a solid description of single neuron activity in these areas, both the population level analyses and the characterisation of activity changes following nerve block could be improved.

    2. Reviewer #1 (Public review):

      Summary:

      Hosack and Arce-McShane investigate how the 3D movement direction of the tongue is represented in the orofacial part of the sensory-motor cortex and how this representation changes with the loss of oral sensation. They examine the firing patterns of neurons in the orofacial parts of the primary motor cortex (MIo) and somatosensory cortex (SIo) in non-human primates (NHPs) during drinking and feeding tasks. While recording neural activity, they also tracked the kinematics of tongue movement using biplanar video-radiography of markers implanted in the tongue. Their findings indicate that many units in both MIo and SIo are directionally tuned during the drinking task. However, during the feeding task, directional tuning was more frequent in MIo units and less prominent in SIo units. Additionally, in some recording sessions, they blocked sensory feedback using bilateral nerve block injections, which seemed to result in fewer directionally tuned units and changes in the overall distribution of the preferred direction of the units.

      Strengths:

      The most significant strength of this paper lies in its unique combination of experimental tools. The author utilized a video-radiography method to capture 3D kinematics of the tongue movement during two behavioral tasks while simultaneously recording activity from two brain areas. This specific dataset and experimental setup hold great potential for future research on the understudied orofacial segment of the sensory-motor area.

      Weaknesses:

      A substantial portion of the paper is dedicated to establishing directional tuning in individual neurons, followed by an analysis of how this tuning changes when sensory feedback is blocked. While such characterizations are valuable, particularly in less-studied motor cortical areas and behaviors, the discrepancies in tuning changes across the two NHPs, coupled with the overall exploratory nature of the study, render the interpretation of these subtle differences somewhat speculative. At the population level, both decoding analyses and state space trajectories from factor analysis indicate that movement direction (or spout location) is robustly represented. However, as with the single-cell findings, the nuanced differences in neural trajectories across reach directions and between baseline and sensory-block conditions remain largely descriptive. To move beyond this, model-based or hypothesis-driven approaches are needed to uncover mechanistic links between neural state space dynamics and behavior.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript by Hosack and Arce-McShane examines the directional tuning of neurons in macaque primary motor (MIo) and somatosensory (SIo) cortex. The neural basis of tongue control is far less studied than, for example, forelimb movements, partly because the tongue's kinematics and kinetics are difficult to measure. A major technical advantage of this study is using biplanar video-radiography, processed with modern motion tracking analysis software, to track the movement of the tongue inside the oral cavity. Compared to prior work, the behaviors are more naturalistic behaviors (feeding and licking water from one of three spouts), although the animals were still head-fixed.

      The study's main findings are that:

      • A majority of neurons in MIo and a (somewhat smaller) percentage of SIo modulated their firing rates during tongue movements, with different modulation depending on the direction of movement (i.e., exhibited directional tuning). Examining the statistics of tuning across neurons, there was anisotropy (e.g., more neurons preferring anterior movement) and a lateral bias in which tongue direction neurons preferred that was consistent with the innervation patterns of tongue control muscles (although with some inconsistency between monkeys).<br /> • Consistent with this encoding, tongue position could be decoded with moderate accuracy even from small ensembles of ~28 neurons.<br /> • There were differences observed in the proportion and extent of directional tuning between the feeding and licking behaviors, with stronger tuning overall during licking. This potentially suggests behavioral context-dependent encoding.<br /> • The authors then went one step further and used a bilateral nerve block to the sensory inputs (trigeminal nerve) from the tongue. This impaired the precision of tongue movements and resulted in an apparent reduction and change in neural tuning in MIo and SIo.

      Strengths:

      The data are difficult to obtain and appear to have been rigorously measured, and provide a valuable contribution to this under-explored subfield of sensorimotor neuroscience. The analyses adopt well-established methods especially from the arm motor control literature, and represent a natural starting point for characterizing tongue 3D direction tuning.

      Weaknesses:

      There are alternative explanations from some of the interpretations, but those interpretations are described in a way that clearly distinguishes results from interpretations, and readers can make their own assessments. Some of these limitations are described in more detail below.

      One weakness of the current study is that there is substantial variability in results between monkeys.

      This study focuses on describing directional tuning using the preferred direction (PD) / cosine tuning model popularized by Georgopoulos and colleagues for understanding neural control of arm reaching in the 1980s. This is a reasonable starting point and a decent first order description of neural tuning. However, the arm motor control field has moved far past that viewpoint, and in some ways an over-fixation on static representational encoding models and PDs held that field back for many years. The manuscript would benefit from drawing the readers' attention (perhaps in their Discussion) to the fact that PDs are a very simple starting point for characterizing how cortical activity relates to kinematics, but that there is likely much richer population-level dynamical structure and that a more mechanistic, control-focused analytical framework may be fruitful. A good review of this evolution in the arm field can be found in Vyas S, Golub MD, Sussillo D, Shenoy K. 2020. Computation Through Neural Population Dynamics. Annual Review of Neuroscience. 43(1):249-75. A revised version of the manuscript incorporates more population-level analyses, but with inconsistent use of quantifications/statistics and without sufficient contextualization of what the reader is to make of these results.

      The described changes in tuning after nerve block could also be explained by changes in kinematics between these conditions, which temper the interpretation of these interesting results.

      I am not convinced of the claim that tongue directional encoding fundamentally changes between drinking and feeding given the dramatically different kinematics and the involvement of other body parts like the jaw (e.g., the reference to Laurence-Chasen et al. 2023 just shows that there is tongue information independent of jaw kinematics, not that jaw movements don't affect these neurons' activities). I also find the nerve block results inconsistent (more tuning in one monkey, less in the other?) and difficult to really learn something fundamental from, besides that neural activity and behavior both change - in various ways - after nerve block (not at all surprising but still good to see measurements of).

      The manuscript states that "Our results suggest that the somatosensory cortex may be less involved than the motor areas during feeding, possibly because it is a more ingrained and stereotyped behavior as opposed to tongue protrusion or drinking tasks". An alternative explanation may be more statistical/technical in nature: during feeding, there will be more variability in exactly which somatosensory afferent signals are being received from trial to trial (because slight differences in kinematics can have large differences in exactly where the tongue is and the where/when/how of what parts of it are touching other parts of the oral cavity). This variability could "smear out" the apparent tuning using these types of trial-averaged analyses. Given how important proprioception and somatosensation are for not biting the tongue or choking, the speculation that somatosensory cortical activity is suppressed during feeding is very counter-intuitive to this reviewer. In the revised manuscript the authors note these potential confounds and other limitations in the Discussion.

    4. Reviewer #3 (Public review):

      Summary

      In this study, the authors aim to uncover how 3D tongue direction is represented in the Motor (M1o) and Somatosensory (S1o) cortex. In non-human primates implanted with chronic electrode arrays, they use X-ray based imaging to track the kinematics of the tongue and jaw as the animal is either chewing food or licking from a spout. They then correlate the tongue kinematics with the recorded neural activity. They perform both single-unit and population level analyses during feeding and licking. Then, they recharacterize the tuning properties after bilateral lidocaine injections in the two sensory branches of the trigeminal nerve. They report that their nerve block causes a reorganization of the tuning properties and population trajectories. Overall, this paper concludes that M1o and S1o both contain representations of the tongue direction, but their numbers, their tuning properties and susceptibility to perturbed sensory input are different.

      Strengths

      The major strengths of this paper are in the state-of-the-art experimental methods employed to collect the electrophysiological and kinematic data. In the revision, the single-unit analyses of tuning direction are robustly characterized. The differences in neural correlations across behaviors, regions and perturbations are robust. In addition to the substantial amount of largely descriptive analyses, this paper makes two convincing arguments: 1) the single-neuron correlates for feeding and licking in OSMCx are different and can't be simply explained by different kinematics, and 2) blocking sensory input alters the neural processing during orofacial behaviors. The evidence for these claims is solid.

      Weaknesses

      The main weakness of this paper is in providing an account for these differences to get some insight into neural mechanisms. For example, while the authors show changes in neural tuning and different 'neural trajectory' shapes during feeding and drinking - their analyses of these differences are descriptive and provide limited insight for the underlying neural computations.

    5. Author response:

      The following is the authors’ response to the current reviews.

      We thank the editors and the reviewers for their helpful comments. We have provided a response to the reviewers' recommendations and made some revisions to the manuscript.

      Reviewer #1 (Recommendations for the authors): 

      In the newly added population factor analysis, several methodological decisions remain unclear to me:

      In Figure 7, why do the authors compare the mean distance between conditions in the latent spaces of MIo and SIo? Since these latent spaces are derived separately, they exist on different scales (with MIo appearing roughly four times larger than SIo), and this discrepancy is reflected in the reported mean distances (Figure 7, inset plots). Wouldn't this undermine a direct comparison?

      Thank you for this helpful feedback. The reviewer is correct that the latent spaces are derived separately for MIo and SIo, thus they exist on different scales as we have noted in the caption of Figure 7: "Axes for SIo are 1/4 scale of MIo."

      To allow for a direct comparison between MIo and SIo, we corrected the analysis by comparing their normalized mean inter-trajectory distances obtained by first calculating the geometric index (GI) of the inter-trajectory distances, d, between each pair of population trajectories per region as: GI = (d<sub>1</sub> - d<sub>2</sub>)/(d<sub>1</sub> + d<sub>2</sub>). We then performed the statistics on the GIs and found a significant difference between mean inter-trajectory distances in MIo vs. SIo. We performed the same analysis comparing the distance travelled between MIo and SIo trajectories by getting the normalized difference in distances travelled and still found a significant difference in both tasks. We have updated the results and figure inset to reflect these changes.
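      For concreteness, this normalization can be sketched in a few lines of Python (a minimal illustration only; the function names and the representation of trajectories as time x latent-factor arrays are our assumptions, not details from the manuscript):

```python
import numpy as np

def mean_inter_trajectory_distance(traj_a, traj_b):
    """Mean pointwise Euclidean distance between two population
    trajectories of equal length (time x latent-factor arrays)."""
    return np.linalg.norm(traj_a - traj_b, axis=1).mean()

def geometric_index(d1, d2):
    """Normalized difference of two inter-trajectory distances,
    GI = (d1 - d2) / (d1 + d2).  Bounded in [-1, 1] and dimensionless,
    so distances measured in differently scaled latent spaces
    (e.g., MIo vs. SIo) become directly comparable."""
    return (d1 - d2) / (d1 + d2)
```

      Because GI is dimensionless, a uniform rescaling of a latent space (such as the roughly fourfold scale difference between MIo and SIo) cancels out, which is what permits the direct comparison.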

      In Figure 12, unlike Figure 7 which shows three latent dimensions, only two factors are plotted. While the methods section describes a procedure for selecting the optimal number of latent factors, Figure 7 - figure supplement 3 shows that variance explained continues to increase up to about five latent dimensions across all areas. Why, then, are fewer dimensions shown?

      Thank you for the opportunity to clarify the figure. The m obtained from the 3-fold cross-validation varied for the full sample and was 20 factors for the subsample. We clarify that all statistical analyses were done using 20 latent factors. Using the full sample of neurons, the first 3 factors explained 81% of variance in feeding data compared to 71% in drinking data. When extended to 5 factors, feeding maintained its advantage with 91% variance explained versus 82% for drinking. Because feeding showed higher variance explained than drinking across 3 or 5 factors, only three factors were shown in Figure 7 for better visualization. We added this clarification to the Methods and Results.

      Figure 12 shows the differences in the neural trajectories between the control and nerve block conditions. The control vs. nerve block comparison complicated the visualization of the results. Thus, we plotted only the two latent factors with the highest separation between population trajectories. This was clarified in the Methods and caption of Figure 12.

      In Figure 12, Factors 2 and 3 are plotted against each other, and Factor 1 is left out?

      This observation is incorrect; Factor 1 was included: Top subplots (feeding) show Factor 1 vs 3 (MIo) and Factor 1 vs 2 (SIo) while the bottom subplots (drinking) show Factor 2 vs 3 (MIo) and Factor 1 vs 2 (SIo).  We have clarified this in the Methods and caption of Figure 12.

      Finally, why are factor analysis results shown only for monkey R? 

      Factor analysis results were performed on both animals, but the results were shown only for monkey R to decrease the number of figures in the manuscript. Figure 7- figure supplement 1 shows the data for both monkeys. Here are the equivalent Figure 7 plots for monkey Y. 

      Author response image 1.

      Reviewer #2 (Recommendations for the authors): 

      Overall, the manuscript has been improved. 

      New analyses provide improved rigor (as just one example, organizing the feeding data into a three-category split to better match the three-direction drinking data decoding analysis and also matching the neuron counts).

      The updated nerve block change method (using an equal number of trials with a similar left-right angle of movement in the last 100 ms of the tongue trajectory) somewhat reduces my concern that kinematic differences could account for the neural changes, but on the other hand the neural analyses use 250 ms (meaning that the neural differences could be related to behavioral differences earlier in the trial). Why not subselect to trials with similar trajectories throughout the whole movement (or at least show that as an additional analysis, albeit one with lower trial counts)?

      As the reviewer pointed out, selecting similar trajectories throughout the whole movement would result in lower trial counts that lead to poor statistical power. We think that the 100 ms prior to maximum tongue protrusion is a more important movement segment to control for similar kinematics between the control and nerve block conditions since this represents the subject’s intended movement endpoint. 

      A lot of the Results seemed like a list of measurements without sufficient hand-holding or guide-posting to explain what the take-away for the reader should be. Just one example to make this broadly applicable feedback concrete: "Cumulative explained variance for the first three factors was higher in feeding (MIo: 82%, SIo: 81%) than in drinking (MIo: 74%, SIo: 63%) when all neurons were used for the factor analysis (Fig. 7)": why should we care about 3 factors specifically? Does this mean that in feeding, the neural dimensionality is lower (since 3 factors explain more of it)? Does that mean feeding is a "simpler" behavior (which is counter-intuitive and does not conform to the authors' comments about the higher complexity of feeding)? And from later in that paragraph: what are we to make of the differences in neural trajectory distances (aside from quantifying using a different metric the same larger changes in firing rates that could just as well be quantified as statistics across single-neuron PETHs)?

      Thank you for the feedback on the writing style. We have made some revisions to describe the takeaway for the reader. That fewer latent factors explain 80% of the variance in the feeding data means that the underlying network activity is relatively simple despite apparent complexity. When neural population trajectories are farther away from each other in state space, it means that the patterns of activity across tongue directions are more distinct and separable, thus, less likely to be confused with each other. This signifies that neural representations of 3D tongue directions are more robust. When there is better neural discrimination and more reliable information processing, it is easier for downstream brain regions to distinguish between different tongue directions.

      The addition of more population-level analyses is nice as it provides a more efficient summary of the neural measurements. However, it's a surface-level dive into these methods; ultimately the goal of ensemble "computation through dynamics" analyses is to discover simpler structure / organizational principles at the ensemble level (i.e., show things not evident from single neurons), rather than just using them as a way to summarize data. For instance, here neural rotations are remarked upon in the Results, without referencing influential prior work describing such rotations and why neural circuits may use this computational motif to separate out conditions and shape muscle activity-generating readouts (Churchland et al. Nature 2012 and subsequent theoretical iterations including Russo et al.). That said, the Russo et al tangling study was well-referenced and the present tangling results were effectively contextualized with respect to that paper in terms of the interpretation. I wish more of the results were interpreted with comparable depth.

      Speaking of Russo et al.: the authors note qualitative differences in tangling between brain areas, but do not actually quantify tangling in either. These observations would be stronger if quantified and accompanied with statistics.

      Contrary to the reviewer’s critique, we did frame these results in the context of structure/organizational principles at the ensemble level. We had already cited prior work of Churchland et al., 2012; Michaels et al., 2016 and Russo et al., 2018. In the Discussion, Differences across behaviors, we wrote: “In contrast, MIo trajectories in drinking exhibited a consistent rotational direction regardless of spout location (Fig. 7). This may reflect a predominant non-directional information such as condition-independent time-varying spiking activity during drinking (Kaufman et al., 2016; Kobak et al., 2016; Arce-McShane et al., 2023).”

      Minor suggestions: 

      Some typos, e.g. 

      • no opening parenthesis in "We quantified directional differences in population activity by calculating the Euclidean distance over m latent factors)"

      • missing space in "independent neurons(Santhanam et al., 2009;..."); 

      • missing closing parentheses in "followed by the Posterior Inferior (Figure 3 - figure supplement 1."

      There is a one-page long paragraph in the Discussion. Please consider breaking up the text into more paragraphs each organized around one key idea to aid readability.

      Thank you, we have corrected these typos.

      Could it be that the Kaufman et al 2013 reference was intended to be Kaufman et al 2015 eNeuro (the condition-invariant signal paper)?

      Thank you, we have corrected this reference.

      At the end of the Clinical Implications subsection of the Discussion, the authors note the growing field of brain-computer interfaces with references for motor read-out or sensory write-in of hand motor/sensory cortices, respectively. Given that this study looks at orofacial cortices, an even more clinically relevant development is the more recent progress in speech BCIs (two recent reviews: https://www.nature.com/articles/s41583-024-00819-9, https://www.annualreviews.org/content/journals/10.1146/annurev-bioeng-110122012818), many of which record from human ventral motor cortex, and aspirations towards FES-like approaches for orofacial movements (e.g., https://link.springer.com/article/10.1186/s12984-023-01272-y).

      Thank you, we have included these references.

      Reviewer #3 (Recommendations for the authors): 

      Major Suggestions 

      (1) For the factor analysis of feeding vs licking, it appears that the factors were calculated separately for the two behaviors. It could be informative to calculate the factors under both conditions and project the neural data for the two behaviors into that space. The overlap/separations of the subspace could be informative. 

      We clarify that we performed a factor analysis that included both feeding and licking for MIo, as stated in the Results: “To control for factors such as different neurons and kinematics that might influence the results, we performed factor analysis on stable neurons across both tasks using all trials (Fig. 7- figure supplement 2A) and using trials with similar kinematics (Fig. 7- figure supplement 2B).” We have revised the manuscript to reflect this more clearly.

      (2) For the LSTM, the factor analyses, and the decoding, it is unclear if the firing rates are mean-subtracted and normalized (the methods section was a little unclear). Typically, papers in the field either z-score the data or do a softmax.

      The firing rates were z-scored for the LSTM and KNN. For the factor analysis, the spike counts were not z-scored, but the results were normalized. We clarified this in the Methods section.
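      For readers unfamiliar with this preprocessing step, z-scoring firing rates can be sketched as follows (a minimal illustration; the array shape and the guard for silent neurons are our assumptions, not details from the analysis code):

```python
import numpy as np

def zscore_firing_rates(rates):
    """Z-score each neuron's firing rate across samples.

    rates: (n_samples, n_neurons) array of binned firing rates.
    Each column is shifted to mean 0 and scaled to unit variance;
    silent neurons (zero variance) are left at 0 instead of NaN.
    """
    mu = rates.mean(axis=0)
    sd = rates.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)  # guard against divide-by-zero
    return (rates - mu) / sd
```

      Feeding z-scored rates to decoders such as an LSTM or KNN keeps high-rate neurons from dominating the loss or the distance metric, which is why this step is standard in the field.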

      Minor: 

      Page 1: Abstract- '... how OSMCx contributes to...' 

      Since there are no direct causal manipulations of OSMCx in this manuscript, this study doesn't directly study the OSMCx's contribution to movement - I would recommend rewording this sentence.

      Similarly, Page 2: 'OSMCx plays an important role in coordination...' the citations in this paragraph are correlative, and do not demonstrate a causal role.

      There are similar usages of 'OSMCx coordinates...' in other places e.g. Page 8. 

      Thank you, we revised these sentences.

      Page 7: the LSTM here has 400 units, which is a very large network and contains >12000 parameters. Networks of this size are prone to memorization; it would be wise to test the R-squared of the validation set against a shuffled dataset to see if the network is actually working as intended.

      Thank you for bringing up this important point of verifying that the network is learning meaningful patterns versus memorizing. Considering the size of our training samples, the ratio of samples to parameters is appropriate and thus the risk of memorization is low. Indeed, validation tests and cross-validation performed indicated expected network behavior, and the R-squared values obtained here were similar to those reported in our previous paper (Laurence-Chasen et al., 2023).


      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In their paper, Hosack and Arce-McShane investigate how the 3D movement direction of the tongue is represented in the orofacial part of the sensory-motor cortex and how this representation changes with the loss of oral sensation. They examine the firing patterns of neurons in the orofacial parts of the primary motor cortex (MIo) and somatosensory cortex (SIo) in non-human primates (NHPs) during drinking and feeding tasks. While recording neural activity, they also tracked the kinematics of tongue movement using biplanar video-radiography of markers implanted in the tongue. Their findings indicate that most units in both MIo and SIo are directionally tuned during the drinking task. However, during the feeding task, directional tuning was more frequent in MIo units and less prominent in SIo units. Additionally, in some recording sessions, they blocked sensory feedback using bilateral nerve block injections, which resulted in fewer directionally tuned units and changes in the overall distribution of the preferred direction of the units.

      Strengths:

      The most significant strength of this paper lies in its unique combination of experimental tools. The author utilized a video-radiography method to capture 3D kinematics of the tongue movement during two behavioral tasks while simultaneously recording activity from two brain areas. Moreover, they employed a nerve-blocking procedure to halt sensory feedback. This specific dataset and experimental setup hold great potential for future research on the understudied orofacial segment of the sensory-motor area.

      Weaknesses:

Aside from the last part of the result section, the majority of the analyses in this paper are focused on single units. I understand the need to characterize the number of single units that directly code for external variables like movement direction, especially for less-studied areas like the orofacial part of the sensory-motor cortex. However, as a field, our decade-long experience in the arm region of sensory-motor cortices suggests that many of the idiosyncratic behaviors of single units can be better understood when the neural activity is studied at the level of the state space of the population. By doing so, for the arm region, we were able to explain why units have "mixed selectivity" for external variables, why the tuning of units changes in the planning and execution phase of the movement, why activity in the planning phase does not lead to undesired muscle activity, etc. See (Gallego et al. 2017; Vyas et al. 2020; Churchland and Shenoy 2024) for a review. Therefore, I believe investigating the dynamics of the population activity in orofacial regions can similarly help the reader go beyond the peculiarities of single units and, in a broader view, inform us if the same principles found in the arm region can be generalized to other segments of sensorimotor cortex.

We thank the reviewer and agree on the value of information gained from studying population activity. We also appreciate that population analyses have led to the understanding that individual neurons have “mixed selectivity”. We have shown previously that OSMCx neurons exhibit mixed selectivity in their population activity and clear separation between latent factors associated with gape and bite force levels (Arce-McShane FI, Sessle BJ, Ram Y, Ross CF, Hatsopoulos NG (2023) Multiple regions of primate orofacial sensorimotor cortex encode bite force and gape. Front Systems Neurosci. doi: 10.3389/fnsys.2023.1213279. PMID: 37808467 PMCID: 10556252), and chew-side and food types (Li Z & Arce-McShane FI (2023). Cortical representation of mastication in the primate orofacial sensorimotor cortex. Program No. NANO06.05. 2023 Neuroscience Meeting Planner. Washington, D.C.: Society for Neuroscience, 2023. Online.).

      The primary goal of this paper was to characterize single units in the orofacial region and to do a follow-up paper on population activity. In the revised manuscript, we have now incorporated the results of population-level analyses. The combined results of the single unit and population analyses provide a deeper understanding of the cortical representation of 3D direction of tongue movements during natural feeding and drinking behaviors. 

      Further, for the nerve-blocking experiments, the authors demonstrate that the lack of sensory feedback severely alters how the movement is executed at the level of behavior and neural activity. However, I had a hard time interpreting these results since any change in neural activity after blocking the orofacial nerves could be due to either the lack of the sensory signal or, as the authors suggest, due to the NHPs executing a different movement to compensate for the lack of sensory information or the combination of both of these factors. Hence, it would be helpful to know if the authors have any hint in the data that can tease apart these factors. For example, analyzing a subset of nerve-blocked trials that have similar kinematics to the control.

Thank you for bringing up this important point. We agree with the reviewer that any change in the neural activity may be attributed to the lack of sensory signal, to compensatory changes, or to a combination of these factors. To tease apart these factors, we sampled an equal number of trials with similar kinematics for both control and nerve block feeding sessions. We added a clarifying description of this approach in the Results section of the revised manuscript: “To confirm this effect was not merely due to altered kinematics, we conducted parallel analyses using carefully subsampled trials with matched kinematic profiles from both control and nerve-blocked conditions.”

Furthermore, we ran additional analysis for the drinking datasets by subsampling a similar distribution of drinking movements from each condition. We compared the directional tuning across an equal number of trials with a similar left-right angle of movement in the last 100 ms of the tongue trajectory, nearest the spout. These analyses that control for similar kinematics showed that there was still a decrease in the proportion of directionally modulated neurons with nerve block compared to the control. This confirms that the results may be attributed to the lack of tactile information. These are now integrated in the revised paper under Methods section: Directional tuning of single neurons, as well as Results section: Effects of nerve block: Decreased directional tuning of MIo and SIo neurons and Figure 10 – figure supplement 1.

      Reviewer #2 (Public review):

      Summary:

      This manuscript by Hosack and Arce-McShane examines the directional tuning of neurons in macaque primary motor (MIo) and somatosensory (SIo) cortex. The neural basis of tongue control is far less studied than, for example, forelimb movements, partly because the tongue's kinematics and kinetics are difficult to measure. A major technical advantage of this study is using biplanar video-radiography, processed with modern motion tracking analysis software, to track the movement of the tongue inside the oral cavity. Compared to prior work, the behaviors are more naturalistic behaviors (feeding and licking water from one of three spouts), although the animals were still head-fixed.

      The study's main findings are that:

      • A majority of neurons in MIo and a (somewhat smaller) percentage of SIo modulated their firing rates during tongue movements, with different modulations depending on the direction of movement (i.e., exhibited directional tuning). Examining the statistics of tuning across neurons, there was anisotropy (e.g., more neurons preferring anterior movement) and a lateral bias in which tongue direction neurons preferred that was consistent with the innervation patterns of tongue control muscles (although with some inconsistency between monkeys).

      • Consistent with this encoding, tongue position could be decoded with moderate accuracy even from small ensembles of ~28 neurons.

• There were differences observed in the proportion and extent of directional tuning between the feeding and licking behaviors, with stronger tuning overall during licking. This potentially suggests behavioral context-dependent encoding.

• The authors then went one step further and used a bilateral nerve block to the sensory inputs (trigeminal nerve) from the tongue. This impaired the precision of tongue movements and resulted in an apparent reduction and change in neural tuning in MIo and SIo.

      Strengths:

      The data are difficult to obtain and appear to have been rigorously measured, and provide a valuable contribution to this under-explored subfield of sensorimotor neuroscience. The analyses adopt well-established methods, especially from the arm motor control literature, and represent a natural starting point for characterizing tongue 3D direction tuning.

      Weaknesses:

      There are alternative explanations for some of the interpretations, but those interpretations are described in a way that clearly distinguishes results from interpretations, and readers can make their own assessments. Some of these limitations are described in more detail below.

      One weakness of the current study is that there is substantial variability in results between monkeys, and that only one session of data per monkey/condition is analyzed (8 sessions total). This raises the concern that the results could be idiosyncratic. The Methods mention that other datasets were collected, but not analyzed because the imaging pre-processing is very labor-intensive. While I recognize that time is precious, I do think in this case the manuscript would be substantially strengthened by showing that the results are similar on other sessions.

We acknowledge the reviewer’s concern about inter-subject variability. Animal feeding and drinking behaviors are quite stable across sessions; thus, we do not think that additional sessions will address the concern that the results could be idiosyncratic. Each of the eight datasets analyzed here has sufficient neural and kinematic data to capture neural and behavioral patterns. Nevertheless, we performed some of the analyses on a second feeding dataset from Monkey R. The results from analyses on a subset of this data were consistent across datasets; for example, (1) similar proportions of directionally tuned neurons, (2) similar distances between population trajectories (t-test p > 0.9), and (3) a consistently smaller distance between Anterior-Posterior pairs than others in MIo (t-test p < 0.05) but not SIo (p > 0.1).

This study focuses on describing directional tuning using the preferred direction (PD) / cosine tuning model popularized by Georgopoulos and colleagues for understanding neural control of arm reaching in the 1980s. This is a reasonable starting point and a decent first-order description of neural tuning. However, the arm motor control field has moved far past that viewpoint, and in some ways, an over-fixation on static representational encoding models and PDs held that field back for many years. The manuscript benefits from drawing the readers' attention (perhaps in their Discussion) that PDs are a very simple starting point for characterizing how cortical activity relates to kinematics, but that there is likely much richer population-level dynamical structure and that a more mechanistic, control-focused analytical framework may be fruitful. A good review of this evolution in the arm field can be found in Vyas S, Golub MD, Sussillo D, Shenoy K. 2020. Computation Through Neural Population Dynamics. Annual Review of Neuroscience. 43(1):249-75

      Thank you for highlighting this important point. Research on orofacial movements hasn't progressed at the same pace as limb movement studies. Our manuscript focused specifically on characterizing the 3D directional tuning properties of individual neurons in the orofacial area—an analysis that has not been conducted previously for orofacial sensorimotor control. While we initially prioritized this individual neuron analysis, we recognize the value of broader population-level insights.

      Based on your helpful feedback, we have incorporated additional population analyses to provide a more comprehensive picture of orofacial sensorimotor control and expanded our discussion section. We appreciate your expertise in pushing our work to be more thorough and aligned with current neuroscience approaches.

Can the authors explain (or at least speculate) why there was such a large difference in behavioral effect due to nerve block between the two monkeys (Figure 7)?

      We acknowledge this as a variable inherent to this type of experimentation. Previous studies have found large kinematic variation in the effect of oral nerve block as well as in the following compensatory strategies between subjects. Each animal’s biology and response to perturbation vary naturally. Indeed, our subjects exhibited different feeding behavior even in the absence of nerve block perturbation (see Figure 2 in Laurence-Chasen et al., 2022). This is why each individual serves as its own control.

      Do the analyses showing a decrease in tuning after nerve block take into account the changes (and sometimes reduction in variability) of the kinematics between these conditions? In other words, if you subsampled trials to have similar distributions of kinematics between Control and Block conditions, does the effect hold true? The extreme scenario to illustrate my concern is that if Block conditions resulted in all identical movements (which of course they don't), the tuning analysis would find no tuned neurons. The lack of change in decoding accuracy is another yellow flag that there may be a methodological explanation for the decreased tuning result.

Thank you for bringing up this point. We accounted for the changes in the variability of the kinematics between the control and nerve block conditions in the feeding dataset, where we sampled an equal number of trials with similar kinematics for both control and nerve block. However, we did not control for similar kinematics in the drinking task. In the revised manuscript, we have clarified this and performed a similar analysis for the drinking task. We sampled a similar distribution of drinking movements from each condition. We compared the neural data from an equal number of trials with a similar left-right angle of movement in the last 100 ms of the tongue trajectory, nearest the spout. There was a decrease in the percentage of neurons that were directionally modulated (between 30 and 80%) with nerve block compared to the control. These results have been included in the revised paper under Methods section: Directional tuning of single neurons, as well as Results section: Effects of nerve block: Decreased directionality of MIo and SIo neurons.
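For concreteness, the distribution-matching idea can be sketched as follows. This is a minimal illustration with synthetic angles; the function name `match_by_angle`, the bin count, and the generated data are ours for illustration, not the actual analysis code:

```python
import numpy as np

def match_by_angle(angles_a, angles_b, n_bins=8, seed=0):
    """Subsample trial indices from two conditions so their left-right-angle
    histograms match: keep min(count) trials per shared bin."""
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(np.concatenate([angles_a, angles_b]), bins=n_bins)
    keep_a, keep_b = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ia = np.where((angles_a >= lo) & (angles_a < hi))[0]
        ib = np.where((angles_b >= lo) & (angles_b < hi))[0]
        n = min(len(ia), len(ib))
        if n == 0:
            continue  # bin present in only one condition: drop it entirely
        keep_a.extend(rng.choice(ia, n, replace=False))
        keep_b.extend(rng.choice(ib, n, replace=False))
    return np.array(keep_a), np.array(keep_b)

# Hypothetical angles (deg): control covers a wider range than nerve block.
rng = np.random.default_rng(3)
ctrl = rng.uniform(-30, 30, 200)
block = rng.uniform(-10, 10, 150)
ka, kb = match_by_angle(ctrl, block)
```

Taking the minimum per-bin count from each condition guarantees that the two subsamples have identical angle histograms at that bin resolution, so any remaining tuning difference cannot be explained by the sampled kinematics.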

      While the results from decoding using KNN did not show significant differences between decoding accuracies in control vs. nerve block conditions, the results from the additional factor analysis and decoding using LSTM were consistent with the decrease in directional tuning at the level of individual neurons.  

The manuscript states that "Our results suggest that the somatosensory cortex may be less involved than the motor areas during feeding, possibly because it is a more ingrained and stereotyped behavior as opposed to tongue protrusion or drinking tasks". Could an alternative explanation be more statistical/technical in nature: that during feeding, there will be more variability in exactly what somatosensation afferent signals are being received from trial to trial (because slight differences in kinematics can have large differences in exactly where the tongue is and the where/when/how of what parts of it are touching other parts of the oral cavity)? This variability could "smear out" the apparent tuning using these types of trial-averaged analyses. Given how important proprioception and somatosensation are for not biting the tongue or choking, the speculation that somatosensory cortical activity is suppressed during feeding is very counter-intuitive to this reviewer.

Thank you for bringing up this point. We have now incorporated this in our revised Discussion (see Comparison between MIo and SIo). We agree with the reviewer that trial-by-trial variability in the afferent signals may account for the lower directional signal in SIo during feeding than in drinking. Indeed, SIo’s mean-matched Fano factor in feeding was significantly higher than that in drinking (Author response image 1). Moreover, the results of the additional population and decoding analyses also support this.
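For readers unfamiliar with the metric, the Fano factor is simply the across-trial variance of a neuron's spike count divided by its mean. A minimal sketch on synthetic counts (not our recordings); a Poisson process gives a value near 1:

```python
import numpy as np

def fano_factor(spike_counts):
    """Across-trial spike-count variance divided by the mean count."""
    spike_counts = np.asarray(spike_counts, dtype=float)
    mean = spike_counts.mean()
    if mean == 0:
        return np.nan  # undefined for a silent neuron
    return spike_counts.var(ddof=1) / mean

# A Poisson-like neuron: variance equals mean, so the Fano factor is near 1.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=10.0, size=5000)
ff = fano_factor(counts)
```

Values above 1 indicate super-Poisson trial-to-trial variability, which is the regime compared between feeding and drinking above.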

      Author response image 1.

Comparison of mean-matched Fano Factor between SIo neurons during feeding and drinking control tasks across both subjects (Wilcoxon rank sum test, p < 0.001).

      Reviewer #3 (Public review):

      Summary:

      In this study, the authors aim to uncover how 3D tongue direction is represented in the Motor (M1o) and Somatosensory (S1o) cortex. In non-human primates implanted with chronic electrode arrays, they use X-ray-based imaging to track the kinematics of the tongue and jaw as the animal is either chewing food or licking from a spout. They then correlate the tongue kinematics with the recorded neural activity. Using linear regressions, they characterize the tuning properties and distributions of the recorded population during feeding and licking. Then, they recharacterize the tuning properties after bilateral lidocaine injections in the two sensory branches of the trigeminal nerve. They report that their nerve block causes a reorganization of the tuning properties. Overall, this paper concludes that M1o and S1o both contain representations of the tongue direction, but their numbers, their tuning properties, and susceptibility to perturbed sensory input are different.

      Strengths:

      The major strengths of this paper are in the state-of-the-art experimental methods employed to collect the electrophysiological and kinematic data.

      Weaknesses:

      However, this paper has a number of weaknesses in the analysis of this data.

      It is unclear how reliable the neural responses are to the stimuli. The trial-by-trial variability of the neural firing rates is not reported. Thus, it is unclear if the methods used for establishing that a neuron is modulated and tuned to a direction are susceptible to spurious correlations. The authors do not use shuffling or bootstrapping tests to determine the robustness of their fits or determining the 'preferred direction' of the neurons. This weakness colors the rest of the paper.

      Thank you for raising these points. We have performed the following additional analyses: (1) We have added analyses to ensure that the results could not be explained by neural variability. To show the trial-by-trial variability of the neural firing rates, we have calculated the Fano factor (mean overall = 1.34747; control = 1.46471; nerve block = 1.23023). The distribution was similar across directions, suggesting that responses of MIo and SIo neurons to varying 3D directions were reliable. (2) We have used a bootstrap procedure to ensure that directional tuning cannot be explained by mere chance. (3) To test the robustness of our PDs we also performed a bootstrap test, which yielded the same results for >90% of neurons, and a multiple linear regression test for fit to a cosine-tuning function. In the revised manuscript, the Methods and Results sections have been updated to include these analyses.  
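As an illustration of the bootstrap logic (synthetic cosine-tuned data; the helper names and parameters are ours, not the analysis code): fit the PD by regressing firing rate on the 3D unit direction vector, then resample trials with replacement and check how much the refit PD rotates.

```python
import numpy as np

def preferred_direction(rates, directions):
    """Fit rate = b0 + b . d by least squares; the PD is b normalized to unit length."""
    X = np.column_stack([np.ones(len(rates)), directions])
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
    b = coef[1:]
    return b / np.linalg.norm(b)

def bootstrap_pd(rates, directions, n_boot=500, seed=0):
    """Resample trials with replacement and recompute the PD each time."""
    rng = np.random.default_rng(seed)
    n = len(rates)
    return np.array([preferred_direction(rates[idx], directions[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))])

# Synthetic cosine-tuned neuron preferring the +x direction (e.g. anterior).
rng = np.random.default_rng(1)
dirs = rng.normal(size=(400, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rates = 20 + 10 * dirs[:, 0] + rng.normal(0, 2, size=400)

pd = preferred_direction(rates, dirs)
boot = bootstrap_pd(rates, dirs)
# Angular deviation (deg) of each bootstrap PD from the point estimate.
angles = np.degrees(np.arccos(np.clip(boot @ pd, -1.0, 1.0)))
```

A tightly clustered bootstrap distribution of PDs (small angular deviations) is what licenses calling a neuron's PD robust.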

      Author response image 2.

      Comparison of Fano Factor across directions for MIo and SIo Feeding Control (Kruskal-Wallis, p > 0.7).

      The authors compare the tuning properties during feeding to those during licking but only focus on the tongue-tip. However, the two behaviors are different also in their engagement of the jaw muscles. Thus many of the differences observed between the two 'tasks' might have very little to do with an alternation in the properties of the neural code - and more to do with the differences in the movements involved. 

      Using the tongue tip for the kinematic analysis of tongue directional movements was a deliberate choice as the anterior region of the tongue is highly mobile and sensitive due to a higher density of mechanoreceptors. The tongue tip is the first region that touches the spout in the drinking task and moves the food into the oral cavity for chewing and subsequent swallowing. 

      We agree with the reviewer that the jaw muscles are engaged differently in feeding vs. drinking (see Fig. 2). For example, a wider variety of jaw movements along the three axes are observed in feeding compared to the smaller amplitude and mostly vertical jaw movements in drinking. Also, the tongue movements are very different between the two behaviors. In feeding, the tongue moves in varied directions to position the food between left-right tooth rows during chewing, whereas in the drinking task, the tongue moves to discrete locations to receive the juice reward. Moreover, the tongue-jaw coordination differs between tasks; maximum tongue protrusion coincides with maximum gape in drinking but with minimum gape in the feeding behavior. Thus, the different tongue and jaw movements required in each behavior may account for some of the differences observed in the directional tuning properties of individual neurons and population activity. These points have been included in the revised Discussion.

      Author response image 3.

Tongue tip position (mm) and jaw pitch (degrees) during feeding (left) and drinking (right) behaviors. The most protruded tongue position coincides with minimum gape (jaw pitch at 0°) during feeding but with maximum gape during drinking.

      Many of the neurons are likely correlated with both Jaw movements and tongue movements - this complicates the interpretations and raises the possibility that the differences in tuning properties across tasks are trivial.

We thank the reviewer for raising this important point. In fact, we verified in a previous study whether the correlation between the tongue and jaw kinematics might explain differences in the encoding of tongue kinematics and shape in MIo (see Supplementary Fig. 4 in Laurence-Chasen et al., 2023): “Through iterative sampling of sub-regions of the test trials, we found that correlation of tongue kinematic variables with mandibular motion does not account for decoding accuracy. Even at times where tongue motion was completely un-correlated with the jaw, decoding accuracy could be quite high.”

      The results obtained from population analyses showing distinct properties of population trajectories in feeding vs. drinking behaviors provide strong support to the interpretation that directional information varies between these behaviors.

      The population analyses for decoding are rudimentary and provide very coarse estimates (left, center, or right), it is also unclear what the major takeaways from the population decoding analyses are. The reduced classification accuracy could very well be a consequence of linear models being unable to account for the complexity of feeding movements, while the licking movements are 'simpler' and thus are better accounted for.

We thank the reviewer for raising this point. The population decoding analyses provide additional insight into the directional information in population activity, as well as a point of comparison with the results of numerous decoding studies on the arm region of the sensorimotor cortex. In the revised manuscript, we have included the results from decoding tongue direction using a long short-term memory (LSTM) network for sequence-to-sequence decoding. These results differed from the KNN results, indicating that a linear model such as KNN was better for drinking and that a non-linear and continuous decoder was better suited for feeding.
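For reference, the KNN classifier used for spout-direction decoding can be sketched as follows (Euclidean distance, majority vote, K = 7 as in the paper; the data below are synthetic firing-rate vectors, not our recordings):

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=7):
    """Classify each test trial by majority vote among its k nearest
    training trials (Euclidean distance)."""
    preds = []
    for x in test_X:
        nearest = train_y[np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]]
        labels, votes = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(votes)])
    return np.array(preds)

# Synthetic data: 3 spout directions, 30-neuron firing-rate vectors per trial.
rng = np.random.default_rng(2)
class_means = rng.normal(0.0, 3.0, size=(3, 30))

def make_trials(n_per_class):
    X = np.vstack([class_means[c] + rng.normal(0.0, 1.0, size=(n_per_class, 30))
                   for c in range(3)])
    y = np.repeat(np.arange(3), n_per_class)
    return X, y

train_X, train_y = make_trials(40)
test_X, test_y = make_trials(20)
acc = (knn_predict(train_X, train_y, test_X) == test_y).mean()
```

A sequence-to-sequence LSTM, by contrast, maps the whole time series of population activity onto a continuous trajectory rather than a discrete class label, which is one reason the two decoders can behave differently across tasks.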

The nature of the nerve block and what sensory pathways are being affected is unclear - the trigeminal nerve contains many different sensory afferents - is there a characterization of how effectively the nerve impulses are being blocked? Have the authors confirmed or characterized the strength of their inactivation or block? I was unable to find any electrophysiological evidence characterizing the perturbation.

The strength of the nerve block is characterized by a decrease in the baseline firing rate of SIo neurons, as shown in Supplementary Figure 6 of “Loss of oral sensation impairs feeding performance and consistency of tongue–jaw coordination” (Laurence-Chasen et al., 2022).

      Overall, while this paper provides a descriptive account of the observed neural correlations and their alteration by perturbation, a synthesis of the observed changes and some insight into neural processing of tongue kinematics would strengthen this paper.

      We thank the reviewer for this suggestion. We have revised the Discussion to provide a synthesis of the results and insights into the neural processing of tongue kinematics.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) The procedure for anesthesia explained in the method section was not clear to me. The following information was missing: what drug/dose was used? How long the animal was under anesthesia? How long after the recovery the experiments were done?

      The animals were fully sedated with ketamine (100 mg/ml, 10 mg/kg) for less than 30 minutes, and all of the data was collected within 90 minutes after the nerve block was administered.

      (2) In Figure 10, panels A and B are very close together, it was not at first clear whether the text "Monkey R, Monkey Y" belongs to panel A or B.

      We have separated the two panels further in the revised figure.

      (3) I found Figure 11 very busy and hard to interpret. Separating monkeys, fitting the line for each condition, or using a bar plot can help with the readability of the figure.

      Thank you for the suggestion. We agree with you and have reworked this figure. To simplify it we have shown the mean accuracy across iterations.

(4) I found the laterality discussions like "This signifies that there are more neurons in the left hemisphere contributes toward one direction of tongue movement, suggesting that there is some laterality in the PDs of OSMCx neurons that varies between individuals" a bit of an over-interpretation of the data, given the low n value and the dissimilarity in how strongly the nerve blocking altered the monkeys' behavior.

      Thank you for sharing this viewpoint. We do think that laterality is a good point of comparison with studies on M1 neurons in the arm/hand region. In our study, we found that the peak of the PD distribution coincides with leftward tongue movements in feeding. The distribution of PDs provides insight into how tongue muscles are coordinated during movement. Intrinsic and extrinsic tongue muscles are involved in shaping the tongue (e.g., elongation, broadening) and positioning the tongue (e.g., protrusion/retraction, elevation/depression), respectively. These muscles receive bilateral motor innervation except for genioglossus. Straight tongue protrusion requires the balanced action of the right and left genioglossi while the lateral protrusion involves primarily the contralateral genioglossus. Given this unilateral innervation pattern, we hypothesized that left MIo/SIo neurons would preferentially respond to leftward tongue movements, corresponding to right genioglossus activation. 

      Reviewer #2 (Recommendations for the authors):

      Are the observation of tuning peaks being most frequently observed toward the anterior and superior directions consistent with the statistics of the movements the tongue typically makes? This could be analogous to anisotropies previously reported in the arm literature, e.g., Lillicrap TP, Scott SH. 2013. Preference Distributions of Primary Motor Cortex Neurons Reflect Control Solutions Optimized for Limb Biomechanics. Neuron. 77(1):168-79

Thank you for bringing our attention to the analogous findings by Lillicrap & Scott, 2013. Indeed, we do observe the highest number of movements in the Anterior Superior directions, followed by the Posterior Inferior. This does align with the distribution of tuning peaks that we observed. Author response image 4 shows the proportions of observed movements in each group of directions across all feeding datasets. We have incorporated this data in the Results section: Neuronal modulation patterns differ between MIo and SIo, as well as added this point in the Discussion.

      Author response image 4.

      Proportion of feeding trials in each group of directions. Error bars represent ±1 standard deviation across datasets (n = 4).

      "The Euclidean distance was used to identify nearest neighbors, and the number of nearest neighbors used was K = 7. This K value was determined after testing different Ks which yielded comparable results." In general, it's a decoding best practice to tune hyperparameters (like K) on fully held-out data from the data used for evaluation. Otherwise, this tends to slightly inflate performance because one picks the hyperparameter that happened to give the best result. It sounds like that held-out validation set wasn't used here. I don't think that's going to change the results much at all (especially given the "comparable results" comment), but providing this suggestion for the future. If the authors replicate results on other datasets, I suggest they keep K = 7 to lock in the method.

K = 7 was chosen based on the size of our smallest training dataset (n = 55). The purpose of testing different K values was not to select which value gave the best result, but to demonstrate that similar K values did not affect the results significantly. We tested the different K values on a subset of the feeding data, but that data was not fully held-out from the training set. We will keep your suggestion in mind for future analysis.

      The smoothing applied to Figure 2 PSTHs appears perhaps excessive (i.e., it may be obscuring interesting finer-grained details of these fast movements). Can the authors reduce the 50 ms Gaussian smoothing (I assume this is the s.d.?) ~25 ms is often used in studying arm kinematics. It also looks like the movement-related modulation may not be finished in these 200 ms / 500 ms windows. I suggest extending the shown time window. It would also be helpful to show some trial-averaged behavior (e.g. speed or % displacement from start) under or behind the PSTHs, to give a sense of what phase of the movement the neural activity corresponds to.

      Thank you for the suggestion. We have taken your suggestions into consideration and modified Figure 2 accordingly. We decreased the Gaussian kernel to 25 ms and extended the time window shown. The trial-averaged anterior/posterior displacement was also added to the drinking PSTHs.
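For readers reproducing the PSTHs, Gaussian smoothing of a binned spike train amounts to convolving with a unit-area Gaussian kernel; a minimal sketch with the 25 ms s.d. kernel mentioned above (illustrative, not our plotting code):

```python
import numpy as np

def smooth_psth(spike_train, bin_ms=1.0, sigma_ms=25.0):
    """Convolve a binned spike train with a unit-area Gaussian kernel
    (s.d. sigma_ms) and convert to a firing rate in spikes/s."""
    sigma_bins = sigma_ms / bin_ms
    half = int(4 * sigma_bins)              # truncate kernel at +/- 4 s.d.
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()                  # unit area: preserves spike count
    return np.convolve(spike_train, kernel, mode="same") * (1000.0 / bin_ms)

# Illustrative: one spike in a 1 s window, smoothed with a 25 ms kernel.
train = np.zeros(1000)
train[500] = 1.0
rate = smooth_psth(train)
```

Because the kernel has unit area, the total spike count is preserved; a 25 ms s.d. mainly spreads each spike over roughly ±100 ms, which is why a wider kernel can obscure fine temporal structure.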

      Reviewer #3 (Recommendations for the authors):

      The major consideration here is that the data reported for feeding appears to be very similar to that reported in a previous study:

      "Robust cortical encoding of 3D tongue shape during feeding in macaques"

      Are the neurons reported here the same as the ones used in this previous paper? It is deeply concerning that this is not reported anywhere in the methods section.

      These are the same neurons as in our previous paper, though here we include several additional datasets of the nerve block and drinking sessions. We have now included this in the methods section.

      Second, I strongly recommend that the authors consider a thorough rewrite of this manuscript and improve the presentation of the figures. As written, it was not easy to follow the paper, the logic of the experiments, or the specific data being presented in the figures.

      Thank you for this suggestion. We have done an extensive rewrite of the manuscript and revision of the figures.

      A few recommendations:

      (1) Please structure your results sections and use descriptive topic sentences to focus the reader. In the current version, it is unclear what the major point being conveyed for each analysis is.

Thank you for this suggestion. We have added topic sentences to begin each section of the Results.

      (2) Please show raster plots for at least a few example neurons so that the readers have a sense of what the neural responses look like across trials. Is all of Figure 2 one example neuron or are they different neurons? Error bars for PETH would be useful to show the reliability and robustness of the tuning.

Figure 2 shows different neurons, one from MIo and one from SIo for each task. There is shading showing ±1 standard error around the line for each direction; however, this was a bit difficult to see. In addition to the other changes we have made to these figures, we made the lines thinner and darkened the error-bar shading to accentuate this. We also added raster plots corresponding to the same neurons represented in Figure 2 as a supplement.

      (3) Since there are only two data points, I am not sure I understand why the authors have bar graphs and error bars for graphs such as Figure 3B, Figure 5B, etc. How can one have an error bar and means with just 2 data points?

      Those bars represent the standard error of the proportion. We have changed the y-axis label on these figures to make this clearer.
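For reference, the standard error of a proportion depends only on the proportion itself and the number of observations, so it is well defined even when only two groups are plotted. A minimal sketch (the example counts are hypothetical, not from the paper):

```python
import math

def se_proportion(p, n):
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1.0 - p) / n)

# e.g., 30 of 50 units significantly tuned -> p = 0.6
se = se_proportion(0.6, 50)
```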

      (4) Results in Figure 6 could be due to differential placement of the electrodes across the animals. How is this being accounted for?

Yes, this is a possibility, which we have mentioned in the discussion. Even with careful placement, there is no guarantee of capturing a set of neurons with the exact same function in two subjects, as every individual is different. Rather, we focus on analyses of data within the same animal. The purpose of Figure 6 is to show the difference between MIo and SIo, and between the two tasks, within the same subject. The more salient result from calculating the preferred direction is that there is a change in the distribution between control and nerve block within the same exact population. Discussions relating to the comparison between individuals are speculative and cannot be confirmed without the inclusion of many more subjects.

      (5) For Figure 7, I would recommend showing the results of the Sham injection in the same figure instead of a supplement.

      Thank you for the suggestion, we have added these results to the figure.

(6) I think the effects of the sensory block on the tongue kinematics are underexplored in Figure 7 and Figure 8. The authors could explore the deficits in tongue shape, and the temporal components of the trajectory.

Some of these effects on feeding have been explored in a previous paper, Laurence-Chasen et al., 2022. We performed some additional analyses of changes to kinematics during drinking, including the number of licks per 10-second trial and the length of individual licks. The results of these are included below. We also calculated the difference in the speed of tongue movement during drinking, which generally decreased and exhibited an increase in variance with nerve block (f-test, p < 0.001). However, we have not included these figures in the main paper as they do not inform us about directionality.

      Author response image 5.

      Left halves of hemi-violins (black) are control and right halves (red) are nerve block for an individual. Horizontal black lines represent the mean and horizontal red lines the median. Results of two-tailed t-test and f-test are indicated by asterisks and crosses, respectively: *,† p < 0.05; **,†† p < 0.01; ***,††† p < 0.001.
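The two tests referenced in the caption compare the group means and variances, respectively. A minimal sketch of how such a comparison could be run on two samples of per-trial kinematic values (the data and function name are hypothetical, not the authors' code):

```python
import numpy as np
from scipy import stats

def compare_conditions(control, block):
    """Two-tailed t-test on the means and variance-ratio (F) test
    on the variances of two independent samples."""
    _, t_p = stats.ttest_ind(control, block)
    f_stat = np.var(control, ddof=1) / np.var(block, ddof=1)
    dfn, dfd = len(control) - 1, len(block) - 1
    cdf = stats.f.cdf(f_stat, dfn, dfd)
    f_p = 2.0 * min(cdf, 1.0 - cdf)  # two-tailed p-value
    return t_p, f_p

rng = np.random.default_rng(1)
# toy "nerve block" condition: same mean speed, larger variance
t_p, f_p = compare_conditions(rng.normal(10.0, 1.0, 200),
                              rng.normal(10.0, 2.0, 200))
```

An increase in variance without a mean shift, as described above for tongue speed, would show up as a significant F-test with a non-significant t-test.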

      (9) In Figures 9 and 10. Are the same neurons being recorded before and after the nerve block? It is unclear if the overall "population" properties are different, or if the properties of individual neurons are changing due to the nerve block.

Yes, the same neurons are being recorded before and after nerve block. Specifically, Figure 9B shows that the properties of many individual neurons do change due to the nerve block. Differences in the overall population response may be attributed to some of the units having reduced or no activity during the nerve block session.

      Additionally, I recommend that the authors improve their introduction and provide more context to their discussion. Please elaborate on what you think are the main conceptual advances in your study, and place them in the context of the existing literature. By my count, there are 26 citations in this paper, 4 of which are self-citations - clearly, this can be improved upon.

Thank you for this suggestion. We have done an extensive rewrite of the Introduction and Discussion, in which we discuss the main conceptual advances of our study and place them in the context of the existing literature.

    1. eLife Assessment

      This paper performs a valuable critical reassessment of anatomical and functional data, proposing a reclassification of the mouse visual cortex in which almost all the higher visual areas are consolidated into a single area V2. However, the evidence supporting this unification is incomplete, as the key experimental observations that the model attempts to reproduce do not accurately reflect the literature. This study will likely be of interest to neuroscientists focused on the mouse visual cortex and the evolution of cortical organization.

    2. Reviewer #1 (Public review):

      Summary:

      In this manuscript, the authors argue that defining higher visual areas (HVAs) based on reversals of retinotopic tuning has led to an over-parcellation of secondary visual cortices. Using retinotopic models, they propose that the HVAs are more parsimoniously mapped as a single area V2, which encircles V1 and exhibits complex retinotopy. They reanalyze functional data to argue that functional differences between HVAs can be explained by retinotopic coverage. Finally, they compare the classification of mouse visual cortex to that of other species to argue that our current classification is inconsistent with those used in other model species.

      Strengths:

      This manuscript is bold and thought-provoking, and is a must-read for mouse visual neuroscientists. The authors take a strong stance on combining all HVAs, with the possible exception of area POR, into a single V2 region. Although I suspect many in the field will find that their proposal goes too far, many will agree that we need to closely examine the assumptions of previous classifications to derive a more accurate areal map. The authors' supporting analyses are clear and bolster their argument. Finally, they make a compelling argument for why the classification is not just semantic, but has ramifications for the design of experiments and analysis of data.

      Weaknesses:

      Although I enjoyed the polemic nature of the manuscript, there are a few issues that weaken their argument.

      (1) Although the authors make a compelling argument that retinotopic reversals are insufficient to define distinct regions, they are less clear about what would constitute convincing evidence for distinct visual regions. They mention that a distinct area V3 has been (correctly) defined in ferrets based on "cytoarchitecture, anatomy, and functional properties", but elsewhere argue that none of these factors are sufficient to parcellate any of the HVAs in mouse cortex, despite some striking differences between HVAs in each of these factors. It would be helpful to clearly define a set of criteria that could be used for classifying distinct regions.

      (2) On a related note, although the authors carry out impressive analyses to show that differences in functional properties between HVAs could be explained by retinotopy, they glossed over some contrary evidence that there are functional differences independent of retinotopy. For example, axon projections to different HVAs originating from a single V1 injection - presumably including neurons with similar retinotopy - exhibit distinct functional properties (Glickfeld LL et al, Nat Neuro, 2013). As another example, interdigitated M2+/M2- patches in V1 show very different HVA connectivity and response properties, again independent of V1 location/retinotopy (Meier AM et al., bioRxiv). One consideration is that the secondary regions might be considered a single V2 with distinct functional modules based on retinotopy and connectivity (e.g., V2LM, V2PM, etc).

(3) Some of the HVAs (such as AL, AM, and LI) appear to have redundant retinotopic coverage with other HVAs, such as LM and PM. Moreover, these regions have typically been found to have higher "hierarchy scores" based on connectivity (Harris JA et al., Nature, 2019; D'Souza RD et al., Nat Comm, 2022), though unfortunately, the hierarchy levels are not completely consistent between studies. Based on existing evidence, there is a reasonable argument to be made for a hybrid classification, in which some regions (e.g., LM, P, PM, and RL) are combined into a single V2 (though see point #2 above) while other HVAs are maintained as independent visual regions, distinct from V2. I don't expect the authors to revise their viewpoint in any way, but a more nuanced discussion of alternative classifications is warranted.

    3. Reviewer #2 (Public review):

      Summary:

The study by Rowley and Sedigh-Sarvestani presents modeling data suggesting that map reversals in mouse lateral extrastriate visual cortex do not coincide with areal borders, but instead represent borders between subregions within a single area V2. The authors propose that such an organization explains the partial coverage in higher-order areas reported by Zhuang et al., (2017). The scheme revisits an organization proposed by Kaas et al., (1989), who interpreted the multiple projection patches traced from V1 in the squirrel lateral extrastriate cortex as subregions within a single area V2. Kaas et al.'s interpretation was challenged by Wang and Burkhalter (2007), who used a combination of topographic mapping of V1 connections and receptive field recordings in mice. Their findings supported a different partitioning scheme in which each projection patch mapped a specific topographic location within single areas, each containing a complete representation of the visual field. The area map of mouse visual cortex by Wang and Burkhalter (2007) has been reproduced by hundreds of studies and has been widely accepted as ground truth (CCF) (Wang et al., 2020) of the layout of rodent cortex. In the meantime, topographic mappings in marmoset and tree shrew visual cortex made a strong case for map reversals in lateral extrastriate cortex, which represent borders between functionally diverse subregions within a single area V2. These findings from non-rodent species raised doubts about whether, during evolution, different mammalian branches have developed diverse partitioning schemes of the cerebral cortex. Rowley and Sedigh-Sarvestani favor a single master plan in which, across evolution, all mammalian species have used a similar blueprint for subdividing the cortex.

      Strengths:

      The story illustrates the enduring strength of science in search of definitive answers.

      Weaknesses:

To me, it remains an open question whether Rowley and Sedigh-Sarvestani have written the final chapter of the saga. A key reason for my reservation is that the area maps used in their model are cherry-picked. The article disregards published complementary maps, which show that the entire visual field is represented in multiple areas (i.e., LM, AL) of lateral extrastriate cortex and that the map reversal between LM and AL coincides precisely with the transition in m2AChR expression and cytoarchitecture (Wang and Burkhalter, 2007; Wang et al., 2011). Evidence from experiments in rats supports the gist of the findings in the mouse visual cortex (Coogan and Burkhalter, 1993).

      (1) The selective use of published evidence, such as the complete visual field representation in higher visual areas of lateral extrastriate cortex (Wang and Burkhalter, 2007; Wang et al., 2011) makes the report more of an opinion piece than an original research article that systematically analyzes the area map of mouse visual cortex we have proposed. No direct evidence is presented for a single area V2 with functionally distinct subregions.

(2) The article misrepresents evidence by commenting that m2AChR expression is mainly associated with the lower field. This is counter to published findings showing that m2AChR spans the entire visual field (Gamanut et al., 2018; Meier et al., 2021). The utility of markers for delineating areal boundaries is discounted, without any evidence, in disregard of evidence for distinct areal patterns in early development (Wang et al., 2011). That markers can be distributed non-uniformly within an area is well known. m2AChR is non-uniformly expressed in mouse V1, LM, and LI (Ji et al., 2015; D'Souza et al., 2019; Meier et al., 2021). Recently, it has been found that the patchy organization within V1 plays a role in the organization of thalamocortical and intracortical networks (Meier et al., 2025). m2AChR-positive patches and m2AChR-negative interpatches organize the functionally distinct ventral and dorsal networks, notably without obvious bias for upper and lower parts of the visual field.

      (3) The study has adopted an area partitioning scheme, which is said to be based on anatomically defined boundaries of V2 (Zhuang et al., 2017). The only anatomical borders used by Zhuang et al. (2017) are those of V1 and barrel cortex, identified by cytochrome oxidase staining. In reality, the partitioning of the visual cortex was based on field sign maps, which are reproduced from Zhuang et al., (2017) in Figure 1A. It is unclear why the maps shown in Figures 2E and 2F differ from those in Figure 1A. It is possible that this is an oversight. But maintaining consistent areal boundaries across experimental conditions that are referenced to the underlying brain structure is critical for assigning modeled projections to areas or sub-regions. This problem is evident in Figure 2F, which is presented as evidence that the modeling approach recapitulates the tracings shown in Figure 3 of Wang and Burkhalter (2007). The dissimilarities between the modeling and tracing results are striking, unlike what is stated in the legend of Figure 2F.

(4) Rowley and Sedigh-Sarvestani find that the partial coverage of the visual field in higher-order areas shown by Zhuang et al. (2017) is recreated by the model. It is important to caution that Zhuang et al.'s (2017) maps were derived from incomplete mappings of the visual field, which were confined to -25 to 35 deg of elevation. This underestimates the coverage we have found in LM and AL. Receptive field mappings show that LM covers 0-90 deg of azimuth and -30 to 80 deg of elevation (Wang and Burkhalter, 2007). AL covers at least 0-90 deg of azimuth and -30 to 50 deg of elevation (Wang and Burkhalter, 2007; Wang et al., 2011). These are important differences. Partial coverage in LM and AL underestimates the size of these areas and may map two projection patches as inputs to subregions of a single area rather than inputs to two separate areas. Complete, or nearly complete, visual representations in LM and AL support that each is a single area. Importantly, both areas are included in a callosal-free zone (Wang and Burkhalter, 2007). The surrounding callosal connections align with the vertical meridian representation. The single map reversal is marked by a transition in m2AChR expression and cytoarchitecture (Wang et al., 2011).

      (5) The statement that the "lack of visual field overlap across areas is suggestive of a lack of hierarchical processing" is predicated on the full acceptance of the mappings by Zhuang et al (2017). Based on the evidence reviewed above, the reclassification of visual areas proposed in Figure 1C seems premature.

      (6) The existence of lateral connections is not unique to rodent cortex and has been described in primates (Felleman and Van Essen, 1991).

(7) Why the mouse and rat extrastriate visual cortices differ from those of many other mammals is unclear. One reason may be that mammals with V2 subregions are strongly binocular.

    4. Reviewer #3 (Public review):

      Summary:

      The authors review published literature and propose that a visual cortical region in the mouse that is widely considered to contain multiple visual areas should be considered a single visual area.

      Strengths:

      The authors point out that relatively new data showing reversals of visual-field sign within known, single visual areas of some species require that a visual field sign change by itself should not be considered evidence for a border between visual areas.

      Weaknesses:

The existing data are not consistent with the authors' proposal to consolidate multiple mouse areas into a single "V2". This is because the existing definition of a single area is that it cannot have redundant representations of the visual field. The authors ignore this requirement, as well as the data and definitions found in published manuscripts, and make an inaccurate claim that "higher order visual areas in the mouse do not have overlapping representations of the visual field". For quantification of the extent of overlap of representations between 11 mouse visual areas, see Figure 6G of Garrett et al. 2014. [Garrett, M.E., Nauhaus, I., Marshel, J.H., and Callaway, E.M. (2014). Topography and areal organization of mouse visual cortex. The Journal of Neuroscience 34, 12587-12600. 10.1523/JNEUROSCI.1124-14.2014.]

    5. Author response:

      eLife Assessment:

This paper performs a valuable critical reassessment of anatomical and functional data, proposing a reclassification of the mouse visual cortex in which almost all the higher visual areas are consolidated into a single area V2. However, the evidence supporting this unification is incomplete, as the key experimental observations that the model attempts to reproduce do not accurately reflect the literature. This study will likely be of interest to neuroscientists focused on the mouse visual cortex and the evolution of cortical organization.

We do not agree, and it is unclear to us which 'key experimental observations' that the model attempts to reproduce fail to accurately reflect the literature. The model reproduces a complete map of the visual field, with overlap in certain regions. When reversals are used to delineate areas, as is the current custom, multiple higher-order areas are generated, and each area has a biased and overlapping visual field coverage. These are the simple outputs of the model, and they are consistent with the published literature, including recent publications such as Garrett et al. 2014 and Zhuang et al. 2017, a paper published in this journal. The area boundaries produced by the model are not identical to area boundaries in the literature, because the model is a simplification.

      Reviewer #1 (Public review):

      Summary:

      In this manuscript, the authors argue that defining higher visual areas (HVAs) based on reversals of retinotopic tuning has led to an over-parcellation of secondary visual cortices. Using retinotopic models, they propose that the HVAs are more parsimoniously mapped as a single area V2, which encircles V1 and exhibits complex retinotopy. They reanalyze functional data to argue that functional differences between HVAs can be explained by retinotopic coverage. Finally, they compare the classification of mouse visual cortex to that of other species to argue that our current classification is inconsistent with those used in other model species.

      Strengths:

      This manuscript is bold and thought-provoking, and is a must-read for mouse visual neuroscientists. The authors take a strong stance on combining all HVAs, with the possible exception of area POR, into a single V2 region. Although I suspect many in the field will find that their proposal goes too far, many will agree that we need to closely examine the assumptions of previous classifications to derive a more accurate areal map. The authors' supporting analyses are clear and bolster their argument. Finally, they make a compelling argument for why the classification is not just semantic, but has ramifications for the design of experiments and analysis of data.

      Weaknesses:

      Although I enjoyed the polemic nature of the manuscript, there are a few issues that weaken their argument.

      (1) Although the authors make a compelling argument that retinotopic reversals are insufficient to define distinct regions, they are less clear about what would constitute convincing evidence for distinct visual regions. They mention that a distinct area V3 has been (correctly) defined in ferrets based on "cytoarchitecture, anatomy, and functional properties", but elsewhere argue that none of these factors are sufficient to parcellate any of the HVAs in mouse cortex, despite some striking differences between HVAs in each of these factors. It would be helpful to clearly define a set of criteria that could be used for classifying distinct regions.

      We agree the revised manuscript would benefit from a clear discussion of updated rules of area delineation in the mouse. In brief, we argue that retinotopy alone should not be used to delineate area boundaries in mice, or any other species. Although there is some evidence for functional property, architecture, and connectivity changes across mouse HVAs, area boundaries continue to be defined primarily, and sometimes solely (Garrett et al., 2014; Juavinett et al., 2018; Zhuang et al., 2017), based on retinotopy. We acknowledge that earlier work (Wang and Burkhalter, 2007; Wang et al., 2011) did consider cytoarchitecture and connectivity alongside retinotopy, but more recent work has shifted to a focus on retinotopy as indicated by the currently accepted criterion for area delineation.  

      As reviewer #2 points out, the present criteria for mouse visual area delineation can be found in the Methods section of: [Garrett, M.E., Nauhaus, I., Marshel, J.H., and Callaway, E.M. (2014)].

      Criterion 1: Each area must contain the same visual field sign at all locations within the area.

      Criterion 2: Each visual area cannot have a redundant representation of visual space.

      Criterion 3: Adjacent areas of the same visual field sign must have a redundant representation.

      Criterion 4: An area's location must be consistently identifiable across experiments.

As discussed in the manuscript, recent evidence in higher-order visual cortex of tree shrews and rats led us to question the universality of these criteria across species. Specifically, tree shrew V2, macaque V2, and marmoset DM exhibit reversals in visual field sign in what are defined as single visual areas. This suggests that Criterion 1 should be updated. It also suggests that Criteria 2 and 3 should be updated, since visual field sign reversals often co-occur with retinotopic redundancies: reversing course in the direction of progression along the visual field can easily lead to coverage of visual field regions already traveled.
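For concreteness, the visual field sign invoked in Criterion 1 is conventionally computed from the angle between the cortical gradients of the azimuth and elevation maps (as described in Garrett et al., 2014). A minimal sketch with toy linear maps (the function name and maps are our own illustration):

```python
import numpy as np

def visual_field_sign(azimuth_map, elevation_map):
    """Sign of the sine of the angle between the azimuth and elevation
    map gradients; the sign flips between mirror and non-mirror
    representations of the visual field."""
    dA_dy, dA_dx = np.gradient(azimuth_map)
    dE_dy, dE_dx = np.gradient(elevation_map)
    angle_A = np.arctan2(dA_dy, dA_dx)
    angle_E = np.arctan2(dE_dy, dE_dx)
    return np.sign(np.sin(angle_E - angle_A))

# toy map: azimuth increases along x, elevation along y -> uniform sign
x, y = np.meshgrid(np.arange(20.0), np.arange(20.0))
sign_map = visual_field_sign(x, y)
```

A reversal in the retinotopic progression flips this sign, which is why field-sign borders and map reversals co-occur; the question at issue here is whether that flip marks an areal border.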

More broadly, we argue that topography is just one of several criteria that should be considered in area delineation. We understand that few visual areas in any species meet all criteria, but we emphasize that topography cannot consistently be the sole satisfied criterion, as it currently appears to be for many mouse HVAs. Inspired by a recent perspective on cortical area delineation (Petersen et al., 2024), we suggest the following rules, which will be worked into the revised version of the manuscript. Topography is a criterion, but it comes after considerations of function, architectonics, and connectivity.

      (1) Function—Cortical areas differ from neighboring areas in their functional properties  

      (2) Architectonics—Cortical areas often exhibit distinctions from neighboring areas in multiple cyto- and myeloarchitectonic markers

      (3) Connectivity—Cortical areas are characterized by a specific set of connectional inputs and outputs from and to other areas

      (4) Topography—Cortical areas often exhibit a distinct topography that balances maximal coverage of the sensory field with minimal redundancy of coverage within an area.

As we discuss in the manuscript, although there are functional, architectonic, and connectivity differences across mouse HVAs, they typically vary smoothly across multiple areas, such that neighboring areas share the same properties and there are no sharp borders. For instance, sharp borders in cytoarchitecture are generally lacking in the mouse HVAs. A notable exception is the clear and sharp change in m2AChR expression that occurs between LM and AL (Wang et al., 2011).

      (2) On a related note, although the authors carry out impressive analyses to show that differences in functional properties between HVAs could be explained by retinotopy, they glossed over some contrary evidence that there are functional differences independent of retinotopy. For example, axon projections to different HVAs originating from a single V1 injection - presumably including neurons with similar retinotopy - exhibit distinct functional properties (Glickfeld LL et al, Nat Neuro, 2013). As another example, interdigitated M2+/M2- patches in V1 show very different HVA connectivity and response properties, again independent of V1 location/retinotopy (Meier AM et al., bioRxiv). One consideration is that the secondary regions might be considered a single V2 with distinct functional modules based on retinotopy and connectivity (e.g., V2LM, V2PM, etc).

Thank you for the correction. We will revise the text to discuss (Glickfeld et al., 2013), as it remains some of the strongest evidence in favor of retinotopy-independent functional specialization of mouse HVAs. However, one caveat of this study is the size of the V1 injection that is the source of axons studied in the HVAs. As apparent in Figure 1B, the large injection covers nearly a quarter of V1. It is worth noting that (Han et al., 2018) found, using single-cell reconstructions and MAPseq, that the majority of V1 neurons project to multiple nearby HVA targets. In this experiment, the tracing does not suffer from the problem of spreading over V1's retinotopic map, and it suggests that presumably retinotopically matched locations in each area receive shared inputs from the V1 population rather than a distinct but spatially interspersed subset. In fact, the authors conclude: "Interestingly, the location of the cell body within V1 was predictive of projection target for some recipient areas (Extended Data Fig. 8). Given the retinotopic organization of V1, this suggests that visual information from different parts of visual field may be preferentially distributed to specific target areas, which is consistent with recent findings (Zhuang et al., 2017)". Given an injection covering a large portion of the retinotopic map, and the fact that feed-forward projections from V1 to HVAs carry coarse retinotopy, it is difficult to prove that functional specializations noted in the HVA axons are retinotopy-independent. This would require measurement of receptive field location in the axonal boutons, which the authors did not perform (possibly because the SNR of calcium indicators prevented such measurements at the time).

Another option would be to show that adjacent neurons in V1 that project to far-apart HVAs exhibit distinct functional properties, on par with differences exhibited by neurons in very different parts of V1 due to retinotopy. In other words, the functional specificity of V1 inputs to HVAs at retinotopically identical locations would be of the same order as that which might be gained by retinotopic biases. To our knowledge, such a study has not been conducted, so we have decided to collect the data in collaboration with the Allen Institute. As part of the Allen Institute's pioneering OpenScope project, we will make careful two-photon and electrophysiology measurements of functional properties, including receptive field location, SF, and TF, in different parts of the V1 retinotopic map. Pairing these data with existing Allen Institute datasets on functional properties of neurons in the HVAs will allow us to rule in, or rule out, our hypotheses regarding retinotopy as the source of functional specialization in mouse HVAs. We will update the discussion in the revised manuscript to better reflect the need for additional evidence to support or refute our proposal.

Meier AM et al., bioRxiv 2025 (Meier et al., 2025) was published after our submission, but we are thankful to the reviewers for guiding our attention to this timely paper. Given the recent findings on the influence of locomotion on rodent and primate visual cortex, it is very exciting to see clearly specialized circuits for processing self-generated visual motion in V1. However, it is difficult to rule out the role of retinotopy, as the HVAs (LM, AL, RL) participating in the M2+ network, which is less responsive to self-generated visual motion, exhibit a bias for the medial portion of the visual field, while the HVA (PM) involved in the M2- network, which is responsive to self-generated visual motion, exhibits a bias for the lateral (or peripheral) parts of the visual field. For instance, a peripheral bias in area PM has been shown using retrograde tracing as in Figure 6 of (Morimoto et al., 2021), single-cell anterograde tracing as in Extended Data Figure 8 of (Han et al., 2018), and functional imaging studies (Zhuang et al., 2017). Recent findings in the marmoset also point to visual circuits in the peripheral, but not central, visual field being significantly modulated by self-generated movements (Rowley et al., 2024).

However, a visual field bias in area PM, which selectively receives M2- inputs, is at odds with the clear presence of modular M2+/M2- patches across the entire map of V1 (Ji et al., 2015). One possibility supported by existing data is that neurons in M2- patches, as well as those in M2+ patches, in the central representation of V1 make fewer or significantly weaker connections with area PM compared to areas LM, AL, and RL. Evidence to the contrary would support retinotopy-independent and functionally specialized inputs from V1 to HVAs.

(3) Some of the HVAs (such as AL, AM, and LI) appear to have redundant retinotopic coverage with other HVAs, such as LM and PM. Moreover, these regions have typically been found to have higher "hierarchy scores" based on connectivity (Harris JA et al., Nature, 2019; D'Souza RD et al., Nat Comm, 2022), though unfortunately, the hierarchy levels are not completely consistent between studies. Based on existing evidence, there is a reasonable argument to be made for a hybrid classification, in which some regions (e.g., LM, P, PM, and RL) are combined into a single V2 (though see point #2 above) while other HVAs are maintained as independent visual regions, distinct from V2. I don't expect the authors to revise their viewpoint in any way, but a more nuanced discussion of alternative classifications is warranted.

We understand that such a proposal, combining a subset of areas with matched field sign (LM, P, PM, and RL), would be less extreme and better received by the community. This would create a V2 with a smooth map, without reversals or significant redundant retinotopic coverage. However, the intuition we have built from our modeling studies suggests that both these areas and the other, smaller areas with negative field sign (AL, AM, LI) are a byproduct of a complex single map of the visual field that exhibits reversals as it contorts around the triangular and tear-shaped boundaries of V1. In other words, we believe the redundant coverage and field-sign changes/reversals are a byproduct of a single secondary visual field in V2 constrained by the cortical dimensions of V1. That being said, we understand that area delineations are in part based on a consensus of the community. We will therefore continue to discuss our proposal with community members, and we will incorporate new evidence supporting or refuting our hypothesis before we submit our revised manuscript.

      Reviewer #2 (Public review):

      Summary:

      The study by Rowley and Sedigh-Sarvestani presents modeling data suggesting that map reversals in mouse lateral extrastriate visual cortex do not coincide with areal borders, but instead represent borders between subregions within a single area V2. The authors propose that such an organization explains the partial coverage in higher-order areas reported by Zhuang et al., (2017). The scheme revisits an organization proposed by Kaas et al., (1989), who interpreted the multiple projection patches traced from V1 in the squirrel lateral extrastriate cortex as subregions within a single area V2. Kaas et al.'s interpretation was challenged by Wang and Burkhalter (2007), who used a combination of topographic mapping of V1 connections and receptive field recordings in mice. Their findings supported a different partitioning scheme in which each projection patch mapped a specific topographic location within single areas, each containing a complete representation of the visual field. The area map of mouse visual cortex by Wang and Burkhalter (2007) has been reproduced by hundreds of studies and has been widely accepted as ground truth (CCF) (Wang et al., 2020) of the layout of rodent cortex. In the meantime, topographic mappings in marmoset and tree shrew visual cortex made a strong case for map reversals in lateral extrastriate cortex, which represent borders between functionally diverse subregions within a single area V2. These findings from non-rodent species raised doubts about whether during evolution, different mammalian branches have developed diverse partitioning schemes of the cerebral cortex. Rowley and Sedigh-Sarvestani favor a single master plan in which, across evolution, all mammalian species have used a similar blueprint for subdividing the cortex.

      Strengths:

      The story illustrates the enduring strength of science in search of definitive answers.

      Weaknesses:

      To me, it remains an open question whether Rowley and Sedigh-Sarvestani have written the final chapter of the saga. A key reason for my reservation is that the area maps used in their model are cherry-picked. The article disregards published complementary maps, which show that the entire visual field is represented in multiple areas (i.e. LM, AL) of lateral extrastriate cortex and that the map reversal between LM and AL coincides precisely with the transition in m2AChR expression and cytoarchitecture (Wang and Burkhalter, 2007; Wang et al., 2011). Evidence from experiments in rats supports the gist of the findings in the mouse visual cortex (Coogan and Burkhalter, 1993).

      We would not claim to have written the final chapter of the saga. Our goal was to add an important piece of new evidence to the discussion of area delineations across species. We believe this new evidence supports our unification hypothesis.  We also believe that there are several missing pieces of data that could support or refute our hypothesis. We have begun a collaboration to collect some of this data.  

      (1) The selective use of published evidence, such as the complete visual field representation in higher visual areas of lateral extrastriate cortex (Wang and Burkhalter, 2007; Wang et al., 2011) makes the report more of an opinion piece than an original research article that systematically analyzes the area map of mouse visual cortex we have proposed. No direct evidence is presented for a single area V2 with functionally distinct subregions.

      This brings up a nuanced issue regarding visual field coverage. Wang & Burkhalter, 2007 Figure 6 shows the receptive fields of sample neurons in area LM that cover the full range between 0 and 90 degrees of azimuth and -40 to 80 degrees of elevation, which essentially matches the visual field coverage in V1. However, we do not know whether these neurons are representative of most neurons in area LM. In other words, while these single-cell recordings along selected contours in cortex show the span of the visual field coverage, they may not be able to capture crucial information about its shape, missing regions of the visual field, or potential bias. To mitigate this, visual field maps measured with electrophysiology are commonly produced by even sampling across the two dimensions of the visual area, either by moving a single electrode along a grid pattern (e.g. (Manger et al., 2002)), or using a grid-like multi-electrode probe (e.g. (Yu et al., 2020)). This was not carried out in either Wang & Burkhalter 2007 or Wang et al. 2011. Even sampling of cortical space is time consuming and difficult with electrophysiology, but efficient with functional imaging. Therefore, despite the likely under-estimation of visual field coverage, imaging techniques are valuable in that they can efficiently exhibit not only the span of the visual field of a cortical region, but also its shape and bias.

      Multiple functional imaging studies that simultaneously measure visual field coverage in V1 and HVAs report a bias in the coverage of HVAs, relative to that in V1 (Garrett et al., 2014; Juavinett et al., 2018; Zhuang et al., 2017). While functional imaging will likely underestimate receptive fields compared to electrophysiology, the consistent observation of an orderly bias for distinct parts of the visual field across the HVAs suggests that at least some of the HVAs do not have full and uniform coverage of the visual field comparable to that in V1. For instance, (Garrett et al., 2014) show that the total coverage in HVAs, when compared to V1, is typically less than half (Figure 6D) and often irregularly shaped.

      Careful measurements of single-cell receptive fields, using mesoscopic two-photon imaging across the HVAs would settle this question. As reviewer #1 points out, this is technically feasible, though no dataset of this kind exists to our knowledge.

      (2) The article misrepresents evidence by commenting that m2AChR expression is mainly associated with the lower field. This is counter to published findings showing that m2AChR spans across the entire visual field (Gamanut et al., 2018; Meier et al., 2021). The utility of markers for delineating areal boundaries is discounted, without any evidence, in disregard of evidence for distinct areal patterns in early development (Wang et al., 2011). Pointing out that markers can be distributed non-uniformly within an area is a familiar point. m2AChR is non-uniformly expressed in mouse V1, LM and LI (Ji et al., 2015; D'Souza et al., 2019; Meier et al., 2021). Recently, it has been found that the patchy organization within V1 plays a role in the organization of thalamocortical and intracortical networks (Meier et al., 2025). m2AChR-positive patches and m2AChR-negative interpatches organize the functionally distinct ventral and dorsal networks, notably without obvious bias for upper and lower parts of the visual field.

      We wrote that "Future work showed boundaries in labeling of histological markers such as SMI-32 and m2AChR labeling, but such changes mostly delineated area LM/AL (Wang et al., 2011) and seemed to be correlated with the representation of the lower visual field." The latter statement regarding the representation of the lower visual field is directly referencing the data in Figure 1 of (Wang et al., 2011), which is titled "Figure 1: LM/AL border identified by the transition of m2AChR expression coincides with receptive field recordings from lower visual field." Similar to Wang et al., we were simply referring to the fact that the border of area LM/AL co-exhibits a change in m2AChR expression as well as lower-visual field representation.

      (3) The study has adopted an area partitioning scheme, which is said to be based on anatomically defined boundaries of V2 (Zhuang et al., 2017). The only anatomical borders used by Zhuang et al. (2017) are those of V1 and barrel cortex, identified by cytochrome oxidase staining. In reality, the partitioning of the visual cortex was based on field sign maps, which are reproduced from Zhuang et al., (2017) in Figure 1A. It is unclear why the maps shown in Figures 2E and 2F differ from those in Figure 1A. It is possible that this is an oversight. But maintaining consistent areal boundaries across experimental conditions that are referenced to the underlying brain structure is critical for assigning modeled projections to areas or sub-regions. This problem is evident in Figure 2F, which is presented as evidence that the modeling approach recapitulates the tracings shown in Figure 3 of Wang and Burkhalter (2007). The dissimilarities between the modeling and tracing results are striking, unlike what is stated in the legend of Figure 2F.

      Thanks for this correction. By "anatomical boundaries of higher visual cortex", we meant the cortical boundary between V1 and higher order visual areas on one end, and the outer edge of the envelope that defines the functional boundaries of the HVAs in cortical space on the other (Zhuang et al., 2017). The reviewer is correct that we should have referred to these as functional boundaries. The word 'anatomical' was meant to refer to cortical space, rather than visual field space.

      More generally though, there is no disagreement between the partitioning of visual cortex in Figure 1 and 2. Rather, the partitioning in Figure 1 is directly taken from Zhuang et al., (2017) whereas those in Figure 2 are produced by mathematical model simulation. As such, one would not expect identical areal boundaries between Figure 2 and Figure 1. What we aimed to communicate with our modeling results is that a single area can exhibit multiple visual field reversals and retinotopic redundancies if it is constrained to fit around V1 and cover a visual field approximately matched to the visual field coverage in V1. We defined this area explicitly as a single area with a single visual field (boundaries shown in Figure 2A). So the point of our simulation is to show that even an explicitly defined single area can appear as multiple areas if it is constrained by the shape of mouse V1, and if visual field reversals are used to indicate areal boundaries. As in most models, different initial conditions and parameters produce a complex visual field which will appear as multiple HVAs when delineated by areal boundaries. What is consistent, however, is the existence of a complex single visual field that appears as multiple HVAs with partially overlapping coverage.

      Similarly, we would not expect a simple model to exactly reproduce the multi-color tracer injections in Wang and Burkhalter (2007). However, we find it quite compelling that the model can produce multiple groups of multi-colored axonal projections beyond V1 that can appear as multiple areas each with their own map of the visual field using current criteria, when the model is explicitly designed to map a single visual field. We will explain the results of the model, and their implications, better in the revised manuscript.

      (4) Rowley and Sedigh-Sarvestani find that the partial coverage of the visual field in higher order areas shown by Zhuang et al (2017) is recreated by the model. It is important to caution that Zhuang et al's (2017) maps were derived from incomplete mappings of the visual field, which was confined to -25 to 35 deg of elevation. This underestimates the coverage we have found in LM and AL. Receptive field mappings show that LM covers 0-90 deg of azimuth and -30 to 80 deg of elevation (Wang and Burkhalter, 2007). AL covers at least 0-90 deg of azimuth and -30 to 50 deg of elevation (Wang and Burkhalter, 2007; Wang et al., 2011). These are important differences. Partial coverage in LM and AL underestimates the size of these areas and may map two projection patches as inputs to subregions of a single area rather than inputs to two separate areas. Complete, or nearly complete, visual representations in LM and AL support that each is a single area. Importantly, both areas are included in a callosal-free zone (Wang and Burkhalter, 2007). The surrounding callosal connections align with the vertical meridian representation. The single map reversal is marked by a transition in m2AChR expression and cytoarchitecture (Wang et al., 2011).

      This is a good point. We do not expect that expanding the coverage of V1 will change the results of the model significantly. However, for the revised manuscript, we will update V1 coverage to be accurate, repeat our simulations, and report the results.  

      (5) The statement that the "lack of visual field overlap across areas is suggestive of a lack of hierarchical processing" is predicated on the full acceptance of the mappings by Zhuang et al (2017). Based on the evidence reviewed above, the reclassification of visual areas proposed in Figure 1C seems premature.

      The reviewer is correct. In the revised manuscript, we will be careful to distinguish bias in visual field coverage across areas from presence or lack of visual field overlap.  

      (6) The existence of lateral connections is not unique to rodent cortex and has been described in primates (Felleman and Van Essen, 1991).

      (7) Why the mouse and rat extrastriate visual cortex differ from those of many other mammals is unclear. One reason may be that mammals with V2 subregions are strongly binocular.

      This is an interesting suggestion, and careful visual topography data from rabbits and other lateral eyed animals would help to evaluate it. For what it’s worth, tree shrews are lateral eyed animals with only 50 degrees of binocular visual field and also show V2 subregions.

      Reviewer #3 (Public review):

      Summary:

      The authors review published literature and propose that a visual cortical region in the mouse that is widely considered to contain multiple visual areas should be considered a single visual area.

      Strengths:

      The authors point out that relatively new data showing reversals of visual-field sign within known, single visual areas of some species require that a visual field sign change by itself should not be considered evidence for a border between visual areas.

      Weaknesses:

      The existing data are not consistent with the authors' proposal to consolidate multiple mouse areas into a single "V2". This is because the existing definition of a single area is that it cannot have redundant representations of the visual field. The authors ignore this requirement, as well as the data and definitions found in published manuscripts, and make an inaccurate claim that "higher order visual areas in the mouse do not have overlapping representations of the visual field". For quantification of the extent of overlap of representations between 11 mouse visual areas, see Figure 6G of Garrett et al. 2014. [Garrett, M.E., Nauhaus, I., Marshel, J.H., and Callaway, E.M. (2014). Topography and areal organization of mouse visual cortex. The Journal of Neuroscience 34, 12587-12600. 10.1523/JNEUROSCI.1124-14.2014.]

      Thank you for this correction; we admit we should have chosen our words more carefully. In the revised manuscript, we will emphasize that higher order visual areas in the mouse do have some overlap in their representations but also exhibit bias in their coverage. This is consistent with our proposal, and in fact our model simulations in Figure 2E also show overlapping representations along with differential bias in coverage. However, we also note that Figure 6 of Garrett et al. 2014 provides several pieces of evidence in support of our proposal that higher order areas are sub-regions of a single area V2. Specifically, the visual field coverage of each area is significantly less than that in V1 (Garrett et al. 2014, Figure 6D). While the imaging methods used in Garrett et al. likely under-estimate receptive fields, one would assume they would similarly impact measurements of coverage in V1 and HVAs. Secondly, each area exhibits a bias towards a different part of the visual field (Figure 6C and E); this bias is distinct for different areas but proceeds in a retinotopic manner around V1, with adjacent areas exhibiting biases for nearby regions of the visual field (Figure 6E). Thus, the biases in the visual field coverage across HVAs appear to be related and not independent of each other. As we show in our modeling and in Figure 2, such orderly and inter-related biases can be created from a single visual field constrained to share a border with mouse V1.

      With regards to the existing definition of a single area: we did not ignore the requirement that single areas cannot have redundant representations of the visual field. Rather, we believe that this requirement should be relaxed considering new evidence collected from other species, where multiple visual field reversals exist within the same visual area. We understand this issue is nuanced and was not made clear in the original submission.  

      In the revised manuscript, we will clarify that visual field reversals often exhibit redundant retinotopic representation on either side of the reversal, and that our argument that multiple reversals can exist within a single visual area in the mouse is an argument that some retinotopic redundancy can exist within single visual areas. Such a re-classification would align how we define visual areas in mice with existing classifications in tree shrews, ferrets, cats, and primates, all of which have secondary visual areas with complex retinotopic maps exhibiting multiple reversals and redundant retinotopic coverage.

    1. Author response:

      We sincerely thank the reviewers for the time and care they have invested in evaluating our manuscript. We greatly appreciate their thoughtful feedback, which highlights both the strengths and the areas where the work can be improved. We recognize the importance of the concerns raised, particularly regarding the TMS analyses and interpretation, as well as aspects of the manuscript structure and clarity. The authors are committed to transparency and a rigorous scientific process, and we will therefore carefully consider all reviewer comments. In the coming months, we will revise the manuscript to incorporate additional analyses, provide clearer methodological detail, and refine the interpretation of the stimulation results.

    2. Reviewer #4 (Public review):

      Summary:

      Several behavioral experiments and one TMS experiment were performed to examine adaptation to room reverberation for speech intelligibility in noise. This is an important topic that has been extensively studied by several groups over the years. And the study is unique in that it examines one candidate brain area, dlPFC, potentially involved in this learning, and finds that disrupting this area by TMS results in a reduction in the learning. The behavioral conditions are in many ways similar to previous studies. However, they find results that do not match previous results (e.g., performance in anechoic condition is worse than in reverberation), making it difficult to assess the validity of the methods used. One unique aspect of the behavioral experiments is that Ambisonics was used to simulate the spaces, while headphone simulation was mostly used previously. The main behavioral experiment was performed by interleaving 3 different rooms and measuring speech intelligibility as a function of the number of words preceding the target in a given room on a given trial. The findings are that performance improves on the time scale of seconds (as the number of words preceding the target increases), but also on a much larger time scale of tens to hundreds of seconds (corresponding to multiple trials), while for some listeners it is degraded for the first couple of trials. The study also finds that the performance is best in the room that matches the T60 most commonly observed in everyday environments. These are potentially interesting results. However, there are issues with the design of the study and analysis methods that make it difficult to verify the conclusions based on the data.

      Strengths:

      (1) Analysis of the adaptation to reverberation on multiple time scales, for multiple reverberant and anechoic environments, and also considering contextual effects of one environment interleaved with the other two environments.

      (2) TMS experiment showing reduction of some of the learning effects by temporarily disabling the dlPFC.

      Weaknesses:

      While the study examines the adaptation for different carrier lengths, it keeps multiple characteristics (mainly talker voice and location) fixed in addition to reverberation. Therefore, it is possible that the subjects adapt to other aspects of the stimuli, not just to reverberation. A condition in which only reverberation would switch for the target would allow the authors to separate these confounding alternatives. Now, the authors try to address the concerns by indirect evidence/analyses. However, the evidence provided does not appear sufficient.

      The authors use terms that are either not defined or that seem to be defined incorrectly. The main issue then is the results, which are based on analysis of what the authors call d', Hit Rate, and Final Hit Rate. First of all, they randomly switch between these measures. Second, it's not clear how they define them, given that their responses are either 4-alternative or 8-alternative forced choice. d', Hit Rate, and False Alarm Rate are defined in signal detection theory for the detection of the presence of a target. It can be easily extended to a 2-alternative forced choice. But how does one define a Hit, and, in particular, a False Alarm, in a 4/8-alternative task? The authors do not state how they did it, and without that, the computation of d' based on HR and FAR is dubious. Also, what the authors call Hit Rate is presumably the percent correct performance (PCC), but even that is not clear. Then they use FHR and act as if this were the asymptotic value of their HR, even though in many conditions their learning has not ended, and randomly define a variable of ±10 from FHR, which must produce different results depending on whether the asymptote was reached or not. Other examples of strange usage of terms: they talk about "global likelihood learning" (L426) without a definition or a reference, or about "cumulative hit rate" (L1738), where it is not clear to me what "cumulative" means there.

      There are not enough acoustic details about the stimuli. The authors find that reverberant performance is overall better than anechoic in 2 rooms. This goes contrary to previous results. And the authors do not provide enough acoustic details to establish that this is not an artefact of how the stimuli were normalized (e.g., what were the total signal and noise levels at the two ears in the anechoic and reverberant conditions?).

      There are some concerns about the use of statistics. For example, the authors perform two-way ANOVA (L724-728) in which one factor is room, but that factor does not have the same 3 levels across the two levels of the other factor. Also, in some comparisons, they randomly select 11 out of 22 subjects even though appropriate test correct for such imbalances without adding additional randomness of whether the 11 selected subjects happened to be the good or the bad ones.

      Details of the experiments are not sufficiently described in the methods (L194-205) to be able to follow what was done. It should be stated that 1 main experiment was performed using 3 rooms, and that 3 follow-ups were done on a new set of subjects, each with the room swapped.

    3. Reviewer #3 (Public review):

      Summary:

      This manuscript presents a well-designed and insightful behavioural study examining human adaptation to room acoustics, building on prior work by Brandewie & Zahorik. The psychophysical results are convincing and add incremental but meaningful knowledge to our understanding of reverberation learning. However, I find the transcranial magnetic stimulation (TMS) component to be over-interpreted. The TMS protocol, while interesting, lacks sufficient anatomical specificity and mechanistic explanation to support the strong claims made regarding a unique role of the dorsolateral prefrontal cortex (dlPFC) in this learning process. More cautious interpretation is warranted, especially given the modest statistical effects, the fact that the main TMS result of interest is a null result, the imprecise targeting of dlPFC (which is not validated), and the lack of knowledge about the timescale of TMS effects in relation to the behavioural task. I recommend revising the manuscript to shift emphasis toward the stronger behavioural findings and to present a more measured and transparent discussion of the TMS results and their limitations.

      Strengths:

      (1) Well-designed acoustical stimuli and psychophysical task.

      (2) Comparisons across room combinations are well conducted.

      (3) The virtual acoustic environment is impressive and applied well here.

      (4) A timely study with interesting behavioural results.

      Weaknesses:

      (1) Lack of hypotheses, particularly for TMS.

      (2) Lack of evidence for targeting TMS in [brain] space and time.

      (3) The most interesting effect of TMS is a null result, compared to a weak statistical effect for "meta adaptation".

    4. Reviewer #2 (Public review):

      Summary:

      This study investigated how listeners adapt to and utilize statistical properties of different acoustic spaces to improve speech perception. The researchers used repetitive TMS to perturb neural activity in DLPFC, inhibiting statistical learning compared to sham conditions. The authors also identified the most effective room types for the effective use of reverberations in speech in noise perception, with regular human-built environments bringing greater benefits than modified rooms with lower or higher reverberation times.

      Strengths:

      The introduction and discussion sections of the paper are very interesting and highlight the importance of the current study, particularly with regard to the use of ecologically valid stimuli in investigating statistical learning. However, parts could be condensed. TMS parameters and task conditions were well-considered and clearly explained.

      Weaknesses:

      (1) The Results section is difficult to follow and includes a lot of detail, which could be removed. As such, it presents as confusing and speculative at times.

      (2) The hypotheses for the study are not clearly stated.

      (3) Multiple statistical models are implemented without correcting the alpha value. This leaves the analyses vulnerable to Type I errors.

      (4) It is confusing to understand how many discrete experiments are included in the study as a whole, and how many participants are involved in each experiment.

      (5) The TMS study is significantly underpowered and not robust. Sample size calculations need further explanation (effect sizes appear to be based on behavioural studies?). I would caution an exploratory presentation of these data, and calculate a posteriori the full sample size based on effect sizes observed in the TMS data.

    5. Reviewer #1 (Public review):

      Summary:

      This manuscript describes the results of an experiment that demonstrates a disruption in statistical learning of room acoustics when transcranial magnetic stimulation (TMS) is applied to the dorsolateral prefrontal cortex in human listeners. The work uses a testing paradigm designed by the Zahorik group that has shown improvement in speech understanding as a function of listening exposure time in a room, presumably through a mechanism of statistical learning. The manuscript is comprehensive and clear, with detailed figures that show key results. Overall, this work provides an explanation for the mechanisms that support such statistical learning of room acoustics and, therefore, represents a major advancement for the field.

      Strengths:

      The primary strength of the work is its simple and clear result, that the dorsolateral prefrontal cortex is involved in human room acoustic learning.

      Weaknesses:

      A potential weakness of this work is that the manuscript is quite lengthy and complex.

    6. eLife Assessment:

      This study addresses valuable questions about the neural mechanisms underlying statistical learning of room acoustics, combining robust behavioral measures with non-invasive brain stimulation. The behavioral findings are strong and extend previous work in psychoacoustics, but the TMS results are modest, with methodological limitations and over-interpretation that weaken the mechanistic conclusions. The strength of evidence is therefore incomplete, and a more cautious interpretation of the stimulation findings, alongside strengthened analyses, would improve the manuscript.

    1. Length TBD based on project pitch: remix or re-imagine a paper you've written for class in a creative format (playbill, skit, poem, sketch or painting, photo essay, trailer

      super excited for this!! and I think this is a great idea as a final project. it'll be easier to think about what I want to do during the semester.

    2. What is the history of empathy from a philosophical and historical perspective in the EuroAmerican tradition and beyond

      I think this question and the following questions are really important to note; they're questions that are not normally asked when thinking about musical theater.

    1. However, tangible changes in the operation of businesses and governments have not been dramatic, especially compared with the scale and urgency of the issue.

      !!!!!

    1. upper right corner

      I'm unsure of how to add this into my essay, especially the signature. I'll be sure to ask about it before I submit my essays.


    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      We thank the reviewers for their careful assessment and enthusiastic appreciation of our work.

      __Reviewer #1 (Evidence, reproducibility and clarity (Required)):__

      In this article, Thomas et al. use a super-resolution approach in living cells to track proteins involved in the fusion event of sexual reproduction. They study the spatial organization and dynamics of the actin fusion focus, a key structure in cell-cell fusion in Schizosaccharomyces pombe. The researchers have adapted a high-precision centroid mapping method using three-color live-cell epifluorescence imaging to map the dynamic architecture of the fusion focus during yeast mating. The approach relies on tracking the centroid of fluorescence signals for proteins of interest, spatially referenced to Myo52-mScarlet-I (as a robust marker) and temporally referenced using a weakly fluorescent cytosolic protein (mRaspberry), which redistributes strongly upon fusion. The trajectories of five key proteins, including markers of polarity, cytoskeleton, exocytosis and membrane fusion, were compared to Myo52 over a 75-minute window spanning fusion. Their observations indicate that secretory vesicles maintain a constant distance from the plasma membrane whereas the actin network compacts. Most importantly, they discovered a positive feedback mechanism in which myosin V (Myo52) transports the formin Fus1 along pre-existing actin filaments, thereby enhancing aster compaction.

      This article is well written, the arguments are convincing and the assertions are balanced. The centroid tracking method has been clearly and solidly controlled. Overall, this is a solid addition to our understanding of cytoskeletal organization in cell fusion.

      Major comments: No major comment.

      Minor comments: Page 8, the authors wrote "Upon depletion of Myo52, Ypt3 did not accumulate at the fusion focus (Figure 3C). A thin, wide localization at the fusion site was occasionally observed (Figure 3C, Movies S3)": Is there a quantification of this accumulation in the mutant?

      We will provide the requested quantification. The localization is very faint, so we are not sure that quantification will capture this faithfully, but we will try.

      The frame rate of the movies could be improved for reader comfort: for example, movie S6 lasts 0.5 sec.

      We agree that the frame rates of movies S3 and S6 could be improved. We will provide them at a slower frame rate.

      Reviewer #1 (Significance (Required)):

      This study represents a conceptual and technical breakthrough in our understanding of cytoskeletal organization during cell-cell fusion. The authors introduce a high-precision, three-color live-cell centroid mapping method capable of resolving the spatio-temporal dynamics of protein complexes at the nanometer scale in living yeast cells. This methodological innovation enables systematic and quantitative mapping of the dynamic architecture of proteins at the cell fusion site, making it a powerful live-cell imaging approach. However, it is important to keep in mind that the increased precision achieved through averaging comes at the expense of overlooking atypical or outlier behaviors. The authors discovered a myosin V-dependent mechanism for the recruitment of formin that leads to actin aster compaction. The identification of Myo52 (myosin V) as a transporter of Fus1 (formin) to the fusion focus adds a new layer to our understanding of how polarized actin structures are generated and maintained during developmentally regulated processes such as mating.

      Previous studies have shown the importance of formins and myosins during fusion, but this paper provides a quantitative and dynamic mapping that demonstrates how Myo52 modulates Fus1 positioning in living cells. This provides a better understanding of actin organization, beyond what has been demonstrated by fixed-cell imaging or genetic perturbation.

      Audience: Cell biologists working on actin dynamics, cell-cell fusion and intracellular transport. Scientists involved in live-cell imaging, single particle tracking and cytoskeleton modeling.

      I have expertise in live-cell microscopy, image analysis, fungal growth machinery and actin organization.

      We thank the reviewer for their appreciation of our work.

      **Reviewer #2 (Evidence, reproducibility and clarity (Required)):** A three-color imaging approach to use centroid tracking is employed to determine the high-resolution position over time of tagged actin fusion focus proteins during mating in fission yeast. In particular, the position of different protein components (tagged in a 3rd color) were determined in relation to the position (and axis) of the molecular motor Myo52, which is tagged with two different colors in the mating cells. Furthermore, time is normalized by the rapid diffusion of a weak fluorescent protein probe (mRaspberry) from one cell to the other upon fusion pore opening. From this approach multiple important mechanistic insights were determined for the compaction of fusion focus proteins during mating, including the general compaction of different components as fusion proceeds with different proteins having specific stereotypical behaviors that indicate underlying molecular insights. For example, secretory vesicles remain a constant distance from the plasma membrane, whereas the formin Fus1 rapidly accumulates at the fusion focus in a Myo52-dependent manner.

      I have minor suggestions/points: (1) Figure 1, for clarity it would be helpful if the cells shown in B were in the same orientation as the cartoon cells shown in A. Similarly, it would be helpful to have the orientation shown in D the same as the data that is subsequently presented in the rest of the manuscript (such as Figure 2) where time is on the X axis and distance (position) is on the Y axis.

      We have rotated each image in panel B by 180° to match the cartoon in A. For panel D, we are not sure what the reviewer would like. This panel shows the coordinates of each Myo52 position, whereas Figure 2 shows oriented distance (on the Y axis) over time (on the X axis). Perhaps the reviewer suggests that we should display panel D rotated onto the Y axis rather than the X axis. We feel that this would not add clarity and prefer to keep it as is.

      (2) Figure 2, for clarity useful to introduce how the position of Myo52 changes over time with respect to the fusion site (plasma membrane) earlier, and then come back to the positions of different proteins with respect to Myo52 shown in 2E. Currently the authors discuss this point after introducing Figure 2E, but better for the reader to have this in mind beforehand.

      We have added a sentence at the start of the section describing Figure 2, pointing out that the static appearance of Myo52 is due to it being used as reference, but that in reality, it moves relative to the plasma membrane: “Because Myo52 is the reference, its trace is flat, even though in reality Myo52 also moves relative to other proteins and the plasma membrane (see Figure 2E)”. This change is already in the text.

      (3) First sentence of page 8 "..., peaked at fusion time and sharply dropped post-fusion (Figure S3)." Figure S3 should be cited so that the reader knows where this data is presented.

      Thanks, we have added the missing figure reference to the text.

      (4) Figure 3D-H, why is Exo70 used as a marker for vesicles instead of Ypt3 for these experiments? Exo70 seems to have a more confusing localization than Ypt3 (3C vs 3D), which seems to complicate interpretations.

      There are two main reasons for this choice. First, the GFP-Ypt3 fluorescence intensity is lower than that of Exo70-GFP, which makes analysis more difficult and less reliable. Second, in contrast to Exo70-GFP, where the endogenous gene is tagged at the native genomic locus, GFP-Ypt3 is expressed as an additional copy alongside the endogenous untagged Ypt3. Although GFP-Ypt3 was reported to be fully functional as it can complement the lethality of a ypt3 temperature-sensitive mutant (Cheng et al, MBoC 2002), its expression levels are non-native and we do not have a strain in which ypt3 is tagged at the 5’ end at the native genomic locus. For these reasons, we preferred to examine in detail the localization of Exo70. We do not think it complicates interpretations. Exo70 faithfully decorates vesicles and exhibits the same localization as Ypt3 in WT cells (see Figure 2D) and in myo52-AID (see Figure 3C-D). We realize that our text was a bit confusing as we contrasted the localization of Exo70 and Ypt3, when all we wanted to state was that the Exo70-GFP signal is stronger. We have corrected this in the text.

      (5) Page 10, end of first paragraph, "We conclude...and promotes separation of Myo52 from the vesicles." This is an interesting hypothesis/interpretation that is consistent with the spatial-temporal organization of vesicles and the compacting fusion focus, but the underlying molecular mechanism has not been concluded.

      This is an interpretation that is in line with our data. A firm conclusion that the organization of the actin fusion focus imposes a steric barrier to bulk vesicle entry will require in vitro reconstitution of an actin aster driven by formin-myosin V feedback and addition of myosin V vesicle-like cargo, which can be a target for future studies. To make clear that it is an interpretation and not a definitive statement, we have added “likely” to the sentence, as in: “We conclude that the distal position of vesicles in WT cells is a likely steric consequence of the architecture of the fusion focus, which restricts space at the center of the actin aster and promotes separation of Myo52 from the vesicles”.

      (6) Figure 5F and 5G, the results are confusing and should be discussed further. Depletion of Myo52 decreases Fus1 long-range movements, indicating that Fus1 is being transported by Myo52 (5F). Similarly, the Fus1 actin assembly mutant greatly decreases Fus1 long-range movements and prevents Myo52 binding (5G), perhaps indicating that Fus1-mediated actin assembly is important. It seems the authors' interpretations are oversimplified.

      We show that Myo52 is critical for Fus1 long-range movements, as stated by the reviewer. We also show that Fus1-mediated actin assembly is important. The question is in what way.

      One possibility is that FH2-mediated actin assembly powers the movement, which in this case represents the displacement of the formin due to actin monomer addition on the polymerizing filament. A second possibility is that actin filaments assembled by Fus1 somehow help Myo52 move Fus1. This could be for instance because Fus1-assembled actin filaments are preferred tracks for Myo52-mediated movements, or because they allow Myo52 to accumulate in the vicinity of Fus1, enhancing the chance of their encounter and thus the number of long-range movements (on any actin track). Based on the analysis of the K1112A point mutant in the Fus1 FH2 domain, our data cannot discriminate between these three different options, which is why we concluded that the mutant allele does not allow us to make a firm conclusion. However, the Myo52-dependence clearly shows that a large fraction of the movements requires the myosin V. We have clarified the end of the paragraph in the following way: “Therefore, analysis of the K1112A mutant phenotype does not allow us to clearly distinguish Fus1-powered from Myo52-powered movements. Future work will be required to test whether, in addition to myosin V-dependent transport, Fus1-mediated actin polymerization also directly contributes to Fus1 long-range movements.”

      (7) Figure 6, why not measure the fluorescence intensity of Fus1 as a proxy for the number of Fus1 molecules (rather than the width of the Fus1 signal), which seems to be the more straight-forward analysis?

      The aim of the measurement was to test whether Myo52 and Fus1 activity help focus the formin at the fusion site, not whether these are required for localization in this region. This is why we are measuring the lateral spread of the signal (its width) rather than the fluorescence intensity of the signal. We know from previous work that Fus1 localizes to the shmoo tip independently of myosin V (Dudin et al, JCB 2015), and we also show this in Figure 6. However, the precise distribution of Fus1 is wider in the absence of the myosins.

      We can and will measure intensities to test whether there is also a quantitative difference in the number of molecules at the shmoo tip.

      (8) Figure 7, the authors should note (and perhaps discuss) any evidence as to whether activation of Fus1 to facilitate actin assembly depends upon Fus1 dissociating from Myo52 or whether Fus1 can be activated while still associated with Myo52, as both circumstances are included in the figure.

      This is an interesting point. We have no experimental evidence for or against Fus1 dissociating from Myo52 to assemble actin. However, it is known that formins rotate along the actin filament double helix as they assemble it, a movement that seems poorly compatible with processive transport by myosin V. In Figure 7, we do not mean to imply whether the Fus1 transported by Myo52 is linked to an actin filament or not. The figure serves to illustrate the focusing mechanism of myosin V transporting a formin, which is more evident when we draw the formin attached to a filament end. We have now added a sentence in the figure legend to clarify this point: “Note that it is unknown whether Myo52 transports Fus1 associated or not with an actin filament.”

      (9) Figure 7, the color of secretory vesicles should be the same in A and B.

      This is now corrected.

      Reviewer #2 (Significance (Required)):

      This is an impactful and high quality manuscript that describes an elegant experimental strategy with important insights determined. The experimental imaging strategy (and analysis), as well as the insight into the pombe mating fusion focus and its comparison to other cytoskeletal compaction events will be of broad scientific interest.

      We thank the reviewer for their appreciation of our work.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Summary:

      Fission yeast cell-cell fusion during mating is mediated by an actin-based structure called the 'fusion focus', which orchestrates actin polymerization by the mating-specific formin, Fus1, to direct polarized secretion towards the mating site. In the current study, Thomas and colleagues quantitatively map the spatial distribution of proteins mediating cell-cell fusion using a three-color fluorescence imaging methodology in the fission yeast Schizosaccharomyces pombe. Using Myo52 (Type V myosin) as a fluorescence reference point, the authors discover that proteins known to localize to the fusion focus have distinct spatial distributions and accumulation profiles at the mating site. Myo52 and Fus1 form a complex in vivo detected by co-immunoprecipitation and each contribute to directing secretory vesicles to the fusion focus. Previous work from this group has shown that the intrinsically disordered region (IDR) of Fus1 plays a critical role in forming the fusion focus. Here, the authors swap out the IDR of fission yeast Fus1 for the IDR of an unrelated mammalian protein, coincidentally called 'fused in sarcoma' (FUS). They express the Fus1∆IDR-FUSLC-27R chimera in mitotically dividing fission yeast cells, where Fus1 is not normally expressed, and discover that the Fus1∆IDR-FUSLC-27R chimera can travel with Myo52 on actively polymerizing actin cables. Additionally, they show that acute loss of Myo52 or Fus1 function, using Auxin-Inducible Degradation (AID) tags and point mutations, impairs the normal compaction of the fusion focus, suggesting that direct interaction and coordination of Fus1 and Myo52 helps shape this structure.

      Major Comments:

      (1) In the Results section for Figure 2, the authors claim that actin filaments become shorter and more cross-linked as they move away from the fusion site during mating, and suggest that this may be due to the presence of Myo51. However, the evidence to support this claim is not made clear. Is it supported by high-resolution electron microscopy of the actin filaments, or some other results? This needs to be clarified.

      Sorry if our text was unclear. The basis for the claim that actin filaments become shorter comes from our observation that the average position of tropomyosin and Myo51, both of which decorate actin filaments, is progressively closer to both Fus1 and the plasma membrane. Thus, the actin structure protrudes less into the cytosol as fusion progresses. The basis for claiming that Myo51 promotes actin filament crosslinking comes mainly from previously published papers, which had shown that 1) Myo51 forms complexes with the Rng8 and Rng9 proteins (Wang et al, JCB 2014), and 2) the Myo51-Rng8/9 complex not only binds actin through the Myo51 head domain but also binds tropomyosin-decorated actin through the Rng8/9 moiety (Tang et al, JCB 2016; reference 27 in our manuscript). We had also previously shown that these proteins are necessary for compaction of the fusion focus (Dudin et al, PLoS Genetics 2017; reference 28 in our manuscript). Except for measuring the width of Fus1 distribution in myo51∆ mutants, which confirms previous findings, we did not re-investigate the function of Myo51 here.

      We have now re-written this paragraph to present the previous data more clearly: “The distal localization of Myo51 was mirrored by that of tropomyosin Cdc8, which decorates linear actin filaments (Figure 2B) (Hatano et al, 2022). The distal position of the bulk of Myo51-decorated actin filaments was confirmed using Airyscan super-resolution microscopy (Figure 2B, right). Thus, the average position of actin filaments and decreasing distance to Myo52 indicates they initially extend a few hundred nanometers into the cytosol and become progressively shorter as fusion proceeds. Previous work had shown that Myo51 cross-links and slides Cdc8-decorated actin filaments relative to each other (Tang et al, 2016) and that both proteins contribute to compaction of the fusion focus in the lateral dimension along the cell-cell contact area (perpendicular to the fusion axis) (Dudin et al, 2017). We confirmed this function by measuring the lateral distribution of Fus1 along the cell-cell contact area (perpendicular to the fusion axis), which was indeed wider in myo51∆ than WT cells (see below Figure 6A-B).”

      (2) In Figure 4, the authors comment that disrupting Fus1 results in a more dispersed Myo52 spatial distribution at the fusion focus, raising the possibility that Myo52 normally becomes focused by moving on the actin filaments assembled by Fus1. This can be tested by asking whether latrunculin treatment phenocopies the 'more dispersed' Myo52 localization seen in fus1∆ cells. If Myo52 is focused instead by its direct interaction with Fus1, the latrunculin treatment should not cause the same phenotype.

      This is in principle a good idea, though it is technically challenging because pharmacological treatment of cell pairs in fusion is difficult to do without disturbing pheromone gradients which are critical throughout the fusion process (see Dudin et al, Genes and Dev 2016). We will try the experiment but are unsure about the likelihood of technical success.

      We note however that a similar experiment was done previously on Fus1 overexpressed in mitotic cells (Billault-Chaumartin et al, Curr Biol 2022; Fig 1D). Here, Fus1 also forms a focus and latrunculin A treatment leads to Myo52 dispersion while keeping the Fus1 focus, which is in line with our proposal that Myo52 becomes focused by moving on Fus1-assembled actin filaments. Similarly, we showed in Figure 5B that Latrunculin A treatment of mitotic cells expressing Fus1∆IDR-FUSLC-27R also results in Myo52, but not Fus1 dispersion.

      (3) The Fus1∆IDR-FUSLC-27R chimera used in Figure 5 is an interesting construct to examine actin-based transport of formins in cells. I was curious if the authors could provide the rates of movement for Myo52 and for Fus1∆IDR-FUSLC-27R, both before and after acute depletion of Myo52. It would be interesting to see if loss of Myo52 alters the rate of movement, or instead the movement stems from formin-mediated actin polymerization.

      We will measure these rates.

      (4) Also, Myo52 is known to interact with the mitotic formin For3. Does For3 colocalize with Myo52 and Fus1∆IDR-FUSLC-27R along actin cables?

      This is an interesting question for which we do not have an answer. For technical reasons, we do not have the tools to co-image For3 with Fus1∆IDR-FUSLC-27R because both are tagged with GFP. We feel that this question goes beyond the scope of this paper.

      (5) If Fus1∆IDR-FUSLC-27R is active, does having ectopic formin activity in mitotic cells affect actin cable architecture? This could be assessed by comparing phalloidin staining for wildtype and Fus1∆IDR-FUSLC-27R cells.

      We are not sure what the purpose of this experiment is, or how informative it would be. If it is to evaluate whether Fus1∆IDR-FUSLC-27R is active, our current data already demonstrates this. Indeed, Fus1∆IDR-FUSLC-27R recruits Myo52 in a F-actin and FH2 domain-dependent manner (shown in Figure 5B and 5G), which demonstrates that the Fus1∆IDR-FUSLC-27R FH2 domain is active. Even though Fus1∆IDR-FUSLC-27R assembles actin, we predict that its effect on general actin organization will be weak. Indeed, it is expressed under the endogenous fus1 promoter, leading to very low expression levels during mitotic growth, such that only a subset of cells exhibit a Fus1 focus. Furthermore, most of these Fus1 foci are at or close to cell poles, where linear actin cables are assembled by For3, such that they may not have a strong disturbing effect. Because analysis of actin cable organization by phalloidin staining is difficult (due to the more strongly staining actin patches), cells with a clear change in organization are predicted to be rare in the population, and the gain in knowledge would not be transformative, we are not keen to do this experiment.

      Minor Comments:

      Prior studies are referenced appropriately. Text and figures are clear and accurate. My only suggestion would be Figure 1E-H could be moved to the supplemental material, due to their extremely technical nature. I believe this would help the broad audience focus on the experimental design mapped out in Figure 1A-D.

      We are relatively neutral about this. If this suggestion is supported by the Editor, we can move these panels to supplement.

      Reviewer #3 (Significance (Required)):

      Significance: This study provides an improved imaging method for detecting the spatial distributions of proteins below 100 nm, providing new insights about how a relatively small cellular structure is organized. The use of three-color cell imaging to accurately measure accumulation rates of molecular components of the fusion focus provides new insight into the development of this structure and its roles in mating. This method could be applied to other multi-protein structures found in different cell types. This work rigorously uses genetic tools such as knockout, knockdown and point mutants to dissect the roles of the formin Fus1 and Type V myosin Myo52 in creating a proper fusion focus. The study could be improved by biochemical assays to test whether Myo52 and Fus1 directly interact, since the interaction is only shown by co-immunoprecipitation from extracts, which may reflect an indirect interaction.

      Indeed, future studies should dissect the Fus1-Myo52 interaction, to determine whether it is direct and identify mutants that impair it.

      I believe this work advances the cell-mating field by providing others with a spatial and temporal map of conserved factors arriving to the mating site. Additionally, they identified a way to study a mating specific protein in mitotically dividing cells, offering future questions to address.

      This study should appeal to a range of basic scientists interested in cell biology, the cytoskeleton, and model organisms. The three-colored quantitative imaging could be applied to defining the architecture of many other cellular structures in different systems. Myosin and actin scientists will be interested in how this work expands the interplay of these two fields.

      I am a cell biologist with expertise in live cell imaging, genetics and biochemistry.

      We thank the reviewer for their appreciation of our work.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      Summary:

      Fission yeast cell-cell fusion during mating is mediated by an actin-based structure called the 'fusion focus', which orchestrates actin polymerization by the mating-specific formin, Fus1, to direct polarized secretion towards the mating site. In the current study, Thomas and colleagues quantitatively map the spatial distribution of proteins mediating cell-cell fusion using a three-color fluorescence imaging methodology in the fission yeast Schizosaccharomyces pombe. Using Myo52 (Type V myosin) as a fluorescence reference point, the authors discover that proteins known to localize to the fusion focus have distinct spatial distributions and accumulation profiles at the mating site. Myo52 and Fus1 form a complex in vivo detected by co-immunoprecipitation and each contribute to directing secretory vesicles to the fusion focus. Previous work from this group has shown that the intrinsically disordered region (IDR) of Fus1 plays a critical role in forming the fusion focus. Here, the authors swap out the IDR of fission yeast Fus1 for the IDR of an unrelated mammalian protein, coincidentally called 'fused in sarcoma' (FUS). They express the Fus1∆IDR-FUSLC-27R chimera in mitotically dividing fission yeast cells, where Fus1 is not normally expressed, and discover that the Fus1∆IDR-FUSLC-27R chimera can travel with Myo52 on actively polymerizing actin cables. Additionally, they show that acute loss of Myo52 or Fus1 function, using Auxin-Inducible Degradation (AID) tags and point mutations, impair the normal compaction of the fusion focus, suggesting that direct interaction and coordination of Fus1 and Myo52 helps shape this structure.

      Major Comments:

      • In the Results section for Figure 2, the authors claim that actin filaments become shorter and more cross-linked they move away from the fusion site during mating, and suggest that this may be due to the presence of Myo51. However, the evidence to support this claim is not made clear. Is it supported by high-resolution electron microscopy of the actin filaments, or some other results? This needs to be clarified.

      • In Figure 4, the authors comment that disrupting Fus1 results in more disperse Myo52 spatial distribution at the fusion focus, raising the possibility that Myo52 normally becomes focused by moving on the actin filaments assembled by Fus1. This can be tested by asking whether latrunculin treatment phenocopies the 'more dispersed' Myo52 localization seen in fus1∆ cells? If Myo52 is focused instead by its direct interaction with Fus1, the latrunculin treatment should not cause the same phenotype.

      • The Fus1∆IDR-FUSLC-27R chimera used in Figure 5 is an interesting construct to examine actin-based transport of formins in cells. I was curious if the authors could provide the rates of movement for Myo52 and for Fus1∆IDR-FUSLC-27R, both before and after acute depletion of Myo52. It would be interesting to see if loss of Myo52 alters the rate of movement, or instead the movement stems from formin-mediated actin polymerization.

      • Also, Myo52 is known to interact with the mitotic formin For3. Does For3 colocalize with Myo52 and Fus1∆IDR-FUSLC-27R along actin cables?

      • If Fus1∆IDR-FUSLC-27R is active, does having ectopic formin activity in mitotic cells affect actin cable architecture? This could be assessed by comparing phalloidin staining for wildtype and Fus1∆IDR-FUSLC-27R cells.

      Minor Comments:

      • Prior studies are referenced appropriately.

      • Text and figures are clear and accurate. My only suggestion would be Figure 1E-H could be moved to the supplemental material, due to their extremely technical nature. I believe this would help the broad audience focus on the experimental design mapped out in Figure 1A-D.

      Significance

      Significance: This study provides an improved imaging method for detecting the spatial distributions of proteins below 100 nm, providing new insights about how a relatively small cellular structure is organized. The use of three-color cell imaging to accurately measure accumulation rates of molecular components of the fusion focus provides new insight into the development of this structure and its roles in mating. This method could be applied to other multi-protein structures found in different cell types. This work uses rigorously genetic tools such as knockout, knockdown and point mutants to dissect the roles of the formin Fus1 and Type V myosin Myo52 in creating a proper fusion focus. The study could be improved by biochemical assays to test whether Myo52 and Fus1 directly interact, since the interaction is only shown by co-immunoprecipitation from extracts, which may reflect an indirect interaction.

      I believe this work advances the cell-mating field by providing others with a spatial and temporal map of conserved factors arriving to the mating site. Additionally, they identified a way to study a mating specific protein in mitotically dividing cells, offering future questions to address.

      This study should appeal to a range of basic scientists interested in cell biology, the cytoskeleton, and model organisms. The three-colored quantitative imaging could be applied to defining the architecture of many other cellular structures in different systems. Myosin and actin scientists will be interested in how this work expands the interplay of these two fields.

      I am a cell biologist with expertise in live cell imaging, genetics and biochemistry.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      A three-color imaging approach to use centroid tracking is employed to determine the high resolution position over time of tagged actin fusion focus proteins during mating in fission yeast. In particular, the position of different protein components (tagged in a 3rd color) were determined in relation to the position (and axis) of the molecular motor Myo52, which is tagged with two different colors in the mating cells. Furthermore, time is normalized by the rapid diffusion of a weak fluorescent protein probe (mRaspberry) from one cell to the other upon fusion pore opening. From this approach multiple important mechanistic insights were determined for the compaction of fusion focus proteins during mating, including the general compaction of different components as fusion proceeds with different proteins having specific stereotypical behaviors that indicate underlying molecular insights. For example, secretory vesicles remain a constant distance from the plasma membrane, whereas the formin Fus1 rapidly accumulates at the fusion focus in a Myo52-dependent manner.

      I have minor suggestions/points:

      (1) Figure 1, for clarity it would be helpful if the cells shown in B were in the same orientation as the cartoon cells shown in A. Similarly, it would be helpful to have the orientation shown in D the same as the data that is subsequently presented in the rest of the manuscript (such as Figure 2) where time is on the X axis and distance (position) is on the Y axis.

      (2) Figure 2, for clarity useful to introduce how the position of Myo52 changes over time with respect to the fusion site (plasma membrane) earlier, and then come back to the positions of different proteins with respect to Myo52 shown in 2E. Currently the authors discuss this point after introducing Figure 2E, but better for the reader to have this in mind beforehand.

      (3) First sentence of page 8 "..., peaked at fusion time and sharply dropped post-fusion (Figure S3)." Figure S3 should be cited so that the reader knows where this data is presented.

      (4) Figure 3D-H, why is Exo70 used as a marker for vesicles instead of Ypt3 for these experiments? Exo70 seems to have a more confusing localization than Ypt3 (3C vs 3D), which seems to complicate interpretations.

      (5) Page 10, end of first paragraph, "We conclude...and promotes separation of Myo52 from the vesicles." This is an interesting hypothesis/interpretation that is consistent with the spatial-temporal organization of vesicles and the compacting fusion focus, but the underlying molecular mechanism has not be concluded.

      (6) Figure 5F and 5G, the results are confusing and should be discussed further. Depletion of Myo52 decreases Fus1 long-range movements, indicating that Fus1 is being transported by Myo52 (5F). Similarly, the Fus1 actin assembly mutant greatly decreases Fus1 long-range movements and prevents Myo52 binding (5G), perhaps indicating that Fus1-mediated actin assembly is important. It seems the author's interpretations are oversimplified.

      (7) Figure 6, why not measure the fluorescence intensity of Fus1 as a proxy for the number of Fus1 molecules (rather than the width of the Fus1 signal), which seems to be the more straight-forward analysis?

      (8) Figure 7, the authors should note (and perhaps discuss) any evidence as to whether activation of Fus1 to facilitate actin assembly depends upon Fus1 dissociating from Myo52 or whether Fus1 can be activated while still associated with Myo52, as both circumstances are included in the figure.

      (9) Figure 7, the color of secretory vesicles should be the same in A and B.

      Significance

      This is an impactful and high-quality manuscript that describes an elegant experimental strategy with important insights determined. The experimental imaging strategy (and analysis), as well as the insight into the pombe mating fusion focus and its comparison to other cytoskeletal compaction events, will be of broad scientific interest.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      Summary:

      • In this article, Thomas et al. use a super-resolution approach in living cells to track proteins involved in the fusion event of sexual reproduction. They study the spatial organization and dynamics of the actin fusion focus, a key structure in cell-cell fusion in Schizosaccharomyces pombe. The researchers have adapted a high-precision centroid mapping method using three-color live-cell epifluorescence imaging to map the dynamic architecture of the fusion focus during yeast mating. The approach relies on tracking the centroid of fluorescence signals for proteins of interest, spatially referenced to Myo52-mScarlet-I (as a robust marker) and temporally referenced using a weakly fluorescent cytosolic protein (mRaspberry), which redistributes strongly upon fusion. The trajectories of five key proteins, including markers of polarity, cytoskeleton, exocytosis and membrane fusion, were compared to Myo52 over a 75-minute window spanning fusion. Their observations indicate that secretory vesicles maintain a constant distance from the plasma membrane whereas the actin network compacts. Most importantly, they discovered a positive feedback mechanism in which myosin V (Myo52) transports Fus1 formin along pre-existing actin filaments, thereby enhancing aster compaction.

      • This article is well written, the arguments are convincing, and the assertions are balanced. The centroid tracking method has been clearly and solidly controlled. Overall, this is a solid addition to our understanding of cytoskeletal organization in cell fusion.

      Major comments:

      No major comments.

      Minor comments:

      • Page 8 authors wrote "Upon depletion of Myo52, Ypt3 did not accumulate at the fusion focus (Figure 3C). A thin, wide localization at the fusion site was occasionally observed (Figure 3C, Movies S3)" : Is there a quantification of this accumulation in the mutant?

      • The framerate of movies could be improved for reader comfort: For example, movie S6 lasts 0.5 sec.

      Significance

      This study represents a conceptual and technical breakthrough in our understanding of cytoskeletal organization during cell-cell fusion. The authors introduce a high-precision, three-color live-cell centroid mapping method capable of resolving the spatio-temporal dynamics of protein complexes at the nanometer scale in living yeast cells. This methodological innovation enables systematic and quantitative mapping of the dynamic architecture of proteins at the cell fusion site, making it a powerful live-cell imaging approach. However, it is important to keep in mind that the increased precision achieved through averaging comes at the expense of overlooking atypical or outlier behaviors. The authors discovered a myosin V-dependent mechanism for the recruitment of formin that leads to actin aster compaction. The identification of Myo52 (myosin V) as a transporter of Fus1 (formin) to the fusion focus adds a new layer to our understanding of how polarized actin structures are generated and maintained during developmentally regulated processes such as mating.

      Previous studies have shown the importance of formins and myosins during fusion, but this paper provides a quantitative and dynamic mapping that demonstrates how Myo52 modulates Fus1 positioning in living cells. This provides a better understanding of actin organization, beyond what has been demonstrated by fixed-cell imaging or genetic perturbation.

      Audience: Cell biologists working on actin dynamics, cell-cell fusion and intracellular transport. Scientists involved in live-cell imaging, single particle tracking and cytoskeleton modeling.

      I have expertise in live-cell microscopy, image analysis, fungal growth machinery and actin organization.

    1. eLife Assessment

      This important study evaluates a model for multisensory correlation detection, focusing on the detection of correlated transients in visual and auditory stimuli. Overall, the experimental design is sound and the evidence is compelling. The synergy between the experimental and theoretical aspects of the article is strong, and the work will be of interest to both neuroscientists and psychologists working in the domain of sensory processing and perception.

    2. Reviewer #1 (Public review):

      Summary:

      Parise presents another instantiation of the Multisensory Correlation Detector model that can now accept stimulus-level inputs. This is a valuable development as it removes researcher involvement in the characterization/labeling of features and allows analysis of complex stimuli with a high degree of nuance that was previously unconsidered (i.e. spatial/spectral distributions across time). The author demonstrates the power of the model by fitting data from dozens of previous experiments, including multiple species, tasks, behavioral modalities, and pharmacological interventions.

      Strengths:

      One of the model's biggest strengths, in my opinion, is its ability to extract complex spatiotemporal co-relationships from multisensory stimuli. These relationships have typically been manually computed or assigned based on stimulus condition and often distilled to a single dimension or even a single number (e.g., "-50 ms asynchrony"). Thus, many models of multisensory integration depend heavily on human preprocessing of stimuli, and these models miss out on complex dynamics of stimuli; the lead modality distribution apparent in Figures 3b and c is provocative. I can imagine the model revealing interesting characteristics of the facial distribution of correlation during continuous audiovisual speech that have up to this point been largely described as "present" and almost solely focused on the lip area.

      Another aspect that makes the MCD stand out among other models is the biological inspiration and generalizability across domains. The model was developed to describe a separate process - motion perception - and in a much simpler organism - Drosophila. It could then describe a very basic neural computation that has been conserved across phylogeny (which is further demonstrated in the ability to predict rat, primate, and human data) and brain area. This aspect makes the model likely able to account for much more than what has already been demonstrated with only a few tweaks akin to the modifications described in this and previous articles from Parise.

      What allows this potential is that, as Parise and colleagues have demonstrated in those papers since our (re)introduction of the model in 2016, the MCD model is modular - both in its ability to interface with different inputs/outputs and its ability to chain MCD units in a way that can analyze spatial, spectral, or any other arbitrary dimension of a stimulus. This fact leaves wide-open the possibilities for types of data, stimuli, and tasks a simplistic neurally inspired model can account for.

      And so it's unsurprising (but impressive!) that Parise has demonstrated the model's ability here to account for such a wide range of empirical data from numerous tasks (synchrony/temporal order judgement, localization, detection, etc.) and behavior types (manual/saccade responses, gaze, etc.) using only the stimulus and a few free parameters. This ability is another of the model's main strengths that I think deserves some emphasis: it represents a kind of validation of those experiments - especially in the context of cross-experiment predictions.

      Finally, what is perhaps most impressive to me is that the MCD (and the accompanying decision model) does all this with very few (sometimes zero) free parameters. This highlights the utility of the model and the plausibility of its underlying architecture, but also helps to prevent extreme overfitting if fit correctly.

      Weaknesses:

      The model boasts incredible versatility across tasks and stimulus configurations, and its overall scope is to understand how and what relevant sensory information is extracted from a stimulus. We still need to exercise care when interpreting its parameters, especially considering the broader context of top-down control of perception and the fact that some multisensory mappings may not be derivable purely from stimulus statistics (e.g., the complementary nature of some phonemes/visemes).

    3. Reviewer #2 (Public review):

      Summary:

      Building on previous models of multisensory integration (including their earlier correlation-detection framework used for non-spatial signals), the author introduces a population-level Multisensory Correlation Detector (MCD) that processes raw auditory and visual data. Crucially, it does not rely on abstracted parameters, as is common in normative Bayesian models, but rather works directly on the stimulus itself (i.e., individual pixels and audio samples). By systematically testing the model against a range of experiments spanning human, monkey, and rat data, the author shows that the MCD population approach robustly predicts perception and behavior across species with a relatively small (0-4) number of free parameters.

      Strengths:

      (1) Unlike prior Bayesian models that used simplified or parameterized inputs, the model here is explicitly computable from full natural stimuli. This resolves a key gap in understanding how the brain might extract "time offsets" or "disparities" from continuously changing audio-visual streams.

      (2) The same population MCD architecture captures a remarkable range of multisensory phenomena, from classical illusions (McGurk, ventriloquism) and synchrony judgments, to attentional/gaze behavior driven by audio-visual salience. This generality strongly supports the idea that a single low-level computation (correlation detection) can underlie many distinct multisensory effects.

      (3) By tuning model parameters to different temporal rhythms (e.g., faster in rodents, slower in humans), the MCD explains cross-species perceptual data without reconfiguring the underlying architecture.

      (4) The authors frame their model as a plausible algorithmic account of the Bayesian multisensory-integration models in Marr's levels of hierarchy.

      Weaknesses:

      What remains unclear is how the parameters themselves relate to stimulus quantities (like stimulus uncertainty), as is often straightforward in Bayesian models. A theoretical missing link is the explicit relationship between the parameters of the MCD models and those of a cue combination model, thereby bridging Marr's levels of hierarchy.

      Likely Impact and Usefulness

      The work offers a compelling unification of multiple multisensory tasks-temporal order judgments, illusions, Bayesian causal inference, and overt visual attention-under a single, fully stimulus-driven framework. Its success with natural stimuli should interest computational neuroscientists, systems neuroscientists, and machine learning scientists. This paper thus makes an important contribution to the field by moving beyond minimalistic lab stimuli, illustrating how raw audio and video can be integrated using elementary correlation analyses.

    4. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      Parise presents another instantiation of the Multisensory Correlation Detector model that can now accept stimulus-level inputs. This is a valuable development as it removes researcher involvement in the characterization/labeling of features and allows analysis of complex stimuli with a high degree of nuance that was previously unconsidered (i.e., spatial/spectral distributions across time). The author demonstrates the power of the model by fitting data from dozens of previous experiments, including multiple species, tasks, behavioral modalities, and pharmacological interventions.

      Thanks for the kind words!

      Strengths:

      One of the model's biggest strengths, in my opinion, is its ability to extract complex spatiotemporal co-relationships from multisensory stimuli. These relationships have typically been manually computed or assigned based on stimulus condition and often distilled to a single dimension or even a single number (e.g., "-50 ms asynchrony"). Thus, many models of multisensory integration depend heavily on human preprocessing of stimuli, and these models miss out on complex dynamics of stimuli; the lead modality distribution apparent in Figures 3b and c is provocative. I can imagine the model revealing interesting characteristics of the facial distribution of correlation during continuous audiovisual speech that have up to this point been largely described as "present" and almost solely focused on the lip area.

      Another aspect that makes the MCD stand out among other models is the biological inspiration and generalizability across domains. The model was developed to describe a separate process - motion perception - and in a much simpler organism - Drosophila. It could then describe a very basic neural computation that has been conserved across phylogeny (which is further demonstrated in the ability to predict rat, primate, and human data) and brain area. This aspect makes the model likely able to account for much more than what has already been demonstrated with only a few tweaks akin to the modifications described in this and previous articles from Parise.

      What allows this potential is that, as Parise and colleagues have demonstrated in those papers since our (re)introduction of the model in 2016, the MCD model is modular - both in its ability to interface with different inputs/outputs and its ability to chain MCD units in a way that can analyze spatial, spectral, or any other arbitrary dimension of a stimulus. This fact leaves wide open the possibilities for types of data, stimuli, and tasks a simplistic, neurally inspired model can account for.

      And so it's unsurprising (but impressive!) that Parise has demonstrated the model's ability here to account for such a wide range of empirical data from numerous tasks (synchrony/temporal order judgement, localization, detection, etc.) and behavior types (manual/saccade responses, gaze, etc.) using only the stimulus and a few free parameters. This ability is another of the model's main strengths that I think deserves some emphasis: it represents a kind of validation of those experiments, especially in the context of cross-experiment predictions (but see some criticism of that below).

      Finally, what is perhaps most impressive to me is that the MCD (and the accompanying decision model) does all this with very few (sometimes zero) free parameters. This highlights the utility of the model and the plausibility of its underlying architecture, but also helps to prevent extreme overfitting if fit correctly (but see a related concern below).

      We sincerely thank the reviewer for their thoughtful and generous comments. We are especially pleased that the core strengths of the model—its stimulus-computable architecture, biological grounding, modularity, and cross-domain applicability—were clearly recognized. As the reviewer rightly notes, removing researcher-defined abstractions and working directly from naturalistic stimuli opens the door to uncovering previously overlooked dynamics in complex multisensory signals, such as the spatial and temporal richness of audiovisual speech.

      We also appreciate the recognition of the model’s origins in a simple organism and its generalization across species and behaviors. This phylogenetic continuity reinforces our view that the MCD captures a fundamental computation with wide-ranging implications. Finally, we are grateful for the reviewer’s emphasis on the model’s predictive power across tasks and datasets with few or no free parameters—a property we see as key to both its parsimony and explanatory utility.

      We have highlighted these points more explicitly in the revised manuscript, and we thank the reviewer for their generous and insightful endorsement of the work.

      Weaknesses:

      There is an insufficient level of detail in the methods about model fitting. As a result, it's unclear what data the models were fitted and validated on. Were models fit individually or on average group data? Each condition separately? Is the model predictive of unseen data? Was the model cross-validated? Relatedly, the manuscript mentions a randomization test, but the shuffled data produces model responses that are still highly correlated to behavior despite shuffling. Could it be that any stimulus that varies in AV onset asynchrony can produce a psychometric curve that matches any other task with asynchrony judgements baked into the task? Does this mean all SJ or TOJ tasks produce correlated psychometric curves? Or more generally, is Pearson's correlation insensitive to subtle changes here, considering psychometric curves are typically sigmoidal? Curves can be non-overlapping and still highly correlated if one is, for example, scaled differently. Would an error term such as mean-squared or root mean-squared error be more sensitive to subtle changes in psychometric curves? Alternatively, perhaps if the models aren't cross-validated, the high correlation values are due to overfitting?

      The reviewer is right: the current version of the manuscript only provides limited information about parameter fitting. In the revised version of the manuscript, we included a parameter estimation and generalizability section that includes all information requested by the reviewer.

      To test whether using the MSE instead of Pearson correlation led to a similar estimated set of parameter values, we repeated the fitting using the MSE. The parameters estimated with this method (TauV, TauA, TauBim) closely followed those estimated using Pearson correlation. Given the similarity of these results, we have chosen not to include further figures; however, this analysis is now included in the new section (pages 23-24).

      Regarding the permutation test, it is expected that different stimuli produce analogous psychometric functions: after all, all studies relied on stimuli containing identical manipulations of lag. As a result, MCD population responses tend to be similar across experiments. Therefore, it is not a surprise that the permuted distribution of MCD-data correlation in Supplementary Figure 1K has a mean as high as 0.97. However, what is important is to demonstrate that the non-permuted dataset has an even higher goodness of fit. Supplementary Figure 1K demonstrates that none of the permuted stimuli could outperform the non-permuted dataset; the mean of the non-permuted distribution is 4.7 standard deviations above the mean of the already-high permuted distribution.

      We believe the new section, along with the present response, fully addresses the legitimate concerns of the reviewer.

      While the model boasts incredible versatility across tasks and stimulus configurations, fitting behavioral data well doesn't mean we've captured the underlying neural processes, and thus, we need to be careful when interpreting results. For example, the model produces temporal parameters fitting rat behavior that are 4x faster than when fitting human data. This difference in slope and a difference at the tails were interpreted as differences in perceptual sensitivity related to general processing speeds of the rat, presumably related to brain/body size differences. While rats no doubt have these differences in neural processing speed/integration windows, it seems reasonable that a lot of the differences in human and rat psychometric functions could be explained by the (over)training and motivation of rats to perform on every trial for a reward - increasing attention/sensitivity (slope) - and a tendency to make mistakes (compression evident at the tails). Was there an attempt to fit these data with a lapse parameter built into the decisional model as was done in Equation 21? Likewise, the fitted parameters for the pharmacological manipulations during the SJ task indicated differences in the decisional (but not the perceptual) process and the article makes the claim that "all pharmacologically-induced changes in audiovisual time perception" can be attributed to decisional processes "with no need to postulate changes in low-level temporal processing." However, those papers discuss actual sensory effects of pharmacological manipulation, with one specifically reporting changes to response timing. Moreover, and again contrary to the conclusions drawn from model fits to those data, both papers also report a change in psychometric slope/JND in the TOJ task after pharmacological manipulation, which would presumably be reflected in changes to the perceptual (but not the decisional) parameters.

      Fitting or predicting behaviour does not in itself demonstrate that a model captures the underlying neural computations—though it may offer valuable constraints and insights. In line with this, we were careful not to extrapolate the implications of our simulations to specific neural mechanisms.

      Temporal sensitivity is, by definition, a behavioural metric, and—as the reviewer correctly notes—its estimation may reflect a range of contributing factors beyond low-level sensory processing, including attention, motivation, and lapse rates (i.e., stimulus-independent errors). In Equation 21, we introduced a lapse parameter specifically to account for such effects in the context of monkey eye-tracking data. For the rat datasets, however, the inclusion of a lapse term was not required to achieve a close fit to the psychometric data (ρ = 0.981). While it is likely that adding a lapse component would yield a marginally better fit, the absence of single-trial data prevents us from applying model comparison criteria such as AIC or BIC to justify the additional parameter. In light of this, and to avoid unnecessary model complexity, we opted not to include a lapse term in the rat simulations.

      With respect to the pharmacological manipulation data, we acknowledge the reviewer’s point that observed changes in slope and bias could plausibly arise from alterations at either the sensory or decisional level—or both. In our model, low-level sensory processing is instantiated by the MCD architecture, which outputs the MCDcorr and MCDlag signals that are then scaled and integrated during decision-making. Importantly, this scaling operation influences the slope of the resulting psychometric functions, such that changes in slope can arise even in the absence of any change to the MCD’s temporal filters. In our simulations, the temporal constants of the MCD units were fixed to the values estimated from the non-pharmacological condition (see parameter estimation section above), and only the decision-related parameters were allowed to vary. From this modelling perspective, the behavioural effects observed in the pharmacological datasets can be explained entirely by changes at the decisional level. However, we do not claim that such an explanation excludes the possibility of genuine sensory-level changes. Rather, we assert that our model can account for the observed data without requiring modifications to early temporal tuning.

      To rigorously distinguish sensory from decisional effects, future experiments will need to employ stimuli with richer temporal structure—e.g., temporally modulated sequences of clicks and flashes that vary in frequency, phase, rhythm, or regularity (see Fujisaki & Nishida, 2007; Denison et al., 2012; Parise & Ernst, 2016, 2025; Locke & Landy, 2017; Nidiffer et al., 2018). Such stimuli engage the MCD in a more stimulus-dependent manner, enabling a clearer separation between early sensory encoding and later decision-making processes. Unfortunately, the current rat datasets—based exclusively on single click-flash pairings—lack the complexity needed for such disambiguation. As a result, while our simulations suggest that the observed pharmacologically induced effects can be attributed to changes in decision-level parameters, they do not rule out concurrent sensory-level changes.

      In summary, our results indicate that changes in the temporal tuning of MCD units are not necessary to reproduce the observed pharmacological effects on audiovisual timing behaviour. However, we do not assert that such changes are absent or unnecessary in principle. Disentangling sensory and decisional contributions will ultimately require richer datasets and experimental paradigms designed specifically for this purpose. We have now modified the results section (page 6) and the discussion (page 11) to clarify these points.

      The case for the utility of a stimulus-computable model is convincing (as I mentioned above), but its framing as mission-critical for understanding multisensory perception is overstated, I think. The line for what is "stimulus computable" is arbitrary and doesn't seem to be followed in the paper. A strict definition might realistically require inputs to be, e.g., the patterns of light and sound waves available to our eyes and ears, while an even more strict definition might (unrealistically) require those stimuli to be physically present and transduced by the model. A reasonable looser definition might allow an "abstract and low-dimensional representation of the stimulus," such as the stimulus envelope (which was used in the paper), to be an input. Ultimately, some preprocessing of a stimulus does not necessarily confound interpretations about (multi)sensory perception. And on the flip side, the stimulus-computable aspect doesn't necessarily give the model supreme insight into perception. For example, the MCD model was "confused" by the stimuli used in our 2018 paper (Nidiffer et al., 2018; Parise & Ernst, 2025). In each of our stimuli, the onset and offset drove strong AV temporal correlations across all stimulus conditions (including catch trials), but were irrelevant to participants performing an amplitude modulation detection task. The to-be-detected amplitude modulations, set at individual thresholds, were not a salient aspect of the physical stimulus, and thus only marginally affected stimulus correlations. The model was, of course, able to fit our data by "ignoring" the on/offsets (i.e., requiring human intervention), again highlighting that the model is tapping into a very basic and ubiquitous computational principle of (multi)sensory perception. But it does reveal a limitation of such a stimulus-computable model: that it is (so far) strictly bottom-up.

      We appreciate the reviewer’s thoughtful engagement with the concept of stimulus computability. We agree that the term requires careful definition and should not be taken as a guarantee of perceptual insight or neural plausibility. In our work, we define a model as “stimulus-computable” if all its inputs are derived directly from the stimulus, rather than from experimenter-defined summary descriptors such as temporal lag, spatial disparity, or cue reliability. In the context of multisensory integration, this implies that a model must account not only for how cues are combined, but also for how those cues are extracted from raw inputs—such as audio waveforms and visual contrast sequences.

      This distinction is central to our modelling philosophy. While ideal observer models often specify how information should be combined once identified, they typically do not address the upstream question of how this information is extracted from sensory input. In that sense, models that are not stimulus-computable leave out a key part of the perceptual pipeline. We do not present stimulus computability as a marker of theoretical superiority, but rather as a modelling constraint that is necessary if one’s aim is to explain how structured sensory input gives rise to perception. This is a view that is also explicitly acknowledged and supported by Reviewer 2.

      Framed in Marr’s (1982) terms, non–stimulus-computable models tend to operate at the computational level, defining what the system is doing (e.g., computing a maximum likelihood estimate), whereas stimulus-computable models aim to function at the algorithmic level, specifying how the relevant representations and operations might be implemented. When appropriately constrained by biological plausibility, such models may also inform hypotheses at the implementational level, pointing to potential neural substrates that could instantiate the computation.

      Regarding the reviewer’s example illustrating a limitation of the MCD model, we respectfully note that the account appears to be based on a misreading of our prior work. In Parise & Ernst (2025), where we simulated the stimuli from Nidiffer et al. (2018), the MCD model reproduced participants’ behavioural data without any human intervention or adjustment. The model was applied in a fully bottom-up, stimulus-driven manner, and its output aligned with observer responses as-is. We suspect the confusion may stem from analyses shown in Figure 6 - Supplement Figure 5 of Parise & Ernst (2025), where we investigated the lack of a frequency-doubling effect in the Nidiffer et al. data. However, those analyses were based solely on the Pearson correlation between auditory and visual stimulus envelopes and did not involve the MCD model. No manual exclusion of onset/offset events was applied, nor was the MCD used in those particular figures. We also note that Parise & Ernst (2025) is a separate, already published study and is not the manuscript currently under review. 

      In summary, while we fully agree that stimulus computability does not resolve all the complexities of multisensory perception (see comments below about speech), we maintain that it provides a valuable modelling constraint—one that enables robust, generalisable predictions when appropriately scoped. 

      The manuscript rightly chooses to focus a lot of the work on speech, fitting the MCD model to predict behavioral responses to speech. The range of findings from AV speech experiments that the MCD can account for is very convincing. Given the provided context that speech is "often claimed to be processed via dedicated mechanisms in the brain," a statement claiming a "first end-to-end account of multisensory perception," and findings that the MCD model can account for speech behaviors, it seems the reader is meant to infer that energetic correlation detection is a complete account of speech perception. I think this conclusion misses some facets of AV speech perception, such as integration of higher-order, non-redundant/correlated speech features (Campbell, 2008) and also the existence of top-down and predictive processing that aren't (yet!) explained by MCD. For example, one important benefit of AV speech is interactions on linguistic processes - how complementary sensitivity to articulatory features in the auditory and visual systems (Summerfield, 1987) allow constraint of linguistic processes (Peelle & Sommers, 2015; Tye-Murray et al., 2007).

      We thank the reviewer for their thoughtful comments, and especially for the kind words describing the range of findings from our AV speech simulations as “very convincing.”

      We would like to clarify that it is not our view that speech perception can be reduced to energetic correlation detection. While the MCD model captures low- to mid-level temporal dependencies between auditory and visual signals, we fully agree that a complete account of audiovisual speech perception must also include higher-order processes—including linguistic mechanisms and top-down predictions. These are critical components of AV speech comprehension, and lie beyond the scope of the current model.

      Our use of the term “end-to-end” is intended in a narrow operational sense: the model transforms raw audiovisual input (i.e., audio waveforms and video frames) directly into behavioural output (i.e., button press responses), without reliance on abstracted stimulus parameters such as lag, disparity or reliability. It is in this specific technical sense that the MCD offers an end-to-end model. We have revised the manuscript to clarify this usage to avoid any misunderstanding.

      In light of the reviewer’s valuable point, we have now edited the Discussion to acknowledge the importance of linguistic processes (page 13) and to clarify what we mean by end-to-end account (page 11). We agree that future work will need to explore how stimulus-computable models such as the MCD can be integrated with broader frameworks of linguistic and predictive processing (e.g., Summerfield, 1987; Campbell, 2008; Peelle & Sommers, 2015; Tye-Murray et al., 2007).

      References

      Campbell, R. (2008). The processing of audio-visual speech: empirical and neural bases. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1493), 1001-1010. https://doi.org/10.1098/rstb.2007.2155

      Nidiffer, A. R., Diederich, A., Ramachandran, R., & Wallace, M. T. (2018). Multisensory perception reflects individual differences in processing temporal correlations. Scientific Reports 2018 8:1, 8(1), 1-15. https://doi.org/10.1038/s41598-018-32673-y

      Parise, C. V, & Ernst, M. O. (2025). Multisensory integration operates on correlated input from unimodal transient channels. ELife, 12. https://doi.org/10.7554/ELIFE.90841

      Peelle, J. E., & Sommers, M. S. (2015). Prediction and constraint in audiovisual speech perception. Cortex, 68, 169-181. https://doi.org/10.1016/j.cortex.2015.03.006

      Summerfield, Q. (1987). Some preliminaries to a comprehensive account of audio-visual speech perception. In B. Dodd & R. Campbell (Eds.), Hearing by Eye: The Psychology of Lip-Reading (pp. 3-51). Lawrence Erlbaum Associates.

      Tye-Murray, N., Sommers, M., & Spehar, B. (2007). Auditory and Visual Lexical Neighborhoods in Audiovisual Speech Perception: Trends in Amplification, 11(4), 233-241. https://doi.org/10.1177/1084713807307409

      Reviewer #2 (Public review):

      Summary:

      Building on previous models of multisensory integration (including their earlier correlation-detection framework used for non-spatial signals), the authors introduce a population-level Multisensory Correlation Detector (MCD) that processes raw auditory and visual data. Crucially, it does not rely on abstracted parameters, as is common in normative Bayesian models, but rather works directly on the stimulus itself (i.e., individual pixels and audio samples). By systematically testing the model against a range of experiments spanning human, monkey, and rat data, the authors show that their MCD population approach robustly predicts perception and behavior across species with a relatively small (0-4) number of free parameters.

      Strengths:

      (1) Unlike prior Bayesian models that used simplified or parameterized inputs, the model here is explicitly computable from full natural stimuli. This resolves a key gap in understanding how the brain might extract "time offsets" or "disparities" from continuously changing audio-visual streams.

      (2) The same population MCD architecture captures a remarkable range of multisensory phenomena, from classical illusions (McGurk, ventriloquism) and synchrony judgments, to attentional/gaze behavior driven by audio-visual salience. This generality strongly supports the idea that a single low-level computation (correlation detection) can underlie many distinct multisensory effects.

      (3) By tuning model parameters to different temporal rhythms (e.g., faster in rodents, slower in humans), the MCD explains cross-species perceptual data without reconfiguring the underlying architecture.

      We thank the reviewer for their positive evaluation of the manuscript, and particularly for highlighting the significance of the model's stimulus-computable architecture and its broad applicability across species and paradigms. Please find our responses to the individual points below.

      Weaknesses:

      (1) The authors show how a correlation-based model can account for the various multisensory integration effects observed in previous studies. However, a comparison of how the two accounts differ would shed light on the correlation model being an implementation of the Bayesian computations (different levels in Marr's hierarchy) or making testable predictions that can distinguish between the two frameworks. For example, how uncertainty in the cue combined estimate is also the harmonic mean of the unimodal uncertainties is a prediction from the Bayesian model. So, how the MCD framework predicts this reduced uncertainty could be one potential difference (or similarity) to the Bayesian model.

      We fully agree with the reviewer that a comparison between the correlation-based MCD model and Bayesian accounts is valuable—particularly for clarifying how the two frameworks differ conceptually and where they may converge.

      As noted in the revised manuscript, the key distinction lies in the level of analysis described by Marr (1982). Bayesian models operate at the computational level, describing what the system is aiming to compute (e.g., optimal cue integration). In contrast, the MCD functions at the algorithmic level, offering a biologically plausible mechanism for how such integration might emerge from stimulus-driven representations.

      In this context, the MCD provides a concrete, stimulus-grounded account of how perceptual estimates might be constructed—potentially implementing computations with Bayesian-like characteristics (e.g., reduced uncertainty, cue weighting). Thus, the two models are not mutually exclusive but can be seen as complementary: the MCD may offer an algorithmic instantiation of computations that, at the abstract level, resemble Bayesian inference.

      We have now updated the manuscript to explicitly highlight this relationship (pages 2 and 11). In the revised manuscript, we also included a new figure (Figure 5) and movie (Supplementary Movie 3), to show how the present approach extends previous Bayesian models for the case of cue integration (i.e., the ventriloquist effect).

      (2) The authors show a good match for cue combination involving 2 cues. While Bayesian accounts provide a direction for extension to more cues (also seen empirically, for eg, in Hecht et al. 2008), discussion on how the MCD model extends to more cues would benefit the readers.

      We thank the reviewer for this insightful comment: extending the MCD model to include more than two sensory modalities is a natural and valuable next step. Indeed, one of the strengths of the MCD framework lies in its modularity. Let us consider the MCDcorr​ output (Equation 6), which is computed as the pointwise product of transient inputs across modalities. Extending this to include a third modality, such as touch, is straightforward: MCD units would simply multiply the transient channels from all three modalities, effectively acting as trimodal coincidence detectors that respond when all inputs are aligned in time and space.

      By contrast, extending MCDlag is less intuitive, due to its reliance on opponency between two subunits (via subtraction). A plausible solution is to compute MCDlag in a pairwise fashion (e.g., AV, VT, AT), capturing relative timing across modality pairs.

      Importantly, the bulk of the spatial integration in our framework is carried by MCDcorr, which generalises naturally to more than two modalities. We have now formalised this extension and included a graphical representation in a supplementary section of the revised manuscript.
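      To make the idea concrete, here is a minimal sketch (illustrative only, not the manuscript's implementation) of the described trimodal extension: a coincidence detector formed as the pointwise product of auditory, visual, and tactile transient channels, which responds only when all three transients co-occur.

      ```haskell
      -- Illustrative sketch, not the manuscript's code: a trimodal MCD_corr
      -- unit as a pointwise product of transient channels. The output is
      -- non-zero only where all three modalities carry a transient.
      mcdCorr3 :: [Double] -> [Double] -> [Double] -> [Double]
      mcdCorr3 = zipWith3 (\a v t -> a * v * t)

      main :: IO ()
      main = print (mcdCorr3 [1, 0, 1] [1, 1, 0] [1, 1, 1])
      -- → [1.0,0.0,0.0]
      ```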

      Likely Impact and Usefulness:

      The work offers a compelling unification of multiple multisensory tasks- temporal order judgments, illusions, Bayesian causal inference, and overt visual attention - under a single, fully stimulus-driven framework. Its success with natural stimuli should interest computational neuroscientists, systems neuroscientists, and machine learning scientists. This paper thus makes an important contribution to the field by moving beyond minimalistic lab stimuli, illustrating how raw audio and video can be integrated using elementary correlation analyses.

      Reviewer #1 (Recommendations for the authors):

      Recommendations:

      My biggest concern is a lack of specificity about model fitting, which is assuaged by the inclusion of sufficient detail to replicate the analysis completely or the inclusion of the analysis code. The code availability indicates a script for the population model will be included, but it is unclear if this code will provide the fitting details for the whole of the analysis.

      We thank the reviewer for raising this important point. A new methodological section has been added to the manuscript, detailing the model fitting procedures used throughout the study. In addition, the accompanying code repository now includes MATLAB scripts that allow full replication of the spatiotemporal MCD simulations.

      Perhaps it could be enlightening to re-evaluate the model with a measure of error rather than correlation? And I think many researchers would be interested in the model's performance on unseen data.

      The model has now been re-evaluated using mean squared error (MSE), and the results remain consistent with those obtained using Pearson correlation. Additionally, we have clarified which parts of the study involve testing the model on unseen data (i.e., data not used to fit the temporal constants of the units). These analyses are now included and discussed in the revised fitting section of the manuscript (pages 23-24).

      Otherwise, my concerns involve the interpretation of findings, and thus could be satisfied with minor rewording or tempering conclusions.

      The manuscript has been revised to address these interpretative concerns, with several conclusions reworded or tempered accordingly. All changes are marked in blue in the revised version.

      Miscellanea:

      Should b0 in equation 10 be bcrit to match the below text?

      Thank you for catching this inconsistency. We have corrected Equation 10 (and also Equation 21) to use the more transparent notation bcrit instead of b0, in line with the accompanying text.

      Equation 23, should time be averaged separately? For example, if multiple people are speaking, the average correlation for those frames will be higher than the average correlation across all times.

      We thank the reviewer for raising this thoughtful and important point. In response, we have clarified the notation of Equation 23 in the revised manuscript (page 20). Specifically, we now denote the averaging operations explicitly as spatial means and standard deviations across all pixel locations within each frame.

      This equation computes the z-score of the MCD correlation value at the current gaze location, normalized relative to the spatial distribution of correlation values in the same frame. That is, all operations are performed at the frame level, not across time. This ensures that temporally distinct events are treated independently and that the final measure reflects relative salience within each moment, not a global average over the stimulus. In other words, the spatial distribution of MCD activity is re-centered and rescaled at each frame, exactly to avoid the type of inflation or confounding the reviewer rightly cautioned against.
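      As a minimal sketch of this frame-wise normalization (illustrative names and toy data, not the manuscript's code): each frame's spatial distribution of MCD correlation values is used to z-score the value at the gaze location.

      ```haskell
      -- Illustrative sketch: z-score the MCD correlation at the gaze
      -- location against the spatial mean and standard deviation of the
      -- same frame, so each frame is re-centered and rescaled on its own.
      zAtGaze :: [Double] -> Double -> Double
      zAtGaze frame atGaze = (atGaze - mu) / sd
        where
          n  = fromIntegral (length frame)
          mu = sum frame / n
          sd = sqrt (sum [(x - mu) * (x - mu) | x <- frame] / n)

      main :: IO ()
      main = print (zAtGaze [0.1, 0.3, 0.5, 0.7] 0.7)
      ```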

      Reviewer #2 (Recommendations for the authors):

      The authors have done a great job of providing a stimulus computable model of cue combination. I had just a few suggestions to strengthen the theoretical part of the paper:

      (1) While the authors have shown a good match between MCD and cue combination, some theoretical justification or equivalence analysis would benefit readers on how the two relate to each other. Something like Zhang et al. 2019 (which is for motion cue combination) would add to the paper.

      We agree that it is important to clarify the theoretical relationship between the Multisensory Correlation Detector (MCD) and normative models of cue integration, such as Bayesian combination. In the revised manuscript, we have now modified the introduction and added a paragraph in the Discussion addressing this link more explicitly. In brief, we see the MCD as an algorithmic-level implementation (in Marr’s terms) that may approximate or instantiate aspects of Bayesian inference.

      (2) Simulating cue combination for tasks that require integration of more than two cues (visual, auditory, haptic cues) would more strongly relate the correlation model to Bayesian cue combination. If that is a lot of work, at least discussing this would benefit the paper

      This point has now been addressed, and a new paragraph discussing the extension of the MCD model to tasks involving more than two sensory modalities has been added to the Discussion section.

    1. Histogram showing distribution of per-unit weights across all countries and all years (2007-2024), imports

      I didn't see much difference between years, or between exports and imports. All are positively skewed, with the weighted mean much closer to the other HS codes proposed to be part of the UNU Key.

      For me it does not make sense to add the heavier ones, though. At least from the boxplot, it doesn't look like there's as much variation in weight per unit as there is in this case. What do we think about a cut-off weight per unit, so only those less than x are considered? Surely the lifetime and composition of the heavier ones are not the same?

    1. The Haskell functions div and ^ are partial, meaning they can crash with a so-called imprecise exception (an exception that is not visible in the type, also sometimes called IO exceptions).

      partial functions
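      A minimal demonstration of the quoted point, assuming GHC's base library: both crashes are runtime exceptions that nothing in the types `Int -> Int -> Int` warns about.

      ```haskell
      import Control.Exception (ArithException, SomeException, evaluate, try)

      main :: IO ()
      main = do
        -- div is partial: division by zero throws an ArithException at
        -- runtime, invisible in div's type.
        d <- try (evaluate (10 `div` 0)) :: IO (Either ArithException Int)
        print d                      -- prints: Left divide by zero
        -- (^) is partial too: a negative exponent raises an error call.
        p <- try (evaluate (2 ^ (-1 :: Int))) :: IO (Either SomeException Int)
        case p of
          Left _  -> putStrLn "(^) raised an imprecise exception"
          Right v -> print v
      ```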

    1. eLife Assessment

      This study is a fundamental advance in the field of developmental biology and transcriptional regulation that demonstrates the use of hPSC-derived organoids to generate reproducible organoids to study the mechanisms that drive neural tube closure. The work is exceptional in its development of tools to use CRISPR interference to screen for genes that regulate morphogenesis in human PSC organoids. The additional characterization of the role of specific transcription factors in neural tube formation is solid. The work provides both technical advances and new knowledge on human development through embryo models.

    2. Reviewer #1 (Public review):

      Summary:

      This is a wonderful and landmark study in the field of human embryo modeling. It uses patterned human gastruloids to conduct a functional screen on neural tube closure, identifying positive and negative regulators and defining the epistasis among them.

      Strengths:

      The above was achieved following optimization of the micro-pattern-based gastruloid protocol to achieve high efficiency, and then optimized to conduct and deliver CRISPRi without disrupting the protocol. This is a technical tour de force as well as one of the first studies to reveal new knowledge on human development through embryo models, which has not been done before.

      The manuscript is very solid and well-written. The figures are clear, elegant, and meaningful. The conclusions are fully supported by the data shown. The methods are well-detailed, which is very important for such a study.

      Weaknesses:

      This reviewer did not identify any meaningful, major, or minor caveats that need addressing or correcting.

      A minor weakness is that one can never find out whether findings made in vitro in human embryo models can be revalidated in humans in vivo. This is for obvious and justified ethical reasons. However, the authors acknowledge this point in the section of the manuscript detailing the limitations of their study.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript is a technical report on a new model of early neurogenesis, coupled to a novel platform for genetic screens. The model is more faithful than others published to date, and the screening platform is an advance over existing ones in terms of speed and throughput.

      Strengths:

      It is novel and useful.

      Weaknesses:

      The novelty of the results is limited in terms of biology, mainly a proof of concept of the platform and a very good demonstration of the hierarchical interactions of the top regulators of GRNs.

      The value of the manuscript could be enhanced in two ways:

      (1) by showing its versatility and transforming the level of neural tube to midbrain and hindbrain, and looking at the transcriptional hierarchies there.

      (2) by relating the patterning of the organoids to the situation in vivo, in particular with the information in reference 49. The authors make a statement "To compare our findings with in vivo gene expression patterns, we applied the same approach to published scRNA-seq data from 4-week-old human embryos at the neurula stage" but it would be good to have a more nuanced reference: what stage, what genes are missing, what do they add to the information in that reference?

    1. Comparing the outputs with Eurostat data in tabular format

      Nice to see that overall it is better. Much higher differences at the EU than at the World level. I think this is somewhat justifiable since we "lock" the EU data from DGEnv, thus not accounting for updated trade and production data in recent years. It could also just be confirmation bias on my side.

    1. MI PRESENTACIÓN EN HIVE, Mis primeros años

      MY PRESENTATION AT HIVE, My Early Years

      Adzael Tovar shares his journey as a young content creator in the Hive community, reflecting on his family background, his experiences with internet connectivity in Venezuela, and his aspirations to study engineering while engaging with blockchain and web3 opportunities. He expresses excitement about participating in community events and gratitude for the support he has received from friends and family in his new endeavors.

    1. 06_overwrite_EU.R if 03a1_use_WOT_EU_data.R is run?

      I don't think so as long as 03a1 is called at the end of 03POM, and then you run 4 and 5. From the main GEM, I haven't called 06.

    1. Set-off. Set-off is the discharge of obligations without money. This is done by balancing obligations across balance sheets so they offset each other. If Alice owes Bob and Bob owes Alice, they can do set-off. Set-off is more interesting when there are cycles of size greater than two – if Alice owes Bob and Bob owes Carol and Carol owes Alice, they can all set off the lowest amount.

      This reframes “we don’t have money to pay” into “we have a mutual trust system that can settle this.” It’s empowering: No need for extra debts for the SMEs.
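      The cycle set-off described in the quote can be sketched in a few lines. This is a toy illustration with made-up names, not the Cycles protocol's actual MTCS solver:

      ```haskell
      -- Toy sketch of cycle set-off: in a cycle of obligations, everyone
      -- can discharge the smallest amount in the cycle without any money
      -- changing hands.
      type Obligation = (String, String, Double)  -- (debtor, creditor, amount)

      setOffCycle :: [Obligation] -> [Obligation]
      setOffCycle cyc = [(d, c, amt - m) | (d, c, amt) <- cyc]
        where m = minimum [amt | (_, _, amt) <- cyc]

      main :: IO ()
      main = mapM_ print (setOffCycle
        [ ("Alice", "Bob",   10)
        , ("Bob",   "Carol",  7)
        , ("Carol", "Alice",  4) ])
      -- Carol's $4 debt clears entirely; the other two shrink by $4.
      ```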

    2. The major difference with Cycles is that Alice doesn’t simply publish a transaction to send $10 to Bob; she first declares that she owes Bob $10

      I love how this shifts the perspective on money into a trust-based commitment to each other

    3. Perhaps Alice’s counterparty Bob doesn’t accept stablecoins; he only accepts ATOM. However, he may owe Carol $10, and Carol does accept the stablecoin. By having Alice, Bob, and Carol all declare their intents, Cycles can transfer Alice’s stablecoin directly to Carol (without them being aware of each other) and publish set-off notices for everyone

      I love how there is flexibility for all the different kinds of money and crypto!

    4. At regular intervals (e.g. daily, monthly), solvers execute and find solutions that clear the most obligations for the most people with the least amount of liquidity, based on the published intents

      What is your experience around the optimal interval? Is this a manual process for certain users of the platform, or is it automated?

    5. Our default solver is a min-cost max-flow algorithm called Multilateral Trade Credit Set-off (MTCS)

      I am curious about the computational complexity of this algorithm, especially when the user base of the platform scales up in the future. Did you already run any benchmark tests with a larger user base?

    6. our proposed payment system is designed without such intermediaries, and is focused on liquidity saving via set-off.

      Will Cycles also be connected (in the future) with existing clearinghouses or other clearing DeFi protocols?

    7. connect their internal accounting system to a global network that optimizes the clearing of credits and debts using the available sources of liquidity

      Is there any information available on the onboarding process for SMEs? As it seems to be a protocol-heavy environment, I would stress the importance of good UX abstraction. I am curious about the UX interface and the terminology used, so that SMEs have a smooth onboarding process.

      How feasible is it to onboard SMEs into a protocol-heavy environment like this? What UX abstractions will hide the complexity of "obligations, tenders, and acceptances"?

  3. pressbooks.library.torontomu.ca pressbooks.library.torontomu.ca
    1. Janie said archly and fixed him back in bed. It was then she felt the pistol under the pillow. It gave her a quick ugly throb, but she didn’t ask him about it since he didn’t say.

      Foreshadowing

    2. Mrs. Turner’s brother was back on the muck and now he had this mysterious sickness. People didn’t just take sick like this for nothing.

      He doesn't even know he has rabies

    3. Tea Cake took it and filled his mouth then gagged horribly, disgorged that which was in his mouth and threw the glass upon the floor. Janie was frantic with alarm.

      RABIES SYMPTOMS

    4. He bought another rifle and a pistol and he and Janie bucked each other as to who was the best shot with Janie ranking him always with the rifle.

      They're loaded up

    1. both groups had startedseventh grade with equivalent achievement test scores

      Starting at the same time or even later than the other person is not a problem with the right growth mindset.

    2. we find that students with a fixed mindsetcare so much about how smart they will appear that theyoften reject learning opportunities

      A fixed mindset can block progress even when opportunities are helpful. Can we say the reverse as well? By rejecting too many opportunities to get better, they may passively develop a fixed mindset from a young age.

    3. As the students entered seventh grade, wemeasured their mindsets (along with a number of otherthings) and then we monitored their grades over thenext two years

      Longitudinal study = stronger evidence, not just a snapshot

    4. A fixed mindset makes chal-lenges threatening for students (because they believethat their fixed ability may not be up to the task

      Real examples: they believe they can never do it with their "poor brain" and don't even try because of the fear. Over time, this belief makes their intelligence effectively fixed, and this goes on and on until they change.

    5. studentswith this mindset worry about how much of this fixedintelligence they possess

      Some geniuses are born with natural talent, but not everyone is like that. The truth is that everyone is born with a certain level of intelligence, but it is not necessarily fixed.

    6. Internal and External Motivation

      This shows the article is part of a larger unit/theme about motivation. Will the article argue that growth mindset is a form of internal motivation?

    7. These different beliefs, or mindsets, cre-ate different psychological worlds

      It's important to believe in and be confident in yourself. Just a small change in mindset can change the whole world.

    8. Her article “Brainology”was initially published in Independent School magazine in 2008 and explains some ofthe benefits of considering mindsets in a learning environment.

      Main idea; the audience is educators and students. Is there a specific reason she chose Independent School magazine for the first publication? Was this magazine the most popular one about education in 2008?

    9. is considered one of the world’s lead-ing experts in in the fields of motivation, personality, and social development, with hermost significant contribution being the concept of mindsets.

      She has a strong background and a high academic position, which shows strong credibility.

  4. pressbooks.library.torontomu.ca pressbooks.library.torontomu.ca
    1. Dey oughta know if it’s dangerous.

      The perception of white people was that they were highly educated, and it was common to think that way.

    1. EU General Court rules on a narrow challenge to the EU-US data agreement, letting it stand. Bound to be appealed at the CJEU. NOYB thinks the General Court departed strongly from earlier CJEU decisions (Schrems I and II), as the current agreement doesn't really have new formulations, and that the General Court accepts the independence of the DPRC, which is not the current reality in the US.

    1. .working.in.the.open

      experiment in instant sharing for annotations and linked over web autonomous conversations

      facilitating the emergence of autonomous virtual communities dedicated to named interests and pursuits

      This contrasts with, say, sharing open work on GitHub, which requires sign-up and sign-in and is primarily for the benefit of the aggregator

      ♖/hyperpost/♖/indy/🧘‍♂️/📅/20/25/09/1/

    1. AI output is unreliable and unpredictable. It can be good, but it can also be inaccurate or misleading,

      A.I. can make up facts and sources on a whim; these fabrications are known as A.I. hallucinations

    1. Briefing Document: "Supersens | Le génie caché des plantes (1/2) | ARTE"

      This briefing document synthesizes the main themes, key ideas, and important facts presented in the excerpt "Supersens | Le génie caché des plantes (1/2) | ARTE".

      The video overturns our perception of plants, revealing unsuspected sensory complexity and collective intelligence.

      Major Themes and Key Ideas:

      Redefining Plants' Sensory Perception:

      • Multiplicity of senses: Recent research demonstrates that plants possess a far wider range of senses than previously believed. They "smell, touch, taste", have "keen hearing" and "memory", and "perceive shapes".

      They are sensitive not only to temperature, sunlight, and humidity, but also to unexpected stimuli.

      • Greater sensitivity than animals: Stefano Mancuso, professor at the University of Florence and founder of the first international laboratory of plant neurobiology, asserts that plants are "much more sensitive than animals, and they need to be, because they cannot run away from danger".

      • No centralized brain: Sensitivity is not tied to a single brain. As Mancuso points out, "the brain in itself is a stupid organ, it is simply a pile of cells... you do not need neurons to make all this work". Plants have developed original solutions for transmitting signals.

      • Internal communication systems: They possess a "vascular system comparable to our veins and arteries" and an "electrical network resembling our nerves, through which information circulates". However, the propagation speed of the electrical signals is slower (6 to 8 cm per minute), which is "consistent with their sedentary life".

      Hearing and Response to Sounds:

      • Frequency perception: Plants are "capable of detecting specific frequencies and reacting accordingly". They are particularly sensitive to frequencies between 100 and 1000 Hz, toward which their roots orient themselves. Frequencies above 5000 Hz make them move away.

      • Hearing genes and adaptation: Researchers such as Daniel Chamovitz have discovered that plants possess genes similar to those responsible for the vibratile cilia of the human inner ear.

      In plants, these genes are "necessary for the formation of hairs at the root tips", which are essential for absorbing water.

      • Detecting pollinators: The work of Lilach Hadany in Israel shows that plants identify the sounds emitted by pollinators (bees and butterflies).

      Flowers, whose concave shape acts like a "satellite dish", "vibrate in response to the sound of bees", which "prompts the flower to produce sweet nectar".

      They distinguish pollinators from non-pollinators in order to optimize their energy expenditure.

      • Sound emission under stress: Plants emit "clicks" in very high registers (20 to 100 kHz), well beyond human hearing. These sounds, similar to "popcorn popping", provide information about the plant's stress (drought, injuries). "The clicks peaked around the 5th day without water, then began to decrease".

      Smell and Chemical Communication:

      • Olfactory signatures: "Each plant species has an olfactory signature of its own".
      • Parasite-host interaction: Dodder, a parasitic plant, "smells its odor" to locate and select a healthy, robust host plant. It is capable of "distinguishing a healthy plant from a sick one", demonstrating "incredible senses".
      • Detecting insect pheromones: Plants can "perceive insect pheromones", which signal an imminent threat. For example, the plant Althima perceives the pheromones of the Eurosta fly and mounts a defense, such as "delaying its flowering".

      Vision and Light:

      • Diffuse, body-wide perception: Unlike humans, plants "perceive light with each of their cells". They do not see images, but translate light signals into actions (germination, growth, flowering).
      • Extended light spectrum: Plants are "visually impaired" compared to humans. They "react to all the visual light that we see, but they additionally perceive ultraviolet and far-red light". These wavelengths are crucial for photosynthesis and growth.
      • Infrared communication: Elsbieta Frac at INRA studies the "light communication of plants" in the infrared. Plants "communicate in infrared" and "exchange signals with one another through light, notably in the near infrared". They use these signals to "find their bearings in space", locate their neighbors, and "adapt their growth accordingly so as not to waste energy".

      Proprioception and Memory:

      • Sensitivity to gravity: Plants are sensitive to gravity. "Starch grains" in their cells shift and indicate tilt, allowing the plant to right itself even in the absence of light cues.
      • Perceiving body position: The gravitron experiment demonstrates that plants "perceive their situation in space like humans and animals", possessing a "6th sense called proprioception".
      • Memory of physical events: Bruno Moulia at INRA Clermont-Ferrand has shown that plants have a memory.

      Faced with "successive gusts of wind, the electrical response... decreases... the plant gets used to it".

      They are "able to remember for more than a week". Trees "respond above all to strong gusts, to unusual wind" and can "make twice as much wood" to strengthen themselves, which requires "a certain memory in order to compare what is usual with what is not".

      They can even "manage to straighten back up until they are perfectly upright again" after having been tilted.

      Collective Intelligence and the "Root Wide Web":

      • Root coordination: The "apexes" (root tips) "coordinate their growth activities like a swarm of insects or a flock of birds".
      • Underground information network: The information gathered by the apexes is processed by all of the plant's cells, forming a "collective intelligence that assigns the tasks to be carried out and coordinates actions". There is "no central organizer", which enables "very efficient self-organization". These networks are "similar to internet networks; we could call them The Root Wide Web".
      • Mycorrhizal symbiosis: Plant roots join with an underground network of fungi to form the "mycorrhiza", an "intense exchange network" for information, rare minerals, and sugars. In a forest, fungi "multiply by 10,000 the original extent and capacities of each tree's root network".
      • Resource sharing: Underground mappings show that trees "share carbon (sugar) with neighbors of the same species, but also with trees of different species". Up to "40% of the biomass of this pine's fine roots can come from the carbon of a neighboring oak".

      Cet échange est contrôlé, un arbre pouvant "cesser de donner du carbone à un champignon qui ne fournit ni eau ni nutriments".

      Résilience des écosystèmes: Les réseaux mycorhiziens sont essentiels à la survie des arbres en conditions extrêmes, comme la sécheresse ou l'absence de lumière.

      La diversité des espèces d'arbres et de mycorhizes crée des "réseaux denses complexes qui rendront la forêt plus résiliente et plus durable".

      Conclusion Principale:

      Les découvertes scientifiques récentes bousculent profondément notre compréhension du règne végétal.

      Les plantes sont des organismes dotés de multiples sens étonnants, d'une capacité de communication complexe, de mémoire et d'une intelligence collective, particulièrement visible dans leurs réseaux souterrains.

      Elles ont développé des solutions originales et efficaces pour survivre et interagir avec leur environnement.

      Cette remise en question de l'image du vivant, où les frontières entre humains, animaux et végétaux sont de moins en moins nettes, ouvre des perspectives immenses pour la science, l'agriculture écologique et notre rapport à la nature.

      Comme le souligne le document, "si demain les humains disparaissaient... les plantes prendraient le contrôle de tout", mais "si demain les plantes commençaient à disparaître... toute la vie terrestre disparaîtrait".

      Ces révélations incitent à une "révolution de la pensée" et à un respect accru pour la "dignité et la valeur morale" des plantes.

    1. You lose confidence in your writing.”

      I resonate with this statement. Sometimes when I can't find the wording, I will rely on an LLM to help phrase sentences. It's taking away from any actual thought being put into my writing.

    1. what if we change the game, and all of a sudden the spiritual theory gives us technologies that are impossible with a theory that says that spacetime is fundamental

      for - comparison - spiritual vs material technologies - Donald Hoffman

      • Q❓- What about love? As per the earlier discussion, love is the most quintessential spiritual quality
      • if we don't have love in life, any technology would not matter
    2. I'm using this logic as a way to build spacetime. But I think it's going to give an even more powerful approach. I don't have to minimize some free energy principle. I have a more direct computational way

      for - future project - building a model to explain spacetime using Active Inference - Donald Hoffman
      - use Active Inference to minimise surprise using Markov chains
      - this model assumes consciousness is fundamental
      - this is going to be a model of intelligence based entirely on a model that takes consciousness as fundamental
      - it goes back to game theory again
      - back to the idea of a simulation
      - if you're able to create a piece of software that
        - is able to replicate and
        - is built on the fundamentals of consciousness
      - then potentially it's going to think it's conscious

    3. it's very intelligent to minimize surprise

      for - explanation - why minimising surprise is a good definition of intelligence - Donald Hoffman
      - it's very intelligent to minimize surprise
      - I'm surprised all the time - I'm pretty stupid, right? I don't understand the world very well
      - but if I'm NOT surprised, it's like I've got a really good model, especially if I'm doing lots of stuff in the world and I'm almost never surprised - boy, am I really intelligent!
      - so you can see why that's a really good principle for trying to build an AI
        - not just finding correlations between everything
        - but really something deeper
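      The intuition above — an intelligent agent is one whose model of the world leaves it rarely surprised — can be made concrete with Shannon surprisal (negative log probability). The sketch below is illustrative only; the biased-coin scenario and the probabilities are my assumptions, not something from the talk, but they show why an agent whose predictions match the world accrues less surprise on average than one that ignores the world's structure.

      ```python
      import math

      def surprisal(p: float) -> float:
          """Shannon surprisal (in nats) of an observation the model assigns probability p."""
          return -math.log(p)

      # Illustrative world: a biased coin that lands heads 90% of the time.
      observations = ["H"] * 9 + ["T"]

      def average_surprise(model: dict) -> float:
          """Mean surprisal of the observations under a model's predicted probabilities."""
          return sum(surprisal(model[o]) for o in observations) / len(observations)

      good_model = {"H": 0.9, "T": 0.1}  # matches the world: rarely surprised
      poor_model = {"H": 0.5, "T": 0.5}  # ignores the bias: surprised more on average

      print(average_surprise(good_model))  # ≈ 0.325 nats
      print(average_surprise(poor_model))  # ≈ 0.693 nats (= ln 2)
      ```

      Minimizing this average surprise pushes the agent's internal probabilities toward the world's actual statistics, which is the sense in which "almost never surprised" indicates a really good model.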

    1. So now, in many ways, humanities majors can produce some of the most interesting “code.”

      this is the "education" that Mollick suggests is best for the most effective use of A.I.

    1. The first McDonald’s restaurant in Beijing was built at the southern end of Wangfujing Street, Beijing’s Fifth Avenue. With 700 seats and 29 cash registers, the restaurant served more than 40,000 customers on its grand opening day of April 23, 1992

      It shows the popularity of McDonald's in Beijing during the 1990s.

    2. this chapter aims to unpack the rich meanings of fast-food consumption in Beijing by focusing on the fast-food restaurants as a social space.

      this shows what the author tries to uncover

    1. create new ways: ways on the side of the existing system; ways that are feeding from and feeding into the existing system.

      complete not to compete

    2. “When a complex system is far from equilibrium, small islands of coherence in a sea of chaos have the capacity to shift the entire system to a higher order” Ilya Prigogine, Nobel Prize-winning chemist

      small islands of coherence

    1. Men so many of mien more courageous. He expresses no little admiration I ween that from valor, nowise as outlaws, for the strangers. But from greatness of soul ye sought for King Hrothgar.” Then the strength-famous earlman answer rendered, Beowulf replies. The proud-mooded Wederchief replied to his question, 25 Hardy ’neath helmet: “Higelac’s mates are we; We are Higelac’s table-companions, Beowulf hight I. To the bairn of Healfdene, and bear an important commission to The famous folk-leader, I freely will tell your prince. To thy prince my commission, if pleasantly hearing He’ll grant we may greet him so gracious to all men.” 30 Wulfgar replied then (he was prince of the Wendels, His boldness of spirit was known unto many, His prowess and prudence): “The prince of the Scyldings, The friend-lord of Danemen, I will ask of thy journey,

      So many men have such courage. This type of courage comes from living an honorable life, not as a rebel. It was from the goodness of the soul that he sought King Hrothgar. The strong warrior replied: Beowulf, a high-ranking chief, responded to his question saying, “We are companions of Higelac’s; we have a seat at the table and are important to your prince. I am called Beowulf and would like to speak to the leader Hrothgar, son of Healfdene, bringing an important request, if your prince would be kind enough to grant us an audience.” Wulfgar, prince of the Wendels, who was known to many for his wisdom and boldness, responded to Beowulf saying, “The generous prince of the Scyldings, I’ll ask about your journey. As you have asked, I’ll tell the prince of your request and see what he says.”

    1. some will have to change careers or learn to integrate GPT in their profession; others will lose their job. New positions will be created by GPT either directly (like the Prompt Engineer, the one who can “talk to the machine”) or indirectly by making it easier to create products and companies.

      I found this really interesting because I had not thought about the ways in which AI can affect people's jobs. I also really enjoyed this sentence because it helps us see why we need to know how to effectively utilize AI in a classroom.