  1. Oct 2025
    1. Author Response

      We would like to thank the senior editor, the reviewing editor, and all the reviewers for taking the time to review our manuscript and for appreciating our study. We are delighted that all of you found strengths in our work and provided comments to strengthen it further. We sincerely appreciate the valuable comments and suggestions, which we believe will help us further improve the quality of our work.

      Reviewer 1

      The manuscript by Dubey et al. examines the function of the acetyltransferase Tip60. The authors show that (auto)acetylation of a lysine residue in Tip60 is important for its nuclear localization and liquid-liquid phase separation (LLPS). The main observations are: (i) Tip60 is localized to the nucleus, where it typically forms punctate foci. (ii) An intrinsically disordered region (IDR) within Tip60 is critical for the normal distribution of Tip60. (iii) Within the IDR the authors show that a lysine residue (K187) that is auto-acetylated is critical; mutation of that lysine residue to a non-acetylatable arginine abolishes this behavior. (iv) Biochemical experiments show that the formation of the punctate foci may be consistent with LLPS.

      On balance, this is an interesting study that describes the role of acetylation of Tip60 in controlling its biochemical behavior as well as its localization and function in cells. The authors mention in their Discussion section other examples showing that acetylation can change the behavior of proteins with respect to LLPS; depending on the specific context, acetylation can promote (as here for Tip60) or impair LLPS.

      Strengths:

      The experiments are largely convincing and appear to be well executed.

      Weaknesses:

      The main concern I have is that all in vivo (i.e. in cells) experiments are done with overexpression in Cos-1 cells, in the presence of the endogenous protein. No attempt is made to use e.g. cells that would be KO for Tip60 in order to have a cleaner system or to look at the endogenous protein. It would be reassuring to know that what the authors observe with highly overexpressed proteins also takes place with endogenous proteins.

      Response: The main reason for performing these experiments with an overexpression system was to generate different point and deletion mutants of TIP60 and analyse their effects on its properties and functions. To validate our observations from the overexpression system, we also examined the localization pattern of endogenous TIP60 by IFA; the results show a foci pattern within the nucleus similar to that observed with the overexpressed TIP60 protein (Figure 4A). However, we understand the reviewer's concern and agree to repeat some of the overexpression experiments under endogenous TIP60 knockdown conditions using siRNA or shRNA against the 3' UTR region.

      Also, it is not clear how often the experiments have been repeated and additional quantifications (e.g. of western blots) would be useful.

      Response: The experiments were performed as independent biological replicates (n=3), as mentioned in the figure legends. Regarding the suggestion to quantify Western blots, we would like to note that wherever quantitative estimation was required (e.g. for the blots in Figures 2F and 6H), graphs with quantitated values and p-values have already been included. In addition, as suggested, quantitation for Figure 6D will be performed and added in the revised version.

      In addition, regarding the LLPS description (Figure 1), it would be important to show the wetting behaviour and the temperature-dependent reversibility of the droplet formation.

      Response: We appreciate the suggestion, and we will perform these assays and include the results in the revised version.

      In Fig 3C the mutant (K187R) Tip60 is cytoplasmic, but still appears to form foci. Is this still reflecting phase separation, or some form of aggregation?

      Response: The TIP60 (K187R) mutant remains cytosolic with a homogeneous distribution, as shown in Figure 2E. Likewise, with TIP60 partners such as PXR or p53, this mutant protein remains homogeneously distributed in the cytosol. However, when co-expressed with wild-type TIP60, the mutant protein, although still cytosolic, also shows some foci-like pattern at the nuclear periphery, which we believe could be accumulated aggregates.

      Reviewer 2

      The manuscript "Autoacetylation-mediated phase separation of TIP60 is critical for its functions" by Dubey S. et al. reports that the acetyltransferase TIP60 undergoes phase separation in vitro and in cell nuclei. The intrinsically disordered region (IDR) of TIP60, particularly K187 within the IDR, is critical for phase separation and nuclear import. The authors showed that K187 is autoacetylated, which is important for TIP60 nuclear localization and activity on histone H4. The authors performed several experiments to examine the function of the K187R mutant, including chromatin binding, oligomerization, phase separation, and nuclear foci formation. However, the physiological relevance of these experiments is not clear since the TIP60 K187R mutant does not get into nuclei. The authors also functionally tested the cancer-derived R188P mutant, which mimics K187R in nuclear localization, disruption of wound healing, and DNA damage repair. However, similar to K187R, the R188P mutant is also deficient in nuclear import, and therefore its defects cannot be directly attributed to disruption of the phase-separation property of TIP60. The main deficiency of the manuscript is the lack of support for the conclusion that "autoacetylation-mediated phase separation of TIP60 is critical for its functions".

      This study offers some intriguing observations. However, the evidence supporting the primary conclusion, specifically regarding the necessity of the intrinsically disordered region (IDR) and K187ac of TIP60 for its phase separation and function in cells, lacks sufficient support and warrants more scrutiny. Additionally, certain aspects of the experimental design are perplexing and lack controls to exclude alternative interpretations. The manuscript can benefit from additional editing and proofreading to improve clarity.

      Response: We understand the point raised by the reviewer; however, we would like to draw their attention to the data where we clearly demonstrate that acetylation of lysine 187 within the IDR of TIP60 is required for its phase separation (Figure 2J), and to the other TIP60 mutants within the IDR (R177H, R188H, K189R), all of which enter the nucleus and form phase-separated foci. The cancer-associated mutation at R188 behaves similarly to K187R because it also hampers TIP60 acetylation at the adjacent K187 residue. Our in vitro and in cellulo results demonstrate that autoacetylation of TIP60 at K187 within its IDR is critical for multiple functions, including its translocation into the nucleus, its protein-protein interactions, and its oligomerization, which are prerequisites for phase separation of TIP60.

      There are two putative NLS sequences (NLS #1 from aa145; NLS #2 from aa184) in TIP60, both of which are within the IDR. Deletion of the whole IDR is therefore expected to abolish the nuclear localization of TIP60. Since K187 is within NLS #2, the cytoplasmic localization of the IDR and K187R mutants may not be related to the ability of TIP60 to phase separate.

      Response: We are not disputing the presence of putative NLSs within the IDR of TIP60; however, our results with different mutations within the IDR (K76, K80, K148, K150, R177, R178, R188, K189) demonstrate that only acetylation of the K187 residue is critical for shuttling TIP60 into the nucleus, while all the other lysine mutants located within these putative NLS regions had no impact on TIP60's nuclear shuttling. As mentioned in our Discussion, autoacetylation of TIP60 at K187 may induce local structural modifications in its IDR that are critical for translocating TIP60 into the nucleus, where it undergoes phase separation critical for its functions. In a similar previous example, acetylation of a lysine within the NLS region of TyrRS by PCAF promotes its nuclear localization (Cao X et al., 2017, PNAS). The IDR (which also contains the K187 site) is important for phase separation once the protein enters the nucleus. This could be the cell's mechanism to prevent unwarranted action of TIP60 until it enters the nucleus and phase separates on chromatin at appropriate locations.

      The chromatin-binding activity of TIP60 depends on HAT activity, but not on phase separation (Fig 1I, Fig 2B). How do the authors reconcile the fact that the K187R mutant is able to bind to chromatin with lower activity than the HAT mutant (Fig 2F, 2I)?

      Response: K187 acetylation is required for TIP60's nuclear translocation but is not critical for chromatin binding. When the soluble fraction is prepared in the fractionation experiment, the nuclear membrane is disrupted, so the TIP60 (K187R) mutant no longer faces any hindrance in accessing the chromatin and can therefore load onto it (although not as efficiently as the wild-type protein). Efficient chromatin binding requires auto-acetylation of other lysine residues in TIP60, which might be hampered by the mutant's reduced catalytic activity or be insufficient to maintain equilibrium with HDAC activity inside the nucleus. In the case of K187R, the reduced auto-acetylation is captured while the protein is in the cytosol. During fractionation, once this mutant gains access to chromatin, it might auto-acetylate other lysine residues critical for chromatin loading (its catalytic domain is intact). This is evident from the hyper-autoacetylation of the wild-type protein compared with the K187R or HAT mutant proteins. We would like to note that phase separation occurs only after efficient chromatin loading of TIP60; this is why, under in cellulo conditions, both the K187R mutant (which cannot enter the nucleus) and the HAT mutant (which enters the nucleus but fails to bind efficiently onto chromatin) fail to form phase-separated nuclear punctate foci.

      The DIC images of phase separation in Fig 2I need to be improved. The image for K187R showed the irregular shape of the condensates, which suggests particles in solution or on the slide. The authors may need to use fluorescent-tagged TIP60 in the in vitro LLPS experiments.

      Response: We believe this comment refers to Figure 2J. The irregularly shaped condensates observed for TIP60 K187R are unique to the mutant protein and are not caused by particles on the slide. We would like to draw the reviewer's attention to Supplementary Figure S2A, where the DIC images of wild-type TIP60 tested under different protein and PEG8000 conditions are completely clear and the protein did not form phase-separated droplets, ruling out the possibility of particles in the solution or on the slides.

      The authors mentioned that the HAT mutant of TIP60 does not phase separate; this data needs to be included.

      Response: We have already added the image of RFP-TIP60 (HAT mutant) in supplementary Fig S4A (panel 2) in the manuscript.

      Related to Point 3, the HAT mutant that doesn't form punctate foci by itself, can incorporate into WT TIP60 (Fig 5A). In vitro LLPS assay for WT, HAT, and K187R mutants with or without acetylation should be included. WT and mutant TIP can be labelled with GFP and RFP, respectively.

      Response: We would like to draw the reviewer's attention to our co-expression experiments in Figure 5, where the wild-type protein (both tagged and untagged) is able to phase separate and form punctate foci with the co-expressed HAT mutant protein (which has depleted autoacetylation capacity). We believe these in cellulo experiments already answer the questions the reviewer suggests addressing with in vitro experiments.

      Fig 3A and 3B showed that neither K187 mutant nor HAT mutant could oligomerize. If both experiments were conducted in the absence of in vitro acetylation, how do the authors reconcile these results?

      Response: We thank the reviewer for highlighting our oversight in omitting mention of acetyl coenzyme A here. To induce acetylation under in vitro conditions, we added 10 µM acetyl-CoA to the reactions depicted in Figures 3A and 3B. The acetyl-CoA information for Figure 3B was already included in the GST pull-down assay (Materials and Methods section); we will add the same to the oligomerization assay in Materials and Methods in the revised manuscript.

      In Fig 4, the colocalization images show little overlap between TIP60 and the nuclear speckle (NS) marker SC35, indicating that the majority of TIP60 localizes to nuclear structures other than NS. Have the authors tried to perturb the NS by depleting the NS scaffold protein and examining TIP60 foci formation? Do PXR and TP53 localize to NS?

      Response: Under normal conditions, the majority of TIP60 is not localized in nuclear speckles (NS), so we believe that perturbing NS will not have a significant effect on TIP60 foci formation. Interestingly, a recent study by the Shelley Berger group (Alexander KA et al., Mol Cell. 2021;81(8):1666-1681) showed that p53 localizes to NS to regulate a subset of its target genes; we mention this in our Discussion section. No information is available about the localization of PXR in NS.

      Were TIP60 substrates, H4 (or NCP), PXR, and TP53, present in TIP60 condensates in vitro? It's interesting to see that both PXR and TP53 had homogeneous nuclear signals when expressed together with the K187R, R188P (Fig 6E, 6G), or HAT (Suppl Fig S4A) mutants. Are PXR or TP53 nuclear foci dependent on their acetylation by TIP60? This can and should be tested.

      Response: Both p53 and PXR are known to be acetylated by TIP60. In the case of PXR, TIP60 acetylates PXR at lysine 170, and this TIP60-mediated acetylation of PXR at K170 is important for the TIP60-PXR foci, which we now know are formed by phase separation (Bakshi K et al., Sci Rep. 2017 Jun 16;7(1):3635).

      Since R188P mutant, like K187R, does not get into the nuclei, it is not suitable to use this mutant to examine the functional relevance of phase separation for TIP60. The authors need to find another mutant in IDR that retains nuclear localization and overall HAT activity but specifically disrupts phase separation. Otherwise, the conclusion needs to be restated. All cancer-derived mutants need to be tested for LLPS in vitro.

      Response: We appreciate the reviewer's point; however, it is important to note that the objective of these experiments is to understand the impact of K187R (critical to multiple aspects of TIP60, including phase separation) and R188P (a naturally occurring cancer-associated mutation that behaves similarly to K187R) on TIP60's activities in order to determine their functional relevance. The reviewer's suggestion to find an IDR mutant that fails to phase separate but retains nuclear localization and catalytic activity can be examined in future studies.

      For all cellular experiments, it is not mentioned whether endogenous TIP60 was removed and absent in the cell lines used in this study. It's important to clarify this point because the localization and function of mutant TIP60 are affected by WT TIP60 (Fig 5).

      Response: Endogenous TIP60 was present in the in cellulo experiments; however, as suggested by Reviewer 1, we will perform some of the in cellulo experiments under endogenous TIP60 knockdown conditions to validate our findings.

      It is troubling that H4 peptide is used for in vitro HAT assay since TIP60 has much higher activity on nucleosomes and its preferred substrates include H2A.

      Response: The purpose of using the H4 peptide in the HAT assay is to determine the impact of mutations on TIP60's catalytic activity. As H4 is one of the major histone substrates of TIP60, we believe it satisfies the objective of the experiment.

      Reviewer 3

      This study presents results arguing that the mammalian acetyltransferase Tip60/KAT5 auto-acetylates itself on one specific lysine residue before the MYST domain, which in turn favors not only nuclear localization but also condensate formation on chromatin through LLPS. The authors further argue that this modification is responsible for the bulk of Tip60 autoacetylation and acetyltransferase activity towards histone H4. Finally, they suggest that it is required for association with transcription factors and for in vivo function in gene regulation and the DNA damage response.

      These are very wide and important claims and, while some results are interesting and intriguing, there is not really close to enough work performed/data presented to support them. In addition, some results are redundant between them, lack consistency in the mutants analyzed, and show contradiction between them. The most important shortcoming of the study is the fact that every single experiment in cells was done in over-expressed conditions, from transiently transfected cells. It is well known that these conditions can lead to non-specific mass effects, cellular localization not reflecting native conditions, and disruption of native interactome. On that topic, it is quite striking that the authors completely ignore the fact that Tip60 is exclusively found as part of a stable large multi-subunit complex in vivo, with more than 15 different proteins. Thus, arguing for a single residue acetylation regulating condensate formation and most Tip60 functions while ignoring native conditions (and the fact that Tip60 cannot function outside its native complex) does not allow me to support this study.

      Response: We appreciate the reviewer's point; however, the main purpose of using an overexpression system in this study was to analyse the effects of the different point/deletion mutations generated in TIP60. We overexpressed proteins with different tags (GFP or RFP) or without tags (Figure 3C, Figure 5) to confirm that the behaviour of the protein remains unperturbed by the presence of tags. For validation, we also examined the localization of endogenous TIP60, which displays localization behaviour similar to the overexpressed protein. We would like to note that there are several reports in the literature in which similar overexpression systems were used to determine the functions of TIP60 and its mutants, and the nuclear foci pattern observed for TIP60 in our study has also been reported by several other groups:

      Sun, Y., et al. (2005). A role for the Tip60 histone acetyltransferase in the acetylation and activation of ATM. Proc Natl Acad Sci U S A, 102(37), 13182-13187.

      Kim, C.-H., et al. (2015). The chromodomain-containing histone acetyltransferase TIP60 acts as a code reader, recognizing the epigenetic codes for initiating transcription. Bioscience, Biotechnology, and Biochemistry, 79(4), 532-538.

      Wee, C. L., et al. (2014). Nuclear Arc interacts with the histone acetyltransferase Tip60 to modify H4K12 acetylation. eNeuro, 1(1). doi: 10.1523/ENEURO.0019-14.2014.

      However, as a precaution, and as also suggested by the other reviewers, we will perform some of these overexpression experiments in the absence of endogenous TIP60 using 3' UTR-specific siRNA/shRNA.

      We thank the reviewer for the comment on the multi-subunit complex, and we would like to expand our study by determining the interaction of some of the complex subunits with wild-type TIP60 (which forms nuclear condensates), the TIP60 HAT mutant (which enters the nucleus but does not form condensates), and TIP60 K187R (which does not enter the nucleus and does not form condensates). We will include the results of these experiments in the revised manuscript.

      • It is known that over-expression after transient transfection can lead to non-specific acetylation of lysines on the proteins, likely in part to protect from proteasome-mediated degradation. It is not clear whether the Kac sites targeted in the experiments are based on published/public data. In that sense, it is surprising that the K327R mutant does not behave like a HAT-dead mutant (which is what exactly?) or the K187R mutant as this site needs to be auto-acetylated to free the catalytic pocket, so essential for acetyltransferase activity like in all MYST-family HATs. In addition, the effect of K187R on the total acetyl-lysine signal of Tip60 is very surprising as this site does not seem to be a dominant one in public databases.

      Response: We chose the autoacetylation sites based on previously published studies in which LC-MS/MS and in vitro acetylation assays were used to identify autoacetylation sites in TIP60, including K187. We have already mentioned this in the manuscript and quoted the references (1. Yang, C., et al. (2012). Function of the active site lysine autoacetylation in Tip60 catalysis. PLoS One 7, e32886. 10.1371/journal.pone.0032886. 2. Yi, J., et al. (2014). Regulation of histone acetyltransferase TIP60 function by histone deacetylase 3. The Journal of Biological Chemistry 289, 33878-33886. 10.1074/jbc.M114.575266.). We would like to emphasize that both of these studies identified K187 as an autoacetylation site in TIP60. Since the TIP60 HAT mutant (with significantly reduced catalytic activity) can also enter the nucleus, it is not surprising that the K327R mutant could also enter the nucleus.

      • As the physiological relevance of the results is not clear, the mutants need to be analyzed at native expression levels to study real functional effects on transcription and localization (ChIP/IF). It is not clear what the claim that Tip60 forms nuclear foci/punctate signals at physiological levels is based on. This is certainly debatable, in part because of the poor choice of antibodies available for IF analysis. In that sense, it is not clear which antibody is used in the Westerns. Endogenous Tip60 is known to be expressed as multiple isoforms from splice variants, the most dominant being isoform 2 (PLIP), which lacks a large part (aa96-147) of the so-called IDR domain presented in the study. Does this major isoform behave the same?

      Response: The TIP60 antibody used in the study is from Santa Cruz (Cat. No. sc-166323). This antibody is widely used for TIP60 detection by several methods and has been cited in numerous publications; the catalogue number will be mentioned in the manuscript. Regarding isoforms, three isoforms of TIP60 are known, among which isoform 2 is the major expressed form and the one used in our study. Isoforms 1 and 2 have the same IDR length (150 amino acids), while isoform 3 has an IDR of 97 amino acids. Interestingly, K187 is present in all the isoforms (as already mentioned in the manuscript), and the region missing in isoform 3 (amino acids 96-147) has a lower propensity for disorder (marked with a blue circle). This shows that all the isoforms of TIP60 have the tendency to phase separate.

      Author response image 1.

      • It is extremely strange to show that the K187R mutant fails to get in the nuclei by cell imaging but remains chromatin-bound by fractionation... If K187 is auto-acetylated and required to enter the nucleus, why would a HAT-dead mutant not behave the same?

      Response: We would like to note that neither the HAT mutant nor the K187R mutant is completely catalytically dead; as our data show, both mutants retain catalytic activity, although at significantly decreased levels. We believe that K187 acetylation is critical for TIP60 to enter the nucleus, and once TIP60 shuttles into the nucleus, autoacetylation of other sites is required for efficient chromatin binding. In the fractionation assay, the nuclear membrane is dissolved while preparing the soluble fraction, so there is no hindrance for the K187R mutant in accessing the chromatin. The HAT mutant, by contrast, can still acetylate the K187 site and is thus able to enter the nucleus; however, its residual catalytic activity is either unable to autoacetylate the other residues required for efficient chromatin binding or unable to counter the activity of HDACs deacetylating TIP60.

      • If K187 acetylation is key to Tip60 function, it would be most logical (and classical) to test a K187Q acetyl-mimic substitution. In that sense, what happens with the R188Q mutant? That all goes back to the fact that this cluster of basic residues looks quite like an NLS.

      Response: As suggested, we will generate an acetylation-mimicking mutant of the K187 site and examine it. The results will be added to the revised manuscript.

      • The effect of the mutant on the TIP60 complex itself needs to be analyzed, e.g. for associated subunits like p400, ING3, TRRAP, Brd8...

      Response: As suggested, we will examine the effect of the mutations on the TIP60 complex and its associated subunits.

    1. Author Response:

      Reviewer #1:

      Summary:

      This research study utilizes a realistic motoneuron model to explore the potential to trace back the appropriate levels of excitation, inhibition, and neuromodulation in the firing patterns of motoneurons observed in in-vitro and in-vivo experiments in mammals. The research employs high-performance computing power to achieve its objectives. The work introduces a new framework that enhances understanding of the neural inputs to motoneuron pools, thereby opening up new avenues for hypothesis testing research.

      Strengths: The significance of the study holds relevance for all neuroscientists. Motoneurons are a unique class of neurons with known distribution of outputs for a wide range of voluntary and involuntary motor commands, and their physiological function is precisely understood. More importantly, they can be recorded in-vivo using minimally invasive methods, and they are directly impacted by many neurodegenerative diseases at the spinal cord level. The computational framework developed in this research offers the potential to reverse engineer the synaptic input distribution when assessing motor unit activity in humans, which holds particular importance. Overall, the strength of the findings focuses on providing a novel framework for studying and understanding the inputs that govern motoneuron behavior, with broad applications in neuroscience and potential implications for understanding neurodegenerative diseases. It highlights the significance of the study for various research domains, making it valuable to the scientific community.

      Weaknesses: The exact levels of inhibition, excitation, and neuromodulatory inputs to neural networks are unknown. Therefore the work is based on fine-tuned measures that are indirectly based on experimental results. However, obtaining such physiological information is challenging and currently impossible. From a computational perspective it is a challenge that in theory can be solved. Thus, although we have no ground-truth evidence, this framework can provide compelling evidence for all hypothesis testing research and potentially solve this physiological problem with the use of computers.

      We agree with the reviewer. This work was intended to determine the feasibility of reverse engineering motor unit firing patterns using neuron models with a high degree of realism. Given that the results support this feasibility, our model and technique will serve to construct new hypotheses as well as to test them.

      Reviewer #2:

      The study presents an extensive computational approach to identify the motor neuron input from the characteristics of single motor neuron discharge patterns during a ramp up/down contraction. This reverse engineering approach is relevant due to limitations in our ability to estimate this input experimentally. Using well-established models of single motor neurons, a (very) large number of simulations were performed that allowed identification of this relation. In this way, the results enable researchers to measure motor neuron behavior and from those results determine the underlying neural input scheme. Overall, the results are very convincing and represent an important step forward in understanding the neural strategies for controlling movement.

      Nevertheless, I would suggest that the authors consider the following recommendations to strengthen the message further. First, I believe that the relation between individual motor neuron behavioral characteristics (delta F, brace height, etc.) and the motor neuron input properties can be illustrated more clearly. Although this is explained in the text, I believe it is not optimally supported by figures. Figure 6 shows this to some extent, but Figures 8 and 9, as well as Table 1, show primarily the goodness of fit rather than the actual fit.

      We agree with the reviewer that showing the relationship between the motor neuron behavioral characteristics (delta F, brace height, etc.) and the motor neuron input properties would be a great addition to the manuscript. Because the regression models have multiple dimensions (7 inputs and 3 outputs), it is difficult to show the relationship in a static image. We thought it best to show the goodness of fit even though it is more abstract and less intuitive. We have added a supplemental diagram to Figure 8 to show the structure of the reverse-engineered model that was fit (see Figure 8D).

      Author response image 1: Figure 8. Residual plots showing the goodness of fit of the different predicted values: (A) Inhibition, (B) Neuromodulation, and (C) excitatory Weight Ratio. The summary plots are for the models with the highest R² values in Table 1. The predicted values are calculated using the features extracted from the firing rates (see Figure 7, sections Machine learning inference of motor pool characteristics and Regression using motoneuron outputs to predict input organization). Diagram (D) shows the multidimensionality of the RE models (see Model fits), which have 7 feature inputs (see Feature Extraction) predicting 3 outputs (Inhibition, Neuromodulation, and Weight Ratio).
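      The multi-output structure described above (7 firing-pattern features predicting 3 input-organization targets) can be sketched in a few lines. This is a hypothetical illustration only, not the authors' actual pipeline: the feature names, the data, and the choice of a linear model are placeholders for whatever regression family was fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder data standing in for the simulated motoneuron pools:
# each row is one simulation, each column one extracted firing-pattern
# feature (e.g. delta F, brace height, ...). Values are synthetic.
X = rng.normal(size=(200, 7))            # 7 feature inputs
true_map = rng.normal(size=(7, 3))       # hypothetical linear mapping
Y = X @ true_map                         # 3 targets: inhibition,
                                         # neuromodulation, weight ratio

# scikit-learn regressors accept a 2-D target, fitting all 3 outputs
# simultaneously from the same 7-dimensional feature vector.
model = LinearRegression().fit(X, Y)
pred = model.predict(X[:5])
print(pred.shape)                        # (5, 3): one triplet per pool
```

In practice any multi-output regressor (random forest, gradient boosting, a small neural network) slots into the same `fit(X, Y)` / `predict(X)` shape, which is why the goodness-of-fit plots in the figure are reported per output dimension.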

      Second, I would have expected the discussion to have addressed specifically the question of which of the two primary schemes (push-pull, balanced) is the most prevalent. This is the main research question of the study, but it is to some degree left unanswered. Now that the authors have identified the relation between the characteristics of motor neuron behaviors (which has been reported in many previous studies), why not exploit this finding by summarizing the results of previous studies (at least a few representative ones) and discuss the most likely underlying input scheme? Is there a consistent trend towards one of the schemes, or are both strategies commonly used?

      We agree with the reviewer that our discussion should have addressed which of the two primary schemes, push-pull or balanced, is the most prevalent. At first glance, the upper right of Figure 6 looks the most realistic when compared to real data; we thus expect the push-pull scheme to dominate for the given task. We have added a brief section (Push-Pull vs Balance Motor Command) to the discussion to address the reviewer's comments. This section is not exhaustive but frames the debate using relevant literature. We are also now preparing to deploy these techniques on real data.

      In addition, it seems striking to me that highly non-linear excitation profiles are necessary to obtain a linear CST ramp in many model configurations. Although somewhat speculative, one may expect that an approximately linear relation is desired for robust and intuitive motor control. It seems to me that humans generally have a good ability to accurately grade the magnitude of the motor output, which implies that either a non-linear relation has been learnt (complex task), or that the central nervous system can generally rely on a somewhat linear relation between the neural drive to the muscle and the output (simpler task).

      We agree with the reviewer, and we were surprised by these results. Our motoneuron pool is equipped with persistent inward currents (PICs), which are nonlinear. Therefore, for the motoneuron pool to produce a linear output, the central nervous system would have to incorporate these nonlinearities into its commands.

      Following this reasoning, it could be interesting to report also for which input scheme, the excitation profile is most linear. I understand that this is not the primary aim of the study, but it may be an interesting way to elaborate on the finding that in many cases non-linear excitation profiles were needed to produce the linear ramp.

      This is a very interesting point. The most realistic firing patterns – with respect to human data – are found in the parameter regions in the upper right in Figure 6, which in fact produce the most nonlinear input (see push-pull pattern in Figure 4C). However, in future studies we hope to separate the total motor command illustrated here into descending and feedback commands. This may result in a more linear descending drive.

    1. Author response:

      Public Reviews:

      Reviewer #1 (Public Review):

      (1) It will be interesting to monitor the levels of another MIM insertase namely, OXA1. This will help to understand whether some of the observed changes in levels of OXPHOS subunits are related to alterations in the amounts of this insertase.

      OXA1 was not detected in the untargeted mass spectrometry analysis, most likely because it is a polytopic membrane protein, spanning the membrane five times (1,2). Consequently, we measured OXA1 levels by immunoblotting, comparing patient fibroblast cells to the healthy control (HC). No significant change in OXA1 steady state levels was observed.

      See the results below. These results will be added and discussed in the revised manuscript.

      Author response image 1.

      (2) Figure 3: How do the authors explain that although TIMM17 and TIMM23 were found to be significantly reduced by Western analysis they were not detected as such by the Mass Spec. method?

      The untargeted mass spectrometry in the current study failed to detect TIMM17 in both patient fibroblasts and mouse neurons, while TIMM23 was detected only in mouse neurons, where a decrease was observed but was not significant. This is most likely because TIMM17 and TIMM23 are both polytopic membrane proteins, spanning the membrane four times, which makes it difficult to extract them in quantities suitable for MS detection (2,3).

      (3) How do the authors explain the higher levels of some proteins in the TIMM50 mutated cells?

      The levels of fully functional TIM23 complex are decreased in patients' fibroblasts. Therefore, the mechanism by which the steady state levels of some TIM23 substrate proteins are increased can only be explained by events that occur outside the mitochondria. These could include increases in transcription, translation or post-translational modifications, any of which may raise steady state levels despite the decrease in the steady state level of the import complex.

      (4) Can the authors elaborate on why mutated cells are impaired in their ability to switch their energetic emphasis to glycolysis when needed?

      Cellular regulation of the metabolic switch to glycolysis occurs via two known pathways: 1) Activation of AMP-activated protein kinase (AMPK) by increased levels of AMP/ADP (4). 2) Inhibition of pyruvate dehydrogenase (PDH) complexes by pyruvate dehydrogenase kinases (PDK) (5). Therefore, changes in the steady state levels of any of these regulators could push the cells towards anaerobic energy production, when needed. In our model systems, we did not observe changes in any of the AMPK, PDH or PDK subunits that were detected in our untargeted mass spectrometry analysis (see volcano plots below, no PDK subunits were detected in patient fibroblasts). Although this doesn’t directly explain why the cells have an impaired ability to switch their energetic emphasis, it does possibly explain why the switch did not occur de facto.

      Author response image 2.

      Reviewer #2 (Public Review):

      (1) The authors claim in the abstract, the introduction, and the discussion that TIMM50 and the TIM23 translocase might not be relevant for mitochondrial protein import in mammals. This is misleading and certainly wrong!!!

      Indeed, it was not our intention to claim that the TIM23 complex might not be relevant. We have now rewritten the relevant parts to convey the correct message:

      Abstract – 

      Line 25 - “Strikingly, TIMM50 deficiency had no impact on the steady state levels of most of its putative substrates, suggesting that even low levels of a functional TIM23 complex are sufficient to maintain the majority of complex-dependent mitochondrial proteome.”

      Introduction – 

      Line 87 - Surprisingly, functional and physiological analysis points to the possibility that low levels of TIM23 complex core subunits (TIMM50, TIMM17 and TIMM23) are sufficient for maintaining steady-state levels of most presequence-containing proteins. However, the reduced TIM23CORE component levels do affect some critical mitochondrial properties and neuronal activity.

      Discussion – 

      Line 339 – “…surprising, as normal TIM23 complex levels are suggested to be indispensable for the translocation of presequence-containing mitochondrial proteins…”

      Line 344 – “…it is possible that unlike what occurs in yeast, normal levels of mammalian TIMM50 and TIM23 complex are mainly essential for maintaining the steady state levels of intricate complexes/assemblies.”

      Line 396 – “In summary, our results suggest that even low levels of TIMM50 and TIM23CORE components suffice in maintaining the majority of mitochondrial matrix and inner membrane proteome. Nevertheless, reductions in TIMM50 levels led to a decrease of many OXPHOS and MRP complex subunits, which indicates that normal TIMM50 levels might be mainly essential for maintaining the steady state levels and assembly of intricate complex proteins.”

      (1) Homberg B, Rehling P, Cruz-Zaragoza LD. The multifaceted mitochondrial OXA insertase. Trends Cell Biol. 2023;33(9):765–72. 

      (2) Carroll J, Altman MC, Fearnley IM, Walker JE. Identification of membrane proteins by tandem mass spectrometry of protein ions. Proc Natl Acad Sci U S A.

      2007;104(36):14330–5. 

      (3) Dekker PJT, Keil P, Rassow J, Maarse AC, Pfanner N, Meijer M. Identification of MIM23, a putative component of the protein import machinery of the mitochondrial inner membrane. FEBS Lett. 1993;330(1):66–70. 

      (4) Trefts E, Shaw RJ. AMPK: restoring metabolic homeostasis over space and time. Mol Cell [Internet]. 2021;81(18):3677–90. Available from:

      https://doi.org/10.1016/j.molcel.2021.08.015

      (5) Zhang S, Hulver MW, McMillan RP, Cline MA, Gilbert ER. The pivotal role of pyruvate dehydrogenase kinases in metabolic flexibility. Nutr Metab. 2014;11(1):1–9.

    1. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      We thank the reviewer for his valuable input and careful assessment, which have significantly improved the clarity and rigor of our manuscript.

      Summary:

      Mazer & Yovel 2025 dissect the inverse problem of how echolocators in groups manage to navigate their surroundings despite intense jamming using computational simulations.

      The authors show that despite the 'noisy' sensory environments that echolocating groups present, agents can still access some amount of echo-related information and use it to navigate their local environment. It is known that echolocating bats have strong small and large-scale spatial memory that plays an important role for individuals. The results from this paper also point to the potential importance of an even lower-level, short-term role of memory in the form of echo 'integration' across multiple calls, despite the unpredictability of echo detection in groups. The paper generates a useful basis to think about the mechanisms in echolocating groups for experimental investigations too.

      Strengths:

      (1) The paper builds on biologically well-motivated and parametrised 2D acoustics and sensory simulation setup to investigate the various key parameters of interest

      (2) The 'null-model' of echolocators not being able to tell apart objects & conspecifics while echolocating still shows agents successfully emerge from groups - even though the probability of emergence drops severely in comparison to cognitively more 'capable' agents. This is nonetheless an important result showing the direction-of-arrival of a sound itself is the 'minimum' set of ingredients needed for echolocators navigating their environment.

      (3) The results generate an important basis in unraveling how agents may navigate in sensorially noisy environments with a lot of irrelevant and very few relevant cues.

      (4) The 2D simulation framework is simple and computationally tractable enough to perform multiple runs to investigate many variables - while also remaining true to the aim of the investigation.

      Weaknesses:

      There are a few places in the paper that can be misunderstood or don't provide complete details. Here is a selection:

      (1) Line 61: '... studies have focused on movement algorithms while overlooking the sensory challenges involved' : This statement does not match the recent state of the literature. While the previous models may have had the assumption that all neighbours can be detected, there are models that specifically study the role of limited interaction arising from a potential inability to track all neighbours due to occlusion, and the effect of responding to only one/few neighbours at a time e.g. Bode et al. 2011 R. Soc. Interface, Rosenthal et al. 2015 PNAS, Jhawar et al. 2020 Nature Physics.

      We appreciate the reviewer's comment and the relevant references. We have revised the manuscript accordingly to clarify the distinction between studies that incorporate limited interactions and those that explicitly analyze sensory constraints and interference. We have refined our statement to acknowledge these contributions while maintaining our focus on sensory challenges beyond limited neighbor detection, such as signal degradation, occlusion effects, and multimodal sensory integration (see lines 61-64):

      While collective movement has been extensively studied in various species, including insect swarming, fish schooling, and bird murmuration (Pitcher, Partridge and Wardle, 1976; Partridge, 1982; Strandburg-Peshkin et al., 2013; Pearce et al., 2014; Rosenthal, Twomey, Hartnett, Wu, Couzin, et al., 2015; Bastien and Romanczuk, 2020; Davidson et al., 2021; Aidan, Bleichman and Ayali, 2024), as well as in swarm robotics agents performing tasks such as coordinated navigation and maze-solving (Faria Dias et al., 2021; Youssefi and Rouhani, 2021; Cheraghi, Shahzad and Graffi, 2022), most studies have focused on movement algorithms, often assuming full detection of neighbors (Parrish and Edelstein-Keshet, 1999; Couzin et al., 2002, 2005; Sumpter et al., 2008; Nagy et al., 2010; Bialek et al., 2012; Gautrais et al., 2012; Attanasi et al., 2014). Some models have incorporated limited interaction rules where individuals respond to one or a few neighbors due to sensory constraints (Bode, Franks and Wood, 2011; Jhawar et al., 2020). However, fewer studies explicitly examine how sensory interference, occlusion, and noise shape decision-making in collective systems (Rosenthal et al., 2015).

      (2) The word 'interference' is used loosely places (Line 89: '...took all interference signals...', Line 319: 'spatial interference') - this is confusing as it is not clear whether the authors refer to interference in the physics/acoustics sense, or broadly speaking as a synonym for reflections and/or jamming.

      To improve clarity, we have revised the manuscript to distinguish between different types of interference:

      · Acoustic interference (jamming): Overlapping calls that completely obscure echo detection, preventing bats from perceiving necessary environmental cues.

      · Acoustic interference (masking): Partial reduction in signal clarity due to competing calls.

      · Spatial interference: Physical obstruction by conspecifics affecting movement and navigation.

      We have updated the manuscript to use these terms consistently and explicitly define them in relevant sections (see lines 87-94 and 329-330). This distinction ensures that the reader can differentiate between interference as an acoustic phenomenon and its broader implications in navigation.

      (3) The paper discusses original results without reference to how they were obtained or what was done. The lack of detail here must be considered while interpreting the Discussion e.g. Line 302 ('our model suggests...increasing the call-rate..' - no clear mention of how/where call-rate was varied) & Line 323 '..no benefit beyond a certain level..' - also no clear mention of how/where call-level was manipulated in the simulations.

      All tested parameters, including call rate dynamics and call intensity variations, are detailed in the Methods section and Tables 1 and 2. Specifically:

      · Call Rate Variation: The Inter-Pulse Interval (IPI) was modeled based on documented echolocation behavior, decreasing from 100 msec during the search phase to 35 msec (~28 calls per second) at the end of the approach phase, and to 5 msec (200 calls per second) during the final buzz (see Table 2). This natural variation in call rate was not manually manipulated in the model but emerged from the simulated bat behavior.

      · Call Intensity Variation: The tested call intensity levels (100, 110, 120, 130 dB SPL) are presented in Table 1 under the “Call Level” parameter. The effect of increasing call intensity was analyzed in relation to exit probability, jamming probability, and collision rate. This is now explicitly referenced in the Discussion.

      We have revised the manuscript to explicitly reference these aspects in the Results and Discussion sections.
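A minimal sketch of the phase-dependent IPI described above, using the 100/35/5 ms values quoted from Table 2; the distance thresholds at which the phases switch are illustrative placeholders, not the model's actual parameters:

```python
def inter_pulse_interval(target_distance_m,
                         search_ipi=0.100, approach_ipi=0.035, buzz_ipi=0.005,
                         approach_dist_m=2.0, buzz_dist_m=0.5):
    """Phase-dependent inter-pulse interval [s].

    The 100/35/5 ms IPI values follow the response text (Table 2); the
    phase-switch distances (2.0 m, 0.5 m) are hypothetical placeholders."""
    if target_distance_m is None or target_distance_m > approach_dist_m:
        return search_ipi                      # search phase
    if target_distance_m > buzz_dist_m:        # approach: IPI shortens with range
        frac = (target_distance_m - buzz_dist_m) / (approach_dist_m - buzz_dist_m)
        return approach_ipi + frac * (search_ipi - approach_ipi)
    return buzz_ipi                            # final buzz
```

As in the model, the call rate emerges from the bat–target geometry rather than being set by hand.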

      Reviewer #2 (Public review):

      We are grateful for the reviewer’s insightful feedback, which has helped us clarify key aspects of our research and strengthen our conclusions.

      This manuscript describes a detailed model of bats flying together through a fixed geometry. The model considers elements that are faithful to both bat biosonar production and reception and the acoustics governing how sound moves in the air and interacts with obstacles. The model also incorporates behavioral patterns observed in bats, like one-dimensional feature following and temporal integration of cognitive maps. From a simulation study of the model and comparison of the results with the literature, the authors gain insight into how often bats may experience destructive interference of their acoustic signals and those of their peers, and how much such interference may actually negatively affect the groups' ability to navigate effectively. The authors use generalized linear models to test the significance of the effects they observe.

      In terms of its strengths, the work relies on a thoughtful and detailed model that faithfully incorporates salient features, such as acoustic elements like the filter for a biological receiver and temporal aggregation as a kind of memory in the system. At the same time, the authors' abstract features are complicating without being expected to give additional insights, as can be seen in the choice of a two-dimensional rather than three-dimensional system. I thought that the level of abstraction in the model was perfect, enough to demonstrate their results without needless details. The results are compelling and interesting, and the authors do a great job discussing them in the context of the biological literature.

      The most notable weakness I found in this work was that some aspects of the model were not entirely clear to me.

      For example, the directionality of the bat's sonar call in relation to its velocity. Are these the same?

      For simplicity, in our model, the head is aligned with the body, therefore the direction of the echolocation beam is the same as the direction of the flight.

      Moreover, call directionality (directivity) is not directly influenced by velocity. Instead, directionality is estimated using the piston model, as described in the Methods section. The directionality is based on the emission frequency and is thus primarily linked to the behavioral phases of the bat, with frequency shifts occurring as the bat transitions from search to approach to buzz phases. During the approach phase, the bat emits calls with higher frequencies, resulting in increased directionality. This is supported by the literature (Jakobsen and Surlykke, 2010; Jakobsen, Brinkløv and Surlykke, 2013). This phase is also associated with a natural reduction in flight speed, which is a well-documented behavioral adaptation in echolocating bats (Jakobsen et al., 2024).

      To clarify this in the manuscript, we have updated the text to explicitly state that directionality follows phase-dependent frequency changes rather than being a direct function of velocity, see lines 460-465.
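The piston model referred to here is the standard far-field beam pattern D(θ) = |2·J1(ka·sinθ)/(ka·sinθ)|, with k the wavenumber and a the emitter aperture radius. A sketch follows; the aperture radius is an illustrative value, not taken from the paper. It shows how the higher approach-phase frequencies yield a narrower beam:

```python
import numpy as np
from scipy.special import j1

def piston_directivity(theta_rad, freq_hz, aperture_radius_m=0.0065, c=343.0):
    """Far-field piston-model beam pattern (linear gain, on-axis = 1).

    The ~6.5 mm aperture radius is an illustrative assumption. A higher
    emission frequency gives a larger ka and thus a narrower beam, which
    is how phase-dependent frequency shifts increase directionality."""
    theta = np.atleast_1d(np.asarray(theta_rad, dtype=float))
    k = 2.0 * np.pi * freq_hz / c
    x = k * aperture_radius_m * np.sin(theta)
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = np.abs(2.0 * j1(x[nz]) / x[nz])
    return out if out.size > 1 else float(out[0])
```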

      If so, what is the difference between phi_target and phi_tx in the model equations?

      · φ_target: the angle between the focal bat and the reflected object (target).

      · The angle [rad] between the masking bat and the target, from the transmitter’s perspective.

      · φ_tx: the angle between the transmitting conspecific and the receiving focal bat, from the transmitter’s point of view.

      · The angle between the receiving bat and the transmitting bat, from the receiver’s point of view.

      These definitions have been explicitly stated in the revised manuscript to prevent any ambiguity (lines 467-468). Additionally, a Supplementary figure demonstrating the geometrical relations has been added to the manuscript.

      Author response image 1.

      What is a bat's response to colliding with a conspecific (rather than a wall)?

      In nature, minor collisions between bats are common and typically do not result in significant disruptions to flight (Boerma et al., 2019; Roy et al., 2019; Goldstein et al., 2024). Given this, our model does not explicitly simulate the physical impact of a collision event. Instead, during a collision event the bat keeps decreasing its velocity and changing its flight direction until the distance between bats is above the threshold (0.4 m). We assume that the primary cost of such interactions arises from the effort required to avoid collisions, rather than from the collision itself. This assumption aligns with observations of bat behavior in dense flight environments, where individuals prioritize collision avoidance; accordingly, we did not model post-collision dynamics.

      From the statistical side, it was not clear if replicate simulations were performed. If they were, which I believe is the right way due to stochasticity in the model, how many replicates were used, and are the standard errors referred to throughout the paper between individuals in the same simulation or between independent simulations, or both?

      The number of repetitions for each scenario is detailed in Table 1, but we included it in a more prominent location in the text for clarity. Specifically, we now state (Lines 274-275):

      "The number of repetitions for each scenario was as follows: 1 bat: 240; 2 bats: 120; 5 bats: 48; 10 bats: 24; 20 bats: 12; 40 bats: 12; 100 bats: 6."

      Regarding the reported standard errors, they are calculated across all individuals within each scenario, without distinguishing between different simulation trials.

      We clarified in the revised text (Lines 534-535 in Statistical Analysis)

      Overall, I found these weaknesses to be superficial and easily remedied by the authors. The authors presented well-reasoned arguments that were supported by their results, and which were used to demonstrate how call interference impacts the collective's roost exit as measured by several variables. As the authors highlight, I think this work is valuable to individuals interested in bat biology and behavior, as well as to applications in engineered multi-agent systems like robotic swarms.

      Reviewer #3 (Public review):

      We sincerely appreciate the reviewer’s thoughtful comments and the time invested in evaluating our work, which have greatly contributed to refining our study.

      We would like to note that, in general, our model often simplifies some of the bats’ abilities, under the assumption that if the simulated bats manage to perform this difficult task with simpler mechanisms, real, better-adapted bats will probably perform even better. This line of reasoning recurs in several of the answers below.

      Summary:

      The authors describe a model to mimic bat echolocation behavior and flight under high-density conditions and conclude that the problem of acoustic jamming is less severe than previously thought, conflating the success of their simulations (as described in the manuscript) with hard evidence for what real bats are actually doing. The authors base their model on two species of bats that fly at "high densities" (defined by the authors as colony sizes from tens to tens of thousands of individuals and densities of up to 33.3 bats/m2), Pipistrellus kuhli and Rhinopoma microphyllum. This work fits into the broader discussion of bat sensorimotor strategies during collective flight, and simulations are important to try to understand bat behavior, especially given a lack of empirical data. However, I have major concerns about the assumptions of the parameters used for the simulation, which significantly impact both the results of the simulation and the conclusions that can be made from the data. These details are elaborated upon below, along with key recommendations the authors should consider to guide the refinement of the model.

      Strengths:

      This paper carries out a simulation of bat behavior in dense swarms as a way to explain how jamming does not pose a problem in dense groups. Simulations are important when we lack empirical data. The simulation aims to model two different species with different echolocation signals, which is very important when trying to model echolocation behavior. The analyses are fairly systematic in testing all ranges of parameters used and discussing the differential results.

      Weaknesses:

      The justification for how the different foraging phase call types were chosen for different object detection distances in the simulation is unclear. Do these distances match those recorded from empirical studies, and if so, are they identical for both species used in the simulation?

      The distances at which bats transition between echolocation phases are identical for both species in our model (see Table 2). These distances are based on well-documented empirical studies of bat hunting and obstacle avoidance behavior (Griffin, Webster and Michael, 1958; Simmons and Kick, 1983; Schnitzler et al., 1987; Kalko, 1995; Hiryu et al., 2008; Vanderelst and Peremans, 2018). These references provide extensive evidence that insectivorous bats systematically adjust their echolocation calls in response to object proximity, following the characteristic phases of search, approach, and buzz.

      To improve clarity, we have updated the text to explicitly state that the phase transition distances are empirically grounded and apply equally to both modeled species (lines 430-447).

      What reasoning do the authors have for a bat using the same call characteristics to detect a cave wall as they would for detecting a small insect?

      In echolocating bats, call parameters are primarily shaped by the target distance and echo strength. Accordingly, there is little difference in call structure between prey capture and obstacles-related maneuvers, aside from intensity adjustments based on target strength (Hagino et al., 2007; Hiryu et al., 2008; Surlykke, Ghose and Moss, 2009; Kothari et al., 2014). In our study, due to the dense cave environment, the bats are found to operate in the approach phase nearly all the time, which is consistent with natural cave emergence, where they are navigating through a cluttered environment rather than engaging in open-space search. For one of the species (Rhinopoma M.), we also have empirical recordings of individuals flying under similar conditions (Goldstein et al., 2024). Our model was designed to remain as simple as possible while relying on conservative assumptions that may underestimate bat performance. If, in reality, bats fine-tune their echolocation calls even earlier or more precisely during navigation than assumed, our model would still conservatively reflect their actual capabilities.

      We actually used logarithmically frequency modulated (FM) chirps, generated using the MATLAB built-in function chirp(t, f0, t1, f1, 'logarithmic'). This method aligns with the nonlinear FM characteristics of Pipistrellus kuhlii (PK) and Rhinopoma microphyllum (RM) and provides a realistic approximation of their echolocation signals. We acknowledge that this was not sufficiently emphasized in the original text, and we have now explicitly highlighted this in the revised version to ensure clarity (see Lines 447-449 in Methods).
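For reference, an equivalent sketch in Python using scipy.signal.chirp, which supports the same 'logarithmic' sweep as the MATLAB function; the sample rate, duration and sweep frequencies here are illustrative, not the species' parameters:

```python
import numpy as np
from scipy.signal import chirp

fs = 250_000                       # sample rate [Hz]; illustrative
dur = 0.005                        # 5 ms call duration; illustrative
t = np.arange(int(fs * dur)) / fs
f0, f1 = 40_000.0, 20_000.0        # illustrative downward sweep, not species values
# SciPy analogue of MATLAB chirp(t, f0, t1, f1, 'logarithmic'):
# instantaneous frequency moves geometrically from f0 at t=0 to f1 at t=t1.
call = chirp(t, f0=f0, t1=dur, f1=f1, method='logarithmic')
```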

      The two species modeled have different calls. In particular, the bandwidth varies by a factor of 10, meaning the species' sonars will have different spatial resolutions. Range resolution is about 10x better for PK compared to RM, but the authors appear to use the same thresholds for "correct detection" for both, which doesn't seem appropriate.

      The detection process in our model is based on Saillant’s method using a filter bank, as detailed in the paper (Saillant et al., 1993; Neretti et al., 2003; Sanderson et al., 2003). This approach inherently incorporates the advantages of a wider bandwidth, meaning that the differences in range resolution between the species are already accounted for within the signal-processing framework. Thus, there is no need to explicitly adjust the model parameters for bandwidth variations, as these effects emerge from the applied method.
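To illustrate why the bandwidth advantage emerges from the signal processing itself — this is not the Saillant filter bank, just the underlying matched-filter principle — the sketch below compares autocorrelation main-lobe widths for a wide-band (PK-like) versus narrow-band (RM-like) sweep; all numbers are illustrative:

```python
import numpy as np
from scipy.signal import chirp, hilbert

def mainlobe_width(signal, thresh=0.5):
    """Samples of the normalized autocorrelation envelope at or above
    `thresh` -- a proxy for range resolution: a wider sweep bandwidth
    gives a narrower main lobe, i.e. a sharper delay estimate."""
    ac = np.correlate(signal, signal, mode='full')
    env = np.abs(hilbert(ac))      # matched-filter output envelope
    env = env / env.max()
    return int(np.sum(env >= thresh))

fs = 250_000                              # sample rate [Hz]; illustrative
t = np.arange(int(fs * 0.005)) / fs       # 5 ms call
wide = chirp(t, 70_000, 0.005, 20_000)    # ~50 kHz sweep (PK-like; illustrative)
narrow = chirp(t, 30_000, 0.005, 25_000)  # ~5 kHz sweep (RM-like; illustrative)
```

The ~10× bandwidth ratio translates directly into a ~10× narrower correlation peak, so the species difference in range resolution is produced by the receiver model without any extra threshold tuning.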

      Also, the authors did not mention incorporating/correcting for/exploiting Doppler, which leads me to assume they did not model it.

      The reviewer is correct. To maintain model simplicity, we did not incorporate the Doppler effect or its impact on echolocation. The exclusion of Doppler effects was based on the assumption that while Doppler shifts can influence frequency perception, their impact on jamming and overall navigation performance is minor within the modelled context.

      The maximal Doppler shifts expected for the bats in this scenario are ~1 kHz. These shifts would be applied variably across signals due to the semi-random relative velocities between bats, leading to a mixed effect on frequency changes. This variability would likely result in an overall reduction in jamming rather than exacerbating it, aligning with our previous statement that our model may overestimate the severity of acoustic interference. Such Doppler shifts would result in localization errors of 2-4 cm (i.e., 200-400 microseconds) (Boonman, Parsons and Jones, 2003).

      We have now explicitly highlighted this in the revised version (see Lines 468-470).
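The arithmetic behind these figures can be sketched as follows; the closing speed and FM sweep rate used in the example are illustrative assumptions, not values from the model:

```python
def doppler_shift_echo(f_emit_hz, v_rel_ms, c=343.0):
    """Two-way Doppler shift of an echo from a reflector closing at
    v_rel (approximation 2*v*f/c, valid for v << c)."""
    return 2.0 * v_rel_ms * f_emit_hz / c

def delay_error_from_shift(df_hz, sweep_rate_hz_per_s):
    """Rough FM-ranging delay error caused by an uncompensated frequency
    offset: df / (sweep rate). Steeper sweeps tolerate larger shifts."""
    return df_hz / sweep_rate_hz_per_s
```

For example, a 40 kHz call and a 4 m/s closing speed (both illustrative) give a shift on the order of the ~1 kHz quoted above.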

      The success of the simulation may very well be due to variation in the calls of the bats, which ironically enough demonstrates the importance of a jamming avoidance response in dense flight. This explains why the performance of the simulation falls when bats are not able to distinguish their own echoes from other signals. For example, in Figure C2, there are calls that are labeled as conspecific calls and have markedly shorter durations and wider bandwidths than others. These three phases for call types used by the authors may be responsible for some (or most) of the performance of the model since the correlation between different call types is unlikely to exceed the detection threshold. But it turns out this variation in and of itself is what a jamming avoidance response may consist of. So, in essence, the authors are incorporating a jamming avoidance response into their simulation.

      We fully agree that the natural variations in call design between the phases contribute significantly to interference reduction (see our discussion in a previous paper in Mazar & Yovel, 2020). However, we emphasize that this cannot be classified as a Jamming Avoidance Response (JAR). In our model, bats respond only to the physical presence of objects and not to the acoustic environment or interference itself. There is no active or adaptive adjustment of call design to minimize jamming beyond the natural phase-dependent variations in call structure. Therefore, while variation in call types does inherently reduce interference, this effect emerges passively from the modeled behavior rather than as an intentional strategy to avoid jamming.

      The authors claim that integration over multiple pings (though I was not able to determine the specifics of this integration algorithm) reduces the masking problem. Indeed, it should: if you have two chances at detection, you've effectively increased your SNR by 3dB.

      The reviewer is correct. Indeed, integration over multiple calls improves signal-to-noise ratio (SNR), effectively increasing it by approximately 3 dB per doubling of observations. The specifics of the integration algorithm are detailed in the Methods section, where we describe how sensory information is aggregated across multiple time steps to enhance detection reliability.
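The gain can be stated compactly as the standard averaging relation (noise power falls as 1/n over n independent observations), which yields the 3 dB per doubling mentioned above:

```python
import math

def integration_gain_db(n_calls):
    """SNR gain from averaging n independent echo observations: noise
    power falls as 1/n, so the gain is 10*log10(n) dB -- about 3 dB per
    doubling, matching the reviewer's two-look example."""
    return 10.0 * math.log10(n_calls)
```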

      They also claim - although it is almost an afterthought - that integration dramatically reduces the degradation caused by false echoes. This also makes sense: from one ping to the next, the bat's own echo delays will correlate extremely well with the bat's flight path. Echo delays due to conspecifics will jump around kind of randomly. However, the main concern is regarding the time interval and number of pings of the integration, especially in the context of the bat's flight speed. The authors say that a 1s integration interval (5-10 pings) dramatically reduces jamming probability and echo confusion. This number of pings isn't very high, and it occurs over a time interval during which the bat has moved 5-10m. This distance is large compared to the 0.4m distance-to-obstacle that triggers an evasive maneuver from the bat, so integration should produce a latency in navigation that significantly hinders the ability to avoid obstacles. Can the authors provide statistics that describe this latency, and discussion about why it doesn't seem to be a problem?

      As described in the Methods section, the bat’s collision avoidance response does not rely solely on the integration process. Instead, the model incorporates real-time echoes from the last calls, which are used independently of the integration process for immediate obstacle avoidance maneuvers. This ensures that bats can react to nearby obstacles without being hindered by the integration latency. The slower integration, on the other hand, is used for clustering, outlier removal and estimating wall directions to support the pathfinding process, as illustrated in Supplementary Figure 1.

      Additionally, our model assumes that bats store the physical positions of echoes in an allocentric coordinate system (x-y). The integration occurs after transforming these detections from a local relative reference frame to a global spatial representation. This allows for stable environmental mapping while maintaining responsiveness to immediate changes in the bat’s surroundings.

      See lines 518-523 in the revised version.
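The transformation from a bat-relative detection to the allocentric (x-y) frame can be sketched as follows. This is our own illustration, not code from the model; the function and parameter names are hypothetical.

```python
import math

def echo_to_allocentric(bat_x, bat_y, bat_heading, echo_delay, echo_azimuth,
                        speed_of_sound=343.0):
    """Convert one echo detection, measured relative to the bat, into
    global (allocentric) x-y coordinates.

    echo_delay   : round-trip travel time of the echo (s)
    echo_azimuth : angle of arrival relative to the bat's heading (rad)
    """
    # Round-trip delay -> one-way distance to the reflector.
    distance = speed_of_sound * echo_delay / 2.0
    # Rotate from the bat's egocentric frame into the world frame.
    world_angle = bat_heading + echo_azimuth
    return (bat_x + distance * math.cos(world_angle),
            bat_y + distance * math.sin(world_angle))
```

Integrating in this global frame keeps obstacle detections stable while the bat moves, which is what lets the slow mapping process and the fast reactive avoidance operate independently.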

      The authors are using a 2D simulation, but this very much simplifies the challenge of a 3D navigation task, and there is no explanation as to why this is appropriate. Bat densities and bat behavior are discussed per unit area when realistically it should be per unit volume. In fact, the authors reference studies to justify the densities used in the simulation, but these studies were done in a 3D world. If the authors have justification for why it is realistic to model a 3D world in a 2D simulation, I encourage them to provide references justifying this approach.

      We acknowledge that this is a simplification; however, from an echolocation perspective, a 2D framework represents a worst-case scenario in terms of bat densities and maneuverability:

      · Higher Effective Density: A 2D model forces all bats into a single plane rather than distributing them through a 3D volume, increasing the likelihood of overlap in calls and echoes and making jamming more severe. As described in the text, the average distance to the nearest bat in our simulation is 0.27 m (with 100 bats), whereas reported distances in very dense colonies are 0.5 m, as observed in Myotis grisescens and Tadarida brasiliensis (Fujioka et al., 2021; Sabol and Hudson, 1995; Betke et al., 2008; Gillam et al., 2010).

      · Reduced Maneuverability: In 3D space, bats can use vertical movement to avoid obstacles and conspecifics. A 2D constraint eliminates this degree of freedom, increasing collision risk and limiting escape options.

      Thus, our 2D model provides a conservative, difficult test case, ensuring that our findings are valid under conditions where jamming and collision risks are maximized. Additionally, the 2D framework is computationally efficient, allowing us to perform multiple simulation runs to explore a broad parameter space and systematically test the impact of different variables.

      To address the reviewer’s concern, we have clarified this justification in the revised text and provide supporting references where applicable (see Methods, lines 407-412).
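The density argument can be checked with a toy calculation (a self-contained illustration using arbitrary unit dimensions, not the simulation geometry): at equal bat counts, collapsing the third dimension systematically reduces nearest-neighbour spacing.

```python
import math
import random

def mean_nearest_neighbour(points):
    """Mean distance from each point to its closest neighbour."""
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

random.seed(0)
n = 100
# The same number of bats confined to a unit square (2D) vs a unit cube (3D).
bats_2d = [(random.random(), random.random()) for _ in range(n)]
bats_3d = [(random.random(), random.random(), random.random()) for _ in range(n)]

# Forcing the group into a plane packs it more tightly, so jamming is worse.
assert mean_nearest_neighbour(bats_2d) < mean_nearest_neighbour(bats_3d)
```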

      The focus on "masking" (which appears to be just in-band noise), especially relative to the problem of misassigned echoes, is concerning. If the bat calls are all the same waveform (downsweep linear FM of some duration, I assume - it's not clear from the text), false echoes would be a major problem. Masking, as the authors define it, just reduces SNR. This reduction is something like sqrt(N), where N is the number of conspecifics whose echoes are audible to the bat, so this allows the detection threshold to be set lower, increasing the probability that a bat's echo will exceed a detection threshold. False echoes present a very different problem. They do not reduce SNR per se, but rather they cause spurious threshold excursions (N of them!) that the bat cannot help but interpret as obstacle detection. I would argue that in dense groups the mis-assignment problem is much more important than the SNR problem.

      There is substantial literature supporting the assumption that bats can recognize their own echoes and distinguish them from conspecific signals (Schnitzler and Kalko, 2001; Kazial, Burnett and Masters, 2001; Burnett and Masters, 2002; Kazial, Kenny and Burnett, 2008; Chiu, Xian and Moss, 2009; Yovel et al., 2009; Beetz and Hechavarría, 2022). However, we acknowledge that false echoes may present a major challenge in dense groups. To address this, we explicitly tested the impact of the self-echo identification assumption in our study (see Results, Figure 4: The impact of confusion on performance, and lines 345-355 in the Discussion).

      Furthermore, we examined a full confusion scenario, where all reflected echoes from conspecifics were misinterpreted as obstacle reflections (i.e., 100% confusion). Our results show that this significantly degrades navigation performance, supporting the argument that echo misassignment is a critical issue. However, we also explored a simple mitigation strategy based on temporal integration with outlier rejection, which provided some improvement in performance. This suggests that real bats may possess additional mechanisms to enhance self-echo identification and reduce false detections. See lines XX in the manuscript for further discussion.
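A minimal sketch of such temporal integration with outlier rejection (the parameter values are hypothetical, not those used in the model): detections are pooled over several pings, and points without enough nearby support are discarded as likely false echoes.

```python
import math

def integrate_and_filter(detections_per_ping, radius=0.3, min_support=3):
    """Pool echo detections (x, y) over several pings and drop points with
    too little support: echoes of the bat's own calls recur at the same
    allocentric position ping after ping, while false echoes from
    conspecifics land at essentially random positions.
    """
    pooled = [p for ping in detections_per_ping for p in ping]
    kept = []
    for p in pooled:
        support = sum(1 for q in pooled
                      if q is not p and math.dist(p, q) <= radius)
        if support >= min_support:
            kept.append(p)
    return kept
```

A wall point re-detected on every ping accumulates support and survives, whereas a one-off misassigned conspecific echo does not.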

      The criteria set for flight behavior (lines 393-406) are not justified with any empirical evidence of the flight behavior of wild bats in collective flight. How did the authors determine the avoidance distances? Also, what is the justification for the time limit of 15 seconds to emerge from the opening? Instead of an exit probability, why not instead use a time criterion, similar to "How long does it take X% of bats to exit?"

      While we acknowledge that wild bats may employ more complex behaviors for collision avoidance, we chose to implement a simplified decision-making rule in our model to maintain computational tractability.

      The avoidance distances (1.5 m from walls and 0.4 m from other bats) were selected as internal parameters to ensure coherent flight trajectories while maintaining a reasonable collision rate. These distances provide a balance between maneuverability and stability, preventing erratic flight patterns while still enabling effective obstacle avoidance. In the revised paper, we have added supplementary figures illustrating the effect of model parameters on performance, specifically focusing on the avoidance distance.

      The 15-second exit limit was determined as described in the text (Lines 403-404): “A 15-second window was chosen because it is approximately twice the average exit time for 40 bats and allows for a second corrective maneuver if needed.” In other words, it allowed each bat to circle the ‘cave’ twice to exit even in the most crowded environment. This threshold was set to keep simulation time reasonable while allowing sufficient time for most bats to exit successfully.

      We acknowledge that the alternative approach suggested by the reviewer, measuring the time taken for a certain percentage of bats to exit, is also valid. However, in our model some outlier bats fail to exit and continue flying for many minutes, usually because they slightly missed the opening (see Video S1). Including them would lead to excessive simulation times, making it difficult to generate repetitions while teaching us little. Our chosen approach ensures practical runtime constraints while still capturing the relevant performance metrics.

      What is the empirical justification for the 1-10 calls used for integration?

      The "average exit time for 40 bats" is also confusing and not well explained. Was this determined empirically? From the simulation? If the latter, what are the conditions? Does it include masking, no masking, or which species?

      Previous studies have demonstrated that bats integrate acoustic information received sequentially over several echolocation calls (2-15), effectively constructing an auditory scene in complex environments (Ulanovsky and Moss, 2008; Chiu, Xian and Moss, 2009; Moss and Surlykke, 2010; Yovel and Ulanovsky, 2017; Salles, Diebold and Moss, 2020). Additionally, bats are known to produce echolocation sound groups when spatiotemporal localization demands are high (Kothari et al., 2014). Studies have documented call sequences ranging from 2 to 15 grouped calls (Moss et al., 2010), and it has been hypothesized that grouping facilitates echo segregation.

      We did not use a single integration window; we tested integration sizes between 1 and 10 calls and present the results in Figure 3A. This range was chosen based on prior empirical findings and to explore how different levels of temporal aggregation affect navigation performance. Indeed, the results show that performance levels off for integration windows of 5-10 calls (Figure 3A).

      Regarding the average exit time for 40 bats, this value was determined from our simulations, where it represents the mean time for successful exits under standard conditions with masking.

      We have revised the text to clarify these details (see line 466).

      References:

      Aidan, Y., Bleichman, I. and Ayali, A. (2024) ‘Pausing to swarm: locust intermittent motion is instrumental for swarming-related visual processing’, Biology letters, 20(2), p. 20230468. Available at: https://doi.org/10.1098/rsbl.2023.0468.

      Attanasi, A. et al. (2014) ‘Collective Behaviour without Collective Order in Wild Swarms of Midges’, PLOS Computational Biology, 10(7). Available at: https://doi.org/10.1371/journal.pcbi.1003697.

      Bastien, R. and Romanczuk, P. (2020) ‘A model of collective behavior based purely on vision’, Science Advances, 6(6). Available at: https://doi.org/10.1126/sciadv.aay0792.

      Beetz, M.J. and Hechavarría, J.C. (2022) ‘Neural Processing of Naturalistic Echolocation Signals in Bats’, Frontiers in Neural Circuits, 16, p. 899370. Available at: https://doi.org/10.3389/FNCIR.2022.899370/BIBTEX.

      Betke, M. et al. (2008) ‘Thermal Imaging Reveals Significantly Smaller Brazilian Free-Tailed Bat Colonies Than Previously Estimated’, Journal of Mammalogy, 89(1), pp. 18–24. Available at: https://doi.org/10.1644/07-MAMM-A-011.1.

      Bialek, W. et al. (2012) ‘Statistical mechanics for natural flocks of birds’, Proceedings of the National Academy of Sciences, 109(13), pp. 4786–4791. Available at: https://doi.org/10.1073/PNAS.1118633109.

      Bode, N.W.F., Franks, D.W. and Wood, A.J. (2011) ‘Limited interactions in flocks: Relating model simulations to empirical data’, Journal of the Royal Society Interface, 8(55), pp. 301–304. Available at: https://doi.org/10.1098/RSIF.2010.0397.

      Boerma, D.B. et al. (2019) ‘Wings as inertial appendages: How bats recover from aerial stumbles’, Journal of Experimental Biology, 222(20). Available at: https://doi.org/10.1242/JEB.204255/VIDEO-3.

      Boonman, A.M., Parsons, S. and Jones, G. (2003) ‘The influence of flight speed on the ranging performance of bats using frequency modulated echolocation pulses’, The Journal of the Acoustical Society of America, 113(1), p. 617. Available at: https://doi.org/10.1121/1.1528175.

      Burnett, S.C. and Masters, W.M. (2002) ‘Identifying Bats Using Computerized Analysis and Artificial Neural Networks’, North American Symposium on Bat Research, 9.

      Cheraghi, A.R., Shahzad, S. and Graffi, K. (2022) ‘Past, Present, and Future of Swarm Robotics’, in Lecture Notes in Networks and Systems. Available at: https://doi.org/10.1007/978-3-030-82199-9_13.

      Chiu, C., Xian, W. and Moss, C.F. (2009) ‘Adaptive echolocation behavior in bats for the analysis of auditory scenes’, Journal of Experimental Biology, 212(9), pp. 1392–1404. Available at: https://doi.org/10.1242/jeb.027045.

      Couzin, I.D. et al. (2002) ‘Collective Memory and Spatial Sorting in Animal Groups’, Journal of Theoretical Biology, 218(1), pp. 1–11. Available at: https://doi.org/10.1006/jtbi.2002.3065.

      Couzin, I.D. et al. (2005) ‘Effective leadership and decision-making in animal groups on the move’, Nature, 433(7025), pp. 513–516. Available at: https://doi.org/10.1038/nature03236.

      Davidson, J.D. et al. (2021) ‘Collective detection based on visual information in animal groups’, Journal of the Royal Society, 18(180), p. 2021.02.18.431380. Available at: https://doi.org/10.1098/rsif.2021.0142.

      Faria Dias, P.G. et al. (2021) ‘Swarm robotics: A perspective on the latest reviewed concepts and applications’, Sensors. Available at: https://doi.org/10.3390/s21062062.

      Fujioka, E. et al. (2021) ‘Three-Dimensional Trajectory Construction and Observation of Group Behavior of Wild Bats During Cave Emergence’, Journal of Robotics and Mechatronics, 33(3), pp. 556–563. Available at: https://doi.org/10.20965/jrm.2021.p0556.

      Gautrais, J. et al. (2012) ‘Deciphering Interactions in Moving Animal Groups’, PLOS Computational Biology, 8(9), p. e1002678. Available at: https://doi.org/10.1371/JOURNAL.PCBI.1002678.

      Gillam, E.H. et al. (2010) ‘Echolocation behavior of Brazilian free-tailed bats during dense emergence flights’, Journal of Mammalogy, 91(4), pp. 967–975. Available at: https://doi.org/10.1644/09-MAMM-A-302.1.

      Goldstein, A. et al. (2024) ‘Collective Sensing – On-Board Recordings Reveal How Bats Maneuver Under Severe Acoustic Interference’, Under Review, pp. 1–25.

      Griffin, D.R., Webster, F.A. and Michael, C.R. (1958) ‘The echolocation of flying insects by bats’, Animal Behaviour, 8(3–4).

      Hagino, T. et al. (2007) ‘Adaptive SONAR sounds by echolocating bats’, International Symposium on Underwater Technology, UT 2007 - International Workshop on Scientific Use of Submarine Cables and Related Technologies 2007, pp. 647–651. Available at: https://doi.org/10.1109/UT.2007.370829.

      Hiryu, S. et al. (2008) ‘Adaptive echolocation sounds of insectivorous bats, Pipistrellus abramus, during foraging flights in the field’, The Journal of the Acoustical Society of America, 124(2), pp. EL51–EL56. Available at: https://doi.org/10.1121/1.2947629.

      Jakobsen, L. et al. (2024) ‘Velocity as an overlooked driver in the echolocation behavior of aerial hawking vespertilionid bats’. Available at: https://doi.org/10.1016/j.cub.2024.12.042.

      Jakobsen, L., Brinkløv, S. and Surlykke, A. (2013) ‘Intensity and directionality of bat echolocation signals’, Frontiers in Physiology, 4 APR(April), pp. 1–9. Available at: https://doi.org/10.3389/fphys.2013.00089.

      Jakobsen, L. and Surlykke, A. (2010) ‘Vespertilionid bats control the width of their biosonar sound beam dynamically during prey pursuit’, 107(31). Available at: https://doi.org/10.1073/pnas.1006630107.

      Jhawar, J. et al. (2020) ‘Noise-induced schooling of fish’, Nature Physics 2020 16:4, 16(4), pp. 488–493. Available at: https://doi.org/10.1038/s41567-020-0787-y.

      Kalko, E.K.V. (1995) ‘Insect pursuit, prey capture and echolocation in pipistrelle bats (Microchiroptera)’, Animal Behaviour, 50(4), pp. 861–880.

      Kazial, K.A., Burnett, S.C. and Masters, W.M. (2001) ‘Individual and Group Variation in Echolocation Calls of Big Brown Bats, Eptesicus Fuscus (Chiroptera: Vespertilionidae)’, Journal of Mammalogy, 82(2), pp. 339–351. Available at: https://doi.org/10.1644/1545-1542(2001)082<0339:iagvie>2.0.co;2.

      Kazial, K.A., Kenny, T.L. and Burnett, S.C. (2008) ‘Little brown bats (Myotis lucifugus) recognize individual identity of conspecifics using sonar calls’, Ethology, 114(5), pp. 469–478. Available at: https://doi.org/10.1111/j.1439-0310.2008.01483.x.

      Kothari, N.B. et al. (2014) ‘Timing matters: Sonar call groups facilitate target localization in bats’, Frontiers in Physiology, 5 MAY. Available at: https://doi.org/10.3389/fphys.2014.00168.

      Moss, C.F. and Surlykke, A. (2010) ‘Probing the natural scene by echolocation in bats’, Frontiers in Behavioral Neuroscience. Available at: https://doi.org/10.3389/fnbeh.2010.00033.

      Nagy, M. et al. (2010) ‘Hierarchical group dynamics in pigeon flocks’, Nature 2010 464:7290, 464(7290), pp. 890–893. Available at: https://doi.org/10.1038/nature08891.

      Neretti, N. et al. (2003) ‘Time-frequency model for echo-delay resolution in wideband biosonar’, The Journal of the Acoustical Society of America, 113(4), pp. 2137–2145. Available at: https://doi.org/10.1121/1.1554693.

      Parrish, J.K. and Edelstein-Keshet, L. (1999) ‘Complexity, Pattern, and Evolutionary Trade-Offs in Animal Aggregation’, Science, 284(5411), pp. 99–101. Available at: https://doi.org/10.1126/SCIENCE.284.5411.99.

      Partridge, B.L. (1982) ‘The Structure and Function of Fish Schools’, 246(6), pp. 114–123. Available at: https://doi.org/10.2307/24966618.

      Pearce, D.J.G. et al. (2014) ‘Role of projection in the control of bird flocks’, Proceedings of the National Academy of Sciences of the United States of America, 111(29), pp. 10422–10426. Available at: https://doi.org/10.1073/pnas.1402202111.

      Pitcher, T.J., Partridge, B.L. and Wardle, C.S. (1976) ‘A blind fish can school’, Science, 194(4268), pp. 963–965. Available at: https://doi.org/10.1126/science.982056.

      Rosenthal, S.B., Twomey, C.R., Hartnett, A.T., Wu, H.S. and Couzin, I.D. (2015) ‘Revealing the hidden networks of interaction in mobile animal groups allows prediction of complex behavioral contagion’, Proceedings of the National Academy of Sciences of the United States of America, 112(15), pp. 4690–4695. Available at: https://doi.org/10.1073/pnas.1420068112.

      Roy, S. et al. (2019) ‘Extracting interactions between flying bat pairs using model-free methods’, Entropy, 21(1). Available at: https://doi.org/10.3390/e21010042.

      Sabol, B.M. and Hudson, M.K. (1995) ‘Technique using thermal infrared-imaging for estimating populations of gray bats’, Journal of Mammalogy, 76(4). Available at: https://doi.org/10.2307/1382618.

      Saillant, P.A. et al. (1993) ‘A computational model of echo processing and acoustic imaging in frequency- modulated echolocating bats: The spectrogram correlation and transformation receiver’, The Journal of the Acoustical Society of America, 94(5). Available at: https://doi.org/10.1121/1.407353.

      Salles, A., Diebold, C.A. and Moss, C.F. (2020) ‘Echolocating bats accumulate information from acoustic snapshots to predict auditory object motion’, Proceedings of the National Academy of Sciences of the United States of America, 117(46), pp. 29229–29238. Available at: https://doi.org/10.1073/PNAS.2011719117/SUPPL_FILE/PNAS.2011719117.SAPP.PDF.

      Sanderson, M.I. et al. (2003) ‘Evaluation of an auditory model for echo delay accuracy in wideband biosonar’, The Journal of the Acoustical Society of America, 114(3), pp. 1648–1659. Available at: https://doi.org/10.1121/1.1598195.

      Schnitzler, H.-U. and Kalko, E.K.V. (2001) ‘Echolocation by insect-eating bats’, BioScience, 51(7), pp. 557–569. Available at: https://academic.oup.com/bioscience/article-abstract/51/7/557/268230 (Accessed: 17 March 2025).

      Schnitzler, H.-U. et al. (1987) ‘The echolocation and hunting behavior of the bat,Pipistrellus kuhli’, Journal of Comparative Physiology A, 161(2), pp. 267–274. Available at: https://doi.org/10.1007/BF00615246.

      Simmons, J.A. and Kick, S.A. (1983) ‘Interception of Flying Insects by Bats’, Neuroethology and Behavioral Physiology, pp. 267–279. Available at: https://doi.org/10.1007/978-3-642-69271-0_20.

      Strandburg-Peshkin, A. et al. (2013) ‘Visual sensory networks and effective information transfer in animal groups’, Current Biology. Cell Press. Available at: https://doi.org/10.1016/j.cub.2013.07.059.

      Sumpter, D.J.T. et al. (2008) ‘Consensus Decision Making by Fish’, Current Biology, 18(22), pp. 1773–1777. Available at: https://doi.org/10.1016/J.CUB.2008.09.064.

      Surlykke, A., Ghose, K. and Moss, C.F. (2009) ‘Acoustic scanning of natural scenes by echolocation in the big brown bat, Eptesicus fuscus’, Journal of Experimental Biology, 212(7), pp. 1011–1020. Available at: https://doi.org/10.1242/JEB.024620.

      Theriault, D.H. et al. (no date) ‘Reconstruction and analysis of 3D trajectories of Brazilian free-tailed bats in flight’. Available at: https://cs-web.bu.edu/faculty/betke/papers/2010-027-3d-bat-trajectories.pdf (Accessed: 4 May 2023).

      Ulanovsky, N. and Moss, C.F. (2008) ‘What the bat’s voice tells the bat’s brain’, Proceedings of the National Academy of Sciences of the United States of America, 105(25), pp. 8491–8498. Available at: https://doi.org/10.1073/pnas.0703550105.

      Vanderelst, D. and Peremans, H. (2018) ‘Modeling bat prey capture in echolocating bats: the feasibility of reactive pursuit’, Journal of Theoretical Biology, 456, pp. 305–314.

      Youssefi, K.A.R. and Rouhani, M. (2021) ‘Swarm intelligence based robotic search in unknown maze-like environments’, Expert Systems with Applications, 178. Available at: https://doi.org/10.1016/j.eswa.2021.114907.

      Yovel, Y. et al. (2009) ‘The voice of bats: How greater mouse-eared bats recognize individuals based on their echolocation calls’, PLoS Computational Biology, 5(6). Available at: https://doi.org/10.1371/journal.pcbi.1000400.

      Yovel, Y. and Ulanovsky, N. (2017) ‘Bat Navigation’, The Curated Reference Collection in Neuroscience and Biobehavioral Psychology, pp. 333–345. Available at: https://doi.org/10.1016/B978-0-12-809324-5.21031-6.

    1. Author response:

      We thank the reviewers for their thorough evaluation and constructive feedback on our manuscript.

      We think that their valuable suggestions will strengthen the manuscript and help us clarify several important points.

      All reviewers acknowledged the importance of our theoretical results and network classification in making pattern formation analysis a more tractable problem. At the same time, they have also raised a number of important concerns that we shall carefully consider.

      A. A major clarification that the reviewers found important concerns the definition of non-trivial pattern transformations and its generalization to higher dimensions. In this regard, the reviewers’ comments are:

      Reviewer #1:

      (on non-trivial pattern transformations):

      (3) All modelling is confined to one spatial dimension, and the very definition of a "non-trivial" transformation is framed in terms of peak positions along a line, which clearly must be reformulated for higher dimensions. It's well-known that diffusions in 1, 2, and 3 dimensions are also dramatically different, so the relevance of the three-class taxonomy to real multicellular tissues remains unclear, or at least should be explained in more detail.

      Reviewer #2:

      (on non-trivial pattern transformations):

      (5) The definition of non-trivial pattern formation is provided only in the Supplementary Information, despite its central importance for interpreting the main results. It would significantly improve clarity if this definition were included and explained in the main text. Additionally, it remains unclear how the definition is consistently applied across the different initial conditions. In particular, the authors should clarify how slope-based measures are determined for both the random noise and sharp peak/step function initial states. Furthermore, the authors do not specify how the sign function is evaluated at zero. If the standard mathematical definition sgn(0)=0 is used, then even a simple widening of a peak could fulfill the criterion for nontrivial pattern transformation.

      We agree with Reviewer #2 that including a more detailed definition of non-trivial pattern transformation in the main text would enhance the clarity of the paper. The one-dimensional (1D) definition currently provided in the Supplementary Information was chosen because all computations presented therein involve exclusively one-dimensional patterns. However, we acknowledge that this definition, as it was, did not have a fully unambiguous generalization to higher dimensions. Therefore, in a revised version of the manuscript, we will incorporate an expanded definition applicable to higher-dimensional cases.

      This general definition of a non-trivial pattern transformation should make no reference to the sign of spatial derivatives of either the initial or resulting patterns. Specifically, a pattern transformation is considered non-trivial if it satisfies the following criteria:

      - It is heterogeneous: The resulting pattern is heterogeneous in space.

      - It is rearranging: The arrangement of critical points (i.e. peaks, valleys and saddle points in a gene product concentration) along the domain in the resulting pattern of a gene product differs from the arrangement of critical points in its initial pattern. This includes the emergence of new critical points, the disappearance of existing ones, or the spatial displacement of critical points from one location to another.

      - It is non-replicating: The spatial arrangement of critical points in the pattern of one gene product must differ from that of any other upstream gene product.

      Nonetheless, our two initial patterns are spatially discontinuous functions: in homogeneous initial patterns, the white noise is discontinuous by definition; and for the spike and spike+homogeneous initial patterns, we use sharp spikes defined by the rectangular function, which is discontinuous at the spike boundaries. Therefore, the aforementioned definition should be supplemented with the following two ad hoc assumptions:

      - Homogeneous initial patterns do not comprise any critical point. White noise in this type of initial patterns represents small thermodynamic fluctuations around the steady state and, for the purpose of pattern transformation, this is equivalent to a constant concentration along the domain.

      - Spike and spike+homogeneous initial patterns each contain a single critical point located at the center of the spike. The sharp spikes, modeled using the rectangular function, serve as a theoretical idealization to facilitate mathematical analysis. Once diffusion begins to act, these sharp boundaries are smoothed into differentiable gradients, maintaining a unique critical point at the center of the initial spike, which is the most relevant information for pattern transformation.
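For sampled 1D patterns, the "rearranging" criterion can be made operational with a small sketch (our own illustration, not code from the study): extract the arrangement of interior local maxima and minima and compare it between the initial and resulting pattern.

```python
def critical_points(pattern):
    """Arrangement of interior local maxima and minima of a sampled 1D pattern."""
    crit = []
    for i in range(1, len(pattern) - 1):
        if pattern[i - 1] < pattern[i] > pattern[i + 1]:
            crit.append(('max', i))
        elif pattern[i - 1] > pattern[i] < pattern[i + 1]:
            crit.append(('min', i))
    return crit

def is_rearranging(initial, final):
    """True if critical points appeared, vanished, or moved between patterns."""
    return critical_points(initial) != critical_points(final)
```

Under this check, widening [0, 0, 1, 0, 0] into [0, 0.5, 1, 0.5, 0] leaves the single maximum in place (trivial), while splitting it into [0, 1, 0, 1, 0] changes the arrangement (non-trivial).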

      Finally, it is worth recalling that our gene network classification is fundamentally based on an analysis of the dispersion relation associated with the gene network, and the construction of this dispersion relation is independent of the spatial dimensionality of the domain (i.e. it does not require assuming any specific number of dimensions). Describing this dispersion relation only in the SI may have hindered the understandability of the article; consequently, this description will be moved to the main text in an upcoming version. Thus, the gene networks that can lead to pattern transformation are the same in 1D, 2D or 3D. As for the resulting patterns, the broad description we provide also applies to any number of dimensions: these would be periodic, non-periodic as in the amplified-noise patterns, or non-periodic as in the hierarchic networks. For the latter, notice that, except for boundary effects that we discuss later, the spike initial condition is radially symmetric and thus the patterns resulting from it will also be radially symmetric. We will make this point more explicit in a revised version of the article, especially since, as suggested, this important portion of the Supplementary Information will be incorporated into the main text.

      Reviewer 2 suggests that, under our definition of non-trivial pattern transformation, the simple widening of a concentration peak would constitute a non-trivial pattern transformation. This is not the case, as the figures already illustrate by example: in a widening there is no change in the position of the critical point. A different situation applies if a wide and completely flat concentration peak (i.e. a plateau) forms. As we will explain in the coming version, this is not possible because of requirement R5.

      We think that this clarification of the definition of non-trivial pattern transformation will also help clarify the next point (B below) since it would make it clearer that this article does not intend to explain which specific resulting pattern would arise from any given gene network.

      B. The main concern among these relates to the validity of our linearization of the model equations and the extension of the results obtained for the linear system to the fully nonlinear system. In this regard, the reviewers’ comments are:

      Reviewer #1:

      (on linearization):

      (2) A central step in the model formulation is the linearisation of the reaction term around a homogeneous steady state; higher-order kinetics, including ubiquitous bimolecular sinks such as A + B → AB, are simply collapsed into the Jacobian without any stated amplitude bound on the perturbations. Because the manuscript never analyses how far this assumption can be relaxed, the robustness of the three-class taxonomy under realistic nonlinear reactions or large spike amplitudes remains uncertain.

      Reviewer #2:

      (on linearization):

      (2) Most of the proofs presented in the Supplementary Information rely on linearized versions of the governing equations, and it remains unclear how these results extend to the fully nonlinear system. We are concerned that the generality of the conclusions drawn from the linear analysis may be overstated in the main text. For example, in Section S3, the authors introduce the concept of dynamic equivalence of transitive chains (Proposition S3.1) and intracellular transitive M-branching (Proposition S3.2), which pertains to the system's steady-state behavior. However, the proof is based solely on the linearized equations, without additional justification for why the result should hold in the presence of nonlinearities. Moreover, the linearized system is used to analyze the response to a "spike initial pattern of arbitrary height C" (SI Chapter S5.1), yet it is not clear how conclusions derived from the linear regime can be valid for large perturbations, where nonlinear effects are expected to play a significant role. We encourage the authors to clarify the assumptions under which the linearized analysis remains valid and to discuss the potential limitations of applying these results to the nonlinear regime.

      In this article, we address two main questions: first, which gene network topologies can give rise to non-trivial pattern transformations; and second, which broad types of resulting patterns these gene network topologies can produce. Thus, we do not intend to explain which exact resulting pattern would arise from any given gene network (i.e. a gene network topology with specific functions and interaction strengths or weights), a question for which non-linearities do indeed matter.

      For most known gene regulatory networks, available empirical information is typically limited to the nature of gene product regulations -indicating whether they act as activators or inhibitors- while details about the specific functional form of these regulations are rare. For instance, given two gene products, i and j, the network may indicate that i acts as an activator of j, implying that the concentration of j increases with that of i. However, this increase could follow a variety of functional forms: it may be quadratic (e.g., f_j(g_i) ∝ g_i²), cubic (e.g., f_j(g_i) ∝ g_i³), or any other function f_j(g_i). As we explain in the description of our model, we restrict our study to functions with a monotonicity constraint: higher concentrations of i lead to increased production of j (i.e., ∂f_j/∂g_i > 0). In other words, a given gene interaction is always inhibitory or activatory; it does not change sign. This monotonicity constraint corresponds to requirement (R5) in our main text. This requirement is based on the biologically plausible idea that the complexity of gene regulation in development stems more from the topology of gene networks than from the complexity of the regulation by which one gene product may regulate another (i.e. we use simple monotonic functions).

      Question 1: A critical part of understanding question 1 is the dispersion relation, which was explained in the SI. From the reviewers’ comments it is clear that moving this crucial part into the main text of an upcoming version of the article would improve understandability, especially for question 1.

      In brief, any pattern transformation requires the initial pattern to change. The trigger of such change is a change in the concentration of some gene product, either conceptualized as a noise fluctuation (in the homogeneous initial pattern) or a regulated change in a specific point (in the spike initial pattern). Mathematically, both can be conceptualized as perturbations and, for pattern transformation to be possible, such perturbation should grow so that the initial pattern becomes unstable and can change to another resulting pattern.

      If the perturbation is small, one can use the standard linear perturbation analysis in S6.2 of our Supplementary Information. In other words, the linear analysis is enough to ascertain whether a small perturbation would grow or not. A gene network in which this does not happen would be unable to lead to pattern transformation, whatever the nonlinear part of f(g). In that sense, the linear approximation provides a necessary condition that any gene network needs to fulfill to lead to pattern transformation.
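As an illustration of this necessary condition (a hypothetical numerical sketch of our own, not the code or parameter values of the article), one can compute the dispersion relation of the linearized system dĝ/dt = (J − k²D)ĝ and check whether any wavenumber has a positive growth rate:

```python
import numpy as np

def dispersion_relation(J, D, wavenumbers):
    """Largest real part of the eigenvalues of J - k^2 * D for each wavenumber k.

    J: Jacobian of the reaction part at the homogeneous steady state.
    D: diagonal matrix of diffusion coefficients (0 for intracellular products).
    """
    return np.array([
        np.linalg.eigvals(J - (k ** 2) * D).real.max()
        for k in wavenumbers
    ])

# Classic two-species activator-inhibitor example (illustrative values only):
J = np.array([[1.0, -2.0],
              [3.0, -4.0]])          # stable without diffusion
D = np.diag([1.0, 40.0])             # inhibitor diffuses much faster
k = np.linspace(0.0, 2.0, 400)
growth = dispersion_relation(J, D, k)

# Diffusion-driven instability: stable at k = 0, unstable at some k > 0.
print(growth[0] < 0 and growth.max() > 0)  # True
```

A network whose dispersion relation stays non-positive for every k cannot amplify any small perturbation, regardless of the nonlinear terms.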

      However, the linear analysis does not ascertain whether a specific gene network will actually lead to pattern transformation (i.e., the condition is not sufficient). This, as well as the shape of the specific resulting pattern, may depend on the non-linear parts too. As we discuss, based on the dispersion relation and other complementary arguments along the article, we can also get some insights into the possible patterns from the linear approximation alone (question 2). These arguments hold thanks to the imposition of requirements (R1-R5) on the function f(g), which prevent pathological behaviors stemming from the nonlinear part of the equation.

      The amplitude bound of perturbations mentioned by Reviewer #1 is addressed by requirements (R2) and (R4). Although the solution to the linear system predicts unbounded growth of unstable eigenmodes, the nonlinear terms of the assumed functions f(g) eventually halt this growth, thereby ensuring the boundedness of solutions as imposed by (R4). This assumption on the nonlinear part is literally requirement (R2) on f(g) in the main text.
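As a minimal one-variable illustration of this point (our own toy sketch, not the article's model), a linearly unstable mode with a cubic degradation term grows exponentially at first but saturates at a finite amplitude:

```python
import math

# Toy equation dg/dt = lam*g - mu*g**3: linearly unstable at g = 0 (lam > 0),
# but the cubic term halts the growth at g = sqrt(lam/mu).
lam, mu, dt = 1.0, 1.0, 0.01
g = 1e-3                      # small perturbation of the steady state g = 0
for _ in range(5000):         # forward-Euler integration up to t = 50
    g += dt * (lam * g - mu * g ** 3)

print(abs(g - math.sqrt(lam / mu)) < 1e-6)  # True: the solution stays bounded
```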

      The transitive chains and branchings in section S3 of the Supplementary Information mentioned by the Reviewer #2 are topological properties of gene networks and therefore they influence only the linear part of the reaction-diffusion equations. This is why the proofs in that section are based on the linearized equations. We agree that clarifying this point in the text, as suggested by the reviewer, would improve the reader’s understanding of the section.

      Regarding Reviewer #2’s concerns about large perturbations, we acknowledge that the phrasing “arbitrary height” may be confusing. For the homogeneous initial conditions these perturbations are assumed to be small because they are actually molecular noise (otherwise the initial condition could not be considered homogeneous in the classical sense of developmental biology models). In the spike initial conditions in hierarchic networks the perturbation is not necessarily small. For the analysis provided in the SI we indeed assume that the perturbations are small enough for the linear approximation to hold. Notice, however, that since these networks require an intracellular self-activating loop upstream of the first extracellular signal, the effective perturbation would rapidly grow to a value determined by that loop.

      In general, the height of the initial spike does not affect the fact that hierarchic networks can lead to non-trivial pattern transformation. By definition these networks require the secretion of an extracellular signal from the cells in the spike (otherwise no change in gene product concentrations can occur over space). By definition this signal is not produced by any other cells and, thus, its concentration is governed by diffusion from the spike and its production in the cells in the spike. Thus, whatever the initial height of the spike and whatever the non-linearities in f(g), the signal’s concentration would decrease with the distance from the spike. As explained in the main text, this would lead to non-trivial pattern transformations if other general conditions are met. In general, the height of the initial perturbation can affect which specific pattern transformation would arise from a specific gene network, but not which gene network topologies can lead to pattern transformation. This will be more clearly stated in an upcoming version of the article.

      In the following, we respond to the remaining concerns raised by the reviewers:

      Reviewer #1:

      (1) The Results section is difficult to follow. Key logical steps and network configurations are described shortly in prose, which constantly require the reader to address either SI or other parts of the text (see numerous links on the requirements R1-R5 listed at the beginning of the paper) to gain minimal understanding. As a result, a scientifically literate but non-specialist reader may struggle to grasp the argument with a reasonable time invested.

      We acknowledge that the current version of the main text may not be as clear as we intended. Initially, we believed that placing the more technical mathematical passages in the Supplementary Information would make the main text more accessible to readers. However, we agree with the reviewer that including some of these computations in the main text could improve clarity. We also believe that adding a summary table outlining all the model’s requirements would further contribute to that goal.

      Reviewer #2:

      (1) We have serious concerns regarding the validity of the simulation results presented in the manuscript. Rather than simulating the full nonlinear system described by Equation (1), the authors base their results on a truncated expansion (Equation S.8.2) that captures only the time evolution of small deviations around a spatially homogeneous steady state. However, it remains unclear how this reduced system is derived from the full equations specifically, which terms are retained or neglected and why- and how the expansion of the nonlinear function can be steady-state independent, as claimed. Additionally, in simulations involving the spike plus homogeneous initial condition, it is not evident -or, where equations are provided, it is not correct- that the assumed global homogeneous background actually corresponds to a steady state of the full dynamics. We elaborate on these concerns in the following:

      We believe there has been a misunderstanding regarding the presentation of the model equations (S8.2) used throughout our simulations. Accordingly, we agree that this relevant section of the Supplementary Information should be rewritten in a revised version of the manuscript to clarify this issue. Below, we address all the concerns raised by the reviewer.

      Equation (S8.2) represents the full nonlinear system described in Equation (1). While we recognize that the model may oversimplify real biological processes, its purpose is to illustrate our general statements about pattern formation rather than to capture any specific or detailed mechanism. In this context, model (S8.2) offers three key advantages for our goals: it allows rapid manipulation of gene network topology simply by modifying the matrix J, making it ideal for illustrating pattern formation across different network classes; it accommodates gene networks of arbitrary size -unlike other models, such as the classical Gierer-Meinhardt model, which are limited to two-element Turing or noise-amplifying networks-; and, due to the simplicity of its nonlinear terms, this model involves relatively few free parameters, facilitating the fine-tuning needed to identify parameter regions where non-trivial pattern transformations occur.

      Indeed, we find that the ability of model (S8.2) to illustrate our results despite having such simple nonlinear terms (bearing in mind that at least some nonlinearity is always necessary for self-organization) strongly supports the claim that the capacity of a gene network to produce pattern transformations is fully determined by the linear part of Equation (1). In this sense, nonlinear terms primarily influence the precise parameter values at which these transformations occur and contribute to shaping specific features of the resulting patterns.

      Model (S8.2) has been successfully employed in pattern formation studies elsewhere in the literature; accordingly, we provide relevant bibliographic references to support its widespread use.

      We believe the misunderstanding arises from our explanation of the biological interpretation of the model. As noted in the accompanying bibliography, the model is based on a general reaction-diffusion mechanism assuming the existence of a steady state. However, this conceptual reaction-diffusion framework is not the same as our Equation (1); rather, it was introduced by the original proponents of the model in the seminal paper cited in our text. In this context, Equation (S8.2) describes small concentration perturbations around that steady state, where the variables represent deviations in concentration relative to the general steady state.

      The aforementioned general steady state corresponds to the trivial equilibrium point g≡0 in equations (S8.2). Consequently, all our simulations based on model (S8.2) start from this steady state, to which we add white noise to generate homogeneous initial patterns or a sharp spike for the two types of spike initial patterns.
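To make the simulation setup concrete, here is a generic sketch of this type of model (with illustrative parameters of our own choosing, not those of the article): a linear interaction term Jg, Fickian diffusion, and a cubic degradation term, integrated on a 1D periodic domain from the steady state g≡0 plus white noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's): a Turing-type Jacobian J,
# diffusion coefficients d, and cubic degradation rates mu.
J = np.array([[1.0, -2.0],
              [3.0, -4.0]])
d = np.array([1.0, 40.0])
mu = np.array([1.0, 1.0])

n, L, dt, steps = 128, 100.0, 0.002, 20000
dx = L / n
g = 1e-3 * rng.standard_normal((2, n))   # steady state g = 0 plus white noise

for _ in range(steps):
    # Periodic second-difference Laplacian for the diffusion term.
    lap = (np.roll(g, 1, axis=1) - 2 * g + np.roll(g, -1, axis=1)) / dx ** 2
    g += dt * (J @ g + d[:, None] * lap - mu[:, None] * g ** 3)

# The noise has self-organized into a bounded, finite-amplitude pattern.
print(g[0].std() > 0.01 and np.abs(g).max() < 10)  # True
```

Changing only the matrix J switches between network topologies, which is the manipulation the simulations rely on.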

      It is also worth noting that Equations (S8.2) represent a non-dimensional model.

      It is assumed that the homogeneous steady states are given by g_i=0 and g_i=c_i, where 1/c_i = \mu_i or \hat{\mu}_i, independently of the specific network structure. However, the basis for this assumption is unclear, especially since some of the functions do not satisfy this condition -for example, f5 as defined below Eq. S8.10.5. Moreover, if g_i=c_i does not correspond to a true steady state, then the time evolution of deviations from this state is not correctly described by Eq. S8.2, as the zeroth-order terms do not vanish in that case.

      From the explanations above, it is important to distinguish two scales in the process: the scale of small perturbations, where equations (S8.2) apply; and the global scale, where the conceptual general reaction-diffusion system operates. Since the specific form of this general system does not affect equations (S8.2), we assume that it follows any of the models cited in the text, which yield a non-zero steady state.

      In this sense, equations (S8.2) represent small concentration deviations of such a global system, and g(t, x) is a relative concentration: g≡0 represents the non-zero steady state of the global system, g>0 represents concentrations above it, and g<0 represents concentrations below it.

      As previously mentioned, simulations are performed using Equations (S8.2) on the basis of the equilibrium point g≡0. The result of these simulations is then superimposed on the non-zero steady state and presented in the figures along the article.

      Using the full model instead of the simplified Equations (S8.2) may result in slightly different resulting patterns, but it does not affect the gene network’s ability to produce pattern transformations, nor does it alter the main structural properties of the patterns—for example, the periodic nature of patterns generated by Turing networks.

      Additionally, the equations used contain only linear terms and a cubic degradation term for each species g_i, while neglecting all quadratic terms and cubic terms involving cross-species interactions (i≠j). An explanation for this selective truncation is not provided, and without knowledge of the full equation (f), it is impossible to assess whether this expansion is mathematically justified. If, as suggested in the Supplementary Information, the linear and cubic terms are derived from f, then at the very least, the Jacobian matrix should depend on the background steady-state concentration. However, the equations for the small deviation around a steady state (including the Jacobian matrix) used in the simulations appear to be independent of the particular steady state concentration.

      The Jacobian of Equation (S8.2) is independent of g because g represents a small perturbation around a steady state of a general reaction-diffusion system. Consequently, the matrix J corresponds to the Jacobian of the general system evaluated at that steady state. Evaluating the Jacobian of equations (S8.2) at the equilibrium point g≡0 -which represents the general steady state- recovers the matrix J.

      This is why we believe that the differences observed between the spike-only initial condition and the spike superimposed on a homogeneous background are not due to the initial conditions themselves, but rather result from a modified reaction scheme introduced through a questionable cutoff.

      "In simulations with spike initial patterns, the reference value g≡0 represents an actual concentration of 0 and therefore, we must add to (S8.2) a Heaviside function Φ acting on f (i.e., Φ(f(g))=f(g) if f(g)>0, Φ(f(g))=0 if f(g)≤0) to prevent the existence of negative concentrations for any gene product (i.e., g_i<0 for some i)." (SI chapter S8).

      This cutoff alters the dynamics (no inhibition) and introduces a different reaction scheme between the two simulations. The need for this correction may itself reflect either a problem in the original equations (which should fulfill the necessary conditions and prevent negative concentrations (R4 in main text)) or the inappropriateness of using an expanded approximation which assumes independence on the steady state concentration. It is already questionable if the linearized equations with a cubic degradation term are valid for the spike initial conditions (with different background concentration values), as the amplitude of this perturbation seems rather large.

      For homogeneous and spike+homogeneous initial conditions, we interpret equations (S8.2) as small perturbations around a non-zero steady state of a general reaction-diffusion system. For spike-only initial conditions, that steady state is zero. As we mentioned before, g≡0 then represents this zero-concentration steady state, g>0 represents positive concentrations of the general system, and g<0 would represent unfeasible negative concentrations of the general system. Therefore, the use of a cutoff function to handle such initial conditions is justified. Moreover, this cutoff function is the same as the one employed in the reference general system cited in our paper.
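The quoted cutoff can be illustrated with a one-variable toy of our own (the production function f(g) = a − g is a hypothetical choice, not the article's equations): Φ suppresses non-positive reaction rates, so an absolute concentration started at zero stays within its feasible range:

```python
def phi(rate):
    # Cutoff from the quoted SI passage: Phi(r) = r if r > 0, else 0.
    return rate if rate > 0.0 else 0.0

# One-species toy with hypothetical f(g) = a - g (illustration only):
a, dt, g = 2.0, 0.01, 0.0
for _ in range(2000):
    g += dt * phi(a - g)

print(0.0 <= g <= a)  # True: the concentration never leaves [0, a]
```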

      We acknowledge that the cutoff influences the simulations and accounts for the differences observed between spike and spike+homogeneous initial conditions. However, this distinction reflects what occurs in real biological systems, which is precisely why we differentiate these two types of initial states. For instance, the emergence of a periodic pattern in a noise-amplifying network depends critically on the formation of regions with concentrations below the steady state near the initial spike. Such regions can form in spike-plus-homogeneous initial patterns but not in spike-only initial patterns, where concentrations below the steady state would correspond to biologically unfeasible negative values.

      Lastly, we note that under the current simulation scheme, it is not possible to meaningfully assess criteria RH2a and RH2b, as they rely on nonlinear interactions that are absent from the implemented dynamics.

      It is explicitly stated in the relevant subsections of Section S7 in the Supplementary Information that, for the simulations involving RH2a and RH2b, the function f(g) in equation (S8.2) is modified by adding an ad hoc quadratic term to enable the assessment of these criteria.

      (3) Several statements in the main text are presented without accompanying proof or sufficient explanation, which makes it difficult to assess their validity. In some cases, the lack of justification raises serious doubts about whether the claims are generally true. Examples are:

      "For the purpose of clarity we will explain our results as if these cells have a simple arrangement in space (e.g., a 1D line or a 2D square lattice) but, as we will discuss, our results shall apply with the same logic to any distribution of cells in space." (Main text l.145-l.148).

      We believe that the confusion in this statement arises from the ambiguous use of the phrase “our results”. We will revise the text to provide a more precise description. Specifically, by “our results,” we refer to the conclusion that it is possible to determine whether a gene network leads to nontrivial pattern transformations based solely on its topology. This conclusion is independent of the dimensionality of space, as none of our arguments rely on assumptions specific to spatial dimensions. While one-dimensional examples are used for clarity and illustration, the underlying reasoning applies generally. In an improved version of the article, we will clarify this point explicitly and move relevant arguments from the Supplementary Information into the main text.

      Critically, our classification of gene networks is ultimately based on an argument concerning the dispersion relation associated with the network, and the construction of this dispersion relation is independent of the spatial dimensionality of the domain. In this sense, the networks identified in the text as capable of producing pattern transformations will be able to generate non-trivial pattern transformations in any spatial domain and in any number of dimensions. While the specific parameter values that permit such transformations may vary depending on the geometry and dimensionality of the domain, the existence of at least one such parameter set remains unaffected.

      The geometry of the domain can influence the specific form of the resulting patterns, but it does not alter the broader class of patterns (e.g., periodic patterns, peaks emerging around a spike, etc.) that a given gene network topology can produce. One such geometric influence, commonly observed in simulations, involves boundary effects. For example, structures such as peaks or rings forming near the boundaries may appear higher, broader, or spatially shifted compared to those arising in the central regions of the domain. However, we think a pattern consisting of a periodic train of peaks where only those near the boundary are slightly different can still be classified as a periodic pattern.

      "For any non-trivial pattern transformation (as long as it is symmetric around the initial spike), there exists an H gene network capable of producing it from a spike initial pattern." (Main text l.366f).

      A justification for this statement is provided shortly after the claim, although we acknowledge that the current explanation is somewhat cumbersome and would benefit from a clearer presentation in a revised version of the main text.

      A more detailed justification is provided in the Supplementary Information, based on three key ideas. First, any pattern (provided it is symmetric with respect to the initial spike) can be described as an arrangement of peaks with varying heights and spatial positions along a one-dimensional domain. Second, there exists a simple gene network—the diamond network—that, through parameter tuning, can produce two peaks of arbitrary height and symmetric position relative to the initial spike. Third, by placing multiple diamond networks positively upstream of a common gene product, that gene product can express peaks at each location where the upstream diamond networks induce them. Under mild additional conditions, this mechanism allows the formation of essentially any symmetric pattern. These mild conditions, along with a detailed analysis of the diamond network’s ability to generate peaks with controllable height and position, are discussed in the Supplementary Information.
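The third idea can be illustrated with a toy superposition (our own sketch; the Gaussian peak shape and the parameter values are assumptions, not the diamond network's actual output): each tuned sub-network contributes a symmetric pair of peaks, and a downstream product summing these contributions expresses the composite pattern:

```python
import numpy as np

def diamond_pair(x, position, height, width=1.0):
    """Symmetric pair of peaks at +/-position around a spike at x = 0
    (a stand-in for one tuned diamond sub-network's output)."""
    return height * (np.exp(-((x - position) / width) ** 2)
                     + np.exp(-((x + position) / width) ** 2))

x = np.linspace(-20.0, 20.0, 801)
# Target pattern: peaks of heights 1, 2 and 0.5 at distances 5, 10 and 15.
target = [(5.0, 1.0), (10.0, 2.0), (15.0, 0.5)]
downstream = sum(diamond_pair(x, p, h) for p, h in target)

# The composite pattern reaches each prescribed height at each position.
vals = [downstream[np.argmin(np.abs(x - p))] for p, _ in target]
print(np.allclose(vals, [1.0, 2.0, 0.5], atol=1e-6))  # True
```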

      "In 2D there are no peaks but concentric rings of high gene product concentration centered around the spike, while in 3D there are concentric spherical shells." (Main text l. 447ff).

      This result pertains specifically to pattern transformations arising from spike initial patterns. As defined in the text, spike initial patterns are radially symmetric. Since diffusion preserves radial symmetry, pattern transformations from spike initial patterns in two or three dimensions reduce to effectively one-dimensional transformations along each radial direction. In this framework, each pair of concentration peaks symmetric with respect to the spike in one dimension corresponds to a ring surrounding the spike in two dimensions, and each ring in two dimensions becomes a hollow spherical shell around the spike in three dimensions.
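This reduction can be sketched numerically (our own illustrative code, with a hypothetical radial profile): lifting a 1D radial profile g(r) to two dimensions turns each 1D peak into a concentric ring around the spike:

```python
import numpy as np

def radialize(profile, r_grid, n=201):
    """Lift a 1D radial profile g(r) to a 2D field g(|x - x0|), so each
    1D peak at radius r* becomes a ring of radius r* around the spike."""
    half = (n - 1) // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y)
    return np.interp(r, r_grid, profile)

# Hypothetical radial profile with a single peak at r = 30:
r_grid = np.linspace(0.0, 200.0, 2001)
profile = np.exp(-((r_grid - 30.0) / 5.0) ** 2)
field = radialize(profile, r_grid)

# A cross-section through the spike (grid index 100) shows the peak on
# both sides: the ring of radius 30 centred on the spike.
row = field[100]
print(set(np.where(row > 0.99)[0]) == {70, 130})  # True
```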

      We agree that including a brief section in the Supplementary Information to clarify these subtleties would be helpful for readers to better understand the generalization of certain patterns to higher dimensions.

      (4) The study identifies one-signal networks and examines how combinations of these structures can give rise to minimal pattern-forming subnetworks. However, the analysis of the combinations of these minimal pattern-forming subnetworks remains relatively brief, and the manuscript does not explore how the results might change if the subnetworks were combined in upstream and downstream configurations. In our view, it is not evident that all possible gene regulatory networks can be fully characterized by these categories, nor that the resulting patterns can be reliably predicted. Rather, the approach appears more suited to identifying which known subnetworks are present within a larger network, without necessarily capturing the full dynamics of more complex configurations.

      We acknowledge that our explanation regarding the combination of sub-networks was relatively brief, and we intend to address this in a revised version. Our argument that combining sub-networks does not produce qualitatively new types of pattern transformations -beyond those already described- is based on the dispersion relation. Although this relation was only detailed in the Supplementary Information, it is central to our argument and will therefore be moved to the main text. Below, we provide an outline of this argument:

      Our study identifies two distinct behaviors of the principal branch of the dispersion relation at large wavenumbers. Based on this, gene networks capable of pattern formation can be classified into two categories: networks of the first kind, where the real part of the principal branch diverges to infinity as the wavenumber increases; and networks of the second kind, where the real part of the principal branch converges to a positive finite value for large wavenumbers. Naturally, this argument applies to any gene network irrespective of which, or how many, sub-networks are used to build it.

      Any gene regulatory network capable of pattern formation falls into one of these two categories. We identified that networks of the first kind contain at least one Turing sub-network, whereas networks of the second kind include either an H sub-network or a noise-amplifying sub-network. In this way, the primary objective of our study -namely, achieving a topological classification of gene regulatory networks capable of pattern formation- is fulfilled. It is important to note that while the dispersion relation provides broad information about the possible resulting patterns a gene network topology can produce (e.g., periodic versus noisy), it does not specify the exact patterns that emerge for each particular set of parameter values.

      Finally, regarding the shape of the resulting patterns, Figure S10 in the Supplementary Information exemplifies the notion that the behavior of combined networks can be understood as a combination of the individual behaviors of each constituent sub-network (note that the contribution of each type of sub-network to the resulting pattern is readily distinguishable). Consequently, we focus our detailed analysis on the patterning properties of the fundamental classes.

      (6) The manuscript lacks a clear and detailed explanation of the underlying model and its assumptions. In particular, it is not well-defined what constitutes a "cell" in the context of the model, nor is it justified why spatial features of cells -such as their size or boundaries- can be neglected. Furthermore, the concept of the extracellular space in the one-dimensional model remains ambiguous, making it unclear which gene products are assumed to diffuse.

      The size of cells is ignored in our model because we assume that they are small enough with respect to the total size of the domain that the space-continuous reaction-diffusion equation (Equation (1) in the main text) holds. Conceptually, one can understand the cells in our model as the pieces of an even partition of the domain into small subdomains surrounding each position x. This is in any case the standard procedure in most models of pattern formation by reaction-diffusion in embryonic development.

      For extracellular signals, we assume that g(t, x) corresponds to the concentration of the signal in the extracellular space surrounding the cell located at position x. The extracellular space is any fluid medium for which Fick's laws apply and, therefore, for which the Fickian diffusion term in Equation (1) is valid.

      For intracellular gene products, we assume that g(t, x) corresponds to the concentration of such a gene product within the cell at position x (if the gene product at hand is a transcription factor, for example), or on its surface (if it is a membrane-bound receptor). When collapsed into the continuous equations there is no such difference between being strictly within the cell or on its boundary. The only important fact is that these gene products cannot diffuse.

      Regarding cell boundaries, let us consider an extracellular signal s that regulates a transcription factor i within cells (in our model, i is an intracellular gene product). Such regulation shall be mediated by a membrane-bound receptor, which corresponds to intracellular gene product j. In terms of the gene regulatory network, this is the chain s→j→i. The cell boundary effects mentioned by the reviewer should be encapsulated in the specific functional form of the regulation function f(g), but they have no effect on the actual topology of the network. Consequently, they are beyond the scope of this study: as we mentioned before, considering different non-linear terms for f(g) will affect the parameter range for which a gene network is capable of producing non-trivial pattern transformations, but not its overall ability to produce non-trivial pattern transformations (i.e., the existence of at least one choice of model parameters for which such transformations take place).

      Finally, we would like to once again express our sincere gratitude to all reviewers for their insightful and constructive feedback. We are confident that the thorough peer review process will significantly enhance both the clarity and depth of our work. We greatly value the detailed comments provided and will carefully incorporate them in the preparation of a revised manuscript, which we intend to submit in the coming months.

    1. Author Response

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Given knowledge of the amino acid sequence and of some version of the 3D structure of two monomers that are expected to form a complex, the authors investigate whether it is possible to accurately predict which residues will be in contact in the 3D structure of the expected complex. To this effect, they train a deep learning model that takes as inputs the geometric structures of the individual monomers, per-residue features (PSSMs) extracted from MSAs for each monomer, and rich representations of the amino acid sequences computed with the pre-trained protein language models ESM-1b, MSA Transformer, and ESM-IF. Predicting inter-protein contacts in complexes is an important problem. Multimer variants of AlphaFold, such as AlphaFold-Multimer, are the current state of the art for full protein complex structure prediction, and if the three-dimensional structure of a complex can be accurately predicted then the inter-protein contacts can also be accurately determined. By contrast, the method presented here seeks state-of-the-art performance among models that have been trained end-to-end for inter-protein contact prediction.

      Strengths:

      The paper is carefully written and the method is very well detailed. The model works both for homodimers and heterodimers. The ablation studies convincingly demonstrate that the chosen model architecture is appropriate for the task. Various comparisons suggest that PLMGraph-Inter performs substantially better, given the same input than DeepHomo, GLINTER, CDPred, DeepHomo2, and DRN-1D2D_Inter. As a byproduct of the analysis, a potentially useful heuristic criterion for acceptable contact prediction quality is found by the authors: namely, to have at least 50% precision in the prediction of the top 50 contacts.

      We thank the reviewer for recognizing the strengths of our work!
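For readers unfamiliar with the metric mentioned above, the top-k contact precision can be computed as follows (a generic sketch on synthetic data; this is not PLMGraph-Inter's evaluation code, and all arrays are made up for illustration):

```python
import numpy as np

def topk_precision(pred_scores, true_contacts, k=50):
    """Fraction of the k highest-scoring residue pairs that are true
    inter-protein contacts (the 'top-50 precision' criterion)."""
    flat = pred_scores.ravel()
    top = np.argsort(flat)[::-1][:k]          # indices of the k best scores
    return true_contacts.ravel()[top].mean()

# Toy example: a 100x80 inter-chain score map (hypothetical data).
rng = np.random.default_rng(1)
true_contacts = rng.random((100, 80)) < 0.05                # ~5% contacts
pred_scores = true_contacts + 0.5 * rng.random((100, 80))   # noisy scores

p = topk_precision(pred_scores, true_contacts, k=50)
print(p >= 0.5)  # True: meets the 'acceptable prediction' heuristic
```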

      Weaknesses:

      My biggest issue with this work is the evaluations made using bound monomer structures as inputs, coming from the very complexes to be predicted. Conformational changes in protein-protein association are the key element of the binding mechanism and are challenging to predict. While the GLINTER paper (Xie & Xu, 2022) is guilty of the same sin, the authors of CDPred (Guo et al., 2022) correctly only report test results obtained using predicted unbound tertiary structures as inputs to their model. Test results using experimental monomer structures in bound states can hide important limitations in the model, and thus say very little about the realistic use cases in which only the unbound structures (experimental or predicted) are available. I therefore strongly suggest reducing the importance given to the results obtained using bound structures and emphasizing instead those obtained using predicted monomer structures as inputs.

      We thank the reviewer for the suggestion! We evaluated PLMGraph-Inter with the predicted monomers and analyzed the results in detail (see the “Impact of the monomeric structure quality on contact prediction” section and Figure 3). To mimic realistic cases, we even deliberately reduced the performance of AF2 by using reduced MSAs (see the 2nd paragraph of the “Impact of the monomeric structure quality on contact prediction” section). We left some of the results in the supplementary material of the current manuscript (Table S2). We will move these results to the main text in the revision to emphasize the performance of PLMGraph-Inter with predicted monomers.

      In particular, the most relevant comparison with AlphaFold-Multimer (AFM) is given in Figure S2, not Figure 6. Unfortunately, it substantially shrinks the proportion of structures for which AFM fails while PLMGraph-Inter performs decently. Still, it would be interesting to investigate why this occurs. One possibility would be that the predicted monomer structures are of bad quality there, and PLMGraph-Inter may be able to rely on a signal from its language model features instead. Finally, AFM multimer confidence values ("iptm + ptm") should be provided, especially in the cases in which AFM struggles.

      We thank the reviewer for the suggestion! Yes, the performance of PLMGraph-Inter drops when the predicted monomers are used in the prediction. However, it is difficult to say which comparison is fairer, Figure 6 or Figure S2, since AFM also searched for monomer templates in its predictions (see the third paragraph of Section 7.1 “Data” in the Supplementary Information of the AlphaFold-Multimer preprint: https://www.biorxiv.org/content/10.1101/2021.10.04.463034v2.full). When we checked our AFM runs, we found that 99% of the targets in our study (across all four datasets: HomoPDB, HeteroPDB, DHTest and DB5.5) employed at least 20 templates in their predictions, and 87.8% of the targets employed the native templates. We will provide the AFM confidence values of these predictions in the revision.

      Besides, in cases where any experimental structures - bound or unbound - are available and given to PLMGraph-Inter as inputs, they should also be provided to AlphaFold-Multimer (AFM) as templates. Withholding these from AFM only makes the comparison artificially unfair. Hence, a new test should be run using AFM templates, and a new version of Figure 6 should be produced. Additionally, AFM's mean precision, at least for top-50 contact prediction, should be reported so it can be compared with PLMGraph-Inter's.

      We thank the reviewer for the suggestion! We would like to note that AFM also searched for monomer templates in its predictions (see the third paragraph of Section 7.1 “Data” in the Supplementary Information of the AlphaFold-Multimer preprint: https://www.biorxiv.org/content/10.1101/2021.10.04.463034v2.full). When we checked our AFM runs, we found that 99% of the targets in our study (across all four datasets: HomoPDB, HeteroPDB, DHTest and DB5.5) employed at least 20 templates in their predictions, and 87.8% of the targets employed the native templates.

      It's a shame that many of the structures used in the comparison with AFM are actually in the AFM v2 training set. If there are any outside the AFM v2 training set and, ideally, not sequence- or structure-homologous to anything in the AFM v2 training set, they should be discussed and reported on separately. In addition, why not test on structures from the "Benchmark 2" or "Recent-PDB-Multimers" datasets used in the AFM paper?

      We thank the reviewer for the suggestion! The biggest challenge in objectively evaluating AFM is that, as far as we know, AFM has not released the PDB IDs of its training set or of the “Recent-PDB-Multimers” dataset. “Benchmark 2” includes only 17 heterodimeric proteins, and that number would decrease further after removing targets redundant with our training set. We think it is difficult to draw conclusions from such a small number of targets. In the revision, we will analyze the performance of AFM on targets released after the date cutoff of the AFM training set, although even then we cannot totally remove the redundancy between the training and test sets of AFM.

      It is also worth noting that the AFM v2 weights have now been outdated for a while, and better v3 weights now exist, with a training cutoff of 2021-09-30.

      We thank the reviewer for reminding us of the new version of AFM. The only difference between AFM v3 and v2 is the cutoff date of the training set. Our test set would have more overlap with the training set of AFM v3, which is one reason we think AFM v2 is more appropriate for the comparison.

      Another weakness in the evaluation framework: because PLMGraph-Inter uses structural inputs, it is not sufficient to make its test set non-redundant in sequence to its training set. It must also be non-redundant in structure. The Benchmark 2 dataset mentioned above is an example of a test set constructed by removing structures with homologous templates in the AF2 training set. Something similar should be done here.

      We agree with the reviewer that testing whether the model can keep its performance on targets with no templates (i.e. non-redundant in structure) is important. We will perform the analysis in the revision.

      Finally, the performance of DRN-1D2D for top-50 precision reported in Table 1 suggests to me that, in an ablation study, language model features alone would yield better performance than geometric features alone. So, I am puzzled why model "a" in the ablation is a "geometry-only" model and not a "LM-only" one.

      Using the protein geometric graph to integrate multiple protein language models is the main idea of PLMGraph-Inter. Compared with our previous work (DRN-1D2D_Inter), we consider the construction of the geometric graph to be one major contribution of this work. To emphasize the efficacy of this geometric graph, we chose the “geometry-only” model as the base model. We will further clarify this in the revision.

      Reviewer #2 (Public Review):

      This work introduces PLMGraph-Inter, a new deep-learning approach for predicting inter-protein contacts, which is crucial for understanding protein-protein interactions. Despite advancements in this field, especially those driven by AlphaFold, prediction accuracy and efficiency (in terms of computational cost) still remain areas for improvement. PLMGraph-Inter utilizes invariant geometric graphs to integrate the features from multiple protein language models into the structural information of each subunit. When compared against other inter-protein contact prediction methods, PLMGraph-Inter shows better performance, which indicates that utilizing both sequence embeddings and structural embeddings is important for achieving high-accuracy predictions with relatively smaller computational costs for model training.

      The conclusions of this paper are mostly well supported by the data, but test examples should be revisited with a stricter sequence identity cutoff to avoid any potential information leakage from the training data. The main figures should also be improved to make them easier to understand.

      We thank the reviewer for recognizing the significance of our work! We will revise the manuscript carefully to address the reviewer’s concerns.

      1. The sequence identity cutoff to remove redundancies between training and test set was set to 40%, which is a bit high to remove test examples having homology to training examples. For example, CDPred uses a sequence identity cutoff of 30% to strictly remove redundancies between training and test set examples. To make their results more solid, the authors should have curated test examples with lower sequence identity cutoffs, or have provided the performance changes against sequence identities to the closest training examples.

      We thank the reviewer for the valuable suggestion! Using different thresholds to reduce the redundancy between the test and training sets is a very good suggestion, and we will perform this analysis in the revision. In the current version of the manuscript, 40% sequence identity was used as the cutoff because many previous studies used this cutoff (e.g., the Recent-PDB-Multimers set used in AlphaFold-Multimer (see Section 7.8 “Datasets” in the AlphaFold-Multimer paper), and the D-SCRIPT work: https://www.cell.com/action/showPdf?pii=S2405-4712%2821%2900333-1 (see the “PPI dataset” paragraph in the METHOD DETAILS section of the STAR METHODS)). One reason for using a relatively higher threshold in PPI studies is that PPIs are generally not as conserved as protein monomers.

      We performed a preliminary analysis using different thresholds to remove redundancy when preparing this provisional response letter:

      Author response table 1.

      Table 1. The performance of PLMGraph-Inter on the HomoPDB and HeteroPDB test sets using native structures (AlphaFold2-predicted structures).

      Method:

      To remove redundancy, we clustered the 11,096 sequences from the training set and the test sets (HomoPDB, HeteroPDB) using MMseqs2 with different sequence identity thresholds (40%, 30%, 20%, 10%) (the lowest cutoff supported by CD-HIT is 40%, so we switched to MMseqs2). Each sequence was then uniquely labeled by the cluster (e.g., cluster 0, cluster 1, …) to which it belongs, so that each PPI can be marked with a pair of clusters (e.g., cluster 0-cluster 1). PPIs belonging to the same cluster pair were considered redundant (note: cluster n-cluster m and cluster m-cluster n were considered the same pair). For each PPI in the test set, if the cluster pair it belongs to also contains a PPI from the training set, we removed that PPI from the test set.

      We will perform more detailed analyses in the revised manuscript.
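      The cluster-pair filtering described above can be sketched in a few lines. This is an illustrative reimplementation, not the actual pipeline: the chain names and cluster assignments below are made up, and the cluster labels are assumed to come from a prior MMseqs2 clustering step.

      ```python
      # Hypothetical sketch of the cluster-pair redundancy filter described above.
      # Assumes each chain has already been assigned a cluster id by MMseqs2.

      def ppi_key(cluster_a, cluster_b):
          """An unordered cluster pair: (n, m) and (m, n) map to the same key."""
          return tuple(sorted((cluster_a, cluster_b)))

      def filter_redundant(train_ppis, test_ppis, chain_to_cluster):
          """Drop test PPIs whose cluster pair also occurs in the training set."""
          train_keys = {
              ppi_key(chain_to_cluster[a], chain_to_cluster[b]) for a, b in train_ppis
          }
          return [
              (a, b) for a, b in test_ppis
              if ppi_key(chain_to_cluster[a], chain_to_cluster[b]) not in train_keys
          ]

      # Toy example (illustrative chain ids and clusters)
      chain_to_cluster = {"A1": 0, "A2": 1, "B1": 1, "B2": 0, "C1": 2, "C2": 3}
      train = [("A1", "A2")]               # cluster pair (0, 1)
      test = [("B2", "B1"), ("C1", "C2")]  # (0, 1) is redundant, (2, 3) is kept
      print(filter_redundant(train, test, chain_to_cluster))  # → [('C1', 'C2')]
      ```

      Note that sorting the cluster pair is what makes cluster n-cluster m and cluster m-cluster n collapse to the same key, as described in the method.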

      2. Figures with head-to-head comparison scatter plots are hard to understand as scatter plots because too many different methods are abstracted into a single plot with multiple colors. It would be better to provide individual head-to-head scatter plots as supplementary figures, not in the main figure.

      We thank the reviewer for the suggestion! We will include the individual head-to-head scatter plots as supplementary figures in the revision.

      3. The authors claim that PLMGraph-Inter is complementary to AlphaFold-Multimer as it shows better precision for the cases where AlphaFold-Multimer fails. To strengthen the point, the qualities of predicted complex structures via protein-protein docking with predicted contacts as restraints should have been compared to those of AlphaFold-Multimer structures.

      We thank the reviewer for the suggestion! We will add this comparison in the revision.

      4. It would be interesting to further analyze whether there is a difference in prediction performance depending on the depth of the multiple sequence alignment or the type of complex (antigen-antibody, enzyme-substrate, single-species PPI, multiple-species PPI, etc.).

      We thank the reviewer for the suggestion! We will perform such analysis in the revision.

    Author response:

      eLife Assessment 

      This valuable study investigates how the neural representation of individual finger movements changes during the early period of sequence learning. By combining a new method for extracting features from human magnetoencephalography data and decoding analyses, the authors provide incomplete evidence of an early, swift change in the brain regions correlated with sequence learning, including a set of previously unreported frontal cortical regions. The addition of more control analyses to rule out that head movement artefacts influence the findings, and to further explain the proposal of offline contextualization during short rest periods as the basis for improved performance, would strengthen the manuscript.

      We appreciate the Editorial assessment of our paper’s strengths and novelty. We have implemented additional control analyses showing that neither task-related eye movements nor increasing overlap of finger movements during learning accounts for our finding that contextualized neural representations in a network of bilateral frontoparietal brain regions actively contribute to skill learning. Importantly, we carried out additional analyses showing that contextualization develops predominantly during rest intervals.

      Public Reviews:

      We thank the Reviewers for their comments and suggestions, prompting new analyses and additions that strengthened our report.

      Reviewer #1 (Public review): 

      Summary: 

      This study addresses the issue of rapid skill learning and whether individual sequence elements (here: finger presses) are differentially represented in human MEG data. The authors use a decoding approach to classify individual finger elements and accomplish an accuracy of around 94%. A relevant finding is that the neural representations of individual finger elements dynamically change over the course of learning. This would be highly relevant for any attempts to develop better brain machine interfaces - one now can decode individual elements within a sequence with high precision, but these representations are not static but develop over the course of learning. 

      Strengths: The work follows a large body of work from the same group on the behavioural and neural foundations of sequence learning. The behavioural task is well established and neatly designed to allow for tracking learning and how individual sequence elements contribute. The inclusion of short offline rest periods between learning epochs has been influential because it has revealed that a lot, if not most of the gains in behaviour (ie speed of finger movements) occur in these so-called micro-offline rest periods. The authors use a range of new decoding techniques, and exhaustively interrogate their data in different ways, using different decoding approaches. Regardless of the approach, impressively high decoding accuracies are observed, but when using a hybrid approach that combines the MEG data in different ways, the authors observe decoding accuracies of individual sequence elements from the MEG data of up to 94%. 

      We previously showed that neural replay of MEG activity representing the practiced skill correlated with micro-offline gains during rest intervals of early learning,1 consistent with the recent report that hippocampal ripples during these offline periods predict human motor sequence learning2. However, decoding accuracy in our earlier work1 needed improvement. Here, we report a strategy to improve decoding accuracy that could benefit future MEG studies of neural replay or BCI.

      Weaknesses: 

      There are a few concerns which the authors may well be able to resolve. These are not weaknesses as such, but factors that would be helpful to address as these concern potential contributions to the results that one would like to rule out. Regarding the decoding results shown in Figure 2 etc, a concern is that within individual frequency bands, the highest accuracy seems to be within frequencies that match the rate of keypresses. This is a general concern when relating movement to brain activity, so is not specific to decoding as done here. As far as reported, there was no specific restraint to the arm or shoulder, and even then it is conceivable that small head movements would correlate highly with the vigor of individual finger movements. This concern is supported by the highest contribution in decoding accuracy being in middle frontal regions - midline structures that would be specifically sensitive to movement artefacts and don't seem to come to mind as key structures for very simple sequential keypress tasks such as this - and the overall pattern is remarkably symmetrical (despite being a unimanual finger task) and spatially broad. This issue may well be matching the time course of learning, as the vigor and speed of finger presses will also influence the degree to which the arm/shoulder and head move. This is not to say that useful information is contained within either of the frequencies or broadband data. But it raises the question of whether a lot is dominated by movement "artefacts" and one may get a more specific answer if removing any such contributions. 

      Reviewer #1 expresses concern that the combination of the low-frequency narrow-band decoder results, and the bilateral middle frontal regions displaying the highest average intra-parcel decoding performance across subjects is suggestive that the decoding results could be driven by head movement or other artefacts.

      Head movement artefacts are highly unlikely to contribute meaningfully to our results for the following reasons. First, in addition to ICA denoising, all “recordings were visually inspected and marked to denoise segments containing other large amplitude artifacts due to movements” (see Methods). Second, the response pad was positioned in a manner that minimized wrist, arm or more proximal body movements during the task. Third, while head position was not monitored online for this study, the head was restrained using an inflatable air bladder, and head position was assessed at the beginning and at the end of each recording. Head movement did not exceed 5mm between the beginning and end of each scan for all participants included in the study. Fourth, we agree that despite the steps taken above, it is possible that minor head movements could still contribute to some remaining variance in the MEG data in our study. The Reviewer states a concern that “it is conceivable that small head movements would correlate highly with the vigor of individual finger movements”. However, in order for any such correlations to meaningfully impact decoding performance, such head movements would need to: (A) be consistent and pervasive throughout the recording (which might not be the case if the head movements were related to movement vigor and vigor changed over time); and (B) systematically vary between different finger movements, and also between the same finger movement performed at different sequence locations (see 5-class decoding performance in Figure 4B). The possibility of any head movement artefacts meeting all these conditions is extremely unlikely.

      Given the task design, a much more likely confound in our estimation would be the contribution of eye movement artefacts to the decoder performance (an issue appropriately raised by Reviewer #3 in the comments below). Remember from Figure 1A in the manuscript that an asterisk marks the current position in the sequence and is updated at each keypress. Since participants make very few performance errors, the position of the asterisk on the display is highly correlated with the keypress being made in the sequence. Thus, it is possible that if participants are attending to the visual feedback provided on the display, they may move their eyes in a way that is systematically related to the task.  Since we did record eye movements simultaneously with the MEG recordings (EyeLink 1000 Plus; Fs = 600 Hz), we were able to perform a control analysis to address this question. For each keypress event during trials in which no errors occurred (which is the same time-point that the asterisk position is updated), we extracted three features related to eye movements: 1) the gaze position at the time of asterisk position update (or keyDown event), 2) the gaze position 150ms later, and 3) the peak velocity of the eye movement between the two positions. We then constructed a classifier from these features with the aim of predicting the location of the asterisk (ordinal positions 1-5) on the display. As shown in the confusion matrix below (Author response image 1), the classifier failed to perform above chance levels (Overall cross-validated accuracy = 0.21817):

      Author response image 1.

      Confusion matrix showing that three eye movement features fail to predict asterisk position on the task display above chance levels (Fold 1 test accuracy = 0.21718; Fold 2 test accuracy = 0.22023; Fold 3 test accuracy = 0.21859; Fold 4 test accuracy = 0.22113; Fold 5 test accuracy = 0.21373; Overall cross-validated accuracy = 0.2181). Since the ordinal position of the asterisk on the display is highly correlated with the ordinal position of individual keypresses in the sequence, this analysis provides strong evidence that keypress decoding performance from MEG features is not explained by systematic relationships between finger movement behavior and eye movements (i.e. – behavioral artefacts).
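      The logic of this kind of chance-level control can be sketched as follows. This is a hedged illustration, not the authors' analysis code: it uses a simple nearest-centroid classifier with 5-fold cross-validation on synthetic, uninformative features, so cross-validated accuracy should land near the 5-class chance level of 0.2, mirroring the null result reported above.

      ```python
      # Illustrative sketch of a chance-level control: classify 5 ordinal
      # positions from 3 eye-movement-like features with k-fold CV.
      # Features here are random, so accuracy should hover near chance (0.2).
      import random

      random.seed(0)

      def nearest_centroid_accuracy(X, y, n_folds=5, n_classes=5):
          """Cross-validated accuracy of a nearest-centroid classifier."""
          idx = list(range(len(X)))
          random.shuffle(idx)
          folds = [idx[i::n_folds] for i in range(n_folds)]
          correct = 0
          for fold in folds:
              held_out = set(fold)
              train = [i for i in idx if i not in held_out]
              # per-class centroid of the features on the training folds
              cent = {}
              for c in range(n_classes):
                  rows = [X[i] for i in train if y[i] == c]
                  cent[c] = [sum(col) / len(rows) for col in zip(*rows)]
              for i in fold:
                  pred = min(
                      cent,
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(X[i], cent[c])),
                  )
                  correct += pred == y[i]
          return correct / len(X)

      # Synthetic data: 500 keypress events, 3 uninformative features each
      X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(500)]
      y = [random.randrange(5) for _ in range(500)]
      acc = nearest_centroid_accuracy(X, y)
      print(round(acc, 3))  # near the chance level of 0.2
      ```

      With informative features (e.g., gaze systematically tracking the asterisk), the same procedure would yield accuracy well above 0.2; the point of the control is that it did not.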

      In fact, inspection of the eye position data revealed that a majority of participants on most trials displayed random walk gaze patterns around a center fixation point, indicating that participants did not attend to the asterisk position on the display. This is consistent with intrinsic generation of the action sequence, and congruent with the fact that the display does not provide explicit feedback related to performance. A similar real-world example would be manually inputting a long password into a secure online application. In this case, one intrinsically generates the sequence from memory and receives similar feedback about the password sequence position (also provided as asterisks), which is typically ignored by the user. The minimal participant engagement with the visual task display observed in this study highlights another important point – that the behavior in explicit sequence learning motor tasks is highly generative in nature rather than reactive to stimulus cues as in the serial reaction time task (SRTT).  This is a crucial difference that must be carefully considered when designing investigations and comparing findings across studies.

      We observed that keypress decoding accuracy was predominantly driven by contralateral primary sensorimotor cortex in the initial practice trials before transitioning to bilateral frontoparietal regions by trials 11 or 12 as performance gains plateaued. The contribution of contralateral primary sensorimotor areas to early skill learning has been extensively reported in humans and non-human animals1,3-5. Similarly, the increased involvement of bilateral frontal and parietal regions in decoding during early skill learning in the non-dominant hand is well known. Enhanced bilateral activation in both frontal and parietal cortex during skill learning has been extensively reported6-11, and appears to be even more prominent during early fine motor skill learning in the non-dominant hand12,13. The frontal regions identified in these studies are known to play crucial roles in executive control14, motor planning15, and working memory6,8,16-18 processes, while the same parietal regions are known to integrate multimodal sensory feedback and support visuomotor transformations6,8,16-18, in addition to working memory19. Thus, it is not surprising that these regions increasingly contribute to decoding as subjects internalize the sequential task. We now include a statement reflecting these considerations in the revised Discussion.

      A somewhat related point is this: when combining voxel and parcel space, a concern is whether a degree of circularity may have contributed to the improved accuracy of the combined data, because it seems to use the same MEG signals twice - the voxels most contributing are also those contributing most to a parcel being identified as relevant, as parcels reflect the average of voxels within a boundary. In this context, I struggled to understand the explanation given, ie that the improved accuracy of the hybrid model may be due to "lower spatially resolved whole-brain and higher spatially resolved regional activity patterns".

      We strongly disagree with the Reviewer’s assertion that the construction of the hybrid-space decoder is circular. To clarify, the base feature set for the hybrid-space decoder constructed for all participants includes whole-brain spatial patterns of MEG source activity averaged within parcels. As stated in the manuscript, these 148 inter-parcel features reflect “lower spatially resolved whole-brain activity patterns” or global brain dynamics. We then independently test how well spatial patterns of MEG source activity for all voxels distributed within individual parcels can decode keypress actions. Again, the testing of these intra-parcel spatial patterns, intended to capture “higher spatially resolved regional brain activity patterns”, is performed independently for each parcel and independently of the weighting of individual inter-parcel features. These intra-parcel features could, for example, provide additional information about muscle activation patterns or the task environment. These approximately 1150 intra-parcel voxels (on average, with the total number varying between subjects) are then combined with the 148 inter-parcel features to construct the final hybrid-space decoder. In fact, this varied spatial filter approach shares some similarities with the construction of convolutional neural networks (CNNs) used to perform object recognition in image classification applications. One could also view this hybrid-space decoding approach as a spatial analogue to common time-frequency based analyses such as theta-gamma phase amplitude coupling (PAC), which combine information from two or more narrow-band spectral features derived from the same time-series data.
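      The feature construction described in this response can be sketched schematically. This is an assumed, toy-scale illustration (the parcel names, voxel counts, and values below are invented), not the authors' pipeline: whole-brain parcel-averaged features are concatenated with raw voxel-level features from selected parcels.

      ```python
      # Illustrative sketch of hybrid-space feature construction:
      # low-resolution parcel averages + high-resolution voxels from top parcels.

      def hybrid_features(voxel_data, parcel_voxels, top_parcels):
          """voxel_data: {voxel_id: value}; parcel_voxels: {parcel_id: [voxel_ids]}."""
          # 1) one averaged feature per parcel (whole brain, low spatial resolution)
          parcel_feats = [
              sum(voxel_data[v] for v in vs) / len(vs)
              for p, vs in sorted(parcel_voxels.items())
          ]
          # 2) raw voxel features from the top-ranked parcels (local, high resolution)
          voxel_feats = [
              voxel_data[v]
              for p in top_parcels
              for v in parcel_voxels[p]
          ]
          return parcel_feats + voxel_feats

      # Toy example: 2 parcels, 5 voxels total (invented labels and values)
      parcel_voxels = {"L_SM": [0, 1], "R_MFG": [2, 3, 4]}
      voxel_data = {0: 1.0, 1: 3.0, 2: 0.5, 3: 1.5, 4: 1.0}
      feats = hybrid_features(voxel_data, parcel_voxels, top_parcels=["R_MFG"])
      print(feats)  # → [2.0, 1.0, 0.5, 1.5, 1.0]
      ```

      Note that the parcel average and its constituent voxels can both appear in the feature vector; whether the average carries information beyond the voxels it summarizes is exactly what the HybridAlt comparison below the next response tests empirically.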

      We directly tested this hypothesis – that spatially overlapping intra- and inter-parcel features portray different information – by constructing an alternative hybrid-space decoder (HybridAlt) that excluded average inter-parcel features which spatially overlapped with intra-parcel voxel features, and comparing the performance to the decoder used in the manuscript (HybridOrig). The prediction was that if the overlapping parcel contained similar information to the more spatially resolved voxel patterns, then removing the parcel features (n=8) from the decoding analysis should not impact performance. In fact, despite making up less than 1% of the overall input feature space, removing those parcels resulted in a significant drop in overall performance greater than 2% (78.15% ± SD 7.03% for HybridOrig vs. 75.49% ± SD 7.17% for HybridAlt; Wilcoxon signed rank test, z = 3.7410, p = 1.8326e-04) (Author response image 2).

      Author response image 2.

      Comparison of decoding performances with two different hybrid approaches. HybridAlt: Intra-parcel voxel-space features of top ranked parcels and inter-parcel features of remaining parcels. HybridOrig:  Voxel-space features of top ranked parcels and whole-brain parcel-space features (i.e. – the version used in the manuscript). Dots represent decoding accuracy for individual subjects. Dashed lines indicate the trend in performance change across participants. Note, that HybridOrig (the approach used in our manuscript) significantly outperforms the HybridAlt approach, indicating that the excluded parcel features provide unique information compared to the spatially overlapping intra-parcel voxel patterns.
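      For concreteness, the paired statistical comparison reported above can be sketched as follows. This is a hedged, self-contained illustration using a normal-approximation Wilcoxon signed-rank test (no tie correction) on synthetic per-subject accuracies; it is not the authors' analysis code, which presumably used a standard statistics package.

      ```python
      # Sketch of a paired Wilcoxon signed-rank comparison of two decoders'
      # per-subject accuracies. Accuracies below are synthetic placeholders.
      import math

      def wilcoxon_signed_rank(a, b):
          """Normal-approximation Wilcoxon signed-rank z for paired samples."""
          diffs = [x - y for x, y in zip(a, b) if x != y]
          n = len(diffs)
          # rank the absolute differences (average ranks for exact ties)
          order = sorted(range(n), key=lambda i: abs(diffs[i]))
          ranks = [0.0] * n
          i = 0
          while i < n:
              j = i
              while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
                  j += 1
              avg = (i + j) / 2 + 1
              for k in range(i, j + 1):
                  ranks[order[k]] = avg
              i = j + 1
          w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
          mean = n * (n + 1) / 4
          sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
          return (w_plus - mean) / sd

      # Synthetic per-subject accuracies: "Orig" uniformly 2% above "Alt"
      orig_acc = [0.78 + 0.01 * i for i in range(10)]
      alt_acc = [a - 0.02 for a in orig_acc]
      z = wilcoxon_signed_rank(orig_acc, alt_acc)
      print(round(z, 2))  # → 2.8
      ```

      A consistent sign of the paired differences across subjects, as in the HybridOrig vs. HybridAlt comparison, is what drives the large z value.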

      Firstly, there will be a relatively high degree of spatial contiguity among voxels because of the nature of the signal measured, i.e. nearby individual voxels are unlikely to be independent. Secondly, the voxel data gives a somewhat misleading sense of precision; the inversion can be set up to give an estimate for each voxel, but there will not just be dependence among adjacent voxels, but also substantial variation in the sensitivity and confidence with which activity can be projected to different parts of the brain. Midline and deeper structures come to mind, where the inversion will be more problematic than for regions along the dorsal convexity of the brain, and a concern is that in those midline structures, the highest decoding accuracy is seen. 

      We definitely agree with the Reviewer that some inter-parcel features representing neighboring (or spatially contiguous) voxels are likely to be correlated. This has been well documented in the MEG literature20,21 and is a particularly important confound to address in functional or effective connectivity analyses (not performed in the present study). In the present analysis, any correlation between adjacent voxels presents a multi-collinearity problem, which effectively reduces the dimensionality of the input feature space. However, as long as there are multiple groups of correlated voxels within each parcel (i.e. - the effective dimensionality is still greater than 1), the intra-parcel spatial patterns could still meaningfully contribute to the decoder performance. Two specific results support this assertion.

      First, we obtained higher decoding accuracy with voxel-space features [74.51% (± SD 7.34%)] than with parcel-space features [68.77% (± SD 7.6%)] (Figure 3B), indicating that individual voxels carry more information for decoding the keypresses than the averaged, parcel-space features. Second, individual voxels within a parcel showed varying feature importance scores in decoding keypresses (Author response image 3). This finding supports the Reviewer’s assertion that neighboring voxels express similar information, but also shows that the correlated voxels form mini subclusters that are much smaller spatially than the parcel they reside in.

      Author response image 3.

      Feature importance scores of individual voxels in decoding keypresses: MRMR was used to rank the individual voxel-space features for decoding keypresses, and the min-max normalized MRMR score was mapped onto a structural brain surface. Note that individual voxels within a parcel made different contributions to decoding.

       

      Some of these concerns could be addressed by recording head movement (with enough precision) to regress out these contributions. The authors state that head movement was monitored with 3 fiducials, and their time courses ought to provide a way to deal with this issue. The ICA procedure may not have sufficiently dealt with removing movement-related problems, but one could eg relate individual components that were identified to the keypresses as another means for checking. An alternative could be to focus on frequency ranges above the movement frequencies. The accuracy for those still seems impressive and may provide a slightly more biologically plausible assessment. 

      We have already addressed the issue of movement-related artefacts in the first response above. With respect to a focus on frequency ranges above movement frequencies, the Reviewer states the “accuracy for those still seems impressive and may provide a slightly more biologically plausible assessment”. First, it is important to note that cortical delta-band oscillations measured with local field potentials (LFPs) in macaques are known to contain important information related to end-effector kinematics22,23, muscle activation patterns24, and temporal sequencing25 during skilled reaching and grasping actions. Thus, there is a substantial body of evidence that low-frequency neural oscillatory activity in this range contains important information about the skill learning behavior investigated in the present study. Second, our own data show (as the Reviewer also points out) that significant information related to the skill learning behavior is also present in higher frequency bands (see Figure 2A and Figure 3—figure supplement 1). As we pointed out in our earlier response to questions about the hybrid-space decoder architecture (see above), it is likely that different, yet complementary, information is encoded across different temporal frequencies (just as it is encoded across different spatial frequencies). Again, this interpretation is supported by our data, as the highest performing classifiers in all cases (when holding all parameters constant) were always constructed from broadband input MEG data (Figure 2A and Figure 3—figure supplement 1).

      One question concerns the interpretation of the results shown in Figure 4. They imply that during the course of learning, entirely different brain networks underpin the behaviour. Not only that, but they also include regions that would seem rather unexpected to be key nodes for learning and expressing relatively simple finger sequences, such as here. What then is the biological plausibility of these results? The authors seem to circumnavigate this issue by moving into a distance metric that captures the (neural network) changes over the course of learning, but the discussion seems detached from which regions are actually involved; or they offer a rather broad discussion of the anatomical regions identified here, eg in the context of LFOs, where they merely refer to "frontoparietal regions". 

      The Reviewer notes the shift in brain networks driving keypress decoding performance between trials 1, 11 and 36 as shown in Figure 4A. The Reviewer questions whether these substantial shifts in brain network states underpinning the skill are biologically plausible, as well as the likelihood that bilateral superior and middle frontal and parietal cortex are important nodes within these networks.

      First, previous fMRI work in humans performing a similar sequence learning task showed that flexibility in brain network composition (i.e. – changes in brain region members displaying coordinated activity) is up-regulated in novel learning environments and explains differences in learning rates across individuals26.  This work supports our interpretation of the present study data, that brain networks engaged in sequential motor skills rapidly reconfigure during early learning.

      Second, frontoparietal network activity is known to support motor memory encoding during early learning27,28. For example, reactivation events in the posterior parietal29 and medial prefrontal30,31 cortex (MPFC) have been temporally linked to hippocampal replay, and are posited to support memory consolidation across several memory domains32, including motor sequence learning1,33,34.  Further, synchronized interactions between MPFC and hippocampus are more prominent during early learning as opposed to later stages27,35,36, perhaps reflecting “redistribution of hippocampal memories to MPFC” 27.  MPFC contributes to very early memory formation by learning associations between contexts, locations, events and adaptive responses during rapid learning37. Consistent with this, coupling between hippocampus and MPFC has been shown during, and importantly immediately following (i.e. – during rest), initial memory encoding38,39.  Importantly, MPFC activity during initial memory encoding predicts subsequent recall40. Thus, the spatial map required to encode a motor sequence memory may be “built under the supervision of the prefrontal cortex” 28, also engaged in the development of an abstract representation of the sequence41.  In more abstract terms, the prefrontal, premotor and parietal cortices support novice performance “by deploying attentional and control processes” required during early learning42-44. The dorsolateral prefrontal cortex (DLPFC) specifically is thought to engage in goal selection and sequence monitoring during early skill practice45, all consistent with the schema model of declarative memory in which prefrontal cortices play an important role in encoding46,47.  Thus, several prefrontal and frontoparietal regions contributing to long-term learning48 are also engaged in early stages of encoding. Altogether, there is strong biological support for the contribution of bilateral prefrontal and frontoparietal regions to decoding during early skill learning.
We now address this issue in the revised manuscript.

      If I understand correctly, the offline neural representation analysis is in essence the comparison of the last keypress vs the first keypress of the next sequence. In that sense, the activity during offline rest periods is actually not considered. This makes the nomenclature somewhat confusing. While it matches the behavioural analysis, having only key presses one can't do it in any other way, but here the authors actually do have recordings of brain activity during offline rest. So at the very least calling it offline neural representation is misleading to this reviewer because what is compared is activity during the last and during the next keypress, not activity during offline periods. But it also seems a missed opportunity - the authors argue that most of the relevant learning occurs during offline rest periods, yet there is no attempt to actually test whether activity during this period can be useful for the questions at hand here. 

      We agree with the Reviewer that our previous “offline neural representation” nomenclature could be misinterpreted. In the revised manuscript we refer to this difference as the “offline neural representational change”. Please note that our previous work did link offline neural activity (i.e. – 16-22 Hz beta power and neural replay density during inter-practice rest periods) to observed micro-offline gains49.

      Reviewer #2 (Public review): 

      Summary 

      Dash et al. asked whether and how the neural representation of individual finger movements is "contextualized" within a trained sequence during the very early period of sequential skill learning by using decoding of MEG signal. Specifically, they assessed whether/how the same finger presses (pressing index finger) embedded in the different ordinal positions of a practiced sequence (4-1-3-2-4; here, the numbers 1 through 4 correspond to the little through the index fingers of the non-dominant left hand) change their representation (MEG feature). They did this by computing either the decoding accuracy of the index finger at the ordinal positions 1 vs. 5 (index_OP1 vs index_OP5) or pattern distance between index_OP1 vs. index_OP5 at each training trial and found that both the decoding accuracy and the pattern distance progressively increase over the course of learning trials. More interestingly, they also computed the pattern distance for index_OP5 for the last execution of a practice trial vs. index_OP1 for the first execution in the next practice trial (i.e., across the rest period). This "off-line" distance was significantly larger than the "on-line" distance, which was computed within practice trials and predicted micro-offline skill gain. Based on these results, the authors conclude that the differentiation of representation for the identical movement embedded in different positions of a sequential skill ("contextualization") primarily occurs during early skill learning, especially during rest, consistent with the recent theory of the "micro-offline learning" proposed by the authors' group. I think this is an important and timely topic for the field of motor learning and beyond.

      Strengths

      The specific strengths of the current work are as follows. First, the use of temporally rich neural information (MEG signal) has a large advantage over previous studies testing sequential representations using fMRI. This allowed the authors to examine the earliest period (= the first few minutes of training) of skill learning with finer temporal resolution. Second, through the optimization of MEG feature extraction, the current study achieved extremely high decoding accuracy (approx. 94%) compared to previous works. As claimed by the authors, this is one of the strengths of the paper (but see my comments). Third, although some potential refinement might be needed, comparing "online" and "offline" pattern distance is a neat idea. 

      Weaknesses 

      Along with the strengths I raised above, the paper has some weaknesses. First, the pursuit of high decoding accuracy, especially the choice of time points and window length (i.e., 200 msec window starting from 0 msec from key press onset), casts a shadow on the interpretation of the main result. Currently, it is unclear whether the decoding results simply reflect behavioral change or true underlying neural change. As shown in the behavioral data, the key press speed reached 3~4 presses per second already at around the end of the early learning period (11th trial), which means inter-press intervals become as short as 250-330 msec. Thus, in almost more than 60% of training period data, the time window for MEG feature extraction (200 msec) spans around 60% of the inter-press intervals. Considering that the preparation/cueing of subsequent presses starts ahead of the actual press (e.g., Kornysheva et al., 2019) and/or potential online planning (e.g., Ariani and Diedrichsen, 2019), the decoder likely has captured these future press information as well as the signal related to the current key press, independent of the formation of genuine sequential representation (e.g., "contextualization" of individual press). This may also explain the gradual increase in decoding accuracy or pattern distance between index_OP1 vs. index_OP5 (Figure 4C and 5A), which co-occurred with performance improvement, as shorter inter-press intervals are more favorable for the dissociating the two index finger presses followed by different finger presses. The compromised decoding accuracies for the control sequences can be explained in similar logic. Therefore, more careful consideration and elaborated discussion seem necessary when trying to both achieve high-performance decoding and assess early skill learning, as it can impact all the subsequent analyses.

      The Reviewer raises the possibility that (given the windowing parameters used in the present study) an increase in “contextualization” with learning could simply reflect faster typing speeds as opposed to an actual change in the underlying neural representation. The issue can essentially be framed as a mixing problem. As correct sequences are generated at higher and higher speeds over training, MEG activity patterns related to the planning, execution, evaluation and memory of individual keypresses overlap more in time. Thus, increased overlap between the “4” and “1” keypresses (at the start of the sequence) and “2” and “4” keypresses (at the end of the sequence) could artefactually increase contextualization distances even if the underlying neural representations for the individual keypresses remain unchanged (assuming this mixing of representations is used by the classifier to differentially tag each index finger press). If this were the case, it follows that such mixing effects reflecting the ordinal sequence structure would also be observable in the distribution of decoder misclassifications. For example, “4” keypresses would be more likely to be misclassified as “1” or “2” keypresses (or vice versa) than as “3” keypresses. The confusion matrices presented in Figures 3C and 4B and Figure 3—figure supplement 3A in the previously submitted manuscript do not show this trend in the distribution of misclassifications across the four fingers.

      Moreover, if the representation distance is largely driven by this mixing effect, it’s also possible that the increased overlap between consecutive index finger keypresses during the 4-4 transition marking the end of one sequence and the beginning of the next one could actually mask contextualization-related changes to the underlying neural representations and make them harder to detect. In this case, a decoder tasked with separating individual index finger keypresses into two distinct classes based upon sequence position might show decreased performance with learning as adjacent keypresses overlapped in time with each other to an increasing extent. However, Figure 4C in our previously submitted manuscript does not support this possibility, as the 2-class hybrid classifier displays improved classification performance over early practice trials despite greater temporal overlap.

      We also conducted a new multivariate regression analysis to directly assess whether the neural representation distance score could be predicted by the 4-1, 2-4 and 4-4 keypress transition times observed for each complete correct sequence (both predictor and response variables were z-score normalized within-subject). The results of this analysis affirmed that the possible alternative explanation put forward by the Reviewer is not supported by our data (Adjusted R2 = 0.00431; F = 5.62). We now include this new negative control analysis result in the revised manuscript.
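      The structure of this regression control can be sketched as follows (the data below are random stand-ins and the z-scoring is pooled rather than within-subject for brevity; in the actual analysis, the predictors were the per-sequence 4-1, 2-4 and 4-4 transition times and the response was the per-sequence neural representation distance score):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one row per correct sequence.
# Predictors: 4-1, 2-4 and 4-4 keypress transition times.
# Response: neural representation distance score.
n_seq = 200
X = rng.normal(size=(n_seq, 3))
y = rng.normal(size=n_seq)

def zscore(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

Xz, yz = zscore(X), zscore(y)

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(n_seq), Xz])
beta, *_ = np.linalg.lstsq(A, yz, rcond=None)
resid = yz - A @ beta

# R^2 and adjusted R^2 (p = 3 predictors)
ss_res = (resid ** 2).sum()
ss_tot = ((yz - yz.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
p = 3
adj_r2 = 1 - (1 - r2) * (n_seq - 1) / (n_seq - p - 1)
```

If transition times carried substantial information about the distance score, the adjusted R² would rise well above zero; the near-zero value we report indicates they do not.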

      Overall, we do strongly agree with the Reviewer that the naturalistic, self-paced, generative task employed in the present study results in overlapping brain processes related to planning, execution, evaluation and memory of the action sequence. We also agree that there are several trade-offs to consider in the construction of the classifiers depending on the study aim. Given our aim of optimizing keypress decoder accuracy in the present study, the set of trade-offs resulted in representations reflecting more the latter three processes, and less so the planning component. Whether separate decoders can be constructed to tease apart the representations or networks supporting these overlapping processes is an important future direction of research in this area. For example, work presently underway in our lab constrains the selection of windowing parameters in a manner that allows individual classifiers to be temporally linked to specific planning, execution, evaluation or memory-related processes to discern which brain networks are involved and how they adaptively reorganize with learning. Results from the present study (Figure 4—figure supplement 2) showing hybrid-space decoder prediction accuracies exceeding 74% for temporal windows spanning as little as 25ms and located up to 100ms prior to the keyDown event strongly support the feasibility of such an approach.

      Related to the above point, testing only one particular sequence (4-1-3-2-4), aside from the control ones, limits the generalizability of the finding. This also may have contributed to the extremely high decoding accuracy reported in the current study. 

      The Reviewer raises a question about the generalizability of the decoder accuracy reported in our study. Fortunately, a comparison between decoder performances on Day 1 and Day 2 datasets does provide some insight into this issue. As the Reviewer points out, the classifiers in this study were trained and tested on keypresses performed while practicing a specific sequence (4-1-3-2-4). The study was designed this way as to avoid the impact of interference effects on learning dynamics. The cross-validated performance of classifiers on MEG data collected within the same session was 90.47% overall accuracy (4-class; Figure 3C). We then tested classifier performance on data collected during a separate MEG session conducted approximately 24 hours later (Day 2; see Figure 3—figure supplement 3). We observed a reduction in overall accuracy rate to 87.11% when tested on MEG data recorded while participants performed the same learned sequence, and 79.44% when they performed several previously unpracticed sequences. Both changes in accuracy are important with regards to the generalizability of our findings. First, 87.11% performance accuracy for the trained sequence data on Day 2 (a reduction of only 3.36%) indicates that the hybrid-space decoder performance is robust over multiple MEG sessions, and thus, robust to variations in SNR across the MEG sensor array caused by small differences in head position between scans.  This indicates a substantial advantage over sensor-space decoding approaches. Furthermore, when tested on data from unpracticed sequences, overall performance dropped an additional 7.67%. This difference reflects the performance bias of the classifier for the trained sequence, possibly caused by higher-order sequence structure being incorporated into the feature weights. In the future, it will be important to understand in more detail how random or repeated keypress sequence training data impacts overall decoder performance and generalization.
We strongly agree with the Reviewer that the issue of generalizability is extremely important and have added a new paragraph to the Discussion in the revised manuscript highlighting the strengths and weaknesses of our study with respect to this issue.

      In terms of clinical BCI, one of the potential relevance of the study, as claimed by the authors, it is not clear that the specific time window chosen in the current study (up to 200 msec since key press onset) is really useful. In most cases, clinical BCI would target neural signals with no overt movement execution due to patients' inability to move (e.g., Hochberg et al., 2012). Given the time window, the surprisingly high performance of the current decoder may result from sensory feedback and/or planning of subsequent movement, which may not always be available in the clinical BCI context. Of course, the decoding accuracy is still much higher than chance even when using signal before the key press (as shown in Figure 4 Supplement 2), but it is not immediately clear to me that the authors relate their high decoding accuracy based on post-movement signal to clinical BCI settings.

      The Reviewer questions the relevance of the specific window parameters used in the present study for clinical BCI applications, particularly for paretic patients who are unable to produce finger movements or for whom afferent sensory feedback is no longer intact. We strongly agree with the Reviewer that any intended clinical application must carefully consider these specific input feature constraints dictated by the clinical cohort, and in turn impose appropriate and complementary constraints on classifier parameters that may differ from the ones used in the present study.  We now highlight this issue in the Discussion of the revised manuscript and relate our present findings to published clinical BCI work within this context.

      One of the important and fascinating claims of the current study is that the "contextualization" of individual finger movements in a trained sequence specifically occurs during short rest periods in very early skill learning, echoing the recent theory of micro-offline learning proposed by the authors' group. Here, I think two points need to be clarified. First, the concept of "contextualization" is kept somewhat blurry throughout the text. It is only at the later part of the Discussion (around line #330 on page 13) that some potential mechanism for the "contextualization" is provided as "what-and-where" binding. Still, it is unclear what "contextualization" actually is in the current data, as the MEG signal analyzed is extracted from 0-200 msec after the keypress. If one thinks something is contextualizing an action, that contextualization should come earlier than the action itself. 

      The Reviewer requests that we: 1) more clearly define our use of the term “contextualization” and 2) provide the rationale for assessing it over a 200ms window aligned to the keyDown event. This choice of window parameters means that the MEG activity used in our analysis was coincident with, rather than preceding, the actual keypresses.  We define contextualization as the differentiation of representation for the identical movement embedded in different positions of a sequential skill. That is, representations of individual action elements progressively incorporate information about their relationship to the overall sequence structure as the skill is learned. We agree with the Reviewer that this can be appropriately interpreted as “what-and-where” binding. We now incorporate this definition in the Introduction of the revised manuscript as requested.

      The window parameters for optimizing accurate decoding individual finger movements were determined using a grid search of the parameter space (a sliding window of variable width between 25-350 ms with 25 ms increments variably aligned from 0 to +100ms with 10ms increments relative to the keyDown event). This approach generated 140 different temporal windows for each keypress for each participant, with the final parameter selection determined through comparison of the resulting performance between each decoder.  Importantly, the decision to optimize for decoding accuracy placed an emphasis on keypress representations characterized by the most consistent and robust features shared across subjects, which in turn maximize statistical power in detecting common learning-related changes. In this case, the optimal window encompassed a 200ms epoch aligned to the keyDown event (t0 = 0 ms).  We then asked if the representations (i.e. – spatial patterns of combined parcel- and voxel-space activity) of the same digit at two different sequence positions changed with practice within this optimal decoding window.  Of course, our findings do not rule out the possibility that contextualization can also be found before or even after this time window, as we did not directly address this issue in the present study.  Ongoing work in our lab, as pointed out above, is investigating contextualization within different time windows tailored specifically for assessing sequence skill action planning, execution, evaluation and memory processes.
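      The parameter search itself amounts to an exhaustive grid evaluation, sketched below (`cv_accuracy` is a toy placeholder for the actual cross-validated decoder evaluation, constructed here so that its optimum coincides with the window ultimately selected in the study):

```python
# Candidate decoding windows: widths of 25-350 ms in 25 ms steps, aligned
# 0 to +100 ms after the keyDown event in 10 ms steps.
widths = range(25, 351, 25)
onsets = range(0, 101, 10)
grid = [(t0, t0 + w) for w in widths for t0 in onsets]

def cv_accuracy(window):
    """Toy placeholder for the cross-validated decoder accuracy of one window.

    Constructed so its optimum is the window selected in the study: a 200 ms
    epoch aligned to the keyDown event (t0 = 0 ms).
    """
    t0, t1 = window
    return -abs(t0) - abs((t1 - t0) - 200)

best = max(grid, key=cv_accuracy)
```

In the actual analysis, each grid point was scored by training and cross-validating a full decoder on the MEG features extracted within that window.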

      The second point is that the result provided by the authors is not yet convincing enough to support the claim that "contextualization" occurs during rest. In the original analysis, the authors presented the statistical significance regarding the correlation between the "offline" pattern differentiation and micro-offline skill gain (Figure 5. Supplement 1), as well as the larger "offline" distance than "online" distance (Figure 5B). However, this analysis looks like regressing two variables (monotonically) increasing as a function of the trial. Although some information in this analysis, such as what the independent/dependent variables were or how individual subjects were treated, was missing in the Methods, getting a statistically significant slope seems unsurprising in such a situation. Also, curiously, the same quantitative evidence was not provided for its "online" counterpart, and the authors only briefly mentioned in the text that there was no significant correlation between them. It may be true looking at the data in Figure 5A as the online representation distance looks less monotonically changing, but the classification accuracy presented in Figure 4C, which should reflect similar representational distance, shows a more monotonic increase up to the 11th trial. Further, the ways the "online" and "offline" representation distance was estimated seem to make them not directly comparable. While the "online" distance was computed using all the correct press data within each 10 sec of execution, the "offline" distance is basically computed by only two presses (i.e., the last index_OP5 vs. the first index_OP1 separated by 10 sec of rest). Theoretically, the distance between the neural activity patterns for temporally closer events tends to be closer than that between the patterns for temporally far-apart events. It would be fairer to use the distance between the first index_OP1 vs. the last index_OP5 within an execution period for "online" distance, as well. 

      The Reviewer suggests that the current data is not convincing enough to show that contextualization occurs during rest and raises two important concerns: 1) the relationship between online contextualization and micro-online gains is not shown, and 2) the online distance was calculated differently from its offline counterpart (i.e. - instead of calculating the distance between last IndexOP5 and first IndexOP1 from a single trial, the distance was calculated for each sequence within a trial and then averaged).

      We addressed the first concern by performing individual subject correlations between 1) contextualization changes during rest intervals and micro-offline gains; 2) contextualization changes during practice trials and micro-online gains, and 3) contextualization changes during practice trials and micro-offline gains (Author response image 4). We then statistically compared the resulting correlation coefficient distributions and found that within-subject correlations between contextualization changes during rest intervals and micro-offline gains were significantly higher than those between online contextualization and micro-online gains (t = 3.2827, p = 0.0015) and those between online contextualization and micro-offline gains (t = 3.7021, p = 5.3013e-04). These results are consistent with our interpretation that micro-offline gains are supported by contextualization changes during the inter-practice rest period.
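      The comparison of correlation distributions can be sketched as follows (the subject-level coefficients below are simulated, the group size of 27 is an assumption of the sketch, and we apply a Fisher z-transform before a paired t-test since both coefficients come from the same subjects):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-subject correlation coefficients (27 subjects assumed):
# rest-interval contextualization vs micro-offline gains, and
# practice-trial contextualization vs micro-online gains.
r_rest_offline = rng.normal(0.45, 0.15, 27).clip(-0.99, 0.99)
r_prac_online = rng.normal(0.10, 0.15, 27).clip(-0.99, 0.99)

# Fisher z-transform stabilises the variance of r before the t-test
z1, z2 = np.arctanh(r_rest_offline), np.arctanh(r_prac_online)
t, p = stats.ttest_rel(z1, z2)  # paired across subjects
```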

      Author response image 4.

      Distribution of individual subject correlation coefficients between contextualization changes occurring during practice or rest with micro-online and micro-offline performance gains. Note that the correlation distributions were significantly higher for the relationship between contextualization changes during rest and micro-offline gains than for contextualization changes during practice and either micro-online or offline gain.

      With respect to the second concern highlighted above, we agree with the Reviewer that one limitation of the analysis comparing online versus offline changes in contextualization, as presented in the reviewed manuscript, is that it does not eliminate the possibility that any differences could simply be explained by the passage of time (which is smaller for the online analysis compared to the offline analysis). The Reviewer suggests an approach that addresses this issue, which we have now carried out. When quantifying online changes in contextualization from the first IndexOP1 to the last IndexOP5 keypress in the same trial, we observed no learning-related trend (Author response image 5, right panel). Importantly, offline distances were significantly larger than online distances regardless of the measurement approach, and neither online measure predicted online learning (Author response image 6).

      Author response image 5.

      Trial-by-trial trend of offline (left panel) and online (middle and right panels) changes in contextualization. Offline changes in contextualization were assessed by calculating the distance between neural representations for the last IndexOP5 keypress in the previous trial and the first IndexOP1 keypress in the present trial. Two different approaches were used to characterize online contextualization changes. The analysis included in the reviewed manuscript (middle panel) calculated the distance between IndexOP1 and IndexOP5 for each correct sequence, which was then averaged across the trial. This approach is limited by the lack of control for the passage of time when making online versus offline comparisons. Thus, the second approach controlled for the passage of time by calculating the distance between the representations associated with the first IndexOP1 keypress and the last IndexOP5 keypress within the same trial. Note that while the first approach showed an increasing online contextualization trend with practice, the second approach did not.

      Author response image 6.

      Relationship between online contextualization and online learning is shown for both within-sequence (left; note that this is the online contextualization measure used in the reviewed manuscript) and across-sequence (right) distance calculations. There was no significant relationship between online learning and online contextualization regardless of the measurement approach.

      A related concern regarding the control analysis, where individual values for max speed and the degree of online contextualization were compared (Figure 5 Supplement 3), is whether the individual difference is meaningful. If I understood correctly, the optimization of the decoding process (temporal window, feature inclusion/reduction, decoder, etc.) was performed for individual participants, and the same feature extraction was also employed for the analysis of representation distance (i.e., contextualization). If this is the case, the distances are individually differently calculated and they may need to be normalized relative to some stable reference (e.g., 1 vs. 4 or average distance within the control sequence presses) before comparison across the individuals. 

      The Reviewer makes a good point here. We have now implemented the suggested normalization procedure in the analysis provided in the revised manuscript.

      Reviewer #3 (Public review): 

      Summary: 

      One goal of this paper is to introduce a new approach for highly accurate decoding of finger movements from human magnetoencephalography data via dimension reduction of a "multi-scale, hybrid" feature space. Following this decoding approach, the authors aim to show that early skill learning involves "contextualization" of the neural coding of individual movements, relative to their position in a sequence of consecutive movements. Furthermore, they aim to show that this "contextualization" develops primarily during short rest periods interspersed with skill training and correlates with a performance metric which the authors interpret as an indicator of offline learning.

      Strengths:

      A clear strength of the paper is the innovative decoding approach, which achieves impressive decoding accuracies via dimension reduction of a "multi-scale, hybrid space". This hybrid-space approach follows the neurobiologically plausible idea of the concurrent distribution of neural coding across local circuits as well as large-scale networks. A further strength of the study is the large number of tested dimension reduction techniques and classifiers (though the manuscript reveals little about the comparison of the latter). 

      We appreciate the Reviewer’s comments regarding the paper’s strengths.

      A simple control analysis based on shuffled class labels could lend further support to this complex decoding approach. As a control analysis that completely rules out any source of overfitting, the authors could test the decoder after shuffling class labels. Following such shuffling, decoding accuracies should drop to chance level for all decoding approaches, including the optimized decoder. This would also provide an estimate of actual chance-level performance (which is informative over and beyond the theoretical chance level). Furthermore, currently, the manuscript does not explain the huge drop in decoding accuracies for the voxel-space decoding (Figure 3B). Finally, the authors' approach to cortical parcellation raises questions regarding the information carried by varying dipole orientations within a parcel (which currently seems to be ignored?) and the implementation of the mean-flipping method (given that there are two dimensions - space and time - what do the authors refer to when they talk about the sign of the "average source", line 477?). 

      The Reviewer recommends that we: 1) conduct an additional control analysis on classifier performance using shuffled class labels, 2) provide a more detailed explanation regarding the drop in decoding accuracies for the voxel-space decoding following LDA dimensionality reduction (see Fig 3B), and 3) provide additional details on how problems related to dipole solution orientations were addressed in the present study.  

      In relation to the first point, we have now implemented a random shuffling approach as a control for the classification analyses. The results of this analysis indicated that the chance level accuracy was 22.12% (± SD 9.1%) for individual keypress decoding (4-class classification), and 18.41% (± SD 7.4%) for individual sequence item decoding (5-class classification), irrespective of the input feature set or the type of decoder used. Thus, the decoding accuracy observed with the final model was substantially higher than these chance levels.  
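      The label-shuffling control follows the standard permutation recipe, sketched below with a simplified nearest-class-mean classifier standing in for the actual decoders (all data and dimensions here are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: features and 4-class keypress labels
n_trials, n_feat = 400, 20
y = rng.integers(0, 4, size=n_trials)
X = rng.normal(size=(n_trials, n_feat)) + y[:, None] * 0.5  # class-informative

def nearest_mean_accuracy(X, y, n_folds=5):
    """Nearest-class-mean classifier, k-fold cross-validated."""
    idx = np.arange(len(y))
    rng.shuffle(idx)
    folds = np.array_split(idx, n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        means = np.stack([X[train][y[train] == c].mean(axis=0) for c in range(4)])
        d = ((X[test, None, :] - means[None]) ** 2).sum(-1)
        accs.append((d.argmin(1) == y[test]).mean())
    return float(np.mean(accs))

acc_true = nearest_mean_accuracy(X, y)

# Permutation control: with shuffled labels, accuracy falls to ~25% chance
y_shuf = rng.permutation(y)
acc_shuf = nearest_mean_accuracy(X, y_shuf)
```

Repeating the shuffle many times yields an empirical null distribution, whose spread gives the standard deviation of the chance-level estimate reported above.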

      Second, please note that the dimensionality of the voxel-space feature set is very high (i.e. – 15684). LDA attempts to map the input features onto a much smaller dimensional space (number of classes - 1; e.g. – 3 dimensions for 4-class keypress decoding). Given the very high dimensionality of the voxel-space input features relative to the number of training samples, the resulting mapping is prone to overfitting and exhibits reduced accuracy. Despite this general consideration, please refer to Figure 3—figure supplement 3, where we observe improvement in voxel-space decoder performance when utilizing alternative dimensionality reduction techniques.
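      The class-count constraint on LDA's output dimensionality can be verified directly (random stand-in data, with far fewer voxels than the actual 15684-dimensional feature set; scikit-learn assumed):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

# Hypothetical stand-in for the voxel-space features: high-dimensional,
# with many more features than samples (as in the actual voxel space)
n_samples, n_voxels, n_classes = 120, 1500, 4
y = rng.integers(0, n_classes, n_samples)
X = rng.normal(size=(n_samples, n_voxels))
X[np.arange(n_samples), y] += 2.0  # inject a little class signal

lda = LinearDiscriminantAnalysis()
Z = lda.fit_transform(X, y)
# Z has at most n_classes - 1 = 3 discriminant axes, regardless of n_voxels
```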

      The decoders constructed in the present study assess the average spatial patterns across time (as defined by the windowing procedure) in the input feature space.  We now provide additional details in the Methods of the revised manuscript pertaining to the parcellation procedure and how the sign ambiguity problem was addressed in our analysis.

      Weaknesses: 

      A clear weakness of the paper lies in the authors' conclusions regarding "contextualization". Several potential confounds, described below, question the neurobiological implications proposed by the authors and provide a simpler explanation of the results. Furthermore, the paper follows the assumption that short breaks result in offline skill learning, while recent evidence, described below, casts doubt on this assumption. 

      We thank the Reviewer for giving us the opportunity to address these issues in detail (see below).

      The authors interpret the ordinal position information captured by their decoding approach as a reflection of neural coding dedicated to the local context of a movement (Figure 4). One way to dissociate ordinal position information from information about the moving effectors is to train a classifier on one sequence and test the classifier on other sequences that require the same movements, but in different positions50. In the present study, however, participants trained to repeat a single sequence (4-1-3-2-4). As a result, ordinal position information is potentially confounded by the fixed finger transitions around each of the two critical positions (first and fifth press). Across consecutive correct sequences, the first keypress in a given sequence was always preceded by a movement of the index finger (=last movement of the preceding sequence), and followed by a little finger movement. The last keypress, on the other hand, was always preceded by a ring finger movement, and followed by an index finger movement (=first movement of the next sequence). Figure 4 - Supplement 2 shows that finger identity can be decoded with high accuracy (>70%) across a large time window around the time of the key press, up to at least +/-100 ms (and likely beyond, given that decoding accuracy is still high at the boundaries of the window depicted in that figure). This time window approaches the keypress transition times in this study. Given that distinct finger transitions characterized the first and fifth keypress, the classifier could thus rely on persistent (or "lingering") information from the preceding finger movement, and/or "preparatory" information about the subsequent finger movement, in order to dissociate the first and fifth keypress. 
Currently, the manuscript provides no evidence that the context information captured by the decoding approach is more than a by-product of temporally extended, and therefore overlapping, but independent neural representations of consecutive keypresses that are executed in close temporal proximity - rather than a neural representation dedicated to context. 

      Such temporal overlap of consecutive, independent finger representations may also account for the dynamics of "ordinal coding"/"contextualization", i.e., the increase in 2-class decoding accuracy, across Day 1 (Figure 4C). As learning progresses, both tapping speed and the consistency of keypress transition times increase (Figure 1), i.e., consecutive keypresses are closer in time, and more consistently so. As a result, information related to a given keypress is increasingly overlapping in time with information related to the preceding and subsequent keypresses. The authors seem to argue that their regression analysis in Figure 5 - Figure Supplement 3 speaks against any influence of tapping speed on "ordinal coding" (even though that argument is not made explicitly in the manuscript). However, Figure 5 - Figure Supplement 3 shows inter-individual differences in a between-subject analysis (across trials, as in panel A, or separately for each trial, as in panel B), and, therefore, says little about the within-subject dynamics of "ordinal coding" across the experiment. A regression of trial-by-trial "ordinal coding" on trial-by-trial tapping speed (either within-subject or at a group-level, after averaging across subjects) could address this issue. Given the highly similar dynamics of "ordinal coding" on the one hand (Figure 4C), and tapping speed on the other hand (Figure 1B), I would expect a strong relationship between the two in the suggested within-subject (or group-level) regression. Furthermore, learning should increase the number of (consecutively) correct sequences, and, thus, the consistency of finger transitions. 
Therefore, the increase in 2-class decoding accuracy may simply reflect an increasing overlap in time of increasingly consistent information from consecutive keypresses, which allows the classifier to dissociate the first and fifth keypress more reliably as learning progresses, simply based on the characteristic finger transitions associated with each. In other words, given that the physical context of a given keypress changes as learning progresses - keypresses move closer together in time and are more consistently correct - it seems problematic to conclude that the mental representation of that context changes. To draw that conclusion, the physical context should remain stable (or any changes to the physical context should be controlled for). 

      The issues raised by Reviewer #3 here are similar to two issues raised by Reviewer #2 above, and we agree they must both be carefully considered in any evaluation of our findings.

      As both Reviewers pointed out, the classifiers in this study were trained and tested on keypresses performed while practicing a specific sequence (4-1-3-2-4). The study was designed this way so as to avoid the impact of interference effects on learning dynamics. The cross-validated performance of classifiers on MEG data collected within the same session was 90.47% overall accuracy (4-class; Figure 3C). We then tested classifier performance on data collected during a separate MEG session conducted approximately 24 hours later (Day 2; see Figure 3—figure supplement 3). We observed a reduction in overall accuracy to 87.11% when tested on MEG data recorded while participants performed the same learned sequence, and to 79.44% when they performed several previously unpracticed sequences. This classification performance difference of 7.67% on the Day 2 data could reflect a performance bias of the classifier toward the trained sequence, possibly caused by mixed information from temporally close keypresses being incorporated into the feature weights.

      Along these same lines, both Reviewers also raise the possibility that an increase in “ordinal coding/contextualization” with learning could simply reflect an increase in this mixing effect caused by faster typing speeds as opposed to an actual change in the underlying neural representation. The basic idea is that as correct sequences are generated at higher and higher speeds over training, MEG activity patterns related to the planning, execution, evaluation and memory of individual keypresses overlap more in time. Thus, increased overlap between the “4” and “1” keypresses (at the start of the sequence) and “2” and “4” keypresses (at the end of the sequence) could artefactually increase contextualization distances even if the underlying neural representations for the individual keypresses remain unchanged (assuming this mixing of representations is used by the classifier to differentially tag each index finger press). If this were the case, it follows that such mixing effects reflecting the ordinal sequence structure would also be observable in the distribution of decoder misclassifications. For example, “4” keypresses would be more likely to be misclassified as “1” or “2” keypresses (or vice versa) than as “3” keypresses. The confusion matrices presented in Figures 3C and 4B and Figure 3—figure supplement 3A in the previously submitted manuscript do not show this trend in the distribution of misclassifications across the four fingers.
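      The logic of this misclassification check can be sketched as follows. The confusion matrix values below are hypothetical, for illustration only; the observed matrices are reported in Figures 3C and 4B and Figure 3—figure supplement 3A:

```python
import numpy as np

# Hypothetical 4-class confusion matrix (rows = true keypress, cols =
# decoded keypress, keys ordered 1-4); values are illustrative only.
conf = np.array([
    [0.90, 0.03, 0.03, 0.04],
    [0.03, 0.91, 0.03, 0.03],
    [0.03, 0.03, 0.91, 0.03],
    [0.04, 0.03, 0.03, 0.90],
])

# For the trained sequence 4-1-3-2-4, temporal mixing predicts inflated
# confusions between keys adjacent in the sequence: 4<->1 and 2<->4.
adjacent = [(3, 0), (0, 3), (1, 3), (3, 1)]  # 0-indexed (key number - 1)
mask = np.zeros_like(conf, dtype=bool)
for i, j in adjacent:
    mask[i, j] = True
off_diag = ~np.eye(4, dtype=bool)

adj_mean = conf[mask].mean()              # sequence-adjacent confusions
other_mean = conf[off_diag & ~mask].mean()  # all other confusions
# Under the mixing account, adj_mean should clearly exceed other_mean;
# the observed confusion matrices show no such asymmetry.
```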

      Following this logic, it’s also possible that if the ordinal coding is largely driven by this mixing effect, the increased overlap between consecutive index finger keypresses during the 4-4 transition marking the end of one sequence and the beginning of the next one could actually mask contextualization-related changes to the underlying neural representations and make them harder to detect. In this case, a decoder tasked with separating individual index finger keypresses into two distinct classes based upon sequence position might show decreased performance with learning as adjacent keypresses overlapped in time with each other to an increasing extent. However, Figure 4C in our previously submitted manuscript does not support this possibility, as the 2-class hybrid classifier displays improved classification performance over early practice trials despite greater temporal overlap.

      As noted in the reply to Reviewer #2 above, we also conducted a new multivariate regression analysis to directly assess whether the neural representation distance score could be predicted by the 4-1, 2-4 and 4-4 keypress transition times observed for each complete correct sequence (both predictor and response variables were z-score normalized within-subject). The results of this analysis affirmed that the alternative explanation put forward by the Reviewer is not supported by our data (Adjusted R2 = 0.00431; F = 5.62). We now include this new negative control analysis result in the revised manuscript.
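      A minimal sketch of this kind of control regression follows. The transition-time matrix and distance scores are synthetic stand-ins, and the simple pooled z-scoring here is a placeholder for the within-subject normalization used in the actual analysis:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

def zscore(a):
    # Placeholder for within-subject z-scoring of predictors and response.
    return (a - a.mean(axis=0)) / a.std(axis=0)

# Stand-ins: per-sequence 4-1, 2-4 and 4-4 keypress transition times (T)
# and the neural representation distance score (d) for each sequence.
T = rng.standard_normal((300, 3))
d = rng.standard_normal(300)

X, y = zscore(T), zscore(d)
model = LinearRegression().fit(X, y)
r2 = model.score(X, y)
n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
# An adjusted R^2 near zero, as reported in the manuscript, indicates that
# transition times do not predict the contextualization distance.
```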

      Finally, the Reviewer hints that one way to address this issue would be to compare MEG responses before and after learning for sequences typed at a fixed speed. However, given that the speed-accuracy trade-off improves with learning, a comparison between unlearned and learned skill states would require that the skill be evaluated at a very low fixed speed. Essentially, such a design presents the problem that the post-training test evaluates the representation in an unlearned behavioral state that is not representative of the acquired skill. Thus, this approach would not address our experimental question: “do neural representations of the same action performed at different locations within a skill sequence contextually differentiate or remain stable as learning evolves?”

      A similar difference in physical context may explain why neural representation distances ("differentiation") differ between rest and practice (Figure 5). The authors define "offline differentiation" by comparing the hybrid space features of the last index finger movement of a trial (ordinal position 5) and the first index finger movement of the next trial (ordinal position 1). However, the latter is not only the first movement in the sequence but also the very first movement in that trial (at least in trials that started with a correct sequence), i.e., not preceded by any recent movement. In contrast, the last index finger of the last correct sequence in the preceding trial includes the characteristic finger transition from the fourth to the fifth movement. Thus, there is more overlapping information arising from the consistent, neighbouring keypresses for the last index finger movement, compared to the first index finger movement of the next trial. A strong difference (larger neural representation distance) between these two movements is, therefore, not surprising, given the task design, and this difference is also expected to increase with learning, given the increase in tapping speed, and the consequent stronger overlap in representations for consecutive keypresses. Furthermore, initiating a new sequence involves pre-planning, while ongoing practice relies on online planning (Ariani et al., eNeuro 2021), i.e., two mental operations that are dissociable at the level of neural representation (Ariani et al., bioRxiv 2023). 

      The Reviewer argues that the last finger movement of a trial and the first finger movement of the next trial are performed in different circumstances and contexts. This is an important point and one we tend to agree with. For this task, the first sequence in a practice trial (which is pre-planned offline) is performed in a somewhat different context from the sequence iterations that follow, which involve temporally overlapping planning, execution and evaluation processes. The Reviewer is particularly concerned about a difference in the temporal mixing effect raised above between the first and last keypresses performed in a trial. However, in contrast to the Reviewer's argument above, findings from Kornysheva et al. (2019) showed that neural representations of individual actions are competitively queued during the pre-planning period in a manner that reflects the ordinal structure of the learned sequence. Thus, mixing effects are likely still present for the first keypress in a trial. Also note that we now present new control analyses in multiple responses above confirming that hypothetical mixing effects between adjacent keypresses do not explain our reported contextualization finding. A statement addressing these possibilities raised by the Reviewer has been added to the Discussion in the revised manuscript.

      In relation to pre-planning, ongoing MEG work in our lab is investigating contextualization within different time windows tailored specifically for assessing how sequence skill action planning evolves with learning.

      Given these differences in the physical context and associated mental processes, it is not surprising that "offline differentiation", as defined here, is more pronounced than "online differentiation". For the latter, the authors compared movements that were better matched regarding the presence of consistent preceding and subsequent keypresses (online differentiation was defined as the mean difference between all first vs. last index finger movements during practice).  It is unclear why the authors did not follow a similar definition for "online differentiation" as for "micro-online gains" (and, indeed, a definition that is more consistent with their definition of "offline differentiation"), i.e., the difference between the first index finger movement of the first correct sequence during practice, and the last index finger of the last correct sequence. While these two movements are, again, not matched for the presence of neighbouring keypresses (see the argument above), this mismatch would at least be the same across "offline differentiation" and "online differentiation", so they would be more comparable. 

      This is the same point made earlier by Reviewer #2, and we agree with this assessment. As stated in the response to Reviewer #2 above, we have now carried out quantification of online contextualization using this approach and included it in the revised manuscript. We thank the Reviewer for this suggestion.

      A further complication in interpreting the results regarding "contextualization" stems from the visual feedback that participants received during the task. Each keypress generated an asterisk shown above the string on the screen, irrespective of whether the keypress was correct or incorrect. As a result, incorrect (e.g., additional, or missing) keypresses could shift the phase of the visual feedback string (of asterisks) relative to the ordinal position of the current movement in the sequence (e.g., the fifth movement in the sequence could coincide with the presentation of any asterisk in the string, from the first to the fifth). Given that more incorrect keypresses are expected at the start of the experiment, compared to later stages, the consistency in visual feedback position, relative to the ordinal position of the movement in the sequence, increased across the experiment. A better differentiation between the first and the fifth movement with learning could, therefore, simply reflect better decoding of the more consistent visual feedback, based either on the feedback-induced brain response, or feedback-induced eye movements (the study did not include eye tracking). It is not clear why the authors introduced this complicated visual feedback in their task, besides consistency with their previous studies.

      We strongly agree with the Reviewer that eye movements related to task engagement are important to rule out as a potential driver of the decoding accuracy or contextualization effect. We address this issue above in response to a question raised by Reviewer #1 about the impact of movement related artefacts in general on our findings.

      First, the assumption the Reviewer makes here about the distribution of errors in this task is incorrect. On average across subjects, 2.32% ± 1.48% (mean ± SD) of all keypresses performed were errors, which were evenly distributed across the four possible keypress responses. While errors increased progressively over practice trials, they did so in proportion to the increase in correct keypresses, so that the overall ratio of correct-to-incorrect keypresses remained stable over the training session. Thus, the Reviewer's assumption that there is a higher relative frequency of errors in early trials, with a resulting systematic phase shift between the visual display updates (i.e., a change in asterisk position above the displayed sequence) and the keypress performed, is not substantiated by the data. To the contrary, the asterisk position on the display and the keypress being executed remained highly correlated over the entire training session. We now include a statement about the frequency and distribution of errors in the revised manuscript.

      Given this high correlation, we firmly agree with the Reviewer that the issue of eye movement-related artefacts is still an important one to address. Fortunately, we did collect eye movement data during the MEG recordings, and so were able to investigate this. As detailed in the response to Reviewer #1 above, we found that gaze positions and eye-movement velocity time-locked to visual display updates (i.e., a change in asterisk position above the displayed sequence) did not reflect the asterisk location above chance levels (overall cross-validated accuracy = 0.21817; see Author response image 1). Furthermore, an inspection of the eye position data revealed that a majority of participants on most trials displayed random walk gaze patterns around a center fixation point, indicating that participants did not attend to the asterisk position on the display. This is consistent with intrinsic generation of the action sequence, and congruent with the fact that the display does not provide explicit feedback related to performance. As pointed out above, a similar real-world example would be manually inputting a long password into a secure online application. In this case, one intrinsically generates the sequence from memory and receives similar feedback about the password sequence position (also provided as asterisks), which is typically ignored by the user. Notably, the minimal participant engagement with the visual task display observed in this study highlights an important difference between behavior observed during explicit sequence learning motor tasks (which is highly generative in nature) and reactive responses to stimulus cues in a serial reaction time task (SRTT). This is a crucial difference that must be carefully considered when comparing findings across studies. All elements pertaining to this new control analysis are now included in the revised manuscript.
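      The structure of this eye-movement control can be sketched as follows. The gaze features and asterisk labels are synthetic stand-ins, and the classifier and fold count are assumptions for illustration rather than the exact analysis pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Stand-ins for gaze features time-locked to display updates
# (x position, y position, velocity) and the asterisk position (5 classes).
gaze = rng.standard_normal((500, 3))
asterisk_pos = rng.integers(0, 5, size=500)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, gaze, asterisk_pos, cv=5).mean()
# If participants visually tracked the asterisk, decoding accuracy would
# exceed the 20% chance level; the reported accuracy (~0.218) did not.
```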

      The authors report a significant correlation between "offline differentiation" and cumulative micro-offline gains. However, it would be more informative to correlate trial-by-trial changes in each of the two variables. This would address the question of whether there is a trial-by-trial relation between the degree of "contextualization" and the amount of micro-offline gains - are performance changes (micro-offline gains) less pronounced across rest periods for which the change in "contextualization" is relatively low? Furthermore, is the relationship between micro-offline gains and "offline differentiation" significantly stronger than the relationship between micro-offline gains and "online differentiation"? 

      In response to a similar issue raised above by Reviewer #2, we now include new analyses comparing correlation magnitudes between (1) “online differentiation” vs micro-online gains, (2) “online differentiation” vs micro-offline gains and (3) “offline differentiation” vs micro-offline gains (see Author response images 4, 5 and 6 above). These new analyses and results have been added to the revised manuscript. Once again, we thank both Reviewers for this suggestion.

      The authors follow the assumption that micro-offline gains reflect offline learning.

      This statement is incorrect. The original Bönstrup et al. (2019) 49 paper clearly states that micro-offline gains must be carefully interpreted based upon the behavioral context within which they are observed, and lays out the conditions under which one can have confidence that micro-offline gains reflect offline learning. In fact, the excellent meta-analysis of Pan & Rickard (2015) 51, which re-interprets the benefits of sleep in overnight skill consolidation from a “reactive inhibition” perspective, was a crucial resource in the experimental design of our initial study49, as well as in all our subsequent work. Pan & Rickard stated:

      “Empirically, reactive inhibition refers to performance worsening that can accumulate during a period of continuous training (Hull, 1943). It tends to dissipate, at least in part, when brief breaks are inserted between blocks of training. If there are multiple performance-break cycles over a training session, as in the motor sequence literature, performance can exhibit a scalloped effect, worsening during each uninterrupted performance block but improving across blocks52,53. Rickard, Cai, Rieth, Jones, and Ard (2008) and Brawn, Fenn, Nusbaum, and Margoliash (2010) 52,53 demonstrated highly robust scalloped reactive inhibition effects using the commonly employed 30 s–30 s performance break cycle, as shown for Rickard et al.’s (2008) massed practice sleep group in Figure 2. The scalloped effect is evident for that group after the first few 30 s blocks of each session. The absence of the scalloped effect during the first few blocks of training in the massed group suggests that rapid learning during that period masks any reactive inhibition effect.”

      Crucially, Pan & Rickard51 made several concrete recommendations for reducing the impact of the reactive inhibition confound on offline learning studies. One of these recommendations was to reduce practice times to 10s (most prior sequence learning studies up until that point had employed 30s long practice trials). They stated:

      “The traditional design involving 30 s-30 s performance break cycles should be abandoned given the evidence that it results in a reactive inhibition confound, and alternative designs with reduced performance duration per block used instead51. One promising possibility is to switch to 10 s performance durations for each performance-break cycle instead51. That design appears sufficient to eliminate at least the majority of the reactive inhibition effect52,53.”

      We mindfully incorporated recommendations from Pan and Rickard51  into our own study designs including 1) utilizing 10s practice trials and 2) constraining our analysis of micro-offline gains to early learning trials (where performance monotonically increases and 95% of overall performance gains occur), which are prior to the emergence of the “scalloped” performance dynamics that are strongly linked to reactive inhibition effects. 

      However, there is no direct evidence in the literature that micro-offline gains really result from offline learning, i.e., an improvement in skill level.

      We strongly disagree with the Reviewer’s assertion that “there is no direct evidence in the literature that micro-offline gains really result from offline learning, i.e., an improvement in skill level.”  The initial Bönstrup et al. (2019) 49 report was followed up by a large online crowd-sourcing study (Bönstrup et al., 2020) 54. This second (and much larger) study provided several additional important findings supporting our interpretation of micro-offline gains in cases where the important behavioral conditions clarified above were met (see Author response image 7 below for further details on these conditions).

      Author response image 7.

      Micro-offline gains observed in learning and non-learning contexts are attributed to different underlying causes. (A) Micro-offline and online changes relative to overall trial-by-trial learning. This figure is based on data from Bönstrup et al. (2019) 49. During early learning, micro-offline gains (red bars) closely track trial-by-trial performance gains (green line with open circle markers), with minimal contribution from micro-online gains (blue bars). The stated conclusion in Bönstrup et al. (2019) is that micro-offline gains only during this early learning stage reflect rapid memory consolidation (see also 54). After early learning, around practice trial 11, skill plateaus. This plateau period is characterized by a striking emergence of coupled (and relatively stable) micro-online drops and micro-offline increases. Bönstrup et al. (2019), as well as others in the literature 55-57, argue that micro-offline gains during the plateau period likely reflect recovery from inhibitory performance factors such as reactive inhibition or fatigue, and thus must be excluded from analyses relating micro-offline gains to skill learning. The Non-repeating groups in Experiments 3 and 4 from Das et al. (2024) suffer from a lack of consideration of these known confounds.

      Evidence documented in that paper54 showed that micro-offline gains during early skill learning were: 1) replicable and generalized to subjects learning the task in their daily living environment (n=389); 2) equivalent when significantly shortening practice period duration, thus confirming that they are not a result of recovery from performance fatigue (n=118);  3) reduced (along with learning rates) by retroactive interference applied immediately after each practice period relative to interference applied after passage of time (n=373), indicating stabilization of the motor memory at a microscale of several seconds consistent with rapid consolidation; and 4) not modified by random termination of the practice periods, ruling out a contribution of predictive motor slowing (N = 71) 54.  Altogether, our findings were strongly consistent with the interpretation that micro-offline gains reflect memory consolidation supporting early skill learning. This is precisely the portion of the learning curve Pan and Rickard51 refer to when they state “…rapid learning during that period masks any reactive inhibition effect”.

      This interpretation is further supported by brain imaging evidence linking known memory-related networks and consolidation mechanisms to micro-offline gains. First, we reported that the density of fast hippocampo-neocortical skill memory replay events increases approximately three-fold during early learning inter-practice rest periods, with this density explaining differences in the magnitude of micro-offline gains across subjects1. Second, Jacobacci et al. (2020) independently reproduced our original behavioral findings and reported BOLD fMRI changes in the hippocampus and precuneus (regions also identified in our MEG study1) linked to micro-offline gains during early skill learning33. These functional changes were coupled with rapid alterations in brain microstructure on the order of minutes, suggesting that the same network that operates during rest periods of early learning undergoes structural plasticity over several minutes following practice58. Third, even more recently, Chen et al. (2024) provided direct evidence from intracranial EEG in humans linking sharp-wave ripple events (which are known markers for neural replay59) in the hippocampus (80-120 Hz in humans) with micro-offline gains during early skill learning. The authors report that the strong increase in ripple rates tracked learning behavior, both across blocks and across participants, and conclude that hippocampal ripples during resting offline periods contribute to motor sequence learning2.

      Thus, there is actually now substantial evidence in the literature directly supporting the assertion “that micro-offline gains really result from offline learning”.  On the contrary, according to Gupta & Rickard (2024) “…the mechanism underlying RI [reactive inhibition] is not well established” after over 80 years of investigation60, possibly due to the fact that “reactive inhibition” is a categorical description of behavioral effects that likely result from several heterogenous processes with very different underlying mechanisms.

      On the contrary, recent evidence questions this interpretation (Gupta & Rickard, npj Sci Learn 2022; Gupta & Rickard, Sci Rep 2024; Das et al., bioRxiv 2024). Instead, there is evidence that micro-offline gains are transient performance benefits that emerge when participants train with breaks, compared to participants who train without breaks, however, these benefits vanish within seconds after training if both groups of participants perform under comparable conditions (Das et al., bioRxiv 2024). 

      It is important to point out that the recent work of Gupta & Rickard (2022, 2024) 55 does not present any data that directly opposes our finding that early skill learning49 is expressed as micro-offline gains during rest breaks. These studies are essentially an extension of the Rickard et al. (2008) paper, which employed a massed (30s practice followed by 30s breaks) vs spaced (10s practice followed by 10s breaks) design to assess whether recovery from reactive inhibition effects could account for performance gains measured after several minutes or hours. Gupta & Rickard (2022) added two additional groups (30s practice/10s break and 10s practice/10s break, as used in the work from our group). The primary aim of the study was to assess whether changes in performance when retested 5 minutes after skill training had ended (training consisted of 12 practice trials for the massed groups and 36 practice trials for the spaced groups) more likely reflected memory consolidation effects or recovery from reactive inhibition effects. The Gupta & Rickard (2024) follow-up paper employed a similar design, with the primary difference being that participants performed a fixed number of sequences on each trial, as opposed to trials lasting a fixed duration. This was done to facilitate the fitting of a quantitative statistical model to the data. To reiterate, neither study included any analysis of micro-online or micro-offline gains, and neither included any comparison focused on skill gains during early learning. Instead, Gupta & Rickard (2022) reported evidence for reactive inhibition effects for all groups over much longer training periods. Again, we reported the same finding for trials following the early learning period in our original Bönstrup et al. (2019) paper49 (Author response image 7).
Also, please note that we reported in this paper that cumulative micro-offline gains over early learning did not correlate with overnight offline consolidation measured 24 hours later49 (see the Results section and further elaboration in the Discussion). Thus, while the composition of our data is supportive of a short-term memory consolidation process operating over several seconds during early learning, it likely differs from those involved over longer training times and offline periods, as assessed by Gupta & Rickard (2022).

      In the recent preprint from Das et al. (2024) 61, the authors make the strong claim that “micro-offline gains during early learning do not reflect offline learning”, which is not supported by their own data. The authors hypothesize that if “micro-offline gains represent offline learning, participants should reach higher skill levels when training with breaks, compared to training without breaks”. The study utilizes a spaced vs. massed practice group between-subjects design inspired by the reactive inhibition work from Rickard and others to test this hypothesis. Crucially, the design incorporates only a small fraction of the training used in other investigations to evaluate early skill learning1,33,49,54,57,58,62. A direct comparison between the practice schedule designs for the spaced and massed groups in Das et al., and the training schedule all participants experienced in the original Bönstrup et al. (2019) paper, highlights this issue as well as several others (Author response image 8):

      Author response image 8.

      (A) Comparison of the Das et al. Spaced and Massed group training session designs with the training session design from the original Bönstrup et al. (2019)49 paper. Following the approach taken by Das et al., all practice is visualized as 10-second practice trials with a variable number (0, 1 or 30) of 10-second inter-practice rest intervals to allow direct comparison between designs. The two key takeaways from this comparison are that (1) the intervention differences (i.e., practice schedules) between the Massed and Spaced groups in the Das et al. report are extremely small (less than 12% of the overall session schedule), and (2) the overall amount of practice is much smaller than in the design from the original Bönstrup report49 (which has been utilized in several subsequent studies). (B) Group-level learning curve data from Bönstrup et al. (2019)49 are used to estimate the performance range accounted for by the equivalent periods covering Test 1, Training 1 and Test 2 from Das et al. (2024). Note that the intervention in the Das et al. study is limited to a period covering less than 50% of the overall learning range.

      First, participants in the original Bönstrup et al. study49 experienced 157.14% more practice time and 46.97% less inter-practice rest time than the Spaced group in the Das et al. study (Author response image 8). Thus, the overall amounts of practice and rest differ substantially between studies, with much more limited training for participants in Das et al.

      Second, and perhaps most importantly, the actual intervention (i.e., the difference in practice schedule between the Spaced and Massed groups) employed by Das et al. covers a very small fraction of the overall training session. Identical practice schedule segments for the Spaced and Massed groups are indicated by the red shaded area in Author response image 8. Please note that these identical segments cover 94.84% of the Massed group training schedule and 88.01% of the Spaced group training schedule (since the latter has 60 seconds of additional rest). This means that the actual interventions cover less than 5% (for Massed) and 12% (for Spaced) of the total training session, which minimizes any chance of observing a difference between groups.

      Also note that the very beginning of the practice schedule (during which substantial learning is known to occur; see Author response image 8B) is labeled in the Das et al. study as Test 1. Test 1 encompasses the first 20 seconds of practice (alternatively viewed as the first two 10-second practice trials with no inter-practice rest). It is immediately followed by the Training 1 intervention, which is composed of only three 10-second practice trials (with 10-second inter-practice rest for the Spaced group and no inter-practice rest for the Massed group). Author response image 8 also shows that, since there is no inter-practice rest after the third Training practice trial for the Spaced group, this third trial (for both Training 1 and 2) is actually part of an identical practice schedule segment shared by both groups (Massed and Spaced), reducing the magnitude of the intervention even further.

      Moreover, we know from the original Bönstrup et al. (2019) paper49 that 46.57% of overall group-level performance gains in that study occurred between trials 2 and 5. Thus, Das et al. limited their intervention to a period covering less than half of the early learning range discussed in the literature, which again minimizes any chance of observing an effect.

      This issue is amplified even further at Training 2, since skill learning prior to the long 5-minute break is retained, further constraining the performance range over these three trials. A related issue pertains to the trials labeled as Test 1 (trials 1-2) and Test 2 (trials 6-7) by Das et al. Again, we know from the original Bönstrup et al. paper49 that 18.06% and 14.43% (32.49% total) of overall group-level performance gains occurred during trials corresponding to Das et al. Test 1 and Test 2, respectively. In other words, Das et al. averaged skill performance over 20 seconds of practice at two time-points where dramatic skill improvements occur. Pan & Rickard (2015)51 previously showed that such averaging is known to inject artefacts into analyses of performance gains.

      Furthermore, the structure of the Test in the Das et al. study appears to have an interference effect on Spaced group performance after the training intervention. This makes sense if one considers that the Spaced group is now required to perform the task in a Massed practice environment (i.e., two 10-second practice trials merged into one long trial), further blurring the true intervention effects. This effect is observable in Figure 1C,E of their preprint. Specifically, while the Massed group continues to show an increase in performance during the test relative to the last 10 seconds of practice during training, the Spaced group displays a marked decrease. This decrease stands in stark contrast to the monotonic increases observed for both groups at all other time-points.

      Interestingly, when statistical comparisons between the groups are made at the time-points when the intervention is present (as opposed to after it has been removed) then the stated hypothesis, “If micro-offline gains represent offline learning, participants should reach higher skill levels when training with breaks, compared to training without breaks”, is confirmed.

      The data presented by Gupta and Rickard (2022, 2024) and Das et al. (2024) are in many ways more confirmatory than contradictory of the constraints employed by our group and others with respect to experimental design, analysis and interpretation of study findings. Still, they do highlight a limitation of the current micro-online/offline framework, which was originally intended to be applied only to early skill learning over spaced practice schedules, when reactive inhibition effects are minimized49. Extrapolation of this framework to post-plateau performance periods, longer timespans, or non-learning situations (e.g., the Non-repeating groups from Experiments 3 & 4 in Das et al. (2024)), when reactive inhibition plays a more substantive role, is not warranted. Ultimately, it will be important to develop new paradigms allowing one to independently estimate the different coincident or antagonistic processes (e.g., memory consolidation, planning, working memory and reactive inhibition) contributing to micro-online and micro-offline gains during and after early skill learning within a unifying framework.

      References

      (1) Buch, E. R., Claudino, L., Quentin, R., Bönstrup, M. & Cohen, L. G. Consolidation of human skill linked to waking hippocampo-neocortical replay. Cell Rep 35, 109193 (2021). https://doi.org:10.1016/j.celrep.2021.109193

      (2) Chen, P.-C., Stritzelberger, J., Walther, K., Hamer, H. & Staresina, B. P. Hippocampal ripples during offline periods predict human motor sequence learning. bioRxiv, 2024.2010.2006.614680 (2024). https://doi.org:10.1101/2024.10.06.614680

      (3) Classen, J., Liepert, J., Wise, S. P., Hallett, M. & Cohen, L. G. Rapid plasticity of human cortical movement representation induced by practice. J Neurophysiol 79, 1117-1123 (1998).

      (4) Karni, A. et al. Functional MRI evidence for adult motor cortex plasticity during motor skill learning. Nature 377, 155-158 (1995). https://doi.org:10.1038/377155a0

      (5) Kleim, J. A., Barbay, S. & Nudo, R. J. Functional reorganization of the rat motor cortex following motor skill learning. J Neurophysiol 80, 3321-3325 (1998).

      (6) Shadmehr, R. & Holcomb, H. H. Neural correlates of motor memory consolidation. Science 277, 821-824 (1997).

      (7) Doyon, J. et al. Experience-dependent changes in cerebellar contributions to motor sequence learning. Proc Natl Acad Sci U S A 99, 1017-1022 (2002).

      (8) Toni, I., Ramnani, N., Josephs, O., Ashburner, J. & Passingham, R. E. Learning arbitrary visuomotor associations: temporal dynamic of brain activity. Neuroimage 14, 1048-1057 (2001).

      (9) Grafton, S. T. et al. Functional anatomy of human procedural learning determined with regional cerebral blood flow and PET. J Neurosci 12, 2542-2548 (1992).

      (10) Kennerley, S. W., Sakai, K. & Rushworth, M. F. Organization of action sequences and the role of the pre-SMA. J Neurophysiol 91, 978-993 (2004). https://doi.org:10.1152/jn.00651.2003

      (11) Hardwick, R. M., Rottschy, C., Miall, R. C. & Eickhoff, S. B. A quantitative meta-analysis and review of motor learning in the human brain. Neuroimage 67, 283-297 (2013). https://doi.org:10.1016/j.neuroimage.2012.11.020

      (12) Sawamura, D. et al. Acquisition of chopstick-operation skills with the non-dominant hand and concomitant changes in brain activity. Sci Rep 9, 20397 (2019). https://doi.org:10.1038/s41598-019-56956-0

      (13) Lee, S. H., Jin, S. H. & An, J. The difference in cortical activation pattern for complex motor skills: A functional near- infrared spectroscopy study. Sci Rep 9, 14066 (2019). https://doi.org:10.1038/s41598-019-50644-9

      (14) Battaglia-Mayer, A. & Caminiti, R. Corticocortical Systems Underlying High-Order Motor Control. J Neurosci 39, 4404-4421 (2019). https://doi.org:10.1523/JNEUROSCI.2094-18.2019

      (15) Toni, I., Thoenissen, D. & Zilles, K. Movement preparation and motor intention. Neuroimage 14, S110-117 (2001). https://doi.org:10.1006/nimg.2001.0841

      (16) Wolpert, D. M., Goodbody, S. J. & Husain, M. Maintaining internal representations: the role of the human superior parietal lobe. Nat Neurosci 1, 529-533 (1998). https://doi.org:10.1038/2245

      (17) Andersen, R. A. & Buneo, C. A. Intentional maps in posterior parietal cortex. Annu Rev Neurosci 25, 189-220 (2002). https://doi.org:10.1146/annurev.neuro.25.112701.142922

      (18) Buneo, C. A. & Andersen, R. A. The posterior parietal cortex: sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia 44, 2594-2606 (2006). https://doi.org:10.1016/j.neuropsychologia.2005.10.011

      (19) Grover, S., Wen, W., Viswanathan, V., Gill, C. T. & Reinhart, R. M. G. Long-lasting, dissociable improvements in working memory and long-term memory in older adults with repetitive neuromodulation. Nat Neurosci 25, 1237-1246 (2022). https://doi.org:10.1038/s41593-022-01132-3

      (20) Colclough, G. L. et al. How reliable are MEG resting-state connectivity metrics? Neuroimage 138, 284-293 (2016). https://doi.org:10.1016/j.neuroimage.2016.05.070

      (21) Colclough, G. L., Brookes, M. J., Smith, S. M. & Woolrich, M. W. A symmetric multivariate leakage correction for MEG connectomes. NeuroImage 117, 439-448 (2015). https://doi.org:10.1016/j.neuroimage.2015.03.071

      (22) Mollazadeh, M. et al. Spatiotemporal variation of multiple neurophysiological signals in the primary motor cortex during dexterous reach-to-grasp movements. J Neurosci 31, 15531-15543 (2011). https://doi.org:10.1523/JNEUROSCI.2999-11.2011

      (23) Bansal, A. K., Vargas-Irwin, C. E., Truccolo, W. & Donoghue, J. P. Relationships among low-frequency local field potentials, spiking activity, and three-dimensional reach and grasp kinematics in primary motor and ventral premotor cortices. J Neurophysiol 105, 1603-1619 (2011). https://doi.org:10.1152/jn.00532.2010

      (24) Flint, R. D., Ethier, C., Oby, E. R., Miller, L. E. & Slutzky, M. W. Local field potentials allow accurate decoding of muscle activity. J Neurophysiol 108, 18-24 (2012). https://doi.org:10.1152/jn.00832.2011

      (25) Churchland, M. M. et al. Neural population dynamics during reaching. Nature 487, 51-56 (2012). https://doi.org:10.1038/nature11129

      (26) Bassett, D. S. et al. Dynamic reconfiguration of human brain networks during learning. Proc Natl Acad Sci U S A 108, 7641-7646 (2011). https://doi.org:10.1073/pnas.1018985108

      (27) Albouy, G., King, B. R., Maquet, P. & Doyon, J. Hippocampus and striatum: dynamics and interaction during acquisition and sleep-related motor sequence memory consolidation. Hippocampus 23, 985-1004 (2013). https://doi.org:10.1002/hipo.22183

      (28) Albouy, G. et al. Neural correlates of performance variability during motor sequence acquisition. Neuroimage 60, 324-331 (2012). https://doi.org:10.1016/j.neuroimage.2011.12.049

      (29) Qin, Y. L., McNaughton, B. L., Skaggs, W. E. & Barnes, C. A. Memory reprocessing in corticocortical and hippocampocortical neuronal ensembles. Philos Trans R Soc Lond B Biol Sci 352, 1525-1533 (1997). https://doi.org:10.1098/rstb.1997.0139

      (30) Euston, D. R., Tatsuno, M. & McNaughton, B. L. Fast-forward playback of recent memory sequences in prefrontal cortex during sleep. Science 318, 1147-1150 (2007). https://doi.org:10.1126/science.1148979

      (31) Molle, M. & Born, J. Hippocampus whispering in deep sleep to prefrontal cortex--for good memories? Neuron 61, 496-498 (2009). https://doi.org:10.1016/j.neuron.2009.02.002

      (32) Frankland, P. W. & Bontempi, B. The organization of recent and remote memories. Nat Rev Neurosci 6, 119-130 (2005). https://doi.org:10.1038/nrn1607

      (33) Jacobacci, F. et al. Rapid hippocampal plasticity supports motor sequence learning. Proc Natl Acad Sci U S A 117, 23898-23903 (2020). https://doi.org:10.1073/pnas.2009576117

      (34) Albouy, G. et al. Maintaining vs. enhancing motor sequence memories: respective roles of striatal and hippocampal systems. Neuroimage 108, 423-434 (2015). https://doi.org:10.1016/j.neuroimage.2014.12.049

      (35) Gais, S. et al. Sleep transforms the cerebral trace of declarative memories. Proc Natl Acad Sci U S A 104, 18778-18783 (2007). https://doi.org:10.1073/pnas.0705454104

      (36) Sterpenich, V. et al. Sleep promotes the neural reorganization of remote emotional memory. J Neurosci 29, 5143-5152 (2009). https://doi.org:10.1523/JNEUROSCI.0561-09.2009

      (37) Euston, D. R., Gruber, A. J. & McNaughton, B. L. The role of medial prefrontal cortex in memory and decision making. Neuron 76, 1057-1070 (2012). https://doi.org:10.1016/j.neuron.2012.12.002

      (38) van Kesteren, M. T., Fernandez, G., Norris, D. G. & Hermans, E. J. Persistent schema-dependent hippocampal-neocortical connectivity during memory encoding and postencoding rest in humans. Proc Natl Acad Sci U S A 107, 7550-7555 (2010). https://doi.org:10.1073/pnas.0914892107

      (39) van Kesteren, M. T., Ruiter, D. J., Fernandez, G. & Henson, R. N. How schema and novelty augment memory formation. Trends Neurosci 35, 211-219 (2012). https://doi.org:10.1016/j.tins.2012.02.001

      (40) Wagner, A. D. et al. Building memories: remembering and forgetting of verbal experiences as predicted by brain activity. Science (New York, N.Y.) 281, 1188-1191 (1998).

      (41) Ashe, J., Lungu, O. V., Basford, A. T. & Lu, X. Cortical control of motor sequences. Curr Opin Neurobiol 16, 213-221 (2006).

      (42) Hikosaka, O., Nakamura, K., Sakai, K. & Nakahara, H. Central mechanisms of motor skill learning. Curr Opin Neurobiol 12, 217-222 (2002).

      (43) Penhune, V. B. & Steele, C. J. Parallel contributions of cerebellar, striatal and M1 mechanisms to motor sequence learning. Behav. Brain Res. 226, 579-591 (2012). https://doi.org:10.1016/j.bbr.2011.09.044

      (44) Doyon, J. et al. Contributions of the basal ganglia and functionally related brain structures to motor learning. Behavioural brain research 199, 61-75 (2009). https://doi.org:10.1016/j.bbr.2008.11.012

      (45) Schendan, H. E., Searl, M. M., Melrose, R. J. & Stern, C. E. An FMRI study of the role of the medial temporal lobe in implicit and explicit sequence learning. Neuron 37, 1013-1025 (2003). https://doi.org:10.1016/s0896-6273(03)00123-5

      (46) Morris, R. G. M. Elements of a neurobiological theory of hippocampal function: the role of synaptic plasticity, synaptic tagging and schemas. The European journal of neuroscience 23, 2829-2846 (2006). https://doi.org:10.1111/j.1460-9568.2006.04888.x

      (47) Tse, D. et al. Schemas and memory consolidation. Science 316, 76-82 (2007). https://doi.org:10.1126/science.1135935

      (48) Berlot, E., Popp, N. J. & Diedrichsen, J. A critical re-evaluation of fMRI signatures of motor sequence learning. Elife 9 (2020). https://doi.org:10.7554/eLife.55241

      (49) Bönstrup, M. et al. A Rapid Form of Offline Consolidation in Skill Learning. Curr Biol 29, 1346-1351 e1344 (2019). https://doi.org:10.1016/j.cub.2019.02.049

      (50) Kornysheva, K. et al. Neural Competitive Queuing of Ordinal Structure Underlies Skilled Sequential Action. Neuron 101, 1166-1180 e1163 (2019). https://doi.org:10.1016/j.neuron.2019.01.018

      (51) Pan, S. C. & Rickard, T. C. Sleep and motor learning: Is there room for consolidation? Psychol Bull 141, 812-834 (2015). https://doi.org:10.1037/bul0000009

      (52) Rickard, T. C., Cai, D. J., Rieth, C. A., Jones, J. & Ard, M. C. Sleep does not enhance motor sequence learning. J Exp Psychol Learn Mem Cogn 34, 834-842 (2008). https://doi.org:10.1037/0278-7393.34.4.834

      (53) Brawn, T. P., Fenn, K. M., Nusbaum, H. C. & Margoliash, D. Consolidating the effects of waking and sleep on motor-sequence learning. J Neurosci 30, 13977-13982 (2010). https://doi.org:10.1523/JNEUROSCI.3295-10.2010

      (54) Bönstrup, M., Iturrate, I., Hebart, M. N., Censor, N. & Cohen, L. G. Mechanisms of offline motor learning at a microscale of seconds in large-scale crowdsourced data. NPJ Sci Learn 5, 7 (2020). https://doi.org:10.1038/s41539-020-0066-9

      (55) Gupta, M. W. & Rickard, T. C. Dissipation of reactive inhibition is sufficient to explain post-rest improvements in motor sequence learning. NPJ Sci Learn 7, 25 (2022). https://doi.org:10.1038/s41539-022-00140-z

      (56) Jacobacci, F. et al. Rapid hippocampal plasticity supports motor sequence learning. Proceedings of the National Academy of Sciences 117, 23898-23903 (2020).

      (57) Brooks, E., Wallis, S., Hendrikse, J. & Coxon, J. Micro-consolidation occurs when learning an implicit motor sequence, but is not influenced by HIIT exercise. NPJ Sci Learn 9, 23 (2024). https://doi.org:10.1038/s41539-024-00238-6

      (58) Deleglise, A. et al. Human motor sequence learning drives transient changes in network topology and hippocampal connectivity early during memory consolidation. Cereb Cortex 33, 6120-6131 (2023). https://doi.org:10.1093/cercor/bhac489

      (59) Buzsaki, G. Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus 25, 1073-1188 (2015). https://doi.org:10.1002/hipo.22488

      (60) Gupta, M. W. & Rickard, T. C. Comparison of online, offline, and hybrid hypotheses of motor sequence learning using a quantitative model that incorporates reactive inhibition. Sci Rep 14, 4661 (2024). https://doi.org:10.1038/s41598-024-52726-9

      (61) Das, A., Karagiorgis, A., Diedrichsen, J., Stenner, M.-P. & Azanon, E. “Micro-offline gains” convey no benefit for motor skill learning. bioRxiv, 2024.2007.2011.602795 (2024). https://doi.org:10.1101/2024.07.11.602795

      (62) Mylonas, D. et al. Maintenance of Procedural Motor Memory across Brief Rest Periods Requires the Hippocampus. J Neurosci 44 (2024). https://doi.org:10.1523/JNEUROSCI.1839-23.2024

    1. Author Response

      Reviewer #1 (Public Review):

      Summary:

      By examining the prevalence of interactions with ancient amino acids of coenzymes in ancient versus recent folds, the authors noticed an increased interaction propensity for ancient interactions. They infer from this that coenzymes might have played an important role in prebiotic proteins.

      Strengths:

      (1) The analysis, which is very straightforward, is technically correct. However, the conclusions might not be as strong as presented.

      (2) This paper presents an excellent summary of contemporary thought on what might have constituted prebiotic proteins and their properties.

      (3) The paper is clearly written.

      We are grateful for the kind comments of the reviewer on our manuscript. However, we would like to clarify a possible misunderstanding in the summary of our study. Specifically, an analysis of "ancient versus recent folds" was not reported in our results. Our analysis concerned "coenzyme age" rather than "protein fold age" and focused mainly on interactions with early vs. late amino acids in protein sequences. While structural propensities of the coenzyme binding sites were also analyzed, no distinction at the level of ancient vs. recent folds was assumed; this was only commented on in the discussion, based on previous work of others.

      Weaknesses:

      (1) The conclusions might not be as strong as presented. First of all, while ancient amino acids interact less frequently in late with a given coenzyme, maybe this just reflects the fact that proteins that evolved later might be using residues that have a more favorable binding free energy.

      We would like to point out that there was no distinction between proteins that evolved early or late in our dataset of coenzyme-binding proteins. The aim of our analysis was purely to observe trends in the age of amino acids vs. the age of coenzymes. While no direct inference can be made from this about early life, as all the proteins are from extant life (as highlighted in the discussion of our work), our goal was to look for intrinsic propensities of early vs. late amino acids in binding to the different coenzyme entities. Indeed, very early interactions would be smeared by eons of evolutionary history (perhaps also towards more favourable binding free energy, as pointed out by the reviewer). Nevertheless, significant trends were recorded across the PDB dataset, pointing to different propensities and mechanistic properties of the binding events. Rather than to a specific evolutionary past, our data therefore point to a “capacity” of the early amino acids to bind certain coenzymes, and we believe that this is the major (and standing) conclusion of our work, along with the properties of such interactions. In our revised version, we will carefully go through all the conclusions and make sure that this message stands out, but we are confident that the following concluding sentences, copied from the abstract and the discussion of our manuscript, fully comply with our data:

      “These results imply the plausibility of a coenzyme-peptide functional collaboration preceding the establishment of the Central Dogma and full protein alphabet evolution”

      “While no direct inferences about distant evolutionary past can be drawn from the analysis of extant proteins, the principles guiding these interactions can imply their potential prebiotic feasibility and significance.”

      “This implies that late amino acids would not be necessarily needed for the sovereignty of coenzyme-peptide interplay.”

      We would also like to add that proteins that evolved later might not always have a more favourable binding free energy. Musil et al., 2021 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8294521/) showed, using the example of the haloalkane dehalogenase DhaA, that ancestral sequence reconstruction is a powerful tool for designing more stable, but also more active, proteins. Ancestral sequence reconstruction relies on inferring ancient states of protein families to suggest mutations that lead to more stable proteins than currently existing ones. Their study did not explore ligand-protein interactions specifically, but showed that ancient states often display more favourable properties than modern proteins.

      (2) What about other small molecules that existed in the probiotic soup? Do they also prefer such ancient amino acids? If so, this might reflect the interaction propensity of specific amino acids rather than the inferred important role of coenzymes.

      We appreciate the reviewer's comment regarding other small molecules, which we assume points mainly towards metal ions (i.e., inorganic cofactors). We completely agree with the reviewer that such interactions are of utmost importance to the origins of life. They were intentionally not part of our study, as they have already been studied previously by others (e.g., Bromberg et al., 2022; reviewed in Frenkel-Pinter et al., 2020) and also by us (Fried et al., 2022). For example, it is noteworthy that prebiotically relevant metal binding sites (e.g., of Mg2+) exhibit enrichment in early amino acids such as Asp and Glu, while more recent metal (e.g., Cu and Zn) sites are enriched in the late amino acids His and Cys (Fried et al., 2022). At the same time, comparable analyses of amino acid-coenzyme trends were not available.

      Nevertheless, the involvement of metal ions in the coenzyme binding sites was also studied here and pointed to their greater involvement with the Ancient coenzymes. In the revised version of the manuscript, we will be happy to expand the discussion of the studies concerning inorganic cofactors.

      (3) Perhaps the conclusions just reflect the types of active sites that evolved first and nothing more.

      We partly agree with the reviewer on this point, but not on why it is listed as a weakness of our study, nor with the “nothing more” notion. Understanding the properties of the earliest binding sites is key to bridging the gap between prebiotic chemistry and biochemistry. The potential of peptides preceding ribosomal synthesis (and the evolution of the full alphabet), along with prebiotically plausible coenzymes, addresses exactly this gap, which is currently not understood.

      Reviewer #2 (Public Review):

      I enjoyed reading this paper and appreciate the careful analysis performed by the investigators examining whether 'ancient' cofactors are preferentially bound by the first-available amino acids, and whether later 'LUCA' cofactors are bound by the late-arriving amino acids. I've always found this question fascinating as there is a contradiction in inorganic metal-protein complexes (not what is focused on here). Metal coordination of Fe, Ni heavily relies on softer ligands like His and Cys - which are by most models latecomer amino acids. There are no traces of thiols or imidazoles in meteorites - although work by Dvorkin has indicated that could very well be due to acid degradation during extraction. Chris Dupont (PNAS 2005) showed that metal speciation in the early earth (such as proposed by Anbar and prior RJP Williams) matched the purported order of fold emergence.

      As such, cofactor-protein interactions as a driving force for evolution has always made sense to me and I admittedly read this paper biased in its favor. But to make sure, I started to play around with the data that the authors kindly and importantly shared in the supplementary files. Here's what I found:

      Point 1: The correlation between abundance of amino acids and protein age is dominated by glycine. There is a small, but visible difference in old vs new amino acid fractional abundance between Ancient and LUCA proteins (Figure 3, Supplementary Table 3). However, the bias is not evenly distributed among the amino acids - which Figure 4A shows but is hard to digest as presented. So instead I used the spreadsheet in Supplement 3 to calculate the fractional difference FDaa = F(old aa)-F(new aa). As expected from Figure 3, the mean FD for Ancient is greater than the mean FD for LUCA. But when you look at the same table for each amino acid FDcofactor = F(ancient cofactor) - F(LUCA cofactor), you now see that the bias is not evenly distributed between older and newer amino acids at all. In fact, most of the difference can be explained by glycine (FDcofactor = 3.8) and the rest by also including tryptophan (FDcofactor = -3.8). If you remove these two amino acids from the analysis, the trend seen in Figure 3 all but disappears.

      Troubling - so you might argue that Gly is the oldest of the old and Trp is the newest of the new so the argument still stands. Unfortunately, Gly is a lot of things - flexible, small, polar - so what is the real correlation, age, or chemistry? This leads to point 2.

      We truly acknowledge the effort that the reviewer made in re-analyzing the data, and we are grateful for the thoughtful, deeper analysis. We agree that this deserves further discussion of our data. As invited by the reviewer, we repeated the analysis on the whole dataset. First, we would like to point out that the reviewer was most probably referring to Supplementary Fig. 2 (and not 3, which concerns protein folds). While the difference between Ancient and LUCA coenzyme binding is indeed most pronounced for Gly and Trp, we could not confirm that the trend disappears if those two amino acids are removed from the analysis (additional FDcofactor values of 3.2 and -3.2 are observed for the early and late amino acids, respectively), as seen in Table I below. The main additional contributors to this effect are Asp (FD of 2.1) and Ser (FD of 1.8) among the early amino acids, and Arg (FD of -2.6) and Cys (FD of -1.7) among the late amino acids. Hence, while we agree with the reviewer that Gly and Trp (the oldest and the youngest) contribute most to this effect, we disagree that the trend reduces to these two amino acids.

      In addition, the most recent coenzyme temporality (Post-LUCA) was not included in the reviewer's analysis. The difference between F(old) and F(new) is even more pronounced in Post-LUCA than in LUCA, relative to Ancient (Table II), and depends much less on Trp. Meanwhile, Asp, Ser, Leu, Phe and Arg dominate the observed phenomenon (Table I). This further supports our disagreement with the reviewer's point. Nevertheless, we remain grateful for this discussion and will happily include this additional analysis in the Supplementary Material of our revised manuscript.

      Author response table 1.

      Amino acid fractional difference of all coenzymes at residue level

      Author response table 2.

      Amino acid fractional difference of all coenzymes
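As an aside for readers who wish to reproduce this check, the fractional-difference arithmetic can be sketched as follows. This is a minimal illustration only: the early/late amino-acid split shown is the commonly assumed consensus ordering, and the example fractions are placeholders, not the actual Supplementary Table values.

```python
# Sketch of the fractional-difference (FD) calculations discussed above.
# The early/late amino-acid split is the commonly assumed consensus ordering;
# all fraction values in examples are illustrative placeholders.
EARLY = {"G", "A", "D", "E", "V", "S", "P", "I", "L", "T"}  # "old" amino acids
LATE = {"R", "C", "W", "H", "K", "M", "F", "N", "Q", "Y"}   # "new" amino acids

def fd_age(fractions):
    """FD = F(early) - F(late) for one coenzyme class.

    `fractions` maps one-letter amino-acid codes to fractional abundances
    (summing to 1) at the coenzyme-binding interface.
    """
    f_early = sum(v for aa, v in fractions.items() if aa in EARLY)
    f_late = sum(v for aa, v in fractions.items() if aa in LATE)
    return f_early - f_late

def fd_cofactor(ancient, luca):
    """Per-amino-acid FDcofactor = F(Ancient) - F(LUCA)."""
    return {aa: ancient.get(aa, 0.0) - luca.get(aa, 0.0)
            for aa in set(ancient) | set(luca)}
```

Running `fd_age` separately on each coenzyme class, with and without Gly/Trp in the `EARLY`/`LATE` sets, reproduces the kind of comparison reported in Tables I and II.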

      Point 2 - The correlation is dominated by phosphate.

      In the ancient cofactor list, all but 4 comprise at least one phosphate (SAM, tetrahydrofolic acid, biopterin, and heme). Except for SAM, the rest have very low Gly abundance. The overall high Gly abundance in the ancient enzymes is due to the chemical property of glycine that it can occupy the right-hand side of the Ramachandran plot. This allows it to make the alternating αL-αR conformation of the P-loop forming Milner-White's anionic nest. If you remove phosphate-binding folds from the analysis, the trend in Figure 3 vanishes.

      Likewise, Trp is an important functional residue for binding quinones and tuning its redox potential. The LUCA cofactor set is dominated by quinone and derivatives, which likely drives up the new amino acid score for this class of cofactors.

      Once again, we are thankful to the reviewer for raising this point. The role of Gly in the anionic nests proposed by Milner-White and Russell, as well as the role of Trp in quinone binding, are important points that we would be happy to highlight more in the discussion of the revised manuscript. Nevertheless, we disagree that the trends reduce only to the phosphate-containing coenzymes and, importantly, that “the trend in Figure 3 vanishes” upon their removal. Tables III and IV (below) show the data for coenzymes excluding those with a phosphate moiety, and the trend in Fig. 3 remains, albeit less pronounced.

      Author response table 3.

      Amino acid fractional difference of non-phosphate containing coenzymes

      Author response table 4.

      Amino acid fractional difference of non-phosphate containing coenzymes at residue level

      In summary, while I still believe the premise that cofactors drove the shape of peptides and the folds that came from them - and that Rossmann folds are ancient phosphate-binding proteins, this analysis does not really bring anything new to these ideas that have already been stated by Tawfik/Longo, Milner-White/Russell, and many others.

      I did this analysis ad hoc on a slice of the data the authors provided and could easily have missed something and I encourage the authors to check my work. If it holds up it should be noted that negative results can often be as informative as strong positive ones. I think the signal here is too weak to see in the noise using the current approach.

      We are grateful to the reviewer for encouraging further look at our data. While we hope that the analysis on the whole dataset (listed in Tables I - IV) will change the reviewer’s standpoint on our work, we would still like to comment on the questioned novelty of our results. In fact, the extraordinary works by Tawfik/Longo and Milner-White/Russell (which were cited in our manuscript multiple times) presented one of the motivations for this study. We take the opportunity to copy the part of our discussion that specifically highlights the relevance of their studies, and points out the contribution of our work with respect to theirs.

      “While all the coenzymes bind preferentially to protein residue sidechains, more backbone interactions appear in the ancient coenzyme class when compared to others. This supports an earlier hypothesis that functions of the earliest peptides (possibly of variable compositions and lengths) would be performed with the assistance of the main chain atoms rather than their sidechains (Milner-White and Russell 2011). Longo et al. recently analyzed binding sites of different phosphate-containing ligands which were arguably of high relevance during earliest stages of life, connecting all of today’s core metabolism (Longo et al., 2020 (b)). They observed that unlike the evolutionary younger binding motifs (which rely on sidechain binding), the most ancient lineages indeed bind to phosphate moieties predominantly via the protein backbone. Our analysis assigns this phenomenon primarily to interactions via early amino acids that (as mentioned above) are generally enriched in the binding interface of the ancient coenzymes. This implies that late amino acids would not be necessarily needed for the sovereignty of coenzyme-peptide interplay.”

      Unlike any previous work, our study involves all the major coenzymes (not just the phosphate-containing ones) and is based on their evolutionary age, as well as the age of amino acids. It is the first PDB-wide systematic evolutionary analysis of coenzyme-amino acid binding. Besides confirming some earlier theoretical assertions (such as the role of backbone interactions in early peptide-coenzyme evolution) and observations (such as the occurrence of the ancient phosphate-containing coenzymes in the oldest protein folds), it uncovers substantial novel knowledge. For example: (i) enrichment of early amino acids in the binding of ancient coenzymes, vs. enrichment of late amino acids in the binding of LUCA and Post-LUCA coenzymes; (ii) the trends in secondary structure content of the binding sites of coenzymes of different temporalities; (iii) increased involvement of metal ions in ancient coenzyme binding events; and (iv) the capacity of early amino acids alone to bind ancient coenzymes. In our humble opinion, all of these points make important contributions toward closing the peptide-coenzyme knowledge gap discussed in a number of previous studies.

    1. Author response:

      eLife assessment

      This potentially useful study involves neuro-imaging and electrophysiology in a small cohort of congenital cataract patients after sight recovery and age-matched control participants with normal sight. It aims to characterize the effects of early visual deprivation on excitatory and inhibitory balance in the visual cortex. While the findings are taken to suggest the existence of persistent alterations in Glx/GABA ratio and aperiodic EEG signals, the evidence supporting these claims is incomplete. Specifically, small sample sizes, lack of a specific control cohort, and other methodological limitations will likely restrict the usefulness of the work, with relevance limited to scientists working in this particular subfield.

      As pointed out in the public reviews, there are only very few human models which allow for assessing the role of early experience on neural circuit development. While the prevalent research in permanent congenital blindness reveals the response and adaptation of the developing brain to an atypical situation (blindness), research in sight restoration addresses the question of whether and how atypical development can be remediated if typical experience (vision) is restored. The literature on the role of visual experience in the development of E/I balance in humans, assessed via Magnetic Resonance Spectroscopy (MRS), has been limited to a few studies on congenital permanent blindness. Thus, we assessed sight recovery individuals with a history of congenital blindness, as limited evidence from other researchers indicated that the visual cortex E/I ratio might differ compared to normally sighted controls.

      Individuals with total bilateral congenital cataracts who remained untreated until later in life are extremely rare, particularly if only carefully diagnosed patients are included in a study sample. A sample size of 10 patients is, at the very least, typical of past studies in this population, even for exclusively behavioral assessments. In the present study, in addition to behavioral assessment as an indirect measure of sensitive periods, we investigated participants with two neuroimaging methods (Magnetic Resonance Spectroscopy and electroencephalography) to directly assess the neural correlates of sensitive periods in humans. The electroencephalography data allowed us to link the results of our small sample to findings documented in large cohorts of both sight-recovery individuals and permanently congenitally blind individuals. As pointed out in a recent editorial recommending an “exploration-then-estimation procedure” (“Consideration of Sample Size in Neuroscience Studies,” 2020), exploratory studies like ours provide crucial direction and specific hypotheses for future work.

      We included an age-matched sighted control group recruited from the same community, measured in the same scanner and laboratory, to assess whether early experience is necessary for a typical excitatory/inhibitory (E/I) ratio to emerge in adulthood. The present findings indicate that this is indeed the case. Based on these results, a possible question to answer in future work, with individuals who had developmental cataracts, is whether later visual deprivation causes similar effects. Note that even if visual deprivation at a later stage in life caused similar effects, the current results would not be invalidated; by contrast, they are essential to understand future work on late (permanent or transient) blindness.

      Thus, we think that the present manuscript has far reaching implications for our understanding of the conditions under which E/I balance, a crucial characteristic of brain functioning, emerges in humans.

      Finally, our manuscript is one of the first few studies which relates MRS neurotransmitter concentrations to parameters of EEG aperiodic activity. Since present research has been using aperiodic activity as a correlate of the E/I ratio, and partially of higher cognitive functions, we think that our manuscript additionally contributes to a better understanding of what might be measured with aperiodic neurophysiological activity.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      In this human neuroimaging and electrophysiology study, the authors aimed to characterize the effects of a period of visual deprivation in the sensitive period on excitatory and inhibitory balance in the visual cortex. They attempted to do so by comparing neurochemistry conditions ('eyes open', 'eyes closed') and resting state, and visually evoked EEG activity between ten congenital cataract patients with recovered sight (CC), and ten age-matched control participants (SC) with normal sight.

      First, they used magnetic resonance spectroscopy to measure in vivo neurochemistry from two locations, the primary location of interest in the visual cortex, and a control location in the frontal cortex. Such voxels are used to provide a control for the spatial specificity of any effects because the single-voxel MRS method provides a single sampling location. Using MR-visible proxies of excitatory and inhibitory neurotransmission, Glx and GABA+ respectively, the authors report no group effects in GABA+ or Glx, no difference in the functional conditions 'eyes closed' and 'eyes open'. They found an effect of the group in the ratio of Glx/GABA+ and no similar effect in the control voxel location. They then performed multiple exploratory correlations between MRS measures and visual acuity, and reported a weak positive correlation between the 'eyes open' condition and visual acuity in CC participants.

      The same participants then took part in an EEG experiment. The authors selected only two electrodes placed in the visual cortex for analysis and reported a group difference in an EEG index of neural activity, the aperiodic intercept, as well as the aperiodic slope, considered a proxy for cortical inhibition. They report an exploratory correlation between the aperiodic intercept and Glx in one out of three EEG conditions.

      The authors report the difference in E/I ratio, and interpret the lower E/I ratio as representing an adaptation to visual deprivation, which would have initially caused a higher E/I ratio. Although intriguing, the strength of evidence in support of this view is not strong. Amongst the limitations are the low sample size, a critical control cohort that could provide evidence for a higher E/I ratio in CC patients without recovered sight for example, and lower data quality in the control voxel.

      Strengths of study:

      How sensitive period experience shapes the developing brain is an enduring and important question in neuroscience. This question has been particularly difficult to investigate in humans. The authors recruited a small number of sight-recovered participants with bilateral congenital cataracts to investigate the effect of sensitive period deprivation on the balance of excitation and inhibition in the visual brain using measures of brain chemistry and brain electrophysiology. The research is novel, and the paper was interesting and well-written.

      Limitations:

      (1.1) Low sample size. Ten for CC and ten for SC, and a further two SC participants were rejected due to a lack of frontal control voxel data. The sample size limits the statistical power of the dataset and increases the likelihood of effect inflation.

      Applying strict criteria, we included in the CC group only individuals who were born without patterned vision. The population of individuals who have remained untreated past infancy is small in India, despite a higher prevalence of childhood cataract than in Germany. Indeed, of the original 11 CC and 11 SC participants tested, one participant each from the CC and SC groups had to be excluded because their data had been corrupted, resulting in 10 participants per group.

      It was a challenge to recruit participants from this rare group with no history of neurological diagnosis/intake of neuromodulatory medications, who were able and willing to undergo both MRS and EEG. For this study, data collection took more than 1.5 years.

      We safeguarded the validity of our results in two ways. First, we assessed not only MRS but also EEG measures of the E/I ratio. The latter allowed us to link our results to a larger population of CC individuals; that is, we replicated in our sub-group the results of a larger group of 38 individuals (Ossandón et al., 2023).

      Second, we included a control voxel. As predicted, all group effects were restricted to the occipital voxel.

      (1.2) Lack of specific control cohort. The control cohort has normal vision. The control cohort is not specific enough to distinguish between people with sight loss due to different causes and patients with congenital cataracts with co-morbidities. Further data from more specific populations, such as patients whose cataracts have not been removed, with developmental cataracts, or congenitally blind participants, would greatly improve the interpretability of the main finding. The lack of a more specific control cohort is a major caveat that limits a conclusive interpretation of the results.

      The existing work on visual deprivation and neurochemical changes, as assessed with MRS, has been limited to permanent congenital blindness. In fact, most of the studies on permanent blindness included only congenitally blind or early blind humans (Coullon et al., 2015; Weaver et al., 2013), or, in separate studies, only late-blind individuals (Bernabeu et al., 2009). Accordingly, we started with the most “extreme” visual deprivation model, sight recovery after congenital blindness. If we had not observed any group difference compared to normally sighted controls, investigating other groups might have been trivial. Based on our results, subsequent studies in late blind individuals, and then individuals with developmental cataracts, can be planned with clear hypotheses.

      (1.3) MRS data quality differences. Data quality in the control voxel appears worse than in the visual cortex voxel. The frontal cortex MRS spectrum shows far broader linewidth than the visual cortex (Supplementary Figures). Compared to the visual voxel, the frontal cortex voxel has less defined Glx and GABA+ peaks; lower GABA+ and Glx concentrations, lower NAA SNR values; lower NAA concentrations. If the data quality is a lot worse in the FC, then small effects may not be detectable.

      Worse data quality in the frontal than the visual cortex has been repeatedly observed in the MRS literature, attributable to magnetic field distortions (Juchem & Graaf, 2017) resulting from the proximity of the region to the sinuses (recent example: Rideaux et al., 2022). Nevertheless, we chose the frontal control region rather than a parietal voxel, given the potential neurochemical changes in multisensory regions of the parietal cortex due to blindness. Such reorganization would be less likely in frontal areas associated with higher cognitive functions. Further, prior MRS studies of the visual cortex have used the frontal cortex as a control region as well (Pitchaimuthu et al., 2017; Rideaux et al., 2022).

      In the present study, we checked that the frontal cortex datasets for Glx and GABA+ concentrations were of sufficient quality: the fit error was below 8.31% in both groups (Supplementary Material S3). For reference, Mikkelsen et al. reported a mean GABA+ fit error of 6.24 +/- 1.95% from a posterior cingulate cortex voxel across 8 GE scanners, using the Gannet pipeline. No absolute cutoffs have been proposed for fit errors. However, MRS studies in special populations (I/E ratio assessed in narcolepsy (Gao et al., 2024), GABA concentration assessed in Autism Spectrum Disorder (Maier et al., 2022)) have used frontal cortex data with a fit error of <10% to identify differences between cohorts (Gao et al., 2024; Pitchaimuthu et al., 2017). Based on the literature, MRS data from the frontal voxel of the present study would have been of sufficient quality to uncover group differences.

      In the revised manuscript, we will add the recently published MRS quality assessment form to the supplementary materials. Additionally, we would like to allude to our apriori prediction of group differences for the visual cortex, but not for the frontal cortex voxel.

      (1.4) Because of the direction of the difference in E/I, the authors interpret their findings as representing signatures of sight improvement after surgery without further evidence, either within the study or from the literature. However, the literature suggests that plasticity and visual deprivation drive the E/I index up rather than down. Decreasing GABA+ is thought to facilitate experience-dependent remodelling. What evidence is there that cortical inhibition increases in response to a visual cortex that is over-sensitised due to congenital cataracts? Without further experimental or literature support this interpretation remains very speculative.

      Indeed, higher inhibition was not predicted, which we attempt to reconcile in our discussion section. We base our discussion mainly on the non-human animal literature, which has shown evidence of homeostatic changes after prolonged visual deprivation in the adult brain (Barnes et al., 2015). It is also interesting to note that after monocular deprivation in adult humans, resting GABA+ levels decreased in the visual cortex (Lunghi et al., 2015). Assuming that after delayed sight restoration, adult neuroplasticity mechanisms must be employed, these studies would predict a “balancing” of the increased excitatory drive following sight restoration by a commensurate increase in inhibition (Keck et al., 2017). Additionally, the EEG results of the present study allowed for speculation regarding the underlying neural mechanisms of an altered E/I ratio. The aperiodic EEG activity suggested higher spontaneous spiking (increased intercept) and increased inhibition (steeper aperiodic slope between 1-20 Hz) in CC vs SC individuals (Ossandón et al., 2023).
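      As a minimal sketch of what the two aperiodic parameters capture, the intercept and slope can be approximated by a straight-line fit to log power vs. log frequency over the 1-20 Hz range; dedicated tools such as FOOOF/specparam fit this more robustly alongside periodic peaks. The spectrum below is synthetic and purely illustrative.

```python
# Minimal sketch: estimate the aperiodic intercept and slope of an EEG
# power spectrum as a linear fit in log-log space (1-20 Hz).
import numpy as np

rng = np.random.default_rng(0)
freqs = np.arange(1.0, 20.5, 0.5)             # 1-20 Hz in 0.5 Hz steps
true_intercept, true_exponent = 2.0, 1.5      # log10 offset; 1/f exponent
log_power = true_intercept - true_exponent * np.log10(freqs)
log_power += rng.normal(0, 0.05, freqs.size)  # simulated measurement noise

# Straight-line fit in log-log space: the fitted slope is minus the 1/f
# exponent; the fitted offset is the aperiodic intercept.
slope, intercept = np.polyfit(np.log10(freqs), log_power, 1)
exponent = -slope
# A steeper slope (larger exponent) is commonly read as relatively more
# inhibition; a higher intercept as more broadband spiking activity.
```

      On this construction, group comparisons of "slope" and "intercept" reduce to comparing the two fitted coefficients per participant and condition.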

      In the revised manuscript, we will more clearly indicate that these speculations are based primarily on non-human animal work, due to the lack of human studies on the subject.

      (1.5) Heterogeneity in the patient group. Congenital cataract (CC) patients experienced a variety of duration of visual impairment and were of different ages. They presented with co-morbidities (absorbed lens, strabismus, nystagmus). Strabismus has been associated with abnormalities in GABAergic inhibition in the visual cortex. The possible interactions with residual vision and confounds of co-morbidities are not experimentally controlled for in the correlations, and not discussed.

      The goal of the present study was to assess whether we would observe changes in the E/I ratio after restoring vision at all. We would not have included patients without nystagmus in the CC group of the present study, since it would have been unlikely that they had experienced congenital patterned visual deprivation. Amongst diagnosticians, nystagmus or strabismus might not be considered genuine “comorbidities” that emerge in people with congenital cataracts. Rather, these are consequences of congenital visual deprivation, which we employed as diagnostic criteria. Similarly, absorbed lenses are clear signs that cataracts were congenital. As in other models of experience-dependent brain development (e.g., the extant literature on congenital permanent blindness, including anophthalmic individuals; Coullon et al., 2015; Weaver et al., 2013), some uncertainty remains regarding whether the (remaining, in our case) abnormalities of the eye, or the blindness they caused, are the factors driving neural changes. In the case of people with reversed congenital cataracts, at least the retina is considered to be intact, as they would otherwise not have received cataract removal surgery.

      However, we consider it unlikely that strabismus caused the group differences, because the present study shows group differences in the Glx/GABA+ ratio at rest, irrespective of whether the eyes were open or closed, whereas strabismus would have been expected to produce condition-specific effects. By contrast, the link between GABA concentration and, for example, interocular suppression in strabismus has so far been documented only during visual stimulation (Mukerji et al., 2022; Sengpiel et al., 2006), and differed in direction depending on the amblyopic vs. non-amblyopic eye. Further, one MRS study did not find group differences in GABA concentration between the visual cortices of 16 amblyopic individuals and sighted controls (Mukerji et al., 2022), supporting the view that the differences in Glx/GABA+ concentration we observed were driven by congenital deprivation, and not by amblyopia-associated visual acuity or eye movement differences.

      In the revised manuscript, we will discuss the inclusion criteria in more detail, and the aforementioned reasons why our data remains interpretable.

      (1.6) Multiple exploratory correlations were performed to relate MRS measures to visual acuity (shown in Supplementary Materials), and only specific ones were shown in the main document. The authors describe the analysis as exploratory in the 'Methods' section. Furthermore, the correlation between visual acuity and E/I metric is weak, and not corrected for multiple comparisons. The results should be presented as preliminary, as no strong conclusions can be made from them. They can provide a hypothesis to test in a future study.

      In the revised manuscript, we will clearly indicate that the exploratory correlation analyses are reported to put forth hypotheses for future studies.

      (1.7) P.16 Given the correlation of the aperiodic intercept with age ("Age negatively correlated with the aperiodic intercept across CC and SC individuals, that is, a flattening of the intercept was observed with age"), age needs to be controlled for in the correlation between neurochemistry and the aperiodic intercept. Glx has also been shown to negatively correlate with age.

      The correlation between chronological age and aperiodic intercept was observed across groups, but the correlation between Glx and the intercept of the aperiodic EEG activity was seen only in the CC group, even though the SC group was matched for age. Thus, such a correlation was very unlikely to be predominantly driven by an effect of chronological age.

      In the revised manuscript, we will add the linear regressions reported below, which include age as a covariate, for the relationship between the aperiodic intercept and Glx concentration in the CC group.

      a. A linear regression was conducted within the CC group to predict the intercept during visual stimulation based on age and visual cortex Glx concentration. The results indicated that the model explained a significant proportion of the variance in the aperiodic intercept, R² = 0.82, F(2,7) = 16.1, p = 0.0024. Note that the coefficient for age was not significant, β = 0.007, t(7) = 0.82, p = 0.439. The regression coefficients and their respective statistics are presented in Author response table 1.

      Author response table 1.

      Regression Analysis Summary for Predicting Aperiodic Intercept (Visual Stimulation) in the CC group

      b. A linear regression was conducted to predict the intercept during eye opening at rest based on age and visual cortex Glx concentration. The results indicated that the model explained a significant proportion of the variance in the aperiodic intercept, R² = 0.842, F(2,7) = 18.6, p = 0.00159. Note that the coefficient for age was not significant, β = −0.005, t(7) = −0.90, p = 0.400. The regression coefficients and their respective statistics are presented in Author response table 2.

      Author response table 2.

      Regression Analysis Summary for Predicting Aperiodic Intercept (Eyes Open) in the CC group

      c. Given that the Glx coefficient is significant in both models and age does not significantly predict either outcome, it can be concluded that Glx independently predicts the aperiodic intercept.
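      The covariate check described in (a) and (b) can be sketched with ordinary least squares. The data below are synthetic (the simulated intercept depends on Glx but not on age), and all variable names are illustrative, not the study's data.

```python
# Sketch: predict the aperiodic intercept from Glx with age as a covariate,
# via ordinary least squares on a synthetic 10-participant sample.
import numpy as np

rng = np.random.default_rng(1)
n = 10
age = rng.uniform(18, 45, n)                    # years
glx = rng.uniform(6, 10, n)                     # arbitrary concentration units
aperiodic_intercept = 0.5 + 0.3 * glx + rng.normal(0, 0.02, n)

# Design matrix: constant term, age, Glx.
X = np.column_stack([np.ones(n), age, glx])
beta, *_ = np.linalg.lstsq(X, aperiodic_intercept, rcond=None)
b0, b_age, b_glx = beta
# b_glx should recover the simulated effect (~0.3) while b_age stays near
# zero, mirroring the pattern of a significant Glx and non-significant age
# coefficient in the same model.
```

      In practice one would additionally obtain standard errors and t-statistics for each coefficient (e.g. with a statistics package) rather than only the point estimates shown here.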

      (1.8) Multiple exploratory correlations were performed to relate MRS to EEG measures (shown in Supplementary Materials), and only specific ones were shown in the main document. Given the multiple measures from the MRS, the correlations with the EEG measures were exploratory, as stated in the text, p.16, and in Figure 4. Yet the introduction said that there was a prior hypothesis "We further hypothesized that neurotransmitter changes would relate to changes in the slope and intercept of the EEG aperiodic activity in the same subjects." It would be great if the text could be revised for consistency and the analysis described as exploratory.

      In the revised manuscript, we will improve the phrasing. We consider the correlation analyses as exploratory due to our sample size and the absence of prior work. However, we did hypothesize that both MRS and EEG markers would concurrently be altered in CC vs SC individuals.

      (1.9) The analysis for the EEG needs to take more advantage of the available data. As far as I understand, only two electrodes were used, yet far more were available as seen in their previous study (Ossandon et al., 2023). The spatial specificity is not established. The authors could use the frontal cortex electrode (FP1, FP2) signals as a control for spatial specificity in the group effects, or even better, all available electrodes and correct for multiple comparisons. Furthermore, they could use the aperiodic intercept vs Glx in SC to evaluate the specificity of the correlation to CC.

      The aperiodic intercept and slope did not differ between CC and SC individuals for Fp1 and Fp2, suggesting the spatial specificity of the results. In the revised manuscript, we will add this analysis to the supplementary material.

      Author response image 1.

      Aperiodic intercept (top) and slope (bottom) for congenital cataract-reversal (CC, red) and age-matched normally sighted control (SC, blue) individuals. Distributions of these parameters are displayed as violin plots for three conditions; at rest with eyes closed (EC), at rest with eyes open (EO) and during visual stimulation (LU). Aperiodic parameters were calculated across electrodes Fp1 and Fp2. Solid black lines indicate mean values, dotted black lines indicate median values. Coloured lines connect values of individual participants across conditions.

      Further, Glx concentration in the visual cortex did not correlate with the aperiodic intercept in the SC group (Figure 4), suggesting that this relationship was indeed specific to the CC group.

      The data from all electrodes has been analyzed and published in other studies as well (Pant et al., 2023; Ossandón et al., 2023).

      Reviewer #2 (Public Review):

      Summary:

      The manuscript reports non-invasive measures of activity and neurochemical profiles of the visual cortex in congenitally blind patients who recovered vision through the surgical removal of bilateral dense cataracts. The declared aim of the study is to find out how restoring visual function after several months or years of complete blindness impacts the balance between excitation and inhibition in the visual cortex.

      Strengths:

      The findings are undoubtedly useful for the community, as they contribute towards characterising the many ways this special population differs from normally sighted individuals. The combination of MRS and EEG measures is a promising strategy to estimate a fundamental physiological parameter - the balance between excitation and inhibition in the visual cortex, which animal studies show to be heavily dependent upon early visual experience. Thus, the reported results pave the way for further studies, which may use a similar approach to evaluate more patients and control groups.

      Weaknesses:

      (2.1) The main issue is the lack of an appropriate comparison group or condition to delineate the effect of sight recovery (as opposed to the effect of congenital blindness). Few previous studies suggested an increased excitation/Inhibition ratio in the visual cortex of congenitally blind patients; the present study reports a decreased E/I ratio instead. The authors claim that this implies a change of E/I ratio following sight recovery. However, supporting this claim would require showing a shift of E/I after vs. before the sight-recovery surgery, or at least it would require comparing patients who did and did not undergo the sight-recovery surgery (as common in the field).

      Longitudinal studies would indeed be the best way to test the hypothesis that the lower E/I ratio in the CC group observed by the present study is a consequence of sight restoration. However, longitudinal studies involving neuroimaging are an effortful challenge, particularly in research conducted outside of major developed countries and dedicated neuroimaging research facilities. Crucially, however, had CC and SC individuals, as well as permanently congenitally blind vs SC individuals (Coullon et al., 2015; Weaver et al., 2013), not differed on any neurochemical markers, such a longitudinal study might have been trivial. Thus, in order to justify and better tailor longitudinal studies, cross-sectional studies are an initial step.

      (2.2) MR Spectroscopy shows a reduced GLX/GABA ratio in patients vs. sighted controls; however, this finding remains rather isolated, not corroborated by other observations. The difference between patients and controls only emerges for the GLX/GABA ratio, but there is no accompanying difference in either the GLX or the GABA concentrations. There is an attempt to relate the MRS data with acuity measurements and electrophysiological indices, but the explorative correlational analyses do not help to build a coherent picture. A bland correlation between GLX/GABA and visual impairment is reported, but this is specific to the patients' group (N=10) and would not hold across groups (the correlation is positive, predicting the lowest GLX/GABA ratio values for the sighted controls - the opposite of what is found). There is also a strong correlation between GLX concentrations and the EEG power at the lowest temporal frequencies. Although this relation is intriguing, it only holds for a very specific combination of parameters (of the many tested): only with eyes open, only in the patient group.

      We interpret these findings differently, that is, in the context of experiments from non-human animals and the larger MRS literature.

      Homeostatic control of E/I balance assumes that the ratio of excitation (reflected here by Glx) and inhibition (reflected here by GABA+) is regulated. Like prior work (Gao et al., 2024; Narayan et al., 2022; Perica et al., 2022; Steel et al., 2020; Takado et al., 2022; Takei et al., 2016), we assumed that the ratio of Glx/GABA+ is indicative of E/I balance, rather than the individual neurotransmitter levels alone. One motivation for assessing the ratio rather than the absolute concentrations is that, per the underlying E/I balance hypothesis, a change in excitation would cause a concomitant change in inhibition, and vice versa, as has been shown in non-human animal work (Fang et al., 2021; Haider et al., 2006; Tao & Poo, 2005) and modeling research (van Vreeswijk & Sompolinsky, 1996; Wu et al., 2022). Importantly, our interpretation of the lower E/I ratio rests not only on the Glx/GABA+ ratio but additionally on the steeper EEG aperiodic slope (1-20 Hz).

      As in the discussion section and response 1.4, we did not expect to see a lower Glx/GABA+ ratio in CC individuals. We discuss the possible reasons for the direction of the correlation with visual acuity and aperiodic offset during passive visual stimulation, and offer interpretations and (testable) hypotheses.

      We interpret the direction of the Glx/GABA+ correlation with visual acuity to imply that the patients with the highest (compensatory) balancing of the consequences of congenital blindness (hyperexcitation) in response to visual stimulation are those who recover best. Note that the sighted control group was selected based on their “normal” vision. Thus, clinical visual acuity measures would not be expected to vary sufficiently, nor to have the resolution, to show strong correlations with neurophysiological measures. By contrast, the CC group comprised patients with highly varying visual outcomes, and was thus ideal for investigating such correlations.

      This holds for the correlation between Glx and the aperiodic intercept as well. Previous work has suggested that the intercept of the aperiodic activity is associated with broadband spiking activity in neural circuits (Manning et al., 2009). Thus, an atypical increase of spiking activity during visual stimulation, as indirectly suggested by early non-human primate work on visual deprivation (Hyvärinen et al., 1981), might drive a correlation not observed in healthy populations.

      In the revised manuscript, we will more clearly indicate in the discussion that these are possible post-hoc interpretations. We argue that, given the lack of such studies in humans, it is all the more important that extant data be presented completely, even if the direction of the effects is not as expected.

      (2.3) For these reasons, the reported findings do not allow us to draw firm conclusions on the relation between EEG parameters and E/I ratio or on the impact of early (vs. late) visual experience on the excitation/inhibition ratio of the human visual cortex.

      Indeed, the correlations we have tested between the E/I ratio and EEG parameters were exploratory, and have been reported as such. The goal of our study was not to compare the effects of early vs. late visual experience. The goal was to study whether early visual experience is necessary for a typical E/I ratio in visual neural circuits. We provided clear evidence in favor of this hypothesis. Thus, the present results suggest the necessity of investigating the effects of late visual deprivation. In fact, such research is missing in permanent blindness as well.

      Reviewer #3 (Public Review):

      This manuscript examines the impact of congenital visual deprivation on the excitatory/inhibitory (E/I) ratio in the visual cortex using Magnetic Resonance Spectroscopy (MRS) and electroencephalography (EEG) in individuals whose sight was restored. Ten individuals with reversed congenital cataracts were compared to age-matched, normally sighted controls, assessing the cortical E/I balance and its interrelationship to visual acuity. The study reveals that the Glx/GABA ratio in the visual cortex and the intercept and aperiodic signal are significantly altered in those with a history of early visual deprivation, suggesting persistent neurophysiological changes despite visual restoration.

      My expertise is in EEG (particularly in the decomposition of periodic and aperiodic activity) and statistical methods. I have several major concerns in terms of methodological and statistical approaches along with the (over)interpretation of the results. These major concerns are detailed below.

      (3.1) Variability in visual deprivation:

      - The document states a large variability in the duration of visual deprivation (probably also the age at restoration), with significant implications for the sensitivity period's impact on visual circuit development. The variability and its potential effects on the outcomes need thorough exploration and discussion.

      We work with a rare, unique patient population, which makes it difficult to systematically assess the effects of different visual histories while maintaining stringent inclusion criteria such as complete patterned visual deprivation at birth. Regardless, we considered the large variance in age at surgery and time since surgery as supportive of our interpretation: group differences were found despite the large variance in duration of visual deprivation. Moreover, the existing variance was used to explore possible associations between behavior and neural measures, as well as neurochemical and EEG measures.

      In the revised manuscript, we will detail the advantages and disadvantages of our CC sample, with respect to duration of congenital visual deprivation.

      (3.2) Sample size:

      - The small sample size is a major concern as it may not provide sufficient power to detect subtle effects and/or overestimate significant effects, which then tend not to generalize to new data. One of the biggest drivers of the replication crisis in neuroscience.

      We address the small sample size in our discussion, and make clear that small sample sizes were due to the nature of investigations in special populations. It is worth noting that our EEG results fully align with those of a larger sample of CC individuals (Ossandón et al., 2023), giving us confidence in their validity and reproducibility. Moreover, our MRS results, and their correlations with EEG parameters, were spatially specific to occipital cortex measures, as predicted.

      The main problem with the correlation analyses between MRS and EEG measures is that the sample size is simply too small to conduct such an analysis. Moreover, it is unclear from the methods section that this analysis was only conducted in the patient group (which the reviewer assumed from the plots), and not explained why this was done only in the patient group. I would highly recommend removing these correlation analyses.

      We marked the correlation analyses as exploratory; note that we do not base most of our discussion on the results of these analyses. As indicated by Reviewer 1, reporting them allows for deriving more precise hypotheses for future studies. It has to be noted that we investigate an extremely rare population, tested outside of major developed economies and dedicated neuroimaging research facilities. In addition to belonging to a rare patient group, these individuals come from poor communities. Therefore, we consider it justified to report these correlations as exploratory, providing direction for future research.

      (3.3) Statistical concerns:

      - The statistical analyses, particularly the correlations drawn from a small sample, may not provide reliable estimates (see https://www.sciencedirect.com/science/article/pii/S0092656613000858, which clearly describes this problem).

      It would undoubtedly be better to have a larger sample size. We nonetheless think it is of value to the research community to publish this dataset: 10 multimodal data sets from a carefully diagnosed, rare population, representing a human model for the effects of early experience on brain development, constitute a substantial resource. Sample sizes in prior neuroimaging studies of transient blindness have most often ranged from n = 1 to n = 10. They nevertheless provided valuable direction for future research, and integration of results across multiple studies yields scientific insights.

      Identifying possible group differences was the goal of our study, with the correlations being an exploratory analysis, which we have clearly indicated in the methods, results and discussion.

      - Statistical analyses for the MRS: The authors should consider some additional permutation statistics, which are more suitable for small sample sizes. The current statistical model (2x2) design ANOVA is not ideal for such small sample sizes. Moreover, it is unclear why the condition (EO & EC) was chosen as a predictor and not the brain region (visual & frontal) or neurochemicals. Finally, the authors did not provide any information on the alpha level nor any information on correction for multiple comparisons (in the methods section). Finally, even if the groups are matched w.r.t. age, the time between surgery and measurement, the duration of visual deprivation, (and sex?), these should be included as covariates as it has been shown that these are highly related to the measurements of interest (especially for the EEG measurements) and the age range of the current study is large.

      In our ANOVA models, the neurochemicals were the outcome variables, and the conditions were chosen as predictors based on prior work suggesting that Glx/GABA+ might vary with eye closure (Kurcyus et al., 2018). The study was designed based on a hypothesis of group differences localized to the occipital cortex, due to visual deprivation. The frontal cortex voxel was chosen to indicate whether these differences were spatially specific. Therefore, we conducted separate ANOVAs based on this study design.

      In the revised manuscript, we will add permutation analyses for our outcomes, as well as multiple regression models investigating whether the variance in visual history might have driven these results. Note that in the supplementary materials (S6, S7), we have reported the correlations between visual history metrics and MRS/EEG outcomes.

      The alpha level used for the ANOVA models specified in the methods section was 0.05. The alpha level for the exploratory analyses reported in the main manuscript was 0.008, after Bonferroni correction for (6) multiple comparisons, as also specified in the methods. Note that the p-values following correction are reported after multiplication by 6, since most readers assume an alpha level of 0.05 (see the response regarding large p-values).

      We used a control group matched for age and sex. Moreover, the controls were recruited and tested in the same institutes, using the same setup. We feel that we followed the gold standard for recruiting a healthy control group for a patient group.

      - EEG statistical analyses: The same critique as for the MRS statistical analyses applies to the EEG analysis. In addition: was the 2x3 ANOVA conducted for EO and EC independently? This seems to be inconsistent with the approach in the MRS analyses, in which the authors chose EO & EC as predictors in their 2x2 ANOVA.

      The 2x3 ANOVA was not conducted independently for the eyes open/eyes closed conditions; it was 2x3 because it had group (CC, SC) and condition (eyes open (EO), eyes closed (EC) and visual stimulation (LU)) as predictors.

      - Figure 4: The authors report a p-value of >0.999 with a correlation coefficient of -0.42 with a sample size of 10 subjects. This can't be correct (it should be around: p = 0.22). All statistical analyses should be checked.

      As specified in the methods and figure legend, the reported p values in Figure 4 have been corrected using the Bonferroni correction, and therefore multiplied by the number of comparisons, leading to the seemingly large values.
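For illustration, the arithmetic behind the reported value can be sketched in Python (the function names are ours, not those of the analysis pipeline; the uncorrected two-tailed p for r = -0.42, n = 10 is taken from t tables):

```python
import math

def pearson_t(r, n):
    """t statistic corresponding to a Pearson correlation r from n pairs (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

def bonferroni(p, m):
    """Bonferroni-corrected p value for m comparisons, capped at 1."""
    return min(1.0, p * m)

t = pearson_t(-0.42, 10)        # t ≈ -1.31 with df = 8
p_uncorrected = 0.227           # two-tailed p for |t| ≈ 1.31, df = 8 (assumed, from t tables)
p_reported = bonferroni(p_uncorrected, 6)
print(round(t, 2), p_reported)  # -1.31 1.0, i.e. reported as "> 0.999"
```

With six comparisons, any uncorrected p above 0.167 is thus reported at or near 1 after correction, which explains the seemingly impossible value the reviewer flagged.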

      Additionally, to check all statistical analyses, we put the manuscript through an independent Statistics Check (Nuijten & Polanin, 2020) (https://michelenuijten.shinyapps.io/statcheck-web/) and will upload the consistency report with the revised supplementary material.

      - Figure 2c. Eyes closed condition: The highest score of the Glx/GABA ratio seems to be ~3.6. In subplot 2a, there seem to be 3 subjects that show a Glx/GABA ratio score > 3.6. How can this be explained? There is also a discrepancy for the eyes-closed condition.

      The three subjects that show the Glx/GABA+ ratio > 3.6 in subplot 2a are in the SC group, whereas the correlations plotted in figure 2c are only for the CC group, where the highest score is indeed ~3.6.

      (3.4) Interpretation of aperiodic signal:

      - Several recent papers demonstrated that the aperiodic signal measured in EEG or ECoG is related to various important aspects such as age, skull thickness, electrode impedance, as well as cognition. Thus, currently, very little is known about the underlying effects which influence the aperiodic intercept and slope. The entire interpretation of the aperiodic slope as a proxy for E/I is based on a computational model and simulation (as described in the Gao et al. paper).

      Apart from the modeling work of Gao et al., multiple papers, also cited in the manuscript, used ECoG, EEG and MEG and showed concomitant changes in aperiodic activity with pharmacological manipulation of the E/I ratio (Colombo et al., 2019; Molina et al., 2020; Muthukumaraswamy & Liley, 2018). Further, several prior studies have interpreted changes in the aperiodic slope as reflecting changes in the E/I ratio, including studies of developmental groups (Favaro et al., 2023; Hill et al., 2022; McSweeney et al., 2023; Schaworonkow & Voytek, 2021) as well as patient groups (Molina et al., 2020; Ostlund et al., 2021).

      In the revised manuscript, we will cite those studies not already included in the introduction.

      - Especially the aperiodic intercept is a very sensitive measure to many influences (e.g. skull thickness, electrode impedance...). As crucial results (correlation aperiodic intercept and MRS measures) are facing this problem, this needs to be reevaluated. It is safer to make statements on the aperiodic slope than intercept. In theory, some of the potentially confounding measures are available to the authors (e.g. skull thickness can be computed from T1w images; electrode impedances are usually acquired alongside the EEG data) and could be therefore controlled.

      All electrophysiological measures indeed depend on parameters such as skull thickness and electrode impedance. As in the extant literature using neurophysiological measures to compare brain function between patient and control groups, we used a control group matched in age and sex, recruited in the same region, tested with the same devices, and analyzed with the same analysis pipeline. For example, impedance was kept below 10 kOhm for all subjects. There is no evidence available suggesting that congenital cataracts are associated with changes in skull thickness that would cause the observed pattern of group results. Moreover, we cannot see how any of the exploratory correlations between neurophysiological and MRS measures could be accounted for by a difference in, e.g., skull thickness.

      - The authors wrote: "Higher frequencies (such as 20-40 Hz) have been predominantly associated with local circuit activity and feedforward signaling (Bastos et al., 2018; Van Kerkoerle et al., 2014); the increased 20-40 Hz slope may therefore signal increased spontaneous spiking activity in local networks. We speculate that the steeper slope of the aperiodic activity for the lower frequency range (1-20 Hz) in CC individuals reflects the concomitant increase in inhibition." The authors confuse the interpretation of periodic and aperiodic signals. This section refers to the interpretation of the periodic signal (higher frequencies). This interpretation cannot simply be translated to the aperiodic signal (slope).

      Prior work has not always separated the aperiodic and periodic components, making it unclear what might have driven these effects in our data. The interpretation of the higher frequency range was intended to contrast with that of the lower frequency range, in order to speculate as to why the two aperiodic fits might go in differing directions. We will clarify our interpretation in the revised manuscript. Note that Ossandon et al. reported highly similar results (group differences for CC individuals and for permanently congenitally blind humans) for the aperiodic activity between 20-40 Hz and oscillatory activity in the gamma range. We will allude to these findings in the revised manuscript.

      - The authors further wrote: We used the slope of the aperiodic (1/f) component of the EEG spectrum as an estimate of E/I ratio (Gao et al., 2017; Medel et al., 2020; Muthukumaraswamy & Liley, 2018). This is a highly speculative interpretation with very little empirical evidence. These papers were conducted with ECoG data (mostly in animals) and mostly under anesthesia. Thus, these studies only allow an indirect interpretation by what the 1/f slope in EEG measurements is actually influenced.

      Note that Muthukumaraswamy and Liley (2018) used different types of pharmacological manipulations and analyzed periodic and aperiodic MEG activity in addition to monkey ECoG. Medel et al. (2020; now published as Medel et al., 2023) compared EEG activity in addition to ECoG data after propofol administration. The interpretation of our results is in line with a number of recent studies in developing (Hill et al., 2022; Schaworonkow & Voytek, 2021) and special populations using EEG. As mentioned above, several prior studies have used the slope of the 1/f component/aperiodic activity as an indirect measure of the E/I ratio (Favaro et al., 2023; Hill et al., 2022; McSweeney et al., 2023; Molina et al., 2020; Ostlund et al., 2021; Schaworonkow & Voytek, 2021), including studies using scalp-recorded EEG. We will make clearer in the introduction of the revised manuscript that this metric is indirect.

      While a full understanding of aperiodic activity has yet to be achieved, some convergent ideas have emerged. We think that our results contribute to this enterprise, since our study is, to the best of our knowledge, the first to assess MRS-measured neurotransmitter levels together with EEG aperiodic activity.

      (3.5) Problems with EEG preprocessing and analysis:

      - It seems that the authors did not identify bad channels nor address the line noise issue (even a problem if a low pass filter of below-the-line noise was applied).

      As pointed out in the methods and Figure 1, we only analyzed data from two channels, O1 and O2, neither of which were rejected for any participant. Channel rejection was performed for the larger dataset, published elsewhere (Ossandón et al., 2023; Pant et al., 2023).

      In both published works, we did not consider frequency ranges above 40 Hz, to avoid any possible contamination with line noise. Here, we focused on activity between 0 and 20 Hz, definitively excluding line-noise contamination. The low-pass filter (FIR, 1-45 Hz) guaranteed that any spill-over effects of line noise would be restricted to frequencies just below the upper cutoff frequency.

      Additionally, a prior version of the analysis used the cleanline.m function to remove line noise before filtering, and the group differences remained stable. We will report this analysis in the supplementary material of the revised manuscript. Further, both groups were measured in the same lab, making line noise an unlikely account of the observed group effects. Finally, any of the exploratory MRS-EEG correlations would be hard to explain if the EEG parameters were contaminated with line noise.

      - What was the percentage of segments that needed to be rejected due to the 120μV criteria? This should be reported specifically for EO & EC and controls and patients.

      The mean percentages of 1 s segments rejected in each resting-state condition are given below, along with the mean percentages of 6.25 s segments rejected in each group for the visual stimulation condition; these will be added to the revised manuscript:

      Author response table 3.
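For clarity, the ±120 μV amplitude criterion applied per epoch can be sketched as follows (a toy illustration with made-up numbers, not the EEGLAB code actually used):

```python
def reject_epochs(epochs, threshold_uv=120.0):
    """Drop epochs whose absolute peak amplitude exceeds the threshold (in microvolts)."""
    kept = [e for e in epochs if max(abs(s) for s in e) <= threshold_uv]
    pct_rejected = 100.0 * (len(epochs) - len(kept)) / len(epochs)
    return kept, pct_rejected

# Three toy epochs; the middle one peaks above 120 microvolts and is dropped.
epochs = [[10.0, -50.0, 30.0], [90.0, -130.0, 20.0], [5.0, 15.0, -25.0]]
kept, pct = reject_epochs(epochs)
print(len(kept), round(pct, 1))  # 2 33.3
```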

      - The authors downsampled the data to 60Hz to "to match the stimulation rate". What is the intention of this? Because the subsequent spectral analyses are conflated by this choice (see Nyquist theorem).

      These data were collected as part of a study designed to evoke alpha activity with visual white noise, whose luminance varied with equal power at all frequencies from 1-60 Hz, restricted by the refresh rate of the monitor on which stimuli were presented (Pant et al., 2023). This paradigm and method were developed by VanRullen and colleagues (Schwenk et al., 2020; Vanrullen & MacDonald, 2012); the analysis requires the same sampling rate for the presented frequencies and the EEG data. The downsampling function used here automatically applies an anti-aliasing filter (EEGLAB 2019).

      - "Subsequently, baseline removal was conducted by subtracting the mean activity across the length of an epoch from every data point." The actual baseline time segment should be specified.

      The time segment was the length of the epoch, that is, 1 second for the resting state conditions and 6.25 seconds for the visual stimulation conditions. This will be explicitly stated in the revised manuscript.

      - "We excluded the alpha range (8-14 Hz) for this fit to avoid biasing the results due to documented differences in alpha activity between CC and SC individuals (Bottari et al., 2016; Ossandón et al., 2023; Pant et al., 2023)." This does not really make sense, as the FOOOF algorithm first fits the 1/f slope, for which the alpha activity is not relevant.

      We did not use the FOOOF algorithm/toolbox in this manuscript. As stated in the methods, we used a 1/f fit to the 1-20 Hz spectrum in the log-log space, and subtracted this fit from the original spectrum to obtain the corrected spectrum. Given the pronounced difference in alpha power between groups (Bottari et al., 2016; Ossandón et al., 2023; Pant et al., 2023), we were concerned it might drive differences in the exponent values.  Our analysis pipeline had been adapted from previous publications of our group and other labs (Ossandón et al., 2023; Voytek et al., 2015; Waschke et al., 2017).

      We have conducted the analysis with and without the exclusion of the alpha range, as well as using the FOOOF toolbox both in the 1-20 Hz and 20-40 Hz ranges (Ossandón et al., 2023); The findings of a steeper slope in the 1-20 Hz range as well as lower alpha power in CC vs SC individuals remained stable. In Ossandón et al., the comparison between the piecewise fits and FOOOF fits led the authors to use the former as it outperformed the FOOOF algorithm for their data.

      - The model fits of the 1/f fitting for EO, EC, and both participant groups should be reported.

      In Figure 3 of the manuscript, we depicted the mean spectra and 1/f fits for each group. We will add the fit quality metrics and show individual subjects’ fits in the revised manuscript.

      (3.6) Validity of GABA measurements and results:

      - According the a newer study by the authors of the Gannet toolbox (https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/abs/10.1002/nbm.5076), the reliability and reproducibility of the gamma-aminobutyric acid (GABA) measurement can vary significantly depending on acquisition and modeling parameter. Thus, did the author address these challenges?

      We ensured MRS data quality during acquisition by checking appropriate voxel placement and linewidth prior to scanning. Acquisition as well as modeling parameters were identical for both groups, and therefore cannot have driven group differences.

      The linked article compares the reproducibility of GABA measurement using Osprey, which was released in 2020 and uses linear combination modeling to fit the peak as opposed to Gannet’s simple peak fitting (Hupfeld et al., 2024). The study finds better test-retest reliability for Osprey compared to Gannet’s method.

      As the present work was conceptualized in 2018, we used Gannet 3.0, which was the state-of-the-art edited spectral analysis toolbox at the time, and still is widely used. In the revised manuscript, we will include a supplementary section reanalyzing the main findings with Osprey.

      - Furthermore, the authors wrote: "We confirmed the within-subject stability of metabolite quantification by testing a subset of the sighted controls (n=6) 2-4 weeks apart. Looking at the supplementary Figure 5 (which would be rather plotted as ICC or Blant-Altman plots), the within-subject stability compared to between-subject variability seems not to be great. Furthermore, I don't think such a small sample size qualifies for a rigorous assessment of stability.

      Indeed, we did not intend to provide a rigorous assessment of within-subject stability. Rather, we aimed to confirm that data quality/concentration ratios did not systematically differ between the same subjects tested longitudinally; driven, for example, by scanner heating or time of day. As with the phantom testing, we attempted to give readers an idea of the quality of the data, as they were collected from a primarily clinical rather than a research site.

      In the revised manuscript we will remove the statement regarding stability, and add the Bland-Altman plot.

      - "Why might an enhanced inhibitory drive, as indicated by the lower Glx/GABA ratio" Is this interpretation really warranted, as the results of the group differences in the Glx/GABA ratio seem to be rather driven by a decreased Glx concentration in CC rather than an increased GABA (see Figure 2).

      We used the Glx/GABA+ ratio as a measure, rather than individual Glx or GABA+ concentration, which did not significantly differ between groups. As detailed in Response 2.2, we think this metric aligns better with an underlying E/I balance hypothesis and has been used in many previous studies (Gao et al., 2024; Liu et al., 2015; Narayan et al., 2022; Perica et al., 2022).

      Our interpretation of an enhanced inhibitory drive additionally comes from the combination of aperiodic EEG (1-20 Hz) and MRS measures, which, when considered together, are consistent with a decreased E/I ratio.

      In the revised manuscript, we will rephrase this sentence accordingly. 

      - Glx concentration predicted the aperiodic intercept in CC individuals' visual cortices during ambient and flickering visual stimulation. Why specifically investigate the Glx concentration, when the paper is about E/I ratio?

      As stated in the methods, we exploratorily assessed the relationship between all MRS parameters (Glx, GABA+ and Glx/GABA+ ratio) with the aperiodic parameters (slope, offset), and corrected for multiple comparisons accordingly. We think this is a worthwhile analysis considering the rarity of the dataset/population (see 1.2, 1.6, 2.1 and reviewer 1’s comments about future hypotheses). We only report the Glx – aperiodic intercept correlation in the main manuscript as it survived correction for multiple comparisons.

      (3.7) Interpretation of the correlation between MRS measurements and EEG aperiodic signal:

      - The authors wrote: "The intercept of the aperiodic activity was highly correlated with the Glx concentration during rest with eyes open and during flickering stimulation (also see Supplementary Material S11). Based on the assumption that the aperiodic intercept reflects broadband firing (Manning et al., 2009; Winawer et al., 2013), this suggests that the Glx concentration might be related to broadband firing in CC individuals during active and passive visual stimulation." These results should not be interpreted (or with very caution) for several reasons (see also problem with influences on aperiodic intercept and small sample size). This is a result of the exploratory analyses of correlating every EEG parameter with every MRS parameter. This requires well-powered replication before any interpretation can be provided. Furthermore and importantly: why should this be specifically only in CC patients, but not in the SC control group?

      We indicate clearly in all parts of the manuscript that these correlations are presented as exploratory. Further, we interpret the Glx-aperiodic offset correlation, and none of the others, as it survived the Bonferroni correction for multiple comparisons. We offer a hypothesis in the discussion section as to why such a correlation might exist in the CC but not the SC group (see response 2.2), and do not speculate further.

      (3.8) Language and presentation:

      - The manuscript requires language improvements and correction of numerous typos. Over-simplifications and unclear statements are present, which could mislead or confuse readers (see also interpretation of aperiodic signal).

      In the revision, we will check that speculations are clearly marked and typos are removed.

      - The authors state that "Together, the present results provide strong evidence for experience-dependent development of the E/I ratio in the human visual cortex, with consequences for behavior." The results of the study do not provide any strong evidence, because of the small sample size and exploratory analyses approach and not accounting for possible confounding factors.

      We disagree with this statement, and point to the convergent evidence from both MRS and neurophysiological measures. The latter links to corresponding results observed in a larger sample of CC individuals (Ossandón et al., 2023).

      - "Our results imply a change in neurotransmitter concentrations as a consequence of *restoring* vision following congenital blindness." This is a speculative statement to infer a causal relationship on cross-sectional data.

      As mentioned under 2.1, we conducted a cross-sectional study, which might justify future longitudinal work. In order to advance science, new testable hypotheses are put forward at the end of a manuscript.

      In the revised manuscript we will rephrase this as “might imply” to better indicate the hypothetical character of this idea.

      - In the limitation section, the authors wrote: "The sample size of the present study is relatively high for the rare population , but undoubtedly, overall, rather small." This sentence should be rewritten, as the study is plein underpowered. The further justification "We nevertheless think that our results are valid. Our findings neurochemically (Glx and GABA+ concentration), and anatomically (visual cortex) specific. The MRS parameters varied with parameters of the aperiodic EEG activity and visual acuity. The group differences for the EEG assessments corresponded to those of a larger sample of CC individuals (n=38) (Ossandón et al., 2023), and effects of chronological age were as expected from the literature." These statements do not provide any validation or justification of small samples. Furthermore, the current data set is a subset of an earlier published paper by the same authors "The EEG data sets reported here were part of data published earlier (Ossandón et al., 2023; Pant et al., 2023)." Thus, the statement "The group differences for the EEG assessments corresponded to those of a larger sample of CC individuals (n=38) " is a circular argument and should be avoided.

      Our intention was not to justify having a small sample, but to justify why we think the results might be valid as they align with/replicate existing literature.

      In the revised manuscript, we will add a figure showing that the EEG results of the 10 subjects considered here correspond to those of the 28 other subjects of Ossandon et al. We will adapt the text accordingly, clearly stating that the pattern of EEG results of the ten subjects reported here replicate those of the 28 additional subjects of Ossandon et al. (2023).

      References

      Barnes, S. J., Sammons, R. P., Jacobsen, R. I., Mackie, J., Keller, G. B., & Keck, T. (2015). Subnetwork-specific homeostatic plasticity in mouse visual cortex in vivo. Neuron, 86(5), 1290–1303. https://doi.org/10.1016/J.NEURON.2015.05.010

      Bernabeu, A., Alfaro, A., García, M., & Fernández, E. (2009). Proton magnetic resonance spectroscopy (1H-MRS) reveals the presence of elevated myo-inositol in the occipital cortex of blind subjects. NeuroImage, 47(4), 1172–1176. https://doi.org/10.1016/j.neuroimage.2009.04.080

      Bottari, D., Troje, N. F., Ley, P., Hense, M., Kekunnaya, R., & Röder, B. (2016). Sight restoration after congenital blindness does not reinstate alpha oscillatory activity in humans. Scientific Reports. https://doi.org/10.1038/srep24683

      Colombo, M. A., Napolitani, M., Boly, M., Gosseries, O., Casarotto, S., Rosanova, M., Brichant, J. F., Boveroux, P., Rex, S., Laureys, S., Massimini, M., Chieregato, A., & Sarasso, S. (2019). The spectral exponent of the resting EEG indexes the presence of consciousness during unresponsiveness induced by propofol, xenon, and ketamine. NeuroImage, 189(September 2018), 631–644. https://doi.org/10.1016/j.neuroimage.2019.01.024

      Consideration of Sample Size in Neuroscience Studies. (2020). Journal of Neuroscience, 40(21), 4076–4077. https://doi.org/10.1523/JNEUROSCI.0866-20.2020

      Coullon, G. S. L., Emir, U. E., Fine, I., Watkins, K. E., & Bridge, H. (2015). Neurochemical changes in the pericalcarine cortex in congenital blindness attributable to bilateral anophthalmia. Journal of Neurophysiology. https://doi.org/10.1152/jn.00567.2015

      Fang, Q., Li, Y. T., Peng, B., Li, Z., Zhang, L. I., & Tao, H. W. (2021). Balanced enhancements of synaptic excitation and inhibition underlie developmental maturation of receptive fields in the mouse visual cortex. Journal of Neuroscience, 41(49), 10065–10079. https://doi.org/10.1523/JNEUROSCI.0442-21.2021

      Favaro, J., Colombo, M. A., Mikulan, E., Sartori, S., Nosadini, M., Pelizza, M. F., Rosanova, M., Sarasso, S., Massimini, M., & Toldo, I. (2023). The maturation of aperiodic EEG activity across development reveals a progressive differentiation of wakefulness from sleep. NeuroImage, 277. https://doi.org/10.1016/J.NEUROIMAGE.2023.120264

      Gao, Y., Liu, Y., Zhao, S., Liu, Y., Zhang, C., Hui, S., Mikkelsen, M., Edden, R. A. E., Meng, X., Yu, B., & Xiao, L. (2024). MRS study on the correlation between frontal GABA+/Glx ratio and abnormal cognitive function in medication-naive patients with narcolepsy. Sleep Medicine, 119, 1–8. https://doi.org/10.1016/j.sleep.2024.04.004

      Haider, B., Duque, A., Hasenstaub, A. R., & McCormick, D. A. (2006). Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. Journal of Neuroscience. https://doi.org/10.1523/JNEUROSCI.5297-05.2006

      Hill, A. T., Clark, G. M., Bigelow, F. J., Lum, J. A. G., & Enticott, P. G. (2022). Periodic and aperiodic neural activity displays age-dependent changes across early-to-middle childhood. Developmental Cognitive Neuroscience, 54, 101076. https://doi.org/10.1016/J.DCN.2022.101076

      Hupfeld, K. E., Zöllner, H. J., Hui, S. C. N., Song, Y., Murali-Manohar, S., Yedavalli, V., Oeltzschner, G., Prisciandaro, J. J., & Edden, R. A. E. (2024). Impact of acquisition and modeling parameters on the test–retest reproducibility of edited GABA+. NMR in Biomedicine, 37(4), e5076. https://doi.org/10.1002/nbm.5076

      Hyvärinen, J., Carlson, S., & Hyvärinen, L. (1981). Early visual deprivation alters modality of neuronal responses in area 19 of monkey cortex. Neuroscience Letters, 26(3), 239–243. https://doi.org/10.1016/0304-3940(81)90139-7

      Juchem, C., & Graaf, R. A. de. (2017). B0 magnetic field homogeneity and shimming for in vivo magnetic resonance spectroscopy. Analytical Biochemistry, 529, 17–29. https://doi.org/10.1016/j.ab.2016.06.003

      Keck, T., Hübener, M., & Bonhoeffer, T. (2017). Interactions between synaptic homeostatic mechanisms: An attempt to reconcile BCM theory, synaptic scaling, and changing excitation/inhibition balance. Current Opinion in Neurobiology, 43, 87–93. https://doi.org/10.1016/J.CONB.2017.02.003

      Kurcyus, K., Annac, E., Hanning, N. M., Harris, A. D., Oeltzschner, G., Edden, R., & Riedl, V. (2018). Opposite Dynamics of GABA and Glutamate Levels in the Occipital Cortex during Visual Processing. Journal of Neuroscience, 38(46), 9967–9976. https://doi.org/10.1523/JNEUROSCI.1214-18.2018

      Liu, B., Wang, G., Gao, D., Gao, F., Zhao, B., Qiao, M., Yang, H., Yu, Y., Ren, F., Yang, P., Chen, W., & Rae, C. D. (2015). Alterations of GABA and glutamate-glutamine levels in premenstrual dysphoric disorder: A 3T proton magnetic resonance spectroscopy study. Psychiatry Research - Neuroimaging, 231(1), 64–70. https://doi.org/10.1016/J.PSCYCHRESNS.2014.10.020

      Lunghi, C., Berchicci, M., Morrone, M. C., & Russo, F. D. (2015). Short‐term monocular deprivation alters early components of visual evoked potentials. The Journal of Physiology, 593(19), 4361. https://doi.org/10.1113/JP270950

      Maier, S., Düppers, A. L., Runge, K., Dacko, M., Lange, T., Fangmeier, T., Riedel, A., Ebert, D., Endres, D., Domschke, K., Perlov, E., Nickel, K., & Tebartz van Elst, L. (2022). Increased prefrontal GABA concentrations in adults with autism spectrum disorders. Autism Research, 15(7), 1222–1236. https://doi.org/10.1002/aur.2740

      Manning, J. R., Jacobs, J., Fried, I., & Kahana, M. J. (2009). Broadband shifts in local field potential power spectra are correlated with single-neuron spiking in humans. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience, 29(43), 13613–13620. https://doi.org/10.1523/JNEUROSCI.2041-09.2009

      McSweeney, M., Morales, S., Valadez, E. A., Buzzell, G. A., Yoder, L., Fifer, W. P., Pini, N., Shuffrey, L. C., Elliott, A. J., Isler, J. R., & Fox, N. A. (2023). Age-related trends in aperiodic EEG activity and alpha oscillations during early- to middle-childhood. NeuroImage, 269, 119925. https://doi.org/10.1016/j.neuroimage.2023.119925

      Medel, V., Irani, M., Crossley, N., Ossandón, T., & Boncompte, G. (2023). Complexity and 1/f slope jointly reflect brain states. Scientific Reports, 13(1), 21700. https://doi.org/10.1038/s41598-023-47316-0

      Medel, V., Irani, M., Ossandón, T., & Boncompte, G. (2020). Complexity and 1/f slope jointly reflect cortical states across different E/I balances. bioRxiv, 2020.09.15.298497. https://doi.org/10.1101/2020.09.15.298497

      Molina, J. L., Voytek, B., Thomas, M. L., Joshi, Y. B., Bhakta, S. G., Talledo, J. A., Swerdlow, N. R., & Light, G. A. (2020). Memantine Effects on Electroencephalographic Measures of Putative Excitatory/Inhibitory Balance in Schizophrenia. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 5(6), 562–568. https://doi.org/10.1016/j.bpsc.2020.02.004

      Mukerji, A., Byrne, K. N., Yang, E., Levi, D. M., & Silver, M. A. (2022). Visual cortical γ−aminobutyric acid and perceptual suppression in amblyopia. Frontiers in Human Neuroscience, 16. https://doi.org/10.3389/fnhum.2022.949395

      Muthukumaraswamy, S. D., & Liley, D. T. (2018). 1/F electrophysiological spectra in resting and drug-induced states can be explained by the dynamics of multiple oscillatory relaxation processes. NeuroImage, 179, 582–595. https://doi.org/10.1016/j.neuroimage.2018.06.068

      Narayan, G. A., Hill, K. R., Wengler, K., He, X., Wang, J., Yang, J., Parsey, R. V., & DeLorenzo, C. (2022). Does the change in glutamate to GABA ratio correlate with change in depression severity? A randomized, double-blind clinical trial. Molecular Psychiatry, 27(9), 3833–3841. https://doi.org/10.1038/s41380-022-01730-4

      Nuijten, M. B., & Polanin, J. R. (2020). “statcheck”: Automatically detect statistical reporting inconsistencies to increase reproducibility of meta-analyses. Research Synthesis Methods, 11(5), 574–579. https://doi.org/10.1002/jrsm.1408

      Ossandón, J. P., Stange, L., Gudi-Mindermann, H., Rimmele, J. M., Sourav, S., Bottari, D., Kekunnaya, R., & Röder, B. (2023). The development of oscillatory and aperiodic resting state activity is linked to a sensitive period in humans. NeuroImage, 275, 120171. https://doi.org/10.1016/J.NEUROIMAGE.2023.120171

      Ostlund, B. D., Alperin, B. R., Drew, T., & Karalunas, S. L. (2021). Behavioral and cognitive correlates of the aperiodic (1/f-like) exponent of the EEG power spectrum in adolescents with and without ADHD. Developmental Cognitive Neuroscience, 48, 100931. https://doi.org/10.1016/j.dcn.2021.100931

      Pant, R., Ossandón, J., Stange, L., Shareef, I., Kekunnaya, R., & Röder, B. (2023). Stimulus-evoked and resting-state alpha oscillations show a linked dependence on patterned visual experience for development. NeuroImage: Clinical, 103375. https://doi.org/10.1016/J.NICL.2023.103375

      Perica, M. I., Calabro, F. J., Larsen, B., Foran, W., Yushmanov, V. E., Hetherington, H., Tervo-Clemmens, B., Moon, C.-H., & Luna, B. (2022). Development of frontal GABA and glutamate supports excitation/inhibition balance from adolescence into adulthood. Progress in Neurobiology, 219, 102370. https://doi.org/10.1016/j.pneurobio.2022.102370

      Pitchaimuthu, K., Wu, Q. Z., Carter, O., Nguyen, B. N., Ahn, S., Egan, G. F., & McKendrick, A. M. (2017). Occipital GABA levels in older adults and their relationship to visual perceptual suppression. Scientific Reports, 7(1). https://doi.org/10.1038/S41598-017-14577-5

      Rideaux, R., Ehrhardt, S. E., Wards, Y., Filmer, H. L., Jin, J., Deelchand, D. K., Marjańska, M., Mattingley, J. B., & Dux, P. E. (2022). On the relationship between GABA+ and glutamate across the brain. NeuroImage, 257, 119273. https://doi.org/10.1016/J.NEUROIMAGE.2022.119273

      Schaworonkow, N., & Voytek, B. (2021). Longitudinal changes in aperiodic and periodic activity in electrophysiological recordings in the first seven months of life. Developmental Cognitive Neuroscience, 47. https://doi.org/10.1016/j.dcn.2020.100895

      Schwenk, J. C. B., VanRullen, R., & Bremmer, F. (2020). Dynamics of Visual Perceptual Echoes Following Short-Term Visual Deprivation. Cerebral Cortex Communications, 1(1). https://doi.org/10.1093/TEXCOM/TGAA012

      Sengpiel, F., Jirmann, K.-U., Vorobyov, V., & Eysel, U. T. (2006). Strabismic Suppression Is Mediated by Inhibitory Interactions in the Primary Visual Cortex. Cerebral Cortex, 16(12), 1750–1758. https://doi.org/10.1093/cercor/bhj110

      Steel, A., Mikkelsen, M., Edden, R. A. E., & Robertson, C. E. (2020). Regional balance between glutamate+glutamine and GABA+ in the resting human brain. NeuroImage, 220. https://doi.org/10.1016/J.NEUROIMAGE.2020.117112

      Takado, Y., Takuwa, H., Sampei, K., Urushihata, T., Takahashi, M., Shimojo, M., Uchida, S., Nitta, N., Shibata, S., Nagashima, K., Ochi, Y., Ono, M., Maeda, J., Tomita, Y., Sahara, N., Near, J., Aoki, I., Shibata, K., & Higuchi, M. (2022). MRS-measured glutamate versus GABA reflects excitatory versus inhibitory neural activities in awake mice. Journal of Cerebral Blood Flow & Metabolism, 42(1), 197. https://doi.org/10.1177/0271678X211045449

      Takei, Y., Fujihara, K., Tagawa, M., Hironaga, N., Near, J., Kasagi, M., Takahashi, Y., Motegi, T., Suzuki, Y., Aoyama, Y., Sakurai, N., Yamaguchi, M., Tobimatsu, S., Ujita, K., Tsushima, Y., Narita, K., & Fukuda, M. (2016). The inhibition/excitation ratio related to task-induced oscillatory modulations during a working memory task: A multtimodal-imaging study using MEG and MRS. NeuroImage, 128, 302–315. https://doi.org/10.1016/J.NEUROIMAGE.2015.12.057

      Tao, H. W., & Poo, M. M. (2005). Activity-dependent matching of excitatory and inhibitory inputs during refinement of visual receptive fields. Neuron, 45(6), 829–836. https://doi.org/10.1016/J.NEURON.2005.01.046

      Vanrullen, R., & MacDonald, J. S. P. (2012). Perceptual echoes at 10 Hz in the human brain. Current Biology. https://doi.org/10.1016/j.cub.2012.03.050

      Voytek, B., Kramer, M. A., Case, J., Lepage, K. Q., Tempesta, Z. R., Knight, R. T., & Gazzaley, A. (2015). Age-related changes in 1/f neural electrophysiological noise. Journal of Neuroscience, 35(38). https://doi.org/10.1523/JNEUROSCI.2332-14.2015

      Vreeswijk, C. V., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293), 1724–1726. https://doi.org/10.1126/SCIENCE.274.5293.1724

      Waschke, L., Wöstmann, M., & Obleser, J. (2017). States and traits of neural irregularity in the age-varying human brain. Scientific Reports, 7(1), 1–12. https://doi.org/10.1038/s41598-017-17766-4

      Weaver, K. E., Richards, T. L., Saenz, M., Petropoulos, H., & Fine, I. (2013). Neurochemical changes within human early blind occipital cortex. Neuroscience. https://doi.org/10.1016/j.neuroscience.2013.08.004

      Wu, Y. K., Miehl, C., & Gjorgjieva, J. (2022). Regulation of circuit organization and function through inhibitory synaptic plasticity. Trends in Neurosciences, 45(12), 884–898. https://doi.org/10.1016/J.TINS.2022.10.006

    1. Author response:

      Reviewer #1 (Public review):

      (1) Legionella effectors are often activated by binding to eukaryote-specific host factors, including actin. The authors should test the following: a) whether Lfat1 can fatty acylate small G-proteins in vitro; b) whether this activity is dependent on actin binding; and c) whether expression of the Y240A mutant in mammalian cells affects the fatty acylation of Rac3 (Figure 6B), or other small G-proteins.

      We were not able to express and purify full-length recombinant Lfat1 to perform fatty acylation of small GTPases in vitro. However, in cellulo, the overexpressed Y240A mutant still retained the ability to fatty acylate Rac3 and another small GTPase, RheB (see Author response image 1 below). We postulate that under infection conditions, actin binding might be required to fatty acylate certain GTPases because only small amounts of effector proteins are secreted into the host cell.

      Author response image 1.

      (2) It should be demonstrated that lysine residues on small G-proteins are indeed targeted by Lfat1. Ideally, the functional consequences of these modifications should also be investigated. For example, does fatty acylation of G-proteins affect GTPase activity or binding to downstream effectors?

      We mutated K178 on RheB and showed that this mutation abolished its fatty acylation by Lfat1 (see Author response image 2 below). We were not able to test whether fatty acylation by Lfat1 affects downstream effector binding.

      Author response image 2.

      (3) Line 138: Can the authors clarify whether the Lfat1 ABD induces bundling of F-actin filaments or promotes actin oligomerization? Does the Lfat1 ABD form multimers that bring multiple filaments together? If Lfat1 induces actin oligomerization, this effect should be experimentally tested and reported. Additionally, the impact of Lfat1 binding on actin filament stability should be assessed. This is particularly important given the proposed use of the ABD as an actin probe.

      The ABD does not form oligomers, as evidenced by its gel filtration profile. However, we do observe F-actin bundling in our in vitro F-actin polymerization experiments when both actin and the ABD are present at high concentrations (data not shown). At low ABD concentrations, there is no aggregation/bundling of F-actin.

      (4) Line 180: I think it's too premature to refer to the interaction as having "high specificity and affinity." We really don't know what else it's binding to.

      We have revised the text and reworded the sentence by removing "high specificity and affinity."

      (5) The authors should reconsider the color scheme used in the structural figures, particularly in Figures 2D and S4.

      We are not sure what the reviewer's specific concern is regarding the color scheme of the structural figures (Figures 2D and S4) and would appreciate clarification.

      (6) In Figure 3E, the WT curve fits the data poorly, possibly because the actin concentration exceeds the Kd of the interaction. It might fit better to a quadratic.

      We have performed quadratic fitting and replaced Figure 3E.
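      For completeness, the quadratic (ligand-depletion) binding model appropriate when the probe concentration approaches or exceeds the Kd can be sketched as follows. This is a minimal illustration with synthetic concentrations and a simple grid-search fit, not our actual fitting script; the probe concentration, units, and parameter values are assumptions for illustration only.

```python
import math

def quadratic_binding(actin, kd, probe=2.0, bmax=1.0):
    """Quadratic (tight-binding) model: fraction of a fixed probe bound
    when the probe concentration is comparable to or above Kd, so
    ligand depletion cannot be ignored. Concentrations in arbitrary
    (e.g., micromolar) units; 'probe' is the fixed ABD concentration
    (an illustrative value, not our experimental one)."""
    s = actin + probe + kd
    complex_conc = (s - math.sqrt(s * s - 4.0 * actin * probe)) / 2.0
    return bmax * complex_conc / probe

# Synthetic titration generated with Kd = 0.5, then recovered by grid search
titration = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
data = [quadratic_binding(a, kd=0.5) for a in titration]

def sse(kd):
    # Sum of squared errors between the model and the (synthetic) data
    return sum((quadratic_binding(a, kd) - y) ** 2 for a, y in zip(titration, data))

best_kd = min((0.05 * i for i in range(1, 101)), key=sse)
```

      In contrast to a simple hyperbolic fit, this form accounts for depletion of free actin by the bound complex, which is why it fits better when ligand concentrations exceed the Kd.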

      (7) The authors propose that the individual helices of the Lfat1 ABD could be expressed on separate proteins and used to target multi-component biological complexes to F-actin by genetically fusing each component to a split alpha-helix. This is an intriguing idea, but it should be tested as a proof of concept to support its feasibility and potential utility.

      It is a good suggestion. We plan to thoroughly test the feasibility of this idea as one of our future directions.

      (7) The plot in Figure S2D appears cropped on the X-axis or was generated from a ~2× binned map rather than the deposited one (pixel size ~0.83 Å, plot suggests ~1.6 Å). The reported pixel size is inconsistent between the Methods and Table 1; please clarify whether 0.83 Å refers to super-resolution.

      Yes, 0.83 Å refers to the super-resolution pixel size. We have updated the cryo-EM table accordingly.

      Reviewer #2 (Public review):

      Weaknesses:

      (1) The authors should use biochemical reactions to analyze the KFAT of Llfat1 on one or two small GTPases shown to be modified by this effector in cellulo. Such reactions may allow them to determine the role of actin binding in its biochemical activity. This notion is particularly relevant in light of recent studies that actin is a co-factor for the activity of LnaB and Ceg14 (PMID: 39009586; PMID: 38776962; PMID: 40394005). In addition, the study should be discussed in the context of these recent findings on the role of actin in the activity of L. pneumophila effectors.

      Our new data show that actin binding does not affect Lfat1 enzymatic activity (see the figure in our response to Reviewer #1). We have added these data to the paper as Figure S7. Accordingly, we have also revised the discussion by adding the following paragraph.

      “The discovery of Lfat1 as an F-actin–binding lysine fatty acyl transferase raised the intriguing question of whether its enzymatic activity depends on F-actin binding. Recent studies have shown that other Legionella effectors, such as LnaB and Ceg14, use actin as a co-factor to regulate their activities. For instance, LnaB binds monomeric G-actin to enhance its phosphoryl-AMPylase activity toward phosphorylated residues, resulting in unique ADPylation modifications in host proteins (Fu et al, 2024; Wang et al, 2024). Similarly, Ceg14 is activated by host actin to convert ATP and dATP into adenosine and deoxyadenosine monophosphate, thereby modulating ATP levels in L. pneumophila–infected cells (He et al, 2025). However, this does not appear to be the case for Lfat1. We found that Lfat1 mutants defective in F-actin binding retained the ability to modify host small GTPases when expressed in cells (Figure S7). These findings suggest that, rather than serving as a co-factor, F-actin may serve to localize Lfat1 via its actin-binding domain (ABD), thereby confining its activity to regions enriched in F-actin and enabling spatial specificity in the modification of host targets.”

      (2) The development of the ABD domain of Llfat1 as an F-actin domain is a nice extension of the biochemical and structural experiments. The authors need to compare the new probe to those currently commonly used ones, such as Lifeact, in labeling of the actin cytoskeleton structure.

      We fully agree with the reviewer’s insightful suggestion. However, a direct comparison of the Lfat1 ABD domain with commonly used actin probes such as Lifeact, as well as evaluation of the split α-helix probe (as suggested by Reviewer #1), would require extensive and technically demanding experiments. These are important directions that we plan to pursue in future studies.

    1. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This study reveals that TRPV1 signaling plays a key role in tympanic membrane (TM) healing by promoting macrophage recruitment and angiogenesis. Using a mouse TM perforation model, researchers found that blood-derived macrophages accumulated near the wound, driving angiogenesis and repair. TRPV1-expressing nerve fibers triggered neuroinflammatory responses, facilitating macrophage recruitment. Genetic Trpv1 mutation reduced macrophage infiltration, angiogenesis, and delayed healing. These findings suggest that targeting TRPV1 or stimulating sensory nerve fibers could enhance TM repair, improve blood flow, and prevent infections. This offers new therapeutic strategies for TM perforations and otitis media in clinical settings. This is an excellent and high-quality study that provides valuable insights into the mechanisms underlying TM wound healing.

      Strengths:

      The work is particularly important for elucidating the cellular and molecular processes involved in TM repair. However, there are several concerns about the current version.

      We sincerely thank Reviewer #1 for their time and effort in evaluating and improving our study. Below, we are pleased to address the Reviewer's concerns point by point.

      Weaknesses:

      Major concerns

      (1) The method of administration will be a critical factor when considering potential therapeutic strategies to promote TM healing. It would be beneficial if the authors could discuss possible delivery methods, such as topical application, transtympanic injection, or systemic administration, and their respective advantages and limitations for targeting TRPV1 signaling. For example, Dr. Kanemaru and his colleagues have proposed the use of Trafermin and Spongel to regenerate the eardrum.

      We are grateful to the reviewer for raising this important point. While the present study primarily focuses on the mechanistic role of TRPV1 in TM repair, we agree that the mode of therapeutic delivery will be pivotal in translating these findings into clinical practice. In response, we will expand the discussion to explore possible delivery methods—such as topical application, transtympanic injection, and systemic routes—along with their respective benefits and challenges. We will also cite the work by Dr. Kanemaru and colleagues as an example of how local delivery systems may facilitate TM regeneration.

      (2) The authors appear to have used surface imaging techniques to observe the TM. However, the TM consists of three distinct layers: the epithelial layer, the fibrous middle layer, and the inner mucosal layer. The authors should clarify whether the proposed mechanism involving TRPV1-mediated macrophage recruitment and angiogenesis is limited to the epithelial layer or if it extends to the deeper layers of the TM.

      We apologize for any confusion caused by our previous description. In our study, we utilized Z-stack confocal imaging to capture the full thickness of the TM, as illustrated in Author response image 1 (reconstructed from the acquired Z-sections). This imaging approach fully encompassed all three layers of the TM. Each sample was imaged using a 10X objective on an Olympus fluorescence microscope. Given the conical shape and size of the TM, we imaged it in four quadrants, acquiring approximately 30 optical sections (3 µm step) per region. Each acquired stack was projected and exported using FV10ASW 4.2 Viewer and then stitched together in Photoshop. The resulting Z-stack projections enabled us to visualize the distribution of macrophages, angiogenesis, and the localization of nerve fibers throughout the TM. We will include this detailed methodology in our revision to clarify any potential confusion.
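      As a generic illustration of the projection step, a maximum-intensity projection over the optical axis can be computed as below. This is a simplified stand-in for the FV10ASW/Photoshop pipeline we actually used; the stack dimensions and intensity values are synthetic.

```python
import numpy as np

def max_intensity_projection(zstack):
    """Collapse a (z, y, x) confocal stack into a single 2D image by
    keeping the brightest voxel along the optical (z) axis per pixel."""
    return zstack.max(axis=0)

# ~30 optical sections of a small synthetic field (16-bit intensities)
stack = np.zeros((30, 64, 64), dtype=np.uint16)
stack[12, 10, 20] = 4095  # a labeled structure visible in one section only
mip = max_intensity_projection(stack)
```

      Because the projection keeps the brightest voxel at each (y, x) position regardless of depth, structures from all three TM layers appear in the single projected image.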

      Author response image 1.

      Representative confocal images showing one quadrant of the TM collected from a CSF1R<sup>EGFP</sup> bone marrow transplanted mouse at day 7 post-perforation. (A-B) 3D-rendered views from different angles reveal the close spatial relationship between CSF1R<sup>EGFP</sup> cells (green) and blood vessels (red) within the TM. (C) Cross-sectional view highlights the depth-wise distribution of CSF1R<sup>EGFP</sup> cells (green) and blood vessels (red) across the layered TM architecture. All images were processed using Imaris Viewer x64 (version 10.2.0).

      Minor concerns

      In Figure 8, the schematic illustration presents a coronal section of the TM. However, based on the data provided in the manuscript, it is unclear whether the authors directly obtained coronal images in their study. To enhance the clarity and impact of the schematic, it would be helpful to include representative images of coronal sections showing macrophage infiltration, angiogenesis, and nerve fiber distribution in the TM.

      As noted above, we utilized Z-stack confocal imaging to capture the full thickness of the TM, enabling us to visualize structures across all three layers. This approach ensured that all layers were included in our analysis. Due to the thin and curved nature of the TM, traditional cross-sectional imaging often struggles to clearly depict the spatial relationships between macrophages, blood vessels, and nerve fibers, especially at low magnification as shown in Author response image 2. In response to the reviewer's suggestion, we will include representative coronal images in the revised manuscript to better illustrate the distribution of these structures at higher magnification.

      Author response image 2.

      Confocal images of eardrum cross-sections collected at day 1 (A), 3 (B), and 7 (C) post perforation to demonstrate the wound healing processes.

      Reviewer #2 (Public review):

      Summary:

      This study examines the role of TRPV1 signaling in the recruitment of monocyte-derived macrophages and the promotion of angiogenesis during tympanic membrane (TM) wound healing. The authors use a combination of genetic mouse models, macrophage depletion, and transcriptomic approaches to suggest that neuronal TRPV1 activity contributes to macrophage-driven vascular responses necessary for tissue repair.

      Strengths:

      (1) The topic of neuroimmune interactions in tissue regeneration is of interest and underexplored in the context of the TM, which presents a unique model due to its anatomical features.

      (2) The use of reporter mice and bone marrow chimeras allows for some dissection of immune cell origin.

      (3) The authors incorporate transcriptomic data to contextualize inflammatory and angiogenic processes during wound healing.

      We sincerely thank Reviewer #2 for their time and effort in improving our study and recognizing its strengths. Below, we are pleased to address the reviewer's concerns point by point.

      Weaknesses:

      (1) The primary claims of the manuscript are not convincingly supported by the evidence presented. Most of the data are correlative in nature, and no direct mechanistic experiments are included to establish causality between TRPV1 signaling and macrophage recruitment or function.

      We appreciate Reviewer #2's perspective on the lack of molecular mechanisms linking TRPV1 signaling and macrophages. However, our data demonstrate that TRPV1 mutation significantly affects macrophage recruitment and angiogenesis. This initial study primarily focuses on the intriguing phenomenon of how sensory nerve fibers are involved in eardrum immunity and wound healing, an area that has not been clearly reported in the literature before. We believe that further research is necessary to explore this topic in greater depth.

      (2) Functional validation of key molecular players (such as Tac1 or Spp1) is lacking, and their roles are inferred primarily from gene expression data rather than experimentally tested.

      Although we have identified the TAC1 and SPP1 signals as potentially important for TM wound healing for the first time, we agree with the Reviewer's view regarding the lack of molecular mechanisms explored in this study. We have not yet tested the downstream signaling pathways, but we plan to investigate them in a series of future studies. As this is an early report, we will continue to explore these signals and their potential clinical applications based on our initial findings moving forward.

      (3) The reuse of publicly available scRNA-seq data is not sufficiently integrated or extended to yield new biological insights, and it remains largely descriptive.

      We appreciate Reviewer #2 for highlighting this point. Leveraging publicly available scRNA-seq databases and established analysis pipelines not only saves time and resources—my lab recently collected macrophages from the eardrums of postnatal P15 mice, with each trial requiring 20 eardrums from 10 animals to obtain a sufficient number of cells—but also allows researchers to build on previous work and focus on new biological questions without the need to repeat experiments. A prior study conducted by Dr. Tward and his team utilized scRNA-seq data to make initial discoveries related to eardrum wound healing, primarily focusing on epithelial cells rather than macrophages. We are building on their raw data to uncover new biological insights regarding macrophages, even though we have not yet tested the unidentified signals, which we believe will be valuable to our peers.

      (4) The macrophage depletion model (CX3CR1CreER; iDTR) lacks specificity, and possible off-target or systemic effects are not addressed.

      We agree with Reviewer #2. Although the macrophage depletion model used in our study is a standard and widely used animal model (Shi et al., 2018) that has been adopted by many other laboratories, any macrophage depletion model may have potential off-target or systemic effects. We will discuss this limitation in our revision.

      (5) Several interpretations of the data appear overstated, particularly regarding the necessity of TRPV1 for monocyte recruitment and wound healing.

      We thank the reviewer for pointing this out. We will revise our manuscript where it is overstated accordingly.

      (6) Overall, the study appears to apply known concepts - namely, TRPV1-mediated neurogenic inflammation and macrophage-driven angiogenesis - to a new anatomical site without providing new mechanistic insight or advancing the field substantially.

      Although our study may not seem highly innovative at first glance, it reveals a previously unknown role of the TRPV1 pain signaling pathway in promoting eardrum healing for the first time. This healing process includes the recruitment of monocyte-derived macrophages and the formation of new blood vessels (angiogenesis). While this process has been documented in other organs, most research on macrophage-driven angiogenesis has been conducted using in vitro models, with very few studies demonstrating this process in vivo. Our findings could lead to new translational opportunities, especially considering that tympanic membrane perforation, along with damage-induced otitis media and conductive hearing loss, are common clinical issues affecting millions of people worldwide. Targeting TRPV1 signaling could enhance tympanic membrane immunity, improve blood circulation, promote the repair of damaged tympanic membranes, and ultimately prevent middle ear infections—an idea that has not been previously proposed.

      Overall:

      While the study addresses an interesting topic, the current version does not provide sufficiently strong or novel evidence to support its major conclusions. Additional mechanistic experiments and more rigorous validation would be necessary to substantiate the proposed model and clarify the relevance of the findings beyond this specific tissue context.

      We greatly thank the two reviewers for their helpful critiques to improve our study. We especially thank the Section Editors for their insightful and constructive comments on this initial study.

      References:

      Shi, J., Hua, L., Harmer, D., Li, P., & Ren, G. (2018). Cre driver mice targeting macrophages. Methods in Molecular Biology, 1784, 263–275.

    1. Author response:

      Reviewer #1 (Public review):

      Summary:

      This article investigates the origin of movement slowdown in weightlessness by testing two possible hypotheses: the first is based on a strategic and conservative slowdown, presented as a scaling of the motion kinematics without altering its profile, while the second is based on the hypothesis of a misestimation of effective mass by the brain due to an alteration of gravity-dependent sensory inputs, which alters the kinematics following a controller parameterization error.

      Strengths:

      The article convincingly demonstrates that trajectories are affected in 0g conditions, as in previous work. It is interesting, and the results appear robust. However, I have two major reservations about the current version of the manuscript that prevent me from endorsing the conclusion in its current form.

      Weaknesses:

      (1) First, the hypothesis of a strategic and conservative slow down implicitly assumes a similar cost function, which cannot be guaranteed, tested, or verified. For example, previous work has suggested that changing the ratio between the state and control weight matrices produced an alteration in movement kinematics similar to that presented here, without changing the estimated mass parameter (Crevecoeur et al., 2010, J Neurophysiol, 104 (3), 1301-1313). Thus, the hypothesis of conservative slowing cannot be rejected. Such a strategy could vary with effective mass (thus showing a statistical effect), but the possibility that the data reflect a combination of both mechanisms (strategic slowing and mass misestimation) remains open.

We tested whether changing the ratio between the state and control weight matrices can generate the observed effect. As shown in Author response image 1 and Author response image 2, a cost-function change cannot simultaneously produce reduced peak velocity/acceleration and an advance in their timing, but a change in the mass estimate can. In other words, mass underestimation alone can explain the two key findings, amplitude reduction and timing advance. We cannot exclude the possibility of a change in cost function on top of the mass underestimation, but the principle of Occam's Razor would support adhering to the simpler explanation, i.e., using body mass underestimation to explain the key findings. We will include our exploration of possible changes in cost function in the revision (in the Supplemental Materials).

      Author response image 1.

Simulation using an altered cost function with α = 3.0. Panels A, B, and E show simulated position, velocity, and acceleration profiles, respectively, for the three movement directions. Solid lines correspond to pre- and post-exposure conditions, while dashed lines represent the in-flight condition. Panels C and D display the peak velocity and its timing across the three phases (Pre, In, Post), and Panels F and G show the corresponding peak acceleration and its timing. Note that varying the cost function, while reducing peak velocity/acceleration, erroneously predicts a delayed timing of peak velocity/acceleration.

      Author response image 2.

Simulation results using a cost function with α = 0.3. The format is the same as in Author response image 1. Note that this ten-fold decrease in α, while finally getting the timing of peak velocity/acceleration right (advanced), erroneously predicts an increased peak velocity/acceleration.

      (2) The main strength of the article is the presence of directional effects expected under the hypothesis of mass estimation error. However, the article lacks a clear demonstration of such an effect: indeed, although there appears to be a significant effect of direction, I was not sure that this effect matched the model's predictions. A directional effect is not sufficient because the model makes clear quantitative predictions about how this effect should vary across directions. In the absence of a quantitative match between the model and the data, the authors' claims regarding the role of misestimating the effective mass remain unsupported.

Our paper does not aim to quantitatively reproduce human reaching movements in microgravity. We will make this clearer in the revision.

(1) The model is a simplification of the actual situation. For example, it simulates an ideal case of moving a point mass (the effective mass) without friction and without considering Coriolis and centripetal torques, whereas in the actual task people move a finger across a touch screen. The two-link arm model assumes planar movements, but our participants moved their hands over a tabletop without vertical support constraining the movement to 2D.

      (2) Our study merely uses well-established (though simplified) models to qualitatively predict the overall behavioral patterns if mass underestimation is at play. For this purpose, the results are well in line with models’ qualitative predictions: we indeed confirm that key kinematic features (peak velocity and acceleration) follow the same ranking order of movement direction conditions as predicted.

(3) Using model simulation to qualitatively predict human behavioral patterns is a common practice in motor control studies; prominent examples include the papers on optimal feedback control (Todorov, 2004, 2005) and movement vigor (Shadmehr et al., 2016). In fact, our model was inspired by the model in the latter paper.

      Citations:

      Todorov, E. (2004). Optimality principles in sensorimotor control. Nature Neuroscience, 7(9), 907.

      Todorov, E. (2005). Stochastic optimal control and estimation methods adapted to the noise characteristics of the sensorimotor system. Neural Computation, 17(5), 1084–1108.

      Shadmehr, R., Huang, H. J., & Ahmed, A. A. (2016). A Representation of Effort in Decision-Making and Motor Control. Current Biology: CB, 26(14), 1929–1934.
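As background for the directional predictions discussed above, the anisotropy of the arm's effective mass at the hand can be sketched in a few lines using the standard operational-space formula m_eff = 1 / (uᵀ J M⁻¹ Jᵀ u). This is only an illustrative computation: the link parameters below are generic textbook-style assumptions, not the values used in our simulations.

```python
import numpy as np

def effective_mass(q1, q2, direction,
                   m1=1.9, m2=1.1, l1=0.31, l2=0.34,
                   lc1=0.165, lc2=0.19, I1=0.025, I2=0.045):
    """Effective mass at the hand along a unit direction for a planar
    two-link arm: m_eff = 1 / (u^T J M^-1 J^T u).

    All link parameters are generic illustrative values (assumptions),
    not those of the study's simulations."""
    c2 = np.cos(q2)
    # Joint-space inertia matrix M(q) of the two-link arm.
    M = np.array([
        [I1 + I2 + m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2),
         I2 + m2*(lc2**2 + l1*lc2*c2)],
        [I2 + m2*(lc2**2 + l1*lc2*c2),
         I2 + m2*lc2**2]])
    # Hand Jacobian J(q): maps joint velocities to hand velocities.
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    J = np.array([[-l1*s1 - l2*s12, -l2*s12],
                  [ l1*c1 + l2*c12,  l2*c12]])
    u = np.asarray(direction, float)
    u = u / np.linalg.norm(u)
    mobility = J @ np.linalg.inv(M) @ J.T   # inverse operational-space inertia
    return 1.0 / (u @ mobility @ u)

# Example posture: shoulder at 45 deg, elbow flexed 90 deg.
q1, q2 = np.deg2rad(45.0), np.deg2rad(90.0)
for ang in (45, 90, 135):
    u = (np.cos(np.deg2rad(ang)), np.sin(np.deg2rad(ang)))
    print(f"{ang} deg: m_eff = {effective_mass(q1, q2, u):.2f} kg")
```

For this assumed posture, the sketch reproduces the qualitative ordering used in the main text: the hand is "lightest" along 45° and "heaviest" along 135°.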

      In general, both the hypotheses of slowing motion (out of caution) and misestimating mass have been put forward in the past, and the added value of this article lies in demonstrating that the effect depended on direction. However, (1) a conservative strategy with a different cost function can also explain the data, and (2) the quantitative match between the directional effect and the model's predictions has not been established.

      Specific points:

      (1) I noted a lack of presentation of raw kinematic traces, which would be necessary to convince me that the directional effect was related to effective mass as stated.

      We are happy to include exemplary speed and acceleration trajectories. One example subject’s detailed trajectories are shown below and will be included in the revision. The reduced and advanced velocity/acceleration peaks are visible in typical trials.

      Author response image 3.

      Hand speed profiles (upper panels), hand acceleration profiles (middle panels) and speed profiles of the primary submovements (lower panels) towards different directions from an example participant.

      (2) The presentation and justification of the model require substantial improvement; the reason for their presence in the supplementary material is unclear, as there is space to present the modelling work in detail in the main text. Regarding the model, some choices require justification: for example, why did the authors ignore the nonlinear Coriolis and centripetal terms?

In brief, our simulations show that Coriolis and centripetal forces, despite some directional anisotropy, have only small effects on the predicted kinematics (see our responses to Reviewer 2). We will move the model descriptions into the main text, with more justification for using a simple model.

      (3) The increase in the proportion of trials with subcomponents is interesting, but the explanatory power of this observation is limited, as the initial percentage was already quite high (from 60-70% during the initial study to 70-85% in flight). This suggests that the potential effect of effective mass only explains a small increase in a trend already present in the initial study. A more critical assessment of this result is warranted.

Indeed, the percentage of submovements increases only slightly, but the more important change is that the IPI (the inter-peak interval between submovements) also increases at the same time. Moreover, it is the effect of IPI that significantly predicts the duration increase in our linear mixed model. We will highlight this fact in our revision to avoid confusion.

      Reviewer #2 (Public review):

      This study explores the underlying causes of the generalized movement slowness observed in astronauts in weightlessness compared to their performance on Earth. The authors argue that this movement slowness stems from an underestimation of mass rather than a deliberate reduction in speed for enhanced stability and safety.

      Overall, this is a fascinating and well-written work. The kinematic analysis is thorough and comprehensive. The design of the study is solid, the collected dataset is rare, and the model tends to add confidence to the proposed conclusions. That being said, I have several comments that could be addressed to consolidate interpretations and improve clarity.

      Main comments:

      (1) Mass underestimation

a) While this interpretation is supported by data and analyses, it is not clear whether this gives a complete picture of the underlying phenomena. The two hypotheses (i.e., mass underestimation vs deliberate speed reduction) can only be distinguished in terms of velocity/acceleration patterns, which should display specific changes during the flight with a mass underestimation. The experimental data generally shows the expected changes but for the 45° condition, no changes are observed during flight compared to the pre- and post-phases (Figure 4). In Figure 5E, only a change in the primary submovement peak velocity is observed for 45°, but this finding relies on a more involved decomposition procedure. It suggests that there is something specific about 45° (beyond its low effective mass). In such planar movements, 45° often corresponds to a movement which is close to single-joint, whereas 90° and 135° involve multi-joint movements. If so, the increased proportion of submovements in 90° and 135° could indicate that participants had more difficulties in coordinating multi-joint movements during flight. Besides inertia, Coriolis and centripetal effects may be non-negligible in such fast planar reaching (Hollerbach & Flash, Biol Cyber, 1982) and, interestingly, they would also be affected by a mass underestimation (thus, this is not necessarily incompatible with the author's view; yet predicting the effects of a mass underestimation on Coriolis/centripetal torques would require a two-link arm model). Overall, I found the discrepancy between the 45° direction and the other directions under-exploited in the current version of the article. In sum, could the corrective submovements be due to a misestimation of Coriolis/centripetal torques in the multi-joint dynamics (caused specifically -or not- by a mass underestimation)?

We agree that the effect of mass underestimation is smaller in the 45° direction than in the other two directions, possibly because it relies on a single joint (elbow) rather than two joints (elbow and shoulder). Moreover, movement correction using one joint is probably easier (as also suggested by another reviewer); this possibility will be further discussed in the revision. However, we find that our model simplification (excluding Coriolis and centripetal torques) does not affect our main conclusions. First, we performed a simple simulation and found that, under the current optimal hand trajectory, incorporating Coriolis and centripetal torques has only a limited impact on the resulting joint torques (see simulations in Author response image 4). One reason is that we used smaller movements than Hollerbach & Flash did. In addition, we applied an optimal feedback control model to a more realistic 2-joint arm configuration. Despite its simplicity, this model produced a speed profile consistent with our current predictions and made similar predictions regarding the effects of mass underestimation (Author response image 5). We will provide a more realistic 2-joint arm model with muscle dynamics in the revision to improve the simulation further, but the message will be the same: including or excluding Coriolis and centripetal torques does not affect the theoretical predictions about mass underestimation. Second, as the reviewer correctly pointed out, the mass (and its underestimation) also affects these two torque terms, so its effect on the kinematic measures changes little even with the full model.

      Author response image 4.

Joint angles and joint torques of the shoulder and elbow for simulated trajectories towards different directions. A. Shoulder (green) and elbow (blue) angles over time for the 45° movement direction. B. Components of the joint torques at the shoulder. Solid line: net torque at the shoulder; dotted line: shoulder inertial torque; dashed line: shoulder Coriolis and centripetal torque. C. Same plot as B for the elbow joint. D–F. Torque components in the full 360° workspace, beyond the three movement directions (45°, 90°, and 135°). D. Net torque. E. Inertial torque. F. Combined Coriolis and centripetal torque. Note that the polar plot of Coriolis/centripetal torques (F) has a scale two orders of magnitude smaller than that of the inertial torque in our simulation. All torques were simulated with the optimal movement duration; torques were squared and integrated over each trajectory.

      Author response image 5.

Comparison between simulation results from the full model with the addition of Coriolis/centripetal torques (left) and the simplified model (right). The position profiles (top) and the corresponding speed profiles (bottom) are shown. Solid lines are for normal mass estimation and dashed lines for mass underestimation in microgravity. The three colors represent three movement directions (dark red: 45°, red: 90°, yellow: 135°). The full model used a 2-link arm model without realistic muscle dynamics yet (to be included in the formal revision), so the speed profiles are not smooth. Importantly, the full model also predicts the same effect of mass underestimation, i.e., reduced peak velocity/acceleration and an advance in their timing.
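The torque comparison summarized in Author response image 4 can be illustrated with a self-contained sketch. The link parameters and the minimum-jerk joint excursion below are assumptions chosen to mimic a short, fast reach (~12 cm in ~350 ms); this is not our actual simulation code.

```python
import numpy as np

# Illustrative (assumed) link parameters, not the study's values.
m1, m2 = 1.9, 1.1        # segment masses [kg]
l1, l2 = 0.31, 0.34      # segment lengths [m]
lc1, lc2 = 0.165, 0.19   # centre-of-mass distances [m]
I1, I2 = 0.025, 0.045    # segment moments of inertia [kg m^2]

def arm_torques(q, dq, ddq):
    """Inertial torque M(q)ddq and Coriolis/centripetal torque C(q,dq)dq
    for a planar two-link arm (standard rigid-body equations)."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([
        [I1 + I2 + m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2),
         I2 + m2*(lc2**2 + l1*lc2*c2)],
        [I2 + m2*(lc2**2 + l1*lc2*c2), I2 + m2*lc2**2]])
    h = m2 * l1 * lc2 * s2
    cor = np.array([-h*(2*dq[0]*dq[1] + dq[1]**2), h*dq[0]**2])
    return M @ ddq, cor

# Minimum-jerk joint excursion standing in for one short, fast reach;
# amplitudes and posture are assumptions for illustration only.
T = 0.35
q0 = np.deg2rad([45.0, 90.0])
dq_amp = np.deg2rad([15.0, -20.0])
t = np.linspace(0.0, T, 200)
tau = t / T
s = 10*tau**3 - 15*tau**4 + 6*tau**5               # min-jerk time course
ds = (30*tau**2 - 60*tau**3 + 30*tau**4) / T
dds = (60*tau - 180*tau**2 + 120*tau**3) / T**2

inert_sq, cor_sq = 0.0, 0.0
for k in range(len(t)):
    q = q0 + dq_amp * s[k]
    ti, tc = arm_torques(q, dq_amp * ds[k], dq_amp * dds[k])
    inert_sq += np.sum(ti**2)
    cor_sq += np.sum(tc**2)

# For such short reaches the Coriolis/centripetal contribution stays small
# relative to the inertial torques.
print(f"inertial/Coriolis squared-torque ratio: {inert_sq / cor_sq:.1f}")
```

Under these assumptions the integrated squared inertial torque dominates the Coriolis/centripetal term, in line with the scale difference visible in panel F of Author response image 4.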

b) Additionally, since the taikonauts are tested after 2 or 3 weeks in flight, one could also assume that neuromuscular deconditioning explains (at least in part) the general decrease in movement speed. Can the authors explain how to rule out this alternative interpretation? For instance, weaker muscles could account for slower movements within a classical time-effort trade-off (as more neural effort would be needed to generate a similar amount of muscle force, thereby suggesting a purposive slowing down of movement). Therefore, could some neuromuscular deconditioning, combined with a difficulty in coordinating multi-joint movements in weightlessness (due to a misestimation of Coriolis/centripetal torques), provide an alternative explanation for the observed results (slowing down + more submovements)?

Neuromuscular deconditioning is indeed a space or microgravity effect; thanks for bringing this up, as we omitted a discussion of its possible contribution in the initial submission. However, muscle weakness is smaller for upper-limb muscles than for postural and lower-limb muscles (Tesch et al., 2005). Handgrip strength decreases by 5% to 15% after several months (Moosavi et al., 2021), and atrophy of shoulder and elbow muscles, though not directly measured, was estimated to be minimal (Shen et al., 2017). Muscle weakness is thus unlikely to play a major role here, since our reaching task involves small movements (~12 cm) with joint torques on the order of ~2 N·m. Coriolis/centripetal torques do not affect the putative mass effect (as shown in the simulations above). The reviewer suggests that poor coordination in microgravity might contribute to the slowing down and the additional submovements. Poor coordination is an umbrella term for any motor control problem, and it can explain any microgravity effect; the feedforward control changes caused by mass underestimation can also be viewed as poor coordination. If we take it to mean specifically the coordination of the two joints, or of the Coriolis/centripetal torques, we should expect to see some change in trajectory curvature in microgravity. However, we further analyzed our reaching trajectories and found no sign of increased curvature in our large collection of reaching movements. We probably have the largest dataset of reaching movements collected in microgravity thus far, given that we had 12 taikonauts and each of them performed about 480 to 840 reaching trials during their spaceflight. We believe the probability of a Type II error is quite low here. We will include descriptive statistics of these new analyses in our revision.

      Citation: Tesch, P. A., Berg, H. E., Bring, D., Evans, H. J., & LeBlanc, A. D. (2005). Effects of 17-day spaceflight on knee extensor muscle function and size. European journal of applied physiology, 93(4), 463-468.

      Moosavi, D., Wolovsky, D., Depompeis, A., Uher, D., Lennington, D., Bodden, R., & Garber, C. E. (2021). The effects of spaceflight microgravity on the musculoskeletal system of humans and animals, with an emphasis on exercise as a countermeasure: A systematic scoping review. Physiological Research, 70(2), 119.

      Shen, H., Lim, C., Schwartz, A. G., Andreev-Andrievskiy, A., Deymier, A. C., & Thomopoulos, S. (2017). Effects of spaceflight on the muscles of the murine shoulder. The FASEB Journal, 31(12), 5466.

      (2) Modelling

      a) The model description should be improved as it is currently a mix of discrete time and continuous time formulations. Moreover, an infinite-horizon cost function is used, but I thought the authors used a finite-horizon formulation with the prefixed duration provided by the movement utility maximization framework of Shadmehr et al. (Curr Biol, 2016). Furthermore, was the mass underestimation reflected both in the utility model and the optimal control model? If so, did the authors really compute the feedback control gain with the underestimated mass but simulate the system with the real mass? This is important because the mass appears both in the utility framework and in the LQ framework. Given the current interpretations, the feedforward command is assumed to be erroneous, and the feedback command would allow for motor corrections. Therefore, it could be clarified whether the feedback command also misestimates the mass or not, which may affect its efficiency. For instance, if both feedforward and feedback motor commands are based on wrong internal models (e.g., due to the mass underestimation), one may wonder how the astronauts would execute accurate goal-directed movements.

      b) The model seems to be deterministic in its current form (no motor and sensory noise). Since the framework developed by Todorov (2005) is used, sensorimotor noise could have been readily considered. One could also assume that motor and sensory noise increase in microgravity, and the model could inform on how microgravity affects the number of submovements or endpoint variance due to sensorimotor noise changes, for instance.

      c) Finally, how does the model distinguish the feedforward and feedback components of the motor command that are discussed in the paper, given that the model only yields a feedback control law? Does 'feedforward' refer to the motor plan here (i.e., the prefixed duration and arguably the precomputed feedback gain)?

      We appreciate these very helpful suggestions about our model presentation. Indeed, our initial submission did not give detailed model descriptions in the main text, due to text limits for early submissions. We actually used a finite-horizon framework throughout, with a pre-specified duration derived from the utility model. In the revision, we will make that point clear, and we will also revise the Methods section to explicitly distinguish feedforward vs. feedback components, clarify the use of mass underestimation in both utility and control models, and update the equations accordingly.
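To make the feedforward/feedback distinction concrete, the core qualitative prediction can be illustrated with a deliberately simple open-loop sketch: forces are planned from the estimated mass over the planned duration and then act on the true mass. A minimum-jerk plan stands in here for the optimal-control plan, and the mass ratio and durations are illustrative assumptions (the shorter in-flight duration standing in for the utility-optimal duration under a lighter estimated mass).

```python
import numpy as np

def simulate_reach(d, T_plan, m_est, m_true, n=400):
    """Open-loop (feedforward) reach: a minimum-jerk plan for distance d
    over T_plan is converted to forces via the ESTIMATED mass, then
    integrated with the TRUE mass. Returns (time, speed) of the executed
    movement. Minimum-jerk is an illustrative stand-in for the optimal plan."""
    t = np.linspace(0.0, T_plan, n)
    tau = t / T_plan
    a_plan = d * (60*tau - 180*tau**2 + 120*tau**3) / T_plan**2
    force = m_est * a_plan            # feedforward command
    a_true = force / m_true           # what the limb actually does
    v = np.cumsum(a_true) * (t[1] - t[0])
    return t, v

d, m_true = 0.12, 2.0                 # 12 cm reach, 2 kg effective mass (assumed)

# Ground: correct mass estimate, planned duration 0.40 s (assumed).
t_g, v_g = simulate_reach(d, 0.40, m_true, m_true)
# Flight: mass underestimated by 30%; a lighter estimated mass also yields a
# shorter utility-optimal planned duration (assumed 0.36 s for illustration).
t_f, v_f = simulate_reach(d, 0.36, 0.7 * m_true, m_true)

print("peak speed ground/flight:", v_g.max(), v_f.max())
print("peak time  ground/flight:", t_g[np.argmax(v_g)], t_f[np.argmax(v_f)])
```

With these assumed numbers, the executed flight movement shows both signatures discussed above: a lower peak speed and an earlier peak time, without any feedback correction entering the picture.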

      (3) Brevity of movements and speed-accuracy trade-off

      The tested movements are much faster (average duration approx. 350 ms) than similar self-paced movements that have been studied in other works (e.g., Wang et al., J Neurophysiology, 2016; Berret et al., PLOS Comp Biol, 2021, where movements can last about 900-1000 ms). This is consistent with the instructions to reach quickly and accurately, in line with a speed-accuracy trade-off. Was this instruction given to highlight the inertial effects related to the arm's anisotropy? One may however, wonder if the same results would hold for slower self-paced movements (are they also with reduced speed compared to Earth performance?). Moreover, a few other important questions might need to be addressed for completeness: how to ensure that astronauts did remember this instruction during the flight? (could the control group move faster because they better remembered the instruction?). Did the taikonauts perform the experiment on their own during the flight, or did one taikonaut assume the role of the experimenter?

Thanks for highlighting the brevity of movements in our experiment. Our intention in emphasizing fast movements was to rigorously test whether movement is indeed slowed down in microgravity. The observed prolongation of movement duration clearly shows that microgravity affects movement duration even when participants are pushed to move fast. The second reason for using fast movements is to highlight that feedforward control is affected in microgravity: mass underestimation affects feedforward control in the first place, and slow movements would inevitably involve online corrections that might obscure its effect. Note that movement slowing is observed not only in our speed-emphasized reaching task but also in whole-arm pointing in other astronaut studies (Berger, 1997; Sangals, 1999), which are cited in our paper. We thus believe these findings are generalizable.

Regarding the consistency of instructions: all our experiments conducted in the Tiangong space station were monitored in real time by experimenters in the Control Center in Beijing. The task instructions were presented on the initial display of the data-acquisition application, and ample reading time was allowed. In fact, all the pre-, in-, and post-flight test sessions were administered by the same group of experimenters with the same instructions. It is common for astronauts to serve as both participants and experimenters at the same time, and they were well trained for this role on the ground. Note that we had multiple pre-flight test sessions to familiarize them with the task. All these rigorous measures were in place to obtain high-quality data. We will include these experimental details and the rationale for emphasizing fast movements in the revision.

      Citations:

      Berger, M., Mescheriakov, S., Molokanova, E., Lechner-Steinleitner, S., Seguer, N., & Kozlovskaya, I. (1997). Pointing arm movements in short- and long-term spaceflights. Aviation, Space, and Environmental Medicine, 68(9), 781–787.

      Sangals, J., Heuer, H., Manzey, D., & Lorenz, B. (1999). Changed visuomotor transformations during and after prolonged microgravity. Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale, 129(3), 378–390.

      (4) No learning effect

      This is a surprising effect, as mentioned by the authors. Other studies conducted in microgravity have indeed revealed an optimal adaptation of motor patterns in a few dozen trials (e.g., Gaveau et al., eLife, 2016). Perhaps the difference is again related to single-joint versus multi-joint movements. This should be better discussed given the impact of this claim. Typically, why would a "sensory bias of bodily property" persist in microgravity and be a "fundamental constraint of the sensorimotor system"?

We believe the differences between our study and Gaveau et al.'s study cannot simply be attributed to single-joint versus multi-joint movements. One of the most salient differences is that their adaptation concerns incorporating microgravity into control so as to minimize effort, while ours concerns correctly perceiving body mass. We will elaborate on possible reasons for the lack of learning in light of this previous study.

We can elaborate on "sensory bias" and "fundamental constraint of the sensorimotor system". If an inertial change is perceived (like an extra weight attached to the forearm, as in previous motor adaptation studies), people can adapt their reaching within tens of trials. In that case the sensory cues are veridical, as they correctly signal the inertial perturbation. In microgravity, however, the reduced gravitational pull and altered proprioceptive inputs constantly inform the controller that the body mass is less than its actual magnitude. In other words, sensory cues in space are misleading for estimating body mass. The resulting sensory bias prevents the sensorimotor system from adapting correctly. Our statement was too brief in the initial submission; we will expand it in the revision.

      Reviewer #3 (Public review):

      Summary:

      The authors describe an interesting study of arm movements carried out in weightlessness after a prolonged exposure to the so-called microgravity conditions of orbital spaceflight. Subjects performed radial point-to-point motions of the fingertip on a touch pad. The authors note a reduction in movement speed in weightlessness, which they hypothesize could be due to either an overall strategy of lowering movement speed to better accommodate the instability of the body in weightlessness or an underestimation of body mass. They conclude for the latter, mainly based on two effects. One, slowing in weightlessness is greater for movement directions with higher effective mass at the end effector of the arm. Two, they present evidence for an increased number of corrective submovements in weightlessness. They contend that this provides conclusive evidence to accept the hypothesis of an underestimation of body mass.

      Strengths:

      In my opinion, the study provides a valuable contribution, the theoretical aspects are well presented through simulations, the statistical analyses are meticulous, the applicable literature is comprehensively considered and cited, and the manuscript is well written.

      Weaknesses:

      Nevertheless, I am of the opinion that the interpretation of the observations leaves room for other possible explanations of the observed phenomenon, thus weakening the strength of the arguments.

      First, I would like to point out an apparent (at least to me) divergence between the predictions and the observed data. Figures 1 and S1 show that the difference between predicted values for the 3 movement directions is almost linear, with predictions for 90º midway between predictions for 45º and 135º. The effective mass at 90º appears to be much closer to that of 45º than to that of 135º (Figure S1A). But the data shown in Figure 2 and Figure 3 indicate that movements at 90º and 135º are grouped together in terms of reaction time, movement duration, and peak acceleration, while both differ significantly from those values for movements at 45º.

      Furthermore, in Figure 4, the change in peak acceleration time and relative time to peak acceleration between 1g and 0g appears to be greater for 90º than for 135º, which appears to me to be at least superficially in contradiction with the predictions from Figure S1. If the effective mass is the key parameter, wouldn't one expect as much difference between 90º and 135º as between 90º and 45º? It is true that peak speed (Figure 3B) and peak speed time (Figure 4B) appear to follow the ordering according to effective mass, but is there a mathematical explanation as to why the ordering is respected for velocity but not acceleration? These inconsistencies weaken the author's conclusions and should be addressed.

Indeed, the model predicts an almost equal separation between 45° and 90° and between 90° and 135°, while the data indicate that the spacing between 45° and 90° is much smaller than that between 90° and 135°. We do not regard this divergence as evidence undermining our main conclusion, since (1) the model is a simplification of the actual situation: it simulates an ideal case of moving a point mass (the effective mass) without friction and without considering Coriolis and centripetal torques; and (2) our study does not make quantitative predictions of all the key kinematic measures, which would require model fitting and parameter estimation; instead, it uses well-established (though simplified) models to qualitatively predict the overall behavioral pattern. For this purpose, the results are well in line with our expectations: although we did not find equal spacing between the direction conditions, we do confirm that the key kinematic properties (Figure 2 and Figure 3, as questioned) follow the predicted ranking order of directions.

      We thank the reviewer for pointing out the apparent discrepancy between model simulation and observed data. We will elaborate on the reasons behind the discrepancy in the revision.

      Then, to strengthen the conclusions, I feel that the following points would need to be addressed:

      (1) The authors model the movement control through equations that derive the input control variable in terms of the force acting on the hand and treat the arm as a second-order low-pass filter (Equation 13). Underestimation of the mass in the computation of a feedforward command would lead to a lower-than-expected displacement to that command. But it is not clear if and how the authors account for a potential modification of the time constants of the 2nd order system. The CNS does not effectuate movements with pure torque generators. Muscles have elastic properties that depend on their tonic excitation level, reflex feedback, and other parameters. Indeed, Fisk et al.* showed variations of movement characteristics consistent with lower muscle tone, lower bandwidth, and lower damping ratio in 0g compared to 1g. Could the variations in the response to the initial feedforward command be explained by a misrepresentation of the limbs' damping and natural frequency, leading to greater uncertainty about the consequences of the initial command? This would still be an argument for unadapted feedforward control of the movement, leading to the need for more corrective movements. But it would not necessarily reflect an underestimation of body mass.

*Fisk, J., Lackner, J. R., & DiZio, P. (1993). Gravitoinertial force level influences arm movement control. Journal of Neurophysiology, 69(2), 504-511.

We agree that muscle properties, tonic excitation level, and proprioception-mediated reflexes all contribute to reaching control. The Fisk et al. (1993) study indeed showed that arm movement kinematics change, possibly owing to lower muscle tone and/or damping. However, reduced muscle damping and reduced spindle activity are more likely to affect feedback-based movements. In Fisk et al.'s study, people performed continuous arm movements with their eyes closed; their movements thus relied largely on proprioceptive control. Our major findings concern feedforward control, i.e., the reduced and "advanced" peak velocity/acceleration in discrete, ballistic reaching movements. Note that peak acceleration occurs as early as approximately 90-100 ms into the movement, clearly showing that feedforward control is affected, which is a different effect from Fisk et al.'s findings. It is unlikely that people "advanced" their peak velocity/acceleration because they felt the need for more corrective movements later. Thus, underestimation of body mass remains the most plausible explanation.

      (2) The movements were measured by having the subjects slide their finger on the surface of a touch screen. In weightlessness, the implications of this contact are expected to be quite different than those on the ground. In weightlessness, the taikonauts would need to actively press downward to maintain contact with the screen, while on Earth, gravity will do the work. The tangential forces that resist movement due to friction might therefore be different in 0g. This could be particularly relevant given that the effect of friction would interact with the limb in a direction-dependent fashion, given the anisotropy of the equivalent mass at the fingertip evoked by the authors. Is there some way to discount or control for these potential effects?

We agree that friction might play a role here, but normal interaction with a touch screen typically involves friction forces between 0.1 and 0.5 N (e.g., Ayyildiz et al., 2018), and we believe the directional variation is even smaller than 0.1 N. This is very small compared to the force used to accelerate the arm for the reaching movement (10-15 N). Thus, friction anisotropy is unlikely to explain our data.

      Citation: Ayyildiz M, Scaraggi M, Sirin O, Basdogan C, Persson BNJ. Contact mechanics between the human finger and a touchscreen under electroadhesion. Proc Natl Acad Sci U S A. 2018 Dec 11;115(50):12668-12673.
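As a quick sanity check, the order-of-magnitude comparison above can be sketched numerically. The force values are the ranges quoted in our response, not new measurements:

```python
# Hypothetical worked comparison of fingertip friction vs. reaching force,
# using the ranges quoted in the text (Ayyildiz et al., 2018).
friction_range_n = (0.1, 0.5)        # typical finger-touchscreen friction, in newtons
reach_force_range_n = (10.0, 15.0)   # force accelerating the arm during reaching

# Worst case for our argument: largest friction vs. smallest reaching force.
max_ratio = friction_range_n[1] / reach_force_range_n[0]
print(f"Friction is at most {max_ratio:.0%} of the reaching force")  # at most 5%
```

Even in the worst case (maximum friction, minimum reaching force), friction amounts to about 5% of the propulsive force, so its directional variation would be smaller still.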

      (3) The carefully crafted modelling of the limb neglects, nevertheless, the potential instability of the base of the arm. While the taikonauts were able to use their left arm to stabilize their bodies, it is not clear to what extent active stabilization with the contralateral limb can reproduce the stability of the human body seated in a chair in Earth gravity. Unintended motion of the shoulder could account for a smaller-than-expected displacement of the hand in response to the initial feedforward command and/or greater propensity for errors (with a greater need for corrective submovements) in 0g. The direction of movement with respect to the anchoring point could lead to the dependence of the observed effects on movement direction. Could this be tested in some way, e.g., by testing subjects on the ground while standing on an unstable base of support or sitting on a swing, with the same requirement to stabilize the torso using the contralateral arm?

      Body stabilization is always a challenge for human movement studies in space. We minimized its potential confounding effects by using left-hand grasping and foot straps for postural support throughout the experiment. We would argue that shoulder stability is an unlikely explanation, because unexpected shoulder instability should not affect the feedforward (early) part of the ballistic reaching movement: the reduced peak acceleration and its early peak were observed at about 90-100 ms after movement initiation. This effect occurs too early to be explained by an unexpected stability issue.

      The arguments for an underestimation of body mass would be strengthened if the authors could address these points in some way.

    1. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In this study from Zhu and colleagues, a clear role for MED26 in mouse and human erythropoiesis is demonstrated that is also mapped to amino acids 88-480 of the human protein. The authors also show the unique expression of MED26 in later-stage erythropoiesis and propose transcriptional pausing and condensate formation mechanisms for MED26's role in promoting erythropoiesis. Despite the authors' introductory claim that many questions regarding Pol II pausing in mammalian development remain unanswered, the importance of transcriptional pausing in erythropoiesis has actually already been demonstrated (Martell-Smart, et al. 2023, PMID: 37586368, which the authors notably did not cite in this manuscript). Here, the novelty and strength of this study is MED26 and its unique expression kinetics during erythroid development.

      Strengths:

      The widespread characterization of kinetics of mediator complex component expression throughout the erythropoietic timeline is excellent and shows the interesting divergence of MED26 expression pattern from many other mediator complex components. The genetic evidence in conditional knockout mice for erythropoiesis requiring MED26 is outstanding. These are completely new models from the investigators and are an impressive amount of work to have both EpoR-driven deletion and inducible deletion. The effect on red cell number is strong in both. The genetic over-expression experiments are also quite impressive, especially the investigators' structure-function mapping in primary cells. Overall the data is quite convincing regarding the genetic requirement for MED26. The authors should be commended for demonstrating this in multiple rigorous ways.

      Thank you for your positive feedback.

      Weaknesses:

      (1) The authors state that MED26 was nominated for study based on RNA-seq analysis of a prior published dataset. They do not however display any of that RNA-seq analysis with regards to Mediator complex subunits. While they do a good job showing protein-level analysis during erythropoiesis for several subunits, the RNA-seq analysis would allow them to show the developmental expression dynamics of all subunit members.

      Thank you for this helpful suggestion. While we did not originally nominate MED26 based on RNA-seq analysis, we have analyzed the transcript levels of Mediator complex subunits in our RNA-seq data across different stages of erythroid differentiation (Author response image 1). The results indicate that most Mediator subunits, including MED26, display decreased RNA expression over the course of differentiation, with the exception of MED25, as reported previously (Pope et al., Mol Cell Biol 2013. PMID: 23459945).

      Notably, our study is based on initial observations at the protein level, where we found that, unlike most other Mediator subunits that are downregulated during erythropoiesis, MED26 remains relatively abundant. Protein expression levels more directly reflect the combined influences of transcription, translation and degradation processes within cells, and are likely more closely related to biological functions in this context. It is possible that post-transcriptional regulation (such as m6A-mediated improvement of translational efficiency) or post-translational modifications (like escape from ubiquitination) could contribute to the sustained levels of MED26 protein, and this will be an interesting direction for future investigation.

      Author response image 1.

      Relative RNA expression of Mediator complex subunits during erythropoiesis in human CD34+ erythroid cultures. Different differentiation stages from HSPCs to late erythroblasts were identified using CD71 and CD235a markers, progressing sequentially as CD71-CD235a-, CD71+CD235a-, CD71+CD235a+, and CD71-CD235a+. Expression levels were presented as TPM (transcripts per million).

      (2) The authors use an EpoR Cre for red cell-specific MED26 deletion. However, other studies have now shown that the EpoR Cre can also lead to recombination in the macrophage lineage, which clouds some of the in vivo conclusions for erythroid specificity. That being said, the in vitro erythropoiesis experiments here are convincing that there is a major erythroid-intrinsic effect.

      Thank you for this insightful comment. We recognize that EpoR-Cre can drive recombination in both erythroid and macrophage lineages (Zhang et al., Blood 2021, PMID: 34098576). However, EpoR-Cre remains the most widely used Cre for studying erythroid lineage effects in the hematopoietic community. Numerous studies have employed EpoR-Cre for erythroid-specific gene knockout models (Pang et al., Mol Cell Biol 2021, PMID: 22566683; Santana-Codina et al., Haematologica 2019, PMID: 30630985; Xu et al., Science 2013, PMID: 21998251).

      While a GYPA (CD235a)-Cre model with erythroid specificity has recently been developed (https://www.sciencedirect.com/science/article/pii/S0006497121029074), it has not yet been officially published. We look forward to utilizing the GYPA-Cre model for future studies. As you noted, our in vivo mouse model and primary human CD34+ erythroid differentiation system both demonstrate that MED26 is essential for erythropoiesis, suggesting that the regulatory effects of MED26 in our study are predominantly erythroid-intrinsic.

      (3) The donor chimerism assessment of mice transplanted with MED26 knockout cells is a bit troubling. First, there are no staining controls shown and the full gating strategy is not shown. Furthermore, the authors use the CD45.1/CD45.2 system to differentiate between donor and recipient cells in erythroblasts. However, CD45 is not expressed from the CD235a+ stage of erythropoiesis onwards, so it is unclear how the authors are detecting essentially zero CD45-negative cells in the erythroblast compartment. This is quite odd and raises questions about the results. That being said, the red cell indices in the mice are the much more convincing data.

      Thank you for your careful and thorough feedback. We have now included negative staining controls (Author response image 2A, top). We agree that CD45 is typically not expressed in erythroid precursors in normal development. Prior studies have characterized BFU-E and CFU-E stages as c-Kit+CD45+Ter119−CD71low and c-Kit+CD45−Ter119−CD71high cells in fetal liver (Katiyar et al, Cells 2023, PMID: 37174702).

      However, our observations indicate that erythroid surface markers differ during hematopoiesis reconstitution following bone marrow transplantation.  We found that nearly all nucleated erythroid progenitors/precursors (Ter119+Hoechst+) express CD45 after hematopoiesis reconstitution (Author response image 2A, bottom).

      To validate our assay, we performed next-generation sequencing by first mixing mouse CD45.1 and CD45.2 total bone marrow cells at a 1:2 ratio. We then isolated nucleated erythroid progenitors/precursors (Ter119+Hoechst+) by FACS and sequenced the CD45 gene locus by targeted sequencing. The resulting CD45 allele distribution matched our initial mixing ratio, confirming the accuracy of our approach (Author response image 2B).

      Moreover, a recent study supports that reconstituted erythroid progenitors can indeed be distinguished by CD45 expression following bone marrow transplantation (He et al., Nature Aging 2024, PMID: 38632351. Extended Data Fig. 8). 

      In conclusion, our data indicate that newly formed erythroid progenitors/precursors post-transplant express CD45, enabling us to identify nucleated erythroid progenitors/precursors by Ter119+Hoechst+ and determine their origin using CD45.1 and CD45.2 markers.

      Author response image 2.

      Representative flow cytometry gating strategy of erythroid chimerism following mouse bone marrow transplantation. A. Gating strategy used in the erythroid chimerism assay. B. Targeted sequencing result of Ter119+Hoechst+ cells isolated by FACS. The cell sample was pre-mixed with 1/3 CD45.2 and 2/3 CD45.1 bone marrow cells. Ptprc is the gene locus for CD45.

      (4) The authors make heavy use of defining "erythroid gene" sets and "non-erythroid gene" sets, but it is unclear what those lists of genes actually are. This makes it hard to assess any claims made about erythroid and non-erythroid genes.

      Thank you for this helpful suggestion. We defined "erythroid genes" and "non-erythroid genes" based on RNA-seq data from Ludwig et al. (Cell Reports 2019. PMID: 31189107. Figure 2 and Table S1). Genes downregulated from stages k1 to k5 are classified as “non-erythroid genes,” while genes upregulated from stages k6 to k7 are classified as “erythroid genes.” We will add this description in the revised manuscript.
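For illustration, the classification rule can be sketched as follows. The gene names and TPM values below are toy examples; the real analysis applies this rule to the stage-wise expression table of Ludwig et al. (2019, Table S1):

```python
# Toy expression table: gene -> TPM at differentiation stages k1, k5, k6, k7
# (illustrative values only; the actual classification used Ludwig et al. 2019).
expr = {
    "GENE_A": {"k1": 10.0, "k5": 2.0, "k6": 1.5, "k7": 1.0},
    "GENE_B": {"k1": 1.0, "k5": 1.0, "k6": 2.0, "k7": 8.0},
}

def classify(stages):
    # "Non-erythroid": downregulated from stage k1 to k5.
    if stages["k5"] < stages["k1"]:
        return "non-erythroid"
    # "Erythroid": upregulated from stage k6 to k7.
    if stages["k7"] > stages["k6"]:
        return "erythroid"
    return "unclassified"

labels = {gene: classify(s) for gene, s in expr.items()}
print(labels)  # {'GENE_A': 'non-erythroid', 'GENE_B': 'erythroid'}
```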

      (5) Overall the data regarding condensate formation is difficult to interpret and is the weakest part of this paper. It is also unclear how studies of in vitro condensate formation or studies in 293T or K562 cells can truly relate to highly specialized erythroid biology. This does not detract from the major findings regarding genetic requirements of MED26 in erythropoiesis.

      Thank you for the rigorous feedback. Assessing the condensate properties of MED26 protein in primary CD34+ erythroid cells or mouse models is indeed challenging. As is common in many condensate studies, we used in vitro assays and cellular assays in HEK293T and K562 cells to examine the biophysical properties (Figure S7), condensate formation capacity (Figure 5C and Figure S7C), key phase-separation regions of MED26 protein (Figure S6), and recruitment of pausing factors (Figure 6A-B) in live cells. We then conducted functional assays to demonstrate that the phase-separation region of MED26 can promote erythroid differentiation similarly to the full-length protein in the CD34+ system and K562 cells (Figure 5A). Specifically, overexpressing the MED26 phase-separation domain accelerates erythropoiesis in primary human erythroid culture, while deleting the Intrinsically Disordered Region (IDR) impairs MED26's ability to form condensates and recruit PAF1 in K562 cells.

      In summary, we used HEK293T cells to study the biochemical and biophysical properties of MED26, and the primary CD34+ differentiation system to examine its developmental roles. Our findings support the conclusion that MED26-associated condensate formation promotes erythropoiesis.

      (6) For many figures, there are some panels where conclusions are drawn, but no statistical quantification of whether a difference is significant or not.

      Thank you for your thorough feedback. We have checked all figures for statistical quantification and added the relevant statistical analysis methods to the corresponding figure legends (Figure 2L and Figure S4C) to clarify the significance of the observed differences. The updated information will be incorporated into the revised manuscript.

      Reviewer #2 (Public review):

      Summary:

      The manuscript by Zhu et al describes a novel role for MED26, a subunit of the Mediator complex, in erythroid development. The authors have discovered that MED26 promotes transcriptional pausing of RNA Pol II, by recruiting pausing-related factors.

      Strengths:

      This is a well-executed study. The authors have employed a range of cutting-edge and appropriate techniques to generate their data, including: CUT&Tag to profile chromatin changes and mediator complex distribution; nuclear run-on sequencing (PRO-seq) to study Pol II dynamics; knockout mice to determine the phenotype of MED26 perturbation in vivo; an ex vivo erythroid differentiation system to perform additional, important, biochemical and perturbation experiments; immunoprecipitation mass spectrometry (IP-MS); and the "optoDroplet" assay to study phase-separation and molecular condensates.

      This is a real highlight of the study. The authors have managed to generate a comprehensive picture by employing these multiple techniques. In doing so, they have also managed to provide greater molecular insight into the workings of the MEDIATOR complex, an important multi-protein complex that plays an important role in a range of biological contexts. The insights the authors have uncovered for different subunits in erythropoiesis will very likely have ramifications in many other settings, in both healthy biology and disease contexts.

      Thank you for your thoughtful summary and encouraging feedback.

      Weaknesses:

      There are almost no discernible weaknesses in the techniques used, nor the interpretation of the data. The IP-MS data was generated in HEK293 cells when it could have been performed in the human CD34+ HSPC system that they employed to generate a number of the other data. This would have been a more natural setting and would have enabled a more like-for-like comparison with the other data.

      Thank you for your positive feedback and insightful suggestions. We will perform validation of the immunoprecipitation results in CD34+ derived erythroid cells to further confirm our findings.

      Reviewer #3 (Public review):

      Summary:

      The authors aim to explore whether other subunits besides MED1 exert specific functions during the process of terminal erythropoiesis with global gene repression, and finally they demonstrated that MED26-enriched condensates drive erythropoiesis through modulating transcription pausing.

      Strengths:

      Through both in vitro and in vivo models, the authors showed that while MED1 and MED26 co-occupy a plethora of genes important for cell survival and proliferation at the HSPC stage, MED26 preferentially marks erythroid genes and recruits pausing-related factors for cell fate specification. Gradually, MED26 becomes the dominant factor in shaping the composition of transcription condensates and transforms the chromatin towards a repressive yet permissive state, achieving global transcription repression in erythropoiesis.

      Thank you for your positive summary and feedback.

      Weaknesses:

      In the in vitro model, the author only used CD34+ cell-derived erythropoiesis as the validation, which is relatively simple, and more in vitro erythropoiesis models need to be used to strengthen the conclusion.

      Thank you for your thoughtful suggestions. We have shown that MED26 promotes erythropoiesis using the primary human CD34+ differentiation system (Figure 2 K-M and Figure S4) and have demonstrated its essential role in erythropoiesis through multiple mouse models (Figure 2A-G and Figure S1-3). Together, these in vitro and in vivo results support our conclusion that MED26 regulates erythropoiesis. However, we are open to further validating our findings with additional in vitro erythropoiesis models, such as iPSC or HUDEP erythroid differentiation systems.

    1. Author Response

      Reviewer #1 (Public Review):

      [...] Genes expressed in the same direction in lowland individuals facing hypoxia (the plastic state) as what is found in the colonised state are defined as adaptive, while genes with the opposite expression pattern were labelled as maladaptive, using the assumption that the colonised state must represent the result of natural selection. Furthermore, genes could be classified as representing reversion plasticity when the expression pattern differed between the plastic and colonised states and as reinforcement when they were in the same direction (for example more expressed in the plastic state and the colonised state than in the ancestral state). They found that more genes had a plastic expression pattern that was labelled as maladaptive than adaptive. Therefore, some of the genes have an expression pattern in accordance with what would be predicted based on the plasticity-first hypothesis, while others do not.

      Thank you for a precise summary of our work. We appreciate the very encouraging comments recognizing the value of our work. We have addressed concerns from the reviewer in greater detail below.

      Q1. As pointed out by the authors themselves, the fact that temperature was not included as a variable, which would make the experimental design much more complex, misses the opportunity to more accurately reflect the environmental conditions that the colonizer individuals face at high altitude. Also pointed out by the authors, the acclimation experiment in hypoxia lasted 4 weeks. It is possible that longer term effects would be identifiable in gene expression in the lowland individuals facing hypoxia on a longer time scale. Furthermore, a sample size of 3 or 4 individuals per group depending on the tissue for wild individuals may miss some of the natural variation present in these populations. Stating that they have a n=7 for the plastic stage and n= 14 for the ancestral and colonized stages refers to the total number of tissue samples and not the number of individuals, according to supplementary table 1.

      We share the reviewer's concerns. It is quite challenging to bring wild birds into captivity for hypoxia acclimation experiments; we worked hard to acclimate lowland sparrows to a hypoxic condition for a month. We have recognized the same set of limitations the reviewer pointed out and have discussed them in the study, i.e., considering the hypoxic condition alone, the short acclimation period, etc. Regarding sample sizes, we collected cardiac muscle from nine individuals (three individuals per stage) and flight muscle from 12 individuals (four individuals per stage). We have clarified this in Supplementary Table 1.

      Q2. Finally, I could not find a statement indicating that the lowland individuals placed in hypoxia (plastic stage) were from the same population as the lowland individuals for which transcriptomic data was already available, used as the "ancestral state" group (which themselves seem to come from 3 populations Qinghuangdao, Beijing, and Tianjin, according to supplementary table 2) nor if they were sampled in the same time of year (pre reproduction, during breeding, after, or if they were juveniles, proportion of males or females, etc). These two aspects could affect both gene expression (through neutral or adaptive genetic variation among lowland populations that can affect gene expression, or environmental effects other than hypoxia that differ in these populations' environments or because of their sexes or age). This could potentially also affect the FST analysis done by the authors, which they use to claim that strong selective pressure acted on the expression level of some of the genes in the colonised group.

      The reviewer asked how the individual tree sparrows used in the transcriptomic analyses were collected. The individuals used for the hypoxia acclimation experiment and those representing the ancestral lowland population were collected from the same locality (Beijing) and in the same season (i.e., pre-breeding) of the year. They are all adults weighing approximately 18 g. We have clarified this in Supplementary Table S1 and the Methods. We did not distinguish males from females (both sexes look similar), under the assumption that both sexes respond similarly to hypoxia acclimation in their cardiac and flight muscle gene expression.

      Supplementary Table 2 lists the individuals that were used for sequence analyses. These individuals were used only for sequence comparisons, not for the transcriptomic analyses. The population genetic structure analyzed in a previously published study showed no clear genetic divergence within the lowland population (i.e., individuals collected from Beijing, Tianjin and Qinhuangdao) or within the highland population (i.e., Gangcha and Qinghai Lake). In addition, there was no clear genetic divergence between the highland and lowland populations (Qu et al. 2020).

      Author response image 1.

      Population genetic structure of the Eurasian Tree Sparrow (Passer montanus). The genetic structure was generated using FRAPPE. The colors in each column represent the contribution from each subcluster (Qu et al. 2020). Yellow, highland population; blue, lowland population.

      Q4. Impact of the work There has been work showing that populations adapted to high altitude environments show changes in their hypoxia response that differ from the short-term acclimation response of lowland populations of the same species. For example, in humans, see Erzurum et al. 2007 and Peng et al. 2017, where they show that the hypoxia response cascade, which starts with the gene HIF (Hypoxia-Inducible Factor) and includes the EPO gene, which codes for erythropoietin, which in turn activates the production of red blood cells, is LESS activated in high altitude individuals compared to the activation level in lowland individuals. The present work adds to this body of knowledge showing that the short-term response to hypoxia and the long term one can affect different pathways and that acclimation/plasticity does not always predict what physiological traits will evolve in populations that colonize these environments over many generations and additional selection pressures (UV exposure, temperature, nutrient availability). Altogether, this work provides new information on the evolution of reaction norms of genes associated with the physiological response to one of the main environmental variables that affects almost all animals, oxygen availability. It also provides an interesting model system to study this type of question further in a natural population of homeotherms.

      Erzurum, S. C., S. Ghosh, A. J. Janocha, W. Xu, S. Bauer, N. S. Bryan, J. Tejero et al. "Higher blood flow and circulating NO products offset high-altitude hypoxia among Tibetans." Proceedings of the National Academy of Sciences 104, no. 45 (2007): 17593-17598. Peng, Y., C. Cui, Y. He, Ouzhuluobu, H. Zhang, D. Yang, Q. Zhang, Bianbazhuoma, L. Yang, Y. He, et al. 2017. Down-regulation of EPAS1 transcription and genetic adaptation of Tibetans to high-altitude hypoxia. Molecular biology and evolution 34:818-830.

      Thank you for highlighting the potential novelty of our work in light of the broader field. We found it very interesting to discuss our results (from a bird species) together with similar findings from humans. In the revised manuscript, we have discussed the short-term acclimation response and long-term adaptive evolution to high-elevation environments, as well as how our work improves understanding of the relative roles of short-term plasticity and long-term adaptation. We appreciate the two important studies pointed out by the reviewer and have cited them in the revised manuscript.

      Reviewer #2 (Public Review):

      This is a well-written paper using gene expression in tree sparrow as model traits to distinguish between genetic effects that either reinforce or reverse initial plastic response to environmental changes. Tree sparrow tissues (cardiac and flight muscle) collected in lowland populations subject to hypoxia treatment were profiled for gene expression and compared with previously collected data in 1) highland birds; 2) lowland birds under normal condition to test for differences in directions of changes between initial plastic response and subsequent colonized response. The question is an important and interesting one but I have several major concerns on experimental design and interpretations.

      Thank you for a precise summary of our work and constructive comments to improve this study. We have addressed your concerns in greater detail below.

      Q1. The datasets consist of two sources of data. The hypoxia treated birds collected from the current study and highland and lowland birds in their respective native environment from a previous study. This creates a complete confounding between the hypoxia treatment and experimental batches that it is impossible to draw any conclusions. The sample size is relatively small. Basically correlation among tens of thousands of genes was computed based on merely 12 or 9 samples.

      We appreciate the critical comments from the reviewer. The reviewer raised concerns about a batch effect between birds collected in the previous study and in this study. There is an important detail we did not describe in the previous version: all tissues from the hypoxia-acclimated birds and from the highland and lowland birds were collected at the same time (i.e., Qu et al. 2020). RNA library construction and sequencing of these samples were also conducted at the same time, although only the transcriptomic data of lowland and highland tree sparrows were included in Qu et al. (2020). The data from the acclimated birds have not been published before.

      In the revised manuscript, we also compared log-transformed transcripts per million (TPM) across all genes and determined the most conserved genes (i.e., coefficient of variation ≤ 0.3 and average TPM ≥ 1 for each sample) for the flight and cardiac muscles, respectively (Hao et al. 2023). We compared the median expression levels of these conserved genes and found no difference among the lowland, hypoxia-exposed lowland, and highland tree sparrows (Wilcoxon signed-rank test, P > 0.05). As these results suggest little batch effect in the transcriptomic data, we used TPM values to calculate gene expression level and intensity. This methodological detail has been further clarified in the Methods, and we have provided a new supplementary figure (Figure S5) to show the comparative results.
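A minimal sketch of the conserved-gene filter described above (toy TPM values; in the actual analysis the filter was applied per tissue to the sparrow RNA-seq samples, and the median expression of the retained genes was then compared across groups with a Wilcoxon signed-rank test, e.g. `scipy.stats.wilcoxon`):

```python
import statistics

# Toy TPM values per gene across samples (illustrative, not real data).
tpm = {
    "GENE_A": [5.0, 5.2, 4.8, 5.1],   # stable and well expressed -> conserved
    "GENE_B": [0.2, 0.3, 0.25, 0.2],  # mean TPM < 1 -> excluded
    "GENE_C": [1.0, 9.0, 2.0, 8.0],   # highly variable (CV > 0.3) -> excluded
}

def is_conserved(values, cv_max=0.3, mean_min=1.0):
    # Keep genes with coefficient of variation <= 0.3 and mean TPM >= 1.
    mean = statistics.mean(values)
    cv = statistics.stdev(values) / mean if mean > 0 else float("inf")
    return cv <= cv_max and mean >= mean_min

conserved = [gene for gene, values in tpm.items() if is_conserved(values)]
print(conserved)  # ['GENE_A']
```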

      Author response image 2.

      The median expression levels of the conserved genes (i.e., coefficient of variation ≤ 0.3 and average TPM ≥ 1 for each sample) did not differ among the lowland, hypoxia-exposed lowland, and highland tree sparrows (Wilcoxon signed-rank test, P > 0.05).

      The reviewer also raised the issue of sample size. We certainly would have liked to include more individuals, but this was not possible owing to the logistical difficulty of keeping wild birds in a common-garden experiment for a long time. We have acknowledged this in the manuscript. To mitigate this, we tested the hypothesis of plasticity followed by genetic change using two different tissues (cardiac and flight muscles) and two different datasets (a co-expressed gene set and a muscle-associated gene set). As all these analyses show similar results, they indicate that the main conclusion drawn from this study is robust.

      Q2. Genes are classified into two classes (reversion and reinforcement) based on arbitrarily chosen thresholds. More "reversion" genes are found and this was taken as evidence reversal is more prominent. However, a trivial explanation is that genes must be expressed within a certain range and those plastic changes simply have more space to reverse direction rather than having any biological reason to do so.

      Thank you for the critical comments. The reviewer raised two questions, which we address separately. The first concerns the arbitrarily chosen thresholds. In our manuscript, we used a range of thresholds, i.e., 50%, 100%, 150% and 200% of the change in gene expression levels of the ancestral lowland tree sparrow, to detect genes with reinforcement and reversion plasticity. With this design we wanted to explore the magnitudes of gene expression plasticity (i.e., Ho & Zhang 2018), and whether the strength of selection (i.e., genetic variation) changes with the magnitude of gene expression plasticity (i.e., Campbell-Staton et al. 2021).

      As the reviewer pointed out, we now realize that this threshold selection is arbitrary. We have therefore implemented two other categorization schemes to test the robustness of the observed unequal proportions of genes with reinforcement and reversion plasticity. Specifically, we used a parametric bootstrap procedure as described in Ho & Zhang (2019), which aims to identify genes reflecting genuine differences rather than random sampling errors. The bootstrap results suggest that genes exhibiting reversion plasticity significantly outnumber those exhibiting reinforcement plasticity, indicating that our inference of an excess of genes with reversion plasticity is robust to random sampling errors. We have added these analyses to the revised manuscript and provided the results in Figure 2d and Figure 3d.
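A minimal sketch of this classification-plus-bootstrap logic, assuming a simple normal error model. The expression values, the noise standard deviation, and the 950/1000 cutoff follow the scheme described above but are illustrative, not our actual pipeline:

```python
import random

# Sketch after Ho & Zhang (2019): a gene shows "reinforcement" if the genetic
# change (plastic -> colonized stage) continues in the direction of the plastic
# change (ancestral -> plastic stage), and "reversion" if it goes the opposite way.
random.seed(0)

def classify(ancestral, plastic, colonized):
    pc = plastic - ancestral   # plastic change
    gc = colonized - plastic   # subsequent genetic change
    return "reinforcement" if pc * gc > 0 else "reversion"

def bootstrap_support(ancestral, plastic, colonized, sd=0.2, n_boot=1000):
    # Resample stage means under a normal error model and re-classify each time;
    # the classification is retained only if supported in >= 950/1000 replicates.
    observed = classify(ancestral, plastic, colonized)
    hits = sum(
        classify(random.gauss(ancestral, sd),
                 random.gauss(plastic, sd),
                 random.gauss(colonized, sd)) == observed
        for _ in range(n_boot)
    )
    return observed, hits

label, hits = bootstrap_support(ancestral=1.0, plastic=3.0, colonized=1.2)
print(label, hits >= 950)  # reversion True
```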

      Author response image 3.

      Figure 2a (left) and Figure 2b (right). Frequencies of genes with reinforcement and reversion plasticity (>50%) and their subsets that acquire strong support in the parametric bootstrap analyses (≥ 950/1000).

      In addition, we adopted a binning scheme (i.e., 20%, 40% and 60% bin settings along the spectrum of reinforcement/reversion plasticity). These analyses based on different categorization schemes revealed similar results, suggesting that our inference of an excess of genes with reversion plasticity is robust. We have provided these results in Supplementary Figures S2 and S4.

      Author response image 4.

      Figure S2 (A) and Figure S4 (B). Frequencies of genes with reinforcement and reversion plasticity in the flight and cardiac muscles. (A) For genes identified by WGCNA, all comparisons show that more genes exhibit reversion plasticity than reinforcement plasticity in both the flight and cardiac muscles. (B) For genes associated with muscle phenotypes, all comparisons show more genes with reversion plasticity than reinforcement plasticity in the flight muscle, while more than 50% of comparisons support an excess of genes with reversion plasticity in the cardiac muscle. Two-tailed binomial test; NS, non-significant; *, P < 0.05; **, P < 0.01; ***, P < 0.001.

      The second issue raised by the reviewer is that plastic changes may simply have more room to reverse direction, without any biological reason for doing so. While a causal explanation for why more genes have their expression levels reversed than reinforced at the later stages remains contentious, a growing number of studies show that gene expression plasticity at the early stage may be functionally maladaptive in a newly colonized environment (i.e., lizards, Campbell-Staton et al. 2021; Escherichia coli, yeast, guppies, chickens and babblers, Ho and Zhang 2018; Ho et al. 2020; Kuo et al. 2023). Our comparisons based on the two gene sets associated with muscle phenotypes corroborate these previous studies and show that initial gene expression plasticity may be nonadaptive in novel environments (i.e., Ghalambor et al. 2015; Ho & Zhang 2018; Ho et al. 2020; Kuo et al. 2023; Campbell-Staton et al. 2021).

      Q3. The correlation between plastic change and evolved divergence is an artifact due to the definitions of adaptive versus maladaptive changes. For example, the definition of adaptive changes requires that plastic change and evolved divergence are in the same direction (Figure 3a), so the positive correlation was a result of this selection (Figure 3d).

      The reviewer raised the concern that the correlation between plastic change and evolved divergence is an artifact of the definition of adaptive versus maladaptive changes, for example, in Figure 3d. We agree that the correlation analysis is circular, because the classification of plasticity as adaptive or maladaptive depends on whether the direction of plastic change matches or opposes the evolved divergence of the colonized tree sparrows. We have therefore removed the previous Figure 3d-e and the related text from the revised manuscript. Meanwhile, we have changed Figure 3a to further clarify the schematic framework.

    1. Reviewer #3 (Public review):

      Summary:

      The authors developed an interesting novel paradigm to probe the effects of cerebellar climbing fiber activation on short-term adaptation of somatosensory neocortical activity during repetitive whisker stimulation (RWS). Normally, RWS potentiated whisker responses in pyramidal cells and weakly suppressed them in interneurons, lasting for at least 1 h. Optogenetic activation of climbing fibers in Crus II during RWS reduced or inverted these adaptive changes. This effect was generally mimicked or blocked by chemogenetic SST or VIP activation/suppression, as predicted based on their "sign" in the circuit.

      Strengths:

      The central finding about CF modulation of S1 response adaptation is interesting, important, and convincing, and provides a jumping-off point for the field to start to think carefully about cerebellar modulation of neocortical plasticity.

      Weaknesses:

      The SST and VIP results appeared slightly weaker statistically, but I do not personally think this detracts from the importance of the initial finding (if there are multiple underlying mechanisms, modulating one may reproduce only a fraction of the effect size). I found the suggestion that zona incerta may be responsible for the cerebellar effects on S1 to be a more speculative result (it is not so easy with existing technology to effectively modulate this type of polysynaptic pathway), but this may be an interesting topic for the authors to follow up on in more detail in the future.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      The manuscript by Chiu et al describes the modification of the Zwitch strategy to efficiently generate conditional knockouts of zebrafish betapix. They leverage this system to identify a surprising glia-exclusive function of betapix in mediating vascular integrity and angiogenesis. Betapix has been previously associated with vascular integrity and angiogenesis in zebrafish, and betapix function in glia has also been proposed. However, this study identifies a role for glial betapix in vascular stability and angiogenesis for the first time.

      The study derives its strength from the modified CRISPR-based Zwitch approach to identify the specific role of glial betapix (and not neuronal, mural, or endothelial). Using RNA-in situ hybridization and analysis of scRNA-Seq data, they also identify delayed maturation of neurons and glia and implicate a reduction in stathmin levels in the glial knockouts in mediating vascular homeostasis and angiogenesis. The study also implicates a betapix-zfhx3/4-vegfa axis in mediating cerebral angiogenesis.

      There is both technical (the generation of conditional KOs) and knowledge-related (the exclusive role of glial betapix in vascular stability/angiogenesis) novelty in this work that is going to benefit the community significantly.

      While the text is well written, it often elides details of experiments and relies on implicit understanding on the part of the reader. Similarly, the figure legends are laconic and often fail to provide all the relevant details.

      We thank the reviewer for the overall support of our manuscript. We have now revised the manuscript text and figure legends to include all relevant details as completely as possible.

      Specific comments:

      (1) While the evidence from cKO's implicating glial betapix in vascular stability/angiogenesis is exciting, glia-specific rescue of betapix in the global KOs/mutants (like those performed for stathmin) would be necessary to make a water-tight case for glial betapix.

      We fully agree with the reviewer that it would be ideal to examine glia-specific rescue of betaPix in its global KOs. At the same time, it is difficult to achieve optimal transient expression of betaPix by injecting a gfap:betaPix plasmid clone, and it takes a long time to establish a stable gfap:betaPix transgenic line for rescuing mutant phenotypes. We would like to pursue this line of research in the future.

      (2) Splice variants of betapix have been shown to have differential roles in haemorrhaging (Liu, 2007). What are the major glial isoforms, and are there specific splice variants in the glial that contribute to the phenotypes described?

      We agree that it would be important to address whether any specific splice variants in glia contribute to betaPix mutant phenotypes. Previous studies have shown that isoform a of betaPix is ubiquitously expressed across various tissues, while isoforms b, c, and d are predominantly expressed in the nervous system. In mice, the expression level of isoform betaPix-d is essential for neurite outgrowth and migration. We have not directly assessed glia-specific betaPix isoforms, and our current data cannot rule out that a specific isoform mediates betaPix function in glial responses. The Zwitch cassette of betaPix resides in intron 5, thus disrupting all transcripts when Cre is activated. However, we are fully aware of the potential value of identifying glial betaPix isoforms and their direct downstream targets. Further studies to dissect their roles in cerebral vascular development and diseases are part of our future plans.

      (3) Liu et al, 2012 demonstrated reduced proliferation of endothelial cells in bbh fish and linked it to deficits in angiogenesis. Are there proliferation/survival defects in endothelial cells in the glial KOs?

      We thank the reviewer for highlighting endothelial cell phenotypes in betaPix mutants. We are aware that endothelial cell defects might be directly linked to the mutant defects in angiogenesis. We assessed and quantified endothelial migration by measuring the length of developing central arteries, but we did not examine endothelial cell proliferation/survival defects in glial KOs. In our scRNA-seq analysis, the proportion of endothelial cells was reduced upon betaPix deficiency, indicating that endothelial cell proliferation/survival might decrease in mutants. In this endothelial cell cluster, we found a disrupted transcriptional landscape in a set of angiogenesis-associated genes (Figure 6M). While these analyses highlight an altered angiogenic transcriptome profile in endothelial cells of betaPix knockouts, we acknowledge that our study does not directly address proliferation/survival phenotypes in endothelial cells, which warrants future investigation of the role of betaPix in regulating glia-endothelial cell interactions.

      Reviewer #2 (Public review):

      Summary:

      Using a genetic model of beta-pix conditional trap, the authors are able to regulate the spatio-temporal depletion of beta-pix, a gene with an established role in maintaining vascular integrity (shown elsewhere). This study provides strong in vivo evidence that glial beta-pix is essential to the development of the blood-brain barrier and maintaining vascular integrity. Using genetic and biochemical approaches, the authors show that PAK1 and Stathmins are in the same signaling axis as beta-pix, and act downstream of it, potentially regulating cytoskeletal remodeling and controlling glial migration. How exactly the glial-specific (beta-pix-driven) signaling influences angiogenesis or vascular integrity is not clear.

      Strengths:

      (1) Developing a conditional gene-trap genetic model which allows for tracking knockin reporter driven by endogenous promoter, plus allowing for knocking down genes. This genetic model enabled the authors to address the relevant scientific questions they were interested in, i.e., a) track expression of beta-pix gene, b) deletion of beta-pix gene in a cell-specific manner.

      (2) The study reveals the glial-specific role of beta-pix, which was unknown earlier. This opens up avenues for further research. (For instance, how do such (multiple) cell-specific signaling converge onto endothelial cells which build the central artery and maintain the blood-brain barriers?)

      We thank the reviewer for the overall support of our work.

      Weaknesses:

      Major:

      (1) The study clearly establishes a role of beta-pix in glial cells, which regulates the length of the central artery and keeps the hemorrhages under control. Nevertheless, it is not clear how this is accomplished.

      (a) Is this phenotype (hemorrhage) a result of the direct interaction of glial cells and the adjacent endothelial cells? If direct, is the communication established through junctions or through secreted molecules?

      We thank the reviewer for this critical question. We attempted to address this issue by performing live imaging using light-sheet confocal microscopy, but failed to achieve sub-cellular resolution. We therefore do not have data to address this critical issue, which warrants future investigation.

      (b) The authors do not exclude the possibility that the effects observed on endothelial cells (quantified as length of central artery) could be secondary to the phenotype observed with deletion of glial beta-pix. For instance, can glial beta-pix regulate angiogenic factors secreted by peri-vascular cells, which consequently regulate the length of the central artery or vascular integrity?

      We thank the reviewer for this critical point. While we found the major defects to be in endothelial cell migration, quantified by central artery length, we could not rule out the participation of signals from other peri-vascular cells. We fully agree that it will be important to address the cell-type specific relationships mediated by angiogenic factors. Of note, degradation of the extracellular matrix and focal adhesions is critical for the hemorrhagic phenotypes of bbh mutants. In a previously published study from our group, we found that suppressing the globally induced MEK/ERK/MMP9 signaling in bbh mutants significantly decreases hemorrhages. Accordingly, we have edited a paragraph in the Discussion section on pages 24-25. We plan to continue investigating whether the complex interactions in the perivascular space contribute to the disruption of vascular integrity, as well as the cross-talk among different cell types during vascular development in these mutants. We believe that our model of glia-specific betaPix function will guide further studies of these cellular interactions.

      (c) The pictorial summary of the findings (Figure 7) does not include Zfhx or Vegfa. The data do not provide clarity on how these molecules contribute (directly or indirectly) to endothelial cell integrity. Vegfaa is expressed in the central artery, but the expression of the receptor in these endothelial cells is not shown. Similarly, all other experimental analyses for Zfhx and Vegfa expression were performed in glial cells. More experimental evidence is necessary to show the regulation of angiogenesis (of endothelial cells) by glial beta-pix. Is the Vegfaa receptor present on central arteries, and how does glial depletion of beta-pix affect its expression or response of central artery endothelial cells (both pertaining to angiogenesis and vascular integrity).

      We thank the reviewer for pointing out this critical issue. We have now revised the pictorial summary in Figure 7 to include Zfhx and Vegfa. The key receptors of the VEGF-A ligand are VEGFR-1 and VEGFR-2. In zebrafish, expression of Vegfr-2, also known as kdrl, is well documented in endothelial cells, including the hindbrain central arteries. We fully agree that it would be of great value to assess changes in the kdrl expression pattern upon betaPix deficiency in vivo. It warrants future investigation to address how VEGFA-VEGFR2 signaling in endothelial cells is altered in betaPix mutants.

      (2) Microtubule stabilization via glial beta-pix, claimed in Figure 5M, is unclear. Magnified images for h-betapix OE and h-stmn-1 glial cells are absent. Is this migration regulated by beta-pix through its GEF activity for Cdc42/Rac?

      We have now revised Figure 5M to include magnified images for the h-betaPIX and h-STMN1 overexpression groups. It has been shown that there is a positive feedback loop of microtubule regulation consisting of Rac1-Pak1-Stathmin at the cell edge (Zeitz and Kierfeld, 2014 Biophys J.). Previous studies have shown that betaPix activates Rac1 through its GEF activity and also regulates the activity of Pak1 via direct binding. As reported by Kwon et al., the betaPix-d isoform promotes neurite outgrowth via PAK-dependent inactivation of Stathmin1. In this work, we did not assess binding of betaPix to Rac1 or Pak1. Nevertheless, our rescue experiments with IPA-3 suggest that betaPix deficiency impairs migration through Pak1 signaling.

      (3) Hemorrhages are caused by compromised vascular integrity, which was not measured (either qualitatively or quantitatively) throughout the manuscript. The authors do measure the length of the central artery in several gene deletion models (2I, 3C. 5F/J, 6G/K), which is indicative of artery growth/ angiogenesis. How (if at all) defects in angiogenesis are an indication of hemorrhage should be explained or established. Do these angiogenic growth defects translate into junctional defects at later developmental time points? Formation and maintenance of endothelial cell junctions within the hemorrhaging arteries should be assessed in fish with deleted beta-pix from astrocytes.

      We appreciate the reviewer’s point and agree that this is a key aspect we need to clarify. To address junctional defects in our model, we re-examined the scRNA-seq data and found mild downregulation of the junction protein claudin-5a (cldn5a) in the transcriptome analysis of the endothelial cluster (Author response image 1). We agree in principle that single-cell RNA sequencing findings should be validated by immunostaining. While we did not measure junctional defects directly in this work, we have previously reported comparable expression of the tight junction protein zonula occludens-1 (ZO1) between siblings and bbh mutants (Yang et al., 2017 Dis Model Mech). In zebrafish, a functionally characterized blood-brain barrier (BBB) is only identified after 3 dpf; the lack of a mature BBB might be due to the immature barrier signature at this developmental stage. The hemorrhage phenotype occurs around 40 hpf, and hematomas are almost completely absorbed at later stages, since most mutants recover and survive to adulthood. Thus, future studies are needed to address the junctional characteristics at the cellular and molecular level in later developmental stages of betaPix mutants.

      Author response image 1.

      Violin plots showing cdh5, cldn5a, cldn5b and oclna expression levels in endothelial sub-cluster. ctrl, control siblings; ko, betaPix knockouts (CRISPR mutants); 1d or 2d, 1 or 2 days post fertilization.

      (4) More information is required about the quality control steps for 10X sequencing (Figure 4, number of cells, reads, etc.). What steps were taken to validate the data quality? The EC groups, 1 and 2-days post-KO are not visible in 4C. One appreciates that the progenitor group is affected the most 2 days post-KO. But since the effects are expected to be on the endothelial cell group as well (which is shown in in vivo data), an extensive analysis should be done on the EC group (like markers for junctional integrity, angiogenesis, mesenchymal interaction, etc.). Are Stathmins limited to glial cells? Are there indicators for angiogenic responses in endothelial cells?

      We thank the reviewer for these critical suggestions. Detailed statements on the quality control steps for 10X sequencing are now provided in the Materials and Methods section. We validated the data quality through multiple steps, including verification of the number of viable cells used in each experiment, assessment of the peak shapes and fragment sizes of the scRNA-seq libraries, confirmation of sufficient cell counts and sequencing reads for data analysis, and implementation of stringent filtering steps to exclude low-quality cells. Stathmin expression is shown in violin plots in Figure 4E, and stmn1a, stmn1b and stmn4l expression in UMAP plots in Figure S6C; these genes are not limited to glial cells but are distributed more widely among zebrafish tissues. We would like to point out that, despite their small number, the endothelial cell clusters are presented in Figure 4C in brown. The proportions of the EC groups, split by the four samples, are visualized in Figure S6B and show a significant reduction in betaPix knockouts at 2 dpf, a trend similar to that of the glial progenitors. In addition, gene ontology analysis identified a set of down-regulated angiogenic genes in the endothelial cluster (Figure 6M). We realize that our interpretation of the endothelial cell phenotypes was not sufficiently clear, and we have now added sentences to the manuscript text on pages 16-17. As noted above, future studies are needed to address how glial betaPix regulates endothelial cell and BBB function.
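      Stringent cell-level filtering of this kind can be illustrated with a minimal sketch (the toy count matrix and all thresholds below are hypothetical, for illustration only, not the actual cutoffs used in this study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy counts matrix: 500 cells x 200 genes (illustrative data only).
counts = rng.poisson(1.0, size=(500, 200))
mito = np.zeros(200, dtype=bool)
mito[:10] = True  # pretend the first 10 genes are mitochondrial

genes_per_cell = (counts > 0).sum(axis=1)
total_counts = counts.sum(axis=1)
mito_frac = counts[:, mito].sum(axis=1) / np.maximum(total_counts, 1)

# Exclude low-quality cells: too few detected genes, or a high
# mitochondrial fraction suggesting damaged or dying cells.
keep = (genes_per_cell >= 100) & (mito_frac < 0.2)
filtered = counts[keep]
print(f"kept {keep.sum()} of {counts.shape[0]} cells")
```

      The same per-cell metrics and boolean masking generalize to real count matrices regardless of which toolkit produced them.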

      Reviewing Editor Comments:

      comments on your manuscript. Addressing comments 1-3 from Reviewer 1 and comment 1 and its subparts from Reviewer 2 (major weaknesses) will significantly improve the manuscript by reinforcing the cell autonomous requirement of betaPix and also gain mechanistic insights. In addition, extensive proofreading and editing of the text, as well as changes to the figure, figure legends, and the discussion as indicated by both reviewers, will improve the readability and clarity of this manuscript.

      We thank the Reviewing Editor for the support of this manuscript. As noted above, we have tried to address the reviewers’ comments using the data obtained in this work and have outlined our plans for future investigations. We have now extensively proofread and edited the manuscript text and figure legends to improve the readability and clarity of this manuscript.

      Reviewer #1 (Recommendations for the authors):

      (1) The Discussion is written like an introduction with very little engagement with the data generated in the manuscript. The role of betapix-Pak-stathmin and betapix-zfhx3/4-vegfaa is barely discussed and contextualised vis-à-vis the current knowledge in the field.

      We appreciate the reviewer’s critical comments regarding the Discussion section. We have now revised the manuscript text on pages 20-23 to discuss the betapix-Pak-stathmin and betapix-zfhx3/4-vegfaa axes in the context of the contributions from this work.

      (2) Line 145: "light sheet microscopy" - explain that this was only for experiments involving fluorescence. Currently, it reads as if the data presented in Figures 1D and E are also obtained via light sheet microscopy. E.g., the paragraph starting on line 139 does not say what line was imaged (and what it labels) to reach the conclusions reached. This detail is not there even in the associated figure legend. Similarly, line 153 discusses radial glia, but there is no indication that these were labelled using Tg (GFAP:GFP) except in the figure annotation. There are various instances of such omissions throughout the text, and they should be remedied to indicate what each line is and what it labels, at least in the first instance.

      We thank the reviewer for these thoughtful points. In this revised version, we have incorporated clearer statements of the objectives and methodologies in the text on pages 8-9, including which transgenic lines were imaged and what they label. We hope that the revised manuscript better presents the data by clarifying the methodologies and materials used in this work.

      (3) Figure 1E legend: What is the haemorrhage percentage? Is it the number of embryos per experiment showing hemorrhage? Indicate in the text. In the right panel, what is the number of embryos used? Please ensure all numbers (number of embryos, experiments, etc) used to plot any data in the set of figures in the entire manuscript are clearly indicated.

      We thank the reviewer for the suggestion. In this revised version, we have incorporated more detailed statements in the figures and figure legends throughout the manuscript to indicate the numbers of embryos and experiments used.

      (4) The Discussion section suddenly introduces the blood-brain barrier and extensively discusses it. However, while cerebral haemorrhage can disrupt the BBB and exacerbate the effects of the haemorrhage, this manuscript does not suggest that a weakened BBB is the cause of haemorrhages in betapix mutants. More likely, betapix stabilises and maintains vascular integrity, and loss of this function causes haemorrhaging and subsequent disruption of the BBB. The glial function noted in this study is likely to be distinct from the glial function in BBB development and maintenance. The authors do not show any direct evidence for the latter. These should be shortened, and only relevant aspects facilitating contextualisation of data generated in this manuscript should be retained.

      We have now revised the Discussion section to reduce the introduction of blood-brain barrier and add statements according to the suggestions from both reviewers. We hope that the revisions provide a more relevant and balanced discussion.

      (5) Is the scratch assay in Figure 5 controlled for differences in cell proliferation among the different manipulations?

      We plated the same numbers of cells and cultured them under the same conditions. Before conducting the scratch assay, we replaced the medium with serum-free culture medium to reduce the effect of cell proliferation among the different manipulation groups.

      (6) In the glioblastoma experiments involving betapix KD, does stathmin RNA/protein decrease? What about Ser 16 phosphorylation (as shown for neurons in Kwon et al, 2020)?

      STMN1 RNA was down-regulated upon betaPIX deficiency, and this was rescued by betaPIX overexpression in glial cells (Author response image 2). These results are similar to those from the in vivo analysis (Figure 5A, 5B and S7A). We agree with the reviewer that it would have been ideal to examine Ser16 phosphorylation of Stathmin in our models. However, we believe that our data establish that Stathmins function downstream of betaPix.

      Author response image 2.

      qRT-PCR analysis showing that betaPIX over-expression (betaPix OE) rescued STMN1 expression upon betaPIX siRNA knockdown (betaPix KD) in U251 cells. Data are presented as mean ± SEM; one-way ANOVA with Dunnett's test; individual P values are indicated in the figure.

      (7) How was the rescue of betapix in glioblastoma cells with siRNA-mediated betapix knockdown performed? Is this by betapix-resistant cDNA? Further, no information about isoforms of betapix (both for siRNA-mediated KD and rescue) or stathmin is provided.

      Similar to our Zwitch method, which disrupts all betaPix transcripts in vivo, the siRNAs for knockdown of human betaPIX were designed to target a region conserved across all transcripts in glioblastoma cell lines. The human betaPIX used for rescue was obtained from the U251 cDNA library, so ideally all isoforms enriched in this glioblastoma cell line would be isolated. The missing details are now provided in the Materials and Methods section, page 26.

      (8) It is unclear what the authors' thoughts are on the decrease in stathmin observed and the functional outcome of this decrease. The Discussion could benefit from this.

      Thanks. We have now incorporated a new paragraph in the Discussion section on pages 21-22 addressing the down-regulated expression of Stathmins and the functional consequences of this decrease.

      (9) Zfhx4 mRNA injection is performed on bbh and betapixKO (is this a global or glial KO?) and found to rescue haemorrhaging. While vegfaa mRNA increases, it is formally possible that the rescue is not due to the increase in vegfaa (or that vegfaa is sufficient). Injection of vegfaa mRNA could address this issue.

      Zfhx4 mRNA injection was performed on bbh mutants and global betaPix knockouts (CRISPR mutants). To avoid confusion, we have now included a sentence highlighting that global knockout mutants were used for this rescue experiment. For the second part, we acknowledge that this study cannot definitively prove that increased vegfaa levels are necessary for the rescue. However, our data establish Zfhx3/4 as novel downstream effectors of betaPix in cerebral vessel development, and these effects might be partly linked to angiogenic responses regulated by Zfhx3/4. In this revised version, we carefully propose that Vegfaa signals act downstream of the betaPix-Zfhx3/4 axis and acknowledge, in the Discussion section on page 24, the limitation that we did not fully investigate the sufficiency of Vegfaa. We intend to pursue a more extensive analysis in our follow-up studies.

      (10) A significant part of the manuscript looks at angiogenesis/vascularisation, however, the title of the paper only reflects vessel integrity (which can be distinct from angiogenesis).

      Thanks. We have now changed the title to: Glial betaPix is essential for blood vessel development in the zebrafish brain

      (11) Line 366: The BBB abbreviation is used without indicating the full form. Perhaps this can be introduced in the preceding sentence.

      We have now edited the following sentence: “The maturation hallmark of central nervous system (CNS) vasculature is acquisition of blood brain barrier (BBB) properties, establishing a stable environment ...” in lines 386-387, Discussion section.

      (12) Line 371: "rupture" and not "rapture".

      We thank the reviewer for pointing out the spelling error, and have now made this correction. 

      (13) Line 416: "is enriched" instead of "enriches"?

      We have now edited as: “...end feet that is enriched with aquaporin-4 ...” in line 411, page 19. 

      (14) The sentence in lines 121-123 should be simplified.

      We have now revised this sentence as the following: “A previous work has shown that bubblehead (bbh<sup>fn40a</sup>) mutant has a global reduction in betaPix transcripts, and bbh<sup>m292</sup> mutant has a hypomorphic mutation in betaPix, thus establishing that betaPix is responsible for bubblehead mutant phenotypes [10]”. 

      (15) No mention in the text of what o-dianisidine labels.

      We have now edited the following sentence: “By using o-dianisidine staining to label hemoglobins, we found severe brain hemorrhages ...” in lines 131-133.

      (16) Line 165: Sentence requires improvement. Perhaps "Vascularisation of the central arteries in the zebrafish hindbrain ...".

      We have now edited this sentence as: “Vascularisation of the central arteries in the zebrafish hindbrain starts at 29 hpf.” in this revised version (line 176). 

      (17) Line 184: Why is "hematopoiesis" mentioned? The genesis of blood cells is not tested anywhere in the manuscript.

      Thanks. We have now edited this statement as: “IPA-3 treatment had no effect on hemorrhage induction in betaPix<sup>ct/ct</sup> control siblings.”

      (18) Line 222-223: Improve "increasing trends". Perhaps "increased relative proportions". Clarify "progenitors" means neuronal and glial progenitors.

      We have now edited this statement: “we found that most neuronal clusters increased relative proportions ...” in this revised version.

      (19) Line 232-233: "arrow indicates" - perhaps "indicated by the arrow"? Also, the arrow indicating gfap needs to be mentioned in the Figure S6A legend. Cannot understand what is meant by "as of its enriched gfap".

      We have now edited the text as: “Figure S6A, indicated by the arrow”, and added “Box area and arrow highlighting gfap expression.” to the Figure S6 legend. To avoid confusion, we have revised the “as of its enriched gfap” sentence as follows: “We next focused on the progenitor cluster owing to its enriched gfap expression and the significantly reduced number of cells in this cluster upon betaPix deficiency.”

      (20) Line 239 - 240: While the sentence says "... revealed three major categories:", well, more than 3 are mentioned subsequently.

      To avoid possible confusion in the text, we have now removed the sub-category examples and presented the data as: “three major categories: epigenetic remodeling, microtubule organization and neurotransmitter secretion/transportation (Figure 4D).”

      (21) Line 252: Stathmins negatively regulate microtubule stability. Why are they referred to as "microtubule polymerization genes stathmins"?

      We are thankful to the reviewer for pointing out this error, and we have now made correction in the text as “microtubule-destabilizing protein Stathmins”.

      (22) Line 262-265: The citation used to indicate concurrence with mouse data is disingenuous. That study did not show a reduction in stathmin levels upon betapix loss. Rather, it showed an increase in Ser16 phosphorylation on stathmin, which reduces stathmin's microtubule destabilising function. Please elaborate on the difference between the two studies.

      We completely agree with the reviewer’s statement that, in the cited article, increased Ser16 phosphorylation of stathmin reduces its microtubule-destabilizing function. While that study did not show a reduction in Stathmin levels, others have shown that transcriptionally downregulated Stathmins are associated with impaired neuronal and glial development. We have now revised the Discussion section by adding a new paragraph addressing the disrupted homeostasis of Stathmins in these previous studies and its possible relationship to our data. We hope these changes clarify this issue.

      (23) Line 310: While ZFHX3 levels are reduced in betapix mutants and KD in glioblastomas, were ZFHX3 and 4 up- or downregulated in the scRNA-Seq data?

      We thank the reviewer for this critical point. Indeed, our results showed that ZFHX3 and ZFHX4 were down-regulated in the glial progenitor cluster in the scRNA-seq data (Figure S8A) of betaPix knockouts and in the FACS-sorted glial cells (Figure S8B).

      (24) Line 317: "... betaPix acts upstream to Zfhx3/4-VEGFA signaling in regulating angiogenesis ...". While this is established later, the data at the time of this sentence does not warrant this claim.

      We agree with the reviewer’s statement and have restated this sentence as follows: “Zfhx3/4 might act as downstream effectors of betaPix.”

      Reviewer #2 (Recommendations for the authors):

      (1) The images shown in 2E/H, 3B, 6F/J can use a schematic that helps readers to understand what to expect or look for. Splitting up the channels may also help in visualizing the vasculature clearly.

      We thank the reviewer for these suggestions. In this revised version, we have included schematic diagrams in the figures and incorporated more detailed statements in the legends.

      (2) Many times, arrows are pointing to structures (2E/H, 3B), but are not explained clearly (neither in the text nor in the legends). In 3B, the arrow is pointing to a negative space.

      (3) Legends are minimalistic and do not provide much information. The reader is left to interpret the data on their own.

      We apologize for not explaining the figures in enough detail. In this revised version, we have incorporated more detailed descriptions in the figure legends and have adjusted the arrows in all figures.

      (4) The text needs heavy proofreading. For example:

      (a) Line 208- the title does not seem appropriate since the following text does not discuss Stathmins at all, which comes later.

      We agree with the reviewer’s statement and restated the title in the following way: “Single-cell transcriptome profiling reveals that gfap-positive progenitors were affected in betaPix knockouts.”

      (b) There is no mention of Figure 7 throughout the text.

      (c) Figure 7 does not include Zfhx or Vegfaa.

      We thank the reviewer for pointing out these errors. We have now revised Figure 7 and incorporated it into the corresponding paragraphs in the Discussion section.

      (5) The discussion seems incoherent in its current state.

      We have now revised the Discussion section according to the suggestions from both reviewers. We hope these revisions adequately address your concerns.

      (6) Please include some of the following points, if possible, in the discussion.

      (a) How is GEF activity of Rac/Cdc42 expected to be affected in beta-pix KO fishes?

      (b) What are the possible different ways the angiogenic pathways merge onto endothelial cells? Or do the authors imagine this process to be entirely driven by glial cells (directly)?

      We would like to thank the reviewer for his/her invaluable suggestions. We have now revised the Discussion section and hope that these changes provide a better and more balanced discussion. Since we have no data directly addressing how the GEF activity of Rac/Cdc42 might be affected in betaPix mutants, and only limited data showing how glial betaPix regulates cerebral endothelial cells and BBB function, we have chosen to focus the Discussion on the CRISPR-induced KI and cKO technologies, glial betaPix function and brain hemorrhage, and the putative role of betaPix-Zfhx3/4-VEGF signaling in central artery development.

      References:

      Daub, H., Gevaert, K., Vandekerckhove, J., Sobel, A., and Hall, A. (2001). Rac/Cdc42 and p65PAK regulate the microtubule-destabilizing protein stathmin through phosphorylation at serine 16. J Biol Chem 276, 1677-1680. 10.1074/jbc.C000635200.

      Kim S, Park H, Kang J, Choi S, Sadra A, Huh SO. β-PIX-d, a Member of the ARHGEF7 Guanine Nucleotide Exchange Factor Family, Activates Rac1 and Induces Neuritogenesis in Primary Cortical Neurons. Exp Neurobiol. 2024;33(5):215-224. doi:10.5607/en24026

      Kwon Y, Jeon YW, Kwon M, Cho Y, Park D, Shin JE. βPix-d promotes tubulin acetylation and neurite outgrowth through a PAK/Stathmin1 signaling pathway [published correction appears in PLoS One. 2020 May 13;15(5):e0233327. doi: 10.1371/journal.pone.0233327.]. PLoS One. 2020;15(4):e0230814. Published 2020 Apr 6. doi:10.1371/journal.pone.0230814

      Kwon Y, Lee SJ, Shin YK, Choi JS, Park D, Shin JE. Loss of neuronal βPix isoforms impairs neuronal morphology in the hippocampus and causes behavioral defects. Anim Cells Syst (Seoul). 2025;29(1):57-71. Published 2025 Jan 8. doi:10.1080/19768354.2024.2448999

      Wittmann, T., Bokoch, G.M., and Waterman-Storer, C.M. (2004). Regulation of microtubule destabilizing activity of Op18/stathmin downstream of Rac1. J Biol Chem 279, 6196-6203. 10.1074/jbc.M307261200.

      Zeitz, M., and Kierfeld, J. (2014). Feedback mechanism for microtubule length regulation by stathmin gradients. Biophys J 107, 2860-2871. 10.1016/j.bpj.2014.10.056.

    1. Do not list sources that you have consulted but not cited.

      Listing unused sources can be misleading and make it seem like you used evidence or research that you didn't.

    1. Although the Huns were excellent cavalry warriors, they weren't that great at besieging cities

      Why were the Huns successful in battle but unable to effectively capture and control cities?

    1. Most people think leaders should be role models, but does being a role model require moral perfection in every aspect of life, or does it only require that a leader serve as a model in areas relevant to his or her role as a leader?

      could be a good counter

    1. R0:

      Reviewer #1: The manuscript as reviewed meets PLOS Global Public Health publication requirements; the author(s) clearly presented the study background, methods, results, discussion, and conclusion. My comments and revision requests concern minor formatting and suggested input. No ethics concerns at this time. Reviewer #2: This is a well-written paper with clear methodology. From the perspective of data science applied to public health, this manuscript does a great job of clearly discussing and defining its methodology, which reflects current best practices. Correcting for class imbalance was a good choice, given the low prevalence of EC in the survey population. The use of SMOTE on the training set only ensured minimal data leakage, and is the current best practice. Using such a large variety of machine learning models creates a challenge in describing each model well enough within one manuscript, and the author did a good job of balancing that challenge.
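The leakage-avoidance point the reviewer praises (oversample only after the train/test split) can be illustrated with a toy sketch. This is pure Python with a simplified nearest-neighbour interpolation standing in for real SMOTE; the `smote_like` function and the example points are hypothetical, not from the manuscript:

```python
import random

def smote_like(minority, k=2, n_new=4, seed=0):
    """Toy stand-in for SMOTE: synthesize minority-class points by
    interpolating between a random minority sample and one of its
    k nearest minority neighbours."""
    rng = random.Random(seed)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        nbrs = sorted((p for p in minority if p != a), key=lambda p: dist(a, p))[:k]
        b = rng.choice(nbrs)
        t = rng.random()  # gap along the segment between a and b
        out.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return out

# Split FIRST, then oversample only the training fold; the test fold keeps
# its natural class imbalance, so evaluation is not contaminated by
# synthetic points (the "minimal data leakage" the reviewer notes).
train_minority = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25)]
train_augmented = train_minority + smote_like(train_minority)
```

Because every synthetic point lies on a segment between two real minority points, applying this before the split would let test-set information shape the training distribution, which is exactly what oversampling only the training fold avoids.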

      I only have a few minor suggestions to clarify the methodology of the manuscript:

      Please specify upfront how many observations were used in training and testing, and specify how many positive EC outcomes were included in the testing set. With such a low prevalence of a positive outcome in a relatively small set of observations, it is worth mentioning that there are perhaps only 10-20 positive outcomes being predicted in the test set. In the absence of weighting, it may be that characteristics of those few positive outcomes in the test set are biasing the predictors, and this is worth mentioning.

      Please discuss how the initial 38 variables were selected from the survey. If there was an initial expert judgment on inclusion into the variable set for feature selection, that should be mentioned.

      Cluster design was mentioned in the PMA survey. This indicates that the survey includes survey weights of some kind. Please discuss whether those weights were addressed in the machine learning methods, or defend why they were not included in the model design. Survey weights can be included in machine learning models to make the predictors more representative of the population of interest.

      In the discussion, please discuss the impact of low precision, where there were many false positives compared to true positives. While it is mentioned, there are consequences (e.g., loss of trust) for low precision prediction models in public health, and this characteristic of the findings could be discussed more.

      Consider including a SHAP dependence plot, because potential interactions are discussed (e.g., knowledge and ad exposure) without showing evidence. A SHAP dependence plot could take care of this.

      Consider explicitly discussing the limitation of cross-sectional survey data used for prediction, where proxies were used in place of quantitative evidence (e.g., exposure to ads to proxy perceptions).

      Overall, great work, timely, and well constructed. Reviewer #3: SEE word document attached with clear table

      Manuscript Number: PGPH-D-25-01837 Review report

      This manuscript demonstrates a significant strength in its application of advanced machine learning and Explainable AI (XAI) to address the critical public health challenge of low emergency contraceptive (EC) use in Ethiopia. By rigorously testing multiple models and using SMOTE to handle severe class imbalance, it identifies key modifiable predictors, primarily EC awareness and media exposure, rather than static socioeconomic factors. The use of SHAP values transforms complex model outputs into actionable insights, revealing that knowledge gaps are the primary barrier. This approach provides a powerful, data-driven blueprint for designing targeted interventions, such as tailored media campaigns and improved health counselling, to effectively increase EC uptake and reduce unintended pregnancies. However, the following points should be considered to improve the quality of the paper.

      Topic/subtopic, issue, and suggestion:

      - Title ("Predicting Utilization of Emergency Contraceptive Usage in Ethiopia and Identifying Its Predictors Using Machine Learning"): redundancy, as "Utilization" and "Usage" mean the same thing. Suggestion: "Predicting the Utilization of Emergency Contraception in Ethiopia and Identifying Its Predictors Using Machine Learning."
      - Affiliation: inconsistent institution name; page 1 says "College of Medicine Health Science" while the first page of the manuscript says "College of Health Science". Use a consistent affiliation name.
      - Abstract: "Traditional analyses have struggled to identify complex predictors." For flow, consider: "Traditional statistical analyses have struggled to…"
      - Abstract: "with SMOTE used to address class imbalance". Grammar: this is a dependent clause and should be connected to the previous sentence. Suggestion: "..., and SMOTE was used to address class imbalance."
      - Abstract: "Findings highlight that knowledge gaps, not poverty or access, are key barriers to EC use." Clarity: "access" is vague; be more specific. Suggestion: "...not poverty or physical access barriers, are key."
      - Introduction: page 3, "moderate's". Change to "moderates" ("the way the education level moderates religion-based stigma").
      - Introduction: "drives excessive maternal mortality rates of over 500 deaths per 100,000 live births, drives poverty cycles, constrains girls' and women's educational and economic opportunities, and overwhelms poor healthcare infrastructures." The word "drives" is used twice in close succession. Suggestion: "...contributes to high maternal mortality rates of over 500 deaths per 100,000 live births, perpetuates cycles of poverty, constrains..."
      - Introduction: "is a central preventive intervention". Suggestion: "is a crucial preventive intervention".
      - Introduction: "the use of EC remains embarrassingly low". "Embarrassingly" is subjective and informal. Suggestion: "...remains critically low."
      - Introduction: "tempts women to shun services". Poor word choice. Suggestion: "...pressures women to shun services."
      - Introduction: "woefully underserved". Informal. Suggestion: "...significantly underserved."
      - Introduction: "yield the predictive resolution necessary". "Resolution" is unusual in this context. Suggestion: "...yield the predictive accuracy necessary".
      - Introduction: "vastness tests for fairness". The phrase is unclear and likely an error; correct it for clarity.
      - Methods (data source and inclusion criteria): the criteria for selecting the 2,334 women from the larger PMA sample of 8,943 are not explicitly stated. Was it a complete case analysis? This needs clarification, as it affects the generalizability of the findings. Clarify whether sampling was done or it was a complete case analysis.
      - Methods: "The dataset demonstrates low overall missing data prevalence". "Prevalence" is for disease outbreaks. Suggestion: "The missing data were minimal overall."
      - Methods: "offering robust classifier building while preserving real performance measurement." Suggestion: "...facilitating the development of robust classifiers while preserving a realistic assessment of performance."
      - Results: "nailing 17 true positives". Informal word choice. Suggestion: "...correctly identifying 17 true positives..."
      - Results: "It manages this recall strength at the expense of precision, though, which sits at approximately 11%." "Sits at" is informal. Suggestion: "It achieves this high recall at the expense of precision, which was approximately 11%."
      - Results: "The most influential positive feature was “heard_emergency”, indicating awareness of emergency services has the greatest influence...". Add "which". Suggestion: "The most influential positive feature was “heard_emergency”, which indicates that awareness of emergency contraception has the greatest influence..."
      - Results: "This resonates with core assumptions of health behavior theories like the Health Belief Model, which posit perceived knowledge as a harbinger of action." "Harbinger" is misused. Suggestion: "...which posit knowledge as a prerequisite for action."
      - Results: page 18, "radio-implemented". Change to "radio-delivered" or "radio-based".
      - Results: "Even positive, this reflects continued systemic disincentives documented elsewhere". "Even" is not the correct word here. Suggestion: "Although positively associated, this factor reflects..."
      - Results: "all the sources of blunting the effect of being in contact with the health system." Grammatically incorrect and unclear. Suggestion: "...all of which blunt the effect of health system contact."
      - Results: "One of the thoughtful discoveries of SHAP values was the sizeable negative impact". "Thoughtful" is incorrect. Suggestion: "A notable discovery from the SHAP analysis was..."
      - Results: "Isolated use of SMOTE in the training set". "Isolated" is the wrong word. Suggestion: "Applying SMOTE exclusively to the training set".
      - Results: "It shifted the ML model from being a prediction device to an analysis tool, not just deciding which features were significant, but the size and sign of their effects, and significantly, potential interactions". Unclear because of non-parallel verbs. Suggestion: "It transformed the ML model from a prediction device into an analytical tool, revealing not only which features were significant but also the magnitude and direction of their effects, as well as potential interactions."
      - Results: "Simulation by counterfactual SHAP analysis suggests a hypothetical 30% increase in EC knowledge might boost utilization by approximately 12.7%, a valuable public health gain." The sentence needs a clearer explanation. Suggestion: "Counterfactual simulation using SHAP values (e.g., calculating the mean impact of increasing the 'heard_emergency' feature value) suggested that a 30% increase in EC knowledge could potentially increase utilization by approximately 12.7%, representing a valuable public health gain."
      - Results: "Geographic ML modeling over the geographic data would also potentially be able to further optimize resource deployment". Repetition: "geographic" is used twice. Rewrite the sentence for clarity.
      - Results: "the implied vulnerability evidenced by the 'forced pregnancy' variable (despite missing data concerns) underscore". Subject-verb disagreement: use "underscores".
      - Methods (model selection justification): the list of eight algorithms is comprehensive, but the justification for simpler models like Naive Bayes is weak. Justify the inclusion of Naive Bayes; were these included as benchmarks?
      - Methods (evaluation metrics): AUC-ROC is emphasized, but for imbalanced problems the F1-score or precision-recall AUC may be better. Consider using the F1-score or precision, as the data are not balanced, or justify the use of AUC-ROC.
      - Methods (model performance presentation): the focus on Logistic Regression is unclear, since Gradient Boosting achieved a higher AUC-ROC (0.85). Consider Gradient Boosting, or explain the rationale (e.g., performance vs. interpretability).
      - Results (confusion matrix analysis, Figure 3): the analysis states precision is "approximately 11%". Based on the described confusion matrix (TP=17, FP=138), precision is 17 / (17+138) = 11.0%. This is a critical weakness of the model that deserves more emphasis: it means ~89% of the people predicted to be EC users were actually non-users, which has huge implications for the cost and efficiency of any intervention based on this model. Discuss this trade-off explicitly: "The model's high recall (85%) comes at the cost of low precision (11%), resulting in a high false positive rate. This suggests the model is well-suited as a screening tool where identifying most true cases is prioritized over resource efficiency, but would require secondary screening or low-cost interventions to target the large number of false positives."
      - Discussion (addressing limitations more forcefully): underreporting of EC use is likely a major issue. Add: "A key limitation is the potential for significant underreporting of EC use due to social desirability bias and stigma..."
      - Conclusion: "myth-busting". Informal word choice. Suggestion: "myth-dispelling".
      - Conclusion: "stock guarantees of EC". Not clear. Consider writing "guaranteed EC stock availability".
      - Conclusion: "This research provides an ethical and evidence-based blueprint to accelerate gains in reducing maternal mortality and advancing reproductive autonomy in Ethiopia and similar settings." Awkward phrasing. Consider rephrasing as "...blueprint to reduce maternal mortality and advance..."

      Reviewer #4: This manuscript applies machine learning (ML) and explainable AI (XAI) methods to predict emergency contraceptive (EC) use among women in Ethiopia, using data from the 2023 PMA survey. The authors compare eight algorithms, address severe class imbalance with SMOTE, and use SHAP values to interpret predictors. They find that awareness of EC is the strongest predictor, followed by media exposure and health facility discussions, while demographic variables show limited predictive value.

      However, the results as currently presented are unreliable. Major inconsistencies in reported performance metrics (e.g., contradictory precision values, implausible Naive Bayes results, inflated accuracy) call into question the validity of the analyses. In addition, the small number of EC users makes the modeling unstable, and subgroup analyses are not feasible with this dataset. These issues, combined with over-interpretation of SHAP as causal, limit both the methodological credibility and substantive contribution of the paper.

      Contradictory precision results The performance metrics are inconsistent. Table 4 shows Logistic Regression with SMOTE achieving precision = 0.72 and recall = 0.85, yet the confusion matrix description reports precision at only ~11%. These cannot both be correct. This discrepancy raises questions about the accuracy of the reported results and must be clarified.
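The arithmetic behind this inconsistency is easy to verify; a minimal check using only the counts quoted in the review:

```python
# Confusion-matrix counts described for Figure 3: 17 true positives and
# 138 false positives among the predicted EC users.
tp, fp = 17, 138

# Precision = TP / (TP + FP): the fraction of flagged women who truly used EC.
precision = tp / (tp + fp)  # 17 / 155 ≈ 0.11, i.e. the "approximately 11%"
                            # figure, which cannot coexist with Table 4's 0.72
```

Either the confusion matrix or the table must be wrong, since both are supposed to describe the same model on the same test set.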

      Inflated accuracy The reported accuracy of 0.95 for Logistic Regression with SMOTE appears implausibly high given the extreme class imbalance (4.4% EC use). Accuracy is not an informative measure in this context, and such values raise concerns about potential data leakage or overly optimistic validation. The authors should confirm that the outcome variable or proxy features were not inadvertently included in the predictors.

      Over-interpretation of SHAP The SHAP analysis is framed in causal terms (e.g., a 30% increase in knowledge leading to a 12.7% increase in use). SHAP values describe associations within the model, not causal effects. The manuscript should temper these statements and present SHAP findings as indicators of relative predictive importance, not intervention outcomes.

      Implausible Naive Bayes results Naive Bayes is reported as having accuracy of only 0.06 pre-SMOTE. Given that 95% of the sample did not use EC, even a trivial majority-class classifier would achieve ~95% accuracy. Such a result suggests an error in coding or reporting that must be checked.
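The majority-class baseline the reviewer invokes is a one-line calculation (figures taken from the review: n = 2,334, prevalence 4.4%):

```python
# A trivial classifier that predicts "non-user" for everyone is correct
# whenever the true label is "non-user", so its accuracy equals the
# majority-class share of the data.
n, prevalence = 2334, 0.044
positives = round(n * prevalence)          # ~103 EC users in the sample
baseline_accuracy = (n - positives) / n    # ~0.956, i.e. ~95% "for free"
```

Any reported accuracy below this baseline (such as the 0.06 for Naive Bayes) is a strong sign of a coding or reporting error rather than genuine model behaviour.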

      Small minority class vs. model complexity Only 103 EC users were present in the dataset. Training and tuning eight algorithms with hyperparameter searches on such a small minority class risks overfitting and unstable results, even with SMOTE. This limitation should be acknowledged explicitly, with emphasis on the need for validation on independent samples.

      Subgroup analysis claims The manuscript claims fairness testing across subgroups (rural/urban, religion, age), but no results are presented. With so few EC users, subgroup analyses would be underpowered and unreliable. It would be more appropriate to note this limitation rather than imply subgroup robustness.

      Causality Issue: The manuscript repeatedly interprets predictive associations as though they were causal effects. For example, SHAP values are used to suggest that increasing knowledge by 30% would increase EC use by 12.7%. Since the data are cross-sectional and observational, such statements are not justified. Machine learning models in this setting can identify predictive patterns, but they cannot establish causal relationships between predictors and outcomes. This overreach is particularly concerning because it could mislead policymakers or practitioners into believing the study provides evidence of causal effects.

      Reviewer #5: Summary: This study investigates the underuse of emergency contraception in Ethiopia using a machine learning framework. Strengths include the application of multiple algorithms, careful handling of class imbalance, and the use of Explainable AI to interpret model outputs. The paper is generally well-structured, and the methodological workflow is presented clearly. At the same time, the results are presented in a way that overstates the model’s practical utility while giving insufficient attention to the precision-recall trade-off. The manuscript should be revised to consistently acknowledge the low precision across the abstract, results, and discussion, and to provide a clear justification for the relevance of a high-recall, low-precision model in this public health context. The limitation posed by the small number of positive cases in the validation set should also be explicitly discussed. Addressing these points is necessary to strengthen the scientific validity of the work.

      Specific comments:

      1. Title: It should be shortened to remove redundancy, since "Utilization" and "Usage" mean the same thing.

      2. Abstract: I think something key was missed. The authors state a recall of 0.85 without mentioning the precision. I see that Figure 3 (page 20) shows that the precision is approximately 11%. My understanding of this is that for every 100 women the model flags as likely EC non-users who need intervention, 89 of them are false alarms. An abstract must present a balanced view of performance.

      3. Methods (about the data): A sample size of 2,334 with a 4.4% prevalence means you only have ~103 positive cases (EC users). After an 80/20 train-test split, your test set contains only ~21 positive cases. This number is critically small and raises serious questions about the stability and generalizability of your reported performance metrics. A different random split could yield vastly different results. I suggest that such a major limitation is addressed upfront in the limitations section and acknowledged in the methods section.

      4. Data balancing: I like the write-up of this section.

      5. Evaluation metrics: The text states the test set has 18.7% EC users, but the abstract and data balancing section state the overall prevalence is 4.4%. Please clarify this discrepancy. Is 18.7% a typo, or did the stratified split result in a test set with a much higher prevalence than the overall dataset? This needs to be consistent. Could you also add the precision-recall plots, since you state that they were tracked.

      6. Results: In Table 4, the columns are "F1" and "Score". This seems like a typo; it should likely be a single column, "F1 Score". Please correct. Lastly, I think it would be good to acknowledge the weaknesses of SMOTE.

      Reviewer #6: The title of the article is: Predicting Utilization of Emergency Contraceptive Usage in Ethiopia and Identifying Its Predictors Using Machine Learning. The author explains that traditional analyses have struggled to identify complex predictors and therefore they used machine learning (ML) and Explainable AI (XAI) to improve the prediction and interpretability of Emergency Contraceptive (EC) use. The paper can be published with the following corrections, some of which are extremely important, particularly regarding methodological perspectives.
      Category / Author's contribution / Comments:

      - Objectives. Author's contribution: "The primary objectives are twofold: one, to predict the likelihood of EC use with far greater accuracy than conventional regression techniques; two, to identify the key modifiable socio-behavioural predictors (e.g., self-efficacy, mass media exposure, provider perception, and women's autonomy) through XAI methods like SHAP values to yield interpretability and actionable insights." Comments: The first objective should be modified; "far greater" is a vague statement. Measuring accuracy is an indicator for choosing between models, but the study should focus on why conventional regression techniques are a problem in this setting. The second objective reads like the motivation of the study and should be written as a clear sentence; "identifying predictors to yield interpretability and actionable insights" is subjective, and these objectives seem ambiguous.

      - Methodological view. Author's contribution (page 5): "Methodologically, it represents a new contribution by rigorously testing the performance of eight alternative ML classifiers and developing an optimized analytical pipeline specifically designed to handle skewed healthcare datasets prevalent in rare outcomes like EC use. Theoretically, it applies the Socio-Ecological Model (SEM) framework to hierarchically analyze predictors at levels of individual (knowledge, attitudes), interpersonal (partner communication, family influence), community (stigma norms, access), and policy (health system factors), providing an integrated explanation for the interrelating influences on EC behavior." Comments: This is not a methodological contribution. Moreover, the author claims a theoretical contribution, but the work is just an exploration of the data.

      - Methodology. Author's contribution (page 4): "In contrast to conventional statistical approaches, ML algorithms, such as random forests, gradient boosting machines (e.g., XGBoost), and neural networks, can particularly identify complex, high-dimensional patterns within diverse data sets, properly manage missing data, and produce personalized risk predictions with improved accuracy." Comments: The author mentions conventional statistical techniques several times, yet the report directly presents only the performance of the ML models. My suggestion is to first run the analysis using traditional or conventional methods and then compare the results with the ML techniques. This is very important.

      - Outcome variable. Author's contribution (page 8): "The outcome of interest is EC Usage, a binary measure of whether emergency contraception was used in the last 12 months. This is the dependent variable for analysis." Comments: Redundant, as the outcome of interest was already stated at the beginning.

      - Missing data. Author's contribution: "For handling missingness in our data, a stratified approach based on missingness mechanisms and rates was followed..." Comments: The author used many approaches, and it is difficult to keep track of them. It would be better to explain them step by step, with the pros and cons of each process, and to explain why this approach is best for this study.

      - Variables (page 12). Comments: There are many categories under one variable, and some categories have very few observations. Justify the necessity; perhaps also show some cross-tabulation results and report the p-values.

      - Research gap. Author's contribution (page 19): "The research goes beyond the correlational limitations of previous studies by utilizing predictive analytics to identify the modifiable factors and approximate their hypothetical effects." Comments: What is meant by "correlational limitations"? Moreover, throughout the report, previous studies are not compared with the author's current approach; add some recent references and explain the research gap. Machine learning techniques are not new, so it is necessary to explain how they provide novelty in this study. Throughout the report there is a lack of coherence between sentences, and the spacing of references, table titles, etc. is not maintained.

      - Abstract. Author's contribution: (1) SMOTE and SHAP; (2) "Conversely, recent reproductive events such as unintended pregnancy were linked to non-use. Static demographic factors showed poor predictive value. Findings highlight that knowledge gaps, not poverty or access, are key barriers to EC use. Tailored media campaigns and routine health counseling could enhance EC uptake. ML and XAI offer powerful tools for guiding targeted reproductive health interventions." Comments: (1) It is not mentioned what these abbreviations stand for. (2) The message of these sentences is not coherent; I think the author should have the whole paper checked by a native English reviewer.

      R1:

      Reviewer #4: I appreciate the authors' thoughtful revisions and detailed responses. Several of my earlier comments were addressed—specifically, the correction of Naive Bayes reporting errors, improved acknowledgment of sample size limitations, and removal of unsupported subgroup analyses. These are welcome improvements. However, key concerns about the internal consistency of results, causal interpretation of SHAP analyses, and overextension of policy recommendations remain unresolved.

      First, while the outdated "11% precision" text has been removed, the confusion matrix values (TP=102, FP=180, FN=18) still do not correspond to the reported performance metrics. With these numbers, precision would equal roughly 0.36, not the 0.72 cited in Table 4. This suggests an ongoing internal inconsistency between the descriptive counts and the summary metrics. The lack of alignment raises continuing doubts about the reliability of the reported model performance.
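The reviewer's figure checks out arithmetically; a quick verification from the quoted confusion-matrix counts:

```python
# Counts cited in this review round: TP=102, FP=180, FN=18
tp, fp, fn = 102, 180, 18

precision = tp / (tp + fp)  # 102 / 282 ≈ 0.36, not the 0.72 reported in Table 4
recall = tp / (tp + fn)     # 102 / 120 = 0.85, consistent with the reported recall
```

That the recall matches while the precision does not suggests the table and the matrix were produced from different runs or that one of them was transcribed incorrectly.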

      Second, the manuscript still places heavy emphasis on accuracy values approaching 0.92–0.95 despite a highly imbalanced outcome (4.4% EC use). Although the authors state that AUC-ROC and recall were prioritized, the presentation continues to foreground accuracy, which is misleading in this context. No calibration or uncertainty measures (e.g., Brier score, calibration curve) have been added, leaving the reader without a sense of how well the predicted probabilities reflect actual risk.

      Third, although the authors softened their language, the interpretation of SHAP values remains quasi-causal. The new statement—"counterfactual simulation using SHAP values … suggested that a 30% increase in EC knowledge could potentially increase utilization by approximately 12.7%", still presents SHAP outputs as if they represent real-world intervention effects. SHAP analysis identifies predictive associations within a model; it does not estimate the causal impact of changing a feature in the population. Likewise, subsequent phrases such as “integrating a predictive risk-scoring tool can help identify women at high risk” and “geographic machine learning modeling can optimize resource deployment” continue to frame the model as a validated operational tool. These remain prescriptive policy claims that move beyond what a cross-sectional, unvalidated predictive study can substantiate.

      Finally, while the tone of the manuscript has improved, the discussion still reads as policy advocacy rather than analytical interpretation. Phrases like "representing a valuable public health gain" and "can help optimize resource deployment" give the impression of proven effectiveness rather than exploratory modeling. A clearer distinction between predictive insights and causal or operational evidence is necessary for the study to maintain methodological integrity.

    1. While hot dog vendors have been part of the city’s gray market for decades, changes in state law in 2018 and 2022 removing illegal vending from the police code and streamlining health permits have led to a boom in their numbers. In response, the city started a campaign warning of foodborne illness risks and launched a vending task force, a multiagency enforcement team that issues fines and confiscates carts. But it’s a cat-and-mouse game. The workers are mostly undocumented immigrants from Central and South America, The Standard found through interviews with more than a dozen. Some have fled crime and violence. Many are seeking asylum and sending money home while they eke out an existence, one sale at a time. Others are victims of human trafficking: vulnerable people smuggled into the U.S. by groups to whom they are indebted.

      NUT GRAF -confirmed by Alex

    1. One category of religion which is negatively correlated with positive social outcomes is polytheism. Polytheism is highly conducive to all sorts of unpleasant behaviors. Most notably, child sacrifice and bestiality are only to be found in polytheistic religions, but not in monotheistic ones. Why?

      None of the following explanations are as parsimonious as the simplest, which is: "monotheists are just smarter because they are further along in intellectual development toward atheism." Religion among the most primitive people isn't even characterized by gods — they have earthen spirits in the trees and soil. Then you get a panoply of gods in the civs of antiquity, then a supreme One Being, before genuine materialism, etc.

    1. Our focus is not on the relation between individual kids and game content and representation, but rather on how game play practice and activity are situated within a broader set of cultural and social engagements and contexts.

      Game play affects people differently depending on content, pace, and the subject as a whole.

    1. runners and swimmers who competed but were unable to smash records

      Research on this text: The high elevation at this Olympics mainly affected the distance runners and swimmers, but it is believed that it actually helped athletes in the long jump, high jump, and pole vault.

    1. The argument behind these workplace- and school-led efforts is that these high tech and higher order skills will enable young people to adapt to a rapidly changing and unpredictable employment landscape. However, preparing children for creative and high tech jobs does not guarantee that those jobs will materialize just because workers are standing by.

      Such jobs are only available when the tech companies demand them; they are never guaranteed, now more than ever.

    2. College completion rates are increasing for all income groups, but the gap between wealthy and poor has steadily increased from the 1980s to the 2000s, from 31 percent to 45 percent (Dynarski 2014).

      College has become financially challenging: a degree is needed for better jobs, yet those jobs are no longer guaranteed, even as courses grow more expensive than ever.

    3. In this environment, educational credentials alone can no longer expand opportunity since they confer a relative, rather than an absolute, benefit. A college degree is a requirement for most good jobs, but it is no longer a guarantee.

      A degree once guaranteed jobs in older markets, but it no longer does; some graduates now have to relocate just to use their degrees.

  2. inst-fs-iad-prod.inscloudgate.net
    1. taught me to see them as complex individuals who all wanted an education, and having learned these lessons from my students, I can't close my eyes to the fact that many of them do not attend college-something that is taken for granted by many of their even slightly wealthier peers. Thanks to my years of teaching in low-income schools, and thanks to my student teachers, my eyes are wide open to this disparity. I am gathering my strength and planning my agenda for the next chapter in my career: Get those truly left behind ready and into college. I have 20+ more years of work until retirement. Wish me luck. Or join me.

      Ungemah realized that the most significant lesson she learned from her students was how to confront inequality head-on. They helped her understand the structural issues within education: despite their efforts, many students from disadvantaged backgrounds remain systematically excluded from higher education. A teacher's awakening stems not only from professional training but also from the shared reality of education experienced alongside students. Students are not merely learners; they are also the ones who reveal the truth about the education system.

    2. My students taught me during my career. They were the student teachers, and they gave me an education I could not have gotten anywhere else.

      This is exactly how it should be and I felt very moved hearing this perspective! A teacher doesn't just teach and talk at their students, but learns from their lived experiences and knowledge that they may not have based on their class and cultural background. A classroom is a collective space where everyone can learn from each other and support each other.

    3. high-poverty secondary schools for over a dozen years woke me up to the educational injustices that are forged by economic injustice and how those injustices trickle up and out of high school and into college. My student

      This text made me think about things more deeply than I did before. The teacher realizing here that many of her students did not attend community college or any college at all gave her a deep sense of frustration. From her experience, many kids were already uninterested in learning, but often it seems as if they were set up that way, and the system failed them. This reminds me that the educational system is often not fair; what may seem fair and achievable for some may not be for others, which is the most upsetting part to me. Relating to one of our first texts, this reminds me of how school is supposed to be an equalizer for all people, but it seems like it actually does the opposite.

      When the teacher realizes all of the educational injustices, it reminds me of when I was young and my mom was a high school teacher in Detroit, Michigan. Detroit is a very diverse area with low income. She was furious with the injustices of the school system, and I was exposed to much of the truth at a young age. I was still very young then, and fortunate enough to go to a private school, but I felt for these kids.

    4. Students who live in poverty, however resilient, face obstacles that are layered, like matryoshka dolls, and once one issue is somewhat rectified, another one might reveal itself. These multilayered issues do not make an education or a successful life impossible, but they certainly provide more than a healthy dose of challenges for young people like Denise. This is why I stayed at my "failing" school, with poor students, for years. I could not change the larger circumstances of their lives, but I could do small things within my classroom to ameliorate their situations.

      She knew full well she couldn't change poverty, systems, or social injustice, but she chose to help her students within her means—bringing breakfast to hungry pupils, offering a listening ear and care. These “small acts” embody the most unassuming yet profound spirit of social justice in education: teachers may not save the world, but they can make a classroom warmer. True educational equity is not merely a grand policy slogan; it also stems from every moment in the classroom where a child is seen and cared for.

    5. We all are and we all aren't our stereotypes. During my first years teaching, I was continuously perplexed by how easily my students and I constructed and categorized each other along stereotypical racial lines. They saw me as a typical White girl, and I saw them as typical urban kids. We were flat characters in each other's eyes.

      This passage marks a turning point in the text. Starting from the mutual prejudices between “white teachers” and “disadvantaged minority students,” Ungemah reveals the power of humanizing understanding in educational relationships. She begins to realize that the barriers to education are often not knowledge gaps, but stereotypes and identity divides. When teachers learn to listen and see students as individuals, “they transform from labels into people.” Teachers must cultivate “cultural humility,” acknowledging their own shaping by societal stereotypes and actively dismantling these mutual biases through authentic engagement.

    1. students with disabilities make up 20% of the students enrolled in community college

      It shouldn't surprise me, but it does. Yet the percentage of students who self-revealed is only 5.8%.

    1. In your example you would simply say approved. The addition of the prefix pre has no meaning for words such as approve. It implies something that is done before approval. Therefore, pre-approved means not yet approved. You do find meaningless phrases like pre-approved and pre-booked used by marketers and advertisers but they cannot be recommended in good English.

      While technically not correct according to dictionary definition, this does at least raise good points about ambiguity/inconsistency in English:

      If it did not already have a pre-established meaning, then the pre- prefix here certainly could make the word mean "prior to approval", could it not? It's only the precedent set by those before us that makes it mean the other thing (that the dictionary says it actually means).

    1. Fairness in AI, defined as the elimination of avoidable biases among groups, is equally important. Training data often reflects societal prejudices, which can inadvertently propagate through AI algorithms [12]. This raises the risk of discriminatory practices in marketing, where targeted content might reinforce stereotypes or exclude specific demographics.

      This, to me seems like an appeal to fear fallacy because it's dramatizing a future outcome implying consequences without specific, causal evidence. The concern of there being discriminatory practices in marketing through AI is legitimate but relies too heavily on emotional impact rather than one event leading to another.

    2. The “black box” nature of AI, where users cannot fully comprehend how decisions are made, presents challenges in ensuring transparency and accountability [30].

      This statement is both reasonable and factually grounded, as transparency and accountability are widely recognized issues in AI ethics. The premise is sound, but the author could strengthen it by providing empirical examples — for instance, known cases where opaque algorithms led to marketing biases or misinformation.

    1. In adjusting the duties on imports to the object of revenue the influence of the tariff on manufactures will necessarily present itself for consideration. However wise the theory may be which leaves to the sagacity and interest of individuals the application of their industry and resources, there are in this as in other cases exceptions to the general rule. Besides the condition which the theory itself implies of a reciprocal adoption by other nations, experience teaches that so many circumstances must concur in introducing and maturing manufacturing establishments, especially of the more complicated kinds, that a country may remain long without them, although sufficiently advanced and in some respects even peculiarly fitted for carrying them on with success. Under circumstances giving a powerful impulse to manufacturing industry it has made among us a progress and exhibited an efficiency which justify the belief that with a protection not more than is due to the enterprising citizens whose interests are now at stake it will become at an early day not only safe against occasional competitions from abroad, but a source of domestic wealth and even of external commerce.

      This passage discusses the role of tariffs in promoting domestic manufacturing while still raising revenue. The author acknowledges the classical economic theory that individuals should freely direct their industry, but notes exceptions exist, especially for complex manufacturing that requires specific conditions to thrive. The passage emphasizes that manufacturing can develop successfully under protectionist measures, which would safeguard domestic producers from foreign competition, stimulate economic growth, and eventually contribute to both national wealth and international trade. It reflects early American debates on protectionism vs. free trade and the need to balance revenue generation with support for emerging industries.

    1. I AM a poor negro, who with myself and children have had the good fortune to get my freedom, by means of an act of assembly passed on the first of March 1780, and should now with my family be as happy a set of people as any on the face of the earth, but I am told the assembly are going to pass a law to send us all back to our masters. Why dear Mr. Printer, this would be the cruelest act that ever a sett of worthy good gentlemen could be guilty of. To make a law to hang us all, would be merciful, when compared with this law; for many of our masters would treat us with unheard of barbarity, for daring to take the advantage (as we have done) of the law made in our favor.—Our lots in slavery were hard enough to bear: but having tasted the sweets of freedom, we should now be miserable indeed.—Surely no Christian gentlemen can be so cruel! I cannot believe they will pass such a law.—I have read the act which made me free, and I always read it with joy—and I always dwell with particular pleasure on the following words, spoken by the assembly in the top of the said law.

      This passage reflects the perspective of a formerly enslaved African American who gained freedom through a 1780 act of the assembly (likely in Massachusetts, where gradual emancipation laws were passed). The writer expresses fear and outrage that a new law might re-enslave him and his family, emphasizing the cruelty and injustice of taking away liberty once experienced. He contrasts the harshness of slavery with the joy of freedom and appeals to the moral conscience of lawmakers, framing the potential law as incompatible with Christian and humane values. The passage illustrates the precarious nature of freedom for Black people even under legal emancipation and highlights the emotional and ethical dimensions of early anti-slavery struggles.

    1. Therefore there can be no virtue about games.

      Objection 2 furthers the conclusion of Objection 1, stating that it's not God but the devil who is the author of fun, meaning games are of the devil.

    1. The rich and the poor are not so far removed from each other as they are in Europe. Some few towns excepted, we are all tillers of the earth, from Nova Scotia to West Florida. We are a people of cultivators, scattered over an immense territory communicating with each other by means of good roads and navigable rivers, united by the silken bands of mild government, all respecting the laws, without dreading their power, because they are equitable. We are all animated with the spirit of an industry which is unfettered and unrestrained, because each person works for himself. If he travels through our rural districts he views not the hostile castle, and the haughty mansion, contrasted with the clay-built hut and miserable cabbin, where cattle and men help to keep each other warm, and dwell in meanness, smoke, and indigence. A pleasing uniformity of decent competence appears throughout our habitations.

      This passage, likely from early colonial or revolutionary-era observations of America, contrasts the social and economic conditions in the American colonies with Europe. The writer emphasizes the relative equality of wealth in America, where most people are farmers working their own land, unlike the stark divide between rich and poor in European society. The colonies are described as spacious, well-connected, and governed fairly, fostering respect for laws rather than fear. The passage highlights a sense of self-reliance and industriousness, where people work for their own benefit, and the countryside reflects modest but decent living, without the extremes of wealth and poverty seen in Europe. It conveys an idealized vision of American rural life as balanced, cooperative, and equitable.

    1. According to the chroniclers, London is far older than Rome. For it was founded by the same race of Trojans, but by Brutus prior to Rome's foundation by Romulus and Remus. Consequently both still have in common the same ancient laws and institutions. The one, just like the other, is divided into wards. In place of consuls, London has sheriffs chosen annually. It has a senatorial order and lesser officials. It has a system of sewers and conduits in the streets. Judicial pleas, arguments, and deliberations each have assigned places, their courts. It has days fixed by custom for the holding of assemblies.

      Good comparison of similarities between London and Rome. What is a chronicler? A person who writes accounts of important or historical events.

    1. Cherokee, from what we now call California and the American southeast respectively, both exhibit the common Native American tendency to locate spiritual power in the natural world. For both Native Americans and Europeans, the collision of two continents challenged old ideas and created new ones as well.   Salinan Indian Creation Story When the world was finished, there were as yet no people, but the Bald Eagle was the chief of the animals. He saw the world was incomplete and decided to make some human beings. So he took some clay and modeled the figure of a man and laid him on the ground. At first he was very small but grew rapidly until he reached normal size. But as yet he had no life; he was still asleep. Then the Bald Eagle stood and admired his work. “It is impossible,” said he, “that he should be left alone; he must have a mate.” So he pulled out a feather and laid it beside the sleeping man. Then he left them and went off a short distance, for he knew that a woman was being formed from the feather. But the man was still asleep and did not know what was happening. When the Bald Eagle decided that the woman was about completed, he returned, awoke the man by flapping his wings over him and flew away. The man opened his eyes and stared at the woman. “What does this mean?” he asked. “I thought I was alone!” Then the Bald Eagle returned and said with a smile, “I see you have a mate! Have you had intercourse with her?” “No,” replied the man, for he and the woman knew nothing about each other. Then the Bald Eagle called to Coyote who happened to be going by and said to him, “Do you see that woman? Try her first!” Coyote was quite willing and complied, but immediately afterwards lay down and died. The Bald Eagle went away and left Coyote dead, but presently returned and revived him. “How did it work?” said the Bald Eagle. “Pretty well, but it nearly kills a man!” replied Coyote. “Will you try it again?” said the Bald Eagle. 
Coyote agreed, and tried again, and this time survived. Then the Bald Eagle turned to the man and said, “She is all right now; you and she are to live together.”   John Alden Mason, The Ethnology of the Salinan Indians (Berkeley: 1912), 191-192. Available through the Internet Archive   Cherokee creation story The earth is a great island floating in a sea of water, and suspended at each of the four cardinal points by a cord hanging down from the sky vault, which is of solid rock. When the world grows old and worn out, the people will die and the cords will break and let the earth sink down into the ocean, and all will be water again. The Indians are afraid of this. When all was water, the animals were above in Gälûñ’lätï, beyond the arch; but it was very much crowded, and they were wanting more room. They wondered what was below the water, and at last Dâyuni’sï, “Beaver’s Grandchild,” the little Water-beetle, offered to go and see if it could learn. It darted in every direction over the surface of the water, but could find no firm place to rest. Then it dived to the bottom and came up with some soft mud, which began to grow and spread on every side until it became the island which we call the earth. It was afterward fastened to the sky with four cords, but no one remembers who did this. At first the earth was flat and very soft and wet. The animals were anxious to get down, and sent out different birds to see if it was yet dry, but they found no place to alight and came back again to Gälûñ’lätï. At last it seemed to be time, and they sent out the Buzzard and told him to go and make ready for them. This was the Great Buzzard, the father of all the buzzards we see now. He flew all over the earth, low down near the ground, and it was still soft. When he reached the Cherokee country, he was very tired, and his wings began to flap and strike the ground, and wherever they struck the earth there was a valley, and where they turned up again there was a mountain. 
When the animals above saw this, they were afraid that the whole world would be mountains, so they called him back, but the Cherokee country remains full of mountains to this day. When the earth was dry and the animals came down, it was still dark, so they got the sun and set it in a track to go every day across the island from east to west, just overhead. It was too hot this way, and Tsiska’gïlï’, the Red Crawfish, had his shell scorched a bright red, so that his meat was spoiled; and the Cherokee do not eat it. The conjurers put the sun another hand-breadth higher in the air, but it was still too hot. They raised it another time, and another, until it was seven handbreadths high and just under the sky arch. Then it was right, and they left it so. This is why the conjurers call the highest place Gûlkwâ’gine Di’gälûñ’lätiyûñ’, “the seventh height,” because it is seven hand-breadths above the earth. Every day the sun goes along under this arch, and returns at night on the upper side to the starting place. There is another world under this, and it is like ours in everything–animals, plants, and people–save that the seasons are different. The streams that come down from the mountains are the trails by which we reach this underworld, and the springs at their heads are the doorways by which we enter, it, but to do this one must fast and, go to water and have one of the underground people for a guide. We know that the seasons in the underworld are different from ours, because the water in the springs is always warmer in winter and cooler in summer than the outer air. When the animals and plants were first made–we do not know by whom–they were told to watch and keep awake for seven nights, just as young men now fast and keep awake when they pray to their medicine. 
They tried to do this, and nearly all were awake through the first night, but the next night several dropped off to sleep, and the third night others were asleep, and then others, until, on the seventh night, of all the animals only the owl, the panther, and one or two more were still awake. To these were given the power to see and to go about in the dark, and to make prey of the birds and animals which must sleep at night. Of the trees only the cedar, the pine, the spruce, the holly, and the laurel were awake to the end, and to them it was given to be always green and to be greatest for medicine, but to the others it was said: “Because you have not endured to the end you shall lose your, hair every winter.” Men came after the animals and plants. At first there were only a brother and sister until he struck her with a fish and told her to multiply, and so it was. In seven days a child was born to her, and thereafter every seven days another, and they increased very fast until there was danger that the world could not keep them. Then it was made that a woman should have only one child in a year, and it has been so ever since.

      Both the Salinan and Cherokee creation stories explain the origins of the world and humanity through nature, animals, and spiritual power. In the Salinan story, the Bald Eagle, the chief of animals, creates the first man from clay and the first woman from a feather, symbolizing life emerging from natural elements. The Cherokee story tells how the earth was formed from mud brought up by a water beetle from beneath a vast sea, shaped into mountains and valleys by a great Buzzard’s wings. The sun was placed in the sky at the right height to sustain life, and the animals and plants were tested for endurance, determining which would be nocturnal or evergreen. Finally, humans came into existence and multiplied, establishing natural order and balance. Both stories emphasize the deep connection between the natural world, animals, and the spiritual creation of life.

    1. Pontiac Calls for War, 1763

      This document also tells us about the effect the European contact had on Native culture as well as the wellbeing and health of Native people. It states that the European trade has softened the Native People and that they do not actually need to rely on them at all but they still do because they are prioritizing comfort over tradition.

    2. but if you were not bad, as you are, you would well do without them.

      Why does accepting help from the Europeans constitute being bad, according to the author?

    Annotators

    1. /a/-/æ/ summary

      Arabic /a/ produced with higher tongue position than English /æ/.

      Bilinguals’ productions were native-like in both languages (no L1 attrition).

      No influence of aptitude, but a positive connection between L1 and L2 nativeness for tongue height in sentences.

    2. L2 acquisition, L1 attrition relationship

      Correlations between native-likeness in L1 and L2 were positive, but only significant for F1-Bark in the sentence condition, meaning that bilinguals with more native-like L2 /æ/ also had more native-like L1 /a/ in tongue height.

    3. Participants

      Groups: 4 groups of 15 participants each — Arabic-English (A-E) bilinguals, English-Arabic (E-A) bilinguals, Arabic monolinguals, and English monolinguals.

      1. A-E bilinguals: Native Arabic speakers from Saudi Arabia/Yemen who moved to the UK around age 18.6; lived there ~20 years; spoke Modern Standard Arabic (MSA) and Standard Southern British English (SSBE).
      2. E-A bilinguals: Native English speakers from the UK who moved to Saudi Arabia/Yemen around age 16.7; lived there ~17 years; spoke SSBE and MSA fluently.
      3. All bilinguals: Late consecutive bilinguals who had fully acquired their L1 before learning their L2; both groups were highly proficient in their L2 based on standardized proficiency tests.
      4. Monolinguals: Served as control groups; matched by education, region, and age. None spoke additional languages.

      5. All groups were similar in age and gender distribution, though bilingual groups differed slightly but significantly in age of arrival (AoA) and length of residence (LoR).

    4. we explored if increased sound discrimination aptitude may berelated to more nativelike L1/L2 vowel productions in our bilingual speaker groups.

      This study does not test perception directly, but examines whether general sound discrimination aptitude predicts how accurately bilinguals produce vowels. (The aptitude test used an unfamiliar language (Cantonese) to avoid bias from prior language knowledge.)

    5. Speech Learning Model

      The Speech Learning Model (SLM) explains this through two processes: 1. Category assimilation – when similar L1 and L2 sounds merge, preventing accurate L2 production. 2. Category dissimilation – when similar sounds are exaggerated apart, leading to distinct but nonnative categories.

      These processes can affect both L2 and L1 speech. Changes in the first language (L1) due to L2 influence are known as L1 phonetic attrition.

    1. The Ethiopian army moreover was much larger than that of the Italians. Not counting soldiers with spears, he had well over 100,000 men with modern rifles. The Italians for their part had somewhat more cannon—56 as against Menelik’s 40—but only about 17,000 men, of whom 10,596 were Italian and the rest Eritrean levies [draftees or conscripts]

      Damn they were severely outnumbered. Is this just European hubris? How could they have expected to possibly defeat him with such a weak army?

    2. I said that because of our friendship, our affairs in Europe might be carried on with the aid of the sovereign of Italy, but I have not made any treaty which obliges me to do so. I am not the man to employ the aid of another to carry on my affairs; your Majesty understands very well.

      He flamed him basically

    3. To avoid disputes among themselves the European Powers had devised the General Act of Berlin which was signed on February 26 that year

      This is so goofy. The Europeans finally sort of realized they should stop fighting constantly, but then in like 20 years they immediately start fighting one of the most horrific wars ever.

    Annotators

    1. Keywords

      It’s interesting how, without fail, you can always find ideas of poverty alongside urbanism. But as I was growing up, I assumed being in an urban neighborhood was always richer, because you’re able to see everyone as yourself. It’s almost as though as you have less wealth, you have more cultural and social wealth.

    2. While schools must continue to be beacons of hope, it is disingenuous to suggest that schools alone can solve the issue of poverty. Neuman (2009) in her book, Changing the Odds for Children at Risk, expressed concern that while schools are a piece of the poverty puzzle, they are just one piece. Schools cannot eradicate poverty on their own (Neuman, 2009). Let’s look at a potential case study

      Schools can indeed help students, but they cannot eliminate poverty on their own. The causes of poverty are too complex to be eradicated by a single factor like educational success. Teachers' task is not to play the role of saviors attempting to eradicate poverty, but to offer support and hope within reality. Instilling in children from impoverished families the belief that “education can change one's destiny” is precisely what educators should do. Teachers must both understand the complexity of poverty and maintain a conviction to act. Future educators must learn to maintain both the warmth and professionalism of education within systemic constraints—empowering students to find strength through relationship-building, resource-linking, and upholding high expectations, rather than succumbing to pessimism or blind optimism.

    3. The United States has long prided itself on the belief that anyone can succeed in this country—that anyone can pull themselves up by their bootstraps and reach their economic goals. Much of what is lacking from this discussion is the manner in which social policies and institutional arrangements reinforce poverty. It is disingenuous to suggest that people can will themselves out of poverty without looking at the complex contexts which keep them there. Instead, a web of systems and policies interact to help—or stymy—those who are trying to rise out of poverty. Hilfiker (2002) provides a thorough analysis of legislation, economic, and social policy that contribute to the creation and maintenance of impoverished neighborhoods across the United States both historically and contemporarily. Haveman (in Cass, 2010) posits that those in poverty need a variety of supports including (a) skills building (through education), (b) health care, and (c) opportunities to use their skills (through employment possibilities and decent wages). But wages—in constant dollars—have fallen; high paying jobs are hard to come by (Anyon, 2005). Anyon argues that these consequences arise from faulty federal

      Viewing poverty as a result of individual willpower or insufficient effort is an ideological bias that overlooks structural inequality. Educators should rethink the root causes of poverty—educational inequality, discrimination, and policy imbalances—rather than simplistically assuming that “hard work alone can change one's fate” when working with students from low-income backgrounds. Instead, they should focus on how social conditions and the distribution of educational resources impact students, avoiding the individualization of systemic issues.

    4. Students from low-income backgrounds are less likely to have access to medical care, which can allow vision, dental, hearing, and other health ailments (including asthma) to go untreated.

      This not only affects their health but also their education. If a student cannot afford glasses, they will likely continue to struggle to see the board in school. Similarly, if a student has a hard time hearing, they will face disadvantages that can affect their academic performance. It is unfair for intelligent students to fall behind because they can't afford the luxury of medical attention.

    5. The United States has long prided itself on the belief that anyone can succeed in this country—that anyone can pull themselves up by their bootstraps and reach their economic goals. Much of what is lacking from this discussion is the manner in which social policies and institutional arrangements reinforce poverty

      Many, like my family, have come in pursuit of the American Dream. While I do think it's possible, it is much easier said than done, especially for those who come from a low-income background. Not everyone has access to resources that can help guide those new to this country. Because of that, many are left to fend for themselves, not knowing what is necessary to set themselves up for success. Yes, many can find employment, but employment does not equate to the American Dream.

    1. My Notes: This video discusses very important points about how almost every element of a site can have a huge effect on your website. One overall topic is the need to give your site the correct visual design for its purpose. This topic discusses how colors and fonts can improve or ruin your site in significant ways. The other topic that is majorly discussed relates to topics we have seen in class, such as the importance of the alignment, placement, and number of elements on a site. These elements are reiterated to be very important not only to accessibility but to the overall visual appeal of a site. This is important to keeping users on your site and making their experience positive.

    2. UI and UX 101 for Web Developers and Designers

      This video explains how good design isn't just about visuals; it's about solving problems and making things easy to use. The video also showed how testing and feedback help improve the overall experience, so users enjoy interacting with the website or app.

    3. UI and UX 101 for Web Developers and Designers

      1. Alignment
      2. Negative Space
      3. Fonts
      4. Colors
      5. UX vs UI: Usability

      It's all about the eye and the visual aspect of website building. You may be a great coder and developer and can create many crazy and fascinating functions, but if your website is too cluttered with buttons, it's not going to be usable.

    4. UI and UX 101 for Web Developers and Designers

      I think the best part of this video is at 9 minutes, when he starts talking about established standards in design. Things like making sure that in the top left of every page on your site there is a clickable site logo that takes you back to the home page. This could be an easy thing to forget as a beginner designer but becomes highly noticeable to users when they get 8 pages deep, want to go back to the home page, and get stuck. That little inconvenience can cause frustration in users.

    5. UI and UX 101 for Web Developers and Designers

      Having a clear navigation menu at the top of a website is important for usability. Users expect the logo on the top left to link back to the homepage and the navigation bar to be easy to find and use. Using enough negative space, also called white space, also helps a website look clean and easy to read. When elements are too close together, users feel overwhelmed, but spacing things out gives the page breathing room.

    6. UI and UX 101 for Web Developers and Designers

      Some basic design principles are:

      1. Alignment: making things appear clean and correct.
      2. Negative Space: create space around elements so that the information isn't overwhelming to the user.
      3. Font Use: have consistent font families.
      4. Don't Use 'Serif' Fonts in Body Text: its style is too "flairy" (this is a strange rule).
      5. Logical Color Use: make colors themed for your site.
      6. Templates Are OK: they save design resources.

      After these rules, he gets into some strict UX guidelines that I personally disagree with. They're good for learning, but carrying this methodology into fully-fledged programs just makes everything look the same!

    7. UI and UX 101 for Web Developers and Designers

      Making good websites is important. If I couldn't see, that would suck, but I probably wouldn't care what a website looked like.

  3. inst-fs-iad-prod.inscloudgate.net
    1. Their six-bedroom house is worth about $150,000. Alexander is an only child. Both parents grew up in small towns in the

      I have family members who have large houses but few children, and I've come to realize they have almost a sense of entitlement when it comes to adults: they feel entitled to almost always get a response and be acknowledged.

    2. The McAllister's apartment is in a public housing project near a busy street. The complex consists of rows of two- and three-story brick units. The buildings, blocky and brown, have small yards enclosed by concrete and wood fences. Large floodlights are mounted on the corners of the buildings, and wide concrete sidewalks cut through the spaces between units. The ground is bare in many places; paper wrappers and glass litter the area.

      Housing plays a big role in a student's ability to thrive. These details might seem unimportant to some, but the environment a student lives in shapes how they interact with school, their peers, and their teachers. In this home, there isn't much space for a kid to play without the dangers of glass or the confinement of a small yard. This aspect of Harold's life, determined by his class background, shapes his ability to move freely and therefore may shape his participation in school.

    3. Whining, he wonders what he will do.

      It is interesting how having his schedule full and so many planned creative activities has made Alex less creative in some ways. He can't imagine what he could do in his free time because everything is always planned out. When I was younger, I was lucky to have access to an after school program, but there was not too much structure for it, we just got to be outside or inside and play with safety rules and guidelines. This allowed my friends and I to get creative without structure. We would run behind the bungalows, make potions out of the weeds that were growing, and play games we made up. With all of Alex's set activities it is hard for him to develop that type of creativity that was so intrinsic to my childhood.

    4. These are differences with potential long-term consequences. In an historical moment when the dominant society privileges active, informed, assertive clients of health and educational services, the strategies employed by children and parents are not equally effective across classes.

      This is why this research is important. There needs to be an understanding of the true conditions of people based on their class and the long-term effects of that to cultivate an academic culture that does not solely adhere to middle and upper class culture. Students from all socio-economic backgrounds should feel that their education is not only accessible in that they can go to school, but that it is relevant and important to them.

    5. But these works have not given sufficient attention to the meaning of events or to the ways different family contexts may affect how a given task is executed

      This shows the importance of finding the root cause of certain conditions and then being able to connect the individual experiences back to that root cause. So many of these factors that were being studied separately ultimately can be drawn back to class analysis. Being able to connect them together and find the root cause allows educators to effectively fight back and support their students.

    6. Middle-class parents engage in concerted cultivation by attempting to foster children's talents through organized leisure activities and extensive reasoning. Working-class and poor parents engage in the accomplishment of natural growth, providing the conditions under which children can grow but leaving leisure activities to children themselves. These parents also use directives rather than reasoning. Middle-class children, both white and black, gain an emerging sense of entitlement from their family life. Race had much less impact than social class.

      This is the most central theoretical passage in the entire work, establishing Lareau's research framework. Through ethnographic observation, she discovered that the logic of parenting within families reflects not only economic circumstances but also cultural capital and social structure. “Nurturing” and “letting nature take its course” respectively symbolize the socialization pathways of the middle class and the working class, determining how children understand authority, communication, and institutions. Parenting styles constitute a “reproduction of cultural capital,” through which parents unconsciously transmit cultural resources that maintain class distinctions. When encountering students from diverse backgrounds, educators should recognize these differences as “cultural logic” rather than “educational deficits.”

    1. As a statistician, I am in strong agreement on the widespread inappropriate use of statistical inference (page 2) and the importance of software. I also strongly agree that “independent critical inspection [is] particularly challenging” (page 3). I also strongly agree that “The main difficulty in achieving such audits is that none of today’s scientific institutions consider them part of their mission”, as this is everyone’s problem and nobody’s problem.

      I also agree that automation has encouraged standardisation and I have personally supported standardisation because some practices are so bad that many authors need to be “standardised”. However, I’ve also felt frustration at the sometimes fussy requirements when uploading R packages to CRAN (https://cran.r-project.org/). Similarly, some blanket changes from CRAN seem pedantic. There’s likely a balance between reducing poor practice and becoming too prescriptive.

      In terms of transparency (section 2.4) I did think about the “Verbose=TRUE” option that I sometimes see in R. I tend to turn this on, as it’s good to see more of the workings, but perhaps the default is off? I did look at some packages using the google search: “verbose site:cran.r-project.org/web/packages”. I was also reminded of the difference between Bayesian and frequentist statistical modelling. Frequentist modelling often uses maximum likelihood to create parameter estimates, which usually runs quickly to create the estimates. In contrast, Bayesian methods often use MCMC, which is often slow and creates long chains of estimates; however, the chains will show if the likelihood does not have a clear maximum, which is usually from a badly specified model, whereas the maximum likelihood simply finds any peak. Frustratingly, I often get more push back from reviewers when using Bayesian methods, whereas in my opinion it should be the other way around as the Bayesian estimates have shown far more of the inner workings.
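      As a rough sketch of that contrast (a toy Python example of my own; the bimodal likelihood, sampler settings, and seed are illustrative assumptions, not anything from the paper under review): a local optimiser reports a single peak, while even a crude Metropolis chain makes the multimodality visible.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy bimodal log-likelihood with well-separated peaks near -3 and +3.
def log_lik(theta):
    return np.logaddexp(-0.5 * (theta + 3.0) ** 2,
                        -0.5 * (theta - 3.0) ** 2)

# Maximum-likelihood-style point estimate: a local optimiser simply
# climbs whichever peak its starting bracket points at.
mle = minimize_scalar(lambda t: -log_lik(t), bracket=(0.5, 1.0)).x

# Crude Metropolis chain: the full chain wanders between both modes,
# exposing the multimodality that the single point estimate hides.
rng = np.random.default_rng(0)
theta, chain = 0.0, []
for _ in range(20_000):
    proposal = theta + rng.normal(scale=2.5)
    if np.log(rng.uniform()) < log_lik(proposal) - log_lik(theta):
        theta = proposal
    chain.append(theta)
chain = np.asarray(chain)

print(f"point estimate: {mle:.2f}")  # sits on one peak only
print(f"share of chain below/above 0: "
      f"{(chain < 0).mean():.2f} / {(chain > 0).mean():.2f}")
```

      The chain's roughly even split between the two modes is exactly the kind of “inner workings” that a maximum-likelihood point estimate never surfaces.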

      Some reflection on the growing use of AI to write software may be worthwhile. Presumably this could be more standardised, but there are other concerns. Using automation to check code could also be worthwhile.

      For section 3, I thought that more sharing of code would mean “more eyeballs”, but the sharing needs to be done in FAIR way.

      I wondered if highly-used software should get more scrutiny. Peer review is a scarce resource, so is likely better directed towards high use software. Andrew Gelman recently put forward a similar argument for checking published papers when they reach 250 citations: https://statmodeling.stat.columbia.edu/2025/02/26/pp/.

      I agreed with the need for effort (page 19) and wondered if this paper could call for more effort.

      Minor comments:

      • typo “asses” on page 7.

      • “supercomputers are rare”, should this be “relatively rare”, or am I speaking from a privileged university where I’ve always had access to supercomputers?

      • I did think about “testthat” at multiple points whilst reading the paper (https://testthat.r-lib.org/)

      • Can badges on GitHub about downloads and maturity help (page 7)? Although far from all software is on GitHub.

    2. This summary article does not present new data or experiments but instead takes a broad look at automated reasoning and software. Reviewer #1 thought the article needed much more detail, including citations, examples, screenshots and figures. They were concerned about strong generalisations that were lacking evidence and have provided places where they wanted these details. Reviewer #2 considers the differences between reviewability and the practicalities of reviewing everything, and how being easily able to build-on other software acts as a kind of reproducibility. In my own editorial review, I generally enjoyed reading the paper and it prompted some interesting thoughts on trade-offs with standardisation and the level of detail shown to users for statistical code.

    3. Thank you for submitting this paper. I think the paper requires substantial, major revisions to be published. Throughout the paper I noted many instances where references or examples would help make the intent clear. I also think the message of the paper would benefit from several figures to demonstrate workflows or ideas. The figures presented are essentially tables, and I think the message could be made clearer for the reader if they were presented as flow charts or at least with clear numbering to hook the ideas to the reader - e.g., Figures 1 & 2 would benefit from having numbers on the key ideas.

      The paper is lacking many instances of citation, and at times reads as though it is an essay delivering an opinion. I'm not sure if this is the type of article that the journal would like, but two examples of sentences missing citations are:

      1. "Over the last two decades, an unexpectedly large number of peer-reviewed findings across many scientific disciplines have been found to be irreproducible upon closer inspection." (Introduction, page 2)

      2. "A large number of examples cited in this context involves faulty software or inappropriate use of software" (Introduction, page 3)

      Two examples of sentences missing examples are:

      1. Experimental software evolves at a much faster pace than mature software, and documentation is rarely up to date or complete (in Mature vs. experimental software, page 7). Could the author provide more examples of what "experimental software" is? There is also consistent use of universal terms like "...is rarely up to date or complete", which would be better phrased as "is often not up to date or complete"

      2. There are various techniques for ensuring or verifying that a piece of software conforms to a formal specification.

      Overall, the paper introduces many new concepts, and I think it would greatly benefit from being made shorter and more concise, and from adding some key figures for the reader to refer back to in order to understand these new ideas. The paper is well written, and it is clear the author is a great writer who has put a lot of thought into the ideas. However, it is my opinion that because these ideas are so big and require so much unpacking, they are also harder to understand. The reader would benefit from having more guidance to come back to when trying to understand these ideas.

      I hope this review is helpful to the author.

      Review comments

      Introduction

      Highlight [page 2]: Ever since the beginnings of organized science in the 17th century, researchers are expected to put all facts supporting their conclusions on the table, and allow their peers to inspect them for accuracy, pertinence, completeness, and bias. Since the 1950s, critical inspection has become an integral part of the publication process in the form of peer review, which is still widely regarded as a key criterion for trustworthy results.

      • and Note [page 2]: Both of these statements feel like they should have some peer review, or a reference on them, I believe. What were the beginnings of organised science in the 1600s? Why since the 1950s? Why not sooner? What happened then?

      Highlight [page 2]: Over the last two decades, an unexpectedly large number of peer-reviewed findings across many scientific disciplines have been found to be irreproducible upon closer inspection.

      Highlight [page 2]: In the quantitative sciences, almost all of today’s research critically relies on computational techniques, even when they are not the primary tool for investigation - and Note [page 2]: Again, it does feel like it would be great to acknowledge research in this space.

      Highlight [page 2]: But then, scientists mostly abandoned doubting.

      • and Note [page 2]: This feels like an essay; where is the evidence that you can say something like this?

      Highlight [page 2]: Automation bias

      • and Note [page 2]: What is automation bias?

      Highlight [page 3]: A large number of examples cited in this context involves faulty software or inappropriate use of software

      • and Note [page 3]: Can you provide some examples of the examples cited that you are referring to here?

      Highlight [page 3]: A particularly frequent issue is the inappropriate use of statistical inference techniques.

      • and Note [page 3]: Please provide citations to these frequent issues.

      Highlight [page 3]: The Open Science movement has made a first step towards dealing with automated reasoning in insisting on the necessity to publish scientific software, and ideally making the full development process transparent by the adoption of Open Source practices - and Note [page 3]: Could you provide an example of one of these Open Science movements?

      Highlight [page 3]: Almost no scientific software is subjected to independent review today.

      • and Note [page 3]: How can you justify this claim?

      Highlight [page 3]: In fact, we do not even have established processes for performing such reviews

      Highlight [page 3]: as I will show

      • and Note [page 3]: How will you show this?

      Highlight [page 3]: is as much a source of mistakes as defects in the software itself

      • and Note [page 3]: Again, this feels like a statement of fact without evidence or citation.

      Highlight [page 3]: This means that reviewing the use of scientific software requires particular attention to potential mismatches between the software’s behavior and its users’ expectations, in particular concerning edge cases and tacit assumptions made by the software developers. They are necessarily expressed somewhere in the software’s source code, but users are often not aware of them.

      • and Note [page 3]: The same can be said of assumptions for equations and mathematics - the problem here is dealing with abstraction of complexity and the potential unintended consequences.

      Highlight [page 4]: the preservation of epistemic diversity

      • and Note [page 4]: Please define epistemic diversity

      Reviewability of automated reasoning systems

      Highlight [page 5]: The five dimensions of scientific software that influence its reviewability.

      • and Note [page 5]: It might be clearer to number these in the figure, and I might also suggest changing the word “convivial”; it’s a pretty unusual word.

      Wide-spectrum vs. situated software

      Highlight [page 6]: In between these extremes, we have in particular domain libraries and tools, which play a very important role in computational science, i.e. in studies where computational techniques are the principal means of investigation

      • and Note [page 6]: I’m not very clear on this example - can you provide an example of a “domain library” or “domain tool” ?

      Highlight [page 6]: Situated software is smaller and simpler, which makes it easier to understand and thus to review.

      • and Note [page 6]: I’m not sure I agree it is always smaller and simpler - the custom code for a new method could be incredibly complicated.

      Highlight [page 6]: Domain tools and libraries

      • and Note [page 6]: Can you give an example of this?

      Mature vs. experimental software

      Highlight [page 7]: Experimental software evolves at a much faster pace than mature software, and documentation is rarely up to date or complete

      • and Note [page 7]: Could the author provide more examples of what “experimental software” is? There is also consistent use of universal terms like “…is rarely up to date or complete”, which would be better phrased as “is often not up to date or complete”

      Highlight [page 7]: An extreme case of experimental software is machine learning models that are constantly updated with new training data.

      • and Note [page 7]: Such as…

      Highlight [page 7]: interlocutor

      • and Note [page 7]: suggest “middle man” or “mediator”, ‘interlocutor’ isn’t a very common word

      Highlight [page 7]: A grey zone

      • and Note [page 7]: I think it would be helpful to discuss black and white zones before this.

      Highlight [page 7]: The libraries of the scientific Python ecosystem

      • and Note [page 7]: Do you mean SciPy? https://scipy.org/. Can you provide an example of the frequent changes that break backward compatibility?

      Highlight [page 7]: too late that some of their critical dependencies are not as mature as they seemed to be

      • and Note [page 7]: Again, can you provide some evidence for this?

      Highlight [page 7]: The main difference in practice is the widespread use of experimental software by unsuspecting scientists who believe it to be mature, whereas users of instrument prototypes are usually well aware of the experimental status of their equipment.

      • and Note [page 7]: Again this feels like an assertion without evidence. Is this an essay, or a research paper?

      Convivial vs. proprietary software

      Highlight [page 8]: Convivial software [Kell 2020], named in reference to Ivan Illich’s book “Tools for conviviality” [Illich 1973], is software that aims at augmenting its users’ agency over their computation

      • and Note [page 8]: It would be really helpful if the author would define the word, “convivial” here. It would also be very useful if they went on to give an example of what they meant by: “…software that aims at augmenting its users’ agency over their computation.” How does it augment the users agency?

      Highlight [page 8]: Shaw recently proposed the less pejorative term vernacular developers [Shaw 2022]

      • and Note [page 8]: Could you provide an example of what makes “vernacular developers” different, or just what they mean by this term?

      Highlight [page 8]: which Illich has described in detail

      • and Note [page 8]: Should this have a citation to Illich then in this sentence?

      Highlight [page 8]: what has happened with computing technology for the general public

      • and Note [page 8]: Can you give an example of this. Do you mean the rise of Apple and Windows? MS Word? Facebook? A couple of examples would be really useful to make this point clear.

      Highlight [page 8]: tech corporations

      • and Note [page 8]: Suggest “tech corporations” be “technology corporations”.

      Highlight [page 8]: Some research communities have fallen into this trap as well, by adopting proprietary tools such as MATLAB as a foundation for their computational tools and models.

      • and Note [page 8]: Can you provide an example of the alternative here, what would be the way to avoid this trap - use software such as Octave, or?

      Highlight [page 8]: Historically, the Free Software movement was born in a universe of convivial technology.

      • and Note [page 8]: If it is historic, can you please provide a reference to this?

      Highlight [page 8]: most of the software they produced and used was placed in the public domain

      • and Note [page 8]: Can you provide an example of this? I’m also curious how the software was placed in the public domain if there was no way to distribute it via the internet.

      Highlight [page 8]: as they saw legal constraints as the main obstacle to preserving conviviality

      • and Note [page 8]: Again, these are conjectures that are lacking a reference or example, can you provide some examples of references of this?

      Highlight [page 9]: Software complexity has led to a creeping loss of user agency, to the point that even building and installing Open Source software from its source code is often no longer accessible to non-experts, making them dependent not only on the development communities, but also on packaging experts. An experience report on building the popular machine learning library PyTorch from source code nicely illustrates this point [Courtès 2021].

      • and Note [page 9]: Can you summarise what makes it difficult to install Open Source Software? Again, this statement feels like it is making a strong generalisation without clear evidence to support this. The article by Courtès (https://hpc.guix.info/blog/2021/09/whats-in-a-package/) actually notes that it’s straightforward to install PyTorch via pip, but that using an alternative package manager causes difficulty. The point you are making here seems to be that building and installing most open source software is almost prohibitive, but I don’t think you’ve given strong evidence for this claim, and I don’t understand how this builds into your overall argument.

      Highlight [page 9]: It survives mainly in communities whose technology has its roots in the 1980s, such as programming systems inheriting from Smalltalk (e.g. Squeak, Pharo, and Cuis), or the programmable text editor GNU Emacs.

      • and Note [page 9]: Can you give an example of how it survives in these communities?

      Highlight [page 9]: FLOSS has been rapidly gaining in popularity, and receives strong support from the Open Science movement

      • and Note [page 9]: Can you provide some evidence to back this statement up?

      Highlight [page 9]: the traditional values of scientific research.

      • and Note [page 9]: Can you state what you mean by “traditional values of scientific research”

      Highlight [page 9]: always been convivial

      • and Note [page 9]: Can you provide a further explanation of what makes them convivial?

      Transparent vs. opaque software

      Highlight [page 9]: Transparent software

      • and Note [page 9]: It might be useful to explain a distinction between transparent and open software - or to perhaps open with a statement for why we are talking about transparent and opaque software.

      Highlight [page 9]: Large language models are an extreme example.

      • and Note [page 9]: Based on your definition of transparent software - every action produces a visible result. If I type something into an LLM and get an immediate and visible result, how is this different? It is possible you are stating that the behaviour is able to be easily interpreted, or perhaps the behaviour is easy to understand?

      Highlight [page 10]: Even highly interactive software, for example in data analysis, performs nonobvious computations, yielding output that an experienced user can perhaps judge for plausibility, but not for correctness.

      • and Note [page 10]: Could you give a small example of this?

      Highlight [page 10]: It is much easier to develop trust in transparent than in opaque software.

      • and Note [page 10]: Can you state why it is easier to develop this trust?

      Highlight [page 10]: but also less important

      • and Note [page 10]: Can you state why it is less important?

      Highlight [page 10]: even a very weak trustworthiness indicator such as popularity becomes sufficient

      • and Note [page 10]: becomes sufficient for what? Reviewing? Why does it become sufficient?

      Highlight [page 10]: This is currently a much discussed issue with machine learning models,

      • and Note [page 10]: Given it is currently much discussed, could you link to at least 2 research articles discussing this point?

      Highlight [page 10]: treated extensively in the philosophy of science.

      • and Note [page 10]: Given that it has been treated extensively, can you please provide some key references after this statement? You do go on to cite one paper, but it would be helpful to mention at least a few key articles.
      Size of the minimal execution environment

      Highlight [page 11]: The importance of this execution environment is not sufficiently appreciated by most researchers today, who tend to consider it a technical detail

      • and Note [page 11]: This statement is a bit of a sweeping generalisation - why is it not sufficiently appreciated? What evidence do you have of this?

      Highlight [page 11]: Software environments have only recently been recognized as highly relevant for automated reasoning in science and beyond

      • and Note [page 11]: Where have they been only recently recognised?

      Highlight [page 11]: However, they have not yet found their way into mainstream computational science.

      • and Note [page 11]: Could you provide an example of what it might look like if they were in mainstream computational science? For example, https://github.com/ropensci/rix implements using reproducible environments for R with NIX. What makes this not mainstream? Are you talking about mainstream in the sense of MS Excel? SPSS/SAS/STATA?
      Analogies in experimental and theoretical science

      Highlight [page 12]: Non-industrial components are occasionally made for special needs, but this is discouraged by their high manufacturing cost

      • and Note [page 12]: Can you provide an example of this?

      Highlight [page 12]: cables

      • and Note [page 12]: What do you mean by a cable? As in a computer cable? An electricity cable?

      Highlight [page 13]: which an experienced microscopist will recognize. Software with a small defect, on the other hand, can introduce unpredictable errors in both kind and magnitude, which neither a domain expert nor a professional programmer or computer scientist can diagnose easily.

      • and Note [page 13]: I don’t think this is a fair comparison. Surely there must be instances of experienced microscopists not identifying defects? Similarly, why can’t there be examples of domain experts or professional programmers/computer scientists identifying errors? Don’t unit tests help protect us against some of our errors? Granted, they aren’t bullet proof, and perhaps act more like guard rails.

      Highlight [page 13]: where “traditional” means not relying on any form of automated reasoning.

      • and Note [page 13]: Can you give an example of what a “traditional” scientific model or theory is?
      Improving the reviewability of automated reasoning systems

      Highlight [page 14]: Figure 2: Four measures that can be taken to make scientific software more trustworthy.

      • and Note [page 14]: Could the author perhaps instead call these “four measures” or perhaps give them a better name, and number them?
      Review the reviewable

      Highlight [page 14]: mature wide-spectrum software

      • and Note [page 14]: Can you give an example of what “mature wide-spectrum software” is?

      Highlight [page 15]: The main difficulty in achieving such audits is that none of today’s scientific institutions consider them part of their mission.

      • and Note [page 15]: I disagree. Monash provides an example here where they view software as a first class research output: https://robjhyndman.com/files/EBS_research_software.pdf
      Science vs. the software industry

      Highlight [page 15]: Many computers, operating systems, and compilers were designed specifically for the needs of scientists.

      • and Note [page 15]: Could you give an example of this? E.g., FORTRAN? COBOL?

      Highlight [page 15]: Today, scientists use mostly commodity hardware

      • and Note [page 15]: Can you explain what you mean by “commodity hardware”, and give an example.

      Highlight [page 15]: even considered advantageous if it also creates a barrier to reverse- engineering of the software by competitors

      • and Note [page 15]: Can you give an example of this?

      Highlight [page 15]: few customers (e.g. banks, or medical equipment manufacturers) are willing to pay for

      • and Note [page 15]: What about software like SPSS/STATA/SAS - surely many industries, and also researchers, will pay for software like this that is considered mature?
      Emphasize situated and convivial software

      Highlight [page 16]: a convivial collection of more situated modules, possibly supported by a shared wide-spectrum layer.

      • and Note [page 16]: Could you give an example of what this might look like practically? Are you saying things like SciPy would be restructured into many separate modules, or?

      Highlight [page 16]: In terms of FLOSS jargon, users make a partial fork of the project. Version control systems ensure provenance tracking and support the discovery of other forks. Keeping up to date with relevant forks of one’s software, and with the motivations for them, is part of everyday research work at the same level as keeping up to date with publications in one’s wider community. In fact, another way to describe this approach is full integration of scientific software development into established research practices, rather than keeping it a distinct activity governed by different rules.

      • and Note [page 16]: Could the author provide a diagram or schematic to more clearly show how such a system would work with forks etc?

      Highlight [page 17]: a universe is very

      • and Note [page 17]: Perhaps this could be “would be very different” - since this doesn’t yet exist, right?

      Highlight [page 17]: Improvement thus happens by small-step evolution rather than by large-scale design. While this may look strange to anyone used to today’s software development practices, it is very similar to how scientific models and theories have evolved in the pre-digital era.

      • and Note [page 17]: I think some kind of schematic or workflow to compare existing practices to this new practice would be really useful to articulate these points. I also think this new method of development you are proposing should have a concrete name.

      Highlight [page 17]: Existing code refactoring tools can probably be adapted to support application-specific forks, for example via code specialization. But tools for working with the forks, i.e. discovering, exploring, and comparing code from multiple forks, are so far lacking. The ideal toolbox should support both forking and merging, where merging refers to creating consensual code versions from multiple forks. Such maintenance by consensus would probably be much slower than maintenance performed by a coordinated team.

      • and Note [page 17]: Perhaps an example screenshot of a diff could be used to demonstrate that we can make these changes between two branches/commits, but comparing multiple is challenging?
      Make scientific software explainable

      Highlight [page 18]: An interesting line of research in software engineering is exploring possibilities to make complete software systems explainable [Nierstrasz and Girba 2022]. Although motivated by situated business applications, the basic ideas should be transferable to scientific computing

      • and Note [page 18]: Is this similar to concepts such as “X-AI” or “X-ML” - that is, “Explainable” Artificial Intelligence or Machine Learning?

      Highlight [page 18]: Unlike traditional notebooks, Glamorous Toolkit [feenk.com 2023],

      • and Note [page 18]: It appears that you have introduced “Glamorous Toolkit” as an example of these three principles? It feels like it should be introduced earlier in this paragraph?

      Highlight [page 18]: In Glamorous Toolkit, whenever you look at some code, you can access corresponding examples (and also other references to the code) with a few mouse clicks

      • and Note [page 18]: I think it would be very beneficial to show screenshots of what the author means - while I can follow the link to Glamorous Toolkit, bitrot is a thing, and that might go away, so it would be good to see exactly what the author means when they discuss these examples.
      Use Digital Scientific Notations

      Highlight [page 18]: There are various techniques for ensuring or verifying that a piece of software conforms to a formal specification

      • and Note [page 18]: Can you give an example of these techniques?

      Highlight [page 18]: The use of these tools is, for now, reserved to software that is critical for safety or security,

      • and Note [page 18]: Again, could you give an example of this point? Which tools, and which software is critical for safety or security?

      Highlight [page 19]: formal specifications

      • and Note [page 19]: It would be really helpful if you could demonstrate an example of a formal specification so we can understand how they could be considered constraints.

      Highlight [page 19]: All of them are much more elaborate than the specification of the result they produce. They are also rather opaque.

      • and Note [page 19]: It isn’t clear to me how these are opaque - if the algorithm is defined, it can be understood, how is it opaque?

      Highlight [page 19]: Moreover, specifications are usually more modular than algorithms, which also helps human readers to better understand what the software does [Hinsen 2023]

      • and Note [page 19]: A tight example of this would be really useful to make this point clear. Perhaps with a figure of a specification alongside an algorithm.

      Highlight [page 19]: In software engineering, specifications are written to formalize the expected behavior of the software before it is written. The software is considered correct if it conforms to the specification.

      • and Note [page 19]: Is test-driven development an example of this?

      Highlight [page 19]: A formal specification has to evolve in the same way, and is best seen as the formalization of the scientific knowledge. Change can flow from specification to software, but also in the opposite direction.

      • and Note [page 19]: Again, I think a good figure here would be very helpful in articulating this clearly.

      Highlight [page 19]: My own experimental Digital Scientific Notation, Leibniz [Hinsen 2024], is intended to resemble traditional mathematical notation as used e.g. in physics. Its statements are embeddable into a narrative, such as a journal article, and it intentionally lacks typical programming language features such as scopes that do not exist in natural language, nor in mathematical notation.

      • and Note [page 19]: Could we see an example of what this might look like?
      Conclusion

      Highlight [page 20]: Situated software is easy to recognize.

      • and Note [page 20]: Could you provide some examples?

      Highlight [page 20]: Examples from the reproducibility crisis support this view

      • and Note [page 20]: Can you provide some example papers that you mention here?

      Highlight [page 21]: The ideal structure for a reliable scientific software stack would thus consist of a foundation of mature software, on top of which a transparent layer of situated software, such as a script, a notebook, or a workflow, orchestrates the computations that together answer a specific scientific question. Both layers of such a stack are reviewable, as I have explained in section 3.1, but adequate reviewing processes remain to be enacted.

      • and Note [page 21]: Again, I think it would be very insightful for the reader to have a clear figure to rest these ideas upon.

      Highlight [page 21]: has been neglected by research institutions all around the world

      • and Note [page 21]: I do not think this is true - could you instead say “neglected by most/many” perhaps?
    4. In his article Establishing trust in automated reasoning (Hinsen, 2023) Hinsen argues that much of current scientific software lacks reviewability. Because scientific software has become such a central part of many scientific endeavors, he worries that unreviewed software might contain mistakes which will never be spotted and consequently taint the scientific record. To illustrate this worry he cites issues with reproductions in different fields of science, which are often subsumed under the umbrella term of reproducibility crises. These crises, though not uncontested, have varied sources. In the field of social psychology, reproducibility issues can for example often be traced to errors in statistical analyses, while shifting baselines and data leakage lead to problems in ML. Hinsen is only concerned with errors in scientific software. He suggests that potential errors could be spotted more easily if scientific software were more reviewable. Thus he proposes five criteria against which reviewability could be judged. I will not discuss them in detail in this commentary and refer the interested reader to Hinsen (2023, section 2) for an extensive discussion. I note, though, that the five criteria are meant to ensure an ideal type of reproducibility which Hinsen defines as follows: “Ideally, each piece of software should perform a well-defined computation that is documented in sufficient detail for its users and verifiable by independent reviewers.” (Hinsen, 2023, p.2). I take the upshot of these criteria to be that one could assess the reviewability of a piece of software before actually doing the review. They could thus function, perhaps contrary to Hinsen’s open science convictions, as a gatekeeping device in a peer review process for software. An editor could “desk reject” software for not fulfilling the criteria before even sending it out to potential reviewers.
If I am correct in this interpretation then we should entertain the same caution with them as we do with preregistration.

      To be fair, Hinsen envisions a software review process which differs from current peer review with its acknowledged defects in several ways. He says, ”Developing suitable intermediate processes and institutions for reviewing such software is perhaps possible, but I consider it scientifically more appropriate to restructure such software into a convivial collection of more situated modules, possibly supported by a shared wide-spectrum layer.” (Hinsen, 2023, p.16).

      Convivial software in turn is supposed to augment ”its users’ agency over their computation.” (Hinsen, 2023, p.16). This gives us a hint about the kind of user Hinsen has in mind – it is the software developer as a user. His concept of reviewability aims to make software transparent only to this kind of user (see Hinsen, 2023, p.20). In one of his many comparisons of scientific software to science, he notes that ”[. . . ] the main intellectual artifacts of science, i.e. theories and models, have always been convivial.” (Hinsen, 2023, p.9) and we can guess that he wants this to be the case for software too. But, if at all, scientific theories and models have only ever been convivial for scientists. The comparison also works the other way around: science as much as software is heavily fragmented into modules (disciplines). Scientists have always relied on the results of other scientists – they often have done and still do so without reviewing them. Has this hindered progress? I think one would be hard pressed to answer such a question in general for science, and perhaps it is the same for scientific software.

      As Hinsen admits, formal peer review is a quite novel addition to scientific methodology, having been enforced on a larger scale only for the past fifty years or so. Science progressed for many years without it, so we could ask why scientific software should not do likewise. Hinsen’s answer of course has to do with how he grades such software with respect to his reviewability criteria – obviously, most of it scores badly. Most scientific software is neither reviewed nor reviewable, Hinsen claims. This he considers a defect, because only reviewable software has the potential of being reviewed. Many practical considerations he discusses speak against the hope that most reviewable software will actually be reviewed. Still, without reviewability, it is hard, if not impossible, to spot mistakes. A case that was recently brought to my attention emphasizes this point. In Beheim et al. (2021) it is pointed out that a statistical analysis imputed missing values in an archaeo-historical database with the number 0. But for the statistical model (and software!) in use, 0 had a different meaning than ”not available”. This casts doubt on the conclusion that was drawn from the model. Beheim et al. were only able to spot this assumption because the code and data were available for review [1]. Cases like this abound and are examples of invisible programming values that philosopher James Moor discussed in the context of computer ethics (see Moor, 1985, The invisibility factor). Hinsen calls such values “tacit assumptions made by software developers” (Hinsen, 2023, p.3). We might speculate though, what would have happened if this questionable result had been incorporated into the scientific canon. Would later scientists really have continued building on it without ever realizing their shaky foundations? Or would the whole edifice have had to face the tribunal of experience at some point and crumbled?
Perhaps the originating problem would never have been found and a whole research program would have been abandoned, perhaps a completely different part would have been blamed and excised – hard to say!
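The zero-imputation pitfall described above can be sketched in a few lines of Python. The numbers are hypothetical, not taken from the actual archaeo-historical database; the point is only that replacing "not available" with 0 silently biases a downstream summary statistic.

```python
# Minimal sketch of the zero-imputation pitfall (hypothetical numbers,
# not the actual database analysed by Beheim et al.).
values = [4, 5, None, 6, None, 5]  # None marks "not available"

# Imputing missing entries with 0 silently changes their meaning:
# for the downstream model, 0 is a valid observation, not "unknown".
imputed = [0 if v is None else v for v in values]
mean_imputed = sum(imputed) / len(imputed)      # 20 / 6, about 3.33

# Treating missingness explicitly (here simply dropping it) gives a
# different, and in this sketch less biased, answer.
observed = [v for v in values if v is not None]
mean_observed = sum(observed) / len(observed)   # 20 / 4 = 5.0

print(mean_imputed, mean_observed)
```

Nothing in the program’s output flags the problem; only someone reviewing the code, or glancing at the raw data as in note 1, can notice that 0 and ”not available” were conflated.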

      But maybe reviewability can also serve a different aim than establishing trust in the results of certain pieces of scientific software. Perhaps, it facilitates building on and incorporating pieces of such software in other projects. Its purpose could be more instrumental than epistemic. Although Hinsen seems to worry more about the epistemic problems coming with lack of reviewability, many points he makes implicitly deal with practical problems of software engineering. Whoever has fought against Jupyter notebooks with legacy Python requirements can immediately relate to his wish for keeping the execution environment as small as possible. For Hinsen, software is actually defined by its execution environment (Hinsen, 2023, p.11), thus the complete environment must be available for its reviewability [2]. Software cannot be really seen as a separate entity and a review always reviews the whole environment. Analogously to Quine-Duhem we could call this situation review holism. But review holism might be less problematic than its scientific cousin suggests. We might not actually need to explicitly review the whole system. Perhaps it is sufficient if we achieve frictionless reproducibility (see Donoho, 2024), that is, other people can more or less easily incorporate and build on the software in question. Firstly, if other software which incorporates the software in question works, that already is a type of successful reproduction. Secondly, the process of how software evolves might weed out any major errors; whatever errors remain are perhaps just irrelevant. In all fairness it has to be said that Hinsen does not think this is the case with current software. He argues that ”Software with a small defect, on the other hand, can introduce unpredictable errors in both kind and magnitude, which neither a domain expert nor a professional programmer or computer scientist can diagnose easily.” (Hinsen, 2023, p.13).
But if that is the case, then Hinsen’s later recourse to reliabilist-style justifications for software correctness is blocked too. We are in a situation for which the late Humphreys coined the term strange error (Rathkopf & Heinrichs, 2023, p.5). Strange errors are a challenge for any reliabilist account of justification because their magnitude can easily overwhelm arduously collected reliability assurances. If computational reliabilism were just reliabilism, and Hinsen seems to take it as such [3], it would suffer from this problem too. But computational reliabilism has an additional internalist component, which explicitly allows for the whole toolbox of ”rationalist” software verification methods. If possible, we should learn something about our tools other than their mere reliability. As Hacking said, ”[To understand] whether one sees through a microscope, one needs to know quite a lot about the tools.” (Hacking, 1981, p.135).

      I would go so far and say that, if available, internalist justifications are preferable to reliabilistic guarantees. It is only the case that often they are not, and then we might content ourselves with the guarantees reliabilism provides. I said might content here, because such guarantees are unlikely to satisfy the skeptic. Obviously strange errors are always a possibility and no finite observation of correct software behaviour can completely rule them out. But in practice such concerns tend to fade over time, although they provide opportunity for unchecked philosophical skepticism. Many discussions about software opacity feed on such skepticism and this is what I tried to balance with computational reliabilism. In this spirit computational reliabilism was an attempt to temper theoretical skeptics in philosophy, not to give normative guidance to software engineering practice. My view was always that practice has the last say over philosophical concerns. If the emerging view in software engineering practice now is that more skepticism is appropriate, I will happily concur. But I should like to remind the practitioner that evidence for such skepticism has to be given in practice too; mere theoretical possibilities are not sufficient to establish it.

      Reviewability does not mean reviewed. And only reviews can give us trust - or so we might think. As Hinsen acknowledges, we should not expect that a majority of scientific software will ever be reviewed. Does this mean we cannot trust the results from such software? Above I tried to sketch a way out of this conundrum: We can view reviewability as advocated by Hinsen as a way to enable frictionless reproducibility, which in turn lets us build upon software, incorporate it in our own projects and use its results. As long as it works in a practically fulfilling way, this might be all the reviewing we need.

      Notes

      1. A statistician once told me that one glance at the raw data of this example immediately made clear to him that whatever problem there was with imputation, the data would never have supported the desired conclusions in any way. One man’s glance is another’s review.

      2. Hinsen’s definition of software closely parallels that of Moor, who argued that computer programs are a relation between a computer, a set of instructions and an activity (Moor, 1978, p.214).

      3. Hinsen characterizes computational reliabilism as follows: ”As an alternative source of trust, they propose computational reliabilism, which is trust derived from the experience that a computational procedure has produced mostly good results in a large number of applications.” (Hinsen, 2023, p.10)

      References

      Beheim, B., Atkinson, Q. D., Bulbulia, J., Gervais, W., Gray, R. D., Henrich, J., Lang, M., Monroe, M. W., Muthukrishna, M., Norenzayan, A., Purzycki, B. G., Shariff, A., Slingerland, E., Spicer, R., & Willard, A. K. (2021). Treatment of missing data determined conclusions regarding moralizing gods. Nature, 595 (7866), E29–E34. https://doi.org/10.1038/s41586-021-03655-4

      Donoho, D. (2024). Data Science at the Singularity. Harvard Data Science Review, 6 (1). https://doi.org/10.1162/99608f92.b91339ef

      Hacking, I. (1981). Do We See Through a Microscope? Pacific Philosophical Quarterly, 62 (4), 305–322. https://doi.org/10.1111/j.1468-0114.1981.tb00070.x

      Hinsen, K. (2023, July). Establishing trust in automated reasoning. https:// doi.org/10.31222/osf.io/nt96q

      Moor, J. H. (1978). Three Myths of Computer Science. The British Journal for the Philosophy of Science, 29 (3), 213–222. https://doi.org/10.1093/bjps/29.3.213

      Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16 (4), 266–275. https://doi.org/10.1111/j.1467-9973.1985.tb00173.x

      Rathkopf, C., & Heinrichs, B. (2023). Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Cambridge Quarterly of Healthcare Ethics, 1–13. https://doi.org/10.1017/S0963180122000688

    5. Dear editors and reviewers, Thank you for your careful reading of my manuscript and the detailed and insightful feedback. It has contributed significantly to the improvements in the revised version. Please find my detailed responses below.

      1 Reviewer 1

      Thank you for this helpful review, and in particular for pointing out the need for more references, illustrations, and examples in various places of my manuscript. In the case of the section on experimental software, the search for examples made clear to me that the label was in fact badly chosen. I have relabeled the dimension as “stable vs. evolving software”, and rewritten the section almost entirely. Another major change motivated by your feedback is the addition of a figure showing the structure of a typical scientific software stack (Fig. 2), and of three case studies (section 2.7) in which I evaluate scientific software packages according to my five dimensions of reviewability. The discussion of conviviality (section 2.4), a concept that is indeed not widely known yet, has been much expanded. I have followed the advice to add references in many places. I have been more hesitant to follow the requests for additional examples and illustrations, because of the inevitable conflict with the equally understandable request to make the paper more compact. In many cases, I have preferred to refer to examples discussed in the literature. A few comments deserve a more detailed reply:

      Introduction

      Highlight [page 3]: In fact, we do not even have established processes for performing such reviews

      and Note [page 3]: I disagree, there is the Journal of Open Source Software: https://joss.theoj.org/, rOpenSci has a guide for the peer review of statistical software: https://github.com/ropensci/statistical-software-review-book, and also maintains a very clear process of software review: https://ropensci.org/software-review/

      As I say in the section “Review the reviewable”, these reviews are not an independent critical examination of the software as I define it. Reviewers are not asked to evaluate the software’s correctness or appropriateness for any specific purpose. They are expected to comment only on formal characteristics of the software publication process (e.g. “is there a license?”), and on a few software engineering quality indicators (“is there a test suite?”).

      Highlight [page 3]: This means that reviewing the use of scientific software requires particular attention to potential mismatches between the software’s behavior and its users’ expectations, in particular concerning edge cases and tacit assumptions made by the software developers. They are necessarily expressed somewhere in the software’s source code, but users are often not aware of them.

      and Note [page 3]: The same can be said of assumptions for equations and mathematics- the problem here is dealing with abstraction of complexity and the potential unintended consequences.

      Indeed. That’s why we need someone other than the authors to go through mathematical reasoning and verify it. Which we do.

      Reviewability of automated reasoning systems

      Wide-spectrum vs. situated software

      Highlight [page 6]: Situated software is smaller and simpler, which makes it easier to understand and thus to review.

      and Note [page 6]: I’m not sure I agree it is always smaller and simpler- the custom code for a new method could be incredibly complicated.

      The comparison is between situated software and more generic software performing the same operation. For example, a script reading one specific CSV file compared to a subroutine reading arbitrary CSV files. I have yet to see a case in which abstraction from a concrete to a generic function makes code smaller or simpler.
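The contrast can be sketched in a few lines of Python. The file layout and column names below are hypothetical, chosen only to make the comparison concrete.

```python
import csv
import io

# Situated: reads one specific CSV file whose layout is fixed and known.
# No options, no branches -- a reviewer can check it line by line.
def read_experiment(f):
    rows = list(csv.reader(f))
    return [(r[0], float(r[1])) for r in rows[1:]]  # skip the header row

# Generic ("wide-spectrum"): a reader for arbitrary CSV files must expose
# delimiters, header handling, per-column types, ... -- every option is an
# extra code path a reviewer has to reason about.
def read_any_csv(f, delimiter=",", has_header=True, types=None):
    rows = list(csv.reader(f, delimiter=delimiter))
    header = rows[0] if has_header else None
    body = rows[1:] if has_header else rows
    if types is not None:
        body = [[t(x) for t, x in zip(types, row)] for row in body]
    return header, body

data = "sample,temperature\nA,21.5\nB,22.0\n"
print(read_experiment(io.StringIO(data)))
```

Both functions read the same file, but the situated one has a single code path, whereas the generic one multiplies the cases a reviewer must consider; abstraction buys reuse at the cost of reviewability.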

      Convivial vs. proprietary software

      Highlight [page 8]: most of the software they produced and used was placed in the public domain

      and Note [page 8]: Can you provide an example of this? I’m also curious how the software was placed in the public domain if there was no way to distribute it via the internet.

      Software distribution in science was well organized long before the Internet, it was just slower and more expensive. Both decks of punched cards and magnetic tapes were routinely sent by mail. The earliest organized software distribution for science I am aware of was the DECUS Software Library in the early 1960s.

      Size of the minimal execution environment

      Note [page 11]: Could you provide an example of what it might look like if they were in mainstream computational science? For example, https://github.com/ropensci/rix implements using reproducible environments for R with NIX. What makes this not mainstream? Are you talking about mainstream in the sense of MS Excel? SPSS/SAS/STATA?

      I have looked for quantitative studies on software use in science that would allow giving a precise meaning to “mainstream”, but I have not been able to find any. Based on my personal experience, mostly with teaching MOOCs on computational science in which students are asked about the software they use, the most widely used platform is Microsoft Windows. Linux is already a minority platform (though overrepresented in computer science), and Nix users are again a small minority among Linux users.

      Analogies in experimental and theoretical science

      Highlight [page 13]: which an experienced microscopist will recognize. Software with a small defect, on the other hand, can introduce unpredictable errors in both kind and magnitude, which neither a domain expert nor a professional programmer or computer scientist can diagnose easily.

      and Note [page 13]: I don’t think this is a fair comparison. Surely there must be instances of experienced microscopists not identifying defects? Similarly, why can’t there be examples of domain experts or professional programmers/computer scientists identifying errors? Don’t unit tests help protect us against some of our errors? Granted, they aren’t bullet proof, and perhaps act more like guard rails.

      There are probably cases of microscopists not noticing defects, but my point is that if you ask them to look for defects, they know what to do (and I have made this clearer in my text). For contrast, take GROMACS (one of my case studies in the revised manuscript) and ask either an expert programmer or an experienced computational biophysicist if it correctly implements, say, the AMBER force field. They wouldn’t know what to do to answer that question, both because it is ill-defined (there is no precise definition of the AMBER force field) and because the number of possible mistakes and symptoms of mistakes is enormous. I have seen a protein simulation program fail for proteins whose number of atoms was in a narrow interval, defined by the size that a compiler attributed to a specific data structure. I was able to catch and track down this failure only because a result was obviously wrong for my use case. I have never heard of similar issues with microscopes.

      Improving the reviewability of automated reasoning systems

      Review the reviewable

      Highlight [page 15]: The main difficulty in achieving such audits is that none of today’s scientific institutions consider them part of their mission.

      and Note [page 15]: I disagree. Monash provides an example here where they view software as a first class research output: https://robjhyndman.com/files/EBS_research_software.pdf

      This example is about superficial reviews in the context of career evaluation. Other institutions have similar processes. As far as I know, none of them ask reviewers to look at the actual code and comment on its correctness or its suitability for some specific purpose.

      Science vs. the software industry

      Highlight [page 15]: few customers (e.g. banks, or medical equipment manufacturers) are willing to pay for

      and Note [page 15]: What about software like SPSS/STATA/SAS- surely many many industries, and also researchers will pay for software like this that is considered mature?

      I could indeed extend the list of examples to include various industries. Compared to the huge number of individuals using PCs and smartphones, that’s still few customers.

      Emphasize situated and convivial software

      Note [page 16]: Could the author provide a diagram or schematic to more clearly show how such a system would work with forks etc?

      I have decided the contrary: I have significantly shortened this section, removing all speculation about how the ideas could be turned into concrete technology. The reason is that I have been working on this topic since I wrote the reviewed version of this manuscript, and I have a lot more to say about it than would be reasonable to include in this work. This will become a separate article.

      Make scientific software explainable

      Note [page 18]: I think it would be very beneficial to show screenshots of what the author means- while I can follow the link to Glamorous Toolkit, bitrot is a thing, and that might go away, so it would good to see exactly what the author means when they discuss these examples.

      Unfortunately, static screenshots can only convey a limited impression of Glamorous Toolkit, but I agree that they are a more stable support than the software itself. Rather than adding my own screenshots, I refer to a recent paper by the authors of Glamorous Toolkit that includes many screenshots for illustration.

      Use Digital Scientific Notations

      Highlight [page 19]: formal specifications

      and Note [page 19]: It would be really helpful if you could demonstrate an example of a formal specification so we can understand how they could be considered constraints.

      Highlight [page 19]: Moreover, specifications are usually more modular than algorithms, which also helps human readers to better understand what the software does [Hinsen 2023]

      and Note [page 19]: A tight example of this would be really useful to make this point clear. Perhaps with a figure of a specification alongside an algorithm.

      I do give an example: sorting a list. To write down an actual formalized version, I’d have to introduce a formal specification language and explain it, which I think goes well beyond the scope of this article. Illustrating modularity requires an even larger example. This is, however, an interesting challenge which I’d be happy to take up in a future article.

      Highlight [page 19]: In software engineering, specifications are written to formalize the expected behavior of the software before it is written. The software is considered correct if it conforms to the specification.

      and Note [page 19]: Is an example of this test drive development?

      Not exactly, though the underlying idea is similar: provide a condition that a result must satisfy as evidence for being correct. With testing, the condition is spelt out for one specific input. In a formal specification, the condition is written down for all possible inputs.
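      To make the contrast concrete, here is a minimal sketch in Python, using the sorting example mentioned above (the function names are mine, for illustration). A unit test states the correctness condition for one specific input; a specification states it for all inputs, which a program can only sample:

```python
import random

def is_sorted(xs):
    """True if xs is in non-decreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def sort(xs):
    # The implementation under scrutiny; here simply Python's built-in.
    return sorted(xs)

# A test: the condition, spelt out for one specific input.
assert sort([3, 1, 2]) == [1, 2, 3]

# A specification: the condition for *all* inputs -- the output is
# ordered and is a permutation of the input. Here we can only check
# the universal claim on random samples:
for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    ys = sort(xs)
    assert is_sorted(ys)
    assert sorted(xs) == sorted(ys)  # same elements, i.e. a permutation
```

A formal specification language would let one state the universal condition directly rather than sampling it, which is what makes specifications more modular and reviewable than algorithms.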

      2 Reviewer 2

      First of all, I would like to thank the reviewer for this thoughtful review. It addresses many points that required clarification in my article, which I hope to have provided adequately in the revised version.

      One such point is the role and form of reviewing processes for software. I have made it clearer that I take “review” to mean “critical independent inspection”. It could be performed by the user of a piece of software, but the standard case should be a review performed by experts at the request of some institution that then publishes the reviewer’s findings. There is no notion of gatekeeping attached to such reviews. Users are free to ignore them. Given that today, we publish and use scientific software without any review at all, the risk of shifting to the opposite extreme of having reviewers become gatekeepers seems unlikely to me.

      Your comment on users being software developers addresses another important point that I had failed to make clear: conviviality is all about diminishing the distinction between developers and users. Users gain agency over their computations at the price of taking on more of a developer role. This is now stated explicitly in the revised article. Your hypothesis that I want scientific software to be convivial is only partially true. I want convivially structured software to be an option for scientists, with adequate infrastructure and tooling support, but I do not consider it to be the best approach for all scientific software.

      The paragraph on the relevance and importance of reviewing in your comment is a valid point of view but, unsurprisingly, not mine. In the grand scheme of science, no specific quality assurance measure is strictly necessary. There is always another layer above that will catch mistakes that weren’t detected in the layer below. It is thus unlikely that unreliable software will cause all of science to crumble. But from many perspectives, including overall efficiency, personal satisfaction of practitioners, and insight derived from the process, it is preferable to catch mistakes as closely as possible to their source. Pre-digital theoreticians have always double-checked their manual calculations before submitting their papers, rather than sending off unchecked results and counting on confrontation with experiment for finding mistakes. I believe that we should follow this same approach with software. The cost of mistakes can be quite high. Consider the story of the five retracted protein structures that I cite in my article (Miller, 2006, 10.1126/science.314.5807.1856). The five publications that were retracted involved years of work by researchers, reviewers, and editors. In between their publication and their retraction, other protein crystallographers saw their work rejected because it was in contradiction with the high-profile articles that later turned out to be wrong. The whole story has probably involved a few ruined careers in addition to its monetary cost. In contrast, independent critical examination of the software and the research processes in which it was used would likely have spotted the problem rather quickly (Matthews, 2007).

      You point out that reviewability is also a criterion in choosing software to build on, and I agree. Building on other people’s software requires trusting it. Incorporating it into one’s own work (the core principle of convivial software) requires understanding it. This is in fact what motivated my reflections on this topic. I am not much interested in neatly separating epistemic and practical issues. I am a practitioner, my interest in epistemology comes from a desire for improving practices.

      Review holism is something I have not thought about before. I consider it both impossible to apply in practice and of little practical value. What I am suggesting, and I hope to have made this clearer in my revision, is that reviewing must take into account the dependency graph. Reviewing software X requires a prior review of its dependencies (possibly already done by someone else), and a consideration of how each dependency influences the software under consideration. However, I do not consider Donoho’s “frictionless reproducibility” a sufficient basis for trust. It has the same problem as the widespread practice of tacitly assuming a piece of software to be correct because it is widely used. This reasoning is valid only if mistakes have a high chance of being noticed, and that’s in my experience not true for many kinds of research software. “It works”, when pronounced by a computational scientist, really means “There is no evidence that it doesn’t work”.

      This is also why I point out the chaotic nature of computation. It is not about Humphreys’ “strange errors”, for which I have no solution to offer. It is about the fact that looking for mistakes requires some prior idea of what the symptoms of a mistake might be. Experienced researchers do have such prior ideas for scientific instruments, and also e.g. for numerical algorithms. They come from an understanding of the instruments and their use, including in particular a knowledge of how they can go wrong. But once your substrate is a Turing-complete language, no such understanding is possible any more. Every programmer has had the experience of chasing down some bug that at first sight seems impossible. My long-term hope is that scientific computing will move towards domain-specific languages that are explicitly not Turing-complete, and offer useful guarantees in exchange. Unfortunately, I am not aware of any research in this space.

      I fully agree with you that internalist justifications are preferable to reliabilistic ones. But being fundamentally a pragmatist, I don’t care much about that distinction. Indisputable justification doesn’t really exist anywhere in science. I am fine with trust that has a solid basis, even if there remains a chance of failure. I’d already be happy if every researcher could answer the question “why do you trust your computational results?” in a way that shows signs of critical reflection.

      What I care about ultimately is improving practices in computational science. Over the last 30 years, I have seen numerous mistakes being discovered by chance, often leading to abandoned research projects. Some of these mistakes were due to software bugs, but the most common cause was an incorrect mental model of what the software does. I believe that the best technique we have found so far to spot mistakes in science is critical independent inspection. That’s why I am hoping to see it applied more widely to computation.

      2.1 References

      Miller, G. (2006) A Scientist’s Nightmare: Software Problem Leads to Five Retractions. Science 314, 1856. https://doi.org/10.1126/science.314.5807.1856

      Matthews, B.W. (2007) Five retracted structure reports: Inverted or incorrect? Protein Science 16, 1013. https://doi.org/10.1110/ps.072888607

      3 Editor

      Bayesian methods often use MCMC, which is often slow and creates long chains of estimates; however, the chains will show if the likelihood does not have a clear maximum, which is usually from a badly specified model...

      That is an interesting observation I haven’t seen mentioned before. I agree that Bayesian inference is particularly amenable to inspection. One more reason to normalize inspection and inspectability in computational science.
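      The editor’s point can be illustrated with a toy sketch (my own example, not from the manuscript): a random-walk Metropolis chain sampling a likelihood with a clear maximum stays near it, while the same chain on a flat likelihood wanders without bound, which even a crude trace inspection reveals:

```python
import math
import random

def metropolis(log_lik, steps=20000, step_size=1.0, seed=0):
    """Random-walk Metropolis; returns the chain of sampled values."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(steps):
        prop = x + rng.gauss(0, step_size)
        # Accept with probability min(1, lik(prop)/lik(x)).
        if math.log(rng.random() + 1e-300) < log_lik(prop) - log_lik(x):
            x = prop
        chain.append(x)
    return chain

peaked = metropolis(lambda x: -0.5 * x * x)  # clear maximum at 0
flat = metropolis(lambda x: 0.0)             # no maximum: pure random walk

def spread(chain):
    return max(chain) - min(chain)

# The peaked likelihood keeps the chain near its maximum; the flat one
# lets it drift, which a trace plot (or a simple range check) exposes.
print(spread(peaked), spread(flat))
```

In a real analysis one would look at trace plots or convergence diagnostics rather than a raw range, but the principle is the same: the chain itself is an inspectable record of the inference.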

      Some reflection on the growing use of AI to write software may be worthwhile.

      The use of AI in writing and reviewing software is a topic I have considered for this review, since the technology has evolved enormously since I wrote the current version of the manuscript. However, in view of reviewer 1’s constant admonition to back up statements with citations, I refrained from delving into this topic. We all know it’s happening, but it’s too early to observe a clear impact on research software. I have therefore limited myself to a short comment in the Conclusion section.

      I wondered if highly-used software should get more scrutiny.

      This is an interesting suggestion. If and when we get serious about reviewing code, resource allocation will become an important topic. For getting started, it’s probably more productive to review newly published code than heavily used code, because there is a better chance that authors actually act on the feedback and improve their code before it has many users. That in turn will help improve the reviewing process, which is what matters most right now, in my opinion.

      “supercomputers are rare”, should this be “relatively rare” or am I speaking from a privileged university where I’ve always had access to supercomputers.

      If you have easy access to a supercomputer, you should indeed consider yourself privileged. But have you ever used supercomputer time for reviewing someone else’s work? I have relatively easy access to supercomputers as well, but I do have to make a request and promise to do innovative research with the allocated resources.

      I did think about “testthat” at multiple points whilst reading the paper (https://testthat.r-lib.org/)

      I hadn’t seen “testthat” before, not being much of a user of R. It looks interesting, and reminds me of similar test support features in Smalltalk which I found very helpful. Improving testing culture is definitely a valuable contribution to improving computational practices.

      Can badges on github about downloads and maturity help (page 7)?

      Badges can help, on GitHub or elsewhere, e.g. in scientific software catalogs. I see them as a coarse-grained output of reviewing. The right balance to find is between the visibility of a badge and the precision of a carefully written review report. One risk with badges is the temptation to automate the evaluation that leads to it. This is fine for quantitative measures such as test coverage, but what we mostly lack today is human expert judgement on software.

    1. his is a 23-year-old female

      Tone and Style: The tone and style of this document is mostly technical, but also partially informal. The physician uses short sentences with clear facts regarding the patient's condition. However, the technical writing style typically requires formal writing. While the document includes relatively complete sentences with proper punctuation, there are a few errors, including the highlighted portion for this annotation. Therefore, the document may not qualify as entirely technical.

    1. Unlike the linear and interactive models, it doesn’t view communication as a sequential process with distinct senders and receivers. Instead, it emphasizes that communication is a simultaneous and ongoing process

      This model is very interesting to me. When I think of communication, I typically think of it through the interactive model context, where the sender sends their message to the receiver and the receiver sends back their feedback, in a sequential process. But the transactional model is very intriguing because it presents a more complex way of communication that does not follow a sequential process.

    2. The receiver is the target of the message and is responsible for decoding its meaning based on their own experiences and background.

      The receiver is also very important to note in communication. Not only is it important for the sender to communicate a clear and direct message, but it is also important for the receiver to understand the message, and if they don't, they need to communicate that with the sender so that they can receive clarification.

    3. What went wrong?

      Clearly, a lot went wrong in this scenario. Not only was the email's content confusing and led John to think that it meant the deadline was pushed back to the following Friday, but it also never even made it to Maria because her inbox was flooded. I'm not sure if Sarah is to blame for Maria not seeing her email, but I think that on top of the email, she should have communicated the change in deadline in person, just to be sure that everyone understood.

    4. direct eye contact, for instance, is respectful in some cultures but confrontational in others

      This serves as a good example for why it is useful to understand other cultures so that you do not accidentally offend someone via an unintentional miscommunication.

    1. eLife Assessment

      This paper addresses the significant question of quantifying epistasis patterns, which affect the predictability of evolution, by reanalyzing a recently published combinatorial deep mutational scan experiment. The findings are that epistasis is fluid, i.e. strongly background dependent, but that fitness effects of mutations are predictable based on the wild-type phenotype. However, these potentially interesting claims are inadequately supported by the analysis, because measurement noise is not accounted for, arbitrary cutoffs are used, and global nonlinearities are not sufficiently considered. If the results continue to hold after these major improvements in the analysis, they should be of interest to all biologists working in the field of fitness landscapes.

    2. Reviewer #1 (Public review):

      This paper describes a number of patterns of epistasis in a large fitness landscape dataset recently published by Papkou et al. The paper is motivated by an important goal in the field of evolutionary biology to understand the statistical structure of epistasis in protein fitness landscapes, and it capitalizes on the unique opportunities presented by this new dataset to address this problem.

      The paper reports some interesting previously unobserved patterns that may have implications for our understanding of fitness landscapes and protein evolution. In particular, Figure 5 is very intriguing. However, I have two major concerns detailed below. First, I found the paper rather descriptive (it makes little attempt to gain deeper insights into the origins of the observed patterns) and unfocused (it reports what appears to be a disjointed collection of various statistics without a clear narrative). Second, I have concerns with the statistical rigor of the work.

      (1) I think Figures 5 and 7 are the main, most interesting, and novel results of the paper. However, I don't think that the statement "Only a small fraction of mutations exhibit global epistasis" accurately describes what we see in Figure 5. To me, the most striking feature of this figure is that the effects of most mutations at all sites appear to be a mixture of three patterns. The most interesting pattern noted by the authors is of course the "strong" global epistasis, i.e., when the effect of a mutation is highly negatively correlated with the fitness of the background genotype. The second pattern is a "weak" global epistasis, where the correlation with background fitness is much weaker or non-existent. The third pattern is the vertically spread-out cluster at low-fitness backgrounds, i.e., a mutation has a wide range of mostly positive effects that are clearly not correlated with fitness. What is very interesting to me is that all background genotypes fall into these three groups with respect to almost every mutation, but the proportions of the three groups are different for different mutations. In contrast to the authors' statement, it seems to me that almost all mutations display strong global epistasis in at least a subset of backgrounds. A clear example is the C>A mutation at site 3.

      1a. I think the authors ought to try to dissect these patterns and investigate them separately rather than lumping them all together and declaring that global epistasis is rare. For example, I would like to know whether those backgrounds in which mutations exhibit strong global epistasis are the same for all mutations or whether they are mutation- or perhaps position-specific. Both answers could be potentially very interesting, either pointing to some specific site-site interactions or, alternatively, suggesting that the statistical patterns are conserved despite variation in the underlying interactions.

      1b. Another rather remarkable feature of this plot is that the slopes of the strong global epistasis patterns seem to be very similar across mutations. Is this the case? Is there anything special about this slope? For example, does this slope simply reflect the fact that a given mutation becomes essentially lethal (i.e., produces the same minimal fitness) in a certain set of background genotypes?

      1c. Finally, how consistent are these patterns with some null expectations? Specifically, would one expect the same distribution of global epistasis slopes on an uncorrelated landscape? Are the pivot points unusually clustered relative to an expectation on an uncorrelated landscape?

      1d. The shapes of the DFE shown in Figure 7 are also quite interesting, particularly the bimodal nature of the DFE in high-fitness (HF) backgrounds. I think this bimodality must be a reflection of the clustering of mutation-background combinations mentioned above. I think the authors ought to draw this connection explicitly. Do all HF backgrounds have a bimodal DFE? What mutations occupy the "moving" peak?

      1e. In several figures, the authors compare the patterns for HF and low-fitness (LF) genotypes. In some cases, there are some stark differences between these two groups, most notably in the shape of the DFE (Figure 7B, C). But there is no discussion about what could underlie these differences. Why are the statistics of epistasis different for HF and LF genotypes? Can the authors at least speculate about possible reasons? Why do HF and LF genotypes have qualitatively different DFEs? I actually don't quite understand why the transition between bimodal DFE in Figure 7B and unimodal DFE in Figure 7C is so abrupt. Is there something biologically special about the threshold that separates LF and HF genotypes? My understanding was that this was just a statistical cutoff. Perhaps the authors can plot the DFEs for all backgrounds on the same plot and just draw a line that separates HF and LF backgrounds so that the reader can better see whether the DFE shape changes gradually or abruptly.

      1f. The analysis of the synonymous mutations is also interesting. However, I think a few additional analyses are necessary to clarify what is happening here. I would like to know the extent to which synonymous mutations are more often neutral compared to non-synonymous ones. Then, do synonymous pairs interact in the same way as non-synonymous pairs (i.e., plot Figure 1 for synonymous pairs)? Do synonymous or non-synonymous mutations that are neutral exhibit less epistasis than non-neutral ones? Finally, do non-synonymous mutations alter epistasis among other mutations more often than synonymous mutations do? What about synonymous-neutral versus synonymous-non-neutral? Basically, I'd like to understand the extent to which a mutation that is neutral in a given background is more or less likely to alter epistasis between other mutations than a non-neutral mutation in the same background.

      (2) I have two related methodological concerns. First, in several analyses, the authors employ thresholds that appear to be arbitrary. And second, I did not see any account of measurement errors. For example, the authors chose the 0.05 threshold to distinguish between epistasis and no epistasis, but why this particular threshold was chosen is not justified. Another example: whether the product s12 × (s1 + s2) is greater or smaller than zero for any given pair of mutations is uncertain due to measurement errors. Presumably, how to classify each pair of mutations should depend on the precision with which the fitness of mutants is measured. These thresholds could well be different across mutants. We know, for example, that low-fitness mutants typically have noisier fitness estimates than high-fitness mutants. I think the authors should use a statistically rigorous procedure to categorize mutations and their epistatic interactions. I think it is very important to address this issue. I got very concerned about it when I saw on LL 383-388 that synonymous stop codon mutations appear to modulate epistasis among other mutations. This seems very strange to me and makes me quite worried that this is a result of noise in LF genotypes.

    3. Reviewer #2 (Public review):

      Significance:

      This paper reanalyzes an experimental fitness landscape generated by Papkou et al., who assayed the fitness of all possible combinations of 4 nucleotide states at 9 sites in the E. coli DHFR gene, which confers antibiotic resistance. The 9 nucleotide sites make up 3 amino acid sites in the protein, of which one was shown to be the primary determinant of fitness by Papkou et al. This paper sought to assess whether pairwise epistatic interactions differ among genetic backgrounds at other sites and whether there are major patterns in any such differences. They use a "double mutant cycle" approach to quantify pairwise epistasis, where the epistatic interaction between two mutations is the difference between the measured fitness of the double-mutant and its predicted fitness in the absence of epistasis (which equals the sum of individual effects of each mutation observed in the single mutants relative to the reference genotype). The paper claims that epistasis is "fluid," because pairwise epistatic effects often differ depending on the genetic state at the other site. It also claims that this fluidity is "binary," because pairwise effects depend strongly on the state at nucleotide positions 5 and 6 but weakly on those at other sites. Finally, they compare the distribution of fitness effects (DFE) of single mutations for starting genotypes with similar fitness and find that despite the apparent "fluidity" of interactions this distribution is well-predicted by the fitness of the starting genotype.
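      The double-mutant-cycle arithmetic described above can be sketched in a few lines (a minimal illustration with hypothetical fitness values, not data from Papkou et al.):

```python
def epistasis(f_ref, f_mut1, f_mut2, f_double):
    """Double-mutant-cycle epistasis: deviation of the double mutant's
    measured fitness from the additive (no-epistasis) expectation."""
    s1 = f_mut1 - f_ref          # effect of mutation 1 alone
    s2 = f_mut2 - f_ref          # effect of mutation 2 alone
    expected = f_ref + s1 + s2   # no-epistasis prediction
    return f_double - expected

# Hypothetical fitness measurements (made up for illustration):
assert round(epistasis(1.0, 0.8, 0.7, 0.5), 10) == 0.0   # purely additive
assert round(epistasis(1.0, 0.8, 0.7, 0.6), 10) == 0.1   # positive epistasis
```

"Fluidity" in the paper's sense means that this quantity, computed for the same pair of mutations, changes when the reference genotype (the background) is changed.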

      The paper addresses an important question for genetics and evolution: how complex and unpredictable are the effects and interactions among mutations in a protein? Epistasis can make the phenotype hard to predict from the genotype and also affect the evolutionary navigability of a genotype landscape. Whether pairwise epistatic interactions depend on genetic background -- that is, whether there are important high-order interactions -- is important because interactions of order greater than pairwise would make phenotypes especially idiosyncratic and difficult to predict from the genotype (or by extrapolating from experimentally measured phenotypes of genotypes randomly sampled from the huge space of possible genotypes). Another interesting question is the sparsity of such high-order interactions: if they exist but mostly depend on a small number of identifiable sequence sites in the background, then this would drastically reduce the complexity and idiosyncrasy relative to a landscape on which "fluidity" involves interactions among groups of all sites in the protein. A number of papers in the recent literature have addressed the topics of high-order epistasis and sparsity and have come to conflicting conclusions. This paper contributes to that body of literature with a case study of one published experimental dataset of high quality. The findings are therefore potentially significant if convincingly supported.

      Validity:

      In my judgment, the major conclusions of this paper are not well supported by the data. There are three major problems with the analysis.

      (1) Lack of statistical tests. The authors conclude that pairwise interactions differ among backgrounds, but no statistical analysis is provided to establish that the observed differences are statistically significant, rather than being attributable to error and noise in the assay measurements. It has been established previously that the methods the authors use to estimate high-order interactions can result in inflated inferences of epistasis because of the propagation of measurement noise (see PMID 31527666 and 39261454). Error propagation can be extreme because first-order mutation effects are calculated as the difference between the measured phenotype of a single-mutant variant and the reference genotype; pairwise effects are then calculated as the difference between the measured phenotype of a double mutant and the sum of the differences described above for the single mutants. This paper claims fluidity when this latter difference itself differs when assessed in two different backgrounds. At each step of these calculations, measurement noise propagates. Because no statistical analysis is provided to evaluate whether these observed differences are greater than expected because of propagated error, the paper has not convincingly established or quantified "fluidity" in epistatic effects.
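      The error-propagation concern can be illustrated with a small simulation (a sketch under an assumed noise level, not the actual assay's error model): even on a perfectly additive landscape, the four noisy measurements entering the double-mutant-cycle formula combine into apparent epistasis.

```python
import random

rng = random.Random(42)
noise_sd = 0.05  # hypothetical per-measurement assay noise

def measure(true_fitness):
    return true_fitness + rng.gauss(0, noise_sd)

# A purely additive (zero-epistasis) ground truth:
true_ref, s1, s2 = 1.0, -0.2, -0.3

apparent = []
for _ in range(10000):
    f_ref = measure(true_ref)
    f_1 = measure(true_ref + s1)
    f_2 = measure(true_ref + s2)
    f_12 = measure(true_ref + s1 + s2)
    # Double-mutant-cycle deviation from additivity:
    apparent.append(f_12 - f_1 - f_2 + f_ref)

# Noise from four measurements adds in quadrature (sd = 2 * noise_sd here),
# so a fixed +/-0.05 cutoff would call "epistasis" on much of the pure noise.
frac_miscalled = sum(abs(e) > 0.05 for e in apparent) / len(apparent)
print(frac_miscalled)
```

With these assumed numbers, a majority of purely additive pairs would be misclassified as epistatic, which is exactly why significance testing against the measurement error is needed.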

      (2) Arbitrary cutoffs. Many of the analyses involve assigning pairwise interactions into discrete categories, based on the magnitude and direction of the difference between the predicted and observed phenotypes for a pairwise mutant. For example, the authors categorize an interaction as positive if the apparent deviation of phenotype from prediction is >0.05, negative if the deviation is <-0.05, and as no interaction if the deviation is between these cutoffs. Fluidity is diagnosed when the category for a pairwise interaction differs among backgrounds. These cutoffs are essentially arbitrary, and the effects are assigned to categories without assessing statistical significance. For example, an interaction of 0.06 in one background and 0.04 in another would be classified as fluid, but it is very plausible that such a difference would arise due to error alone. The frequency of epistatic interactions in each category as claimed in the paper, as well as the extent of fluidity across backgrounds, could therefore be systematically overestimated or underestimated, affecting the major conclusions of the study.

      (3) Global nonlinearities. The analyses do not consider the fact that apparent fluidity could be attributable to the fact that fitness measurements are bounded by a minimum (the fitness of cells carrying proteins in which DHFR is essentially nonfunctional) and a maximum (the fitness of cells in which some biological factor other than DHFR function is limiting for fitness). The data are clearly bounded; the original Papkou et al. paper states that 93% of genotypes are at the low-fitness limit at which deleterious effects no longer influence fitness. Because of this bounding, mutations that are strongly deleterious to DHFR function will therefore have an apparently smaller effect when introduced in combination with other deleterious mutations, leading to apparent epistatic interactions; moreover, these apparent interactions will have different magnitudes if they are introduced into backgrounds that themselves differ in DHFR function/fitness, leading to apparent "fluidity" of these interactions. This is a well-established issue in the literature (see PMIDs 30037990, 28100592, 39261454). It is therefore important to adjust for these global nonlinearities before assessing interactions, but the authors have not done this.
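      The bounding argument can be made concrete with a toy calculation (hypothetical numbers; `floor` stands in for the assay's lower measurement limit). An additive underlying phenotype passed through a lower bound produces both apparent epistasis and apparent background-dependence of that epistasis:

```python
def observed(phenotype, floor=0.2):
    """Bounded assay: measured fitness cannot drop below the floor."""
    return max(phenotype, floor)

# Additive underlying phenotypes with no true epistasis:
ref, s1, s2 = 1.0, -0.5, -0.5

f_ref = observed(ref)
f_1 = observed(ref + s1)
f_2 = observed(ref + s2)
f_12 = observed(ref + s1 + s2)   # true value 0.0, clipped to the floor

apparent = f_12 - f_1 - f_2 + f_ref
print(apparent)  # nonzero: the bound alone manufactures "epistasis"

# In a background already near the floor, the same mutations look
# nearly neutral and the apparent interaction changes -- "fluidity":
low = 0.25
g_ref = observed(low)
g_1 = observed(low + s1)
g_2 = observed(low + s2)
g_12 = observed(low + s1 + s2)
print(g_1 - g_ref, g_12 - g_1 - g_2 + g_ref)
```

This is why adjusting for the global nonlinearity (e.g. fitting a sigmoid-like link function before estimating interactions) is a prerequisite for claims about specific epistasis.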

      This global nonlinearity could explain much of the fluidity claimed in this paper. It could explain the observation that epistasis appears to depend less on genetic background in low-fitness backgrounds, where it is nearly constant (Figures 2B and 2C): these patterns would arise simply because the effects of deleterious mutations are all epistatically masked in backgrounds that are already near the fitness minimum. It would also explain the observations in Figure 7. For background genotypes with relatively high fitness, there are two distinct peaks of fitness effects, which likely correspond to neutral mutations and to deleterious mutations that bring fitness to the lower bound of measurement; as the fitness of the background declines, the deleterious mutations have a smaller effect, so the two peaks draw closer to each other, and in the lowest-fitness backgrounds they collapse into a single unimodal distribution in which all mutations are approximately neutral (with the distribution reflecting only noise).

      Global nonlinearity could also explain the apparent "binary" nature of epistasis. Sites 4 and 5 change the second amino acid, and the Papkou paper shows that only three amino acid states (C, D, and E) are compatible with function; all others abolish function and yield lower-bound fitness, while mutations at other sites have much weaker effects. The apparent binary nature of epistasis in Figure 5 corresponds to these effects given the nonlinearity of the fitness assay. Most mutations are close to neutral irrespective of the fitness of the background into which they are introduced: these are the "non-epistatic" mutations in the binary scheme. The mutations at sites 4 and 5 that abolish one of the beneficial mutations, however, show a strong background-dependence: they are very deleterious when introduced into a high-fitness background, but their impact shrinks as they are introduced into backgrounds with progressively lower fitness. The apparent "binary" nature of global epistasis is therefore likely a simple artifact of bounding and the bimodal distribution of functional effects: neutral mutations are insensitive to background, while the magnitude of the fitness effect of deleterious mutations declines with background fitness because they are masked by the lower bound. The authors state that "global epistasis often does not hold." This is not established. A more plausible conclusion is that global epistasis imposed by the phenotype limits affects all mutations, but does so in a nonlinear fashion.
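      The masking argument above can be made concrete with a minimal numeric sketch (the effect sizes and the bound below are hypothetical, chosen only for illustration): purely additive deleterious effects, once truncated at a lower measurement bound, yield apparent pairwise epistasis whose magnitude depends on the background.

```python
# Minimal sketch: additive log-fitness effects plus a lower measurement bound.
# All numbers are hypothetical; no true epistasis is present anywhere.
EFFECT_A = -0.8   # strongly deleterious mutation A
EFFECT_B = -0.6   # deleterious mutation B
FLOOR = -1.0      # the assay cannot resolve fitness below this bound

def measured(true_log_fitness):
    """Apparent fitness: the true value truncated at the assay's lower bound."""
    return max(true_log_fitness, FLOOR)

def apparent_epistasis(background):
    """Apparent pairwise epistasis of A and B on a given background."""
    wt = measured(background)
    a = measured(background + EFFECT_A)
    b = measured(background + EFFECT_B)
    ab = measured(background + EFFECT_A + EFFECT_B)
    return ab - a - b + wt

print(apparent_epistasis(0.0))   # high-fitness background: strong apparent epistasis
print(apparent_epistasis(-0.9))  # near-bound background: different apparent epistasis
print(apparent_epistasis(5.0))   # far from the bound: no apparent epistasis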

      In conclusion, most of the major claims in the paper could be artifactual. Much of the claimed pairwise epistasis could be caused by measurement noise, the use of arbitrary cutoffs, and the lack of adjustment for global nonlinearity. Much of the fluidity or higher-order epistasis could be attributable to the same issues. And the apparently binary nature of global epistasis is also the expected result of this nonlinearity.

    4. Reviewer #3 (Public review):

      Summary:

      The authors have studied a previously published large dataset on the fitness landscape of a 9 base-pair region of the folA gene. The objective of the paper is to understand various aspects of epistasis in this system, which the authors have achieved through detailed and computationally expensive exploration of the landscape. The authors describe epistasis in this system as "fluid", meaning that it depends sensitively on the genetic background, thereby reducing the predictability of evolution at the genetic level. However, the study also finds two robust patterns. The first is the existence of a "pivot point" for a majority of mutations, which is a fixed growth rate at which the effect of mutations switches from beneficial to deleterious (consistent with a previous study on the topic). The second is the observation that the distribution of fitness effects (DFE) of mutations is predicted quite well by the fitness of the genotype, especially for high-fitness genotypes. While the work does not offer a synthesis of the multitude of reported results, the information provided here raises interesting questions for future studies in this field.

      Strengths:

      A major strength of the study is its detailed and multifaceted approach, which has helped the authors tease out a number of interesting epistatic properties. The study makes a timely contribution by focusing on topical issues like the prevalence of global epistasis, the existence of pivot points, and the dependence of DFE on the background genotype and its fitness. The methodology is presented in a largely transparent manner, which makes it easy to interpret and evaluate the results.

      The authors have classified pairwise epistasis into six types and found that the type of epistasis changes depending on background mutations. Switches happen more frequently for mutations at functionally important sites. Interestingly, the authors find that even synonymous mutations in stop codons can alter the epistatic interaction between mutations in other codons. Consistent with these observations of "fluidity", the study reports limited instances of global epistasis (which predicts a simple linear relationship between the size of a mutational effect and the fitness of the genetic background in which it occurs). Overall, the work presents some evidence for the genetic context-dependent nature of epistasis in this system.

      Weaknesses:

      Despite the wealth of information provided by the study, there are some shortcomings of the paper which must be mentioned.

      (1) In the Significance Statement, the authors say that the "fluid" nature of epistasis is a previously unknown property. This is not accurate. What the authors describe as "fluidity" is essentially the prevalence of certain forms of higher-order epistasis (i.e., epistasis beyond pairwise mutational interactions). The existence of higher-order epistasis is a well-known feature of many landscapes. For example, in an early work (Szendro et al., J. Stat. Mech., 2013), the presence of a significant degree of higher-order epistasis was reported for a number of empirical fitness landscapes. Likewise, Weinreich et al. (Curr. Opin. Genet. Dev., 2013) analysed several fitness landscapes and found that higher-order epistatic terms were on average larger than the pairwise terms in nearly all cases. They further showed that ignoring higher-order epistasis leads to a significant overestimate of accessible evolutionary paths. The literature on higher-order epistasis has grown substantially since these early works. Any future versions of the present preprint will benefit from a more thorough contextual discussion of the literature on higher-order epistasis.

      (2) In the paper, the term 'sign epistasis' is used in a way that is different from its well-established meaning. (Pairwise) sign epistasis, in its standard usage, is said to occur when the effect of a mutation switches from beneficial to deleterious (or vice versa) when a mutation occurs at a different locus. The authors require a stronger condition, namely that the sum of the individual effects of two mutations should have the opposite sign from their joint effect. This is a sufficient condition for sign epistasis, but not a necessary one. The property studied by the authors is important in its own right, but it is not equivalent to sign epistasis.
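      The distinction between the two definitions can be illustrated with a toy two-locus example (the fitness values below are hypothetical): the case shown is sign epistasis in the standard sense but does not satisfy the stronger condition used in the paper.

```python
# Hypothetical fitness values for a two-locus system ("00" = wild type).
w = {"00": 1.0, "10": 1.2, "01": 1.2, "11": 1.1}

# Effect of mutation 1 in each background of locus 2:
eff1_wt = w["10"] - w["00"]    # +0.2, beneficial in the wild-type background
eff1_mut = w["11"] - w["01"]   # -0.1, deleterious when locus 2 is mutated

# Standard sign epistasis: the sign of a mutation's effect flips with background.
standard_sign_epistasis = (eff1_wt > 0) != (eff1_mut > 0)

# The authors' stronger condition: the joint effect has the opposite sign
# from the sum of the two individual effects.
eff2_wt = w["01"] - w["00"]    # +0.2
joint = w["11"] - w["00"]      # +0.1
authors_condition = (joint > 0) != (eff1_wt + eff2_wt > 0)

print(standard_sign_epistasis, authors_condition)  # True False
```

Here mutation 1 flips from beneficial to deleterious depending on the background (standard sign epistasis), yet the joint effect (+0.1) has the same sign as the sum of individual effects (+0.4), so the stronger condition is not met.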

      (3) The authors have looked for global epistasis in all 108 (9 × 12) mutations, out of which only 16 showed a correlation of R^2 > 0.4. 14 out of these 16 mutations were in the functionally important nucleotide positions. Based on this, the authors conclude that global epistasis is rare in this landscape, and further, that mutations in this landscape can be classified into one of two binary states - those that exhibit global epistasis (a small minority) and those that do not (the majority). I suspect, however, that a biologically significant binary classification based on these data may be premature. Unsurprisingly, mutational effects are stronger at the functional sites as seen in Figure 5 and Figure 2, which means that even if global epistasis is present for all mutations, a statistical signal will be more easily detected for the functionally important sites. Indeed, the authors show that the means of DFEs decrease linearly with background fitness, which hints at the possibility that a weak global epistatic effect may be present (though hard to detect) in the individual mutations. Given the high importance of the phenomenon of global epistasis, it pays to be cautious in interpreting these results.
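      The detectability concern can be illustrated with a small simulation (all numbers hypothetical): two mutations obeying the same linear global-epistasis law but with different effect strengths, measured with identical noise, fall on opposite sides of an R² = 0.4 cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
f_bg = rng.uniform(0.2, 1.0, 200)    # background fitnesses
noise = rng.normal(0, 0.05, 200)     # identical measurement noise for both mutations

def r_squared(x, y):
    """R^2 of an ordinary least-squares linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# Both mutations follow the same linear law, s = a * (0.5 - f), differing
# only in strength a (a strong, functionally important site vs a weak site).
strong = 1.00 * (0.5 - f_bg) + noise
weak = 0.05 * (0.5 - f_bg) + noise

print(r_squared(f_bg, strong), r_squared(f_bg, weak))
```

The strong-effect mutation clears the R² > 0.4 threshold easily, while the weak-effect mutation does not, even though global epistasis is present for both by construction.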

      (4) The study reports that synonymous mutations frequently change the nature of epistasis between mutations in other codons. However, it is unclear whether this should be surprising, because, as the authors have already noted, synonymous mutations can have an impact on cellular functions. The reader may wonder if the synonymous mutations that cause changes in epistatic interactions in a certain background also tend to be non-neutral in that background. Unfortunately, the fitness effect of synonymous mutations has not been reported in the paper.

      (5) The authors find that DFEs of high-fitness genotypes tend to depend only on fitness and not on genetic composition. This is an intriguing observation, but unfortunately, the authors do not provide any possible explanation or connect it to the theoretical literature. I am reminded of work by Agarwala and Fisher (Theor. Popul. Biol., 2019) as well as Reddy and Desai (eLife, 2023), where conditions under which the DFE depends only on fitness have been derived. Any discussion of possible connections to these works could be a useful addition.

    1. Reviewer #1 (Public review):

      Summary:

      The idea is appealing, but the authors have not sufficiently demonstrated the utility of this approach.

      Strengths:

      Novelty of the approach, potential implications for discovering novel interactions

      Comments on revisions:

      The authors have adequately addressed most of my concerns in this improved version of the manuscript.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary: 

      The idea is appealing, but the authors have not sufficiently demonstrated the utility of this approach.

      Strengths: 

      Novelty of the approach, potential implications for discovering novel interactions

      Weaknesses:

      The Duong lab introduced their highly elegant peptidisc approach several years ago. In this present work, they combine it with thermal proteome profiling (TPP) and attempt to demonstrate the utility of this combination for identifying novel membrane protein-ligand interactions.

      While I find this idea intriguing, and the approach potentially useful, I do not feel that the authors had sufficiently demonstrated the utility of this approach. My main concern is that no novel interactions are identified and validated. For the presentation of any new methodology, I think this is quite necessary. In addition, except for MsbA, no orthogonal methods are used to support the conclusions, and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.

      We thank the reviewer for their thoughtful comments. In this revision, we have experimentally addressed the reviewer’s concerns in three ways:

      (1) To demonstrate the utility of our MM-TPP method over the detergent-based TPP workflow (termed DB-TPP), we performed a side-by-side comparison using ATP–VO₄ at 51 °C (Figure 3B and Figure 4A). From the DB-TPP dataset, 7.4% of all identified proteins were annotated as ATP-binding, while 6.4% of proteins differentially stabilized were annotated as ATP-binding. In contrast, in the MM-TPP dataset, 9.3% of all identified proteins were annotated as ATP-binding proteins, while 17% of proteins differentially stabilized were annotated as ATP-binding. The lack of enrichment in the detergent-based approach indicates that the observed differences are likely stochastic, rather than a result of specific ATP–VO₄-mediated stabilization as found with MM-TPP. For instance, several key proteins—BCS1, P2RY6, SLC27A2, ABCB1, ABCC2, and ABCC9— found differentially stabilized using the MM-TPP method showed no such pattern in the DB-TPP dataset. This divergence strongly supports the specificity and utility of our Peptidisc approach. 
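      The enrichment comparison above can be formalized with a one-sided Fisher's exact test (equivalently, a hypergeometric tail probability). The counts below are hypothetical, back-calculated from the reported percentages purely for illustration, since the response quotes proportions rather than raw numbers:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for a hypergeometric draw: n draws from N items, K of which are successes."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical counts consistent with the reported percentages: 9.3% ATP-binding
# among all identified proteins vs 17% among differentially stabilized proteins.
n_total, n_hits = 2000, 100
atp_total = round(0.093 * n_total)   # 186 ATP-binding proteins overall
atp_hits = round(0.17 * n_hits)      # 17 ATP-binding among the stabilized set

p = hypergeom_sf(atp_hits, n_total, atp_total, n_hits)  # one-sided enrichment p-value
print(p)
```

With totals of this order, the roughly two-fold enrichment is statistically significant (p < 0.05), supporting the contrast drawn between the MM-TPP and DB-TPP datasets.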

      (2) To demonstrate that MM-TPP can resolve not only the broader effects of ATP–VO₄ but also specific ligand–protein interactions, we employed 2-methylthio-ADP (2-MeS-ADP), a selective agonist of the P2RY12 receptor [PMID: 24784220]. In that case, we observed clear thermal stabilization of P2RY12, with a more than 6-fold increase in stability at both 51 °C and 57 °C (–log₁₀ p > 5.97; Figure 4B and Figure S4). Notably, no other proteins—including the structurally related but non-responsive P2RY6 receptor—showed comparable stabilization fold change at these temperatures.

      (3) To further probe the reproducibility of the method, we performed an independent MM-TPP evaluation with ATP–VO₄ at 51 °C using data-independent acquisition (DIA), in contrast to the data-dependent acquisition (DDA) approach used in the initial study (Figure S5). Overall, 7.8% of all identified proteins were annotated as ATP-binding, and as before, this proportion increased to 17% among proteins with log₂ fold changes greater than 0.5. Specifically, BCS1 and SLC27A2 exhibited strong stabilization (log₂ fold change > 1), while P2RY6, ABCB11, ABCC2, and ABCG2 showed moderate stabilization (log₂ fold changes between 0.5 and 1), and consistent with previous results, P2RX4 was destabilized, with a log₂ fold change below –1. These findings support the consistency and reproducibility of the method across distinct data acquisition methods.

      My main concern is that no novel interactions are identified and validated. For the presentation of any new methodology, I think this is quite necessary.  

      The primary objective of our study is to establish and benchmark the MM-TPP workflow using known targets, rather than to discover novel ligand–protein interactions. Identifying new binders requires extensive screening and downstream validations, which we believe is beyond the scope of this methodological report. Instead, our study highlights the sensitivity and reliability of the MM-TPP approach by demonstrating consistent and reproducible results with well-characterized interactions.

      We respectfully disagree with the notion that introducing a new methodology must necessarily include the discovery of novel interactions. For instance, Martinez Molina et al. [PMID: 23828940] introduced the cellular thermal shift assay (CETSA) by validating established targets such as MetAP2 with TNP-470 and CDK2 with AZD-5438, without identifying novel protein–ligand pairs. Similarly, Kalxdorf et al. [PMID: 33398190] published their cell-surface thermal proteome profiling (CS-TPP) using ouabain to stabilize the Na⁺/K⁺-ATPase pump in K562 cells, and SB431542 to stabilize its canonical target JAG1. In fact, when these methods revealed additional stabilizations, those events were not validated but instead interpreted through reasoning grounded in the literature. For instance, they attributed the SB431542-induced stabilization of MCT1 to its reported role in cell migration and tumor invasiveness, and explained the SLC1A2 stabilization as related to the disruption of Na⁺/K⁺-ATPase activity by ouabain. In the same way, our interpretation of ATP–VO₄–mediated stabilization of Mao-B is supported by AlphaFold-3 predictions rather than direct orthogonal assays, which are beyond the scope of our methodological presentation.

      Collectively, the influential studies cited above have set methodological precedents by prioritizing validation and proof-of-concept over merely finding uncharacterized binders. In the same spirit, our work is centred on establishing MM-TPP as a robust platform for probing membrane protein–ligand interactions in a water-soluble format. The discovery of novel binders remains an exciting future direction—one that will build upon the methodological foundation laid by the present study.

      In addition, except for MsbA, no orthogonal methods are used to support the conclusions, and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.

      We deliberately began this study with our model protein, MsbA, examined under both native and overexpressed conditions, to establish agreement between MM-TPP (Figure 2D) and biochemical stability assays (Figure 2A). This validation has provided us with the foundation to confidently extend MM-TPP to the mouse organ proteome. To demonstrate the validity of our workflow, we used ATP–VO₄ because its expected targets are well characterized.

      We note that orthogonal validation often requires overproduction and purification of the candidate proteins, as well as suitable antibodies, which is a true challenge for membrane proteins. Here, we demonstrate that MM-TPP can detect ligand-induced thermal shifts directly in native membrane preparations, without requiring protein overproduction or purification. We also emphasize several influential studies in TPP, including Martinez Molina et al. (PMID: 23828940) and Fang et al. (PMID: 34188175), which focused primarily on establishing and benchmarking the methodology, rather than on extensive orthogonal validation. In the same spirit, our study prioritizes methodological development, and accordingly, several orthogonal validations are now included in this revision.

      [...] and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.

      To clarify, all analyses of ligand-induced stabilization or destabilization were carried out using LFQ values. The sole exception is Figure 2B, where we used iBAQ values to depict the relative abundance of proteins within a single sample; this was to show MsbA's relative level within the E. coli peptidisc library.

      Respectfully, we disagree with the assertion that we are “quantifying rather small differences in abundances using either iBAQ or LFQ.” We were able to clearly distinguish between stabilizations driven by specific ligands binding to their targets versus those caused by non-specific ligands with broader activity. This is further confirmed by comparing 2-MeS-ADP, a selective ligand for P2RY12, with ATP–VO₄, a highly promiscuous ligand, and AMP-PNP, which exhibits intermediate breadth. When tested in triplicate at 51 °C, 2-MeS-ADP significantly altered the thermal stability of 27 proteins, AMP-PNP 44 proteins, and ATP–VO₄ 230 proteins, consistent with the expectation that broader ligands stabilize more proteins nonspecifically. Importantly, 2-MeS-ADP produced markedly stronger stabilization of its intended target, P2RY12 (–log₁₀ p = 9.32), than the top stabilized proteins for ATP–VO₄ (DNAJB3, –log₁₀ p = 5.87) or AMP-PNP (FTH1, –log₁₀ p = 5.34). Moreover, 2-MeS-ADP did not significantly stabilize proteins that were consistently stabilized by the broad ligands, such as SLC27A2, which was strongly stabilized by both ATP–VO₄ and AMP-PNP (–log₁₀ p > 2.5). Together, these findings demonstrate that MM-TPP can robustly distinguish between broad-spectrum and target-specific ligands, with selective ligands inducing stronger and more physiologically meaningful stabilization at their intended targets compared to promiscuous ligands.

      Finally, we emphasize that our findings are not marginal, but meet standards of quantitative and statistical rigor consistent with best practices in proteomics. We apply dual thresholds combining effect size (|log₂FC| ≥ 1, i.e., at least a two-fold change) with statistical significance (FDR-adjusted p ≤ 0.05)—criteria commonly used in proteomics methodology studies (e.g., PMID: 24942700, 38724498). Moreover, the stabilization and destabilization events we report are reproducible across biological replicates (n = 3), consistent across adjacent temperatures for most targets, and technically robust across acquisition modes (DDA vs. DIA). Taken together, these results reflect statistically valid and biologically meaningful effects, fully aligned with standards set by prior published proteomics studies.
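      The dual-threshold criterion described here (|log₂FC| ≥ 1 and FDR-adjusted p ≤ 0.05) can be sketched as follows; the fold changes and p-values are invented toy numbers, and the Benjamini-Hochberg step-up is one common choice of FDR adjustment:

```python
import numpy as np

def bh_adjust(p):
    """Benjamini-Hochberg adjusted p-values (monotone step-up procedure)."""
    p = np.asarray(p, float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
    out = np.empty_like(adj)
    out[order] = adj
    return out

# Toy volcano-style filter (illustrative values, not the paper's data):
log2fc = np.array([1.8, 0.3, -1.2, 0.9, 2.5])
pvals = np.array([0.001, 0.20, 0.004, 0.03, 0.0005])

# A protein is a hit only if it passes both the effect-size and FDR thresholds.
hits = (np.abs(log2fc) >= 1.0) & (bh_adjust(pvals) <= 0.05)
print(hits)
```

Note that the fourth entry (log₂FC = 0.9, p = 0.03) passes the significance threshold but fails the effect-size threshold, illustrating why both criteria are applied jointly.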

      Furthermore, the reported changes in abundances are solely based on iBAQ or LFQ analysis. This must be supported by a more quantitative approach such as SILAC or labeled peptides. In summary, I think this story requires a stronger and broader demonstration of the ability of peptidisc-TPP to identify novel physiologically/pharmacologically relevant interactions.

      With respect to labeling strategies, we deliberately avoided using TMT due to concerns about both cost and potential data quality issues. Some recent studies have documented the drawbacks of TMT in contexts directly relevant to our work. For example, a benchmarking study of LiP-MS workflows showed that although TMT increased proteome depth and reduced technical variance, it was less accurate in identifying true drug–protein interactions and produced weaker dose–response correlations compared with label-free DIA approaches [PMID: 40089063]. More broadly, technical reviews have highlighted that isobaric tagging is intrinsically prone to ratio compression and reporter-ion interference due to co-isolation and co-fragmentation of peptides, which flatten measured fold-changes and obscure biologically meaningful differences [PMID: 22580419, 22036744]. As for SILAC, the technique requires metabolic incorporation of heavy amino acids, which is feasible in cultured cells but not in physiologically relevant tissues such as the liver organ used here. SILAC mouse models exist, but they are expensive and time-consuming [PMID: 18662549, 21909926]. We are not a mouse lab, and introducing liver organ SILAC labeling in our workflow is beyond the scope of these revisions. We also note that several hallmark TPP studies have been successfully carried out using label-free quantification [PMID: 25278616, 26379230, 33398190, 23828940], establishing this as an accepted and widely applied approach in the field.

      To further support our conclusions, we added controls showing that detergent solubilization of mouse liver membranes followed by SP4 cleanup fails to detect ATP–VO₄–mediated stabilization of ATP-binding proteins, underscoring the necessity of Peptidisc reconstitution for capturing ligand-induced thermal stabilization. We also present new data demonstrating selective stabilization of the P2Y12 receptor by its agonist 2-MeS-ADP, providing orthogonal, receptor-specific validation within the MM-TPP framework. Finally, an orthogonal DIA acquisition on separate replicates confirmed robust ATP–vanadate stabilization of ATP-binding proteins, including BCS1l and SLC27A2. Together, these additions reinforce that the observed stabilizations are genuine, physiologically relevant ligand–protein interactions and highlight the unique advantage of the Peptidisc-based workflow in capturing such events.

      Cited Reference:

      24784220: Zhang J, Zhang K, Gao ZG, et al. Agonist-bound structure of the human P2Y₁₂ receptor. Nature.  2014;509(7498):119-122. doi:10.1038/nature13288. 

      23828940: Martinez Molina D, Jafari R, Ignatushchenko M, et al. Monitoring drug target engagement in cells and tissues using the cellular thermal shift assay. Science. 2013;341(6141):84-87. doi:10.1126/science.1233606.

      33398190: Kalxdorf M, Günthner I, Becher I, et al. Cell surface thermal proteome profiling tracks perturbations and drug targets on the plasma membrane. Nat Methods. 2021;18(1):84-91. doi:10.1038/s41592-020-01022-1.

      34188175: Fang S, Kirk PDW, Bantscheff M, Lilley KS, Crook OM. A Bayesian semi-parametric model for thermal proteome profiling. Commun Biol. 2021;4(1):810. doi:10.1038/s42003-021-02306-8.

      24942700: Cox J, Hein MY, Luber CA, Paron I, Nagaraj N, Mann M. Accurate proteome-wide label-free quantification by delayed normalization and maximal peptide ratio extraction, termed MaxLFQ. Mol Cell Proteomics. 2014;13(9):2513-2526. doi:10.1074/mcp.M113.031591.

      38724498: Peng H, Wang H, Kong W, Li J, Goh WWB. Optimizing differential expression analysis for proteomics data via high-performing rules and ensemble inference. Nat Commun. 2024;15(1):3922. doi:10.1038/s41467-024-47899-w.

      40089063: Koudelka T, Bassot C, Piazza I. Benchmarking of quantitative proteomics workflows for limited proteolysis mass spectrometry. Mol Cell Proteomics. 2025;24(4):100945. doi:10.1016/j.mcpro.2025.100945.

      22580419: Christoforou AL, Lilley KS. Isobaric tagging approaches in quantitative proteomics: the ups and downs. Anal Bioanal Chem. 2012;404(4):1029-1037. doi:10.1007/s00216-012-6012-9. 

      22036744: Christoforou AL, Lilley KS. Isobaric tagging approaches in quantitative proteomics: the ups and downs. Anal Bioanal Chem. 2012;404(4):1029-1037. doi:10.1007/s00216-012-6012-9. 

      18662549: Krüger M, Moser M, Ussar S, et al. SILAC mouse for quantitative proteomics uncovers kindlin-3 as an essential factor for red blood cell function. Cell. 2008;134(2):353-364. doi:10.1016/j.cell.2008.05.033.

      21909926: Zanivan S, Krueger M, Mann M. In vivo quantitative proteomics: the SILAC mouse. Methods Mol Biol. 2012;757:435-450. doi:10.1007/978-1-61779-166-6_25. 

      25278616: Kalxdorf M, Becher I, Savitski MM, et al. Temperature-dependent cellular protein stability enables high-precision proteomics profiling. Nat Methods. 2015;12(12):1147-1150. doi:10.1038/nmeth.3651.

      26379230: Savitski MM, Reinhard FBM, Franken H, et al. Tracking cancer drugs in living cells by thermal profiling of the proteome. Science. 2014;346(6205):1255784. doi:10.1126/science.1255784.

      33452728: Leuenberger P, Ganscha S, Kahraman A, et al. Cell-wide analysis of protein thermal unfolding reveals determinants of thermostability. Science. 2017;355(6327):eaai7825. doi:10.1126/science.aai7825.

      23066101: Savitski MM, Zinn N, Faelth-Savitski M, et al. Quantitative thermal proteome profiling reveals ligand interactions and thermal stability changes in cells. Nat Methods. 2013;10(12):1094-1096. doi:10.1038/nmeth.2766.  

      30858367: Piazza I, Kochanowski K, Cappelletti V, et al. A machine learning-based chemoproteomic approach to identify drug targets and binding sites in complex proteomes. Nat Commun. 2019;10(1):1216. doi:10.1038/s41467-019-09199-0.

      Reviewer #2 (Public Review):

      Summary:

      The membrane mimetic thermal proteome profiling (MM-TPP) presented by Jandu et al. seems to be a useful way to minimize the interference of detergents in efficient mass spectrometry analysis of membrane proteins. Thermal proteome profiling is a mass spectrometric method that measures binding of a drug to different proteins in a cell lysate by monitoring thermal stabilization of the proteins because of the interaction with the ligands that are being studied. This method has been underexplored for membrane proteome because of the inefficient mass spectrometric detection of membrane proteins and because of the interference from detergents that are used often for membrane protein solubilization.

      Strengths:

      In this report the binding of ligands to membrane protein targets has been monitored in crude membrane lysates or tissue homogenates exalting the efficacy of the method to detect both intended and off-target binding events in a complex physiologically relevant sample setting.

      The manuscript is lucidly written and the data presented seem clear. The only insignificant grammatical error I found was that the 'P' in the word peptidisc is not capitalized at the beginning of the methods section "MM-TPP profiling on membrane proteomes". The clear writing made it easy to understand and evaluate what has been presented. Kudos to the authors.

      Weaknesses:

      While this is a solid report and a promising tool for analyzing membrane protein drug interactions, addressing some of the minor caveats listed below could make it much more impactful.

      The authors claim that MM-TPP is done by "completely circumventing structural perturbations invoked by detergents[1]". This may not be entirely accurate, because before reconstitution of the membrane proteins in peptidisc, the membrane fractions are solubilized by 1% DDM. The solubilization and following centrifugation steps last for at least 45 min. It is unlikely that all the structural perturbations caused by DDM to various membrane proteins and their transient interactions are completely reversed or rescued by peptidisc reconstitution.

      We thank the reviewer for this insightful comment. In response, we have revised the sentence and expanded the discussion to clarify that the Peptidisc provides a complementary approach to detergent-based preparations for studying membrane proteins, preserving native lipid–protein interactions and stabilization effects that may be diminished in detergent.

      To further address the structural perturbations invoked by detergents, and as already detailed to our response to Reviewer 1, we have compared the thermal profile of the Peptidisc library to the mouse liver membranes solubilized with 1% DDM, after incubation with ATP–VO₄ at 51 °C (Figure 4A). The results with the detergent extract revealed random patterns of stabilization and destabilization, with only 6.4% of differentially stabilized proteins being ATP-binding—comparable to the 7.4% observed in the background. In contrast, in the Peptidisc library, 17% of differentially stabilized proteins were ATP-binding, compared to 9.3% in the background. Thus, while Peptidisc reconstitution does not fully avoid initial detergent exposure, these findings underscore the importance of implementing Peptidisc in the TPP workflow when dealing with membrane proteins.

      In the introduction, the authors make statements such as "..it is widely acknowledged that even mild detergents can disrupt protein structures and activities, leading to challenges in accurately identifying drug targets.." and "[peptidisc] libraries are instrumental in capturing and stabilizing IMPs in their functional states while preserving their interactomes and lipid allosteric modulators...'. These need to be rephrased, as it has been shown by countless studies that even with membrane protein suspended in micelles robust ligand binding assays and binding kinetics have been performed leading to physiologically relevant conclusions and identification of protein-protein and protein-ligand interactions.

      We thank the reviewer for this valuable feedback and fully agree with the point raised. In response, we have revised the Introduction and conclusion to moderate the language concerning the limitations of detergent use. We now explicitly acknowledge that numerous studies have successfully used detergent micelles for ligand-binding assays and kinetic analyses, yielding physiologically relevant insights into both protein–protein and protein–ligand interactions [e.g., PMID: 22004748, 26440106, 31776188].

      At the same time, we clarify that the Peptidisc method offers a complementary advantage, particularly in the context of thermal proteome profiling (TPP), which involves mass spectrometry workflows that are incompatible with detergents. In this setting, Peptidiscs facilitate the detection of ligand-binding events that may be more difficult to observe in detergent micelles.

      We have reframed our discussion accordingly to present Peptidiscs not as a replacement for detergent-based methods, but rather as a complementary tool that broadens the available methodological landscape for studying membrane protein interactions.

      If the method involves detergent solubilization, for example using 1% DDM, it is a bit disingenuous to argue that 'interactomes and lipid allosteric modulators' characterized by low-affinity interactions will remain intact or can be rescued upon detergent removal. Authors should discuss this or at least highlight the primary caveat of the peptidisc method of membrane protein reconstitution, which is that it begins with detergent solubilization of the proteome and does not completely circumvent structural perturbations invoked by detergents.

      We would like to clarify that, in our current workflow, ligand incubation occurs after reconstitution into Peptidiscs. As such, the method is designed to circumvent the negative effects of detergent during the critical steps involving low-affinity interactions.

      That said, we fully acknowledge that Peptidisc reconstitution begins with detergent solubilization (e.g., 1% DDM), and we have revised the conclusion to explicitly state this important caveat. As the reviewer correctly points out, this initial step may introduce some structural perturbations or result in the loss of weakly associated lipid modulators.

      However, reconstitution into Peptidiscs rapidly restores a detergent-free environment for membrane proteins, which has been shown in our previous studies [PMID: 38577106, 38232390, 31736482, 31364989] to mitigate these effects. Specifically, we have demonstrated that time-limited DDM exposure, followed by Peptidisc reconstitution, minimizes membrane protein delipidation, enhances thermal stability, retains functionality, and preserves multi-protein assemblies.

      It would also be important to test detergents that are even milder than 1% DDM and ones that are harsher than 1% DDM, to show that this method of reconstitution can indeed rescue the perturbations to membrane protein structure and interactions caused by detergents during the solubilization step.

      We selected 1% DDM based on our previous work [PMID: 37295717, 39313981, 38232390], where it consistently enabled robust and reproducible solubilization for Peptidisc reconstitution. We agree that comparing milder detergents (e.g., LMNG) and harsher ones (e.g., SDC) would provide valuable insights into how detergent strength influences structural perturbations, and how effectively these can be mitigated by Peptidisc reconstitution. Preliminary data (not shown) from mouse liver membranes indicate broadly similar proteomic profiles following solubilization with DDM, LMNG, and SDC, although potential differences in functional activity or ligand binding remain to be investigated.

      Based on the methods provided, it appears that the final amount of detergent in peptidisc membrane protein library was 0.008%, which is ~150 uM. The CMC of DDM depending on the amount of NaCl could be between 120-170 uM.

      While we cannot entirely rule out the presence of residual DDM (0.008%) in the raw library, its free concentration is likely lower than initially estimated because the detergent forms mixed micelles with the amphipathic peptide scaffold, which is supplied in excess during reconstitution. These mixed micelles are subsequently removed during the ultrafiltration step. Furthermore, in related work using His-tagged Peptidiscs [PMID: 32364744], we purified the library by nickel-affinity chromatography following a 5× dilution into a detergent-free buffer. Although this purification step reduced the number of soluble proteins, the same membrane proteins were retained, suggesting that any residual detergent does not significantly interfere with Peptidisc reconstitution. Supporting this, our MM-TPP assays on purified libraries (data not shown) consistently demonstrated stabilization of ATP-binding proteins (e.g., SLC27A2, DNAJB3), indicating that the observed ligand–protein interactions result from successful incorporation into Peptidiscs.
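      The residual-detergent estimate can be checked with a quick back-of-envelope conversion. A minimal sketch, assuming DDM (n-dodecyl-β-D-maltoside) with a molecular weight of ~510.6 g/mol:

```python
# Convert a %(w/v) detergent concentration to micromolar.
# Assumes DDM, MW ~510.6 g/mol (an assumption, not taken from the manuscript).
DDM_MW = 510.6            # g/mol
percent_wv = 0.008        # 0.008% (w/v) = 0.008 g per 100 mL

grams_per_litre = percent_wv * 10          # 0.08 g/L
conc_uM = grams_per_litre / DDM_MW * 1e6   # g/L -> mol/L -> µM

print(f"{conc_uM:.0f} uM")  # ~157 µM, consistent with the reviewer's ~150 µM estimate
```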

      Perhaps, to completely circumvent the perturbations from detergents other methods of detergent-free solubilization such as using SMA polymers and SMALP reconstitution could be explored for a comparison. Moreover, a comparison of the peptidisc reconstitution with detergent-free extraction strategies, such as SMA copolymers, could lend more strength to the presented method.

      We agree that detergent-free methods such as SMA polymers hold promise for membrane protein solubilization. However, in preliminary single-replicate experiments using SMA2000 at 51 °C in the presence of ATP–VO₄ (data not shown), we observed broad, non-specific stabilization effects. Of the 2,287 quantified proteins, 9.3% were annotated as ATP-binding, yet only 9.9% of the 101 proteins showing a log₂ fold change >1 or <–1 were ATP-binding, indicating no meaningful enrichment. Given this lack of specificity and the limited dataset, we chose not to pursue further SMA experiments and have not included them here. However, in a recent study (https://doi.org/10.1101/2025.08.25.672181), we directly compared Peptidisc, SMA, and nanodiscs for liver membrane proteome profiling. In that work, Peptidisc outperformed both SMA and nanodiscs in detecting membrane protein dysregulation between healthy and diseased liver. By extension, we expect Peptidisc to offer superior sensitivity and specificity for detecting ligand-induced stabilization events, such as those observed here with ATP–vanadate.
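      For transparency, the enrichment check described above amounts to the following arithmetic. This is a sketch: the ATP-binding counts are back-calculated from the quoted percentages rather than taken from the raw data.

```python
# Compare the fraction of ATP-binding proteins among |log2FC| > 1 hits
# to the proteome-wide background fraction.
total_proteins = 2287
atp_binding_total = round(0.093 * total_proteins)   # ~213 annotated ATP-binding
hits = 101                                          # proteins with |log2FC| > 1
atp_binding_hits = round(0.099 * hits)              # ~10 of the hits

background_frac = atp_binding_total / total_proteins
hit_frac = atp_binding_hits / hits
enrichment = hit_frac / background_frac             # ~1.06x, i.e. no enrichment

print(f"background: {background_frac:.1%}, hits: {hit_frac:.1%}, "
      f"enrichment: {enrichment:.2f}x")
```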

      Cross-verification of the identified interactions, and subsequent stabilization or destabilizations, should be demonstrated by other in vitro methods of thermal stability and ligand binding analysis using purified protein to support the efficacy of the MM-TPP method. An example cross-verification using SDS-PAGE, of the well-studied MsbA, is shown in Figure 2. In a similar fashion, other discussed targets such as, BCS1L, P2RX4, DgkA, Mao-B, and some un-annotated IMPs shown in supplementary figure 3 that display substantial stabilization or destabilization should be cross-verified.

      We appreciate this suggestion and note that a similar point was raised in R1’s comment “In addition, except for MsbA, no orthogonal methods are used to support the conclusions, and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.” We have developed a detailed response to R1 on this matter, which equally applies here. 

      Cited Reference:

      35616533: Young JW, Wason IS, Zhao Z, et al. Development of a Method Combining Peptidiscs and Proteomics to Identify, Stabilize, and Purify a Detergent-Sensitive Membrane Protein Assembly. J Proteome Res. 2022;21(7):1748-1758. doi:10.1021/acs.jproteome.2c00129. PMID: 35616533.

      31364989: Carlson ML, Stacey RG, Young JW, et al. Profiling the Escherichia coli membrane protein interactome captured in Peptidisc libraries. Elife. 2019;8:e46615. doi:10.7554/eLife.46615. 

      22004748: O'Malley MA, Helgeson ME, Wagner NJ, Robinson AS. Toward rational design of protein detergent complexes: determinants of mixed micelles that are critical for the in vitro stabilization of a G-protein coupled receptor. Biophys J. 2011;101(8):1938-1948. doi:10.1016/j.bpj.2011.09.018.

      26440106: Allison TM, Reading E, Liko I, Baldwin AJ, Laganowsky A, Robinson CV. Quantifying the stabilizing effects of protein-ligand interactions in the gas phase. Nat Commun. 2015;6:8551. doi:10.1038/ncomms9551.

      31776188: Beckner RL, Zoubak L, Hines KG, Gawrisch K, Yeliseev AA. Probing thermostability of detergent-solubilized CB2 receptor by parallel G protein-activation and ligand-binding assays. J Biol Chem. 2020;295(1):181-190. doi:10.1074/jbc.RA119.010696.

      38577106: Jandu RS, Yu H, Zhao Z, Le HT, Kim S, Huan T, Duong van Hoa F. Capture of endogenous lipids in peptidiscs and effect on protein stability and activity. iScience. 2024;27(4):109382. doi:10.1016/j.isci.2024.109382.

      38232390: Antony F, Brough Z, Zhao Z, Duong van Hoa F. Capture of the Mouse Organ Membrane Proteome Specificity in Peptidisc Libraries. J Proteome Res. 2024;23(2):857-867. doi:10.1021/acs.jproteome.3c00825.

      31736482: Saville JW, Troman LA, Duong Van Hoa F. PeptiQuick, a one-step incorporation of membrane proteins into biotinylated peptidiscs for streamlined protein binding assays. J Vis Exp. 2019;(153). doi:10.3791/60661. 

      37295717: Zhao Z, Khurana A, Antony F, et al. A Peptidisc-Based Survey of the Plasma Membrane Proteome of a Mammalian Cell. Mol Cell Proteomics. 2023;22(8):100588. doi:10.1016/j.mcpro.2023.100588. 

      39313981: Antony F, Brough Z, Orangi M, Al-Seragi M, Aoki H, Babu M, Duong van Hoa F. Sensitive Profiling of Mouse Liver Membrane Proteome Dysregulation Following a High-Fat and Alcohol Diet Treatment. Proteomics. 2024;24(23-24):e202300599. doi:10.1002/pmic.202300599. 

      32364744: Young JW, Wason IS, Zhao Z, Rattray DG, Foster LJ, Duong Van Hoa F. His-Tagged Peptidiscs Enable Affinity Purification of the Membrane Proteome for Downstream Mass Spectrometry Analysis. J Proteome Res. 2020;19(7):2553-2562. doi:10.1021/acs.jproteome.0c00022.

      32591519: The M, Käll L. Focus on the spectra that matter by clustering of quantification data in shotgun proteomics. Nat Commun. 2020;11(1):3234. doi:10.1038/s41467-020-17037-3. 

      33188197: Kurzawa N, Becher I, Sridharan S, et al. A computational method for detection of ligand-binding proteins from dose range thermal proteome profiles. Nat Commun. 2020;11(1):5783. doi:10.1038/s41467-020-19529-8. 

      26524241: Reinhard FBM, Eberhard D, Werner T, et al. Thermal proteome profiling monitors ligand interactions with cellular membrane proteins. Nat Methods. 2015;12(12):1129-1131. doi:10.1038/nmeth.3652. 

      23828940: Martinez Molina D, Jafari R, Ignatushchenko M, et al. Monitoring drug target engagement in cells and tissues using the cellular thermal shift assay. Science. 2013;341(6141):84-87. doi:10.1126/science.1233606. 

      32133759: Mateus A, Kurzawa N, Becher I, et al. Thermal proteome profiling for interrogating protein interactions. Mol Syst Biol. 2020;16(3):e9232. doi:10.15252/msb.20199232. 

      14755328: Dorsam RT, Kunapuli SP. Central role of the P2Y12 receptor in platelet activation. J Clin Invest. 2004;113(3):340-345. doi:10.1172/JCI20986. 

      Reviewer #1 (Recommendations for the authors):

      “The authors use iBAQ or LFQ to compare across samples. This inconsistency is puzzling. As far as I know, LFQ should always be used when comparing across samples”

      As mentioned above, we use iBAQ only in Fig. 2B to illustrate within-sample relative abundance; all comparative analyses elsewhere use LFQ. We have updated the Fig. 2B legend to state this explicitly.

      We used iBAQ in Fig. 2B because it provides a notion of protein abundance within a sample, normalizing the summed peptide intensities by the number of theoretically observable peptides. This normalization facilitates comparisons between proteins within the same sample, offering a clearer understanding of their relative molar proportions [PMID: 33452728]. LFQ, by contrast, is optimized for comparing the same protein across different samples. It achieves this by performing delayed normalization to reduce run-to-run variability and by applying maximal peptide ratio extraction, which integrates pairwise peptide intensity ratios across all samples to build a consistent protein-level quantification matrix [PMID: 24942700]. These features make LFQ more robust to missing values and technical variation, thereby enabling accurate detection of relative abundance changes in the same protein under different experimental conditions. This distinction is well supported by the proteomics literature: Smits et al. [PMID: 23066101] used iBAQ specifically to determine the relative abundance of proteins within one sample, whereas LFQ was applied for comparative analyses between conditions.
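      The distinction can be illustrated with a toy calculation. The protein names and intensities below are invented for illustration only; our actual quantification is performed by MaxQuant.

```python
# iBAQ divides a protein's summed peptide intensity by its number of
# theoretically observable peptides, making proteins comparable *within* one sample.
def ibaq(summed_peptide_intensity: float, n_observable_peptides: int) -> float:
    """Intensity-based absolute quantification for one protein in one sample."""
    return summed_peptide_intensity / n_observable_peptides

# Hypothetical proteins: (summed peptide intensity, theoretically observable peptides)
sample = {"ProteinA": (6.0e8, 40), "ProteinB": (3.0e8, 10)}

for name, (intensity, n_pep) in sample.items():
    print(name, f"iBAQ = {ibaq(intensity, n_pep):.2e}")
# ProteinB ends up with the higher iBAQ despite a lower raw intensity, because it
# has fewer observable peptides -- i.e., it is the more abundant protein in molar terms.
```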

      “[Regarding Figure 2A] Why does the control also contain ATP-vanadate? Also, I am not aware of a commercially available chemical "ATP-VO4". I assume this is a mistake”

      The control condition in Figure 2A was mislabeled, and the figure has been corrected to remove this discrepancy. In our experiments, ATP and orthovanadate (VO₄) were added together, and for simplicity this was annotated as “ATP–VO₄.” 

      “[Regarding Figure 2B] What is the fold change in MsbA iBAQ values? It seems that the differences are quite small, and as such require a more quantitative approach than iBAQ (e.g SILAC or some other internal standard). In addition, what information does this panel add relative to 2C”

      The figure has been updated to clarify that the values shown are log₂-transformed iBAQ intensities. Figures 2B and 2C are complementary: Figure 2B shows that in the control sample, MsbA’s peptide abundance decreases with temperature (51, 56, and 61 °C) relative to the remaining bulk proteins. Figure 2C shows the specific thermal profiles of MsbA in control and ATP–vanadate conditions. To make this clearer, we have added a sentence to the Results section explaining the specific role of Figure 2B.

      Together, these panels indicate that the method can identify ligand-induced stabilization even for proteins whose abundance decreases faster than the bulk during the TPP assay. We have provided the rationale for not using SILAC or TMT labeling in our public response.

      “[Regarding Figure 2C] Although not mentioned in the legend, I assume this is iBAQ quantification, which as mentioned above isn't accurate enough for such small differences. In addition, I find this data confusing: why is MsbA more stable at the lower temperatures in the absence of ATP-vanadate? The smoothed-line representation is misleading, certainly given the low number of data points”

      The data presented represent LFQ values for MsbA, and we have updated the figure legend to clearly indicate this. Additionally, as suggested, we have removed the smoothing line to more accurately reflect the data. Regarding the reviewer’s concern about stability at lower temperatures, we note that MsbA exhibits comparable abundance at 38 °C and 46 °C under both conditions, with overlapping error bars. We therefore interpret these data as indicating no significant difference in stability at the lower temperatures, with ligand-dependent stabilization becoming apparent only at elevated temperatures. We do not exclude the possibility that MsbA stability at these temperatures is affected by the conformational dynamics of this ABC transporter upon ATP binding and hydrolysis.

      “[Regarding Figure 3A] is this raw LFQ data? Why did the authors suddenly change from iBAQ to LFQ? I find this inconsistency puzzling”

      To clarify, all analyses of protein stabilization or destabilization presented in the manuscript are based on LFQ values. The only instance where iBAQ was used is Figure 2B, where it served to illustrate the relative peptide abundance of MsbA within the same sample. We have revised the figure legends and text to make this distinction explicit and ensure consistency in presentation.

      “[Regarding Figure 3B] The non-specific ATP-dependent stabilization increases the likelihood of false positive hits. This limitation is not mentioned by the authors. I think it is important to show other small molecules, in addition to ATP. The authors suggest that their approach is highly relevant for drug screening. Therefore, a good choice is to test an effect of a known stabilizing drug (eg VX-809 and CFTR)”

      We thank the reviewer for this suggestion. As noted in the manuscript (results and discussion sections), ATP is a natural hydrotrope and is therefore expected to induce broad, non-specific stabilization effects, a phenomenon also observed in previous proteome-wide studies, which demonstrated ATP’s widespread influence on cytosolic protein solubility and thermal stability (PMID: 30858367). To demonstrate that MM-TPP can resolve specific ligand–protein interactions beyond these global ATP effects, we tested 2-methylthio-ADP (2-MeS-ADP), a selective agonist of P2RY12 (PMID: 14755328). In these experiments, we observed robust and reproducible stabilization of P2RY12 at both 51°C and 57°C, with no consistent stabilization of unrelated proteins across temperatures. This provides direct evidence that our workflow can distinguish specific from non-specific ligand-induced effects. We selected 2-MeS-ADP for its structural stability and its higher receptor affinity relative to ADP, allowing us to extend our existing workflow while testing a receptor-specific interaction. We agree that extending this approach to clinically relevant small-molecule drugs, such as VX-809 with CFTR, would further underscore the pharmacological potential of MM-TPP, and we have now noted this as an important avenue for future studies.

      “X axis of Figure 3B: Log 2 fold difference of what? iBAQ? LFQ? Similar ambiguity regarding the Y axis of 3E. What peptide? And why the constant changes in estimating abundances?”

      We thank the reviewer for pointing out these inaccuracies in the figure annotations. As mentioned above, all analyses (except Figure 2B) are based on LFQ values. We have revised the figure legends and text to make this clear.

      In Figure 3E, “peptide intensity” refers to log2 LFQ peptide intensities derived from the BCS1L protein, as indicated in the figure caption. 

      “The authors suggest that P2RY6 and P2RY12 are stabilized by ADP, the hydrolysis product of ATP. Currently, the support for this suggestion is highly indirect. To support this claim, the authors need to directly show the effect of ADP. In reference to the alpha fold results shown in Figure 4D, the authors state that "Collectively, these data highlight the ability of MM-TPP to detect the side effects of parent compounds, an important consideration for drug development". To support this claim, it is necessary to show that Mao-B is indeed best stabilized with ADP or AMP, rather than ATP.”

      In this revision, we chose not to test ADP directly, as it is a broadly binding, relatively weak ligand that would likely stabilize many proteins without revealing clear target-specific effects. Since we had already evaluated ATP-VO₄, a similarly broad, non-specific ligand, additional testing with ADP would provide limited additional insight. Instead, we prioritized 2-methylthio-ADP, a selective agonist of P2RY12, to more effectively demonstrate the specificity of MM-TPP. With this ligand, we observed clear and reproducible stabilization of P2RY12, underscoring the ability of MM-TPP to resolve receptor–ligand interactions beyond ATP’s broad hydrotropic effects. Importantly, and as expected, we did not observe stabilization of the related purinergic receptor P2RY6, further supporting the specificity of the observed effect.

      We have also revised the AlphaFold-related statement in Figure 4D to adopt a more cautious tone: “Collectively, these data suggest that MM-TPP may detect potential side effects of parent compounds, an important consideration for drug development.” In this context, we use AlphaFold not as a validation tool, but rather as a structural aid to help rationalize why certain off-target proteins (e.g., ATP with Mao-B) exhibit stabilization.

      Reviewer #2 (Recommendations for the authors):

      “In the main text, it will be useful to include the unique peptides table of at least the targets discussed in the manuscript. For example, in presence of AMP-PNP at 51oC P2RY6 shows 4-6 peptides in all n=3 positive & negative ionization modes. But, for P2RY12 only 1-3 peptides were observed. Depending on the sequence length and the relative abundance in the cell of a protein of interest, the number of peptides observed could vary a lot per protein. Given the unique peptide abundance reported in the supplementary file, for various proteins in different conditions, it appears the threshold of observation of two unique peptides for a protein to be analyzed seems less stringent.”

      By applying a filter requiring at least two unique peptides in at least one replicate, we exclude, on average, 15–20% of the total identified proteins. We consider this a reasonable level of stringency that balances confidence in protein identification with the retention of relevant data. This threshold was selected because it aligns with established LC-MS/MS data analysis practices (PMID: 32591519, 33188197, 26524241), and we have cited these references in the Methods section to justify our approach. We have included in this revision a Supplemental Table 2 showing the unique peptide counts for the proteins highlighted in this study.  
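      As a sketch, the filter amounts to the following. The peptide counts are hypothetical; our pipeline applies the equivalent criterion to the MaxQuant unique-peptide counts.

```python
# Retain a protein if at least one replicate identified >= 2 unique peptides.
MIN_UNIQUE_PEPTIDES = 2

def passes_filter(unique_peptides_per_replicate: list[int]) -> bool:
    """True if any replicate has at least MIN_UNIQUE_PEPTIDES for this protein."""
    return any(n >= MIN_UNIQUE_PEPTIDES for n in unique_peptides_per_replicate)

# Hypothetical unique-peptide counts across three replicates
proteins = {"MsbA": [5, 4, 6], "P2RY12": [1, 3, 0], "SinglePeptideHit": [1, 0, 1]}
kept = {p for p, counts in proteins.items() if passes_filter(counts)}
print(kept)  # MsbA and P2RY12 pass; the single-peptide protein is excluded
```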

      “It appears that the time of heat treatment for peptidisc library subjected to MM-TPP profiling was chosen as 3 min based on the results presented in Supplementary Figure 1A, especially the loss of MsbA observed in 1% DDM after 3 min heat perturbation. However, when reconstituted in peptidisc there seems to be no loss in MsbA even after 12 mins at 45oC. So, perhaps a longer heat treatment would be a more efficient perturbation.”

      Previous studies indicate that heat exposure of 3–5 minutes is optimal for visualizing protein denaturation (PMID: 23828940, 32133759). We have added a statement to the Results section to justify our choice of heat exposure. Although MsbA remains stable at 45 °C for extended periods, higher temperatures allow for more effective perturbation to reveal destabilization. Supplementary Figure 1A specifically illustrates MsbA instability in detergent environments.

      “Some of the stabilized temperatures listed in Table 1 are a bit confusing. For example, ABCC3 and ABCG2. In the case of ABCC3 stabilization was observed at 51oC and 60oC, but 56oC is not mentioned. In the same way, 51oC is not mentioned for ABCG2. You would expect protein to be stabilized at 56oC if it is stabilized at both 51oC and 60oC. So, it is unclear if the stabilizations were not monitored for these proteins at the missing temperatures in the table or if no peptides could be recorded at these temperatures as in the case of P2RX4 at 60oC in Figure 4C.”

      Both scenarios are represented in our data. For some proteins, like ABCG2, sufficient peptide coverage was achieved, but no stabilization was observed at intermediate temperatures (e.g., 56 °C), likely because the perturbation was not strong enough to reveal an effect. In other cases, such as ABCC3 at 56 °C or P2RX4 at 60 °C, the proteins were not detected due to insufficient peptide identifications at those temperatures, which explains their omission from the table. 

      “In Figure 4C, it is perplexing to note that despite n = 3 there were no peptide fragments detected for P2RX4 at 60oC in presence of ATP-VO4, but they were detected in presence of AMP-PNP. It will be useful to learn authors explanation for this, especially because both of these ligands destabilize P2RX4. In Figure 4B, it would have been great to see the effect of ADP too, to corroborate the theory that ATP metabolites could impact the thermal stability.”

      In Figure 4C, the absence of P2RX4 peptide detection at 60 °C with ATP–VO₄ mirrors variability observed in the corresponding control (n = 6). Specifically, neither the control nor ATP–VO₄ produced unique peptides for P2RX4 at 60 °C in that replicate, whereas peptides were detected at 60 °C in other replicates for both the control and AMP-PNP, and at 64 °C for ATP–VO₄, the controls, and AMP-PNP. Such missing values are a natural feature of MS-based proteomics and can arise from multiple technical factors, including inconsistent heating, incomplete digestion, stochastic MS injection, or interference from Peptidisc peptides. We therefore interpret the absence of peptides in this replicate as a technical artifact rather than evidence against protein destabilization. Importantly, the overall dataset consistently shows that both ATP–VO₄ and AMP-PNP destabilize P2RX4, supporting their characterization as broad, non-specific ligands with off-target effects.

      Because ATP and ADP belong to the same class of broadly binding, non-specific ligands, additional testing with ADP would not provide meaningful mechanistic insight. Instead, we chose to test 2-methylthio-ADP, a selective P2RY12 agonist. This experiment revealed robust, reproducible stabilization of P2RY12, without consistent effects on unrelated proteins at 51 °C and 57 °C, thereby demonstrating the ability of MM-TPP to detect specific receptor–ligand interactions.

      Finally, we note that P2RX4 is not a primary target of ATP–VO₄ or AMP-PNP. Consequently, the observed destabilization of P2RX4 is expected to be less pronounced than the strong, physiologically consistent stabilization of ABC transporters by ATP–VO₄, as shown in Figure 3D, where the majority of ABC transporters are thermally stabilized across all tested temperatures.

      “As per Figure 4, P2Y receptors P2RY6 and P2RY12 both showed great thermal stability in presence of ATP-VO4 despite their preference for ADP. The authors argue this could be because of ATP metabolism, and binding of the resultant ADP to the P2RY6. If P2RX4 prefers ATP and not the metabolized product ADP that apparently is available, ideally you should not see a change in stability. A stark destabilization would indicate interaction of some sorts. P2X receptors are activated by ATP and are not naturally activated by AMP-PNP. So, destabilization of P2RX4 upon binding to ATP that can activate P2X receptors is conceivable. However, destabilization both in presence of ATP-VO4 and AMP-PNP is unclear. It is perhaps useful to test effect of ADP using this method, and maybe even compare some antagonists such as TNP-ATP.”

      In this study, we did not directly test ADP, as we had already demonstrated that MM-TPP detects stabilization by broad-binding ligands such as ATP–VO₄. Instead, we focused on a more selective ligand, 2-MeS-ADP, a specific agonist of P2RY12 [PMID: 14755328]. Here, we observed robust and reproducible stabilization of P2RY12 at 51 °C and 57 °C, while P2RY6 showed no significant changes, and no other proteins were consistently stabilized (Figure 4B, S4). This confirms that MM-TPP can distinguish specific ligand–receptor interactions from broader ATP-induced effects. To further explore the assay’s nuance and sensitivity, testing additional nucleotide ligands—including antagonists like TNP-ATP or ATPγS—would provide valuable insights, and we have identified this as an important future direction.

    1. Reviewer #2 (Public review):

      Summary:

      This study characterized the function of SLC35G3, a putative transmembrane UDP-N-acetylglucosamine transporter, in spermatogenesis. They showed that SLC35G3 is testis-specific and expressed in round spermatids. Slc35g3-null males were sterile but females were fertile. Slc35g3-null males produced a normal sperm count but sperm showed subtle head morphology defects. Sperm from Slc35g3-null males have defects in uterotubal junction passage, ZP binding, and oocyte fusion. Loss of SLC35G3 causes abnormal processing and glycosylation of a number of sperm proteins in testis and sperm. They demonstrated that SLC35G3 functions as a UDP-GlcNAc transporter in cell lines. Two human SLC35G3 variants impaired its transporter activity, implicating these variants in human infertility.

      Strengths:

      This study is thorough. The mutant phenotype is strong and interesting. The major conclusions are supported by the data. This study demonstrated SLC35G3 as a new and essential factor for male fertility in mice, which is likely conserved in humans.

      Weaknesses:

      Some data interpretations needed to be revised. These have been adequately addressed in the revised manuscript.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      In the present manuscript, Mashiko and colleagues describe a novel phenotype associated with deficient SLC35G3, a testis-specific sugar transporter that is important in glycosylation of key proteins in sperm function. The study characterizes a knockout mouse for this gene and the multifaceted male infertility that ensues. The manuscript is well-written and describes novel physiology through a broad set of appropriate assays.

      Strengths:

      Robust analysis with detailed functional and molecular assays

      Weaknesses:

      (1) The abstract references reported mutations in human SLC35G3, but this is not discussed or correlated to the murine findings to a sufficient degree in the manuscript. The HEK293T experiments are reasonable and add value, but a more detailed discussion of the clinical phenotype of the known mutations in this gene and whether they are recapitulated in this study (or not) would be beneficial.

      Since no patients have been identified, our experiments were conducted to investigate the activity of the mutations reported in humans.

      (2) Can the authors expand on how this mutation causes such a wide array of phenotypic defects? I am surprised there is a morphological defect, a fertilization defect, and a transit defect. Do the authors believe all of these are present in humans as well?

      Thank you for your comment. Many glycoprotein-coding genes that influence sperm head morphology, fertilization, and sperm transit have been identified in knockout mouse studies, and most of these are conserved in humans. Therefore, we believe that glycan modification by SLC35G3 is also involved in the regulation of human sperm. 

      Reviewer #2 (Public review):

      Summary:

      This study characterized the function of SLC35G3, a putative transmembrane UDP-N-acetylglucosamine transporter, in spermatogenesis. They showed that SLC35G3 is testis-specific and expressed in round spermatids. Slc35g3-null males were sterile, but females were fertile. Slc35g3-null males produced a normal sperm count, but sperm showed subtle head morphology defects. Sperm from Slc35g3-null males have defects in uterotubal junction passage, ZP binding, and oocyte fusion. Loss of SLC35G3 causes abnormal processing and glycosylation of a number of sperm proteins in the testis and sperm. They demonstrated that SLC35G3 functions as a UDP-GlcNAc transporter in cell lines. Two human SLC35G3 variants impaired its transporter activity, implicating these variants in human infertility.

      Strengths:

      This study is thorough. The mutant phenotype is strong and interesting. The major conclusions are supported by the data. This study demonstrated SLC35G3 as a new and essential factor for male fertility in mice, which is likely conserved in humans.

      Weaknesses:

      Some data interpretations need to be revised.

      Thank you for your comments. We have revised the interpretations accordingly.

      Reviewer #1 (Recommendations for the authors):

      (1) The introduction could be structured more efficiently. Much of what is discussed in the first paragraph appears to be redundant to the second paragraph (or perhaps unrelated to the present manuscript).

      In the Introduction, we described the process of glycoprotein formation: 1) quality control of nascent glycoproteins in the ER and its importance in sperm fertilizing ability, 2) glycan maturation in the Golgi apparatus and its importance in sperm fertilizing ability, and 3) the supply of nucleotide sugars as the basis of these processes. 

      We would like to retain this structure in the revised manuscript and appreciate your understanding.

      (2) Given the significant difference in morphology between murine and human sperm, can the authors comment on whether these findings are directly translatable to humans?

      Thank you for your comment. There are significant differences in sperm morphology between mice and humans, but many glycoprotein-coding genes that influence sperm head morphology have been identified in knockout mouse studies, and most of these are conserved in humans. Therefore, we believe that glycan modification by SLC35G3 is also involved in the regulation of human sperm head morphology. Observing sperm samples from individuals with SLC35G3 mutations is the most direct approach to verify this point and is considered an important goal for future research. The following text has been added to clarify the point:

      New Line 338; While these proteins are also found in humans, it is still too early to infer the importance of SLC35G3 in the morphogenesis of human sperm heads. Observing sperm samples from individuals with SLC35G3 mutations would be the most direct approach to address this, and we consider it an important objective for future studies.

      (3) Line 194 - while the inability to pass the UTJ may indeed be a component of this infertility phenotype, I would argue that a complete lack of ability to fertilize (even with IVF but not ICSI) suggests that the primary defect is elsewhere. This statement should be removed, and the topic of these two separate mechanisms should be compared/contrasted in the discussion.

      We agree that this is an overstatement, so we changed it;

      New line 187; Thus, the defective UTJ migration is one of the primary causes of Slc35g3-/- male infertility. 

      We believe the current statement in the discussion can stay as it is. 

      Line 379; We reaffirmed that glycosylation-related genes specific to the testis play a crucial role in the synthesis, quality control, and function of glycoproteins on sperm, which are essential for male fertility through their interactions with eggs and the female reproductive system.

      (4) Did the authors consider performing TEM to assess the sperm ultrastructure and the acrosome?

      Since morphological abnormalities were evident even at the macro level, TEM was not performed in this study. In the future, we plan to use immuno-TEM against affected/non-affected glycoproteins when the antibodies become available.

      (5) I would argue that Figure 3 should not be labeled as "essential", given the abnormal sperm head morphology compared to humans, the relatively modest difference between the groups on PCA, and more broadly speaking, the relatively poor correlation with morphology and human male infertility. While globozoospermia is clearly an exception, the data in this figure may not translate to human sperm and/or may not be clinically relevant even if it does.

      Indeed, other KO spermatozoa with similar morphological features are known to cause a reduction in litter size but do not result in complete infertility. As discussed in line 1, this head shape is not essential for fertilization. Reviewer 2 also pointed out that the phrase "Slc35g3 is essential for sperm head formation" is too strong; therefore, we would like to revise the Figure 3 title to "Slc35g3 is involved in the regulation of sperm head morphology."

      (6) Have the authors generated slc35b4 KO mice?

      No, we did not. Since Slc35b4 is expressed throughout the body, a straight knockout may affect other organs or developmental processes. To investigate its role specifically in the testis, it will be necessary to generate a conditional knockout (cKO) model. As this requires considerable cost, time, and labor, we would like to leave it for future investigation.

      Reviewer #2 (Recommendations for the authors):

      (1) Lines 122-123: "it is prominently expressed in the testis, beginning 21 days postpartum (Figure 1B), suggesting expression from the secondary spermatocyte stage to the round spermatid stage in mice." Day 21 indicates the first appearance of round spermatids, but not secondary spermatocytes. Please change to the following: ...suggesting that its expression begins in round spermatids in mice.

      I agree with your comment and have revised the text accordingly (New line 114).

      (2) Figure 1E: What germ cells are they? The type of germ cells needs to be labelled on the image. Double staining with a germ cell marker would be helpful to distinguish germ cells from testicular somatic cells.

      Thank you for your comment. We replaced the Figure 1E as follows.

      To distinguish germ cells from testicular somatic cells, we used the germ cell marker TRA98 antibody. Furthermore, based on the nuclear and GM130 staining pattern, we consider that the Golgi apparatus of round spermatids is labeled.

      (3) Figure 2C: The most abundant WB band is between 20 and 25 kD and is non-specific. Does the arrow point to the expected SLC35G3 band? There are two minor bands above the main non-specific band. Are both bands specific to SLC35G3? Given the strong non-specific band on WB, how specific is the immunofluorescence signal produced by this antibody? These need to be explained and discussed.

      The arrow pointed to the expected size (35 kDa).

      We suspected that these non-specific bands were due to blood contamination, so we repeated the analysis with testicular germ cells and confirmed that the non-specific bands disappeared in the subsequent Western blot. The specificity of the immunofluorescence signal is supported by its complete absence in the KO, as shown in the Supplementary Figures. We have decided to include this improved dataset. Thank you for your comment, which helped us improve the data.

      Author response image 1.

      (4) Line 184: "Slc35g3-/--derived sperm have defects in ZP binding and oolemma fusion ability, but genomic integrity is intact." Producing viable offspring does not necessarily mean that genomic integrity is intact. Suggestion: Slc35g3-/--derived sperm have defects in ZP binding and oolemma fusion ability but produce viable offspring. Likewise, the Figure S9 caption also needs to be changed.

      Thank you for your constructive comment. We have revised the text as you suggested.

      (5) Figure 3. "Slc35g3 is essential for sperm head formation". This statement is too strong. It is not essential for sperm head formation. The sperm head is still formed, but shows subtle deformation.

      Thank you for your suggestion. We changed as follows:

      Fig. 3: "Slc35g3 is involved in the regulation of sperm head morphology."

      (6) Lines 204-205: Figure 6B: "Interestingly, some bands of sperm acrosome-associated 1 (SPACA1; 26) disappeared in Slc35g3-/- testis lysates." I don't see the absence of SPACA1 bands in -/- testis. This needs to be clearly labeled with arrows. On the contrary, the bands are stronger in Slc35g3-/- testis lysates.

      Thank you for your comment. After carefully considering your comments, we concluded that using "disappeared" is indeed inappropriate. We would like to revise the sentence as follows: New line 197; "Interestingly, SPACA1 (Sperm Acrosome Associated 1; 26) exhibited a subtle difference in banding pattern in the Slc35g3-/- testis lysate."

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      Zhang et al. used a conditional knockout mouse model to re-examine the role of the RNA-binding protein PTBP1 in the transdifferentiation of astroglial cells into neurons. Several earlier studies reported that PTBP1 knockdown can efficiently induce the transdifferentiation of rodent glial cells into neurons, suggesting potential therapeutic applications for neurodegenerative diseases. However, these findings have been contested by subsequent studies, which in turn have been challenged by more recent publications. In their current work, Zhang et al. deleted exon 2 of the Ptbp1 gene using an astrocyte-specific, tamoxifen-inducible Cre line and investigated, using fluorescence imaging and bulk and single-cell RNA-sequencing, whether this manipulation promotes the transdifferentiation of astrocytes into neurons across various brain regions. The data strongly indicate that genetic ablation of PTBP1 is not sufficient to drive efficient conversion of astrocytes into neurons. Interestingly, while PTBP1 loss alters splicing patterns in numerous genes, these changes do not shift the astroglial transcriptome toward a neuronal profile.

      Strengths:

      Although this is not the first report of PTBP1 ablation in mouse astrocytes in vivo, this study utilizes a distinct knockout strategy and provides novel insights into PTBP1-regulated splicing events in astrocytes. The manuscript is well written, and the experiments are technically sound and properly controlled. I believe this study will be of considerable interest to a broad readership.

      Weaknesses:

      (1) The primary point that needs to be addressed is a better understanding of the effect of exon 2 deletion on PTBP1 expression. Figure 4D shows successful deletion of exon 2 in knockout astrocytes. However, assuming that the coverage plots are CPM-normalized, the overall PTBP1 mRNA expression level appears unchanged. Figure 6A further supports this observation. This is surprising, as one would expect that the loss of exon 2 would shift the open reading frame and trigger nonsense-mediated decay of the PTBP1 transcript. Given this uncertainty, the authors should confirm the successful elimination of PTBP1 protein in cKO astrocytes using an orthogonal approach, such as Western blotting, in addition to immunofluorescence. They should also discuss possible reasons why PTBP1 mRNA abundance is not detectably affected by the frameshift.

      We thank the reviewer for raising this important point. Indeed, the deletion of exon 2 introduces a frameshift that is predicted to disrupt the PTBP1 open reading frame and trigger nonsense-mediated decay (NMD). While our CPM-normalized coverage plots (Figure 4D) and gene-level expression analysis (Figure 6A) suggest that PTBP1 mRNA levels remain largely unchanged in cKO astrocytes, we acknowledge that this observation is counterintuitive and merits further clarification.

      We suspect that the process of brain tissue dissociation and FACS sorting for bulk or single-cell RNA-seq may enrich for nuclear transcripts and thus dilute the NMD signal, which occurs in the cytoplasm. Alternatively, the transcripts (like those of some other genes) may escape NMD through unknown mechanisms. Although a frameshift is a strong trigger for NMD, it does not guarantee that NMD will occur in every case. (lines 346-353)

      Regarding the validation of PTBP1 protein depletion in cKO astrocytes by Western blotting, we acknowledge that orthogonal approaches to confirm PTBP1 elimination would address uncertainty around the effect of exon 2 deletion on PTBP1 expression. However, the low cell yield of cKO astrocytes via FACS poses a significant burden on obtaining sufficient samples for immunoblot detection of PTBP1 depletion: on average, 3-5 adult animals per genotype (with three different alleles) are needed for each biological replicate. The manuscript contains PTBP1 immunofluorescence staining of brain slices to demonstrate PTBP1 deletion (Figures 1-2, Figure 3 supplement 1). Our characterization of this Ptbp1 deletion allele in other contexts showed the loss of full-length PTBP1 protein in ESCs by Western blotting (PMID: 30496473). Furthermore, germline homozygous mutant mice do not survive beyond embryonic day 6, supporting that it is a loss-of-function allele.

      (2) The authors should analyze PTBP1 expression in WT and cKO substantia nigra samples shown in Figure 3 or justify why this analysis is not necessary.

      We thank the reviewer for pointing out this important question. Although we are using an astrocyte-specific PTBP1 knockout (KO) mouse model, which is designed to delete PTBP1 in all astrocytes throughout the mouse brain, and although we have systematically verified PTBP1 elimination in different brain regions (cortex and striatum) at multiple time points (from 4w to 12w after tamoxifen administration), we agree that it remains necessary and important to demonstrate whether the observed lack of astrocyte-to-neuron conversion is indeed associated with sufficient PTBP1 depletion.

      We have analyzed PTBP1 expression in the substantia nigra, as we did in the cortex and striatum, and added a new figure (Figure 3-figure supplement 1) to show the results. In cKO samples, tdT+ cells lack PTBP1 immunostaining, and there is no overlap of NeuN+ and tdT+ signals. These results show effective PTBP1 depletion in the substantia nigra, similar to that observed in the cortex and striatum. (lines 221-224)

      (3) Lines 236-238 and Figure 4E: The authors report an enrichment of CU-rich sequences near PTBP1-regulated exons. To better compare this with previous studies on position-specific splicing regulation by PTBP1, it would be helpful to assess whether the position of such motifs differs between PTBP1-activated and PTBP1-repressed exons.

      We thank the reviewer for this insightful comment. We agree that assessing the positional distribution of CU-rich motifs between PTBP1-activated and PTBP1-repressed exons would provide valuable insight into the position-specific regulatory mechanisms of PTBP1. In response, we have performed separate motif enrichment analyses for PTBP1-activated and PTBP1-repressed exons and examined whether their positional patterns differ (Figure 4–figure supplement 2).

      Our analysis revealed that CU-rich motifs were significantly enriched in the upstream introns of exons both activated and repressed upon PTBP1 loss, with higher enrichment observed in repressed exons (enrichment ratio = 2.14, q = 9.00×10⁻⁵) than in activated exons (enrichment ratio = 1.72, q = 7.75×10⁻⁵) (Figure 4–figure supplement 2B–C). In contrast, no CU-rich motifs were found downstream of activated exons (Figure 4–figure supplement 2D), while a weak, non-significant enrichment was observed downstream of repressed exons (enrichment ratio = 1.21, q = 0.225; Figure 4–figure supplement 2E). These results do not fully fit with a couple of earlier PTBP1 CLIP studies showing differential PTBP1 binding for repressed versus activated exons, but are more in line with the Black lab study (PMID: 24499931), which found that PTBP1 binds upstream introns of both repressed and activated exons. In either case, PTBP1 affects a diverse set of alternative exons, likely through diverse context-dependent binding patterns (lines 244-257).
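For readers unfamiliar with this type of analysis, enrichment ratios and p-values of this kind are typically obtained from a two-by-two contingency test. The sketch below is illustrative only: the motif counts are hypothetical (not the study's data), the function name is ours, and the multiple-testing correction that produces the q-values quoted above is omitted for brevity.

```python
from scipy.stats import fisher_exact

def motif_enrichment(hits_fg, n_fg, hits_bg, n_bg):
    """One-sided Fisher's exact test for motif over-representation in a
    foreground set of introns relative to a background set."""
    table = [[hits_fg, n_fg - hits_fg],
             [hits_bg, n_bg - hits_bg]]
    _, p = fisher_exact(table, alternative="greater")
    ratio = (hits_fg / n_fg) / (hits_bg / n_bg)  # fold enrichment
    return ratio, p

# Hypothetical counts: CU-rich motif in 30/100 foreground vs 10/100 background introns
ratio, p = motif_enrichment(30, 100, 10, 100)
```

With these invented counts the fold enrichment is 3.0 at a small one-sided p-value; applying a Benjamini-Hochberg correction across all tested motifs would convert such p-values into q-values.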

      (4) The analyses in Figure 5 and its supplement strongly suggest that the splicing changes in PTBP1-depleted astrocytes are distinct from those occurring during neuronal differentiation. However, the authors should ensure that these comparisons are not confounded by transcriptome-wide differences in gene expression levels between astrocytes and developing neurons. One way to address this concern would be to compare the new PTBP1 cKO data with publicly available RNA-seq datasets of astrocytes induced to transdifferentiate into neurons using proneural transcription factors (e.g., PMID: 38956165).

      We would like to express our gratitude for the thoughtful feedback. We agree that transcriptome-wide differences in gene expression between astrocytes and developing neurons could confound the interpretation of splicing differences. To address this concern, we have incorporated publicly available RNA-seq datasets from studies in which astrocytes are reprogrammed into neurons using proneural transcription factors, Ngn2 or PmutNgn2 (PMID: 38956165).

      The results of principal component analysis (PCA) for splicing profiles revealed that the in vivo splicing profiles from this study and the in vitro splicing profiles from PMID 38956165 are well separated on PC1 and PC2. While Ngn2/PmutNgn2-induced neurons and control astrocytes started to show distinction on PC3 (and to some degree on PC4), Ptbp1 cKO samples remained tightly grouped with control astrocytes and showed no directional shift toward the neuronal cluster (Figure 5–figure supplement 2B). These findings further support the conclusion that PTBP1 depletion in mature astrocytes does not induce a neuronal-like splicing program, even when compared against neurons derived from the astrocyte lineage (lines 306-318).

      The pairwise correlation analysis of percent spliced in between Ptbp1 cKO, control astrocytes, and induced neurons confirmed that Ptbp1 cKO astrocytes are highly similar to control astrocytes (ρ = 0.81) and clearly distinct from induced neurons (ρ = 0.62) (Figure 5–figure supplement 2C), reinforcing the notion that PTBP1 loss alone is insufficient to drive a neuronal-like splicing transition (lines 319-336).
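As a minimal sketch of what this comparison amounts to computationally, the snippet below correlates percent-spliced-in (PSI) profiles with Spearman's ρ. The PSI vectors are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical PSI values over the same set of alternative exons
psi_ctrl = np.array([0.10, 0.85, 0.40, 0.95, 0.20, 0.60])  # control astrocytes
psi_cko  = np.array([0.12, 0.80, 0.45, 0.90, 0.25, 0.55])  # cKO: near-identical exon ranking
psi_neur = np.array([0.90, 0.15, 0.70, 0.10, 0.80, 0.30])  # induced neurons: divergent profile

rho_ctrl_cko, _ = spearmanr(psi_ctrl, psi_cko)   # high: cKO resembles control
rho_cko_neur, _ = spearmanr(psi_cko, psi_neur)   # low: cKO does not resemble neurons
```

A high ρ between cKO and control profiles alongside a markedly lower ρ against induced neurons is the qualitative pattern reported above (0.81 vs. 0.62 on the real data).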

      Consistent with the splicing-profile analysis, PCA of gene expression profiles showed that control and Ptbp1 cKO astrocytes clustered tightly together with no directional shift toward the neuronal cluster, while Ngn2/PmutNgn2-induced neurons and control astrocytes were distributed across a broader range (Figure 6–figure supplement 1A–B). Correlation analysis further supported this result, with strong similarity between Ptbp1 cKO and control astrocytes (ρ = 0.97) and low similarity between Ptbp1 cKO astrocytes and induced neurons (ρ = 0.27) (Figure 6–figure supplement 1C). These findings indicate that, even with PTBP1 loss, cKO astrocytes retain a transcriptional profile very distinct from that of neurons, underscoring that Ptbp1 deficiency alone does not induce astrocyte-to-neuron reprogramming at the transcriptomic level (lines 366-373).

      Reviewer #2 (Public review):

      Summary:

      The manuscript by Zhang and colleagues describes a study that investigated whether the deletion of PTBP1 in adult astrocytes in mice led to an astrocyte-to-neuron conversion. The study revisited the hypothesis that reduced PTBP1 expression reprogrammed astrocytes to neurons. More than 10 studies have been published on this subject, with contradicting results. Half of the studies supported the hypothesis while the other half did not. The question being addressed is an important one because if the hypothesis is correct, it can lead to exciting therapeutic applications for treating neurodegenerative diseases such as Parkinson's disease.

      In this study, Zhang and colleagues conducted a conditional mouse knockout study to address the question. They used the Cre-LoxP system to specifically delete PTBP1 in adult astrocytes. Through a series of carefully controlled experiments, including cell lineage tracing, the authors found no evidence for the astrocyte-to-neuron conversion.

      The authors then carried out a key experiment that none of the previous studies on the subject did: investigating alternative splicing pattern changes in PTBP1-depleted cells using RNA-seq analysis. The idea is to compare the splicing pattern change caused by PTBP1 deletion in astrocytes to what occurs during neurodevelopment. This is an important experiment that will help illuminate whether the astrocyte-to-neuron transition occurred in the system. The result was consistent with that of the cell staining experiments: no significant transition was detected.

      These experiments demonstrate that, in this experimental setting, PTBT1 deletion in adult astrocytes did not convert the cells to neurons.

      Strengths:

      This is a well-designed, elegantly conducted, and clearly described study that addresses an important question. The conclusions provide important information to the field.

      To this reviewer, this study provided convincing and solid experimental evidence to support the authors' conclusions.

      Weaknesses:

      The Discussion in this manuscript is short and can be expanded. Can the authors speculate what led to the contradictory results in the published studies? The current study, in combination with the study published in Cell in 2021 by Wang and colleagues, suggests that observed difference is not caused by the difference of knockdown vs. knockout. Is it possible that other glial cell types are responsible for the transition? If so, what cells? Oligodendrocytes?

      We are grateful for the reviewer’s careful reading and valuable suggestions. We have expanded the Discussion to include discussion of possible origins of glial cells responsible for neuronal transition. (lines 441-461)

      Reviewer #1 (Recommendations for the authors):

      (1) Throughout the text and figures, it is customary to write loxP with a capital "P".

      We have capitalized “P” in loxP throughout the text and figures.

      (2) It would be helpful to indicate the brain regions analyzed above the images in Figure 1B-C, Figure 2A-B, Figure 1 - Supplement 3, and Figure 2 - Supplement 2, as was done in Figure 1 - Supplement 1.

      The labels indicating brain regions of corresponding images have been added to the figures. 

      (3) The arrowheads in Figure 1C, Figure 2B, Figure 3, and several supplemental panels are nearly equilateral triangles, making their direction difficult to discern. Consider using a more slender or indented design (e.g., ➤).

      We have replaced triangular arrowheads with indented arrowheads in the figures. 

      (4) Lines 181-209: This section should be revised, given that the striatum is not a midbrain structure.

      We have revised this section to reflect our analysis of the striatum as a brain region of the nigrostriatal pathway rather than a midbrain structure. 

      Reviewer #2 (Recommendations for the authors):

      In Supplemental Figure 1, the two open triangles are almost indistinguishable. It would be better if the colors of these open triangles were changed so that it is easier to tell what's what. There is not enough contrast between white and yellow.

      We have changed the open triangle arrowheads to solid yellow and violet arrowheads to improve contrast between labels.

    1. eLife Assessment

      This computational study examines how neurons in the songbird premotor nucleus HVC might generate the precise, sparse burst sequences that drive adult song. The findings would be useful for understanding how intrinsic conductances and HVC microcircuitry may produce neural sequences, but the work is incomplete because of arbitrary network assumptions, insufficient consideration of biological details such as how silent gaps in song sequences are represented, and failure to incorporate interactions with auditory and brainstem inputs. As a result, the study offers only a modest conceptual advance over prior models.

    2. Reviewer #2 (Public review):

      Summary:

      In this paper, the authors use numerical simulations to try to understand better a major experimental discovery in songbird neuroscience from 2002 by Richard Hahnloser and collaborators. The 2002 paper found that a certain class of projection neurons in the premotor nucleus HVC of adult male zebra finch songbirds, the neurons that project to another premotor nucleus RA, fired sparsely (once per song motif) and precisely (to about 1 ms accuracy) during singing.

      The experimental discovery is important to understand since it initially suggested that the sparsely firing RA-projecting neurons acted as a simple clock that was localized to HVC and that controlled all details of the temporal hierarchy of singing: notes, syllables, gaps, and motifs. Later experiments suggested that the initial interpretation might be incomplete: that the temporal structure of adult male zebra finch songs instead emerged in a more complicated and distributed way, still not well understood, from the interaction of HVC with multiple other nuclei, including auditory and brainstem areas. So at least two major questions remain unanswered more than two decades after the 2002 experiment: What is the neurobiological mechanism that produces the sparse precise bursting: is it a local circuit in HVC or is it some combination of external input to HVC and local circuitry? And how is the sparse precise bursting in HVC related to a songbird's vocalizations?

      The authors only investigate part of the first question, whether the mechanism for sparse precise bursts is local to HVC. They do so indirectly, by using conductance-based Hodgkin-Huxley-like equations to simulate the spiking dynamics of a simplified network that includes three known major classes of HVC neurons and such that all neurons within a class are assumed to be identical. A strength of the calculations is that the authors include known biophysically deduced details of the different conductances of the three major classes of HVC neurons, and they take into account what is known, based on sparse paired recordings in slices, about how the three classes connect to one another. One weakness of the paper is that the authors make arbitrary and not-well-motivated assumptions about the network geometry, and they do not use the flexibility of their simulations to study how their results depend on their network assumptions. A second weakness is that they ignore many known experimental details such as projections into HVC from other nuclei, dendritic computations (the somas and dendrites are treated by the authors as point-like isopotential objects), the role of neuromodulators, and known heterogeneity of the interneurons. These weaknesses make it difficult for readers to know the relevance of the simulations for experiments and for advancing theoretical understanding.

      Strengths:

      The authors use conductance-based Hodgkin-Huxley-like equations to simulate spiking activity in a network of neurons intended to model more accurately songbird nucleus HVC of adult male zebra finches. Spiking models are much closer to experiments than models based on firing rates or on 2-state neurons.

      The authors include information deduced from modeling experimental current-clamp data such as the types and properties of conductances. They also take into account how neurons in one class connect to neurons in other classes via excitatory or inhibitory synapses, based on sparse paired recordings in slices by other researchers.

      The authors obtain some new results of modest interest such as how changes in the maximum conductances of four key channels (e.g., A-type K+ currents or Ca-dependent K+ currents) influence the structure and propagation of bursts, while simultaneously being able to mimic accurately current-clamp voltage measurements.

      Weaknesses:

      One weakness of this paper is the lack of a clearly stated, interesting, and relevant scientific question to try to answer. The authors do not discuss adequately in their introduction what questions have recent experimental and theoretical work failed to explain adequately concerning HVC neural dynamics and its role in producing vocalizations. The authors do not discuss adequately why they chose the approach of their paper and how their results address some of these questions.

      For example, the authors need to explain in more detail how their calculations relate to the works of Daou et al, J. Neurophys. 2013 (which already fitted spiking models to neuronal data and identified certain conductances), to Jin et al J. Comput. Neurosci. 2007 (which already discussed how to get bursts using some experimental details), and to the rather similar paper by E. Armstrong and H. Abarbanel, J. Neurophys 2016, which already postulated and studied sequences of microcircuits in HVC. This last paper is not even cited by the authors.

      The authors' main achievement is to show that simulations of a certain simplified and idealized network of spiking neurons, that includes some experimental details but ignores many others, can match some experimental results like current-clamp-derived voltage time series for the three classes of HVC neurons (although this was already reported in earlier work by Daou and collaborators in 2013), and simultaneously the robust propagation of bursts with properties similar to those observed in experiments. The authors also present results about how certain neuronal details and burst propagation change when certain key maximum conductances are varied.

      But these are weak conclusions for two reasons. First, the authors did not do enough calculations to allow the reader to understand how many parameters were needed to obtain these fits and whether simpler circuits, say with fewer parameters and simpler network topology, could do just as well. Second, many previous researchers have demonstrated robust burst propagation in a variety of feed-forward models. So what is new and important about the authors' results compared to the previous computational papers?

      Also missing is a discussion, or at least an acknowledgement, of the fact that not all of the fine experimental details of undershoots, latencies, spike structure, spike accommodation, etc may be relevant for understanding vocalization. While it is nice to know that some model can match these experimental details and produce realistic bursts, that does not mean that all of these details are relevant for the function of producing precise vocalizations. Scientific insights in biology often require exploring which of the many observed details can be ignored, and especially identifying the few that are essential for answering some questions. As one example, if HVC-X neurons are completely removed from the authors' model, does one still get robust and reasonable burst propagation of HVC-RA neurons? While part of nucleus HVC acts as a premotor circuit that drives nucleus RA, part of HVC is also related to learning. It is not clear that HVC-X neurons, which carry out some unknown calculation and transmit information to area X in a learning pathway, are relevant for burst production and propagation of HVC-RA neurons, and so relevant for vocalization. Simulations provide a convenient and direct way to explore questions of this kind.

      One key question to answer is whether the bursting of HVC-RA projection neurons is based on a mechanism local to HVC or is some combination of external driving (say from auditory nuclei) and local circuitry. The authors do not contribute to answering this question because they ignore external driving and assume that the mechanism is some kind of intrinsic feed-forward circuit, which they put in by hand in a rather arbitrary and poorly justified way, by assuming the existence of small microcircuits consisting of a few HVC-RA, HVC-X, and HVC-I neurons that somehow correspond to "sub-syllabic segments". To my knowledge, experiments do not suggest the existence of such microcircuits nor does theory suggest the need for such microcircuits.

      Another weakness of this paper is an unsatisfactory discussion of how the model was obtained, validated, and simulated. The authors should state as clearly as possible, in one location such as an appendix, what is the total number of independent parameters for the entire network and how parameter values were deduced from data or assigned by hand. With enough parameters and variables, many details can be fit arbitrarily accurately so researchers have to be careful to avoid overfitting. If parameter values were obtained by fitting to data, the authors should state clearly what was the fitting algorithm (some iterative nonlinear method, whose results can depend on the initial choice of parameters), what was the error function used for fitting (sum of least squares?), and what data were used for the fitting.

      The authors should also state clearly what is the dynamical state of the network, the vector of quantities that evolve over time. (What is the dimension of that vector, which is also the number of ordinary differential equations that have to be integrated?) The authors do not mention what initial state was used to start the numerical integrations, whether transient dynamics were observed and what were their properties, or how the results depend on the choice of initial state. The authors do not discuss how they determined that their model was programmed correctly (it is difficult to avoid typing errors when writing several pages or more of a code in any language) or how they determined the accuracy of the numerical integration method beyond fitting to experimental data, say by varying the time step size over some range or by comparing two different integration algorithms.

      Also disappointing is that the authors do not make any predictions to test, except rather weak ones such as that varying a maximum conductance sufficiently (which might be possible by using dynamic clamps) might cause burst propagation to stop or change its properties. Based on their results, the authors do not make suggestions for further experiments or calculations, but they should.

      Comments on revised version:

      The second version, unfortunately, did not address most of the substantive comments so that, while some parts of the discussion were expanded, most of the serious scientific weaknesses mentioned in the first round of review remain. The revised preprint is not a substantive improvement over the first.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      The paper presents a model for sequence generation in the zebra finch HVC, which adheres to cellular properties measured experimentally. However, the model is fine-tuned and exhibits limited robustness to noise inherent in the inhibitory interneurons within the HVC, as well as to fluctuations in connectivity between neurons. Although the proposed microcircuits are introduced as units for sub-syllabic segments (SSS), the backbone of the network remains a feedforward chain of HVC_RA neurons, similar to previous models.

      Strengths:

      The model incorporates all three of the major types of HVC neurons. The ion channels used and their kinetics are based on experimental measurements. The connection patterns of the neurons are also constrained by the experiments.

      Weaknesses:

      The model is described as consisting of micro-circuits corresponding to SSS. This presentation gives the impression that the model's structure is distinct from previous models, which connected HVC_RA neurons in feedforward chain networks (Jin et al 2007, Li & Greenside, 2006; Long et al 2010; Egger et al 2020). However, the authors implement single HVC_RA neurons into chain networks within each micro-circuit and then connect the end of the chain to the start of the chain in the subsequent micro-circuit. Thus, the HVC_RA neuron in their model forms a single-neuron chain. This structure is essentially a simplified version of earlier models.

      In the model of the paper, the chain network drives the HVC_I and HVC_X neurons. The role of the micro-circuits is more significant in organizing the connections: specifically, from HVC_RA neurons to HVC_I neurons, and from HVC_I neurons to both HVC_X and HVC_RA neurons.

      We thank Reviewer 1 for their thoughtful comments.

While the reviewer is correct that the propagation of sequential activity in this model is primarily carried by HVC<sub>RA</sub> neurons in a feed-forward manner, we must emphasize that this holds only in the absence of intrinsic or synaptic perturbations to the HVC network. For example, we showed in Figures 10 and 12 how altering the intrinsic properties of HVC<sub>X</sub> neurons or interneurons disrupts sequence propagation. In other words, while HVC<sub>RA</sub> neurons are the key force carrying the chain forward, the interplay between excitation and inhibition in our network, as well as the intrinsic parameters of all classes of HVC neurons, are equally important. Thus, the stability of activity propagation necessary for song production depends on a finely balanced network of HVC neurons, with all classes contributing to the overall dynamics. Moreover, all existing models that describe premotor sequence generation in HVC either assume a distributed model (Elmaleh et al., 2021), in which local HVC circuitry is not sufficient to advance the sequence but rather depends upon moment-to-moment feedback through Uva (Hamaguchi et al., 2016), or assume models that rely on intrinsic connections within HVC to propagate sequential activity. In the latter case, some models assume that HVC is composed of multiple discrete subnetworks that encode individual song elements (Glaze & Troyer, 2013; Long & Fee, 2008; Wang et al., 2008) but lack the local connectivity to link the subnetworks, while other models assume that HVC may have sufficient information in its intrinsic connections to form a single continuous network sequence (Long et al., 2010). The HVC model we present extends the concept of a feedforward network by incorporating additional neuronal classes (interneurons and HVC<sub>X</sub> neurons) that influence the propagation of activity. We have shown that any disturbance of the intrinsic or synaptic conductances of these latter neurons disrupts activity in the circuit even when the properties of HVC<sub>RA</sub> neurons are maintained.

In regard to the similarities between our model and earlier models, several aspects of our model distinguish it from prior work. In short, while several models of how sequence is generated within HVC have been proposed (Cannon et al., 2015; Drew & Abbott, 2003; Egger et al., 2020; Elmaleh et al., 2021; Galvis et al., 2018; Gibb et al., 2009a, 2009b; Hamaguchi et al., 2016; Jin, 2009; Long & Fee, 2008; Markowitz et al., 2015), all of them either rely on intrinsic HVC circuitry to propagate sequential activity, rely on extrinsic feedback to advance the sequence, or rely on both. These models do not capture the complex details of spike morphology, do not include the right ionic currents, do not incorporate all classes of HVC neurons, or do not generate realistic firing patterns as seen in vivo. Our model is the first biophysically realistic model that incorporates all classes of HVC neurons and their intrinsic properties. We tuned the intrinsic and synaptic properties based on the traces collected by Daou et al. (2013) and Mooney and Prather (2005), as shown in Figure 3. The three classes of model neurons incorporated into our network, as well as the synaptic currents that connect them, are based on Hodgkin-Huxley formalisms containing ion channels and synaptic currents that have been pharmacologically identified. This is an advancement over prior models that primarily focused on the role of synaptic interactions or external inputs. The model is based on a feedforward chain of microcircuits that encode the different sub-syllabic segments and that interact with each other through structured feedback inhibition, defining an ordered sequence of cell firing. Moreover, while several models highlight the critical role of inhibitory interneurons in shaping the timing and propagation of bursts of activity in HVC<sub>RA</sub> neurons, our work offers an intricate and comprehensive model that helps clarify this critical role played by inhibition in shaping song dynamics and ensuring sequence propagation.
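As a point of reference for what a Hodgkin-Huxley-style current balance looks like in practice, the sketch below performs one forward-Euler step of C dV/dt = -I<sub>Na</sub> - I<sub>K</sub> - I<sub>L</sub> - I<sub>syn</sub>. The constants are generic textbook squid-axon values and the function name `hh_step` is our own invention; they are stand-ins for, and not, the fitted HVC conductances.

```python
def hh_step(V, gates, g_syn, dt=0.01):
    """One forward-Euler step of a generic Hodgkin-Huxley current balance:
    C dV/dt = -I_Na - I_K - I_L - I_syn.
    Constants are textbook squid-axon values, not fitted HVC parameters."""
    C = 1.0                              # membrane capacitance (uF/cm^2)
    g_Na, E_Na = 120.0, 50.0             # fast sodium
    g_K, E_K = 36.0, -77.0               # delayed-rectifier potassium
    g_L, E_L = 0.3, -54.4                # leak
    E_syn = 0.0                          # excitatory (AMPA-like) reversal
    m, h, n = gates                      # gating variables, held fixed here
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    I_syn = g_syn * (V - E_syn)          # conductance-based synaptic current
    return V + dt * (-(I_Na + I_K + I_L + I_syn)) / C
```

With resting-level gating values, a nonzero excitatory conductance pulls the step toward E<sub>syn</sub>, i.e. the same call with g_syn > 0 returns a more depolarized voltage than with g_syn = 0.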

      How useful is this concept of micro-circuits? HVC neurons fire continuously even during the silent gaps. There are no SSS during these silent gaps.

      Regarding the concern about the usefulness of the 'microcircuit' concept in our study, we appreciate the comment and we are glad to clarify its relevance in our network. While we acknowledge that HVC<sub>RA</sub> neurons interconnect microcircuits, our model's dynamics are still best described within the framework of microcircuitry particularly due to the firing behavior of HVC<sub>X</sub> neurons and interneurons. Here, we are referring to microcircuits in a more functional sense, rather than rigid, isolated spatial divisions (Cannon et al. 2015), and we now make this clear on page 21. A microcircuit in our model reflects the local rules that govern the interaction between all HVC neuron classes within the broader network, and that are essential for proper activity propagation. For example, HVC<sub>INT</sub> neurons belonging to any microcircuit burst densely and at times other than the moments when the corresponding encoded SSS is being “sung”. What makes a particular interneuron belong to this microcircuit or the other is merely the fact that it cannot inhibit HVC<sub>RA</sub> neurons that are housed in the microcircuit it belongs to. In particular, if HVC<sub>INT</sub> inhibits HVC<sub>RA</sub> in the same microcircuit, some of the HVC<sub>RA</sub> bursts in the microcircuit might be silenced by the dense and strong HVC<sub>INT</sub> inhibition breaking the chain of activity again. Similarly, HVC<sub>X</sub> neurons were selected to be housed within microcircuits due to the following reason: if an HVC<sub>X</sub> neuron belonging to microcircuit i sends excitatory input to an HVC<sub>INT</sub> neuron in microcircuit j, and that interneuron happens to select an HVC<sub>RA</sub> neuron from microcircuit i, then the propagation of sequential activity will halt, and we’ll be in a scenario similar to what was described earlier for HVC<sub>INT</sub> neurons inhibiting HVC<sub>RA</sub> neurons in the same microcircuit.

We agree that there are no sub-syllabic segments during the silent gaps, and we thank the reviewer for pointing this out. Although silent gaps are integral to the overall process of song production, we have not elaborated on them in this model due to the lack of a clear, biophysically grounded representation for the gaps themselves at the level of HVC. Our primary focus has been on modeling the active, syllable-producing phases of the song, where the HVC network's sequential dynamics are critical. However, one can imagine silent gaps being encoded via mechanisms similar to those that encode SSSs, where each gap is encoded by analogous microcircuits comprised of the three classes of HVC neurons (call them GAPs rather than SSSs) that are active only during the silent gaps. In this case, the propagation of sequential activity is carried through the GAPs from the last SSS of the previous syllable to the first SSS of the subsequent syllable. This is now described more clearly on page 22 of the manuscript.

      A significant issue of the current model is that the HVC_RA to HVC_RA connections require fine-tuning, with the network functioning only within a narrow range of g_AMPA (Figure 2B). Similarly, the connections from HVC_I neurons to HVC_RA neurons also require fine-tuning. This sensitivity arises because the somatic properties of HVC_RA neurons are insufficient to produce the stereotypical bursts of spikes observed in recordings from singing birds, as demonstrated in previous studies (Jin et al 2007; Long et al 2010). In these previous works, to address this limitation, a dendritic spike mechanism was introduced to generate an intrinsic bursting capability, which is absent in the somatic compartment of HVC_RA neurons. This dendritic mechanism significantly enhances the robustness of the chain network, eliminating the need to fine-tune any synaptic conductances, including those from HVC_I neurons (Long et al 2010). Why is it important that the model should NOT be sensitive to the connection strengths?

We thank the reviewer for the comment. While mathematical models of highly complex nonlinear biological processes can only approximate biological realism, the current network is, to our knowledge, the first sufficiently biologically realistic model of HVC that explains sequence propagation. We did not include dendritic processes in our network, although they would add realism, for several reasons: 1) The ion channels we integrated into the somatic compartment are known pharmacologically (Daou et al. 2013), whereas the intrinsic properties of the dendritic compartments of HVC neurons, and the cocktail of ion channels expressed there, are not known. 2) We are able to generate realistic bursting in HVC<sub>RA</sub> neurons despite the single compartment, and the main emphasis in this network is on the interactions between excitation and inhibition, the effects of ion channels in modulating sequence propagation, and related mechanisms. 3) The network model already incorporates thousands of ODEs that govern the dynamics of the HVC neurons, and we did not want to add more complexity, especially since we do not know the biophysical properties of the dendritic compartments.

Therefore, our present focus is on somatic dynamics and the interaction between HVC<sub>RA</sub> and HVC<sub>INT</sub> neurons, but we acknowledge the importance of dendritic processes in enhancing network resiliency. Although we agree that adding dendritic processes improves robustness, we still think that somatic processes alone can offer insightful information on the sequential dynamics of the HVC network. While the network should be robust across a wide range of parameters, it is also essential that certain parameters are designed to filter out weaker signals, ensuring that only reliable, precise patterns of activity propagate. Hence, we specifically chose to make the HVC<sub>RA</sub>-to-HVC<sub>RA</sub> excitatory connections more sensitive (a narrow range of values) such that only strong, precise and meaningful stimuli can propagate through the network, reflecting the high stereotypy and precision seen in song production.

      First, the firing of HVC_I neurons is highly noisy and unreliable. HVC_I neurons fire spontaneous, random spikes under baseline conditions. During singing, their spike timing is imprecise and can vary significantly from trial to trial, with spikes appearing or disappearing across different trials. As a result, their inputs to HVC_RA neurons are inherently noisy. If the model relies on precisely tuned inputs from HVC_I neurons, the natural fluctuations in HVC_I firing would render the model non-functional. The authors should incorporate noisy HVC_I neurons into their model to evaluate whether this noise would render the model non-functional.

We acknowledge that interneurons fire in a noisy and imprecise manner under both baseline and singing conditions, although they exhibit time-locked episodes in their activity (Hahnloser et al. 2002, Kozhevnikov and Fee 2007). To mimic this biological variability, and to test whether the network is resilient to such randomness in interneuron firing, we introduced a stochastic input current of the form I<sub>noise</sub>(t) = σ·ξ(t), where ξ(t) is a Gaussian white noise with zero mean and unit variance and σ is the noise amplitude (we highlight this in the Methods). This stochastic drive was introduced to every model neuron; it mimics the fluctuations in synaptic input arising from random presynaptic activity and background noise. For values of σ within 1-5% of the mean synaptic conductance, the stochastic current had no effect on network propagation; for larger values of σ, the desired network activity was disrupted or halted. We now discuss this on page 22 of the manuscript.
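For illustration, a white-noise current of this form enters a forward integration as an Euler-Maruyama update, with the Gaussian draw scaled by sqrt(dt). The sketch below applies it to a passive membrane; the function name and the leak parameters are hypothetical placeholders, not the model's fitted values.

```python
import numpy as np

def simulate_noisy_membrane(sigma, T=500.0, dt=0.01, seed=0):
    """Passive membrane driven by I_noise(t) = sigma * xi(t), where xi(t) is
    zero-mean, unit-variance Gaussian white noise (Euler-Maruyama update).
    Leak parameters are illustrative placeholders, not fitted values."""
    rng = np.random.default_rng(seed)
    C, g_L, E_L = 1.0, 0.1, -65.0
    n = int(T / dt)
    V = np.empty(n)
    V[0] = E_L
    for i in range(1, n):
        drift = -g_L * (V[i - 1] - E_L) / C                    # deterministic leak
        noise = sigma * np.sqrt(dt) * rng.standard_normal() / C  # white-noise kick
        V[i] = V[i - 1] + drift * dt + noise
    return V
```

With σ = 0 the trace stays at rest, and the fluctuation amplitude grows with σ; sweeping σ relative to the mean synaptic conductance is the robustness test described above.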

      Second, Kosche et al. (2015) demonstrated that reducing inhibition by suppressing HVC_I neuron activity makes HVC_RA firing less sparse but does not compromise the temporal precision of the bursts. In this experiment, the local application of gabazine should have severely disrupted HVC_I activity. However, it did not affect the timing precision of HVC_RA neuron firing, emphasizing the robustness of the HVC timing circuit. This robustness is inconsistent with the predictions of the current model, which depends on finely tuned inputs and should, therefore, be vulnerable to such disruptions.

We thank the reviewer for the comment. The differences between the Kosche et al. (2015) findings and the predictions of our model arise from differences in the aspect of HVC function being modeled. Our model is more sensitive to inhibition by design, as a mechanism for achieving precise song patterning; this is a modeling simplification we adopted to capture specific characteristics of HVC function. Hence, the Kosche et al. (2015) findings do not invalidate the approach of our model, but highlight that HVC likely operates with several redundant mechanisms that together ensure temporal precision.

      Third, the reliance on fine-tuning of HVC_RA connections becomes problematic if the model is scaled up to include groups of HVC_RA neurons forming a chain network, rather than the single HVC_RA neurons used in the current work. With groups of HVC_RA neurons, the summation of presynaptic inputs to each HVC_RA neuron would need to be precisely maintained for the model to function. However, experimental evidence shows that the HVC circuit remains functional despite perturbations, such as a few degrees of cooling, micro-lesions, or turnover of HVC_RA neurons. Such robustness cannot be accounted for by a model that depends on finely tuned connections, as seen in the current implementation.

As stated previously, our model of individual HVC<sub>RA</sub> neurons is a reductive model that focuses on understanding the mechanisms that govern sequential neural activity. We agree that scaling the model to include many HVC<sub>RA</sub> neurons poses challenges, specifically concerning the summation of presynaptic inputs. However, our model can still be adapted to a larger network without requiring the level of fine-tuning currently needed. In fact, the current fine-tuning of synaptic connections in the model is a reflection of fundamental network mechanisms rather than a limitation when scaling to a larger network. Moreover, one important feature of this neural network is redundancy: even if some neurons or synaptic connections are impaired, other neurons or pathways can compensate for these changes, allowing activity propagation to remain intact.

      The authors examined how altering the channel properties of neurons affects the activity in their model. While this approach is valid, many of the observed effects may stem from the delicate balancing required in their model for proper function. In the current model, HVC_X neurons burst as a result of rebound activity driven by the I_H current. Rebound bursts mediated by the I_H current typically require a highly hyperpolarized membrane potential. However, this mechanism would fail if the reversal potential of inhibition is higher than the required level of hyperpolarization. Furthermore, Mooney (2000) demonstrated that depolarizing the membrane potential of HVC_X neurons did not prevent bursts of these neurons during forward playback of the bird's own song, suggesting that these bursts (at least under anesthesia, which may be a different state altogether) are not necessarily caused by rebound activity. This discrepancy should be addressed or considered in the model.

In our HVC network model, one goal is to generate realistic bursts in the HVC<sub>X</sub> neuron population. Since HVC<sub>X</sub> neurons in our model receive only inhibitory inputs from interneurons, we rely on inhibition followed by rebound bursts orchestrated by the I<sub>H</sub> and I<sub>CaT</sub> currents to achieve this goal. The interplay between the T-type Ca<sup>++</sup> current and the H current in our model is fundamental to generating these bursts, as the two currents are sufficient for producing the desired behavior in the network. Due to this interplay, we do not need significant inhibition to generate rebound bursts, because the T-type Ca<sup>++</sup> conductance can be made stronger, leading to robust rebound bursting even when the degree of inhibition is not very strong. This is now highlighted on page 42 of the revised version.
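A minimal sketch of this post-inhibitory rebound mechanism uses only a leak current plus a simplified T-type Ca<sup>++</sup> current with instantaneous activation and slow inactivation. All kinetics and conductance values below are generic textbook-style choices, not the fitted HVC<sub>X</sub> parameters: a hyperpolarizing pulse de-inactivates the T-current, and releasing it yields a rebound depolarization that overshoots rest.

```python
import numpy as np

def t_type_rebound(dt=0.05):
    """Leak + simplified T-type Ca current. A 200 ms hyperpolarizing pulse
    de-inactivates the T-current (h rises); releasing the pulse triggers a
    rebound depolarization. Parameter values are generic illustrations."""
    C, g_L, E_L = 1.0, 0.1, -65.0
    g_T, E_Ca, tau_h = 2.0, 120.0, 30.0
    m_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 52.0) / 7.4))  # activation
    h_inf = lambda V: 1.0 / (1.0 + np.exp((V + 83.0) / 4.0))   # inactivation
    V, h = E_L, h_inf(E_L)
    trace = []
    for step in range(int(600.0 / dt)):
        t = step * dt
        I_app = -1.0 if 100.0 <= t < 300.0 else 0.0   # inhibitory pulse
        I_T = g_T * m_inf(V) ** 2 * h * (V - E_Ca)    # low-threshold Ca current
        V += dt * (-g_L * (V - E_L) - I_T + I_app) / C
        h += dt * (h_inf(V) - h) / tau_h              # slow de-/inactivation
        trace.append(V)
    return np.array(trace), dt
```

Qualitatively, the membrane sits near rest before the pulse, hyperpolarizes during it while h grows, and rebounds well above rest after release, the same sequence our HVC<sub>X</sub> model neurons follow when inhibition is withdrawn.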

      Some figures contain direct copies of figures from published papers. It is perhaps a better practice to replace them with schematics if possible.

We purposely kept the results from Mooney and Prather (2005) as they appear in the original, in order to compare them with our model simulations and highlight the degree of resemblance. We believe that creating schematics of the Mooney and Prather (2005) results would not have the same impact, and similarly, a schematic of the Hahnloser et al. (2002) results would not help much. However, if the reviewer still believes that we should do so, we are happy to do it.

      Reviewer #2 (Public review):

      Summary:

      In this paper, the authors use numerical simulations to try to understand better a major experimental discovery in songbird neuroscience from 2002 by Richard Hahnloser and collaborators. The 2002 paper found that a certain class of projection neurons in the premotor nucleus HVC of adult male zebra finch songbirds, the neurons that project to another premotor nucleus RA, fired sparsely (once per song motif) and precisely (to about 1 ms accuracy) during singing.

      The experimental discovery is important to understand since it initially suggested that the sparsely firing RA-projecting neurons acted as a simple clock that was localized to HVC and that controlled all details of the temporal hierarchy of singing: notes, syllables, gaps, and motifs. Later experiments suggested that the initial interpretation might be incomplete: that the temporal structure of adult male zebra finch songs instead emerged in a more complicated and distributed way, still not well understood, from the interaction of HVC with multiple other nuclei, including auditory and brainstem areas. So at least two major questions remain unanswered more than two decades after the 2002 experiment: What is the neurobiological mechanism that produces the sparse precise bursting: is it a local circuit in HVC or is it some combination of external input to HVC and local circuitry? And how is the sparse precise bursting in HVC related to a songbird's vocalizations? The authors only investigate part of the first question, whether the mechanism for sparse precise bursts is local to HVC. They do so indirectly, by using conductance-based Hodgkin-Huxley-like equations to simulate the spiking dynamics of a simplified network that includes three known major classes of HVC neurons and such that all neurons within a class are assumed to be identical. A strength of the calculations is that the authors include known biophysically deduced details of the different conductances of the three major classes of HVC neurons, and they take into account what is known, based on sparse paired recordings in slices, about how the three classes connect to one another. One weakness of the paper is that the authors make arbitrary and not well-motivated assumptions about the network geometry, and they do not use the flexibility of their simulations to study how their results depend on their network assumptions. 
A second weakness is that they ignore many known experimental details such as projections into HVC from other nuclei, dendritic computations (the somas and dendrites are treated by the authors as point-like isopotential objects), the role of neuromodulators, and known heterogeneity of the interneurons. These weaknesses make it difficult for readers to know the relevance of the simulations for experiments and for advancing theoretical understanding.

      Strengths:

      The authors use conductance-based Hodgkin-Huxley-like equations to simulate spiking activity in a network of neurons intended to model more accurately songbird nucleus HVC of adult male zebra finches. Spiking models are much closer to experiments than models based on firing rates or on 2-state neurons.

      The authors include information deduced from modeling experimental current-clamp data such as the types and properties of conductances. They also take into account how neurons in one class connect to neurons in other classes via excitatory or inhibitory synapses, based on sparse paired recordings in slices by other researchers. The authors obtain some new results of modest interest such as how changes in the maximum conductances of four key channels (e.g., A-type K+ currents or Ca-dependent K+ currents) influence the structure and propagation of bursts, while simultaneously being able to mimic accurately current-clamp voltage measurements.

      Weaknesses:

      One weakness of this paper is the lack of a clearly stated, interesting, and relevant scientific question to try to answer. In the introduction, the authors do not discuss adequately which questions recent experimental and theoretical work have failed to explain adequately, concerning HVC neural dynamics and its role in producing vocalizations. The authors do not discuss adequately why they chose the approach of their paper and how their results address some of these questions.

      For example, the authors need to explain in more detail how their calculations relate to the works of Daou et al, J. Neurophys. 2013 (which already fitted spiking models to neuronal data and identified certain conductances), to Jin et al J. Comput. Neurosci. 2007 (which already discussed how to get bursts using some experimental details), and to the rather similar paper by E. Armstrong and H. Abarbanel, J. Neurophys 2016, which already postulated and studied sequences of microcircuits in HVC. This last paper is not even cited by the authors.

      We thank the reviewer for this valuable comment, and we agree that we did not clarify enough throughout the paper the utility of our model or how it advanced our understanding of the HVC dynamics and circuitry. To that end, we revised several places of the manuscript and made sure to cite and highlight the relevance and relatedness of the mentioned papers.

      In short, and as mentioned to Reviewer 1, while several models of how sequence is generated within HVC have been proposed (Cannon et al., 2015; Drew & Abbott, 2003; Egger et al., 2020; Elmaleh et al., 2021; Galvis et al., 2018; Gibb et al., 2009a, 2009b; Hamaguchi et al., 2016; Jin, 2009; Long & Fee, 2008; Markowitz et al., 2015; Jin et al., 2007), all the models proposed either rely on intrinsic HVC circuitry to propagate sequential activity, rely on extrinsic feedback to advance the sequence or rely on both. These models do not capture the complex details of spike morphology, do not include the right ionic currents, do not incorporate all classes of HVC neurons, or do not generate realistic firing patterns as seen in vivo. Our model is the first biophysically realistic model that incorporates all classes of HVC neurons and their intrinsic properties. 

No existing hypothesis is challenged by our model; rather, our model is a distillation of the various models that have been proposed for the HVC network. We go over this in detail in the Discussion. We believe that the network model we developed provides a step forward in describing the biophysics of HVC circuitry, and may shed new light on certain dynamics in the mammalian brain, particularly in the motor cortex and the hippocampus, where precisely-timed sequential activity is crucial. We suggest that temporally-precise sequential activity may be a manifestation of neural networks comprised of chains of microcircuits, each containing pools of excitatory and inhibitory neurons, with local interplay among neurons of the same microcircuit and global interplay across the various microcircuits, and with structured inhibition as well as intrinsic properties synchronizing the neuronal pools and stabilizing timing within a firing sequence.

      The authors' main achievement is to show that simulations of a certain simplified and idealized network of spiking neurons, which includes some experimental details but ignores many others, match some experimental results like current-clamp-derived voltage time series for the three classes of HVC neurons (although this was already reported in earlier work by Daou and collaborators in 2013), and simultaneously the robust propagation of bursts with properties similar to those observed in experiments. The authors also present results about how certain neuronal details and burst propagation change when certain key maximum conductances are varied. However, these are weak conclusions for two reasons. First, the authors did not do enough calculations to allow the reader to understand how many parameters were needed to obtain these fits and whether simpler circuits, say with fewer parameters and simpler network topology, could do just as well. Second, many previous researchers have demonstrated robust burst propagation in a variety of feed-forward models. So what is new and important about the authors' results compared to the previous computational papers?

A major novelty of our work is the integration of experimental data with detailed network models. While earlier works have established robust burst propagation, our model uses realistic ion channel kinetics and feedback inhibition not only to reproduce experimental neural activity patterns but also to suggest prospective mechanisms for song sequence production in the most biophysical way possible; this distinguishes our work from other feed-forward models. We go over this in detail in the Discussion. The reviewer is right, however, regarding the details of the calculations conducted for the fits; we will make sure to describe these in the Methods and throughout the manuscript in more detail.

We believe that the network model we developed provides a step forward in describing the biophysics of HVC circuitry, and may shed new light on certain dynamics in the mammalian brain, particularly in the motor cortex and the hippocampus, where precisely-timed sequential activity is crucial. We suggest that temporally-precise sequential activity may be a manifestation of neural networks comprised of chains of microcircuits, each containing pools of excitatory and inhibitory neurons, with local interplay among neurons of the same microcircuit and global interplay across the various microcircuits, and with structured inhibition as well as intrinsic properties synchronizing the neuronal pools and stabilizing timing within a firing sequence.

      Also missing is a discussion, or at least an acknowledgment, of the fact that not all of the fine experimental details of undershoots, latencies, spike structure, spike accommodation, etc may be relevant for understanding vocalization. While it is nice to know that some models can match these experimental details and produce realistic bursts, that does not mean that all of these details are relevant for the function of producing precise vocalizations. Scientific insights in biology often require exploring which of the many observed details can be ignored and especially identifying the few that are essential for answering some questions. As one example, if HVC-X neurons are completely removed from the authors' model, does one still get robust and reasonable burst propagation of HVC-RA neurons? While part of the nucleus HVC acts as a premotor circuit that drives the nucleus RA, part of HVC is also related to learning. It is not clear that HVC-X neurons, which carry out some unknown calculation and transmit information to area X in a learning pathway, are relevant for burst production and propagation of HVCRA neurons, and so relevant for vocalization. Simulations provide a convenient and direct way to explore questions of this kind.

      One key question to answer is whether the bursting of HVC-RA projection neurons is based on a mechanism local to HVC or is some combination of external driving (say from auditory nuclei) and local circuitry. The authors do not contribute to answering this question because they ignore external driving and assume that the mechanism is some kind of intrinsic feed-forward circuit, which they put in by hand in a rather arbitrary and poorly justified way, by assuming the existence of small microcircuits consisting of a few HVC-RA, HVC-X, and HVC-I neurons that somehow correspond to "sub-syllabic segments". To my knowledge, experiments do not suggest the existence of such microcircuits nor does theory suggest the need for such microcircuits. 

Recent results showed a tight correlation between the intrinsic properties of neurons and features of song (Daou and Margoliash 2020, Medina and Margoliash 2024), where adult birds that exhibit similar songs tend to have similar intrinsic properties. While this is relevant, we acknowledge that not all details may be necessary for every aspect of vocalization, and future models could be simplified to concentrate on core dynamics and exclude certain features while still providing insights into the primary mechanisms.

Regarding the question of whether HVC<sub>X</sub> neurons are relevant for burst propagation: our model includes these neurons as part of the network for completeness, and the reviewer is correct that the propagation of sequential activity in this model is primarily carried by HVC<sub>RA</sub> neurons in a feed-forward manner, but only if there is no perturbation to the HVC network. For example, we have shown how altering the intrinsic properties of HVC<sub>X</sub> neurons or interneurons disrupts sequence propagation. In other words, while HVC<sub>RA</sub> neurons are the key force carrying the chain forward, the interplay between excitation and inhibition in our network, as well as the intrinsic parameters of all classes of HVC neurons, are equally important. Thus, the stability of activity propagation necessary for song production depends on a finely balanced network of HVC neurons, with all classes contributing to the overall dynamics.

      We agree with the reviewer, however, that a potential drawback of our model is its sole focus on local excitatory connectivity within HVC (Kornfeld et al., 2017; Long et al., 2010), while HVC neurons receive afferent excitatory connections (Akutagawa & Konishi, 2010; Nottebohm et al., 1982) that play significant roles in their local dynamics. For example, the excitatory inputs that HVC neurons receive from nucleus uvaeformis (Uva) may be crucial in initiating (Andalman et al., 2011; Danish et al., 2017; Galvis et al., 2018) or sustaining (Hamaguchi et al., 2016) the sequential activity. While we acknowledge this limitation, our main contribution in this work is the biophysical insight into how the patterning activity in HVC is largely shaped by the intrinsic properties of the individual neurons as well as the synaptic properties, where excitation and inhibition play a major role in enabling neurons to generate their characteristic bursts during singing. This holds irrespective of whether an external drive is injected onto the microcircuits or not. We elaborated on this further in the Discussion of the revised version.

      Another weakness of this paper is an unsatisfactory discussion of how the model was obtained, validated, and simulated. The authors should state as clearly as possible, in one location such as an appendix, what is the total number of independent parameters for the entire network and how parameter values were deduced from data or assigned by hand. With enough parameters and variables, many details can be fit arbitrarily accurately so researchers have to be careful to avoid overfitting. If parameter values were obtained by fitting to data, the authors should state clearly what the fitting algorithm was (some iterative nonlinear method, whose results can depend on the initial choice of parameters), what the error function used for fitting (sum of least squares?) was, and what data were used for the fitting.

      The authors should also state clearly the dynamical state of the network, the vector of quantities that evolve over time. (What is the dimension of that vector, which is also the number of ordinary differential equations that have to be integrated?) The authors do not mention what initial state was used to start the numerical integrations, whether transient dynamics were observed and what were their properties, or how the results depended on the choice of the initial state. The authors do not discuss how they determined that their model was programmed correctly (it is difficult to avoid typing errors when writing several pages or more of a code in any language) or how they determined the accuracy of the numerical integration method beyond fitting to experimental data, say by varying the time step size over some range or by comparing two different integration algorithms.

      We thank the reviewer again. The fitting process in our model occurred only at the first stage, where the synaptic parameters were fit to the Mooney and Prather results as well as the Kosche et al. results. No raw data were shared; we examined the figures in those papers, read off the amplitudes of the elicited currents, the magnitudes of DC-evoked excitation, and so on, and replicated those values in our model. While this is suboptimal, it was better to start from these measurements than to simply adopt synaptic current equations from the literature for other types of neurons (not HVC neurons, or even songbird neurons) and integrate them into our network model. The number of ODEs that govern the dynamics of every model neuron is listed on page 10 of the manuscript as well as in the Appendix. Moreover, we highlighted the details of this fitting process in the revised version.
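      To make the described fitting concrete, here is a minimal sketch of a least-squares fit of a single synaptic parameter (a maximal conductance scaling a normalized bi-exponential waveform) to current values read off a published figure. All numbers below (time constants, the 120 pA target) are illustrative, not the values used in the paper.

```python
import numpy as np

def biexp(t, tau_r=1.0, tau_d=5.0):
    """Normalized bi-exponential synaptic waveform (times in ms)."""
    g = np.exp(-t / tau_d) - np.exp(-t / tau_r)
    return g / g.max()

# "Data": current values digitized from a figure (synthetic, noise-free here).
t = np.arange(0.0, 40.0, 0.1)
target = 120.0 * biexp(t)  # pA, pretend these were read off the plot

# Least-squares fit of the single free parameter g_max. The problem is
# linear in g_max, so the sum-of-squares minimum has a closed form.
basis = biexp(t)
g_max = np.dot(target, basis) / np.dot(basis, basis)  # recovers ~120 pA
```

      With noisy digitized points the same closed form gives the least-squares estimate; with more free parameters (e.g., the two time constants) an iterative optimizer would be needed instead.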

      Also disappointing is that the authors do not make any predictions to test, except rather weak ones such as that varying a maximum conductance sufficiently (which might be possible by using dynamic clamps) might cause burst propagation to stop or change its properties. Based on their results, the authors do not make suggestions for further experiments or calculations, but they should.

      We agree that making experimentally testable predictions is crucial for the advancement of the model. Our predictions include testing whether ablation of a class of neurons, such as HVC<sub>X</sub> neurons, disrupts activity propagation, which can be done through targeted neuron elimination. This can also be done by preventing rebound bursting in HVC<sub>X</sub> neurons through pharmacological blockade of the I<sub>H</sub> channels. Other predictions include down-regulating specific ion channels (pharmacologically, through ion channel blockers) and testing which currents are fundamental for song production (there are plenty of tests based on our results, involving, for example, the SK current, the T-type Ca<sup>2+</sup> current, and the A-type K<sup>+</sup> current). We incorporated these into the Discussion of the revised manuscript to better demonstrate the model's applicability and to guide future research directions.

      Main issues:

      (1) Parameters are overly fine-tuned and often do not match known biology to generate chains. This fine-tuning does not reveal fundamental insights.

      (1a) Specific conductances (e.g. AMPA) are finely tweaked to generate bursts, in part due to a lack of a dendritic mechanism for burst generation. A dendritic mechanism likely reflects the true biology of HVC neurons.

      We acknowledge that the model does not include active dendritic processes and we do not regard this as a limitation. In fact, our present approach, although simplified, is intended to focus on somatic mechanisms to identify minimal conditions required for stable sequential propagation. We know HVC<sub>RA</sub> neurons possess thin, spiny dendrites which can contribute to burst initiation and shaping. Future models that include such nonlinear dendritic mechanisms would likely reduce the need for fine tuning of specific conductances at the soma and consequently better match the known biology of HVC<sub>RA</sub> neurons. 

      In text: “While our simplified, somatically driven architecture enables better exploration of mechanisms for sequence propagation, future extensions of the model will incorporate dendritic compartments to more accurately reflect the intrinsic bursting mechanisms observed in HVC<sub>RA</sub> neurons.”

      (1b) In this paper, microcircuits are simulated and then concatenated to make the HVC chain, resulting in no representations during silent gaps. This is out of touch with the known HVC function. There is no anatomical nor functional evidence for microcircuits of the kind discussed in this paper or in the earlier and rather similar paper by Eve Armstrong and Henry Abarbanel (J. Neurophy 2016). One can write a large number of papers in which one makes arbitrary unconstrained guesses of network structure in HVC and, unless they reveal some novel principle or surprising detail, they are all going to be weak.

      Although the model is composed of sequentially activated microcircuits, the gaps between each microcircuit’s output do not represent complete silence in the network. During these periods, other neurons such as those in other microcircuits may still exhibit bursting activity. Thus, what may appear as a 'silent gap' from the perspective of a given output microcircuit is, in fact, part of the ongoing background dynamics of the larger HVC neuron network. We fully acknowledge the reviewer's point that there is no direct anatomical or physiological evidence supporting the presence of microcircuits with this structure in HVC. Our intention was not to propose the existence of such a physical model but to use it as a computational simplification to make precise sequential bursting activity feasible given the biologically realistic neuronal dynamics used. Hence, our use of 'microcircuits' refers to a modeling construct rather than a structural hypothesis. Even if the network topology is hypothetical, we still believe that the temporal structuring suggested allows us to generate specific predictions for future work about burst timing and neuronal connections.

      (1c) HVC interneuron discharge in the author's model is overly precise; addressing the observation that these neurons can exhibit noisy discharge. Real HVC interneurons are noisy. This issue is critical: All reviewers strongly recommend that the authors should, at the minimum in a revision, focus on incorporating HVC-I noise in their model.

      We agree that capturing the variability in interneuron bursting is critical for biological realism. In our model, HVC interneurons receive stochastic background current that introduces variability in their firing patterns as observed in vivo. This variability is seen in our simulations and produces more biologically realistic dynamics while maintaining sequence propagation. We clarify this implementation in the Methods section. 
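      One common way to implement such a stochastic background current is as an Ornstein-Uhlenbeck process, which produces temporally correlated fluctuations around a mean drive. This is a minimal sketch of that approach; the parameter values (correlation time, amplitude) are illustrative, not those used in the model.

```python
import numpy as np

def ou_noise_current(n_steps, dt, tau=5.0, mu=0.0, sigma=0.05, seed=0):
    """Ornstein-Uhlenbeck background current: mean mu, stationary SD
    sigma, correlation time tau (ms), time step dt (ms)."""
    rng = np.random.default_rng(seed)
    i_noise = np.empty(n_steps)
    i_noise[0] = mu
    # Exact discrete-time update for an OU process.
    alpha = np.exp(-dt / tau)
    sd = sigma * np.sqrt(1.0 - alpha ** 2)
    for k in range(1, n_steps):
        i_noise[k] = mu + alpha * (i_noise[k - 1] - mu) + sd * rng.standard_normal()
    return i_noise

# Fluctuating drive to add to each interneuron's membrane equation.
current = ou_noise_current(n_steps=10000, dt=0.025)
```

      Injecting an independent realization of this current into each interneuron jitters burst onsets from trial to trial without abolishing the underlying sequence.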

      (1d) Address the finding that Kosche et al show that even with reduced inhibition, HVCra neuronal timing is preserved; it is the burst pattern that is affected.

      The differences between the Kosche et al. (2015) findings and the predictions of our model arise from differences in the aspect of HVC function we are modeling. Our model is more sensitive to inhibition, which is a designed mechanism for achieving precise song patterning. This is a modeling simplification we adopted to capture specific characteristics of HVC function. 

      We acknowledged this point in the discussion: “While findings of Kosche et al. (2015) emphasize the robustness of the HVC timing circuit to inhibition, our model is more sensitive to inhibition, highlighting that HVC likely operates with several, redundant mechanisms that overall ensure temporal precision.”

      (1e) The real HVC is robust to microlesions, cooling, and HVCra neuron turnover. The model in this paper relies on precise HVCra connectivity and is not robust.

      Although our model is grounded in the biologically observed behavior of HVC neurons in vivo, we don’t claim that it fully captures the resilience seen in the HVC network. Instead, we see this as a simplified framework that helps us explore the basic principles of sequential activity. In the future, adding features like recurrent excitation, synaptic plasticity, or homeostatic mechanisms could make the model more robust.

      (1f) There is unclear motivation for Ih-driven HVCx bursting, given past findings from the Mooney group.

      Daou et al. (2013) observed that the sag seen in HVC<sub>X</sub> and HVC<sub>INT</sub> neurons in response to hyperpolarizing current pulses (Dutar et al. 1998; Kubota and Saito 1991; Kubota and Taniguchi 1998) was completely abolished after application of the drug ZD7288 in all of the neurons tested, indicating that the sag in these HVC neurons is due to the hyperpolarization-activated inward current (I<sub>h</sub>). In addition, the sag and the rebound seen in these two neuron groups were larger for larger hyperpolarizing current pulses.

      (1g) The initial conditions of the network and its activity under those conditions, as well as the possible reliance on external inputs, are not defined.

      In our model, network activity is initiated through a brief, stochastic excitatory input to a small number of HVC<sub>RA</sub> neurons in one microcircuit. This drive represents a simplified version of external input from upstream brain regions known to project to HVC, such as the auditory-pathway nuclei NIf and Uva. Modeling the activity of these upstream regions and their influence on HVC dynamics is ongoing work to be published in the future.

      (1h) It has been known from the time of Hodgkin and Huxley how to include temperature dependences for neuronal dynamics so another suggestion is for the authors to add such dependences for the three classes of neurons and see if their simulation causes burst frequencies to speed up or slow down as T is varied.

      We added this as a limitation to the Discussion section: “Our model was run at a fixed physiological temperature, but it has been known since Hodgkin and Huxley that both ion channel kinetics and synaptic dynamics change with temperature. In future work, adding temperature scaling (such as Q10 factors) could help us explore how burst timing and sequence speed change with temperature, and whether neural activity in HVC would preserve its precision under different physiological conditions.”
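      The Q10 scaling mentioned above amounts to multiplying each rate constant by a temperature-dependent factor. A few-line sketch follows; the reference temperature and Q10 value are conventional choices for channel kinetics, not parameters from the model.

```python
def q10_scale(rate_at_t0, temp_c, t0_c=40.0, q10=3.0):
    """Scale a gating rate constant from reference temperature t0_c
    (deg C) to temp_c using a Q10 factor. q10=3 is a common choice
    for channel kinetics; both defaults are illustrative."""
    return rate_at_t0 * q10 ** ((temp_c - t0_c) / 10.0)

# Cooling by 5 deg C slows kinetics by a factor of 3**0.5 (about 1.73),
# which in such models typically stretches burst sequences in time.
slowed_rate = q10_scale(1.0, 35.0)
```

      Applying this factor to every gating rate (and optionally to synaptic time constants) would let the model reproduce cooling experiments such as uniform song slowing.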

      (2) The scope of the paper and its objectives must be clearly defined. Defining the scope and providing caveats for what is not considered will help the reader contextualize this study with other work.

      (2a) The paper does not consider the role of external inputs to HVC, which are very likely important for the capacity of the HVC chain to tile the entire song, including silent gaps.

      The role of afferent input to HVC particularly from nuclei such as Uva and Nif is critical in shaping the timing and initiation of HVC sequences throughout the song, including silent intervals. In fact, external inputs are likely involved in more than just triggering sequences, they may also influence the continuity of activity across motifs. However, in this study, we chose to focus on the intrinsic dynamics of HVC as a step toward understanding the internal mechanisms required for generating temporally precise sequences and for this reason, we used a simplified external input only to initiate activity in the chain.

      (2b) The paper does not consider important dendritic mechanisms that almost certainly facilitate the all-or-none bursting behavior of HVC projection neurons. the authors need to mention and discuss that current-clamped neuronal response - in which an electrode is inserted into the soma and then a constant current-step is applied - bypasses dendritic structure and dendritic processing and so is an incomplete way to characterize a neuron's properties. In particular, claiming to fit current-clamp data accurately and then claiming that one now has a biophysically accurate network model, as the authors do, is greatly misleading.

      While we addressed this in 1a, we do not suggest that our model is a fully accurate biophysical representation of the HVC network. Instead, we see it as a simplified framework that helps reveal how much of HVC’s sequential activity can be explained by somatic properties and synaptic interactions alone. Additional biological mechanisms, like dendritic processing, are likely to play an important role and should be explored in future work.

      (2c) The introduction does not provide a clear motivation for the paper - what hypotheses are being tested? What is at stake in the model outcomes? It is not inherently informative to take a known biological representation and fine-tune a limited model to replicate that representation.

      We explicitly added the hypotheses to the revised introduction.

      (2d) There have been several published modeling efforts applied to the HVC chain (Seung, Fee, Long, Greenside, Jin, Margoliash, Abarbanel). These and others need to be introduced adequately, and it needs to be crystal clear what, if anything, the present study is adding to the canon.

      While several influential models have explored how HVC might generate sequences, ranging from synfire chains to recurrent dynamics or externally driven sequences (e.g., Seung, Fee, Long, Greenside, Jin, Abarbanel, and others), these models did not capture the detailed dynamics observed in vivo. Our aim was to bridge a gap in the modeling literature by exploring how far biophysically grounded intrinsic properties and experimentally supported synaptic connections local to HVC can alone produce temporally precise sequences. We have shown that these mechanisms are sufficient to generate such sequences, although some missing components (such as dendritic mechanisms or external inputs) might be needed to fully capture the complexity and robustness of HVC function.

      (2e) The authors mention learning prominently in the abstract, summary, and introduction but this paper has nothing to do with learning. Most or all mentions of learning should be deleted since they are misleading.

      We appreciate the reviewer’s observation; however, our intent in referencing learning was not to suggest that our model directly simulates learning processes, but rather to place HVC function within the broader context of song learning and production, where temporal sequencing plays a fundamental role. That said, repeated references to learning may be misleading given that our current model does not incorporate plasticity, synaptic modification, or developmental changes. We have therefore carefully revised the manuscript to remove or rephrase mentions of learning unless they are directly relevant to the context.

      (3) Using the model for hypothesis generation and prediction of experimental results.

      (3a) The utility of a model is to provide conceptual insight into how or why the real HVC functions as it does, or to predict outcomes in yet-to-be conducted experiments to help motivate future studies. This paper does not adequately achieve these goals.

      We revised the Discussion of the manuscript to better emphasize potential contributions and to point out experiments that could validate or challenge the model’s predictions. These include dynamic clamp or ion channel blockers targeting the A-type K<sup>+</sup> current in HVC<sub>RA</sub> neurons to assess their impact on burst precision, optogenetic disruption of inhibitory interneurons to observe changes in burst timing and sequence propagation, and pharmacological modulation of I<sub>h</sub> or I<sub>CaT</sub> in HVC<sub>X</sub> neurons and interneurons.

      (3b) Additionally, it can be interesting to conduct an experiment on an existing model; for example, what happens to the HVCra chain in your model if you delete the HVCx neurons? What happens if you block NMDA receptors? Such an approach in a modeling paper can help motivate hypotheses and endow the paper with a sense of purpose.

      We agree that running targeted experiments to test our computational model such as removing an HVC neuron population or blocking a synaptic receptor can be a powerful way to generate new ideas and guide future experiments. While we didn’t include these specific tests in the current study, the model is well suited for this kind of exploration. For instance, removing interneurons could help us better understand their role in shaping the timing of HVC<sub>RA</sub> bursts. These are great directions for future experiments, and we now highlight this in the discussion as a way the model could be used to guide experiments.

      (4) Changes to the paper's organization may improve clarity.

      (4a) Nearly all equations should be moved to an Appendix so that the main part of the paper can focus on the science: assumptions made, details of simulations, conclusions obtained, and their significance. The authors present many equations without discussion which weakens the paper.

      Equations moved to appendix.

      (4b) There are many grammatical errors, e.g., verbs do not match the subject in terms of being single or plural. The authors need to run their manuscript through a grammar checker.

      Done.

      (4c) Many of the figures are poorly designed and should be substantially modified. E.g. in Figure 1B, too many colors are used, making it hard to grasp what is being plotted and the colors are not needed. Figures 1C and 1D are entire figures taken from other papers, and there is no way a reader will be able to see or appreciate all the details when this figure is published on a single page. Figure 2 uses colors for dots that are almost identical, and the colors could be avoided by using different symbols. Figure 5 fills an entire page but most of the figure conveys no information, there is no need to show the same details for all 120 neurons, just show the top 1/3 of this figure; the same for Figure 7, a lot of unnecessary information is being included. Figure 10, the bottom time series of spikes should be replaced with a time series of rates, cannot extract useful information.

      Adjusted as requested. 

      (4d) Table 1 is long and largely uninteresting, and should be moved to an appendix.

      Table 1 moved to appendix.

      (4e) Many sentences are not carefully written, which greatly weakens the paper. As one typical example, the first sentence in the Discussion section "In this study, we have designed a neural network model that describes [sic] zebra finch song production in the HVC." This is inaccurate, the model does not describe song production, it just explores some properties of one nucleus involved with song production. Just one or few sentences like this is ok but there are so many sentences of this kind that the reader loses faith in the authors.

      Thank you for raising this point, we revised the manuscript to improve the precision of the writing. We replaced the first sentence of the discussion with this: "In this study, we developed a biophysically realistic neural network model to explore how intrinsic neuronal properties and local connectivity within the songbird nucleus HVC may support the generation of temporally precise activity sequences associated with zebra finch song."

    1. PLEASE NOTE: Transit Times are being impacted by delays caused by the Brexit changes. Brexit has impacted distribution services from the UK to Europe as all shipments go through a formal customs clearance process. GFS are constantly reviewing the best options to limit any impact in service. Countries that are part of the EU are shown in bold.

      We would add this to the very bottom of the page - we understand the importance but it takes valuable seo space and interaction on the page

  4. doc-0o-bs-apps-viewer.googleusercontent.com
    1. inventive, transnational menu of dimsum, epitomizes its consciousness ofmetropolitaneity and its sense of cultural identity that recognizes, butstrives to mature from, its Chinese roots

      Easy to dismiss as a modern colonial trend, but such anti-central-state authority figures have a long history in Southeast Asia.

    1. Joint Public Review:

      Summary:

      The authors previously published a study of RGC boutons in the dLGN in developing wild-type mice and developing mutant mice with disrupted spontaneous activity. In the current manuscript, they have broken down their analysis of RGC boutons according to the number of Homer/Bassoon puncta associated with each vGlut3 cluster.

      The authors find that, in the first post-natal week, RGC boutons with multiple active zones (mAZs) are about a third as common as boutons with a single active zone (sAZ). The size of the vGluT2 cluster associated with each bouton was proportional to the number of active zones present in each bouton. Within the author's ability to estimate these values (n=3 per group, 95% of results expected to be within ~2.5 standard deviations), these results are consistent across groups: 1) dominant eye vs. non-dominant eye, 2) wild-type mice vs. mice with activity blocked, and at 3) ages P2, P4, and P8. The authors also found that mAZs and sAZs have roughly the same number (about 1.5) of sAZs clustered around them (within 1.5 um).

      There has been much discussion with the reviewers, through multiple versions of this paper, of how to interpret these findings. Based on a large number of tests for statistical significance, the authors interpreted the presence of a statistically significant difference as evidence that "Eye-specific active zone clustering underlies synaptic competition in the developing visual system" (title of previous version of the manuscript). The reviewers have focused on the small effect size as indicating that the small differences observed are not informative regarding this biological question. The authors have now tempered this interpretation.

      Strengths:

      The source dataset is high resolution data showing the colocalization of multiple synaptic proteins across development. Added to this data is labeling that distinguishes axons from the right eye from axons from the left eye. The first order analysis of this data showing changes in synapse density and in the occurrence of multi-active zone synapses is useful information about the development of an important model for activity dependent synaptic remodeling.

      Reviewing Editor's comment on the latest revision (without sending the paper back to the individual reviewers):

      In their latest revision, the authors have moderated earlier causal claims, incorporated additional statistical controls, and largely maintained their original interpretation of the data. While these changes address some prior concerns, the underlying issues remain. The previous review emphasized that the reported effect sizes were small and therefore hard to link to biological relevance. The authors argue that the effect sizes are large. Given the lack of a biological argument for this effect size, this point is really semantic. We would like to point out that the effect size measurement the authors used is likely a standard effect size calculation (the difference between groups divided by the standard deviation of the groups). With only three experiments and irregular variance, it is likely that their estimates of standard deviation, and therefore of effect size, are unreliable. Overall, the revisions improve presentation but do not substantively resolve the difficulty, raised earlier, of drawing strong conclusions from this data set.
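      The instability of effect-size estimates at n = 3 that the editor describes can be illustrated with a quick simulation of the standard (Cohen's d) calculation. The simulated populations below are hypothetical, not the authors' data.

```python
import numpy as np

def cohens_d(a, b):
    """Standard effect size: mean difference over pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
# Two normal populations with a true effect size of d = 1.0,
# sampled repeatedly with only three observations per group.
estimates = [cohens_d(rng.normal(1.0, 1.0, 3), rng.normal(0.0, 1.0, 3))
             for _ in range(2000)]
spread = np.percentile(estimates, [2.5, 97.5])  # very wide interval at n = 3
```

      With three observations per group, the 95% range of the estimated d spans several units around the true value, so a single reported "large" effect size at this sample size carries little information.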

    2. Author response:

      The following is the authors’ response to the previous reviews

      Reviewer #1 (Public review):

      Summary

      The authors previously published a study of RGC boutons in the dLGN in developing wild-type mice and developing mutant mice with disrupted spontaneous activity. In the current manuscript, they have broken down their analysis of RGC boutons according to the number of Homer/Bassoon puncta associated with each vGlut3 cluster.

      The authors find that, in the first post-natal week, RGC boutons with multiple active zones (mAZs) are about a third as common as boutons with a single active zone (sAZ). The size of the vGluT2 cluster associated with each bouton was proportional to the number of active zones present in each bouton. Within the author's ability to estimate these values (n=3 per group, 95% of results expected to be within ~2.5 standard deviations), these results are consistent across groups: 1) dominant eye vs. nondominant eye, 2) wild-type mice vs. mice with activity blocked, and at 3) ages P2, P4, and P8. The authors also found that mAZs and sAZs have roughly the same number (about 1.5) of sAZs clustered around them (within 1.5 um).

      However, the authors do not interpret this consistency between groups as evidence that active zone clustering is not a specific marker or driver of activity dependent synaptic segregation. Rather, the authors perform a large number of tests for statistical significance and cite the presence or absence of statistical significance as evidence that "Eye-specific active zone clustering underlies synaptic competition in the developing visual system (title)". I don't believe this conclusion is supported by the evidence.

      We have revised the title to be descriptive: "Eye-specific differences in active zone addition during synaptic competition in the developing visual system." While our correlative approach does not establish direct causality, our findings provide important structural evidence that complements existing functional studies of activity-dependent synaptic refinement. We have carefully revised the text throughout to avoid causal language, focusing instead on the developmental patterns we observe.

      Strengths

      The source dataset is high resolution data showing the colocalization of multiple synaptic proteins across development. Added to this data is labeling that distinguishes axons from the right eye from axons from the left eye. The first order analysis of this data showing changes in synapse density and in the occurrence of multi-active zone synapses is useful information about the development of an important model for activity dependent synaptic remodeling.

      Weaknesses

      In my previous review I argued that it was not possible to determine, from their analysis, whether the differences they were reporting between groups were important to the biology of the system. The authors have made some changes to their statistics (paired t-tests) and use some less derived measures of clustering. However, they still fail to present a meaningfully quantitative argument that the observed group differences are important. The authors base most of their claims on small differences between groups. There are two big problems with this practice. First, the differences between groups appear too small to be biologically important. Second, the differences between groups that are used as evidence for how the biology works are generally smaller than the precision of the author's sampling. That is, the differences are as likely to be false positives as true positives.

      (1) Effect size. The title claims: "Eye-specific active zone clustering underlies synaptic competition in the developing visual system". Such a claim might be supported if the authors found that mAZs are only found in dominant-eye RGCs and that eye-specific segregation doesn't begin until some threshold of mAZ frequency is reached. Instead, the behavior of mAZs is roughly the same across all conditions. For example, the clear trend in Figure 4C and D is that measures of clustering between mAZ and sAZ are as similar as could reasonably be expected by the experimental design. However, some of the comparisons of very similar values produced p-values < 0.05. The authors use this fact to argue that the negligible differences between mAZ and sAZs explain the development of the dramatic differences in the distribution of ipsilateral and contralateral RGCs.

      We have changed the title to avoid implying a causal relationship between clustering and eye-specific segregation. Our key findings in Figures 4C and 4D demonstrate effect sizes >2.0 with high statistical power (Supplemental Table S2). While the absolute magnitude of differences is modest (5-7%), these high effect sizes combined with low inter-animal variability demonstrate consistent, reproducible biological phenomena. During development, small differences during critical periods can have profound downstream consequences for synaptic refinement outcomes.

      We acknowledge that significance in Figure 4 arises due to low variance between biological replicates rather than large mean differences. We have revised the text to describe these as "slight" differences and that "WT mice show a tendency toward forming more synapses near mAZ inputs," reflecting appropriate caution in our interpretation while maintaining the statistical robustness of our findings.

      (2) Sample size. Performing a large number of significance tests and comparing p-values is not hypothesis testing and is not descriptive science. At best, with large sample sizes and controls for multiple tests, this approach could be considered exploratory. With n=3 for each group, many comparisons of many derived measures among many groups, and no control for multiple testing, this approach constitutes a random result generator.

      The authors argue that n=3 is a large sample size for the type of high resolution / large volume data being used. It is true that many electron microscopy studies with n=1 are used to reveal the patterns of organization that are possible within an individual. However, such studies cannot control individual variation and are, therefore, not appropriate for identifying subtle differences between groups.

      In response to previous critiques along these lines, the authors argue they have dealt with this issue by limiting their analysis to within-individual paired comparisons. There are several problems with their thinking in this approach. The main problem is that they did not change the logic of their arguments, only which direction they pointed the t-tests. Instead of claiming that two groups are different because p < 0.05, they say that two groups are different because one produced p < 0.05 and the other produced p > 0.05. These arguments are not statistically valid or biologically meaningful.

      We have implemented rigorous statistical controls, applying false discovery rate (FDR) correction using the Benjamini-Hochberg method (α = 0.05) within each experimental condition (age × genotype combination). This correction strategy treats each condition as addressing a distinct experimental question: “What synaptic properties differ between left eye and right eye inputs in this specific developmental stage and genotype?” The approach appropriately controls for multiple testing while preserving power to detect biologically meaningful differences. We applied FDR correction separately to the ~20-34 measurements (varying by age and genotype) within each of the six experimental conditions, resulting in condition-specific adjusted p-values reported in updated Supplemental Table S2. This correction confirmed the robustness of our key findings. We do not base conclusions solely on comparing p-values across conditions. Our interpretations focus on effect sizes, confidence intervals, and consistent patterns within each condition, with statistical significance providing supporting evidence rather than the primary basis for biological conclusions.
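      For concreteness, the within-condition Benjamini-Hochberg adjustment described above can be sketched as follows; this is an illustrative implementation with hypothetical p-values, not the study's data or analysis code.

      ```python
      def benjamini_hochberg(pvals):
          """Benjamini-Hochberg FDR adjustment; returns adjusted p-values in input order."""
          m = len(pvals)
          order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
          adjusted = [0.0] * m
          running_min = 1.0
          # Walk from the largest p-value down, enforcing monotone adjusted values.
          for rank in range(m, 0, -1):
              i = order[rank - 1]
              running_min = min(running_min, pvals[i] * m / rank)
              adjusted[i] = running_min
          return adjusted

      # Hypothetical p-values for one condition (e.g., all measures within P4-WT):
      condition_pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
      qvals = benjamini_hochberg(condition_pvals)
      significant = [q < 0.05 for q in qvals]  # [True, True, False, False, False, False]
      ```

      Applying the correction separately within each age × genotype condition means m is the number of measures in that condition (~20-34 here), not the total number of tests across the study.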

      To the best of my understanding, the results are consistent with the following model:

      RGCs form mAZs at large boutons (known)

      About a quarter of week-one RGC boutons are mAZs (new observation)

      Vesicle clustering is proportional to active zone number (~new observation)

      RGC synapse density increases during the first postnatal week (known)

      Blocking activity reduces synapse density (known)

      Contralateral eye RGCs form more and larger synapses in the lateral dLGN (known)

      While mAZ formation is known in adult and juvenile dLGN, the formation of mAZ boutons during eye-specific competition represents new information with important functional implications. Synapses with multiple release sites should be stronger than single-active-zone synapses, suggesting a structural correlate for competitive advantage during refinement.

      We demonstrate distinct developmental patterns for sAZ versus mAZ contacts during the first postnatal week. Multi-active zone density favors the dominant eye, while single active-zone synapse density from the competing eye increases from P2-P4 to match dominant-eye levels. This reveals that newly formed synapses from the competing eye predominantly contain single release sites, marking P4-P8 as a critical window for understanding molecular mechanisms driving synaptic elimination.

      Our results show that altered retinal activity patterns (β2KO mice) reduce synapse density during eye-specific competition. We relied on β2 knockout mice, which retain retinal waves and spontaneous spike activity but with disrupted patterns and output levels compared to controls. We make no claims about complete activity blockade. Previous studies using different activity manipulations (epibatidine, TTX) have examined terminal morphology, but effects on synapse density during competition remain largely unknown. Achieving complete retinal activity blockade is technically challenging, making it of interest to revisit the role of activity using more precise manipulations to control spike output and relative timing.

      With n=3 and effect sizes smaller than 1 standard deviation, a statistically significant result is about as likely to be a false positive as a true positive.

      A true-positive statistically significant result is not evidence of a meaningful deviation from a biological model.

      Our conclusions are based on results with effect sizes substantially larger than 1. Key findings demonstrate effect sizes exceeding 2.0. These large effect sizes, combined with rigorous FDR correction and low inter-animal variability, provide evidence against false positive results. During critical developmental periods, consistent structural differences, even those modest in absolute magnitude, can reflect important regulatory mechanisms that influence refinement outcomes. All statistical results, effect sizes, and power analyses are reported in Supplementary Table S2, with confidence intervals in Supplementary Table S3. We have revised the text in several places where small differences are presented to reflect appropriate caution in our interpretation.
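      To illustrate why a modest mean difference can yield a paired effect size above 2 when inter-animal variability is low, here is a minimal sketch; the per-animal values are invented for illustration and are not measurements from this study.

      ```python
      import statistics

      def paired_cohens_d(x, y):
          """Cohen's d for paired samples: mean of differences / SD of differences."""
          diffs = [a - b for a, b in zip(x, y)]
          return statistics.mean(diffs) / statistics.stdev(diffs)

      # Invented per-animal means for n = 3 (not the study's data):
      dominant_eye = [0.52, 0.55, 0.50]
      non_dominant_eye = [0.41, 0.45, 0.40]

      # A ~0.10 absolute difference with a consistent direction across animals
      # produces a very large paired effect size.
      d = paired_cohens_d(dominant_eye, non_dominant_eye)
      ```

      The point of the sketch is that the denominator is the variability of the within-animal differences, so a small but highly consistent difference still counts as a large effect.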

      Providing plots that show the number of active zones present in boutons across these various conditions is useful. However, I could find no compelling deviation from the above default predictions that would influence how I see the role of mAZs in activity dependent eye-specific segregation.

      Below are critiques of most of the claims of the manuscript.

      Claim (abstract): individual retinogeniculate boutons begin forming multiple nearby presynaptic active zones during the first postnatal week.

      Confirmed by data.

      Claim (abstract): the dominant-eye forms more numerous mAZ contacts,

      Misleading: The dominant-eye (by definition) forms more contacts than the nondominant eye. That includes mAZ.

      While the dominant eye forms more total contacts, the pattern depends critically on contact type and developmental stage. The dominant eye forms more mAZ contacts across all ages (Figures 2 and S1). However, for sAZ contacts, the two eyes form similar numbers at P4, with the non-dominant eye showing increased sAZ formation during this critical period. This differential pattern by synapse type represents an important aspect of how synaptic competition unfolds structurally.

      Claim (abstract): At the height of competition, the non-dominant-eye projection adds many single active zone (sAZ) synapses

      Weak: While the individual observation is strong, it is a surprising deviation based on a single n=3 experiment in a study that performed twelve such experiments (six ages, mutant/wildtype, sAZ/mAZ)

      The difference in eye-specific sAZ formation at P2 and P8 had effect sizes of ~5.3 and ~2.7, respectively (after FDR correction the difference was still significant at P2 and trending at P8). At P4, no effect was observed by paired t-test and the 5/95% confidence intervals ranged from -0.021 to 0.008 synapses/μm³. The consistency of this pattern across P2 and P8, combined with the large effect sizes, supports the reliability of this developmental finding. We report all effect sizes and power test analyses in Supplemental Table S2, and confidence intervals in Supplemental Table S3.

      Claim (abstract): Together, these findings reveal eye-specific differences in release site addition during synaptic competition in circuits essential for visual perception and behavior.

      False: This claim is unambiguously false. The above findings, even if true, do not argue for any functional significance to active zone clustering.

      Our phrasing “circuits essential for visual perception and behavior” referred to the general importance of binocular organization in the retinogeniculate system for visual processing and we did not intend to claim direct functional significance of our structural data. For clarity we have deleted the latter part of this sentence. In lines 35-37, the abstract now reads “Together, these findings reveal eye-specific differences in release site addition that correlate with axonal refinement outcomes during retinogeniculate refinement.”

      Claim (line 84): "At the peak of synaptic competition midway through the first postnatal week, the non-dominant-eye formed numerous sAZ inputs, equalizing the global synapse density between the two eyes"

      Weak: At one of twelve measures (age, bouton type, genotype) performed with 3 mice each, one density measure was about twice as high as expected.

      The difference in eye-specific sAZ formation at P2 and P8 had effect sizes of ~5.3 and ~2.7, respectively (after FDR correction the difference was still significant at P2 and trending at P8). At P4, no effect was observed by paired t-test and the 5/95% confidence intervals ranged from -0.021 to 0.008 synapses/μm³. The consistency of this pattern across P2 and P8, combined with the large effect sizes, supports the reliability of this developmental finding. We report all effect sizes and power test analyses in Supplemental Table S2, and confidence intervals in Supplemental Table S3.

      Claim (line 172): "In WT mice, both mAZ (Fig. 3A, left) and sAZ (Fig. 3B, left) inputs showed significant eye-specific volume differences at each age."

      Questionable: There appears to be a trend, but the size and consistency is unclear.

      Claim (line 175): "the median VGluT2 cluster volume in dominant-eye mAZ inputs was 3.72 fold larger than that of non-dominant-eye inputs (Fig. 3A, left)."

      Cherry picking. Twelve differences were measured with an n of 3, 3 each time. The biggest difference of the group was cited. No analysis is provided for the range of uncertainty about this measure (2.5 standard deviations) as an individual sample or as one of twelve comparisons.

      Claim (line 174): "In the middle of eye-specific competition at P4 in WT mice, the median VGluT2 cluster volume in dominant-eye mAZ inputs was 3.72 fold larger than that of non-dominant-eye inputs (Fig. 3A, left). In contrast, β2KO mice showed a smaller 1.1 fold difference at the same age (Fig. 3A, right panel). For sAZ synapses at P4, the magnitudes of eye-specific differences in VGluT2 volume were smaller: 1.35-fold in WT (Fig. 3B, left) and 0.41-fold in β2KO mice (Fig. 3B, right). Thus, both mAZ and sAZ input size favors the dominant eye, with larger eye-specific differences seen in WT mice (see Table S3)."

      No way to judge the reliability of the analysis and trivial conclusion: To analyze effect size the authors choose the median value of three measures (whatever the middle value is). They then make four comparisons at the time point where they observed the biggest difference in favor of their hypothesis. There is no way to determine how much we should trust these numbers besides spending time with the mislabeled scatter plots. The authors then claim that this analysis provides evidence that there is a difference in vGluT2 cluster volume between dominant and non-dominant RGCs and that that difference is activity dependent. The conclusion that dominant axons have bigger boutons and that mutants that lack the property that would drive segregation would show less of a difference is very consistent with the literature. Moreover, there is no context provided about what 1.35 or 1.1 fold difference means for the biology of the system.

      We focused on P4 for biological reasons rather than post-hoc selection. P4 represents the established peak of synaptic competition when eye-specific synapse densities are globally equivalent. This is a timepoint consistently highlighted throughout our manuscript and supported by previous literature. We have modified our presentation from fold changes to measured eye-specific differences in volume (mean ± standard error) and added confidence intervals in Supplemental Table S3. The effect sizes for eye-specific differences in VGluT2 volume at P4 are robust: ~2.3 and ~1.5 for mAZ and sAZ measurements in WT mice, and ~2.5 and ~1.8 in β2KO mice, with all analyses well-powered (Supplemental Table S2).

      We were unable to identify any mislabeled scatter plots and believe all figures are correctly labeled. While dominant-eye advantage in bouton size is consistent with previous literature, our study provides the first detailed analysis of how this develops specifically during the critical period of competition, with distinct patterns for single versus multi-active zone contacts. Our data show that dominant-eye inputs have larger vesicle pools that scale with active zone number. While this suggests enhanced transmission capacity, we make no direct physiological claims based on structural data alone.

      Claim (189): "This shows that vesicle docking at release sites favors the dominant-eye as we previously reported but is similar for like eye type inputs regardless of AZ number."

      Contradicts core claim of manuscript: Consistent with previous literature, there is an activity dependent relative increase in vGlut2 clustering of dominant eye RGCs. The new information is that that activity dependence is more or less the same in sAZ and mAZ. The only plausible alternative is that vGlut2 scaling only increases in mAZ which would be consistent with the claims of their paper. That is not what they found. To the extent that the analysis presented in this manuscript tests a hypothesis, this is it. The claim of the title has been refuted by figure 3.

      We report the volume of docked vesicle signal (VGluT2) near each active zone, finding this is greater for dominant-eye synapses. Within each eye-specific synapse population, vesicle signal per active zone is similar regardless of whether these are part of single- or multi-active zone contacts. This is consistent with a modular program of active zone assembly and maintenance: core molecular programs facilitate docking at each AZ similarly regardless of how many AZs are nearby.

      This finding does not contradict our main conclusions but rather provides insight into how synaptic advantages are structured. The dominant eye's advantage may arise in part from forming more multi-AZ contacts (which have proportionally more docked vesicles) rather than from enhanced vesicle loading per individual active zone. This organization may reflect how developmental competition operates through contact number and active zone addition rather than fundamental changes to individual release site properties.

      We have changed the title to be descriptive rather than mechanistic.

      Claim (line 235): "For the non-dominant eye projection, however, clustered mAZ inputs outnumbered clustered sAZ inputs at P4 (Fig. 4C, bottom left panel), the age when this eye adds sAZ synapses (Fig. 2C)."

      Misleading: The overwhelming trend across 24 comparisons is that the sAZ clustering looks like mAZ clustering. That is the objective and unambiguous result. Among these 24 underpowered tests (n=3), there were a few p-values < 0.05. The authors base their interpretation of cell behavior on these crossings.

      In Figures 4C and 4D we report significant results with high effect sizes (effect sizes all greater than 2; see Supplemental Table S2). The mean differences are modest (5-7%) and significance arises due to low variance between biological replicates. We acknowledge that clustering patterns are generally similar between mAZ and sAZ inputs across most conditions. We have revised the text to describe these as “slight” differences and that “WT mice show a tendency toward forming more synapses near mAZ inputs”, reflecting appropriate caution in our interpretation while noting the statistical consistency of these patterns.

      Claim (line 328): "The failure to add synapses reduced synaptic clustering and more inputs formed in isolation in the mutants compared to controls."

      Trivially true: Density was lower in mutant.

      We have rewritten the sentence for clarity: “The failure to add synapses could explain the observation that synaptic clustering was reduced and more inputs formed in isolation in the mutants compared to controls.”

      Claim (line 332): "While our findings support a role for spontaneous retinal activity in presynaptic release site addition and clustering..."

      Not meaningfully supported by evidence: I could not find meaningful differences between WT and mutant beside the already known dramatic difference in synapse density.

      We have changed the sentence to avoid overinterpreting the results. The new sentence in lines 415-417 reads: “While our results highlight developmental changes in presynaptic release site addition and clustering, activity-dependent postsynaptic mechanisms also influence input refinement at later stages.”

      Reviewer #2 (Public review):

      Summary:

      In this manuscript, Zhang and Speer examine changes in the spatial organization of synaptic proteins during eye specific segregation, a developmental period when axons from the two eyes initially mingle and gradually segregate into eye-specific regions of the dorsal lateral geniculate. The authors use STORM microscopy and immunostain presynaptic (VGluT2, Bassoon) and postsynaptic (Homer) proteins to identify synaptic release sites. Activity-dependent changes of this spatial organization are identified by comparing the β2KO mice to WT mice. They describe two types of synapses based on Bassoon clustering: the multiple active zone (mAZ) synapse and single active zone (sAZ) synapse. In this revision, the authors have added EM data to support the idea that mAZ synapses represent boutons with multiple release sites. They have also reanalyzed their data set with different statistical approaches.

      Strengths:

      The data presented is of good quality and provides an unprecedented view at high resolution of the presynaptic components of the retinogeniculate synapse during active developmental remodeling. This approach offers an advance to the previous mouse EM studies of this synapse because of the CTB label allows identification of the eye from which the presynaptic terminal arises.

      Weaknesses:

      While the interpretation of this data set is much more grounded in this second revised submission, some of the authors' conclusions/statements still lack convincing supporting evidence. In particular, the data does not support the title: "Eye-specific active zone clustering underlies synaptic competition in the developing visual system". The data show that there are fewer synapses made for both contra- and ipsi- inputs in the β2KO mice-- this fact alone can account for the differences in clustering. There is no evidence linking clustering to synaptic competition. Moreover, the findings of differences in AZ# or distance between AZs that the authors report are quite small and it is not clear whether they are functionally meaningful.

      We thank the reviewer for their helpful suggestions that improved the manuscript in this revision. We have changed the title to remove the reference to “clustering” and to avoid implying any causal relationships. The new title is descriptive: “Eye-specific differences in active zone addition during synaptic competition in the developing visual system”.

      To further address the reviewers comments, we have removed the remaining references to activity-dependent effects on synaptic development (line 36, line 96, line 415). We have also modified the text in lines 411-413 to state that “The failure to add synapses could explain the observation that synaptic clustering was reduced and more inputs formed in isolation in the mutants compared to controls.”

      We have also updated our presentation of results for Figure 4 to ensure that we do not causally link clustering to synaptic competition. In Figures 4C and 4D we report significant results with high effect sizes (effect sizes all greater than 2; see Supplemental Table S2). The mean differences are modest (5-7%) and significance arises due to low variance between biological replicates. We acknowledge that clustering patterns are generally similar between mAZ and sAZ inputs across most conditions. We have revised the text to describe these as “slight” differences and that “WT mice show a tendency toward forming more synapses near mAZ inputs”, reflecting appropriate caution in our interpretation while noting the statistical consistency of these patterns.

      Reviewer #3 (Public review):

      This study is a follow-up to a recent study of synaptic development based on a powerful data set that combines anterograde labeling, immunofluorescence labeling of synaptic proteins, and STORM imaging (Cell Reports, 2023). Specifically, they use anti-Vglut2 label to determine the size of the presynaptic structure (which they describe as the vesicle pool size), anti-Bassoon to label active zones with the resolution to count them, and anti-Homer to identify postsynaptic densities. Their previous study compared the detailed synaptic structure across the development of synapses made with contra-projecting vs. ipsi-projecting RGCs and compared this developmental profile with a mouse model with reduced retinal waves. In this study, they produce a new detailed analysis on the same data set in which they classify synapses into "multi-active zone" vs. "single-active zone" synapses and assess the number and spacing of these synapses. The authors use measurements to make conclusions about the role of retinal waves in the generation of same-eye synaptic clusters. While the authors interpret these results as providing insight into how neural activity drives synapse maturation, the strength of their conclusions is not directly tested by their analysis.

      Strengths:

      This is a fantastic data set for describing the structural details of synapse development in a part of the brain undergoing activity-dependent synaptic rearrangements. The fact that they can differentiate the eye of origin is what makes this data set unique over previous structural work. The addition of example images from the EM dataset provides confidence in their categorization scheme.

      Weaknesses:

      Though the descriptions of single- vs multi-active zone synapses are important and represent a significant advance, the authors continue to make unsupported conclusions regarding the biological processes driving these changes. Although this revision includes additional information about the populations tested and the tests conducted, the authors do not address the issue raised by previous reviews. Specifically, they provide no assessment of what effect size represents a biologically meaningful result. For example, a more appropriate title is "The distribution of eye-specific single- vs multi-active zone synapses is altered in mice with reduced spontaneous activity" rather than concluding that this difference in clustering is somehow related to synaptic competition. Of course, the authors are free to speculate, but many of the conclusions of the paper are not supported by their results.

      We appreciate the reviewer’s helpful critique. We have changed the title to be descriptive and avoid implying causal relationships. 

      We have applied false discovery rate (FDR) correction using the Benjamini-Hochberg method with α = 0.05 within each experimental condition (age × genotype combination). The FDR correction treats each condition as addressing a distinct experimental question: 'What synaptic properties differ between left eye and right eye inputs in this specific developmental stage and genotype?'

      This correction strategy is appropriate because: 1) we focus our statistical comparisons within each age/genotype; 2) each age-genotype combination represents a separate biological context where different synaptic properties between eye-of-origin may be relevant; and 3) this approach controls for multiple testing within each experimental question while maintaining statistical power to detect meaningful biological differences.

      We applied FDR correction separately to the ~20-34 measurements (varying with age and genotype) within each of the six experimental conditions (P2-WT, P2-ß2, P4-WT, P4-ß2, P8-WT, P8-ß2), resulting in condition-specific adjusted p-values. These are reported in the updated Supplemental Table S2. Figures have also been updated to reflect the FDR-adjusted values. Selected between-genotype comparisons are presented descriptively using 5/95% confidence intervals. This correction confirmed the robustness of our key findings.

      With regard to the biological significance of effect sizes, our key findings demonstrate effect sizes >2.0, indicating robust effects. During critical developmental periods, consistent structural differences, even those modest in absolute magnitude, can reflect important regulatory mechanisms that influence refinement outcomes. The differences in synaptic organization we observe occur during the first postnatal week when eye-specific competition is active, suggesting these patterns may be relevant to understanding how structural advantages emerge during synaptic refinement.

      Reviewer #1 (Recommendations for the authors):

      I have tried to understand the analysis and biology of this manuscript as best I can. I believe the analytical approach taken is not reliable and I have explained why in my public comments. I don't believe this manuscript is unique in taking this approach. I have recently published a paper on how common this approach is and why it doesn't work. I don't want to give the impression that the problem with the analysis was that it was not computationally sophisticated enough or that you did not jump through a specific statistical hoop. If I strip out the arguments that depend on misinterpretations of p-values and -instead- look at the scatterplots, I come up with a very different view of the data than what is described in the paper.

      The information in the plots could be translated into a rigorous statistical analysis of estimated differences between groups given the uncertainties of the experimental design. I don't really think that analysis would be useful. I think it would have been enough to publish the plots and report your estimates of the number of active zones in RGCs during development. I don't see evidence of an additional effect.

      We appreciate the reviewer’s helpful comments throughout the review process. Mean active zone numbers per mAZ contact are presented in Figure S2D/E. We look forward to further technical and computational advances that will help us increase our data acquisition throughput and sample sizes when designing future studies. 

      Reviewer #2 (Recommendations for the authors):

      The authors should modify the title and other text to be more consistent with the data. There is no evidence that active zone clustering has any direct relationship to synaptic competition.

      We appreciate the reviewer’s helpful suggestions to ensure appropriate language around causal effects. We have modified the title to accurately reflect the results: "Eye-specific differences in active zone addition during synaptic competition in the developing visual system." We have revised the text in the abstract, introduction, and results section for Figure 4 to be consistent with the data and not imply causality of synapse clustering on segregation phenotypes.

      Reviewer #3 (Recommendations for the authors):

      Change the title.

      We appreciate the reviewer’s feedback throughout the review process. We have modified the title to accurately reflect the results: "Eye-specific differences in active zone addition during synaptic competition in the developing visual system."

    1. Keep users signed in: include offline_access

      Ideally, this shouldn't exist here but should be covered in the initiate login article as the mandatory way of doing things. If someone wants to log in with Scalekit, they will want to receive tokens, so as part of initiate login we should just tell them to send these scopes: "openid email profile offline_access"
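      As a sketch of what that guidance could look like, here is a hypothetical authorization request that includes offline_access; the endpoint, client_id, and redirect_uri are placeholders, not real Scalekit values.

      ```python
      from urllib.parse import urlencode

      # All values below are placeholders for illustration only.
      params = {
          "response_type": "code",
          "client_id": "YOUR_CLIENT_ID",
          "redirect_uri": "https://app.example.com/callback",
          # offline_access requests a refresh token so the user stays signed in.
          "scope": "openid email profile offline_access",
      }
      auth_url = "https://example.scalekit.dev/oauth/authorize?" + urlencode(params)
      ```

      Bundling all four scopes into the initiate-login instructions means users get ID, access, and refresh tokens in one flow instead of discovering offline_access later.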

    1. Reviewer #1 (Public review):

      Summary:

      The previous evidence for NMDARs containing N1, N2, and N3 subunits (t-NMDARs) was weak. All previous results could be explained by mixtures of di-heteromeric receptors. The authors here set out to identify t-NMDARs both in vitro and in the brain.

      Strengths:

      The single-channel recording is quite convincing because the authors could reproduce previous results in their system, but could also then add new observations. It is quite hard (if not impossible) to obtain the N1-N2A-N3A result at 100 µM Glu/Gly from a mixture, because the N1-N2A diheteromer has such a high open probability. Therefore, any idea that this might be, in fact, two receptors (GluN1-N2A and GluN1-N3A) is trivially falsified. The authors might prefer to make this argument based on the reduction of open probability, which cannot be achieved from a mixture masquerading as a single channel.

      With regard to crosslinker usage in brain tissue, these are very impressive attempts, which I applaud. The fluorescence images of the brain sections look convincing. But the bands corresponding to N2-N3 crosslinked subunits from neurons or the brain are faint. I would want more information to be convinced that these faint bands come from GluN2-N3 dimers.

      Weaknesses:

      In the first part of the paper, where the CryoEM structure is determined, it's not really clear to me the extent to which Fab binding might bias the position of the ATDs (and even then the arrangement of each subunit within the whole complex). Then, much later at the end of the results, there is a structural analysis that claims to be integrative (Figure 7) but does not obviously rely on any other data than the structures, but does mention this point about the Fabs. The results could be rearranged to make these points clearer.

      I have my biggest doubts about the crosslinking of native receptors. For the biochemistry from neurons or brain tissue, this is a very ambitious idea that has been hard to execute over the past 15-20 years. The authors use AzF for the obvious reason that this was done before in NMDARs. The constructs that have been assembled are neat. But AzF is a really bad crosslinker. The authors attribute the weak bands to subunit mobility, but the minor abundance is more likely due to the strong constraints on AzF crosslinking and its unsuitable photochemistry in general (very easily activated with room light, for example).

      There is no information at all given about the wavelength, intensity, duration of UV exposure, and how, for example, the right exposure was determined. How were the samples protected in between?

    2. Reviewer #2 (Public review):

      Summary:

      The authors purified and solved by cryo-EM a structure of tri-heteromeric GluN1/GluN2A/GluN3A NMDA receptors, whose existence has long been contentious. Using patch-clamp electrophysiology on GluN1/GluN2/GluN3A NMDARs reconstituted into liposomes, they characterized the function of this NMDAR subtype. Finally, thanks to site-targeted crosslinking using unnatural amino acid incorporation, they show that the GluN2A subunit can crosslink with the GluN3A subunit in a cellular context, both in recombinant systems (HEK cells) and neuronal cultures and in vivo.

      Strengths:

      The NMDAR GluN3 subunit is a glycine-binding subunit that was long thought to assemble into GluN1/GluN2/GluN3 tri-heteromeric receptors during development, acting as a brake for synaptic development. However, several studies based on single subunit counting (Ulbrich et al., PNAS 2008) and ex vivo/in vivo electrophysiology have challenged the existence of these tri-heteromers (see Bossi, Pizzamiglio et al., Trends Neurosci. 2023). A large part of the controversy stems from the difficulty in isolating the tri-heteromeric population from their di-heteromeric counterparts, which led to a lack of knowledge on the biophysical and pharmacological properties of putative GluN1/GluN2/GluN3 receptors. To counteract this problem, the authors used a two-step purification method - first with a strep-tag attached to the GluN3 subunit, then with a His tag attached to the GluN2 subunit - to isolate GluN1/GluN2/GluN3 tri-heteromers from GluN1/GluN2A and GluN1/GluN3 di-heteromers, and they did observe these entities in Western blot and FSEC. They solved a cryo-EM structure of this NMDAR subtype using specific FAbs to identify the GluN1 and GluN2A subunits, showing an asymmetrical, splayed architecture. Then, they reconstituted the purified receptors in lipid vesicles to perform single-channel electrophysiological recordings. Finally, in order to validate the tri-heteromeric arrangement in a cellular system, they performed photocrosslinking experiments between the GluN2A and GluN3 subunits. For this purpose, a photoactivatable unnatural amino acid (AzF) was incorporated at the bottom of GluN2A NTD, a region embedded within the receptor complex that is predicted to be in close proximity to the GluN3 subunit. 
This is an elegant approach to validate the existence of GluN1/GluN2/GluN3 tri-hets, since at the chosen AzF incorporation position, crosslinking between GluN2A and GluN3 is more likely to reflect interaction of subunits within the same receptor complex than between two receptors. They show crosslinking between GluN2A and GluN3 in the presence of AzF and UV light, but not when UV light or AzF was omitted, suggesting that GluN2A and GluN3 can indeed be incorporated into the same complex. In a further attempt to demonstrate the physiological relevance of these tri-heteromers, they performed the same crosslinking experiments in cultured neurons and even native brain samples. While unnatural amino acid incorporation is now a well-established technique in vitro, such an approach is very difficult to implement in vivo. The technical effort put into the validation of the presence of these tri-heteromers in vivo should thus be commended.

      Overall, all the strategies used by this paper to prove the existence of GluN1/GluN2/GluN3 tri-heteromers, and investigate their structure and function, are well-thought-out and very elegant. But the current data do not fully support the conclusions of the paper.

      Weaknesses:

      All the experiments aimed at proving the existence of GluN1/GluN2/GluN3 tri-heteromers rely on the purification of these receptors from whole-cell extracts. There is therefore no proof that these receptors are expressed at the membrane and are functional. This limitation has been overlooked and should be discussed in the manuscript. In addition, in the manuscript's current state, each demonstration suffers from caveats that do not allow a firm conclusion about the existence and the properties of this receptor subtype.

      (1) In Cryo-EM images of GluN1/GluN2A/GluN3A receptors, the GluN3 subunit is identified as the subunit having no Fab bound to it. How can the authors be sure that this is indeed the GluN3A subunit and not a GluN2A subunit that has not bound the Fab? Does the GluN3A subunit carry features that would allow distinguishing it independently of Fab binding? In addition, it is surprising that the authors did not incubate the tri-heteromers with a Fab against GluN3A, since Extended Figure 3 shows that such a Fab is available.

      (2) Whether the single-channel recordings reflect the activity of GluN1/GluN2/GluN3 tri-heteromers is not convincing. Indeed, currents from liposomes containing these tri-heteromers have two conductance levels that correspond to the conductances of the corresponding di-heteromers. There is therefore a need for additional proof that the measured currents do not reflect a mixture of currents from N1/2A di-heteromers on one side, and N1/3A di-heteromers on the other side. What is the purity of the N1/3A sample? Indeed, given the high open probability and high conductance of N1/2A di-heteromers, even a small fraction of them could significantly contribute to the single-channel currents. Additionally, although the authors show no current induced by 3 μM glycine alone on proteoliposomes with the N1/2A/3A prep (no stats provided, though), given the sharp dependence of N1/3A currents on glycine concentration, this control alone cannot rule out the presence of contaminant N1/3A di-heteromers in the preparation.

      Finally, pharmacological characterization of these tri-heteromers is lacking. In vivo, the presence of tri-heteromeric GluN1/GluN2/GluN3 tri-heteromers was inferred from recordings of NMDARs activated by glutamate but with low magnesium sensitivity. What is the effect of magnesium on N1/2A/3A currents? Does APV, the classical NMDAR antagonist acting at the glutamate site, inhibit the tri-heteromers? What is the effect of CGP-78608, which inhibits GluN1/GluN2 NMDARs but potentiates GluN1/GluN3 NMDARs? Such pharmacological characterization is critical to validate that the measured currents are indeed carried by a tri-heteromeric population, and would also be very important to identify such tri-heteromers in native tissues.

      (3) Validation of GluN1/GluN2/GluN3 tri-heteromer expression by photocrosslinking: The mixture of constructs used (full-length or CTD-truncated constructs, with or without tags) is confusing, and it is difficult to track the correct molecular weight of the different constructs. In Figure 6, the band corresponding to a putative GluN3/GluN2A dimer is very weak. In addition, given the differences in molecular weights between the GluN2 subunits and GluN3, we would expect the band corresponding to a GluN2A/GluN2B dimer to migrate differently from the GluN2A/GluN3 dimer, but all high-molecular-weight bands seem to be at the same level in the blot. Finally, in the source data, the blots display additional bands that were dismissed by the authors without justification. In short, better clarification of the constructs and more careful interpretation of the blots are necessary to support the conclusions claimed by the authors.

    1. Reviewer #2 (Public review):

      Summary:

      The authors investigated whether early-life malaria exposure has long-term effects on immune responses to unrelated antigens. They leveraged a natural experiment in coastal Kenya where two adjacent communities (Junju and Ngerenya) experienced divergent malaria transmission patterns after 2004. Using 15 years of longitudinal data from 123 children with weekly malaria surveillance and annual serological sampling, they measured antibody responses to multiple pathogens using protein microarray technology and ELISA.

      Strengths:

      (1) Extensive longitudinal data collection with weekly malaria surveillance, enabling precise exposure classification.

      (2) Use of a natural experiment design that allows for causal inference about malaria's immunological effects.

      (3) Broad panel of antigens tested, demonstrating generalized rather than antigen-specific effects.

      (4) Within-cohort analysis in Ngerenya controls for geographic and environmental factors.

      (5) Validation of key findings using both serologic microarray and ELISA.

      (6) Important public health implications for vaccine strategies in malaria-endemic regions.

      Weaknesses:

      (1) Lack of participants' characteristics (socio-economic, nutritional, physical).

      (2) Somewhat limited sample size (longitudinal analysis of 123 children total), with further subdivision reducing statistical power for some analyses.

      (3) Potential confounding by unmeasured socioeconomic, nutritional, or environmental factors between communities.

      (4) Lack of ability to determine the direction of the associations found between malaria exposure and other IgG levels to unrelated pathogens.

      (5) Despite good longitudinal data, the main analysis was conducted as a cross-sectional analysis at age 10 for many comparisons, which limits the understanding of temporal dynamics.

      (6) Statistical analysis is limited to univariable comparisons without consideration for confounders or adjusting for multiple comparisons.

      (7) No mechanistic understanding of how early malaria exposure creates lasting immunosuppression.

      (8) No understanding of the clinical implications of the reduced IgG levels observed in the area with high malaria exposure.

      Assessment of Claims:

      The data appear to support the authors' primary claims, but the strength of the evidence is limited, and the results should be interpreted with caution. Together with the currently available evidence of P. falciparum's impact on the host's immune function, this natural experiment design provides further evidence for a relationship between early malaria exposure and reduced antibody responses. The within-Ngerenya analysis controls for geographic factors and thus enhances the quality of the evidence; however, it still fails to account for the physical, nutritional, and socio-economic factors that may have driven the observed changes. Additionally, the mechanism underlying this effect remains unclear, and the clinical significance of reduced antibody levels is not established.

      Impact and Utility:

      This work has fundamental implications for understanding vaccine effectiveness in malaria-endemic regions and may contribute to informing vaccination strategies. The findings, if strengthened, would suggest that children in areas of high malaria transmission may require modified immunization approaches. The dataset provides a valuable resource for future studies of malaria's immunological legacy.

      Context:

      This study builds on prior work showing acute immunosuppressive effects of malaria but uniquely attempts to demonstrate the durability of these effects years after exposure. The natural experiment design addresses limitations of previous observational studies by providing a more controlled comparison.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      Summary: This work by Matsui et al. examined the function of the gene Stand Still (stil) in Drosophila in the regulation of germ cell death in the female germline. They show that stil mutants contain many apoptotic cells, leading to germ cell loss and infertility. Gene expression analysis showed upregulation of pro-apoptotic genes such as rpr in the stil mutant. A DamID experiment further showed that Stil binds to the rpr promoter region to repress its expression. They also show that undifferentiated germ cells are resistant to cell death in the stil mutant (but the stil mutant still eventually loses all germ cells).

      Major comments: Overall, experiments adhere to a general standard of rigor, and each result is fairly convincing. In that sense, this paper warrants publication, as a paper that revealed a new gene important for preventing germ cell death. With that said, I feel that this paper does not reveal a new biological insight. In a nutshell, this paper is about a transcriptional repressor for pro-apoptotic gene, hence its depletion leads to cell death. Data is solid and the conclusion is well supported. But the readers will be left wondering why nature implemented such control? Unless one can show what kind of defects stil rpr double mutant (which rescues germ cell loss phenotype) exhibits, there is no insight why the balance of pro-apoptotic gene and its repressor is important. The paper discusses the 'molecular' mechanisms that explain the phenomenon, but it does not provide insights. The lack of conceptual advancement is the limitation of this work.

      Response:

      We thank the reviewer for prompting us to consider the evolutionary rationale underlying the adoption of such a regulatory mechanism in nature. To address this point, we assessed the evolutionary conservation of rpr and stil through BLAST searches and comparative analyses. Our results showed that both genes are Diptera-restricted, whereas their key domains (the rpr IAP-binding motif and the Stil BED finger) are widely conserved across metazoans. In this phylogenetic context, we propose that Stil acts as a dedicated repressor of rpr in the Drosophila female germline, thereby establishing an apoptotic control architecture in which hid predominates and rpr is repressed by Stil. This explains why the balance between a potent effector (Rpr) and its repressor (Stil) is critical in oogenesis: it prevents catastrophic germline loss while preserving hid-mediated responsiveness.

      We have incorporated these phylogenetic analyses and the perspective into the revised Discussion section as follows.

      Revised Page 22, Line 475; rpr is conserved only within Diptera, although its IAP-binding motif, essential for apoptosis induction, is broadly conserved across metazoans (Du et al., 2000; Gottfried et al., 2004; Hegde et al., 2002; Shi, 2002; Verhagen et al., 2000; Vucic et al., 1998; Wing et al., 2001; L. Zhou, 2005) (Fig. S7). Similarly, stil is also restricted to Diptera, predominantly within Drosophila, whereas its BED-type zinc finger domain is widely conserved among diverse organisms (Aravind, 2000; Hayward et al., 2013; Tue et al., 2017b; H. Zhou et al., 2016). Phylogenetic patterns across Diptera are consistent with a model in which stil acts as a dedicated repressor of rpr in the Drosophila germline cells (Fig. S7). Due to its potent pro-apoptotic activity, rpr must be stringently repressed in a spatiotemporal manner through mechanisms that are specific to both cell type and developmental stage. During embryogenesis, repression of rpr is mediated by the Dpp-signaling factor Shn, which binds to the rpr regulatory region, whereas in intestinal stem cells (ISCs), its expression is suppressed through chromatin conformation. In Drosophila female germline cells, hid serves as the primary regulator of apoptosis, while rpr activity is generally suppressed (Park et al., 2019; Xing et al., 2015). However, rpr mutants exhibit reduced fertility despite producing viable eggs (Fig. 3H), suggesting that rpr-mediated apoptosis may be required for proper egg development. Accordingly, we propose that stil restrains rpr in the Drosophila female germline, allowing hid to predominate in apoptotic regulation.

      New Fig. S7;

      The legend of new Fig. S7;

      Figure S7 Conservation of Rpr and Stil within Diptera

      Homologs of Drosophila melanogaster Rpr and Stil were identified by BLASTp, aligned, and analyzed phylogenetically. Homologs are present across Dipteran lineages, with the genus Drosophila highlighted in blue. Branch lengths indicate the expected number of substitutions per site, as shown by the scale bar.

      Minor comments: Although this is a minor point, and this is not specifically pointing a finger at the author of this paper, I really don't like the term 'safeguard'. This term is now overutilized to add hype to papers, when 'is necessary' is sufficient. In this case, unless the answer is provided as to 'against what stil is safeguarding germ cells', this term is not meaningful. For example, if one can show that stil specifically senses germline-specific threat and tweaks the regular apoptotic pathway based on germline-specific needs, then the term 'safeguard' may be warranted.

      Response:

      In light of the reviewer's comment, we have revised the title of the manuscript to replace 'safeguard' with 'ensure,' which better reflects the demonstrated function of Stil without overstating its role. The new title of the manuscript is: 'Transcriptional Repression of reaper by Stand Still Ensures Female Germline Development in Drosophila'

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      In this well-executed study, Matsui et al. investigate how the female Drosophila germline prevents inappropriate apoptosis during development. They identify stand still (stil) as a key germline-specific repressor of apoptosis. Stil mutant flies are homozygous viable but female sterile due to widespread germ cell loss at the time of eclosion, which is driven by activation of the pro-apoptotic gene reaper (rpr) and caspase-dependent cell death. Germline-specific expression of anti-apoptotic factors such as p35 can rescue this phenotype, confirming that the defect lies in apoptotic regulation. The authors show that Stil directly represses rpr transcription through its BED-type zinc finger domain. Notably, undifferentiated germline cells remain resistant to apoptosis in the absence of stil, which the authors attribute to a silenced chromatin state at the rpr locus, marked by H3K9me3. These findings support a dual mechanism of protection: transcriptional repression of rpr by Stil, and a potential parallel chromatin-based silencing mechanism operating specifically in undifferentiated cells.

      Major Issues:

      1. Clarify cell identity in Figure 2E: It is unclear whether the apoptotic cells shown are somatic or germline in origin. Including a somatic marker such as 1B1 would allow the reader to clearly distinguish the apoptotic population and better interpret the figure.

      Response:

      We thank the reviewer for this helpful suggestion. Occasionally, the signal of the germline marker Vasa can be attenuated in dying germline cells. As suggested by the reviewer, we also tested α-Spectrin (a plasma membrane and fusome marker) instead of 1B1 together with TUNEL labeling, but this approach did not clearly distinguish somatic from germline apoptotic cells. To directly clarify cell identity, we now provide an improved co-stained image in which TUNEL-positive nuclei are surrounded by Vasa-positive cytoplasm, indicating a germline origin. Figure 2E has been updated accordingly.

      New Fig. 2E;

      2. Quantification of undifferentiated cells in mutants: There appears to be inconsistency in the representation of undifferentiated germ cells across figures. Early panels show near-complete germline loss, while later analyses focus on undifferentiated cells that are reportedly apoptosis-resistant. The authors should quantify the proportion of ovarioles retaining undifferentiated cells and present this data in Figure 1 or the supplements to resolve this discrepancy.

      Response:

      Thank you for raising the important point regarding the apparent inconsistency in the representation of undifferentiated germ cell populations. In early panels (Fig. 1C, D), we analyzed adult ovaries of stil loss-of-function mutants, in which all germline cells, including undifferentiated germline stem cells (GSCs), are almost completely lost (Fig. 1C), showing nearly 100% agametic ovarioles. However, in later analyses, such as those in Fig. 5A, B, we showed 3rd-instar larval ovaries of stil loss-of-function mutants containing a few surviving germline cells near the future cap cells, the niche that provides the stem cell ligand Decapentaplegic (Dpp) (Xie & Spradling, 1998). This suggests that Dpp-responsive undifferentiated germline cells may be relatively resistant to apoptosis caused by stil loss.

      Indeed, the GSC-like cells generated by the overexpression of a constitutively active form of Dpp receptor, Thickveins (Tkv.CA) or loss of the differentiation factor bam, were resistant to apoptosis caused by stil loss (Fig. 5C, D). These GSC-like cells may possess enhanced stemness, owing to either excessively elevated Dpp signaling or complete loss of bam, which could lead to stronger repression of rpr expression through tighter chromatin compaction.

      We added this argument in the Results section of the revised manuscript as follows.

      Revised Page 16, Line 361; Compared to GSCs, which were almost completely lost in stil mutants, GSC-like cells may retain a more robust stemness owing to the extremely elevated Dpp signaling pathway, potentially resulting in stronger repression of rpr expression.

      3. Interpretation of chromatin state at the rpr locus: The claim that H3K9me3, but not H3K27me3, marks the rpr locus is not fully convincing given the low ChIP-seq signal shown. Including a comparison to a known positive control locus would strengthen the argument. Alternatively, the authors could broaden the discussion to include global chromatin reorganization during the germ cell to maternal transition, as reported in Kotb et al., 2024, and how such changes may impact rpr accessibility. Also, stil mutants rescued with p53 have a "string of pearls" phenotype that is associated with germ cell to maternal transition defects (Figure S3, p53 OE).

      Response:

      We thank the reviewer for the thoughtful and constructive comment regarding the interpretation of chromatin state at the rpr locus. To strengthen the inference that the rpr locus shows H3K9me3 enrichment, whereas clear H3K27me3 enrichment is not evident, we have now included ChIP-seq signal profiles for known positive control loci, using light (lt) as an H3K9me3-enriched locus (Akkouche et al., 2017; Greil et al., 2003) and Ultrabithorax (Ubx) as a canonical H3K27me3 target (Torres-Campana et al., 2022). These comparisons support our interpretation that H3K9me3, rather than H3K27me3, characterizes the chromatin around the rpr locus in GSCs. Accordingly, while we do not exclude a minor H3K27me3 contribution, our analyses indicate H3K9me3 as the predominant signature at rpr in GSCs.

      New Fig.6B and 6C;

      The legend of new Fig. 6B and Fig. 6C;

      (B) H3K9me3 ChIP-seq signal at the rpr locus and the lt locus (H3K9me3-positive control) in GSCs and 4C NCs. (C) H3K27me3 ChIP-seq signal at the rpr locus and the Ubx locus (H3K27me3-positive control) in GSCs and 32C NCs.

      A sentence of Result section was revised as below.

      Revised Page 17, Line 396; As internal controls, we confirmed H3K9me3 enrichment at the light (lt) locus and H3K27me3 enrichment at the Ultrabithorax (Ubx) locus, consistent with their established chromatin states (Akkouche et al., 2017; Greil et al., 2003; Torres-Campana et al., 2022); relative to these controls, the rpr locus shows H3K9me3 but no clear H3K27me3 enrichment in GSCs.

      Regarding the suggestion to broaden the discussion to include global chromatin reorganization during the germline-to-maternal transition, as reported in Kotb et al., 2024, we agree that this is an important avenue for understanding rpr accessibility. The "string of pearls" phenotype observed in stil mutants rescued with P35 overexpression (Figure S3) is consistent with perturbations during this transition. However, a detailed analysis of such chromatin reorganization and its potential impact on rpr regulation lies beyond the scope of the present study and represents a valuable direction for future work.

      4. Broader analysis of rpr regulation in somatic cells: It would be informative to examine publicly available chromatin or transcriptional data for the rpr locus in somatic ovarian cells. This could help clarify whether rpr regulation by Stil is truly germline-specific or reflects broader developmental trends. This will also clarify why the flies are homozygous viable but female sterile.

      Response:

      We thank the reviewer for this insightful suggestion. We agree that exploring chromatin accessibility and transcriptional regulation at the rpr locus in somatic ovarian cells would provide valuable insights into tissue- or cell-type-specific chromatin environments that influence rpr expression.

      However, to our knowledge, there are currently no publicly available ATAC-seq or comparable chromatin datasets for purified ovarian somatic cells, including follicle cells or ovarian somatic cells (OSCs). As such, we are unable to incorporate this analysis in the current study. Nevertheless, we fully recognize the importance of this line of inquiry and consider it a valuable direction for future research.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Summary:

      This manuscript describes the characterization of stand still (stil), a previously identified gene needed for germ cell survival in Drosophila. The molecular function of Stil has until now remained poorly understood. This new work shows that loss of stil results in reaper (rpr)-dependent apoptosis within female germ cells. Loss of rpr suppresses many of the phenotypes observed in stil mutants. Experiments performed using Drosophila cell culture suggest that Stil binds to elements within the rpr promoter. DamID and structure/function experiments indicate that Stil likely directly represses the transcription of rpr within germ cells.

      In general, the experiments are well executed, and the data largely support the basic claims of the authors. Replicates are included and appropriate statistical analyses have been provided. The text and figures are clear and accurate. Appropriate references were cited. There are a few things the authors should address or rephrase before publication.

      On page 9 line 190-192. The authors state "Altogether, these findings indicate that the loss of stil function not only triggers apoptosis that can be suppressed by apoptosis inhibitors but also causes defects in oogenesis progression that are not rescued by blocking cell death." Failure to rescue defects during mid-oogenesis could be due to insufficient transgene expression. Indeed, loss of rpr appears to rescue the fertility of stil mutants. The conclusions of this section should be restated.

      Response:

      We agree that the failure to rescue mid-oogenesis defects by P35 overexpression may, at least in part, be due to insufficient transgene expression. This explanation is particularly plausible given that loss of rpr more effectively restored fertility in stil mutants. As suggested by the reviewer, we have revised the relevant sentences, to avoid misinterpretation as below.

      Revised Page 9, Line 191; Altogether, these findings indicate that the loss of stil function triggers apoptosis that can be suppressed by apoptosis inhibitors.

      Revised Page 12, Line 253; The complete rescue of germline survival in stil rpr double mutants also suggests that the failure of P35 overexpression to restore mid-oogenesis defects may partly reflect insufficient transgene expression (Fig. S3).

      The authors should present the overlap between genes that change expression in a stil mutant and those in which the DamID experiments indicate are directly bound by Stil protein. DamID can sometimes give spurious results depending on expression levels. Further discussion along this point is necessary.

      Response:

      We thank the reviewer for raising this issue. As suggested, we have now analyzed the overlap between genes that are differentially expressed in stil mutant ovaries (identified by RNA-seq of stil mutants expressing P35) and genes that are potentially bound by Stil based on DamID-seq data (promoter-proximal peaks ≤1 kb), presented as Supplementary Table 4. The list comprises genes with DamID peaks within promoter regions that also exhibit significant differential expression (|log2FC| > 1 and a significant adjusted p value). The overlap between DamID-seq and RNA-seq comprises 682 genes, including rpr, supporting the idea that Stil regulates rpr expression through interaction with its upstream promoter region. However, the detected peak signal at rpr was 3.41, which is moderate rather than strong, suggesting that Stil may also bind to and regulate other genes in female germline cells. Investigating the potential role of Stil in regulating other genes represents an important future direction of our study.
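The promoter-peak/DEG intersection described in this response can be sketched as follows. This is an illustrative sketch only: the adjusted-p cutoff (0.05), the toy gene sets, and the function name are our assumptions, not the authors' actual pipeline or data.

```python
# Hypothetical sketch of the DamID x RNA-seq overlap analysis.
# Thresholds (promoter window <= 1 kb, |log2FC| > 1, adjusted p < 0.05)
# are illustrative assumptions, not the exact values used in the study.

def overlap_damid_rnaseq(damid_peaks, deg_table, max_tss_dist=1000,
                         min_abs_log2fc=1.0, max_padj=0.05):
    """Genes with a promoter-proximal DamID peak that are also
    differentially expressed in the mutant RNA-seq."""
    promoter_bound = {g for g, dist in damid_peaks.items()
                      if abs(dist) <= max_tss_dist}
    degs = {g for g, (log2fc, padj) in deg_table.items()
            if abs(log2fc) > min_abs_log2fc and padj < max_padj}
    return promoter_bound & degs

# Toy inputs: gene -> distance of nearest Stil peak to the TSS (bp),
# and gene -> (log2 fold change, adjusted p) from RNA-seq.
peaks = {"rpr": 250, "hid": 5000, "bam": 800}
degs = {"rpr": (2.8, 1e-6), "bam": (0.3, 0.4), "vasa": (1.5, 0.01)}
print(sorted(overlap_damid_rnaseq(peaks, degs)))  # ['rpr']
```

With these toy inputs, only rpr both carries a promoter-proximal peak and passes the differential-expression filter, mirroring the logic (not the scale) of the 682-gene overlap.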

      We have included this analysis and argument in the revised manuscript as below.

      Revised Page 13, Line 280; A total of 682 genes with Stil-enriched peaks detected at promoter regions (≤1 kb) showed significantly altered expression in RNA-seq analysis of stil mutants expressing P35, including rpr (Supplementary Table 4).

      Revised Page 20, Line 440; Notably, the DamID peak intensity at the rpr locus reached 3.41, which is moderate rather than strong (Supplementary Table 4). This suggests that, in addition to repressing rpr, Stil may bind to and regulate other genomic loci in the female germline. Investigating the repertoire of Stil target genes and elucidating their roles in germline cells will be an important future direction of this study.

      For structure function experiments, a western blot showing expression levels of the different transgenes in ovaries should be included.

      Response:

      We thank the reviewer for this helpful comment. To address this point, we examined the expression levels of the four Stil variants (FL, NT, CT, and AAYA) in ovaries driven by a germline driver under a wild-type background using Western blotting. The representative blot and quantification from three biological replicates showed comparable expression levels among the variants, with the CT variant displaying a slightly reduced signal. Importantly, AAYA showed expression comparable to FL yet, like CT, failed to rescue, indicating that the rescue failure is not explained by expression-level differences. These data instead support a requirement for the BED-type zinc finger for Stil function in the germline. While we cannot fully exclude a minor contribution from the slightly lower expression of the CT variant to the lack of rescue, the AAYA result argues that loss of BED-type zinc-finger function is the primary cause; we note this caveat in the revised text. The corresponding data are now presented in Figure S6A of the revised manuscript.

      New Fig. S6A;

      The legend of new Fig. S6A;

      (A) Western blot analysis of 6×Myc-tagged Stil variants (FL, NT, CT, and AAYA) driven by NGT40-Gal4; NosGal4-VP16, with y w as a control. Stil variants were detected with anti-Myc, and α-Tubulin (αTub) served as a loading control. Arrowheads indicate Stil variant proteins. The lower panel shows quantification of the Myc/αTub signal ratio normalized to FL. Error bars indicate standard deviation (s.d.) (n = 3).
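The densitometry normalization described in the legend can be sketched as below; the band-intensity numbers are invented placeholders, and the scheme (each variant's Myc/αTub ratio divided by the FL ratio) is our reading of the legend, not the authors' exact quantification script.

```python
# Sketch of the Fig. S6A quantification: Myc signal is first corrected
# for loading (divided by alpha-Tubulin), then normalized to the FL variant.
def normalized_expression(myc, tub, reference="FL"):
    ratios = {v: myc[v] / tub[v] for v in myc}
    ref = ratios[reference]
    return {v: r / ref for v, r in ratios.items()}

# Placeholder intensities for the four Stil variants (arbitrary units).
myc = {"FL": 1.00, "NT": 0.95, "CT": 0.60, "AAYA": 1.02}
tub = {"FL": 1.00, "NT": 1.00, "CT": 0.80, "AAYA": 1.00}
print(normalized_expression(myc, tub))
```

Averaging such normalized ratios across the three biological replicates would yield the bar values plotted in the lower panel of Fig. S6A.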

      A sentence of Result section was revised as below.

      Revised Page 13, Line 291; The expression of all four Stil variant proteins from the transgenes was confirmed, although Stil-CT showed a slightly reduced expression level (Fig. S6A)

      Revised Page 14, Line 305; Although CT shows slightly lower expression, AAYA fails to rescue despite FL-like expression, indicating that expression level is not limiting and that loss of the BED-type zinc finger underlies the phenotype.

      With the addition of the new Fig. S6A, the following figure labels have been updated;

      Fig. S6A → S6B

      Fig. S6B → S6C

      Fig. S6C → S6D

      Fig. S6D → S6E

      Individual data points should be shown in each graph in place of simple bar graphs. This type of presentation was inconsistent throughout the paper.

      Response:

      We thank the reviewer for this constructive comment. In line with the reviewer's suggestion, we have revised the relevant graphs to include individual data points overlaid on bar plots with error bars. This modification enables readers to better assess data variability. We also ensured consistency in data presentation among the revised figures while maintaining clarity throughout the manuscript.

      Reference "G & D., 1997" should be properly formatted.

      Page 6 line 117 and 121- a couple of instances where "cell" should be "cells"

      Page 14 line 304- typo "Still"

      Response:

      As suggested, we have revised all figures to display individual data points in each graph instead of using simple bar graphs. This change has been applied consistently throughout the manuscript to improve data transparency and readability. The revised figures include Figure 1A, 2B, S1A, and S2A.

      We have also corrected the following textual issues;

      ・The reference "G & D., 1997" has been properly formatted as "Pennetta & Pauli, 1997".

      ・On page 6, lines 119 and 123, "cell" has been corrected to "cells" to ensure grammatical accuracy.

      ・On page 14, line 315, the typo "Still" has been corrected to "Stil".

      Reviewer #3 (Significance (Required)):

      The significance of the work lies in characterizing a previously unknown function of Stil. By showing that Stil acts to repress transcription of the cell death gene rpr, the authors provide new insights into how programmed cell death is regulated in the Drosophila female germline. Readers interested in reproductive biology, cell death, chromatin, and general developmental biology will find value in these new findings.

      One thing to consider is the possibility that Stil represses rpr in the context of the double strand breaks that form during meiosis. Experiments in the paper indicate that stil knockdown results in TUNEL labeling in region 2A/2B of the germarium. The authors should consider co-labeling for a meiosis marker (C(3)G or gammaH2Av) to see if this PCD correlates with this expression. In addition, they could test whether loss of Spo11 (mei-W68) suppresses stil phenotypes during early germ cell development. Relating the function of Stil to repression of cell death during this critical time of germ cell development would elevate the impact and significance of the paper. However, this may be considered beyond the scope of the current study.

      Response:

      We deeply thank the reviewer for this insightful and thought-provoking suggestion.

      As suggested, we conducted co-staining with γH2Av (a DSB marker), as well as genetic interaction experiments with Spo11 (mei-W68) mutants, to address this question, as shown below. In region 2 of all genotypes, including the y w control and stil heterozygous and homozygous ovaries expressing P35, γH2Av signals were discernible and were subsequently lost in region 3 through the meiotic recombination-specific DNA repair program (Additional Figure A). In stil mutants, however, an additional strong γH2Av signal was specifically observed in the oocyte, beyond the expected meiotic pattern. Furthermore, loss of meiotic recombination factors, including mei-W68, in stil mutants partially rescued the germline loss phenotype, although not to the same extent as in rpr mutants (Additional Figure B, C: 43.5 % in mei-W68-GLKD, 23.9 % in mei-P22P22, and 12.8 % in vilya826, versus 100 % with loss of rpr in Fig. 3E, F of the revised manuscript). These findings suggest that accumulation of meiotic DSBs is not the main cause of rpr upregulation in stil mutants. We feel that these analyses are beyond the scope of the current study, which focuses on identifying Stil as a transcriptional repressor of rpr and characterizing its role in germline apoptosis. Elucidating the other mechanisms that elevate rpr expression in stil mutants will be the focus of future work. We therefore provide these data here for the reviewer's reference, but if the reviewer prefers, we would be happy to incorporate them into the manuscript.

      Additional Figure (A) Immunostaining of ovarioles from y w, stilEY16156/CyO; P35 OE (NGT40; NosGal4-VP16 > P35), and stilEY16156; P35 OE flies with an antibody against the DNA double-strand break marker γH2Av (green), Vasa (red), and DAPI (blue). Insets show enlarged views of an egg chamber. White dots indicate oocyte nuclei. Scale bars: 50 μm (ovariole) and 20 μm (egg chamber). (B) Immunofluorescence of Vasa (red) and DAPI (blue) in ovaries from stilEY16156, stilEY16156; mei-W68-GLKD (driven by NGT40; NosGal4-VP16), stilEY16156; mei-P22P22, and stilEY16156; vilya826 flies. Scale bar: 50 μm. (C) Quantification of the percentage of ovarioles containing germline cells in 2-3-day-old females. The genotypes of females are indicated below the x-axis, and the number of germaria analyzed is shown above each bar. Error bars represent the standard deviation (s.d.).


    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #3

      Evidence, reproducibility and clarity

      Summary:

      This manuscript describes the characterization of stand still (stil), a previously identified gene needed for germ cell survival in Drosophila. The molecular function of Stil has until now remained poorly understood. This new work shows that loss of stil results in reaper (rpr)-dependent apoptosis within female germ cells. Loss of rpr suppresses many of the phenotypes observed in stil mutants. Experiments performed using Drosophila cell culture suggest that Stil binds to elements within the rpr promoter. DamID and structure/function experiments indicate that Stil likely directly represses the transcription of rpr within germ cells.

      In general, the experiments are well executed, and the data largely support the basic claims of the authors. Replicates are included and appropriate statistical analyses have been provided. The text and figures are clear and accurate. Appropriate references were cited. There are a few things the authors should address or rephrase before publication.

      On page 9 line 190-192. The authors state "Altogether, these findings indicate that the loss of stil function not only triggers apoptosis that can be suppressed by apoptosis inhibitors but also causes defects in oogenesis progression that are not rescued by blocking cell death." Failure to rescue defects during mid-oogenesis could be due to insufficient transgene expression. Indeed, loss of rpr appears to rescue the fertility of stil mutants. The conclusions of this section should be restated.

      The authors should present the overlap between genes that change expression in a stil mutant and those in which the DamID experiments indicate are directly bound by Stil protein. DamID can sometimes give spurious results depending on expression levels. Further discussion along this point is necessary.

      For structure function experiments, a western blot showing expression levels of the different transgenes in ovaries should be included.

      Individual data points should be shown in each graph in place of simple bar graphs. This type of presentation was inconsistent throughout the paper.

      Reference "G & D., 1997" should be properly formatted.

      Page 6 line 117 and 121- a couple of instances where "cell" should be "cells"

      Page 14 line 304- typo "Still"

      Referee cross-commenting

      I also agree with the points raised by the other two reviewers. I think we are in general agreement on the strengths and weaknesses of the study.

      Significance

      The significance of the work lies in characterizing a previously unknown function of Stil. By showing that Stil acts to repress transcription of the cell death gene rpr, the authors provide new insights into how programmed cell death is regulated in the Drosophila female germline. Readers interested in reproductive biology, cell death, chromatin, and general developmental biology will find value in these new findings.

      One thing to consider is the possibility that Stil represses rpr in the context of the double strand breaks that form during meiosis. Experiments in the paper indicate that stil knockdown results in TUNEL labeling in region 2A/2B of the germarium. The authors should consider co-labeling for a meiosis marker (C(3)G or gammaH2Av) to see if this PCD correlates with this expression. In addition, they could test whether loss of Spo11 (mei-W68) suppresses stil phenotypes during early germ cell development. Relating the function of Stil to repression of cell death during this critical time of germ cell development would elevate the impact and significance of the paper. However, this may be considered beyond the scope of the current study.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      In this well-executed study, Matsui et al. investigate how the female Drosophila germline prevents inappropriate apoptosis during development. They identify stand still (stil) as a key germline-specific repressor of apoptosis. Stil mutant flies are homozygous viable but female sterile due to widespread germ cell loss at the time of eclosion, which is driven by activation of the pro-apoptotic gene reaper (rpr) and caspase-dependent cell death. Germline-specific expression of anti-apoptotic factors such as p35 can rescue this phenotype, confirming that the defect lies in apoptotic regulation. The authors show that Stil directly represses rpr transcription through its BED-type zinc finger domain. Notably, undifferentiated germline cells remain resistant to apoptosis in the absence of stil, which the authors attribute to a silenced chromatin state at the rpr locus, marked by H3K9me3. These findings support a dual mechanism of protection: transcriptional repression of rpr by Stil, and a potential parallel chromatin-based silencing mechanism operating specifically in undifferentiated cells.

      Major Issues:

      1. Clarify cell identity in Figure 2E: It is unclear whether the apoptotic cells shown are somatic or germline in origin. Including a somatic marker such as 1B1 would allow the reader to clearly distinguish the apoptotic population and better interpret the figure.
      2. Quantification of undifferentiated cells in mutants: There appears to be inconsistency in the representation of undifferentiated germ cells across figures. Early panels show near-complete germline loss, while later analyses focus on undifferentiated cells that are reportedly apoptosis-resistant. The authors should quantify the proportion of ovarioles retaining undifferentiated cells and present this data in Figure 1 or the supplements to resolve this discrepancy.
      3. Interpretation of chromatin state at the rpr locus: The claim that H3K9me3, but not H3K27me3, marks the rpr locus is not fully convincing given the low ChIP-seq signal shown. Including a comparison to a known positive control locus would strengthen the argument. Alternatively, the authors could broaden the discussion to include global chromatin reorganization during the germ cell-to-maternal transition, as reported in Kotb et al., 2024, and how such changes may impact rpr accessibility. Also, stil mutants rescued with p53 have a "string of pearls" phenotype that is associated with germ cell-to-maternal transition defects (Figure S3, p53 OE).
      4. Broader analysis of rpr regulation in somatic cells: It would be informative to examine publicly available chromatin or transcriptional data for the rpr locus in somatic ovarian cells. This could help clarify whether rpr regulation by Stil is truly germline-specific or reflects broader developmental trends. This will also clarify why the flies are homozygous viable but female sterile.

      Referee cross-commenting

      I agree with the assessment of the other two reviewers. I think reviewer 3 point of "the overlap between genes that change expression in a stil mutant and those in which the DamID experiments indicate are directly bound by Stil" is important and needs to be addressed.

      Significance

      This study provides important insight into how germline cells in Drosophila evade apoptosis through both transcriptional and chromatin-based regulation. While reaper is a well-known effector of apoptosis, the identification of stil as a direct repressor in the female germline adds a new layer of cell type-specific control. The authors also delineate an epigenetic mechanism that protects undifferentiated germline cells, highlighting stage-specific differences in apoptotic susceptibility. This dual mechanism is conceptually significant and expands our understanding of how cell survival is maintained during gametogenesis. However, the precise novelty of stil relative to other rpr regulators could be articulated more clearly, and some data interpretations would benefit from additional clarification.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      Summary: This work by Matsui et al. examined the function of the gene stand still (stil) in Drosophila in the regulation of germ cell death in the female germline. They show that stil mutants contain many apoptotic cells, leading to germ cell loss and infertility. Gene expression analysis showed upregulation of pro-apoptotic genes such as rpr in the stil mutant. A DamID experiment further showed that Stil binds to the rpr promoter region to repress its expression. Additionally, they also show that undifferentiated germ cells are resistant to cell death in the stil mutant (although the stil mutant still eventually loses all germ cells).

      Major comments: Overall, the experiments adhere to a general standard of rigor, and each result is fairly convincing. In that sense, this paper warrants publication as a paper that reveals a new gene important for preventing germ cell death. With that said, I feel that this paper does not reveal a new biological insight. In a nutshell, this paper is about a transcriptional repressor of a pro-apoptotic gene, hence its depletion leads to cell death. The data are solid and the conclusion is well supported. But readers will be left wondering why nature implemented such control. Unless one can show what kind of defects the stil rpr double mutant (which rescues the germ cell loss phenotype) exhibits, there is no insight into why the balance of a pro-apoptotic gene and its repressor is important. The paper discusses the 'molecular' mechanisms that explain the phenomenon, but it does not provide insights. The lack of conceptual advancement is the limitation of this work.

      Minor comments: Although this is a minor point, and this is not specifically pointing a finger at the author of this paper, I really don't like the term 'safeguard'. This term is now overutilized to add hype to papers, when 'is necessary' is sufficient. In this case, unless the answer is provided as to 'against what stil is safeguarding germ cells', this term is not meaningful. For example, if one can show that stil specifically senses germline-specific threat and tweaks the regular apoptotic pathway based on germline-specific needs, then the term 'safeguard' may be warranted.

      Referee cross-commenting

      I also agree with other reviewers.

      Significance

      As I summarized above, as is, this manuscript's impact is limited to identifying a gene that is required to prevent germ cell death.

    1. Members of neutral cultures do not telegraph their feelings, but keep them carefully controlled and subdued

      Defines emotion control within communication — particularly in Japan, U.K., etc.

    Annotators

    1. A region is an area that shares some sort of common characteristic that binds the area into a whole.

      This reminded me of different regions of Alaska. It's similar to how Southeast Alaska is wildly different from northern Alaska. Southeast Alaska has a wetter and temperate climate, whereas Northern Alaska has a much colder and drier climate. Both of these regions are part of Alaska and share similar characteristics, but ultimately are vastly different.

    Tags

    Annotators

    1. One of Synthesizer's most complex tasks is tracking overlapping memory writes:

      This is the second important part. In which cases is this aliasing resolution required? "Overlapping" is just one example, not the only case.

      Example 1: Suppose that MSTORE is going to store a DataPt "X" (32 bytes) in memory at offset 0x03. After some time has passed, MLOAD loads a 32-byte memory value at offset 0x00 onto the stack; call this "Y". Suppose there has been no "overlapping" write in the meantime. Do you think the returned stack value "Y" is still the same as "X" even if there was no overlapping?
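      A minimal sketch of this scenario (assuming standard EVM byte-addressable memory semantics; the helper names `mstore`/`mload` are illustrative, not the Synthesizer's actual API) shows that Y differs from X even without an intervening overlapping write, because the two 32-byte windows themselves only partially overlap:

      ```python
      # Simulate EVM-style memory: MSTORE at 0x03, then MLOAD at 0x00.
      def mstore(memory: bytearray, offset: int, value: bytes) -> None:
          """Store a 32-byte word at the given byte offset (MSTORE semantics)."""
          assert len(value) == 32
          memory[offset:offset + 32] = value

      def mload(memory: bytearray, offset: int) -> bytes:
          """Load the 32-byte word starting at the given byte offset (MLOAD)."""
          return bytes(memory[offset:offset + 32])

      memory = bytearray(64)        # zero-initialized memory
      x = bytes(range(1, 33))       # the 32-byte word "X": 0x01 .. 0x20
      mstore(memory, 0x03, x)       # MSTORE at offset 0x03

      y = mload(memory, 0x00)       # MLOAD at offset 0x00
      # Y != X: the first 3 bytes of Y are still zero, and the last
      # 3 bytes of X fall outside the loaded window.
      assert y != x
      assert y[:3] == b"\x00\x00\x00"
      assert y[3:] == x[:29]
      ```

      So the Synthesizer must resolve this shifted read even though no later write "overlapped" the stored word.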

      Example 2: In general, Calldata can be much longer than 32 bytes. So whenever the EVM is going to load a specific function input argument "Y" onto the stack, it chunks the Calldata.

      It's quite tricky for the Synthesizer to shadow this, since DataPts cannot deal with words greater than 32 bytes! The current version of the Synthesizer avoids solving this problem: it simply takes the resulting chunk made by the EVM as an Oracle. The next version, currently in development, will solve this fundamentally: it will create another virtual MemoryPt dedicated to Calldata and store DataPts for the function selector and function arguments there—this process is the reverse of resolving aliasing.

      Please see this code for dealing with "CALLDATALOAD".
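      For reference, the chunking behavior described above can be sketched as follows (assuming standard CALLDATALOAD semantics with zero-padding past the end of calldata; the selector and argument values are made-up examples, not taken from the Synthesizer):

      ```python
      # CALLDATALOAD returns the 32-byte word starting at a byte offset,
      # zero-padded if it runs past the end of calldata.
      def calldataload(calldata: bytes, offset: int) -> bytes:
          word = calldata[offset:offset + 32]
          return word + b"\x00" * (32 - len(word))   # zero-pad per EVM semantics

      # ABI layout: 4-byte function selector, then 32-byte arguments.
      selector = bytes.fromhex("a9059cbb")           # transfer(address,uint256)
      arg0 = (7).to_bytes(32, "big")
      arg1 = (42).to_bytes(32, "big")
      calldata = selector + arg0 + arg1

      assert calldataload(calldata, 4) == arg0       # first argument
      assert calldataload(calldata, 36) == arg1      # second argument
      assert calldataload(calldata, 68) == b"\x00" * 32   # past the end: zeros
      ```

      Each load cuts a 32-byte chunk out of a longer byte string, which is why a DataPt-per-word representation cannot shadow Calldata directly.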

    2. Important: The RPC connection remains active throughout execution, not just during initialization. When the EVM encounters SLOAD, BALANCE, EXTCODESIZE, etc., it queries the blockchain state through RPC in real-time.

      This is true for the moment, but in future updates it will be changed so that it no longer does this.

    1. spent my early years in a large, single-family suburban home, the crown jewel of my immigrant parents' American Dream. I have fond memories of playing and raking leaves in the big backyard. Then my parents divorced, and my mother and I moved to a smaller townhouse. I eventually came to appreciate the house's more manageable size. When I moved to the city for college and work, I lived in even smaller apartments. I fell in love with the lively, walkable urban life and the freedom of not having to worry about a car or a long commute. The people around me make it feel like I'm part of something bigger than the walls of my home

      Suburbia House: "The immigrant dream" is it the top priority? Many seem to like the town houses as it is more walkable and accessible, but some want the space for children.

    2. My view of the American Dream hasn't changed, but my view of how the American Dream meets our needs has changed as we've aged

      The American Dream does not fit the needs of people

    3. We want to move, but we cannot afford it. Our real estate agent daughter says she could sell our home easily and at a profit. It still would not be enough. Affordable fourplexes, duplexes or one-story townhouses

      Housing market chaos

    Annotators

    1. Don’t simply copy the designs you find in your research. The competitors may not be using best practices. Instead, be inspired by the solutions found in your research and adapt the solutions to fit your brand, product, and users.

      This is a good reminder to prioritize design principles and user needs over aesthetics. It’s easy to copy features you like, but taking time to consider why each element is necessary is valuable practice. Going a step further by adapting solutions rather than simply copying them can also strengthen your design skills.

  5. sk-sagepub-com.offcampus.lib.washington.edu sk-sagepub-com.offcampus.lib.washington.edu
    1. Von Franz suggests that the anima has a positive side, however, that enables men to do such things as find the right marriage partners and explore their inner values, leading them to more profound insights into their own psyches. The animus functions in much the same way for women. It is formed, von Franz suggests, essentially by the woman’s father, and can have positive and negative influences. It can lead to coldness, obstinacy, and hypercritical behavior, but, conversely, it can help a woman to develop inner strength, to take an enterprising approach to life, and to relate to men in positive ways.

      We must stop shaming men for showing their anima side and stop shaming women for showing their animus side.

    2. Texts that feature the police or have religious messages are obviously superego texts.

      Interesting that this author says these messages are always superego texts. I'd say religion mostly is, but police? well....

    1. A thesis is not your paper’s topic, but rather your interpretation of the question or subject.

      It's good to know that it will be my own interpretation vs a perfect 100% factual piece of paper. Knowing that there is room for my own interpretation is refreshing.

    1. If you can’t find it, say, “I looked but couldn’t find it”, instead of “You didn’t include one.” Both may mean the same thing, but the former sounds less aggressive and accusatory, and the reason for that is that you state that you as the reader tried to accomplish the given task of finding the thesis statement.

      This is a good example of ways not to "call out" your classmates.

  6. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. Students from poor families need to be told this, and more, they need to be made to believe it.

      Teaching is not merely about imparting knowledge; it is about “helping students believe they deserve respect.” Students from disadvantaged backgrounds are often told by society that “you're not good enough,” and it is the teacher's role to challenge this narrative. Education should shift from being an “academic tool” to embodying “humanistic care.” True educational equality does not mean “providing identical resources,” but rather offering the same belief and dignity. A single word from a teacher, a moment of genuine attention, can become the starting point for a student to rebuild their confidence.

    2. When I started school, I soon learned that being poor might mean both the things I thought it did and also something much, much worse: It meant that I was inferior to those who were not poor; I was less than. It's a terrible feeling to become aware at an early age that not having money somehow means that you are less deserving in the classroom than students who are more privileged, that you are less deserving of a teacher's attention or praise, that you are less deserving of good grades, that your financial shortcomings indicate that your parents have failed in some way.

      This sentence marks the pivotal turning point in the entire text—the author's first realization that “poverty” is not merely an economic condition but also a social identity. School taught him not only knowledge but also an invisible “hidden curriculum”: that the poor are “second-class.” This awareness did not stem from direct instruction but from peer exclusion, teacher indifference, and society's unspoken norms. Teachers must remain vigilant against “silent discrimination in the classroom”—such as judgments based on clothing, homework, or parental involvement—which can make students feel evaluated or marginalized. Education should convey dignity and equality, not inadvertently replicate societal inequalities.

    3. However, elementary teachers have an impact on the future of student achievement that reaches beyond the classroom.

      It's interesting how people so early on in our lives play a huge role in our future achievement. I see this being true in many aspects. Hearing my teachers talk about their higher education experience inspired me to do the same. Not only do educators play a huge role in influencing students for higher education, but often times help create a welcoming and safe environment for students who don't feel safe anywhere else. Because of this I strongly believe elementary school and its educators are what shape student success.

    4. In fact, the biggest downside to being poor was that my mom and dad had to work really hard.

      I have always wondered what the implications are for parents who work all the time and have little time to see their kids. I have seen this on both sides: low-income students whose parents work a lot to survive, and well-off students whose parents are always on business trips and rarely see their kids. While both are done for different reasons, I think the experience of the kids is similar emotionally. The lack of a parent figure in the home leads many to become independent. While the experiences can be different, I find many of the students in similar situations are able to find common ground in how they felt, but plan to go about their lives in different ways. Low-income students often want a well-paying job that allows them to see their kids, while wealthier students follow a similar path made by their parents.

    1. An evaluation judges the value of something and determines its worth. Evaluations in everyday experiences are often dictated by both set standards but are also influenced by opinion and prior knowledge.

      When is it okay to use your own opinion when evaluating paragraphs? When you have enough knowledge on a certain topic? Or using a specific criteria?

    1. As Lightning to the Children eased With explanation kind The Truth must dazzle gradually Or every man be blind

      The poem likens truth to lightning: sudden, overwhelming. Like a child learns the meaning of lightning bit by bit, truth must be delivered gently. Otherwise, the shock is too much

    2. Tell all the truth but tell it slant — Success in Circuit lies Too bright for our infirm Delight The Truth's superb surprise

      Dickinson tells us to reveal the whole truth, but indirectly. The truth, she suggests, is too brilliant, "too bright for our infirm delight".

  7. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. Over and over and over again in school I had been cued both verbally and nonverbally that I was poor. I wasn't good enough, I didn't have enough, and what I had was the wrong thing. School projects, holidays, extracurricular activities, and field trips would send a surge of panic through our house because they were yet another expense. There are other curricula besides the one being verbalized.

      After becoming a teacher, the author realized that schools not only teach the explicit academic curriculum but also impart a “hidden curriculum” through unspoken language, activities, and attitudes—a curriculum that teaches children their place in society. The phrase “More is caught than taught” reveals the unconscious biases in education: students “learn” inferiority through being ignored, compared, and pitied—a lesson far more profound than any textbook knowledge.

    2. My egg was spectacular, and I was thrilled to carry it proudly into school the next day. And that's when I saw the other eggs. Danny's egg was dressed exactly like Abraham Lincoln. It had a top hat and a black jacket with a white shirt and stiff paper collar. Its face was painted like a china doll, and it had real hair that had been liberated from a curly-haired sister for a beard and moustache. It had its own little stand. It looked presidential.

      The author once took pride in his own efforts, yet through comparison came to feel the shame brought by poverty. His homemade “flag eggs” symbolized innocent patriotism and dreams of equality, but the reality of competition exposed class divisions—who could afford better materials, who received parental help, determined “whose work would be admired.” Many school family events suffer from this issue: ostensibly a contest of creativity, they essentially become displays of economic resources. When school activities overlook students' economic disparities, they often inadvertently “put poverty on public display.” Educators must rethink evaluation criteria to prevent classrooms from becoming places of humiliation.

    3. "You're poor, White trash," Danny hissed as he sashayed by me on the dusty, pebble-filled playground at first recess. I started to cry, and I remember that Phillip laughed and said, "He's crying like someone just threw dirt in his eyes." And that's exactly what it felt like being told you're poor without being ready for it. I had no idea-absolutely no inkling whatsoever-that I'd spent the last eight years in poverty

      This is the emotional climax of the entire piece, the moment when the author's “awareness of poverty” was awakened. Before this, he lived in the natural mountains and forests, utterly oblivious to economic status; but a single insult from a classmate, like a mirror reflecting society, made him ‘learn’ the meaning of poverty for the first time through others' eyes. The playground, the dust, and the tears here are not merely childhood memories; they symbolize the process by which the poor are “labeled” by society. Schools, meant to be sanctuaries of learning, instead become extensions of societal class prejudice. Peer language wields immense power in shaping children's self-perception. Education should foster self-respect and equality, not “educate children into poverty” through peer discrimination.

    4. And I learned fast that making Father's Day cards was awful. I made them silently, then obediently took them home and gave them to my bewildered mother.

      Although I was always fond of art projects in preparation for mother and father's day, I always felt bad for the students who did not have one or the other. While it is important to teach young students the importance of appreciation and giving gifts during an important day, it can make others feel uncomfortable and secluded from the rest. During my time working on such crafts, I never realized there could be students next to me who were participating just to fit in, but had no one to give the gift to.

    1. Some keep the Sabbath in Surplice – I, just wear my Wings – And instead of tolling the Bell, for Church, Our little Sexton – sings.

      Here, Emily keeps her tone playful but quietly defiant. In the video, the curator mentions Dickinson's strength in creating her own language of beauty and belief: how she turned small private moments into acts of spiritual freedom.

    1. And you that shall cross from shore to shore years hence are more to me, and more in my meditations, than you might suppose.

      This is a powerful sentence. You're not only speaking to present readers but to future people that will cross the ferry as well. You suggest that you feel connected across time.

    1. I lean and loafe at my ease observing a spear of summer grass.

      The verb “loafe” is wonderful. It's not idle in a lazy sense only, but a kind of active repose, observing the world. You recline, but you pay attention.

  8. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. Poor children often breathe contaminated air and drink impure water. Their households are more crowded, noisy, and physically deteriorated, and they contain a greater number of safety hazards

      Poverty exists not only in economic data but also manifests in the concrete environments of daily life. The settings where impoverished children live are filled with pollution, noise, and hazards—all of which directly or indirectly impact their physical health and ability to focus on learning. Poor air quality can lead to respiratory issues, increasing absenteeism; noise and crowded conditions make it difficult for children to concentrate on their studies. When evaluating student performance, educators must account for the invisible factor of “environmental stressors.” Distractions, fatigue, and slow responses may not stem from attitude issues but rather from excessive cognitive load imposed by their surroundings.

    2. I define poverty as a chronic and debilitating condition that results from multiple adverse synergistic risk factors and affects the mind, body, and soul.

      This sentence is the core definition of the entire chapter. Jensen emphasizes that poverty is not merely an economic phenomenon, but a state that chronically undermines an individual's overall capacity, simultaneously affecting psychological, physical, and spiritual well-being. Traditional educational perspectives often reduce “poverty” to a lack of money or resources. When working with low-income students, educators must go beyond providing “material assistance” and also address emotional support, self-efficacy, and social belonging. Truly helping students from impoverished backgrounds requires a holistic perspective—one that builds security and trust through emotional, social, and cognitive dimensions.

    3. Many nonminority or middle-class teachers cannot understand why children from poor backgrounds act the way they do at school.

      I don't think it is necessarily their fault that they are unaware of others' situations. There is a lack of awareness of, and a lack of emphasis on, educating others about the financial hardships many face. Many are oblivious to the fact that someone could be living in poverty because they have never encountered someone in that situation. It is unfair to judge those who are unaware; rather, it's important to focus on the education system, which is meant to inform students but often fails to do its job.

    4. In reality, the cost of living varies dramatically based on geography; for example, people classified as poor in San Francisco might not feel as poor if they lived in Clay County, Kentucky.

      This is a reality for many people, especially across California, one of the most expensive states to live in. However, I don't believe this accounts for changes in income. Yes, if someone living in California were to move to a state like Kentucky, they'd likely have the financial stability to live well. But as time goes on and people settle in, they'll be forced to work for the wages within that state, not California's. This can potentially cause financial hardship because of the sudden change in lifestyle. Although it could be beneficial to move out of state, it is important to be cautious.

  9. www.tripleeframework.com www.tripleeframework.com
    1. The Triple E Framework is meant to be used as a coaching tool to support teachers in their instructional choices around and with technology tools.

      I like the idea of this framework acting as a coaching tool to help teachers, because teachers are often left on their own to navigate new tools and don't always have a way to tell whether the tech they find will hold real value in their classroom. As the text says, this should help teachers decide: is this tool just "flashy and new," or is it going to hold actual value in my classroom? I know many tools that I felt were going to do well, but when I put them into actual practice they often fell short of expectations. Having a tool that could have helped me reach that conclusion before introducing them to my class would have been helpful.

    1. Lyra Hale. New Book Says Facebook Employees Abused Access to Track and Stalk Women. The Mary Sue, July 2021. URL: https://www.themarysue.com/facebook-employees-abused-access-target-women/ (visited on 2023-12-06).

      This article didn't really surprise me. We know Facebook was formed as a Tinder-style sexual rating system, so information of women being exploited by Facebook employees isn't unexpected from this company. This article brought up two specific examples of men using their power as Facebook employees to track the locations of women in real time, but also notes 52 total employees being fired for abusing their access to users' information.

    2. Lyra Hale. New Book Says Facebook Employees Abused Access to Track and Stalk Women. The Mary Sue, July 2021. URL:

      We usually hand over our data to technology companies trustingly, but often overlook the fact that these companies are also composed of people, and some people will inevitably have ill intentions. This calls for higher-level institutions, such as the government, to supervise these companies in order to protect people's rights. When such incidents occur, it indicates a significant lack of regulatory oversight in this field, and that is an urgent problem that needs to be addressed now.

    3. Jacob Kastrenakes. Facebook stored millions of Instagram passwords in plain text. The Verge, April 2019. URL: https://www.theverge.com/2019/4/18/18485599/facebook-instagram-passwords-plain-text-millions-users (visited on 2023-12-06).

      This article documents that Facebook stored users' passwords in a way that made them accessible to around 200,000 employees. I don't fear my information being "leaked," perhaps because I lack assets desirable enough to target, though that could just be a lack of understanding on my part; still, this did concern me.

    4. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      I read this report from NPR titled "After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users". I was really shocked. The data of 530 million people was leaked, and yet the company chose not to notify the users? This made me realize that big companies claim to "value privacy", but when problems occur, their first reaction is usually to protect themselves rather than protect the users. After reading it, I will be more cautious about the personal information I put on social media. After all, sometimes "the sense of security" is just an illusion.

    1. Amazon Plans to Replace More Than Half a Million Jobs With Robots
      • Internal documents reviewed by The New York Times show Amazon plans to automate up to 75% of its operations in the coming years.
      • The company expects automation to replace or eliminate over 500,000 U.S. jobs by 2033, primarily in warehouses and fulfillment centers.
      • By 2027, automation could allow Amazon to avoid hiring around 160,000 new workers, saving about 30 cents per package shipped.
      • This strategy is projected to save $12.6 billion in labor costs between 2025 and 2027.
      • Amazon’s workforce tripled since 2018 to approximately 1.2 million U.S. employees, but automation is expected to stabilize or reduce future headcount despite rising sales.
      • Executives presented to the board that automation could let the company double sales volume by 2033 without needing additional hires.
      • Amazon’s Shreveport, Louisiana warehouse serves as the model for the future: it operates with 25% fewer workers and about 1,000 robots.
      • A new facility in Virginia Beach and retrofitted older ones like Stone Mountain, Georgia, are following this design, which may shift employment toward more temporary and technical roles.
      • The company is instructing staff to use softer language—such as “advanced technology” or “cobots” (collaborative robots)—instead of terms like “AI” or “robots,” to ease concerns about job loss.
      • Amazon has begun planning community outreach initiatives (parades, local events) to offset the reputational risks of large-scale automation.
      • The company has denied that the documents represent official policy, claiming they reflect the views of one internal group, and emphasized ongoing seasonal hiring (250,000 roles for holidays).
      • Analysts suggest this plan could serve as a blueprint for other major employers, including Walmart and UPS, potentially reshaping U.S. blue‑collar job markets.
      • The automation push continues a trajectory started with Amazon’s $775 million acquisition of Kiva Systems in 2012, which introduced mobile warehouse robots that revolutionized internal logistics.
      • Recent innovations include robots like Blue Jay, Vulcan, and Proteus, aimed at performing tasks such as sorting, picking, and packaging with minimal human oversight.
      • Long-term, Amazon may require fewer warehouse workers but more robot technicians and engineers, signaling a broader shift in labor type rather than total employment.
    1. Science has a sexism problem. Women’s research is often cited less than men’s, even when it’s just as good

      I feel like this idea is brought up in a lot of conversations but people don't actually know how big of a problem this is. We miss out on important perspectives that couldn't have been found otherwise.

    2. For a long time, science was seen as purely objective—free from bias or personal influence. But feminist thinkers challenged that idea. They pointed out that science has historically been dominated by men, and that this shaped what questions were asked, how studies were designed, and whose experiences were ignored.

      Allowing more diverse and unique practices and theories into science allowed it to grow beyond its rigid ways. This opened the doors to a "higher ceiling" of scientific thought.

    3. truth was possible—but only if people worked together to get closer to it.

      This is important because it shows how science is a collective team effort, not an individual one. It requires cooperation between individuals to bring up the best results.

  10. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. For example, the proper security practice for storing user passwords is to use a special individual encryption process [i6] for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time [i7]).

      This section explains password storage well, but it should explicitly separate encryption from hashing: sites should store passwords with a salted, slow hash (e.g., bcrypt and Argon2), not reversible encryption. Reversible schemes mean one leaked key exposes all passwords; slow hashing makes credential-stuffing economically painful. Minimal user practice: password manager + unique long passwords + TOTP or hardware-key 2FA.
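      The salted, slow-hashing scheme described above can be sketched with Python's standard library alone. This is an illustrative sketch, not the site's actual implementation: the function names (`hash_password`, `verify_password`) and the PBKDF2 parameters are my own choices for the example; production systems would typically reach for bcrypt or Argon2 instead.

      ```python
      import hashlib
      import hmac
      import os

      def hash_password(password, salt=None, iterations=600_000):
          # A unique random salt per user means two identical passwords
          # produce different hashes, so the database can't even tell
          # whether two people chose the same password.
          salt = salt if salt is not None else os.urandom(16)
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
          return salt, digest

      def verify_password(password, salt, stored, iterations=600_000):
          # Re-derive the hash from the guess; the stored value is never
          # "decrypted" — it can only be confirmed, which is why attackers
          # must resort to brute-force guessing.
          candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
          # Constant-time comparison avoids timing side channels.
          return hmac.compare_digest(candidate, stored)

      salt, stored = hash_password("correct horse battery staple")
      assert verify_password("correct horse battery staple", salt, stored)
      assert not verify_password("wrong guess", salt, stored)
      ```

      The high iteration count is what makes each brute-force guess "take a lot of time," as the quoted passage puts it: the cost is negligible for one legitimate login but ruinous across billions of guesses.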

    2. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely.

      Not only social platforms, but also the security of the online space has risen to a national level. In recent years, with the development of the Internet, more and more information has been digitized. Even some confidential data is stored in databases. Although these databases are usually very secure, they will be the first targets to be attacked once a war breaks out. For example, before the war between Russia and Ukraine, the Russian Cyber Security Department had already begun to attack some official websites of Ukraine. Therefore, today, cyber security in the network space has become particularly important.

    3. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users

      I could just read the article, but I'll do that later. Basically, I've always been confused about what happens when hackers release the passwords of a bunch of users of a website or similar. Not so much how they do it (I still don't know how), but more so how they share that information. Like, do they just share the passwords without their respective users? In that case it wouldn't be absolutely terrible, since you still wouldn't know which password is for which account, but a smart hacker could maybe use a bot to try each of the 153 million passwords on one account (it would still take ages, but at least you have a finite number of passwords to try). Or do hackers put up all the passwords along with the users in a massive spreadsheet? That would make sense; you could just look up an account to hack and hack it easily. But do they share this on public platforms like Reddit? Do they share it directly with each other? Do they post it in some sort of evil dark web place? I'll find out, I guess.

    4. Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women [i10]

      I find it interesting that this article was released in 2021, but the incidents cited took place in 2015. Does this mean these incidents in which Facebook employees abused their power were not open to the public for 6 years?

    5. But social media companies often fail at keeping our information secure.

      I can see how this is true. Meta selling information to ad agencies can easily lead to leaks and scams from other sources.

    6. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      I think the sentence "Although we are concerned about information privacy, we often share information with social media platforms and believe they will safely keep these details" is particularly true. We always say we want to protect privacy, but in reality, we still readily click "agree" and hand over our information without hesitation. Seeing this sentence made me reflect a bit - perhaps we have become too accustomed to convenience, and thus have overlooked the aspect of security.

    1. Ask yourself and others in your program the following: 1. Is the policy practical? 2. Is the policy age-appropriate for all the children you care for and for your environment? 3. Will center-based staff (or family child care assistant, if the program is family child care) be able to incorporate the policy and procedures into the daily operations of the program? What training may they need? 4. Is the information in the policy accessible and easy to use? 5. Does the policy do what it's intended to do regarding the children's health and safety?

      I think I will share this with the others on my teaching team - They are veteran teachers but the way this text puts things plainly and sets out to clearly identify a guidance plan to turn to when challenging behavior presents itself is important.

    2. Separate the child from the environment, but have the child remain within the teacher/provider's immediate and direct supervision until the child is able to regain self-control and re-join the group; • Have the teacher/provider place him/herself in close proximity to the child until the child is able to regain self-control when the child cannot be removed from the environment. In this instance, the teacher/provider must also remove anything within the child's immediate reach that is a potential danger to the child or others. • If necessary, the teacher/provider may use another adult to support and assist in calming the child until the child is able to regain self-control. • Talk calmly to the child; this is always appropriate.

      I like these ways of responding - the child is not separated or singled out but provided additional support, as most young children need when challenging behavior is presenting itself. We do most of these things in our classroom. Although there is one child who benefits from a hug, and we ask him to verbally request that, to respect his needs, and only give one when he requests it.

    3. Supportive holding of children should be considered only in the following situations:• The child’s safety is at risk;• The safety of other children or adults is at risk;• The child must be moved in order to be safely supervised;• The child demonstrates a sustained behavior that is highly disruptive and/or upsettingto other children necessitating moving the child.

      Personally, I keep physical contact to a minimum when a child is upset because it can sometimes escalate the situation, but these guidelines are good to know to safeguard our team.

    4. The Department of Early Education and Care supports the tremendous work that is done each day in child care centers, school age programs and family child care homes. It's your hard work and efforts that make child care programs and family child care homes safe, caring environments where children can grow, discover, play and learn.

      I have never read this article with my teaching group, but I can already tell it will give us a plethora of useful information to help us on this teaching journey. At the pre-K age, self-control is really non-existent, especially if the basis for those skills isn't being enforced at home. It's a challenge at this age because they are VERY cute, and I'm just the teacher, so I can imagine the challenge of establishing boundaries and teaching self-control at home is a large one.

    5. Ask anyone and they will tell you that helping children develop self-control is an enormous challenge and responsibility.

      I totally agree - it's a very tall order. Teaching children self-control in a classroom is very different from teaching children when you are babysitting or raising your own. I do believe that, because I do not yet have children, the techniques I am learning to be effective will also change the way I parent. But this skill is very hard to teach in a positive way.

  11. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. When we use social media platforms though, we at least partially give up some of our privacy.

      Relating back to this, I agree, but I also think most of your privacy gets taken away when you enter social media platforms. From what I have seen, people who post lifestyle content often get doxxed by some random unemployed person who has too much time on their hands and slows every part of a video down until they find what they need. So I think we lose our privacy both to the media platforms and to the people online.

    2. For example, a social media application might offer us a way of “Private Messaging” [i1] (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      This makes me really think about how even things we consider "private", like our private messages, really aren't. These social media companies have access to pretty much all our activity online and on their apps. For example, when you allow apps like Instagram and TikTok to access your photos and videos, they can pretty much see everything in your camera roll. This also makes me wonder if these companies could use this as blackmail against important people like celebrities and politicians, people who influence our world.

    1. C

      I like that each of the center buttons open in a new window, but opening in a new window without flagging that is getting us dinged in Silktide. Do we need to add an "open in new window" icon next to the text on anything that will do that?

    1. Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example: If you send “private” messages on a work system, your boss might be able to read them [i19]. When Elon Musk purchased Twitter, he also was purchasing access to all Twitter Direct Messages [i20]
       Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much [i21]
       Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, former tech CEO John McAfee was a suspect in a murder in Belize [i22], and hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with the exact location in Guatemala [i23].
       Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings data sets, but at least some users’ data could be traced back to them [i24].
       Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles [i25], which are information about the user that the user didn’t provide or consent to
       Non-User Information: Social Media sites migh

      This section makes me think that on the internet nowadays, there's absolutely no way to keep your information to yourself. People's information is held by so many different companies, and users don't know how their information is being used either. Users have no control over their own privacy, even though it's information about themselves.

    1. A larger clinical trial (NCT04154943) for patients with CSCC is in progress to validate these findings.

      v1.1 Update

      Since guideline publication, this phase II study assessing neoadjuvant cemiplimab reported a pCR rate of 51% and a major pathologic response rate of 13% in 70 patients with resectable stage II, III, or IV (M0) CSCC. Notably, an additional 9 patients were treated but did not undergo surgery. (This regimen was not FDA-approved at the time of update v1.1). [Ref 179]

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Point-by-Point Response to Reviewers for Manuscript #RC-2024-02720

      Manuscript Title: Molecular and Neural Circuit Mechanisms Underlying Sexual Experience-dependent Long-Term Memory in Drosophila.

      Corresponding Author: Woo Jae Kim

      We extend our sincere gratitude to the Managing Editor and both reviewers for their diligent and insightful evaluation of our manuscript. The comprehensive feedback provided has been invaluable, guiding us to significantly strengthen the manuscript's scientific rigor, logical cohesion, and overall impact. We have undertaken a substantial revision, incorporating new experimental evidence, reframing the central narrative, and improving data presentation to address all concerns raised.

      The major revisions include:

      1. New Experimental Evidence: We have performed three new sets of experiments to address key questions raised by the reviewers. First, we used the protein synthesis inhibitor cycloheximide to pharmacologically validate that the observed memory is indeed a form of long-term memory (LTM). Then, we performed genetic intersectional analyses to determine if the identified Yuelao (YL) neurons express the canonical sex-determination transcription factors doublesex (dsx) and fruitless (fru).
      2. Narrative Reframing and Logical Restructuring: We fully agree with the reviewers that the logic of the original manuscript was confusing, particularly regarding the distinction between the broad Mushroom Body (MB) Kenyon Cell (KC) population and the specific YL neurons. The manuscript has been extensively rewritten to present a clear, hypothesis-driven narrative. We now frame the initial KC-related findings as part of a broader screening effort that logically led to the identification and focused investigation of the YL neuron circuit.
      3. Refined Central Claim: Guided by the reviewers' feedback and our new data, we have sharpened our central claim. We now propose that YL neurons constitute a critical circuit for forming attractive taste- and pheromone-based memories derived from Gr5a neuronal inputs. This form of appetitive memory is distinct from the previously characterized internal reward state associated with ejaculation, adding a new layer to our understanding of how male flies remember and evaluate reproductive experiences.
      4. Improved Data Quality and Analysis: In response to valid critiques, all imaging figures have been replaced with high-resolution versions. Furthermore, our methods for fluorescence quantification, particularly for the TRIC calcium imaging experiments, have been corrected to include normalization against an internal reference channel, adhering to established best practices. All requested genetic control experiments have been performed. We are confident that these comprehensive revisions have fully addressed all concerns and have transformed our manuscript into a much stronger, more focused, and logically sound contribution. We thank you again for the opportunity to improve our work and look forward to your evaluation of the revised manuscript.

      Responses to Reviewer #1

      General Comments: This study explores the molecular and neural circuitry mechanisms underlying sexual experience-dependent long-term memory (SELTM) in male Drosophila. The authors use behavioral, imaging, and bioinformatics approaches to identify YL neurons, a subset of mushroom body (MB) projecting neurons, as crucial for SELTM formation. They propose that YL neurons receive inputs from WG neurons via the sNPF-sNPFR pathway and implicate molecular players such as orb2, fmr1, MDAR2-CaMK, and synaptic plasticity in their function.

      However, the evidence presented does not adequately support the authors' claims. The data fail to cohesively tell a logical story, and key conclusions appear to be based on assumptions and correlations rather than robust evidence.

      • Answer: We are deeply grateful to both reviewers for their thorough and constructive evaluation of our manuscript. Their collective feedback has been instrumental in helping us to clarify the study's rationale, strengthen our interpretations, and significantly improve the overall quality and impact of the work. We appreciate the recognition of our study's potential to advance the understanding of how sexual experience modifies future mating behaviors and to elucidate the neuronal and molecular mechanisms of how memory regulates a key sexual behavior in male Drosophila.

      • In response to the general comments, we have undertaken a major revision of the manuscript to improve the clarity, logic, and presentation. We have rewritten the Abstract and Introduction to more clearly define "sexual experience-dependent long-term memory" (SELTM) and articulate its significance in the context of adaptive decision-making and interval timing. The entire manuscript has been restructured to present a more logical, hypothesis-driven narrative that clearly distinguishes our initial broad screening from the focused investigation of the YL neuron circuit. We have also incorporated alternative interpretations of our data, particularly regarding the role of the YL circuit in regulating baseline mating duration in naive males, which has added more depth to the study. Finally, all figures have been remade in high resolution, and all requested genetic controls and methodological clarifications have been added to ensure rigor and reproducibility. We are confident that these revisions have addressed the reviewers' concerns and have resulted in a much stronger manuscript.

      Comment 1: The study identifies the knowledge gap (lines 103-104) but fails to integrate relevant literature, particularly Shohat-Ophir et al., Science (2012), and Zer-Krispil et al., Curr Biol (2018). These studies established that ejaculation induces appetitive memory in male Drosophila via corazonin and NPF neurons. The current study does not provide direct evidence that the "act of mating itself" drives SELTM, as it includes both courtship and copulation.

      Response: Thank you for highlighting these two landmark studies. We fully agree that Shohat-Ophir et al., Science (2012) and Zer-Krispil et al., Curr Biol (2018) were pivotal in demonstrating that ejaculation—and the accompanying corazonin/NPF signalling—can establish an appetitive memory in males.

      In the revised manuscript we have now integrated both papers on lines 111-118:

      “Previous work has shown that successful copulation is intrinsically rewarding to male Drosophila: a single mating encounter elevates brain neuropeptide F (NPF) levels and suppresses subsequent ethanol preference19. Importantly, Zer-Krispil et al. further demonstrated that ejaculation itself—artificially induced by optogenetic activation of corazonin (Crz) neurons—is sufficient to mimic this reward state, driving appetitive memory formation and up-regulation of NPF. These findings indicate that the act of ejaculation, rather than the entire courtship sequence, is the critical sensory event that gates post-mating reward.”

      Comment 2: The nature of the observed long-lasting reduced mating duration requires clearer characterization: Is this an associative memory or experience-dependent behavioral plasticity? Can the formation of this long-term memory be blocked by protein synthesis inhibitors, such as cycloheximide?

      Response: We thank the reviewer for this excellent suggestion to pharmacologically characterize the nature of the memory. To definitively test whether the observed SMD is a form of protein synthesis-dependent long-term memory (LTM), we performed a new experiment as suggested.

      We have now included data in new Figure supplement 1I showing that feeding males the protein synthesis inhibitor cycloheximide (CXM) for 24 hours immediately following the sexual experience completely blocks the formation of the long-lasting SMD phenotype. Control flies fed a vehicle solution exhibited robust SMD. This result provides strong evidence that SELTM is not merely a form of transient behavioral plasticity but is a genuine form of LTM that requires de novo protein synthesis for its consolidation, a hallmark of LTM across species.[1]

      The revised text was put on lines 173-176:

      " To determine whether the persistent reduction in mating duration (SMD) depends on de-novo protein synthesis, we fed males the translational inhibitor cycloheximide (CXM). Under this regimen, CXM completely abolished the SMD phenotype (Fig. 1I)."

      Comment 3: While schematics illustrate the working hypotheses, the text lacks detailed explanations, leaving the reader unclear about the rationale behind certain conclusions.

      Response: Thank you very much for this insightful comment. We fully agree that the original manuscript did not provide sufficient textual justification for the conclusions derived from the schematics. In the revised version we have therefore added comprehensive explanations immediately following each figure (or schematic) that explicitly state the underlying rationale, the key observations supporting our hypotheses, and the logical steps leading to each conclusion. We believe these additions now make the reasoning transparent and easy to follow. We appreciate your feedback, which has substantially improved the clarity of our work.


      Comment 4: The logic to draw certain conclusions was confusing and misleading.
      - For instance, the role of orb2 in SELTM is examined via knockdown in MB Kenyon cells (KCs) (using ok107>orb2-RNAi), which is irrelevant to the claim that orb2 functions in YL neurons. Additionally, RNAseq analyses (Fig. 1N-S) focusing on orb2 expression in a/b KCs are irrelevant to and cannot support the claim that Orb2 functions in YL neurons.
      - Similarly, the claim (lines 302-303) that sNPF-R expression is exclusive to MB KCs conflicts with data showing effects when sNPF-R is knocked down in YL neurons. How can knocking down a gene, which is exclusively expressed in neural population A, in neural population B affect a phenotype? This inconsistency undermines the interpretation of the results.
      - Other examples include lines 223-227 and lines 246-249. It is very confusing how the authors came to these indications.
      - The authors also kept confusing the readers and themselves by mistakenly referring to MB KC a-lobe and YL a-lobe projection. They may know the difference between the two neural populations but they did not always refer to the right one in the text.

      Response: We agree completely with the reviewer that the logic in the original manuscript was confusing and failed to clearly distinguish between the general MB Kenyon Cell (KC) population and the specific YL projection neurons. This was a major flaw, and we are grateful for the opportunity to correct it. We have undertaken a major revision of the manuscript's narrative and structure to present a clear, logical progression of discovery.

      The new logical flow of the manuscript is as follows:

      1. We first establish that sexual experience induces a robust, long-lasting SMD behavior that is dependent on protein synthesis
      2. We then perform initial experiments to implicate the MB as a key brain region. We show that broad inhibition of MB KCs (using the ok107-GAL4 driver) disrupts SMD behavior. This result establishes the general involvement of the MB but lacks cellular specificity.
      3. The remainder of the manuscript then focuses specifically on dissecting the molecular and cellular properties of these YL neurons.
      4. Finally, we have meticulously edited the entire manuscript to ensure that we always use precise terminology, clearly distinguishing between "YL neuron projections to the MB α-lobe" and the "MB KC α-lobe."

      Comment 5: The imaging figures provided are unfocused and poorly resolved, making it difficult to assess data quality.

      - Colocalization analyses of orb2 and YL are unconvincing... Maximum intensity projection images are insufficient... complete image stacks with staining of orb2, YL, and KCs (MB-dsRed) are needed for validation.

      - Quantification of imaging data appears flawed. For example, claims of orb2 and CaMKII upregulation in MB a-lobe projections (e.g., Fig. S2F-J, Fig. 3M,N) are confounded by widespread increases in intensity across the brain, lacking specificity.


      - The TRIC experiment analysis should normalize GFP signals to internal reference channel (RFP in the TRIC construct)...

      - In Fig. 6H-J, methods for counting synapse numbers are not described. How are synapse numbers counted in these low-resolution images?

      Response: We sincerely apologize for the poor quality of the imaging data presented in the original manuscript. We agree with the reviewer's critiques and have taken comprehensive steps to rectify these issues.

      • Image Quality: We apologize for not including the full image data in the original submission. The complete figure is now presented in revised Fig. 2J.
      • Fluorescence Quantification: The fluorescence quantification has been re-analyzed. The Methods section now includes a detailed description of our protocol.
      • TRIC Normalization: We apologize for not stating this explicitly in the previous version. As now described in the revised Methods subsection “Quantitative Analysis of Fluorescence Intensity”, all TRIC images were acquired with identical laser power and exposure settings. The GFP signal was background-corrected and then normalized to the RFP fluorescence encoded by the TRIC construct itself (UAS-mCD8RFP), which serves as an internal reference for construct expression and mounting thickness.
      • Synapse Counting: We agree with the reviewer that the resolution of our images was insufficient for accurate synapse particle counting. We have therefore removed the problematic analysis from the former Fig. 6H-J. Our conclusions regarding synaptic plasticity now rest on the more robust and quantifiable data showing a significant increase in the total area of dendritic (DenMark) and presynaptic (syt.eGFP) markers.

      Comment 6: The study presents data from unrelated learning paradigms (e.g., olfactory associative learning, courtship conditioning; Fig. 7) without justifying how these paradigms relate to SELTM. Particularly, the authors claimed that SELTM is related to Gr5a, which leads to appetitive memories, which involve PAM dopaminergic neurons and MB horizontal lobes. However, the olfactory associative learning with electric shock and courtship conditioning lead to aversive memories, that involve PPL1 dopaminergic neurons and the vertical lobes.


      Response: We thank the reviewer for requesting clarification on the rationale for including these experiments. The purpose of these assays was to test the specificity of the YL neuron circuit. A key question is whether YL neurons represent a general-purpose LTM circuit or one specialized for a particular memory modality.

      The data show that knockdown of Orb2 or Nmdar2 specifically in YL neurons has no effect on the formation of LTM for aversive olfactory conditioning or aversive courtship conditioning. These negative results are critically important, as they demonstrate that the YL circuit is

      not required for all forms of LTM. This finding strongly supports our revised central claim that YL neurons are specialized for processing appetitive memories derived from the specific sensory context of mating (i.e., taste and pheromonal cues from Gr5a neurons).

      To improve the narrative flow of the main text, we rearranged the order of these sections. The relevant description is in lines 398-401:

      “To determine whether YL neurons constitute a general LTM circuit or are dedicated to the appetitive context of mating, we tested two canonical aversive paradigms: electric-shock olfactory conditioning and courtship conditioning. If YL neurons serve as a universal LTM module, their genetic impairment should also impair aversive memory.”

      lines 469-472:

      “The inability of YL perturbation to impair aversive memories (Fig. 7) corroborates that this micro-circuit is dedicated to Gr5a-dependent SELTM rather than acting as a generic LTM hub.”

      Minor Issues

      Comment 1: Fig 2F. YL projections are labeled as MBONs. Clarify whether YL neurons are the upstream or downstream (MBON) of KCs.

      __Response: __Thank you for this helpful comment. As Huang et al., 2018[2] (Nat. Commun. 9:872) reported, the MB093C-GAL4 driver labels the MBON-α3 mushroom body output neuron. Consequently, YL neurons are positioned downstream of MBON-α3.

      We have now clarified this point in the revised manuscript lines 217-222:

      “Each of these neurons extends a vertical fiber to the dorsal brain region, where they form dense arbors within the α-lobes of the mushroom body. Because the MB093C-GAL4 driver labels the MBON-α3 output neuron[51], these YL arbors are positioned postsynaptically within the α-lobe and relay mushroom-body output to the anterior, middle, and posterior superior-medial protocerebrum.”

      Comment 2: Extensive language polishing is required, as several sentences are unclear (e.g., lines 169-172).

      Response: We apologize for the lack of clarity in the original text. The entire manuscript has undergone extensive revision and professional language editing to improve readability, precision, and grammatical accuracy.

      Responses to Reviewer #2


      Major Comments

      Comment 1: Clearer articulation of the rationale, motivation, and significance of the overall study design and individual experiments can strengthen the manuscript and promote readership. For example, the beginnings of the abstract and introduction should define what authors mean by sexual experience-dependent long-term memory and its significance (including why it is "significant for reproductive success" (lines 46 and 92)). Similarly, employing more concrete language throughout the text will help anchor and contextualize the study. Interpretation is occasionally insufficient or does not follow directly from the data provided.

      Response: We thank the reviewer for this valuable advice. We agree that the motivation and significance of our study were not articulated clearly enough. We have rewritten the Abstract and the beginning of the Introduction to address this. The revised text now explicitly defines SELTM as a protein synthesis-dependent, appetitive memory formed in response to gustatory and pheromonal cues. We explain its significance in the context of adaptive behavior, linking it to interval timing, a process by which male flies strategically adjust their mating investment (i.e., mating duration) based on prior experience to optimize reproductive success and energy expenditure. This framing provides a clearer context for our investigation into its underlying neural and molecular mechanisms.

      Comment 2: Long term memory: I do not work on Drosophila memory, but a cursory search suggests that the field generally considers long term memory in Drosophila to last for 24 hr to days (courtship memory lasts for >24 hr). SMD decays between 12-24 hr after copulation. Could SMD be considered a short-term effect?

      Response: This is an important point of clarification. As described in our response to Reviewer #1 (Major Comment 2), we have performed a new experiment demonstrating that the formation of SMD is blocked by the protein synthesis inhibitor cycloheximide (Figure 1I). This dependence on de novo protein synthesis is a defining characteristic of LTM, distinguishing it from short- and intermediate-term memory forms [1]. Moreover, memories lasting 12-24 hours are well established as forms of LTM [3]. Therefore, based on both its duration and its molecular requirements, SMD represents a bona fide form of LTM.

      The relevant statement is in lines 174-178:

      "To determine whether the persistent reduction in mating duration (SMD) depends on de-novo protein synthesis, we fed males the translational inhibitor cycloheximide (CXM). Under this regimen, CXM completely abolished the SMD phenotype (Fig. 1I). This finding suggests that the reduction in mating investment is contingent upon the formation of LTM."

      Comment 3: Fig 1B-E share the same control (naive) group. If these experiments were performed in the same replicate(s), they should be plotted in the same figure. If not, please provide more details on how experimental blocks were set up and how controls compared between replicates.

      Response: Thank you for this helpful suggestion. We understand that sharing the same naive control across multiple panels (Fig. 1B–E) may raise concerns about data independence. However, we chose to present these panels separately for the following reasons:

      1. Clarity and Readability: Each panel (B–E) represents a distinct temporal condition (0 h, 6 h, 12 h, 24 h post-isolation). Separating them avoids visual clutter and allows readers to focus on one time point at a time, improving interpretability.

      2. Consistency with Internal Controls

      Although the naive group is identical across panels, each experimental block (i.e., each isolation time point) was run independently on the same days, with internal controls (naive vs. experienced) included in every block. This ensures that statistical comparisons remain valid within each panel, even if the naive data overlap.

      We have now added a clear statement in the figure legend explaining that the naive group is shared across panels and that each time point was tested independently with internal controls. This maintains transparency while preserving the visual clarity of the current layout.

      Comment 4: Serial mating (Fig 1F-H): please provide details on the methods. How much time elapsed between successive matings? Is a paired statistical test used? Sperm depletion also affects mating duration, and without this information the authors' conclusion (lines 155-156) does not automatically follow from the data.

      Response:

      1. Interval between successive matings

      We have rewritten the Methods to state explicitly that “as soon as one copulation ended the male was transferred immediately to a fresh virgin female, so the next mating began immediately.”

      We added a new Methods subsection:

      " Serial mating ____duration ____assay

      Serial mating duration assay was identical to the standard procedure except that each male was presented with four DF virgin females in immediate succession: upon termination of the first copulation the male was immediately put into a fresh chamber containing the next virgin, the timer was restarted at first contact, and this step was repeated until four complete matings were recorded or 5 min elapsed without initiation, whichever came first."

      2. Statistical test

      We apologize for omitting this detail. An unpaired t-test was used: for each male, the mating duration before (naive) and after sexual experience was recorded. Prism's unpaired t-test module was applied to evaluate the mean difference between groups.

      The figure legend now states “with error bars representing SEM. Asterisks represent significant differences, as revealed by the unpaired t-test, and ns represents non-significant difference (**p …)”.

      3. Mating duration versus sperm depletion

      We apologize for not having made it clear that these two observations are complementary, not contradictory. Previous work has shown that when male Drosophila copulate repeatedly, mating duration remains stable even though the number of sperm transferred—and thus the number of progeny sired—declines progressively [4].

      The revised text is as follows (lines 235-241):

      "Previous work has shown that when male Drosophila copulate repeatedly, mating duration remains stable even though the number of sperm transferred—and thus the number of progeny sired—declines progressively. This dissociation confirms that the constant mating duration we observe in our serial-mating experiment (Fig. 1F–H) is consistent with normal sperm depletion and does not compromise the conclusion that the experience-dependent reduction in mating duration reflects long-term memory."

      Thank you for helping us improve the clarity of our study.
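      For readers who wish to reproduce the comparison outside Prism, the unpaired test can be sketched as a Welch's t-statistic computation. This is a minimal illustration with made-up mating durations, not our actual data, and it uses only the Python standard library:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's unpaired t statistic and approximate degrees of freedom."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical mating durations in minutes (naive vs. experienced males)
naive = [20.1, 19.5, 21.0, 18.7, 20.4, 19.9]
experienced = [15.2, 14.8, 16.1, 15.5, 14.9, 15.8]
t, df = welch_t(naive, experienced)  # t > 0 because naive durations are longer
```

      Prism's default unpaired t-test assumes equal variances; the Welch variant shown here drops that assumption but follows the same logic of comparing group means against the standard error of their difference.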

      Comment 5: Mating duration assay: Which isolation interval was chosen for the rest of the SMD experiments? The 12 hr en masse mating setup is relatively uncommon among studies on courtship/copulation/post-copulatory phenotypes, and introduces uncertainty and variability in the number and timing of matings that occurred during the 12 hr-window. This source of variability and its implication in interpreting the data should be acknowledged. Moreover, the 3 studies referenced in the methods all house males in groups of 4, whereas this study uses groups of 40. Could density confound the manifestation of SMD?

      Response: We thank the reviewer for these important methodological questions.

      • Isolation Interval: We have clarified in the Methods that virgin females were introduced into the vials for the last day before the assay.
      • Housing Density: This is an excellent point. To control for any potential effects of housing density itself, we have clarified that our "naive" control males are also housed in groups of 40 for the same duration as the "experienced" males. Therefore, the only difference between the two groups is the presence of females, isolating the effect of sexual experience from that of social density.

      Comment 6: SMD behavior: comparing orb2 mutants and controls (Fig 1M and Fig S1K-L), loss of orb2 actually reduces the mating duration in naive males (mean ~15 min) relative to controls (~20 min), and has possibly no effect on experienced males (~15 min). This is inconsistent with the SMD behavior demonstrated in Fig 1B-E. The same pattern is found for mushroom body silencing (Fig 1P, Fig S1M-N), orb2 knockdown in YL neurons (Fig 2D, Fig S2A-B), Fmr1 knockdown in YL neurons (Fig 3D, Fig S2B, S3D) and most other experiments where mating duration is not significantly different between naive and experienced males. This might demonstrate a separate role of YL neurons and its related circuit in regulating mating duration in naive males. Could the authors discuss this interpretation? As an aside, plotting genetic controls next to experimental groups is customary and facilitates comparisons between relevant groups.

      Response: Thank you very much for this insightful observation.

      1. Baseline differences among genotypes

      We agree that absolute mating duration differs slightly between genotypes (e.g. naive orb2∆/+ about 15 min vs. wild-type CS about 20 min). Such differences are common when mutations or transgenes are introduced into distinct genetic backgrounds, and they do not affect the within-genotype comparison that is the essence of SMD (sexual-experience-dependent shortening of mating duration). Therefore, for every experiment we compared naive vs. experienced males of the identical genotype, keeping all other variables constant.

      2. Consistency of SMD across figures

      In every manipulation that disrupts SMD memory (orb2∆, MB silencing, orb2-RNAi in YL neurons, Fmr1-RNAi in YL neurons, etc.) the naive–experienced difference disappears, whereas the genetic controls retain a significant ΔMD. This is fully consistent with Fig. 1B–E and demonstrates that the memory trace, not the basal duration, is abolished.

      3. Figure layout

      Following your suggestion, we have re-ordered all bar graphs so that the relevant genetic controls are placed immediately adjacent to the experimental groups, making within-panel comparisons easier.

      We hope these clarifications and adjustments address your concerns.

      Comment 7: Bitmap figures: unfortunately the bitmap figures are compressed and their resolution makes it difficult to evaluate the visual evidence.

      Response: We apologize for the poor quality of the figures. All figures in the revised manuscript, including the scRNA-seq plots, have been remade as high-resolution vector graphics to ensure clarity and detail. For easier interpretation, colored illustrative schematics are now placed next to the scRNA-seq plots.

      Comment 8: Sexual dimorphism of YL neurons: many neurons involved in sexual behaviors express dsx and/or fru. Do YL neurons express them?

      Response: This is an excellent question. To address it, we performed a new set of experiments using genetic intersectional tools to test for the expression of doublesex (dsx) and fruitless (fru) in YL neurons. Our analysis, presented in Fig. S2B, reveals that YL neurons are indeed fru-negative and dsx-negative. We therefore conclude that YL neurons do not belong to the canonical fru- or dsx-expressing neuronal classes and are unlikely to be intrinsically sex-specific.

      We add explanation in lines 223-229:

      "Our further analysis confirmed the presence of only three pairs of nuclei near the SOG in male brains, whereas female brains exhibit a greater number of nuclei near the AL (Fig. 2I), suggesting subtle sexual dimorphisms in GAL4MB093C-expressing neurons. Importantly, these neurons do not overlap with either fru- or dsx-expressing cells: co-immunostaining for GFP and Fru or Dsx revealed almost no colocalization in any brain region examined (Fig. S2B), indicating that YL neurons are distinct from the canonical sex-specific fru/dsx circuits."

      Comment 9: Genetic controls for some crucial experiments are not provided, e.g. Fig 2J, Fig S3C, Fig S3E-F Fig 5B-C, F, Q-R, Fig S5A-E.

      Response: We thank the reviewer for their careful attention to detail. We have now performed all the missing genetic control experiments.

      Comment 10: Colocalization experiments: please provide more detail on how fluorescence is normalized for each channel across images, especially when the overall expression of an effector is up- or down-regulated after mating.

      Response: We have updated the Methods section under "Quantitative Analysis of Fluorescence Intensity" and "Colocalization Analysis" to provide a detailed description of our normalization procedure.
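      As an illustration of this kind of per-ROI normalization, a hypothetical sketch with invented intensity values (the actual pipeline is the one described in the revised Methods): background is subtracted from each channel, and the signal channel is then expressed as a ratio to the reference channel.

```python
def normalized_intensity(signal_roi, ref_roi, signal_bg, ref_bg):
    """Background-subtract mean ROI intensities, then ratio signal to reference.

    All inputs are mean pixel intensities in arbitrary units (hypothetical).
    """
    signal = max(signal_roi - signal_bg, 0.0)  # clamp noise-level signal to 0
    ref = ref_roi - ref_bg
    if ref <= 0.0:
        raise ValueError("reference channel is at or below background")
    return signal / ref

# e.g. GFP ROI at 120 a.u. over background 20, RFP ROI at 60 a.u. over background 10
ratio = normalized_intensity(120, 60, 20, 10)  # (120-20)/(60-10) = 2.0
```

      Normalizing to the reference channel in this way cancels ROI-to-ROI differences in construct expression and mounting thickness, which is the rationale behind using the TRIC construct's RFP as an internal reference.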

      Comment 11: Please resolve this apparent contradiction on the expression of Nmdar1 and 2 in YL neurons. On line 261: "both receptors co-expressing in Orb2-positive MB Kenyon cells"; on line 279-281 "Nmdar1 is not expressed with YL neurons [...] whereas Nmdar2 is expressed in a single pair of YL neurons..."

      Response: We apologize for this contradiction, which arose from the confusing narrative structure of the original manuscript. As detailed in our response to Reviewer #1 (Major Comment 4), we have reframed the manuscript.

      Comment 12: Particle analysis (Fig 6H-J): experienced males seem to have more synapses but trend towards smaller average size. It would be helpful to show number of synapses and average size as paired data, or show that the total particle area is larger in experienced males.

      Response: We agree with the reviewer that this analysis was inconclusive and potentially misleading due to the limitations of image resolution. As noted in our response to Reviewer #1, we have removed this particle analysis (former Fig 6H-J) from the revised manuscript. Our claim for increased synaptic plasticity is now supported by the more robust measurement of the total fluorescence area of the pre- and postsynaptic markers, which shows a significant increase in experienced males.

      Minor Comments

      We thank the reviewer for their meticulous attention to detail. We have addressed all minor comments as follows:

      Comment 1: 1. Some figures (e.g. Fig 3M-R) and experiments (e.g. oenocyte scRNA-seq) are not referenced in the text. dnc data is shown alongside amn and rut but the rationale for its inclusion is not provided.

      __Response: __Original Fig. 3M-R (now Fig. 3M-O) was referenced on line 283. The rationale for including the dnc data (as a canonical memory mutant) is now clarified in the text on lines 187-189:

      "To ask whether the same molecular machinery underlies the SMD that follows sexual experience, we tested three classical memory mutants: dunce (dnc), amnesiac (amn), and rutabaga(rut)."

      Comment 2: Some references might not point to the intended article (e.g. ref 123).

      __Response: __The reference list has been checked and corrected.


      Comment 3. Please plot genetic controls next to experimental genotypes as they are a crucial part of the experiment.


      __Response: __All relevant figures now include plots of genetic controls next to experimental genotypes.

      Comment 4. The "estimation statistics" plots are not necessary since the authors show individual data points. To further enhance data transparency, the authors may consider reducing the alpha and/or dot size so the individual data points are more readily visible.

      Response: Thank you for this helpful suggestion! We fully agree that data transparency is essential. After carefully testing lower alpha values and smaller dot sizes, we found that either change markedly obscured the dense regions of the distributions, so we retained the original dot size and opacity.

      The estimation-statistics overlays are kept for two reasons: (i) they provide an immediate visual estimate of the mean difference and its 95% confidence interval, which is the key statistic on which we base our conclusions; and (ii) they spare readers from having to cross-reference separate tables.
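      The overlay itself is just a point estimate of the mean difference together with a percentile-bootstrap confidence interval, which can be sketched as follows (hypothetical durations; the published figures were generated with our standard estimation-statistics workflow):

```python
import random
import statistics

def bootstrap_mean_diff(a, b, n_boot=5000, alpha=0.05, seed=1):
    """Point estimate and percentile-bootstrap CI for the mean difference b - a."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    diffs = sorted(
        statistics.mean(rng.choices(b, k=len(b)))
        - statistics.mean(rng.choices(a, k=len(a)))
        for _ in range(n_boot)
    )
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.mean(b) - statistics.mean(a), (lo, hi)

# Hypothetical durations in minutes: experienced males mate for less time
naive = [20.1, 19.5, 21.0, 18.7, 20.4, 19.9]
experienced = [15.2, 14.8, 16.1, 15.5, 14.9, 15.8]
diff, (lo, hi) = bootstrap_mean_diff(naive, experienced)
# diff ≈ -4.55 min; for these well-separated samples the 95% CI excludes zero
```

      A CI that excludes zero conveys the same conclusion as a significant p-value, while also showing the magnitude of the effect, which is why we favor this display.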


      Comment 5. For accessibility, please avoid using green and red in the same plot.

      __Response: __We fully agree that red–green combinations can be problematic for colour-vision-impaired readers. In the present manuscript, however, the only panel that juxtaposes pure red and pure green is the Fly-SCOPE co-expression data. These scRNA-seq plots are provided only as supportive reference; the actual quantitative conclusions are based on independent genetic and imaging experiments that use magenta, cyan, yellow, and greyscale palettes. Moreover, the SCope images are accompanied by detailed text descriptions of the overlapping cell clusters, so no essential information is lost even if the colours are indistinguishable.

      Comment 6. Fly Cell Atlas: please show color scales used for each gene as the color thresholds are gene-specific by default.The 3-color overlap on SCope also makes it very difficult to see the expression pattern for each gene. One possibility is outlining the Kenyon cells on the tSNE plots and showing the expression for each gene of interest.

      Response: Thank you for this helpful suggestion. To avoid the ambiguity that arises from RGB blending in the three-colour overlay, we have added a small colour-mixing diagram next to the t-SNE plots (revised Fig. 1). This key shows the exact hues produced by pairwise and three-way overlaps:

      • Red + Green = Yellow

      • Red + Blue = Magenta

      • Green + Blue = Cyan

      • Red + Green + Blue = White

      Thus, yellow, magenta, or cyan dots indicate co-expression of two genes, while white dots mark cells where all three genes are detected. This diagram allows readers to interpret overlap colours at a glance without re-entering SCope.

      Comment 7. Please also refer to Fly Cell Atlas as such. SCope is a visualization platform that houses multiple datasets.

      __Response: __The reference to Fly Cell Atlas was added.

      Comment 8. Please introduce acronyms and genetic reagents the first time they are mentioned.

      __Response: __All acronyms and genetic reagents are now defined upon their first use.

      Comment 9. Line 184: please specify "split-GAL4 reagents" instead of "advanced genetic tools".

      __Response: __We have replaced "advanced genetic tools" with the more specific term "Split-GAL4 reagents."


      Comment 10. Line 187: there are a few other lines with p>0.05 or p>0.01, so "uniquely" is inaccurate. Are the p-values in Table 1 corrected for multiple testing?

      __Response: __The term "uniquely" has been revised for accuracy. No correction for multiple testing was applied because each entry in Table 1 represents a single pairwise comparison (naive vs. exp). Thus only one p-value was generated per experiment.

      Comment 11. Some immunofluorescence panels lack scale bars.

      __Response: __Scale bars have been added to all immunofluorescence panels.


      Comment 12. Fig S2G-I: do authors mean "naive" instead of "group"?

      __Response: __The term "group" in Fig S2G-I has been corrected to "naive."

      Comment 13. Movie 1 should be referenced when YL neurons are first introduced.

      __Response: __Movie 1 is now referenced when YL neurons are first introduced in the text.

      Comment 14. Is Fig 4L similar to Fig 6L-N?

      __Response: __This duplication arose during reformatting of the article and has been corrected.

      Comment 15. Fig 7: please plot olfactory conditioning experiment results as either percentages, preference index, or paired numbers. "Number of flies/tube" is not as informative.

      __Response: __Thank you for pointing this out. The bars in Fig. 7 indeed represent paired numbers, but we realise this was not stated explicitly; we apologize for the lack of clarity. In the revised manuscript this is explained in detail in the figure legend and Methods. In the figure, we have also marked the percentage of flies that avoided the stimulus side, as explained in the figure legend.




      Reference

      1. Lagasse F, Devaud J-M, Mery F. A Switch from Cycloheximide-Resistant Consolidated Memory to Cycloheximide-Sensitive Reconsolidation and Extinction in Drosophila. J Neurosci. 2009;29: 2225–2230. doi:10.1523/jneurosci.3789-08.2009
      2. Huang C, Maxey JR, Sinha S, Savall J, Gong Y, Schnitzer MJ. Long-term optical brain imaging in live adult fruit flies. Nat Commun. 2018;9: 872. doi:10.1038/s41467-018-02873-1
      3. Tonoki A, Davis RL. Aging Impairs Protein-Synthesis-Dependent Long-Term Memory in Drosophila. J Neurosci. 2015;35: 1173–1180. doi:10.1523/jneurosci.0978-14.2015
      4. Macartney EL, Zeender V, Meena A, Nardo AND, Bonduriansky R, Lüpold S. Sperm depletion in relation to developmental nutrition and genotype in Drosophila melanogaster. Evol Int J Org Evol. 2021;75: 2830–2841. doi:10.1111/evo.14373
    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      Summary:

      Sun et al. show that Orb2-expressing, glutamatergic mushroom body neurons (YL neurons) are central to the "shorter mating duration (SMD)" behavior, where males reduce their mating duration up to 12 hours after the initial copulation. The authors use SMD as a model for understanding sexual experience-dependent long-term memory in males. A few genes implicated in long-term memory (Fmr1, CrebB) are required in YL neurons for SMD. The Nmdar-CaMKII signaling pathways is also implicated, and mating attenuates Ca2+ signaling and increases synaptic plasticity in the mushroom body and subesophageal ganglion.

      Major comments:

      1. Clearer articulation of the rationale, motivation, and significance of the overall study design and individual experiments can strengthen the manuscript and promote readership. For example, the beginnings of the abstract and introduction should define what authors mean by sexual experience-dependent long-term memory and its significance (including why it is "significant for reproductive success" (lines 46 and 92)). Similarly, employing more concrete language throughout the text will help anchor and contextualize the study. Interpretation is occasionally insufficient or does not follow directly from the data provided.
      2. Long term memory: I do not work on Drosophila memory, but a cursory search suggests that the field generally considers long term memory in Drosophila to last for 24 hr to days (courtship memory lasts for >24 hr). SMD decays between 12-24 hr after copulation. Could SMD be considered a short-term effect?
      3. Fig 1B-E share the same control (naive) group. If these experiments were performed in the same replicate(s), they should be plotted in the same figure. If not, please provide more details on how experimental blocks were set up and how controls compared between replicates.
      4. Serial mating (Fig 1F-H): please provide details on the methods. How much time elapsed between successive matings? Is a paired statistical test used? Sperm depletion also affects mating duration, and without this information the authors' conclusion (lines 155-156) does not automatically follow from the data.
      5. Mating duration assay: Which isolation interval was chosen for the rest of the SMD experiments? The 12 hr en masse mating setup is relatively uncommon among studies on courtship/copulation/post-copulatory phenotypes, and introduces uncertainty and variability in the number and timing of matings that occurred during the 12 hr-window. This source of variability and its implication in interpreting the data should be acknowledged. Moreover, the 3 studies referenced in the methods all house males in groups of 4, whereas this study uses groups of 40. Could density confound the manifestation of SMD?
      6. SMD behavior: comparing orb2 mutants and controls (Fig 1M and Fig S1K-L), loss of orb2 actually reduces the mating duration in native males (mean ~15 min) relative to controls (~20 min), and have possibly no effect on experienced males (~15 min). This is inconsistent with the SMD behavior demonstrated in Fig 1B-E. The same pattern is found for mushroom body silencing (Fig 1P, Fig S1M-N), orb2 knockdown in YL neurons (Fig 2D, Fig S2A-B), Fmr1 knockdown in YL neurons (Fig 3D, Fig S2B, S3D) and most other experiments where mating duration is not significantly different between naive and experienced males. This might demonstrate a separate role of YL neurons and its related circuit in regulating mating duration in naive males. Could the authors discuss this interpretation? As an aside, plotting genetic controls next to experimental groups is customary and facilitates comparisons between relevant groups.
      7. Bitmap figures: unfortunately the bitmap figures are compressed and their resolution makes it difficult to evaluate the visual evidence.
      8. Sexual dimorphism of YL neurons: many neurons involved in sexual behaviors express dsx and/or fru. Do YL neurons express them? If they do, they might be a subset of characterized and named dsx/fru neurons.
      9. Genetic controls for some crucial experiments are not provided, e.g. Fig 2J, Fig S3C, Fig S3E-F Fig 5B-C, F, Q-R, Fig S5A-E.
      10. Colocalization experiments: please provide more detail on how fluorescence is normalized for each channel across images, especially when the overall expression of an effector is up- or down-regulated after mating.
      11. Please resolve this apparent contradiction on the expression of Nmdar1 and 2 in YL neurons. On line 261: "both receptors co-expressing in Orb2-positive MB Kenyon cells"; on line 279-281 "Nmdar1 is not expressed with YL neurons [...] whereas Nmdar2 is expressed in a single pair of YL neurons in both male and female brains".
      12. Particle analysis (Fig 6H-J): experienced males seem to have more synapses but trend towards smaller average size. It would be helpful to show number of synapses and average size as paired data, or show that the total particle area is larger in experienced males.

      Minor comments:

      1. Some figures (e.g. Fig 3M-R) and experiments (e.g. oenocyte scRNA-seq) are not referenced in the text. dnc data is shown alongside amn and rut but the rationale for its inclusion is not provided.
      2. Some references might not point to the intended article (e.g. ref 123).
      3. Please plot genetic controls next to experimental genotypes as they are a crucial part of the experiment.
      4. The "estimation statistics" plots are not necessary since the authors show individual data points. To further enhance data transparency, the authors may consider reducing the alpha and/or dot size so the individual data points are more readily visible.
      5. For accessibility, please avoid using green and red in the same plot.
      6. Fly Cell Atlas: please show color scales used for each gene as the color thresholds are gene-specific by default. The 3-color overlap on SCope also makes it very difficult to see the expression pattern for each gene. One possibility is outlining the Kenyon cells on the tSNE plots and showing the expression for each gene of interest.
      7. Please also refer to Fly Cell Atlas as such. SCope is a visualization platform that houses multiple datasets.
      8. Please introduce acronyms and genetic reagents the first time they are mentioned.
      9. Line 184: please specify "split-GAL4 reagents" instead of "advanced genetic tools".
      10. Line 187: there are a few other lines with p<0.05 or p<0.01, so "uniquely" is inaccurate. Are the p-values in Table 1 corrected for multiple testing?
      11. Some immunofluorescence panels lack scale bars.
      12. Fig S2G-I: do authors mean "naive" instead of "group"?
      13. Movie 1 should be referenced when YL neurons are first introduced.
      14. Is Fig 4L similar to Fig 6L-N?
      15. Fig 7: please plot olfactory conditioning experiment results as either percentages, preference index, or paired numbers. "Number of flies/tube" is not as informative.

      Significance

      The manuscript describes an extensive and comprehensive set of experiments aimed at elucidating the role of a subset of mushroom body neurons in mediating a male post-mating sexual behavior, which the authors use as a model for sexual experience-dependent long-term memory. Long-term post-mating responses in females have been well characterized in Drosophila and other insects, but post-mating long-term memories in males are less well understood despite a few studies reporting their importance in mating success. How males adjust their mating duration based on internal and external cues can reveal insights about decision making and interval timer mechanisms. This study represents a functional advancement in the neuronal and molecular mechanisms of how memory and experience regulate a sexual behavior in male Drosophila. Overall, the manuscript can significantly benefit from general editing for clearer articulation of rationale and more appropriate interpretations of data. Higher resolution versions of bitmap figures are also crucial. The SMD experiments invite an alternative interpretation of data that centers on YL neurons' role in regulating mating duration in naive males, which, alongside other roles of the mushroom body demonstrated in this manuscript, could add more depth to the study.

      The findings in this manuscript will be of interest to a specialized audience interested in memory, neural circuits of behavior, and Drosophila sexual behavior. I work on Drosophila sexual behavior and circuits, but lacking experience on memory research, I am not as familiar with the mushroom body and conditioning experiments.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      This study explores the molecular and neural circuitry mechanisms underlying sexual experience-dependent long-term memory (SELTM) in male Drosophila. The authors use behavioral, imaging, and bioinformatics approaches to identify YL neurons, a subset of mushroom body (MB) projecting neurons, as crucial for SELTM formation. They propose that YL neurons receive inputs from WG neurons via the sNPF-sNPFR pathway and implicate molecular players such as orb2, fmr1, NMDAR2-CaMKII, and synaptic plasticity in their function.

      However, the evidence presented does not adequately support the authors' claims. The data fail to cohesively tell a logical story, and key conclusions appear to be based on assumptions and correlations rather than robust evidence.

      Major comments:

      1. The study identifies the knowledge gap (lines 103-104) but fails to integrate relevant literature, particularly Shohat-Ophir et al., Science (2012), and Zer-Krispil et al., Curr Biol (2018). These studies established that ejaculation induces appetitive memory in male Drosophila via corazonin and NPF neurons. The current study does not provide direct evidence that the "act of mating itself" drives SELTM, as it includes both courtship and copulation.
      2. The nature of the observed long-lasting reduced mating duration requires clearer characterization: Is this an associative memory or experience-dependent behavioral plasticity? Can the formation of this long-term memory be blocked by protein synthesis inhibitors, such as cycloheximide?
      3. While schematics illustrate the working hypotheses, the text lacks detailed explanations, leaving the reader unclear about the rationale behind certain conclusions.
      4. The logic to draw certain conclusions was confusing and misleading.
        • For instance, the role of orb2 in SELTM is examined via knockdown in MB Kenyon cells (KCs) (using ok107>orb2-RNAi), which is irrelevant to the claim that orb2 functions in YL neurons. Additionally, RNAseq analyses (Fig. 1N-S) focusing on orb2 expression in a/b KCs are irrelevant to and cannot support the claim that Orb2 functions in YL neurons.
        • Similarly, the claim (lines 302-303) that sNPF-R expression is exclusive to MB KCs conflicts with data showing effects when sNPF-R is knocked down in YL neurons. How can knocking-down a gene, which is exclusively expressed in neural population A, in neural population B affect a phenotype? This inconsistency undermines the interpretation of the results.
        • Other examples include lines 223-227 and lines 246-249. It is very confusing how the authors came to the indications.
        • The authors also kept confusing the readers and themselves by mistakenly referring to MB KC a-lobe and YL a-lobe projection. They may know the difference between the two neural populations but they did not always refer to the right one in the text.
      5. The imaging figures provided are unfocused and poorly resolved, making it difficult to assess data quality.
        • Colocalization analyses of orb2 and YL are unconvincing, especially given that orb2 is well-documented in literature as expressed in MB a-KCs and YL projection wrapping MB a-lobe. Maximum intensity projection images are insufficient for confirming colocalization; complete image stacks with staining of orb2, YL, and KCs (MB-dsRed) are needed for validation.
        • Quantification of imaging data appears flawed. For example, claims of orb2 and CaMKII upregulation in MB a-lobe projections (e.g., Fig. S2F-J, Fig. 3M,N) are confounded by widespread increases in intensity across the brain, lacking specificity.
        • The TRIC experiment analysis should normalize GFP signals to internal reference channel (RFP in the TRIC construct), as per established protocols in the original paper.
        • In Fig. 6H-J, methods for counting synapse numbers are not described. How are synapse numbers counted in these low-resolution images?
      6. The study presents data from unrelated learning paradigms (e.g., olfactory associative learning, courtship conditioning; Fig. 7) without justifying how these paradigms relate to SELTM. Particularly, the authors claimed that SELTM is related to Gr5a, which leads to appetitive memories, which involve PAM dopaminergic neurons and MB horizontal lobes. However, the olfactory associative learning with electric shock and courtship conditioning lead to aversive memories, that involve PPL1 dopaminergic neurons and the vertical lobes.
      7. Some figures are not referred to in the text. For example, Fig S1 K and L (also, what's the difference between these two figures?) and Fig 3M-R. What is MB-V3 in Fig 4J-K?

      Minor issues

      1. Fig 2F. YL projections are labeled as MBONs. Clarify whether YL neurons are the upstream or downstream (MBON) of KCs.
      2. Extensive language polishing is required, as several sentences are unclear (e.g., lines 169-172).

      Significance

      This study potentially advances our understanding of how sexual experience modifies future mating behaviors. While previous work has shown that mating induces appetitive memory in males, the mechanisms linking this memory to future mating behavior remain poorly understood. This work could provide valuable insights into these mechanisms, pending appropriate revisions.

    1. No one exposed to the misery of trench warfare could hang onto illusions of the heroism and nobility of the struggle they were engaged in. The cold, the mud, and the terror of pointless charges over the top ordered by commanders who had no clue what they were doing and who rarely led their men into the slaughter – all these factors were captured by journalists and then by novelists like the American Ernest Hemingway (A Farewell to Arms, 1929), the German Erich Maria Remarque

      This part shows that the crisis wasn't just social but emotional. Uncertainty caused people strife after the war. Even the universe seemed unstable. This new worldview influenced all types of art, suggesting that truth is uncertain and knowledge is limited.

    1. Accessibility is a key component of Inclusive Design, but the two concepts have distinct focuses. Accessibility falls within Inclusive Design and focuses on materials, spaces, and access being available to all. Meanwhile, Inclusive Design focuses on the end goal and works backwards to create multiple paths to get there. While accessibility often focuses on meeting specific needs, Inclusive Design proactively considers a wide range of abilities, preferences, and perspectives from the outset, ensuring that the design works equitably for everyone. Together, they promote environments where all individuals can fully participate.

      Sounds like pedagogy. I think I'm still not getting the difference between inclusive design and universal design for learning.

    1. However, McAdoo (1978) has suggested that the problems and issues that others (Aschenbrenner, 1978; Martin & Martin, 1978; Stack, 1974) have observed are probably tied to the socioeconomic level of the particular extended family

      the negative effects aren't tied to race, but socioeconomic status (which can be tied back to race in some ways but getting into that gets us off track of the research question)

    2. Other significant familial associations, for example, parental, sibling, avuncular, and cousin links of spouses and offsprings, can and do have important direct and indirect influence on immediate family life experience

      These familial bonds have importance, but because they aren't included in the "standard" definition, research on the effects are limited

    1. What incentives do social media companies have to protect privacy?

      At the moment there are still a good amount of legal protections that restrict companies from being too flagrant with your privacy, but over the last few decades selling people's information has become more and more of a market. I also think that people don't like the idea of a company having all their information, so maybe another thing stopping them is public sentiment toward private companies committing these massive privacy violations. But there is a lot of money on the other side of that fence, and more and more companies are deciding that they want to take that leap and violate privacy.

    1. According to the American Academy of Child and Adolescent Psychiatry, the frontal cortex in the brain, where reasoning and thinking before acting occurs, is not fully formed in teenagers. However, the amygdala, “responsible for immediate reactions including fear and aggressive behavior,” is fully formed early in life. This means teens aren’t as good at considering the consequences of their behavior before they react, so the adults in their lives should limit the risks in their lives until they’re better able to reason through them.

      facts are boring but overall good

    2. Some days my sixteen-year-old niece, Rachael, does all of her homework, helps friends study after school, and practices her cello, and other days she forgets her books at school, lies about where she’s going, and doesn’t do her chores. This sporadic behavior seems like it comes out of nowhere, but it turns out teenage brains are different from adult brains, causing teens to sometimes not think about consequences before they act.

      too personal; bad

    1. Have you ever realized that your first impression about someone was wrong? It is common to have this experience, but it can be useful to understand more about why you thought this. Were your first impressions based on someone's race, gender, general appearance, age, etc.?

      This engages the reader by asking a reflective question and raising curiosity about assumptions and stereotypes.

    2. Have you ever realized that your first impression about someone was wrong? It is common to have this experience, but it can be useful to understand more about why you thought this. Were your first impressions based on someone's race, gender, general appearance, age, etc.?

      hook

  12. www.tandfonline.com
    1. .

      the first principle that the authors argue contributes to a successful multinational federation is a Staatsvolk. it's not a panacea, but the evidence demonstrates that the more heterogeneous a federation is, the more likely it will be unstable, face secessionism, or break up, because the minorities are more likely to think they can prevail. the authors suggest that for multinational federations without a Staatsvolk to survive as democratic entities, they must develop consociational practices to protect the interests of all communities. adding Gannon into this paragraph because they share similar views: the majority nation must behave appropriately to maintain stability.

    2. .

      mentions india's refusal to recognize religion, not ethnicity, as the basis of state formation. india is a successful multinational federation, but due to their refusal to recognize religion they have had issues with Kashmir and Punjab. violence would be a result of centralising decisions.

    3. .

      the authors of this paper counter the argument that mono-nation-building strategies can be used as an alternative for deeply diverse states by pointing out that these strategies have not been successful. the UK's civic and unitary state did not prevent the nationalism of its different nations. Therefore the UK had to use a devolution strategy, but it still does not quell nationalism.

    4. .

      Yugoslavia isn't actually a multinational federation; it was decentralised, but that doesn't mean it was democratic. it was held together by the League of Communists. Other "multinational federations" which are more like pseudo-federations include the USSR, Czechoslovakia, and Nigeria. they all had weak or no overarching identities and no democratic mechanism for developing those identities.

    5. .

      this harks back to an article i read a little bit of (i think it was the Yugoslavia one) where america's motivation relied on ideological differences to further break up that federation. but that's an interpretation. regarding the paragraph, american academics argue that the break-up of former communist federations is due to their implementation of "ethno-federal" structures. Jack Snyder argues that ethnofederalism tends to heighten and politicise ethnic consciousness, creating a self-conscious intelligentsia and the organizational structures of an ethnic state in waiting, implying that federalism leaves ethnic groups waiting for something they will not receive, leading to nationalism and tensions. additionally, Snyder notes that nationalist violence happened only where ethnofederal institutions channelled political activity along ethnic lines (ex: USSR and Yugoslavia).

    1. __________________________________________________________________
      1. Snacks (I do not really snack a whole lot, but they are nice to have just in case).
      2. Books (I love buying new books, but I have so many that I already have and should read).
      3. Make up (I do not wear makeup often, so why do I have so much?)
    1. For more information about the development of creative writing

      Q4. Kaitlin Breuchel In concluding their argument, Ball and Loewe assert that there is creativity in all writing, and not just what we term "creative writing." They argue that restricting the definition of creativity to some forms of writing makes people overlook the imagination and thought that enter ordinary writing. From what I have read, I view "creative writing" differently: I think it cannot refer just to poetry or fiction, but to any form of writing where a person is deciding, communicating ideas, and speaking with others.

    1. paid time off, parental leave, additional compensation, and retirement plans. Every month, we have a staffing meeting, which brings together employees, investors, higher-ups, etc. These meetings drive innovation and problem-solving for any current issues that have emerged. It also boosts morale, which is especially important within a company with many different levels of employment. When people from all levels work together, it signals that all input is valued, which increases job satisfaction, employee engagement, and a sense of belonging. We believe in fair employment practices for all applicants who are interested in working with us. We will not deny anyone the ability to apply to a position at our company. Our hiring managers perform an objective hiring process, free of bias and discrimination. Bean There is committed to preventing all forms of harassment within the workplace. Creating a hostile work environment doesn't promote job satisfaction, which can have permanent effects on the company. Harassment based on protected characteristics is illegal and will not be tolerated at Bean There. This policy not only applies to employees, but we also expect our customers not to harass each other or the employees. We make it public knowledge to our customers and employees on how and to whom they can report instances of harassment.

      You may wish to consider some bulleted lists to make the examples stand out more visually.

    2. Honesty is a central component of everyday life at Bean There. It's a must within our company, because without it, it wouldn't function correctly. Honesty entails telling the truth, whether or not you're in the wrong

      For each value, list at least one practice that demonstrates that value. It need not be long, but it does make your commitment clearer to the reader of the document.

    Annotators