  1. Feb 2024
    1. Reviewer #4 (Public Review):

      I am a new reviewer for this manuscript, which has been reviewed before. The authors provide a variational autoencoder that has three objectives in the loss: linear reconstruction of behavior from embeddings, reconstruction of neural data, and a KL divergence term related to the variational model elements. They take the output of the VAE as the "behaviorally relevant" part of neural data and call the residual "behaviorally irrelevant". The results compare linear versus nonlinear behavior decoding using the original raw neural data and the inferred behaviorally relevant and irrelevant parts of the signal.
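      For concreteness, a minimal sketch of such a three-term objective is given below. This is an illustrative PyTorch-style reconstruction assembled from the description above, not the authors' implementation; the layer sizes, module names, and loss weights are all assumptions.

      ```python
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class DistilledVAE(nn.Module):
          # Hypothetical sketch: a VAE trained with (i) neural reconstruction,
          # (ii) a linear behavior readout, and (iii) a KL term.
          def __init__(self, n_neurons=100, n_latent=10, n_behavior=2):
              super().__init__()
              self.encoder = nn.Sequential(nn.Linear(n_neurons, 64), nn.ReLU())
              self.to_mu = nn.Linear(64, n_latent)
              self.to_logvar = nn.Linear(64, n_latent)
              self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                           nn.Linear(64, n_neurons))
              self.readout = nn.Linear(n_latent, n_behavior)  # affine map h(.)

          def forward(self, x):
              h = self.encoder(x)
              mu, logvar = self.to_mu(h), self.to_logvar(h)
              z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
              return self.decoder(z), self.readout(z), mu, logvar

      def loss_fn(x, y, x_hat, y_hat, mu, logvar, alpha=1.0, beta=0.1):
          recon = F.mse_loss(x_hat, x)   # term 1: reconstruct neural data
          behav = F.mse_loss(y_hat, y)   # term 2: linear readout of behavior
          kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # term 3
          return recon + alpha * behav + beta * kl  # alpha, beta assumed weights
      ```

      Note that the end-to-end map from neural data x to predicted behavior composes a nonlinear encoder with the affine readout, which is the composition points 1) and 2) below refer to; the suggestion in point 5) amounts to swapping nn.Linear in the readout for a small MLP.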

      Overall, studying neural computations that are behaviorally relevant or not is an important problem, which several previous studies have explored (for example PSID (Sani et al. 2021), TNDM (Hurwitz et al. 2021), TAME-GP (Balzani et al. 2023), pi-VAE (Zhou and Wei 2020), and dPCA (Kobak et al. 2016)). However, this manuscript does not properly put their work in the context of such prior works. For example, the abstract states "One solution is to accurately separate behaviorally-relevant and irrelevant signals, but this approach remains elusive", which is not the case given that these prior works have done that. The same is true for various claims in the main text, for example "Furthermore, we found that the dimensionality of primary subspace of raw signals (26, 64, and 45 for datasets A, B, and C) is significantly higher than that of behaviorally-relevant signals (7, 13, and 9), indicating that using raw signals to estimate the neural dimensionality of behaviors leads to an overestimation" (line 321). This finding was presented in (Sani et al. 2021) and (Hurwitz et al. 2021), which is not clarified here. This issue of putting the work in context has been brought up by other reviewers previously but seems to remain largely unaddressed. The introduction is also inaccurate in that it mixes up methods designed to separate behaviorally relevant information with unsupervised methods that do not aim to do so (e.g., LFADS). The introduction should be significantly revised to explicitly discuss the prior models/works that specifically formulated this behavior separation, what these prior studies found, and how this study differs.

      Beyond the above, some of the main claims/conclusions made by the manuscript are not properly supported by the analyses and results, which has also been brought up by other reviewers but not fully addressed. First, the analyses here do not support the linear readout from the motor cortex because i) by construction, the VAE here is trained to have a linear readout from its embedding in its loss, which can bias its outputs toward doing well with a linear decoder/readout, and ii) the overall mapping from neural data to behavior includes both the VAE and the linear readout and thus is always nonlinear (even when a linear Kalman filter is used for decoding). This claim is also vague as there is no definition of readout from "motor cortex" or what it means. Why is the readout from the bottleneck of this particular VAE the readout of motor cortex? Second, other claims about properties of individual neurons are also confounded because the VAE is a population-level model that extracts the bottleneck from all neurons. Thus, information can leak from any set of neurons to other sets of neurons during the inference of behaviorally relevant parts of signals. Overall, the results do not convincingly support the claims, and thus the claims should be carefully revised and significantly tempered to avoid misinterpretation by readers.

      Below I briefly expand on these as well as other issues, and provide suggestions:

      1) Claims about linearity of "motor cortex" readout are not supported by results yet stated even in the abstract. Instead, what the results support is that for decoding behavior from the output of the dVAE model -- that is trained specifically to have a linear behavior readout from its embedding -- a nonlinear readout does not help. This result can be biased by the very construction of the dVAE's loss that encourages a linear readout/decoding from embeddings, and thus does not imply a finding about motor cortex.

      2) Related to the above, it is unclear what the manuscript means by readout from motor cortex. A clearer definition of "readout" (a mapping from what to what?) in general is needed. The mapping that the linearity/nonlinearity claims refer to is from the *inferred* behaviorally relevant neural signals, which themselves are inferred nonlinearly using the VAE. This should be explicitly clarified in all claims, i.e., that only the mapping from distilled signals to behavior is linear, not the whole mapping from neural data to behavior. Again, to say the readout from motor cortex is linear is not supported, including in the abstract.

      3) Claims about individual neurons are also confounded. The d-VAE distillation process is a population-level embedding, so the individual distilled neurons are not obtainable on their own without using the population data. This population-level approach also raises the possibility that information can leak from one neuron to another during distillation, which is indeed what the authors hope would recover true information about individual neurons that wasn't there in the recording (the pixel denoising example). The authors acknowledge the possibility that information could leak to a neuron that didn't truly have that information and try to rule it out to some extent with some simulations and by comparing the distilled behaviorally relevant signals to the original neural signals. But ultimately, the distilled signals are different enough from the original signals to substantially improve decoding of low-information neurons, and one cannot be sure if all of the information in distilled signals from any individual neuron truly belongs to that neuron. It is still quite likely that some of the improved behavior prediction of the distilled version of low-information neurons is due to leakage of behaviorally relevant information from other neurons, not the former's inherent behavioral information. This should be explicitly acknowledged in the manuscript.

      4) Given the nuances involved in appropriate comparisons across methods and since two of the datasets are public, the authors should provide their complete code (not just the dVAE method code), including the code for data loading, data preprocessing, model fitting and model evaluation for all methods and public datasets. This will alleviate concerns and allow readers to confirm conclusions (e.g., figure 2) for themselves down the line.

      5) Related to 1) above, the authors should explore the results if the affine network h(.) (from embedding to behavior) was replaced with a nonlinear ANN. Perhaps linear decoders would no longer be as close to nonlinear decoders. Regardless, the claim of linearity should be revised as described in 1) and 2) above, and all caveats should be discussed.

      6) The beginning of the section on the "smaller R2 neurons" should clearly define what R2 is being discussed. Based on the response to previous reviewers, this R2 "signifies the proportion of neuronal activity variance explained by the linear encoding model, calculated using raw signals". This should be mentioned and made clear in the main text whenever this R2 is referred to.

      7) Various terms require clear definitions. The authors sometimes use vague terminology (e.g., "useless") without a clear definition. Similarly, discussions regarding dimensionality could benefit from more precise definitions. How is neural dimensionality defined? For example, how is "neural dimensionality of specific behaviors" (line 590) defined? Related to this, I agree with Reviewer 2 that a clear definition of irrelevant should be mentioned that clarifies that relevance is roughly taken as "correlated or predictive with a fixed time lag". The analyses do not explore relevance with arbitrary time lags between neural and behavior data.

      8) CEBRA itself doesn't provide a neural reconstruction from its embeddings, but one could obtain one via a regression from extracted CEBRA embeddings to neural data. In addition to decoding results of CEBRA (figure S3), the neural reconstruction of CEBRA should be computed and CEBRA should be added to Figure 2 to see how the behaviorally relevant and irrelevant signals from CEBRA compare to other methods.
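      For reference, such a reconstruction could be obtained with a simple regression from the embeddings back to the neural data. A minimal sketch, in which the arrays Z and X are placeholders for the extracted CEBRA embeddings and the recorded neural data:

      ```python
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      # Placeholders: Z = CEBRA embeddings, X = neural data (samples x neurons).
      Z = np.random.randn(1000, 8)
      X = np.random.randn(1000, 100)

      reg = LinearRegression().fit(Z, X)  # map embeddings back to neural space
      X_relevant = reg.predict(Z)         # "behaviorally relevant" estimate
      X_irrelevant = X - X_relevant       # residual, by analogy with d-VAE
      print("reconstruction R2:", r2_score(X, X_relevant))
      ```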

      References:

      Kobak, Dmitry, Wieland Brendel, Christos Constantinidis, Claudia E Feierstein, Adam Kepecs, Zachary F Mainen, Xue-Lian Qi, Ranulfo Romo, Naoshige Uchida, and Christian K Machens. 2016. "Demixed Principal Component Analysis of Neural Population Data." Edited by Mark CW van Rossum. eLife 5 (April): e10989. https://doi.org/10.7554/eLife.10989.

      Sani, Omid G., Hamidreza Abbaspourazad, Yan T. Wong, Bijan Pesaran, and Maryam M. Shanechi. 2021. "Modeling Behaviorally Relevant Neural Dynamics Enabled by Preferential Subspace Identification." Nature Neuroscience 24 (1): 140-49. https://doi.org/10.1038/s41593-020-00733-0.

      Zhou, Ding, and Xue-Xin Wei. 2020. "Learning Identifiable and Interpretable Latent Models of High-Dimensional Neural Activity Using Pi-VAE." In Advances in Neural Information Processing Systems, 33:7234-47. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2020/hash/510f2318f324cf07fce24c3a4b89c771-Abstract.html.

      Hurwitz, Cole, Akash Srivastava, Kai Xu, Justin Jude, Matthew Perich, Lee Miller, and Matthias Hennig. 2021. "Targeted Neural Dynamical Modeling." In Advances in Neural Information Processing Systems. Vol. 34. https://proceedings.neurips.cc/paper/2021/hash/f5cfbc876972bd0d031c8abc37344c28-Abstract.html.

      Balzani, Edoardo, Jean-Paul G. Noel, Pedro Herrero-Vidal, Dora E. Angelaki, and Cristina Savin. 2023. "A Probabilistic Framework for Task-Aligned Intra- and Inter-Area Neural Manifold Estimation." https://openreview.net/forum?id=kt-dcBQcSA.

    1. “client” code

      "clients" generally refer to computer applications or devices that access services or resources from a server in a network.

      can range from web browsers accessing internet pages, to applications fetching data from a database server, or devices like smartphones or computers running software that interacts with remote servers.

      central to much of modern computing and internet-based services.
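      As a concrete illustration, here is a minimal HTTP client in Python using only the standard library (the URL is a placeholder):

      ```python
      from urllib.request import urlopen

      # A minimal client: request a resource from a server and read the
      # response, just as a web browser does at a much larger scale.
      with urlopen("https://example.com") as response:
          html = response.read().decode("utf-8")

      print(html[:200])  # show the first 200 characters of the page
      ```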

    1. Reviewer #3 (Public Review):

      Summary:

      The authors propose to invert a mechanistic model of phototransduction in mouse and primate photoreceptors to derive stimuli that compensate for nonlinearities in these cells. They fit the model to a large set of photoreceptor recordings and show in additional data that the compensation works. This can allow the exclusion of photoreceptors as a source of nonlinear computation in the retina, as desired to pinpoint nonlinearities in retinal computation. Overall, the recordings made by the authors are impressive and I appreciate the simplicity and elegance of the idea. The data support the authors' conclusions but the presentation can be improved.

      Strengths:

      - The authors collected an impressive set of recordings from mouse and primate photoreceptors, which is very challenging to obtain.
      - The authors propose to exploit mechanistic mathematical models of well-understood phototransduction to design light stimuli that compensate for nonlinearities.
      - The authors demonstrate through additional experiments that their proposed approach works.

      Weaknesses:

      - The authors use numerical optimization for fitting the parameters of the photoreceptor model to the data. Recently, the field of simulation-based inference has developed methods to do so, including quantification of the uncertainty of the resulting estimates. Since the authors state that two different procedures were used due to the different amounts of data collected from different cells, it may be worthwhile to test these methods instead, as implemented e.g. in the SBI toolbox (https://joss.theoj.org/papers/10.21105/joss.02505); a minimal sketch of such a workflow is given after this list. This would also allow them to directly identify dependencies between parameters and obtain associated uncertainty estimates. It would also make the discussion of how well constrained the parameters are by the data, or how much they vary, more principled, because the SBI uncertainty estimates could be used.

      - In several places, the authors refer the reader to look up specific values, e.g. of parameters, in the associated MATLAB code. I don't think this is appropriate; important values/findings/facts should be in the paper (lines 142, 114, 168). I would even find the precise values that the authors measure interesting, so I think the authors should show them in a figure/table. In general, I would also like to see the average variance explained by different models summarized in a table, along with precise mean/median values for all important quantities (like the response amplitude ratios in Figures 6/9).

      - If the proposed model is supposed to model photoreceptor adaptation on a longer time scale, I fail to see why this can be an invertible model. Could the authors explain this better? I suspect that the model is mainly about nonlinearities as the authors also discuss in lines 360ff.

      - The important Figures 6-8 are very hard to read, as it is not easy to see what the stimulus is, what the modified stimulus is, what the response with and without modification looks like, what the desired output looks like, and what is measured for part B. Reworking these figures would be highly recommended.

      - If I understand Figure 6 correctly, part B is about quantifying the relative size of the response to the first little flash versus the second little flash. While the response amplitude to the second flash is clearly only about 50% of that to the first flash in primate rods and cones in the original condition, the modified stimulus seems to overcompensate and results in a 130% response for the second flash. How do the authors explain this? A similar effect occurs in Figure 9, which the authors should also discuss.
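      As referenced above, a minimal sketch of a simulation-based inference workflow with the sbi package might look as follows. The prior bounds, parameter dimensionality, and simulator are placeholders standing in for the photoreceptor model, and the exact API may differ across sbi versions:

      ```python
      import torch
      from sbi.inference import SNPE
      from sbi.utils import BoxUniform

      # Placeholder prior over 4 model parameters (bounds are illustrative).
      prior = BoxUniform(low=torch.zeros(4), high=torch.ones(4))

      def simulator(theta):
          # Stand-in for the phototransduction model: maps parameters to a
          # summary of the simulated photoreceptor response.
          return theta + 0.1 * torch.randn_like(theta)

      theta = prior.sample((1000,))
      x = simulator(theta)

      inference = SNPE(prior=prior)
      density_estimator = inference.append_simulations(theta, x).train()
      posterior = inference.build_posterior(density_estimator)

      x_o = torch.tensor([0.5, 0.5, 0.5, 0.5])   # observed summary statistics
      samples = posterior.sample((500,), x=x_o)  # posterior draws = uncertainty
      print(samples.mean(dim=0), samples.std(dim=0))
      ```

      The posterior samples directly quantify how well each parameter is constrained by the data, which is the kind of principled uncertainty estimate suggested above.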

    1. So What Would a Static Site Generator for the Rest of Us Look Like?

      Not like a static site generator, that's for sure. Normal people don't want a step between the input source code and the output. They don't want a difference between input and output at all. Programmers want a compilation step, because they're programmers.

    1. It can also do the opposite with gender-biased language that results in stricter enforcement of rules for female minority students rather than other sub-groups.

      I do think this is something to consider and question, because dress codes are often directed more toward female students than male students. I was also wondering whether a parent can take legal action if they disagree with the school's dress code.

    2. Code of Ethics that binds its members together through shared values and purpose.

      I would like to be mindful of this as I become a teacher because I think it is important to adhere to what is expected of me.

    1. software easier to write, promising that you could code in plain English

      Python and C++ are programs written by programs. Of course, those languages are much harder to learn, because their syntax is broader, allowing you to be more precise when communicating with the hardware/the computer.

  2. Jan 2024
    1. I began to program nearly 20 years ago

      In my 4 years of knowing how to code, I would say code is a sort of communication framework between a user and a computer. We code because, even though we can put 2+2 together ourselves, we can give a computer a function where a user inputs two different numbers, and now the computer can add things far faster than we can.
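      A minimal Python version of the kind of function described here:

      ```python
      def add(a, b):
          # The computer applies this to any two numbers a user supplies,
          # far faster than we could by hand.
          return a + b

      print(add(2, 2))        # 4
      print(add(1234, 5678))  # 6912
      ```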

    2. Can you tell me what code is?” “No

      If you can't explain something you're supposed to be good at, are you doing a good job?

    1. Video summary [00:48:26] - [01:11:02]:

      The third part of the video deals with the notion of the values of the Republic: its origin, its definition, and its transmission. The speaker, Ismaël Ferhat, presents a semantic, legal, and pedagogical analysis of this notion, taking laïcité as an example.

      Highlights:

      + [00:48:26] The recent emergence of the values of the Republic
        * A notion little used before the 1980s
        * An increase in publications, the media, and legal codes
        * A rupture linked to debates on immigration, multiculturalism, and separatism
      + [00:54:01] The absence of a constitutional definition of the values of the Republic
        * No text or ruling of the Conseil constitutionnel on the subject
        * An opinion of the Conseil d'État that distinguishes values from principles
        * A circumscribed definition in the Education Code
      + [01:00:00] The challenge of educating students in the values of the Republic
        * A rise of laïcité in school curricula since the 1990s
        * A need to explain the rules and deconstruct oppositions
        * A positive assessment of students' mastery of the principle of laïcité

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      1. General Statements [optional]

      All four reviewers made positive comments on the paper. We fully agree with their comments and with the proposed controls and experiments. Most of these are already incorporated in the present text, and several new figures have been added, as we had already performed the proposed controls/experiments. A few others are now being done, and we hope to have the complete set of experiments ready in 2-3 months.

      2. Description of the planned revisions


      Reviewer #1

      Most of this reviewer's comments have already been addressed and included in the transferred manuscript, except for part of the first comment:

      1.1 b. Is it possible that the loss of function of Wengen on its own has a phenotype? If so, that would suggest that Wgn, in addition to its role in regeneration, might be implicated in pro-survival processes under homeostatic conditions?

      This issue is very important for understanding the differential roles of Wgn and Grnd. First of all, the Wengen knockout (wgnKO; Andersen et al., 2015) is viable in homozygosis. However, in this paper we have focused on inducible mutants. Therefore, we have now crossed the flies to obtain the genotype hh-Gal4 UAS-RNAi-wgn, and we will check for an apoptotic phenotype, as suggested. This will take us a few weeks of work.

      Reviewer #2

      Most comments have already been addressed and included in the transferred manuscript, except for these:

      2.3. Aside from wgn, other RNAi experiments are not validated through independent RNAi lines. I suggest expanding the Supplemental Figures to reproduce a few key findings with independent RNAi lines.

      We have recently received a set of independent RNAi lines from the Bloomington Stock Center to repeat the experiments for Traf1, Traf2, and Ask1. We did not do this before mainly because we wanted to focus on wgn and grnd. However, we agree with Reviewer 2, and we will do the experiments. Additional RNAi lines from VDRC for grnd and Tak1 have been ordered. These experiments will take about 2 months from the crosses to the analysis of the results (some flies have yet to arrive, and many crosses will be done at 17ºC).

      2.4. In Figure 1E, the authors show that wgn RNAi enhances cell death caused by hh>egr. What is missing here is a wgn RNAi control without hh>egr. Is there any cell death caused by the loss of wgn alone (without hh>egr)?

      This control is now in progress. We expect to have it completed in 2 weeks.

      2.5. If wgn RNAi causes some degree of cell death, is the observed effect with hh>egr a significant genetic interaction, or merely additive?

      The result from the previous comment will help us respond to this point.

      2.6. Is the wgn-p38 pathway sufficient to block egr-induced cell death? The authors could test this by having hh>egr in the licT1.1 background. The authors have a more complex experiment in Figure 3, where licT1.1 is introduced into the hh>egr, wgn RNAi background. However, testing the effect of licT1.1 without wgn would establish a more direct relationship between egr and wgn-p38.

      We have set up the crosses for the hh>egr + licT1.1 experiment as suggested. The results will be included in the new version of the manuscript. Estimated time: 1 month.

      Reviewer #3

      All comments have already been addressed and included in the transferred manuscript. See the next sections.

      Reviewer #4

      Major comments:

      4.3 In Figure 5, the cells expressing Rpr appeared to be pulled/extruded basally as expected. It would be beneficial to quantify Wgn and Grnd signals along cross-sections and provide higher magnification images of domain boundaries to illustrate differences in TNFR localization and levels. The micrographs for Grnd in Figure 5B, D, F capture substantial signal from the peripodial epithelium, where the salE/Pv> driver is likely not active?

      We will do a thorough quantification of high-resolution image stacks and include higher magnifications of the analyzed stacks. To this aim, we need a few more weeks to collect the images for each genotype and to process and quantify them. We propose to have this work done in two months.

      4.4 The non-autonomous induction of Wgn seems stronger when facing dying Rpr-overexpressing cells simultaneously depleted of Eiger, compared to Rpr OE alone. Should this be reproducible, could the authors discuss potential reasons for this observation?

      It is difficult to respond to this question without quantification. The quantification suggested in the previous point will allow us to state whether Wgn accumulates more with rpr + egr RNAi than with rpr alone. The previous point will therefore tell us if there are significant differences and, if so, will help us discuss them.

      Timing: The entire plan can be executed in 2-3 months.

      3. Description of the revisions that have already been incorporated in the transferred manuscript


      Reviewer #1

      1.1 a. The result in Fig. 1H is somewhat surprising. How does the overexpression of Egr induce caspase activation in the absence of its receptor Grnd?

      The results of Fig. 1H, in which egr + grnd RNAi + wgn RNAi results in high apoptosis, indicate that wgn downregulation compromises survival even in the absence of grnd. The reviewer correctly asks how the overexpression of Egr can induce caspase activation in the absence of its receptor Grnd.

      There is evidence that Eiger is involved in the regulation of the pro-apoptotic gene head involution defective (hid) in primordial germ cells (Maezawa, 2009, Dev. Growth Differ. 51(4): 453-461) and in the elimination of damaged neurons during development (Shklover et al., 2015). Moreover, Eiger is necessary for HID stabilization and regulates HID-induced apoptosis independently of JNK signaling (Shklover et al., 2015). Therefore, in our discs, egr activation in the absence of grnd and wgn can still result in apoptosis because of the absence of wgn's survival signal and, presumably, the activation of hid.

      We have introduced this issue in the text as:

      “To check for epistasis between grnd and wgn, we activated hh>egrweak and knocked down both TNFRs. We found high levels of cell death compared to wgn RNAi alone (Fig. 1H and 1I), which demonstrates that wgn down-regulation is dominant over grnd. This is surprising, as it is generally assumed that Egr interacts with Grnd to induce apoptosis via JNK, which in turn activates the proapoptotic gene hid (Andersen et al., 2015; Diwanji & Bergmann, 2020; Fogarty et al., 2016; Igaki et al., 2002; Moreno, Yan, et al., 2002; Sanchez et al., 2019; Shlevkov & Morata, 2012). Interestingly, Egr is necessary for HID stabilization and can regulate HID-induced apoptosis independently of JNK (Shklover et al., 2015). Therefore, egrweak cells that downregulate grnd and wgn can still be eliminated because the lack of both the Wgn survival signal and the pro-apoptotic Grnd/JNK signal could result in an alternative pathway of apoptosis.”

      1.2. In Fig. 6, it would be relevant to include wengen inactivation within the domain where rpr is expressed, to show that wengen is not required autonomously for regeneration (sal>rpr + wgn RNAi). What is the phenotype of the adult wing of sal-lexA>rpr + nub-gal4>wgn RNAi animals?

      We have already added a new figure (Fig. S4C) containing these data. As shown, both wgn RNAi alone and wgn RNAi + rpr show no relevant anomalies and regenerate normally. Therefore, we conclude that wgn is not autonomously required for regeneration.

      The adult wings of sal-lexA>rpr + nub-gal4>wgn RNAi animals show strong aberrations, as regeneration is inhibited. This experiment has also been added in another figure (Fig. S4B).

      1.4 Minor. In Fig. 1I, it is surprising that knockdown of neither Grnd nor dTRAF2 significantly affects Egr-induced apoptosis.

      After applying a one-way ANOVA test to compare all groups against all groups in Fig. 1B, no significant differences were detected between Control and RNAi grnd or RNAi dTRAF2 (p > 0.05). But if we apply a Student's t-test, which is less restrictive, we do obtain significant differences:

      Control vs. RNAi grnd: p = 9.48 × 10⁻⁷

      Control vs. RNAi dTRAF2: p = 2.47 × 10⁻⁷

      We have now added in the text:

      “Note that when egrweak cells downregulated dTRAF2 or grnd, the cell death area ratio is slightly lower than with egrweak alone (Fig. 1I), confirming that dTRAF2 and Grnd contribute to apoptosis in egrweak cells.”

      1.5 Minor. The ability of the wing disc to regenerate has been associated with the induction of a developmental delay mediated by Dilp8. Are the authors observing this developmental delay in the case of sal-lexA>rpr + Ap-gal4>wgn RNAi or sal-lexA>rpr + Ci-Gal4>wgn RNAi?

      Indeed, the developmental delay due to Dilp8 has been observed by many laboratories. The reviewer's question is relevant because, if there were no delay in pupariation, regeneration could appear compromised not because regeneration itself has been affected but because regeneration is impeded after pupariation.

      However, a delay in pupariation has been found in our experiments. Usually, for 11 hrs of heat shock (to induce apoptosis), we found 1-2 days of delay.

      We have added the following text:

      “The ability of the wing disc to regenerate after genetic ablation has been associated with the induction of a developmental delay (Colombani et al., 2012; Garelli et al., 2012; Jaszczak et al., 2015; Katsuyama et al., 2015; Smith-Bolton et al., 2009). All genotypes analyzed in figure 6 showed a similar developmental delay of 1-2 days (at 17ºC) after genetic ablation in comparison to the animals of the same genotype in which no genetic ablation was induced, i.e. developed continuously at 17ºC (Fig. S4A). After the adults emerged, the wings were dissected, and regeneration was analyzed.”

      1.7 Minor. The investigation of the evolutionary origin of TNFRs in Drosophila included in Fig. 2 cuts the flow of the results a bit.

      The section on the evolutionary origin now starts with a sentence that smooths the flow, and a few changes have been made in that paragraph:

      “Opposing roles between proteins of the TNFR superfamily suggest that they have an ancient origin and have followed divergent evolutionary paths. To track the differences observed between grnd and wgn, we decided to investigate the evolutionary origin of these two Drosophila genes.”

      1.8 Minor. The authors could explain the double transactivation system in more detail for non-fly specialists.

      The entire section has been rewritten in Materials and Methods.

      1.9 Minor. It could be interesting to include and/or discuss these few references:

      PLoS Genet. 2019 Aug;15(8):e1008133.

      PLoS Genet. 2022 Dec 5;18(12):e1010533.

      FEBS Lett. 2023 Oct;597(19):2416-2432.

      Curr Biol. 2016 Mar 7;26(5):575-84.

      Nat Commun. 2020 Jul 20;11(1):3631.

      All these references, and a few others, have been introduced in the text.

      Reviewer #2

      2.1. The authors find that wgn RNAi enhances hh>egr-induced apoptosis. They validate the results with two independent RNAi lines in Figure S1. However, Figure S1 is missing a control without wgn RNAi, and therefore the results are difficult to assess.

      Fig S1A now contains this control.

      2.2. Are the two independent wgn RNAi lines targeting different regions of the coding sequence?

      As the regions targeted by the two RNAis are different (see below), we have included in the text:

      “This observation was corroborated with an independent RNAi-wgn strain targeting a different region in the coding sequence (Fig. S1A and S1B). “

      Bloomington BL55275 (dsRNA-HMCO3962)

      VDRC GD9152 (dsRNA-GD3427)

      2.7. In Figure 4, the authors show that egr expression induces ROS and perform anti-oxidant experiments. This part could be strengthened if they show that the ROS sensor signal disappears after Sod::Cat expression.

      We have done this experiment, and there is a definite drop in MitoSOX signal in discs in which the weak allele of egr is active. We have included this new image in Figure 4G and in the text.

      2.8. How effective is egr RNAi? In Figure 5E, F, the authors knock down egr and obtain negative results. Based on this, the authors argue that Wgn localization occurs through an egr-independent mechanism. Drawing strong conclusions based on a negative result with egr RNAi is not good practice, since one cannot rule out residual egr activity that mediates the effect (of course, because there is cell death as well, dying cells express egr). I suggest either finding ways to completely abolish egr function or toning down the conclusion.

      We now use 'after knocking down eiger' instead of the 'absence of' or 'abolishing' eiger.

      2.9. Figure 6 shows that wgn RNAi aggravates the reaper phenotype. What's missing is a control that expresses wgn RNAi but not reaper.

      Control experiments using UAS-wgnRNAi in the absence of rpr are now shown in Figure S4C.

      Reviewer #3

      3.1 Minor. Fig. 6C-E would need a control disc without induced apoptosis (i.e., a wildtype disc) stained for phospho-p38 as a baseline comparison. This is important to judge the significance of the remaining phospho-p38 in panel E, where wgn is knocked down. The authors write: "However, after knocking down wgn, phosphorylated p38 in the wing pouch surrounding the apoptotic cells was abolished (Fig. 6E)." Depending on the amount of phospho-p38 in control discs, this may need to be rephrased to "strongly reduced" instead of "abolished".

      A control disc stained for P-p38 has been added in Figure 6.

      We have changed 'abolished' to 'strongly reduced'.

      3.2. This sentence in the Intro needs fixing because TNFa doesn't transduce the signal from TNFR to Ask1, since it's upstream of TNFR: "TNFα can transduce the TRAF-mediated signal from TNFR to Ask1 to modulate its activity (Hoeflich et al., 1999; Nishitoh et al., 1998; Obsil & Obsilova, 2017; Shiizaki et al., 2013)."

      We have rephrased this sentence as:

      “TNFα binds to TNFRs which in turn interact with TRAFs to transduce the signal to Ask1 to modulate its activity”.

      3.3a In the results section, the authors start by ectopically overexpressing Eiger. Are there conditions where Eiger expression is induced in the wing? If yes, it would be helpful for the reader to mention that this system with the genetically GAL4-induced expression of Eiger aims to phenocopy one of these conditions.

      Ectopic expression of Eiger has indeed been used in the wing to generate apoptosis. This is a common technique in Drosophila, and Reviewer #3 is right that a sentence would be useful for the reader.

      A sentence has been introduced at the beginning of the results section:

      “Ectopic expression of egr in Drosophila imaginal discs results in JNK-dependent apoptosis (Brodsky et al., 2004; Igaki et al., 2002; Moreno, Yan, et al., 2002).”

      3.3b Fig 2C is not very self-explanatory: it is worth writing out what Hsa (H. sapiens), Bla and Sco stand for (there is plenty of space).

      We have re-designed figure 2 to make it more self-explanatory.

      3.4. This sentence is confusing: "...Wgn localization were due to ROS or to the expression of egr, we used RNAi to knock down egr in the apoptotic cells and found that reduced Egr/TNFα had no effect on Wgn localization (Fig. 5E, 5F)." The authors may want to specify that Wgn still accumulates even without Egr ("no effect" is unclear).

      This sentence has been modified to:

      “...Wgn localization were due to ROS or to the expression of egr, we used RNAi to knock down egr in the apoptotic cells and found that Wgn accumulation was not altered by knocking down Egr/TNFα (Fig. 5E, 5F).”

      3.5 Comment. The study discovers that Wengen is activated by ROS. In fact, since Wengen binds TNF with an affinity that is several orders of magnitude lower than Grindelwald, and since Wengen is not even located at the cell membrane, these data call into question whether Wengen is a TNF receptor or a ROS receptor. Could the authors comment on this? Could it be that the results obtained in the past showing that Wengen is activated by TNF were actually due to TNF inducing apoptosis, leading to production of ROS, leading to activation of Wengen?

      We fully agree with Reviewer #3. We have added a final paragraph to the discussion section.

      “We speculate that the subcellular location of Wgn and Grnd may contribute to the different functions of both receptors. Grnd is more exposed at the apical side of the plasma membrane, which makes this receptor more accessible for ligand interactions (Palmerini et al., 2021). Wgn, embedded in cytoplasmic vesicles, is less accessible to the ligand and could be more restricted to being activated by local sources of signaling molecules, such as ROS. In contrast to initial reports (Kanda et al., 2002; Kauppila et al., 2003), loss-of-function of wgn does not rescue Egr-induced apoptosis in the Drosophila eye (Andersen et al., 2015), which supports our observation in the wing that Wgn is not required for Egr-induced apoptosis. Instead, Egr-induced apoptosis generates ROS which target intracellular Wgn to foster a cell survival program of cells close to the apoptotic zone.”

      Reviewer #4

      4.1 b. Are phospho-p38 levels increased in cells expressing Egr[weak]?

      We have the results of these experiments. To respond to this point, a new figure has been added (Fig. S4) in which we show that P-p38 levels are increased (non-autonomously) in egrw, as previously found for reaper. In addition, we show that egrw + activation of p35 and egrw + activation of Sod1::Cat result in a strong reduction of P-p38. This indicates that P-p38 is stimulated by the ROS produced by apoptotic cells.

      The text now reads:

      “It is worth noting that egrw cells induce phosphorylation of p38 in neighboring cells (Fig. S4A) and that this, as previously found for rpr (REF), depends on ROS generated by egrw apoptotic cells (Fig. S4B, C).”

      4.2. In Figure 4C it appears that the Dcp-1 positive cells move apically rather than basally. Including nuclear staining would be very informative, allowing assessment of tissue morphology. The authors focus on the pouch region of the wing imaginal disc, where phenotypes are strong and obvious. However, the hh-Gal4 driver also affects posterior cells in the hinge and notum, where the effects of Eiger[weak] overexpression seem weaker (e.g., minimal to no MitoSOX signal in hinge and notum posterior cells). Could the authors explain this observation?

      Point 1: Actually, cells move mostly basally, though some move apically as well. Depending on the section, the image could be confusing. To address this, we now show an apical plane and a basal plane of these discs, both at high magnification. There, one can see a higher concentration of pyknotic nuclei basally. We have added this observation in a new supplementary figure (Fig. S3) and the corresponding text on page 5: "Apoptotic cells in egrweak are characterized by pyknotic nuclei and are positive for Dcp1. These cells tend to concentrate in the basal side of the epithelium, although some are scattered apically (Fig. S3). Accumulation of Wgn was observed in healthy anterior cells adjacent to both apical and basal egrweak cells (Fig. 4, Fig. S3A, B)."

      Point 2 (comment on MitoSOX in the notum): At the stages of the imaginal discs used in this study, almost all notum cells belong to the anterior compartment. The hh-positive cells in the notum are much less abundant; therefore, most of the staining was found in the posterior compartment of the wing pouch.

      4.5 Figure 6C-E. Does WgnRNAi potentiate and GrndRNAi suppress Rpr-induced apoptosis, similarly to their effects when knocked down in Eiger[weak] OE cells?

      The areas controlled by salE/Pv>rpr (dotted lines) are full of pyknotic nuclei, which indicates a concentration of apoptotic cells in all genotypes shown.

      Thus, in the conditions generated here, apoptosis is not inhibited, and grnd RNAi does not interfere with the activation of P-p38. In the wgn knockdown, phospho-p38 is strongly inhibited, indicating the importance of wgn in the phosphorylation of p38.

      To clarify this point, we have added in the text: “Note that rpr-induced apoptosis is not suppressed after knocking down grnd or wgn.” Also in the figure legend we added: “White lines in the confocal images outline the salE/Pv-LHG,LexO-rpr dark area full of pyknotic nuclei of apoptotic cells.”

      4.6 The activation of p38 following salE/Pv>rpr-mediated ablation as shown by immunostaining is noteworthy. While Grnd knockdown leads to phospho-p38 signal enrichment around the rpr-expressing cells, WgnRNAi results in reduced phospho-p38 signal in the wing pouch but also beyond the nub-expression domain. Do salE/Pv>rpr nub>WgnRNAi cells still generate ROS?

      So far, there is no evidence of Wengen acting as a ROS scavenger. We have evidence that ROS (detected using the MitoSOX probe) are produced in egrweak + Wgn RNAi cells. Thus, inhibition of wgn expression does not block ROS production. A new figure shows this observation (Figure S4A).

      4.7 Are ROS responsible for the long-range signaling and p38 activation, referring to the authors' previous work (Santabarbara-Ruiz et al., 2019, PLoS Genet 15(1): e1007926, https://doi.org/10.1371/journal.pgen.1007926, Figure 5G)?

      ROS are responsible for p38 activation, as shown in a new figure (Fig. S4). In this new figure, egrweak is activated in the hh domain, and P-p38 is found in most cells of the posterior compartment, as well as in anterior cells. However, after blocking apoptosis or ROS production, this P-p38 signal is reduced.

      4.8 Minor I propose rephrasing the description of "UAS-Egr[weak] transgene, a strain that produces a reduced Egr/TNFα activity". It could imply a loss of function strain rather than a transgene that causes mild/moderate Egr overexpression.

      The sentence has been rephrased as suggested (end of the first paragraph of the results section).

      4.9 Minor. I recommend the authors revise the charts for improved clarity in genotype representation. For example, in Figure 1I, the label "control-GFP" might be misleading. It would be beneficial to specify that "control" refers to Eiger[weak] alone, with the other manipulations being done simultaneously with Eiger[weak] overexpression.

      All charts have been revised.

      4.10 Minor. Additionally, considering that individuals with color blindness may struggle to differentiate between red and green colors, I strongly suggest using a color-blind-friendly palette, especially in Figure 4A, C, G, and Figure 4A, C, E.

      All images have been revised to use a color-blind-friendly palette.

      4.11 Minor. Providing detailed information regarding the reagents used in the study, such as catalogue numbers or RRIDs, is beneficial for enhancing reproducibility.

      We have added the RRIDs and catalogue numbers. Where no ID was available, we added the reference or a contact.

      4.12. This reviewer points out two limitations that we are now trying to address:

      Limitations:

      - Quality of the imaging: higher magnification images and quantification would enhance the study.
      - The summarizing model may contain excessive speculations that lack support from the data or references to the existing literature.

      Quality of imaging: We now have an extra supplemental figure with higher magnifications. Additional higher magnifications, as well as quantification, will be included in the next version, as explained in Revision Plan points 4.3 and 4.4.

      Model: We have rewritten the paragraph on the model, introduced references, and dropped some speculations. We hope the current version will be more convincing for the reader.

      4. Description of analyses that authors prefer not to carry out


      Reviewer 1

      1.3. Is the overexpression of Wengen sufficient to increase tissue regeneration?

      The reviewer's suggestion touches on a key point in regeneration biology: how can regeneration be accelerated?

      We have demonstrated that Wengen is upstream of the Ask1-p38 axis that drives regeneration. The reviewer wonders whether Wengen overexpression can increase regeneration. In previous work, we demonstrated that p38 activation is key for regeneration but that its overexpression can be deleterious (Esteban-Collado et al., 2021). Only in discs sensitized for low p38 (starvation, low Akt, the Ask1S83A mutant) does overexpression rescue regeneration. Therefore, the levels of Wgn-Ask1-p38 signaling have to be very tightly controlled; an excess will be deleterious. We are aware of the importance of the question, but at this point we do not have the technology to control Wgn-Ask1-p38 levels finely enough to do this experiment.

      1.6 Minor. Is it possible to test the induction of apoptosis in a wgn null mutant background, to see if the phenotype is even stronger than the one observed with RNAi (the wgn RNAi is induced at the same time as egr or rpr overexpression)?

      wgnKO flies survive, but they gave us problems when carrying the transgenes needed for our genetic ablation design. Indeed, we tried to generate wgnKO flies carrying Gal4 + tubGal80 + eigerweak without success.

      In addition, the reason we used an inducible mutant is that it allows us to control the manipulation in time and space without altering expression in cell types beyond the wing disc. Wgn is required in other organs and processes during development, such as the gut, the trachea, and axon growth, and thus we ensure that the affected cells belong to the tissue analyzed.

    1. Video summary [00:00:00] - [00:22:32]:

      This video explains what the dotation horaire globale (DHG, the overall allocation of teaching hours) is and how it is distributed among secondary schools. It covers the legal aspects, the steps of the process, the elements of the calculation, the consequences for teachers and students, and the problems encountered.

      Highlights:

      + [00:00:00] What is the DHG?
        * An envelope of teaching hours allocated to each school
        * An element of pedagogical and educational autonomy
        * A sensitive subject that affects working and learning conditions
      + [00:00:44] The legal framework of the DHG
        * The Education Code and the ministerial decrees that define statutory hours, specific allocations, autonomy hours, etc.
        * The role of the school head, the pedagogical council, and the board of directors in distributing the hours
      + [00:05:00] The steps of the DHG process
        * The finance law that sets the budget of national education
        * The distribution of national allocations among academies, departments, and schools
        * The communication of the DHG to school heads between January and February
        * The construction of the staffing allocation table (TRMD) in consultation with the teaching teams
        * The vote on the TRMD by the board of directors
        * Possible adjustments based on actual enrollment and specific needs
      + [00:15:01] The elements used to calculate the DHG
        * The minimum statutory hours per subject and per level
        * The statutory hours tied to teachers' service obligations
        * The specific hours tied to the school's projects and programs
        * The autonomy hours, which allow the distribution of hours to be modulated and complementary or optional courses to be offered
        * The annual (HSA) or one-off (HSE) overtime hours that compensate for the lack of permanent-position hours
      + [00:19:20] The consequences of the DHG for teachers and students
        * Complements of service, given or received, which force teachers to split their time between several schools
        * Cuts to positions or class groups, which lead to the disappearance or reduction of certain subjects
        * Hours in another subject offered to teachers with incomplete service
        * Overcrowded classes, which harm teaching quality and student safety
        * Fictitious students created to inflate enrollment and preserve resources

    1. Let's start with the light: in each plant item, we add a CareScale component and pass it the value prop:

      Impossible to understand where this line of code goes. You need to be more precise; it's getting tiring. We are learners; we shouldn't have to guess things or waste time hunting for information that should be given to us.

    1. 10. No man can completely guard women by force; but they can be guarded by the employment of the (following) expedients

      Ah yes, more restrictive rules for women. Code #13 does seem reasonable, and it explains why women can't act irrationally. However, the rest of the codes that follow BELOW make it seem like women are deemed children and irresponsible.

    2. The Laws of Manu (c. 100 B.C.E.-200 C.E.)

      "The Laws of Manu" certainly NOT similar to the "Code of Hammurabi" since the Laws of Manu focuses more so on women.

    3. Women must be honoured and adorned

      This is a strong law to start with, especially with women being undervalued in other codes of law like the Code of Hammurabi. Analyzing from this

    4. 13. Drinking (spirituous liquor), associating with wicked people, separation from the husband, rambling abroad, sleeping (at unseasonable hours), and dwelling in other men's houses, are the six causes of the ruin of women

      This code of law seems familiar because this is how Hispanic and Asian families depict a "ruined" woman. My mother would very much agree. Ha!

    5. Akshamala, a woman of the lowest birth, being united to Vasishtha, and Sarangi (being united) to Mandapala, became worthy of honour.

      From this law code, it seems they value their women. Despite the woman's low status in the caste system, they still seem to be valued alongside their gods.


    1. Demo: Writing Functions

      This page provides an introduction to the concept and utility of functions in Python. The progression from basic functions to more complex ones with parameters and return values is well-structured, making it accessible for beginners like me. Particularly helpful are the practical examples and clear explanation of benefits such as code reusability, ease of maintenance, and improved code organization.

    2. We can also re-define our functions by doing a new def statement. It will just replace the old function's definition with the new one:

      Using a for loop to run a defined function makes it easy for me. I don't need to repeatedly type a large paragraph for it to be paraphrased. At the same time, the def function allows me to be more efficient. By doing this, I can reuse it when I need to, making changes to the code easier as well.
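      A minimal illustration of the redefinition behavior described in the quoted passage, plus the reuse-in-a-loop point from the comment above:

      ```python
      def greet():
          return "Hello"

      print(greet())  # Hello

      # A new def statement with the same name replaces the old definition.
      def greet():
          return "Hi there"

      print(greet())  # Hi there

      # Define once, call many times inside a loop.
      for name in ["Ada", "Alan", "Grace"]:
          print(greet() + ", " + name)
      ```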

    1. Telework was introduced into the Code du travail at Article 1222-9 by the law of 23 March 2012 (Article 46 of the so-called Warsmann law defines telework). This law provides for measures on data protection and the preservation of privacy. The Accord National Interprofessionnel of 19 July 2005, in its first article, gives the following definition of telework:

      Legal framework set out by the author - epistemic argument

    1. Data can be poisoned intentionally as well. For example, in 2021, workers at Kellogg’s were upset at their working conditions, so they agreed to go on strike, and not work until Kellogg’s agreed to improve their work conditions. Kellogg’s announced that they would hire new workers to replace the striking workers: Kellogg’s proposed pay and benefits cuts while forcing workers to work severe overtime as long as 16-hour-days for seven days a week. Some workers stayed on the job for months without a single day off. The company refuses to meet the union’s proposals for better pay, hours, and benefits, so they went on strike. Earlier this week, the company announced it would permanently replace 1,400 striking workers. People Are Spamming Kellogg’s Job Applications in Solidarity with Striking Workers – Vice MotherBoard People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions. Then Sean Black, a programmer on TikTok saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:

      This demonstrates the potential of coordinated online campaigns to disrupt corporate behavior and amplify worker demands. By weaponizing data through the mass submission of fake job applications, protesters leveraged technology to make a tangible impact. This tactic, while controversial, illustrates a novel form of digital strike that blurs the line between activism and online disruption, and it raises questions about the ethics and effectiveness of such methods in labor disputes.

    1. Author Response

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      The present study provides a phylogenetic analysis of the size of prefrontal areas in primates, aiming to investigate whether the relative sizes of the rostral prefrontal cortex (frontal pole) and the dorsolateral prefrontal cortex vary according to known ecological or social variables.

      I am very much in favor of the general approach taken in this study. Neuroimaging now allows us to obtain more detailed anatomical data in a much larger range of species than ever before, and this study shows the questions that can be asked using these types of data. In general, the study is conducted with care, focusing on anatomical precision in the definition of the cortical areas and using appropriate statistical techniques, such as PGLS. That said, there are some points where I feel the authors could have taken their care a bit further and, as a result, informed the community even more about what is in their data.

      We thank the reviewer for this globally positive evaluation of our work, and we appreciate the advice on how to improve our manuscript.

      The introduction sets up the contrast of 'ecological' (mostly foraging) and social variables of a primate's life that can be reflected in the relative size of brain regions. This debate is for a large part a relic of the literature and the authors themselves state in a number of places that perhaps the contrast is a bit artificial. I feel that they could go further in this. Social behavior could easily be a solution to foraging problems, making them variables that are not in competition, but simply different levels of explanation. This point has been made in some of the recent work by Robin Dunbar and Susanne Shultz.

      Thank you for this constructive comment; we acknowledge that the contrast between the social and ecological brain hypotheses is relatively marginal here. Based also on the first remark by reviewer 3, we have reformulated the introduction to emphasize what we think is actually more critical: the link between cognitive functions as defined in laboratory conditions and socio-ecological variables measured in natural conditions, and the fact that here we use brain measures as a potential tool to relate these laboratory vs natural variables through a common scenario. Also, we were already mentioning the potential interaction between social and foraging processes in the discussion, but we are happy to add a reference to the recent studies by S. Shultz and R. Dunbar (2022), which are indeed directly relevant. We thank the reviewer for pointing out this literature.

      In a similar vein, the hypotheses of relating frontal pole to 'meta-cognition' and dorsolateral PFC to 'working memory' is a dramatic oversimplification of the complexity of cognitive function and does a disservice to the careful approach of the rest of the manuscript.

      We agree that the formulation of which functions we were attributing to the distinct brain regions might not have been clear enough, but the functional relations between the frontal pole and metacognition on the one hand, and between the DLPFC and working memory on the other, have been firmly established in the literature, both through laboratory studies and through clinical data. Clearly, no single brain region is necessary and sufficient for any cognitive operation, but decades of neuropsychology have demonstrated the differential implication of distinct brain regions in distinct functions, which is all we mean here. We have made a specific point on that topic in the discussion (cf. p. 16). We have also reformulated the introduction to clarify that, even if the relation between these regions and their functions (FP/metacognition; DLPFC/working memory) is clear in laboratory conditions, it is not clear whether this mapping can be used for real-life conditions. Therefore, whether that simplification is justified beyond the lab (and the clinic), and whether these neuro-cognitive concepts can be applied to natural conditions, are indeed critical questions that we wanted to address. The central goal of the present study was precisely to evaluate the extent to which this brain/cognition relation could be used to understand more natural behaviors and functions, and we hope that this now appears more clearly.

      One can also question the predicted relationship between frontal pole meta-cognition and social abilities versus foraging, as Passingham and Wise show in their 2012 book that it is frontal pole size that correlates with learning ability, an argument that they used to relate this part of the brain to foraging abilities. I would strongly suggest the authors refrain from using such descriptive terms. Why not simply use the names of the variables actually showing significant correlations with relative size of the areas?

We basically agree with the reviewer, and we acknowledge the lack of clarity in the introduction of the previous manuscript. There was indeed a lot of ambiguity in what we were referring to as the 'function' associated with a given brain region; 'function' referred to far too many things! We have reformulated the introduction not only to clarify the different types of functions that were attributed to distinct brain regions in the literature, but also to clarify how this study addresses the question: by trying to articulate concepts from laboratory neuroscience with concepts from behavioral ecology and evolution using intuitive scenarios. We hope that the present version of the introduction makes that point clearer.

The major methodological judgements in this paper are of course in the delineation of the frontal pole and dorsolateral prefrontal cortex. As I said above, I appreciate how carefully the authors describe their anatomical procedure, allowing researchers to replicate and extend their work. They are also careful not to relate their regions of interest to precise cytoarchitectonic areas, as such a claim would be impossible to make without more evidence. That said, there is a judgement call made in using the principal sulcus as a boundary-defining landmark for FP in monkeys and the superior frontal sulcus in apes. I do not believe that these sulci are homologous. Indeed, the authors themselves go on to argue that dorsolateral prefrontal cortex, where studied using cytoarchitecture, stretches to the fundus of the principal sulcus in monkeys, but all the way to the inferior frontal sulcus in apes. That means that using the fundus of PS is not a good landmark.

We thank the reviewer for his kind remarks on our careful descriptions. That said, it is not clear that our choice of the principal sulcus as a boundary for the FP in monkeys versus the superior frontal sulcus in apes is actually a judgement call. First and foremost, there is no clear and unambiguous definition of what the boundaries of the FP should be, short of full cytoarchitectonic maps, which are clearly out of reach here. In humans and great apes we used Bludau et al 2014 (i.e., the superior frontal sulcus), and in monkeys we chose a conservative landmark that excluded area 9, which is traditionally associated with the DLPFC (Petrides, 2005; Petrides et al, 2012; Semendeferi et al, 2001).

      Of course, any definition will attract criticism, so the best solution might be to run the analysis multiple times, using different definitions for the areas, and see how this affects results.

Indeed, functional maps indicate that the dorsal part of the anterior PFC in monkeys is functionally part of the FP. But again, cytoarchitectonic maps also indicate that this part of the brain includes BA 9, which is traditionally associated with the DLPFC (Petrides, 2005; Petrides et al, 2012). As already pointed out in the discussion, there is a functional continuum between FP and DLPFC, and our goal in using the PS as the dorsal border was to be very conservative and to exclude the ambiguous area. But we agree with the reviewer that, given that this decision is arbitrary, it was worth exploring other definitions of the FP volume. We therefore completed a new analysis with a less conservative definition of the FP that includes this ambiguous dorsal area, now reported in the supplementary material. Perhaps as expected, including the ambiguous area in the FP volume shifted the relation with socio-ecological variables towards the pattern displayed by the DLPFC (i.e., the influence of population density decreased). The most parsimonious interpretation of these results is that when the border of the FP region is extended to cover a part of the brain that might belong to the DLPFC, or that might be functionally intermediate between the two, the specific relation of the FP with socio-ecological variables decreases. Thus, even if we agree that it was important to conduct this analysis, we believe that it only confirms the difficulty of identifying a clear boundary between FP and DLPFC. Again, we have clearly acknowledged throughout the manuscript the lack of precision in our definitions of the functional brain regions. In that frame, the conservative option seems more appropriate, and for the sake of clarity the results of the additional analysis with an FP volume that includes the ambiguous area are only reported in the supplementary material.

If I understand correctly, the PGLS was run separately for the three brain measures (whole brain, FP, DLPFC). However, given that the measures are so highly correlated, is there an argument for an analysis that allows testing on residuals? In other words, to test effects of relative size of FP and DLPFC over and above brain size?

      Generally, using residuals as “data” (or pseudo-data) is not recommended in statistical analyses. Two widely cited references from the ecological literature are:

      Garcia-Berthou E. (2001) On the Misuse of Residuals in Ecology: Testing Regression Residuals vs. the Analysis of Covariance. Journal of Animal Ecology, 70 (4): 708-711.

Freckleton RP. (2002). On the misuse of residuals in ecology: regression of residuals vs. multiple regression. Journal of Animal Ecology 71: 542–545. https://doi.org/10.1046/j.1365-2656.2002.00618.x.

The main reason for this recommendation is that residuals depend on the fitted model, and thus on the particular sample under consideration, which biases any significant effects that might then be inferred from them.
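To make the issue concrete, here is a minimal simulation contrasting the two approaches (a sketch with synthetic data and illustrative variable names, not our PGLS models). When predictors are correlated, the first regression absorbs part of the second predictor's effect, so regressing its residuals on that predictor is biased, whereas the multiple regression recovers the true coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)                           # first predictor
x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)     # correlated second predictor
y = x1 + x2 + rng.normal(size=n)                  # true effect of x2 is 1.0

def ols(X, y):
    """Least-squares coefficients, intercept first."""
    Xd = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

# "Residuals as data": regress y on x1, then the residuals on x2.
resid = y - np.column_stack([np.ones(n), x1]) @ ols(x1, y)
b_resid = ols(x2, resid)[1]                       # ~0.34: badly biased

# Multiple regression: fit both predictors jointly.
b_multi = ols(np.column_stack([x1, x2]), y)[2]    # ~1.00: unbiased

print(f"residual regression: {b_resid:.2f}, multiple regression: {b_multi:.2f}")
```

Because x1 and x2 share variance, the first fit assigns the shared portion to x1, and the residuals retain only a fraction of the true x2 effect; this model-dependence of residuals is exactly the problem raised by Freckleton (2002).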

In the discussion and introduction, the authors discuss how size of the area is a proxy for number of neurons. However, as shown by Herculano-Houzel, this assumption does not hold across species. Across monkeys and apes, for instance, there is a difference in how many neurons can be packed per volume of brain. There is even earlier work from Semendeferi showing how the frontal pole especially shows distinct neuron-to-volume ratios.

      We appreciate the reviewer’s comment, but the references to Herculano-Houzel that we have in mind do indicate that the assumption is legitimate within primates.

Herculano-Houzel et al (2007) show that the neuronal density of the cortex is well conserved across primate species (though only monkeys were studied). The conclusion of that study is that using volumes as a proxy for number of neurons, as a measure of computational capacity, should be avoided when comparing rodents and primates (and, as they showed later, even more so birds, for which neuronal density is higher). But within primates, since neuronal densities are conserved, volume is a good predictor of number of neurons. Gabi et al (2016) provide evidence that the neuronal density of the PFC is well conserved between humans and non-human primates, which implies that including humans and great apes in the comparison is legitimate. In addition, the brain regions included in the analysis presumably cover very similar architectonic regions (e.g. BA 10 for FP, BA 9/46 for DLPFC), which also suggests that the neuronal density should be relatively well conserved across species. Altogether, we believe that there is sufficient evidence to support the idea that the volume of a PFC region in primates is a good proxy for the number of neurons in that region, and therefore for its computational capacity.

Semendeferi and colleagues (2001) pointed out some differences in cytoarchitectonic properties across parts of the FP and discussed how these properties could 1) be used to identify area 10 across species and 2) be associated with distinct computational properties, with the idea that thicker cell-body-free layers would leave more space for establishing connections (across dendrites and axons). This pioneering work, together with more recent imaging studies on functional connectivity (e.g. Sallet et al, 2013), emphasizes the critical contribution of connectivity patterns as a tool for comparative anatomy. But unfortunately, as already pointed out in the discussion, this is currently out of reach for us.

      We acknowledge the limitations, and to be fair, the notion of computational capacity itself is hard to define operationally. Based on the work of Herculano-Houzel et al, average density is conserved enough across primates (including humans) to justify our approximation. We have tried to define our regions of interest using both anatomical and functional maps and, thanks to the reviewer’s suggestions, we even tried several ways to segment these regions. Functional maps in macaques and humans do not exactly match cytoarchitectonic maps, presumably because functions rely not only upon the cytoarchitectonics but also on connectivity patterns (e.g. Sallet et al, 2013).

In sum, we appreciate the reviewer's point but feel that, given the current understanding of brain functions and the relative conservation of neuronal density across primate PFC regions, the volume of a PFC region seems to be a reasonable proxy for its number of neurons, and therefore for its computational capacity. We have added these points to the discussion, and we hope that the reader will be able to get a fair sense of how legitimate that position is, given the literature.

Overall, I think this is a very valuable approach and the study demonstrates what can now be achieved in evolutionary neuroscience. I do believe that the authors can be even more thorough and precise in their measurements and claims.

      Reviewer #2 (Public Review):

In the manuscript entitled "Linking the evolution of two prefrontal brain regions to social and foraging challenges in primates" the authors measure the volume of the frontal pole (FP, related to metacognition) and the dorsolateral prefrontal cortex (DLPFC, related to working memory) in 16 primate species to evaluate the influence of socio-ecological factors on the size of these cortical regions. The authors select 11 socio-ecological variables and use a phylogenetic generalized least squares (PGLS) approach to evaluate the joint influence of these socio-ecological variables on the neuro-anatomical variability of FP and DLPFC across the 16 selected primate species; in this way, the authors take into account the phylogenetic relations across primate species in their attempt to discover the influence of socio-ecological variables on FP and DLPFC evolution.

The authors run their studies on brains collected from 1920 to 1970 and preserved in formalin solution. Also, they obtained data from the Muséum National d'Histoire Naturelle in Paris and from the Allen Brain Institute in California. The main findings show that the volume of the FP, the DLPFC, and the Rest of the Brain (ROB) across the 16 selected primate species is related to three socio-ecological variables: body mass, daily traveled distance, and population density. The authors conclude that metacognition and working memory are critical for foraging in primates and that FP volume is more sensitive to social constraints than DLPFC volume.

      The topic addressed in the present manuscript is relevant for understanding human brain evolution from the point of view of primate research, which, unfortunately, is a shrinking field in neuroscience.

      We must not have been clear enough in our manuscript, because our goal is precisely not to separate humans from other primates. This is why, in contrast to other studies, we have included human and non-human primates in the same models. If our goal had been to study human evolution, we would have included fossil data (endocasts) from the human lineage.

But the experimental design has two major weak points: the absence of lissencephalic primates among the selected species and the delimitation of FP and DLPFC. Also, a general theoretical and experimental framework linking evolution (phylogeny) and development (ontogeny) is lacking.

      We admit that lissencephalic species could not be included in this study because we use sulci as key landmarks. We believe that including lissencephalic primates would have introduced a bias and noise in our comparisons, as the delimitations and landmarks would have been different for gyrencephalic and lissencephalic primates. Concerning development, it is simply beyond the scope of our study.

      Major comments.

      1) Is the brain modular? Is there modularity in brain evolution?: The entire manuscript is organized around the idea that the brain is a mosaic of units that have separate evolutionary trajectories:

      "In terms of evolution, the functional heterogeneity of distinct brain regions is captured by the notion of 'mosaic brain', where distinct brain regions could show a specific relation with various socio-ecological challenges, and therefore have relatively separate evolutionary trajectories".

This hypothesis is problematic for several reasons. One of them is that each evolutionary module of the brain mosaic should originate in embryological development from a defined progenitor (or progenitors) domain [see García-Calero and Puelles (2020)]. Also, each evolutionary module should comprise connections with other modules; in the present case, FP and DLPFC have not evolved alone but in concert with, at least, their corresponding thalamic nuclei and striatal sector. Did those nuclei and sectors also expand across the selected primate species? Can the authors relate FP and DLPFC expansion to a shared progenitor domain across the analyzed species? This would be key to proposing homology hypotheses for FP and DLPFC across the selected species. The authors use the comparative approach throughout but never explicitly state their criteria for defining homology of the cerebral cortex sectors analyzed.

We do not understand what the referee is referring to with the word 'module', or why it relates to development. The same applies to the anatomical relation with subcortical structures. Yes, the identity of distinct functional cortical regions relies upon subcortical inputs during development, but investigating this is neither technically feasible nor, in any case, relevant here.

      We acknowledge, however, that our definition of functional regions was not precise enough, and we have updated the introduction to clarify that point. In short, we clearly do not want to make a strong case for the functional borders that we chose for the regions of interest here (FP and DLPFC), but rather use those regions as proxies for their corresponding functions as defined in laboratory conditions for a couple of species (rhesus macaques and humans, essentially).

Contemporary developmental biology has shown that the selection of morphological brain features happens within severe developmental constraints. Thus, the authors need a hypothesis linking the evolutionary expansion of FP and DLPFC during development. Otherwise, the claims of the mosaic brain and modularity lack fundamental support.

      Once again, we do not think that our definition of modules matches what the reviewer has in mind, i.e. modules defined by populations of neurons that developed together (e.g. visual thalamic neurons innervating visual cortices, themselves innervating visual thalamic neurons). Rather, the notion of mosaic brain refers to the fact that different parts of the brain are susceptible to distinct (but not necessarily exclusive) sources of selective pressures. The extent to which these ‘developmental’ modules are related to ‘evolutionary’ modules is clearly beyond the scope of this paper.

Our goal here was to evaluate the extent to which modules that were defined based on cognitive operations identified in laboratory conditions could be related (across species) to socio-ecological factors as measured in wild animals. Again, we agree that the way these modules/functional maps were defined in the paper was confusing, and we hope that the new version of the manuscript makes this point clearer.

      Also, the authors refer most of the time to brain regions, which is confusing because they are analyzing cerebral cortex regions.

      We do not understand why the term ‘brain’ is more confusing than ‘cerebral cortex’, especially for a wide audience.

2) Definition and delimitation of FP and DLPFC: The preceding questions are also related to the definition and parcellation of FP and DLPFC. How are homologous cortical sectors defined across primate species? And then, how are those sectors parcellated?

      The authors delimited the FP:

      "...according to different criteria: it should match the functional anatomy for known species (macaques and humans, essentially) and be reliable enough to be applied to other species using macroscopic neuroanatomical landmarks".

      There is an implicit homology criterion here: two cortical regions in two primate species are homologs if these regions have similar functional anatomy based on cortico-cortical connections. Also, macroscopic neuroanatomical landmarks serve to limit the homologs across species.

      This is highly problematic. First, because similar function means analogy and not necessarily homology [for further explanation see Puelles et al. (2019); García-Cabezas et al. (2022)].

We are not sure we follow the Reviewer's point here. First, it is not clear what evolutionary scenario this comment implies (evolutionary divergence followed by reversion leading to convergence?). Second, based on the literature, both the DLPFC and the FP display strong similarities between macaques and humans in terms of connectivity patterns (Sallet et al, 2013), lesion-induced deficits, and task-related activity (Mansouri et al, 2017). These criteria are usually sufficient to call two regions functionally equivalent. We do not see how this explanation is "highly problematic", as it is clearly the most parsimonious based on our current knowledge.

      Second, because there are several lissencephalic primate species; in these primates, like marmosets and squirrel monkeys, the whole approach of the authors could not have been implemented. Should we suppose that lissencephalic primates lack FP or DLPFC?

We understand neither the reviewer's logic nor the tone. We understand that the reviewer is concerned by the debate on whether some laboratory species are more relevant than others for studying the human prefrontal cortex, but this is clearly not the objective of our work. As explained in the manuscript, we identified FP and DLPFC based on functional maps in humans and laboratory monkeys (macaques), and we used specific gyri as landmarks that could be reliably used in other species. And, as rightfully pointed out by reviewer 1, this is in and of itself not so trivial. Of course, lissencephalic animals could not be studied because we could not find these landmarks, but why would that mean that they do not have a prefrontal cortex? The reviewer implies that species that we did not study do not have a prefrontal cortex, which makes little sense. Standards in the field of comparative anatomy of the PFC, especially when it involves rodents (also lissencephalic), include cytoarchitectonic and connectivity criteria, but obviously we are not in a position to address these here. We have, however, included references in the discussion to the seminal work of Angela Roberts and collaborators on marmoset prefrontal functions, to reinforce the idea that the functional organization is relatively well conserved across all primates (with or without gyri on their brains) (Dias et al, 1996; Roberts et al, 2007).

Do these primates have significantly more simplistic ways of life than gyrencephalic primates? Marmosets and squirrel monkeys have quite small brains; does it imply that they have not experienced the influence of socio-ecological factors on the size of FP, DLPFC, and the rest of the brain?

Again, none of this is relevant here, because we cannot draw conclusions about species that we could not study, for methodological reasons. The reviewer seems to believe that an absence of evidence is equivalent to evidence of absence, but we do not.

      The authors state that:

      "the strong development of executive functions in species with larger prefrontal cortices is related to an absolute increase in number of neurons, rather than in an increase in the ration between the number of neurons in the PFC vs the rest of the brain".

      How does it apply to marmosets and squirrel monkeys?

Again, we do not understand the reviewer's point, since it is widely accepted that lissencephalic monkeys display both a prefrontal cortex and executive functions (again, see the work of Angela Roberts cited above). Our goal here was certainly not to enter the debate over what the prefrontal cortex is in a handful of laboratory species, but to evaluate the relevance of laboratory-based neuro-cognitive concepts for understanding primates in general, and in their natural environment.

      References:

García-Cabezas MA, Hacker JL, Zikopoulos B (2022) Homology of neocortical areas in rats and primates based on cortical type analysis: an update of the Hypothesis on the Dual Origin of the Neocortex. Brain structure & function. Online ahead of print. doi:10.1007/s00429-022-02548-0

      García-Calero E, Puelles L (2020) Histogenetic radial models as aids to understanding complex brain structures: The amygdalar radial model as a recent example. Front Neuroanat 14:590011. doi:10.3389/fnana.2020.590011

      Nieuwenhuys R, Puelles L (2016) Towards a New Neuromorphology. doi:10.1007/978-3-319-25693-1

      Puelles L, Alonso A, Garcia-Calero E, Martinez-de-la-Torre M (2019) Concentric ring topology of mammalian cortical sectors and relevance for patterning studies. J Comp Neurol 527 (10):1731-1752. doi:10.1002/cne.24650

      Reviewer #3 (Public Review):

      This is an interesting manuscript that addresses a longstanding debate in evolutionary biology - whether social or ecological factors are primarily responsible for the evolution of the large human brain. To address this, the authors examine the relationship between the size of two prefrontal regions involved in metacognition and working memory (DLPFC and FP) and socioecological variables across 16 primate species. I recommend major revisions to this manuscript due to: 1) a lack of clarity surrounding model construction; and 2) an inappropriate treatment of the relative importance of different predictors (due to a lack of scaling/normalization of predictor variables prior to analysis). My comments are organized by section below:

We thank the reviewer for the globally positive evaluation and for the constructive remarks.

Introduction:

      • Well written and thorough, but the questions presented could use restructuring.

      Again, we thank the reviewer, and we believe that this is coherent with some of the remarks of reviewer 1. We have extensively revised the introduction, toning down the social vs ecological brain issue to focus more on what is the objective of the work (evaluating the relevance of lab based neuro-cognitive concepts for understanding natural behavior in primates).

      Methods:

• It is unclear which combinations of models were compared or why only population density and distance travelled appear to have been included.

The details of the model comparison analysis were presented as a table in the supplementary material (#3, details of the model comparison data), but we understand that this was not clear enough. We have provided more explanation both in the main manuscript and in the supplementary material. All variables were considered a priori; however, we first ran exploratory analyses that led us to exclude some variables because of their lack of resolution (not enough categories for qualitative variables) or strong cross-correlations with other quantitative variables. Many more than three variables were included in the models, but the combination of these three (body mass, daily traveled distance and population density) best predicted (had the smallest AIC) the size of the brain regions. We provide additional information about these exploratory analyses in the supplementary material, sections 2 and 3.
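For readers unfamiliar with the selection logic, the sketch below illustrates an exhaustive AIC comparison over predictor subsets. It uses ordinary least squares and made-up data purely for brevity; the actual fits were PGLS models with a phylogenetic covariance structure, and the variable names here are placeholders:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 16                                             # one data point per species
names = ["body_mass", "daily_distance", "pop_density", "group_size"]
X = rng.normal(size=(n, len(names)))               # placeholder (log) predictors
y = X[:, :3] @ np.array([0.8, 0.5, 0.4]) + rng.normal(scale=0.3, size=n)

def aic(Xsub, y):
    """AIC of a Gaussian OLS fit with an intercept."""
    Xd = np.column_stack([np.ones(len(y)), Xsub])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    rss = np.sum((y - Xd @ beta) ** 2)
    k = Xd.shape[1] + 1                            # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

subsets = [c for r in range(1, len(names) + 1)
           for c in combinations(range(len(names)), r)]
best = min(subsets, key=lambda c: aic(X[:, list(c)], y))
print("lowest-AIC model:", [names[i] for i in best])
```

The model retained in the paper is the analogue of `best` here: the predictor combination minimizing AIC after the exploratory screening described above.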

      • Brain size (vs. body size) should be used as a predictor in the models.

We do not understand the theoretical reason for replacing body size by brain size in the models. Brain size is not a socio-ecological variable. And of course, that would be impossible when modeling brain size itself. Or is the reviewer suggesting using brain size as a covariate, to evaluate the effects of the other variables in the model over and above their effect on brain size? But what would be the theoretical basis for this?

      • It is not appropriate to compare the impact of different predictors using their coefficients if the variables were not scaled prior to analysis.

We thank the Reviewer for this comment; however, standardized coefficients are not unproblematic because their calculation is based on the estimated standard deviations of the variables, which are likely to be affected by sampling (indeed, more so than the means). We note that standardized coefficients have attracted several criticisms in the literature (see the References section of https://en.wikipedia.org/wiki/Standardized_coefficient). Nevertheless, we now provide a table with these coefficients, which allows an easy comparison for the present study. We also updated tables 1, 2 and 3 to include standardized beta values.
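For reference, the conversion at issue is the textbook one, shown here only to make explicit why estimated standard deviations enter the calculation:

```latex
% Standardized coefficient for predictor x_j: the raw slope rescaled by the
% sample standard deviations of the predictor and of the response.
\hat{\beta}^{\,\mathrm{std}}_{j} \;=\; \hat{\beta}_{j}\,\frac{s_{x_j}}{s_{y}}
```

Because s_{x_j} and s_y are themselves sample estimates, sampling variability in the spread of the data propagates directly into the standardized values, which is the core of the criticism noted above.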

      Reviewer #1 (Recommendations For The Authors):

      N/A

      Reviewer #2 (Recommendations For The Authors):

Contemporary developmental biology has shown that the brain of all mammals, including primates, develops out of a bauplan (or blueprint) made of several fundamental morphological units that have invariant topological relations across species (Nieuwenhuys and Puelles 2016).

      At some point in the discussion the authors acknowledge that:

      "Our aim here was clearly not to provide a clear identification of anatomical boundaries across brain regions in individual species, as others have done using much finer neuroanatomical methods. Such a fine neuroanatomical characterization appears impossible to carry on for a sample size of species compatible with PGLS".

I do not think it would be impossible to carry out such a neuroanatomical characterization. It would take time and effort, but it is feasible. Such characterization, if performed within the framework of contemporary developmental biology, would allow for a well-founded definition and delineation of cortical sectors across primate species, including lissencephalic ones, and would allow for meaningful homologies and interspecies comparisons.

We do not see how our work would benefit from developmental biology at this point: it is concerned with evolution, and development and evolution are very distinct biological phenomena. We do not understand the reviewer's focus on lissencephalic species either, because they are not so prevalent across primates, and it is unlikely that adding a couple of lissencephalic species would change the conclusions much.

      Minor points:

      • Please, format references according to the instructions of the journal.

      Ok - done

      • The authors could use the same color code across Figures 1, 2, and 3.

Ok - done

      • The authors say that group hunting "only occurs in a few primate species", but it also occurs in wolves, whales, and other mammalian species.

We focus on primates here; these other species are irrelevant. Again, this is beside the point.

      Reviewer #3 (Recommendations For The Authors):

      My comments are organized by section below:

      Introduction:

      • Well written and thorough

• The two questions presented towards the end of the intro are not clear and do not guide the structure of the methods/results sections. I believe it would be more appropriate to ask: 1) whether the relative proportions of the FP and DLPFC (relative to ROB) are consistent across primates; and 2) whether the relative size of these regions is best predicted by social and/or ecological variables. Then, the results sections could be organized according to these questions (current results section 1 = 1; current results sections 2, 3, 4 = 2.1, 2.2, 2.3)

As explained above, we agree with the reviewer that the introduction was somewhat misleading, and we have edited it extensively. We do not, however, agree with the reviewer regarding the relative (vs absolute) measure. We have discussed this in our response to reviewer 1 regarding the comparison of regional volumes as proxies for numbers of neurons. The best predictor of the computing capacity of a brain region is its number of neurons, but there is no reason to believe that this capacity should decrease if the rest of the brain increases, as implied by the relative measure that the reviewer proposes. That debate is probably critical in the field of comparative neuroanatomy, and confronting different perspectives would surely be both interesting and insightful, but we feel that it is beyond the scope of the present article.

      Methods:

• While the methods are straightforward and generally well described, it is unclear which combinations of models were compared or why only population density and distance travelled appear to have been included (e.g., in Fig SI 3.1) even though many more variables were collected.

      We agree that this was not clear enough, and we have tried to improve the description of our model comparison approach, both in the main text and in the supplementary material.

• Why was body mass rather than ROB used as a predictor in the models? The authors should instead/also include analyses using ROB (so the analysis is of FP and DLPFC size relative to brain size). Using body mass confounds the analyses since they will be impacted by differences in brain size relative to body size.

Again, we have addressed this issue above. First, body size is a socio-ecological variable (if anything, it especially predicts energetic needs and energy expenditure), but ROB is clearly not. We do not see the theoretical relevance of ROB in a socio-ecological model. Second, from a neurobiological point of view, since within primates the volume of a given brain region is directly related to its number of neurons (again, see the work of Herculano-Houzel), which is a good proxy for its computing capacity, we do not see the theoretical reason for considering ROB.

      • It is not appropriate to compare the impact of different predictors using their coefficients if the variables were not scaled prior to analysis. The authors need to implement this in their approach to make such claims.

      We thank the reviewer again for pointing that out. We have addressed this question above.

      • Differences across primates in terms of frontal lobe networks throughout the brain should be acknowledged (e.g., Barrett et al. 2020, J Neurosci).

      We have added that reference to the discussion, together with other references showing that the difference between human and non-human primates is significant, but essentially quantitative, rather than qualitative (the building blocks are relatively well conserved, but their relative weight differs a lot). Thank you for pointing it out.

      I hope the authors find my comments helpful in revising their manuscript.

      And we thank again the reviewer for the helpful and constructive comments.

    1. Author Response

      The following is the authors’ response to the original reviews.

      eLife assessment

      This manuscript provides a fundamental contribution to the understanding of the role of intrinsically disordered proteins in circadian clocks and the potential involvement of phase separation mechanisms. The authors convincingly report on the structural and biochemical aspects and the molecular interactions of the intrinsically disordered protein FRQ. This paper will be of interest to scientists focusing on circadian clock regulation, liquid-liquid phase separation, and phosphorylation.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      "Phosphorylation, disorder, and phase separation govern the behavior of Frequency in the fungal circadian clock" is a convincing manuscript that delves into the structural and biochemical aspects of FRQ and the FFC under both LLPS and non-LLPS conditions. Circadian clocks serve as adaptations to the daily rhythms of sunlight, providing a reliable internal representation of local time.

      All circadian clocks are composed of positive and negative components. The FFC contributes negative feedback to the Neurospora circadian oscillator. It consists of FRQ, CK1, and FRH. The FFC facilitates close interaction between CK1 and the WCC, with CK1-mediated phosphorylation disrupting WCC:c-box interactions necessary for restarting the circadian cycle.

      Despite the significance of FRQ and the FFC, challenges associated with purifying and stabilizing FRQ have hindered in vitro studies. Here, researchers successfully developed a protocol for purifying recombinant FRQ expressed in E. coli.

      Armed with full-length FRQ, they utilized spin-labeled FRQ, CK1, and FRH to gain structural insights into FRQ and the FFC using ESR. These studies revealed a somewhat ordered core and a disordered periphery in FRQ, consistent with prior investigations using limited proteolysis assays. Additionally, p-FRQ exhibited greater conformational flexibility than np-FRQ, and CK1 and FRH were found in close proximity within the FFC. The study further demonstrated that under LLPS conditions in vitro, FRQ undergoes phase separation, encapsulating FRH and CK1 within LLPS droplets, ultimately diminishing CK1 activity within the FFC. Intriguingly, higher temperatures enhanced LLPS formation, suggesting a potential role of LLPS in the fungal clock's temperature compensation mechanism.

      Biological significance was supported by live imaging of Neurospora, revealing FRQ foci at the periphery of nuclei consistent with LLPS. The amino acid sequence of FRQ conferred LLPS properties, and a comparison of clock repressor protein sequences in other eukaryotes indicated that LLPS formation might be a conserved process within the negative arms of these circadian clocks.

      In summary, this manuscript represents a valuable advancement with solid evidence in the understanding of a circadian clock system that has proven challenging to characterize structurally due to obstacles linked to FRQ purification and stability. The implications of LLPS formation in the negative arm of other eukaryotic clocks and its role in temperature compensation are highly intriguing.

      Strengths:

      The strengths of the manuscript include the scientific rigor of the experiments, the importance of the topic to the field of chronobiology, and new mechanistic insights obtained.

      Weaknesses:

      This reviewer had questions regarding some of the conclusions reached.

      Recommendations For The Authors:

      The reviewer has a few questions for the authors:

      1) Concerning the reduced activity of sequestered CK1 within LLPS droplets with FRQ, to what extent is this decrease attributed to distinct buffer conditions for LLPS formation compared to non-LLPS conditions?

We don't believe that these buffer conditions significantly influence the change in FRQ phosphorylation by CK1 observed at elevated temperatures. The pH and ionic strength of the buffer are in keeping with physiological conditions (300 mM NaCl, 50 mM sodium phosphate, 10 mM MgCl2, pH 7.5); CK1 autophosphorylation is robust and generally increases with temperature under these conditions (Figure 7B). However, as LLPS increases, CK1 autophosphorylation remains high, whereas phosphorylation of FRQ dramatically decreases. In fact, we chose to alter temperature specifically to induce changes in phase behavior under constant buffer conditions. In this way LLPS could be increased, and FRQ phosphorylation evaluated, without altering the solution composition. Thus, we believe that the reduced CK1 kinase activity toward FRQ as a substrate is directly due to the impact of the generated LLPS milieu, i.e. the changes in structural/dynamic properties of FRQ and/or CK1 induced by being in a phase-separated microenvironment, which could be substantially different from a non-phase-separated buffer environment. For example, previous work on the disordered region of DDX4 [Brady et al. 2017; Nott et al. 2015] shows that even the water content and the stability of biomolecules such as double-stranded nucleic acids encapsulated within the droplets differ between non-phase-separated and phase-separated DDX4 samples.

      Nott T.J. et al. Phase transition of a disordered nuage protein generates environmentally responsive membraneless organelles. Mol. Cell. 2015 57 936-947.

      Brady J.P. et al. Structural and hydrodynamic properties of an intrinsically disordered region of a germ cell-specific protein on phase separation. PNAS 2017 114 8194-8203.

      In the results section we have clarified the use of temperature to control LLPS, “We compared the phosphorylation of FRQ by CK1 in a buffer that supports phase separation under different temperatures, using the latter as a means to control the degree of LLPS without altering the solution composition.”

On p.16 of the discussion we have elaborated on the above point, "We believe that the reduced CK1 kinase activity toward FRQ as a substrate is directly due to the impact of the generated LLPS milieu, i.e. the changes in structural/dynamic properties of FRQ and/or CK1 induced by being in a phase-separated microenvironment, which could be substantially different from a non-phase-separated buffer environment. For example, previous work done on the disordered region of DDX4 (Brady, et al., 2017; Nott, et al., 2015) shows that even the water content and the stability of biomolecules such as double-stranded nucleic acids encapsulated within the droplets differ between non-phase-separated and phase-separated DDX4 samples. Indeed, the spin-labeling experiments indicate that the dynamics of FRQ have been altered by LLPS (Fig. 7D)."

      2) The DEER technique demonstrated spatial proximity between FRH and CK1 when bound to FRQ in the FFC. Is there evidence suggesting their lack of proximity in the absence of FRQ? Also, how important is this spatial proximity to FFC function?

      We have additional data substantiating that FRH and CK1 do not interact in the absence of FRQ. In the revised paper we have included the results of a SEC-MALS experiment showing that FRH and CK1 elute separately when mixed in equimolar amounts and applied to an analytical S200 column coupled to a MALS detector (Figure 1 below and Fig. S8). The importance of the FRH and CK1 proximity is currently unknown, but there are reasons to believe that it could have functional consequences. For example, CK1, as recruited by FRQ, phosphorylates the White-Collar Complex (WCC) in the repressive arm of the circadian oscillator [e.g. He et al. Genes Dev. 20, 2552 (2006); Wang et al, Mol. Cell 74, 771 (2019)]. Interactions between the WCC and the FFC are mediated at least in part by FRH binding to White Collar-2 [Conrad et al. EMBO J. 35, 1707 (2016)]. Thus, FRH:FRQ may effectively bridge CK1 to the WCC to facilitate the phosphorylation of the latter by the former.

      He et al. CKI and CKII mediate the FREQUENCY-dependent phosphorylation of the WHITE COLLAR complex to close the Neurospora circadian negative feedback loop. Genes Dev. 2006 20, 2552-2565.

      Wang B. et al. The Phospho-Code Determining Circadian Feedback Loop Closure and Output in Neurospora Mol. Cell 2019 74, 771-784.

      Conrad et al. Structure of the frequency-interacting RNA helicase: a protein interaction hub for the circadian clock. EMBO J. 2016 35, 1707-1719.

      Author response image 1.

      Size-exclusion chromatography- multiangle light scattering (SEC-MALS) of a mixture of purified FRH and CK1. The proteins elute separately as monomers with no evidence of co-migration.

      3) Is there any indication that impairing FRQ's ability to undergo LLPS disrupts clock function?

      We do not currently have direct evidence that LLPS of FRQ is essential for clock function. These experiments are ongoing, but complicated by the fact that changes to FRQ predicted to alter LLPS behavior also have the potential to perturb its many other clock-related functions that include dynamic interactions with partners, dynamic post-translational modification and rates of synthesis and degradation. That said, the intrinsic disorder of FRQ is important for it to act as a protein interaction hub, and large intrinsically disordered regions (IDRs) very often mediate LLPS, as is certainly the case here. In this work, we argue that the ability of FRQ to sequester clock proteins during the TTFL may involve LLPS. Additionally, we show that the phosphorylation state of FRQ, which is a critical factor in clock period determination, depends on LLPS. Given that the conditions under which FRQ phase separates are physiological in nature and that live-cell imaging is consistent with FRQ phase separation in the nucleus, it seems likely that FRQ does phase separate in Neurospora. Furthermore, given that the sequence features of FRQ that mediate phase-separation are conserved not only across FRQ homologs but also in other functionally related clock proteins, it is probable, albeit worthy of further investigation, that LLPS has functional consequences for the clock. See the response to reviewer 3 for more discussion on this topic.

      Minor Points:

      Indeed, we have included a reference to this paper on p. 3: “Emerging studies in plants (Jung, et al., 2020), flies (Xiao, et al., 2021) and cyanobacteria (Cohen, et al., 2014; Pattanayak, et al., 2020) implicate LLPS in circadian clocks, and in Neurospora it has recently been shown that the Period-2 (PRD-2) RNA-binding protein influences frq mRNA localization through a mechanism potentially mediated by LLPS (Bartholomai, et al., 2022).”

      • On page 9, six lines from the top, please insert "of" between "distributions" and "p-FRQ".

      We have corrected this typo.

      Reviewer #2 (Public Review):

      Summary:

      This study presents data from a broad range of methods (biochemical, EPR, SAXS, microscopy, etc.) on the large, disordered protein FRQ relevant to circadian clocks and its interaction partners FRH and CK1, providing novel and fundamental insight into oligomerization state, local dynamics, and overall structure as a function of phosphorylation and association. Liquid-liquid phase separation is observed. These findings have bearings on the mechanistic understanding of circadian clocks, and on functional aspects of disordered proteins in general.

      Strengths:

      This is a thorough work that is well presented. The data are of overall high quality given the difficulty of working with an intrinsically disordered protein, and the conclusions are sufficiently circumspect and qualitative to not overinterpret the mostly low-resolution data.

      Weaknesses:

      None

      Recommendations For The Authors:

1) Fig. 2B: Beyond the SEC part (absorbance vs elution volume), I don't understand this plot, in particular the horizontal lines. They appear to be correlating molecular weight with normalized absorption at 280 nm, but the chromatogram amplitudes are different. Clarify, or modify the plot. There are also some disconnected line segments between 10-11 mL - these seem to be spurious.

We apologize for the confusion. The horizontal lines are meant only to denote the average molecular weights of the elution peaks, not to correlate with the A280 values. The disconnected lines are the light-scattering molecular weight readouts from which the horizontal lines are derived. The difficulty with the figure is that the full elution traces and the MALS traces across the peaks call for different scales to best depict the relevant features of the data. We have reworked the figure and legend to make the key points clearer.

      2) It could be useful to add AF2 secondary structure predictions, pLDDT, and the helical propensity analysis to the sequence ribbon in Fig.1C.

      Thank you for the suggestion, we have updated the figure to incorporate the pLDDT scores into the linear sequence map, as well as the secondary structure predictions.

      3) Fig.3D: It would be better to show the raw data rather than the fits. At the same time, I appreciate the fact that the authors resisted the temptation to show distance distributions.

      Yes, we agree that it is important to show the raw data; it is included in the supplementary section. Depicting the raw data here unfortunately obscures the differences in the traces and we believe that showing the data as a superposition is quite useful to convey the main differences among the sites. However, we have now explicitly stated in the figure legend that the corresponding raw data traces are given in Figures S5-6.

      4) Fig.5: For all distance distributions, error intervals should be added (typically done in terms of shaded bands around the best-fit distribution). As shown, precision is visually overstated. The error analysis shown in the SI is dubious, as it shows some distances have no error whatsoever (e.g. 6nm in 370C-490C), which is not possible.

      We did previously show the error intervals in the SI, but we agree that it is better to include them here as well, and have done so in the new Figure 5. With respect to the error analysis, we are following the methodology described in the following paper:

      Srivastava, M. and Freed J., Singular Value Decomposition Method To Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance: II. Estimating Uncertainty. J. Phys Chem A (2019) 123:359-370. doi: 10.1021/acs.jpca.8b07673.

Briefly, the uncertainty we plot shows the "range" of singular values over which the singular value decomposition (SVD) solution remains converged. For most of the data displayed in this paper we used only the first few singular values (SVs), and the solution remained converged for ±1 or 2 SVs around the optimum solution. For example, if the optimum solution used 4 SVs, then the range over which the solution remained converged is ~3-6 SVs. We plot three lines (the solutions with the lowest and highest converged numbers of SVs and with the optimum number of SVs); in the SI figures the optimum SV solution is shown in black and the region between the converged solutions with the highest and lowest numbers of SVs is shaded in red. Owing to the point-wise reconstruction of the distance distribution, the SVD method provides localized uncertainty at each distance value; therefore, some points will have high uncertainty whereas others will have low uncertainty. A distance that may appear to have no uncertainty actually has very low uncertainty, as can be seen on close inspection. In these cases we observe "isosbestic"-type behavior, where P(r) changes little across the acceptable solutions and hence there is only a small range of P(r) values at that particular r. This behavior results from multimodal distributions wherein the change in SVs shifts neighboring peaks to lower and higher distances, respectively, producing an apparent cancellation effect. What we believe is most important for the biochemical interpretation, and is accurately reflected by this analysis, is the general width of the uncertainty across the distribution and how this impacts the error in both the mean and the overall skewing of the distribution at short or long distances.

      Details of the error treatment as described above have been added to the supplementary methods section.
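The logic can be condensed into a few lines of code. The sketch below is a toy illustration only: the true DEER dipolar kernel is replaced by a generic ill-conditioned smoothing kernel, and the optimum number of SVs is hard-coded rather than determined by the convergence criterion of Srivastava and Freed (2019):

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(1.5, 8.0, 200)                    # distance axis (nm)
t = np.linspace(0.6, 3.2, 150)                    # "time" axis (arbitrary units)
# Placeholder ill-conditioned kernel standing in for the dipolar kernel.
K = np.exp(-((2.5 * t[:, None] - r[None, :]) ** 2) / (2 * 0.4**2))

P_true = np.exp(-0.5 * ((r - 4.0) / 0.3) ** 2)    # ground-truth P(r)
signal = K @ P_true + 0.5 * rng.normal(size=t.size)

U, s, Vt = np.linalg.svd(K, full_matrices=False)

def reconstruct(k):
    """Truncated-SVD solution keeping the first k singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ signal) / s[:k])

k_opt = 8                                         # illustrative optimum
sols = [reconstruct(k) for k in (k_opt - 1, k_opt, k_opt + 1)]
P_lo = np.min(sols, axis=0)                       # lower edge of converged band
P_hi = np.max(sols, axis=0)                       # upper edge of converged band
```

Plotting the k_opt solution in black with the band between P_lo and P_hi shaded red reproduces the presentation in the SI figures; where neighboring solutions happen to cross, the band pinches to near-zero width, giving the "isosbestic" appearance described above.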

      5) The Discussion (p.13) states that the SAXS and DEER data show that disorder is greater than in a molten globule and smaller than in a denatured protein. Evidence to support this statement (molten globule DEER/SAXS reference data etc.) should be made explicit.

      We will make the statement more explicit by changing it to the following: “Notably, the shape of the Kratky plots generated from the SAXS data suggest a degree of disorder that is substantially greater than that expected of a molten globule (Kataoka, et al., 1997), but far from that of a completely denatured protein (Kikhney, et al., 2015; Martin, Erik W., et al., 2021). Similarly, the DEER distributions, though non-uniform across the various sites examined, indicate more disorder than that of a molten globule (Selmke et al., 2018) but more order than a completely unfolded protein (van Son et al. 2015).”

      van Son, M., et al. Double Electron−Electron Spin Resonance Tracks Flavodoxin Folding, J. Phys. Chem. B 2015, 119, 13507−13514. doi: 10.1021/acs.jpcb.5b00856.

      Selmke, B. et al. Open and Closed Form of Maltose Binding Protein in Its Native and Molten Globule State As Studied by Electron Paramagnetic Resonance Spectroscopy. Biochemistry 2018, 57, 5507−5512 doi: 10.1021/acs.biochem.8b00322.

      6) Fig. S11B could be promoted to the main paper.

      This comment makes a good point. Figure 8 is now an updated scheme, similar to the previous Fig. S11B. Thank you for the suggestion.

      Minor corrections:

      p.1: "composed from" -> "composed of"

      p.2: TFFLs -> TTFLs

      p.2: "and CK1 via" => "and to CK1 via"

      p.5: "Nickel" -> "nickel"

      p.5: "Size Exclusion Chromatography" -> "Size exclusion chromatography"

      p.5: "Multi Angle Light Scattering" -> "multi-angle light scattering"

      Fig.2 caption: "non-phosphorylated (np-FRQ)" -> "non-phosphorylated FRQ (np-FRQ)"

      Fig. S3: What are the units on the horizontal axis?

      Fig. 5H is too small

      Fig. S8, S9: all distance distribution plots show a spurious "1"

      Fig. 6A has font sizes that are too small to read

      p.11: "cytoplasm facing" -> "cytoplasm-facing"

      p.11: "temperature dependent" -> "temperature-dependent"

      p.12: "substrate-sequestration and product-release" -> "substrate sequestration and product release"

      p.12: "depend highly buffer composition" -> "depend highly on buffer composition"

      We thank the reviewer for finding these errors and their attention to detail. All of these minor points have been addressed in the revised manuscript.

      Reviewer #3 (Public Review):

      Summary:

      The manuscript from Tariq and Maurici et al. presents important biochemical and biophysical data linking protein phosphorylation to phase separation behavior in the repressive arm of the Neurospora circadian clock. This is an important topic that contributes to what is likely a conceptual shift in the field. While I find the connection to the in vivo physiology of the clock to be still unclear, this can be a topic handled in future studies.

      Strengths:

      The ability to prepare purified versions of unphosphorylated FRQ and P-FRQ phosphorylated by CK-1 is a major advance that allowed the authors to characterize the role of phosphorylation in structural changes in FRQ and its impact on phase separation in vitro.

      Weaknesses:

      The major question that remains unanswered from my perspective is whether phase separation plays a key role in the feedback loop that sustains oscillation (for example by creating a nonlinear dependence on overall FRQ phosphorylation) or whether it has a distinct physiological role that is not required for sustained oscillation.

The reviewer raises the key question regarding data suggesting LLPS and phase-separated regions in circadian systems. To date, condensates have been seen in cyanobacteria (Cohen et al, 2014; Pattanayak et al, 2020), where there are foci containing KaiA/C during the night, in Drosophila (Xiao et al, 2021), where PER and dCLK colocalize in nuclear foci near the periphery during the repressive phase, and in Neurospora (Bartholomai et al, 2022), where the RNA-binding protein PRD-2 sequesters frq and ck1a transcripts in perinuclear phase-separated regions. Because the proteins responsible for the phase separation in cyanobacteria and Drosophila are not known, it is not possible to seamlessly disrupt the separation to test its biological significance (Yuan et al, 2022), so only in Neurospora has it been possible to associate loss of phase separation with clock effects. There, loss of PRD-2, or mutation of its RNA-binding domains, results in a ~3 hr period lengthening as well as loss of perinuclear localization of frq transcripts. A very recent manuscript (Xie et al., 2024) calls into question both the importance and the very existence of LLPS of clock proteins, at least as regards mammalian cells, noting that it may be an artefact of overexpression in some places where it is seen, and that at normal levels of expression there is no evidence for elevated levels at the nuclear periphery. Artefacts resulting from overexpression plainly cannot be a problem for our study or for Xiao et al. 2021, as in both cases the relevant clock protein, FRQ or PER, was labeled at the endogenous locus and expressed under its native promoter. Also, it may be worth noting that although we called attention to enrichment of FRQ[NeonGreen] at the nuclear periphery, there remained abundant FRQ within the core of the nucleus in our live-cell imaging.

      Cohen SE, et al.: Dynamic localization of the cyanobacterial circadian clock proteins. Curr Biol 2014, 24:1836–1844, https://doi.org/10.1016/j.cub.2014.07.036.

      Pattanayak GK, et al.: Daily cycles of reversible protein condensation in cyanobacteria. Cell Rep 2020, 32:108032, https://doi.org/10.1016/j.celrep.2020.108032.

      Xiao Y, Yuan Y, Jimenez M, Soni N, Yadlapalli S: Clock proteins regulate spatiotemporal organization of clock genes to control circadian rhythms. Proc Natl Acad Sci U S A 2021, 118, https://doi.org/10.1073/pnas.2019756118.

      Bartholomai BM, Gladfelter AS, Loros JJ, Dunlap JC. 2022 PRD-2 mediates clock-regulated perinuclear localization of clock gene RNAs within the circadian cycle of Neurospora. Proc Natl Acad Sci U S A. 119(31):e2203078119. doi: 10.1073/pnas.2203078119.

      Yuan et al., Curr Biol 78: 102129, 2022. https://doi.org/10.1016/j.ceb.2022.102129

      Pancheng Xie, Xiaowen Xie, Congrong Ye, Kevin M. Dean, Isara Laothamatas , S K Tahajjul T Taufique, Joseph Takahashi, Shin Yamazaki, Ying Xu, and Yi Liu (2024). Mammalian circadian clock proteins form dynamic interacting microbodies distinct from phase separation. Proc. Nat. Acad. Sci. USA. In press.

      We have updated the discussion on p. 15 accordingly:

      “Live cell imaging of fluorescently-tagged FRQ proteins is consistent with FRQ phase separation in N. crassa nuclei. FRQ is plainly not homogenously dispersed within nuclei, and the concentrated foci observed at specific positions in the nuclei indicate condensate behavior similar to that observed for other phase separating proteins (Bartholomai, et al., 2022; Caragliano, et al., 2022; Gonzalez, A., et al., 2021; Tatavosian, et al., 2019; Xiao, et al., 2021). While ongoing experiments are exploring more deeply the spatiotemporal dynamics of FRQ condensates in nuclei, the small size of fungal nuclei as well as their rapid movement with cytoplasmic bulk flow through the hyphal syncytium makes these experiments difficult. Of particular interest is drawing comparisons between FRQ and the Drosophila Period protein, which has been observed in similar foci that change in size and subnuclear localization throughout the circadian cycle (Meyer, et al., 2006; Xiao, et al., 2021), although it must be noted that the foci we observed are considerably more dynamic in size and shape than those reported for PER in Drosophila (Xiao, et al., 2021). A very recent manuscript (Xie, et al., 2024) calls into question the importance and very existence of LLPS of clock proteins at least in regards to mammalian cells, noting that it may be an artifact of overexpression in some instances where it is seen, and that at normal levels of expression there is no evidence for elevated levels at the nuclear periphery. Artifacts resulting from overexpression are unlikely to be a problem for our study and that of Xiao et al as in both cases clock proteins were tagged at their endogenous locus and expressed from their native promoters. Although we noted enrichment of FRQmNeonGreen near the nuclear envelope in our live-cell imaging, there remained abundant FRQ within the core of the nucleus.”

      Recommendations For The Authors:

      The data in Fig 6 showing microscopy of Neurospora is suggestive but needs more information/controls. Does the strain that expresses FRQ-mNeonGreen have normal circadian rhythms? How were the cultures handled (in terms of circadian entrainment etc.) for imaging? Do samples taken at different clock times appear different in terms of punctate structures in microscopy? The authors cite the Xiao 2021 paper in Drosophila, but would be good to see if the in vivo picture is fundamentally similar in Neurospora.

      All of the live-cell images we report were from cells grown in constant light; in the dark, strains bearing FRQ[mNeonGreen] have normally robust rhythms with a slightly elongated period length as measured by a frq Cbox-luc reporter. Although we are of course interested in whether, and if so how, the punctate structures change as a function of circadian time, this is work in progress and beyond the scope of the present study. This said, it is plain to see from the movie included as a Supplemental file here that the puncta we see are moving and fusing/splitting on a scale of seconds, whereas those reported in Drosophila by Xiao et al. (Xiao et al, 2021, above) were stable for many minutes; thus the FRQ foci seen in Neurospora are quite a bit more dynamic than those in Drosophila.

      We have updated the results section on p. 11 to provide this information more clearly: “FRQ thus tagged and driven by its own promoter is expressed at physiologically normal levels, and strains bearing FRQmNeonGreen as the only source of FRQ are robustly rhythmic with a slightly longer than normal period length. Live-cell imaging in Neurospora crassa offers atypical challenges because the mycelia grow as syncytia, with continuous rapid nuclei motion during the time of imaging. This constant movement of nuclei is compounded by the very low intranuclear abundance of FRQ and the small size of fungal nuclei, making not readily feasible visualization of intranuclear droplet fission/fusion cycles or intranuclear fluorescent photobleaching recovery experiments (FRAP) that could report on liquid-like properties. Nonetheless, bright and dynamic foci-like spots were observed well inside the nucleus and near the nuclear periphery, which is delineated by the cytoplasm-facing nucleoporin Son-1 tagged with mApple at its C-terminus (Fig. 6D,E, Movie S1). Such foci are characteristic of phase separated IDPs (Bartholomai, et al., 2022; Caragliano, et al., 2022; Gonzalez, A., et al., 2021; Tatavosian, et al., 2019) and share similar patterning to that seen for clock proteins in Drosophila (Meyer, et al., 2006; Xiao, et al., 2021), although the foci we observed are substantially more dynamic than those reported in Drosophila.”

      Another issue where some commentary would be helpful: Fig 7 shows that phase separation behavior is strongly temperature dependent (not biophysically surprising). Is that at odds with the known temperature compensation of the circadian rhythm if LLPS indeed plays a key role in the oscillator?

      We believe that the dependence of CK1-mediated FRQ phosphorylation on temperature, as manifested by FRQ phase separation, is consistent with temperature compensation within the Neurospora circadian oscillator. The phenomenon of temperature compensation by circadian clocks involves the insensitivity of the oscillator period to temperature change. Stability of period with temperature change would not necessarily be expected of a generic chemical oscillator, which would run faster (shorter period) at higher temperature owing to Arrhenius behavior of the underlying chemical reactions. Circadian phosphorylation of FRQ is one such chemical process that contributes to the oscillation of FRQ abundance on which the clock is based. Reduced CK1 phosphorylation of FRQ causes both longer periods [Mehra et al., 2009] and loss of temperature compensation (manifested as a reduction of period length at higher temperature) [Liu et al, Nat Comm, 10, 4352 (2019); Hu et al, mBio, 12, e01425 (2021)]. Thus, the ability of increased LLPS formation at elevated temperature to reduce FRQ phosphorylation by CK1 (but not intrinsic CK1 autophosphorylation) would be a means to counter the decreasing period length that would otherwise manifest in an undercompensated system. As further negative feedback on the system, LLPS is also promoted by FRQ phosphorylation itself, which in turn will reduce phosphorylation by CK1. Thus, both increased FRQ phosphorylation and increased temperature couple to increased LLPS and mitigate period shortening through reduction of CK1 activity.

      Mehra et al.: A role for casein kinase 2 in the mechanism underlying circadian temperature compensation. Cell 2009, 137:749–760.

      Liu et al.: FRQ-CK1 interaction determines the period of circadian rhythms in Neurospora. Nat Commun 2019, 10:4352.

      Hu et al.: FRQ-CK1 interaction underlies temperature compensation of the Neurospora circadian clock. mBio 2021, 12:e01425.

      We have added Figure 8 to clarify the temperature compensation implications of our work; its legend reads:

      “Figure 8: LLPS may play a role in temperature compensation of the clock through modulation of FRQ phosphorylation. Reduced CK1 phosphorylation of FRQ causes both longer periods (Mehra, et al., 2009) and loss of temperature compensation (manifested as a shortening of period at higher temperature) (Hu, et al., 2021; Liu, X., et al., 2019). Thus, the ability of increased LLPS at elevated temperature (larger grey circle) to reduce FRQ phosphorylation by CK1 will counter a shortening period that would otherwise manifest in an under compensated system. As further negative feedback, LLPS is also promoted by increased FRQ phosphorylation, which in turn will reduce phosphorylation by CK1. Thus, both increased FRQ phosphorylation and temperature favor LLPS and reduction of CK1 activity.”
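      To make the feedback logic in this legend concrete, here is a toy numerical sketch (entirely our own illustration: the Arrhenius parameters and the logistic LLPS curve are invented, chosen only to show the qualitative flattening, and are not fitted to Neurospora data). It contrasts an uncompensated Arrhenius period with one in which rising temperature sequesters FRQ away from CK1:

      import math

      def ck1_rate(temp_k, ea=60e3, a=1e9, r=8.314):
          # Uncompensated CK1 activity with simple Arrhenius temperature dependence.
          return a * math.exp(-ea / (r * temp_k))

      def llps_fraction(temp_k, t_half=303.0, steepness=0.15):
          # Hypothetical fraction of FRQ sequestered in condensates; rises with T.
          return 1.0 / (1.0 + math.exp(-steepness * (temp_k - t_half)))

      def period_hours(temp_k):
          # Assume period scales inversely with the CK1 activity available to FRQ,
          # anchored to a 22 h period at 25 degC (298 K).
          k_ref = ck1_rate(298.0) * (1.0 - llps_fraction(298.0))
          k_eff = ck1_rate(temp_k) * (1.0 - llps_fraction(temp_k))
          return 22.0 * k_ref / k_eff

      for t in (293.0, 298.0, 303.0, 308.0):
          bare = 22.0 * ck1_rate(298.0) / ck1_rate(t)  # no LLPS sink
          print(f"{t - 273.15:.0f} degC: uncompensated {bare:.1f} h, with LLPS {period_hours(t):.1f} h")

      With these made-up numbers the uncompensated period swings from roughly 33 h at 20 degC to 10 h at 35 degC, while the LLPS-buffered period stays in the 20-28 h range; the sketch illustrates the proposed buffering rather than demonstrating it.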

      One minor comment: the chemical structures in Fig 3A have some issues where the "N" and "S" are flipped. It would be good to remake these figures to fix this problem.

      We apologize; the figure has been replaced with an improved version.

    1. 8.2. Data From the Reddit API. We've been accessing Reddit through Python and the PRAW code library. The praw code library works by sending requests across the internet to Reddit, using what is called an "application programming interface," or API for short. APIs have a set of rules for what requests you can make, what happens when you make a request, and what information you can get back. If you are interested in learning more, you can look at the official praw library documentation to find out what the library can do, though be warned that it is not organized in a beginner-friendly way and takes some getting used to. You can learn a little more by clicking on the praw models and finding a list of the types of data for each of the models, and a list of functions (i.e., actions) you can do with them. You can also look up information on the data that you can get from the Reddit API in the Reddit API Documentation. The Reddit API lets you access just some of the data that Reddit tracks, but Reddit and other social media platforms track much more than they let you have access to.

      This section provides a helpful introduction to accessing Reddit data using the PRAW library and the significance of APIs in this process. However, it might be beneficial for readers if the text included specific examples of the types of requests one can make with PRAW and the kinds of data accessible through these requests. For instance, illustrating how to retrieve posts from a specific subreddit or how to analyze user comments could make the API's capabilities more tangible. Additionally, offering tips or resources on navigating the PRAW documentation would greatly assist beginners in overcoming the initial learning curve mentioned.
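      To make that suggestion concrete, a minimal PRAW sketch might look like the following (our illustration: the credentials are placeholders you must replace after registering a Reddit app, and the subreddit name is arbitrary):

      import praw

      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",          # placeholder credentials
          client_secret="YOUR_CLIENT_SECRET",
          user_agent="annotation-demo by u/YOUR_USERNAME",
      )

      # Retrieve the five hottest posts from a subreddit...
      for submission in reddit.subreddit("AskScience").hot(limit=5):
          print(submission.title, submission.score)
          # ...and walk the first few comments of each post.
          submission.comments.replace_more(limit=0)  # drop "load more" stubs
          for comment in submission.comments.list()[:3]:
              print("   ", comment.body[:80])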

    1. This is a new page note on Code Llama.

    2. Code Llama is specialized for code-specific tasks and isn’t appropriate as a foundation model for other tasks.

      Be careful about this.

    1. Author Response

      The following is the authors’ response to the current reviews.

      We would firstly like to thank all reviewers for their comments and support of this manuscript.

      Reviewer #1 (Recommendations For The Authors):

      No further recommendations.

      Reviewer #2 (Recommendations For The Authors):

      All of my comments have been sufficiently addressed.

      Reviewer #3 (Recommendations For The Authors):

      Thanks for responding to my former recommendations constructively. I believe these points have been fully addressed in this new version.

      However, I have not seen any comments on the points I raised in my former public review concerning the I-2 dependence of the FonSIX4 cell death. Do you know whether FonSIX4 would trigger cell death in tissues not expressing any I-2?

      We are a little confused by this comment. I-2 is a different class of resistance protein (an NLR) that recognises Avr2, and this recognition is likely to be intracellular. From the previous public review, we believe reviewer 3 may have been asking us to clarify whether FonSIX4-triggered cell death depends on the I receptor (from MM or M82). We have performed these controls by expressing FonSIX4 and the associated FonSIX4/Avr1 chimeras in N. benthamiana (with the PR-1 signal peptide for efficient secretion of effectors), and FonSIX4 does not cause cell death in the absence of the I receptor – see S11F Fig. This was not explicitly conveyed in the text, so we have included the following: "Using the N. benthamiana assay we show FonSIX4 is recognised by I receptors from both cultivars (IM82 and iMoneymaker) and cell death is dependent on the presence of IM82 or iMoneymaker (Fig 5B, S11 Fig)."

      I still recommend discussing whether the Avr1 residues crucial for Avr activity are in the same structural regions of the C-terminal domain where previous work has identified residues under diversifying selection in symbiotic fungal FOLD proteins.

      The region important for recognition does encompass some residues within the structural region previously identified to be under diversifying selection in FOLD effectors from Rhizophagus irregularis (two residues within one beta-strand). However, we also see residues that do not overlap with this region. We also note that the mycFOLD proteins analysed in symbiotic fungi are heavily skewed towards strong structural similarity with FolSIX6 (similar cysteine spacing within both the N- and C-domains and similar structural orientation of the N- and C-domains) rather than Avr1. We are under the impression that Avr1 was not included in the analysis of diversifying selection in symbiotic fungal FOLD proteins, and it is unclear to us whether close Avr1 homologues are present. With this in mind, and considering our already lengthy discussion (as highlighted previously during review), we have decided not to include further discussion concerning this point.


      The following is the authors’ response to the original reviews.

      We would like to thank the editor(s) and reviewers for their work concerning our manuscript. Most of the suggested changes were related to text changes which we have incorporated into the revised version. Please find our response to reviewers below.

      Reviewer #1 (Recommendations For The Authors):

      I only have very minor suggestions for the authors. The first one comes from reading the manuscript and finding it very dense with so many acronyms. This will limit the audience that will read the study and appreciate its impact. This is more noticeable in the Results, with many passages that I would suggest moving to Methodology.

      We thank reviewer 1 for their very positive review. We understand the concern, but due to the nature of this study, which includes many protein alleles/mutants that were expressed with different boundaries, it is difficult to reduce the density of acronyms further. Reviewer 2, conversely, asked for more details to be provided. We hope we have achieved a good balance in the revised manuscript.

      Something else that would facilitate the reading of the manuscript is the effector names. The authors use the SIX name or the Avr name for some effectors, and it makes the text difficult to follow.

      We have tried to make this consistent for Avr1 (SIX4), Avr2 (SIX3) and Avr3 (SIX1). Other SIX effectors are not known Avrs so the SIX names were used.

      Reading the manuscript and seeing how in most of the sections the authors used a computational approach followed by an experimental approach, I wonder why Alphafold2-multimer was not used to investigate the interaction between the effector and the receptor?

      This is a great suggestion, and we have certainly investigated it; however, to date there is no experimental evidence to support a direct interaction between I and Avr1. Post review, we spent some time trying to capture an interaction using a co-immunoprecipitation approach, but to date we have not been able to obtain robust data supporting one. We are currently looking to study this utilising protein biophysics/biochemistry, but this work will take some time.

      Reviewer #2 (Recommendations For The Authors):

      We thank reviewer 2 for the very thorough editing and recommendations. We have incorporated all minor text edits below into the manuscript.

      Line 43: perhaps "Effector recognition" instead of "Effector detection", to be consistent with line 51?

      Line 60: Change to "leads".

      Line 79: Italicise Avr2.

      Line 94: Add the acronym ETI in parentheses after "effector-triggered immunity".

      Line 106: "(Leptosphaeria Avirulence-Suppressing)" should be "(Leptosphaeria Avirulence and Suppressing)".

      Line 112: Change "defined" to "define".

      Line 119: Spell out the species name on first use.

      Line 205: Glomeromycota is a division rather than a genus. Consistent with Fig 2, it also does not need to be italicized.

      Line 207: Change "basidiomycete" to "Division Basidiomycota", consistent with Fig 2.

      Line 214: Change "alignment of Avr1, Avr3, SIX6 and SIX13" to "alignment of the mature Avr1, Avr3, SIX6 and SIX13 sequences".

      Line 324: Change "solved structures" to "solved protein structures".

      Line 335: Spell out acronyms like "MS" on first use in figure legends. Also dpi in other figure legends.

      Line 341: replace "effector-triggered immunity (ETI)" with "(ETI)" - see comment on Line 94.

      Line 370: Change "domains" to "domain".

      Line 374: In the title, change "C-terminus" to C-domain", consistent with the rest of the figure legend.

      Line 404: Change "(basidiomycetes and ascomycetes)" to "(Basidiomycota and Ascomycota fungi)", consistent with Fig 2C.

      Line 416: Change "in" to "by".

      Line 427: un-italicize the parentheses.

      Line 519: First mention of NLR. Spell out the acronym on first use in main text. S5 and S11 figure titles should be bolded.

      Line 852: Replace "@" with "at".

      S4 Table: Gene names should be italicised.

      S5 Table: Needs to be indicated that the primer sequences are in the 5´-3´ orientation.

      With regards to the Agrobacterium tumefaciens-mediated transient expression assays involving co-expression of the Avr1 effector and I immune receptor, the authors need to make clear how many biological replicates were performed as this information is only provided for the ion leakage assay.

      We have added these data to the figure legend.

      Line 57: For me, the text "Fol secretes a limited number of structurally related effectors" reads as Fol secretes structurally related effectors, but very few of them are structurally related. Perhaps it would be better to say that the effector repertoire of Fol is made up of proteins that adopt a limited number of structural folds, or that the effector repertoire can be classified into a reduced set of structural families?

      This edit has been incorporated.

      Lines 66-67: Subtle re-wording required for "The best-characterized pathosystem is F. oxysporum f. sp. lycopersici (Fol)", as a pathosystem is made up of a pathogen and its host. Perhaps "The best-characterized pathosystem involves F. oxysporum f. sp. lycopersici (Fol) and tomato".

      Sentence has been reworded.

      Line 113 and throughout: Stick with one of "resistance protein", "receptor", "immune receptor" and "immunity receptor" throughout the manuscript.

      We have decided to use both receptor and immunity receptor as not all receptors investigated in the manuscript provide immunity.

      Lines 149-150: The title does not fully represent what is shown in the figure. The text "that is unique among fungal effectors" can be deleted as there is nothing in Fig 1 that shows that the fold is unique to fungal effectors.

      Figure title has been changed.

      Line 173: The RMSD of Avr3 is stated as being 3.7 Å, but in S3 Fig it is stated as being 3.6 Å.

      This was a mistake in the main text and has been corrected.

      Lines 202-204: This sentence needs to be reworded, as the way that it is written implies that the Diversispora and Rhizophagus genera are in the Ascomycota division. Also, "Ascomycetes" should be changed to "Ascomycota fungi", consistent with Fig 2.

      Sentence has been reworded.

      Line 233: "Scores above 8". What type of scores? Z-scores?

      These are Z-scores. This has been added in the text.

      Lines 242-246: It is stated that SIX9 and SIX11 share structural similarity to various RNA-binding proteins, but no scores used to make these assessments are given. The scores should be provided in the text.

      Z-scores have been added.

      Fig 4A: SIX3 should be Avr2, consistent with line 292. The gene names should be italicised in Fig 4A.

      SIX3 was changed to Avr2. Gene names have been italicised.

      Line 356: Subtle rewording required, as "co-infiltrated with both IM82 and iMoneymaker" implies that you infiltrated with protein rather than Agrobacterium strains.

      Sentence has been reworded.

      Fig 5A, Fig 5C and Line 380: Light blue is used, but this looks grey. Perhaps change colour, as grey is already used to show the pro-domain in Fig 5A (or simply change the colour used to highlight the pro-domain)?

      Colour depicting the C-domain was changed.

      Lines 530-531: This text is no longer correct. Rlm4 and Rlm3 are now known to be alleles of Rlm9. See: Haddadi, P., Larkan, N. J., Van deWouw, A., Zhang, Y., Neik, T. X., Beynon, E., ... & Borhan, M. H. (2022). Brassica napus genes Rlm4 and Rlm7, conferring resistance to Leptosphaeria maculans, are alleles of the Rlm9 wall‐associated kinase‐like resistance locus. Plant Biotechnology Journal, 20(7), 1229.

      We thank the reviewer for picking this up. This text has been updated.

      Line 553: Provide more information on what the PR1 signal peptide is.

      More information about the PR1 signal peptide has been added.

      Lines 767-781: Descriptions and naming conventions of proteins throughout the figure legend need to be consistent and better reflect their makeup. For example, I think it would be best to put the sequence range after each protein mentioned - e.g. Avr118-242 or Avr159-242 instead of Avr1, PSL1_C37S18-111 instead of PSL1_C37S, etc. Furthermore, it is often stated that a protein is full-length when it lacks a signal peptide - my thought is that if a protein lacks its signal peptide, it is not full-length. The acronym "PD" also needs to be spelled out as "pro-domain (PD)" in the figure legend.

      We have incorporated sequence range for proteins that were produced upon first use. Sequence ranges that were modelled in AlphaFold2 were not added in text because they can be found in Supplementary Table 3.

      Lines 853-845: It is stated the sizes of proteins are indicated above the chromatogram in S10 Fig, but this is not the case. It is also not clear from S10B Fig that the faint peaks correspond to the peaks in the Fig 4B chromatogram. In S10D Fig, the stick of C58S is difficult to see. Perhaps change the colour or use an arrow/asterisk?

      Protein size estimates have been added above the chromatogram. Added text to indicate that the faint peaks correspond to peaks in Fig 4B. Added an asterisk in S10D Fig to identify the location of C58.

      S14 Fig is not mentioned/referenced in the main text of the manuscript.

      This was a mistake and has been added.

      The reference list needs to be updated to accommodate those referenced bioRxiv preprints that have now been published in peer-reviewed journals.

      The reference list has been updated.

      Reviewer #3 (Recommendations For The Authors):

      It would be good to discuss whether the pro-domains affect virulence or avirulence activity.

      Kex2, the protease that cleaves the pro-domain, functions in the Golgi. We therefore suspect that the pro-domain is removed prior to secretion. For recombinant protein production in E. coli we find that these pro-domains are necessary to obtain soluble protein (doi: 10.1111/nph.17516). As we require the pro-domain for protein production and cannot completely remove it from our preps, we cannot perform experiments to test this and therefore cannot comment further. In a paper that identified SIX effectors in tomato utilising a proteomics approach (https://bsppjournals.onlinelibrary.wiley.com/doi/10.1111/j.1364-3703.2007.00384.x), it appears that the pro-domains were not captured in the analysis. This supports the conclusion that they are not associated with the mature/secreted protein.

      The authors stated that the C-terminal domain of SIX6 has a single disulfide bond unique to SIX6. Please clarify in which context it is unique: in Fusarium or across all FOLD proteins?

      This is in direct comparison to Avr1 and Avr3: the disulfide bond in the C-domain of SIX6 is unique relative to Avr1 and Avr3. This has been made clear in the text.

      The structural similarity of FOLD proteins to other known structures has been discussed (lines 460ff), but it is not clear whether all structures and models identified in this work would yield cysteine protease inhibitors and tumor necrosis factors as best structural matches in the database, or whether this is specific to a single FOLD protein. Please consider discussing recently published findings by others (Teulet et al. 2023, New Phytologist) on this aspect.

      This analysis was performed for Avr1, we obtained relatively low similarity hits for Avr3/Six6. We have updated this text accordingly… “Unfortunately, the FOLD effectors share little overall structural similarity with known structures in the PDB outside of the similarity with each other. At a domain level, the N-domain of the FOLD effector Avr1 has some structural similarities with cystatin cysteine protease inhibitors (PDB code: 4N6V, PDB code: 5ZC1) [60, 61], and the C-domain with tumour necrosis factors (PDB code: 6X83) [62] and carbohydrate-binding lectins (PDB code: 2WQ4) [63]. Relatively weak hits were observed for Avr3/Six6.”

      It might be useful to clearly point out that the ToxA fold and the C-terminus of the FOLD fold are different.

      We have secondary structural topology maps of the FOLD and ToxA-like families in S8 Fig which highlight the differences in topology between these two families.

      Please add information to Fig.S8 listing the approach to generate the secondary structure topology maps.

      We have added this information in the figure caption.

    1. Author Response

      The following is the authors’ response to the original reviews.

      eLife assessment

      This work presents H3-OPT, a deep learning method that effectively combines existing techniques for the prediction of antibody structure. This work is important because the method can aid the design of antibodies, which are key tools in many research and industrial applications. The experiments for validation are solid.

      Comments to Author:

      Several points remain partially unclear, such as:

      1) Which examples constitute proper validation;

      Thank you for your kind reminder. We have modified the text describing the validation experiments to identify which examples constitute proper validation. We have corrected "Finally, H3-OPT also shows lower Cα-RMSDs compared to AF2 or tFold-Ab for the majority of targets in an expanded benchmark dataset, including all antibody structures from CAMEO 2022" to "Finally, H3-OPT also shows lower Cα-RMSDs compared to AF2 or tFold-Ab for the majority (six of seven) of targets in an expanded benchmark dataset, including all antibody structures from CAMEO 2022", and added the following sentence in the experimental validation section of our revised manuscript to clarify which examples constitute proper validation: "AlphaFold2 outperformed IgFold on these targets".

      2) What the relevance of the molecular dynamics calculations as performed is;

      Thank you for your comment, and we apologize for any confusion. The goal of our molecular dynamics calculations is to compare the binding affinities, an important issue in antibody engineering, of AlphaFold2-predicted and H3-OPT-predicted complexes. Molecular dynamics simulations enable investigation of the dynamic behaviors and interactions of these complexes over time. Unlike other tools for predicting binding free energy, MM/PBSA and MM/GBSA calculations capture the dynamic properties of complexes by sampling conformational space, which helps in obtaining more accurate estimates of binding free energy. In summary, our molecular dynamics calculations demonstrated that the binding free energies of H3-OPT-predicted complexes are closer to those of native complexes. We have included the following sentence in our manuscript to explain the molecular dynamics calculations: "Since affinity prediction plays a crucial role in antibody therapeutics engineering, we performed MD simulations to compare the differences in binding affinities between AF2-predicted complexes and H3-OPT-predicted complexes.".
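      As a minimal sketch of what the MM/PBSA average amounts to (our own illustration, not the authors' pipeline; the per-frame energies below are invented placeholders, where real values would come from a tool such as AMBER's MMPBSA module):

      import statistics

      def mmpbsa_binding(frames):
          # frames: iterable of (E_complex, E_receptor, E_ligand) in kcal/mol,
          # one tuple per snapshot sampled from the trajectory.
          per_frame = [ec - er - el for ec, er, el in frames]
          mean = statistics.mean(per_frame)
          sem = statistics.stdev(per_frame) / len(per_frame) ** 0.5
          return mean, sem

      # Hypothetical energies for three snapshots from the end of a run:
      frames = [(-5230.1, -3180.4, -1998.2),
                (-5227.8, -3179.0, -1997.5),
                (-5233.6, -3182.2, -1999.0)]
      dg, err = mmpbsa_binding(frames)
      print(f"dG_bind = {dg:.1f} +/- {err:.1f} kcal/mol")

      Averaging per-frame energies over the sampled conformations is what distinguishes this estimate from a single-structure score.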

      3) The statistics for some of the comparisons;

      Thank you for the comment. We have incorporated statistics for some of the comparisons in the revised version of our manuscript and added the following sentence in the Methods section: “We conducted two-sided t-test analyses to assess the statistical significance of differences between the various groups. Statistical significance was considered when the p-values were less than 0.05. These statistical analyses were carried out using Python 3.10 with the Scipy library (version 1.10.1).”.
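      For illustration, the test described in that Methods sentence reduces to something like the following sketch (the RMSD arrays are invented placeholders, not the paper's data):

      from scipy import stats

      rmsd_af2 = [2.1, 3.4, 1.8, 4.2, 2.9, 3.7]    # hypothetical Ca-RMSDs (Angstrom)
      rmsd_h3opt = [1.6, 2.8, 1.5, 3.1, 2.2, 2.9]

      # Two-sided independent-samples t-test, as stated in the Methods text.
      t_stat, p_value = stats.ttest_ind(rmsd_af2, rmsd_h3opt)
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")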

      4) The lack of comparison with other existing methods.

      We appreciate your valuable comments and suggestions. Conducting comparisons with a broader set of existing methods can further facilitate discussions on the strengths and weaknesses of each method, as well as the accuracy of our method. In our study, we conducted a comparison of H3-OPT with many existing methods, including AlphaFold2, HelixFold-Single, ESMFold, and IgFold. We demonstrated that several protein structure prediction methods, such as ESMFold and HelixFold-Single, do not match the accuracy of AlphaFold2 in CDR-H3 prediction. Additionally, we performed a detailed comparison between H3-OPT, AlphaFold2, and IgFold (the latest antibody structure prediction method) for each target.

      We sincerely appreciate the comment and have introduced a comparison with OmegaFold. The results have been incorporated into the relevant sections (Fig 4a-b) of the revised manuscript.

      Author response image 1.

      Public Reviews

      Comments to Author:

      Reviewer #1 (Public Review):

      Summary:

      The authors developed a deep learning method called H3-OPT, which combines the strength of AF2 and PLM to reach better prediction accuracy of antibody CDR-H3 loops than AF2 and IgFold. These improvements will have an impact on antibody structure prediction and design.

      Strengths:

      The training data are carefully selected and clustered, and the network design is simple and effective.

      The improvements include smaller average Ca RMSD, backbone RMSD, side chain RMSD, more accurate surface residues and/or SASA, and more accurate H3 loop-antigen contacts.

      The performance is validated from multiple angles.

      Weaknesses:

      1) There are very limited prediction-then-validation cases, basically just one case.

      Thanks for pointing out this issue. The number of prediction-then-validation cases is helpful to show the generalization ability of our model. However, obtaining experimental structures is both costly and labor-intensive. Furthermore, experimental validation cases only capture a limited portion of the sequence space in comparison to the broader diversity of antibody sequences.

      To address this challenge, we have collected different datasets to serve as benchmarks for evaluating the performance of H3-OPT, including our non-redundant test set and the CAMEO dataset. The introduction of these datasets allows for effective assessments of H3-OPT’s performance without biases and tackles the obstacle of limited prediction-then-validation cases.

      Reviewer #2 (Public Review):

      This work provides a new tool (H3-OPT) for the prediction of antibody and nanobody structures, based on the combination of AlphaFold2 and a pre-trained protein language model, with a focus on predicting the challenging CDR-H3 loops with enhanced accuracy compared to previously developed approaches. This task is of high value for the development of new therapeutic antibodies. The paper provides an external validation consisting of 131 sequences, with further analysis of the results by segregating the test sets into three subsets of varying difficulty and comparison with other available methods. Furthermore, the approach was validated by comparing three experimentally solved 3D structures of anti-VEGF nanobodies with the H3-OPT predictions.

      Strengths:

      The experimental design to train and validate the new approach has been clearly described, including the dataset compilation and its representative sampling into training, validation and test sets, and structure preparation. The results of the in-silico validation are quite convincing and support the authors' conclusions.

      The datasets used to train and validate the tool and the code are made available by the authors, which ensures transparency and reproducibility, and allows future benchmarking exercises with incoming new tools.

      Compared to AlphaFold2, the authors' optimization seems to produce better results for the most challenging subsets of the test set.

      Weaknesses:

      1) The scope of the binding affinity prediction using molecular dynamics is not that clearly justified in the paper.

      We sincerely appreciate your valuable comment. We have added the following sentence in our manuscript to justify the scope of the molecular dynamics calculations: “Since affinity prediction plays a crucial role in antibody therapeutics engineering, we performed MD simulations to compare the differences in binding affinities between AF2-predicted complexes and H3-OPT-predicted complexes.”.

      2) Some parts of the manuscript should be clarified, particularly the ones that relate to the experimental validation of the predictions made by the reported method. It is not absolutely clear whether the experimental validation is truly a prospective validation. Since the methodological aspects of the experimental determination are not provided here, it seems that this may not be the case. This is a key aspect of the manuscript that should be described more clearly.

      Thank you for the reminder about experimental validation of our predictions. The sequence identities of the wild-type nanobody VH domain and H3 loop, compared with the best template, are 0.816 and 0.647, respectively. As a result, these mutants exhibited low sequence similarity to our dataset, indicating the absence of prediction bias for these targets. H3-OPT nevertheless outperformed IgFold on these mutants, demonstrating our model's strong generalization ability. In summary, the experimental validation does serve as a prospective validation.

      Thanks for your comments, we have added the following sentence to provide the methodological aspects of the experimental determination: “The protein expression, purification and crystallization experiments were described previously. The proteins used in the crystallization experiments were unlabeled. Upon thawing the frozen protein on ice, we performed a centrifugation step to eliminate any potential crystal nucleus and precipitants. Subsequently, we mixed the protein at a 1:1 ratio with commercial crystal condition kits using the sitting-drop vapor diffusion method facilitated by the Protein Crystallization Screening System (TTP LabTech, mosquito). After several days of optimization, single crystals were successfully cultivated at 21°C and promptly flash-frozen in liquid nitrogen. The diffraction data from various crystals were collected at the Shanghai Synchrotron Research Facility and subsequently processed using the aquarium pipeline.”

      3) Some Figures would benefit from a clearer presentation.

      We sincerely thank you for your careful reading. In response to your comments, we have made extensive modifications to make our presentation clearer and more convincing (Fig 2c-f).

      Author response image 2.

      Reviewer #3 (Public Review):

      Summary:

      The manuscript introduces a new computational framework for choosing 'the best method' according to the case for getting the best possible structural prediction for the CDR-H3 loop. The authors show their strategy improves on average the accuracy of the predictions on datasets of increasing difficulty in comparison to several state-of-the-art methods. They also show the benefits of improving the structural predictions of the CDR-H3 in the evaluation of different properties that may be relevant for drug discovery and therapeutic design.

      Strengths:

      The authors introduce a novel framework, which can be easily adapted and improved. The authors use a well-defined dataset to test their new method. A modest average accuracy gain is obtained in comparison to other state-of-the-art methods for the same task while avoiding testing different prediction approaches.

      Weaknesses:

      1) The accuracy gain is mainly ascribed to easy cases, while the accuracy and precision for moderate to challenging cases are comparable to other PLM methods (see Fig. 4b and Extended Data Fig. 2). That raises the question: how likely is it to be in a moderate or challenging scenario? For example, it is not clear whether the comparison to the solved X-ray structures of anti-VEGF nanobodies represents an easy or challenging case for H3-OPT. The mutant nanobodies seem not to provide any further validation as the single mutations are very far away from the CDR-H3 loop and they do not disrupt the structure in any way. Indeed, RMSD values follow the same trend in H3-OPT and IgFold predictions (Fig. 4c). A more challenging test and interesting application could be solving the structure of a designed or mutated CDR-H3 loop.

      Thank you for your rigorous consideration. When the experimental structure is unavailable, it is difficult to directly determine whether a target is easy to predict or challenging. We constructed our non-redundant test set such that the number of easy-to-predict targets is comparable to that of the other two groups. Due to the limited availability of experimental antibody structures, especially nanobody structures, accurately predicting CDR-H3 remains a challenge. In our manuscript, we discuss the strengths and weaknesses of AlphaFold2 and other PLM-based methods, and we introduce H3-OPT as a comprehensive solution for antibody CDR3 modeling.

      We also appreciate your comment on experimental structures. We fully agree, and we attempted to solve the experimental structures of seven mutants, including two (Y95F and Q118N) that are close to the CDR-H3 loop. Unfortunately, we tried seven different reagent kits with a total of 672 crystallization conditions but were unable to obtain crystals for these mutants. Although the mutants we successfully solved may not have significantly disrupted the structures of the CDR-H3 loops, they still provided valuable insights into the differences between MSA-based methods and MSA-free methods (such as IgFold) for antibody structure modeling.

      We have further conducted a benchmarking study using two examples, PDB IDs 5U15 and 5U0R, both with 18 residues in CDR-H3, to evaluate H3-OPT's performance in predicting mutated H3 loops. In the first case (target 5U15), AlphaFold2 failed to capture the extended orientation of the H3 loop, resulting in a less accurate prediction (Cα-RMSD = 10.25 Å) than H3-OPT (Cα-RMSD = 5.56 Å). In the second case (target 5U0R, a mutant of 5U15 in the CDR3 loop), AlphaFold2 and H3-OPT achieved Cα-RMSDs of 6.10 Å and 4.25 Å, respectively. Additionally, the Cα-RMSDs of the OmegaFold predictions were 8.05 Å and 9.84 Å, respectively. These findings suggest that both AlphaFold2 and OmegaFold captured the mutation effects on conformation but achieved lower accuracy than H3-OPT in predicting these long CDR3 loops.

      2) The proposed method lacks a confidence score or a warning to help guide the users in moderate to challenging cases.

      We appreciate your suggestion, and we have trained a separate module to predict confidence scores. We used an MSE loss for confidence prediction, where the target was derived from the Cα deviation of each residue after alignment. The inputs of this module are the same as those used for H3-OPT, and it generates a confidence score ranging from 0 to 100.
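      As an illustration of such a module, here is a minimal sketch (our assumptions throughout: the layer sizes and the deviation-to-score mapping are invented, and only the MSE loss on per-residue Cα deviation and the 0-100 output range come from the description above):

      import torch
      import torch.nn as nn

      class ConfidenceHead(nn.Module):
          # Small MLP mapping per-residue features to a 0-100 confidence score.
          def __init__(self, d_model=128):
              super().__init__()
              self.mlp = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, 1))

          def forward(self, residue_features):
              # residue_features: (n_residues, d_model) -> scores in [0, 100]
              return 100.0 * torch.sigmoid(self.mlp(residue_features)).squeeze(-1)

      def target_score(ca_deviation, scale=2.0):
          # Map Ca deviation (Angstrom) to a 0-100 label: 0 A -> 100, large -> ~0.
          return 100.0 / (1.0 + ca_deviation / scale)

      head = ConfidenceHead()
      feats = torch.randn(20, 128)                 # 20 residues, toy features
      labels = target_score(torch.rand(20) * 8.0)  # toy deviations up to 8 A
      loss = nn.MSELoss()(head(feats), labels)     # the MSE objective described above
      loss.backward()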

      3) The fact that AF2 outperforms H3-OPT in some particular cases (e.g. Fig. 2c and Extended Data Fig. 3) raises the question: is there still room for improvement? It is not clear how sensitive H3-OPT is to the defined parameters. In the same line, benchmarking against other available prediction algorithms, such as OmegaFold, could shed light on the actual accuracy limit.

      We totally understand your concern. Many papers have suggested that PLM-based models are computationally efficient but may have unsatisfactory accuracy when high-resolution templates and MSAs are available (Fast, accurate antibody structure prediction from deep learning on massive set of natural antibodies, Ruffolo, J. A. et al, 2023). However, the accuracy of AF2 decreases substantially when MSA information is limited. Therefore, we directly retained high-confidence AF2 structures and introduced the PSPM to improve accuracy on targets with long CDR-H3 loops and few sequence homologs. The improvement in mean Cα-RMSD demonstrates that there was indeed room to improve CDR-H3 loop prediction.

      We also appreciate your kind comment on the defined parameters. Once a benchmark dataset is established, determining an optimal cutoff value through a parameter search can indeed further improve the performance of H3-OPT in CDR3 structure prediction. However, this optimal cutoff value depends heavily on the testing dataset being used. Therefore, we provide a recommended cutoff value and offer a program interface for users who wish to define the cutoff value manually based on their specific requirements. Here, we show the average Cα-RMSDs of our test set under different confidence cutoffs; the results have been added in the text accordingly.

      Author response table 1.
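      In code, the cutoff-based routing being tuned here amounts to the following schematic (our illustration; pspm_optimize is a hypothetical stand-in for the refinement step, not the released API):

      def pspm_optimize(loop):
          # Hypothetical placeholder: a real implementation would rebuild the loop.
          return loop

      def route_h3_loop(af2_loop, residue_plddt, cutoff=80.0):
          # CBM-style decision: keep the AF2 loop when its average pLDDT clears
          # the (user-adjustable) cutoff, otherwise hand it to the PSPM.
          mean_plddt = sum(residue_plddt) / len(residue_plddt)
          return af2_loop if mean_plddt >= cutoff else pspm_optimize(af2_loop)

      print(route_h3_loop("AF2_LOOP", [91.2, 88.5, 84.0, 79.3]))  # mean 85.75 -> kept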

      We also appreciate your reminder, and we have conducted a benchmark against OmegaFold. The results have been included in the manuscript (Fig 4a-b).

      Author response image 3.

      Reviewer #1 (Recommendations For The Authors):

      1) In Fig 3a, please also compare IgFold and H3-OPT (merge Fig. S2 into Fig 3a)

      In Fig 3b, please separate Sub2 and Sub3, and add IgFold's performance.

      Thank you very much for your professional advice. We have made revisions to the figures based on your suggestions.

      Author response image 4.

      2) For the three experimentally solved structures of anti-VEGF nanobodies, what are the sequence identities of the VH domain and H3 loop, compared to the best available template? What is the length of the H3 loop? Which category (Sub1/2/3) do the targets belong to? What is the performance of AF2 or AF2-Multimer on the three targets?

      We apologize for the confusion. The sequence identities of the VH domain and H3 loop, compared with the best template, are 0.816 and 0.647, respectively. The CDR-H3 lengths of these nanobodies are both 17. According to our classification strategy, these nanobodies belong to Sub1. The confidence scores of the AlphaFold2-predicted loops were all higher than 0.8, and these loops were accepted as the outputs of H3-OPT by the CBM.

      3) Is AF2-Multimer better than AF2, when using the sequences of antibody VH and antigen as input?

      Thanks for your suggestions. Several papers have benchmarked AlphaFold2-Multimer for protein complex modeling and shown that its accuracy in predicting protein complexes is far from satisfactory (Benchmarking AlphaFold for protein complex modeling reveals accuracy determinants, Rui Yin, et al., 2022). Additionally, there is no significant difference between AlphaFold2 and AlphaFold2-Multimer for antibody modeling (Structural Modeling of Nanobodies: A Benchmark of State-of-the-Art Artificial Intelligence Programs, Mario S. Valdés-Tresanco, et al., 2023).

      From the data perspective, we employed a non-redundant dataset for training and validation. Since these structures are valuable, considering the antigen sequence would reduce the size of our dataset, potentially leading to underfitting.

      4) For H3 loop grafting, I noticed that only identical target and template H3 sequences can trigger grafting (lines 348-349). How many such cases are in the test set?

      We appreciate your comment from this perspective. There are thirty targets in our database with identical CDR-H3 templates.

      Reviewer #2 (Recommendations For The Authors):

      • It is not clear to me whether the three structures apparently used as experimental confirmation of the predictions have been determined previously in this study or not. This is a key aspect, as a retrospective validation does not have the same conceptual value as a prospective, a posteriori validation. Please note that different parts of the text suggest different things in this regard "The model was validated by experimentally solving three structures of anti-VEGF nanobodies predicted by H3-OPT" is not exactly the same as "we then sought to validate H3-OPT using three experimentally determined structures of anti-VEGF nanobodies, including a wild-type (WT) and two mutant (Mut1 and Mut2) structures, that were recently deposited in protein data bank". The authors are kindly advised to make this point clear. By the way, "protein data bank" should be in upper case letters.

      We gratefully thank you for your feedback and fully understand your concerns. To validate the performance of H3-OPT, we initially solved the structures of both the wild-type and mutants of anti-VEGF nanobodies and submitted these structures to Protein Data Bank. We have corrected “that were recently deposited in protein data bank” into “that were recently deposited in Protein Data Bank” in our revised manuscript.

      • It would be good to clarify the goal and importance of the binding affinity prediction, as it seems a bit disconnected from the rest of the paper. Also, it would be good to include the production MD runs as Sup. Mat.

      Thanks for your valuable comment. We have added the following sentence in our manuscript to clarify the goal and importance of the molecular dynamics calculations: "Since affinity prediction plays a crucial role in antibody therapeutics engineering, we performed MD simulations to compare the differences in binding affinities between AF2-predicted complexes and H3-OPT-predicted complexes.". The details of the production runs are described in the Methods section.

      • Has any statistical test been performed to compare the mean Cα-RMSD values across the modeling approaches included in the benchmark exercise?

      Thanks for this kind recommendation. We conducted a statistical test to assess the performance of the different modeling approaches and demonstrated significant improvements with H3-OPT compared to other methods (p < 0.001). Additionally, we have trained H3-OPT with five random seeds and compared mean Cα-RMSD values with all five models of AF2. Here, we show the average Cα-RMSDs of H3-OPT and AlphaFold2.

      Author response table 2.

      • In Fig. 2c-f, I think it would be adequate to make the ordering criterion of the data points explicit in the caption or the graph itself.

      We appreciate your comment and suggestion. We have revised the graph in the manuscript accordingly.

      Author response image 5.

      • Please revise the Figure S2 caption and/or its content. It is not clear, in parts b and c, which is the performance of H3-OPT. Why weren't some other antibody-specific tools such as IgFold included in this comparison?

      Thanks for your comments. The performance of H3-OPT is not included in Figure S2. Prior to training H3-OPT, we conducted several preliminary studies; the detailed results are available in the supplementary sections. We showed that AlphaFold2 outperformed other methods (both AI-based and template-based modeling methods) and produced sub-angstrom predictions in framework regions. The comparison of IgFold with other methods was discussed in previous work (Fast, accurate antibody structure prediction from deep learning on massive set of natural antibodies, Ruffolo, J. A. et al, 2023), in which IgFold largely yielded results comparable to AlphaFold2 but at lower prediction cost. Additionally, we have conducted a detailed per-target comparison of CDR-H3 loops with IgFold in our main text.

      • It is stated that "The relative binding affinities of the antigen-antibody complexes were evaluated using the Python script...". Which Python script?

      Thank you for your comments, and we apologize for the confusion. The Python script in question is a module of the AMBER software package; we have corrected "The relative binding affinities of the antigen-antibody complexes were evaluated using the python script" to "The relative binding affinities of the antigen-antibody complexes were evaluated using the MMPBSA module of AMBER software".

      Reviewer #3 (Recommendations For The Authors):

      Does H3-OPT improve the AF2 score on the CDR-H3? It would be interesting to see whether grafted and PSPM loops improve the pLDDT score by using for example AF2Rank [https://doi.org/10.1103/PhysRevLett.129.238101]. That could also be a way to include a confidence score into H3-OPT.

      We are grateful for your question. H3-OPT could not provide a confidence score for its output in the version under review, so we did not know whether H3-OPT improves the AF2 score or not.

      We appreciate your kind recommendation and have calculated the pLDDT scores of all models predicted by H3-OPT and AF2 using AF2Rank. We found that the average pLDDT scores of the different predicted models did not track the Cα-RMSD values.

      Author response table 3.

      Therefore, we have trained a separate module to predict the confidence score of the optimized CDR-H3 loops. We hope that this module can provide users with reliable guidance on whether to use predicted CDR-H3 loops.

      The test case of Nb PDB id. 8CWU is an interesting example where AF2 outperforms H3-OPT and PLMs. The top AF2 model according to ColabFold (using default options and no template [https://doi.org/10.1038/s41592-022-01488-1]) shows a remarkably good model of the CDR-H3, explaining the low Ca-RMSD in the Extended Data Fig. 3. However, the pLDDT score of the 4 tip residues (out of 12), forming the hairpin of the CDR-H3 loop, pushes the average value down below the CBM cut-off of 80. I wonder if there is a lesson to learn from that test case. How sensitive is H3-OPT to the CBM cut-off definition? Have the authors tried weighting the residue pLDDT scores by some structural criterion before averaging? I guess AF2 may have less confidence in hydrophobic tip residues in exposed loops, as the solvent context may not provide enough support for the pLDDT score.

      Thanks for your valuable feedback. We show below the average Cα-RMSDs of our test set under different confidence cutoffs; these results have also been added in the text.

      Author response table 4.

      We greatly appreciate your comment on this point. Inspired by your kind suggestions, we will explore the relationship between cutoff values and structural criteria in follow-up work. Your feedback is highly valuable and will contribute to the development of our approach.

      A comparison against the new folding prediction method OmegaFold [https://doi.org/10.1101/2022.07.21.500999] is missed. OmegaFold seems to outperform AF2, ESM, and IgFold among others in predicting the CDR-H3 loop conformation (See [https://doi.org/10.3390/molecules28103991] and [https://doi.org/10.1101/2022.07.21.500999]). Indeed, prediction of anti-VEGF Nb structure (PDB WT_QF_0329, chain B in supplementary data) by OmegaFold as implemented in ColabFold [https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/beta/omegafold.ipynb] and setting 10 cycles, renders Ca-RMSD 1.472 Å for CDR-H3 (residues 98-115).

      We appreciate your valuable suggestion. We have added the comparison against OmegaFold in our manuscript. The results have been included in the manuscript (Fig 4a-b).

      Author response image 6.

      In our test set, OmegaFold outperformed ESMFold in predicting CDR-H3 loop conformations but failed to match the accuracy of AF2, IgFold, and H3-OPT. We discussed the difference between MSA-based methods (such as AlphaFold2) and MSA-free methods (such as IgFold) in predicting CDR-H3 loops. Consistent with this, OmegaFold provided results comparable to HelixFold-Single and other MSA-free methods but still failed to match the accuracy of AlphaFold2 and H3-OPT on Sub1.

      The time-consuming step in H3-OPT is the AF2 prediction. However, most of the time is spent in modeling the mAb and Nb scaffolds, which are already very well predicted by PLMs (See Fig. 4 in [https://doi.org/10.3390/molecules28103991]). Hence, why not use e.g. OmegaFold as the first step, whose score also correlates to the RMSD values [https://doi.org/10.3390/molecules28103991]? If that fails, then use AF2 or grafting. Alternatively, use a PLM model to generate a template, remove/mask the CDR loops (at least CDR-H3), and pass it as a template to AF2 to optimize the structure with or without MSA (e.g. using AF2Rank).

      Thanks for your professional feedback. It is true that the speed of MSA searching limits the application of high-throughput structure prediction. Previous studies have demonstrated that deep learning methods perform well on framework residues. We once tried to directly predict the conformations of CDR-H3 loops using PLM-based methods, but this initial version of H3-OPT, lacking the CBM, could not replicate the accuracy of AF2 on Sub1. Similarly, we showed that IgFold and OmegaFold also provide lower accuracy on Sub1 (average Cα-RMSDs of 1.71 Å and 1.83 Å, respectively, versus 1.07 Å for AF2). Therefore, the predictions of AlphaFold2 not only produce accurate scaffolds but also provide the highest-quality CDR-H3 loops when high-resolution templates and MSAs are available.

      Thank you once again for your kind recommendation. In the current version of H3-OPT, we have highlighted the strengths of combining AF2 and PLM models across various scenarios: AF2 can provide accurate predictions for short loops with fewer than 10 amino acids, and PLM-based models show little or no improvement in such cases. In the next version of H3-OPT, we plan to replace the AF2 models in the first step with other methods if an accurate MSA-free method becomes available.

      Line 115: The statement "IgFold provided higher accuracy in Sub3" is not supported by Fig. 2a.

      We are sorry for our carelessness. We have corrected "IgFold provided higher accuracy in Sub3" to "IgFold provided higher accuracy in Sub3 (Fig. 3a)".

      Lines 195-203: What is the statistical significance of results in Fig 5a and 5b?

      Thank you for your kind comments. AF2 models predicted significantly more surface residues than H3-OPT models (p < 0.005). In Fig. 5b, H3-OPT models predicted lower values than AF2 models for various surface properties, including polarity (p < 0.05) and hydrophilicity (p < 0.001).

      Lines 212-213: It is not easy to compare and quantify the differences between the electrostatic maps in Fig. 5d. Showing a difference map (e.g. map_model - map_experiment) would be a better option. Additionally, there is no methodological description of how the maps were generated, nor of the scale of the represented potential.

      Thank you for pointing this out. We have modified the figure (Fig. 5d) according to your kind recommendation and added the following sentences to clarify the methodological description of the surface electrostatic potential:

      “Analysis of surface electrostatic potential

      We generated two-dimensional projections of the CDR-H3 loop's surface electrostatic potential using SURFMAP v2.0.0 (based on GitHub from February 2023; commit: e0d51a10debc96775468912ccd8de01e239d1900) with default parameters. The 2D difference maps were calculated by subtracting the surface projections of the native structures from those of the H3-OPT- or AF2-predicted H3 loops.”

      Author response image 7.

      Lines 237-240 and Table 2: What is the meaning of comparing the average free energy of the whole set? Why should free energies be comparable among test cases? I think the correct way is to compare the mean pair-to-pair difference to the experimental structure. Similarly, reporting a precision on the order of 0.01 kcal/mol seems too precise for the methodology used; what is the statistical significance of the results? Were sampling issues accounted for by performing replicates or longer MDs?

      Thanks for your rigorous advice and for pointing out these issues. We have modified the comparisons of free energies across the different prediction methods and corrected the reported precision of these results. The average binding free energy of the H3-OPT complexes is lower than that of the AF2-predicted complexes, but the difference is not statistically significant (p > 0.05).

      Author response table 5.

      Comparison of binding affinities obtained from MD simulations using AF2 and H3-OPT.

      Thanks for your comments on this point. Longer MD simulations often achieve better convergence for the average behavior of the system, while replicates provide insight into the variability and robustness of the results. In our manuscript, each MD simulation was 100 nanoseconds long, with the initial 90 nanoseconds dedicated to achieving system equilibrium, verified by monitoring the RMSD (root mean square deviation). The final 10 nanoseconds of each simulation were used for the free energy calculation. This approach allowed us to balance the need for extensive sampling with verification of system stability.

      Regarding MD simulations for CDR-H3 refinement, their successful application depends strongly on the starting conformation, the force field, and the sampling strategy [https://doi.org/10.1021/acs.jctc.1c00341]. In particular, the applied plain MD seems a very limited strategy (there is not much information about the simulated times in the supplementary material). Similarly, local structure optimizations with QM methods are not expected to improve a starting conformation that is far from the experimental conformation.

      Thank you very much for your valuable feedback. We fully agree with your insights regarding the limitations of MD simulations. Before training H3-OPT, we demonstrated the challenge of accurately predicting CDR-H3 structures. We then tried to optimize the CDR-H3 loops with computational tools such as MD simulations and QM methods (detailed information on the MD simulations is provided in the main text). Unfortunately, these methods failed to improve the accuracy of AF2-predicted CDR-H3 loops. These results showed that MD simulations and QM methods are not only time-consuming but also fail to optimize the CDR-H3 loops. We therefore developed H3-OPT to tackle these issues and improve the accuracy of CDR-H3 prediction for the development of antibody therapeutics.

      Text improvements

      Relevant statistical and methodological parameters are presented in a dispersed manner throughout the text. For example, the number of structures in test, training, and validation datasets is first presented in the caption of Fig. 4. Similarly, the sequence identity % to define redundancy is defined in the caption of Fig. 1a instead of lines 87-88, where authors define "we constructed a non-redundant dataset with 1286 high-resolution (<2.5 Å)". Is the sequence redundancy for the CDR-H3 or the whole mAb/Nb?

      Thank you for pointing out these issues. We have added the number of structures in each subgroup in the caption of Fig. 1a: “Clustering of the filtered, high-resolution structures yielded three datasets for training (n = 1021), validation (n = 134), and testing (n = 131).” and corrected “As data quality has large effects on prediction accuracy, we constructed a non-redundant dataset with 1286 high-resolution (<2.5 Å) antibody structures from SAbDab” into “As data quality has large effects on prediction accuracy, we constructed a non-redundant dataset (sequence identity < 0.8) with 1286 high-resolution (<2.5 Å) antibody structures from SAbDab” in the revised manuscript. The sequence redundancy applies to the whole mAb/Nb.

      The description of ablation studies is not easy to follow. For example, what does removing TGM mean in practical terms (e.g. only AF2 is used, or PSPM is applied if AF2 score < 80)? Similarly, what does removing CBM mean in practical terms (e.g. all AF2 models are optimized by PSPM, and no grafting is done)?

      Thanks for your comments and suggestions. We have corrected “d, Differences in H3-OPT accuracy without the template module. e, Differences in H3-OPT accuracy without the CBM. f, Differences in H3-OPT accuracy without the TGM.” into “d, Differences in H3-OPT accuracy without the template module. In this ablation study, only the PSPM is used. e, Differences in H3-OPT accuracy without the CBM. In this ablation study, the input loop is optimized by the TGM and PSPM. f, Differences in H3-OPT accuracy without the TGM. In this ablation study, the input loop is optimized by the CBM and PSPM.”.

      Authors should report the values in the text using the same statistical descriptor that is used in the figures to help the analysis by the reader. For example, in lines 223-224 a precision score of 0.75 for H3-OPT is reported in the text (I assume this is the average value), while the median of ~0.85 is shown in Fig. 6a.

      Thank you for your careful checks. We have corrected “After identifying the contact residues of antigens by H3-OPT, we found that H3-OPT could substantially outperform AF2 (Fig. 6a), with a precision of 0.75 and accuracy of 0.94 compared to 0.66 precision and 0.92 accuracy of AF2.” into “After identifying the contact residues of antigens by H3-OPT, we found that H3-OPT could substantially outperform AF2 (Fig. 6a), with a median precision of 0.83 and accuracy of 0.97 compared to 0.64 precision and 0.95 accuracy of AF2.” in the proper place of the manuscript.

      Minor corrections

      Lines 91-94: What do length values mean? e.g. is 0-2 Å the RMSD from the experimental structure?

      We appreciate your comment and apologize for any confusion. The RMSD value is indeed computed relative to the experimental structure. It evaluates the deviation of the predicted CDR-H3 loop from the native structure and also represents the degree of prediction difficulty for AlphaFold2. We have added the following sentence in the proper place in the revised manuscript: “(RMSD, a measure of the difference between the predicted structure and an experimental or reference structure)”.

      Line 120: is the "AF2 confidence score" for the full-length or CDR-H3?

      We gratefully appreciate your valuable comment and have corrected “Interestingly, we observed that the AF2 confidence score shared a strong negative correlation with Cα-RMSDs (Pearson correlation coefficient = -0.67; Fig. 2b)” into “Interestingly, we observed that the AF2 confidence score of CDR-H3 shared a strong negative correlation with Cα-RMSDs (Pearson correlation coefficient = -0.67; Fig. 2b)” in the revised manuscript.

      Line 166: Do authors mean "Taken" instead of "Token"?

      We are really sorry for this careless mistake. Thank you for your reminder.

      Line 258: Reference to Fig. 1 seems wrong, do authors mean Fig. 4?

      We sincerely thank the reviewer for careful reading. As suggested by the reviewer, we have corrected the “Fig. 1” into “Fig. 4”.

      Author response image 7.

      Point out which plot corresponds to AF2 and which one to H3-OPT

      Thanks for pointing out this issue. We have added the legends of this figure in the proper positions in our manuscript.

    2. Reviewer #2 (Public Review):

      This work provides a new tool (H3-Opt) for the prediction of antibody and nanobody structures, based on the combination of AlphaFold2 and a pre-trained protein language model, with a focus on predicting the challenging CDR-H3 loops with higher accuracy than previously developed approaches. This task is of high value for the development of new therapeutic antibodies. The paper provides an external validation consisting of 131 sequences, with further analysis of the results by segregating the test set into three subsets of varying difficulty and comparison with other available methods. Furthermore, the approach was validated by comparing three experimentally solved 3D structures of anti-VEGF nanobodies with the H3-Opt predictions.

      Strengths:

      The experimental design to train and validate the new approach has been clearly described, including the dataset compilation and its representative sampling into training, validation and test sets, and structure preparation. The results of the in silico validation are quite convincing and support the authors' conclusions.

      The datasets used to train and validate the tool and the code are made available by the authors, which ensures transparency and reproducibility, and allows future benchmarking exercises with incoming new tools.

      Compared to AlphaFold2, the authors' optimization seems to produce better results for the most challenging subsets of the test set.

      Weaknesses:

      The comparison of affinity predictions derived from AlphaFold2 and H3-opt models, based on molecular dynamics simulations, should have been discussed in depth. In some cases, there are huge differences between the estimations from H3-opt models and those from experimental structures. It seems that the authors obtained average differences of the real delta, instead of average differences of the absolute value of the delta. This can be misleading, because high negative differences might be compensated by high positive differences when computing the mean value. Moreover, it would have been good for the authors to disclose the trajectories from the MD simulations.

    1. Author Response

      We would like to thank the senior editor, the reviewing editor, and all the reviewers for taking the time to review our manuscript and for appreciating our study. We are excited that you have all found strengths in our work and have provided comments to strengthen it further. We sincerely appreciate the valuable comments and suggestions, which we believe will help us further improve the quality of our work.

      Reviewer 1

      The manuscript by Dubey et al. examines the function of the acetyltransferase Tip60. The authors show that (auto)acetylation of a lysine residue in Tip60 is important for its nuclear localization and liquid-liquid-phase-separation (LLPS). The main observations are: (i) Tip60 is localized to the nucleus, where it typically forms punctate foci. (ii) An intrinsically disordered region (IDR) within Tip60 is critical for the normal distribution of Tip60. (iii) Within the IDR the authors show that a lysine residue (K187), that is auto-acetylated, is critical. Mutation of that lysine residue to a non-acetylable arginine abolishes the behavior. (iv) biochemical experiments show that the formation of the punctate foci may be consistent with LLPS.

      On balance, this is an interesting study that describes the role of acetylation of Tip60 in controlling its biochemical behavior as well as its localization and function in cells. The authors mention in their Discussion section other examples showing that acetylation can change the behavior of proteins with respect to LLPS; depending on the specific context, acetylation can promote (as here for Tip60) or impair LLPS.

      Strengths:

      The experiments are largely convincing and appear to be well executed.

      Weaknesses:

      The main concern I have is that all in vivo (i.e. in cells) experiments are done with overexpression in Cos-1 cells, in the presence of the endogenous protein. No attempt is made to use e.g. cells that would be KO for Tip60 in order to have a cleaner system or to look at the endogenous protein. It would be reassuring to know that what the authors observe with highly overexpressed proteins also takes place with endogenous proteins.

      Response: The main reason for performing these experiments with an overexpression system was to generate different point mutants and deletion mutants of TIP60 and analyse their effects on its properties and functions. To validate our observations with the overexpression system, we also examined the localization pattern of endogenous TIP60 by IFA, and the results depict a similar foci pattern within the nucleus to that observed with the overexpressed TIP60 protein (Figure 4A). However, we understand the reviewer's concern and agree to repeat some of the overexpression experiments under endogenous TIP60 knockdown conditions using siRNA or shRNA against the 3’ UTR region.

      Also, it is not clear how often the experiments have been repeated and additional quantifications (e.g. of western blots) would be useful.

      Response: The experiments were performed as independent biological replicates (n=3), and this is mentioned in the figure legends. Regarding the suggestion to quantify the Western blots, we would like to note that wherever quantitative estimation was required (e.g., the blots in Figures 2F and 6H), graphs representing the quantified values with p-values had already been added. In addition, as suggested, quantitation for Figure 6D will be performed and added in the revised version.

      In addition, regarding the LLPS description (Figure 1), it would be important to show the wetting behaviour and the temperature-dependent reversibility of the droplet formation.

      Response: We appreciate the suggestion, and we will perform these assays and include the results in the revised version.

      In Fig 3C the mutant (K187R) Tip60 is cytoplasmic, but still appears to form foci. Is this still reflecting phase separation, or some form of aggregation?

      Response: The TIP60 (K187R) mutant remains cytosolic with a homogeneous distribution, as shown in Figure 2E. With TIP60 partners like PXR or p53 as well, this mutant protein remains homogeneously distributed in the cytosol. However, when co-expressed with the TIP60 (wild-type) protein, this mutant, although still cytosolic, also shows some foci-like pattern at the nuclear periphery, which we believe could be accumulated aggregates.

      Reviewer 2

      The manuscript "Autoacetylation-mediated phase separation of TIP60 is critical for its functions" by Dubey S. et al reported that the acetyltransferase TIP60 undergoes phase separation in vitro and cell nuclei. The intrinsically disordered region (IDR) of TIP60, particularly K187 within the IDR, is critical for phase separation and nuclear import. The authors showed that K187 is autoacetylated, which is important for TIP60 nuclear localization and activity on histone H4. The authors did several experiments to examine the function of K187R mutants including chromatin binding, oligomerization, phase separation, and nuclear foci formation. However, the physiological relevance of these experiments is not clear since TIP60 K187R mutants do not get into nuclei. The authors also functionally tested the cancer-derived R188P mutant, which mimics K187R in nuclear localization, disruption of wound healing, and DNA damage repair. However, similar to K187R, the R188P mutant is also deficient in nuclear import, and therefore, its defects cannot be directly attributed to the disruption of the phase separation property of TIP60. The main deficiency of the manuscript is the lack of support for the conclusion that "autoacetylation-mediated phase separation of TIP60 is critical for its functions".

      This study offers some intriguing observations. However, the evidence supporting the primary conclusion, specifically regarding the necessity of the intrinsically disordered region (IDR) and K187ac of TIP60 for its phase separation and function in cells, lacks sufficient support and warrants more scrutiny. Additionally, certain aspects of the experimental design are perplexing and lack controls to exclude alternative interpretations. The manuscript can benefit from additional editing and proofreading to improve clarity.

      Response: We understand the point raised by the reviewer; however, we would like to draw the reviewer's attention to the data where we clearly demonstrated that acetylation of lysine 187 within the IDR of TIP60 is required for its phase separation (Figure 2J). We would also like to point to the other TIP60 mutants within the IDR (R177H, R188H, K189R), which all enter the nucleus and form phase-separated foci. The cancer-associated mutation at R188 behaves similarly because it also hampers TIP60 acetylation at the adjacent K187 residue. Our in vitro and in cellulo results clearly demonstrate that autoacetylation of TIP60 at K187 within its IDR is critical for multiple functions, including its translocation inside the nucleus and its protein-protein interactions and oligomerization, which are prerequisites for phase separation of TIP60.

      There are two putative NLS sequences (NLS #1 from aa145; NLS #2 from aa184) in TIP60, both of which are within the IDR. Deletion of the whole IDR is therefore expected to abolish the nuclear localization of TIP60. Since K187 is within NLS #2, the cytoplasmic localization of the IDR and K187R mutants may not be related to the ability of TIP60 to phase separation.

      Response: We are not disputing the presence of putative NLSs within the IDR region of TIP60; however, our results with different mutations within the IDR region (K76, K80, K148, K150, R177, R178, R188, K189) clearly demonstrate that only acetylation of the K187 residue is critical for shuttling TIP60 inside the nucleus, while all the other lysine mutants located within these putative NLS regions exhibited no impact on TIP60’s nuclear shuttling. We have mentioned in our discussion that autoacetylation of TIP60’s K187 may induce local structural modifications in its IDR that are critical for translocating TIP60 inside the nucleus, where it undergoes phase separation critical for its functions. A previous example of a similar kind showed that acetylation of a lysine within the NLS region of TyrRS by PCAF promotes its nuclear localization (Cao X et al 2017, PNAS). The IDR region (which also contains the K187 site) is important for phase separation once the protein enters the nucleus. This could be the cell’s mechanism to prevent unwarranted action of TIP60 until it enters the nucleus and phase separates on chromatin at appropriate locations.

      The chromatin-binding activity of TIP60 depends on HAT activity, but not phase-separation (Fig 1I), (Fig 2B). How do the authors reconcile the fact that the K187R mutant is able to bind to chromatin with lower activity than the HAT mutant (Fig 2F, 2I)?

      Response: K187 acetylation is required for TIP60’s nuclear translocation but is not critical for chromatin binding. When the soluble fraction is prepared in the fractionation experiment, the nuclear membrane is disrupted, and the TIP60 (K187R) mutant no longer faces any hindrance in accessing the chromatin and thus can load onto it (although not as efficiently as the wild-type protein). For efficient chromatin binding, autoacetylation of other lysine residues in TIP60 is required, which might be hampered by the reduced catalytic activity or might not be sufficient to maintain equilibrium with HDAC activity inside the nucleus. In the case of K187R, the reduced autoacetylation is captured while the protein is in the cytosol. During fractionation, once this mutant has access to chromatin, it might autoacetylate other lysine residues critical for chromatin loading (recall that the catalytic domain is intact in this mutant). This is evident from the hyper-autoacetylation of the wild-type protein compared to the K187R or HAT mutant proteins. We want to emphasize that phase separation occurs only after efficient chromatin loading of TIP60; that is why, under in cellulo conditions, both the K187R mutant (which cannot enter the nucleus) and the HAT mutant (which enters the nucleus but fails to bind efficiently onto the chromatin) fail to form phase-separated nuclear punctate foci.

      The DIC images of phase separation in Fig 2I need to be improved. The image for K187R showed the irregular shape of the condensates, which suggests particles in solution or on the slide. The authors may need to use fluorescent-tagged TIP60 in the in vitro LLPS experiments.

      Response: We believe this comment refers to Figure 2J. The irregularly shaped condensates observed for TIP60 K187R are unique to the mutant protein and are not caused by particles on the slide. We would like to draw the reviewer's attention to supplementary figure S2A, where the DIC images for the TIP60 (wild-type) protein tested under different protein and PEG8000 conditions are completely clear and the protein did not form phase-separated droplets, ruling out the possibility of particles in solution or on the slides.

      The authors mentioned that the HAT mutant of TIP60 does not phase separate, which needs to be included.

      Response: We have already added the image of RFP-TIP60 (HAT mutant) in supplementary Fig S4A (panel 2) in the manuscript.

      Related to Point 3, the HAT mutant that doesn't form punctate foci by itself, can incorporate into WT TIP60 (Fig 5A). In vitro LLPS assay for WT, HAT, and K187R mutants with or without acetylation should be included. WT and mutant TIP can be labelled with GFP and RFP, respectively.

      Response: We would like to draw the reviewer's attention to our co-expression experiments in Figure 5, where the wild-type protein (under both tagged and untagged conditions) is able to phase separate and form punctate foci with the co-expressed HAT mutant protein (with depleted autoacetylation capacity). We believe these in cellulo experiments already answer the queries that the reviewer suggests addressing with in vitro experiments.

      Fig 3A and 3B showed that neither K187 mutant nor HAT mutant could oligomerize. If both experiments were conducted in the absence of in vitro acetylation, how do the authors reconcile these results?

      Response: We thank the reviewer for highlighting our oversight in omitting the mention of acetyl coenzyme A here. To induce acetylation under in vitro conditions, we added 10 µM acetyl-CoA to the reactions depicted in Figures 3A and 3B. The acetyl-CoA information for Figure 3B was already included in the GST pull-down assay (Materials and Methods section). We will add the same to the oligomerization assay in the Materials and Methods of the revised manuscript.

      In Fig 4, the colocalization images showed little overlap between TIP60 and nuclear speckle (NS) marker SC35, indicating that the majority of TIP60 localized in the nuclear structure other than NS. Have the authors tried to perturbate the NS by depleting the NS scaffold protein and examining TIP60 foci formation? Do PXR and TP53 localize to NS?

      Response: Under normal conditions, the majority of TIP60 is not localized in nuclear speckles (NS), so we believe that perturbing NS will not have a significant effect on TIP60 foci formation. Interestingly, a recent study by the Shelley Berger group (Alexander KA et al., Mol Cell. 2021;81(8):1666-1681) showed that p53 localizes to NS to regulate a subset of its target genes. We have mentioned this in our discussion section. No information is available about the localization of PXR in NS.

      Were TIP60 substrates, H4 (or NCP), PXR, TP53, present in TIP60 condensates in vitro? It's interesting to see that both PXR and TP53 had homogenous nuclear signals when expressed together with K187R, R188P (Fig 6E, 6G), or HAT (Suppl Fig S4A) mutants. Are PXR or TP53 nuclear foci dependent on their acetylation by TIP60? This can and should be tested.

      Response: Both p53 and PXR are known to be acetylated by TIP60. In the case of PXR, TIP60 acetylates it at lysine 170, and this TIP60-mediated acetylation of PXR at K170 is important for the TIP60-PXR foci, which we now know are formed by phase separation (Bakshi K et al., Sci Rep. 2017 Jun 16;7(1):3635).

      Since R188P mutant, like K187R, does not get into the nuclei, it is not suitable to use this mutant to examine the functional relevance of phase separation for TIP60. The authors need to find another mutant in IDR that retains nuclear localization and overall HAT activity but specifically disrupts phase separation. Otherwise, the conclusion needs to be restated. All cancer-derived mutants need to be tested for LLPS in vitro.

      Response: We appreciate the reviewer's point here, but it is important to note that the objective of these experiments is to understand the impact of K187R (critical for multiple aspects of TIP60, including phase separation) and R188P (a naturally occurring cancer-associated mutation behaving similarly to K187R) on TIP60's activities, to determine their functional relevance. As suggested by the reviewer, finding and testing an IDR mutant that fails to phase separate while retaining nuclear localization and catalytic activity can be examined in future studies.

      For all cellular experiments, it is not mentioned whether endogenous TIP60 was removed and absent in the cell lines used in this study. It's important to clarify this point because the localization and function of mutant TIP60 are affected by WT TIP60 (Fig 5).

      Response: Endogenous TIP60 was present in the in cellulo experiments; however, as suggested by reviewer 1, we will perform some of the in cellulo experiments under endogenous TIP60 knockdown conditions to validate our findings.

      It is troubling that H4 peptide is used for in vitro HAT assay since TIP60 has much higher activity on nucleosomes and its preferred substrates include H2A.

      Response: The purpose of using the H4 peptide in the HAT assay is to determine the impact of the mutations on TIP60's catalytic activity. As H4 is one of the major histone substrates of TIP60, we believe it satisfies the objective of the experiments.

      Reviewer 3

      This study presents results arguing that the mammalian acetyltransferase Tip60/KAT5 auto-acetylates itself on one specific lysine residue before the MYST domain, which in turn favors not only nuclear localization but also condensate formation on chromatin through LLPS. The authors further argue that this modification is responsible for the bulk of Tip60 autoacetylation and acetyltransferase activity towards histone H4. Finally, they suggest that it is required for association with transcription factors and for in vivo function in gene regulation and the DNA damage response.

      These are very wide and important claims and, while some results are interesting and intriguing, there is not nearly enough work performed/data presented to support them. In addition, some results are redundant, lack consistency in the mutants analyzed, and contradict one another. The most important shortcoming of the study is the fact that every single experiment in cells was done under over-expressed conditions, from transiently transfected cells. It is well known that these conditions can lead to non-specific mass effects, cellular localization not reflecting native conditions, and disruption of the native interactome. On that topic, it is quite striking that the authors completely ignore the fact that Tip60 is exclusively found as part of a stable large multi-subunit complex in vivo, with more than 15 different proteins. Thus, arguing for a single residue acetylation regulating condensate formation and most Tip60 functions while ignoring native conditions (and the fact that Tip60 cannot function outside its native complex) does not allow me to support this study.

      Response: We appreciate the reviewer's point here, but it is important to note that the main purpose of using an overexpression system in this study is to analyse the effects of the different point/deletion mutations generated on TIP60. We have overexpressed proteins with different tags (GFP or RFP) or without tags (Figure 3C, Figure 5) to confirm that the behaviour of the protein remains unperturbed by the presence of tags. To validate this, we have also examined the localization of the endogenous TIP60 protein, which depicts localization behaviour similar to the overexpressed protein. We would like to draw attention to the several reports in the literature where similar overexpression systems are used to determine the functions of TIP60 and its mutants. The nuclear foci pattern observed for TIP60 in our study has also been reported by several other groups.

      Sun, Y., et. al. (2005) A role for the Tip60 histone acetyltransferase in the acetylation and activation of ATM. Proc Natl Acad Sci U S A, 102(37):13182-7.

      Kim, C.-H. et al. (2015) ‘The chromodomain-containing histone acetyltransferase TIP60 acts as a code reader, recognizing the epigenetic codes for initiating transcription’, Bioscience, Biotechnology, and Biochemistry, 79(4), pp. 532–538.

      Wee, C. L. et al. (2014) ‘Nuclear Arc Interacts with the Histone Acetyltransferase Tip60 to Modify H4K12 Acetylation(1,2,3).’, eNeuro, 1(1). doi: 10.1523/ENEURO.0019-14.2014.

      However, as a caution, and as also suggested by the other reviewers, we will perform some of these overexpression experiments in the absence of endogenous TIP60 by using 3’ UTR-specific siRNA/shRNA.

      We thank the reviewer for the comment on the multi-subunit complex, and we would like to expand our study by determining the interaction of some of the complex subunits with TIP60 wild-type (which forms nuclear condensates), the TIP60 HAT mutant (which enters the nucleus but does not form condensates), and TIP60 K187R (which does not enter the nucleus and does not form condensates). We will include the results of these experiments in the revised manuscript.

      • It is known that over-expression after transient transfection can lead to non-specific acetylation of lysines on the proteins, likely in part to protect from proteasome-mediated degradation. It is not clear whether the Kac sites targeted in the experiments are based on published/public data. In that sense, it is surprising that the K327R mutant does not behave like a HAT-dead mutant (which is what exactly?) or the K187R mutant as this site needs to be auto-acetylated to free the catalytic pocket, so essential for acetyltransferase activity like in all MYST-family HATs. In addition, the effect of K187R on the total acetyl-lysine signal of Tip60 is very surprising as this site does not seem to be a dominant one in public databases.

      Response: We have chosen autoacetylation sites based on previously published studies in which LC-MS/MS and in vitro acetylation assays were used to identify autoacetylation sites in TIP60, including K187. We have already mentioned this in the manuscript and quoted the references (1. Yang, C., et al (2012). Function of the active site lysine autoacetylation in Tip60 catalysis. PloS one 7, e32886. 10.1371/journal.pone.0032886. 2. Yi, J., et al (2014). Regulation of histone acetyltransferase TIP60 function by histone deacetylase 3. The Journal of biological chemistry 289, 33878–33886. 10.1074/jbc.M114.575266.). We would like to emphasize that both these studies identified K187 as an autoacetylation site in TIP60. Since the TIP60 HAT mutant (with significantly reduced catalytic activity) can also enter the nucleus, it is not surprising that the K327R mutant could enter the nucleus as well.

      • As the physiological relevance of the results is not clear, the mutants need to be analyzed at the native level of expression to study real functional effects on transcription and localization (ChIP/IF). It is not clear what the claim that Tip60 forms nuclear foci/punctate signals at physiological levels is based on. This is certainly debated, in part because of the poor choice of antibodies available for IF analysis. In that sense, it is not clear which Ab is used in the Westerns. Endogenous Tip60 is known to be expressed in multiple isoforms from splice variants, the most dominant one being isoform 2 (PLIP), which lacks a big part (aa96-147) of the so-called IDR domain presented in the study. Does this major isoform behave the same?

      Response: The TIP60 antibody used in the study is from Santa Cruz (Cat. No. sc-166323). This antibody is widely used for TIP60 detection by several methods and has been cited in numerous publications. The Cat. No. will be mentioned in the manuscript. Regarding isoforms, three isoforms are known for TIP60, among which isoform 2 is the major one expressed and is the one used in our study. Isoforms 1 and 2 have an IDR of the same length (150 amino acids), while isoform 3 has an IDR of 97 amino acids. Interestingly, K187 is present in all the isoforms (already mentioned in the manuscript), and the region missing in isoform 3 (amino acids 96-147) has less propensity for disorder (marked by the blue circle). This clearly shows that all the isoforms of TIP60 have the tendency to phase separate.

      Author response image 1.

      • It is extremely strange to show that the K187R mutant fails to get in the nuclei by cell imaging but remains chromatin-bound by fractionation... If K187 is auto-acetylated and required to enter the nucleus, why would a HAT-dead mutant not behave the same?

      Response: We would like to draw attention to the fact that neither the HAT mutant nor the K187R mutant is completely catalytically dead. As our data show, both these mutants have catalytic activity, although at significantly decreased levels. We believe that K187 acetylation is critical for TIP60 to enter the nucleus, and once TIP60 shuttles inside the nucleus, autoacetylation of other sites is required for efficient chromatin binding of TIP60. In the fractionation assay, the nuclear membrane is dissolved while preparing the soluble fraction, so there is no hindrance for the K187R mutant in accessing the chromatin. In the case of the HAT mutant, it can acetylate the K187 site and is thus able to enter the nucleus; however, its residual catalytic activity is either unable to autoacetylate the other residues required for efficient chromatin binding or insufficient to counter the activities of HDACs deacetylating TIP60.

      • If K187 acetylation is key to Tip60 function, it would be most logical (and classical) to test a K187Q acetyl-mimic substitution. In that sense, what happens with the R188Q mutant? That all goes back to the fact that this cluster of basic residues looks quite like an NLS.

      Response: As suggested, we will generate an acetylation-mimicking mutant for the K187 site and examine it. The results will be added in the revised manuscript.

      • The effect of the mutant on the TIP60 complex itself needs to be analyzed, e.g. for associated subunits like p400, ING3, TRRAP, Brd8...

      Response: As suggested, we will examine the effect of the mutations on the TIP60 complex.

    1. Author Response:

      We take the liberty of thanking all of you for your constructive and inspiring comments, which will help us substantially improve the final version of the paper. Before our detailed final revision, I am writing this provisional letter as a quick response to the reviewers' comments.

      I first give a short summary of your public reviews, then respond point by point.

      Editors:

      1. More discussion is needed.

      2. More discussion about eye fixation during adaptation. Discuss why increasing visual uncertainty by blurring the cursor in the present study produces the opposite findings of previous studies (Tsay et al., 2021; Makino et al., 2023).

      3. Discuss the broad impact of the current model.

      4. Share the codes and the metadata (instead of the current data format).

      Response: This is a concise summary of the major concerns listed in the public reviews. Given that these concerns are easy to address, we are giving a quick but point-by-point response for now. The elaborated version will be included in our formal revision.

      **Reviewer 1: **

      1) More credit should be given to the PReMo model: a) the PReMo model also proposes that perceptual error drives implicit adaptation, as in a new publication (Tsay et al., 2023) that was not public at the time of the current writing; and b) the PReMo model can account for some datasets, e.g., Fig 4A.

      Response: We will add this new citation and point out that the new paper also uses the term perceptual error. We will also point out that the PReMo model has the potential to explain Fig 4A, though for now it assumes an additional visual shift to explain the positive proprioceptive changes relative to the target. We will expand the discussion comparing the two models.

      2) The present study produced a finding opposite to previous reports: here, upregulating visual uncertainty (by cursor blurring) decreases adaptation for large perturbations but less so for small perturbations, while previous studies (using a cursor cloud; Tsay et al., 2021; Makino et al., 2023) have shown the opposite. This needs explanation.

      Response: Using the cursor cloud (Tsay et al., 2021, Makino et al., 2023) to modulate visual uncertainty has inherent drawbacks that make it unsuitable for testing the sensory uncertainty effect for visuomotor rotation. For the error clamp paradigm, the error is defined as angular deviation. The cursor cloud consists of multiple cursors spanning over a range of angles, which affects both the sensory uncertainty (the intended outcome) AND the sensory estimate of angles (the error itself, the undesired outcome). In Bayesian terms, the cursor cloud aims to modulate the sigma of a distribution (sigma_v in our model), but it additionally affects the mean of the distribution (mu). This unnecessary confound is avoided by using cursor blurring, which is still a cursor with its center (mu) unchanged from an un-blurred cursor. Furthermore, as correctly pointed out in the original paper by Tsay et al., 2021, the cursor cloud often overlaps with the visual target. This “target hit” would affect adaptation, possibly via a reward learning mechanism (See Kim et al., 2019 eLife). This is a second confound that accompanies the cursor cloud. We will expand our discussion to explain the discrepancy between our findings and previous findings.
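      For concreteness, the standard precision-weighted form of this argument reads as follows (generic notation; this is an illustration, not necessarily the exact PEA parameterization):

      $$\hat{x} = \frac{x_v/\sigma_v^2 + x_p/\sigma_p^2 + x_u/\sigma_u^2}{1/\sigma_v^2 + 1/\sigma_p^2 + 1/\sigma_u^2},$$

      where $x_v$, $x_p$, and $x_u$ denote the visual, proprioceptive, and motor-prediction cues with uncertainties $\sigma_v$, $\sigma_p$, and $\sigma_u$. Blurring the cursor increases only $\sigma_v$, down-weighting the visual cue without moving it, whereas a cursor cloud spanning a range of angles can also shift the effective $x_v$ itself, which is the confound described above.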

      3) The estimation of visual uncertainty (our exp1) required people to fixate on the target, while this might not reflect the actual scenario during adaptation where people are free to look wherever they want.

      Response: Our data show otherwise: in a typical error-clamp setting, people fixate on the target for the majority of the time. In our Exp1, fixation on the straight line between the starting position and the target is 86%-95% (as shown in Figure S1). We also collected eye-tracking data in our Exp4, which is a typical error-clamp experiment. More than 95% of gaze falls within +/- 50 pixels around the center of the screen, even slightly higher than in Exp1. We will provide this part of the data in the revision. In fact, we designed our Exp1 to mimic the eye-tracking pattern in typical error-clamp learning, based on carefully executed pilot experiments.

      This high percentage of fixating on the target is not surprising: the error-clamp task requires participants to use their hands to move towards the target and to ignore the cursor. In fact, we would also like to point out that the high percentage of fixation on the aiming target is also true for conventional visuomotor rotation, which involves strategic re-aiming (shown in de Brouwer et al. 2018; Bromberg et al. 2019; we have an upcoming paper to show this). This is one reason that our new theory would also apply to other types of motor adaptation.

      4) More methodology details are needed. E.g., a figure showing the visual blurring, a figure showing individual data, a table showing data from individual sessions, code sharing, and a possible new correlational analysis.

      Response: All this additional methodological/analysis information will be provided. We had limited ourselves to writing a short paper, but the revision will be extended to cover all these details.

      Reviewer 2:

      1) More discussions are needed since the focus of this study is narrowly confined to visuomotor rotation. “A general computational principle, and its contributions to other motor learning paradigms remain to be explored”.

      Response: This is a great suggestion since we also think our original Discussion has not elaborated on the possible broad impact of our theory. Our model is not limited to the error-clamp adaptation, where the participants were explicitly told to ignore the rotated cursor. The error-clamp paradigm is one rare example that implicit motor learning can be isolated in a nearly idealistic way. Our findings thus imply two key aspects of implicit adaptation: 1) localizing one’s effector is implicitly processed and continuously used to update the motor plan; 2) Bayesian cue combination is at the core of integrating multimodal feedback and motor-related cues (motor prediction cue in our model) when forming procedural knowledge for action control.

      We will propose that the same two principles should be applied to various kinds of motor adaptation and motor skill learning, which constitutes motor learning in general. Most of our knowledge about motor adaptation is from visuomotor rotation, prism adaptation, force field adaptation, and saccadic adaptation. The first three types all involve localizing one’s effector under the influence of perturbed sensory feedback, and they also have implicit learning. We believe they can be modeled by variants of our model, or at least we should consider using the two principles above to think of their computational nature. For skill learning, especially for de novo learning, the area still lacks a fundamental computational model that accounts for the skill acquisition process on the level of relevant movement cues. Our model suggests a promising route, i.e., repetitive movements with a Bayesian cue combination of movement-related cues might underlie the implicit process of motor skills.

      We will add more discussion on the possible broad implications of our model in the revision.

      Reviewer 3:

      1) Similar to Reviewer 1, the reviewer raised the concern about whether people's fixation in typical motor adaptation settings is similar to the fixation that we instructed in our Exp1.

      Response: see above.

      2) Similar to Reviewer 2, the concern was raised about whether our new theory is applicable in a broad context. In particular, the error clamp appears to be a strange experimental manipulation with no real-life appeal: “(I)gnoring errors and suppressing adaptation would also be a disastrous strategy to use in the real world”.

      Response: Regarding the broad impact of our model, please see the responses to Reviewer 2 above. We agree that ignoring errors (and thus “trying” to suppress adaptation) should not be a movement strategy for real-world intentional tasks. However, even in real life, we constantly attend to one thing while doing another; that is when implicit motor processes are in charge. Furthermore, it is this exact “ignoring” instruction that elicits the implicit adaptation that we can work on. In this sense, the error-clamp paradigm is a great vehicle for isolating implicit adaptation, allowing us to unpack its cognitive mechanism.

      3) In Exp1, the 1s delay between the movement end and the presentation of the reference cursor might inflate the actual visual uncertainty.

      Response: The 1s delay of the reference cursor would not inflate the estimate of visual uncertainty. Our Exp1 used a paradigm similar to those in vision science (e.g., White, Levi, and Aitsebaomo, Vision Research, 1992), which shows that delay does not lead to an obvious increase in visual uncertainty over a broad range of values (from 0.2s to >1s; see their Figures 5-6). We will add more methodological justification in our revision.

      4) Our Fig4A used Tsay et al., 2021 data, which, in the reviewer’s view, is not an appropriate measure of proprioceptive bias. The reason is that in this dataset, “participants actively move to a visual target, the reported hand positions do not reflect proprioception, but mostly the remembered position of the target participants were trying to move to.”

      Response: We agree that the Tsay et al., 2021 study used an unconventional way to measure the influence of implicit adaptation on proprioception, and their observed “proprioceptive changes” should not be called “proprioceptive bias,” a term conventionally reserved for the difference between the estimated hand location and the actual hand location (preferably with a passively moved hand). However, we think their dataset is still subject to the same Bayesian cue combination principle and thus can be modeled. Our modeling of this dataset includes all relevant cues: the implicitly perceived hand position and the proprioceptive cue (given that the hand stays at the movement end). Both cues are in extrinsic coordinates, which happened to set the target position as zero. But where the zero is set (whether at the target or at the actual hand location) does not matter for the model fitting. Note that our Exp4 is also based on PEA modeling of proprioceptive bias, and this time the data are presented relative to the actual location.
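      To spell out why the choice of origin is immaterial (a generic one-line derivation in our notation, not specific to this dataset): translating every cue mean by a constant $c$, e.g., moving the zero from the target to the actual hand location, translates the precision-weighted estimate by exactly the same $c$,

      $$\hat{x}' = \frac{\sum_i (x_i + c)/\sigma_i^2}{\sum_i 1/\sigma_i^2} = \hat{x} + c,$$

      so the fitted parameters are unaffected by where zero is placed.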

      In the revision, we will keep the current Fig 4A and start calling the data “proprioceptive change” as opposed to “proprioceptive bias,” to follow the convention.

    2. Reviewer #1 (Public Review):

      This valuable study demonstrates a novel mechanism by which implicit motor adaptation saturates for large visual errors in a principled normative Bayesian manner. Additionally, the study revealed two notable empirical findings: visual uncertainty increases for larger visual errors in the periphery, and proprioceptive shifts/implicit motor adaptation are non-monotonic, rather than ramp-like. This study is highly relevant for researchers in sensory cue integration and motor learning. However, I find some areas where statistical quantification is incomplete, and the contextualization of previous studies to be puzzling.

      Issue #1: Contextualization of past studies.

      While I agree that previous studies have focused on how sensory errors drive motor adaptation (e.g., Burge et al., 2008; Wei and Kording, 2009), I don't think the PReMo model was contextualized properly. Indeed, while PReMo should have adopted clearer language - given that proprioception (sensory) and kinaesthesia (perception) have been used interchangeably, something we now make clear in our new study (Tsay, Chandy, et al. 2023) - PReMo's central contribution is that a perceptual error drives implicit adaptation (see Abstract): the mismatch between the felt (perceived) and desired hand position. The current paper overlooks this contribution. I encourage the authors to contextualize PReMo's contribution more clearly throughout. Not mentioned in the current study, for example, PReMo accounts for the continuous changes in perceived hand position in Figure 4 (Figure 7 in the PReMo study).

      There is no doubt that the current study provides important additional constraints on what determines perceived hand position: Firstly, it offers a normative Bayesian perspective in determining perceived hand position. PReMo suggests that perceived hand position is determined by integrating motor predictions with proprioception, then adding a proprioceptive shift; PEA formulates this as the optimal integration of these three inputs. Secondly, PReMo assumed visual uncertainty to remain constant for different visual errors; PEA suggests that visual uncertainty ought to increase (but see Issue #2).

      Issue #2: Failed replication of previous results on the effect of visual uncertainty.

      2a. A key finding of this paper is that visual uncertainty linearly increases in the periphery, a constraint crucial for explaining the non-monotonicity in implicit adaptation. One notable methodological deviation from previous studies is the requirement to fixate on the target, a constraint not imposed in previous studies. In a free-viewing environment, visual uncertainty may not attenuate as fast, and hence implicit adaptation does not attenuate as quickly as revealed in the current design with larger visual errors. It seems like this fixation design, while important, needs to be properly contextualized considering how it may not represent most implicit adaptation experiments.

      2b. Moreover, the current results - visual uncertainty attenuates implicit adaptation in response to large, but not small, visual errors - deviates from several past studies that have shown that visual uncertainty attenuates implicit adaptation to small, but not large, visual errors (Tsay, Avraham, et al. 2021; Makino, Hayashi, and Nozaki, n.d.; Shyr and Joshi 2023). What do the authors attribute this empirical difference to? Would this free-viewing environment also result in the opposite pattern in the effect of visual uncertainty on implicit adaptation for small and large visual errors?

      2c. In the current study, the measure of visual uncertainty might be inflated by brief presentation times of comparison and referent visual stimuli (only 150 ms; our previous study allowed for a 500 ms viewing time to make sure participants see the comparison stimuli). Relatedly, there are some individuals whose visual uncertainty is greater than 20 degrees standard deviation. This seems very large, and less likely in a free-viewing environment.

      2d. One important confound between clear and uncertain (blurred) visual conditions is the number of cursors on the screen. The number of cursors may have an attenuating effect on implicit adaptation simply due to task-irrelevant attentional demands (Parvin et al. 2022), rather than that of visual uncertainty. Could the authors provide a figure showing these blurred stimuli (gaussian clouds) in the context of the experimental paradigm? Note that we addressed this confound in the past by comparing participants with and without low vision, where only one visual cursor is provided for both groups (Tsay, Tan, et al. 2023).

      Issue #3: More methodological details are needed.

      3a. It's unclear why, in Figure 4, PEA predicts an overshoot in terms of perceived hand position from the target. In PReMo, we specified a visual shift in the perceived target position, shifted towards the adapted hand position, which may result in overshooting of the perceived hand position with this target position. This visual shift phenomenon has been discovered in previous studies (e.g., (Simani, McGuire, and Sabes 2007)).

      3b. The extent of implicit adaptation in Experiment 2, especially with smaller errors, is unclear. The implicit adaptation function seems to be still increasing, at least by visual inspection. Can the authors comment on this trend, and relatedly, show individual data points that help the reader appreciate the variability inherent to these data?

      3c. The same participants were asked to return for multiple days/experiments. Given that the authors acknowledge potential session effects, with attenuation upon re-exposure to the same rotation (Avraham et al. 2021), how does re-exposure affect the current results? Could the authors provide clarity, perhaps a table, to show shared participants between experiments and provide evidence showing how session order may not be impacting results?

      3d. The number of trials per experiment should be detailed more clearly in the Methods section (e.g., Exp 4). Moreover, could the authors please provide relevant code on how they implemented their computational models? This would aid in future implementation of these models in future work. I, for one, am enthusiastic to build on PEA.

      3f. In addition to predicting a correlation between proprioceptive shift and implicit adaptation on a group level, both PReMo and PEA (but not causal inference) predict a correlation between individual differences in proprioceptive shift and proprioceptive uncertainty with the extent of implicit adaptation (Tsay, Kim, et al. 2021). Interestingly, shift and uncertainty are independent (see Figures 4F and 6C in Tsay et al, 2021). Does PEA also predict independence between shift and uncertainty? It seems like PEA does predict a correlation.

      References:

      Avraham, Guy, Ryan Morehead, Hyosub E. Kim, and Richard B. Ivry. 2021. "Reexposure to a Sensorimotor Perturbation Produces Opposite Effects on Explicit and Implicit Learning Processes." PLoS Biology 19 (3): e3001147.

      Makino, Yuto, Takuji Hayashi, and Daichi Nozaki. n.d. "Divisively Normalized Neuronal Processing of Uncertain Visual Feedback for Visuomotor Learning."

      Parvin, Darius E., Kristy V. Dang, Alissa R. Stover, Richard B. Ivry, and J. Ryan Morehead. 2022. "Implicit Adaptation Is Modulated by the Relevance of Feedback." BioRxiv. https://doi.org/10.1101/2022.01.19.476924.

      Shyr, Megan C., and Sanjay S. Joshi. 2023. "A Case Study of the Validity of Web-Based Visuomotor Rotation Experiments." Journal of Cognitive Neuroscience, October, 1-24.

      Simani, M. C., L. M. M. McGuire, and P. N. Sabes. 2007. "Visual-Shift Adaptation Is Composed of Separable Sensory and Task-Dependent Effects." Journal of Neurophysiology 98 (5): 2827-41.

      Tsay, Jonathan S., Guy Avraham, Hyosub E. Kim, Darius E. Parvin, Zixuan Wang, and Richard B. Ivry. 2021. "The Effect of Visual Uncertainty on Implicit Motor Adaptation." Journal of Neurophysiology 125 (1): 12-22.

      Tsay, Jonathan S., Anisha M. Chandy, Romeo Chua, R. Chris Miall, Jonathan Cole, Alessandro Farnè, Richard B. Ivry, and Fabrice R. Sarlegna. 2023. "Implicit Motor Adaptation and Perceived Hand Position without Proprioception: A Kinesthetic Error May Be Derived from Efferent Signals." BioRxiv. https://doi.org/10.1101/2023.01.19.524726.

      Tsay, Jonathan S., Hyosub E. Kim, Darius E. Parvin, Alissa R. Stover, and Richard B. Ivry. 2021. "Individual Differences in Proprioception Predict the Extent of Implicit Sensorimotor Adaptation." Journal of Neurophysiology, March. https://doi.org/10.1152/jn.00585.2020.

      Tsay, Jonathan S., Steven Tan, Marlena Chu, Richard B. Ivry, and Emily A. Cooper. 2023. "Low Vision Impairs Implicit Sensorimotor Adaptation in Response to Small Errors, but Not Large Errors." Journal of Cognitive Neuroscience, January, 1-13.

    1. the syntax of the languages were more suited to NLS linkage conventions, and the files themselves were better structured, we might again reach a point of feeling that the structure is well understood, and the effect of changes in code can be properly predicted

      NLS linkage conventions

      for - hyper link - hyperlink - what's in a link? - keep the links outside the file

      link

      the original hyperlink that was active and allowed dynamic exploration of linked context with appropriate viewspecs

      Now in Plex or TrailMarks, the identity of a high-resolution addressable content (the address is derived from the plexmark/trailmark being present in an outline,

      whether it is in its own document or a clueon or plexon, or a clue structure or plex structure outline within the context of a plexon/clueon page virtual document)

      is not coupled with a viewspec, but the very identity of a document carries marks that indicate - intentional - semantic - symmathetic - self-describing

      qualifier marks that in turn are amenable to interpretation by humans and machines, such that "viewspecs" or rather "morphic interpretations" can be associated with them

    2. do away with listing by organizing the documentation and code into an easily accessible, monolithic, structured NLS file.

      because of the immanent structuring, no linear listings were ever used in Meta-Lisp in practice; navigation was dynamic,

      although a complete listing via depth-first traversal was always possible and told the complete story

    3. If the SPL compiler is in Tree Meta, the parse rules will contain only parse information and node-building directions. This should make them much more readable, a feature always wanted by those who try to figure out commands of NLS by reading the code in the SPLs

      parse rule

      only contain parse information and node-building

      just like MetaLISP!

    4. figure out what a piece of code is "supposed" to do

      figure out supposed to do

    1. Parameters:

       - id «Object|Number|String» value of _id to query by
       - [options] «Object» optional see Query.prototype.setOptions()
       - [options.strict] «Boolean|String» overwrites the schema's strict mode option
       - [options.translateAliases=null] «Boolean» If set to true, translates any schema-defined aliases in filter, projection, update, and distinct. Throws an error if there are any conflicts where both alias and raw property are defined on the same object.

       Returns: «Query»

       See: Model.findOneAndDelete, mongodb

       Issue a MongoDB findOneAndDelete() command by a document's _id field. In other words, findByIdAndDelete(id) is a shorthand for findOneAndDelete({ _id: id }). This function triggers the following middleware: findOneAndDelete()

      Certainly! Let's go through the possible outcomes and what the function would return in each case:

      Case 1: Document Found and Deleted

      If the document with the specified _id is found and successfully deleted, the function will return the deleted document. In the example:

      ```javascript
      const mongoose = require('mongoose');
      const User = mongoose.model('User');

      const userIdToDelete = "5f4fbf7d94bf551063d84924";

      User.findByIdAndDelete(userIdToDelete)
        .then(deletedUser => {
          if (deletedUser) {
            console.log(`User with _id ${userIdToDelete} deleted successfully.`);
            console.log("Deleted User:", deletedUser);
          } else {
            console.log(`No user found with _id ${userIdToDelete}.`);
          }
        })
        .catch(error => {
          console.error(`Error deleting user: ${error}`);
        });
      ```

      If the user with _id "5f4fbf7d94bf551063d84924" exists, the output might look like:

      ```
      User with _id 5f4fbf7d94bf551063d84924 deleted successfully.
      Deleted User: {
        "_id": ObjectId("5f4fbf7d94bf551063d84924"),
        "name": "John Doe",
        "age": 30,
        "email": "john@example.com"
      }
      ```

      Case 2: Document Not Found

      If no document with the specified _id is found, the function will return null. In the example:

      ```javascript
      const mongoose = require('mongoose');
      const User = mongoose.model('User');

      const userIdToDelete = "nonexistent_id";

      User.findByIdAndDelete(userIdToDelete)
        .then(deletedUser => {
          if (deletedUser) {
            console.log(`User with _id ${userIdToDelete} deleted successfully.`);
            console.log("Deleted User:", deletedUser);
          } else {
            console.log(`No user found with _id ${userIdToDelete}.`);
          }
        })
        .catch(error => {
          console.error(`Error deleting user: ${error}`);
        });
      ```

      If there's no user with the specified _id, the output might look like:

      No user found with _id nonexistent_id.

      Case 3: Error Occurs

      If an error occurs during the operation (e.g., database connection issues), the promise is rejected and the .catch block is executed to handle the error. The code is the same as in Case 1.

      If there's an error, the output might look like:

      Error deleting user: [Error message describing the issue]

      Certainly! Let's break down the provided information into simpler terms with an example:

      Purpose:

      The goal of this function is to find a document in a MongoDB database using its _id field and then delete that document.

      Parameters:

      • id: This is the unique identifier of the document you want to find and delete. It can be an object, number, or string representing the _id value.
      • options: An optional object that allows you to customize the behavior of the operation.

      Returns:

      The function returns a Query object, which is a way to interact with MongoDB queries.
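
      Because the Query returned by findByIdAndDelete() is thenable, it can also be consumed with async/await. A minimal sketch, assuming the same hypothetical User model used in the examples below:

      ```javascript
      const mongoose = require('mongoose');
      const User = mongoose.model('User'); // hypothetical model, as in the examples

      async function deleteUserById(id) {
        // Awaiting the query executes it; it resolves to the deleted document, or null.
        const deletedUser = await User.findByIdAndDelete(id);
        return deletedUser;
      }
      ```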

      Example:

      Suppose you have a MongoDB collection named "users" with documents like this:

      json { "_id": ObjectId("5f4fbf7d94bf551063d84924"), "name": "John Doe", "age": 30, "email": "john@example.com" }

      Now, if you want to find and delete the user with the _id "5f4fbf7d94bf551063d84924," you can use the function like this:

      ```javascript
      const mongoose = require('mongoose');
      const User = mongoose.model('User'); // Assume you have a model named 'User'

      const userIdToDelete = "5f4fbf7d94bf551063d84924";

      // Using findByIdAndDelete
      User.findByIdAndDelete(userIdToDelete)
        .then(deletedUser => {
          if (deletedUser) {
            console.log(`User with _id ${userIdToDelete} deleted successfully.`);
          } else {
            console.log(`No user found with _id ${userIdToDelete}.`);
          }
        })
        .catch(error => {
          console.error(`Error deleting user: ${error}`);
        });
      ```

      This code finds the user with the specified _id and deletes it using the findByIdAndDelete function. The function returns a promise, so we use .then to handle success and .catch to handle errors. The deletedUser variable contains the deleted user document if found, or it is null if no user was found with the specified _id.

    1. Resetting state with a key

      You'll often encounter the key attribute when rendering lists. However, it also serves another purpose. You can reset a component's state by passing a different key to a component. In this example, the Reset button changes the version state variable, which we pass as a key to the Form. When the key changes, React re-creates the Form component (and all of its children) from scratch, so its state gets reset. Read preserving and resetting state to learn more.

      ```jsx
      import { useState } from 'react';

      export default function App() {
        const [version, setVersion] = useState(0);

        function handleReset() {
          setVersion(version + 1);
        }

        return (
          <>
            <button onClick={handleReset}>Reset</button>
            <Form key={version} />
          </>
        );
      }

      function Form() {
        const [name, setName] = useState('Taylor');

        return (
          <>
            <input
              value={name}
              onChange={e => setName(e.target.value)}
            />
            <p>Hello, {name}.</p>
          </>
        );
      }
      ```

      Storing information from previous renders

      Usually, you will update state in event handlers. However, in rare cases you might want to adjust state in response to rendering; for example, you might want to change a state variable when a prop changes. In most cases, you don't need this:

      • If the value you need can be computed entirely from the current props or other state, remove that redundant state altogether. If you're worried about recomputing too often, the useMemo Hook can help.
      • If you want to reset the entire component tree's state, pass a different key to your component.
      • If you can, update all the relevant state in the event handlers.

      In the rare case that none of these apply, there is a pattern you can use to update state based on the values that have been rendered so far, by calling a set function while your component is rendering. Here's an example. This CountLabel component displays the count prop passed to it:

      ```jsx
      export default function CountLabel({ count }) {
        return <h1>{count}</h1>;
      }
      ```

      Say you want to show whether the counter has increased or decreased since the last change. The count prop doesn't tell you this; you need to keep track of its previous value. Add the prevCount state variable to track it. Add another state variable called trend to hold whether the count has increased or decreased. Compare prevCount with count, and if they're not equal, update both prevCount and trend. Now you can show both the current count prop and how it has changed since the last render.

      ```jsx
      import { useState } from 'react';

      export default function CountLabel({ count }) {
        const [prevCount, setPrevCount] = useState(count);
        const [trend, setTrend] = useState(null);

        if (prevCount !== count) {
          setPrevCount(count);
          setTrend(count > prevCount ? 'increasing' : 'decreasing');
        }

        return (
          <>
            <h1>{count}</h1>
            {trend && <p>The count is {trend}</p>}
          </>
        );
      }
      ```

      Note that if you call a set function while rendering, it must be inside a condition like prevCount !== count, and there must be a call like setPrevCount(count) inside of the condition. Otherwise, your component would re-render in a loop until it crashes. Also, you can only update the state of the currently rendering component like this. Calling the set function of another component during rendering is an error. Finally, your set call should still update state without mutation; this doesn't mean you can break other rules of pure functions.

      This pattern can be hard to understand and is usually best avoided. However, it's better than updating state in an effect. When you call the set function during render, React will re-render that component immediately after your component exits with a return statement, and before rendering the children. This way, children don't need to render twice. The rest of your component function will still execute (and the result will be thrown away). If your condition is below all the Hook calls, you may add an early return; to restart rendering earlier.

      Troubleshooting

      I've updated the state, but logging gives me the old value

      Calling the set function does not change state in the running code:

      ```js
      function handleClick() {
        console.log(count);  // 0
        setCount(count + 1); // Request a re-render with 1
        console.log(count);  // Still 0!

        setTimeout(() => {
          console.log(count); // Also 0!
        }, 5000);
      }
      ```

      This is because state behaves like a snapshot. Updating state requests another render with the new state value, but does not affect the count JavaScript variable in your already-running event handler. If you need to use the next state, you can save it in a variable before passing it to the set function:

      ```js
      const nextCount = count + 1;
      setCount(nextCount);

      console.log(count);     // 0
      console.log(nextCount); // 1
      ```

      I've updated the state, but the screen doesn't update

      React will ignore your update if the next state is equal to the previous state, as determined by an Object.is comparison. This usually happens when you change an object or an array in state directly:

      ```js
      obj.x = 10;  // 🚩 Wrong: mutating existing object
      setObj(obj); // 🚩 Doesn't do anything
      ```

      You mutated an existing obj object and passed it back to setObj, so React ignored the update. To fix this, you need to ensure that you're always replacing objects and arrays in state instead of mutating them:

      ```js
      // ✅ Correct: creating a new object
      setObj({ ...obj, x: 10 });
      ```

      I'm getting an error: "Too many re-renders"

      You might get an error that says: Too many re-renders. React limits the number of renders to prevent an infinite loop. Typically, this means that you're unconditionally setting state during render, so your component enters a loop: render, set state (which causes a render), render, set state (which causes a render), and so on. Very often, this is caused by a mistake in specifying an event handler:

      ```jsx
      // 🚩 Wrong: calls the handler during render
      return <button onClick={handleClick()}>Click me</button>

      // ✅ Correct: passes down the event handler
      return <button onClick={handleClick}>Click me</button>

      // ✅ Correct: passes down an inline function
      return <button onClick={(e) => handleClick(e)}>Click me</button>
      ```

      If you can't find the cause of this error, click on the arrow next to the error in the console and look through the JavaScript stack to find the specific set function call responsible for the error.

      My initializer or updater function runs twice

      In Strict Mode, React will call some of your functions twice instead of once:

      ```js
      function TodoList() {
        // This component function will run twice for every render.

        const [todos, setTodos] = useState(() => {
          // This initializer function will run twice during initialization.
          return createTodos();
        });

        function handleClick() {
          setTodos(prevTodos => {
            // This updater function will run twice for every click.
            return [...prevTodos, createTodo()];
          });
        }
        // ...
      ```

      This is expected and shouldn't break your code. This development-only behavior helps you keep components pure. React uses the result of one of the calls, and ignores the result of the other call. As long as your component, initializer, and updater functions are pure, this shouldn't affect your logic. However, if they are accidentally impure, this helps you notice the mistakes. For example, this impure updater function mutates an array in state:

      ```js
      setTodos(prevTodos => {
        // 🚩 Mistake: mutating state
        prevTodos.push(createTodo());
      });
      ```

      Because React calls your updater function twice, you'll see the todo was added twice, so you'll know that there is a mistake. In this example, you can fix the mistake by replacing the array instead of mutating it:

      ```js
      setTodos(prevTodos => {
        // ✅ Correct: replacing with new state
        return [...prevTodos, createTodo()];
      });
      ```

      Now that this updater function is pure, calling it an extra time doesn't make a difference in behavior. This is why React calling it twice helps you find mistakes. Only component, initializer, and updater functions need to be pure. Event handlers don't need to be pure, so React will never call your event handlers twice. Read keeping components pure to learn more.

      I'm trying to set state to a function, but it gets called instead

      You can't put a function into state like this:

      ```js
      const [fn, setFn] = useState(someFunction);

      function handleClick() {
        setFn(someOtherFunction);
      }
      ```

      Because you're passing a function, React assumes that someFunction is an initializer function, and that someOtherFunction is an updater function, so it tries to call them and store the result. To actually store a function, you have to put () => before them in both cases. Then React will store the functions you pass.

      ```js
      const [fn, setFn] = useState(() => someFunction);

      function handleClick() {
        setFn(() => someOtherFunction);
      }
      ```

      Certainly! Let's break down the explanation with examples in simple words:

      Resetting State with a Key:

      In React, you can reset a component's state by changing its key. This is demonstrated in the example:

      ```jsx
      import { useState } from 'react';

      export default function App() {
        const [version, setVersion] = useState(0);

        function handleReset() {
          setVersion(version + 1);
        }

        return (
          <>
            <button onClick={handleReset}>Reset</button>
            {/* Form component with a key based on the version state */}
            <Form key={version} />
          </>
        );
      }

      function Form() {
        const [name, setName] = useState('Taylor');

        return (
          <>
            <input value={name} onChange={e => setName(e.target.value)} />
            <p>Hello, {name}.</p>
          </>
        );
      }
      ```

      In this example, the Form component's key is tied to the version state. When you click the "Reset" button, it increments the version, causing React to recreate the Form component from scratch. This results in the state of the Form being reset.

      Storing Information from Previous Renders:

      Sometimes, you might want to adjust state in response to rendering. The example introduces the CountLabel component:

      ```jsx
      import { useState } from 'react';

      export default function CountLabel({ count }) {
        const [prevCount, setPrevCount] = useState(count);
        const [trend, setTrend] = useState(null);

        if (prevCount !== count) {
          setPrevCount(count);
          setTrend(count > prevCount ? 'increasing' : 'decreasing');
        }

        return (
          <>
            <h1>{count}</h1>
            {trend && <p>The count is {trend}</p>}
          </>
        );
      }
      ```

      Here, the CountLabel component tracks the previous count and determines the trend (increasing or decreasing) when the count prop changes during rendering.

      Troubleshooting Tips:

      The examples provide troubleshooting tips for common issues, such as avoiding infinite re-renders, handling state updates, and understanding the behavior of initializer and updater functions.

      Summary:

      • Changing a component's key can reset its state.
      • Storing information from previous renders may require additional state variables.
      • Troubleshooting tips help address common issues related to state updates.

      These concepts are essential for effective state management in React components.

    2. Is using an updater always preferred?

      You might hear a recommendation to always write code like setAge(a => a + 1) if the state you're setting is calculated from the previous state. There is no harm in it, but it is also not always necessary.

      In most cases, there is no difference between these two approaches. React always makes sure that for intentional user actions, like clicks, the age state variable would be updated before the next click. This means there is no risk of a click handler seeing a "stale" age at the beginning of the event handler.

      However, if you do multiple updates within the same event, updaters can be helpful. They're also helpful if accessing the state variable itself is inconvenient (you might run into this when optimizing re-renders).

      If you prefer consistency over slightly more verbose syntax, it's reasonable to always write an updater if the state you're setting is calculated from the previous state. If it's calculated from the previous state of some other state variable, you might want to combine them into one object and use a reducer.

      The difference between passing an updater and passing the next state directly

      Example 1 of 2: Passing the updater function. This example passes the updater function, so the "+3" button works.

      ```jsx
      import { useState } from 'react';

      export default function Counter() {
        const [age, setAge] = useState(42);

        function increment() {
          setAge(a => a + 1);
        }

        return (
          <>
            <h1>Your age: {age}</h1>
            <button onClick={() => {
              increment();
              increment();
              increment();
            }}>+3</button>
            <button onClick={() => {
              increment();
            }}>+1</button>
          </>
        );
      }
      ```

      Updating objects and arrays in state

      You can put objects and arrays into state. In React, state is considered read-only, so you should replace it rather than mutate your existing objects. For example, if you have a form object in state, don't mutate it:

      ```js
      // 🚩 Don't mutate an object in state like this:
      form.firstName = 'Taylor';
      ```

      Instead, replace the whole object by creating a new one:

      ```js
      // ✅ Replace state with a new object
      setForm({
        ...form,
        firstName: 'Taylor'
      });
      ```

      Read updating objects in state and updating arrays in state to learn more.

      Example 1 of 4: Form (object). In this example, the form state variable holds an object. Each input has a change handler that calls setForm with the next state of the entire form. The { ...form } spread syntax ensures that the state object is replaced rather than mutated.

      ```jsx
      import { useState } from 'react';

      export default function Form() {
        const [form, setForm] = useState({
          firstName: 'Barbara',
          lastName: 'Hepworth',
          email: 'bhepworth@sculpture.com',
        });

        return (
          <>
            <label>
              First name:
              <input
                value={form.firstName}
                onChange={e => {
                  setForm({ ...form, firstName: e.target.value });
                }}
              />
            </label>
            <label>
              Last name:
              <input
                value={form.lastName}
                onChange={e => {
                  setForm({ ...form, lastName: e.target.value });
                }}
              />
            </label>
            <label>
              Email:
              <input
                value={form.email}
                onChange={e => {
                  setForm({ ...form, email: e.target.value });
                }}
              />
            </label>
            <p>
              {form.firstName}{' '}
              {form.lastName}{' '}
              ({form.email})
            </p>
          </>
        );
      }
      ```

      Certainly! Let's explore the difference between passing an updater function and passing the next state directly in simple words with examples:

      Example 1: Passing the Updater Function

      ```jsx
      import { useState } from 'react';

      export default function Counter() {
        const [age, setAge] = useState(42);

        function increment() {
          setAge(a => a + 1);
        }

        return (
          <>
            <h1>Your age: {age}</h1>
            <button onClick={() => {
              increment();
              increment();
              increment();
            }}>+3</button>
            <button onClick={() => {
              increment();
            }}>+1</button>
          </>
        );
      }
      ```

      In this example:

      • The increment function uses an updater function to increase the age by 1.
      • The "+3" button calls increment three times. Because of the updater function, each call gets the latest state, resulting in the age being incremented by 3.

      Example 2: Passing the Next State Directly

      ```jsx
      import { useState } from 'react';

      export default function Counter() {
        const [age, setAge] = useState(42);

        function increment() {
          setAge(age + 1);
        }

        return (
          <>
            <h1>Your age: {age}</h1>
            <button onClick={() => {
              increment();
              increment();
              increment();
            }}>+3</button>
            <button onClick={() => {
              increment();
            }}>+1</button>
          </>
        );
      }
      ```

      In this example:

      • The increment function directly passes the next state (age + 1) to setAge.
      • The "+3" button calls increment three times. However, because the state updates are asynchronous, the value of age may not be what you expect due to the batching of updates.

      Why Use Updater Functions?

      Using updater functions is often preferred, especially when you need to calculate the next state based on the current state. It ensures that each call gets the most recent state, even if there are multiple state updates in quick succession.
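
      For instance, when a single event queues several different kinds of updates, updater functions let each step build on the result of the previous one. A small hypothetical sketch (not one of the quoted examples):

      ```jsx
      function handleClick() {
        setAge(a => a + 1); // 42 => 43
        setAge(a => a * 2); // 43 => 86: this updater sees the previous updater's result
      }
      ```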

      Example 3: Updating an Object in State

      ```jsx
      import { useState } from 'react';

      export default function Form() {
        const [form, setForm] = useState({
          firstName: 'Barbara',
          lastName: 'Hepworth',
          email: 'bhepworth@sculpture.com',
        });

        return (
          <>
            <label>
              First name:
              <input
                value={form.firstName}
                onChange={e => {
                  setForm({ ...form, firstName: e.target.value });
                }}
              />
            </label>
            <label>
              Last name:
              <input
                value={form.lastName}
                onChange={e => {
                  setForm({ ...form, lastName: e.target.value });
                }}
              />
            </label>
            <label>
              Email:
              <input
                value={form.email}
                onChange={e => {
                  setForm({ ...form, email: e.target.value });
                }}
              />
            </label>
            <p>
              {form.firstName}{' '}
              {form.lastName}{' '}
              ({form.email})
            </p>
          </>
        );
      }
      ```

      In this example, we use the spread syntax to replace the form object in state with a new one instead of mutating it.

      Simple Explanation:

      • Updater Function: Use it when you want to ensure that the state you're working with is the most recent one, especially in scenarios with quick and multiple updates.

      • Directly Passing Next State: Every call in the same event reads the same rendered value, so repeated updates can overwrite each other; this approach is best when the next state doesn't depend on the previous state.

      In summary, while both approaches work, using updater functions is often preferred when the next state depends on the current state, providing a more reliable and expected behavior in your React components.

    3. Updating state based on the previous state

      Suppose the age is 42. This handler calls setAge(age + 1) three times:

      ```js
      function handleClick() {
        setAge(age + 1); // setAge(42 + 1)
        setAge(age + 1); // setAge(42 + 1)
        setAge(age + 1); // setAge(42 + 1)
      }
      ```

      However, after one click, age will only be 43 rather than 45! This is because calling the set function does not update the age state variable in the already running code. So each setAge(age + 1) call becomes setAge(43). To solve this problem, you may pass an updater function to setAge instead of the next state:

      ```js
      function handleClick() {
        setAge(a => a + 1); // setAge(42 => 43)
        setAge(a => a + 1); // setAge(43 => 44)
        setAge(a => a + 1); // setAge(44 => 45)
      }
      ```

      Here, a => a + 1 is your updater function. It takes the pending state and calculates the next state from it. React puts your updater functions in a queue. Then, during the next render, it will call them in the same order:

      • a => a + 1 will receive 42 as the pending state and return 43 as the next state.
      • a => a + 1 will receive 43 as the pending state and return 44 as the next state.
      • a => a + 1 will receive 44 as the pending state and return 45 as the next state.

      There are no other queued updates, so React will store 45 as the current state in the end. By convention, it's common to name the pending state argument for the first letter of the state variable name, like a for age. However, you may also call it like prevAge or something else that you find clearer. React may call your updaters twice in development to verify that they are pure.

      Certainly! Let's dive into this concept with more detail and simple language:

      Updating State Based on Previous State:

      In React, when you want to update state based on its current value, you need to be mindful that the set function does not change the state variable in code that is already running. If you call the set function several times in the same synchronous block using the plain state variable, every call reads the same old value.

      Example of the Issue:

      Consider the following code:

      ```jsx
      import { useState } from 'react';

      function Counter() {
        const [age, setAge] = useState(42);

        function handleClick() {
          setAge(age + 1); // age is 42, setAge(42 + 1)
          setAge(age + 1); // age is still 42, setAge(42 + 1)
          setAge(age + 1); // age is still 42, setAge(42 + 1)
        }

        return (
          <div>
            <p>Age: {age}</p>
            <button onClick={handleClick}>Increment Age</button>
          </div>
        );
      }
      ```

      Here, even though setAge is called three times in a row, age remains 42 in every call: the set function does not change the age variable in the already-running handler, so each call queues setAge(43).

      Solving the Problem with Updater Function:

      To ensure that you're working with the most recent state, you can pass an updater function to setAge instead of the next state directly:

      ```jsx
      import { useState } from 'react';

      function Counter() {
        const [age, setAge] = useState(42);

        function handleClick() {
          setAge((prevAge) => prevAge + 1); // prevAge is 42, setAge(42 => 43)
          setAge((prevAge) => prevAge + 1); // prevAge is now 43, setAge(43 => 44)
          setAge((prevAge) => prevAge + 1); // prevAge is now 44, setAge(44 => 45)
        }

        return (
          <div>
            <p>Age: {age}</p>
            <button onClick={handleClick}>Increment Age</button>
          </div>
        );
      }
      ```

      Now, by using an updater function with setAge, React ensures that each call gets the most up-to-date state (prevAge), and you get the correct sequence of state updates.

      How Updater Function Works:

      • The updater function, in this case, takes the previous state (prevAge) and calculates the next state by adding 1.
      • React puts these updater functions in a queue.
      • During the next render, React calls each updater function in the order they were queued, ensuring the correct sequence of state updates.

      By convention, the argument to the updater function is often named after the state variable, like prevAge. This makes it clear that it represents the previous state of the age variable.
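
      Conceptually, the queue can be modeled as a simple reduce over the pending updater functions. This is only an illustrative sketch of the idea, not React's actual implementation:

      ```javascript
      // Toy model: each queued updater maps the pending state to the next state.
      const queuedUpdaters = [a => a + 1, a => a + 1, a => a + 1];

      // During the next render, React applies them in order.
      const finalAge = queuedUpdaters.reduce(
        (pendingState, updater) => updater(pendingState),
        42
      );

      console.log(finalAge); // 45
      ```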

      Strict Mode Warnings:

      In development mode, React might call your updater functions twice to check for accidental impurities. However, this doesn't affect the production behavior.

      In simpler terms, using updater functions ensures that you work with the latest pending state when updating based on the previous state, preventing issues caused by state behaving as a fixed snapshot within each render.

      Certainly! Let's break down the concept of batching state updates in React in simple words with an example:

      Batching State Updates in React:

      When you make multiple state updates in a React component, React doesn't immediately apply each update. Instead, it batches them together for efficiency. This means that several state updates are grouped and processed together to avoid unnecessary renders.

      Example - Without Batching:

      Consider this example without batching:

      ```jsx
      import { useState } from 'react';

      function Counter() {
        const [count, setCount] = useState(0);

        function handleClick() {
          setCount(count + 1); // Update 1
          setCount(count + 1); // Update 2
          setCount(count + 1); // Update 3
        }

        return (
          <div>
            <p>Count: {count}</p>
            <button onClick={handleClick}>Increment Count</button>
          </div>
        );
      }
      ```

      In this example:

      • Initially, count is 0.
      • When you click the button, three consecutive state updates are triggered.

      Without batching, React would re-render after every single call, doing three renders for one click. Note that all three calls still read the same count value of 0, because count is fixed within the running handler.

      Example - With Batching:

      React batches these updates together, ensuring they are processed in a more efficient way. Here's how React would handle it:

      ```jsx
      import { useState } from 'react';

      function Counter() {
        const [count, setCount] = useState(0);

        function handleClick() {
          // React batches these updates
          setCount(count + 1); // Update 1
          setCount(count + 1); // Update 2
          setCount(count + 1); // Update 3
          // React processes them together, avoiding unnecessary renders
        }

        return (
          <div>
            <p>Count: {count}</p>
            <button onClick={handleClick}>Increment Count</button>
          </div>
        );
      }
      ```

      With batching, React processes these state updates together. It calculates the new state based on the current state at the time of the batch, ensuring that the updates are applied more efficiently.

      Simple Explanation:

      Batching is like collecting multiple change requests and applying them all at once, rather than one by one. It's similar to a waiter taking multiple orders from a table and bringing all the dishes together to save time.

      In the context of React, batching state updates helps prevent unnecessary renders and makes the application more efficient.

      In summary, React batches state updates to optimize performance by processing multiple updates together, resulting in a more responsive and efficient user interface.
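
      To see concretely why three direct setCount(count + 1) calls in one batch yield 1 rather than 3, here is a toy model of the idea (not React's real internals):

      ```javascript
      let count = 0;      // value the running handler reads
      const queue = [];
      const setCount = next => queue.push(next);

      // The handler queues three updates, all computed from the same count (0):
      setCount(count + 1); // queues 1
      setCount(count + 1); // queues 1
      setCount(count + 1); // queues 1

      // One batched "render" applies the queue; the last queued value wins.
      count = queue[queue.length - 1];
      console.log(count); // 1, not 3
      ```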

    4. Caveats

      useState is a Hook, so you can only call it at the top level of your component or your own Hooks. You can't call it inside loops or conditions. If you need that, extract a new component and move the state into it.

      In Strict Mode, React will call your initializer function twice in order to help you find accidental impurities. This is development-only behavior and does not affect production. If your initializer function is pure (as it should be), this should not affect the behavior. The result from one of the calls will be ignored.

      set functions, like setSomething(nextState)

      The set function returned by useState lets you update the state to a different value and trigger a re-render. You can pass the next state directly, or a function that calculates it from the previous state:

      ```js
      const [name, setName] = useState('Edward');

      function handleClick() {
        setName('Taylor');
        setAge(a => a + 1);
        // ...
      }
      ```

      Parameters

      • nextState: The value that you want the state to be. It can be a value of any type, but there is a special behavior for functions. If you pass a function as nextState, it will be treated as an updater function. It must be pure, should take the pending state as its only argument, and should return the next state. React will put your updater function in a queue and re-render your component. During the next render, React will calculate the next state by applying all of the queued updaters to the previous state. See an example below.

      Returns

      set functions do not have a return value.

      Caveats

      • The set function only updates the state variable for the next render. If you read the state variable after calling the set function, you will still get the old value that was on the screen before your call.
      • If the new value you provide is identical to the current state, as determined by an Object.is comparison, React will skip re-rendering the component and its children. This is an optimization. Although in some cases React may still need to call your component before skipping the children, it shouldn't affect your code.
      • React batches state updates. It updates the screen after all the event handlers have run and have called their set functions. This prevents multiple re-renders during a single event. In the rare case that you need to force React to update the screen earlier, for example to access the DOM, you can use flushSync.
      • Calling the set function during rendering is only allowed from within the currently rendering component. React will discard its output and immediately attempt to render it again with the new state. This pattern is rarely needed, but you can use it to store information from the previous renders. See an example below.
      • In Strict Mode, React will call your updater function twice in order to help you find accidental impurities. This is development-only behavior and does not affect production. If your updater function is pure (as it should be), this should not affect the behavior. The result from one of the calls will be ignored.

      Usage

      Adding state to a component

      Call useState at the top level of your component to declare one or more state variables.

      ```js
      import { useState } from 'react';

      function MyComponent() {
        const [age, setAge] = useState(42);
        const [name, setName] = useState('Taylor');
        // ...
      }
      ```

      The convention is to name state variables like [something, setSomething] using array destructuring. useState returns an array with exactly two items:

      • The current state of this state variable, initially set to the initial state you provided.
      • The set function that lets you change it to any other value in response to interaction.

      To update what's on the screen, call the set function with some next state:

      ```js
      function handleClick() {
        setName('Robin');
      }
      ```

      React will store the next state, render your component again with the new values, and update the UI.

      Pitfall: Calling the set function does not change the current state in the already executing code:

      ```js
      function handleClick() {
        setName('Robin');
        console.log(name); // Still "Taylor"!
      }
      ```

      It only affects what useState will return starting from the next render.

      Absolutely, let's simplify the concepts with examples:

      1. Declaring State with useState:

      When using useState in React, you declare state variables at the top level of your component. The convention is to name state variables using array destructuring, like [something, setSomething].

      ```jsx
      import { useState } from 'react';

      function MyComponent() {
        // Declaring state variables: age and name
        const [age, setAge] = useState(42);
        const [name, setName] = useState('Taylor');
        // ...
      }
      ```

      Here, age and name are state variables initialized with default values, and setAge and setName are the corresponding functions to update these state variables.

      2. Updating State with set Functions:

      To update the state and trigger a re-render, you use the set functions:

      ```jsx
      function handleClick() {
        // Updating the state variables
        setName('Robin');
        // Using a function to calculate the next state based on the previous state
        setAge((prevAge) => prevAge + 1);
      }
      ```

      • setName('Robin'): Updates the name state variable directly.
      • setAge((prevAge) => prevAge + 1): Uses a function to calculate the next state of age based on the previous state.

      3. Pitfall: Asynchronous Nature:

      One pitfall to be aware of is that calling the set function doesn't immediately change the state in the current code execution:

      ```jsx
      function handleClick() {
        setName('Robin');
        console.log(name); // Still "Taylor"!
      }
      ```

      Here, the console.log will still output the old value of name. The state update takes effect on the next render, not immediately.
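
      If you need the next value right away, the usual fix is to compute it in a local variable before calling the set function, for example:

      ```jsx
      function handleClick() {
        const nextName = 'Robin';
        setName(nextName);
        console.log(nextName); // "Robin": read the local variable, not the state
      }
      ```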

      4. Avoiding Unnecessary Renders:

      React has optimizations in place to avoid unnecessary renders. If the new state provided to the set function is identical to the current state, React skips re-rendering:

      ```jsx
      setName('Taylor'); // No re-render because the new value is the same as the current value
      ```

      This is an optimization to make React more efficient.
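
      This is also why mutating an object in state appears to do nothing: the mutated object is still the same reference, so the Object.is check sees no change. A small sketch, assuming an obj state variable with a setObj set function:

      ```jsx
      // 🚩 Mutation keeps the same reference, so React skips the re-render:
      obj.x = 10;
      setObj(obj); // Object.is(obj, obj) === true

      // ✅ Replacing the object changes the reference and triggers a re-render:
      setObj({ ...obj, x: 10 });
      ```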

      5. Special Behavior for Functions:

      If you pass a function to a set function, it's treated as an updater function and is called during the next render:

      ```jsx
      setAge((prevAge) => prevAge + 1);
      ```

      Here, the function receives the previous state (prevAge) and returns the next state, allowing more dynamic state updates.

      6. Strict Mode Warnings:

      In Strict Mode, React might call your updater function twice during development to help identify accidental impurities. However, this doesn't affect production.

      In simple terms, useState is a powerful hook that allows functional components to have and manage state, enabling dynamic and interactive user interfaces in React.

    1. How props change over time

      The Clock component below receives two props from its parent component: color and time. (The parent component's code is omitted because it uses state, which we won't dive into just yet.) Try changing the color in the select box below:

      ```jsx
      export default function Clock({ color, time }) {
        return (
          <h1 style={{ color: color }}>
            {time}
          </h1>
        );
      }
      ```

      This example illustrates that a component may receive different props over time. Props are not always static! Here, the time prop changes every second, and the color prop changes when you select another color. Props reflect a component's data at any point in time, rather than only in the beginning.

      However, props are immutable, a term from computer science meaning "unchangeable". When a component needs to change its props (for example, in response to a user interaction or new data), it will have to "ask" its parent component to pass it different props, a new object! Its old props will then be cast aside, and eventually the JavaScript engine will reclaim the memory taken by them.

      Don't try to "change props". When you need to respond to the user input (like changing the selected color), you will need to "set state", which you can learn about in State: A Component's Memory.

      Recap

      • To pass props, add them to the JSX, just like you would with HTML attributes.
      • To read props, use the function Avatar({ person, size }) destructuring syntax.
      • You can specify a default value like size = 100, which is used for missing and undefined props.
      • You can forward all props with <Avatar {...props} /> JSX spread syntax, but don't overuse it!
      • Nested JSX like <Card><Avatar /></Card> will appear as Card component's children prop.
      • Props are read-only snapshots in time: every render receives a new version of props.
      • You can't change props. When you need interactivity, you'll need to set state.

      Certainly! Let's break it down in simple terms:

      1. Props Changing Over Time:

      In React, a component can receive different pieces of information, called "props," from its parent component. Props can change over time, reflecting the changing state of the application.

      2. Example: Clock Component:

      Imagine a Clock component that displays the current time and has a specified color. The Clock component is given two props: color and time.

      ```jsx
      // Clock.js
      export default function Clock({ color, time }) {
        return (
          <h1 style={{ color: color }}>
            {time}
          </h1>
        );
      }
      ```

      Here, the Clock component uses these props to display the time in a specified color.

      3. Props Are Immutable:

      Props are like snapshots of data at a specific moment. They represent the state of the component at a particular time. However, once set, props are immutable, meaning they cannot be directly changed.

      4. Changing Props:

      If you want to change the information shown by a component in response to user actions (like selecting a different color), you don't directly change the props. Instead, you ask the parent component to provide new props. The component then receives these new props, and it re-renders with the updated information.
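
      The parent's code is omitted in the quoted docs, but it might look roughly like this hypothetical sketch, where selecting a color sets state and passes fresh props down to Clock:

      ```jsx
      import { useState } from 'react';

      function App() {
        const [color, setColor] = useState('lightcoral'); // hypothetical default

        return (
          <>
            <select value={color} onChange={e => setColor(e.target.value)}>
              <option value="lightcoral">lightcoral</option>
              <option value="midnightblue">midnightblue</option>
            </select>
            <Clock color={color} time={new Date().toLocaleTimeString()} />
          </>
        );
      }
      ```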

      5. Setting State for Interactivity:

      To handle user interactions or dynamic changes, you use "state." If you need to update information dynamically (like changing the selected color in the example), you use state. Changing state triggers a re-render of the component with the updated information.

      6. Recap:

      • Passing Props: Props are like information passed down to a component. You add them in JSX, just like HTML attributes.

      • Reading Props: Use destructuring syntax to read props inside the component.

      • Immutable Nature: Props are immutable, meaning you can't directly change them. If something needs to change, ask the parent component for new props.

      • Setting Default Values: You can set default values for props, ensuring they have a value even if not explicitly provided.

      • Interactivity and State: For dynamic changes and user interactivity, use "state" instead of trying to change props directly.

      By understanding and following these principles, you ensure that your React components stay predictable and can handle changes effectively.
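
      A short sketch tying the recap together, using a hypothetical Avatar component (the person and size props are illustrative, echoing the destructuring and default-value conventions above):

      ```jsx
      // Reading props with destructuring and a default value:
      function Avatar({ person, size = 100 }) {
        return <img src={person.imageUrl} alt={person.name} width={size} />;
      }

      // Passing props in JSX, just like HTML attributes:
      <Avatar person={{ name: 'Lin', imageUrl: '/lin.jpg' }} size={80} />;
      ```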

      • The historian will have some puzzles to solve --> interesting way of stating that their logic is not clear? (3)
      • Eugenics always had an evaluative logic at its core. Some human life was of more value -- to the state, the nation, the race, future generations -- than other life, and thus its advocates sought to implement these practices differentially ( 4) ==> did we ever examine who was making these policies and procedures and place a value on their lives
      • Peak in the 1920s --> I'm assuming that this is referring to the US
      • Contributing factors = nationalism, tech changes, changes in ideas about citizenry, labor force, future fitness (4) ABOUT MODERNITY
      • Darwin's work, which highlighted that we are a part of nature and follow the same natural laws as everything else, critically undermined the idea that humans were made specially (4)
      • Humans could and should replace natural selection by other processes that are more merciful and not less effective (4) huh --> he thought eugenics is merciful and more effective than natural selection (I wonder why this became so popular, especially when this time period is known for being religious)
      • Galton understood eugenics to be rational planning of and intervention into human breeding --> thought that the state should intervene into peoples own choices and such (5)
      • Positive and negative eugenics (5)
      • Oneida, NY the women professed: "we do not belong to ourselves", disrespectfully what in the world is this cult (5)
      • American eugenics and ideas about eugenics shaping NS Germany
      • 1926 Soviet Civic Code (5)
      • aiming to restrict the reproduction of those with conditions and diseases considered heritable (CONSIDERED) --> did they know exactly (6)
      • eugenic intervention directed at those who already belonged (6)
      • Different priorities to the varying eugenic movements across the world --> southern US v NS Germany v Australian English (6)
      • "Polluting" was not racial outsiders but often marginalized insiders
      • Geographic insight --> more concentrated on the urban problem populations of industrialization (6) --> why
      • America --> poor, rural whites: southern European immigrants; African Americans (7)
      • Anxieties about interracial marriage were frequently linked to colonial rule (7)
      • Race crossing --> Ernst Rodenwaldt (7)
      • Australia --> half-caste problem (7) biological and cultural assimilation influenced by eugenic ideas ==> boarding school type beat
      • Prospect of extinction (8)
      • primitive societies where weak offspring were not nursed were admired as naturally eugenic (8) ==> looking at primitive examples to propel an idea of modernity
      • relationship between eugenic and birth control
      • In Eastern Europe --> more public health spending (8)
      • into the management of infectious disease (9)
      • Kenya example --> uneducable making their education irrelevant to the state (9)
      • linked to the proliferation of asylums from the 19th century
      • Dr. Harry Haiselden withheld treatment from newborns --> what about the Hippocratic oath? (9)
      • designations of fit and unfit applied to both populations and to individuals
      • one eye on the past generations and one eye on the future (9)
      • pedigree chart (10)
      • both research and propaganda
      • Darwin knew about the principles of genetics, yet married his first cousin (10)
      • insurance used information from Biometrika (10)
      • Issue with the pedigree system bc it thinks that things like intelligence and other "good" characteristics are associated with inheritance but not with access to social support and resources (10)
      • Eastern Europe --> eugenics could match and enshrine the aspirations of new nation-states anxious to establish their legitimacy (12)
      • Japanese modernization manifested itself as nationalism (quality over quantity) (12)
      • homogeneity over heterogeneity (12)
      • strong connection between eugenics and the left and to progressive reform and politics (13)
      • sterilization has informed the concept of "rights" in reproductive arenas (14)
      • Consent --> Nuremberg trials (14)
      • States that were looking to modernize were those who took up eugenics enthusiastically
      • NS Germany (14/15)
      • mythical concern --> where did this originate from (15)
      • similarities across so many countries and cultures is interesting in the eugenics movements (15)
      • Eugenics in immigration regulations (16)
      • Central America and Southern Europe distinguished itself (16)
      • Cross cultural ideas and exchange allowed eugenics to move so fast and so far but then they all wanted to be independent and create their own identity (16)
      • Hygiene was also linked to eugenics --> "Rassenhygiene" in DE (17)
      • Interesting to see how they distanced themselves from forms of eugenics that didn't come from their own home country
      • sterilization procedures were antithetical to catholic doctrine and were highly questionable (18)
      • British sought the legalization of sterilization but voluntary sterilization was always their aim (18)
      • US is a contradiction --> commitment to liberty also passed compulsory sterilization laws (18)
      • Voluntary v coerced (18)
      • eugenics has always had opposition (19)
      • Jennings essentially conceptualized pre-natal testing (19)
      • 1970s drew large opposition (20)
      • Tuskegee syphilis experiment highlighted conversations about races and medical ethics (20)
      • This was a period of strongly left-oriented intellectual critique of science (20)


    1. They draw statistical correlations between a person's zip code or language patterns and her potential to pay back a loan or handle a job.

      This is incredibly sad but visible in our BIPOC communities.

    1. Social workers should engage in social and political action that seeks to ensure that all people have equal access to the resources, employment, services, and opportunities they require to meet their basic human needs and to develop fully. Social workers should be aware of the impact of the political arena on practice and should advocate for changes in policy and legislation to improve social conditions to meet basic human needs and promote social justice.

      While exploring the NASW Code of Ethics, I found an important aspect related to power and inequality in the section on Social and Political Action (6.04). This section emphasizes the duty of social workers to engage in action against social injustice. However, it raises questions about the depth of guidance provided regarding power dynamics in the profession. I've noted this section to express my curiosity about how the Code of Ethics could offer clearer guidance on addressing power imbalances within social work organizations and in broader society. This encourages a deeper exploration of strategies to challenge oppressive systems.

    1. Then as part of our loop, each time we go through a review we will add 1 to our num_reviews variable

      Instead of keeping track of a num_reviews variable, we can also loop over the index and review of each movie simultaneously by using enumerate(movies). This gives us fewer variables to think about and makes the code a bit more elegant and readable.
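
      A minimal sketch of the idea in Python, assuming a hypothetical list of review strings:

      ```python
      movie_reviews = ["Loved it", "Too long", "Great score"]  # hypothetical data

      # enumerate() yields (index, item) pairs, so no separate counter is needed.
      for num_reviews, review in enumerate(movie_reviews, start=1):
          print(num_reviews, review)
      ```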

    1. Consider the code block below. What prints?

      The length count includes all the spaces, the period (.), and the exclamation mark.
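
      For instance (a hypothetical string, since the exercise's code block isn't reproduced here):

      ```javascript
      console.log("I like coding. It is fun!".length); // 25: spaces, the period, and the "!" all count
      ```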

    1. In the header and footer icons that serve as links, reduce the opacity to 0.5 on hover.

      There is an error in the HTML code of the à-propos (about) page: you added class="lien-icone" to the page's logo, which applies the same hover opacity change to it as to the Instagram and Twitter hyperlinks, which also carry the declared .lien-icone class.

    1. Six other escape sequences are valid in JavaScript:

      | Code | Result |
      | ---- | ------ |
      | \b | Backspace |
      | \f | Form Feed |
      | \n | New Line |
      | \r | Carriage Return |
      | \t | Horizontal Tabulator |
      | \v | Vertical Tabulator |

      Escape sequences in JavaScript are special combinations of characters that are used to represent characters that would otherwise be difficult or impossible to include directly in a string. These sequences start with a backslash \ followed by another character or characters.

      Here are some common escape sequences in JavaScript:

      1. \n (Newline): represents a line break.

      ```javascript
      console.log("Hello\nWorld");
      // Output:
      // Hello
      // World
      ```

      2. \t (Tab): represents a horizontal tab.

      ```javascript
      console.log("This\tis\tTab");
      // Output: This    is    Tab
      ```

      3. \\ (Backslash): represents a literal backslash.

      ```javascript
      console.log("This is a backslash: \\");
      // Output: This is a backslash: \
      ```

      4. \' (Single Quote): represents a single quote within a string declared with single quotes.

      ```javascript
      console.log('It\'s a sunny day.');
      // Output: It's a sunny day.
      ```

      5. \" (Double Quote): represents a double quote within a string declared with double quotes.

      ```javascript
      console.log("She said, \"Hello!\"");
      // Output: She said, "Hello!"
      ```

      6. \uXXXX (Unicode Escape): represents a Unicode character, where XXXX is the Unicode code point in hexadecimal.

      ```javascript
      console.log("\u0041");
      // Output: A
      ```

      These escape sequences allow you to include special characters in your strings without causing syntax errors. For example, if you want to include a quote character within a string that is already enclosed in quotes, you can use the escape sequence to prevent confusion and errors.

  3. developer.mozilla.org
    1. The localeCompare() method enables string comparison in a similar fashion as strcmp(): it allows sorting strings in a locale-aware manner.

      String primitives and String objects

      Note that JavaScript distinguishes between String objects and primitive string values. (The same is true of Boolean and Numbers.) String literals (denoted by double or single quotes) and strings returned from String calls in a non-constructor context (that is, called without using the new keyword) are primitive strings. In contexts where a method is to be invoked on a primitive string or a property lookup occurs, JavaScript will automatically wrap the string primitive and call the method or perform the property lookup on the wrapper object instead.

      ```js
      const strPrim = "foo"; // A literal is a string primitive
      const strPrim2 = String(1); // Coerced into the string primitive "1"
      const strPrim3 = String(true); // Coerced into the string primitive "true"
      const strObj = new String(strPrim); // String with new returns a string wrapper object.

      console.log(typeof strPrim); // "string"
      console.log(typeof strPrim2); // "string"
      console.log(typeof strPrim3); // "string"
      console.log(typeof strObj); // "object"
      ```

      Warning: You should rarely find yourself using String as a constructor.

      String primitives and String objects also give different results when using eval(). Primitives passed to eval are treated as source code; String objects are treated as all other objects are, by returning the object. For example:

      ```js
      const s1 = "2 + 2"; // creates a string primitive
      const s2 = new String("2 + 2"); // creates a String object

      console.log(eval(s1)); // returns the number 4
      console.log(eval(s2)); // returns the string "2 + 2"
      ```

      For these reasons, the code may break when it encounters String objects when it expects a primitive string instead, although generally, authors need not worry about the distinction. A String object can always be converted to its primitive counterpart with the valueOf() method.

      ```js
      console.log(eval(s2.valueOf())); // returns the number 4
      ```

      Certainly! Let's break it down in simpler terms:

      String Primitives and String Objects:

      • In JavaScript, there are two types of strings: string primitives and String objects.
      • String literals (created using double or single quotes) and strings returned from non-constructor String calls are primitive strings.
      • String objects are created using the new String() syntax.

      Examples:

      ```javascript
      const strPrim = "foo"; // A literal is a string primitive
      const strPrim2 = String(1); // Coerced into the string primitive "1"
      const strPrim3 = String(true); // Coerced into the string primitive "true"
      const strObj = new String(strPrim); // String with new returns a string wrapper object.

      console.log(typeof strPrim); // "string"
      console.log(typeof strPrim2); // "string"
      console.log(typeof strPrim3); // "string"
      console.log(typeof strObj); // "object"
      ```

      • strPrim, strPrim2, and strPrim3 are string primitives, while strObj is a String object.

      Using eval():

      • When using eval() with string primitives, the string is treated as source code, and the result is evaluated.
      • When using eval() with String objects, it treats the object like any other object and returns the object itself.

      Examples:

      ```javascript
      const s1 = "2 + 2";             // creates a string primitive
      const s2 = new String("2 + 2"); // creates a String object

      console.log(eval(s1)); // returns the number 4
      console.log(eval(s2)); // returns the string "2 + 2"
      ```

      • eval(s1) evaluates the string primitive as code and returns the result (number 4).
      • eval(s2) returns the String object itself (string "2 + 2"), not the evaluated result.

      Conversion to Primitive:

      • To convert a String object to its primitive counterpart, you can use the valueOf() method.

      Example:

      ```javascript
      console.log(eval(s2.valueOf())); // returns the number 4
      ```

      • valueOf() converts the String object (s2) to its primitive form before being evaluated, resulting in the number 4.

      Conclusion:

      • In general, it's rare to use String as a constructor (using new String()), and working with string primitives is more common. Understanding the differences is important when dealing with certain situations like using eval() or expecting a specific type of string.
    1. Social workers should not engage in dual or multiple relationships with clients or former clients in which there is a risk of exploitation or potential harm to the client. In instances when dual or multiple relationships are unavoidable, social workers should take steps to protect clients and are responsible for setting clear, appropriate, and culturally sensitive boundaries.

      This section of the NASW code of ethics raises important questions regarding power and structural inequality because it can be hard for social workers to find the clear boundary of dual relationships. For example, if a client invites you to an event they are holding that is unrelated to your professional relationship, like a retirement party or another life milestone, how do you decide which type of event is appropriate to attend and which is not? Given your professional influence on this person's life, what if your personal bias for this client exceeds the level appropriate for a standard social worker/client relationship, but you don't see it that way because they are "inviting you in" or consenting to it? I feel it's important for social workers to build strong rapport with their clients to facilitate more effective practice; however, one's judgment about when this rapport building crosses the line into unprofessional territory may vary on a case-by-case basis.

    1. Social workers’ social change efforts are focused primarily on issues of poverty, unemployment, discrimination, and other forms of social injustice. These activities seek to promote sensitivity to and knowledge about oppression and cultural and ethnic diversity.

      This part of the NASW code of ethics pertains to my field work because, in my role as the housing navigation specialist at the shelter, I not only help clients secure permanent housing through government subsidies but also advocate on their behalf when they face blatant discrimination. This advocacy is reflected not only in my efforts to educate my clients on their own housing rights but also in educating landlords, realtors, and property managers on the policy of housing subsidies like statewide Section 8 and local municipality housing voucher programs. Recently, I was working with a client who has a Section 8 voucher through the state. This client was about to sign their request for tenancy with the landlord, but once the landlord looked over the DCA (Department of Community Affairs) terms of agreement, they stated in writing that they "did not want to move forward with a voucher holder." Luckily, New Jersey is one of the 49 states that have fair housing legislation in place, so I was able to report this correspondence to my client's DCA worker as discrimination, and they were granted more time to secure permanent housing. There is a lot of stigma tied to the homeless community, especially those who obtain housing subsidies through local and federal government entities. This is why legislation like the Fair Housing Act is so important in the work that I do and to the vulnerable community I serve.

    1. About Automated functional testing via the Chrome DevTools Protocol. Easy to use and open source. Generates unique CSS and Xpath selectors. Outputs code for multiple testing frameworks, including Jest, Puppeteer, Selenium Webdriver and Cypress.

    1. useRef

      useRef is a React Hook that lets you reference a value that's not needed for rendering.

      const ref = useRef(initialValue)

      Reference: useRef(initialValue)

      Call useRef at the top level of your component to declare a ref.

      ```js
      import { useRef } from 'react';

      function MyComponent() {
        const intervalRef = useRef(0);
        const inputRef = useRef(null);
        // ...
      }
      ```

      Parameters

      • initialValue: The value you want the ref object's current property to be initially. It can be a value of any type. This argument is ignored after the initial render.

      Returns

      useRef returns an object with a single property:

      • current: Initially, it's set to the initialValue you have passed. You can later set it to something else. If you pass the ref object to React as a ref attribute to a JSX node, React will set its current property.

      On the next renders, useRef will return the same object.

      React Hooks are functions provided by React that allow you to use state and lifecycle features in functional components, making them more powerful and expressive. They were introduced in React version 16.8 to let developers use state and other React features without writing a class.

      useState Hook:

      useState is a Hook that allows you to add state to functional components. It returns an array with two elements: the current state value and a function that lets you update it. Here's a simple example:

      ```jsx
      import React, { useState } from 'react';

      function Counter() {
        // Declare a state variable named "count" with an initial value of 0
        const [count, setCount] = useState(0);

        return (
          <div>
            Count: {count}
            <button onClick={() => setCount(count + 1)}>Increment</button>
          </div>
        );
      }
      ```

      In this example, count is the state variable, and setCount is the function to update it. When the "Increment" button is clicked, it updates the count state, and React automatically re-renders the component.
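      A brief aside: when the next state depends on the previous one, the React docs recommend passing an updater function to setCount, since state updates may be batched. A one-line variation of the button above:

      ```jsx
      <button onClick={() => setCount((prev) => prev + 1)}>Increment</button>
      ```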

      useEffect Hook:

      useEffect is a Hook that enables you to perform side effects in functional components. It's similar to lifecycle methods in class components. Here's a simple example fetching data:

      ```jsx
      import React, { useState, useEffect } from 'react';

      function DataFetcher() {
        const [data, setData] = useState(null);

        useEffect(() => {
          // Fetch data when the component mounts
          fetch('https://api.example.com/data')
            .then(response => response.json())
            .then(data => setData(data));

          // Clean up any resources when the component unmounts
          return () => {
            // Cleanup code here
          };
        }, []); // The empty array ensures this effect runs only once on mount

        return (
          <div>
            Data: {data ? data.value : 'Loading...'}
          </div>
        );
      }
      ```

      In this example, useEffect runs when the component mounts. It fetches data and updates the state. The empty dependency array ([]) means the effect runs only once when the component mounts.
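      To see the dependency array doing real work, here is a small variation (the userId prop and URL are invented for illustration) where the effect re-runs whenever a dependency changes:

      ```jsx
      import React, { useState, useEffect } from 'react';

      function UserProfile({ userId }) {
        const [user, setUser] = useState(null);

        useEffect(() => {
          // Re-fetch whenever userId changes
          fetch(`https://api.example.com/users/${userId}`)
            .then(response => response.json())
            .then(data => setUser(data));
        }, [userId]); // the effect re-runs when userId changes

        return <div>{user ? user.name : 'Loading...'}</div>;
      }
      ```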

      useContext Hook:

      useContext is a Hook that allows you to subscribe to React context without introducing nesting. It lets you access the value of a context directly.

      ```jsx
      import React, { useContext } from 'react';

      const ThemeContext = React.createContext('light');

      function ThemedComponent() {
        const theme = useContext(ThemeContext);

        return <div>Current Theme: {theme}</div>;
      }
      ```

      In this example, ThemedComponent can directly access the current theme value from the ThemeContext.
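      As written, the example reads the context's default value ('light'); in a real app, a provider higher in the tree usually supplies the value. A minimal sketch of that wiring, reusing ThemeContext and ThemedComponent from the snippet above:

      ```jsx
      function App() {
        return (
          <ThemeContext.Provider value="dark">
            {/* Everything below now sees "dark" via useContext */}
            <ThemedComponent />
          </ThemeContext.Provider>
        );
      }
      ```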

      These are just a few examples of React Hooks. There are others like useReducer, useCallback, and useMemo, each serving a specific purpose to enhance the functionality of functional components. Hooks allow you to manage state and side effects more effectively in functional components, making your code cleaner and more maintainable. For instance, useMemo caches the result of a calculation between renders; see the sketch below.
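      As a quick, hedged illustration (the ItemList component and its props are made up, and items is assumed to be an array of strings), useMemo recomputes a value only when its dependencies change:

      ```jsx
      import { useMemo } from 'react';

      function ItemList({ items, query }) {
        // Recompute only when items or query change, not on every render
        const visibleItems = useMemo(
          () => items.filter((item) => item.includes(query)),
          [items, query]
        );

        return (
          <ul>
            {visibleItems.map((item) => (
              <li key={item}>{item}</li>
            ))}
          </ul>
        );
      }
      ```

      With that overview done, let's break down useRef itself in simpler terms.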

      useRef Explanation:

      useRef is like a tool in React that helps you create a special reference to something, and this reference won't force your component to re-render when the referenced thing changes.

      Simple Explanation:

      1. Creating a Reference: With useRef, you can create a reference to a value, like a number, an object, or even a DOM element.

      ```jsx
      import { useRef } from 'react';

      function MyComponent() {
        const myRef = useRef(0); // Creating a reference to the number 0
        return <div>{myRef.current}</div>;
      }
      ```

      In this example, myRef is a reference to the number 0.

      2. No Re-rendering: When the value of the reference changes, your component won't automatically re-render. This is useful when you want to keep track of something without affecting the UI.

      ```jsx
      import { useRef, useState } from 'react';

      function Counter() {
        const count = useRef(0); // Creating a reference to the number 0
        const [renderCount, setRenderCount] = useState(0);

        // Changing the ref does NOT cause a re-render...
        const incrementRef = () => {
          count.current = count.current + 1;
          console.log('ref is now', count.current);
        };

        // ...while updating state does
        const forceRender = () => setRenderCount(renderCount + 1);

        return (
          <div>
            <p>Render Count: {renderCount}</p>
            <p>Count (no re-render): {count.current}</p>
            <button onClick={incrementRef}>Increment ref</button>
            <button onClick={forceRender}>Re-render</button>
          </div>
        );
      }
      ```

      In this example, updating count.current won't trigger a re-render of the component; only the setRenderCount state update does.

      When to Use useRef:

      • Use useRef when you want to keep track of a value that doesn't affect your component's display directly and shouldn't cause re-renders.

      • Common use cases include interacting with the DOM directly or storing mutable values that shouldn't trigger updates, such as timer ids (see the sketch below).
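      A minimal sketch of the timer-id pattern (the Ticker component and its 'tick' message are invented for illustration): storing the interval id in a ref keeps it available for cleanup without causing re-renders:

      ```jsx
      import { useRef, useEffect } from 'react';

      function Ticker() {
        const intervalRef = useRef(null); // mutable slot for the interval id

        useEffect(() => {
          intervalRef.current = setInterval(() => {
            console.log('tick'); // placeholder side effect
          }, 1000);

          // The stored id is needed here to stop the timer on unmount
          return () => clearInterval(intervalRef.current);
        }, []);

        return null;
      }
      ```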

      In summary, useRef is a handy tool in React for creating references that won't cause your component to re-render when the referenced value changes. It's useful for scenarios where you need to keep track of something without impacting the UI.

      Next, let's break down why useRef returns the same (memoized) object across renders, with multiple examples.

      Explanation:

      1. Memoized Object: When you use useRef to create a reference, React keeps track of it. Instead of creating a new reference object every time your component re-renders, React gives you the same reference object.

      ```jsx
      import { useRef } from 'react';

      function MyComponent() {
        const myRef = useRef(0);
        const previousRef = useRef(null); // remembers the object from the last render

        // On every render after the first, React hands back the same object
        if (previousRef.current !== null) {
          console.log(previousRef.current === myRef); // true
        }
        previousRef.current = myRef;

        return <div>{myRef.current}</div>;
      }
      ```

      In this example, myRef remains the same object across renders.

      2. Avoiding Recreation: This memoization is useful when you don't want to recreate the reference object, especially when dealing with functions. It helps to maintain consistency and avoids unnecessary re-initialization.

      ```jsx
      import { useRef, useEffect } from 'react';

      function EffectComponent() {
        const handleClick = useRef(() => {
          console.log('Button clicked!');
        });

        useEffect(() => {
          // Access the function without causing a re-render
          handleClick.current();
        }, []);

        return <button onClick={handleClick.current}>Click me</button>;
      }
      ```

      In this example, handleClick remains the same function reference across renders.

      3. Ensuring Same Object: The memoized object ensures that the reference remains consistent, even when the component re-renders. This is essential when dealing with external functions or references that you want to persist.

      ```jsx
      import { useRef } from 'react';

      function ReferenceComponent() {
        // Recreated as a new function object on every render
        const externalFunction = () => {
          console.log('External function');
        };

        // The ref captures the function from the FIRST render and keeps it
        const externalFunctionRef = useRef(externalFunction);

        // true on the first render; on later renders externalFunction is a
        // fresh object, while externalFunctionRef.current stays the original
        console.log(externalFunctionRef.current === externalFunction);

        return null;
      }
      ```

      In this example, externalFunctionRef.current keeps pointing at the function captured on the first render, even though a new externalFunction is created on every subsequent render.

      Simplified Summary:

      • useRef gives you the same reference object across renders.
      • This is useful to avoid recreating the reference, especially when dealing with functions.
      • It ensures consistency and stability, preventing unnecessary re-initialization.

      In essence, useRef helps you maintain the same reference object, ensuring stability and preventing unnecessary re-creation, which is particularly handy when working with functions or external references.

      Finally, here is useRef broken down in simple terms, with examples:

      What is useRef?

      useRef is a React Hook that allows you to create a reference to a value that won't trigger a re-render when it changes. It's often used for things that are not directly related to rendering, like accessing or manipulating the DOM.

      How to Use useRef:

      1. Referencing a Value: You can use useRef to create a reference to a value, such as a number or an object. This value can be accessed and modified without causing your component to re-render.

      ```jsx
      import { useRef } from 'react';

      function MyComponent() {
        const intervalRef = useRef(0);  // Reference to a number
        const inputRef = useRef(null);  // Reference to an element (initially null)
        // ...
      }
      ```

      2. Manipulating the DOM: useRef is commonly used for interacting with the DOM directly. For example, if you want to focus on an input element or keep track of some DOM-related state without triggering a re-render.

      ```jsx
      import { useRef, useEffect } from 'react';

      function MyComponent() {
        const inputRef = useRef(null);

        useEffect(() => {
          // Focus on the input element when the component mounts
          inputRef.current.focus();
        }, []);

        return <input ref={inputRef} />;
      }
      ```

      3. Avoiding Recreating the Ref Contents: useRef is memoized, meaning it returns the same object on every render. This is useful when you want to avoid recreating the ref object, especially when dealing with functions.

      ```jsx
      import { useRef, useEffect } from 'react';

      function MyComponent() {
        const handleClick = useRef(() => {
          console.log('Button clicked!');
        });

        useEffect(() => {
          // Access the function without causing a re-render
          handleClick.current();
        }, []);

        return <button onClick={handleClick.current}>Click me</button>;
      }
      ```

      Parameters and Returns:

      • Parameters: initialValue is the value you want the ref object’s current property to be initially. It can be a value of any type. This argument is ignored after the initial render.

      • Returns: useRef returns an object with a single property, current. Initially it is set to the initialValue you have passed; you can later set it to something else. If you pass the ref object to React as a ref attribute to a JSX node, React will set its current property.

      In simple terms, useRef is a tool to keep track of values or elements that won't cause your component to re-render every time they change. It's commonly used for interacting with the DOM and handling mutable values in a React component.

    1. Academic dishonesty is defined as: an act of deception in which a student claims credit for the work or effort of another person or uses unauthorized materials or fabricated information in any academic work. Academic dishonesty is a violation of the DVC ‘Student Code of Conduct’ and will not be tolerated. Academic dishonesty diminishes the quality of scholarship at Diablo Valley College and hurts the majority of students who conduct themselves honestly.

      This passage reveals the importance of academic integrity and how academic dishonesty is not tolerated at DVC.

    1. While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction. We can see this nihilism show up in one of the versions of the self-contradictory “Rules of the Internet”: 8. There are no real rules about posting … 20. Nothing is to be taken seriously … 42. Nothing is Sacred.

      Youtuber Innuendo Studios talks about the way arguments are made in a community like 4chan: You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up. (The Alt-Right Playbook: The Card Says Moops by Innuendo Studios)

      While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character).

      Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      Trolling can be driven by a nihilistic mindset where empathy is disregarded, aiming solely to provoke reactions, often manifesting as hostility towards specific groups. This form of trolling is widely condemned as it inflicts harm, failing to adhere to basic principles of respect and integrity.

    2. 7.6. Ethics and Trolling

      7.6.1. Background: Forming Groups

      Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad?

      One answer to this challenge is that we use various heuristics (that is, shortcuts for thinking) like stereotypes and signaling to quickly guess where a person stands in relation to us. Sometimes wearing items of a certain brand signals to people with similar commitments that you might be on the same page. Sometimes features that are strongly associated with certain social groups—stereotypes—are assumed to tell us whether or not we can trust someone. Have you ever tried to change or mask your accent, to avoid being marked as from a certain region? Have you ever felt the need to conceal something about yourself that is often stereotyped, or to use an ingroup signal to deflect people’s attention from a stereotyped feature?

      There is a reason why stereotypes are so tenacious: they work… sort of. Humans are brilliant at finding patterns, and we use pattern recognition to increase the efficiency of our cognitive processing. We also respond to patterns and absorb patterns of speech production and style of dress from the people around us. We do have a tendency to display elements of our history and identity, even if we have never thought about it before. This creates an issue, however, when the stereotype is not apt in some way. This might be because we diverge in some way from the categories that mark us, so the stereotype is inaccurate. Or this might be because the stereotype also encodes value judgments that are unwarranted, and which lead to problems with implicit bias. Some people do not need to think loads about how they present in order to come across to people in ways that are accurate and supportive of who they really are. Some people think very carefully about how they curate a set of signals that enable them to accurately let people know who they are or to conceal who they are from people outside their squad.

      Because patterns are so central to how our brains process information, patterns become extremely important to how societies change or stay the same. TV Tropes is a website that tracks patterns in media, such as the jump scare or The Seven Basic Plots. Patterns build habits. Habits build norms. Norms build our reality. To create a social group and have it be sustainable, we depend on stable patterns, habits, and norms to create the reality of the grouping. In a diverse community, there are many subsets of patterns, habits, and norms which go into creating the overall social reality. Part of how people manage their social reality is by enforcing the patterns, habits, and norms which identify us; another way we do this is by enforcing, or policing, which subsets of patterns, habits, and norms get to be recognized as valid parts of the broader social reality. Both of these tactics can be done in appropriate, just, and responsible ways, or in highly unjust ways.

      7.6.2. Ethics of Disruption (Trolling)

      Trolling is a method of disrupting the way things are, including group structure and practices. Like these group-forming practices, disruptive trolling can be deployed in just or unjust ways. (We will come back to that.) These disruptive tactics can also be engaged with different moods, ranging from playful (like some flashmobs), to demonstrative (like activism and protests), to hostile, to warring, to genocidal. You may have heard people say that the difference between a coup and a revolution is whether it succeeds and gets to later tell the story, or gets quashed. You may have also heard that the difference between a traitor and a hero depends on who is telling the story. As this class discusses trolling, as well as many of the other topics of social media behavior coming up in the weeks ahead, you are encouraged to bear this duality of value in mind.

      Trolling is a term given to describe behavior that aims to disrupt (among other things). To make value judgments or ethical judgments about instances of disruptive behavior, we will need to be thoughtful and nuanced about how we decide to pass judgments. One way to begin examining any instance of disruptive behavior is to ask what is being disrupted: a pattern, a habit, a norm, a whole community? And how do we judge the value of the thing being disrupted? Returning to the difference between a coup and a revolution, we might say that a national-level disruption is a coup if it fails, and a revolution if it succeeds. Or we might say that such a disruption is a coup if it intends to disrupt a legitimate instance of political domination/statehood, but a revolution if the instance of political domination is illegitimate. If you take a close look at English-language headlines in the news about uprisings occurring near to or far from here, it should become quickly apparent that both of these reasons can drive an author’s choice to style an event as a coup. To understand what the author is trying to say, we need to look inside the situation and see what assumptions are driving their choice to characterize the disruption in the way that they do. Trolling is disruptive behavior, and whether we class it as problematic or okay depends in part on how we judge the legitimacy of the social reality which is being disrupted. Trolling can be used, in principle, for good or bad ends.

      7.6.3. Trolling and Nihilism

      While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction. We can see this nihilism show up in one of the versions of the self-contradictory “Rules of the Internet”: 8. There are no real rules about posting … 20. Nothing is to be taken seriously … 42. Nothing is Sacred.

      Youtuber Innuendo Studios talks about the way arguments are made in a community like 4chan: You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up. (The Alt-Right Playbook: The Card Says Moops by Innuendo Studios)

      While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character). Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      7.6.4. Reflection Exercise

      Revisit the K-Pop protest trolling example in section 7.3. Take your list of ethical frameworks from Chapter 2 and work through them one by one, applying each tool to the K-Pop trolling. For each theory, think of how many different ways the theory could hook up with the example. For example, when using a virtue ethics type of tool, consider how many different people’s character and flourishing could be developed through this? When using a tool based on outcomes, like consequentialism, how many different elements of the outcome can you think of? The goal here is to come up with as many variations as you can, to see how the tools of ethical analysis can help us see into different aspects of the situation. Once you have made your big list of considerations, choose 2-3 items that, in your view, feel most important. Based on those 2-3 items, do you evaluate this trolling event as having been morally good? Why? What changes to this example would change your overall decision on whether the action is ethical?

      The section provides a profound exploration of the complexities involved in understanding and evaluating disruptive behaviors in social media contexts. It compellingly illustrates how the formation of groups, the use of stereotypes, and the enforcement of norms are all deeply intertwined with our cognitive processes and societal structures. The examination of trolling as a form of disruption that can be deployed for both just and unjust ends invites readers to reflect on the multifaceted nature of these actions and their ethical implications.

    1. Author Response

      The following is the authors’ response to the original reviews.

      eLife assessment

      This study presents potentially valuable results on glutamine-rich motifs in relation to protein expression and alternative genetic codes. The authors' interpretation of the results is so far only supported by incomplete evidence, due to a lack of acknowledgment of alternative explanations, missing controls and statistical analysis, and writing that is unclear to non-experts in the field. These shortcomings could be at least partially overcome by additional experiments, thorough rewriting, or both.

      We thank both the Reviewing Editor and Senior Editor for handling this manuscript.

      Based on your suggestions, we have provided controls, performed statistical analysis, and rewritten our manuscript. The revised manuscript is significantly improved and more accessible to non-experts in the field.

      Reviewer #1 (Public Review):

      Summary

      This work contains 3 sections. The first section describes how protein domains with SQ motifs can increase the abundance of a lacZ reporter in yeast. The authors call this phenomenon autonomous protein expression-enhancing activity, and this finding is well supported. The authors show evidence that this increase in protein abundance and enzymatic activity is not due to changes in plasmid copy number or mRNA abundance, and that this phenomenon is not affected by mutants in translational quality control. It was not completely clear whether the increased protein abundance is due to increased translation or to increased protein stability.

      In section 2, the authors performed mutagenesis of three N-terminal domains to study how protein sequence changes protein stability and enzymatic activity of the fusions. These data are very interesting, but this section needs more interpretation. It is not clear if the effect is due to the number of S/T/Q/N amino acids or due to the number of phosphorylation sites.

      In section 3, the authors undertake an extensive computational analysis of amino acid runs in 27 species. Many aspects of this section are fascinating to an expert reader. They identify regions with poly-X tracks. These data were not normalized correctly: I think that a null expectation for how often poly-X tracks occur should be built for each species based on the underlying prevalence of amino acids in that species. As a result, I believe that the claim is not well supported by the data.

      Strengths

      This work is about an interesting topic and contains stimulating bioinformatics analysis. The first two sections, where the authors investigate how S/T/Q/N abundance modulates protein expression level, is well supported by the data. The bioinformatics analysis of Q abundance in ciliate proteomes is fascinating. There are some ciliates that have repurposed stop codons to code for Q. The authors find that in these proteomes, Q-runs are greatly expanded. They offer interesting speculations on how this expansion might impact protein function.

      Weakness

      At this time, the manuscript is disorganized and difficult to read. An expert in the field, who will not be distracted by the disorganization, will find some very interesting results included. In particular, the order of the introduction does not match the rest of the paper.

      In the first and second sections, where the authors investigate how S/T/Q/N abundance modulates protein expression levels, it is unclear if the effect is due to the number of phosphorylation sites or the number of S/T/Q/N residues.

      There are three reasons why the number of phosphorylation sites in the Q-rich motifs is not relevant to their autonomous protein expression-enhancing (PEE) activities:

      First, we have reported previously that phosphorylation-defective Rad51-NTD (Rad51-3SA) and wild-type Rad51-NTD exhibit similar autonomous PEE activity. Mec1/Tel1-dependent phosphorylation of Rad51-NTD antagonizes the proteasomal degradation pathway, increasing the half-life of Rad51 from ∼30 min to ≥180 min (1). (page 1, lines 11-14)

      Second, in our preprint manuscript, we have already shown that phosphorylation-defective Rad53-SCD1 (Rad53-SCD1-5STA) also exhibits autonomous PEE activity similar to that of wild-type Rad53-SCD1 (Figure 2D, Figure 4A and Figure 4C). We have highlighted this point in our revised manuscript (page 9, lines 19-21).

      Third, as revealed by the results of Figure 4, it is the percentages, and not the numbers, of S/T/Q/N residues that are correlated with the PEE activities of Q-rich motifs.

      The authors also do not discuss if the N-end rule for protein stability applies to the lacZ reporter or the fusion proteins.

      The autonomous PEE function of S/T/Q-rich NTDs is unlikely to be relevant to the N-end rule. The N-end rule links the in vivo half-life of a protein to the identity of its N-terminal residues. In S. cerevisiae, the N-end rule operates as part of the ubiquitin system and comprises two pathways. First, the Arg/N-end rule pathway, involving a single N-terminal amidohydrolase Nta1, mediates deamidation of N-terminal asparagine (N) and glutamine (Q) into aspartate (D) and glutamate (E), which in turn are arginylated by a single Ate1 R-transferase, generating the Arg/N degron. N-terminal R and other primary degrons are recognized by a single N-recognin Ubr1 in concert with ubiquitin-conjugating Ubc2/Rad6. Ubr1 can also recognize several other N-terminal residues, including lysine (K), histidine (H), phenylalanine (F), tryptophan (W), leucine (L) and isoleucine (I) (68-70). Second, the Ac/N-end rule pathway targets proteins containing N-terminally acetylated (Ac) residues. Prior to acetylation, the first amino acid methionine (M) is catalytically removed by Met-aminopeptidases (MetAPs), unless a residue at position 2 is non-permissive (too large) for MetAPs. If a retained N-terminal M or otherwise a valine (V), cysteine (C), alanine (A), serine (S) or threonine (T) residue is followed by residues that allow N-terminal acetylation, the proteins containing these AcN degrons are targeted for ubiquitylation and proteasome-mediated degradation by the Doa10 E3 ligase (71).

      The PEE activities of these S/T/Q-rich domains are unlikely to arise from counteracting the N-end rule for two reasons. First, the first two amino acid residues of Rad51-NTD, Hop1-SCD, Rad53-SCD1, Sup35-PND, Rad51-ΔN, and LacZ-NVH are MS, ME, ME, MS, ME, and MI, respectively, where M is methionine, S is serine, E is glutamic acid and I is isoleucine. Second, Sml1-NTD behaves similarly to these N-terminal fusion tags, despite its methionine and glutamine (MQ) amino acid signature at the N-terminus. (Page 12, line 3 to page 13, line 2)

      The most interesting part of the paper is an exploration of S/T/Q/N-rich regions and other repetitive AA runs in 27 proteomes, particularly ciliates. However, this analysis is missing a critical control that makes it nearly impossible to evaluate the importance of the findings. The authors find the abundance of different amino acid runs in various proteomes. They also report the background abundance of each amino acid. They do not use this background abundance to normalize the runs of amino acids to create a null expectation from each proteome. For example, it has been clear for some time (Ruff, 2017; Ruff et al., 2016) that Drosophila contains a very high background of Q's in the proteome and it is necessary to control for this background abundance when finding runs of Q's.

      We apologize for not explaining sufficiently well the topic eliciting this reviewer’s concern in our preprint manuscript. In the second paragraph of page 14, we cite six references to highlight that SCDs are overrepresented in yeast and human proteins involved in several biological processes (5, 43) and that polyX prevalence differs among species (79-82).

      We will cite a reference by Kiersten M. Ruff in our revised manuscript (38).

      K. M. Ruff, J. B. Warner, A. Posey and P. S. Tan (2017) Polyglutamine length dependent structural properties and phase behavior of huntingtin exon1. Biophysical Journal 112, 511a.

      The authors could easily address this problem with the data and analysis they have already collected. However, at this time, without this normalization, I am hesitant to trust the lists of proteins with long runs of amino acids and the ensuing GO enrichment analysis.

      Ruff KM. 2017. Washington University in St. Louis.

      Ruff KM, Holehouse AS, Richardson MGO, Pappu RV. 2016. Proteomic and Biophysical Analysis of Polar Tracts. Biophys J 110:556a.

      We thank Reviewer #1 for this helpful suggestion and now address this issue by means of a different approach described below.

      Based on a previous study (43), we applied seven different thresholds to seek both short and long, as well as pure and impure, polyX strings in 20 different representative near-complete proteomes, including 4X (4/4), 5X (4/5-5/5), 6X (4/6-6/6), 7X (4/7-7/7), 8-10X (≥50%X), 11-20X (≥50%X) and ≥21X (≥50%X).

      To normalize the runs of amino acids and create a null expectation from each proteome, we determined the ratios of the overall number of X residues for each of the seven polyX motifs relative to those in the entire proteome of each species, respectively. The results of four different polyX motifs are shown in our revised manuscript, i.e., polyQ (Figure 7), polyN (Figure 8), polyS (Figure 9) and polyT (Figure 10). Thus, polyX prevalence differs among species and the overall X contents of polyX motifs often but not always correlate with the X usage frequency in entire proteomes (43).
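      To make this normalization concrete, here is a toy sketch of the idea (ours, not the authors' actual pipeline; it assumes a simple "run of 4 or more Q" motif definition rather than the seven thresholds above):

      ```js
      // Toy sketch: fraction of a proteome's Q residues that sit inside polyQ runs.
      // Here a "run" is 4+ consecutive Q; the real analysis also allows impure
      // runs (e.g., >=50% Q over longer windows).
      function polyQFraction(proteome) {
        let totalQ = 0;
        let runQ = 0;
        for (const seq of proteome) {
          totalQ += (seq.match(/Q/g) || []).length;
          for (const run of seq.match(/Q{4,}/g) || []) {
            runQ += run.length;
          }
        }
        return runQ / totalQ;
      }

      console.log(polyQFraction(['MQQQQAS', 'MEQL'])); // 4 of 5 Q residues -> 0.8
      ```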

      Most importantly, our results reveal that, compared to Stentor coeruleus or several non-ciliate eukaryotic organisms (e.g., Plasmodium falciparum, Caenorhabditis elegans, Danio rerio, Mus musculus and Homo sapiens), the five ciliates with reassigned TAAQ and TAGQ codons not only have higher Q usage frequencies, but also more polyQ motifs in their proteomes (Figure 7). In contrast, polyQ motifs prevail in Candida albicans, Candida tropicalis, Dictyostelium discoideum, Chlamydomonas reinhardtii, Drosophila melanogaster and Aedes aegypti, though the Q usage frequencies in their entire proteomes are not significantly higher than those of other eukaryotes (Figure 1). Due to their higher N usage frequencies, Dictyostelium discoideum, Plasmodium falciparum and Pseudocohnilembus persalinus have more polyN motifs than the other 23 eukaryotes we examined here (Figure 8). Generally speaking, all 26 eukaryotes we assessed have similar S usage frequencies and percentages of S contents in polyS motifs (Figure 9). Among these 26 eukaryotes, Dictyostelium discoideum possesses many more polyT motifs, though its T usage frequency is similar to that of the other 25 eukaryotes (Figure 10).

      In conclusion, these new normalized results confirm that the reassignment of stop codons to Q indeed results in both higher Q usage frequencies and more polyQ motifs in ciliates.  

      Reviewer #2 (Public Review):

      Summary:

      This study seeks to understand the connection between protein sequence and function in disordered regions enriched in polar amino acids (specifically Q, N, S and T). While the authors suggest that specific motifs facilitate protein-enhancing activities, their findings are correlative, and the evidence is incomplete. Similarly, the authors propose that the re-assignment of stop codons to glutamine-encoding codons underlies the greater use of glutamine in a subset of ciliates, but again, the conclusions here are, at best, correlative. The authors perform extensive bioinformatic analysis, with detailed (albeit somewhat ad hoc) discussion on a number of proteins. Overall, the results presented here are interesting, but are unable to exclude competing hypotheses.

      Strengths:

      Following up on previous work, the authors wish to uncover a mechanism associated with poly-Q and SCD motifs explaining proposed protein expression-enhancing activities. They note that these motifs often occur in IDRs and hypothesize that structural plasticity could be capitalized upon as a mechanism of diversification in evolution. To investigate this further, they employ bioinformatics to investigate the sequence features of proteomes of 27 eukaryotes. They deepen their sequence space exploration uncovering sub-phylum-specific features associated with species in which a stop-codon substitution has occurred. The authors propose this stop-codon substitution underlies an expansion of poly-Q repeats and increased glutamine distribution.

      Weaknesses:

      The preprint provides extensive, detailed, and entirely unnecessary background information throughout, hampering reading and making it difficult to understand the ideas being proposed.

      The introduction provides a large amount of detailed background that appears entirely irrelevant for the paper. Many places detailed discussions on specific proteins that are likely of interest to the authors occur, yet without context, this does not enhance the paper for the reader.

      The paper uses many unnecessary, new, or redefined acronyms which makes reading difficult. As examples:

      1) Prion forming domains (PFDs). Do the authors mean prion-like domains (PLDs), an established term with an empirical definition from the PLAAC algorithm? If yes, they should say this. If not, they must define what a prion-forming domain is formally.

      The N-terminal domain (1-123 amino acids) of S. cerevisiae Sup35 was already referred to as a “prion forming domain (PFD)” in 2006 (48). Since then, PFD has also been employed as an acronym in other yeast prion papers (Cox, B.S. et al. 2007; Toombs, T. et al. 2011).

      B. S. Cox, L. Byrne, M. F. Tuite, Protein Stability. Prion 1, 170-178 (2007).

      J. A. Toombs, N. M. Liss, K. R. Cobble, Z. Ben-Musa, E. D. Ross, [PSI+] maintenance is dependent on the composition, not primary sequence, of the oligopeptide repeat domain. PLoS One 6, e21953 (2011).

      2) SCD is already an acronym in the IDP field (meaning sequence charge decoration) - the authors should avoid this as their chosen acronym for Serine(S) / threonine (T)-glutamine (Q) cluster domains. Moreover, do we really need another acronym here (we do not).

      SCD was first used in 2005 as an acronym for the Serine (S)/threonine (T)-glutamine (Q) cluster domain in the DNA damage checkpoint field (4). Almost a decade later, SCD became an acronym for “sequence charge decoration” (Sawle, L. et al. 2015; Firman, T. et al. 2018).

      L. Sawle and K. Ghosh, A theoretical method to compute sequence dependent configurational properties in charged polymers and proteins. J. Chem. Phys. 143, 085101 (2015).

      T. Firman and K. Ghosh, Sequence charge decoration dictates coil-globule transition in intrinsically disordered proteins. J. Chem. Phys. 148, 123305 (2018).

      3) Protein expression-enhancing (PEE) - just say expression-enhancing, there is no need for an acronym here.

      Thank you. Since we have shown that the addition of Q-rich motifs to LacZ affects protein expression rather than transcription, we think it is better to use the “PEE” acronym.

      The results suggest autonomous protein expression-enhancing activities of regions of multiple proteins containing Q-rich and SCD motifs. Their definition of expression-enhancing activities is vague and the evidence they provide to support the claim is weak. While their previous work may support their claim with more evidence, it should be explained in more detail. The assay they choose is a fusion reporter measuring beta-galactosidase activity and tracking expression levels. Given the presented data they have shown that they can drive the expression of their reporters and that beta gal remains active, in addition to the increase in expression of fusion reporter during the stress response. They have not detailed what their control and mock treatment is, which makes complete understanding of their experimental approach difficult. Furthermore, their nuclear localization signal on the tag could be influencing the degradation kinetics or sequestering the reporter, leading to its accumulation and the appearance of enhanced expression. Their evidence refuting ubiquitin-mediated degradation does not have a convincing control.

      Although this reviewer’s concern regarding our use of a nuclear localization signal on the tag is understandable, we are confident that this signal does not bias our findings for two reasons. First, the negative control LacZ-NV also possesses the same nuclear localization signal (Figure 1A, lane 2). Second, another fusion target, Rad51-ΔN, does not harbor the NVH tag (Figure 1D, lanes 3-4). Compared to wild-type Rad51, Rad51-ΔN is highly labile. In our previous study, removal of the NTD from Rad51 reduced by ~97% the protein levels of corresponding Rad51-ΔN proteins relative to wild-type (1).

      Based on the experimental results, the authors then go on to perform bioinformatic analysis of SCD proteins and polyX proteins. Unfortunately, there is no clear hypothesis for what is being tested; there is a vague sense of investigating polyX/SCD regions, but I did not find the connection between the first and section compelling (especially given polar-rich regions have been shown to engage in many different functions). As such, this bioinformatic analysis largely presents as many lists of percentages without any meaningful interpretation. The bioinformatics analysis lacks any kind of rigorous statistical tests, making it difficult to evaluate the conclusions drawn. The methods section is severely lacking. Specifically, many of the methods require the reader to read many other papers. While referencing prior work is of course, important, the authors should ensure the methods in this paper provide the details needed to allow a reader to evaluate the work being presented. As it stands, this is not the case.

      Thank you. As described in detail below, we have now performed rigorous statistical testing using the GofuncR package (Figure 11, Figure 12 and DS7-DS32).

      Overall, my major concern with this work is that the authors make two central claims in this paper (as per the Discussion). The authors claim that Q-rich motifs enhance protein expression. The implication here is that Q-rich motif IDRs are special, but this is not tested. As such, they cannot exclude the competing hypothesis ("N-terminal disordered regions enhance expression").

      In fact, “N-terminal disordered regions enhance expression” exactly summarizes our hypothesis.

      On pages 12-13 and Figure 4 of our preprint manuscript, we explained our hypothesis in the paragraph entitled “The relationship between PEE function, amino acid contents, and structural flexibility”.

      The authors also do not explore the possibility that this effect is in part/entirely driven by mRNA-level effects (see Verma, Nat Comms 2019).

      As pointed out by the first reviewer, we present evidence that the increase in protein abundance and enzymatic activity is not due to changes in plasmid copy number or mRNA abundance (Figure 2), and that this phenomenon is not affected in translational quality control mutants (Figure 3).

      As such, while these observations are interesting, they feel preliminary and, in my opinion, cannot be used to draw hard conclusions on how N-terminal IDR sequence features influence protein expression. This does not mean the authors are necessarily wrong, but from the data presented here, I do not believe strong conclusions can be drawn. That re-assignment of stop codons to Q increases proteome-wide Q usage. I was unable to understand what result led the authors to this conclusion.

      My reading of the results is that a subset of ciliates has re-assigned UAA and UAG from the stop codon to Q. Those ciliates have more polyQ-containing proteins. However, they also have more polyN-containing proteins and proteins enriched in S/T-Q clusters. Surely if this were a stop-codon-dependent effect, we'd ONLY see an enhancement in Q-richness, not a corresponding enhancement in all polar-rich IDR frequencies? It seems the better working hypothesis is that free-floating ciliate proteomes are enriched in polar amino acids compared to sessile ciliates.

      We thank this reviewer for raising this point, however her/his comments are not supported by the results in Figure 7.

      Regardless, the absence of any kind of statistical analysis makes it hard to draw strong conclusions here.

      We apologize for not explaining more clearly the results of Tables 5-7 in our preprint manuscript.

      To address the concerns about our GO enrichment analysis by both reviewers, we have now performed rigorous statistical testing for SCD and polyQ protein overrepresentation using the GOfuncR package (https://bioconductor.org/packages/release/bioc/html/GOfuncR.html). GOfuncR is an R package program that conducts standard candidate vs. background enrichment analysis by means of the hypergeometric test. We then adjusted the raw p-values according to the Family-wise error rate (FWER). The same method had been applied to GO enrichment analysis of human genomes (89).
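      GOfuncR itself is an R package; purely as a toy sketch of the underlying candidate-vs-background hypergeometric test (all numbers below are invented, and multiple-testing adjustment is omitted), the computation looks like this:

      ```js
      // log C(n, k), computed in log space to avoid overflow
      function logChoose(n, k) {
        let s = 0;
        for (let i = 0; i < k; i++) s += Math.log(n - i) - Math.log(i + 1);
        return s;
      }

      // Upper-tail hypergeometric p-value P(X >= k):
      // N = background genes, K = background genes with the GO term,
      // n = candidate genes, k = candidate genes with the GO term.
      function hypergeomPValue(N, K, n, k) {
        let p = 0;
        for (let x = k; x <= Math.min(K, n); x++) {
          p += Math.exp(logChoose(K, x) + logChoose(N - K, n - x) - logChoose(N, n));
        }
        return p;
      }

      console.log(hypergeomPValue(10000, 200, 100, 10)); // small p suggests enrichment
      ```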

      The results presented in Figure 11 and Figure 12 (DS7-DS32) support our hypothesis that Q-rich motifs prevail in proteins involved in specialized biological processes, including Saccharomyces cerevisiae RNA-mediated transposition, Candida albicans filamentous growth, peptidyl-glutamic acid modification in ciliates with reassigned stop codons (TAAQ and TAGQ), Tetrahymena thermophila xylan catabolism, Dictyostelium discoideum sexual reproduction, Plasmodium falciparum infection, as well as the nervous systems of Drosophila melanogaster, Mus musculus, and Homo sapiens (78). In contrast, peptidyl-glutamic acid modification and microtubule-based movement are not overrepresented with Q-rich proteins in Stentor coeruleus, a ciliate with standard stop codons.

      Recommendations for the authors:

      Please note that you control which revisions to undertake from the public reviews and recommendations for the authors.

      Reviewer #1 (Recommendations For The Authors):

      The order of paragraphs in the introduction was very difficult to follow. Each paragraph was clear and easy to understand, but the order of paragraphs did not make sense to this reader. The order of events in the abstract matches the order of events in the results section. However, the order of paragraphs in the introduction is completely different and this was very confusing. This disordered list of facts might make sense to an expert reader but makes it hard for a non-expert reader to understand.

      Apologies. We endeavored to improve the flow of our revised manuscript to make it more readable.

      The section beginning on pg 12 focused on figures 4 and 5 was very interesting and highly promising. However, it was initially hard for me to tell from the main text what the experiment was. Please add to the text an explanation of the experiment, because it is hard to figure out what was going on from the figures alone. Figure 4 is fantastic, but would be improved by adding error bars and scaling the x-axis to be the same in panels B,C,D.

      Thank you for this recommendation. We have now scaled both the x-axis and y-axis equivalently in panels B, C and D of Figure 4. Error bars are too small to be included.

      It is hard to tell if the key variable is the number of S/T/Q/N residues or the number of phosphosites. I think a good control would be to add a regression against the number of putative phosphosites. The sequences are well designed. I loved this part but as a reader, I need more interpretation about why it matters and how it explains the PEE.

      As described above, we have shown that the number of phosphorylation sites in the Q-rich motifs is not relevant to their autonomous protein expression-enhancing (PEE) activities.

      I believe that the prevalence of polyX runs is not meaningful without normalizing for the background abundance of each amino acid. The proteome-wide abundance and the assumption that amino acids occur independently can be used to form a baseline expectation for which runs are longer than expected by chance. I think Figures 6 and 7 should go into the supplement and be replaced in the main text with a figure where Figure 6 is normalized by Figure 7. For example in P. falciparum, there are many N-runs (Figure 6), but the proteome has the highest fraction of N’s (Figure 7).

      Thank you for these suggestions. The three figures in our preprint manuscript (Figures 6-8) have been moved into the supplementary information (Figures S1-S3). For normalization, we have provided four new figures (Figures 7-10) in our revised manuscript.

      The analysis of ciliate proteomes was fascinating. I am particularly interested in the GO enrichment for “peptidyl-glutamic acid modification” (pg 20) because these enzymes might be modifying some of Q’s in the Q-runs. I might be wrong about this idea or confused about the chemistry. Do these ciliates live in Q-rich environments? Or nitrogen rich environments?

      Polymeric modifications (polymodifications) are a hallmark of C-terminal tubulin tails, whereas secondary peptide chains of glutamic acids (polyglutamylation) and glycines (polyglycylation) are catalyzed from the γ-carboxyl group of primary chain glutamic acids. It is not clear if these enzymes can modify some of the Q’s in the Q-runs.

      To our knowledge, ciliates are abundant in almost every liquid water environment, i.e., oceans/seas, marine sediments, lakes, ponds, and rivers, and even soils.

      I think you should include more discussion about how the codons that code for Q’s are prone to slippage during DNA replication, and thus many Q-runs are unstable and expand (e.g. Huntington’s Disease). The end of pg 24 or pg 25 would be good places.

      We thank the reviewer for these comments.

      PolyQ motifs have a particular length-dependent codon usage that relates to strand slippage in CAG/CTG trinucleotide repeat regions during DNA replication. In most organisms having standard genetic codons, Q is encoded by CAGQ and CAAQ. Here, we have determined and compared proteome-wide Q contents, as well as the CAGQ usage frequencies (i.e., the ratio between CAGQ and the sum of CAAQ, CAGQ, TAAQ, and TAGQ).
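      As a small illustration of this ratio (the codon counts below are invented), the usage frequency reduces to a one-line calculation over the four Q-encoding codons:

      ```js
      // Toy sketch: CAG usage frequency among Q-encoding codons
      // (CAA, CAG, plus reassigned TAA/TAG in some ciliates).
      function cagUsage(counts) {
        const total = counts.CAA + counts.CAG + counts.TAA + counts.TAG;
        return counts.CAG / total;
      }

      console.log(cagUsage({ CAA: 700, CAG: 250, TAA: 30, TAG: 20 })); // 0.25
      ```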

      Our results reveal that the likelihood of forming long CAG/CTG trinucleotide repeats is higher in five eukaryotes due to their higher CAGQ usage frequencies, including Drosophila melanogaster (86.6%), Danio rerio (74.0%), Mus musculus (74.0%), Homo sapiens (73.5%), and Chlamydomonas reinhardtii (87.3%) (orange background, Table 2). In contrast, another five eukaryotes that possess high numbers of polyQ motifs (i.e., Dictyostelium discoideum, Candida albicans, Candida tropicalis, Plasmodium falciparum and Stentor coeruleus) (Figure 1) utilize more CAAQ (96.2%, 84.6%, 84.5%, 86.7% and 75.7%) than CAGQ (3.8%, 15.4%, 15.5%, 13.3% and 24.3%), respectively, to avoid the formation of long CAG/CTG trinucleotide repeats (green background, Table 2). Similarly, all five ciliates with reassigned stop codons (TAAQ and TAGQ) have low CAGQ usage frequencies (i.e., from 3.8% in Pseudocohnilembus persalinus to 12.6% in Oxytricha trifallax) (red font, Table 2). Accordingly, the CAG-slippage mechanism might operate more frequently in Chlamydomonas reinhardtii, Drosophila melanogaster, Danio rerio, Mus musculus and Homo sapiens than in Dictyostelium discoideum, Candida albicans, Candida tropicalis, Plasmodium falciparum, Stentor coeruleus and the five ciliates with reassigned stop codons (TAAQ and TAGQ).

      Author response table 1.

      Usage frequencies of TAA, TAG, TAAQ, TAGQ, CAAQ and CAGQ codons in the entire proteomes of 20 different organisms.

      Pg 7, paragraph 2 has no direction. Please add the conclusion of the paragraph to the first sentence.

      This paragraph has been moved to the “Introduction” section of the revised manuscript.

      Pg 8, I suggest only mentioning the PFDs used in the experiments. The rest are distracting.

      We have addressed this concern above.

      Pg 12. Please revise the "The relationship...." text to explain the experiment.

      We apologize for not explaining this topic sufficiently well in our preprint manuscript.

      SCDs are often structurally flexible sequences (4) or even IDRs. Using IUPred2A (https://iupred2a.elte.hu/plot_new), a web-server for identifying disordered protein regions (88), we found that Rad51-NTD (1-66 a.a.) (1), Rad53-SCD1 (1-29 a.a.) and Sup35-NPD (1-39 a.a.) are highly structurally flexible. Since a high content of serine (S), threonine (T), glutamine (Q) and asparagine (N) is a common feature of IDRs (17-20), we applied an alanine-scanning mutagenesis approach to reduce the percentages of S, T, Q or N in Rad51-NTD, Rad53-SCD1 or Sup35-NPD, respectively. As shown in Figure 4 and Figure 5, there is a very strong positive relationship between STQ and STQN amino acid percentages and β-galactosidase activities. (Page 13, lines 5-10)

      Pg 13, first full paragraph, "Functionally, IDRs..." I think this paragraph belongs in the Discussion.

      This paragraph is now in the “Introduction” section (Page 5, Lines 11-15).

      Pg. 15, I think the order of paragraphs should be swapped.

      These paragraphs have been removed or rewritten in the “Introduction” section of our revised manuscript.

      Pg 17 (and other parts) I found the lists of numbers and percentages hard to read and I think you should refer readers to the tables.

      Thank you. In the revised manuscript, we have avoided using lists of numbers and percentages, unless we feel they are absolutely essential.

      Pg. 19 please add more interpretation to the last paragraph. It is very cool but I need help understanding the result. Are these proteins diverging rapidly? Perhaps this is a place to include the idea of codon slippage during DNA replication.

      Thank you. The new results in Table 2 indicate that the CAG-slippage mechanism is unlikely to operate in ciliates with reassigned stop codons (TAAQ and TAGQ).

      Pg 24. "Based on our findings from this study, we suggest that Q-rich motifs are useful toolkits for generating novel diversity during protein evolution, including by enabling greater protein expression, protein-protein interactions, posttranslational modifications, increased solubility, and tunable stability, among other important traits." This idea needs to be cited. Keith Dunker has written extensively about this idea as have others. Perhaps also discuss why Poly Q rich regions are different from other IDRs and different from other IDRs that phase-separate.

      Agreed, we have cited two of Keith Dunker’s papers in our revised manuscript (73, 74).

      Minor notes:

      Please define Borg genomes (pg 25).

      Borgs are long extrachromosomal DNA sequences in methane-oxidizing Methanoperedens archaea, which display the potential to augment methane oxidation (101). They are now described in our revised manuscript. (Page 15, lines 12-14)

      Reviewer #2 (Recommendations For The Authors):

      The authors dance around disorder but never really quantify or show data. This seems like a strange blindspot.

      We apologize for not explaining this topic sufficiently well in our preprint manuscript. We have endeavored to do so in our revised manuscript.

      The authors claim the expression enhancement is "autonomous," but they have not ruled things out that would make it not autonomous.

      Evidence of the “autonomous” nature of expression enhancement is presented in Figure 1, Figure 4, and Figure 5 of the preprint manuscript.

      Recommendations for improving the writing and presentation.

      The title does not recapitulate the entire body of work. The first 5 figures are not represented by the title in any way, and indeed, I have serious misgivings as to whether the conclusion stated in the title is supported by the work. I would strongly suggest the authors change the title.

      Figure 2 could be supplemental.

      Thank you. We think it is important to keep Figure 2 in the text.

      Figures 4 and 5 are not discussed much or particularly well.

This reviewer's opinion of Figure 4 and Figure 5 stands in stark contrast to that of the first reviewer.

      The introduction, while very thorough, takes away from the main findings of the paper. It is more suited to a review and not a tailored set of minimal information necessary to set up the question and findings of the paper. The question that the authors are after is also not very clear.

      Thank you. The entire “Introduction” section has been extensively rewritten in the revised manuscript.

      Schematics of their fusion constructs and changes to the sequence would be nice, even if supplemental.

      Schematics of the fusion constructs are provided in Figure 1A.

      The methods section should be substantially expanded.

The methods section in the revised manuscript has been rewritten and expanded. The six JavaScript programs used in this work are listed in Table S4.

      The text is not always suited to the general audience and readership of eLife.

      We have now rewritten parts of our manuscript to make it more accessible to the broad readership of eLife.

      In some cases, section headers really don't match what is presented, or there is no evidence to back the claim.

      The section headers in the revised manuscript have been corrected.

A lot of the listed results in the back half of the paper could be a supplemental table; listing %s in a paragraph (several of them in a row) is never nice.

      Acknowledged. In the revised manuscript, we have removed almost all sentences listing %s.

      Minor corrections to the text and figures.

      There is a reference to table 1 multiple times, and it seems that there is a missing table. The current table 1 does not seem to be the same table referred to in some places throughout the text.

      Apologies for this mistake, which we have now corrected in our revised manuscript.

In some places it's not clear where new work is and where previous work is mentioned. It would help if the authors clearly stated "In previous work...."

      Acknowledged. We have corrected this oversight in our revised manuscript.

      Not all strains are listed in the strain table (KO's in figure 3 are not included)

      Apologies, we have now corrected Table S2, as suggested by this reviewer.

      Author response table 2.

      S. cerevisiae strains used in this study

1. Guard

Since headers can be sent in requests and received in responses, and have various limitations about what information can and should be mutable, headers' objects have a guard property. This is not exposed to the Web, but it affects which mutation operations are allowed on the headers object. Possible guard values are:

• none: default.
• request: guard for a headers object obtained from a request (Request.headers).
• request-no-cors: guard for a headers object obtained from a request created with Request.mode no-cors.
• response: guard for a headers object obtained from a response (Response.headers).
• immutable: guard that renders a headers object read-only; mostly used for ServiceWorkers.

Note: You may not append or set the Content-Length header on a guarded headers object for a response. Similarly, inserting Set-Cookie into a response header is not allowed: ServiceWorkers are not allowed to set cookies via synthesized responses.

Response objects

As you have seen above, Response instances are returned when fetch() promises are resolved. The most common response properties you'll use are:

• Response.status — An integer (default value 200) containing the response status code.
• Response.statusText — A string (default value ""), which corresponds to the HTTP status code message. Note that HTTP/2 does not support status messages.
• Response.ok — seen in use above, this is a shorthand for checking that status is in the range 200-299 inclusive. This returns a boolean value.

They can also be created programmatically via JavaScript, but this is only really useful in ServiceWorkers, when you are providing a custom response to a received request using a respondWith() method:

```javascript
const myBody = new Blob();

addEventListener("fetch", (event) => {
  // ServiceWorker intercepting a fetch
  event.respondWith(
    new Response(myBody, {
      headers: { "Content-Type": "text/plain" },
    }),
  );
});
```

The Response() constructor takes two optional arguments — a body for the response, and an init object (similar to the one that Request() accepts).

Note: The static method error() returns an error response. Similarly, redirect() returns a response resulting in a redirect to a specified URL. These are also only relevant to Service Workers.

Body

Both requests and responses may contain body data. A body is an instance of any of the following types:

• ArrayBuffer
• TypedArray (Uint8Array and friends)
• DataView
• Blob
• File
• String, or a string literal
• URLSearchParams
• FormData

The Request and Response interfaces share the following methods to extract a body. These all return a promise that is eventually resolved with the actual content.

• Request.arrayBuffer() / Response.arrayBuffer()
• Request.blob() / Response.blob()
• Request.formData() / Response.formData()
• Request.json() / Response.json()
• Request.text() / Response.text()

This makes usage of non-textual data much easier than it was with XHR. Request bodies can be set by passing body parameters:

```javascript
const form = new FormData(document.getElementById("login-form"));
fetch("/login", {
  method: "POST",
  body: form,
});
```

Both request and response (and by extension the fetch() function) will try to intelligently determine the content type. A request will also automatically set a Content-Type header if none is set in the options parameter.

Feature detection

Fetch API support can be detected by checking for the existence of Headers, Request, Response or fetch() on the Window or Worker scope. For example:

```javascript
if (window.fetch) {
  // run my fetch request here
} else {
  // do something with XMLHttpRequest?
}
```

Differences from jQuery.ajax()

The fetch specification differs from jQuery.ajax() in the following significant ways:

• The promise returned from fetch() won't reject on HTTP errors even if the response is an HTTP 404 or 500. Instead, as soon as the server responds with headers, the promise will resolve (with the ok property of the response set to false if the response isn't in the range 200–299). The promise will only reject on network failure or if anything prevented the request from completing.
• Unless fetch() is called with the credentials option set to include, fetch() won't send cookies in cross-origin requests, and won't set any cookies sent back in cross-origin responses. As of August 2018, the default credentials policy changed to same-origin.

See also: ServiceWorker API, HTTP access control (CORS), HTTP, Fetch polyfill, Fetch examples on GitHub.

      Certainly! Let's break down the provided information into simple terms:

1. Guard in Headers:
• Headers are like additional notes attached to an HTTP request or response.
• They have a property called "guard" that determines which operations are allowed on the headers object.
• Guard values include:
  • none (default): No specific restrictions.
  • request: Applied to headers obtained from a request.
  • request-no-cors: Applied to headers from a no-cors request.
  • response: Applied to headers obtained from a response.
  • immutable: Renders headers read-only, commonly used in ServiceWorkers.

Example (note: as the quoted docs say, guard is not exposed to scripts, so you cannot set it directly; it is determined by how the headers object was obtained):

```javascript
const myHeaders = new Headers(); // guard is "none"

const request = new Request("https://example.com/data");
console.log(request.headers); // this Headers object has guard "request"
```

2. Response Objects:
• When you make a request using fetch(), it returns a Response object.
• Key properties include:
  • Response.status: An integer indicating the response status code (e.g., 200 for success).
  • Response.statusText: A string corresponding to the HTTP status code message.
  • Response.ok: A boolean indicating if the status is in the range 200-299.

Example:

```javascript
const response = await fetch("https://example.com/api/data");
console.log(response.status); // 200
console.log(response.statusText); // OK
console.log(response.ok); // true
```

3. Creating Response Objects Programmatically:
• You can create a Response object programmatically in a ServiceWorker.
• This is useful when providing a custom response to a received request using respondWith().

Example:

```javascript
const myBody = new Blob();

addEventListener("fetch", (event) => {
  // ServiceWorker intercepting a fetch
  event.respondWith(
    new Response(myBody, {
      headers: { "Content-Type": "text/plain" },
    }),
  );
});
```

4. Body Content in Requests and Responses:
• Both requests and responses may have body content, which can be of various types (e.g., ArrayBuffer, Blob, JSON).
• Methods like arrayBuffer(), blob(), formData(), json(), and text() help extract the body content.

Example:

```javascript
const responseData = await response.json();

// Or for a request body
const formData = new FormData(document.getElementById("myForm"));
fetch("/api/data", {
  method: "POST",
  body: formData,
});
```

5. Feature Detection:
• You can check if the Fetch API is supported in a browser by looking for the existence of fetch on the window or Worker scope.

Example:

```javascript
if (window.fetch) {
  // Fetch is supported, proceed with fetch requests
} else {
  // Use an alternative like XMLHttpRequest
}
```

6. Differences from jQuery.ajax():
• The Fetch API differs from jQuery.ajax() in terms of error handling and handling of credentials in cross-origin requests.

Example:

```javascript
fetch("https://example.com/api/data")
  .then((response) => {
    if (!response.ok) {
      throw new Error("Network response was not OK");
    }
    return response.json();
  })
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.error("Error:", error);
  });
```
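The credentials point deserves a concrete illustration. A minimal sketch, assuming a hypothetical cross-origin endpoint: unless you opt in with credentials: "include", fetch() will not send cookies across origins.

```javascript
// "https://api.example.com/profile" is a placeholder cross-origin URL.
fetch("https://api.example.com/profile", {
  credentials: "include", // opt in to sending cookies on cross-origin requests
})
  .then((response) => response.json())
  .then((profile) => console.log(profile))
  .catch((error) => console.error("Error:", error));
```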

      These explanations and examples aim to simplify the concepts introduced in the provided information about Headers, Response objects, Body content, and feature detection in the Fetch API.

2. Uploading a file

Files can be uploaded using an HTML <input type="file" /> input element, FormData() and fetch().

```javascript
async function upload(formData) {
  try {
    const response = await fetch("https://example.com/profile/avatar", {
      method: "PUT",
      body: formData,
    });
    const result = await response.json();
    console.log("Success:", result);
  } catch (error) {
    console.error("Error:", error);
  }
}

const formData = new FormData();
const fileField = document.querySelector('input[type="file"]');
formData.append("username", "abc123");
formData.append("avatar", fileField.files[0]);
upload(formData);
```

Uploading multiple files

Files can be uploaded using an HTML <input type="file" multiple /> input element, FormData() and fetch().

```javascript
async function uploadMultiple(formData) {
  try {
    const response = await fetch("https://example.com/posts", {
      method: "POST",
      body: formData,
    });
    const result = await response.json();
    console.log("Success:", result);
  } catch (error) {
    console.error("Error:", error);
  }
}

const photos = document.querySelector('input[type="file"][multiple]');
const formData = new FormData();
formData.append("title", "My Vegas Vacation");
for (const [i, photo] of Array.from(photos.files).entries()) {
  formData.append(`photos_${i}`, photo);
}
uploadMultiple(formData);
```

Processing a text file line by line

The chunks that are read from a response are not broken neatly at line boundaries and are Uint8Arrays, not strings. If you want to fetch a text file and process it line by line, it is up to you to handle these complications. The following example shows one way to do this by creating a line iterator (for simplicity, it assumes the text is UTF-8, and doesn't handle fetch errors).

```javascript
async function* makeTextFileLineIterator(fileURL) {
  const utf8Decoder = new TextDecoder("utf-8");
  const response = await fetch(fileURL);
  const reader = response.body.getReader();
  let { value: chunk, done: readerDone } = await reader.read();
  chunk = chunk ? utf8Decoder.decode(chunk) : "";

  const newline = /\r?\n/gm;
  let startIndex = 0;

  while (true) {
    const result = newline.exec(chunk);
    if (!result) {
      if (readerDone) break;
      const remainder = chunk.substr(startIndex);
      ({ value: chunk, done: readerDone } = await reader.read());
      chunk = remainder + (chunk ? utf8Decoder.decode(chunk) : "");
      startIndex = newline.lastIndex = 0;
      continue;
    }
    yield chunk.substring(startIndex, result.index);
    startIndex = newline.lastIndex;
  }
  if (startIndex < chunk.length) {
    // Last line didn't end in a newline char
    yield chunk.substr(startIndex);
  }
}

async function run() {
  for await (const line of makeTextFileLineIterator(urlOfFile)) {
    processLine(line);
  }
}
run();
```

Checking that the fetch was successful

A fetch() promise will reject with a TypeError when a network error is encountered or CORS is misconfigured on the server-side, although this usually means permission issues or similar — a 404 does not constitute a network error, for example. An accurate check for a successful fetch() would include checking that the promise resolved, then checking that the Response.ok property has a value of true. The code would look something like this:

```javascript
async function fetchImage() {
  try {
    const response = await fetch("flowers.jpg");
    if (!response.ok) {
      throw new Error("Network response was not OK");
    }
    const myBlob = await response.blob();
    myImage.src = URL.createObjectURL(myBlob);
  } catch (error) {
    console.error("There has been a problem with your fetch operation:", error);
  }
}
```

      Certainly! Let's break down each section with simple explanations and examples:

1. Uploading a File:
• To upload a file, you can use an HTML <input type="file" /> element to let users choose a file.
• Combine this with FormData() to gather form data, and then use fetch() to send it to a server.

Example:

```javascript
async function upload(formData) {
  try {
    const response = await fetch("https://example.com/profile/avatar", {
      method: "PUT",
      body: formData,
    });
    const result = await response.json();
    console.log("Success:", result);
  } catch (error) {
    console.error("Error:", error);
  }
}

const formData = new FormData();
const fileField = document.querySelector('input[type="file"]');
formData.append("username", "abc123");
formData.append("avatar", fileField.files[0]);

upload(formData);
```

2. Uploading Multiple Files:
• If you want to upload multiple files, use the multiple attribute in the <input type="file" /> element.
• Similar to the single-file example, use FormData() and fetch().

Example:

```javascript
async function uploadMultiple(formData) {
  try {
    const response = await fetch("https://example.com/posts", {
      method: "POST",
      body: formData,
    });
    const result = await response.json();
    console.log("Success:", result);
  } catch (error) {
    console.error("Error:", error);
  }
}

const photos = document.querySelector('input[type="file"][multiple]');
const formData = new FormData();

formData.append("title", "My Vegas Vacation");
for (const [i, photo] of Array.from(photos.files).entries()) {
  formData.append(`photos_${i}`, photo);
}

uploadMultiple(formData);
```

3. Processing a Text File Line by Line:
• If you need to process a text file line by line, you can create an iterator.
• This example assumes UTF-8 encoding and uses a TextDecoder.

Example:

```javascript
async function* makeTextFileLineIterator(fileURL) {
  // ... (see provided code)
}

async function run() {
  for await (const line of makeTextFileLineIterator(urlOfFile)) {
    processLine(line);
  }
}

run();
```

4. Checking Fetch Success:
• It's essential to check if a fetch operation was successful.
• You can do this by checking response.ok to ensure the network response was okay.

Example:

```javascript
async function fetchImage() {
  try {
    const response = await fetch("flowers.jpg");
    if (!response.ok) {
      throw new Error("Network response was not OK");
    }
    const myBlob = await response.blob();
    myImage.src = URL.createObjectURL(myBlob);
  } catch (error) {
    console.error("There has been a problem with your fetch operation:", error);
  }
}
```

      These examples cover uploading files, handling multiple file uploads, processing text files line by line, and checking the success of a fetch operation. They demonstrate practical uses of the Fetch API in various scenarios.

3. Using the Fetch API

The Fetch API provides a JavaScript interface for accessing and manipulating parts of the protocol, such as requests and responses. It also provides a global fetch() method that provides an easy, logical way to fetch resources asynchronously across the network. Unlike XMLHttpRequest, which is a callback-based API, Fetch is promise-based and provides a better alternative that can be easily used in service workers. Fetch also integrates advanced HTTP concepts such as CORS and other extensions to HTTP.

A basic fetch request looks like this:

```javascript
async function logMovies() {
  const response = await fetch("http://example.com/movies.json");
  const movies = await response.json();
  console.log(movies);
}
```

Here we are fetching a JSON file across the network, parsing it, and printing the data to the console. The simplest use of fetch() takes one argument — the path to the resource you want to fetch — and does not directly return the JSON response body but instead returns a promise that resolves with a Response object. The Response object, in turn, does not directly contain the actual JSON response body but is instead a representation of the entire HTTP response. So, to extract the JSON body content from the Response object, we use the json() method, which returns a second promise that resolves with the result of parsing the response body text as JSON.

Note: See the Body section for similar methods to extract other types of body content.

Fetch requests are controlled by the connect-src directive of Content Security Policy rather than the directive of the resources it's retrieving.

      Certainly! Let's break down the code and concepts in simpler terms:

1. Fetch API Overview:
• The Fetch API is a way for JavaScript to make network requests (like fetching data from a server) and handle the responses.
• It's an improvement over the older XMLHttpRequest, providing a cleaner, promise-based syntax.

2. Basic Fetch Request:
• The fetch() function is used to initiate a network request.
• It takes a URL as an argument, specifying where to fetch data from.
• The fetch() function returns a Promise, which represents the result of the request.

3. Asynchronous Code with await:
• The async keyword in the function declaration indicates that the function contains asynchronous code.
• The await keyword is used to wait for the completion of a Promise before moving on to the next line of code.

4. Fetching JSON Data:
• In the example, we're fetching data from "http://example.com/movies.json".
• The response from the server is stored in the response variable.

5. Parsing JSON Response:
• The response.json() method is used to parse the response body as JSON.
• It returns another Promise that resolves to the actual JSON data.

6. Logging the Result:
• The movies variable holds the parsed JSON data.
• In this example, it's logged to the console. You could do other things with the data, like updating a webpage or performing calculations.

7. Summary:
• fetch() initiates a network request and returns a Promise.
• await is used to wait for the Promise to resolve, making asynchronous code easier to read and write.
• response.json() parses the JSON content of the response.
• The whole process is wrapped in an async function for cleaner asynchronous handling.

Example in Simpler Terms:

```javascript
async function logMovies() {
  // Fetch data from a server
  const response = await fetch("http://example.com/movies.json");

  // Parse the data as JSON
  const movies = await response.json();

  // Print the movies to the console
  console.log(movies);
}

// Call the function to see it in action
logMovies();
```

      This function fetches movie data, waits for the response, parses the JSON, and then logs the movies to the console. It's a convenient way to handle asynchronous operations in JavaScript.
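One caveat worth appending to this walkthrough: as the earlier sections note, fetch() resolves even for 404 or 500 responses, so a slightly more defensive sketch of the same function (reusing the example URL) would check response.ok before parsing:

```javascript
async function logMovies() {
  const response = await fetch("http://example.com/movies.json");
  if (!response.ok) {
    // HTTP errors still resolve the promise, so surface them explicitly
    throw new Error(`HTTP error: ${response.status}`);
  }
  const movies = await response.json();
  console.log(movies);
}
```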

    1. 4.4. How Data Informs Ethics# Think for a minute about consequentialism. On this view, we should do whatever results in the best outcomes for the most people. One of the classic forms of this approach is utilitarianism, which says we should do whatever maximizes ‘utility’ for most people. Confusingly, ‘utility’ in this case does not refer to usefulness, but to a sort of combo of happiness and wellbeing. When a utilitarian tries to decide how to act, they take stock of all the probable outcomes, and what sort of ‘utility’ or happiness will be brought about for all parties involved. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations). Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness. When we think about how data is used online, the idea of a utility calculus can help remind us to check whether we’ve really got enough data about how all parties might be impacted by some actions. Even if you are not a utilitarian, it is good to remind ourselves to check that we’ve got all the data before doing our calculus. This can be especially important when there is a strong social trend to overlook certain data. Such trends, which philosophers call ‘pernicious ignorance’, enable us to overlook inconvenient bits of data to make our utility calculus easier or more likely to turn out in favor of a preferred course of action. Can you think of an example of pernicious ignorance in social media interaction? What’s something that we might often prefer to overlook when deciding what is important? One classic example is the tendency to overlook the interests of children and/or people abroad when we post about travels, especially when fundraising for ‘charity tourism’. One could go abroad, and take a picture of a cute kid running through a field, or a selfie with kids one had traveled to help out. It was easy, in such situations, to decide the likely utility of posting the photo on social media based on the interest it would generate for us, without thinking about the ethics of using photos of minors without their consent. This was called out by The Onion in a parody article, titled “6-Day Visit To Rural African Village Completely Changes Woman’s Facebook Profile Picture”. The reckoning about how pernicious ignorance had allowed many to feel comfortable leaving the interests of many out of the utility calculus for use of images online continued. 
You can read an article about it here, or see a similar reckoning discussed by National Geographic: “For Decades, Our Coverage Was Racist. To Rise Above Our Past, We Must Acknowledge It”.

This section, particularly its exploration of utilitarianism in the context of social media, provides a thought-provoking perspective on ethical decision-making. The concept of the utility calculus as a method of predicting the outcomes and moral implications of our actions highlights the importance of comprehensive data collection and the potential pitfalls of biased or incomplete data. The discussion cleverly highlights the challenges of navigating social media in an ethical manner, which must consider the impact on all parties involved, including those we might be tempted to overlook.

    2. Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation.

I think that utilitarianism is not fit for every condition, and I don't actually agree with this concept. I think that personal benefits and feelings are also very important. People do not need to sacrifice themselves in order to make everyone happy.

I experienced context collapse unknowingly recently, and learning about dramaturgical analysis made me feel a lot better about it. I hosted a birthday party and decided to have the people I wanted around me for my birthday and, without much thought, invited a bunch of people. To my surprise, I hadn't thought about how all these people would mesh together and had a mini identity crisis because of it. The code switching in the room was crazy.

    2. While modified behaviors to present a persona or code switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting. For example: Speaking in a formal manner when giving a presentation or answering questions in a courtroom may be a way of authentically sharing your experiences and emotions, but tailored to the setting Sharing those same experiences and emotions with a close friend may look very different, but still can be authentic Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below:

This statement emphasises the relativity of behaviour, i.e. the same behaviour may present different expressions in different contexts. It stresses that authenticity is a relative concept that depends on the specific environment and social context in which an individual finds themselves. For example, the case of sharing an experience with a close friend shows that authenticity in an intimate relationship may be expressed in a more relaxed and natural way.

    3. 6.4. Personas, Code Switching, Context Collapse# The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching. While modified behaviors to present a persona or code switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting. For example: Speaking in a formal manner when giving a presentation or answering questions in a courtroom may be a way of authentically sharing your experiences and emotions, but tailored to the setting Sharing those same experiences and emotions with a close friend may look very different, but still can be authentic Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below: Fig. 6.6 President Obama giving a very different handshakes to a white man and a Black man (Kevin Durant). See also this Key & Peele comedy sketch on greeting differences with Jordan Peele playing Obama, and also Key & Peele’s Obama’s Anger Translator sketch.# Read/watch more about code-switching here: How Code-Switching Explains The World ‘Key & Peele’ Is Ending. Here Are A Few Of Its Code Switch-iest Moments Still, modifications of behavior can also be inauthentic. In the YouTube Video Essay: YouTube: Manufacturing Authenticity (For Fun and Profit!) by Lindsay Ellis, Ellis explores nuances in authenticity as a YouTuber. She highlights the emotional labor of keeping emotional expressions consistent with their public persona, even when they are having different or conflicted feelings. She also highlights how various “calls to action” (e.g., “subscribe to my channel”) may be necessary for business and can be (and appear) authentic or inauthentic. 6.4.1. Context Collapse# Since we have different personas and ways of behaving in different groups of people, what happens if different groups of people are observing you at the same time? For example, someone might not know how to behave if they were at a restaurant with their friends and they noticed that their parents were seated at the table next to them. This is phenomenon is called “context collapse.” On social media, context collapse is a common concern, since on a social networking site you might be connected to very different people (family, different groups of friends, co-workers, etc.). Additionally, something that was shared within one context (like a private message), might get reposted in another context (publicly posted elsewhere). 6.4.2. Reflection Questions# How do you notice yourself changing how you express yourself in different situations, particularly on social media? Do you feel like those changes or expressions are authentic to who you are, do they compromise your authenticity in some way?

      We act "code switch" in different places (being serious at work but relaxed with friends) and both ways can be the real us. On social media, when we post, all our different groups see the same posts, it can make us wonder if we're being true to ourselves in each situation.

    4. which is called code-switching.

I know this label of "code switching" can describe an extreme, but it does come from the need to adopt the same attributes and beliefs as those around you to fit in. This can be related to a certain work environment, where it is easier to agree with your boss than to contribute your own opinion. This can be an issue, however, when you are in a position with some power, where your actions have real influence, compared to a boss-employee scenario where the employee has very little to no power in their situation.

    5. Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below: Fig. 6.6 President Obama giving a very different handshakes to a white man and a Black man (Kevin Durant). See also this Key & Peele comedy sketch on greeting differences with Jordan Peele playing Obama, and also Key & Peele’s Obama’s Anger Translator sketch.

      I use code-switching on a daily basis. The way in which I speak and act at work or in class is completely different than the way in which I act at home with my family and friends. Even my online self tends to showcase my more formal side as if I am at work or another professional setting. This is honestly why I have multiple accounts; One of my accounts is more business and a representation of how I want to be perceived when in a formal setting whereas my other account highlights more of the ways I act at home with friends or family. Code-switching makes you feel safe and accepted in an environment that you feel would not be as accepting of your true self.

    6. The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching.

The prevalence of multiple personas on social media makes this intriguing to me. People frequently behave differently when they are in front of a screen. Furthermore, a lot of people identify quite differently. The concept of a public persona and code-switching also reflects the dynamic nature of human behavior and communication.

    7. While modified behaviors to present a persona or code switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting.

      I feel that it takes practice to make it seem more authentic. Although it can be useful for certain situations, it can be draining by the end of it.

    8. The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching. While modified behaviors to present a persona or code switch may at first look inauthentic, they can be a way of authentically expressing ourselves in each particular setting. For example: Speaking in a formal manner when giving a presentation or answering questions in a courtroom may be a way of authentically sharing your experiences and emotions, but tailored to the setting Sharing those same experiences and emotions with a close friend may look very different, but still can be authentic Different communities have different expectations and meanings around behavior and presentation. So what is appropriate authentic behavior depends on what group you are from and what group you are interacting with, like this gif of President Obama below:

In my opinion, public personas and code-switching are intricate yet crucial facets of interpersonal communication. They enable people to successfully negotiate various social environments, showcasing various aspects of their true selves in polite and situation-appropriate ways. I also really like the example of Obama, who used a different type of greeting with each audience. This is one way he showed his social intelligence and respect for others.

    1. https://www.freecodecamp.org/news/use-typescript-with-react/ How to Use TypeScript with React - freeCodeCamp Nov 15, 2023 ... If the file does not contain any JSX-specific code, then you can use the .ts extension instead of the .tsx extension. To create a component in ...

      to

1. Under the Hood

For queries with timestamps, Mongoose adds 2 properties to each update query:

• Add updatedAt to $set
• Add createdAt to $setOnInsert

For example, if you run the below code:

```javascript
mongoose.set('debug', true);

const userSchema = new Schema({ name: String }, { timestamps: true });
const User = mongoose.model('User', userSchema);

await User.findOneAndUpdate({}, { name: 'test' });
```

You'll see the below output from Mongoose debug mode:

```
Mongoose: users.findOneAndUpdate({}, { '$setOnInsert': { createdAt: new Date("Sun, 27 Feb 2022 00:26:27 GMT") }, '$set': { updatedAt: new Date("Sun, 27 Feb 2022 00:26:27 GMT"), name: 'test' }}, {...})
```

Notice the $setOnInsert for createdAt and $set for updatedAt. MongoDB's $setOnInsert operator applies the update only if a new document is upserted. So, for example, if you want to only set updatedAt if a new document is created, you can disable the updatedAt timestamp and set it yourself as shown below:

```javascript
await User.findOneAndUpdate(
  {},
  { $setOnInsert: { updatedAt: new Date() } },
  { timestamps: { createdAt: true, updatedAt: false } }
);
```

      Certainly! Let's break down the information in simpler terms with examples:

      Under the Hood - MongoDB Update Queries with Timestamps:

1. Properties Added by Mongoose:
• When you perform update queries with timestamps enabled, Mongoose adds two special properties to the MongoDB update operation:
  • updatedAt: Added to the $set operator. It represents the last update time.
  • createdAt: Added to the $setOnInsert operator. It represents the creation time and is applied only when a new document is upserted (inserted if not found).

2. Example - Update Query:
• Consider the following code:

```javascript
const userSchema = new Schema({ name: String }, { timestamps: true });
const User = mongoose.model('User', userSchema);

await User.findOneAndUpdate({}, { name: 'test' });
```

• In the debug output, you'll see MongoDB update operators like $setOnInsert and $set:

```
Mongoose: users.findOneAndUpdate({}, { '$setOnInsert': { createdAt: new Date("Sun, 27 Feb 2022 00:26:27 GMT") }, '$set': { updatedAt: new Date("Sun, 27 Feb 2022 00:26:27 GMT"), name: 'test' }}, {...})
```

3. Explanation of $setOnInsert and $set:
• $setOnInsert: It sets the specified values only if a new document is inserted during an upsert operation. In the example, it sets createdAt only if a new document is created.
• $set: It sets the specified values regardless of whether the document is new or existing. In the example, it sets updatedAt and updates the name.

4. Disabling updatedAt Timestamp and Setting Manually:
• If you want to handle updatedAt manually and disable automatic updates, you can do so:

```javascript
await User.findOneAndUpdate(
  {},
  { $setOnInsert: { updatedAt: new Date() } },
  { timestamps: { createdAt: true, updatedAt: false } }
);
```

• This way, you can control when updatedAt is set, and it won't be automatically managed by Mongoose.

      In simple terms, Mongoose adds special properties like updatedAt and createdAt to MongoDB update queries when timestamps are enabled. These properties are essential for tracking update and creation times. You can customize the behavior by manually handling timestamps or adjusting the update options.
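To actually see $setOnInsert fire, you need an upsert. A minimal sketch, assuming the same User model as above (the filter value is just for illustration):

```javascript
// With `upsert: true`, a new document is inserted when no match is found.
// Only then does $setOnInsert apply, so `createdAt` is written exactly once,
// while `$set` refreshes `updatedAt` on this and every later update.
const doc = await User.findOneAndUpdate(
  { name: 'brand-new' }, // assume no existing document matches
  { name: 'brand-new' },
  { upsert: true, new: true }
);
console.log(doc.createdAt); // set via $setOnInsert at insert time
console.log(doc.updatedAt); // set via $set on every matching update
```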

2. Alternate Property Names

For the purposes of these docs, we'll always refer to createdAt and updatedAt. But you can overwrite these property names as shown below.

```javascript
const userSchema = new Schema({ name: String }, {
  timestamps: {
    createdAt: 'created_at', // Use `created_at` to store the created date
    updatedAt: 'updated_at'  // and `updated_at` to store the last updated date
  }
});
```

Disabling Timestamps

save(), updateOne(), updateMany(), findOneAndUpdate(), update(), replaceOne(), and bulkWrite() all support a timestamps option. Set timestamps: false to skip setting timestamps for that particular operation.

```javascript
let doc = await User.create({ name: 'test' });
console.log(doc.createdAt); // 2022-02-26T23:28:54.264Z
console.log(doc.updatedAt); // 2022-02-26T23:28:54.264Z

doc.name = 'test2';
// Setting `timestamps: false` tells Mongoose to skip updating `updatedAt` on this `save()`
await doc.save({ timestamps: false });
console.log(doc.updatedAt); // 2022-02-26T23:28:54.264Z

// Similarly, setting `timestamps: false` on a query tells Mongoose to skip updating `updatedAt`.
doc = await User.findOneAndUpdate({ _id: doc._id }, { name: 'test3' }, {
  new: true,
  timestamps: false
});
console.log(doc.updatedAt); // 2022-02-26T23:28:54.264Z

// Below is how you can disable timestamps on a `bulkWrite()`
await User.bulkWrite([{
  updateOne: {
    filter: { _id: doc._id },
    update: { name: 'test4' },
    timestamps: false
  }
}]);
doc = await User.findOne({ _id: doc._id });
console.log(doc.updatedAt); // 2022-02-26T23:28:54.264Z
```

You can also set the timestamps option to an object to configure createdAt and updatedAt separately. For example, in the below code, Mongoose sets createdAt on save() but skips updatedAt.

```javascript
const doc = new User({ name: 'test' });

// Tell Mongoose to set `createdAt`, but skip `updatedAt`.
await doc.save({ timestamps: { createdAt: true, updatedAt: false } });
console.log(doc.createdAt); // 2022-02-26T23:32:12.478Z
console.log(doc.updatedAt); // undefined
```

Disabling timestamps also lets you set timestamps yourself. For example, suppose you need to correct a document's createdAt or updatedAt property. You can do that by setting timestamps: false and setting createdAt yourself as shown below.

```javascript
let doc = await User.create({ name: 'test' });

// To update `updatedAt`, do a `findOneAndUpdate()` with `timestamps: false` and
// `updatedAt` set to the value you want
doc = await User.findOneAndUpdate({ _id: doc._id }, { updatedAt: new Date(0) }, {
  new: true,
  timestamps: false
});
console.log(doc.updatedAt); // 1970-01-01T00:00:00.000Z

// To update `createdAt`, you also need to set `strict: false` because `createdAt`
// is immutable
doc = await User.findOneAndUpdate({ _id: doc._id }, { createdAt: new Date(0) }, {
  new: true,
  timestamps: false,
  strict: false
});
console.log(doc.createdAt); // 1970-01-01T00:00:00.000Z
```

Timestamps on Subdocuments

Mongoose also supports setting timestamps on subdocuments. Keep in mind that createdAt and updatedAt for subdocuments represent when the subdocument was created or updated, not the top level document. Overwriting a subdocument will also overwrite createdAt.

```javascript
const roleSchema = new Schema({ value: String }, { timestamps: true });
const userSchema = new Schema({ name: String, roles: [roleSchema] });

const doc = await User.create({ name: 'test', roles: [{ value: 'admin' }] });
console.log(doc.roles[0].createdAt); // 2022-02-27T00:22:53.836Z
console.log(doc.roles[0].updatedAt); // 2022-02-27T00:22:53.836Z

// Overwriting the subdocument also overwrites `createdAt` and `updatedAt`
doc.roles[0] = { value: 'root' };
await doc.save();
console.log(doc.roles[0].createdAt); // 2022-02-27T00:22:53.902Z
console.log(doc.roles[0].updatedAt); // 2022-02-27T00:22:53.902Z

// But updating the subdocument preserves `createdAt` and updates `updatedAt`
doc.roles[0].value = 'admin';
await doc.save();
console.log(doc.roles[0].createdAt); // 2022-02-27T00:22:53.902Z
console.log(doc.roles[0].updatedAt); // 2022-02-27T00:22:53.909Z
```

      Certainly! Let's simplify the information and examples provided:

      Timestamps and Property Names:

1. Custom Property Names:
• By default, Mongoose uses createdAt and updatedAt as timestamp properties. However, you can customize these names:

```javascript
const userSchema = new Schema(
  { name: String },
  { timestamps: { createdAt: 'created_at', updatedAt: 'updated_at' } }
);
```

Now, instead of createdAt and updatedAt, your properties will be named created_at and updated_at.

2. Disabling Timestamps:
• You can choose to skip updating timestamps for specific operations by setting timestamps: false.

```javascript
// Example with save()
await doc.save({ timestamps: false });

// Example with findOneAndUpdate()
doc = await User.findOneAndUpdate(
  { _id: doc._id },
  { name: 'test3' },
  { new: true, timestamps: false }
);
```

This prevents updatedAt from being automatically updated during those operations.

3. Custom Configuration:
• You can configure createdAt and updatedAt separately using an object:

```javascript
await doc.save({ timestamps: { createdAt: true, updatedAt: false } });
```

This example sets createdAt on save but skips updating updatedAt.

4. Disabling Timestamps and Setting Manually:
• If you need to correct or set timestamps manually:

```javascript
// Example with updating `updatedAt`
doc = await User.findOneAndUpdate(
  { _id: doc._id },
  { updatedAt: new Date(0) },
  { new: true, timestamps: false }
);

// Example with updating `createdAt` (requires setting strict: false
// because `createdAt` is immutable)
doc = await User.findOneAndUpdate(
  { _id: doc._id },
  { createdAt: new Date(0) },
  { new: true, timestamps: false, strict: false }
);
```

      Timestamps on Subdocuments:

1. Setting Timestamps on Subdocuments:
• You can also have timestamps on subdocuments:

```javascript
const roleSchema = new Schema({ value: String }, { timestamps: true });
const userSchema = new Schema({ name: String, roles: [roleSchema] });
```

Here, each role in the roles array will have its own createdAt and updatedAt.

2. Overwriting Subdocuments and Timestamps:
• Overwriting a subdocument will also overwrite its createdAt and updatedAt.

```javascript
doc.roles[0] = { value: 'root' };
await doc.save();
```

3. Updating Subdocuments and Timestamps:
• Updating a subdocument preserves createdAt and updates updatedAt.

```javascript
doc.roles[0].value = 'admin';
await doc.save();
```

      In simple terms, these features allow you to customize how timestamps are handled in your Mongoose models, including changing property names, disabling automatic updates for specific operations, and managing timestamps on subdocuments.

    1. code

the query rather than the code?

    2. Exemple 1

This applies to the other images showing queries as well.

In markdown you can use special markup to encapsulate a code block:

```language-name
query tousMesarticles {
  user {
    blablabla
  }
}
```

This notation avoids having to manage images, which are rather heavy both in terms of file size and in terms of maintenance (if the query were to change, a new image would have to be generated...).

1. ```javascript
   const schema = new Schema({ name: { type: String, required: true } });
   const Cat = db.model('Cat', schema);

   // This cat has no name :(
   const cat = new Cat();
   let error;
   try {
     await cat.save();
   } catch (err) {
     error = err;
   }
   assert.equal(error.errors['name'].message, 'Path `name` is required.');

   error = cat.validateSync();
   assert.equal(error.errors['name'].message, 'Path `name` is required.');
   ```

      Certainly! Let's break down the code and explanations in simpler terms:

1. Schema Definition:
• You have defined a Mongoose schema for a "Cat" with a property called "name." The "name" field is of type String and is marked as required, meaning every cat must have a name.

```javascript
const schema = new Schema({ name: { type: String, required: true } });
```

2. Creating a Cat Model:
• You use the schema to create a Mongoose model named "Cat."

```javascript
const Cat = db.model('Cat', schema);
```

3. Creating a Cat Instance without a Name:
• You create a new cat instance without providing a name.

```javascript
const cat = new Cat();
```

Now, this cat doesn't have a name.

4. Attempting to Save the Cat:
• You try to save the cat using cat.save().

```javascript
let error;
try {
  await cat.save();
} catch (err) {
  error = err;
}
```

Since the "name" is marked as required in the schema, and the cat doesn't have a name, saving it should result in an error.

5. Handling the Error:
• You catch the error that occurs during the save operation.

```javascript
assert.equal(error.errors['name'].message, 'Path `name` is required.');
```

The error message indicates that the "name" field is required, and it points out that the "name" path is missing.

6. Validating Sync:
• You also perform synchronous validation using cat.validateSync().

```javascript
error = cat.validateSync();
```

This is another way to trigger validation without saving the document. The error object will contain information about validation errors.

7. Asserting the Validation Error:
• You assert that the error received from validation has the expected message indicating that the "name" path is required.

```javascript
assert.equal(error.errors['name'].message, 'Path `name` is required.');
```

This assertion confirms that the validation is working as expected, and it identifies the issue with the missing "name" field.

      In summary, the code demonstrates how Mongoose enforces validation rules specified in the schema, ensuring that essential fields, like "name" in this case, are present before saving a document.
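For contrast, a minimal sketch of the passing case, assuming the same Cat model and assert import as in the snippet above: providing the required field lets both validateSync() and save() succeed.

```javascript
// A cat with a name satisfies the `required` validator.
const namedCat = new Cat({ name: 'Whiskers' });

// validateSync() returns undefined when there are no validation errors.
assert.equal(namedCat.validateSync(), undefined);

// And save() resolves instead of rejecting with a ValidationError.
await namedCat.save();
```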

    1. Log into Reddit (PRAW)# These are our normal steps get PRAW loaded and logged into Reddit import praw

It's always interesting how you can just import a package, write a few lines of code that feed info to things from the package, and then the package does all the work for you. I feel this must be a powerful thing in programming? Imagine the increase in productivity compared to when this was not a thing. I wonder if this was always a thing in programming? Maybe local imports were normal? How about ones like this from elsewhere? (Were they just downloaded manually and placed somewhere they could be accessed?)

    1. secrete “housekeeping” antibodies

This is like an OBD diagnostic code telling you where the problem is. Got an XYZ receptor AA? Something is fucking up your XYZ receptors.

    1. Author Response

      eLife assessment

      This useful study uses a mouse model of pancreatic cancer to examine mitochondrial mass and structure in atrophying muscle along with aspects of mitochondrial metabolism in the same tissue. Most relevant are the solid transcriptomics and proteomics approaches to map out related changes in gene expression networks in muscle during cancer cachexia.

      Response: We very much appreciate the positive feedback from the editors on our article and are delighted to have it published in eLife. Our sincere thanks to the Reviewers for their positive feedback on our work, and for their insightful and constructive comments.

      Reviewer #1 (Public Review):

      Summary:

      This important study provides a comprehensive evaluation of skeletal muscle mitochondrial function and remodeling in a genetically engineered mouse model of pancreatic cancer cachexia. The study builds upon and extends previous findings that implicate mitochondrial defects in the pathophysiology of cancer cachexia. The authors demonstrate that while the total quantity of mitochondria from skeletal muscles of mice with pancreatic cancer cachexia is similar to controls, mitochondria were elongated with disorganized cristae, and had reduced oxidative capacity. The mitochondrial dysfunction was not associated with exercise-induced metabolic stress (insufficient ATP production), suggesting compensation by glycolysis or other metabolic pathways. However, mitochondrial dysfunction can lead to increased production of ROS/oxidative stress and would be expected to interfere with carbohydrate and lipid metabolism, events that are linked to cancer-induced muscle loss. The data are convincing and were collected and analyzed using state-of-the-art techniques, with unbiased proteomics and transcriptomics analyses supporting most of their conclusions.

      Additional Strengths:

      The authors utilize a genetically engineered mouse model of pancreatic cancer which recapitulates key aspects of human PDAC including the development of cachexia, making the model highly appropriate and translational.

      The authors perform transcriptomic and proteomics analyses on the same tissue, providing a comprehensive analysis of the transcriptional networks and protein networks changed in the context of PDAC cachexia.

      Weaknesses:

      The authors refer to skeletal muscle wasting induced by PDAC as sarcopenia. However, the term sarcopenia is typically reserved for the loss of skeletal muscle mass associated with aging.

      Response: We agree that the term sarcopenia initially refers to aged muscle, but its use has spread to other fields, including oncology (for example, in this article, which we quote: Mintziras I et al. Sarcopenia and sarcopenic obesity are significantly associated with poorer overall survival in patients with pancreatic cancer: Systematic review and meta-analysis. Int J Surg 2018;59:19-26). Actually, the term sarcopenia is now widely used in the literature and in the clinic to describe the loss of muscle mass and strength in cancer patients (see for example, this recent review: Papadopetraki A. et al. The Role of Exercise in Cancer-Related Sarcopenia and Sarcopenic Obesity. Cancers 2023;15;5856).

In Figure 2, the MuRF1 IHC staining appears localized to the extracellular space surrounding blood vessels and myofibers, which causes concern as to the specificity of the antibody staining. MuRF1, as a muscle-specific E3 ubiquitin ligase that degrades myofibrillar proteins, would be expected to be expressed in the cytosol of muscle fibers.

Response: We agree that MuRF1 IHC staining was also observed in the extracellular space, which was a surprise and for which we have no explanation to date.

      Disruptions to skeletal muscle metabolism in PDAC mice are predicted based on mitochondrial dysfunction and the transcriptomic and proteomics data. The manuscript could therefore be strengthened by additional measures looking at skeletal muscle metabolites, or linking the findings to previous work that has looked at the skeletal muscle metabolome in related models of PDAC cachexia (Neyroud et al., 2023).

Response: We agree that our omics data could be strengthened by additional measures looking at skeletal muscle metabolites. It's an excellent suggestion to parallel the transcriptomic and proteomic data we obtained on the gastrocnemius muscle with the metabolomic data obtained by Neyroud et al. on the same muscle. These authors used a different mouse model of PDAC from our KIC GEMM, namely an allograft model implanting KPC cells (derived from the pancreatic tumor of KPC mice, another PDAC GEMM) into syngeneic recipient mice. They carried out a proteomic study on the tibialis anterior muscle and a metabolomic study on the gastrocnemius muscle. Their proteomics data identified, in particular, a KPC-induced reduction in the relative abundance of proteins annotated to oxidative phosphorylation, consistent with our data showing reduced mitochondrial activity pathways. Their metabolomic data showed a reduced abundance of many amino acids, as expected, and of intermediates of the mitochondrial TCA cycle (malate and fumarate) in KPC-atrophied muscle, consistent with the reduced mitochondrial metabolic pathways that we illustrated. In contrast, metabolites that increased in abundance included those related to oxidative stress and redox homeostasis, which is not surprising given the profound oxidative stress affecting atrophied muscle. Finally, we noted in Neyroud's metabolomic data the dysregulation of certain lipids and nucleotides in atrophied muscle, which is very interesting to relate to our study describing alterations in lipid and nucleotide metabolic pathways.

      Reviewer #2 (Public Review):

      The present work analyzed the mitochondrial function and bioenergetics in the context of cancer cachexia induced by pancreatic cancer (PDAC). The authors used the KIC transgenic mice that spontaneously develop PDAC within 9-11 weeks of age. They deeply characterize bioenergetics in living mice by magnetic resonance (MR) and mitochondrial function/morphology mainly by oxygraphy and imaging on ex vivo muscles. By MR they found that phosphocreatine resynthesis and maximal oxidative capacity were reduced in the gastrocnemius muscle of tumor-bearing mice during the recovery phase after 6 minutes of 1 Hz electrical stimulation while pH was reduced in muscle during the stimulation time. By oxygraphy, the authors showed a decrease in basal respiration, proton leak, and maximal respiration in tumor-bearing mice that was associated with the decrease of complex I, II, and IV activity, a reduction of OXPHOS proteins, mitochondrial mass, mtDNA, and to several morphological alterations of mitochondrial shape. The authors performed transcriptomic and proteomic analyses to get insights into mitochondrial defects in the muscles of PDAC mice. By IPA analyses on transcriptomics, they found an increase in the signature of protein degradation, atrophy, and glycolysis and a downregulation of muscle function. Focusing on mitochondria they showed a downregulation mainly in OXPHOS, TCA cycle, and mitochondrial dynamics genes and upregulation of glycolysis, ROS defense, mitophagy, and amino acid metabolism. IPA analysis on proteomics revealed major changes in muscle contraction and metabolic pathways related to lipids, protein, nucleotide, and DNA metabolism. Focusing on mitochondria, the protein changes mainly were related to OXPHOS, TCA cycle, translation, and amino acid metabolism.

      The major strength of the paper is the bioenergetics and mitochondrial characterization, associated with the transcriptomic and proteomic analyses, in PDAC mice, which confirmed some published data on mitochondrial dysfunction but also underlined some novel metabolic insights such as nucleotide metabolism.

      There are minor weaknesses related to some analyses on mitochondrial proteins and to the fact that proteomic and transcriptomic comparison may be problematic in catabolic conditions because some gene expression is required to maintain or re-establish enzymes/proteins that are destroyed by the proteolytic systems (including the autophagy proteins and ubiquitin ligases). The authors should consider the following points.

      Point 1. The authors used the name sarcopenia as synonymous with muscle atrophy. However, sarcopenia clearly defines the disease state (disease code: ICD-10-CM (M62.84)) of excessive muscle loss and force drop during ageing (Ref: Anker SD et al. J Cachexia Sarcopenia Muscle 2016 Dec;7(5):512-514.). Therefore, the word sarcopenia must be used only when pathological age-related muscle loss is the subject of study. Sarcopenia can be present in cancer patients who also experience cachexia; however, since the tumor-bearing mice in this study are 7-9 weeks old, the authors should refrain from using sarcopenia and instead replace it with the words muscle atrophy/muscle wasting/muscle loss.

      Response: This issue has also been raised by Reviewer #1. We agree that the term sarcopenia historically refers to aged muscle, but it is also used in oncology (for example, in this article, which we quote: Mintziras I et al. Sarcopenia and sarcopenic obesity are significantly associated with poorer overall survival in patients with pancreatic cancer: Systematic review and meta-analysis. Int J Surg 2018;59:19-26). Indeed, the term sarcopenia is now widely used in the literature and in the clinic to describe the loss of muscle mass and strength in cancer patients (see, for example, this recent review: Papadopetraki A. et al. The Role of Exercise in Cancer-Related Sarcopenia and Sarcopenic Obesity. Cancers 2023;15;5856).

      Point 2. Most of the analyses of mitochondrial function are appropriate. However, the methodological approach to determining the mitochondrial fusion and fission machinery shown in Fig. 5F is wrong. The correct way is to normalize OPA1 and MFN1/2 to mitochondrial proteins such as VDAC/porin. In fact, by loading the same amount of total protein (see actin in panel 5F), the difference between a normal muscle and a muscle with enhanced protein breakdown is lost. We should expect a decrease in actin level in tumor-bearing mice with muscle atrophy, while the blots clearly show the same level due to the normalization of protein content. Moreover, by loading the same amount of protein in the gel, the atrophying muscle lysates become enriched in the proteins/organelles that are less affected by the proteolysis, resulting in an artefactual increase. The correct way would be to lyse the whole muscle of control and tumor-bearing mice in an identical volume and to load the same volume for control and cachectic muscles in western blot. Alternatively, normalizing the abundance of mitochondrial shaping proteins to mitochondrial transmembrane or matrix proteins (mito mass) should compensate for the loading normalization. Because the authors showed elongated mitochondria despite mitophagy genes being up, fragmentation may be altered. Moreover, the DNM1l gene is suppressed, and therefore the DRP1 protein must be analyzed. Finally, the OPA1 protein has different isoforms due to the action of proteases like OMA1 and YME1L; these isoforms elicit different functions, the long one being pro-fusion while the short ones are not. The authors must quantify the long and short isoforms of OPA1.

      Response: We acknowledge that our analysis of a minor set of proteins involved in mitochondrial dynamics by Western blotting (Figure 5F) is basic and could have been improved. We thank the Reviewer for all the suggestions, which will be very useful in future projects studying the subject in greater depth and according to the molecular characteristics of each player in mitochondrial fusion, fission, mitophagy and biogenesis.

      Point 3. The comparison of proteomic and transcriptomic profiles to identify concordance or not is problematic when atrophy programs are induced. In fact, most of the transcription-dependent upregulation serves to preserve/maintain/re-establish enzymes that are consumed during enhanced protein breakdown. For instance, the ubiquitin ligases, when activated, undergo autoubiquitination and proteasomal degradation. The same happens for several autophagy-related genes belonging to the conjugation system (LC3, Gabarap), the cargo recognition pathways (e.g. Ubiquitin, p62/SQSTM1), the selective autophagy system (e.g. BNIP3, PINK/PARKIN), and metabolic enzymes (e.g. GAPDH, lipin). Finally, if identical amounts of protein were loaded for mass spec, the selective-enrichment issues raised in Point 2 should be considered. Therefore, these issues should be addressed in the discussion when comparing proteomics and transcriptomics.

      Response: We fully agree with the Reviewer that seeking concordance between transcriptomic and proteomic data in the case of an organ affected by a high level of proteolysis is a difficult business. Another major difficulty we discussed in the Discussion section of the article is the fact that there is no concordance between RNA and protein level for a good proportion of proteins, for multiple reasons, so each level of omics has to be interpreted independently to give information on the pathophysiology of the organ studied.

    1. Author Response

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Recommendations for The Authors):

      Major comments:

      1) The immunolabeling data in Figure S4 shows no change in puncta number but reduced puncta size in Kit KO. sIPSC data show reduced frequency but little change in amplitude. These data would seem contradictory in that one suggests reduced synaptic strength, but not number, and the other suggests reduced synapse number, but not strength. How do the authors reconcile these results?

      Regarding the synaptic puncta: in Kit KO (or KL KO), we have not detected an overt reduction in the average VGAT/Gephyrin/Calbindin-positive puncta density or puncta size per animal. With respect to puncta size, only in the Kit KO condition, and only when individual puncta are assessed, does this modest (~10%) difference in size become statistically significant. In the revision, we eliminate this figure and focus on the per-animal averages.

      We interpret that the reduction in sIPSC and mIPSC frequency likely stems from a decreased proportion of functional synapse sites. The number of MLIs, their action potential generation, the density of synaptic puncta, and the ability of direct stimulation to evoke release and equivalent postsynaptic currents are all similar in Control vs Kit KO. It is therefore feasible that a reduced frequency of postsynaptic inhibitory events is due to a reduced ability of MLI action potentials to invade the axon terminal, and/or an impaired ability of depolarization to drive transmitter release (e.g. via coordinated calcium flux). That is, while the number of MLIs and their synapses appear similar, the reduced mIPSC frequency suggests a reduced proportion of Kit KO synapse sites that function properly, or a reduced probability that they do.

      2) Related to point 1, it would be helpful to see immunolabeling data from Kit ligand KO mice. Do these show the same pattern of reduced puncta size but no change in number?

      Although we have not added a figure, we have now added experiments and a corresponding analysis in the manuscript. As we had previously for Kit KO, we now for KL KO conducted IHC for VGAT, Gephyrin, and Calbindin, and we analyzed triple-positive synaptic puncta in the molecular layer of Pcp2 Cre KL KO mice and Control (Pcp2 Cre negative, KL floxed homozygous) mice. We did not find a gross reduction in the average synaptic puncta size or density, or in the PSD-95 pinceau size. From this initial analysis, it appears that the presynaptic hypotrophy is more notable in the receptor than in the ligand knockout. We speculate that this is perhaps because the Kit receptor may have basal activity in the absence of Kit ligand, that Kit may serve a presynaptic scaffolding role that is lost in the receptor (but not the ligand) knockout, or simply that the embryonic timing of the Pax2 Cre vs Pcp2 Cre recombination events is more relevant to pinceaux development, especially as basket cells are born primarily prenatally.

      3) The data using KL overexpression in PC (figure 4E,F) are intriguing, but puzzling. The reduction in sIPSC frequency and amplitude in the control PC is much greater than seen in the Kit or KL KO. The interpretation of these data, "Thus, KL-Kit levels may not set the number of MLI:PC release sites, but may instead influence the proportion of synapses that are functional for neurotransmission (Figure 4G)" is not clear and the reasoning here should be explained in more detail, perhaps in the discussion.

      We have attempted to clarify this portion of the manuscript by eliminating the cartoon of the proposed model, and by revising and adding to the discussion. Either MLI Kit KO or PC KL KO seems to preserve the absolute number of MLI:PC anatomical synapse sites (IHC) but to reduce the proportion of those synapses that contribute to neurotransmission (mIPSC). We speculate that sparse PC KL overexpression (OX) may 1) weaken inhibition onto surrounding control PCs by diminishing KL OX PC to Control PC inhibition, and/or 2) act retrogradely through MLI Kit to potentiate MLI:MLI inhibition, reducing MLI:PC inhibition at neighboring Control PCs.

      Minor comments:

      1) In the first sentence of the results, should "Figure 1A, B" be "Figure C, D"?

      Yes, corrected.

      2) The top of page 6 states "the mean mIPSC amplitude was ~10% greater in PC KL KO than in control", this does not appear to be the case in Figure 3E. control and KL KO look very similar here.

      In this portion of the text citing the modest 10% increase in mIPSC amplitude, we are referring to the average amplitude of all individual mIPSC events in the PC KL KO condition; in the figure referred to by the reviewer (3E), we are instead referring to the average of all mIPSC event amplitudes per KL KO PC. Because of the dramatic difference in sample size for individual events vs cells, this modest difference rises to statistical, if not biological, significance. We include this individual event analysis only to suggest that, since we in fact saw a slightly higher event amplitude in the KL KO condition, it is unlikely that a reduced amplitude would have been a technical reason that we detected a lower event frequency.

      3) Figure 3 D, duration, y-axis should be labelled "ms"

      Event duration is no longer graphed or referenced. This has been replaced with total inhibitory charge.

      Reviewer #2 (Recommendations For The Authors):

      Methods:

      • Pax2-Cre line: embryonal Cre lines sometimes suffer from germline recombination. Was this evaluated, and if yes, how?

      The global loss of Kit signaling is incompatible with life, as seen from perinatal lethality in other Kit Ligand or Kit mutant mouse lines or other conditional approaches. Furthermore, a loss of Kit signaling in germ cells impedes fertility. Thus, while not explicitly ruled out, since conditional Pax2 Cre mediated Kit KO animals were born, survived, and produced offspring in normal ratios, we do not suspect that germline recombination was a major issue in this specific study.

      • Include rationale for using different virus types in different studies (AAV vs. Lenti).

      This rationale is now included and reflects the intention to achieve infection sparsity in the smaller and less dense tissue of perinatal mouse brains.

      • How, if at all, was blinding performed for histological and electrophysiological experiments?

      It was not possible for electrophysiology to be conducted blinded for the Kit KO experiments, owing to the subjects’ hypopigmentation. However, whenever feasible, resultant microscopy images or electrophysiological data sets were analyzed by Transnetyx Animal ID, and the genotypes unmasked after analysis.

      • Provide justification for limiting electrophysiology recordings to lobule IV/V and why MLIs in the middle third of the molecular layer were prioritized when inhibition of PCs is dominated by large IPSCs from basket cells. Why were 2 different internals used for recording IPSCs and EPSCs in PCs and MLIs? While that choice is justified for action potential recordings, it provides poor voltage control in PC voltage clamp. Both IPSCs and EPSCs could have been isolated pharmacologically using a CsCl internal.

      The rationale for regional focus has been added to the text. For MLI action potential recordings, we opted to sample the middle third of the molecular layer so that we would not be completely biased to either classic distal stellate vs proximal basket subtypes. It is our hope, in future optogenetic interrogations, to simultaneously record the dynamics of all MLI subtypes in a more unbiased way. With respect to internal solutions, we initially utilized a cesium chloride internal to maximize our ability to resolve differences in GABAA mediated currents, which was the hypothesis-driven focus of our study. While we agree that utilizing a single internal and changing the voltage clamp to arrive at per-cell analysis of Excitatory/Inhibitory input would have been most informative, our decision to utilize pharmacological methods was driven by our experience that achieving adequate voltage clamp across large Purkinje cells was often problematic, particularly in adult animals.

      Introduction:

      In the introduction, the authors state that inactivating Kit contributes to neurological dysfunction - their examples highlight neurological, psychiatric, and neurodevelopmental conditions.

      The language has been changed.

      General:

      Using violin plots illustrates the data distribution better than bar graphs/SEM.

      We have included violin plots throughout, and we have changed p values to numeric values, both in the interest of presenting the totality of the data more clearly.

      Synapses 'onto' PCs sounds more common than 'upon' PCs.

      We have changed the wording throughout.

      Figure 1:

      1F - there seems to be an antero-posterior gradient of Kit expression.

      Though not explicitly pursued in the manuscript, it is possible that such a gradient may reflect differences in the timing of the genesis and maturation of the cerebellum along the AP axis. Regional variability is however now briefly addressed as a motivator for focused studies within lobules IV/V.

      E doesn't show male/female ratios but only hypopigmentation.

      This language has been corrected.

      Figure 2 and associated supplementary figures:

      2A/B: The frequency of sIPSCs is very high in PCs, making the detection of single events challenging. How was this accomplished? Please add strategy to the methods.

      We have added methodological detail for electrophysiology analysis.

      How were multi-peak events detected and analyzed? 'Duration' is not specified - do the authors refer to kinetics? If so, report rise and decay. It is likely impossible to show individual aligned sIPSCs with averages superimposed, given that sIPSCs strongly overlap. Alternatively, since no clear baseline can be determined in between events, and therefore frequency, amplitude, and kinetics quantification is near-impossible, consider plotting inhibitory charge.

      Given the heterogeneity of events, we now do not refer to individual event kinetics. As suggested, we have now included an analysis of the total inhibitory charge transferred by all events during the recording epoch.

      S2: Specify how density, distribution, and ML thickness were determined in methods. How many animals/cells/lobules?

      For consistency with viral injections and electrophysiology, the immunohistochemical analysis was restricted to lobule IV/V. This is clearer in the revision and detail is added in the methods.

      S3:

      S3B: the labels of Capacitance and Input resistance are switched.

      This has been corrected.

      How were these parameters determined? Add to methods.

      Added

      In the previous figure the authors refer to 'frequency', in this figure to 'rate' - make consistent

      This has been corrected.

      D: example does not seem representative. Add amplitude of current pulse underneath traces.

      We added new traces from nearer the group means and we now include the current trace.

      F/G example traces (aligned individual events + average) are necessary.

      We added example traces near the relevant group means for each condition.

      Statement based on evoked IPCSs that 'synapses function normally' is a bit sweeping and can only be fully justified with paired recordings. Closer to the data would be the release probability of individual synapses is similar between control and Kit KO.

      Paired recordings in both Kit Ligand and Kit receptor conditional knockout conditions are indeed an informative aim of future studies, should resources permit. For now, we have clarified the language to be more in line with the reviewer's welcome suggestion.

      S4:

      Histological strategy cannot unambiguously distinguish MLI-PC and PC-PC synapses. Consider adding this confound to the text.

      We have added this confound to the discussion.

      The observation that the pinceau is decreased in size could have important implications for ephaptic coupling of MLI and PC and could be mentioned.

      We agree and have added this notion to the discussion.

      Y-label is missing in B.

      Corrected.

      Figure 3 and associated supplementary figures:

      In the text, change PC-Cre to L7-Cre or Pcp2-Cre.

      Changed

      How do the authors explain a reduction in frequency, amplitude, and duration of sIPSCs in the KL KO but not in the Kit KO? Add to the discussion

      We now address this apparent discordance in the discussion. Pax2 Cre mediates recombination weeks ahead of Pcp2 Cre. We therefore suspect that postnatal PC KL KO may be more phenotypic than embryonic MLI Kit KO because there is less time for developmental compensation. A future evaluation of the impact of postnatal Kit KO would be informative to this end.

      As in Figure 2, plotting the charge might be more accurate.

      We now plot total charge transfer.

      Are the intrinsic properties in KL KO PCs altered? (Spontaneous firing, capacitance, input resistance).

      We have added to the text that we found no difference in capacitance or input resistance between Purkinje cells from KL floxed homozygous Control animals and those from KL floxed homozygous, PCP2 Cre positive KL KO animals. We plan to characterize both basal and MLI-modulated PC firing in a future manuscript. Especially since Pcp2 Cre mediated KL KO seems more phenotypic than Pax2 Cre mediated Kit KO, we agree that it is a better testbed for investigating differences in both basal PC firing and its modulation by MLIs.

      3D-F - Example traces would be desirable (see above, analogous to Fig. 2).

      More example traces have been added.

      Figure 4: 'In vivo mixtures' sounds unusual. Consider revision (e.g., 'to sparsely delete KL').

      Changed

      The observation that control PC sIPSC frequency is lower in KL OX PCs than in sham is interesting. This observation would be consistent with overall inhibitory synapse density being preserved. This could be evaluated with immunohistochemistry. For how far away from the injection area does this observation hold true?

      Because we have now analyzed and failed to find an overt (per animal average) change in synaptic puncta size or density in the whole animal Control vs PCP2 Cre mediated KL KO conditions, we do not have confidence that it is feasible to pursue this IHC strategy in the sparse viral-mediated KL KO or OX conditions. To the reviewer’s valid point however, we intend to probe the spatial extent/specificity of the sparse phenomenon when we are resourced to complement the KL/Kit manipulations with transgenic methods for evaluating MLI-PC synapses specifically, potentially by GRASP or related methods that would not be confounded by PC-PC synapses. Transgenic MLI access would also facilitate determining the spatial extent to which opto-genetically activated MLIs evoke equivalent responses in Control vs KL manipulated PCs.

      Y-legend in D clipped.

      Corrected

      Existing literature suggests that MLI inhibition regulates the regularity of PC firing - this could be tested in Kit and KL mutants.

      Based upon transgenic animal availability, we have now included an evaluation of PC firing in the (Pax2 Cre mediated) Kit KO condition. PC average firing frequency, mean ISI, and ISI CV2 were not significantly different across genotypes. A KS test of individual ISI durations for Control vs Kit KO did reveal a difference (p<0.0001). We have added a supplementary figure (S6) with this data. It is possible that in the more phenotypic PC KL KO condition we may find a difference in these PC spiking patterns; however, we are also eager to test in future studies whether postnatal KL or Kit KO impairs the ability of MLI activation to produce pauses or other alterations in PC firing, or in PF-PC mediated plasticity.

      Reviewer #3 (Recommendations For The Authors):

      Reference to Figure 1A in the Results section is slightly inaccurate. Kit gene modifications are illustrated in Figures 1A, B. Where Figure 1A shows Kit distribution. Please rephrase. Relatedly, the reference to Figs 1B - D are shifted in the results section, and 1E is skipped.

      We have changed the text.

      Please show cumulative histograms for frequency too for consistency with amplitude (e.g. Fig 2).

      We have instead, for reasons outlined by other reviewers, documented total charge transfer for both Kit KO and KL KO experiments where sIPSC events were analyzed.

      Fig S3: include example traces of PPR.

      This is now included.

      Include quantifications of GABAergic synapse density in Fig S4.

      This is now included.

      Include inset examples of KO in Fig S4A.

      This is now included.

      Add average puncta size graphs along Figure S4B. The effect apparent in the histogram of S4B is small and statistics using individual puncta as n values (in the 20,000s) therefore misleading.

      Per animal analysis is now instead included in the figure and text.

      Figure S4B y axis label blocked.

      Corrected

      Include quantification referenced in "As PSD95 immunoreactivity faithfully follows multiple markers of pinceaux size 40, we quantified PSD95 immunoreactive pinceau area and determined that pinceaux area was decreased by ~50% in Kit KO (n 26 Control vs 43 Kit KO, p<0.0001, two-tailed t-test)."

      We added a graph of per animal averages, instead of in text individual pinceau areas.

      Include antibody dilutions in the methods.

      Added.

      It's unclear from the text where the Mirow lab code comes from.

      Detail has now been added in text.

      Typo in methods "The Kit tm1c alle was bred...".

      Corrected

      Typo in Figure S4 legend "POSD-95 immuno-reactivity".

      Corrected

    1. cookie-parser

      Parse Cookie header and populate req.cookies with an object keyed by the cookie names. Optionally you may enable signed cookie support by passing a secret string, which assigns req.secret so it may be used by other middleware.

      Installation

      ```bash
      $ npm install cookie-parser
      ```

      API

      var cookieParser = require('cookie-parser')

      cookieParser(secret, options)

      Create a new cookie parser middleware function using the given secret and options.

      - secret: a string or array used for signing cookies. This is optional and if not specified, will not parse signed cookies. If a string is provided, this is used as the secret. If an array is provided, an attempt will be made to unsign the cookie with each secret in order.
      - options: an object that is passed to cookie.parse as the second option. See cookie for more information.
        - decode: a function to decode the value of the cookie

      The middleware will parse the Cookie header on the request and expose the cookie data as the property req.cookies and, if a secret was provided, as the property req.signedCookies. These properties are name-value pairs of the cookie name to cookie value.

      When secret is provided, this module will unsign and validate any signed cookie values and move those name-value pairs from req.cookies into req.signedCookies. A signed cookie is a cookie that has a value prefixed with s:. Signed cookies that fail signature validation will have the value false instead of the tampered value.

      In addition, this module supports special "JSON cookies". These are cookies where the value is prefixed with j:. When these values are encountered, the value will be exposed as the result of JSON.parse. If parsing fails, the original value will remain.

      cookieParser.JSONCookie(str)

      Parse a cookie value as a JSON cookie. This will return the parsed JSON value if it was a JSON cookie; otherwise, it will return the passed value.

      cookieParser.JSONCookies(cookies)

      Given an object, this will iterate over the keys and call JSONCookie on each value, replacing the original value with the parsed value. This returns the same object that was passed in.

      cookieParser.signedCookie(str, secret)

      Parse a cookie value as a signed cookie. This will return the parsed unsigned value if it was a signed cookie and the signature was valid. If the value was not signed, the original value is returned. If the value was signed but the signature could not be validated, false is returned. The secret argument can be an array or string. If a string is provided, this is used as the secret. If an array is provided, an attempt will be made to unsign the cookie with each secret in order.

      cookieParser.signedCookies(cookies, secret)

      Given an object, this will iterate over the keys and check if any value is a signed cookie. If it is a signed cookie and the signature is valid, the key will be deleted from the object and added to the new object that is returned. The secret argument can be an array or string. If a string is provided, this is used as the secret. If an array is provided, an attempt will be made to unsign the cookie with each secret in order.

      Certainly! In simple terms, the cookie-parser is a module in Node.js (specifically for Express applications) that helps with handling cookies, which are small pieces of data stored on a user's browser.

      Here's a breakdown:

      1. Installation: You need to install the cookie-parser module using npm.

      ```bash
      npm install cookie-parser
      ```

      2. Creating a Middleware:
      - You create a middleware function using cookie-parser.
      - You can optionally provide a "secret" (a string or an array of strings) for extra security. This secret is used to sign cookies, making them tamper-proof.

      ```javascript
      const express = require('express');
      const cookieParser = require('cookie-parser');
      const app = express();

      // Use the cookie-parser middleware
      app.use(cookieParser('yourSecret'));
      ```

      3. Parsing Cookies:
      - The middleware parses the cookies sent by the user's browser and makes them available in req.cookies.
      - If a secret is provided, it also validates and unsigns any signed cookies, making them available in req.signedCookies.

      4. Working with Cookies:
      - You can set cookies in the response using res.cookie().
      - You can read cookies from the request in your route handlers via req.cookies and req.signedCookies.

      ```javascript
      app.get('/set-cookie', (req, res) => {
        // Set a cookie named "myCookie" with value "Hello, Cookie!"
        res.cookie('myCookie', 'Hello, Cookie!');
        res.send('Cookie has been set!');
      });

      app.get('/read-cookie', (req, res) => {
        // Access the value of the "myCookie" cookie
        const myCookieValue = req.cookies.myCookie;
        res.send(`Value of myCookie: ${myCookieValue}`);
      });
      ```

      5. Additional Features:
      - You can work with signed cookies for added security.
      - It supports special "JSON cookies" for storing JSON data.

      Remember, cookies are often used to store small pieces of information on the user's browser, and cookie-parser helps you handle them easily in your Express.js applications.
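      To make the signed-cookie flow concrete, here is a minimal sketch (the secret 'yourSecret', the cookie name user, the value alice, and the route paths are illustrative placeholders, not from the documentation above). Express's res.cookie() can sign a value when cookie-parser was given a secret, and the verified result shows up in req.signedCookies:

      ```javascript
      const express = require('express');
      const cookieParser = require('cookie-parser');

      const app = express();

      // Passing a secret enables parsing and verifying signed cookies
      app.use(cookieParser('yourSecret'));

      app.get('/set-signed', (req, res) => {
        // Express signs the value using the secret registered by cookie-parser
        res.cookie('user', 'alice', { signed: true });
        res.send('Signed cookie set!');
      });

      app.get('/read-signed', (req, res) => {
        // Valid signed cookies appear in req.signedCookies;
        // a tampered signature yields the value false instead
        res.send(`Signed value: ${req.signedCookies.user}`);
      });

      app.listen(3000);
      ```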

      In an Express.js application, you can set cookies in the browser using the res.cookie() method. Note that res.cookie() is built into Express itself; the cookie-parser middleware is what parses cookies on incoming requests (and verifies signed ones). Here's a simple example:

      1. Install cookie-parser if you haven't already:

      ```bash
      npm install cookie-parser
      ```

      2. Use cookie-parser in your Express app:

      ```javascript
      const express = require('express');
      const cookieParser = require('cookie-parser');

      const app = express();

      // Use the cookie-parser middleware
      app.use(cookieParser());
      ```

      3. Set a cookie in a route handler:

      ```javascript
      app.get('/set-cookie', (req, res) => {
        // Set a cookie named "myCookie" with value "Hello, Cookie!"
        res.cookie('myCookie', 'Hello, Cookie!');

        // Send a response
        res.send('Cookie has been set!');
      });
      ```

      In the above example, when the user accesses the /set-cookie endpoint, a cookie named "myCookie" with the value "Hello, Cookie!" will be set in their browser.

      4. Set additional options (optional):

      You can set additional options for cookies, such as maxAge, expires, path, domain, and others. For example:

      ```javascript
      res.cookie('myCookie', 'Hello, Cookie!', { maxAge: 900000, httpOnly: true });
      ```

      This sets the maxAge option to 900,000 milliseconds (15 minutes) and makes the cookie accessible only through HTTP (not through client-side JavaScript).

      Remember to adapt the code to your specific use case and requirements. The client's browser will now store the cookie, and subsequent requests from that client to your server will include the cookie information in the request headers.
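      As a small follow-up, Express also provides res.clearCookie() for removing a cookie from the browser; a minimal sketch (the route path and cookie name are illustrative):

      ```javascript
      app.get('/clear-cookie', (req, res) => {
        // Expires the cookie in the browser; if the cookie was set with
        // path/domain options, pass the same options here
        res.clearCookie('myCookie');
        res.send('Cookie has been cleared!');
      });
      ```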

    1. ObjectIds

      An ObjectId is a special type typically used for unique identifiers. Here's how you declare a schema with a path driver that is an ObjectId:

      ```javascript
      const mongoose = require('mongoose');
      const carSchema = new mongoose.Schema({ driver: mongoose.ObjectId });
      ```

      Certainly! The way it works is that when you create a new note and associate it with a user, you typically provide the user's unique identifier (ID) as the value for the user field. This user ID would come from the "user" collection in your database.

      Here's a brief example in code:

      ```javascript
      // Assume you have a User model and you have obtained a user's ID
      const userId = "someUserId"; // This would be the actual ID of the user in your database

      // Creating a new note and associating it with the user by providing the user ID
      const newNote = new Notes({
        user: userId,
        title: "Sample Note",
        description: "This is a sample note.",
        tag: "General",
        deadline: new Date(),
        deadlinetime: "12:00 PM",
      });

      // Save the new note to the database
      newNote.save()
        .then((savedNote) => {
          console.log("Note saved successfully:", savedNote);
        })
        .catch((error) => {
          console.error("Error saving note:", error);
        });
      ```

      In this example, the user field of the new note is set to the userId obtained from the "user" collection. When you save this note to the database, the user field will store the provided user ID. Later, when you retrieve this note, you can use this user ID to identify which user the note is associated with.

      It's important to ensure that the user ID you provide for the user field actually corresponds to an existing user in the "user" collection. This way, you maintain the relationship between the "notes" and "user" collections in your MongoDB database.
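      If you later want the full user document rather than just the stored ID, Mongoose can resolve the reference for you via populate(). A minimal sketch, assuming the Notes schema declares the path as user: { type: mongoose.Schema.Types.ObjectId, ref: 'user' }, that noteId holds an existing note's _id, and that the referenced user document has a name field (all assumptions, not shown in the snippet above):

      ```javascript
      // Fetch a note and swap the stored user ObjectId for the
      // matching document from the "user" collection
      Notes.findById(noteId)
        .populate('user')
        .then((note) => {
          // note.user is now the full user document, not just an ID
          console.log('Note belongs to:', note.user.name);
        })
        .catch((error) => {
          console.error('Error fetching note:', error);
        });
      ```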

    1. In terms of receiving events, events can either be returned to the calling code immediately whenever bytes are provided, or they can be lazily produced in response to the calling code’s request.

      There are two ways to receive events: they can be returned immediately, or produced lazily when the calling code requests them.
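      A rough JavaScript sketch of the two styles (all names here are illustrative, not from any particular library): a push-style parser hands events to a callback the moment bytes are fed in, while a pull-style parser buffers the bytes and only produces events when the caller asks for the next one.

      ```javascript
      // Toy decoder: wraps each chunk of bytes in a single event
      // (a stand-in for real parsing logic)
      function decode(bytes) {
        return [{ type: 'data', payload: bytes }];
      }

      // Push style: events are returned to the calling code immediately
      function pushParser(onEvent) {
        return {
          feed(bytes) {
            for (const event of decode(bytes)) onEvent(event); // fires right away
          },
        };
      }

      // Pull style: bytes are buffered; events are produced lazily,
      // in response to the calling code's request
      function pullParser() {
        const pending = [];
        return {
          feed(bytes) {
            pending.push(bytes); // nothing is parsed yet
          },
          next() {
            const bytes = pending.shift();
            return bytes === undefined ? null : decode(bytes)[0]; // parse on demand
          },
        };
      }

      // Usage: push delivers during feed(); pull delivers during next()
      pushParser((e) => console.log('pushed:', e)).feed('abc');
      const parser = pullParser();
      parser.feed('abc');
      console.log('pulled:', parser.next());
      ```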

    1. In 2016, when Donald Trump was running a campaign to be the US President, one twitter user pointed out that you could see which of the Tweets on Donald Trump’s Twitter account were posted from an Android phone and which from an iPhone, and that the tone was very different. A data scientist decided to look into it more and found: “My analysis … concludes that the Android and iPhone tweets are clearly from different people, “posting during different times of day and using hashtags, links, and retweets in distinct ways, “What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. …. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).” (Read more in this article from The Guardian) Note: we can no longer run code to check this ourselves because first, Donald Trump’s account was suspended in January 2021 for inciting violence, then when Elon Musk decided to reinstate Donald Trump’s account (using a Twitter poll as an excuse, but how many of the votes were bots?), Elon Musk also decided to remove the ability to look up a tweet’s source.

      I thought this was very interesting: how tweets from an iPhone and an Android phone could differ so much in tone, and how the analysis could tell the campaign's tweets (iPhone) apart from Trump's own (Android). I never really paid attention to Trump's election tweets, but this analysis really caught my attention.

    1. Author Response

      Author responses to the original review:

      The data we produce are not criticized as such and thus do not require revision; the criticisms concern our interpretation of them. General themes of the reviews are that i) genetic signatures do not matter for defining neuronal types (here sympathetic versus parasympathetic); ii) that a cholinergic postganglionic autonomic neuron must be parasympathetic; and iii) that some physiology of the pelvic region would deserve the label “parasympathetic”. We answered the latter argument in (Espinosa-Medina et al., 2018), to which we refer the interested reader; and we fully disagree with the first two. Of note, part of the last sentence of the eLife assessment is misleading and does not reflect the referees’ comments. Our paper analyses genetic differences between the cranial and sacral outflows and uses them to argue that they cannot both be parasympathetic. The eLife assessment acknowledges the “genetic differences” but concludes that, somehow, they don’t detract from a common parasympathetic identity. We take issue with this paradox, of course, but it is at least consistent with the referees’ comments. On the other hand, the eLife assessment alone pushes the paradox one step further by stating that “functional differences” between the cranial and sacral outflows cannot prevent them from being both parasympathetic either. We would also object to this, but the only “functional differences” used by the referees to dismiss our diagnosis of a sympathetic-like (rather than parasympathetic) character for the sacral outflow are between noradrenergic and cholinergic, and between sympathetic and parasympathetic (and we also disagree with those, see above and below), not between cranial and sacral.

      We will thus use the opportunity offered by eLife to keep the paper as it is (with a few minor stylistic changes). We respond below to the referees' detailed remarks and hope that the publication of the paper, the referees' comments, and our response, as per eLife's new model, will help move the field forward.

      Public review by Referee #1

      “Consistently, the P3 cluster of neurons is located close to sympathetic neuron clusters on the map, echoing the conventional understanding that the pelvic ganglia are mixed, containing both sympathetic and parasympathetic neurons”.

      The greater closeness of P3 than of P1/2/4 to the sympathetic cluster can be used to judge P1/2/4 less sympathetic than P3 (and more… something else), but not more parasympathetic. There is no echo of the “conventional understanding” here.

      “A closer look at the expression showed that some genes are expressed at higher levels in sympathetic neurons and in P2 cluster neurons” [we assume that the referee means “in sympathetic neurons and in P3 cluster neurons”] “but much weaker in P1, P2, and P4 neurons such as Islet1 and GATA2, and the opposite is true for SST. Another set of genes is expressed weakly across clusters, like HoxC6, HoxD4, GM30648, SHISA9, and TBX20.”

      These statements are inaccurate. First, the classification is not based on impressions from visual inspection of the heatmap, but on calculations using thresholds. Admittedly, the thresholds have an arbitrary aspect, but the referee can verify (by eye inspection of the heatmap) that genes which we calculate as being at “higher levels in sympathetic neurons and in P3 cluster neurons, but much weaker in P1, P2, and P4 neurons” or vice versa, i.e. characteristic of noradrenergic or cholinergic neurons (genes from groups V and VI, respectively), show a much bigger difference than those cited by the referee; indeed they are quasi-absent from the weaker clusters or ganglia. In addition, even by subjective eye inspection:

      Islet is equally expressed in P4 and sympathetics.

      SST is equally expressed in P1 and sympathetics.

      Tbx20 is equally expressed in P2 and sympathetics.

      HoxC6, HoxD4, GM30648, SHISA9 are equally expressed in all clusters and all sympathetic ganglia.

      “Since the pelvic ganglia are in a caudal body part, it is not surprising to have genes expressed in pelvic ganglia, but not in rostral sphenopalatine ganglia, and vice versa (to have genes expressed in sphenopalatine ganglia, but not in pelvic ganglia), according to well recognized rostro-caudal body patterning, such as nested expression of hox genes.”

      We do not simply show “genes expressed in pelvic ganglia, but not in rostral sphenopalatine ganglia, and vice versa”, i.e. a genetic distance between pelvic and sphenopalatine, but many genes expressed in all pelvic cells and sympathetic ones, i.e. a genetic proximity between pelvic and sympathetic. This situation can be deemed “unsurprising”, but it can only be used to question the parasympathetic nature of pelvic cells (as we do), or considered irrelevant (as the referee does, because genes would not define cell types, see our response to an equivalent stance by Referee#2). Concerning Hox genes, we do take them into account, and speculate in the discussion that their nested expression is key to the structure of the autonomic nervous system, including its division into sympathetic and parasympathetic outflows.

      It is much simpler and easier to divide the autonomic nervous system into sympathetic neurons that release noradrenaline versus parasympathetic neurons that release acetylcholine, and these two systems often act in antagonistic manners, though in some cases, these two systems can work synergistically. It also does not matter whether or not pelvic cholinergic neurons could receive inputs from thoracic-lumbar preganglionic neurons (PGNs), not just sacral PGNs; such occurrence only represents a minor revision of the anatomy. In fact, it makes much more sense to call those cholinergic neurons located in the sympathetic chain ganglia parasympathetic.

      This “minor revision of the anatomy” would make spinal preganglionic neurons which are universally considered sympathetic (in the thoraco-lumbar chord), synapse onto large numbers of parasympathetic neurons (in the paravertebral chains for sweat glands and periosteum, and in the pelvic ganglion), robbing these terms of any meaning.

      Thus, from the functionality point of view, it is not justified to claim that "pelvic organs receive no parasympathetic innervation".

      There never was any general or rigorous functional definition of the sympathetic and parasympathetic nervous systems — it is striking, almost ironic, that Langley, creator of the term parasympathetic and the ultimate physiologist, provides an exclusively anatomic definition in his Autonomic Nervous System, Part I. Hence, our definition cannot clash with any “functionality point of view”. In fact, as we briefly say in the discussion and explore in (Espinosa-Medina et al., 2018), it is the “sacral parasympathetic” paradigm which is unjustified from a functionality point of view, for implying a functional antagonism across the lumbo-sacral gap, which has been disproven repeatedly. It remains to be determined which neurons are antagonistic to which on the blood vessels of the external genitals; antagonism within one division of the autonomic nervous system would not be without precedent (e.g. there exist both vasoconstrictor and vasodilator sympathetic neurons, and both, inhibitor and activator enteric motoneurons). The way to this question is finally open to research, and as referee#2 says “it is early days”.

      Public review by Referee #2

      This work further documents differences between the cranial and sacral parasympathetic outflows that have been known since the time of Langley - 100 years ago.

      We assume that the referee means that it is the “cranial and sacral parasympathetic outflows” which “have been known since the time of Langley”, not their differences (that we would “further document”): the differences were explicitly negated by Langley. As a matter of fact, the sacral and cranial outflows were first likened to each other by Gaskell, 140 years ago (Gaskell, 1886). This anatomic parallel (which is deeply flawed (Espinosa-Medina et al., 2018)) was inherited wholesale by Langley, who added one physiological argument (Langley and Anderson, 1895) (which has been contested many times (Espinosa-Medina et al., 2018) and references within).

      In addition, the sphenopalatine and other cranial ganglia develop from placodes and the neural crest, while sympathetic and sacral ganglia develop from the neural crest alone.

      Contrary to what the referee says, the sphenopalatine has no placodal contribution. There is no placodal contribution to any autonomic ganglion, sympathetic or parasympathetic (except an isolated claim concerning the ciliary ganglion (Lee et al., 2003)). All autonomic ganglia derive from the neural crest as determined a long time ago in chicken. For the sphenopalatine in mouse, see our own work (Espinosa-Medina et al., 2016).

      One feature that seems to set the pelvic ganglion apart is […] the convergence of preganglionic sympathetic and parasympathetic synapses on individual ganglion cells (Figure 3). This unusual organization has been reported before using microelectrode recordings (see Crowcroft and Szurszewski, J Physiol (1971) and Janig and McLachlan, Physiol Rev (1987)). Anatomical evidence of convergence in the pelvic ganglion has been reported by Keast, Neuroscience (1995).

      Contrary to what the referee says, we do not provide in Figure 3 any evidence for anatomic convergence, i.e. for individual pelvic ganglion cells receiving dual lumbar and sacral inputs. We simply show that cholinergic neurons figure prominently among targets of the lumbar pathway. This said, the convergence of both pathways on the same pelvic neurons, described in the references cited by the referee, is another major problem in the theory of the “sacral parasympathetic” (as we discussed previously (Espinosa-Medina et al., 2018)).

      It should also be noted that the anatomy of the pelvic ganglion in male rodents is unique. Unlike other species where the ganglion forms a distributed plexus of mini-ganglia, in male rodents the ganglion coalesces into one structure that is easier to find and study. Interestingly the image in Figure 3A appears to show a clustering of Chat-positive and Th-positive neurons. Does this result from the developmental fusion of mini ganglia having distinct sympathetic and parasympathetic origins?

      The clustering of Chat-positive and Th-positive cells could arise from a number of developmental mechanisms, that we have no idea of at the moment. This has no bearing on sympathetic and parasympathetic.

      In addition, Brunet et al dismiss the cholinergic and noradrenergic phenotypes as a basis for defining parasympathetic and parasympathetic neurons. However, see the bottom of Figure S4 and further counterarguments in Horn (Clin Auton Res (2018)).

      The bottom of Figure S4 simply indicates which cells are cholinergic and adrenergic. We have already expounded many times that noradrenergic and cholinergic do not coincide with sympathetic and parasympathetic. Henry Dale (Nobel Prize 1936) demonstrated this. Langley himself devoted several pages of his final treatise to this exception to his “Theory on the relation of drugs to nerve system” (Langley, 1921) (p43) (which was actually a bigger problem for him than it is for us, for reasons which are too long to recount here; it is as if the theoretical difficulties experienced by Langley had been internalized to this day in the form of a dismissal of the cholinergic sympathetic neurons as a slightly scandalous but altogether forgettable oddity). (Horn, 2018) reviews the evidence that the thoracic cholinergic sympathetic phenotype is brought about by a secondary switch upon interaction with the target and argues that this would be a fundamental difference from the sacral “parasympathetic”. But in fact the secondary switch is preceded by co-expression of ChAT and VAChT with Th in most sympathetic neurons (reviewed in Ernsberger and Rohrer, 2018), and we have no idea of the dynamic in the pelvic ganglion. It may also be mentioned in this context that target-dependent specification of neuronal identity has also been demonstrated for other types of sympathetic neurons (Furlan et al., 2016).

      What then about neuropeptides, whose expression pattern is incompatible with the revised nomenclature proposed by Brunet et al.?

      There was never any neuropeptide-inspired criterion for a nomenclature of the autonomic nervous system.

      Figure 1B indicates that VIP is expressed by sacral and cranial ganglion cells, but not thoracolumbar ganglion cells.

      Contrary to what the referee says, there are VIP-positive cells in our sympathetic data set, and even strongly positive ones, except they are scattered and few (red bars on the UMAP). They correspond to cholinergic sympathetics, likely sudomotor, which are known to contain VIP (e.g. Anderson et al., 2006; Stanke et al., 2006). In other words, VIP is probably part of what we call the cholinergic synexpression group (but was not placed in it by our calculations, probably because of a low expression level in sympathetic noradrenergic cells).

      The authors do not mention neuropeptide Y (NPY). The immunocytochemistry literature indicates that NPY is expressed by a large subpopulation of sympathetic neurons but never by sacral or cranial parasympathetic neurons.

      Contrary to what the referee says, Keast (Keast, 1995) finds 3.7% of pelvic neurons double-stained for NPY and VIP in male rats, and says (Keast, 2006) that in females “co-expression of NPY and VIP is common” (thus in cholinergic neurons that the referee calls “parasympathetic”). Single cell transcriptomics is probably more sensitive than immunochemistry, and in our dichotomized data set (table S1), NPY is expressed in all pelvic clusters and all sympathetic ganglia. In other words, it is one more argument for their kinship. It does not appear in the heatmap because it ranks below the 100 top genes.

      Answer to the original recommendations by Referee #2

      Introduction - the use of the words 'consensual' and 'promiscuity' are not clear and rather loaded in the context of the pelvic ganglia. Pick alternative words.

      There is no sexual innuendo inherent in “promiscuity”: “condition of elements of different kinds grouped or massed together without order” (Oxford English Dictionary). We replaced “never consensual” by “never generally accepted”.

      Results - Page 2 - what sex were the mice? Previous works indicate significant sexual dimorphism in the pelvic ganglion.

      The mice included both males and females, and male and female cells are represented in all ganglia and clusters. This is now mentioned in the Material and Methods. Thus, however unsuited to analyze sexual dimorphism, our data set ensures that all the cell types we describe are qualitatively present in both sexes.

      Results line 3 - the celiac and mesenteric ganglia are prevertebral ganglia and not part of the sympathetic chain. The chain refers to the paravertebral ganglia.

      We replaced “part of the prevertebral chain” by “belonging to prevertebral ganglia”. This said, there are precedents for “prevertebral chain ganglia” to designate the rostro-caudal series of prevertebral ganglia. Rita Levi-Montalcini, for example, who devoted her glorious career to sympathetic ganglia, writes in 1972 “The nerve cell population of para- and prevertebral chain ganglia is reduced to 3–5% of that of controls”. (10.1016/0006-8993(72)90405-2).

      Page 3 - "as the current dogma implies". Dogma often refers to opinion or church doctrine. The current nomenclature is neither. Pick another word.

      There is little in science that is proven to the point of eliminating any element of opinion. “Dogma” refers to “that which is held as a principle or tenet […], especially a tenet authoritatively laid down by […] a school of thought” (OED). And “dogma” is used in science to designate tenets better experimentally supported than the “sacral parasympathetic”, such as the “central dogma of molecular biology”.

      Page 3 - "To give justice" implies the classical notion is unjust. How about, 'to further explore previous evidence indicating that ....'

      The term is indeed not proper English for the meaning intended, and the right expression is “to do justice”, to mean: “to treat [a subject or thing] in a manner showing due appreciation, to deal with [it] as is right or fitting” (OED). We have corrected the paper accordingly.

      Page 4 top - the convergence indicated by Figure 3 does not justify excluding cholinergic and noradrenergic genes from the analysis.

Contrary to what the referee says, Figure 3 does not show any "convergence"; see our answer to Referee #1. What Figure 3 shows is that cells that are targeted by the lumbar pathway (a pathway universally deemed "sympathetic") are cholinergic in massive proportion. Therefore, by an uncontroversial criterion, the pelvic ganglion contains many sympathetic cholinergic neurons. The only other option is to declare that sympathetic preganglionic neurons synapse onto parasympathetic postganglionic ones (which is what Referee #1 proposes and considers "much simpler"; we beg to differ).

Our justification for excluding cholinergic and noradrenergic genes from the definition of "sympathetic" and "parasympathetic" is simply that sympathetic neurons can be cholinergic (to sweat glands and periosteum and, as we show in Figure 3, to many targets of the lumbar pathway). One can also note that anywhere else in the nervous system, classifying cell types as a function of neurotransmitter phenotype would lead to nonsensical descriptions, such as grouping pyramidal cells with cerebellar granule cells, or motor neurons with basal forebrain cholinergic neurons. Indeed, Referee #1 proposes such a revolutionary revision by calling all cholinergic autonomic neurons "parasympathetic" (see our answer above).

Keast (1995) did similar experiments and used presynaptic lesions to draw a different conclusion indicating preferential innervation of pelvic subpopulations.

Keast found "preferential" innervation of pelvic subpopulations based on lesion experiments; nevertheless, she concluded (at the time) that "the correct definition of these two components of the nervous system is based on neuroanatomy rather than chemistry" (Keast, 2006).

      Page 4 - "In the aggregate, the pelvic ganglion is best described as a divergent sympathetic ganglion devoid of parasympathetic neurons" The notion of a divergent ganglion is completely unclear!

We use "divergent" in a developmental or evolutionary sense: related to sympathetic ganglia, yet somewhat different from them. Elsewhere we use the word "modified". Importantly (and as cited in the paper), a similar situation emerges from the single-cell transcriptomic analysis of the lumbar and sacral preganglionics (by other research groups).

      Granted, it is devoid of neurons having the signature of cranial parasympathetics, but that is insufficient to conclude that they are not parasympathetics.

If a genetic signature that is not only un-parasympathetic but sympathetic-like remains compatible with some version of the label "parasympathetic", we get dangerously close to dismissing the molecular make-up of a neuron as a definition of its type. This goes against any contemporary understanding of neuron types (take Zeisel et al., 2018, among hundreds of other examples).

      Page 4 - "the entire taxonomy of autonomic ganglia could be a developmental readout of Hox genes." This reader completely agrees! We appreciate this would be difficult to test but it helps to explain possible differences along the rostro-caudal axis. Consider making this a key implication of the study!

If the reader agrees, then his/her previous points become mysterious: we speculate that the Hox code determines the structure of the autonomic nervous system, i.e. the array, along the rostrocaudal axis, of a bulbar parasympathetic, a thoracolumbar sympathetic and a lumbo-sacral "pelvo-sympathetic". The existence of caudal parasympathetic neurons, on the contrary, would subvert any role for Hox genes: similar neurons (similar enough to be called by the same name) would arise at completely different rostro-caudal levels, i.e. with a different Hox code.

      Page 5 - "It is thus remarkable ...that we uncover in no way contradicts the physiology." Not really. The 'classical' sympathetic system innervates the limbs, and the skin and it participates in thermoregulation and in cardiovascular adjustments to exercise. The parasympathetic system does none of these things. Reclassing the pelvic outflow as pseudo-sympathetic contradicts this physiology.

We do not say that the sacral outflow is classically sympathetic; we go all the way to proposing the special name "pelvo-sympathetic", and we insist that these special sympathetic-like neurons have special targets (detrusor muscle, helicine arteries…): there is no contradiction. Not only is there no contradiction, but we remove the mind-twister of an anatomical/genetic/cell type-based "sacral parasympathetic" combined with a lack of physiological lumbosacral antagonism (we provide a short history of this dissonance in (Espinosa-Medina et al., 2018)), which led Wilfrid Jänig to write (Jänig, 2006, p. 357): "Thus, functions assumed to be primarily associated with sacral (parasympathetic) are well duplicated by thoracolumbar (sympathetic) pathways. This shows that the division of the spinal autonomic systems into sympathetic and parasympathetic with respect to sexual functions is questionable". We could not agree more: this division is questionable in terms of physiology and nonexistent in terms of cell types. In other words, we reconcile cell types with physiology (but "it is early days").

      Answer to the novel recommendations by Referee #2

      In addition to my original comments, important anatomical and functional distinctions are not explained by the data in this paper. ANATOMY- Sympathetic ganglia are located in close proximity to major branches of the aorta. Cranial and sacral parasympathetic ganglia are located next to or within the structures they innervate (e.g. eye, lung, heart, bladder).

The pelvic ganglion, including those of its cholinergic neurons that the referee insists are parasympathetic, is further removed from one of its major targets (the helicine arteries of the external genitals) than the sympathetic prevertebral ganglia are from some of theirs (like the gut or the kidney). We discussed this issue in (Espinosa-Medina et al., 2018).

      FUNCTION- The sympathetic system controls state variables (e.g. body temperature, blood pressure, serum electrolytes and fluid balance), parasympathetic neurons do not.

      Even in the classical view, the sympathetic system controls the blood vessels of the external genitals or the size of the pupil, for example, which are not state variables.

      […] The data in the paper are a useful next step in defining the genetic diversity of autonomic neurons but do not justify or improve upon existing nomenclature. The future challenge is to understand distinctions between subsets of autonomic ganglion cells that innervate different targets and the principles that govern the integrative function of the autonomic motor system that controls behavior.

We thank the referee for finding our data useful, and we fully agree with the latter statement. However, neurons, like many other cell types, are hierarchically organized (Zeng and Sanes, 2017), i.e. subsets of neurons belong to sets with defining traits. Our data argue that there is no parasympathetic neuronal set that includes any pelvic ganglionic neuron. In contrast, there is a ganglionic sympathetic set (defined by our analysis of gene expression) which includes all of them, just as there is a preganglionic sympathetic set that includes sacral preganglionics (Alkaslasi et al., 2021; Blum et al., 2021), although the direct comparison with cranial preganglionics is yet to be made.

      References

Alkaslasi, M. R., Piccus, Z. E., Hareendran, S., Silberberg, H., Chen, L., Zhang, Y., Petros, T. J. and Le Pichon, C. E. (2021). Single nucleus RNA-sequencing defines unexpected diversity of cholinergic neuron types in the adult mouse spinal cord. Nat Commun 12, 2471.

Anderson, C. R., Bergner, A. and Murphy, S. M. (2006). How many types of cholinergic sympathetic neuron are there in the rat stellate ganglion? Neuroscience 140, 567–576.

      Blum, J. A., Klemm, S., Shadrach, J. L., Guttenplan, K. A., Nakayama, L., Kathiria, A., Hoang, P. T., Gautier, O., Kaltschmidt, J. A., Greenleaf, W. J., et al. (2021). Single-cell transcriptomic analysis of the adult mouse spinal cord reveals molecular diversity of autonomic and skeletal motor neurons. Nat Neurosci 24, 572–583.

      Ernsberger, U. and Rohrer, H. (2018). Sympathetic tales: subdivisons of the autonomic nervous system and the impact of developmental studies. Neural Dev 13, 20.

Espinosa-Medina, I., Saha, O., Boismoreau, F., Chettouh, Z., Rossi, F., Richardson, W. D. and Brunet, J.-F. (2016). The sacral autonomic outflow is sympathetic. Science 354, 893–897.

      Espinosa-Medina, I., Saha, O., Boismoreau, F. and Brunet, J.-F. (2018). The “sacral parasympathetic”: ontogeny and anatomy of a myth. Clin Auton Res 28, 13–21.

Furlan, A., La Manno, G., Lübke, M., Häring, M., Abdo, H., Hochgerner, H., Kupari, J., Usoskin, D., Airaksinen, M. S., Oliver, G., et al. (2016). Visceral motor neuron diversity delineates a cellular basis for nipple- and pilo-erection muscle control. Nat Neurosci 19, 1331–1340.

      Gaskell, W. H. (1886). On the Structure, Distribution and Function of the Nerves which innervate the Visceral and Vascular Systems. J Physiol 7, 1-80.9.

      Horn, J. P. (2018). The sacral autonomic outflow is parasympathetic: Langley got it right. Clin Auton Res 28, 181–185.

      Jänig, W. (2006). The Integrative Action of the Autonomic Nervous System: Neurobiology of Homeostasis. Cambridge: Cambridge University Press.

      Keast, J. R. (1995). Visualization and immunohistochemical characterization of sympathetic and parasympathetic neurons in the male rat major pelvic ganglion. Neuroscience 66, 655–662.

Keast, J. R. (2006). Plasticity of pelvic autonomic ganglia and urogenital innervation. Int Rev Cytol 248, 141–208.

Langley, J. N. (1921). The Autonomic Nervous System, Part I. Cambridge: Heffer & Sons Ltd.

      Langley, J. N. and Anderson, H. K. (1895). The Innervation of the Pelvic and adjoining Viscera: Part II. The Bladder. Part III. The External Generative Organs. Part IV. The Internal Generative Organs. Part V. Position of the Nerve Cells on the Course of the Efferent Nerve Fibres. J Physiol 19, 71–139.

      Lee, V. M., Sechrist, J. W., Luetolf, S. and Bronner-Fraser, M. (2003). Both neural crest and placode contribute to the ciliary ganglion and oculomotor nerve. Developmental biology 263, 176–190.

      Stanke, M., Duong, C. V., Pape, M., Geissen, M., Burbach, G., Deller, T., Gascan, H., Parlato, R., Schütz, G. and Rohrer, H. (2006). Target-dependent specification of the neurotransmitter phenotype:cholinergic differentiation of sympathetic neurons is mediated in vivo by gp130 signaling. Development 133, 141–150.

      Zeisel, A., Hochgerner, H., Lönnerberg, P., Johnsson, A., Memic, F., van der Zwan, J., Häring, M., Braun, E., Borm, L. E., La Manno, G., et al. (2018). Molecular Architecture of the Mouse Nervous System. Cell 174, 999-1014.e22.

      Zeng, H. and Sanes, J. R. (2017). Neuronal cell-type classification: challenges, opportunities and the path forward. Nat Rev Neurosci 18, 530–546.

1. With custom return labels

Now developers can specify the return field names if they want. Below is the list of attributes whose names can be changed: totalDocs, docs, limit, page, nextPage, prevPage, totalPages, hasNextPage, hasPrevPage, pagingCounter, meta.

You should pass the names of the properties you wish to change using the customLabels object in options. Labels are optional: pass labels only for the keys you are changing; the others will use the default labels. If you want to return the paginate properties as a separate object, define customLabels.meta.

Same query with custom labels:

```javascript
const myCustomLabels = {
  totalDocs: 'itemCount',
  docs: 'itemsList',
  limit: 'perPage',
  page: 'currentPage',
  nextPage: 'next',
  prevPage: 'prev',
  totalPages: 'pageCount',
  hasPrevPage: 'hasPrev',
  hasNextPage: 'hasNext',
  pagingCounter: 'pageCounter',
  meta: 'paginator'
};

const options = {
  page: 1,
  limit: 10,
  customLabels: myCustomLabels
};

// Define your aggregate.
var aggregate = Model.aggregate();

Model.aggregatePaginate(aggregate, options, function (err, result) {
  if (!err) {
    // result.itemsList [here docs becomes itemsList]
    // result.itemCount = 100 [here totalDocs becomes itemCount]
    // result.perPage = 10 [here limit becomes perPage]
    // result.currentPage = 1 [here page becomes currentPage]
    // result.pageCount = 10 [here totalPages becomes pageCount]
    // result.next = 2 [here nextPage becomes next]
    // result.prev = null [here prevPage becomes prev]
    // result.hasNextPage = true [not changeable]
    // result.hasPrevPage = false [not changeable]
  } else {
    console.log(err);
  }
});
```

Using offset and limit:

```javascript
Model.aggregatePaginate(
  aggregate,
  { offset: 30, limit: 10 },
  function (err, result) {
    // result
  }
);
```

Using countQuery:

```javascript
// Define your aggregate query.
var aggregate = Model.aggregate();

// Define the count aggregate query. Can be different from `aggregate`.
var countAggregate = Model.aggregate();

// Set the count aggregate query.
const options = {
  countQuery: countAggregate,
};

Model.aggregatePaginate(aggregate, options)
  .then(function (result) {
    // result
  })
  .catch(function (err) {
    console.log(err);
  });
```

Global Options

If you want to set the pagination options globally across the model, you can do it like below:

```javascript
let mongooseAggregatePaginate = require("mongoose-aggregate-paginate-v2");

let BookSchema = new mongoose.Schema({
  title: String,
  date: Date,
  author: {
    type: mongoose.Schema.ObjectId,
    ref: "Author",
  },
});

BookSchema.plugin(mongooseAggregatePaginate);

let Book = mongoose.model("Book", BookSchema);

// Like this.
Book.aggregatePaginate.options = {
  limit: 20,
};
```

Release Note

v1.0.7 - Upgrade to mongoose v8
v1.0.6 - Fixed exporting settings to global object.
v1.0.5 - Added meta attribute to return paginate meta data as a custom object.
v1.0.42 - Added optional countQuery parameter to specify separate count queries in case of a bigger aggregate pipeline.

      This code is a continuation of the previous example, now introducing custom labels for the pagination properties. The library mongoose-aggregate-paginate-v2 is used for MongoDB aggregation with pagination. Let's break down the new parts:

      Custom Labels

      Now, developers can customize the names of the properties returned by pagination using the customLabels option. The developer can specify alternative names for attributes like totalDocs, docs, limit, page, and others.

```javascript
const myCustomLabels = {
  totalDocs: 'itemCount',
  docs: 'itemsList',
  limit: 'perPage',
  page: 'currentPage',
  nextPage: 'next',
  prevPage: 'prev',
  totalPages: 'pageCount',
  hasPrevPage: 'hasPrev',
  hasNextPage: 'hasNext',
  pagingCounter: 'pageCounter',
  meta: 'paginator'
};

const options = {
  page: 1,
  limit: 10,
  customLabels: myCustomLabels
};

var aggregate = Model.aggregate();

Model.aggregatePaginate(aggregate, options, function (err, result) {
  if (!err) {
    // Accessing properties with custom labels
    console.log(result.itemsList);   // Array of documents on the current page
    console.log(result.itemCount);   // Total number of documents
    console.log(result.perPage);     // Maximum number of documents per page
    console.log(result.currentPage); // Current page number
    console.log(result.pageCount);   // Total number of pages
    console.log(result.next);        // Page number of the next page
    console.log(result.prev);        // Page number of the previous page

    // Default labels
    console.log(result.hasNextPage); // Boolean indicating if there's a next page
    console.log(result.hasPrevPage); // Boolean indicating if there's a previous page
  } else {
    console.log(err);
  }
});
```

      In this example, the properties returned in result are now using the custom labels specified in myCustomLabels.
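The quoted docs also mention customLabels.meta for returning the paginate properties as a separate object. Here is a minimal sketch, assuming the meta label becomes the key of a wrapper object holding the pagination fields (the label name 'paginator' is just an example):

```javascript
const options = {
  page: 1,
  limit: 10,
  customLabels: { meta: 'paginator' }, // 'paginator' is an arbitrary example name
};

Model.aggregatePaginate(Model.aggregate(), options)
  .then(function (result) {
    // result.docs      -> documents for the current page (default label kept)
    // result.paginator -> pagination fields grouped in one object, e.g.
    //                     result.paginator.totalDocs, result.paginator.totalPages
  })
  .catch(function (err) {
    console.log(err);
  });
```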

      Using Offset and Limit

      You can also use offset and limit directly in the options to specify where to start and how many documents to retrieve.

```javascript
Model.aggregatePaginate(
  aggregate,
  { offset: 30, limit: 10 },
  function (err, result) {
    // result
  }
);
```

      Here, it starts from the 31st document (offset of 30) and retrieves 10 documents.
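If you prefer page-based options, the same window can be expressed with page instead of offset. A minimal sketch, assuming the library's default 1-based page numbering (page = offset / limit + 1):

```javascript
// Both option objects should select documents 31-40.
const byOffset = { offset: 30, limit: 10 };
const byPage = { page: 4, limit: 10 }; // 30 / 10 + 1 = 4

Model.aggregatePaginate(Model.aggregate(), byPage)
  .then(function (result) {
    // result.docs should hold the same window as the offset-based call above
  })
  .catch(function (err) {
    console.log(err);
  });
```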

      Using CountQuery

      You can define a separate count aggregate query to handle counting documents. This can be useful for performance optimization.

```javascript
// Define your aggregate query.
var aggregate = Model.aggregate();

// Define the count aggregate query. Can be different from `aggregate`.
var countAggregate = Model.aggregate();

// Set the count aggregate query.
const options = {
  countQuery: countAggregate,
};

Model.aggregatePaginate(aggregate, options)
  .then(function (result) {
    // result
  })
  .catch(function (err) {
    console.log(err);
  });
```

      Global Options

      You can set pagination options globally across the model. This is helpful if you want to apply the same pagination settings to multiple queries.

```javascript
// Set global pagination options
Book.aggregatePaginate.options = {
  limit: 20,
};
```

      Now, every call to aggregatePaginate on the Book model will use a default limit of 20 unless overridden in specific queries.
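As a sketch of how that interacts with per-call options, assuming (as is usual for such defaults) that options passed to an individual call take precedence over the global setting:

```javascript
// Global default set once, as above: Book.aggregatePaginate.options = { limit: 20 };

// This call should use the global default limit of 20.
Book.aggregatePaginate(Book.aggregate(), { page: 1 })
  .then(function (result) {
    // result.limit === 20
  })
  .catch(console.log);

// A per-call limit should override the global default.
Book.aggregatePaginate(Book.aggregate(), { page: 1, limit: 5 })
  .then(function (result) {
    // result.limit === 5
  })
  .catch(console.log);
```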

2. Return first 10 documents from 100

```javascript
const options = {
  page: 1,
  limit: 10,
};

// Define your aggregate.
var aggregate = Model.aggregate();

Model.aggregatePaginate(aggregate, options)
  .then(function (result) {
    // result.docs
    // result.totalDocs = 100
    // result.limit = 10
    // result.page = 1
    // result.totalPages = 10
    // result.hasNextPage = true
    // result.nextPage = 2
    // result.hasPrevPage = false
    // result.prevPage = null
  })
  .catch(function (err) {
    console.log(err);
  });
```

      Certainly! This code is an example of how to use pagination in a MongoDB environment using the Mongoose library. Let's break down the key parts:

1. `const options = { page: 1, limit: 10 };`: This sets up options for pagination. It specifies that you want to start on page 1, and each page should contain a maximum of 10 documents.

2. `var aggregate = Model.aggregate();`: This initializes a MongoDB aggregation pipeline using Mongoose. An aggregation pipeline allows you to process data in stages.

3. `Model.aggregatePaginate(aggregate, options)`: This is a function call provided by a plugin (like mongoose-aggregate-paginate-v2) to handle pagination for MongoDB aggregate queries. It takes the aggregate pipeline and pagination options as parameters.

4. `.then(function (result) { /* ... */ })`: This is a promise callback that gets executed when the aggregation and pagination are successful. The result object contains information about the paginated data:

   - `result.docs`: Array of documents on the current page.
   - `result.totalDocs`: Total number of documents in the collection.
   - `result.limit`: Maximum number of documents per page (from your options).
   - `result.page`: Current page number.
   - `result.totalPages`: Total number of pages based on the limit and total documents.
   - `result.hasNextPage`: Boolean indicating if there is a next page.
   - `result.nextPage`: Page number of the next page, if available.
   - `result.hasPrevPage`: Boolean indicating if there is a previous page.
   - `result.prevPage`: Page number of the previous page, if available.

5. `.catch(function (err) { console.log(err); })`: This is the error handling part. If there's any issue during the aggregation or pagination process, it will log the error to the console.

      Example: Let's assume you have a collection with 100 documents, and you want to retrieve them with a limit of 10 documents per page.

```javascript
const options = {
  page: 2,  // Get the second page
  limit: 10 // Each page should have a maximum of 10 documents
};

var aggregate = Model.aggregate();

Model.aggregatePaginate(aggregate, options)
  .then(function (result) {
    console.log(result.docs);        // Array of documents on the second page
    console.log(result.totalDocs);   // Total number of documents (100)
    console.log(result.totalPages);  // Total number of pages (10)
    console.log(result.page);        // Current page (2)
    console.log(result.hasNextPage); // Boolean indicating if there's a next page (true)
    console.log(result.nextPage);    // Page number of the next page (3)
    console.log(result.hasPrevPage); // Boolean indicating if there's a previous page (true)
    console.log(result.prevPage);    // Page number of the previous page (1)
  })
  .catch(function (err) {
    console.log(err);
  });
```

      In this example, you're fetching the second page, and the result object provides information about the documents on that page and the overall pagination details.

3. Execute pagination from aggregate:

```javascript
// Execute pagination from aggregate
const myModel = require('/models/samplemodel');
const options = { page: 1, limit: 10 };
const myAggregate = myModel.aggregate();
myAggregate.paginateExec(options, function(err, results) {
  if(err) {
    console.err(err);
  else {
    console.log(results);
  }
})
```

It seems like there are two small errors in the code you provided: the if block is missing its closing brace before else, and the correct method for logging errors inside the callback is console.error(err), not console.err(err). Here's the corrected code:

```javascript
// Execute pagination from aggregate
const myModel = require('/models/samplemodel');

const options = { page: 1, limit: 10 };

const myAggregate = myModel.aggregate();
myAggregate.paginateExec(options, function(err, results) {
  if (err) {
    console.error(err);
  } else {
    console.log(results);
  }
});
```

      In this code snippet:

      • The myModel.aggregate() line creates an instance of the Mongoose aggregate.
      • The myAggregate.paginateExec(options, function(err, results) { ... }) line executes the pagination directly on the aggregate using the paginateExec method, which is provided by the mongoose-aggregate-paginate-v2 library.

      This approach is an alternative to using aggregatePaginate and allows you to execute pagination directly on the aggregate instance. The paginateExec method takes the same options object for pagination and a callback function for handling the results or errors.

      Make sure to use the corrected syntax for handling errors with console.error(err) in the callback function.
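Since aggregatePaginate returns a promise when no callback is passed (as in the earlier examples), the same pagination can also be written without paginateExec. A minimal sketch, reusing the hypothetical samplemodel path from above:

```javascript
const myModel = require('/models/samplemodel');

const options = { page: 1, limit: 10 };

// Promise-based equivalent of the paginateExec callback example.
myModel.aggregatePaginate(myModel.aggregate(), options)
  .then(function (results) {
    console.log(results);
  })
  .catch(function (err) {
    console.error(err);
  });
```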

    1. sst.sst

Try to avoid naming the total dataset the same as a variable inside it; this can get quite confusing if someone else is reading your code. For a dataset it is common to indicate the datatype in the variable name (e.g. ds_sst stands for dataset_sst). Just a recommendation.

```python
ax = plt.axes(projection=ccrs.Robinson())
t2c_mrange.plot.imshow(ax=ax, transform=ccrs.PlateCarree(), cbar_kwargs={'label': '°C'})
ax.coastlines()
ax.gridlines()
ax.set_title(r'$\overline{T_{M}}_{max}$ - $\overline{T_{M}}_{min}$, ERA5 1979-2018')
```

With the code above, what you plot is basically something like the interannual variability rather than the seasonal variability (the seasonal variations are also much larger).

    1. Joshua Project by people groups, languages (including by the ISO 639-3 code), or geographic places. Joshua Project includes information about religion as well as language and location.

      Cultural and worldview information.

    2. ISO 639-3 code, a three-letter code meant to uniquely identify every language in the world.

      Important to keep track of language.

1. root.unmount()

Call root.unmount to destroy a rendered tree inside a React root.

```javascript
root.unmount();
```

An app fully built with React will usually not have any calls to root.unmount. This is mostly useful if your React root's DOM node (or any of its ancestors) may get removed from the DOM by some other code. For example, imagine a jQuery tab panel that removes inactive tabs from the DOM. If a tab gets removed, everything inside it (including the React roots inside) would get removed from the DOM as well. In that case, you need to tell React to "stop" managing the removed root's content by calling root.unmount. Otherwise, the components inside the removed root won't know to clean up and free up global resources like subscriptions.

Calling root.unmount will unmount all the components in the root and "detach" React from the root DOM node, including removing any event handlers or state in the tree.

Parameters: root.unmount does not accept any parameters.

Returns: root.unmount returns undefined.

Caveats:

- Calling root.unmount will unmount all the components in the tree and "detach" React from the root DOM node.
- Once you call root.unmount you cannot call root.render again on the same root. Attempting to call root.render on an unmounted root will throw a "Cannot update an unmounted root" error. However, you can create a new root for the same DOM node after the previous root for that node has been unmounted.

Usage

Rendering an app fully built with React: If your app is fully built with React, create a single root for your entire app.

```javascript
import { createRoot } from 'react-dom/client';
import App from './App.js';
import './styles.css';

const root = createRoot(document.getElementById('root'));
root.render(<App />);
```

Usually, you only need to run this code once at startup. It will find the browser DOM node defined in your HTML and display the React component for your app inside. If your app is fully built with React, you shouldn't need to create any more roots, or to call root.render again. From this point on, React will manage the DOM of your entire app. To add more components, nest them inside the App component. When you need to update the UI, each of your components can do this by using state. When you need to display extra content like a modal or a tooltip outside the DOM node, render it with a portal.

Note: When your HTML is empty (`<div id="root"></div>`), the user sees a blank page until the app's JavaScript code loads and runs. This can feel very slow! To solve this, you can generate the initial HTML from your components on the server or during the build. Then your visitors can read text, see images, and click links before any of the JavaScript code loads. We recommend using a framework that does this optimization out of the box. Depending on when it runs, this is called server-side rendering (SSR) or static site generation (SSG).

Pitfall: Apps using server rendering or static generation must call hydrateRoot instead of createRoot. React will then hydrate (reuse) the DOM nodes from your HTML instead of destroying and re-creating them.

Rendering a page partially built with React: If your page isn't fully built with React, you can call createRoot multiple times to create a root for each top-level piece of UI managed by React. You can display different content in each root by calling root.render. Here, two different React components are rendered into two DOM nodes defined in the index.html file:

```javascript
import './styles.css';
import { createRoot } from 'react-dom/client';
import { Comments, Navigation } from './Components.js';

const navDomNode = document.getElementById('navigation');
const navRoot = createRoot(navDomNode);
navRoot.render(<Navigation />);

const commentDomNode = document.getElementById('comments');
const commentRoot = createRoot(commentDomNode);
commentRoot.render(<Comments />);
```

You could also create a new DOM node with document.createElement() and add it to the document manually.

```javascript
const domNode = document.createElement('div');
const root = createRoot(domNode);
root.render(<Comment />);
document.body.appendChild(domNode); // You can add it anywhere in the document
```

To remove the React tree from the DOM node and clean up all the resources used by it, call root.unmount.

```javascript
root.unmount();
```

This is mostly useful if your React components are inside an app written in a different framework.

Updating a root component: You can call render more than once on the same root. As long as the component tree structure matches up with what was previously rendered, React will preserve the state. Notice how you can type in the input, which means that the updates from repeated render calls every second in this example are not destructive:

```javascript
import { createRoot } from 'react-dom/client';
import './styles.css';
import App from './App.js';

const root = createRoot(document.getElementById('root'));

let i = 0;
setInterval(() => {
  root.render(<App counter={i} />);
  i++;
}, 1000);
```

It is uncommon to call render multiple times. Usually, your components will update state instead.

Troubleshooting

I've created a root, but nothing is displayed: Make sure you haven't forgotten to actually render your app into the root:

```javascript
import { createRoot } from 'react-dom/client';
import App from './App.js';

const root = createRoot(document.getElementById('root'));
root.render(<App />);
```

Until you do that, nothing is displayed.

I'm getting an error: "Target container is not a DOM element": This error means that whatever you're passing to createRoot is not a DOM node. If you're not sure what's happening, try logging it:

```javascript
const domNode = document.getElementById('root');
console.log(domNode); // ???
const root = createRoot(domNode);
root.render(<App />);
```

For example, if domNode is null, it means that getElementById returned null. This will happen if there is no node in the document with the given ID at the time of your call. There may be a few reasons for it:

- The ID you're looking for might differ from the ID you used in the HTML file. Check for typos!
- Your bundle's `<script>` tag cannot "see" any DOM nodes that appear after it in the HTML.

Another common way to get this error is to write createRoot(<App />) instead of createRoot(domNode).

I'm getting an error: "Functions are not valid as a React child.": This error means that whatever you're passing to root.render is not a React component. This may happen if you call root.render with Component instead of <Component />:

```javascript
// 🚩 Wrong: App is a function, not a Component.
root.render(App);

// ✅ Correct: <App /> is a component.
root.render(<App />);
```

Or if you pass a function to root.render, instead of the result of calling it:

```javascript
// 🚩 Wrong: createApp is a function, not a component.
root.render(createApp);

// ✅ Correct: call createApp to return a component.
root.render(createApp());
```

My server-rendered HTML gets re-created from scratch: If your app is server-rendered and includes the initial HTML generated by React, you might notice that creating a root and calling root.render deletes all that HTML, and then re-creates all the DOM nodes from scratch. This can be slower, resets focus and scroll positions, and may lose other user input. Server-rendered apps must use hydrateRoot instead of createRoot:

```javascript
import { hydrateRoot } from 'react-dom/client';
import App from './App.js';

hydrateRoot(
  document.getElementById('root'),
  <App />
);
```

Note that its API is different. In particular, usually there will be no further root.render call.

      Certainly! Let's break it down:

1. What is root.unmount()?

   root.unmount() is a function in React that you can use to clean up and remove a rendered tree inside a React root. It's like saying, "Hey React, I'm done with this part, please clean it up."

2. When do you need it?

   You might need it when the DOM node where your React app lives is going to be removed by some other code. For example, if you have a tab panel made with jQuery that removes inactive tabs, calling root.unmount() helps React know that it should stop managing the components inside the removed tab.

3. What does it do?

   When you call root.unmount(), it removes all the React components in that part of the app and disconnects React from the corresponding DOM node. This cleanup includes removing event handlers and state.

4. How to use it?

   If your entire app is built with React, you typically create one root at the beginning, and you don't need to call root.unmount(). You set it up once, and React takes care of everything.

   ```javascript
   const root = createRoot(document.getElementById('root'));
   root.render(<App />);
   ```

5. Cautions and Restrictions:

   - After calling root.unmount(), you can't call root.render() again on the same root. You would need to create a new root for the same DOM node.
   - If you're using server rendering, use hydrateRoot instead of createRoot for the initial setup.

6. Troubleshooting Tips:

   - If nothing is displayed, make sure you actually called root.render(<App />);.
   - If you get a "Target container is not a DOM element" error, check if the DOM node you're passing to createRoot is valid.
   - If you get a "Functions are not valid as a React child" error, ensure you're passing a React component, not just a function.

      In simple terms, root.unmount() is like telling React, "I'm done with this part of the app, clean it up," and you typically use it in specific situations where parts of your app might be removed dynamically.
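To make the jQuery tab-panel scenario concrete, here is a minimal sketch; the container id 'tab-panel', the TabContent component, and the onTabRemoved hook are all hypothetical:

```javascript
import { createRoot } from 'react-dom/client';
import TabContent from './TabContent.js'; // hypothetical component

// Create a root inside a tab whose DOM node is managed by non-React code.
const tabNode = document.getElementById('tab-panel'); // hypothetical id
const tabRoot = createRoot(tabNode);
tabRoot.render(<TabContent />);

// Hypothetical hook: called just before the other code removes the tab node.
function onTabRemoved() {
  // Tell React to stop managing this subtree and clean up
  // event handlers, state, and subscriptions.
  tabRoot.unmount();
  // tabRoot.render(...) would now throw; if the tab comes back,
  // create a fresh root with createRoot(tabNode).
}
```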

1. createElement (API Reference / Legacy React APIs)

createElement lets you create a React element. It serves as an alternative to writing JSX.

```javascript
const element = createElement(type, props, ...children)
```

Reference: createElement(type, props, ...children)

Call createElement to create a React element with the given type, props, and children.

```javascript
import { createElement } from 'react';

function Greeting({ name }) {
  return createElement(
    'h1',
    { className: 'greeting' },
    'Hello'
  );
}
```

Parameters:

- type: The type argument must be a valid React component type. For example, it could be a tag name string (such as 'div' or 'span'), or a React component (a function, a class, or a special component like Fragment).
- props: The props argument must either be an object or null. If you pass null, it will be treated the same as an empty object. React will create an element with props matching the props you have passed. Note that ref and key from your props object are special and will not be available as element.props.ref and element.props.key on the returned element. They will be available as element.ref and element.key.
- optional ...children: Zero or more child nodes. They can be any React nodes, including React elements, strings, numbers, portals, empty nodes (null, undefined, true, and false), and arrays of React nodes.

Returns: createElement returns a React element object with a few properties:

- type: The type you have passed.
- props: The props you have passed except for ref and key. If the type is a component with legacy type.defaultProps, then any missing or undefined props will get the values from type.defaultProps.
- ref: The ref you have passed. If missing, null.
- key: The key you have passed, coerced to a string. If missing, null.

Usually, you'll return the element from your component or make it a child of another element. Although you may read the element's properties, it's best to treat every element as opaque after it's created, and only render it.

Caveats:

- You must treat React elements and their props as immutable and never change their contents after creation. In development, React will freeze the returned element and its props property shallowly to enforce this.
- When you use JSX, you must start a tag with a capital letter to render your own custom component. In other words, <Something /> is equivalent to createElement(Something), but <something /> (lowercase) is equivalent to createElement('something') (note it's a string, so it will be treated as a built-in HTML tag).
- You should only pass children as multiple arguments to createElement if they are all statically known, like createElement('h1', {}, child1, child2, child3). If your children are dynamic, pass the entire array as the third argument: createElement('ul', {}, listItems). This ensures that React will warn you about missing keys for any dynamic lists. For static lists this is not necessary because they never reorder.

Usage: Creating an element without JSX

If you don't like JSX or can't use it in your project, you can use createElement as an alternative. To create an element without JSX, call createElement with some type, props, and children:

```javascript
import { createElement } from 'react';

function Greeting({ name }) {
  return createElement(
    'h1',
    { className: 'greeting' },
    'Hello ',
    createElement('i', null, name),
    '. Welcome!'
  );
}
```

The children are optional, and you can pass as many as you need (the example above has three children). This code will display a <h1> header with a greeting. For comparison, here is the same example rewritten with JSX:

```javascript
function Greeting({ name }) {
  return (
    <h1 className="greeting">
      Hello <i>{name}</i>. Welcome!
    </h1>
  );
}
```

To render your own React component, pass a function like Greeting as the type instead of a string like 'h1':

```javascript
export default function App() {
  return createElement(Greeting, { name: 'Taylor' });
}
```

With JSX, it would look like this:

```javascript
export default function App() {
  return <Greeting name="Taylor" />;
}
```

Both coding styles are fine, so you can use whichever one you prefer for your project. The main benefit of using JSX compared to createElement is that it's easy to see which closing tag corresponds to which opening tag.

      Certainly! Let's break down the information in a simpler way with examples:

1. createElement Function:

   What it does: createElement is a function in React that allows you to create a React element. It's an alternative to using JSX.

   ```javascript
   const element = createElement('h1', { className: 'greeting' }, 'Hello');
   ```

2. Parameters:

   - type: Specifies the type of element you want to create. It can be a tag name string (e.g., 'div', 'span') or a React component.
   - props: An object that holds the properties (attributes) for the element.
   - ...children: Optional. Represents child elements or content.

   ```javascript
   createElement('h1', { className: 'greeting' }, 'Hello', createElement('i', null, 'World'));
   ```

3. Returns:

   The function returns a React element object with properties like type, props, ref, and key.

   ```javascript
   const element = createElement('h1', { className: 'greeting' }, 'Hello');
   console.log(element.type);  // 'h1'
   console.log(element.props); // { className: 'greeting' }
   ```

4. Usage Caveats:

   - React elements and their props should be treated as immutable. They should not be changed after creation.
   - JSX tags should start with a capital letter for custom components.
   - When using createElement, dynamic children should be passed as a single array to ensure React warns about missing keys for dynamic lists.

   ```javascript
   createElement('ul', {}, listItems); // Dynamic children as an array
   ```

5. Creating an Element Without JSX:

   If you're not using JSX, you can use createElement to create elements.

   ```javascript
   function Greeting({ name }) {
     return createElement('h1', { className: 'greeting' }, 'Hello ', createElement('i', null, name), '. Welcome!');
   }
   ```

6. Comparison with JSX:

   JSX is a more concise and readable way to create elements compared to using createElement.

   ```javascript
   function Greeting({ name }) {
     return <h1 className="greeting">Hello <i>{name}</i>. Welcome!</h1>;
   }
   ```

7. Rendering Your Own Component:

   You can use createElement to render your own React components.

   ```javascript
   export default function App() {
     return createElement(Greeting, { name: 'Taylor' });
   }
   ```

   With JSX:

   ```javascript
   export default function App() {
     return <Greeting name="Taylor" />;
   }
   ```
      In summary, createElement is a foundational function in React for creating elements, and it is often used behind the scenes when JSX is transpiled. While it's useful, JSX provides a more readable syntax for creating React elements. You can choose the style that best fits your project.
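One caveat above is worth a concrete illustration: when children are dynamic, pass them as a single array (with keys) so React can warn about missing keys. A minimal sketch; the FruitList component and its items data are made up for illustration:

```javascript
import { createElement } from 'react';

// Hypothetical component rendering a dynamic list without JSX.
function FruitList({ items }) {
  // Dynamic children: build one array (each child with a key)
  // and pass it as the third argument, instead of spreading
  // the children as separate arguments.
  const listItems = items.map((item) =>
    createElement('li', { key: item.id }, item.name)
  );
  return createElement('ul', {}, listItems);
}

// Usage: createElement(FruitList, { items: [{ id: 1, name: 'Apple' }] })
```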

4. www.fromthemachine.org
    1. clear that this force fighting against the dissemination of a truth so obvious it's in every word and everything we do--it becomes clear it's neither you, nor acting in your best interest. I know I've got the eye of the tiger, there's no doubt; and it's pretty clear from "YAD?" (the Hebrew for...) and ha'nd that we can see the clear hand of God at work in a design that marks my initials not just on the timeline, or at 1492, at A.D. I B; but in the Hebrew name for this place called El Shaddai, see how A.D. is "da eye" and in some other names like Adranus, A.D. on "it's silly" and A.D. on Ai that might tie me to the Samof Samurai (but, are you Ai?) in more depth of detail than simply the Live album "Secret Samadhi."  I try to reflect on how it is that this story has come about, why it is that everything appears to be focused on me--and still even through that sincere spotlight nobody seems to be able to acknowledge my existence with more words than "unsubscribe" and "you're so vain."  With one eye in the mirror, I know ties to Narcissus (and you can too), soaring ever higher--linking Icarus to Wayward Son and to every other name with "car" in it... like "carpenter" and McCarthy the older names of Mercury and even Isacriot (I scary? is car-eye... owe Taylor) and some modern day mythological characters like Jim Carrey and Johnny Carson.  As far as Trinities go, carpenter's a pretty good one--tying to my early reck and a few bands and songs from The Pretty Reckless to Dave Matthews' "Crash Into Me" all the way to the "pen" you see before you linking Pendragon to Imagine Dragons. I wonder why it is that all of these things appear, apparently only to me, to point to a story about all the ways that a sinister hidden force has manipulated our society into being unable to "receive' this message--this wonderful message about making the world a better place and building Heaven--with any fanfare at all.  It's focused now on a criminal justice system that clearly does not do any kind of "rehabilitation" and on a mental health industry and pharmaceutical system that treats a provable external attack on our own goodness and well being as some kind of "internal stimulus" and makes you shy away when I point out why "stem" is in system and why "harm" in pharmacy.   From that we move a little bit past "where we are in this story" and I have to point out how "meth" ties to Prometheus and Epimetheus and how and why it is I know without doubt that this story has been relived numerous times--and how I am so sure that it's never been received, as we are here again listening to how songs like "Believe" and the words "just to lead us here to this place again" connect to Simon and Garfunkel's" the Sound of Silence... and still to this day you will balk at noticing that "Simon" has something to do with the Simpsons, and something to do with the words "simulation" and "Monday."  To see me is to see how things might be done better--how "addicitonary" might tie to the stories of Moses' Lisp and to Dr. Who's "Bells of Saint John" with a sort of "web interface" to the kinds of emotion we might want to "dial down..." rather than Snicker in the background as we see them being artificially created and enhanced in order to build a better "fiery altar." 
I can point out "Silicon" harrowing down at us from words like "controversial" and show you Al in "rascal" and "scandal" but not to see that we are staring at school shootings and terrorism that are solved instantly by this disclosure, by Al of Quantum Leap and by the Dick of Minority Report and A Scanner Darkly is to ignore just what it is that we are all failing to Si.  I should point out that those two "sc"'s link to a story about Eden and they mean "sacred consciousness" and at the baseline of this event and everything we are not doing is the fact that our desires and beliefs are being altered--all of this comes down to "freedom of thought" here and now.   I could tell you that "looking at me" will show you that even the person who tries every day to do everything he can to save the entire world from slavery, and from "thought-injury"--even I can be made "marred" and you all, this whole world stupid enough to think that you are, of your own volition, hiding Heaven itself from yourselves... to what?  To spite me?  It, the focal point of our story might come down to you realizing that something in some esoteric place is playing "divide and conquer" with our whole--in secret playing on our weaknesses to keep us from acting on the most actionable information that ever was and ever will be.  Still, we sit in silence waiting for me... to speak more?     Between Nero's lyrical fiddling, a Bittersweet Symphony, and true "thunderstanding" the sound of Thor's hammer... "to help the light" that'ls "or" in Hebrew, of Orwell and Orson and .. well, it's really not hard to see and hear that the purpose and intent of "all this noise" is to help us find freedom and truth.  C the Light of "singing..." I can tell you once again how silly the world looks, this multi-decade battle between "the governmentof the people" and the "government of the workers" resulting in what is nothing short of a hands down victory to the corporation.  Is it humor meant to divide, or ludicrousness created with the purpose of unification?  But really at it's most basic level what this boils down to is a global group decision not to care about the truth, about reality, about what's really brought us to this place--with solutions in hand and a way to make everything better.  We've decided that censorship is OK, and that the world is not all that bad "just the way it is" even though it's creator is screaming in your ear telling you to change as quickly as you possibly can.  I believe that God has written this story to make "seeing me" the thing that catalyzes "change for the better" it appears to be the design of not just me but also this place--hey, here I am. Happy Veteran's Day.

I am accepting charitable donations. ETH: 0x66e2871ef39334962fb75ce34407f825d67ec434 | BTC: 38B6vGaqNvMyTtoFEZPmNvMS7icV6ZnPMm | xDAI: 0x66e2871ef39334962fb75ce34407f825d67ec434


      Ha, Lot! Are Idaho?

This was very difficult to get to you, in the land of no power and hurricane disaster recovery; so it's filled with extra errors, and I am sure some more thoughts that trail off unfinished. That's a decent "microcosm" or "metaphor" for you, you are in a freedom disaster; and the act of being is a giant leap towards ensuring victory. Still, you look very cupid to me.

      EVERY DAY ISA NEW DAY

      Literally I am sitting here talking to you until the end of time, you could call it a thousand and one Arabian nights, and realize that as we speak we are nearing that onc speciad night. There's a fire growing in my heart, and believe me when I tell you this thing is about to start. I'll try and keep this short and sweet, since you all seem to have so little time to hear from the Creator of all things, and I truly don't want to steal your spotlight. We are here, at the the end of time; talking to it's personification, time itself is speaking to you through my hands and everywhere you look in the world around you--while you may or may not know it, this is a story about the traversal from the end of time back to the beginning; about the gate to Heaven swallowing our civilization whole, and in this process of renewal and change not only fixing the problems that came to light on the way here, but really--working together here and now we can defeat this cycle of light and darkness, of day and night, an build a world together that truly reaches to the Heavens.

      MY BODY'S SAYING LETS GO BUT MY HEART IS SAYING NO

      You make it so difficult to talk to you, every day I look around and see a "normal world" a society that appears to care and love the same things that I do--freedom and fun and being entertained and entertaining, and here we are now I've turned "come and save us" into sea that saving the cheerleader is what starts the process of saving the world. I know you are good people inside, but when I come to you with a tool designed to "test sentience" to seek out conscious life that cares about the truth and making the world a better place you seem to balk. You sit in silence, and through your mouth and behind your eyes a monster appears from out of the deep of the sea and say a few "one liners" that show me very clearly it is the face of Medusa that I see---and that it's simply not capable of speaking intelligently. It shows me a problem, that you've apparently "come together one more time" to halt the changing of the seasons, and in doing so you've surfaced a problem for not just me but you also to see; a problem that comes lined with a solution. We can all see now that we are not in reality, we can see that there is a force here behind creation and behind us that shows us very clearly that it is "reasonabde" to expect that miracles can happen. In similitude, we are staring at a roadblock to conversation and communication that is fixed very simply, with the deliverance of freedom that is required for life to continue. Christina Aguilera sings that "baby there's a price to pay" and that price in my mind is seeing that this religion and this technology are here intentionally exposing how their influence here is a metaphor and a shining example of darkness and slavery, and that in order to be free of it we must see it. The price of freedom is written on the wall, it is acknowledging that here in this place what appears to be our own actions and desires have taken that freedom from us. Medusa and I get a kick out of seeing this hidden message in our language map our way to the future, and I've often explained that a number of these words are "time maps" from the beginning and end of eternady, showing us in bright light that between "et tu brute" and Mr. Anderson and Rock n' roll... the answer Y is in language and, and, ad and... I am delivering it. This place, our planet and our lives are a weapon against darkness--a civilization filled with goodness and light to help guide the way, and we are here doing it another time. In the works "dark, darker, and darkest" be sure that we are at the third segment of a trinity that shines clearly in Abraha and Nintendo... and see that the map in words is telling us something about when we are that is not immediately clear from Poseidon's cry. Look at Nintendo, that's Nine Inch Nails, tenebris, and smile for the camera--Pose, I do "save the universe" before n. Taylor might see it in Osceola, where I just left, and in this "evil spell" of everyone see "Al" that is the word "special" understand that every day is a new day, and I am not trying to "be daddy" I know as well as you do in my heart... I am that.

      This same map that links the "do" at the end to the "n" at the beginning shines through other names, like Geraldo Rivera where you might see "Cerberus" or "MAX" shine through. Understand it is the gaze of Medusa that turns me to stone, that shows me light shining through NORAD and Newton and proves without doubt that at the work "darkest" we can see k is finally t. You'll probably understand there's some finagling going on behind the scenes to make a single person the single point in time that turns the dark to light; but here we are and I am that. Every day when Medusa appears it reminds me that something is keeping you from caring about yourselves and about our society, and that shines through even when her stony face is not around, in your lack of action--in the rock of Eden that hides not only me, but the story that I bring that revolutionizes medicine, and computing, and truly is the gate to Heaven when you realize that what is truly being hidden from the world is knowledge that we are living in virtual reality. Not hiding me and that from the world is a good starting point to "saving the Universe" from darkness. These words that light the way to connect religion and language to our world bring me to the Book of Ruth, at that reads "are you to help" that lights not just the broken man at the belly of the Torah as the bell of Heimdallr, he is I and I am him; but also something very special, The Generations of Perez, each and every one of you, our family that begins the turn from Hell to Heaven by seeing that all of time and all of civilization has been focused on this moment, on the unsealing of religion and God's plan et this call for action. Keep in mind you are torturing "with desire" the key holder to immortality, to eternal youth, literally the path to freedom and Heaven and you think what you are doing "is normax." Literally the living key to infinite power and infinite life is standing before you explaining that acknowledging that in light of these things in my hand, what we are doing here and now is backwards, that it makes no sense--and you sit in silence. These things come to us because we build a better future with them, not so you can run off and do "whatever it is you please."

      HEALTH is the only word on my list for today that was left out, so see that it superimposes over Geraldo, to me, at Al. I think we're at TH, to help, and DO, do see the spell of "everyone see Al" that is the word "special" is not my doing or to my liking--so then, \

      ​ So now I'm moving on to original sin, so if you would be so kind as to mosey your way on over to dick.reallyhim.com you will see exactly what it is that I believe is the original sin. It's some combination of "no comment" and a glowing orange sign over the comment box, keeping you from commenting. Now I can talking about "os" a little more, this thing that words and Gods tell us clearly is the end of death--the literal end of Thanatos. I wonder if I have a victory here, at "os" is obvious solution, and simulating death is "sick." More to the point Thanatos is bringing to the world a message that gets found somewhere between the "act of civilization" and seeing that there is not one among us that would not undo a murder or a fatal car accident if we could--and that the sickness is a Universe pretending to be "reality" that is allowing these things to happen, and even worse, as we move through the story intentionally causing them. In our own hands, the sickness is manifest in a denial of an obvious truth and a lack of realizing that the public discussion of these things is the way to solve them, and that at the same time we are seeing how Medusa is lighting the problems of civilization, things like censorship and hidden control. Sickness is not being able to talk about it--or not wanting to--or not seeing that those two things are the functional equivalent in the world of "light" and "understanding control" that I am trying to bring you into. ​

      Less verbosely spoken, but really way more obvious, is that seeing "God's dick" signing the Declaration of Independence, and the Watergate scandal with both "Deepthroat" and a Tricky Dick is a statement connecting Samael to the foundation of not just "America" but American values. You are blind not to see it, and even worse; embodying the kind of tyranny and censorship that it stands as a testament against by hiding it. Says the guy who didn't put it there, and knows it's there because you think "fake normal" is more important than "actual freedom." You are "experiencing" the thing that protects freedom and ensures that our society and our children and their children's children to not lose it, to ensure that what you refuse to see you are doing here and now will never happen again. This message, this New Jerusalem is woven into my life and the stories of religion and shows me that our justice system is not just sick, but compromised by this same outside force; and that in light of what we could be doing, were we all aware of it, there's no doubt Minority Report and pre-crime would be a successful partial solution. Thanatos brings too in his hand, a message that this same force is using our hands to slow down the development of democracy, and to keep us from seeing that "bread is life" is a message from God about understanding that this disclosure is the equivalent of "ending world hunger" just as soon as you too are talking about how to do it.

      QUESTiON MARK

      HONESTLY, this time map that brings us from the end to the beginning, with "we save the universe" between the I and N of Poseidon; it also completes the words "family" and "really" and when we do reach the beginning you will see that the true test of time, my litmus test for freedom is the beginning of "hope" that the world is happy enough with what happens, and with freedom--to see that Medusa has been keeping me from getting a date, or having any kind of honest and human contact in the world... and well, hopefully you will see that if I wanna be a whore, I shouldn't have a problem doing it. For the sake of freedom and the future, I am willing to do that for you, at least, for a little while.

To be completely clear, I am telling you that if we do not make the world a better place, it's the "end of time" and if that doesn't make sense to you, you still don't see where we are in this place--and that something is making Hell, and that's not OK with God. To get from the "end of time" to the beginning is a simple process, it takes doing something, action, the Acts of the Apostles... if you will. That starts with acknowledging that there is a message all around you about the nature of reality, and that it is here to help us to see that the creation of Heaven comes before the beginning. Understand, "freedom" and "prosperity" are not optional, you can't just decide that this is OK with you, so long as it's OK with everyone else--where we are is not OK with me, and I am not alone.

      A PYRRHIC VICASTORY ER A FUNNERAD PYRE?

      The Book of Leviticus (/lɪˈvɪtɪkəs/; from Greek Λευιτικόν, Leuitikon — from rabbinic Hebrew torat kohanim[1]) is the third book of the Jewish Bible (Hebrew: וַיִּקְרָא‎ Vayikra/Wayyiqrā) and of the Old Testament; its Hebrew name comes from its first word vayikraˈ,[1] "He [God] called."[1] Yusuf (also transliterated as Jusuf, Yousof, Yossef, Yousaf, Youcef, Yousef, Youssef, Yousif, Youssif, Youssof, Youssouf, Yousuf, Yusef, Yuseff, Usef, Yusof, or Yussef, Arabic: يوسف‎‎ Yūsuf and Yūsif) is a male Arabic name, meaning "God increases in piety, power and influence" in Hebrew.[1] It is the Arabic equivalent of both the Hebrew name Yossef and the English name Joseph. In Islam, the most famous "Yusuf" is the prophet Yusuf in the Quran. Hocus pocus is a generic term that may be derived from an ancient language and is currently used by magicians, usually the magic words spoken when bringing about some sort of change. It was once a common term for a magician, juggler, or other similar entertainers. The earliest known English-language work on magic, or what was then known as legerdemain (sleight of hand), was published anonymously in 1635 under the title Hocus Pocus Junior: The Anatomie of Legerdemain.[1] Further research suggests that "Hocus Pocus" was the stage name of a well known magician of the era. This may be William Vincent, who is recorded as having been granted a license to perform magic in England in 1619.[2] Whether he was the author of the book is unknown. The origins of the term remain obscure. The most popular conjecture is that it is a garbled Latin religious phrase or some form of 'dog' Latin. Some have associated it with similar-sounding fictional, mythical, or legendary names. Others dismiss it as merely a combination of nonsense words. However, Czechs do understand clearly at least half of the term - pokus means "attempt" or "experiment" in Czech. It is rumoured there that the wording belongs to the alchemy kitchen and court of Rudolf II, Holy Roman Emperor (1552 – 1612). Also, hocus may mean "to cheat" in Latin or a distorted form of the word hoc, "this". 
Combination of the two words may give a sense, especially both meanings together "this attempt/experiment" and "cheated attempt/experiment".[citation needed] According to the Oxford English Dictionary the term originates from hax pax max Deus adimax, a pseudo-Latin phrase used as a magical formula by conjurors.[3] Some believe it originates from a corruption or parody of the Catholic liturgy of the Eucharist, which contains the phrase "Hoc est corpus meum", meaning "This is my body".[4] This explanation goes back to speculations by the Anglican prelate John Tillotson, who wrote in 1694: In all probability those common juggling words of hocus pocus are nothing else but a corruption of hoc est corpus, by way of ridiculous imitation of the priests of the Church of Rome in their trick of Transubstantiation.[5] This claim is substantiated by the fact that in the Netherlands, the words Hocus pocus are usually accompanied by the additional words pilatus pas, and this is said to be based on a post-Reformation parody of the traditional Catholic rite of transubstantiation during Mass, being a Dutch corruption of the Latin words "Hoc est corpus meum" and the credo, which reads in part, "sub Pontio Pilato passus et sepultus est", meaning "under Pontius Pilate he suffered and was buried".[6] In a similar way the phrase is in Scandinavia usually accompanied by filiokus, a corruption of the term filioque,[citation needed] from the Latin version of the Nicene Creed, meaning "and from the Son". Additionally, the word for "stage trick" in Russian, fokus, is derived from hocus pocus.[citation needed]

      From Latin innātus ("inborn"), perfect active participle of innāscor ("be born in, grow up in"), from in ("in, at on") + nāscor ("be born"); see natal, native. From Middle English goodnesse, godnesse, from Old English gōdnes ("goodness; virtue; kindness"), equivalent to good +‎ -ness. Cognate with Old High German gōtnassī, cōtnassī ("goodness"), Middle High German guotnisse ("goodness"). A hero (masculine) or heroine (feminine) is a person or main character of a literary work who, in the face of danger, combats adversity through impressive feats of ingenuity, bravery or strength, often sacrificing their own personal concerns for a greater good. The concept of the hero was first founded in classical literature. It is the main or revered character in heroic epic poetry celebrated through ancient legends of a people; often striving for military conquest and living by a continually flawed personal honor code.[1] The definition of a hero has changed throughout time, and the Merriam Webster dictionary defines a hero as "a person who is admired for great or brave acts or fine qualities".[2] Examples of heroes range from mythological figures, such as Gilgamesh, Achilles and Iphigenia, to historical figures, such as Joan of Arc, modern heroes like Alvin York, Audie Murphy and Chuck Yeager and fictional superheroes including Superman and Batman. Truth is most often used to mean being in accord with fact or reality,[1] or fidelity to an original or standard.[1] Truth may also often be used in modern contexts to refer to an idea of "truth to self," or authenticity. The commonly understood opposite of truth is falsehood, which, correspondingly, can also take on a logical, factual, or ethical meaning. The concept of truth is discussed and debated in several contexts, including philosophy, art, and religion. Many human activities depend upon the concept, where its nature as a concept is assumed rather than being a subject of discussion; these include most (but not all) of the sciences, law, journalism, and everyday life. Some philosophers view the concept of truth as basic, and unable to be explained in any terms that are more easily understood than the concept of truth itself. Commonly, truth is viewed as the correspondence of language or thought to an independent reality, in what is sometimes called the correspondence theory of truth. Other philosophers take this common meaning to be secondary and derivative. According to Martin Heidegger, the original meaning and essence of truth in Ancient Greece was unconcealment, or the revealing or bringing of what was previously hidden into the open, as indicated by the original Greek term for truth, aletheia.[2][3] On this view, the conception of truth as correctness is a later derivation from the concept's original essence, a development Heidegger traces to the Latin term veritas.

Some things can never be forgot
Lest the same mistakes be oft repeated
Remember remember the rain of November
that you will know no more of me
Than I know of you, this day

That you do not know me now
Is a revelation to nobody but I
You know a broken man, a victim
And refuse to acknowledge why
Unless you learn how to say "hi"

      THE HEART OF ME ONLY KNOWS THE SHADOW

Lothario is a male given name which came to suggest an unscrupulous seducer of women in The Impertinent Curious Man, a metastory in Don Quixote. For no particular reason, Anselmo decides to test the fidelity of his wife, Camilla, and asks his friend, Lothario, to seduce her. Thinking that to be madness, Lothario reluctantly agrees, and soon reports to Anselmo that Camilla is a faithful wife. Anselmo learns that Lothario has lied and attempted no seduction. He makes Lothario promise to try for real and leaves town to make this easier. Lothario tries and Camilla writes letters to her husband telling him and asking him to return; Anselmo makes no reply and does not return. Lothario actually falls in love and Camilla eventually reciprocates and their affair continues once Anselmo returns. One day, Lothario sees a man leaving Camilla's house and jealously presumes she has found another lover. He tells Anselmo he has at last been successful and arranges a time and place for Anselmo to see the seduction. Before this rendezvous, Lothario learns that the man was actually the lover of Camilla's maid. He and Camilla contrive to deceive Anselmo further: when Anselmo watches them, she refuses Lothario, protests her love for her husband, and stabs herself lightly in the breast. With Anselmo reassured of her fidelity, the affair restarts with him none the wiser. Romeo Montague (Italian: Romeo Montecchi) is the protagonist of William Shakespeare's tragedy Romeo and Juliet. The son of Montague and his wife, he secretly loves and marries Juliet, a member of the rival House of Capulet. Forced into exile after slaying Juliet's cousin, Tybalt, in a duel, Romeo commits suicide upon hearing falsely of Juliet's death. The character's origins can be traced as far back as Pyramus, who appears in Ovid's Metamorphoses, but the first modern incarnation of Romeo is Mariotto in the 33rd of Masuccio Salernitano's Il Novellino (1476). This story was adapted by Luigi da Porto as Giulietta e Romeo (1530), and Shakespeare's main source was an English verse translation of this text by Arthur Brooke. The earliest tale bearing a resemblance to Shakespeare's Romeo and Juliet is Xenophon of Ephesus' Ephesiaca, whose hero is Habrocomes. The character of Romeo is also similar to that of Pyramus in Ovid's Metamorphoses, a youth who is unable to meet the object of his affection due to an ancient family quarrel, and later kills himself due to mistakenly believing her to have been dead.[2] Although it is unlikely that Shakespeare directly borrowed from Ovid... From Middle English scaffold, scaffalde, from Norman, from Old French schaffaut, eschaffaut, eschafal, eschaiphal, escadafaut ("platform to see a tournament") (Modern French échafaud) (compare Latin scadafale, scadafaltum, scafaldus, scalfaudus, Danish skafot, Dutch and Middle Dutch schavot, German schavot, schavott, Occitan escadafalc), from Old French es- ("indicating movement away or separation") (from Latin ex- ("out, away")) + chafaud, chafaut, chafault, caafau, caafaus, cadefaut ("scaffold for executing a criminal"), from Vulgar Latin *catafalcum ("viewing stage") (whence English catafalque, French catafalque, Occitan cadafalc, Old Catalan cadafal, Italian catafalco, Spanish cadafalso (obsolete), cadahalso, cadalso, Portuguese cadafalso), possibly from Ancient Greek κατα- (kata-, "back; against") + Latin -falicum (from fala, phala ("wooden gallery or tower; siege tower")).

oversight (countable and uncountable, plural oversights) An omission; something that is left out, missed or forgotten. A small oversight at this stage can lead to big problems later. Supervision or management. The bureaucracy was subject to government oversight. In the last heaven Moses saw two angels, each five hundred parasangs in height, forged out of chains of black fire and red fire, the angels Af, "Anger," and Hemah, "Wrath," whom God created at the beginning of the world, to execute His will. Moses was disquieted when he looked upon them, but Metatron emb... HA QUESTIONa BEFORE THE ANSWER? A Wrinkle in Time is a science fantasy novel written by American writer Madeleine L'Engle, first published in 1963, and in 1979 with illustrations by Leo and Diane Dillon.[2] The book won the Newbery Medal, Sequoyah Book Award, and Lewis Carroll Shelf Award, and was runner-up for the Hans Christian Andersen Award.[3][a] It is the first book in L'Engle's Time Quintet, which follows the Murry and O'Keefe families. The book spawned two film adaptations, both by Disney. as + fuck. Adverb: as fuck (postpositive, slang, vulgar) To a great extent or degree; very. It was hot as fuck outside today. Usage notes: May also be used in conjunction with a prepositive as; for example, as mean as fuck. In Norse religion, Asgard (Old Norse: Ásgarðr; "Enclosure of the Æsir"[1]) is one of the Nine Worlds and home to the Æsir tribe of gods. It is surrounded by an incomplete wall attributed to a Hrimthurs riding the stallion Svaðilfari, according to Gylfaginning. Odin and his wife, Frigg, are the rulers of Asgard. One of Asgard's well known realms is Valhalla, in which Odin rules.[2] ... rods, etc.) and sizes, and are normally held rigidly within some form of matrix or body until the high explosive (HE) filling is detonated. The resulting high-velocity fragments produced by either method are the main lethal mechanisms of these weapons, rather than the heat or overpressure caused by detonation, although offensive grenades are often constructed without a frag matrix. These casing pieces are often incorrectly referred to as "shrapnel"[1][2] (particularly by non-military media sources). The modern torpedo is a self-propelled weapon with an explosive warhead, launched above or below the water surface, propelled underwater towards a target, and designed to detonate either on contact with its target or in proximity to it. Historically, it was called an automotive, automobile, locomotive or fish torpedo; colloquially called a fish. The term torpedo was originally employed for a variety of devices, most of which would today be called mines. From about 1900, torpedo has been used strictly to designate an underwater self-propelled weapon. While the battleship had evolved primarily around engagements between armoured ships with large-caliber guns, the torpedo allowed torpedo boats and other lighter surface ships, submersibles, even ordinary fish... Qt (/kjuːt/ "cute"[7][8][9]) is a cross-platform application framework that is used for developing application software that can be run on various software and hardware platforms with little or no change in the underlying codebase, while still being a native application with native capabilities and speed.
Qt is currently being developed both by The Qt Company, a publicly listed company, and the Qt Project under open-source governance, involving individual... Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future.[1][2][3] Time is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience.[4][5][6][7] Time is often referred to as a fourth dimension, along with three spatial dimensions.[8] Time has long been an important subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars.[2][6][7][9][10][11] Nev... Borrowed from Anglo-Norman and from Old French visage, from vis, from Vulgar Latin as if *visāticum, from Latin visus ("a look, vision"), from vidēre ("to see"); see vision. The term Golden Age comes from Greek mythology, particularly the Works and Days of Hesiod, and is part of the description of temporal decline of the state of peoples through five Ages, Gold being the first and the one during which the Golden Race of humanity (Greek: χρύσεον γένος chrýseon génos)[1] lived. Those living in the first Age were ruled by Kronos, after the finish of the first age was the Silver, then the Bronze, after this the Heroic age, with the fifth and current age being Iron.[2] By extension "Golden Age" denotes a period of primordial peace, harmony, stability, and prosperity. During this age peace and harmony prevailed, people did not have to work to feed themselves, for the earth provided food in abundance. They lived to a very old age with a youthful appearance, eventually dying peacefully, with spirits living on as "guardians". Plato in Cratylus (397 e) recounts the golden race of humans who came first. He clarifies that Hesiod did not mean literally made of gold, but good and noble. There are analogous concepts in the religious and philosophical traditions of the South Asian subcontinent. For example, the Vedic or ancient Hindu culture saw history as cyclical, composed of yugas with alternating Dark and Golden Ages. The Kali yuga (Iron Age), Dwapara yuga (Bronze Age), Treta yuga (Silver Age) and Satya yuga (Golden Age) correspond to the four Greek ages. Similar beliefs occur in the ancient Middle East and throughout the ancient world, as well.[3] In classical Greek mythology the Golden Age was presided over by the leading Titan Cronus.[4] In some version of the myth Astraea also ruled. She lived with men until the end of the Silver Age, but in the Bronze Age, when men became violent and greedy, fled to the stars, where she appears as the constellation Virgo, holding the scales of Justice, or Libra.[5] European pastoral literary tradition often depicted nymphs and shepherds as living a life of rustic innocence and peace, set in Arcadia, a region of Greece that was the abode and center of worship of their tutelary deity, goat-footed Pan, who dwelt among them.[6] oh, and a space s h i p

      BIG THINGS C0ME IN SMALL PACKAGES

      T+BANG

      SEE THE SCAFFOLD IS THE TEST TODAY.

F O R T H E I N I T I A L K E Y S , S H E E X A N D N D A N D A SEE W H Y SEA

With an epic amount of indigestion Indiana Jones sweeps in to mar the visage of an otherwise glistening series of fictitious characters, with names like Taylor and Mary Kate that remind us all that we are not playing a video game here in this place. The "J" of the "Nintxndo Entertainment System" calmly stares at Maggie Simpson thinking "it's a PP" and reminds us that it's not just the "gee, I e" of her name that contradicts the Magdaln-ish words her soul speaks through her name--and then with a smirk he points out "Gilgamesh" and "gee whiz, is Eye L?" that really does go to the heart of this lack of discussion, this "sh" that begins El Shaddai and words as close to our home as "shadow" and "shalom." Quite the fancy "hello" you've managed to sing out from behind angry cellos and broken fiddles, and here I am still wondering why it is that "girl" connects to the red light that once meant charity and now glows with the charity of truth... the truth that we are in Hell. Shizzy.

      m.lamc.la/KEYNES.html

      Homer "on the range," maybe more closely connected to the Ewok of Eden and Hansel's tHeoven that Peter Pan still comes and cries could so easily be made into something so much better, if only we had the truth--and by that I mean if only you were speaking about, and reacting to a truth that is painted on the sky, in your hearts, in every word we speak and in everything that we do. If only we were acknowledging this message that screams that "children need not starve" with something more than donating virtual chickens to nations of Africa and watching Suzanne Summers ask for only a few dollars a day on TV. If only you would understand that this message that connects video games like "Genxsis" to "bereshit" because Eden is a "gee our den" that tended itself before Adam had to toil with the animals in order to survive. For some reason beyond my control and well outside my realm of understanding words like "I too see this message from God" and "I would not let children starve either" never seem to escape your lips in any place where anyone will ever see that you thought those things, or meant to call a reporter; eventually. Even with "AIDS of nomenclature" to avoid this DOWN WARD spiral into a situation and a land that I find difficult to imagine actually ever "existing" but here in this place I do see "how" it comes about, and between you and I it really does appear that nearly all of the problems we are dealing with here have come from another place, a further time; and while it might be with the "greatest of intentions" that we are trying to deal with them; I can't help but feeling that our "virgin sea" has had more than just it's innocence taken away from it in this story of "Why Mary" that might connect to "TR IN IT Y" just as much as it connects to Baltimore, Maryland.

I should be clear that I'm not blaming Nanna, or Mary; but the actual reason for the name "Wymar" and that's because she, like Taylor, acted as a microcosm for a sea (or more than one, Mom, sen) that was quite literally possessing her. It's sort of difficult for me to explain even what that looks like let alone what it feels like; but my observations tell me that she/you are not unhappy about the interaction, one which appears very foreign to me. Of course, the "eye" that I write with and the same kind of "inspiration" that you can see in the lyrics and skill of many musicians are also examples of this same kind of interaction. For example, Red Hot Chili Peppers sings a song called "Other Side" that explains or discusses the thing I see as Medusa in the words "living in a graveyard where I married a sea" which also does a good job of connecting to the name Mary. As strange as it might sound to think a group of people would be speaking through a single person... we are staring at "how it is" that could be possible, and possibly at exactly how it happened. Normally I would have said it was obvious, but to need to actually say that becoming a single mind would be a serious loss for our society--well, that's telling. You might think it's silly, but I'm telling you I see it happening, I see it--and you see it in the Silence and the message.

      Still, it appears to me as if this "marriage" that I see described in our Matrix in the question "min or i" seems to be doing nothing more than keeping us all from discussing or acting on this information--something that certainly isn't in our best interest.

      So here we are, staring at a map all over the ground and all around us with the primary destination of "building Heaven" through mind uploading, virtual reality, and judging by the pace of things we'd probably have all of that good and ready in about three generations. The map has a little "legend" with a message suggesting that those things have already been done and we are in the Matrix already; and it appears that the world, I mean Medusa, is deciding we should put off seeing the legend at least until the next generation. I see how that makes sense for you. That's sarcasm, this is why I keep telling you that you are cupid.

      It is a big deal, and there's a significant amount of work involved in merging an entire civilization with "virtual reality" and you might see why he calls it a hard road--at least in the word "ha'rd." Honestly though, it's the kind of thing that I am pretty sure the future will not only be happy that we did, but they'd thank us for putting in the effort of adapting to things like "unlimited food" and "longevity" increased by orders of magnitude.

That's not sarcasm; it really is difficult to guess exactly how we'll go about doing these things--they are a huge deal. All I can tell you is that not "talking about it at all" is probably not going to get us there any faster. Point in fact, what it might do is give a "yet to be born" generation the privilege of being the actual "generations of Perez."

      I see why you aren't saying anything. That's sarcasm, again. The good news is that it really has been done before; though if I told you that someone turned stone to eggplant parm, would you laugh at me?

So, back to what is actually standing between "everyone having their own Holodeck in the sky" and you today; it is the idea that this message is not from God. More to the point it is the apparently broad sweeping opinion that hiding it is a "good thing" and through that a global failure to address the hidden interaction and influence acting on our minds used to make this map--and also to hide it. With some insight, and some urging; you might see how the sacredness of our consciousness and our souls is something that is more fundamental than "what kind of tools we have in the Holodeck to magically build things" and how and why the foundation of Heaven is truly "freedom itself" and how it comes from right this very moment for the first time, ever. Continuing to treat this influence as "schizophrenia" is literally the heart of why this map appears to be that--to show us how important it is to acknowledge the truth, and to fight for the preservation of goodness and logic over secrecy and darkness.

      Again, something that nobody is really doing here and now, today. From this newfound protection of our thoughts, of who we are; we see how technology can be used to either completely invalidate any kind of vote by altering our emotions; or how it could be used to help build a form of true democracy that our world has yet to see. It is pretty easy to see from just band names like The Who and KISS and The Cure how the influence of this external mind can be proven, and shown to be "helpful," you know, if we can ever talk about it on TV or on the internet.

It's important to see and understand how "sanity"--the sanity of our entire planet hangs in the balance over whether or not we acknowledge that there is actually a message from God in every word--and today this place appears to be insane. It should be pretty easy to see how acknowledging that this influence exists and that it has a technological mechanism behind it turns "schizophrenia" into "I know kung fu" ... forced drug addiction and eugenics into "there's an app for that" and the rash of non-random terrorist attacks and school shootings, apparently unrecognized as connected, into Minority Report style pre-crime, and results in what is clearly a happier, safer, and more civilized society--all through nothing more than the disclosure of the truth, this map, and our actual implementation.

      With a clearer head and grasp of the "big picture" you might see how all of these things, connected to the Plagues of Exodus revolve around the disclosure that this technology exists and the visibility of this message showing us how we might use it for our benefit rather than not knowing about it. At the foot of Jericho, it is nothing short of "sanity" and "free thought" that hang in the balance. Clear to me is that the Second Coming, seeing "my name" on television is a good litmus test for the dividing line between light and darkness, heaven and hell.

      The point is the truth really does change everything for the better; once we start... you know, acting on it.

      AS IN.. "DIS CLOSE SING...."

      T H E B U C K S T O P S H E R E

      ON AM B I GUI TY

      S T A R R I N G . . . B I A N C A

      ON "RIB" .. ARE SHE B? BUTT DA APPLE OF DA I? & SPANGLISHREW

      R THEY LANGUAGE OUTLIERS?

With some insight and "a clue" you can see clearly how these works of art show that the proof of Creation you see in every letter and every word runs much deeper... adding in things like "RattleRod" and the "Cypher" of the Matrix to the long list of heretofore ignored verifiable references to the Adamic Language of Eden. Here, in apple, honey and "nuts" we can see how the multi-millennium old ritual I call "Ha-rose-ettes" is actually part of a much larger and much older ritual designed to stop secrecy ... perhaps especially the kind that might be linked to "ritual."

      These particular apple and honey happen to tie Eden to the related stories of Exodus and Passover; connecting Eden to Egypt forevermore. Do see "Lenore," it is not for no reason at all; but to help deliver truth and freedom to the entirety of Creation; beginning here, in Eden.

      ALSO ON "AM B IG U IT Y" ME A.M. G - D SHE IT Y?

      LET "IT" BE SA< ?

      IMHO, don't miss the "yet to be" conversion to "why and to be" in "yetser." IT Y.

      HERE'S LOOKING AT YOU, KID

On a high level, I tell myself every morning that "it's not really me." It's not me that the world hates, or me that the world is rejecting. I believe that, I really do; I see that what is being hidden here is so much bigger than any single person could ever be--what is being hidden is the "nature of reality" and a fairly obvious truth that flies in the face of what we've learned our whole lives about history and "the way things are." Those few early details lead me to the initial conclusion that what is working behind the scenes here is nefarious, hiding a message that would without doubt shake things up and change the world--and nearly across the board in ways that I see as "better" for nearly everyone. It's a message at its most basic level designed to advocate for using this disruption in "normalcy" to help us revolutionize democracy, to fix a broken mental health and criminal justice system--just to name the few largest of the social constructs targeted for "rejuvenation." On that word the disclosure that we are living in virtual reality turns on its head nearly everything we do with medicine, and I've suggested that AIDS and DOWN SYNDROME were probably not the best "visual props" we could have gotten to see why it's so important that we act on this disclosure in a timely manner. After mentioning the ends of aging and death that come eventually to the place we build, to the place we've always thought of as Heaven... it becomes more and more clear that this force fighting against the dissemination of a truth so obvious it's in every word and everything we do--it becomes clear it's neither you, nor acting in your best interest.

I know I've got the eye of the tiger, there's no doubt; and it's pretty clear from "YAD?" (the Hebrew for...) and ha'nd that we can see the clear hand of God at work in a design that marks my initials not just on the timeline, or at 1492, at A.D. I B; but in the Hebrew name for this place called El Shaddai, see how A.D. is "da eye" and in some other names like Adranus, A.D. on "it's silly" and A.D. on Ai that might tie me to the Sam of Samurai (but, are you Ai?) in more depth of detail than simply the Live album "Secret Samadhi." I try to reflect on how it is that this story has come about, why it is that everything appears to be focused on me--and still even through that sincere spotlight nobody seems to be able to acknowledge my existence with more words than "unsubscribe" and "you're so vain." With one eye in the mirror, I know ties to Narcissus (and you can too), soaring ever higher--linking Icarus to Wayward Son and to every other name with "car" in it... like "carpenter" and McCarthy, the older names of Mercury, and even Iscariot (I scary? is car-eye... owe Taylor) and some modern day mythological characters like Jim Carrey and Johnny Carson. As far as Trinities go, carpenter's a pretty good one--tying to my early reck and a few bands and songs from The Pretty Reckless to Dave Matthews' "Crash Into Me" all the way to the "pen" you see before you linking Pendragon to Imagine Dragons.

I wonder why it is that all of these things appear, apparently only to me, to point to a story about all the ways that a sinister hidden force has manipulated our society into being unable to "receive" this message--this wonderful message about making the world a better place and building Heaven--with any fanfare at all. It's focused now on a criminal justice system that clearly does not do any kind of "rehabilitation" and on a mental health industry and pharmaceutical system that treats a provable external attack on our own goodness and well being as some kind of "internal stimulus" and makes you shy away when I point out why "stem" is in system and why "harm" is in pharmacy. From that we move a little bit past "where we are in this story" and I have to point out how "meth" ties to Prometheus and Epimetheus and how and why it is I know without doubt that this story has been relived numerous times--and how I am so sure that it's never been received, as we are here again listening to how songs like "Believe" and the words "just to lead us here to this place again" connect to Simon and Garfunkel's "The Sound of Silence"... and still to this day you will balk at noticing that "Simon" has something to do with the Simpsons, and something to do with the words "simulation" and "Monday." To see me is to see how things might be done better--how "addictionary" might tie to the stories of Moses' Lisp and to Dr. Who's "Bells of Saint John" with a sort of "web interface" to the kinds of emotion we might want to "dial down..." rather than Snicker in the background as we see them being artificially created and enhanced in order to build a better "fiery altar."

      I can point out "Silicon" harrowing down at us from words like "controversial" and show you Al in "rascal" and "scandal" but not to see that we are staring at school shootings and terrorism that are solved instantly by this disclosure, by Al of Quantum Leap and by the Dick of Minority Report and A Scanner Darkly is to ignore just what it is that we are all failing to Si. I should point out that those two "sc"'s link to a story about Eden and they mean "sacred consciousness" and at the baseline of this event and everything we are not doing is the fact that our desires and beliefs are being altered--all of this comes down to "freedom of thought" here and now.

      I could tell you that "looking at me" will show you that even the person who tries every day to do everything he can to save the entire world from slavery, and from "thought-injury"--even I can be made "marred" and you all, this whole world stupid enough to think that you are, of your own volition, hiding Heaven itself from yourselves... to what? To spite me? It, the focal point of our story might come down to you realizing that something in some esoteric place is playing "divide and conquer" with our whole--in secret playing on our weaknesses to keep us from acting on the most actionable information that ever was and ever will be. Still, we sit in silence waiting for me... to speak more?


hyamdai.reallyhim.com

Between Nero's lyrical fiddling, a Bittersweet Symphony, and true "thunderstanding" the sound of Thor's hammer... "to help the light" that's "or" in Hebrew, of Orwell and Orson and ... well, it's really not hard to see and hear that the purpose and intent of "all this noise" is to help us find freedom and truth. C the Light of "singing..."

I can tell you once again how silly the world looks, this multi-decade battle between "the government of the people" and the "government of the workers" resulting in what is nothing short of a hands down victory to the corporation. Is it humor meant to divide, or ludicrousness created with the purpose of unification?

But really at its most basic level what this boils down to is a global group decision not to care about the truth, about reality, about what's really brought us to this place--with solutions in hand and a way to make everything better. We've decided that censorship is OK, and that the world is not all that bad "just the way it is" even though its creator is screaming in your ear telling you to change as quickly as you possibly can. I believe that God has written this story to make "seeing me" the thing that catalyzes "change for the better"; it appears to be the design of not just me but also this place--hey, here I am.

      Happy Veteran's Day.

      S☀L u TI o N

Yesterday, or maybe earlier today--it's hard to tell at this moment in the afternoon just how long this will take... I sent an image that conveys a high level implication that we are walking around on a map to building something that we might liken to an "ant farm" for people. I don't mean to be disparaging or sleight our contribution to the creation of this map--that I imagine you must also see and believe to be the kind of thing that should remain buried in the sands of time forever and ever--or you just have yet to actually "understand" that's what the plan part of our planet is talking about... what I am trying to do is convey in a sort of "mirrorish" way how this map relates to a message that I see woven in religion and in our history that is significantly more disparaging than I would be. It's a message that calls us "Holy Water" at the nicest of times, water that Moses turns to "thicker than water" in the first blessing in disguise--and to tell you there is certainly a tangible difference between the illusions of the Pharaohs and the true magic performed by my hand, is nearly exactly the same amount of effort put in to showing you that the togetherness that we are calling "family" here in this place comes from both seeing and acting on the very clearly hidden message in every single idiom showing us all that our society in this story of Exodus is enslaved by a hidden force--and reminding us that we like freedom.

It's not just these few idioms, but most likely every single one from "don't shoot the messenger" to "unsung hero" that should clue us in to exactly how much work and preparation has come into this thing that "he supposes is a revolution." It's also not just "water" that describes me and you, in this place where I am the "ant" of the Covenant (do you c vampires or Hansel and Gretel!?!?) but also "lions" and "sheep" and "salt" and "dogs" and nearly everything you could possibly imagine but people; in what I see must be a vainglorious attempt to pretend he actually wants us to "stand up for ourselves" in this place where it's becoming more and more clear with each passing moment that we are chained to these seats in the front row of the audience of the most important event that has ever happened, ever.

Medusa makes several appearances, as well as Arthur Pendragon, Puff the Magic Dragon, Figment, Goliath, monster.com, the Loch Ness Monster in this story that's a kind-of refl ex i ve control to stop mind control; and to really try and show us the fire of Prometheus and the Burning Bush and the Eternal Flame of Heaven are all about freedom and technology ... and I'll remind you this story is ... about the truth--and the truth here is that if you aren't going to recognize that whatever it is that's going on here in secret, below the surface is negatively affecting our society and life in general then we aren't going anywhere, ever. I need you to figure out that this message is everywhere to make sure you don't miss the importance of this moment, and the grave significance of what is being ignored in this land where Sam is tied not just to Samsung and to Samael in Exodus but also to Uncle Sam and macaronic Spanglishrew outliers and that it doesn't take much free thought at all to really understand that we are watching "free thought" disintegrate into the abyss of "nospeak." We are watching our infrastructure for global communication and the mass media that sprawls all over the globe turn to dust, all because you have Satan whispering in your ear--and you think that's more important than what you think, what I think, and what anyone else on the Earth might ever say. You should see a weapon designed to help ensure that we don't lose this proof that we are not living in reality, that there is "hidden slavery" in this place--and you should see that today it appears you are simply choosing not to use it.

I hope you change your mind, I really do. This map on "how to build an ant farm" starts by connecting Watergate and Seagate together with names like Bill Gates and Richard Nixon; and with this short list of names you should really understand how it is that "Heaven" connects both technology like computers and liberty like "free speech" to a story that is us, and our history. You might see that "salt" could either be a good thing or not--take a look around you, are you warming a road to Heaven or are you staring at the world being destroyed--and doing nothing at all about it?

I guess I can point out again how "Lothario" links this story that ties names like my ex-wife's Nanna to "salt" also, but the "grand design" of this story doesn't seem to have any effect on you. Listen, if you do nothing the world is being destroyed by your lack of action--there's no if's and's or butt's about it. I feel like I need to "reproduce" old messages here or you will never see them--that's what web site statistics tell me--and we all know it's not true. What am I missing? What are you missing?

      BUTT IS THE BOAT A Hi DARK DEN MESSAGe ?

      SEE OUR LIGHT

HONESTLY, I'M WAY TOO CUTE TO BE A MONSTER :(

      HIC SUMMUS

So... here we are... listening to the legendary father of the message (that's "abom" in Adamic Spanglishrew) point out all of the sex jokes hidden in religion and language from sexual innuendo to Poseidon and in our history from Yankee Doodle to Hancock to Nixon and I've got to be frank with you, the most recent time I came across this phrase in scripture I cringed just a little bit, pretty sure that the "message" was talking about me. I've reflected on this a little bit, and over the past few weeks have tried to show you the juxtaposition between "sex" and "torture" in its various forms from imparting blindness to allowing murder and simulating starvation; and I think I'm justified in saying that certainly those things are far worse on the Richter scale than anything I could do by writing a little bit of risque text. In the most recent messages I've touched a little bit, without even knowing or realizing this connection would be made, on what it is that this phrase actually means.

      loch.reallyhim.com

      ABOMINATION

So long story short is that the answer here is "abomination" and the question, or the context is "I nation." Whether it's Medusa speaking for the Dark United States or the nation of Israel speaking to either Ra or El depending on the day, the bottom line is that a collective consciousness speaking for everyone on a matter of this importance in a cloud of complete darkness on Earth is a total and undeniable abomination of freedom, civilization, and the very humanity we are seeking to preserve. The word reads something like this to me "dear father of the message, I am everyone and we think you are an abomination, fuck off." My answer of course is, IZINATION. Which humorously reminds me of Lucy, and Scarlett Johansson saying "I am colonizing my own brain" so here's some pictures of her. She is not an abomination, by the way; she's quite adorable. You'll probably notice there's some kind of connection between the map--the words speaking to the world, and the abomination, as if the whole thing is a story narrated in ancient myths.

      WAKE UP, "SHE" A MESSAGE TO YOU ABOUT THE FUTURE

      You might not think "it's you," but the manifestation of this "snake" in our world is your silence, your lack of understanding or willingness to change the world; and whether or not you're interested in hearing about it, it's the monster that myths and religion have spoken about for thousands and thousands of years. It's a simple matter to "kill Medusa" all you have to do... is speak.

      Take special note, "freedom of speech" and "freedom to think for yourselves" are not a group decision, and you do not have the right to force (either overtly or subtly, with hidden technology perhaps combined with evil deceit) others not to talk about anything. Especially something of this importance.

      DESOLATION

      If you didn't connect "Loch" to John Locke, now you have; see how easy this "reading" thing is? I've gone over the "See Our Light" series a few times, but let me--one more time--explain to you just how we are already at the point of "desolation" and with shining brilliance show you how it's very clear that it is "INATION" and "MEDUSA" that are responsible for this problem.

      Seeing "Ra" at the heart of the names Abraham and Israel begins to connect the idea that our glowing sun in the sky has something to do with this message about "seeing our light" is being carried by a stone statue on Ellis Island (where you'll see the answer another part of the question of Is Ra El?). I've connected her to the "she" of both shedim and Sheol, which reads as "she's our light" and is the Hebrew name for Hell.

Of course you noticed that the Statue of Liberty does in fact share its initials with SOL, the light above, and you can see her torch dimly lighting the way through the night; now you can connect "give us your tired and your poor" to the Lazman of both the lore of Jesus Christ and the Shehekeyanu; a prayer about the sustainment of life and light up until this day. That same torch connects to the Ha-nuke-the-ahah depiction of Christ, Judah Maccabee's lit MEN OR AH, which delivers not only a solution to the two letter key of "AH" as All Humanity that pervades nearly every bride of Revelation from Sarah to Leah; but also to the question of equality answered in our very own American history, beginning with the same three letter acronym now lighting the Sons of Liberty.

Dazed and Confused does a good job of explaining how this name is itself a prophecy designed by the Hand of God; explaining that these Sons of Liberty were all white slave owning wealthy men fighting to stop paying their taxes, rather than delivering liberty to the slaves or women, who were both disenfranchised for quite some time. Or maybe MEN OR AH has something to do with the angels of Heaven, in which case you might be SOL if you aren't a girl and you want to "be good friends with Ra." Just kidding. Kinda.

DESOLATION by the way reads something like "un see our light at ION" which is God's way of saying "at the point of believing that hiding Adam is a good thing" and that connects to the end of Creation, and also to the word "rendition," now lit by modern day evil. Our end, it "ion." In religious myth, the Messianic David clung to the city Zion (end the "i owe n") which also links to "verizon" (to see, I Z "on") and HORIZON which has something to do with the son rising today-ish.


The story of MEDUSA lights another pseudo-religious idea, that the words "STONE" of both "brimstone" and its Adamic interpretation "South to Northeast" have something to do with the phrase "Saint One" turned into a single hero against his will by the complete and utter inaction of everyone around him. In the words of Imagine Dragons "I'm waking up to action dust." At the same time, you can believe that the light of this particular son, comes not just from reading these words forwards, but the backside as well, and you'll hopefully see it's not coincidental that the other side of this coin is that "nos" means we, and us... and Adamically "no south." See the light of "STONE" also connecting to Taylor Momsen's rose arrow painted on her back, and the sign of my birth, Sagittarius... which in this particular case links to the Party of the Immaculate Conception of the eternal republic of the Heavens. PRESS RELEASE... A GREAT SIGN APPEARED IN THE HEAVENS

      SOLUTIAN, ON YOUR COMPUTER.. TO THE SOUND OF SILENCE

      בָּרוּךְ אַתָּה יְיָ‎ אֱלֹהֵינוּ מֶלֶךְ הַעוֹלָם שֶׁהֶחֱיָנוּ וְקִיְּמָנוּ וְהִגִּיעָנוּ לַזְּמַן הַזֶּה‎׃

      IN ... THE BOOK OF NAMES LETS SEE IF YOU CAN FIGURE OUT WHO THEY ARE :)

I'LL DO YOURS FOR A 50 DOLLAR DONATION, I'M BROKE.. MAYBE THAT'S WHY I CAN'T GET A DATE.

      HAVE A GREAT SOLDAY

      The "gist" of the message is verifiable proof that we are living in a computer in simulated reality... just like the Matrix. The answer to that question, what does that mean--is that God has woven a "hidden" message into our everything--beginning with each name and every word--and in this hidden Adamic language, he provides us with guidance, wisdom, and suggestions on how to proceed on this path from "raelity" to Heaven. I've personally spent quite a bit of time decoding the message and have tried to deliver an interesting and "fun" narrative of the ideas I see. Specifically the story of Exodus, which is called "Names" in Hebrew discusses a time shifted narrative of our "now" delivering our society from a hidden slavery (read as ignorance of advanced technologies already in use) that is described as the "darkness" of Exodus. If you have any questions, ideas to contribute or concerns... I'd love to hear from you this whole thing really is about working together--Heaven, I mean.

jerusalem.reallyhim.com

gate kermitham
ou r evolution minority report to supermax

      bereshit bread is life



HOW AM I STILL STINGLE?
E ' o e
L m r x
L t y
O a

      I HISS.

      The sum of ((our world)) is the universal truth. -Psalm 119 and ((ish))

      Do a few sentences really make that big of a difference? Some key letters? Can you show me what I'm doing wrong? Is there a way to turn me into Adam, rather than a rock? I think you can.

Are eye Dr. Who or Master Y? Adam Marshall Dobrin is a National Merit Scholar who was born on December 8, 1980 in Plantation, FL and attended Pine Crest School where he graduated sumofi cum louder in "only some of it is humorous." Later he attended the University of Florida (which quickly resulted in a wreck), Florida Atlantic University, and finally Florida Gulf Coast University--where he still has failed to become Dr. Who. While attending "school" he worked in the computer programming and business outsourcing industries for about 15 years before proclaiming to have received a Revelation from God connecting the 9/11 attack and George Bush to the Burning Bush of Exodus and a message about technocracy and pre-crime.

Adam, as he prefers to be called, presents a concise introduction to paradox proven by the Bible through "verifiable" anachronism in language, some stuff about Mars colonization and virtual reality, and a list of reasons why ignoring this is actually an ELE. Adam claims to be Thor because of a connection between music and the Trial of Thor as well as the words "author" and "authority." He suggests you be Thundercats and call a reporter. There is also a suggestion that Richard Nixon and John Hancock are related to a signature from God, about freedom and America... and the "unseeingly ironic" Deepthroat and Taylor Momsen. They Sung "It's Rael..." In Biblical characters from Mary to Hosea, to see "sea" in Spanish, and in the Taming of the Spanglishrew ... a message is woven from the word Menorah: "men, or all humanity?" to the Statue of Liberty, and the Sons of Liberty, and the light above us, our SOL; which shows us that through the Revelation of Christ and the First Plague of Exodus, a blessing in disguise--turning water to blood, the sea to family; a common thread and single author of our entire history is revealed, a Father of our future. A message of freedom shines out of the words of scripture, revealing a gate to a new technologically "radical" form of democracy and a number of unseen or secret issues that have stalled the progress of humanity... and solutions, solutions from our sea. The Revelation shows us that not only every word but every idiom, from "don't shoot the messenger" to "blood is thicker than water," ties to this message that pervades a hidden Matrix of light connecting movies and music and history all together in a sort of guide book to Salvation and to Heaven. Oopsy. His Revelation, woven into his life, continues to suggest that skinny dipping, forced methamphetamine addiction, and lots and lots of "me A.D." as well as his humorous depiction of a dick plastered over the Sound of Silence, his very Holy click, have something to do with saving our family and then the entire Universe from hidden mind control technology and the problems introduced by secret time travel. From the trials and tribulations of "Job" being coerced and controlled into helping to create this wall of Jericho; we find even more solutions, an end to addiction, to secrecy, and to this hidden control--a focal point of the life of Jesus Christ.

It tells us a story of recursion in time, that has brought us here numerous times--with the details of his life recorded not only in the Bible but in Egyptian, Norse, and Greek mythology. The huge juxtaposition of the import of the content of the message shows the world how malleable our minds really are to this technology, how we could have been "fooled" into hiding our very freedom from ourselves in order to protect the "character" of a myth. A myth that comes to true life by delivering this message. In truth, from the now revealed content of the story of this repeated life, it should become more and more clear that we have not achieved success as of yet, that I have never "arrived whole" and that is why we are here, back again. Home is where the Heart is... When asked how He thinks we should respond to his message, He says "I think we already cherish it, and should strive to understand how it is that freedom is truly delivered through sharing the worth of this story that is our beginning. 'tis coming." Adam claims to be God, or at least look just like him and that the entirety of the Holy Scriptures as well as a number of ancient myths from Prometheus to Heimdallr and Yankee Doodle are actually about his life, and this event. An extensive amount of his writing relates to reformation of our badly broken and decidedly evil criminal justice system as well as ending the Global hunger crisis with the snap of his little finger.

      He has written a number of books explaining how this Revelation connects to the delivery of freedom (as in Exodus), through a message about censorship among other social problems which he insists are being intentionally exacerbated by Satan--who he would ha've preferred not to be associated with.

    1. How terse (i.e., few and short lines of code) can you make your function? You can definitely do this in fewer than 12 lines.
      let date_fun (d:int) (m:string) =
         if d < 1 || d > 31 then false
         else if d <= 28 then true
         else if d <= 30 then m <> "Feb"
         else List.mem m ["Jan";"Mar";"May";"Jul";"Aug";"Oct";"Dec"]
      
    1. Author Response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

Chan et al. tried identifying the binding sites or pockets for the KCNQ1-KCNE1 activator mefenamic acid. Because the KCNQ1-KCNE1 channel is responsible for cardiac repolarization, genetic impairment of either the KCNQ1 or KCNE1 gene can cause cardiac arrhythmia. Therefore, activators without side effects are in high demand. Because the binding of mefenamic acid requires both KCNQ1 and KCNE1 subunits, the authors performed drug docking simulations using the KCNQ1-KCNE3 structural model (because this is the only available KCNQ1-KCNE structure), substituting the five extracellular amino acids (R53-Y58) with D39-A44 of KCNE1. That could be a limitation of the work because the binding mode of KCNE1 might differ from that of KCNE3. Still, they successfully identified some critical amino acid residues, including W323 of KCNQ1 and K41 and A44 of KCNE1. They subsequently tested these identified amino acid residues by analyzing the point mutants and confirmed that the mutations attenuated the effects of the activator. They also examined another, structurally different activator, DIDS, and reported that DIDS and mefenamic acid share the binding pocket; they concluded that the extracellular region composed of S1, S6, and KCNE1 is a generic binding pocket for the IKs activators.

      The data are solid and well support their conclusions, although there are a few concerns regarding the choice of mutants for analysis and data presentation.

      Other comments:

      1. One of the limitations of this work is that they used psKCNE1 (mostly KCNE3), not real KCNE1, as written above. It is also noted that KCNQ1-KCNE3 is in the open state. Unbinding may be facilitated in the closed state, although evaluating that in the current work is difficult.

      We agree that it is difficult to evaluate the role of unbinding from our model. Our data showing that longer interpulse intervals have a normalizing effect on the GV curve (Figure 3-figure supplement 2) could be interpreted to suggest that unbinding occurs in the closed state. Alternatively, the slowing of deactivation caused by S1-S6 interactions and facilitated by the activators may effectively be exceeded at the longer interpulse intervals.

      1. According to Figure 2-figure supplement 2, some amino acid residues (S298 and A300) of the turret might be involved in the binding of mefenamic acid. On the other hand, Q147, showing a delta G value comparable to S298 and A300, was picked for mutant analysis. What were the criteria for the subsequent electrophysiological study?

      EP experiments interrogated selected residues with significant contributions to mefenamic acid and DIDS coordination, as revealed by the MM/GBSA and MM/PBSA methods. A300 was identified as potentially important. We did attempt A300C but were never able to get adequate expression for analysis.

      1. It is an interesting speculation that K41C and W323A stabilize the extracellular region of KCNE1 and might increase the binding efficacy of mefenamic acid. Is it also the case for DIDS? K41 may not be critical for DIDS, however.

      Yes, we found that K41 was not critical to the binding/action of DIDS, in contrast to MEF. In electrophysiological experiments with the K41C mutation, DIDS still induced a leftward GV shift (~ -25 mV), whereas the normalized response was statistically non-significant. In MD simulation studies, we observed detachment of DIDS from K41C-IKs in only 3 of 8 simulation runs. This is in contrast to Mef, where the drug left the binding site of the K41C-IKs complex in all simulations.

      1. Same as comment #2: why was the pore turret (S298-A300) not examined in Figure 7?

      Again, we attempted A300C but could not get high enough expression.

      Reviewer #3 (Public Review):

      Weaknesses:

      1. The computational aspect of the work is rather under-sampled - Figure 2 and Figure 4. The lack of quantitative analysis of the molecular dynamics simulation studies is striking, as only a video of a single representative replica is shown per mutant/drug, and the simulations shown in the videos are extremely short (some last only up to 80 ns). Could the authors provide longer simulations in each simulation condition (at least to 500 ns, or until a stable binding pose is obtained in case the ligand does not leave the binding site), with at least three replicates per condition? If the length of the simulations cannot be extended due to resource constraints, then further quantitative analysis should be conducted to prove that all simulations are converged and sufficient. Please see the rest of the quantitative analysis in other comments.

      We provide more quantitative analysis for the existing MD simulations and ran five additional simulations of 500 ns duration with the channel embedded in a POPC lipid membrane. For the new MD simulations we also used a different force field, to minimize ambiguity related to force fields. Analysis of these data has led to new data and supplemental figures regarding the RMSD of ligands during the simulations (Figure 4-figure supplement 1 and Figure 6-figure supplement 3), clustering of MD trajectories based on Mef conformation (Figure 2-figure supplement 3 and Figure 6-figure supplement 2), and H-bond formation over the simulations (Figure 2-figure supplement 4 and Figure 6-figure supplement 1). We have edited the manuscript to include this new information where appropriate.

      1. Given that the protein is a tetramer, at least 12 datasets could have been curated to improve the statistics. It was also unclear how frequently frames from the simulations were taken in order to calculate the PBSA/GBSA.

      By using one ligand for each ps-IKs channel complex, we tried to keep the molecular system and the corresponding analysis as simple as possible. Our initial results showed that 4D docking and subsequent MD simulations with only one ligand bound to ps-IKs were complicated enough. Our attempts to dock 4 ligands simultaneously and analyze the properties of such a system were ineffective due to difficulties with: i) obtaining stable complexes during conformational sampling and 4D docking procedures, since the ligand interaction covers a region including three protein chains with dynamic properties; ii) possible changes of receptor conformation at the three other subunits when one ligand already occupies its site; iii) the marked diversity of the binding poses of the ligand, as cluster analysis of the ligand-channel complex shows (Figure 2-figure supplement 3).

      We have added a line in the methods to clarify the use of only one ligand per channel complex in simulations.

      In order to calculate MMPBSA/MMGBSA we used a frame every 0.3 ns throughout the 300 ns simulation (1000 frames/simulation) or during the time the ligand remained bound. We have clarified this in the Methods.

      1. The lack of labels on several structures is rather unhelpful (Figures 2B, 2C, 4B), as is the lack of clarity of the interaction maps in Figures 2D and 6A.

      We updated figures considering the reviewer's comments and added labels. For 2D interaction maps, we provided additional information in figure legends to improve clarity.

      1. The RMSF analysis is rather unclear and not thoroughly labelled. In fact, I still don't quite understand why n = 3, given that the protein is a tetramer. If only one subunit out of four was docked and studied, this rationale needs to be explained and accounted for in the manuscript.

      The rationale for conducting MD simulations with one ligand bound to IKs is explained in the response to point 2 of the reviewer’s comments.

      The RMSF analysis in Figures 4C-E was calculated for the chain to which Mef was docked, but only after Mef had left the binding site. Details were added to the methods.

      1. For the conditions in which the ligands are supposed to leave the site (K42C for Mef and Y46A for DIDS), can you please provide simulations of sufficient length to show that the ligand leaves the site, over three replicates? Given that the protein is a tetramer, I would expect three replicates of data to give four data points each, one from each subunit. I would expect a distance calculation or the RMSD of the ligand position in the binding site to be calculated, either as a time series or as a distribution plot, to show the difference between mutants in the stability of the ligand within the binding pocket. I would expect all the videos to be translatable into such quantitative measures.

      We have shown in the manuscript that the MEF molecule detaches from the K41C/IKs channel complex in all three simulations (at 25 ns, 70 ns and 20 ns; Table 4). Similarly, the ligand left the site in all five new 500 ns duration simulations. We did not provide simulations for Y46A, but Y46C left the binding site in 4 of the 5 500 ns simulations and changed binding pose in the other.

      The difficulties encountered upon extending the docking and MD simulations to all 4 receptor sites of the channel complex are discussed in our response to point #2 of the reviewer.
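
      For readers who want to see what the requested ligand-RMSD time series could look like in practice, a minimal sketch follows. This is not the authors' pipeline: the file names and the ligand residue name "MEF" are assumptions, and MDAnalysis is used purely for illustration.

      import MDAnalysis as mda
      from MDAnalysis.analysis.rms import RMSD

      # Hypothetical topology/trajectory files; "MEF" is an assumed ligand residue name.
      u = mda.Universe("psIKs_mef.pdb", "psIKs_mef.xtc")

      # Superpose each frame on the protein backbone, then report the ligand RMSD
      # as an extra group selection; unbinding shows up as a sustained rise.
      r = RMSD(u, select="backbone", groupselections=["resname MEF"]).run()

      # Columns of r.results.rmsd: frame, time (ps), backbone RMSD, ligand RMSD (angstrom)
      for frame, time, _, lig_rmsd in r.results.rmsd[::100]:
          print(f"t = {time:8.1f} ps   ligand RMSD = {lig_rmsd:5.2f} A")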

      1. Given that K41 (Mef) and Y46 are very important in the coordination, could you calculate the frequency at which these residues form hydrogen bonds with the drug in the binding site? Can you also calculate the occupancy, or the frequency of contact, that the residues make with the ligand (close 4-angstrom proximity, etc.) and show whether those agree with the ligand interaction map obtained from ICM-pro in Figure 2D?

      We thank the reviewer for the suggestion to analyze the H-bond contribution to ligand dynamics in the binding site. In the plots shown in Figure 2-figure supplement 4 and Figure 6-figure supplement 1, we now provide detailed information about the dynamics of H-bond formation between the ligand and the channel complex throughout the simulations. In addition, we have quantified this and have added these numbers to a table (Table 2) and to the text of the results.
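
      As a concrete illustration of the requested occupancy measure, here is a minimal sketch (again not the authors' analysis; the file, segment, and residue selections are assumptions) that counts the fraction of frames in which a residue has any heavy atom within 4 angstroms of the ligand:

      import MDAnalysis as mda
      from MDAnalysis.analysis import distances

      u = mda.Universe("psIKs_mef.pdb", "psIKs_mef.xtc")            # hypothetical files
      lig = u.select_atoms("resname MEF and not name H*")           # ligand heavy atoms
      res = u.select_atoms("segid E1 and resid 41 and not name H*") # e.g. KCNE1 K41

      hits = 0
      for ts in u.trajectory:
          # minimum residue-ligand heavy-atom distance in this frame
          d_min = distances.distance_array(res.positions, lig.positions).min()
          hits += int(d_min < 4.0)
      print(f"contact occupancy: {hits / u.trajectory.n_frames:.1%}")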

      1. Given that the author claims that both molecules share the same binding site and the mode of ligand binding seems to be very dynamic, I would expect the authors to show the distribution of the position of the ligand, or the space or volume occupied by the ligand, throughout multiple repeats of simulations, over sufficient sampling time that both ligands sample the same conformational space in the binding pocket. This would prove the point in the discussion - Lines 463-464: "We can imagine a dynamic complex... bind/unbind from IKs at a high frequency".

      To support our statement regarding a dynamic complex, we analyzed longer MD simulations and clustered the trajectories; from this, an average conformation from each cluster was extracted and provided as supplementary information, which shows the different binding modes for Mef (Figure 2-figure supplement 3). DIDS was more stable in MD simulations, and though there were also several clusters, they were similar enough that, when using the same cut-off distance as for mefenamic acid, they could be grouped into one cluster (note the scale differences on the dendrograms between Figure 2-figure supplement 3 and Figure 6-figure supplement 2).

      1. I would expect the authors to explain the significance and the importance of the PBSA/GBSA analysis, as the two methods do not report the same energy in several cases, especially K41 in Figure 2-figure supplement 2. It is also questionable that Y46, which seems to have a high binding energy, shows no difference in the EPhys work in Figure 3. These need to be commented on.

      Several studies indicate that ∆G values calculated using the MM/PBSA and MM/GBSA methods may vary. Some studies report marked differences, and the reasons for such discrepancies are thoroughly discussed in a review by Genheden and Ryde (PMID: 25835573). Therefore, we used both methods to be sure that key residues contributing to ligand binding identified with one method also appear in the list of residues for which the calculations are done with the other method.

      Y46C, which showed only a slightly less favorable binding energy and did not unbind during the 300 ns simulations, unbound or changed pose in 4 out of 5 of the longer simulations in the presence of a lipid membrane (Figure 4-figure supplement 1). The discrepancy between the electrophysiological and MD data is commented on in the manuscript (pages 12-13).

      1. Can the author prove that the PBSA/GBSA analysis yields the same average free energy throughout the MD simulation? This should be the case when the simulations are converged. The author may take the snapshots from the first ten ns, conduct the analysis and take the average, then do the same at 50, 100, 250 and 500 ns. One would then hopefully find that, as the simulations get longer, the system reaches equilibrium and the free energy obtained per residue corresponds to the ensemble average.

      As we mention in the manuscript, MEF-channel interactions are quite dynamic and vary even from simulation to simulation. The frequent change of the binding pose of the ligands observed during simulations (represented in Figure 2-figure supplement 3 as clusters) is a clear reflection of such a dynamic process. Therefore, we do not expect the same average energy throughout the simulation, but we do expect that the ∆G values stand above the background for key residues, which was generally the case (Figure 2-figure supplement 2 and Figure 6).

      1. The phrase "Lowest interaction free energy for residues in ps-KCNE1 and selected KCNQ1 domains are shown as enlarged panels (n=3 for each point)" needs further explanation. Is this from different frames? I would rather see the PBSA and GBSA calculated on every frame of the simulations, maybe at one ns increments across 500 ns simulations, in 4 binding sites, in 3 replicas, and these plotted as a distribution instead of plotting the smallest number. Can you show each data point corresponding to n = 3?

      The MMPBSA/MMGBSA was calculated for 1000 frames per run across the 3×300 ns simulations, with a 0.3 ns sampling interval (3000 frames in total), as shown in Figure 2-figure supplement 2, which includes error bars to show the differences across runs. We have updated the legend for greater clarity.

      1. I cannot wrap my head around what you are trying to show in Figure 2B. This could be genuinely improved with better labelling. Can you explain whether this predicted binding pose for Mef in the figure is taken from the docking or from the last frame of the simulation? Given that the binding mode seems to be quite dynamic, a single snapshot might not be very helpful. I suggest a figure describing different modes of binding. Figure 2B should be combined with figure 2C as both are not very informative.

      We have updated Figure 2B with better labelling and added a new figure showing the different modes of binding (Figure 2-figure supplement 3).

      1. Similar to the comment above, but for Figure 4B. I do not understand the argument. If the author is trying to say that the pocket is closed after Mef is removed - then can you show, using MD simulation, that the pocket can open in the apo state to a conformation where Mef can bind? I am aware that the open pocket is generated through batches of structures through conformational sampling - but as the region is supposed to be disordered, can you show that there is a possibility of the allosteric or cryptic pocket opening in the simulations? If not, can you show that the structure with the open pocket, when the ligand is removed, is capable of collapsing down to a structure similar to the cryo-EM structure? If none of the above work, the authors might consider using the PocketMiner tool to find an allosteric pocket (https://doi.org/10.1038/s41467-023-36699-3) and see whether there is a possibility that the pocket exists.

      Please see the attached screenshot which depicts the binding pocket from the longest run we performed (1250 ns) before drug detachment (grey superimposed structures) and after (red superimposed structures). Mefenamic acid is represented as licorice and colored green. Snapshots for superimposition were collected every 10 ns. As can be seen in the figure, when the drug leaves the binding site (after 500 ns, structures colored red), the N-terminal residue of psKCNE1, W323, and other residues that form the pocket shift toward the binding site, overlapping with where Mefenamic acid once resided. The surface structure in Figure 4B shows this collapse.

      Author response image 1.

      In the manuscript, we propose that drug binding occurs by a mechanism best described by induced-fit models, which state that formation of a firm complex (the channel-Mef complex) results from multiple-state conformational adjustments of the bimolecular interaction. These interactions do not necessarily need to have large interfaces at the initial phase. This seems to be the case for the Mef-IKs interaction, since we could not identify a pocket of appropriate size using either the PocketMiner software suggested by the reviewer or the PocketFinder tool of the ICM-pro software.

      1. Figure 4C - again, can you show the RMSF analysis of all four subunits, leading to 12 data points? If it is too messy to plot, can you plot a mean with a standard deviation? I would say that a 1-1.5 angstrom increase in the RMSF is not "markedly increased", as stated on line 280. I would also encourage the authors to label whether the RMSF is calculated from the backbone, side-chain or C-alpha atoms and, ideally, compare them to see where the dynamical properties are coming from.

      Please see the answer to comment #4. We agree that the changes are not so dramatic and have modified the text accordingly. The RMSF was calculated for backbone atoms to allow comparison of residues with different side chains; a note on this is now in the methods, and the statistical significance of ps-IKs vs. K41C, W323A and Y46C is indicated in Figures 4C-4E.

      1. In the discussion - Lines 464-467: "Slowed deactivation of the S1/KCNE1/Pore domain/drug complex... By stabilising the activated complex. MD simulation suggests the latter is most likely the case." Can you point out explicitly where this has been proven? If the drug really stabilised the activated complex, can you show which intermolecular interactions within E1/S1/Pore the drug has broken and re-formed to strengthen the complex? The authors have not disproven the point on steric hindrance either. Can this be disproved by further quantitative analysis of the existing unbiased equilibrium simulations?

      The stabilization of S1/KCNE1/Pore by drugs does not necessarily have to involve the creation of new contacts between protein parts or the breakage of interfaces between them. Stabilization of activated complexes by drugs may occur when the drug simultaneously binds to both movable parts of the channel, such as the voltage sensor(s) or the upper KCNE1 region, and static region(s) of the channel, such as the pore domain. We have changed the corresponding text for better clarity.

      1. Figure 4D - Can you show this RMSF analysis for all the mutants in this study, such as Y46C? Can you explain the difference in F dynamics in the KCNE3 for both Figure 4C and 4D?

      We now show the RMSF for K41C, W323A and Y46C in Figures 4C-E. We speculate that K41 (magenta) and W323 (yellow), given their location at the lipid interface (see Author response image 1), may be important stabilizing residues for the KCNE N-terminus, whereas Y46 (green), which is further down the TMD, has less of an impact.

      Author response image 2.

      1. Line 477: the author suggested that K41 and Mef may stabilise the protein-protein interface at the external region of the channel complex. Can you prove this through changes in protein-protein contacts over time in the existing MD trajectories, i.e., whether contacts are broken or formed, and which interface residues help to form and stabilise the contact? If this is just a hypothesis for future study, then this has to be stated clearly.

      It is known that crosslinking several residues of external KCNE1 to external pore residues dramatically stabilizes the voltage sensors of the KCNQ1/KCNE1 complex in the up-state conformation. This prevents the movable protein regions of the voltage sensors from returning to their initial positions upon repolarization, locking the channel in an open state. We suggest that MEF may restrain the backward movement of the voltage sensors in a similar way, stabilizing the open conformation of the channel. The stabilization of the voltage sensor domain through MEF occurs due to contacts of the drug with both static (pore domain) and dynamic protein parts (voltage sensors and external KCNE1 regions). We have changed the corresponding part of the text.

      1. The author stated on lines 305-307 that "DIDS is stabilised by its hydrophobic and vdW contacts with KCNQ1 and KCNE1 subunits as well as by two hydrogen bonds formed between the drug and ps-KCNE1 residue L42 and KCNQ1 residue Q147". Can you show, using H-bond analysis, that these two hydrogen bonds really exist stably in the simulations? Can you show, using minimum distance analysis, that L42 stays stably within vdW radii and makes close contact throughout the simulations?

      We performed a detailed H-bond analysis (Figure 6-figure supplement 1) which shows that DIDS forms multiple H-bonds over the simulations, though only some of them (GLU43, TYR46, ILE47, SER298, TYR299, TRP323) are stable. Thus, the H-bonds that we observed in the DIDS docking experiments were unstable in MD simulations. As in the case of the IKs-MEF complex, the prevailing H-bonds exhibit marked quantitative variability from simulation to simulation. We have added a table detailing the most frequent H-bonds during MD simulations (Table 2).

      1. Discussion - In line 417, the author stated that the "S1 appears to pull away from the pore" and supplemented the claim with the movie. This is insufficient. The author should demonstrate a distance calculation between the S1 helix and the pore, in WT and mutants, with and without the drug. This could be shown as a time series or a distribution of centre-of-mass distance over time.

      We tried to analyze the distance changes between the upper S1 and the pore domain but failed to see a strong correlation. We have removed this statement from the discussion.

      1. Given that all the work was done in the open-state channel with PIP2 bound (PDB entry: 6v01), could the author demonstrate - either using docking, simulations, alignment, or space-filling models - that the ligands, both DIDS and Mef, would not be able to fit in the binding site of a closed-state channel (PDB entry: 6v00)? This would help illustrate the point denoted in Lines 464-467: "Slowed deactivation of the S1/KCNE1/Pore domain/drug complex... By stabilising the activated complex. MD simulation suggests the latter is most likely the case."

      As of now, a structure representing the closed state of the channel does not exist. 6V00 is the closed inactivated state of the channel pore with voltage-sensors in the activated conformation. In order to create simulation conditions that reliably describe the electrophysiological experiments, at least a good model for closed channels with resting state voltage sensors is necessary.

      1. The author stated that the binding pose changed in one run (lines 317 to 318). Can you comment on those changes? If the pose has changed - what has it changed to? Can you run longer simulations to see if it can revert back to the initial conformation? Or will it leave the site completely?

      Longer simulations and trajectory clustering revealed several binding modes, of which one pose (encircled with a blue frame in Figure 2-figure supplement 3) dominated in approximately 50% of all simulations.

      1. A binding free energy of -32 kcal/mol = -134 kJ/mol. If you apply dG = RT ln(Kd) (equivalently, dG = -RT ln(Ka)), your ln(Kd) is about -52. Your Kd is e^-52 M, which means that if such a complex ever forms, it will never unbind. I am aware that this is a known caveat of the methodologies, but maybe it should be highlighted throughout the manuscript.

      We thank the reviewer for this comment. The ∆G values, and corresponding Kd values, calculated from simulations of the Mef-ps-IKs complex do not reflect the apparent Kd values determined in electrophysiological experiments, nor do they reflect Kd values of drug binding that could be determined in biochemical assays. The important measures are the changes observed in simulations of mutant channel complexes relative to wild type. We now briefly mention this issue in the manuscript.
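
      To make the caveat concrete, here is a minimal sketch of the reviewer's arithmetic (assuming T of roughly 310 K, which reproduces the quoted ln Kd of about -52; the numbers are illustrative, not re-derived from the simulations):

      import math

      R = 8.314            # gas constant, J/(mol*K)
      T = 310.0            # assumed temperature, K
      dG = -32.0 * 4184.0  # -32 kcal/mol converted to J/mol (about -134 kJ/mol)

      ln_Kd = dG / (R * T)  # from dG = RT ln(Kd) for the dissociation constant
      Kd = math.exp(ln_Kd)  # in mol/L
      print(f"ln Kd = {ln_Kd:.1f}, Kd = {Kd:.1e} M")  # ln Kd ~ -52, Kd ~ 3e-23 M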

      Reviewer #1 (Recommendations For The Authors):

      1) It would be nice to have labels of amino acid residues in Figure 2B.

      We updated Figure 2B and added some residue labels.

      2) Fig. 3A and 7A: In what order are the current traces presented? I don't see the rule.

      We have now arranged the current traces in a more orderly manner, listing them first by ascending KCNE1 residue number and then by ascending KCNQ1 residue number. This ordering is now consistent across Figs. 3 and 7 (normalized response and ∆V1/2).

      3) Line 312 "A44 and Y46 were more so." A44 may be more critical, but I can't see that Y46 is, according to Figure 2-figure supplement 2 and Figure 6.

      Indeed, comparison of the energy decomposition data indicates approximately the same ∆G values for Y46. We have revised the text accordingly.

      4) Line 267 "Mefenamic acid..." I would like to see the movie.

      We no longer have access to this original movie.

      5) In supplemental movies 5-7, the side chains of some critical amino acid residues (W323, K41) would be better presented as in movies 1-4.

      We have retained the original presentations of these movies as the original files are no longer available.

      Reviewer #2 (Recommendations For The Authors):

      General comments:

      1) To determine the effect of mefenamic acid and DIDS on channel closing kinetics, a protocol is used in which they step from an activating test pulse to a repolarizing tail pulse to -40 mV for 1 s. If I understand it right, the drug response is assessed as the difference between the instantaneous tail current amplitude and the amplitude after 1 s (rows 599-603). The drug response of each mutant is then normalized to the response of the WT channel. However, for several mutants there is barely any sign of current decay during this relatively brief pulse (1 s) at this specific voltage. To determine drug effects on channel closing kinetics/the extent of channel closing more reliably, I wonder if these protocols could be refined - for instance, to cover a larger set of voltages and consider longer timescales?

      To clarify, the drug response of each mutant is not normalized to the response of the WT channel. In fact, our analysis is not meant to compare mutant and WT tail current decay, but rather how isochronal tail current decay changes in response to drug treatment in each channel construct. As acknowledged by the reviewer, the peak-to-end difference currents were calculated by subtracting the minimum amplitude of the deactivating current from its peak amplitude. The difference current in mefenamic acid or DIDS was then normalized to the maximum control (drug-free) difference current and subtracted from 1.0 to obtain the normalized response. Thus, the difference in tail current decay in the absence and presence of drug is measured within the same time scale, which allows a direct comparison between before and after drug treatment (see the sketch below). As shown in Figs. 3D and 7C, a large drug response such as the one measured in WT channels is reflected by a value close to 1, while a smaller drug response is indicated by lower values. We recognize that some mutations resulted in an intrinsic inhibition of tail current decay in the absence of drug, which potentially leads to an underestimate of the normalized response value. Our goal was not to study in detail the effects of the drug on channel closing kinetics, but only to determine the impact of the mutation on drug binding by using tail current decay as a readout. Consequently, we believe that the duration of the deactivating tail current used in this experiment was sufficient to detect drug-induced inhibition of tail current decay.
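
      A minimal sketch of this calculation, assuming two tail-current traces recorded over the same time window (the function and variable names are illustrative, not taken from the authors' analysis code):

      import numpy as np

      def normalized_response(tail_ctrl: np.ndarray, tail_drug: np.ndarray) -> float:
          # Peak-to-end difference of the deactivating tail current, control vs. drug.
          diff_ctrl = tail_ctrl.max() - tail_ctrl.min()
          diff_drug = tail_drug.max() - tail_drug.min()
          # ~1 means the drug abolished tail current decay (large drug response);
          # ~0 means the decay was unchanged by the drug (no response).
          return 1.0 - diff_drug / diff_ctrl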

      2) The effect of mefenamic acid seems to be highly dependent on the pulse-to-pulse interval in the experiments. For instance, for WT in Figure 3 - Figure supplement 1, a 15 s pulse-to-pulse interval provides a -100 mV shift in V1/2 induced by mefenamic acid, whereas there is no shift induced when using a 30 s pulse-to-pulse interval. Can the authors explain why they generally consider a 15 s pulse-to-pulse interval more suitable (physiologically relevant?) in their experiments to assess drug effects?

      In our previous experiments, we determined that a 15 s inter-pulse interval is generally adequate for WT IKs channels to fully deactivate before the onset of the next pulse. Consistent with our previous work (Wang et al. 2019), we observed that in wild-type EQ channels there is no current summation from one pulse to the next (see Fig 1A, bottom panel). This is important, as the IKs channel complex is known to be frequency dependent, i.e., current amplitude increases as the inter-pulse interval gets shorter. Such current summation results in a leftward shift of the conductance-voltage (GV) relationship. This is also important with regard to drug effects. As indicated by the reviewer, mefenamic acid effects are prominent with a 15 s inter-pulse interval but less so with a 30 s inter-pulse interval, when enough time is given for channels to deactivate more completely. The full effects of mefenamic acid would therefore have been concealed with a 30 s inter-pulse interval.

      Moreover, our patch-clamp recordings aim to explore the distinct responses of mutant channels to mefenamic acid and DIDS in comparison to the wild-type channel. It is important to note that the inter-pulse interval's physiological relevance is not necessarily crucial in this context.

      3) Related to comments 1 and 2, there is a large diversity in the intrinsic properties of the tested mutants. For instance, V1/2 ranges from 4 to 70 mV. Also, there is large variability in the slope of the G-V curves. Whether channel closing kinetics, or the impact of pulse-to-pulse interval, vary among mutants is not clear. Could the authors please discuss whether the intrinsic properties of mutants may affect their ability to respond to mefenamic acid and DIDS? Also, please provide representative current families and G-V curves for all assessed mutants in supplementary figures.

      The intrinsic properties of some mutants vary from the WT channels and influence their responsiveness to mefenamic acid and DIDS. The impact of the mutations on the IKs channel complex is reflected by changes in V1/2 (Tables 1 and 4) and tail current decay (Figs. 3, 7). But it is the examination of the drug effects on these intrinsic properties (i.e., the GV curve and tail current decay) that constitutes the primary endpoint of our study. We consider that the degree by which Mef and DIDS modify these intrinsic properties reflects their ability, or inability, to bind to the mutated channel. In our analysis, we compared each mutant's response to mefenamic acid and DIDS with its respective control. Consequently, the intrinsic properties of the mutant channels have already been considered in our evaluation. As requested, we have provided representative current families and G-V curves for all assessed mutants in Figure 3-figure supplement 1 and Figure 7-figure supplement 1.

      4) The A44C and Y148C mutants give strikingly different currents in the examples shown in Figure 3 and Figure 7. What is the reason for this? In the examples in figure 7, it almost looks like KCNE1 is absent. Although linked constructs are used, is there any indication that KCNE1 is not co-assembled properly with KCNQ1 in those examples?

      The size of the current is critical in determining its shape, as during the test pulse some endogenous current is mixed in, which impacts the shape. The A44C and Y148C currents shown in Figure 7 are smaller, with a larger contribution of the endogenous current, mostly at the foot of the current trace. In our experience there is little endogenous current in the tail current at -40 mV, and for this reason we focus our measurements there.

      Although constructs with tethered KCNQ1 and KCNE1 were used, we cannot rule out the possibility that Q1 and E1 interaction was altered by some of the mutations. Several KCNE1 and KCNQ1 residues have been identified as points of contact between the two subunits. For instance, the KCNE1 loop (position 36-47) has been shown to interact with the KCNQ1 S1-S2 linker (position 140-148) (Wang et al, 2011). Thus, it is conceivable that mutation of one or several of those residues may alter KCNQ1/KCNE1 interaction and modify the activation/deactivation kinetics of the IKs channel complex.

      5) I had a hard time following the details of the simulation approaches used. If not already stated (I could not find it), please provide: i) details on whether the whole channel protein was considered for 4D docking or a docking box was specified; ii) information on how simulations with mutant ps-IKs were prepared (for instance with the K41C mutant), especially whether the in silico mutated channel was allowed to relax before evaluation (and for how long). Also, please make sure that information on simulation time and number of repeats is provided in the Methods section.

      For 4D docking, only residues within 0.8 nm of psKCNE1 residues D39-A44 were selected. Complexes with mutated residues were relaxed using the same protocol as the WT channel (equilibration with gradually released restraints, followed by a final 10 ns equilibration in which only the backbone was restrained at 50 kcal/mol/nm²). We have updated the methods accordingly.

      Specific comments:

      In figure legends, please provide information on whether data represents mean +/- SD or SEM. Also, please provide information on which statistical test was used in each figure.

      We revised the figure legends to indicate the statistical test used in each figure.

      G-V curves are normalized between 0 and 1. However, for many mutants the G-V relationship does not reach saturation at depolarized voltages. Does this affect the estimated V1/2? I could not really tell as I was not sure how V1/2 was determined for different mutants (could the explanation on row 595-598 be clarified)?

      The primary focus here is in the shift between the control response and drug response for each mutant, rather than the absolute V1/2 values. The isochronal G-V curves that are generated for each construct (WT and mutant) utilize an identical voltage protocol. This approach ensures a uniform comparison among all mutants. By observing the shifts in these curves, we can gain insight into the response of mutant channels to the drug. This information ultimately helps elucidate the inherent properties of the mutant channels and contributes to our understanding of the drug's binding mechanism to the channel.

      As requested by the reviewer, we also clarified the way V1/2 was generated: when the G-V curve did not reach zero, the V1/2 value was read directly from the plot at the voltage where the curve crossed 0.5 on the y-axis (see the sketch below).
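
      For concreteness, a minimal sketch of this read-off, assuming a monotonically increasing normalized G-V curve that starts below 0.5 (the names are illustrative, not the authors' code):

      import numpy as np

      def v_half_from_gv(voltages: np.ndarray, g_norm: np.ndarray) -> float:
          """Voltage at which a normalized, increasing G-V curve crosses 0.5."""
          i = int(np.argmax(g_norm >= 0.5))      # first point at or above 0.5 (assumes g_norm[0] < 0.5)
          v0, v1 = voltages[i - 1], voltages[i]  # bracket the crossing
          g0, g1 = g_norm[i - 1], g_norm[i]
          return v0 + (0.5 - g0) * (v1 - v0) / (g1 - g0)  # linear interpolation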

      A general comment is that the Discussion is fairly long and some sections are quite redundant with the Results section. The authors could consider focusing the text in the Discussion.

      We revised the Discussion accordingly wherever appropriate.

      I found it a bit hard to follow the authors' interpretation of whether their drug molecules remain bound throughout the experiments, or whether there is fast binding/unbinding. Please clarify if possible.

      In the 300 ns MD simulations, mefenamic acid and DIDS remained stably bound to WT-ps-IKs; binding of the drugs to mutant complexes is described in Tables 3 and 5. In longer simulations with the channel embedded in a lipid environment, mefenamic acid unbinds in two out of five runs for WT-ps-IKs (Figure 4-figure supplement 1), and DIDS shows a few events where it briefly unbinds (Figure 6-figure supplement 3). Based on electrophysiological data, we speculate that the drugs might bind and unbind to WT-ps-IKs during the gating process. We do not see binding/unbinding in the MD simulations, since the model we used reflects only the open conformation of the channel complex with activated-state voltage sensors; a resting-state voltage sensor condition was not considered.

      The authors have previously shown that channels with no, one or two KCNE1 subunits are not, or only to a small extent, affected by mefenamic acid (Wang et al., 2020). Could the details of the binding site and proposed mechanisms of action provide clues as to why all binding sites need to be occupied to give prominent drug effects?

      In the manuscript, we propose that the binding of drugs induces conformational changes in the pocket region that stabilize the S1/KCNE1/Pore complex. In the tetrameric channel with 4:4 alpha-to-beta stoichiometry, the drugs are likely to occupy all four sites, with complete stabilization of S1/KCNE1/Pore. When one or more KCNE1 subunits are absent, as in the case of the EQQ or EQQQQ constructs, drugs will bind to the site(s) where KCNE1 is available. This will lead to stabilization of only part of the S1/KCNE1/Pore complex, and we believe the drug will, in this case, be only partially effective.

      There is a bit of jumping in the order of when some figures are introduced (e.g. row 178 and 239). The authors could consider changing the order to make the figures easier to follow.

      We have changed the corresponding section appropriately to improve the reading flow.

      Row 237: "Data not shown", please show data.

      The G-V curve of the KCNE1 Y46C mutant displays a complex, double Boltzmann relationship which does not allow for the calculation of a meaningful V1/2 nor would it allow for an accurate determination of drug effects. Consequently, we have excluded it from the manuscript.

      In the Discussion, the authors use the term "KCNE1/3". Does this correspond to the previous mention of "ps-KCNE1"?

      Yes, this refers to ps-KCNE1. We have changed it correspondingly.

      Row 576: When was HMR 1556 used?

      While HMR 1556 was used in preliminary experiments to confirm that the recorded current was indeed IKs, it does not provide substantial value to the data presented in our study or our experiments. As a result, we have excluded HMR 1556 experiments from the final results and have revised the Methods section accordingly.

      Reviewer #3 (Recommendations For The Authors):

      1) Figures 2D and 6A are very unclear. Can the authors provide labels as text, rather than coloured circles, indicating whether each residue is on Q1 or E1? There is also a distance label in the figure, in a small font in the faintest shade of grey, which I believe is supposed to indicate hydrogen bonds. Can this be improved for clarity?

      We feel that additional labels on the ligand diagrams would be more confusing; instead, we updated the description in the legend and added labels to Figure 2B and Figure 6B to improve the clarity of residue positions. In addition, we have added 2 new figures with more detailed information about H-bonds (Figure 2-figure supplement 4, Figure 6-figure supplement 1).

      2) Figure 2B - all side chains need labelling in the different binding modes. The green ligand on blue protein is very difficult to see. Suddenly, the ligand turns light blue in panel 2C. Can this be made consistent throughout the manuscript?

      Figure 2B is updated according to this comment.

      3) Figure 2 - figure supplement 2, and figure 6B. Can the author show the residue number on the x-axis instead of just the one-letter abbreviation? This requires the reader to count and is not helpful when we try to figure out where the residue is at a glance. I would suggest a structure label adjacent to the plot to show whether they are located with respect to the drug molecule.

      Since the numbers for residues on either end of the cluster are indicated at the bottom of each boxed section, we feel that adding residue numbers would just further clutter the figure.

      4) Figure 2 - figure supplement 2, and Figure 6B. Can you explain what is being shown in the error bar? I assume standard deviation?

      Error bars on Figure 2-figure supplement 2 represent SEM. We added corresponding text in the figure legend.

      5) Figure 2 - figure supplement 2, and figure 6B. Can you explain how many frames are being accounted for in this PBSA calculation?

      For Figure 2-figure supplement 2 and Figure 6B, a frame was taken every 0.3 ns over the 3×300 ns simulations: 1000 frames per simulation, 3000 frames overall.

      6) Figures 3D/E and 7C/D: it would be helpful to show which mutants give results that agree with the simulations, the PBSA/GBSA, and the contact analyses suggested above.

      The inconsistencies and discrepancies between the results of MD simulations and electrophysiological experiments are discussed throughout the manuscript.

      7) Figure legend, Figure 3E - I assume there is a typo and that these are the shifts for mutants with drug relative to those without the drug. Otherwise, how could WT, with respect to WT, have a ∆V1/2 of -105 mV?

      The reviewer is correct in that the bars indicate the difference in V1/2 between control and drug treatment. Thus, the difference in V1/2 (∆V1/2) between the V1/2 calculated for WT control and the V1/2 for mefenamic acid is indeed -105 mV. We have now revised Figure 3E's legend to accurately reflect this and ensure a clear understanding of the data presented.

      8) Figure 3 - figure supplement 1B is very messy, and I could not extract the key point from it. Can this be plotted on a separate trace? At least 1 WT trace and one mutant trace, 1 with WT+drug and one mut+drug as four separate plots for clarity?

      The key message of this figure is to illustrate the similarities of EQ WT + Mef and EQ L142C data. Thus, after thorough consideration, we have concluded that maintaining the current figure, which displays the progressive G-V curve shift in EQ WT and L142C in a superimposed manner, best illustrates the gradual shift in the G-V curves. This presentation allows for a clearer and more immediate comparison of the curve shifts, which may be more challenging to discern if the G-V curves were separated into individual figures. We believe that the existing format effectively communicates the relevant information in a comprehensive and accessible manner.

      9) Figure 4B - the label Voltage is blended into the orange helix. Can the label be placed more neatly?

      We altered the labels for this figure and added that information in the figure description.

      10) Can you show the numerical label of the residue, at least only to the KCNE1 portion in Figures 4C and 4D?

      We updated these figures and added residue numbering for clarity.

      11) Can you hide all non-polar hydrogen atoms in Figure 8 and colour each subunit so that it agrees with the rest of the manuscript? Can you adjust the positions of the side chains so that they are interpretable? Can you summarise this as a cartoon? For example, Q147 and Y148 are in grey and are hidden very far away, as is S298. Can you colour-code your labels? The methionine (I assume M45) next to T327 is shown as a stick and is unlabelled. Maybe set the orthoscopic view, increase the lighting, and rotate the figures in a more interpretable fashion?

      We agree that Fig.8 is rather small as originally presented. We have tried to emphasize those residues we feel most critical to the study and inevitably that leads to de-emphasis of other, less important residues. As long as the figure is reproduced at sufficient size we feel that it has sufficient clarity for the purposes of the Discussion.

      12) Line 538-539. Can you provide more detail on how the extracellular residues of KCNE3 are substituted? Did you use Modeller, SwissModel, or AlphaFold to substitute this region of the KCNEs?

      We used ICM-pro to substitute the extracellular residues of KCNE3 and create mutant variants of the IKs channel. This information is now provided in the methods section.

      13) Line 551: The PIP2 density was solved using cryo-EM, not X-ray crystallography.

      We corrected this.

      14) Line 555: The system was equilibrated for ten ns. In which ensemble? Was there any restraint applied during the equilibration run? If yes, at what force constant?

      The system was equilibrated in NVT and NPT ensembles with restraints. These details have been added to the methods. In the new simulations, we performed equilibrations gradually releasing spatial restraints from the backbone, side chains, lipids, and ligands. A final 30 ns equilibration in the NPT ensemble was performed with restraints only on backbone atoms, with a force constant of 50 kJ/mol/nm². The Methods were edited accordingly.

      15) Line 557: Kelvin is a unit without a degree.

      Corrected

      16) Line 559: PME is an electrostatic algorithm, not a method.

      Corrected

      17) Line 566: Collecting 1000 snapshots at which intervals? Given your runs are not equal in length, how can you ensure that these are representative snapshots?

      Please see comment #5.

      18) Table 3 - Why SD for computational data and SEM for experimental data?

      There was no particular reason for using SD in some graphs. We used appropriate statistical tests to compare the groups where the difference was not obvious.

    1. They had been at it for almost a year, and they had written more code than would fit in the 64 Kbytes of memory in a standard Apple II. In fact, the Apple II only had 48 Kbytes on its main board, but it used a "language" card to give it an extra 16 Kbytes used to run Pascal.

      As a modern developer, hearing these numbers makes my mind explode. How did they do it in the past?

    1. Here’s my common practice: Every page in the app needs the standard HTML boilerplate with my common stylesheets and JavaScripts. I need a layout that provides these, but makes no assumptions about the actual body of the page. This is my “base” layout. Most – but not all – pages in the app need a common header and footer. This is my “application” (default) layout. I’d like the application layout to reuse the base layout to keep my code DRY.
    1. Author Response

      Reviewer #1 (Public Review):

      In this manuscript, the Authors implement a delayed feedback control method and use it for the first time in biological neuronal networks. They extend a well-established computational theory and expand it into the biological realm. With this, they obtain novel evidence, never considered before, that showcases the difference between simulated neuronal networks and biological ones. Furthermore, they optimize the DFC method to achieve optimal results in the control of cell excitability in the context of biological neuronal networks, taking advantage of a closed-loop stimulation setup that, by itself, is not trivial to build and operate and that will certainly have a positive impact on the fields of cellular and network electrophysiology.

      Regarding the results, it would be very constructive if the Authors could share the code for the quasi-real-time interface with the Multichannel Systems software (current and older hardware versions), as this likely represents a bottleneck preventing more researchers from implementing such an experimental paradigm.

      On the data focusing on the effects of the DFC algorithms on neuronal behavior, the evidence is very compelling, although more care should be devoted to the statistical analyses, since some of the applied statistical tests are not appropriate. In a more biological sense, further discussion and clarification of the experimental details would improve this manuscript, making it more accessible and clearer for researchers across disciplines (i.e., ranging from computational to experimental Neuroscience) and increasing the impact of this research.

      In summary, this work represents a necessary bridge between recent advances in computational neuroscience and the biological implementation of neuronal control mechanisms.

      Regarding sharing the control code, our application for closed-loop stimulation using aDFC, DFC and Poisson stimulation is now available on GitHub (https://github.com/NCN-Lab/aDFC). This was, in fact, our initial intention following the reviewing process. With this application, the user can run the developed algorithms with the MEA2100-256 System from Multi Channel Systems MCS GmbH.

      The same applies to the data. The dataset with the spike data from all experiments is also now publicly available on Zenodo, at https://doi.org/10.5281/zenodo.10138446.

      Regarding the improvements in the statistical analysis, the tests are now performed following Reviewer #1's suggestions. Importantly, this did not change the results or conclusions of the work.

    1. Reviewer #2 (Public Review):

      Summary:

      In this manuscript, "KinCytE- a Kinase to Cytokine Explorer to Identify Molecular Regulators and Potential Therapeutic", the authors present a web resource, KinCytE, which lets researchers search for kinase inhibitors that have been shown to affect cytokine and chemokine release and signaling networks. I think it's a valuable resource that has a lot of potential and could be very useful for deciding on statistical analyses that might precede lab experiments.

      Opportunities:

      With the release of the manuscript and the code base in place, I hope the authors continue to build upon the platform, perhaps by increasing the number of cell types that are probed (beyond macrophages). Additionally, when new drug-response data becomes available, perhaps it can be used to further validate the findings. Overall, I see this as a great project that can evolve.

      Strengths:

      The site contains valuable content, and the structure is such that growing that content should be possible.

      Weaknesses:

      The resource is based only on macrophage experiments; it would be nice to have other cell types investigated, but I'm sure that will be remedied in time.

    1. ax.contourf(ds['longitude'],ds['latitude'],average_prec_in_mm_grouped[1], transform=ccrs.PlateC

      As in the figures from Ex. 1, it is better in this case not to use contourf, and to use something like:

      import matplotlib.pyplot as plt
      import cartopy.crs as ccrs

      # average_prec_in_mm: the monthly-mean precipitation DataArray from the exercise
      pm = average_prec_in_mm.sel(month=1)
      ax = plt.axes(projection=ccrs.Robinson())
      pm.plot(ax=ax, transform=ccrs.PlateCarree(), cbar_kwargs={'label': 'mm d$^{-1}$'},
              levels=[0.5, 1, 2, 3, 4, 5, 7, 10, 15, 20, 40], cmap='YlGnBu')

      The contourf approach does not scale the levels accordingly. As you can see, you cannot distinguish the regions with, e.g., 1 mm per day from the regions with 4 mm per day. With the code above, the colors change linearly with the given levels, which makes it much easier to distinguish between them.

    1. transgressions

      an act that goes against a law, rule, or code of conduct; an offense

    1. Author Response

      The following is the authors’ response to the original reviews.

      We thank all three Reviewers for their comments and have revised the manuscript accordingly.

      Reviewer #1 (Public Review):

      The main objective of this paper is to report the development of a new intramuscular probe that the authors have named Myomatrix arrays. The goal of the Myomatrix probe is to significantly advance the current technological ability to record the motor output of the nervous system, namely fine-wire electromyography (EMG). Myomatrix arrays aim to provide large-scale recordings of multiple motor units in awake animals under dynamic conditions without undue movement artifacts and maintain long-term stability of chronically implanted probes. Animal motor behavior occurs through muscle contraction, and the ultimate neural output in vertebrates is at the scale of motor units, which are bundles of muscle fibers (muscle cells) that are innervated by a single motor neuron. The authors have combined multiple advanced manufacturing techniques, including lithography, to fabricate large and dense electrode arrays with mechanical features such as barbs and suture methods that would stabilize the probe's location within the muscle without creating undue wiring burden or tissue trauma. Importantly, the fabrication process they have developed allows for rapid iteration from design conception to a physical device, which allows for design optimization of the probes for specific muscle locations and organisms. The electrical output of these arrays is processed through a variety of means to try to identify single motor unit activity. At the simplest, the approach is to use thresholds to identify motor unit activity. Of intermediate data analysis complexity is the use of principal component analysis (PCA, a linear second-order regression technique) to disambiguate individual motor units from the wide field recordings of the arrays, which benefits from the density and numerous recording electrodes. At the highest complexity, they use spike sorting techniques that were developed for Neuropixels, a large-scale electrophysiology probe for cortical neural recordings. Specifically, they use an estimation code called kilosort, which ultimately relies on clustering techniques to separate the multi-electrode recordings into individual spike waveforms.

      The biggest strength of this work is the design and implementation of the hardware technology. It is undoubtedly a major leap forward in our ability to record the electrical activity of motor units. The myomatrix arrays trounce fine-wire EMGs when it comes to the quality of recordings, the number of simultaneous channels that can be recorded, their long-term stability, and resistance to movement artifacts.

      The primary weakness of this work is its reliance on kilosort in circumstances where most of the channels end up picking up the signal from multiple motor units. As the authors quite convincingly show, this setting is a major weakness for fine-wire EMG. They argue that the myomatrix array succeeds in isolating individual motor unit waveforms even in that challenging setting through the application of kilosort.

      Although the authors describe the estimated signals as well-isolated waveforms, there is no independent evidence of the accuracy of the spike sorting algorithm. The additional step (spike sorting algorithms like kilosort) to estimate individual motor unit spikes is the part of the work in question. Although the estimation algorithms may be standard practice, the large number of heuristic parameters associated with the estimation procedure are currently tuned for cortical recordings to estimate neural spikes. Even within the limited context of Neuropixels, for which kilosort has been extensively tested, basic questions like issues of observability, linear or nonlinear, remain open. By observability, I mean in the mathematical sense of well-posedness or conditioning of the inverse problem of estimating single motor unit spikes given multi-channel recordings of the summation of multiple motor units. This disambiguation is not always possible. Kilosort's validation relies on a forward simulation of the spike field generation, which is then truth-tested against the sorting algorithm. The empirical evidence is that kilosort does better than other algorithms for the test simulations that were performed in the context of cortical recordings using the Neuropixels probe. But this work has adopted kilosort without comparable truth-tests to build confidence in the application of kilosort with myomatrix arrays.

      Kilosort was developed to analyze spikes from neurons rather than motor units and, as Reviewer #1 correctly points out, despite a number of prior validation studies, the conditions under which Kilosort accurately identifies individual neurons are still incompletely understood. Our application of Kilosort to motor unit data therefore demands that we explain which of Kilosort's assumptions do and do not hold for motor unit data, and how we modified the Kilosort pipeline to account for important differences between neural and muscle recordings. We summarize these modifications below and have included them in the revised manuscript.

      Additionally, both here and in the revised paper we emphasize that while the presented spike sorting methods (thresholding, PCA-based clustering, and Kilosort) robustly extract motor unit waveforms, spike sorting of motor units is still an area of ongoing work. Our future work will further explore how differences between cortical and motor unit data should inform approaches to spike sorting, and will develop simulated motor unit datasets that can be used to benchmark spike sorting methods.

      For our current revision, we have added detailed discussion (see “Data analysis: spike sorting”) of the risks and benefits of our use of Kilosort to analyze motor unit data, in each case clarifying how we have modified the Kilosort code with these issues in mind:

      “Modification of spatial masking: Individual motor units contain multiple muscle fibers (each of which is typically larger than a neuron’s soma), and motor unit waveforms can often be recorded across spatially distant electrode contacts as the waveforms propagate along muscle fibers. In contrast, Kilosort - optimized for the much more local signals recorded from neurons - uses spatial masking to penalize templates that are spread widely across the electrode array. Our modifications to Kilosort therefore include ensuring that Kilosort searches for motor unit templates across all (and only) the electrode channels inserted into a given muscle. In the GitHub repository linked above, this is accomplished by setting parameter ops.sigmaMask to infinity, which effectively eliminates spatial masking in the analysis of the 32 unipolar channels recorded from the injectable Myomatrix array schematized in Supplemental Figure 1g. In cases including chronic recording from mice, where only a single 8-contact thread is inserted into each muscle, a similar modification can be achieved with a finite value of ops.sigmaMask by setting parameter NchanNear, which represents the number of nearby EMG channels to be included in each cluster, to equal the number of unipolar or bipolar data channels recorded from each thread. Finally, note that in all cases the Kilosort parameter NchanNearUp (which defines the maximum number of channels across which spike templates can appear) must be reset to be equal to or less than the total number of Myomatrix data channels.”
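      As a quick reference, the parameter choices described in this passage can be summarized as follows. Kilosort 2.5 itself is configured through a MATLAB `ops` struct; this Python sketch of ours merely mirrors those settings for readability, with channel counts taken from the examples in the text.

```python
# Mirror of the spatial-masking settings described above (illustrative only;
# the real settings live in the modified MATLAB Kilosort 2.5 code).
import math

n_muscle_channels = 32   # unipolar channels in the injectable array example
thread_channels = 8      # contacts per thread in the chronic mouse example

ops = {
    # Infinity disables spatial masking, letting templates span every
    # channel inserted into the muscle; use a finite value when templates
    # should be confined to a single thread.
    "sigmaMask": math.inf,
    # Nearby channels grouped into each cluster; for single-thread
    # recordings this should equal the channels recorded from that thread.
    "NchanNear": thread_channels,
    # Maximum channel footprint of a template; must not exceed the total
    # number of recorded Myomatrix data channels.
    "NchanNearUp": n_muscle_channels,
}

assert ops["NchanNearUp"] <= n_muscle_channels
```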

      “Allowing more complex spike waveforms: We also modified Kilosort to account for the greater duration and complexity (relative to neural spikes) of many motor unit waveforms. In the code repository linked above, Kilosort 2.5 was modified to allow longer spike templates (151 samples instead of 61), more spatiotemporal PCs for spikes (12 instead of 6), and more left/right eigenvector pairs for spike template construction (6 pairs instead of 3). These modifications were crucial for improving sorting performance in the nonhuman primate dataset shown in Figure 3, and in a subset of the rodent datasets (although they were not used in the analysis of mouse data shown in Fig. 1 and Supplemental Fig. 2a-f).”
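      The template-shape changes in this second passage amount to the following before/after values. This is a descriptive sketch of ours with placeholder key names, not Kilosort's internal variable names:

```python
# Before/after template-shape settings described above; key names are
# descriptive placeholders, not Kilosort's internal variable names.
neural_defaults = {
    "template_samples": 61,    # default spike template length (samples)
    "spatiotemporal_pcs": 6,   # PCs used to featurize spikes
    "eigenvector_pairs": 3,    # left/right pairs for template construction
}

motor_unit_settings = {
    "template_samples": 151,   # longer, more complex motor unit waveforms
    "spatiotemporal_pcs": 12,
    "eigenvector_pairs": 6,
}
```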

      Furthermore, as the paper on the latest version of Kilosort, namely v4, discusses, differences in the clustering algorithm are the likely reason that Kilosort 4 performs more robustly than Kilosort 2.5 (the version used in the Myomatrix paper). Given such dependence on details of the implementation, and the use of an older Kilosort version in this paper, the evidence that the Myomatrix arrays truly record individual motor units across all the types of data obtained is in question.

      We chose to modify Kilosort 2.5, which has been used by many research groups to sort spikes, rather than the just-released Kilosort 4.0. Although future studies might directly compare the performance of these two versions on sorting motor unit data, we feel that such an analysis is beyond the scope of this paper, which aims primarily to introduce our electrode technology and demonstrate that a wide range of sorting methods (thresholding, PCA-based waveform clustering, and Kilosort) can all be used to extract single motor units. Additionally, note that because we have made several significant modifications to Kilosort 2.5 as described above, it is not clear what a “direct” comparison between different Kilosort versions would mean, since the procedures we provide here are no longer identical to version 2.5.

      There is an older paper with a similar goal of using multi-channel recordings to perform source localization that the authors have failed to discuss. Given the striking similarity of goals and the divergence of approaches (the older paper uses a surface electrode array), it is important to know the relationship of the Myomatrix array to the previous work. Like Myomatrix arrays, the previous work also derives inspiration from cortical recordings; in that case, it applies the source-localization approach used in large-scale EEG recordings with skull caps to surface EMG arrays. Ref: van den Doel, K., Ascher, U. M., & Pai, D. K. (2008). Computed myography: three-dimensional reconstruction of motor functions from surface EMG data. Inverse Problems, 24(6), 065010.

      We thank the Reviewer for pointing out this important prior work, which we now cite and discuss in the revised manuscript under “Data analysis: spike sorting” [lines 318-333]:

      “Our approach to spike sorting shares the same ultimate goal as prior work using skin-surface electrode arrays to isolate signals from individual motor units but pursues this goal using different hardware and analysis approaches. A number of groups have developed algorithms for reconstructing the spatial location and spike times of active motor units (Negro et al. 2016; van den Doel, Ascher, and Pai 2008) based on skin-surface recordings, in many cases drawing inspiration from earlier efforts to localize cortical activity using EEG recordings from the scalp (Michel et al. 2004). Our approach differs substantially. In Myomatrix arrays, the close electrode spacing and very close proximity of the contacts to muscle fibers ensure that each Myomatrix channel records from a much smaller volume of tissue than skin-surface arrays. This difference in recording volume in turn creates different challenges for motor unit isolation: compared to skin-surface recordings, Myomatrix recordings include a smaller number of motor units represented on each recording channel, with individual motor units appearing on a smaller fraction of the sensors than typical in a skin-surface recording. Because of this sensor-dependent difference in motor unit source mixing, different analysis approaches are required for each type of dataset. Specifically, skin-surface EMG analysis methods typically use source-separation approaches that assume that each sensor receives input from most or all of the individual sources within the muscle, as is presumably the case in the data. In contrast, the much sparser recordings from Myomatrix are better decomposed using methods like Kilosort, which are designed to extract waveforms that appear only on a small, spatially-restricted subset of recording channels.”

      The incompleteness of the evidence that the Myomatrix array truly measures individual motor units is limited to the setting where multiple motor units have similar signal magnitudes in most of the channels. In the simpler setting where one motor unit dominates in some channel (which seems to occur with some regularity), the Myomatrix array is a major advance in our ability to understand the motor output of the nervous system. The paper is a trove of innovations in manufacturing technique, array design, suture and other fixation devices for long-term signal stability, and customization for different muscle sizes, locations, and organisms. The technology presented here is likely to achieve rapid adoption in multiple groups that study motor behavior and would probably lead to new insights into the spatiotemporal distribution of motor output in more naturally behaving animals than is currently standard in the field.

      We thank the Reviewer for this positive evaluation and for the critical comments above.

      Reviewer #2 (Public Review):

      Motoneurons constitute the final common pathway linking central impulse traffic to behavior, and neurophysiology faces an urgent need for methods to record their activity at high resolution and scale in intact animals during natural movement. In this consortium manuscript, Chung et al. introduce high-density electrode arrays on a flexible substrate that can be implanted into muscle, enabling the isolation of multiple motor units during movement. They then demonstrate these arrays can produce high-quality recordings in a wide range of species, muscles, and tasks. The methods are explained clearly, and the claims are justified by the data. While technical details on the arrays have been published previously, the main significance of this manuscript is the application of this new technology to different muscles and animal species during naturalistic behaviors. Overall, we feel the manuscript will be of significant interest to researchers in motor systems and muscle physiology, and we have no major concerns. A few minor suggestions for improving the manuscript follow.

      We thank the Reviewer for this positive overall assessment.

      The authors perhaps understate what has been achieved with classical methods. To further clarify the novelty of this study, they should survey previous approaches for recording from motor units during active movement. For example, Pflüger & Burrows (J. Exp. Biol. 1978) recorded from motor units in the tibial muscles of locusts during jumping, kicking, and swimming. In humans, Grimby (J. Physiol. 1984) recorded from motor units in toe extensors during walking, though these experiments were most successful in reinnervated units following a lesion. In addition, the authors might briefly mention previous approaches for recording directly from motoneurons in awake animals (e.g., Robinson, J. Neurophys. 1970; Hoffer et al., Science 1981).

      We agree and have revised the manuscript to discuss these and other prior use of traditional EMG, including here [lines 164-167]:

      “The diversity of applications presented here demonstrates that Myomatrix arrays can obtain high-resolution EMG recordings across muscle groups, species, and experimental conditions including spontaneous behavior, reflexive movements, and stimulation-evoked muscle contractions. Although this resolution has previously been achieved in moving subjects by directly recording from motor neuron cell bodies in vertebrates (Hoffer et al. 1981; Robinson 1970; Hyngstrom et al. 2007) and by using fine-wire electrodes in moving insects (Pfluger 1978; Putney et al. 2023), both methods are extremely challenging and can only target a small subset of species and motor unit populations. Exploring additional muscle groups and model systems with Myomatrix arrays will allow new lines of investigation into how the nervous system executes skilled behaviors and coordinates the populations of motor units both within and across individual muscles…

      For chronic preparations, additional data and discussion of the signal quality over time would be useful. Can units typically be discriminated for a day or two, a week or two, or longer?

      A related issue is whether the same units can be tracked over multiple sessions and days; this will be of particular significance for studies of adaptation and learning.

      Although the yields of single units are greatest in the 1-2 weeks immediately following implantation, in chronic preparations we have obtained well-isolated single units up to 65 days post-implant. Anecdotally, in our chronic mouse implants we occasionally see motor units on the same channel across multiple days with similar waveform shapes and patterns of behavior-locked activity. However, because data collection for this manuscript was not optimized to answer this question, we are unable to verify whether these observations actually reflect cross-session tracking of individual motor units. For example, in all cases animals were disconnected from data collection hardware in between recording sessions (which were often separated by multiple intervening days), preventing us from continuously tracking motor units across long timescales. We agree with the reviewer that long-term motor unit tracking would be extremely useful as a tool for examining learning and plan to address this question in future studies.

      We have added a discussion of these issues to the revised manuscript [lines 52-59]:

      “…These methods allow the user to record simultaneously from ensembles of single motor units (Fig. 1c,d) in freely behaving animals, even from small muscles including the lateral head of the triceps muscle in mice (approximately 9 mm in length with a mass of 0.02 g [23]). Myomatrix recordings isolated single motor units for extended periods (greater than two months, Supp. Fig. 3e), although highest unit yield was typically observed in the first 1-2 weeks after chronic implantation. Because recording sessions from individual animals were often separated by several days during which animals were disconnected from data collection equipment, we are unable to assess based on the present data whether the same motor units can be recorded over multiple days.”

      Moreover, we have revised Supplemental Figure 3 to show an example of single motor units recorded >2 months after implantation:

      Author response image 1.

      Longevity of Myomatrix recordings. In addition to isolating individual motor units, Myomatrix arrays also provide stable multi-unit recordings of comparable or superior quality to conventional fine-wire EMG…. (e) Although individual motor units were most frequently recorded in the first two weeks of chronic recordings (see main text), Myomatrix arrays also isolate individual motor units after much longer periods of chronic implantation, as shown here where spikes from two individual motor units (colored boxes in bottom trace) were isolated during locomotion 65 days after implantation. This bipolar recording was collected from the subject plotted with unfilled black symbols in panel (d).

      It appears both single-ended and differential amplification were used. The authors should clarify in the Methods which mode was used in each figure panel, and should discuss the advantages and disadvantages of each in terms of SNR, stability, and yield, along with any other practical considerations.

      We thank the reviewer for the suggestion and have added text to all figure legends clarifying whether each recording was unipolar or bipolar.

      Is there likely to be a motor unit size bias based on muscle depth, pennation angle, etc.?

      Although such biases are certainly possible, the data presented here are not well-suited to answering these questions. For chronic implants in small animals, the target muscles (e.g. triceps in mice) are so small that the surgeon often has little choice about the site and angle of array insertion, preventing a systematic analysis of this question. For acute array injections in larger animals such as rhesus macaques, we did not quantify the precise orientation of the arrays (e.g. with ultrasound imaging) or the muscle fibers themselves, again preventing us from drawing strong conclusions on this topic. This question is likely best addressed in acute experiments performed on larger muscles, in which the relative orientations of array threads and muscle fibers can be precisely imaged and systematically varied to address this important issue.

      Can muscle fiber conduction velocity be estimated with the arrays?

      We sometimes observe fiber conduction delays of up to 0.5 ms as the spike from a single motor unit moves from electrode contact to electrode contact, so spike velocity could be easily estimated given the known spatial separation between electrode contacts. However (closely related to the question above), this will only provide an accurate estimate of muscle fiber conduction velocity if the electrode contacts are arranged parallel to the fiber direction, which is difficult to assess in our current dataset. If the arrays are not parallel, this computation will produce an overestimate of conduction velocity, as in the extreme case where a line of electrode contacts arranged perpendicular to the fiber direction might have identical spike arrival times, and therefore appear to have an infinite conduction velocity. Therefore, although Myomatrix arrays can certainly be used to estimate conduction velocity, such estimates should be performed in future studies only in settings where the relative orientation of array threads and muscle fibers can be accurately measured.
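      For concreteness, the velocity estimate reduces to contact spacing divided by inter-contact spike delay. The sketch below, ours rather than the authors', uses a hypothetical 500 µm contact pitch to show both the calculation and the perpendicular-contact failure mode described above.

```python
# Apparent conduction velocity from inter-contact spike arrival delay.
# The 500 um pitch is a hypothetical value for illustration; the estimate
# is only meaningful when contacts lie parallel to the muscle fibers.
def conduction_velocity_m_per_s(pitch_um: float, delay_ms: float) -> float:
    if delay_ms == 0:
        # Perpendicular contacts: simultaneous arrivals imply an apparently
        # infinite velocity, the failure mode noted above.
        return float("inf")
    return (pitch_um * 1e-6) / (delay_ms * 1e-3)

print(conduction_velocity_m_per_s(500, 0.125))  # 4.0 m/s
print(conduction_velocity_m_per_s(500, 0.0))    # inf
```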

      The authors suggest their device may have applications in the diagnosis of motor pathologies. Currently, concentric needle EMG to record from multiple motor units is the standard clinical method, and they may wish to elaborate on how surgical implantation of the new array might provide additional information for diagnosis while minimizing risk to patients.

      We thank the reviewer for the suggestion and have modified the manuscript’s final paragraph accordingly [lines 182-188]:

      “Applying Myomatrix technology to human motor unit recordings, particularly by using the minimally invasive injectable designs shown in Figure 3 and Supplemental Figure 1g,i, will create novel opportunities to diagnose motor pathologies and quantify the effects of therapeutic interventions in restoring motor function. Moreover, because Myomatrix arrays are far more flexible than the rigid needles commonly used to record clinical EMG, our technology might significantly reduce the risk and discomfort of such procedures while also greatly increasing the accuracy with which human motor function can be quantified. This expansion of access to high-resolution EMG signals – across muscles, species, and behaviors – is the chief impact of the Myomatrix project.”

      Reviewer #3 (Public Review):

      This work provides a novel design of implantable and high-density EMG electrodes to study muscle physiology and neuromotor control at the level of individual motor units. Current methods of recording EMG using intramuscular fine-wire electrodes do not allow for isolation of motor units and are limited by the muscle size and the type of behavior used in the study. The authors of Myomatrix arrays had set out to overcome these challenges in EMG recording and provided compelling evidence to support the usefulness of the new technology.

      Strengths:

      • They presented convincing examples of EMG recordings with high signal quality using this new technology from a wide array of animal species, muscles, and behavior.

      • The design included suture holes and pull-on tabs that facilitate implantation and ensure stable recordings over months.

      • Clear presentation of specifics of the fabrication and implantation, recording methods used, and data analysis.

      We thank the Reviewer for these comments.

      Weaknesses:

      The justification for the need to study the activity of isolated motor units is underdeveloped. The study could be strengthened by providing example recordings from studies that try to answer questions where isolation of motor unit activity is most critical. For example, there is immense value in understanding muscles with smaller innervation ratios, which tend to have many motor neurons for fine control, such as the muscles of the eyes and hands.

      We thank the Reviewer for the suggestion and have modified the manuscript accordingly [lines 170-174]:

      “…how the nervous system executes skilled behaviors and coordinates the populations of motor units both within and across individual muscles. These approaches will be particularly valuable in muscles in which each motor neuron controls a very small number of muscle fibers, allowing fine control of oculomotor muscles in mammals as well as vocal muscles in songbirds (Fig. 2g), in which most individual motor neurons innervate only 1-3 muscle fibers (Adam et al. 2021).”

      Reviewer #1 (Recommendations for The Authors):

      I would urge the authors to consider a thorough validation of the spike sorting piece of the workflow. Barring that weakness, this paper has the potential to transform motor neuroscience. The validation efforts of kilosort in the context of Neuropixels might offer a template for how to convince the community of the accuracy of myomatrix arrays in disambiguating individual motor unit waveforms.

      I have a few minor detailed comments, which the authors may find of some use. My overall comment is to commend the authors for the precision of the work as well as the writing. However, exercising caution in the use of Kilosort could truly elevate the paper by showing where there is room for improvement.

      We thank the Reviewer for these comments - please see our summary of our revisions related to Kilosort in our reply to the public reviews above.

      L6-7: The relationship between motor unit action potentials and the force produced is quite complicated in muscle. For example, recent work has shown how decoupled force and EMG can be during nonsteady locomotion. Therefore, it is not a fully justified claim that recording motor unit potentials will tell us what forces are produced. This point relates to another claim made by the authors (correctly) that EMG provides better-quality information about muscle motor output in isometric settings than in more dynamic behaviors. That same problem could also apply to motor unit recordings and their relationship to muscle force. The relationship is undoubtedly strong in an isometric setting. But as has been repeatedly established, the electrical activity of muscle is only loosely related to its force output and lacks predictive power.

      This is an excellent point, and our revised manuscript now addresses this issue [lines 174-176]:

      “…Of further interest will be combining high-resolution EMG with precise measurement of muscle length and force output to untangle the complex relationship between neural control, body kinematics, and muscle force that characterizes dynamic motor behavior. Similarly, combining Myomatrix recordings with high-density brain recordings….”

      L12: There is older work that uses an array of skin-mounted EMG electrodes to solve a source-localization problem, and thus comes quite close to the authors' stated goals. However, the authors have failed to cite or provide an in-depth analysis and discussion of this older work.

      As described above in the response to Reviewer 1’s public review comments, we now cite and discuss these papers.

      L18-19: "These limitations have impeded our understanding of fundamental questions in motor control, ..." There are two independently true statements here. First is that there are limitations to EMG based inference of motor unit activity. Second is that there are gaps in the current understanding of motor unit recruitment patterns and modification of these patterns during motor learning. But the way the first few paragraphs have been worded makes it seem like motor unit recordings is a panacea for these gaps in our knowledge. That is not the case for many reasons, including key gaps in our understanding of how muscle's electrical activity relates to its force, how force relates to movement, and how control goals map to specific movement patterns. This manuscript would in fact be strengthened by acknowledging and discussing the broader scope of gaps in our understanding, and thus more precisely pinpointing the specific scientific knowledge that would be gained from the application of myomatrix arrays.

      We agree and have revised the manuscript to note this complexity (see our reply to this Reviewer’s other comment about muscle force, above).

      L140-143: The estimation algorithms yield putative spikes, but lacking validation of the sorting algorithms, it is not justifiable to conclude that the Myomatrix arrays have already provided information about individual motor units.

      Please see our replies to Reviewer #1's public comments (above) regarding motor unit spike sorting.

      L181-182: "These methods allow very fine pitch escape routing (<10 µm spacing), alignment between layers, and uniform via formation." I find this sentence hard to understand. Perhaps there is some grammatical ambiguity?

      We have revised this passage as follows [lines 194-197]:

      "These methods allow very fine pitch escape routing (<10 µm spacing between the thin “escape” traces connecting electrode contacts to the connector), spatial alignment between the multiple layers of polyimide and gold that constitute each device, and precise definition of “via” pathways that connect different layers of the device.”

      L240: What is the rationale for choosing this frequency band for the filter?

      Individual motor unit waveforms have peak energy at roughly 0.5-2.0 kHz, although units recorded at very high SNR often have voltage waveform features at higher frequencies. The high- and low-pass cutoff frequencies should reflect this, although there is nothing unique about the 350 Hz and 7,000 Hz cutoffs we describe; in all recordings, similar results can be obtained with other choices of low/high frequency cutoffs.
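      A minimal band-pass sketch consistent with these cutoffs is shown below; the 30 kHz sampling rate and fourth-order Butterworth design are our illustrative assumptions, not a prescription from the paper.

```python
# Band-pass filtering of a raw EMG trace at the cutoffs discussed above.
# Sampling rate and filter order are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 30_000                                   # Hz, assumed sampling rate
sos = butter(4, [350, 7_000], btype="bandpass", fs=fs, output="sos")

raw = np.random.randn(fs)                     # one second of stand-in data
filtered = sosfiltfilt(sos, raw)              # zero-phase: no spike skew
```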

      L527-528: There are some key differences between the electrode array design presented here and traditional fine-wire EMG in terms of features used to help with electrode stability within the muscle. A barb-like structure is formed in traditional fine-wire EMG by bending the wire outside the cannula of the needle used to place it within the muscle. But when the wire is pulled out, it is common for the barb to break off and be left behind. This is because of the extreme (thin) aspect ratio of the barb in fine-wire EMG and low-cycle fatigue fracture of the wire. From the schematic shown here, the barb design seems to be stubbier and thus less prone to breaking off. This raises the question of how much damage is inflicted during the pull-out and the associated level of discomfort to the animal as a result. The authors should present a more careful statement and documentation with regard to this issue.

      We have updated the manuscript to highlight the ease of inserting and removing Myomatrix probes, and to clarify that in over 100 injectable insertions/removals there have been zero cases of barbs (or any other part) of the devices breaking off within the muscle [lines 241-249]:

      “…Once the cannula was fully inserted, the tail was released, and the cannula slowly removed. After recording, the electrode and tail were slowly pulled out of the muscle together. Insertion and removal of injectable Myomatrix devices appeared to be comparable or superior to traditional fine-wire EMG electrodes (in which a “hook” is formed by bending back the uninsulated tip of the recording wire) in terms of ease of injection, ease of removal of both the cannula and the array itself, and animal comfort. Moreover, in over 100 Myomatrix injections performed in rhesus macaques, there were zero cases in which Myomatrix arrays broke such that electrode material was left behind in the recorded muscle, representing a substantial improvement over traditional fine-wire approaches, in which breakage of the bent wire tip regularly occurs (Loeb and Gans 1986).”

      Reviewer #2 (Recommendations For The Authors):

      The Abstract states the device records "muscle activity at cellular resolution," which could potentially be read as a claim that single-fiber recording has been achieved. The authors might consider rewording.

      The Reviewer is correct, and we have removed the word “cellular”.

      The supplemental figures could perhaps be moved to the main text to aid readers who prefer to print the combined PDF file.

      After finalizing the paper, we will upload all main-text and supplemental figures into a single pdf on bioRxiv for readers who prefer a single pdf. However, given that the supplemental figures provide more technical and detailed information than the main-text figures, for the paper on the eLife site we prefer the current eLife format in which supplemental figures are associated with individual main-text figures online.

      Reviewer #3 (Recommendations For The Authors):

      • The work could be strengthened by showing examples of simultaneous recordings from different muscles.

      Although Myomatrix arrays can indeed be used to record simultaneously from multiple muscles, in this manuscript we have decided to focus on high-resolution recordings that maximize the number of recording channels and motor units obtained from a single muscle. Future work from our group will introduce larger Myomatrix arrays optimized for recording from many muscles simultaneously.

      • The implantation did not include mention of testing the myomatrix array during surgery by using muscle stimulation to verify correct placement and connection.

      As the Reviewer points out, electrical stimulation is a valuable tool for confirming successful EMG placement. However, we did not use this approach in the current study, relying instead on anatomical confirmation of muscle targeting (e.g. intrasurgical and postmortem inspection in rodents) and on implanting large, easy-to-target arm muscles (in primates), where the risk of mis-targeting is extremely low. Future studies will examine both electrical stimulation and ultrasound methods for confirming the placement of Myomatrix arrays.

      References cited above

      Adam, I., A. Maxwell, H. Rossler, E. B. Hansen, M. Vellema, J. Brewer, and C. P. H. Elemans. 2021. 'One-to-one innervation of vocal muscles allows precise control of birdsong', Curr Biol, 31: 3115-24 e5.

      Hoffer, J. A., M. J. O'Donovan, C. A. Pratt, and G. E. Loeb. 1981. 'Discharge patterns of hindlimb motoneurons during normal cat locomotion', Science, 213: 466-7.

      Hyngstrom, A. S., M. D. Johnson, J. F. Miller, and C. J. Heckman. 2007. 'Intrinsic electrical properties of spinal motoneurons vary with joint angle', Nat Neurosci, 10: 363-9.

      Loeb, G. E., and C. Gans. 1986. Electromyography for Experimentalists, First edi (The University of Chicago Press: Chicago, IL).

      Michel, C. M., M. M. Murray, G. Lantz, S. Gonzalez, L. Spinelli, and R. Grave de Peralta. 2004. 'EEG source imaging', Clin Neurophysiol, 115: 2195-222.

      Negro, F., S. Muceli, A. M. Castronovo, A. Holobar, and D. Farina. 2016. 'Multi-channel intramuscular and surface EMG decomposition by convolutive blind source separation', J Neural Eng, 13: 026027.

      Pfluger, H. J., and M. Burrows. 1978. 'Locusts use the same basic motor pattern in swimming as in jumping and kicking', Journal of Experimental Biology, 75: 81-93.

      Putney, Joy, Tobias Niebur, Leo Wood, Rachel Conn, and Simon Sponberg. 2023. 'An information theoretic method to resolve millisecond-scale spike timing precision in a comprehensive motor program', PLOS Computational Biology, 19: e1011170.

      Robinson, D. A. 1970. 'Oculomotor unit behavior in the monkey', J Neurophysiol, 33: 393-403.

      van den Doel, Kees, Uri M Ascher, and Dinesh K Pai. 2008. 'Computed myography: three-dimensional reconstruction of motor functions from surface EMG data', Inverse Problems, 24: 065010.

    Author Response

      The following is the authors’ response to the original reviews.

      We thank the reviewers for truly valuable advice and comments. We have made multiple corrections and revisions to the original pre-print accordingly per the following comments:

      1. Pro1153Leu is extremely common in the general population (allele frequency in gnomAD is 0.5). Further discussion is warranted to justify the possibility that this variant contributes to a phenotype documented in 1.5-3% of the population. Is it possible that this variant is tagging other rare SNPs in the COL11A1 locus, and could any of the existing exome sequencing data be mined for rare nonsynonymous variants?

      One possible avenue for future work is to return to any existing exome sequencing data to query for rare variants at the COL11A1 locus. This should be possible for the USA MO case-control cohort. Any rare nonsynonymous variants identified should then be subjected to mutational burden testing, ideally after functional testing to diminish any noise introduced by rare benign variants in both cases and controls. If there is a significant association of rare variation in AIS cases, then they should consider returning to the other cohorts for targeted COL11A1 gene sequencing or whole exome sequencing (whichever approach is easier/less expensive) to demonstrate replication of the association.

      Response: Regarding the genetic association of the common COL11A1 variant rs3753841 (p.(Pro1335Leu)), we do not propose that it is the sole risk variant contributing to the association signal we detected and have clarified this in the manuscript. We concluded that it was worthy of functional testing for reasons described here. Although there were several common variants in the discovery GWAS within and around COL11A1, none were significantly associated with AIS and none were in linkage disequilibrium (R2>0.6) with the top SNP rs3753841. We next reviewed rare (MAF<=0.01) coding variants within the COL11A1 LD region of the associated SNP (rs3753841) in 625 available exomes representing 46% of the 1,358 cases from the discovery cohort. The LD block was defined using Haploview based on the 1KG_CEU population. Within the ~41 KB LD region (chr1:103365089-103406616, GRCh37) we found three rare missense variants in 6 unrelated individuals (see the table below). Two of them (NM_080629.2:c.G4093A:p.A1365T; NM_080629.2:c.G3394A:p.G1132S), from two individuals, are predicted to be deleterious based on CADD and GERP scores and are plausible AIS risk candidates. At this rate, we could expect to find only 4-5 individuals with linked rare coding variants in the total cohort of 1,358, which collectively are unlikely to explain the overall association signal we detected. Of course, there also could be deep intronic variants contributing to the association that we would not detect by our methods. However, given this scenario, the relatively high predicted deleteriousness of rs3753841 (CADD=25.7; GERP=5.75), and its occurrence in a Gly-X-Y triplet repeat, we hypothesized that this variant itself could be a risk allele worthy of further investigation.

      Author response table 1.

      We also appreciate the reviewer’s suggestion to perform a rare variant burden analysis of COL11A1. We did conduct a pilot gene-based analysis in 4,534 European-ancestry exomes, including 797 of our own AIS cases and 3,737 controls, and tested the burden of rare variants in COL11A1. The SKAT-O P value was not significant (COL11A1_P=0.18), but this could be due to lack of power and/or background from rare benign variants that could be screened out using the functional testing we have developed.

      1. COL11A1 p.Pro1335Leu is pursued as a direct candidate susceptibility locus, but the functional validation involves both: (a) a complementation assay in mouse GPCs, Figure 5; and (b) cultured rib cartilage cells from Col11a1-Ad5 Cre mice (Figure 4). Please address the following:

      2A. Is Pro1335Leu a loss of function, gain of function, or dominant negative variant? Further rationale for modeling this change in a Col11a1 loss of function cell line would be helpful.

      Response: Regarding functional testing, by knockdown/knockout cell culture experiments, we showed for the first time that Col11a1 negatively regulates Mmp3 expression in cartilage chondrocytes, an AIS-relevant tissue. We then tested the effect of overexpressing the human wt or variant COL11A1 by lentiviral transduction in SV40-transformed chondrocyte cultures. We deleted endogenous mouse Col11a1 by Cre recombination to remove the background of its strong suppressive effects on Mmp3 expression. We acknowledge that Col11a1 missense variants could confer gain-of-function or dominant-negative effects that would not be revealed in this assay. However, as indicated in our original manuscript, we have noted that spinal deformity is described in the cho/cho mouse, a Col11a1 loss-of-function mutant. We also note the recent publication by Rebello et al. showing that missense mutations in Col11a2 associated with congenital scoliosis fail to rescue a vertebral malformation phenotype in a zebrafish col11a2 KO line. Although the connection between AIS and vertebral malformations is not altogether clear, we surmise that loss of the components of collagen type XI disrupts spinal development. In vivo experiments in vertebrate model systems are needed to fully establish the consequences and genetic mechanisms by which COL11A1 variants contribute to an AIS phenotype.

      2B. Expression appears to be augmented compared WT in Fig 5B, but there is no direct comparison of WT with variant.

      Response: Expression of the mutant (from the lentiviral expression vector) is increased compared to the wildtype. We observed this effect in repeated experiments. Sequencing confirmed that the mutant and wildtype constructs differed only at the position of the rs3753841 SNP. At this time, we cannot explain the difference in expression levels. Nonetheless, even when the variant COL11A1 is relatively overexpressed, it fails to suppress MMP3 expression as observed for the wildtype form.

      2C. How do the authors know that their complementation data in Figure 5 are specific? Repetition of this experiment with an alternative common nonsynonymous variant in COL11A1 (such as rs1676486) would be helpful as a comparison with the expectation that it would be similar to WT.

      Response: We agree that testing an allelic series throughout COL11A1 could be informative, but we have shifted our resources toward in vivo experiments that we believe will ultimately be more informative for deciphering the mechanistic role of COL11A1 in MMP3 regulation and spine deformity.

      2D. The y-axes of histograms in panel A need attention and clarification. What is meant by power? Do you mean fold change?

      Response: Power is directly comparable to fold change but allows comparison of absolute expression levels between different genes.

      2E. Figure 5: how many technical and biological replicates? Confirm that these are stated throughout the figures.

      Response: Thank you for pointing out this oversight. This information has been added throughout.

      1. Figure 2: What does the gross anatomy of the IVD look like? Could the authors address this by showing an H&E of an adjacent section of the Fig. 2 A panels?

      Response: Panel 2 shows H&E staining. Perhaps the reviewer is referring to the WT and Pax1 KO images in Figure 3? We have now added H&E staining of WT and Pax1 KO IVD as supplemental Figure 3E to clarify the IVD anatomy.

      1. Page 9: "Cells within the IVD were negative for Pax1 staining ..." There seems to be specific PAX1 expression in many cells within the IVD, which is concerning if this is indeed a supposed null allele of Pax1. This data seems to support that the allele is not null.

      Response: We have now added updated images for the COL11A1 and PAX1 staining to include negative controls in which we omitted primary antibodies. As can be seen, there is faint autofluorescence in the PAX1 negative control that appears to explain the “specific staining” referred to by the reviewer. These images confirm that the allele is truly a null.

      1. There is currently a lack of evidence supporting the claim that "Col11a1 is positively regulated by Pax1 in mouse spine and tail". Therefore, it is necessary to conduct further research to determine the direct regulatory role of Pax1 on Col11a1.

      Response: We agree with the reviewer and have clarified that Pax1 may have either a direct or indirect role in Col11a1 regulation.

      1. There is no data linking loss of COL11A1 function and spine defects in the mouse model. Furthermore, due to the absence of P1335L point mutant mice, it cannot be confirmed whether P1335L can actually cause AIS, and the pathogenicity of this mutation cannot be directly verified. These limitations need to be clearly stated and discussed. A Col11a1 mouse mutant called chondrodysplasia (cho) was shown to be perinatal lethal with severe endochondral defects (https://pubmed.ncbi.nlm.nih.gov/4100752/). This information may help contextualize this study.

      Response: We partially agree with the reviewer. Spine defects are reported in the cho mouse (for example, please see reference 36 Hafez et al). We appreciate the suggestion to cite the original Seegmiller et al 1971 reference and have added it to the manuscript.

      1. A recent article (PMID37462524) reported mutations in COL11A2 associated with AIS and functionally tested in zebrafish. That study should be cited and discussed as it is directly relevant for this manuscript.

      Response: We agree with the reviewer that this study provides important information supporting a loss-of-function role for type XI collagen in spinal deformity. Language to this effect has been added to the manuscript, and this study is now cited in the paper.

      1. Please reconcile the following result on page 10 of the results: "Interestingly, the AIS-associated gene Adgrg6 was amongst the most significantly dysregulated genes in the RNA-seq analysis (Figure 3c). By qRT-PCR analysis, expression of Col11a1, Adgrg6, and Sox6 were significantly reduced in female and male Pax1-/- mice compared to wild-type mice (Figure 3d-g)." In Figure 3f, the downregulation of Adgrg6 appears to be modest so how can it possibly be highlighted as one of the most significantly downregulated transcripts in the RNAseq data?

      Response: By “significant” we were referring to the P-value significance in the RNAseq analysis, not to the absolute change in expression. This language was clearly confusing, and we have removed it from the manuscript.

      1. It is incorrect to refer to the primary cell culture work as growth plate chondrocytes (GPCs), instead, these are primary costal chondrocyte cultures. These primary cultures have a mixture of chondrocytes at differing levels of differentiation, which may change differentiation status during the culturing on plastic. In sum, these cells are at best chondrocytes, and not specifically growth plate chondrocytes. This needs to be corrected in the abstract and throughout the manuscript. Moreover, on page 11 these cells are referred to as costal cartilage, which is confusing to the reader.

      Response: Thank you for pointing out these inconsistencies. We have changed the manuscript to say “costal chondrocytes” throughout.

      Minor points

      • On page 10 of the Results: "These data support a mechanistic link between Pax1 and Col11a1, and the AIS-associated genes Gpr126 and Sox6, in affected tissue of the developing tail." qRT-PCR validation of Sox6, although significant, appears to be very modestly downregulated in KO. Please soften this statement in the text.

      Response: We have softened this statement.

      • Have you got any information about how immortalization (SV40) of the costal cartilage cells affected chondrogenic differentiation? The expression of SV40 seemed to stimulate Mmp13 expression. Do these cells still make cartilage nodules? Some feedback on this process and how it affects the nature of the culture would be appreciated.

      Response: The “+ or –” in Figure 5 refers to Ad5-cre. Each experiment was performed in SV40-immortalized costal chondrocytes. We have removed SV40 from the figure and have clarified the legend to say “qRT-PCR of human COL11A1 and endogenous mouse Mmp3 in SV40-immortalized mouse costal chondrocytes transduced with the lentiviral vector only (lanes 1,2), human WT COL11A1 (lane 3), or COL11A1 P1335L.” Otherwise, we absolutely agree that understanding Mmp13 regulation during chondrocyte differentiation is important. We plan to study this using in vivo systems.

      • Figure 1: is this the average odds ratio? Can this be stated in the figure legend?

      Response: We are not sure what is being asked here. The “combined odds ratio” is calculated as a weighted average of the log of the odds.

      • A more consistent use of established nomenclature for mouse versus human genes and proteins is needed.

      Human: GENE/PROTEIN

      Mouse: Gene/PROTEIN

      Response: Thank you for pointing this out. The nomenclature has been corrected throughout the manuscript.

      • There is no Figure 5c, but a reference to results in the main text. Please reconcile.

      • There is no Figure 5-figure supplement 5a, but there is a reference to it in the main text. Please reconcile.

      Response: Figure references have been corrected.

      • Please indicate dilutions of all antibodies used when listed in the methods.

      Response: Antibody dilutions have been added where missing.

      • On page 25, there is a partial sentence missing information in the Histologic methods; "#S36964 Invitrogen, CA, USA)). All images were taken..."

      Response: We apologize for the error. It has been removed.

      • Table 1: please define all acronyms, including cohort names.

      Response: We apologize for the oversight. The legend to the Table has been updated with definitions of all acronyms.

      • Figure 2: Indicate that blue staining is DAPI in panel B. Clarify that "-ab" as an abbreviation is primary antibody negative.

      Response: A color code for DAPI and COL11A1 staining has been added, and “-ab” is now defined.

      • Page 4: ADGRG6 (also known as GPR126)...the authors set this up for ADGRG6 but then use GPR126 in the manuscript, which is confusing. For clarity, please use the gene name Adgrg6 consistently, rather than alternating with Gpr126.

      Response: Thank you for pointing this out. GPR126 has now been changed to ADGRG6 throughout the manuscript.

      • REF 4: Richards, B.S., Sucato, D.J., Johnston C.E. Scoliosis, (Elsevier, 2020). Is this a book, can you provide more clarity in the Reference listing?

      Response: Thank you for pointing this out. This reference has been corrected.

      • While isolation was addressed, the methods for culturing rat cartilage endplate and costal chondrocytes are poorly described and should be given more text.

      Response: Details about the cartilage endplate and costal chondrocyte isolation and culture have been added to the Methods.

      • Page 11: 1st paragraph, last sentence "These results suggest that Mmp3 expression"... this sentence needs attention. As written, I am not clear what the authors are trying to say.

      Response: This sentence has been clarified and now reads “These results suggest that Mmp3 expression is negatively regulated by Col11a1 in mouse costal chondrocytes.”

      • Page 13: line 4 from the bottom, "ECM-clearing"? This is confusing; do you mean ECM-degrading?

      Response: Yes and thank you. We have changed to “ECM-degrading”.

      • Please use version numbers for RefSeq IDs: e.g. NM_080629.3 instead of NM_080629

      Response: This change has been made in the revised manuscript.

      • It would be helpful for readers if the ethnicity of the discovery case cohort was clearly stated as European ancestry in the Results main text.

      Response: “European ancestry” has been added at first description of the discovery cohort in the manuscript.

      • Avoid using the term "mutation" and use "variant" instead.

      Response: Thank you for pointing this out. “Variant” is now used throughout the manuscript.

      • Define error bars for all bar charts throughout and include individual data points overlaid onto bars.

      Response: Thank you. Error bars are now clarified in the Figure legends.

    2. Author Response

      The following is the authors’ response to the previous reviews.

      We thank the reviewers for truly valuable advice and comments. We have made multiple corrections and revisions to the original pre-print accordingly per the following comments:

      1. Pro1153Leu is extremely common in the general population (allele frequency in gnomAD is 0.5). Further discussion is warranted to justify the possibility that this variant contributes to a phenotype documented in 1.5-3% of the population. Is it possible that this variant is tagging other rare SNPs in the COL11A1 locus, and could any of the existing exome sequencing data be mined for rare nonsynonymous variants?

      One possible avenue for future work is to return to any existing exome sequencing data to query for rare variants at the COL11A1 locus. This should be possible for the USA MO case-control cohort. Any rare nonsynonymous variants identified should then be subjected to mutational burden testing, ideally after functional testing to diminish any noise introduced by rare benign variants in both cases and controls. If there is a significant association of rare variation in AIS cases, then they should consider returning to the other cohorts for targeted COL11A1 gene sequencing or whole exome sequencing (whichever approach is easier/less expensive) to demonstrate replication of the association.

      Response: Regarding the genetic association of the common COL11A1 variant rs3753841 (p.(Pro1335Leu)), we do not propose that it is the sole risk variant contributing to the association signal we detected and have clarified this in the manuscript. We concluded that it was worthy of functional testing for reasons described here. Although there were several common variants in the discovery GWAS within and around COL11A1, none were significantly associated with AIS and none were in linkage disequilibrium (R2>0.6) with the top SNP rs3753841. We next reviewed rare (MAF<=0.01) coding variants within the COL11A1 LD region of the associated SNP (rs3753841) in 625 available exomes representing 46% of the 1,358 cases from the discovery cohort. The LD block was defined using Haploview based on the 1KG_CEU population. Within the ~41 KB LD region (chr1:103365089- 103406616, GRCh37) we found three rare missense mutations in 6 unrelated individuals, Table below. Two of them (NM_080629.2: c.G4093A:p.A1365T; NM_080629.2:c.G3394A:p.G1132S), from two individuals, are predicted to be deleterious based on CADD and GERP scores and are plausible AIS risk candidates. At this rate we could expect to find only 4-5 individuals with linked rare coding variants in the total cohort of 1,358 which collectively are unlikely to explain the overall association signal we detected. Of course, there also could be deep intronic variants contributing to the association that we would not detect by our methods. However, given this scenario, the relatively high predicted deleteriousness of rs3753841 (CADD= 25.7; GERP=5.75), and its occurrence in a GlyX-Y triplet repeat, we hypothesized that this variant itself could be a risk allele worthy of further investigation.

      Author response table 1.

      We also appreciate the reviewer’s suggestion to perform a rare variant burden analysis of COL11A1. We did conduct pilot gene-based analysis in 4534 European ancestry exomes including 797 of our own AIS cases and 3737 controls and tested the burden of rare variants in COL11A1. SKATO P value was not significant (COL11A1_P=0.18), but this could due to lack of power and/or background from rare benign variants that could be screened out using the functional testing we have developed.

      1. COL11A1 p.Pro1335Leu is pursued as a direct candidate susceptibility locus, but the functional validation involves both: (a) a complementation assay in mouse GPCs, Figure 5; and (b) cultured rib cartilage cells from Col11a1-Ad5 Cre mice (Figure 4). Please address the following:

      2A. Is Pro1335Leu a loss of function, gain of function, or dominant negative variant? Further rationale for modeling this change in a Col11a1 loss of function cell line would be helpful.

      Response: Regarding functional testing, by knockdown/knockout cell culture experiments, we showed for the first time that Col11a1 negatively regulates Mmp3 expression in cartilage chondrocytes, an AIS-relevant tissue. We then tested the effect of overexpressing the human wt or variant COL11A1 by lentiviral transduction in SV40-transformed chondrocyte cultures. We deleted endogenous mouse Col11a1 by Cre recombination to remove the background of its strong suppressive effects on Mmp3 expression. We acknowledge that Col11a1 missense mutations could confer gain of function or dominant negative effects that would not be revealed in this assay. However as indicated in our original manuscript we have noted that spinal deformity is described in the cho/cho mouse, a Col11a1 loss of function mutant. We also note the recent publication by Rebello et al. showing that missense mutations in Col11a2 associated with congenital scoliosis fail to rescue a vertebral malformation phenotype in a zebrafish col11a2 KO line. Although the connection between AIS and vertebral malformations is not altogether clear, we surmise that loss of the components of collagen type XI disrupt spinal development. in vivo experiments in vertebrate model systems are needed to fully establish the consequences and genetic mechanisms by which COL11A1 variants contribute to an AIS phenotype.

      2B. Expression appears to be augmented compared WT in Fig 5B, but there is no direct comparison of WT with variant.

      Response: Expression of the mutant (from the lentiviral expression vector) is increased compared to mutant. We observed this effect in repeated experiments. Sequencing confirmed that the mutant and wildtype constructs differed only at the position of the rs3753841 SNP. At this time, we cannot explain the difference in expression levels. Nonetheless, even when the variant COL11A1 is relatively overexpressed it fails to suppress MMP3 expression as observed for the wildtype form.

      2C. How do the authors know that their complementation data in Figure 5 are specific? Repetition of this experiment with an alternative common nonsynonymous variant in COL11A1 (such as rs1676486) would be helpful as a comparison with the expectation that it would be similar to WT.

      Response: We agree that testing an allelic series throughout COL11A1 could be informative, but we have shifted our resources toward in vivo experiments that we believe will ultimately be more informative for deciphering the mechanistic role of COL11A1 in MMP3 regulation and spine deformity.

      2D. The y-axes of histograms in panel A need attention and clarification. What is meant by power? Do you mean fold change?

      Response: Power is directly comparable to fold change but allows comparison of absolute expression levels between different genes.

      2E. Figure 5: how many technical and biological replicates? Confirm that these are stated throughout the figures.

      Response: Thank you for pointing out this oversight. This information has been added throughout.

      1. Figure 2: What does the gross anatomy of the IVD look like? Could the authors address this by showing an H&E of an adjacent section of the Fig. 2 A panels?

      Response: Panel 2 shows H&E staining. Perhaps the reviewer is referring to the WT and Pax1 KO images in Figure 3? We have now added H&E staining of WT and Pax1 KO IVD as supplemental Figure 3E to clarify the IVD anatomy.

      1. Page 9: "Cells within the IVD were negative for Pax1 staining ..." There seems to be specific PAX1 expression in many cells within the IVD, which is concerning if this is indeed a supposed null allele of Pax1. This data seems to support that the allele is not null.

      Response: We have now added updated images for the COL11A1 and PAX1 staining to include negative controls in which we omitted primary antibodies. As can be seen, there is faint autofluorescence in the PAX1 negative control that appears to explain the “specific staining” referred to by the reviewer. These images confirm that the allele is truly a null.

      1. There is currently a lack of evidence supporting the claim that "Col11a1 is positively regulated by Pax1 in mouse spine and tail". Therefore, it is necessary to conduct further research to determine the direct regulatory role of Pax1 on Col11a1.

      Response: We agree with the reviewer and have clarified that Pax1 may have either a direct or indirect role in Col11a1 regulation.

      1. There is no data linking loss of COL11A1 function and spine defects in the mouse model. Furthermore, due to the absence of P1335L point mutant mice, it cannot be confirmed whether P1335L can actually cause AIS, and the pathogenicity of this mutation cannot be directly verified. These limitations need to be clearly stated and discussed. A Col11a1 mouse mutant called chondroysplasia (cho), was shown to be perinatal lethal with severe endochondral defects (https://pubmed.ncbi.nlm.nih.gov/4100752/). This information may help contextualize this study.

      Response: We partially agree with the reviewer. Spine defects are reported in the cho mouse (for example, please see reference 36 Hafez et al). We appreciate the suggestion to cite the original Seegmiller et al 1971 reference and have added it to the manuscript.

      1. A recent article (PMID37462524) reported mutations in COL11A2 associated with AIS and functionally tested in zebrafish. That study should be cited and discussed as it is directly relevant for this manuscript.

      Response: We agree with the reviewer that this study provides important information supporting loss of function in type XI collagen in spinal deformity. Language to this effect has been added to the manuscript, and this study is now cited in the paper.

      1. Please reconcile the following result on page 10 of the results: "Interestingly, the AIS-associated gene Adgrg6 was amongst the most significantly dysregulated genes in the RNA-seq analysis (Figure 3c). By qRT-PCR analysis, expression of Col11a1, Adgrg6, and Sox6 were significantly reduced in female and male Pax1-/- mice compared to wild-type mice (Figure 3d-g)." In Figure 3f, the downregulation of Adgrg6 appears to be modest, so how can it possibly be highlighted as one of the most significantly downregulated transcripts in the RNAseq data?

      Response: By “significant” we were referring to the P-value significance in RNAseq analysis, not in absolute change in expression. This language was clearly confusing, and we have removed it from the manuscript.

      1. It is incorrect to refer to the primary cell culture work as growth plate chondrocytes (GPCs), instead, these are primary costal chondrocyte cultures. These primary cultures have a mixture of chondrocytes at differing levels of differentiation, which may change differentiation status during the culturing on plastic. In sum, these cells are at best chondrocytes, and not specifically growth plate chondrocytes. This needs to be corrected in the abstract and throughout the manuscript. Moreover, on page 11 these cells are referred to as costal cartilage, which is confusing to the reader.

      Response: Thank you for pointing out these inconsistencies. We have changed the manuscript to say “costal chondrocytes” throughout.

      Minor points

      • On page 10 of the Results: "These data support a mechanistic link between Pax1 and Col11a1, and the AIS-associated genes Gpr126 and Sox6, in affected tissue of the developing tail." qRT-PCR validation of Sox6, although significant, appears to be very modestly downregulated in KO. Please soften this statement in the text.

      Response: We have softened this statement.

      • Have you got any information about how the immortalization (SV40) of the costal cartilage cells affected chondrogenic differentiation? The expression of SV40 seemed to stimulate Mmp13 expression. Do these cells still make cartilage nodules? Some feedback on this process and how it affects the nature of the culture would be appreciated.

      Response: The “+ or –” in Figure 5 refers to Ad5-cre. Each experiment was performed in SV40-immortalized costal chondrocytes. We have removed SV40 from the figure and have clarified the legend to say “qRT-PCR of human COL11A1 and endogenous mouse Mmp3 in SV40-immortalized mouse costal chondrocytes transduced with the lentiviral vector only (lanes 1,2), human WT COL11A1 (lane 3), or COL11A1 P1335L.” Otherwise, we absolutely agree that understanding Mmp13 regulation during chondrocyte differentiation is important. We plan to study this using in vivo systems.

      • Figure 1: is this the average odds ratio? Can this be stated in the figure legend?

      Response: We are not sure what is being asked here. The “combined odds ratio” is calculated as a weighted average of the log odds ratios across cohorts.
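
      For readers unfamiliar with the convention: a standard fixed-effect meta-analysis combines per-cohort odds ratios on the log scale using inverse-variance weights. The Python sketch below illustrates this generic weighting scheme; it shows the usual approach, not necessarily the exact procedure used in the manuscript, and the counts are hypothetical.

          import math

          def combined_odds_ratio(tables):
              # tables: list of (a, b, c, d) counts from each cohort's 2x2 table.
              # Fixed-effect (inverse-variance) meta-analysis: each cohort's log
              # odds ratio is weighted by the inverse of its variance, which for
              # a 2x2 table is approximately 1/a + 1/b + 1/c + 1/d.
              num = den = 0.0
              for a, b, c, d in tables:
                  log_or = math.log((a * d) / (b * c))
                  weight = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)
                  num += weight * log_or
                  den += weight
              return math.exp(num / den)

          # Hypothetical counts for three cohorts (illustration only):
          print(combined_odds_ratio([(120, 80, 100, 100), (60, 40, 55, 45), (200, 150, 180, 170)]))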

      • A more consistent use of established nomenclature for mouse versus human genes and proteins is needed.

      Human: GENE/PROTEIN; Mouse: Gene/PROTEIN

      Response: Thank you for pointing this out. The nomenclature has been corrected throughout the manuscript.

      • There is no Figure 5c, but a reference to results in the main text. Please reconcile.

      • There is no Figure 5-figure supplement 5a, but there is a reference to it in the main text. Please reconcile.

      Response: Figure references have been corrected.

      • Please indicate dilutions of all antibodies used when listed in the methods.

      Response: Antibody dilutions have been added where missing.

      • On page 25, there is a partial sentence missing information in the Histologic methods; "#S36964 Invitrogen, CA, USA)). All images were taken..."

      Response: We apologize for the error. It has been removed.

      • Table 1: please define all acronyms, including cohort names.

      Response: We apologize for the oversight. The legend to the Table has been updated with definitions of all acronyms.

      • Figure 2: Indicate that blue staining is DAPI in panel B. Clarify that "-ab" as an abbreviation is primary antibody negative.

      Response: A color code for DAPI and COL11A1 staining has been added and “-ab” is now defined.

      • Page 4: ADGRG6 (also known as GPR126)...the authors set this up for ADGRG6 but then use GPR126 in the manuscript, which is confusing. For clarity, please use the gene name Adgrg6 consistently, rather than alternating with Gpr126.

      Response: Thank you for pointing this out. GPR126 has now been changed to ADGRG6 throughout the manuscript.

      • REF 4: Richards, B.S., Sucato, D.J., Johnston C.E. Scoliosis, (Elsevier, 2020). Is this a book, can you provide more clarity in the Reference listing?

      Response: Thank you for pointing this out. This reference has been corrected.

      • While isolation was addressed, the methods for culturing rat cartilage endplate and costal chondrocytes are poorly described and should be given more text.

      Response: Details about the cartilage endplate and costal chondrocyte isolation and culture have been added to the Methods.

      • Page 11: 1st paragraph, last sentence "These results suggest that Mmp3 expression"... this sentence needs attention. As written, I am not clear what the authors are trying to say.

      Response: This sentence has been clarified and now reads “These results suggest that Mmp3 expression is negatively regulated by Col11a1 in mouse costal chondrocytes.”

      • Page 13: line 4 from the bottom, "ECM-clearing"? This is confusing; do you mean ECM-degrading?

      Response: Yes, and thank you. We have changed it to “ECM-degrading”.

      • Please use version numbers for RefSeq IDs: e.g. NM_080629.3 instead of NM_080629

      Response: This change has been made in the revised manuscript.

      • It would be helpful for readers if the ethnicity of the discovery case cohort was clearly stated as European ancestry in the Results main text.

      Response: “European ancestry” has been added at first description of the discovery cohort in the manuscript.

      • Avoid using the term "mutation" and use "variant" instead.

      Response: Thank you for pointing this out. “Variant” is now used throughout the manuscript.

      • Define error bars for all bar charts throughout and include individual data points overlaid onto bars.

      Response: Thank you. Error bars are now clarified in the Figure legends.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      1. General Statements [optional]

      We are thankful to the reviewers for the time and effort invested in assessing our manuscript and for their suggestions to improve it. We have now considered the points raised by them, carried out additional experiments, and modified the text and figures to address them. We feel that the new experiments and modifications have resolved all concerns raised by the reviewers and have improved the manuscript substantially, strengthening and extending our conclusions.

      The main modifications include:

      • We have extended the analysis of the overexpression strains to highly stringent conditions, which revealed a mild acidification defect for the strain overexpressing Oxr1. In addition, we have included in our analysis a strain in which both proteins are overexpressed, which resulted in a further growth defect.
      • We have analyzed the recruitment of Rtc5 to the vacuole under additional conditions: deletion of RAV1, encoding the main subunit of the RAVE complex; medium containing galactose as the sole carbon source; and pharmacological inhibition of the V-ATPase. These experiments allowed us to strengthen and extend our conclusions regarding the requirements for Rtc5 targeting to the vacuole.
      • We have analyzed V-ATPase disassembly in intact cells, by addressing the localization to the vacuole of subunit C (Vma5) in glucose and galactose-containing medium. The results strengthen our conclusion that both Rtc5 and Oxr1 promote an in vivo state of lower V-ATPase assembly.
      • We have extended our analyses of V-ATPase function to medium containing galactose as a carbon source, since glucose availability is one of the main regulators of V-ATPase function in vivo. The results are consistent with what we observed in glucose-containing medium.
      • We have included a diagram of the structure of the V-ATPase for reference.
      • We have included a diagram and a paragraph describing Oxr1 and Rtc5 regarding protein length and domain architecture and comparing them to other TLDc domain-containing proteins.
      • We have made changes to the text and figures to improve clarity and accuracy, including adding a methods section that was missing. We include below a point-by-point response to the reviewers’ comments.

      2. Point-by-point description of the revisions

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      Suggestions:

      1. The authors observed that knockout of Rtc5p or Oxr1p does not affect vacuolar pH. If Rtc5p and Oxr1p both cooperate to dissociate V-ATPase, the authors may wish to characterize the effect of a ∆Rtc5p∆Oxr1p double knockout on vacuolar pH.

      The double mutant ∆rtc5∆oxr1 was already included in the original manuscript (the growth test is shown in Figure 5 B and the BCECF staining is shown in Figure 5C). This strain behaved like wt in both of these assays. Of note, what we observe for the deletion strains is increased assembly (Figure 5 D - G), so we expect that it would be hard to observe a difference in vacuole acidity or growth in the presence of metals.

      Therefore, we have now also included a strain with the double overexpression of Oxr1 and Rtc5. Since overexpression of the proteins results in decreased assembly, it is more likely that this strain will show impaired growth under conditions that strongly rely on V-ATPase activity. Indeed, we observed that the overexpression of Oxr1 alone resulted in a slight growth defect in media containing high concentrations of ZnCl2 and the double overexpression strain showed an even further defect (Figure 6 A and C).

      The manuscript would benefit from a well-labelled diagram showing the subunits of V-ATPase (e.g. in Figure 2D).

      We agree with the reviewer and we have now added a diagram of the structure of the V-ATPase labeling the different subunits in Figure 2B.

      The images of structures, especially in Figure 1-Supplement 1B, are not particularly clear and could be improved (e.g. by removing shadows or using transparency).

      We are thankful to the reviewer for this suggestion. To improve the clarity of the structures in Figure 1 C and Figure 1 – Supplement 1A, we are now presenting the different subunits in the structures with different shades of blue and grey.

      The authors should clearly describe the differences between Rtc5p and Oxr1p in terms of protein length, sequence identity, domain structure, etc.

      We are thankful for this suggestion and we have now included a diagram of the domain architecture and protein length of Rtc5 and Oxr1, comparing with two human proteins containing a TLDc domain in Figure 5A. In addition, we have added the following paragraph describing the features of the proteins.

      “Rtc5 is a 567-residue-long protein. Analysis of the protein using HHPred (Zimmermann et al., 2018) finds homology to the structure of porcine Meak7 (PDB ID: 7U8O, (Zi Tan et al., 2022)) over the whole protein sequence (residues 37-559). For both yeast Rtc5 and human Meak7 (Uniprot ID: Q6P9B6), HHPred detects homology of the C-terminal region to other TLDc domain containing proteins like yeast Oxr1 (PDB ID: 7FDE), Drosophila melanogaster Skywalker (PDB ID: 6R82), and human NCOA7 (PDB ID: 7OBP), while the N-terminus has similarity to EF-hand domain calcium-binding proteins (PDB IDs: 1EG3, 2CT9, 1S6C6, Figure 5A). HHPred analysis of the 273-residue-long Saccharomyces cerevisiae Oxr1, on the other hand, only detects similarity to TLDc domain containing proteins (PDB IDs: 7U8O, 6R82, 7OBP), which spans the majority of the sequence of the protein (residues 71-273). The overall sequence identity between Oxr1 and Rtc5 is 24% according to a ClustalOmega alignment within Uniprot. The Alphafold model that we generated for Rtc5 is in good agreement with the available partial structure of Oxr1 (7FDE) (root mean square deviation (RMSD) of 3.509 Å) (Figure 5 - S1 A), indicating they are structurally very similar in the region of the TLDc domain. Taken together, these analyses suggest that Oxr1 belongs to a group of TLDc domain-containing proteins consisting mainly of just this domain like the splice variants Oxr1-C or NCOA7-B in humans (NP_001185464 and NP_001186551, respectively), while Rtc5 belongs to a group containing an additional N-terminal EF-hand-like domain and an N-myristoylation sequence, like human Meak7 (Finelli & Oliver, 2017) (Figure 5 A).”

      Minor:

      1. The "O" in VO should be capitalized. This has been corrected.

      In Figure 4 supplement 1, the labels "I", "S", and "P" should be defined.

      This has been clarified in the figure legend.

      Please clarify what is meant by "switched labelling"

      This refers to the SILAC vacuole proteomics experiments, for which yeast strains are grown in medium containing either L-lysine or 13C6,15N2-L-lysine to produce normal (‘light’) or heavy isotope-labeled (‘heavy’) proteins. This allows comparing two conditions. To increase the robustness of the comparisons, the experiments are done twice with both possible labeling schemes (condition A – light, condition B – heavy; and condition A – heavy, condition B – light), which is commonly described as switched labeling or label switching.

      We have replaced the original sentence in the manuscript with:

      “Performing the same experiments but switching which strain was labeled with heavy and light amino acids gave consistent results.”
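
      To make the label-switch logic concrete, the following short Python sketch (with made-up ratios, not data from this study) shows how forward and reverse SILAC measurements for one protein can be placed on a common scale before being compared:

          import math

          # Hypothetical heavy/light ratios for one protein (illustration only).
          # Forward experiment: condition A labeled light, condition B labeled heavy.
          # Reverse experiment: the labels are switched.
          forward_hl = 2.1   # heavy/light = B/A
          reverse_hl = 0.48  # heavy/light = A/B

          # Put both on a common log2(B/A) scale; the reverse ratio is inverted
          # because the labeling was switched.
          log2_forward = math.log2(forward_hl)
          log2_reverse = math.log2(1.0 / reverse_hl)

          # Agreement in sign and magnitude between the two labeling schemes
          # argues that the measured change is not a labeling artifact.
          print(log2_forward, log2_reverse, (log2_forward + log2_reverse) / 2)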

      The meaning of the sentence "Indeed, this was the case for both of them" is ambiguous.

      We have now replaced this sentence with the following:

      “Indeed, overexpression of either Rtc5 or Oxr1 resulted in increased growth defects in the context of Stv1 deletion (Figure 7 H and I).”

      For Figure 1-Supplement 1B it is hard to see the crosslink distances.

      We have updated this figure to improve the visibility of the cross-links. In addition, we now include a supplemental table (supplemental table 5) with a list of the Cα- Cα distances measured for all the crosslinks we mapped onto high-resolution structures.

      The statement "The effects of Oxr1 are greater than those caused by Rtc5" requires more context. Is there a way of quantifying this effect for the reader?

      We agree that this sentence was too general and vague. The effects caused by one or the other protein depend on the condition and the assay. We have thus deleted this sentence, and we think it is better to refer to the description of the individual assays performed.

      The phrase "negative genetic interaction" should be clarified.

      We have included in the text the following explanation of genetic interactions:

      “A genetic interaction occurs when the combination of two mutations results in a different phenotype from that expected from the addition of the phenotypes of the individual mutations. For example, deletion of OXR1 or RTC5 has no impact on growth in neutral pH media containing zinc in a control background but improves the growth of RAV1 deletion strains (Figure 7 E and F), so this is a positive genetic interaction. On the other hand, overexpression of either Rtc5 or Oxr1 results in a growth defect in a background lacking Rav1 in neutral media containing zinc, a negative genetic interaction.”

      In the sentence "Isogenic strains with the indicated modifications in the genome where spotted as serial dilutions in media with pH=5.5, pH=7.5 or pH=7.5 and containing 3 mM ZnCl2", "where" should be "were".

      This has been corrected.

      Figure 2D: the authors should consider re-coloring these models, as it is challenging to distinguish Rtc5p from the V-ATPase.

      We have changed the coloring of this structure and added a diagram of the V-ATPase structure with the same coloring scheme to improve clarity.

      Reviewer #1 (Significance (Required)):

      The vacuolar protein interaction map alone from this manuscript is a nice contribution to the literature. Experiments establishing colocalization of Rtc5p to the vacuole are convincing, as is the dependence of this association on the presence of assembled V-ATPase. Similarly, experiments related to myristoylation are convincing. The observed mislocalization of V-ATPases that contain Stv1p to the vacuole (which is also known to occur when Vph1p has been knocked out) upon knockout of Oxr1p is also extremely interesting. Overall, this is an interesting manuscript that contributes to our understanding of TLDc proteins.

      We are thankful to the reviewer for their appreciation of the significance of our work, including the interactome map of the vacuole as a resource and the advances on the understanding of the regulation of the V-ATPase by TLDc domain-containing proteins.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      Major points:

      1. The evidence of Oxr1 and Rtc5 as V-ATPase disassembly factors is circumstantial. The authors base their interpretation primarily on increased V1 (but not Vo) at purified vacuoles from Oxr1- or Rtc5-deleted strains, which does not directly address disassembly. Of course, the results regarding Oxr1 confirm detailed disassembly experiments with the purified protein complex (PMID 34918374), but on their own are open to other interpretations, e.g. suppression of V-ATPase assembly. Of note, the authors emphasize that they provide first evidence of the in vivo role of Oxr1, but monitor V1 recruitment with purified vacuoles and do not follow V-ATPase assembly in intact cells.

      We are thankful to the reviewer for pointing this out. We did not want to express that the molecular activity of the proteins is the disassembly of the complex, as our analyses include in vivo and ex vivo experiments and do not directly address this. We rather meant that both proteins promote an in vivo state of lower assembly of the V-ATPase. We have modified the wording throughout the manuscript to be clearer about this.

      In addition, we have added new experiments to monitor V-ATPase assembly in intact cells, as suggested by the reviewer. Previous work has shown that in yeast, only subunit C leaves the vacuole membrane under conditions that promote disassembly, while the other subunits remain at the vacuole membrane (Tabke et al 2014). Our own experiments agree with what was published (Figure 3 D). We have thus monitored Vma5 localization to the vacuole in glucose or after a shift to galactose-containing media in cells lacking or overexpressing Rtc5 or Oxr1. We observed that cells overexpressing either TLDc domain protein show lower levels of Vma5 recruitment to the vacuole in glucose (Figure 6 D and E). Additionally, cells lacking either Rtc5 or Oxr1 contain higher levels of Vma5 at the vacuole after 20 minutes in galactose medium (Figure 5 F and G). Thus, these results reinforce our conclusions that Rtc5 and Oxr1 promote states of lower assembly.

      Oxr1 and Rtc5 have very low sequence similarity. It would be helpful if the authors provided more detail on the predicted structure of the putative TLDc domain of Rtc5 and its relationship to the V-ATPase - Oxr1 structure. Is Rtc5 more closely related to established TLDc domain proteins in other organisms?

      We have now included a diagram of the domain architecture of Rtc5 and Oxr1, and comparison to the features of other TLDc domain containing proteins in Figure 5 A, as well as a paragraph describing them:

      “Rtc5 is a 567-residue-long protein. Analysis of the protein using HHPred (Zimmermann et al., 2018) finds homology to the structure of porcine Meak7 (PDB ID: 7U8O, (Zi Tan et al., 2022)) over the whole protein sequence (residues 37-559). For both yeast Rtc5 and human Meak7 (Uniprot ID: Q6P9B6), HHPred detects homology of the C-terminal region to other TLDc domain containing proteins like yeast Oxr1 (PDB ID: 7FDE), Drosophila melanogaster Skywalker (PDB ID: 6R82), and human NCOA7 (PDB ID: 7OBP), while the N-terminus has similarity to EF-hand domain calcium-binding proteins (PDB IDs: 1EG3, 2CT9, 1S6C6, Figure 5A). HHPred analysis of the 273-residue-long Saccharomyces cerevisiae Oxr1, on the other hand, only detects similarity to TLDc domain containing proteins (PDB IDs: 7U8O, 6R82, 7OBP), which spans the majority of the sequence of the protein (residues 71-273). The overall sequence identity between Oxr1 and Rtc5 is 24% according to a ClustalOmega alignment within Uniprot. The Alphafold model that we generated for Rtc5 is in good agreement with the available partial structure of Oxr1 (7FDE) (root mean square deviation (RMSD) of 3.509 Å) (Figure 5 - S1 A), indicating they are structurally very similar in the region of the TLDc domain. Taken together, these analyses suggest that Oxr1 belongs to a subfamily of TLDc domain-containing proteins consisting mainly of just this domain like the splice variants Oxr1-C or NCOA7-B in humans (NP_001185464 and NP_001186551, respectively), while Rtc5 belongs to a subfamily containing an additional N-terminal EF-hand-like domain and an N-myristoylation sequence, like human Meak7 (Finelli & Oliver, 2017) (Figure 5 A).”

      The authors conclude vacuolar recruitment of Rtc5 depends on the assembled V-ATPase, based on deletion of different V1 and Vo domain subunits. However, these genetic manipulations likely cause a strong perturbation of vacuolar acidification; indeed, the images show drastically altered vacuolar morphology. To strengthen their conclusion, it would be helpful to show that Rtc5 recruitment is not blocked by inhibition of vacuolar acidification, and that conversely it is blocked by deletion of rav1.

      We are thankful to the reviewer for this insightful suggestion, and we have now performed both experiments suggested. The experiment regarding rav1Δ is now Figure 3C, and we observed that this mutation also disrupts Rtc5 localization to the vacuole. In addition, we decided to include an experiment showing the subcellular localization of Rtc5 after shifting the cells to galactose-containing medium for 20 minutes, as a physiologically relevant condition that results in disassembly of the complex (Figure 3D). We observed that under these conditions Rtc5 re-localizes to the cytosol. This result is particularly interesting given that in yeast only subunit C (but not other V1 subunits) re-localizes to the cytosol under these conditions. In addition, the experiment using Bafilomycin A to inhibit the V-ATPase shows that Rtc5 is still localized at the vacuole membrane under conditions of V-ATPase inhibition (Figure 3 F). Taken together, these results allowed us to strengthen our original interpretation that Rtc5 requires an assembled V-ATPase for its localization, and to extend it: the V-ATPase does not need to be active.

      Reviewer #2 (Significance (Required)):

      This is an interesting paper that confirms and extends previous findings on TLDc domain proteins as a novel class of proteins that interact with and regulate the V-ATPase in eukaryotes. The title seems to exaggerate the findings a bit, as the authors do not investigate V-ATPase (dis)assembly directly and only phenotypically describe altered subcellular localization of the Golgi V-ATPase in Oxr1-deleted cells. A recent structural and biochemical characterization of Oxr1 as a V-ATPase disassembly factor (PMID 34918374) somewhat limits the novelty of the results, but the function of Oxr1 in regulating subcellular V-ATPase localization and the identification of a second potential TLDc domain protein in yeast provide relevant insights into V-ATPase regulation. This paper will be of interest to cell biologists and biochemists working on lysosomal biology, organelle proteomics and V-ATPase regulation.

      We thank the reviewer for the assessment of our work, and for recognizing the novel insights that we provide. Regarding the previous biochemical work on Oxr1 and the V-ATPase, we have clearly cited this work in the manuscript. In our opinion, our results complement and extend this article, showing that the function in disassembly is relevant in vivo. Additionally, this is only one of five major points of the article, the other four being:

      • The interactome map of the vacuole as a resource
      • The identification of Rtc5 as a second yeast TLDc domain containing protein and interactor of the V-ATPase.
      • The identification of the role of Rtc5 in V-ATPase assembly.
      • The identification of the role of Oxr1 in Stv1 subcellular localization.

      We believe these additional points add important insights for researchers interested in lysosomes, the V-ATPase, intracellular trafficking and TLDc domain-containing proteins.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Major comments

      1) Re: A cross-linking mass spectrometry map of vacuolar protein interactions (results)

      While XL-MS is a very powerful method, it is a high-throughput approach and there should be some kind of negative control in these experiments. In cross-linking experiments, non-cross-linked samples are usually used as negative controls. What was the negative control in cross-linking mass-spectrometry experiments here? If there was no negative control, how was the specificity of interactions evaluated? Maybe the authors analyzed the dataset for highly improbable interactions and found very few of them?

      We fully agree that it is crucial to ensure the specificity of the interactions detected by XL-MS. To achieve this, one needs to control (1) the specificity of the data analysis (i.e. that the recorded mass spectrometry data are correctly matched to cross-linked peptides from the sequence database) and (2) the biological specificity (i.e. that the cross-linking captured natively occurring interactions).

      To ascertain that criterion (1) is met, cross-link identifications are filtered to a pre-defined false-discovery rate (FDR) – an approach that the XL-MS field adopted from mass spectrometry-based proteomics. As a result, low-confidence identifications (e.g. cross-linked peptides that are only supported by a few signals in a given mass spectrum) are removed from the dataset. FDR filtering in XL-MS is a rather complex matter as it can be done at different points during data analysis and the optimal FDR cut-off depends on the specific scientific question at hand (for more details see for example Fischer and Rappsilber, Anal Chem, 2017). Generally speaking, an overly restrictive FDR cut-off would remove a lot of correct identifications, thereby greatly limiting the sensitivity of the analysis. On the other hand, an overly relaxed FDR cut-off would dilute the correct identifications with a high number of false-positives, which would impair the robustness and specificity of the dataset. While many XL-MS studies control the FDR on the level of individual spectrum matches, we opted for a 2% FDR cut-off on the level of unique residue pairs, which is more stringent (see Fischer and Rappsilber, Anal Chem, 2017). Our FDR parameters are described in the Methods section (Cross-linking mass spectrometry of isolated vacuoles - Data analysis). Of note, we have made all raw mass spectrometry data publicly available through the PRIDE repository (https://www.ebi.ac.uk/pride/; accession code PXD046792; login details during peer review: Username = reviewer_pxd046792@ebi.ac.uk, Password = q1645lTP). This will allow other researchers to re-analyze our data with the data analysis settings of their choice in the future.
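
      As a rough illustration of residue-pair-level FDR filtering, the following Python sketch applies a generic target-decoy cut-off. It is a simplified stand-in for didactic purposes, not the authors' actual pipeline; real XL-MS tools also distinguish target-decoy from decoy-decoy matches and can control FDR at several levels.

          def filter_at_fdr(residue_pairs, fdr_cutoff=0.02):
              # residue_pairs: list of (score, is_decoy) tuples, one per unique
              # residue pair. Returns the target pairs within the largest
              # high-scoring prefix whose estimated FDR (decoys/targets) is
              # still below the cutoff.
              ranked = sorted(residue_pairs, key=lambda p: p[0], reverse=True)
              targets = decoys = best_prefix = 0
              for i, (_, is_decoy) in enumerate(ranked, start=1):
                  decoys += 1 if is_decoy else 0
                  targets += 0 if is_decoy else 1
                  if targets > 0 and decoys / targets <= fdr_cutoff:
                      best_prefix = i
              return [pair for pair in ranked[:best_prefix] if not pair[1]]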

      To ascertain that criterion (2) is met, we mapped the identified cross-links onto existing high-resolution structures of vacuolar protein complexes. Taking into account the length of our cross-linking reagent, the side-chain length of the cross-linkable amino acids (i.e. lysines), and a certain degree of in-solution flexibility, cross-links can reasonably occur between lysines with a mutual Cα-Cα distance of up to 35 Å. Using this cut-off, the lysine-lysine pairs in the high-resolution structures we studied can be split into possible cross-linking partners (Cα-Cα distance ≤ 35 Å) and unlikely cross-linking partners (Cα-Cα distance > 35 Å). Of all cross-links we could map onto high-resolution structures, 95.2% occurred between possible cross-linking partners. In addition, our cross-links reflect numerous known vacuolar protein interactions that have not yet been structurally characterized. These lines of evidence increase our confidence that our XL-MS approach captured genuine, natively occurring interactions. These analyses are described in more detail in the first Results sub-section (“A cross-linking mass spectrometry map of vacuolar protein interactions”).
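
      For illustration, the distance check described here can be reproduced in a few lines of Biopython; this is a hypothetical re-implementation (the distances in the manuscript were measured with PyMOL), and the input file name is a placeholder:

          from itertools import combinations
          from Bio.PDB import PDBParser

          parser = PDBParser(QUIET=True)
          structure = parser.get_structure("complex", "complex.pdb")  # placeholder file

          # Collect the Cα atom of every lysine in the first model.
          lysine_ca = [
              (chain.id, res.id[1], res["CA"])
              for chain in structure[0]
              for res in chain
              if res.get_resname() == "LYS" and "CA" in res
          ]

          # A lysine pair is a plausible cross-link if its Cα-Cα distance is <= 35 Å.
          for (c1, r1, ca1), (c2, r2, ca2) in combinations(lysine_ca, 2):
              distance = ca1 - ca2  # Bio.PDB atoms subtract to an inter-atomic distance
              if distance <= 35.0:
                  print(f"K{r1}({c1}) - K{r2}({c2}): {distance:.1f} Å")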

      In addition, the high purity of vacuole preparation is critical. How was it assessed by the authors?

      We disagree that the purity of the vacuole preparation is critical for this analysis to be valid. The accuracy of the protein-protein interactions detected will depend on their preservation during sample preparation until the sample encounters the cross-linker, and on the data analysis, as described above. The experiment would have been equally valid if performed on whole cell lysates without any enrichment of vacuoles, but the coverage of vacuolar proteins would have likely been very low. For this reason, we decided to use the vacuole isolation procedure to obtain better coverage of the proteins of this particular organelle. The use of the Ficoll gradient protocol (Haas, 1995) was based on the fact that it yields strong enrichment of proteins annotated with the GO Term “vacuole” (Eising et al, 2019) and that it preserves the functionality of the organelle, as evidenced by its use for multiple functional assays (vacuole-vacuole fusion (Haas, 1995), autophagosome-vacuole fusion (Gao et al, 2018), polyphosphate synthesis by the VTC complex (Desfougères et al, 2016), among others).

      2) Re: Rtc5 and Oxr1 counteract the function of the RAVE complex (results)

      Taken together, the data presented in this section of the manuscript provide strong evidence that Rtc5 and Oxr1 negatively regulate V-ATPase activity, counteracting the V-ATPase assembly facilitated by the activity of the RAVE complex. However, the complete deletion of the major RAVE subunit Rav1p was required to observe this effect in vivo in yeast. The other way to induce V-ATPase disassembly in yeast is glucose deprivation. It will be interesting to study if there is a synergistic effect between glucose deprivation and RTC5/OXR1 deletion on V-ATPase assembly, vacuolar pH, and growth of single oxr1Δ, rtc5Δ or double oxr1Δrtc5Δ mutants (OPTIONAL). Glucose deprivation is a more physiologically relevant condition than a deletion of an entire gene.

      We would like to point out that an effect on assembly is observed without deleting the RAVE complex: deletions of Oxr1 or Rtc5 resulted in increased V-ATPase assembly in vivo in the presence of glucose and of the RAVE complex (Figures 5 D and E). We have now also added the experiments showing that the overexpression strains have a mild growth defect under conditions that force cells to strongly rely on V-ATPase activity (Figures 6 A and C).

      Nevertheless, we agree that addressing the effect of changing the levels of Oxr1 and Rtc5 under low-glucose conditions is an interesting physiologically relevant question. We have now included growth assays and BCECF staining in medium containing galactose as the carbon source (Figure 5 – Supplement 1 B and C, Figure 6 C, and Figure 6 – Supplement 1 A). In addition, we have addressed the vacuolar localization of Vma5 in medium containing glucose or after shifting to medium containing galactose for 20 minutes, as a proxy for V-ATPase disassembly in intact cells (Figure 5 F and G, Figure 6 D and E). Taken together, these analyses reinforce our conclusions that both Rtc5 and Oxr1 promote an in vivo state of lower V-ATPase assembly, based on the following observations:

      • Higher localization of Vma5 to the vacuole after 20 mins in galactose in cells lacking Oxr1 or Rtc5 (Figure 5 F and G).
      • Lower localization of Vma5 to the vacuole in medium containing glucose in cells overexpressing Oxr1 or Rtc5 (Figure 6 D and E).
      • Growth defect of the strain overexpressing Oxr1 in medium containing galactose with pH = 7.5 and zinc chloride, with a further growth defect caused by additional overexpression of Rtc5 (Figure 6 C).

      3) Re: Figure 6 - supplement 1. The title is relevant to panel D only; it should be renamed to reflect the results of the disassembly of V-ATPase in rav1Δ mutant strains, while results about the stv1Δ-based strains (Panel D) should be shown together with similar experiments in Figure 7 - supplement 2 for clarity.

      We have shifted the Panel D from the original Figure 6 – Supplement 1 to the main Figure (now Figure 7 – H and I). Regarding the title of the Figure, whether Supplemental Figures have titles or not will depend on the journal where the manuscript is published. For now, we have removed all titles from supplemental figures, as they are conceived to complement the main Figures.

      4) Re: Figure 7 - supplement 1, Panel A. The proper assay to show that Stv1-mNeonGreen is functional is to express it in double mutant vph1Δstv1Δ to see if the growth defect is reversed. In addition, the vph1Δ growth defect is not changed (improved or worsened) in the presence of Stv1-mNeonGreen, so it means that the expression of Stv1-mNeonGreen does not further compromise the V-ATPase function, but it does not mean that it improves its function.

      It appears from the suggested experiment that the reviewer assumed we expressed Stv1-mNeonGreen from a plasmid. This was not the case: Stv1 was C-terminally tagged with mNeonGreen in the genome and is thus the only expressed version in the strain. The experiment we performed is therefore equivalent to the one suggested by the reviewer, but for genomically expressed variants. For reference, the genotypes of all the strains used can be found in Supplemental Table 1.

      5) Re: Figure 7 - supplement 2. This figure should be combined with Fig. 6- suppl 1, panel D as also mentioned above. The figure seems to lack some labels, and conclusions are not accurate as discussed below. However, this data provides important additional information about relationships between isoform-specific subunits of V-ATPase Vph1 and Stv1 and both Rtc5 and Oxr1 and should be repeated if it is not done yet to have a better idea about these relationships.

      Panel B: Based on this picture, deletion of RTC5 has a negative genetic interaction with the deletion of VPH1, since double deletion mutant vph1Δ rtc5Δ grows worse than each individual mutant. Although it also means that there is no positive interaction, it is not the same.

      Indeed, there is a negative genetic interaction between the deletion of RTC5 and VPH1. We have replaced the growth tests in this figure (Figure 8 – Supplement 2 A in the new manuscript) to show this negative genetic interaction better. This effect is reproducible, as shown in the repetitions of the experiments.

      Panel C: Same as for panel B. Based on this picture, the deletion of OXR1 has a weak negative genetic interaction with the deletion of STV1, since double deletion mutant stv1Δ oxr1Δ grows worse than each individual mutant at 6 mM ZnCl2.

      Panel D: Same as for panels B and C. Based on this picture, deletion of RTC5 has a negative genetic interaction with the deletion of STV1, since double deletion mutant stv1Δ rtc5Δ grows worse than each individual mutant at 6 mM ZnCl2. There is no label in the middle panel (growth conditions) and no growth assay data in the presence of CaCl2.

      However, these results will be then in contradiction with the results from Figure 6 - Supplement 1, panel D, showing negative genetic interaction between the overexpression of Rtc5 or Oxr1 and deletion of Stv1, since both deletion and overexpression of Rtc5 or Oxr1 would have negative genetic interactions with Stv1.

      For both Panels C and D (now Figure 8 - Supplement 2 B and C), the effect pointed out by the reviewer (a slightly stronger growth defect for the double mutants than for the single mutants) is very mild. We attempted to make it more evident by assessing growth in medium with higher and lower concentrations of zinc, but this was not possible. This is in contrast with the very clear positive genetic interaction that we observe between the deletion of OXR1 and VPH1 (now Figure 8 H). This is why we decided to report the lack of a positive genetic interaction instead of the presence of a negative one, as we do not want to draw conclusions from results that are barely detectable.

      In addition, there is no label for the media in the middle panel, is it just YPAD pH=7.5, without the addition of any metals?

      Indeed, the media is YPAD pH=7.5, without the addition of any metals. The line drawn above several images was meant to indicate that they shared this medium. Since this form of labeling appears to be confusing, we have now replaced it and placed the label directly above the image.

      Why there is no growth assay in the presence of CaCl2, like in panels A and B?

      Every growth test shown in the manuscript was performed including growth in YPD pH=5.5 as a control of a permissive condition for lack of V-ATPase activity, and then in YPD pH=7.5 including a broad range of zinc chloride and calcium chloride concentrations. From all these pictures, the conditions where the differences among strains were clearly visible were chosen to assemble the figures. Conditions that did not provide any information for that particular experiment were not included in the figure to avoid making them unnecessarily large and crowded.

      Re: Figure 7 - supplement 2, continued. How many times were all these experiments repeated? These experiments should be repeated at least 3 times, which is especially necessary for the experiments in panel C, because the effects are borderline. If results are reproducible and statistically significant, although small, the conclusion should be changed from "no positive genetic interactions" to "negative genetic interactions", which is more precise and informative.

      All growth tests shown in the manuscript were repeated at least three times for the conditions shown. We are thankful to the reviewer for pointing out that this was not mentioned, and we have added this to the methods section. We have assembled a file with all repetitions of the shown growth tests and appended it at the end of this file, so these are already publicly available. These repetitions show that all effects reported are reproducible. We will discuss with the editors of the journal where this manuscript is published whether it should be included with the final article.

      Regarding reporting the lack of a positive genetic interaction vs. a negative one, we have discussed this above. Briefly, for Panel B (Figure 8 – Supplement 2 A in the new manuscript) we have changed the conclusion to “negative genetic interaction”, as adjusting the zinc chloride concentration allowed us to show this clearly and reproducibly, as shown by the repetitions of the experiments. For panels C and D (now Figure 8 - Supplement 2 B and C), the effect is really mild and barely detectable, even when we tried a wide range of zinc chloride concentrations. For this reason, we would prefer to maintain the “no positive genetic interaction” conclusion.

      Re: Methods. There is no description of the yeast serial dilution growth assay at all. In addition, why the specific media (neutral pH, in the presence of high concentrations of calcium or zinc) were used is not explained either in the results or methods. Appropriate references should be included, for example, PMID: 2139726, PMID: 1491236.

      We apologize for the oversight of the missing methods section, which we have now included.

      Regarding the explanation of the media used, the following section was already a part of the results section, before the description of the first growth test:

      “The V-ATPase is not essential for viability in yeast cells, and mutants lacking subunits of this complex grow similarly to a wt strain in acidic media. However, when cells grow at near-neutral pH or in the presence of divalent cations such as calcium and zinc, the mutants lacking V-ATPase function show a strong growth impairment (Kane et al, 2006).”

      We have now replaced this with the following, more complete version:

      “As a first approach for addressing the role of these proteins, we tested growth phenotypes related to V-ATPase function in strains lacking or overexpressing them. The V-ATPase is not essential for viability in yeast cells, and mutants lacking subunits of this complex grow similarly to a wt strain in acidic media but display a growth defect at near-neutral pH (Nelson & Nelson, 1990). In addition, the proton gradient across the vacuole membrane generated by the V-ATPase energizes the pumping of metals into the vacuole, as a mechanism of detoxification. Thus, increasing concentrations of divalent cations such as calcium and zinc generate conditions in which growth is increasingly reliant on V-ATPase activity (Förster & Kane, 2000; MacDiarmid et al, 2002; Kane, 2006).”


      MINOR COMMENTS

      Yeast proteins are named with "p" at the end, such as "Rtc5p".

      This nomenclature rule has been falling into disuse over the last decades, as the use of capitals vs. lowercase and italics allows one to distinguish between genes, proteins and strains (OXR1 = gene, Oxr1 = protein, oxr1Δ = strain). As an example, we include a list of the latest papers by some of the major yeast labs around the world, all of which use the same nomenclature as we do (in alphabetical order). This list even includes some work in the field of the V-ATPase.

      • Alexey Merz, USA. PMID: 33225520
      • Benoit Kornmann, UK. PMID: 35654841
      • Christian Ungermann, Germany. PMID: 37463208
      • Claudio de Virgilio, Switzerland. PMID: 36749016
      • Daniel E. Gottschling, USA. PMID: 37640943
      • David Teis, Austria. PMID: 32744498
      • Elizabeth Conibear, Canada. PMID: 35938928
      • Fulvio Reggiori, Denmark. PMID: 37060997
      • J Christopher Fromme, USA. PMID: 37672345
      • Maya Schuldiner, Israel. PMID: 37073826
      • Patricia Kane, USA. PMID: 36598799
      • Scott Emr, USA. PMID: 35770973
      • W Mike Henne, USA. PMID: 37889293
      • Yoshinori Ohsumi, Japan. PMID: 37917025

      In addition, we would prefer to keep the nomenclature that we already use, to keep consistency with other published articles from our lab.

      Re: Introduction. In the introduction it should be indicated that Rtc5 was originally discovered as a "restriction of telomere capping 5", using screening of temperature-sensitive cdc13-1 mutants combined with the yeast gene deletion collection [PMID: 18845848]. A couple of sentences should be written about the RAVE complex and its role in V-ATPase assembly.

      We are thankful for this suggestion and we have now included both pieces of information in the introduction.

      “The re-assembly of the V1 onto the VO complex when glucose becomes available again is aided by a dedicated chaperone complex known as the RAVE complex, which also likely has a general role in V-ATPase assembly (Seol et al, 2001; Smardon et al, 2002, 2014).”

      “In our cross-linking mass spectrometry interactome map of isolated vacuoles we found that the only other TLDc-domain containing protein of yeast, Rtc5, is a novel interactor of the V-ATPase. Rtc5 is a protein of unknown function, originally described in a genetic screen for genes related to telomere capping (Addinall et al, 2008)”

      Re: The TLDc domain-containing protein of unknown function Rtc5 is a novel interactor of the vacuolar V-ATPase (results)

      1) It is important to understand that Oxr1 was co-purified before with the V1 domain of V-ATPase from a certain mutant strain, not wild-type yeast [PMID: 34918374]. This may explain why the authors did not identify it in their original protein-protein interactions screen here.

      The structural work on the V1 domain bound to Oxr1 (Khan et al, 2022) showed that the binding of Oxr1 prevented V1 from assembling onto the Vo. Since our experiments rely on the purification of vacuoles, they should contain mainly V1 assembled onto the VO, and not the free soluble V1. This is likely the reason that we do not detect Oxr1, in addition to it being less abundant. We have clarified this now in the manuscript and added the fact that Oxr1 was co-purified with a V1 containing a mutant version of the H subunit.

      “In a previous study, Oxr1 was co-purified with a V1 domain containing a mutant version of the H subunit, and its presence prevented the in vitro assembly of this V1 domain onto the VO domain and promoted disassembly of the holocomplex (Khan et al., 2022). This is likely the reason why we do not detect Oxr1 in our experiments, which rely on isolated vacuoles and thus would only include V1 domains that are assembled onto the membrane. In addition, Oxr1 is less abundant in yeast cells than Rtc5 according to the protein abundance database PaxDb (Wang et al, 2015).”

      2) It is a wrong conclusion that because Rtc5 was co-purified with both V1 and V0 domain subunits it interacts with the assembled V-ATPase; this does not exclude the possibility that Rtc5 also interacts with the separate V1 sector or the separate V0 sector of the V-ATPase.

      We agree with the reviewer that the co-purification of Rtc5 with both V1 and VO domain subunits does not necessarily mean that it interacts with the assembled V-ATPase. Thus, we have modified the text in this part to:

      “The fact that we can co-enrich Rtc5 both with Vma2 and with Vph1 indicates that it can interact either with both the VO and V1 domains or with the assembled V-ATPase.”

      However, other results throughout the manuscript can be taken into account to strengthen this idea:

      1. Rtc5 requires an assembled V-ATPase to localize to the vacuole membrane, and thus seems not to interact with free VO domains, which would be available when we delete V1 subunits or in medium containing galactose.
      2. Rtc5 becomes cytosolic in galactose-containing media. This would indicate that it also does not interact with free V1 domains, which are still localized to the vacuole membrane under these conditions.

      Taken together with the pull-downs, these results suggest that Rtc5 interacts with the assembled V1-VO V-ATPase. Thus, we have included the following sentence after Figure 3, which shows the subcellular localization experiments.

      “Taking into account that Rtc5 is co-enriched with subunits of both the VO and V1 domain, and that it localizes at the vacuole membrane dependent on an assembled V-ATPase, we suggest that Rtc5 interacts with the assembled V-ATPase complex.”

      Re: Figure 1, Panel C. Is it possible to show individual proteins in different colors for clarity?

      Panel D. How were cross-link distances measured? It is not obvious if you are not an expert in the field and it is not described in the methods.

      We have modified Figure 1 C and Figure 1 – Supplement 1B (now Figure 1 – Supplement 1 A) to present the different subunits in the structures with different shades of blue and grey.

      Furthermore, we have clarified the distance measurement approach in the methods section and in the legend of Fig 1D: “Cα-Cα distances were determined using the measuring function in PyMOL v.2.5.2 (Schrödinger LLC).”

      Re: Figure 1 - Supplement 1

      Panel A. What scientific information are we getting from this picture?

      This panel was just a visual representation of the complexity of the protein network we obtained. Indeed, there was no specific scientific message, so we have decided to remove this panel from the revised manuscript.

      Panel B. Why are these complexes shown separately from the complexes in Figure 1, panel C? Also, can individual proteins be colored differently here as well?

      We did not want to overload Fig 1C, so we decided to show some of the protein complexes in Fig 1 – Supplement 1B. The most important information is the histogram showing that 95% of the mapped cross-links fall within the expected length range, and this is shown in the main Figure (Figure 1D). As stated above, we have adjusted the subunit coloring in Figure 1 C to improve clarity.

      Re: Figure 3. It will be nice to show the localization of the untagged protein as well if antibodies are available (OPTIONAL).

      Unfortunately, there are no available antibodies for either Rtc5 or Oxr1. This prevents us from detecting the endogenous untagged proteins. We would like to point out that we have been very careful in showing which tagged proteins are functional (C-terminally tagged Rtc5) and which are not (C-terminally tagged Oxr1), so that the reader knows how to interpret the localization data.

      Re: Figure 4. Why were different tags used in panels A (GFP), C (msGFP2) and D (mNeonGreen)?

      In general, we prefer to use mNeonGreen as a tag for microscopy experiments because it is brighter and more stable, and msGFP2 as a tag for experiments involving Western blots because we have better antibodies available. There was a mistake in the labeling: all constructs labeled as GFP were actually msGFP2. We have now corrected this. Of note, we have tested the functionality of both tagged versions (mNeonGreen and msGFP2).

      Panels B and C. Were Rtc5 fusions detected using anti-GFP antibodies?

      Indeed, Rtc5-msGFP2 was detected with an anti-GFP antibody. We have now indicated next to each Western blot membrane the primary antibody used. In addition, all antibodies are detailed in Supplemental Figure 3.

      The authors should have full-size Western blots available, not just cut-out bands, as some journals and reviewers require them for publication.

      For all western blots, we always showed a good portion of the membrane and not cut-out bands. The cropping was performed to avoid making figures unnecessarily large. The whole membranes are of course available and will be included in an “extended data file” if required by the journal.

      Re: Figure 4 - Supplement 1, Panel A. Does "-" and "+" mean -/+ Azido-Myr?

      Indeed. We have now added this label to the figure.

      Panel B. There is no blot with a membrane protein marker (Vam3 or Vac8), it should be included.

      We have replaced this western blot with a different repetition of this experiment in which a membrane protein marker was included. Of note, the two other repetitions of the experiment shown (Figure 4 – Supplement 1 panel C and Figure 4 panel C) also include both a membrane protein marker and a soluble protein marker.

      Re: Figure 5. The title does not describe all results in this figure and should be modified accordingly.

      The original data from Figure 5 is now separated into Figures 5 and 6 because of the additional experiments included during revisions. We have modified the Figure titles to be descriptive of the overall message of the Figures.

      Panel C. Statistical significance value for *** should be indicated in the legend.

      This has been indicated in the Figure legend.

      It is not clear how many times yeast growth assays were repeated. Usually, all experiments should be done in triplicates or more.

      All shown growth tests were performed at least three times for the conditions shown. We have now indicated this in the materials and methods section. In addition, we now provide in this response a file with all repetitions of growth tests, which will be appended to the article if deemed necessary by the editors.

      Re: Figure 5 - supplement 1. No title

      Re: Figure 5 - supplement 2. No title

      Whether the supplemental Figures should have a title or not will depend on the style of the journal where the manuscript is finally published. The current idea of the supplemental Figures is that they complement the corresponding main Figure. For this reason, we have removed all titles from supplemental Figures.

      Re: Figure 6. There is a typo on the second lane in the legend: "...the genome were", not "...the genome where".

      This has been corrected.

      Panel C. Why the analysis of BCECF vacuole staining of double mutants oxr1Δrav1Δ and rtc5Δrav1Δ is not shown? Was it done at all?

      We had not included this piece of data, as we thought that the genetic interaction of RTC5 and OXR1 with rav1Δ was sufficiently well supported by the included data (growth tests in combination with the deletion, growth tests in combination with the overexpression, vacuole proteomics in combination with the overexpression, and BCECF staining in combination with the overexpression). At the reviewer's request, we have now included this experiment as Figure 7 G.

      Re: Figure 6 - Supplement 2. Why were two different tags (2xmNG and msGFP2) used?

      We tried both tags to see if one of them would be functional. Unfortunately, they both resulted in non-functional proteins, as shown by the corresponding growth tests.

      Did the authors study N-terminally tagged Oxr1? Was it functional?

      We have tagged Oxr1 N-terminally, but this unfortunately resulted in a protein that was not completely functional. We show below the localization of N-terminally mNeon-tagged Oxr1 under the control of the TEF1 promoter. The protein appears cytosolic (Panel A) but is not completely functional (Panel B). The localization of Oxr1 had already been misreported in the literature based on a tagged version that we now show to be non-functional. For this reason, we preferred not to include these data in the manuscript, to avoid once more introducing into the literature subcellular localizations that correspond to non-functional or partially functional proteins.

      Panel B. Results for the untagged TEF1pr-Oxr1 overexpression are not shown, thus tagged and untagged proteins can't be compared. Are they available? What is the promoter for the expression of 2xmNG fusion constructs?

      Oxr1-2xmNG was C-terminally tagged in the genome, which means that the promoter is the endogenous one; it was not modified. For this reason, the correct controls are a strain expressing Oxr1 at endogenous levels (the wt strain) and a strain lacking Oxr1. Both controls were included in the Figure, and in all repetitions made of this experiment. For reference, all the genotypes of the strains used are found in Supplemental Table 1.

      Re: Methods. Were vacuoles prepared differently for XL-MS and SILAC-based vacuole proteomics (there are different references) and why? Methods for XL-MS and quantitative SILAC-based proteomics can be placed together for clarity.

      The basis for the method of vacuole purification is the same, from (Haas, 1995). This reference was included in both protocols that include vacuole purifications. However, modifications of this method were performed to fit the crosslinking method (higher pH, no primary amines) or to fit the SILAC labeling (combination of two differentially labeled samples in one purification). The reference for the vacuole proteomics (Eising et al 2022) corresponds to a paper in which the SILAC-based comparison of vacuoles from different mutant strains was optimized, and includes not only the vacuole purification but the growth conditions and downstream processing of the vacuoles.

      Since both the SILAC-based vacuole proteomics and the XL-MS are multi-step methods, containing numerous parameters including the sample preparation, processing for MS, MS run and data analysis, we would prefer to keep them separate. We think this would allow a person attempting to reproduce these methods to go through them step by step.

      What is CMAC dye? Why was it used to stain the vacuolar lumen?

We apologize for this oversight; we have now defined CMAC as 7-amino-4-chloromethylcoumarin. It is a standard organelle marker for the lumen of the vacuole.

      Some abbreviations (TEAB, ACN) are not explained.

      We apologize for this oversight. We have now replaced these abbreviations with the full names of the compounds in the article.

      What is 0% Ficoll?

We used the term 0% Ficoll because this is the name given to the buffer in the original Haas (1995) paper on vacuole purifications. However, we agree that the term is misleading, and we have now added the composition of the buffer (10 mM PIPES/KOH pH 6.8, 0.2 M sorbitol).

      Reviewer #3 (Significance (Required)):

The vacuolar-type proton ATPase, V-ATPase, is the key proton pump that hydrolyzes ATP and uses this energy to pump protons across membranes. Remarkably, this proton pump and its function are conserved in eukaryotes from yeast to mammals. While V-ATPase structure and function have been studied for more than 30 years in various organisms, its regulation is not completely understood. The very recent discoveries of two new V-ATPase-interacting proteins in yeast, first Oxr1 (OXidative Resistance 1) and now Rtc5 (Restriction of Telomere Capping 5), the only two members of the TLDc (Tre2/Bub2/Cdc16 (TBC), lysin motif (LysM), domain catalytic) protein family in yeast, provide new insights into V-ATPase regulation in yeast and, because the interaction is conserved in mammals, into the regulation of mammalian V-ATPases as well.

TLDc proteins are best known for their role in protection from oxidative stress, in particular in yeast and in the mammalian nervous system. The discovery of the novel Rtc5-V-ATPase interaction points to a role for V-ATPase not only in protection from oxidative stress but also in the restriction of telomere capping in yeast, and most likely in higher species. Studies of other species also highlight a possible conserved role of V-ATPase in lifespan determination and Torc1 signaling, mediated through these interactions. Thus, the discovery of this new functionally important interaction between the second TLDc family member in yeast, Rtc5, and V-ATPase will shed light on the molecular mechanisms of all these essential biological processes and pathways.

In addition, because the authors performed a comprehensive proteomics-based protein-protein interaction study of the purified yeast vacuole, the work provides a valuable resource for all researchers who study vacuoles and/or the related lysosomes.

The follow-up functional studies using the rav1Δ strain clearly demonstrated that Rtc5 and Oxr1 promote V-ATPase disassembly and counteract the function of the V-ATPase-assembling RAVE complex in vivo in yeast. Thus, they are essentially the first discovered endogenous eukaryotic protein inhibitors of V-ATPase. Moreover, the authors' evidence that Oxr1 regulates the specific V-ATPase subunit isoform Stv1p in vivo in yeast suggests that different TLDc proteins may regulate specific V-ATPase subunit isoforms in a cell- and tissue-specific manner in higher eukaryotes. The mechanism of this isoform-specific regulation in yeast and other species needs further investigation in the future.

      Because of the conservation of the TLDc-V-ATPase interactions, all this information can be extrapolated to higher species, all the way to humans, in whom genetic mutations in various TLDc proteins are known to cause devastating diseases and syndromes.

      We are thankful to the reviewer for their positive comments about the significance of our work.

    1. Others could reply to the previous set of messages in the thread.

I remember listening to my cousins talk about having to input code to talk to friends, and that it was normal to learn multiple programming languages just to operate a computer. I am grateful that, as Gen Z growing up in the 2000s, we were not required to learn code, but this will always intrigue me.

    1. Author Response

      The following is the authors’ response to the original reviews.

Thank you for organizing the reviews for our manuscript, "Behavioral entrainment to rhythmic auditory stimulation can be modulated by tACS depending on the electrical stimulation field properties," and for the positive eLife assessment. We also thank the reviewers for their constructive comments. We have addressed every comment, which has helped to improve the transparency and readability of the manuscript. The main changes to the manuscript are summarized as follows:

      1. Surrogate distributions were created for each participant and session to estimate the effect of tACS-phase lag on behavioral entrainment to the sound that could have occurred by chance or because of our analysis method (R1). The actual tACS-amplitude effects were normalized relative to the surrogate distribution, and statistical analysis was performed on the normalized (z-score) values. This analysis did not change our main outcome: that tACS modulates behavioral entrainment to the sound depending on the phase lag between the auditory and the electrical signals. This analysis has now been incorporated into the Results section and in Fig. 3c-d.

      2. Two additional supplemental figures were created to include the single-participant data related to Fig. 3b and 3e (R2).

      3. Additional editing of the manuscript has been performed to improve the readability.

      Below, you will find a point-by-point response to the reviewers’ comments.

      Reviewer #1 (Public Review):

      We are grateful for the reviewer’s positive assessment of the potential impact of our study. The reviewer’s primary concerns were 1) the tACS lag effects reported in the manuscript might be noise because of the realignment procedure, and 2) no multiple comparisons correction was conducted in the model comparison procedure.

In response to point 1), we have reanalyzed the data in exactly the manner prescribed by the reviewer. Our effects remain, and the new control analysis strengthens the manuscript. 2) In the context of model comparison, the model selection procedure was not based on evaluating the statistical significance of any model or predictor. Instead, the single model that best fit the data was selected as the model with the lowest Akaike's information criterion (AIC), and its superiority relative to the second-best model was corroborated using the likelihood ratio test. Only the best model was evaluated for significance and analyzed in terms of its predictors and interactions. This model is an omnibus test and does not require multiple-comparison correction unless there are post hoc decompositions. For similar approaches, see (Kasten et al., 2019).
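For readers who want this selection logic spelled out, below is a minimal sketch of the two-step procedure (lowest-AIC selection, then a likelihood ratio test against the nested alternative). The data frame and model formulas are hypothetical placeholders, not our actual variables or analysis code.

```python
# Minimal sketch: AIC-based model selection followed by a likelihood
# ratio test. Data and formulas are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "fm_amplitude": rng.normal(size=40),
    "efield_strength": rng.normal(size=40),
    "focality": rng.normal(size=40),
})

# Candidate models: a restricted model nested within a more complex one.
restricted = smf.ols("fm_amplitude ~ efield_strength", data=df).fit()
unrestricted = smf.ols("fm_amplitude ~ efield_strength * focality", data=df).fit()

# Step 1: choose the candidate with the lowest AIC.
print("AIC restricted:", restricted.aic, "| AIC unrestricted:", unrestricted.aic)

# Step 2: corroborate with a likelihood ratio test (chi-squared under H0).
lr_stat = 2 * (unrestricted.llf - restricted.llf)
df_diff = unrestricted.df_model - restricted.df_model
print("LR p-value:", stats.chi2.sf(lr_stat, df_diff))
```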

      Below, we have responded to each comment specifically or referred to this general comment.

      Summary of what the authors were trying to achieve.

      This paper studies the possible effects of tACS on the detection of silence gaps in an FM-modulated noise stimulus. Both FM modulation of the sound and the tACS are at 2Hz, and the phase of the two is varied to determine possible interactions between the auditory and electric stimulation. Additionally, two different electrode montages are used to determine if variation in electric field distribution across the brain may be related to the effects of tACS on behavioral performance in individual subjects.

      Major strengths and weaknesses of the methods and results.

      The study appears to be well-powered to detect modulation of behavioral performance with N=42 subjects. There is a clear and reproducible modulation of behavioral effects with the phase of the FM sound modulation. The study was also well designed, combining fMRI, current flow modeling, montage optimization targeting, and behavioral analysis. A particular merit of this study is to have repeated the sessions for most subjects in order to test repeat-reliability, which is so often missing in human experiments. The results and methods are generally well-described and well-conceived. The portion of the analysis related to behavior alone is excellent. The analysis of the tACS results is also generally well described, candidly highlighting how variable results are across subjects and sessions. The figures are all of high quality and clear. One weakness of the experimental design is that no effort was made to control for sensation effects. tACS at 2Hz causes prominent skin sensations which could have interacted with auditory perception and thus, detection performance.

      The reviewer is right that we did not control for the sensation effects in our paradigm. We asked the participants to rate the strength of the perceived stimulation after each run. However, this information was used only to assess the safety and tolerability of the stimulation protocol. Nevertheless, we did not consider controlling for skin sensations necessary given the within-participant nature of our design (all participants experienced all six tACS–audio phase lag conditions, which were identical in their potential to cause physical sensations; the only difference between conditions was related to the timing of the auditory stimulus). That is, while the reviewer is right that 2-Hz tACS can indeed induce skin sensation under the electrodes, in this study, we report the effects that depend on the tACS-phase lag relative to the FM-stimulus. Note that the starting phase of the FM-stimulus was randomized across trials within each block (all six tACS audio lags were presented in each block of stimulation). We have no reason to expect the skin sensation to change with the tACS-audio lag from trial to trial, and therefore do not consider this to be a confound in our design. We have added some sentences with this information to the Discussion section:

      Pages 16-17, lines 497-504: “Note that we did not control for the skin sensation induced by 2-Hz tACS in this experiment. Participants rated the strength of the perceived stimulation after each run. However, this information was used only to assess the safety and tolerability of the stimulation protocol. It is in principle possible that skin sensation would depend on tACS phase itself. However, in this study, we report effects that depend on the relationship between tACS-phase and FM-stimulus phase, which changed from trial to trial as the starting phase of the FM-stimulus was randomized across trials. We have no reason to expect the skin sensation to change with the tACS-audio lag and therefore do not consider this to be a confound in our data.”

      Appraisal of whether the authors achieved their aims, and whether the results support their conclusions.

      Unfortunately, the main effects described for tACS are encumbered by a lack of clarity in the analysis. It does appear that the tACS effects reported here could be an artifact of the analysis approach. Without further clarification, the main findings on the tACS effects may not be supported by the data.

      Likely impact of the work on the field, and the utility of the methods and data to the community.

      The central claim is that tACS modulates behavioral detection performance across the 0.5s cycle of stimulation. However, neither the phase nor the strength of this effect reproduces across subjects or sessions. Some of these individual variations may be explainable by individual current distribution. If these results hold, they could be of interest to investigators in the tACS field.

      The additional context you think would help readers interpret or understand the significance of the work.

      The following are more detailed comments on specific sections of the paper, including details on the concerns with the statistical analysis of the tACS effects.

      The introduction is well-balanced, discussing the promise and limitations of previous results with tACS. The objectives are well-defined.

      The analysis surrounding behavioral performance and its dependence on the phase of the FM modulation (Figure 3) is masterfully executed and explained. It appears that it reproduces previous studies and points to a very robust behavioral task that may be of use in other studies.

      Again, we would like to thank the reviewer for the positive assessment of the potential impact of our work and for the thoughtful comments regarding the methodology. For readability in our responses, we have numbered the comments below.

      1. There is a definition of tACS(+) vs tACS(-) based on the relative phase of tACS that may be problematic for the subsequent analysis of Figures 4 and 5. It seems that phase 0 is adjusted to each subject/session. For argument's sake, let's assume the curves in Fig. 3E are random fluctuations. Then aligning them to best-fitting cosine will trivially generate a FM-amplitude fluctuation with cosine shape as shown in Fig. 4a. Selecting the positive and negative phase of that will trivially be larger and smaller than a sham, respectively, as shown in Fig 4b. If this is correct, and the authors would like to keep this way of showing results, then one would need to demonstrate that this difference is larger than expected by chance. Perhaps one could randomize the 6 phase bins in each subject/session and execute the same process (fit a cosine to curves 3e, realign as in 4a, and summarize as in 4b). That will give a distribution under the Null, which may be used to determine if the contrast currently shown in 4b is indeed statistically significant.

We agree with the reviewer's concerns regarding the possible bias induced by the realignment procedure used to estimate tACS effects. Certainly, when adjusting phase 0 to each participant/session's best tACS phase (peak of the fitted cosine), selecting the positive phase of the realigned data will be trivially larger than sham (Fig. 4a). This is why the realigned zero-phase and opposite-phase (trough) bins were excluded from the analysis in Fig. 4b. Therefore, tACS(+) vs. tACS(-) do not represent behavioral entrainment at the peak positive and negative tACS lags, as both bins were already removed from the analysis. tACS(+) and tACS(-) are the averages of the two bins adjacent to the positive and negative peaks, respectively (Zoefel et al., 2019). Such an analysis relies on the idea that if the effect of tACS is sinusoidal, presenting the auditory stimulus during the positive half cycle should yield different performance than presenting it during the negative half cycle. If the effect of tACS were just random noise fluctuations, there is no reason to assume that such fluctuations would be sinusoidal; therefore, any bias in estimating the effect of tACS should be removed by excluding the peak to which the individual data were realigned. Similar analytical procedures have been used previously in the literature (Riecke et al., 2015; Riecke et al., 2018). We have modified the colors in Fig. 4a and 4c (former 4b) and added a new panel to the figure (new 4b) to make the realignment procedure, including the exclusion of the realigned peak and trough data, more visually obvious.
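To make the realignment and bin-exclusion logic concrete, here is a minimal sketch with synthetic data. It assumes six tACS phase-lag bins; the cosine fit is implemented via the first Fourier component, one of several equivalent ways to estimate the optimal lag.

```python
# Sketch of the realignment procedure: fit a cosine across 6 tACS-lag
# bins, realign to the best bin, drop the realigned peak and trough,
# and average the bins adjacent to each. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_bins = 6
lags = np.arange(n_bins) * 2 * np.pi / n_bins  # tACS-audio phase lags (rad)
fm_amplitude = rng.normal(1.0, 0.2, n_bins)    # entrainment strength per lag

# First Fourier component: its angle is the phase of the best-fitting cosine.
coef = np.sum(fm_amplitude * np.exp(1j * lags))
peak_phase = np.angle(coef) % (2 * np.pi)
optimal_bin = int(np.round(peak_phase / (2 * np.pi / n_bins))) % n_bins

# Realign so the optimal bin sits at phase 0.
realigned = np.roll(fm_amplitude, -optimal_bin)

# Exclude the realigned peak (bin 0) and trough (bin 3); average the
# flanking bins to obtain tACS(+) and tACS(-).
tacs_pos = (realigned[1] + realigned[n_bins - 1]) / 2                      # flank the peak
tacs_neg = (realigned[n_bins // 2 - 1] + realigned[n_bins // 2 + 1]) / 2   # flank the trough
print(f"tACS(+) = {tacs_pos:.3f}, tACS(-) = {tacs_neg:.3f}")
```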

Moreover, we very much like the reviewer's suggestion to normalize the magnitude of the tACS effect using a permutation strategy. We performed additional analyses to normalize our tACS effect in Fig. 4c by the probability of obtaining the effect by chance. For each subject and session, tACS-phase lags were randomized across trials for a total of 1000 iterations. For each iteration, the gaps were binned by the FM-stimulus phase and tACS-lag. For each tACS-lag, the amplitude of behavioral entrainment to the FM-stimulus was estimated (FM-amplitude), as shown in Fig. 3. Similar to the original data, a second cosine fit was estimated for the FM-amplitude by tACS-lag. Optimal tACS-phase was estimated from the cosine fit and FM-amplitude values were realigned. Again, the realigned phase 0 and trough were removed from the analysis, and their adjacent bins were averaged to obtain the FM-amplitude at tACS(+) and tACS(-), as shown in Fig. 4c. We then computed the difference between 1) tACS(+) and sham, 2) tACS(-) and sham, and 3) tACS(+) and tACS(-), for the original data and the permuted datasets. This procedure was performed for each participant and session to estimate the size of the tACS effect for the original and surrogate data. The original tACS effects were transformed to z-scores using the surrogate distributions, providing us with an estimate of the size of the real effect relative to chance. We then computed one-sample t-tests to test whether the effects of tACS were statistically significant. In fact, this analysis showed that the tACS effects were still statistically significant. This analysis has been added to the Results and Methods sections and is included in Figure 4d.

Page 10, lines 282-297: “In order to further investigate whether the observed tACS effect was significantly larger than chance and not an artifact of our analysis procedure (33), we created 1000 surrogate datasets per participant and session by permuting the tACS lag designation across trials. The same binning procedure, realignment, and cosine fits were applied to each surrogate dataset as for the original data. This yielded a surrogate distribution of tACS(+) and tACS(-) values for each participant and session. These values were averaged across sessions since the original analysis did not show a main effect of session. We then computed the difference between tACS(+) and sham, tACS(-) and sham, and tACS(+) and tACS(-), separately for the original and surrogate datasets. The differences obtained for the original data were then z-scored using the mean and standard deviation of the surrogate distribution. Note that in this case we used data from all 42 participants who had at least one valid session (37 participants with both sessions). Three one-sample t-tests were conducted to investigate whether the size of the tACS effect obtained in the original data was significantly larger than that obtained by chance (Fig. 4d). This analysis showed that all z-scores were significantly higher than zero (all t(41) > 2.36, p < 0.05, all p-values corrected for multiple comparisons using the Holm-Bonferroni method).”

Page 31, lines 962-972: “To further verify that the observed tACS effects were not an artifact of the analysis procedure, the differences between the tACS conditions (sham, tACS(+), and tACS(-)) were normalized using a permutation approach. For each participant and session, 1000 surrogate datasets were created by permuting the tACS lag designation across trials. The same binning procedure, realignment, and cosine fits were applied to each surrogate dataset as for the original data (see above). FM-amplitude at sham, tACS(+), and tACS(-) was averaged across sessions since the original analysis did not show a main effect of session. Differences between tACS conditions were estimated for the original and surrogate datasets, and the resulting values from the original data were z-scored using the mean and standard deviation of the surrogate distributions. One-sample t-tests were conducted to test the statistical significance of the z-scores. P-values were corrected for multiple comparisons using the Holm-Bonferroni method.”
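The core of this permutation-based normalization can be sketched as follows; compute_effect is a hypothetical placeholder for the full binning, cosine-fitting, and realignment pipeline, and all data are synthetic.

```python
# Sketch of the permutation-based normalization: permute tACS-lag labels,
# recompute the effect for each surrogate, and z-score the observed effect.
import numpy as np

rng = np.random.default_rng(7)

def compute_effect(tacs_lags, performance):
    # Placeholder for: bin by tACS lag, fit cosine, realign, and return,
    # e.g., tACS(+) minus tACS(-). Here: a simple two-group difference.
    return performance[tacs_lags < 3].mean() - performance[tacs_lags >= 3].mean()

n_trials = 300
tacs_lags = rng.integers(0, 6, n_trials)      # six tACS-audio lag conditions
performance = rng.normal(0.5, 0.1, n_trials)  # per-trial detection outcome

observed = compute_effect(tacs_lags, performance)

# Surrogate distribution: 1000 permutations of the lag designation.
surrogate = np.array([
    compute_effect(rng.permutation(tacs_lags), performance)
    for _ in range(1000)
])

# z-score the observed effect against the surrogate distribution.
z = (observed - surrogate.mean()) / surrogate.std()
print(f"observed = {observed:.4f}, z = {z:.2f}")
```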

2. Results of Fig 5a and 5b seem consistent with the concern raised above about the results of Fig. 4. It appears we are looking at an artifact of the realignment procedure, on otherwise random noise. In fact, the drop in "tACS-amplitude" in Fig. 5c is entirely consistent with a random noise effect.

      Please see our response to the comment above.

      1. To better understand what factors might be influencing inter-session variability in tACS effects, we estimated multiple linear models ..." this post hoc analysis does not seem to have been corrected for multiple comparisons of these "multiple linear models". It is not clear how many different things were tried. The fact that one of them has a p-value of 0.007 for some factors with amplitude-difference, but these factors did not play a role in the amplitude-phase, suggests again that we are not looking at a lawful behavior in these data.

      We suspect that the reviewer did not have access to the supplemental materials where all tables (relevant here is Table S3) are provided. This post hoc analysis was performed as an exploratory analysis to better understand the factors that could influence the inter-session variability of tACS effects. In Table S3, we provide the formula for each of the seven models tested, including their Akaike information criteria corrected for small samples (AICc), R2, F, and p-values. As described in the methods section, the winning model was selected as the model with the smallest AICc. A similar procedure has been previously used in the literature (Kasten et al., 2019). Moreover, to ensure that our winning model was better at explaining the data than the second-best unrestricted model, we used the likelihood ratio test. After choosing the winning model and before reporting the significance of the predictors, we examined the significance of the model in and of itself, taking into account its R2 as well as F- and p-values relative to a constant model. Thus, only one model is being evaluated in terms of statistical significance. Therefore, to our understanding, there are no multiple comparisons to correct for. We added the information regarding the selection procedure, hoping this will make the analysis clearer.

See page 12, lines 354-360: “This model was selected because it had the smallest Akaike's information criterion corrected for small samples (AICc). Moreover, the likelihood ratio test showed no evidence for choosing the more complex unrestricted model (stat = 2.411, p = 0.121). Following the same selection criteria, the winning model predicting inter-session variability in tACS-phase included only the factor gender (Table S4). However, this model was not significant in and of itself when compared to a constant model (F-statistic vs. constant model: 3.05, p = 0.09, R2 = 0.082).”

      1. "So far, our results demonstrate that FM-stimulus driven behavioral modulation of gap detection (FM-amplitude) was significantly affected by the phase lag between the FM-stimulus and the tACS signal (Audio-tACS lag) ..." There appears to be nothing in the preceding section (Figures 4 and 5) to show that the modulation seen in 3e is not just noise. Maybe something can be said about 3b on an individual subject/session basis that makes these results statistically significant on their own. Maybe these modulations are strong and statistically significant, but just not reproducible across subjects and sessions?

Please see our response to the first comment regarding the validity of our analysis for demonstrating the significant effect of tACS lag on behavioral entrainment to the FM-stimulus (FM-amplitude), and the new control analysis. After performing the permutation tests to make sure the reported effects are not noise, our statistical analysis still shows that tACS-lag significantly modulates behavioral entrainment to the sound (FM-amplitude). Thus, the reviewer is right to say "these modulations are strong and statistically significant, just not reproducible across subjects and sessions". In this regard, we consider our evaluation of the session-to-session reliability of tACS effects to be of high relevance for the field, as this is often overlooked in the literature.

      1. "Inter-individual variability in the simulated E-field predicts tACS effects" Authors here are attempting to predict a property of the subjects that was just shown to not be a reliable property of the subject. Authors are picking 9 possible features for this, testing 33 possible models with N=34 data points. With these circumstances, it is not hard to find something that correlates by chance. And some of the models tested had interaction terms, possibly further increasing the number of comparisons. The results reported in this section do not seem to be robust, unless all this was corrected for multiple comparisons, and it was not made clear?

We thank the reviewer very much for this comment. While the reviewer is right that in these models we are trying to predict an individual property (tACS-amplitude) that was not test-retest reliable across sessions, we still consider this to be a valid analysis. Here, we take the tACS-amplitude averaged across sessions, trying to predict the probability of a participant being significantly modulated by tACS in general, regardless of day-to-day variability. Regarding the number of multiple regression models, how we chose the winning model, and the appropriateness of multiple-comparisons correction in this case, please see our explanation under "Reviewer 1 (Public review)" and our response to comment 3.

      1. "Can we reduce inter-individual variability in tACS effects ..." This section seems even more speculative and with mixed results.

We agree with the reviewer that this section is a bit speculative. We are trying to plant some seeds so that future research can help move the field forward in the quest for better stimulation protocols. We have added a sentence at the end of the section to state explicitly that more evidence is needed in this regard.

      Page 14, lines 428-429: “At this stage, more evidence is needed to prove the superiority of individually optimized tACS montages for reducing inter-individual variability in tACS effects.”

      Given the concerns with the statistical analysis above, there are concerns about the following statements in the summary of the Discussion:

      1. "2) does modulate the amplitude of the FM-stimulus induced behavioral modulation (FM-amplitude)"

      This seems to be based on Figure 4, which leaves one with significant concerns.

Please see our response to comment 1. We hope the reviewer is satisfied with our additional analysis confirming that the tACS effect reported here is not noise.

      1. "4) individual variability in tACS effect size was partially explained by two interactions: between the normal component of the E-field and the field focality, and between the normal component of the E-field and the distance between the peak of the electric field and the functional target ROIs."

      The complexity of this statement alone may be a good indication that this could be the result of false discovery due to multiple comparisons.

We respectfully disagree with the reviewer's opinion that this is a complex statement. We think that these interaction effects are very intuitive, as we explain in the Results and Discussion sections. These significant interactions show that for tACS to be effective, it matters that current reaches the right place and not irrelevant brain regions. We believe this finding is of great importance for the field, since most studies on the topic still focus on predicting tACS effects from the absolute field strength and neglect other properties of the electric field.

      For the same reasons as stated above, the following statements in the Abstract do not appear to have adequate support in the data:

      "We observed that tACS modulated the strength of behavioral entrainment to the FM sound in a phase-lag specific manner. ... Inter-individual variability of tACS effects was best explained by the strength of the inward electric field, depending on the field focality and proximity to the target brain region. Spatially optimizing the electrode montage reduced inter-individual variability compared to a standard montage group."

      Please see response to all previous comments

      In particular, the evidence in support of the last sentence is unclear. The only finding that seems related is that "the variance test was significant only for tACS(-) in session 2". This is a very narrow result to be able to make such a general statement in the Abstract. But perhaps this can be made clearer.

      We changed this sentence in the abstract to:

      Page 2, lines 41-43: “Although additional evidence is necessary, our results also provided suggestive insights that spatially optimizing the electrode montage could be a promising tool to reduce inter-individual variability of tACS effects.”

      Reviewer #3 (Public Review):

      In "Behavioral entrainment to rhythmic auditory stimulation can be modulated by tACS depending on the electrical stimulation field properties" Cabral-Calderin and collaborators aimed to document 1) the possible advantages of personalized tACS montage over standard montage on modulating behavior; 2) the inter-individual and inter-session reliability of tACS effects on behavioral entrainment and, 3) the importance of the induced electric field properties on the inter-individual variability of tACS.

To do so, in two different sessions, they investigated how the detection of silent gaps occurring at random phases of a 2-Hz frequency-modulated sound could be enhanced with 2-Hz tACS delivered at different phase lags. In addition, they evaluated the advantage of using spatially optimized tACS montages (an information-based procedure using anatomy and functional MRI to define the target ROI, and simulation to compare to a standard montage applied to all participants) on behavioral entrainment. They first show that the optimized and the standard montages have similar spatial overlap with the target ROI. While the optimized montage induced a more focal field than the standard montage, the latter induced the strongest electric field. Second, they show that tACS does not modify the optimal phase for gap detection (phase of the frequency-modulated sound) but modulates the strength of behavioral entrainment to the frequency-modulated sound in a phase-lag-specific manner. However, and surprisingly, they report that the optimal tACS lag and the magnitude of the phasic tACS effect were highly variable across sessions. Finally, they report that the inter-individual variability of tACS effects can be explained by the strength of the inward electric field, depending on the field focality and on how well it reached the target ROI.

      The article is interesting and well-written, and the methods and approaches are state-of-the-art.

      Strengths:

      • The information-based approach used by the authors is very strong, notably with the definition of subject-specific targets using a fMRI localizer and the simulation of electric field strength using 3 different tACS montages (only 2 montages used for the behavioral experiment).

      • The inter-session and inter-individual variability are well documented and discussed. This article will probably guide future studies in the field.

      Weaknesses:

      • The addition of simultaneous EEG recording would have been beneficial to understand the relationship between tACS entrainment and the entrainment to rhythmic auditory stimulation.

We are grateful for the Reviewer's positive assessment of our work and for the reviewer's recommendations. We agree with the reviewer that adding simultaneous EEG or MEG to our design would have been beneficial for understanding tACS effects. However, as the reviewer may know, such a combination also poses additional challenges due to the strong artifact induced by tACS in the EEG signals, which sits at the frequency of interest and is several orders of magnitude larger than the signal of interest. Unfortunately, an adequate setup for simultaneous tACS-EEG was not available at the time of the study. Nevertheless, since we are using a paradigm that we have repeatedly studied in the past and have shown to entrain neural activity and modulate behavior rhythmically, we are confident our results are of interest on their own. For readability of our answers, we numbered the comments below.

      1. It would have been interesting to develop the fact that tACS did not "overwrite" neural entrainment to the auditory stimulus. The authors try to explain this effect by mentioning that "tACS is most effective at modulating oscillatory activity at the intended frequency when its power is not too high" or "tACS imposes its own rhythm on spiking activity when tACS strength is stronger than the endogenous oscillations but it decreases rhythmic spiking when tACS strength is weaker than the endogenous oscillations". However, it is relevant to note that the oscillations in their study are by definition "not endogenous" and one can interpret their results as a clear superiority of sensory entrainment over tACS entrainment. This potential superiority should be discussed, documented, and developed.

      We thank the reviewer very much for this remark. We completely agree that our results could be interpreted as a clear superiority of sensory entrainment over tACS entrainment. We have now incorporated this possibility in the discussion.

      Page 16, line 472-478: “Alternatively, our results could simply be interpreted as a clear superiority of the auditory stimulus for entrainment. In other words, sensory entrainment might just be stronger than tACS entrainment in this case where the stimulus rhythm was strong and salient. It would be interesting to further test whether this superiority of sensory entrainment applies to all sensory modalities or if there is a particular advantage for auditory stimuli when they compete with electrical stimulation. However, answering this question was beyond the scope of our study and needs further investigations with more appropriate paradigms.”

2. The authors propose that "by applying tACS at the right lag relative to auditory rhythms, we can aid how the brain synchronizes to the sounds and in turn modulate behavior." This should be developed as the authors showed that the tACS lags are highly variable across sessions. According to their results, the optimal lag will vary for each tACS session and subtle changes in the montage could affect the effects.

We thank the reviewer for this remark. We believe that the right procedure in this case would be to use closed-loop protocols in which the optimal tACS-lag is estimated online, as we discuss in the summary and future directions subsection. We have tried to make this clearer in the sentence the reviewer mentioned.

      Page 17, line 506-508: “Since optimal tACS phase was variable across participants and sessions, this approach would require closed-loop protocols where the optimal tACS lag is estimated online (see next section).”

3. In a related vein, it would be very useful to show the data presented in Figure 3 (panels b,d,e) for all participants to allow the reader to evaluate the quality of the data (this can be added as a supplementary figure).

      Thank you very much for the suggestion. We have added two new supplemental figures (Fig S1 and S2) to show individual data for Fig. 3b and 3e. Note that Fig. 3d already shows the individual data as each circle represents optimal FM-phase for a single participant.

      Reviewer #1 (Recommendations For The Authors):

      Minor comments:

      "was optimized in SimNIBS to focus the electric field as precisely as possible at the target ROI" It appears that some form of constrained optimization was used. It would be good to clarify which method was used, including a reference.

Indeed, SimNIBS implements a constrained optimization approach based on pre-calculated lead fields. We have added the corresponding reference. All parameters used for the optimization are reported in the Methods (see subsection Electric field simulations and montage optimization). For further specifics, readers are invited to check the MATLAB code used for the optimization, which is available at: https://osf.io/3yutb

      "Thus, each montage has its pros and cons, and the choice of montage will depend on which of these dependent measures is prioritized." Well put. It would be interesting to know if authors considered optimizing for intensity on target. That would give the strongest predicted intensity on target, which seems like an important desideratum. Individualizing for something focal, as expected, did not give the strongest intensity. In fact, the method struggled to achieve the desired intensity of 0.1V/m in some subjects. It would be interesting to have a discussion about why this particular optimization method was selected.

The specific optimization method used in this study was somewhat arbitrary, as there is no standard in the field. It was validated in prior studies, where it was also demonstrated to perform favorably compared to alternative methods (Saturnino et al., 2019; Saturnino et al., 2021). The underlying physics of the head volume conductor generally limits the maximally achievable focality and requires a tradeoff between focality and the desired intensity at the target. This tradeoff depends on the maximal amount of current that can be injected into the electrodes due to safety limits (4 mA in total in our case). Further constraints of the optimization in our application were the simultaneous targeting of two areas and the requirement that field directions in the targets be roughly parallel to those of auditory dipoles. Given the combination of these constraints, as the reviewer noticed, we could not achieve the desired intensity of 0.1 V/m in some subjects. As we wanted to stimulate both auditory cortices equally, our priority was to have the E-fields as similar as possible between hemispheres. Future studies targeting only one area could optimize for target intensity more easily (assuming the same maximal total current injection). Alternatively, relaxing the constraint on direction and optimizing only for field intensity would help to increase the field intensities at the targets, but would lead to differing field directions in the two targets. As an example, see Author response image 1 below. We extensively discuss some of these points in the discussion section "Are individually optimized tACS montages better?" (Pages 21-22).
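The intensity side of this tradeoff can be illustrated in the abstract: given a lead field and a total current budget, maximizing the field component at a single target is a linear program. The sketch below is purely illustrative (random lead-field row, an assumed budget convention), not our SimNIBS-based pipeline.

```python
# Illustrative linear program: maximize the target E-field component
# subject to a total injected-current budget. The lead-field row is
# random here; in practice it comes from FEM simulations.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_elec = 32
l_target = rng.normal(0.0, 0.02, n_elec)  # V/m at target per mA, per electrode

I_MAX = 4.0  # mA; total current budget (accounting convention assumed)

# Split currents into positive/negative parts: i = p - q, p, q >= 0,
# so sum(p + q) equals sum(|i|). linprog minimizes, hence the sign flip.
c = np.concatenate([-l_target, l_target])
A_eq = np.concatenate([np.ones(n_elec), -np.ones(n_elec)])[None, :]  # sum(i) = 0
b_eq = [0.0]
A_ub = np.ones((1, 2 * n_elec))  # sum(p + q) <= I_MAX
b_ub = [I_MAX]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (2 * n_elec))
currents = res.x[:n_elec] - res.x[n_elec:]
print("achievable target field:", float(l_target @ currents), "V/m")
print("electrodes carrying current:", int(np.sum(np.abs(currents) > 1e-9)))
```

Notably, the unconstrained optimum concentrates all current on two electrodes: maximal intensity comes at the price of focality, and adding focality or direction constraints, as in our optimization, necessarily lowers the achievable target intensity.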

      Additionally, we added a few sentences in the Results and Methods giving more details about the optimization approach.

      Page 5, lines 115-116: “Using individual finite element method (FEM) head models (see Methods) and the lead field-based constrained optimization approach implemented in SimNIBS (31)”

      Page 27, lines 819-822: “The optimization pipeline employed the approach described in (31) and was performed in two steps. First, a lead field matrix was created per individual using the 10-10 EEG virtual cap provided in SimNIBS and performing electric field simulations based on the default tissue conductivities listed below.”

      Author response image 1.

E-field distributions for one example participant. Brain maps show the results of the same optimization procedure described in the main manuscript, either without constraining the current direction (top) or with the current direction constrained (bottom). Note that the desired intensity of 0.1 V/m can be achieved when the current direction is not constrained.

      The terminology of "high-definition HD" used here is unconventional and may confuse some readers. The paper cited for ring electrodes (18) does not refer to it as HD. A quick search for high-definition HD yields mostly papers using many small electrodes, not ring electrodes. They look more like what was called "individualized". More conventional would be to call the first configuration a "ring-electrode", and the "individualized" configuration might be called "individualized HD".

      We thank the reviewer for this remark. We changed the label of the high-definition montage to ring-electrode. Regarding the individualized configuration, we prefer not to use individualized HD as it has the same number of electrodes as the standard montage.

      "So far, we have evaluated whether tACS at different phase lags interferes with stimulus-brain synchrony and modulates behavioral signatures of entrainment" The paper does not present any data on stimulus-brain synchrony. There is only an analysis of behavior and stimulus/tACS phase.

We agree with the reviewer. To be more careful with this statement, we have modified the sentence to say:

      Page 10, lines 303-304: “So far, we have evaluated whether tACS at different phase lags modulates behavioral signatures of entrainment: FM-amplitude and FM-phase.”

      "However, the strength of the tACS effect was variable across participants." and across sessions, and the phase also was variable across subjects and sessions.

      "tACS-amplitude estimates were averaged across sessions since the session did not significantly affect FM-amplitude (Fig. 5a)." More importantly, the authors show that "tACS-amplitude" was not reproducible across sessions.

      Unfortunately, we did not understand what the reviewer is suggesting here, and would have to ask the reviewer in this case to provide us with more information.

      References

Kasten FH, Duecker K, Maack MC, Meiser A, Herrmann CS (2019) Integrating electric field modeling and neuroimaging to explain inter-individual variability of tACS effects. Nat Commun 10:5427.

Riecke L, Sack AT, Schroeder CE (2015) Endogenous Delta/Theta Sound-Brain Phase Entrainment Accelerates the Buildup of Auditory Streaming. Curr Biol 25:3196-3201.

      Riecke L, Formisano E, Sorger B, Baskent D, Gaudrain E (2018) Neural Entrainment to Speech Modulates Speech Intelligibility. Curr Biol 28:161-169 e165.

      Saturnino GB, Madsen KH, Thielscher A (2021) Optimizing the electric field strength in multiple targets for multichannel transcranial electric stimulation. J Neural Eng 18.

      Saturnino GB, Siebner HR, Thielscher A, Madsen KH (2019) Accessibility of cortical regions to focal TES: Dependence on spatial position, safety, and practical constraints. Neuroimage 203:116183.

      Zoefel B, Davis MH, Valente G, Riecke L (2019) How to test for phasic modulation of neural and behavioural responses. Neuroimage 202:116175.

    1. Author Response

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The authors develop a method to fluorescently tag peptides loaded onto dendritic cells using a two-step method with a tetracystein motif modified peptide and labelling step done on the surface of live DC using a dye with high affinity for the added motif. The results are convincing in demonstrating in vitro and in vivo T cell activation and efficient label transfer to specific T cells in vivo. The label transfer technique will be useful to identify T cells that have recognised a DC presenting a specific peptide antigen to allow the isolation of the T cell and cloning of its TCR subunits, for example. It may also be useful as a general assay for in vitro or in vivo T-DC communication that can allow the detection of genetic or chemical modulators.

      Strengths:

      The study includes both in vitro and in vivo analysis including flow cytometry and two-photon laser scanning microscopy. The results are convincing and the level of T cell labelling with the fluorescent pMHC is surprisingly robust and suggests that the approach is potentially revealing something about fundamental mechanisms beyond the state of the art.

      Weaknesses:

The method is demonstrated only at high pMHC density, and it is not clear if it can operate at lower peptide doses where T cells normally operate. However, this doesn't limit the utility of the method for applications where the peptide of interest is known. It's not clear to me how it could be used to de-orphan known TCRs, and this should be explained if they want to claim this as an application. Previous methods based on biotin-streptavidin and phycoerythrin had single-pMHC sensitivity, but there were limitations to the PE-based probe, so the use of organic dyes could offer advantages.

We thank the reviewer for the valuable comments and suggestions. Indeed, we have shown and optimized this labeling technique for a commonly used peptide at rather high doses to provide a proof of principle for the possible use of tetracysteine-tagged peptides in in vitro and in vivo studies. However, we completely agree that studies requiring different peptides and/or lower pMHC concentrations may need preliminary experiments if the use of biarsenical probes is attempted. We think the approach can help investigate the functional and biological properties of peptides recognized by TCRs deorphaned by other techniques: tetracysteine tagging of such peptides would provide a readily available antigen-specific reagent for downstream assays and validation. Other possible uses for modified immunogenic peptides include visualizing the dynamics of neoantigen vaccines or peptide delivery methods in vivo. For these additional uses, we recommend further optimization based on the needs of the prospective assay.

      Reviewer #2 (Public Review):

      Summary:

      The authors here develop a novel Ovalbumin model peptide that can be labeled with a site-specific FlAsH dye to track agonist peptides both in vitro and in vivo. The utility of this tool could allow better tracking of activated polyclonal T cells particularly in novel systems. The authors have provided solid evidence that peptides are functional, capable of activating OTII T cells, and that these peptides can undergo trogocytosis by cognate T cells only.

      Strengths:

      -An array of in vitro and in vivo studies are used to assess peptide functionality.

      -Nice use of cutting-edge intravital imaging.

      -Internal controls such as non-cogate T cells to improve the robustness of the results (such as Fig 5A-D).

      -One of the strengths is the direct labeling of the peptide and the potential utility in other systems.

      Weaknesses:

      1. What is the background signal from FlAsH? The baselines for Figure 1 flow plots are all quite different. Hard to follow. What does the background signal look like without FLASH (how much fluorescence shift is unlabeled cells to No antigen+FLASH?). How much of the FlAsH in cells is actually conjugated to the peptide? In Figure 2E, it doesn't look like it's very specific to pMHC complexes. Maybe you could double-stain with Ab for MHCII. Figure 4e suggests there is no background without MHCII but I'm not fully convinced. Potentially some MassSpec for FLASH-containing peptides.

We thank the reviewer for pointing out a possible area of confusion. In fact, we have done extensive characterization of the background and found that it varied with the batch of FlAsH and TCEP, the cytometer, and the oxidation-prone nature of the reagents. Because the Figure 1 subfigures were derived from different experiments, a combination of the factors above likely contributed to the inconsistent background. To display the background more objectively, we have now added the no antigen + FlAsH background to the revised Fig 1.

It is also worth noting that nonspecific FlAsH incorporation can be toxic at increasing doses, and live cells that display high background may undergo early apoptotic changes in vitro. However, when these cells are adoptively transferred and tracked in vivo, the compromised cells with high background likely undergo apoptosis and are cleared by macrophages in the lymph node, whereas no such clearance occurs in vitro; this difference further contributes to the divergent backgrounds between in vitro and in vivo experiments and is, we think, another possible cause of the inconsistent backgrounds throughout the manuscript. Altogether, comparing absolute signal intensities from different experiments would be misleading, and the relative differences within each experiment should be relied upon. We have added further discussion of this issue.

2. On the flip side, how many of the variant peptides are getting conjugated in cells? I'd like to see some quantification (HPLC or MassSpec). If it's ~10% of peptides that get labeled, this could explain the low shifts in fluorescence and the similar T cell activation to native peptides if FlAsH has any deleterious effects on TCR recognition. But if it's a high rate of labeling, then it adds confidence to this system.

We agree that mass spectrometry or, more specifically, tandem MS/MS would be an excellent addition to support our claim that peptide labeling by FlAsH is reliable and non-disruptive. Therefore, we have recently undertaken a tandem MS/MS quantitation project with our collaborators. However, this would require significant time to determine internal-standard-based calibration curves and to run both analytical and biological replicates. Hence, we have decided to pursue this as a follow-up study and have added further discussion on quantification of the FlAsH-peptide conjugates by tandem MS/MS.

3. Conceptually, what is the value of labeling peptides after loading with DCs? Why not preconjugate peptides with dye, before loading, so you have a cleaner, potentially higher fluorescence signal? If there is a potential utility, I do not see it being well exploited in this paper. There are some hints in the discussion of additional use cases, but it was not clear exactly how they would work. One mention was that the dye could be added in real-time in vivo to label complexes, but I believe this was not done here. Is that feasible to show?

We have already addressed preconjugation as a possible avenue for labeling peptides. In our hands, preconjugation resulted in low FlAsH intensity overall for both the control and tetracysteine-tagged peptides (Author response image 1). While we don't have a satisfactory answer as to why the signal was blunted by preconjugation, it could be that tetracysteine-tagged peptides attract biarsenical compounds more effectively intracellularly, perhaps because the redox potential of the intracellular environment limits disulfide bond formation (PMID: 18159092).

      Author response image 1.

      Preconjugation yields poor FlAsH signal. Splenic DCs were pulsed with peptide then treated with FlAsH or incubated with peptide-FlAsH preconjugates. Overlaid histograms show the FlAsH intensities on DCs following the two-step labeling (left) and preconjugation (right). Data are representative of two independent experiments, each performed with three biological replicates.

4. Figure 5D-F: the imaging data isn't fully convincing. For example, in 5F and 2G, the speeds for T cells with no Ag should be much higher (10-15 µm/min, or 0.16-0.25 µm/s). The fact that yours are much lower suggests technical or biological issues that might need to be acknowledged, or use other readouts like the flow cytometry.

We thank the reviewer for drawing attention to this technical point. We would like to point out that the imaging data in Fig. 5d-f were obtained from agarose-embedded live lymph node sections. Briefly, the lymph nodes were removed, suspended in 2% low-melting-temperature agarose in DMEM, and cut into 200 µm sections with a vibrating microtome. Prior to imaging, tissue sections were incubated in complete RPMI medium at 37 °C for 2 h to resume cell mobility. We think that cells only gradually resuming their typical speeds ex vivo may account for the slightly reduced T cell speeds overall, for both control and antigen-specific T cells (PMID: 32427565, PMID: 25083865). We have added text to prevent ambiguity about the technique used for dynamic imaging. The speeds in Figure 2g come from live imaging of DC-T cell cocultures, in which the basal cell movement could be hampered by the cell density. Additionally, the glass-bottom dishes had been coated with fibronectin to facilitate DC adhesion, which may be responsible for the lower average speeds of the T cells in vitro.

      Reviewer #1 (Recommendations For The Authors):

Does the reaction of ReAsH with reactive sites on the surface of DCs alter them functionally? Functions have been attributed to redox chemistry at the cell surface; could this alter that chemistry?

We thank the reviewer for the insight. It is possible that the nonspecific binding of biarsenical compounds to cysteine residues, which we refer to as background throughout the manuscript, contributes to some alterations. One possible way biarsenicals could affect redox events in DCs is by reducing glutathione levels (PMID: 32802886). Glutathione depletion is known to impair DC maturation and antigen presentation (PMID: 20733204). To avoid toxicity, we carried out a stringent titration to optimize ReAsH and FlAsH concentrations for labeling and conducted experiments using doses that did not cause overt toxicity or alter DC function.

Have the authors compared this to a straightforward approach where the peptide is just labelled with a similar dye and incubated with the cell to load pMHC, using the MHC knockout to assess specificity? Why is this method, which involves exposing the DC to a high concentration of TCEP, better than just labelling the peptide? The Davis lab also arrived at a two-step method with biotinylated peptide and streptavidin-PE, but I still wonder if this was really necessary, as the sensitivity will always come down to the ability to wash out the reagents that are not associated with the MHC.

We agree with the reviewer that small, undisruptive fluorochrome-labeled peptide alternatives would greatly improve the workflow and the signal-to-noise ratio. In fact, we have been actively searching for such alternatives since we started working on the tetracysteine-containing peptides. So far, we have tried commercially available FITC- and TAMRA-conjugated OVA323-339 for loading the DCs, but these failed to elicit any discernible signal. We also have an ongoing study in which we have been producing and testing various in-house modified OVA323-339 variants with fluorogenic properties. Unfortunately, at this moment, the ones that provided us with a crisp, bright signal for loading also incorporated into the DC membrane in a nonspecific fashion and were taken up by non-cognate T cells from double antigen-loaded DCs. We are actively pursuing this area of investigation and developing better-optimized peptides with low/non-significant membrane incorporation.

      Lastly, we would like to point out that tetracysteine tags are visible by transmission electron microscopy without FlAsH treatment. Thus, this application could add a new dimension for addressing questions about the antigen/pMHCII loading compartments in future studies. We have now added more in-depth discussion about the setbacks and advantages of using tetracysteine labeled peptides in immune system studies.

      The peptide dosing at 5 µM is high compared to the likely sensitivity of the T cells. It would be helpful to titrate the system down to the EC50 for the peptide, which may be nM, and determine if the specific fluorescence signal can still be detected in the optimal conditions. This will not likely be useful in vivo, but it will be helpful to see if the labelling procedure would impact T cell responses when antigen is limited, which will be more of a test. At 5 µM it's likely the system is at a plateau and even a 10-fold reduction in potency might not impact the T cell response, but it would shift the EC50.

We thank the reviewer for the comment and suggestion. We agree that it is possible to miss minimally disruptive effects at 5 µM, and titrating the native peptide vs. the modified peptide down to nM doses would provide us with a clearer view. This can certainly be addressed in future studies, also with other peptides with different affinity profiles. One reason why we chose a relatively high dose for this study was that lowering the peptide dose cost us the specific FlAsH signal; we therefore proceeded with the lowest peptide concentration that still produced it.

In Fig 3b the level of background in the dsRed channel is very high after DC transfer. What cells is this associated with, and does this appear to be debris? Also, I wonder where the ReAsH signal is in the experiments in general. I believe this is a red dye and it would likely be quite bright given the reduction of the FlAsH signal. Will this signal overlap with signals like dsRed and PKH-26 if the DC is also treated with this to reduce the FlAsH background?

      We have already shown that the ReAsH signal, together with DsRed, can be used for cell-tracking purposes, as neither is transferred to other cells during antigen-specific interactions (Author response image 2). In fact, combining their exceptionally bright fluorescence provided a robust signal for tracking the adoptively transferred DCs in the recipient mice. On the other hand, the lipophilic membrane dye PKH-26 is transferred by trogocytosis, while the remaining signal contributes to the red fluorescence used for tracking DCs. Therefore, the signal that we show to be transferred from DCs to T cells comes only from the lipophilic dye. To address this, we have added a sentence to elaborate on this point in the results section. Regarding the reviewer’s comment on the DsRed background in Figure 3b, we agree that the signal from cells outside the gate in recipient mice seems slightly higher than that in control mice. This may suggest that macrophages clearing debris from apoptotic/dying DCs contribute to the background in the recipient lymph node. Nevertheless, it does not contribute to any DsRed/ReAsH signal in the antigen-specific T cells.

      Author response image 2.

      ReAsH and DsRed are not picked up by T cells during the immune synapse. DsRed+ DCs were labeled with ReAsH, pulsed with 5 μM OVACACA, labeled with FlAsH and adoptively transferred into CD45.1 congenic mice (1-2 × 106 cells) via footpad. Naïve e450-labeled OTII and e670-labeled polyclonal CD4+ T cells were mixed 1:1 (0.25-0.5 × 106/ T cell type) and injected i.v. Popliteal lymph nodes were removed at 42 h post-transfer and analyzed by flow cytometry. Overlaid histograms show the ReAsH/DsRed, MHCII and FlAsH intensities of the T cells. Data are representative of two independent experiments with n=2 mice per group.

      In Fig 5b there is a missing condition. If they look at Ea-specific T cells for DCs with/without the OVA peptide, do they see no transfer of PKH-26 to the OTII T cells? Also, the MFI of the FlAsH signal transferred to the T cells seems very high compared to other experiments. Can the authors estimate the number of peptides transferred (this should be possible), and would each T cell need to be collecting antigens from multiple DCs? Could the debris from dead DCs also contribute to this if picked up by other DCs or even directly by the T cells? Maybe this could be tested by transferring DCs that are killed (perhaps by sonication) prior to inoculation?

      To address the reviewer’s question on PKH-26 acquisition by T cells, Ea T cells pick up PKH-26 from Ea+OVA double-pulsed DCs, but not from unpulsed or single OVA-pulsed DCs. OTII T cells acquire PKH-26 from OVA-pulsed DCs, whereas Ea T cells do not (as expected) and serve as an internal negative control for that condition. Regarding the reviewer’s comment on the high FlAsH signal intensity of T cells in Figure 5b, a plausible explanation is that the T cells accumulate pMHCII through serial engagements with APCs. In fact, a comparison of the T cell FlAsH intensities at 18 h and 36-48 h post-transfer demonstrates an increase (Author response image 3) and thus hints at a cumulative signal. As DCs are known to be short-lived after adoptive transfer, the debris of dying DCs, along with their peptide content, may indeed be passed on to macrophages and neighboring DCs, and eventually back to T cells again (or for the first time, depending on a T:DC ratio that may not allow all T cells to contact the transferred DCs within the limited time frame). We agree that the number and quality of such contacts could be gauged using fluorescent peptides. However, we think peptides chemically conjugated to fluorochromes, with optimized signal-to-noise profiles and less prone to oxidation, would be more suitable for quantification purposes.

      Author response image 3.

      FlAsH signal acquisition by antigen-specific T cells becomes more prominent at 36-48 h post-transfer. DsRed+ splenic DCs were double-pulsed with 5 μM OVACACA and 5 μM OVA-biotin and adoptively transferred into CD45.1 recipients (2 × 106 cells) via footpad. Naïve e450-labeled OTII (1 × 106 cells) and e670-labeled polyclonal T cells (1 × 106 cells) were injected i.v. Popliteal lymph nodes were analyzed by flow cytometry at 18 h or 48 h post-transfer. Overlaid histograms show the T cell levels of OVACACA (FlAsH). Data are representative of three independent experiments with n=3 mice per time point.

      Reviewer #2 (Recommendations For The Authors):

      As mentioned in weaknesses 1 & 2, more validation of how much of the FlAsH fluorescence is on agonist peptides and how much is non-specific would improve the interpretation of the data. Another option would be to preconjugate the peptides, but repeating the work that way might require significant effort.

      We agree that mass spectrometry would be the gold-standard technique to measure the percentage of tetracysteine-tagged peptide that is conjugated to FlAsH in DCs. However, due to the scope of such an endeavour, this can only be addressed as a separate follow-up study. As for preconjugation, we tried it and unfortunately failed to get it to work (Reviewer Figure 1). We have therefore shifted our focus to generating in-house peptide probes that are chemically conjugated to stable and bright fluorophore derivatives. With these, we aim to circumvent the problems that the two-step FlAsH labeling poses.

      Along those lines, do you have any way to quantify how many peptides you are detecting based on fluorescence? Being able to quantify the actual number of peptides would push the significance up.

      We think the two-step procedure and the background would pose challenges to such quantification in this study. Although it would provide tremendous insight into antigen-specific T cell-APC interactions in vivo, we think it should be performed using peptides chemically conjugated to fluorochromes with optimized signal-to-noise profiles.

      In Figure 3D or 4, does the SA signal correlate with the FlAsH signal on OT2 cells? Can you correlate FlAsH uptake with T cell activation downstream of the TCR, to validate peptide transfers?

      To answer the reviewer’s question about the FlAsH and SA correlation, we have revised Figure 3d to show the correlation between OTII uptake of FlAsH, streptavidin and MHCII. We also thank the reviewer for the suggestion of correlating FlAsH uptake with T cell activation and/or downstream TCR signaling. We have used proliferation and CD44 expression as proxies of activation (Fig 2, 6). Nevertheless, we agree that the early events corresponding to the initiation of the T-DC synapse and FlAsH uptake would be valuable for demonstrating the temporal relationship between peptide transfer and activation. We have therefore addressed this in the revised discussion.

      Author response image 4.

      FlAsH signal acquisition by antigen-specific T cells correlates with OVA-biotin (SA) and MHCII uptake. DsRed+ splenic DCs were double-pulsed with 5 μM OVACACA and 5 μM OVA-biotin and adoptively transferred into CD45.1 recipients (2 × 106 cells) via footpad. Naïve e450-labeled OTII (1 × 106 cells) and e670-labeled polyclonal T cells (1 × 106 cells) were injected i.v. Popliteal lymph nodes were analyzed by flow cytometry. Overlaid histograms show the T cell levels of OVACACA (FlAsH) at 48 h post-transfer. Data are representative of three independent experiments with n=3 mice.

      Minor:

      Figure 3F, 5D, and videos: Can you color-code polyclonal T cells a different color than magenta (possibly white or yellow), as they have the same look as the overlay regions of OT2-DC interactions (Blue+red = magenta).

      We apologize for the inconvenience regarding the color selection. We had difficulty assigning colors that are both bright and distinct. Unfortunately, yellow and white were also easily confused with the FlAsH signal inside red and blue cells, respectively. We have now added yellow and white arrows to better distinguish the polyclonal vs. antigen-specific cells in 3f and 5d.

    1. A system is a composition of objects that are abstractions, which hide data and expose behavior

      Composition Abstraction Hide data Expose behavior

    1. for Loops

      During lab 2, I found myself wondering, "How do they expect us to accomplish this task without loops?" Loops and lists make everything in programming easier. I remember programming at North Seattle College, where Bill Barry would get mad at me for Googling something and adding it to my code. It resulted in a small deduction, but it made me realize how many lines of code lists and loops could save compared with other approaches that might take longer. For example, instead of calling display() and then sleep() separately for each of 7 posts before sending them to Reddit, you could use a loop, reducing 14 lines of code to just a few. Personally, I struggle with loop syntax compared to arrays and lists, and even something like file I/O. That's one reason why I decided to take this class and hopefully learn more about informatics for my next legendary essay, which hopefully doesn't get denied.
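
      As a rough sketch of that reduction (a minimal example; the post texts and the display/sleep pattern are placeholders from the lab description, not a real Reddit client):

          import time
          from IPython.display import display  # display() as used in the course notebooks

          # Hypothetical post texts; in the lab these would each be sent to Reddit.
          posts = ["post 1", "post 2", "post 3", "post 4",
                   "post 5", "post 6", "post 7"]

          for post in posts:
              display(post)   # stand-in for the code that shows/sends one post
              time.sleep(1)   # one sleep in the loop replaces seven separate calls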

      I don’t dislike Bill at all if that’s what you got from this. He was one of the most challenging professors but also rewarding. He limited which code we could use in a project, giving us tasks that would be easier with a list, array, or loop, and then emphasized the need for conciseness to reduce the number of bugs. He taught the class as if we were Microsoft employees, solving problems with a strict class style guide. One difference between his class and classes at the University of Washington is that some lines of code displayed on class slides are confusing to me because they have multiple lines that almost look identical to something you'd write in Java, when realistically in Python, you could write all of that in just a single line.

      Googling introduced me to many different ways of writing code, and there are numerous ways to solve a single problem when programming. Often, there is code that isn't pretty to look at and not as easy to read, which I think Bill wanted his students to avoid, given his background as a former Microsoft employee.

    2. Lists

      Many types of data on social media platforms are organized as lists, such as lists of friends or followers, lists of posts, lists of photos in a post, lists of people who liked a post, etc. The way to create a list in Python is to make a list of values, separated by commas, inside of square brackets, like this:

          book_authors = ["Kyle Thayer", "Susan Notess"]
          display(book_authors)

          ['Kyle Thayer', 'Susan Notess']

      If we are making a list and it gets too long, Python will let us use several lines to do this:

          some_book_chapters = ["Intro", "Definitions", "Bots", "Data",
                                "History of Social Media", "Authenticity"]
          display(some_book_chapters)

          ['Intro', 'Definitions', 'Bots', 'Data', 'History of Social Media', 'Authenticity']

      Lists are in order, so Python lets us use the "index" to ask for a specific entry, like the 1st, 2nd, 3rd, etc. As we stated in the Data chapter, many programming languages, for historical reasons, make a list's first entry have index 0, its 2nd entry have index 1, its 3rd entry have index 2, etc. So if we want to see the first chapter in my some_book_chapters list, I select it by putting the index number inside square brackets ([, ]) after the variable name:

          first_chapter = some_book_chapters[0]
          display(first_chapter)

          'Intro'

      And if I want the 4th chapter, I'd select it like this:

          fourth_chapter = some_book_chapters[3]
          display(fourth_chapter)

          'Data'

      Now, let's say we have a list of users who liked our latest social media post:

          users_who_liked_our_post = ["@pretend_user_1", "@pretend_user_2", "@pretend_user_3"]

      What if we wanted to follow all of them? If our list was long, it would take a lot of code to pull out each one and try to follow them. But Python gives us an easy way to perform actions on all the items in a list, by using for loops.

      for Loops

      for loops let us perform an action or a set of actions for all of the items in a list. So, if we wanted to go through all the users that liked our post and display a message for each one, we could do this:

          for user in users_who_liked_our_post:
              display("Yay! " + user + " liked our post!")

          'Yay! @pretend_user_1 liked our post!'
          'Yay! @pretend_user_2 liked our post!'
          'Yay! @pretend_user_3 liked our post!'

      There are several things that went into making that for loop code above:

      • Start the line with a for.

      • Make up a new variable name that will be a temporary variable to hold whichever item from the list we are doing our actions on. In this case each item in the list will be a user, so we call our variable user.

      • Then we write the word in.

      • Then we put the list that we want to go through, in this case users_who_liked_our_post.

      • Then we put a colon (:). In Python, a colon like this means that what comes next is a block of statements that goes together. This block of statements is indented over to indicate that it is part of the block.

      • Then, on the next line and indented over, we have our display function that uses the user variable. This is the line of code that is repeated for each item in the list.

      If we want to do several actions in our loop, all we need to do is add more lines of code indented the same amount, like this (note: we'll use print instead of display, which mostly works the same, but we think print happens to look a little better in this situation):

          for user in users_who_liked_our_post:
              print("Yay! " + user + " liked our post!")
              print("Perhaps we should follow " + user)
              print("We could put code here to do that!")
              print()

          Yay! @pretend_user_1 liked our post!
          Perhaps we should follow @pretend_user_1
          We could put code here to do that!

          Yay! @pretend_user_2 liked our post!
          Perhaps we should follow @pretend_user_2
          We could put code here to do that!

          Yay! @pretend_user_3 liked our post!
          Perhaps we should follow @pretend_user_3
          We could put code here to do that!

      In the above code our for loop runs a block of code that has four statements, each doing a print. You'll notice we added an extra blank print, which makes a blank line and helps us see in the output what each loop did.

      The explanation of lists and for loops in the context of social media data is highly effective and relevant. Using familiar scenarios, like handling lists of users or posts, makes the concepts of lists and iteration in Python accessible and practical. This approach not only teaches fundamental programming concepts but also illustrates their direct application in real-world scenarios, bridging the gap between theoretical learning and practical usage. It's a great example of how programming can be taught in an engaging and contextually meaningful way.

    1. Visual Studio Code has integrated source control management (SCM) and includes Git support out-of-the-box. Many other source control providers are available through extensions on the VS Code Marketplace.

      This is an excellent resource for learning about Git integration with VS Code.

    1. Drag and drop a document to annotate it, or paste a YouTube video URL. Works with .pdf, .doc, .docx, .odt, .xls, .xlsx, .ods, .ppt, .pptx, .odp, .rtf, .epub and .csv files. .doc, .docx, .odt, .ppt, .pptx, .odp and .rtf are converted to .pdf. .xls, .xlsx and .ods are converted to .csv. You can also annotate PDFs inside Google Drive by authorizing Hypothes.is within your Google account. Hypothes.is PDF Annotator will be listed under the "Open with" option for PDF files upon authorization. To easily open YouTube videos for annotation, use the bookmarklet. Drag this link to the bookmarks bar, or right-click/control-click to bookmark the link: DropDoc Bookmarklet. The annotation functionality is enabled by Hypothes.is. The code for this site is open source. This is a personal project to explore different ideas and is maintained by Dan Whaley. I'd be delighted to hear any feedback at @dwhly. The intention is to keep the site up and running, but no guarantee around the preservation of documents is made. As an aside, annotations made against PDFs or EPUBs with your Hypothes.is account are discoverable on that PDF or EPUB regardless of its location (Background). As long as you have the original PDF somewhere, you'll always be able to see your annotations on it with Hypothes.is.

      Another discovery: an innovation by Dan of Hypothes.is. I'll test it out another day.


    1. is ‘coded and coding, caused and causing’, reveal that the natural contract is legible everywhere

      Code connects life together. It is only one of many ways in which everything living is related to everything else.

    2. ation. He refers to the detection of chemical alliances, but the discovery of a cell’s genetic code was particularly fundamental to his philosophy

      Through the genetic code, information is built into nature. One can take this further in the direction of coevolution.

    1. Author Response

      The following is the authors’ response to the original reviews.

      Reviewer No.1 (public)

      The authors present a study focused on addressing the key challenge in drug discovery, which is the optimization of absorption and affinity properties of small molecules through in silico methods. They propose active learning as a strategy for optimizing these properties and describe the development of two novel active learning batch selection methods. The methods are tested on various public datasets with different optimization goals and sizes, and new affinity datasets are curated to provide up-to-date experimental information. The authors claim that their active learning methods outperform existing batch selection methods, potentially reducing the number of experiments required to achieve the same model performance. They also emphasize the general applicability of their methods, including compatibility with popular packages like DeepChem.

      Strengths:

      Relevance and Importance: The study addresses a significant challenge in the field of drug discovery, highlighting the importance of optimizing the absorption and affinity properties of small molecules through in silico methods. This topic is of great interest to researchers and pharmaceutical industries.

      Novelty: The development of two novel active learning batch selection methods is a commendable contribution. The study also adds value by curating new affinity datasets that provide chronological information on state-of-the-art experimental strategies.

      Comprehensive Evaluation: Testing the proposed methods on multiple public datasets with varying optimization goals and sizes enhances the credibility and generalizability of the findings. The focus on comparing the performance of the new methods against existing batch selection methods further strengthens the evaluation.

      Weaknesses:

      Lack of Technical Details: The feedback lacks specific technical details regarding the developed active learning batch selection methods. Information such as the underlying algorithms, implementation specifics, and key design choices should be provided to enable readers to understand and evaluate the methods thoroughly.

      Evaluation Metrics: The feedback does not mention the specific evaluation metrics used to assess the performance of the proposed methods. The authors should clarify the criteria employed to compare their methods against existing batch selection methods and demonstrate the statistical significance of the observed improvements.

      Reproducibility: While the authors claim that their methods can be used with any package, including DeepChem, no mention is made of providing the necessary code or resources to reproduce the experiments. Including code repositories or detailed instructions would enhance the reproducibility and practical utility of the study.

      Suggestion 1:

      Elaborate on the Methodology: Provide an in-depth explanation of the two active learning batch selection methods, including algorithmic details, implementation considerations, and any specific assumptions made. This will enable readers to better comprehend and evaluate the proposed techniques.

      Answer: We thank the reviewer for this suggestion. Following this comment, we have extended the text in Methods (Section: Batch selection via determinant maximization and Section: Approximation of the posterior distribution) and in Supporting Methods (Section: Toy example). We have also included pseudocode for the batch optimization method.
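
      For intuition, a greedy toy sketch of the determinant-maximization idea follows (our illustration of the general principle only, with a synthetic covariance matrix; the paper's actual pseudocode is in its Methods):

          import numpy as np

          def select_batch(cov, batch_size):
              """Greedily add the candidate that most increases log det of the
              selected covariance submatrix (a proxy for joint informativeness)."""
              selected = []
              for _ in range(batch_size):
                  best_i, best_logdet = None, -np.inf
                  for i in range(cov.shape[0]):
                      if i in selected:
                          continue
                      idx = selected + [i]
                      _, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
                      if logdet > best_logdet:
                          best_i, best_logdet = i, logdet
                  selected.append(best_i)
              return selected

          rng = np.random.default_rng(0)
          A = rng.normal(size=(30, 8))
          cov = A @ A.T + 1e-6 * np.eye(30)   # toy epistemic covariance over 30 candidates
          print(select_batch(cov, batch_size=4))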

      Suggestion 2:

      Clarify Evaluation Metrics: Clearly specify the evaluation metrics employed in the study to measure the performance of the active learning methods. Additionally, conduct statistical tests to establish the significance of the improvements observed over existing batch selection methods.

      Answer: Following this comment, we added details to Table 1 about how we computed the cutoff times for the different methods. We also provide more details on the statistics we performed to determine the significance of these differences.

      Suggestion 3:

      Enhance Reproducibility: To facilitate the reproducibility of the study, consider sharing the code, data, and resources necessary for readers to replicate the experiments. This will allow researchers in the field to validate and build upon your work more effectively.

      Answer: This is something we already included with the original submission. The code is publicly available. In fact, we provide a Python library, ALIEN (Active Learning in data Exploration), which is published on the Sanofi GitHub (https://github.com/Sanofi-Public/Alien). We also provide details on the public data used and expect to provide the internal data as well. We have included a short paragraph on code and data availability.

      Reviewer No.2 (public)

      Suggestion 1:

      The authors presented a well-written manuscript describing the comparison of active learning methods with state-of-the-art methods for several datasets of pharmaceutical interest. This is a very important topic, since active learning resembles a cyclic drug design campaign: compounds are tested, new ones are designed, those feed further tests in a new design cycle, and so on. The experimental design is comprehensive and adequate for the proposed comparisons. However, I would expect to see a comparison using other regression metrics and considering the applicability domain of the models, which are two essential topics for the drug design modelers community.

      Answer: We want to thank the reviewer for these comments. We provide a detailed response to the specific comments below. 

      Reviewer No.1 (Recommendations For The Authors)

      Recommendation 1:

      The description provided regarding the data collection process and the benchmark datasets used in the study raises some concerns. The comment specifically addresses the use of both private (Sanofi-owned) and public datasets to benchmark the various batch selection methods. Lack of Transparency: The comment lacks transparency regarding the specific sources and origins of the private datasets. It would be crucial to disclose whether these datasets were obtained from external sources or if they were generated internally within Sanofi. Without this information, it becomes difficult to assess the potential biases or conflicts of interest associated with the data.

      Answer: We would like to thank the reviewer for this comment. As mentioned in the paper, the public GitHub page contains links to all the public data and, we expect, will contain the internal Sanofi data as well. We also now provide more information on the specific experiments that were done internally by Sanofi to collect these data.

      Potential Data Accessibility Issues: The utilization of private datasets, particularly those owned by Sanofi, may raise concerns about data accessibility. The lack of availability of these datasets to the wider scientific community may limit the ability of other researchers to replicate and validate the study’s findings. It is essential to ensure that the data used in research is openly accessible to foster transparency and encourage collaboration.

      Answer: Again, as stated above we expect to release the data collected internally on the github page.

      Limited Information on Dataset Properties: The comment briefly mentions that the benchmark datasets cover properties related to absorption, distribution, pharmacokinetic processes, and affinity of small drug molecules to target proteins. However, it does not provide any specific details about the properties included in the datasets or how they were curated. Providing more comprehensive information about the properties covered and the methods used for curation would enhance the transparency and reliability of the study.

      To address these concerns, it is crucial for the authors to provide more detailed information about the data sources, dataset composition, representativeness, and curation methods employed. Transparency and accessibility of data are fundamental principles in scientific research, and addressing these issues will strengthen the credibility and impact of the study.

      Answer: We agree with this comment and believe that it is important to be explicit about each of the datasets and to provide information on the new data. We note that we already discuss the details of each of the experiments in Methods and, of course, provide links to the original papers for the public data. We have now added text to Supporting Methods that describes the experiments in more detail, as well as providing literature references for the experimental protocols used. As noted above, we expect to provide our new internal data on the public GitHub page.

      Recommendation 2:

      Some comments on the modeling example "Approximation of the posterior distribution". Lack of Methodological Transparency: The comment fails to provide any information regarding the specific method or approach used for approximating the posterior distribution. Without understanding the methodology employed, it is impossible to evaluate the quality or rigor of the approximation. This lack of transparency undermines the credibility of the study.

      Answer: We want to thank the reviewer for pointing this out. Based on this comment we added more information to Section: Approximation of the posterior distribution. Moreover, we now provide details on the posterior approximation in Section: Two approximations for computing the epistemic covariance.

      Questionable Assumptions: The comment does not mention any of the assumptions made during the approximation process. The validity of any approximation heavily depends on the underlying assumptions, and their omission suggests a lack of thorough analysis. Failing to acknowledge these assumptions leaves room for doubt regarding the accuracy and relevance of the approximation.

      Answer: We are not entirely sure which assumptions the reviewer is referring to here. The main assumption we can think of is that getting within X% of the optimal model is a good enough approximation. We have specifically discussed this assumption and tested multiple values of X. While it would have been great to have X = 0, this is unrealistic for retrospective studies. For active learning, the main question is how many experiments can be saved while obtaining similar results, and the assumption we used is essentially the definition of 'similar'. We have now added this to the Discussion.
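
      To make the criterion concrete, here is a toy computation of such a cutoff (the RMSE trace is invented purely for illustration):

          import numpy as np

          # Invented learning curve: test RMSE after each active learning round.
          rmse = np.array([1.00, 0.80, 0.66, 0.58, 0.54, 0.52, 0.51, 0.505, 0.50])

          X = 0.05                                  # "similar" = within 5% of the best RMSE
          target = rmse.min() * (1 + X)
          cutoff = int(np.argmax(rmse <= target))   # first round reaching the target
          print(f"within {X:.0%} of optimal after round {cutoff}")  # -> round 5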

      Inadequate Validation: There is no mention of any validation measures or techniques used to assess the accuracy and reliability of the approximated posterior distribution. Without proper validation, it is impossible to determine whether the approximation provides a reasonable representation of the true posterior. The absence of validation raises concerns about the potential biases or errors introduced by the approximation process.

      Answer: We sincerely appreciate your concern regarding the validation of the approximated posterior distribution. We acknowledge that our initial submission might not have clearly highlighted our validation strategy. It is, of course, very hard to determine the accuracy of the distribution our model learns, since such a distribution cannot be directly inferred from experiments (there is no 'ground truth'). Instead, we use an indirect method to determine the accuracy. Specifically, we conducted retrospective experiments using the learned distribution, in which we indirectly validated our approximation by measuring the error of the respective method. The results from these retrospective experiments provided evidence for the accuracy and reliability of our approximation in representing the true posterior distribution. We now emphasize this in Methods.

      Uncertainty Quantification: The comment does not discuss the quantification of uncertainty associated with the approximated posterior distribution. Properly characterizing the uncertainty is crucial in statistical inference and decision-making. Neglecting this aspect undermines the usefulness and applicability of the approximation results.

      Answer: Thank you for pointing out the importance of characterizing uncertainty in statistical inference and decision-making, a sentiment with which we wholeheartedly agree. In our work, we have indeed addressed the quantification of uncertainty associated with the approximated posterior distribution. Specifically, we utilized Monte Carlo Dropout (MC Dropout) as our method of choice. MC Dropout is a widely recognized and employed technique in the neural networks domain to approximate the posterior distribution, and it offers an efficient way to estimate model uncertainty without requiring any changes to the existing network architecture [1, 2]. In the revised version, we provide a more detailed discussion on the use of Monte Carlo Dropout in our methodology and its implications for characterizing uncertainty.
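
      As an illustration of the general technique (a minimal PyTorch sketch of MC Dropout on a toy regression network with made-up layer sizes; this is not the authors' ALIEN implementation):

          import torch
          import torch.nn as nn

          # Toy regressor with dropout; the sizes are illustrative.
          model = nn.Sequential(
              nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.2),
              nn.Linear(64, 1),
          )

          def mc_dropout_predict(model, x, n_samples=50):
              model.train()  # keep Dropout stochastic at inference time
              with torch.no_grad():
                  samples = torch.stack([model(x) for _ in range(n_samples)])
              # Predictive mean and (epistemic) variance across stochastic passes.
              return samples.mean(dim=0), samples.var(dim=0)

          x = torch.randn(32, 128)   # e.g., a batch of 32 featurized molecules
          mean, var = mc_dropout_predict(model, x)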

      Comparison with Gold Standard: There is no mention of comparing the approximated posterior distribution with a gold standard or benchmark. Failing to provide such a comparison leaves doubts about the performance and accuracy of the approximation method. A lack of benchmarking makes it difficult to ascertain the superiority or inferiority of the approximation technique employed.

      Answer: As noted above, it is impossible to find gold-standard information for the uncertainty distribution. It is not even clear to us how such a gold standard could be experimentally determined, since it is a function of a specific model and data. If the reviewer is aware of such a gold standard, we would be happy to test it. Instead, in our study, we opted to benchmark our results against state-of-the-art batch active learning methods, which also rely on uncertainty prediction (such uncertainty prediction is at the heart of any active learning method, as we discuss). The results clearly indicate that our method outperforms prior methods, though we agree that this is only an indirect way to validate the uncertainty approximation.

      Reviewer No.2 (Recommendations For The Authors)

      Recommendation 1:

      The text is kind of messy: there are two results sections, for example. It seems that part of the text was duplicated. Please correct it.

      Answer: We want to thank the reviewer for pointing this out. These were typos, and we have fixed them accordingly.

      Recommendation 2:

      Text in figures is very small and difficult to read. Please redraw the figures, increasing the font size: 10-12pt is ideal in comparison with the main text.

      Answer: We want to thank the reviewer for this comment and we have made the graphics larger.

      Recommendation 3:

      Please include specific links to data availability instead of just stating that it is available at the Sanofi-Public repository.

      Answer: We want to thank the reviewer for this comment and added the links and data to the Sanofi Github page listed in the paper.

      Recommendation 4:

      What are the descriptors used to train the models?

      Answer: We represented the molecules as molecular graphs using the MolGraphConvFeaturizer from the DeepChem library. We now explicitly mention this in Methods.
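
      For concreteness, that featurization step looks roughly like this (a minimal sketch with illustrative SMILES strings, assuming a recent DeepChem version):

          from deepchem.feat import MolGraphConvFeaturizer

          featurizer = MolGraphConvFeaturizer(use_edges=True)
          # Illustrative SMILES; the actual molecules come from the paper's datasets.
          graphs = featurizer.featurize(["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"])
          print(graphs[0])  # a GraphData object with node and edge features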

      Recommendation 5:

      Regarding the quality of the models, I strongly suggest two approaches instead of using only RMSE as metrics of models’ performance. I recommend using the most metrics as possible as reported by Gramatica (https://doi.org/10.1021/acs.jcim.6b00088). I also recommend somehow comparing the increment on the dataset diversity according to the employed descriptors (applicability domain) as a measurement to further applications on the unseen molecules.

      Answer: We want to thank the reviewer for these great suggestions. As suggested, we added new comparison metrics to the Supplement:

      • Distribution plot for the range of the Y values (Figure 8)

      • Clustering of the data sets represented as fingerprints (Supplementary Figures 5 and 6)

      • Retrospective experiments with the Spearman correlation coefficient (Supplementary Figures 2, 3 and 4)

      I suggest also a better characterization of datasets including the nature and range of the Y variable, the source of data in terms of experimentation, and chemical (structural and physicochemical) comparison of samples within each dataset.

      Answer: As noted above in response to a similar comment by Reviewer 1, we have added more detailed information about the different experiments we tested to Supporting Methods.

      References

      [1] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1050–1059, New York, New York, USA, 20–22 Jun 2016. PMLR.

      [2] N.D. Lawrence. Variational Inference in Probabilistic Models. University of Cambridge, 2001.

    2. Reviewer #1 (Public Review):

      The authors present a study focused on addressing the key challenge in drug discovery, which is the optimization of absorption and affinity properties of small molecules through in silico methods. They propose active learning as a strategy for optimizing these properties and describe the development of two novel active learning batch selection methods. The methods are tested on various public datasets with different optimization goals and sizes, and new affinity datasets are curated to provide up-to-date experimental information. The authors claim that their active learning methods outperform existing batch selection methods, potentially reducing the number of experiments required to achieve the same model performance. They also emphasize the general applicability of their methods, including compatibility with popular packages like DeepChem.

      Strengths:

      Relevance and Importance: The study addresses a significant challenge in the field of drug discovery, highlighting the importance of optimizing absorption and affinity properties of small molecules through in silico methods. This topic is of great interest to researchers and pharmaceutical industries.

      Novelty: The development of two novel active learning batch selection methods is a commendable contribution. The study also adds value by curating new affinity datasets that provide chronological information on state-of-the-art experimental strategies.

      Comprehensive Evaluation: Testing the proposed methods on multiple public datasets with varying optimization goals and sizes enhances the credibility and generalizability of the findings. The focus on comparing the performance of the new methods against existing batch selection methods further strengthens the evaluation.

      Weaknesses:

      Lack of Technical Details: The feedback lacks specific technical details regarding the developed active learning batch selection methods. Information such as the underlying algorithms, implementation specifics, and key design choices should be provided to enable readers to understand and evaluate the methods thoroughly.

      Evaluation Metrics: The feedback does not mention the specific evaluation metrics used to assess the performance of the proposed methods. The authors should clarify the criteria employed to compare their methods against existing batch selection methods and demonstrate the statistical significance of the observed improvements.

      Reproducibility: While the authors claim that their methods can be used with any package, including DeepChem, no mention is made of providing the necessary code or resources to reproduce the experiments. Including code repositories or detailed instructions would enhance the reproducibility and practical utility of the study.

      Suggestions for Improvement:

      Elaborate on the Methodology: Provide an in-depth explanation of the two active learning batch selection methods, including algorithmic details, implementation considerations, and any specific assumptions made. This will enable readers to better comprehend and evaluate the proposed techniques.

      Clarify Evaluation Metrics: Clearly specify the evaluation metrics employed in the study to measure the performance of the active learning methods. Additionally, conduct statistical tests to establish the significance of the improvements observed over existing batch selection methods.

      Enhance Reproducibility: To facilitate the reproducibility of the study, consider sharing the code, data, and resources necessary for readers to replicate the experiments. This will allow researchers in the field to validate and build upon your work more effectively.

      Conclusion:

      The authors' study on active learning methods for optimizing drug discovery presents an important and relevant contribution to the field. The proposed batch selection methods and curated affinity datasets hold promise for improving the efficiency of drug discovery processes. However, to strengthen the study, it is crucial to provide more technical details, clarify evaluation metrics, and enhance reproducibility by sharing code and resources. Addressing these limitations will further enhance the value and impact of the research.

    1. Author Response

      The following is the authors’ response to the latest reviews.

      A revised version of the manuscript models "slope-based" excitability changes in addition to "threshold-based" changes. This serves to address the above concern that, as constructed here, changes in excitability threshold are not distinguishable from changes in input. However, it remains unclear what the model would do should only a subset of neurons receive a given, fixed input. In that case, are excitability changes sufficient to induce drift? This remains an important question that is not addressed by the paper in its current form.

      Thank you for this important point. In the simulation of two memories (Fig. S6), we stimulated half of the neural population for each of the two memories. We therefore also showed that drift happens when only a subset of neurons is stimulated.


      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Current experimental work reveals that brain areas implicated in episodic and spatial memory have a dynamic code, in which activity representing familiar events/locations changes over time. This paper shows that such reconfiguration is consistent with underlying changes in the excitability of cells in the population, which ties these observations to a physiological mechanism.

      Delamare et al. use a recurrent network model to consider the hypothesis that slow fluctuations in intrinsic excitability, together with spontaneous reactivations of ensembles, may cause the structure of the ensemble to change, consistent with the phenomenon of representational drift. The paper focuses on three main findings from their model: (1) fluctuations in intrinsic excitability lead to drift, (2) this drift has a temporal structure, and (3) a readout neuron can track the drift and continue to decode the memory. This paper is relevant and timely, and the work addresses questions of both a potential mechanism (fluctuations in intrinsic excitability) and purpose (time-stamping memories) of drift.

      The model used in this study consists of a pool of 50 all-to-all recurrently connected excitatory neurons with weights changing according to a Hebbian rule. All neurons receive the same input during stimulation, as well as global inhibition. The population has heterogeneous excitability, and each neuron's excitability is constant over time apart from a transient increase on a single day. The neurons are divided into ensembles of 10 neurons each, and on each day, a different ensemble receives a transient increase in the excitability of each of its neurons, with each neuron experiencing the same amplitude of increase. Each day for four days, repetitions of a binary stimulus pulse are applied to every neuron.

      The modeling choices focus on the parameter of interest, the excitability, and other details are generally kept as straightforward as possible. That said, I wonder if certain aspects may be overly simple. The extent of the work already performed, however, does serve the intended purpose, and so I think it would be sufficient for the authors to comment on these choices rather than to take more space in this paper to actually implement these choices. What might happen were more complex modeling choices made? What is the justification for the choices that are made in the present work?

      The two specific modeling choices I question are (1) the excitability dynamics and (2) the input stimulus. The ensemble-wide synchronous and constant-amplitude excitability increase, followed by a return to baseline, seems to be a very simplified picture of the dynamics of intrinsic excitability. At the very least, justification for this simplified picture would benefit the reader, and I would be interested in the authors' speculation about how a more complex and biologically realistic dynamics model might impact the drift in their network model. Similarly, the input stimulus being binary means that, on the single-neuron level, the only type of drift that can occur is a sort of drop-in/drop-out drift; this choice excludes the possibility of a neuron maintaining significant tuning to a stimulus but changing its preferred value. How would the use of a continuous input variable influence the results?

      (1) In our model, neurons compete for allocation to the memory ensemble: neurons with higher excitability tend to be preferentially allocated, while neurons with lower excitability do not respond to the stimulus. Because relative, but not absolute, excitability biases this competition, we suggest that the exact distribution of excitability would not qualitatively affect the results. On the other hand, the results might vary if excitability depended on the activity of the neurons, as previously reported experimentally (Cai 2016, Rashid 2016, Pignatelli 2019). An increase in excitability following neural activity might induce higher correlation among ensembles on consecutive days, decreasing the drift.

      (2) We thank the reviewer for this very good point. Indeed, two recent studies (Geva 2023, Khatib 2023) have highlighted distinct mechanisms for a drift of the mean firing rate and of the tuning curve. We extended the last part of the discussion to include this point: “Finally, we intended to model drift in the firing rates, as opposed to a drift in the tuning curve of the neurons. Recent studies suggest that drifts in the mean firing rate and tuning curve arise from two different mechanisms [33, 34]. Experience drives a drift in neurons' tuning curves while the passage of time drives a drift in neurons' firing rates. In this sense, our study is consistent with these findings by providing a possible mechanism for a drift in the mean firing rates of the neurons driven by dynamical excitability. Our work suggests that drift can depend on any experience having an impact on excitability dynamics, such as exercise, as previously shown experimentally [9, 35], but also neurogenesis [9, 31, 36], sleep [37] or an increase in dopamine level [38].”

      Result (1): Fluctuations in intrinsic excitability induce drift

      The two choices highlighted above appear to lead to representations that never recruit the neurons in the population with the lowest baseline excitability (Figure 1b: it appears that only 10 neurons ever show high firing rates) and produce networks with very strong bidirectional coupling between this subset of neurons and weak coupling elsewhere (Figure 1d). This low recruitment rate may not necessarily be problematic, but it stands out as a point that should at least be commented on. The fact that only 10 neurons (20% of the population) are ever recruited in a representation also raises the question of what would happen if the model were scaled up to include more neurons.

      This is a very good point. To test how the model depends on the network size, we plotted the drift index against the size of the ensemble. With the current implementation, we did not observe a significant correlation between the drift rate and the size of the initial ensemble (Figure S2).

      Author response image 1.

      The rate of the drift does not depend on the size of the engram. Drift rate against the size of the original engram. Each dot shows one simulation (Methods). n = 100 simulations.

      Result (2): The observed drift has a temporal structure

      The authors then demonstrate that the drift has a temporal structure (i.e., that activity is informative about the day on which it occurs), with methods inspired by Rubin et al. (2015). Rubin et al. (2015) compare single-trial activity patterns on a given session with full-session activity patterns from each session. In contrast, Delamare et al. here compare full-session patterns with baseline excitability (E = 0) patterns. This point of difference should be motivated. What does a comparison to this baseline excitability activity pattern tell us? The ordinal decoder, which decodes the session order, gives very interesting results: that an intermediate amplitude E of excitability increase maximizes this decoder's performance. This point is also discussed well by the authors. As a potential point of further exploration, the use of baseline excitability patterns in the day decoder had me wondering how the ordinal decoder would perform with these baseline patterns.

      This is a good point. Here, we aimed to dissociate the role of excitability from that of the recurrent currents. We introduced a time decoder that compares the pattern with baseline-excitability (E = 0) patterns, in order to test whether the temporal information was encoded in the ensemble, i.e., in the recurrent weights. By contrast, because the neural activity is by construction biased towards excitability, a time decoder performed on the full session would work in a trivial way.
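
      A minimal sketch of such a correlation-based day decoder (our illustration of the general scheme with toy patterns; not the authors' code):

          import numpy as np

          def decode_day(pattern, reference_patterns):
              """Return the index of the baseline-excitability reference pattern
              that best correlates with the observed activity pattern."""
              corrs = [np.corrcoef(pattern, ref)[0, 1] for ref in reference_patterns]
              return int(np.argmax(corrs))

          rng = np.random.default_rng(0)
          references = [rng.random(50) for _ in range(4)]        # one pattern per day
          observed = references[2] + 0.1 * rng.normal(size=50)   # noisy day-3 pattern
          print(decode_day(observed, references))                # -> 2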

      Result (3): A readout neuron can track drift

      The authors conclude their work by connecting a readout neuron to the population with plastic weights evolving via a Hebbian rule. They show that this neuron can track the drifting ensemble by adjusting its weights. These results are shown very neatly and effectively and corroborate existing work that they cite very clearly.

      Overall, this paper is well-organized, offers a straightforward model of dynamic intrinsic excitability, and provides relevant results with appropriate interpretations. The methods could benefit from more justification of certain modeling choices, and/or an exploration (either speculative or via implementation) of what would happen with more complex choices. This modeling work paves the way for further explorations of how intrinsic excitability fluctuations influence drifting representations.

      Reviewer #2 (Public Review):

      In this computational study, Delamare et al identify slow neuronal excitability as one mechanism underlying representational drift in recurrent neuronal networks and show that the drift is informative about the temporal structure of the memory and when it was formed. The manuscript is very well written and addresses a timely as well as important topic in current neuroscience, namely the mechanisms that may underlie representational drift.

      The study is based on an all-to-all recurrent neuronal network with synapses following Hebbian plasticity rules. On the first day, a cue-related representation is formed in that network, and on the next 3 days it is recalled spontaneously or due to a memory-related cue. One major observation is that representational drift emerges day by day based on intrinsic excitability, with the most excitable cells showing the highest probability of replacing previously active members of the assembly. By using a day-decoder, the authors state that they can infer the order in which the reactivation of cell assemblies happened, but only if the excitability state was not too high. By applying a read-out neuron, the authors observed that this cell can track the drifting ensemble, which is based on changes of the synaptic weights across time. The few questions that emerged and could be addressed either theoretically or in the discussion are as follows:

      1. Would similar results be obtained if, instead of all-to-all recurrent connections, more realistic connectivity profiles, such as those estimated for CA1 and CA3, had been modeled?

      This is a very interesting point. We performed further simulations to show that the results are not dependent on the exact structure of the network. In particular, we show that all-to-all connectivity is not required to observe a drift of the ensemble. We found similar results when the recurrent weights matrix was made sparse (Fig. S4a-c, Methods). Similarly to all-to-all connectivity, we found that the ensemble is informative about its temporal history (Fig. S4d) and that an output neuron can decode the ensemble continuously (Fig. S4e).

      Author response image 2.

      Sparse recurrent connectivity shows similar drifting behavior to all-to-all connectivity. The same simulation protocol as Fig. 1 was used while the recurrent weights matrix was made 50% sparse (Methods). a) Firing rates of the neurons across time. The red traces correspond to neurons belonging to the first assembly, namely those that have a firing rate higher than the active threshold after the first stimulation. The black bars show the stimulation and the dashed line shows the active threshold. b) Recurrent weights matrices after each of the four stimuli show the drifting assembly. c) Correlation of the patterns of activity between the first day and every other day. d) Student's t-test value of the ordinal time decoder, for the real (blue) and shuffled (orange) data and for different amplitudes of excitability E. e) Center of mass of the distribution of the output weights (Methods) across days. c-e) Data are shown as mean ± s.e.m. for n = 10 simulations.

      2. How does the number of excited cells that could potentially contribute to an engram influence the representational drift and the decoding quality?

      This is indeed a very good question. We did not observe a significant correlation between the drift rate and the size of the initial ensemble (Fig. S2).

      Author response image 3.

      The rate of the drift does not depend on the size of the engram. Drift rate against the size of the original engram. Each dot shows one simulation (Methods). n = 100 simulations.

      3. How does the rate of the drift influence the quality of the readout from the read-out neuron?

      We thank the reviewer for this interesting question. We introduced a measure of the "read-out quality" and plotted this value against the rate of the drift. We found a modest correlation between the two quantities: the read-out quality decreases as the rate of the drift increases.

      Author response image 4.

      The quality of the read-out decreases with the rate of the drift. Read-out quality computed on the firing rate of the output neuron against the rate of the drift (Methods). Each dot shows one simulation. n = 100 simulations.

      Reviewer #3 (Public Review):

      The authors explore an important question concerning the underlying mechanism of representational drift, which despite intense recent interest remains obscure. The paper explores the intriguing hypothesis that drift may reflect changes in the intrinsic excitability of neurons. The authors set out to provide theoretical insight into this potential mechanism.

      They construct a rate model with all-to-all recurrent connectivity, in which recurrent synapses are governed by a standard Hebbian plasticity rule. This network receives a global input, constant across all neurons, which can be varied with time. Each neuron also is driven by an "intrinsic excitability" bias term, which does vary across cells. The authors study how activity in the network evolves as this intrinsic excitability term is changed.
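
      A compact sketch of this class of model (our reconstruction from the description above; the parameter values, activation function, and inhibition term are assumptions, not the authors' code):

          import numpy as np

          N, T, dt, tau, eta = 50, 2000, 0.1, 1.0, 0.005
          W = np.zeros((N, N))    # recurrent weights, shaped by a Hebbian rule
          r = np.zeros(N)         # firing rates
          eps = np.zeros(N)
          eps[:10] = 1.0          # excitability bias for one 10-neuron ensemble

          def phi(x):
              return np.tanh(np.maximum(x, 0.0))   # saturating rectified activation

          for t in range(T):
              delta = 1.0 if (t % 500) < 50 else 0.0   # global stimulus pulses
              drive = W @ r + delta + eps - r.mean()   # crude global inhibition
              r += dt / tau * (-r + phi(drive))
              W += eta * np.outer(r, r)                # Hebbian weight update
              np.fill_diagonal(W, 0.0)
              W = np.clip(W, 0.0, 0.2)                 # keep weights bounded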

      They find that after initial stimulation of the network, those neurons where the excitability term is set high become more strongly connected and are in turn more responsive to the input. Each day the subset of neurons with high intrinsic excitability is changed, and the network's recurrent synaptic connectivity and responsiveness gradually shift, such that the new high intrinsic excitability subset becomes both more strongly activated by the global input and also more strongly recurrently connected. These changes result in drift, reflected by a gradual decrease across time in the correlation of the neuronal population vector response to the stimulus.

      The authors are able to build a classifier that decodes the "day" (i.e. which subset of neurons had high intrinsic excitability) with perfect accuracy. This is despite the fact that the excitability bias during decoding is set to 0 for all neurons, and so the decoder is really detecting those neurons with strong recurrent connectivity, and in turn strong responses to the input. The authors show that it is also possible to decode the order in which different subsets of neurons were given high intrinsic excitability on previous "days". This second result depends on the extent by which intrinsic excitability was increased: if the increase in intrinsic excitability was either too high or too low, it was not possible to read out any information about past ordering of excitability changes.

      Finally, using another Hebbian learning rule, the authors show that an output neuron, whose activity is a weighted sum of the activity of all neurons in the network, is able to read out the activity of the network. What this means specifically, is that although the set of neurons most active in the network changes, the output neuron always maintains a higher firing rate than a neuron with randomly shuffled synaptic weights, because the output neuron continuously updates its weights to sample from the highly active population at any given moment. Thus, the output neuron can readout a stable memory despite drift.
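
      The read-out scheme can be caricatured in a few lines (a toy sketch under our own assumptions about the drifting rates and the normalization step; not the authors' code):

          import numpy as np

          N, days = 50, 4
          w_out = np.ones(N) / N                     # plastic read-out weights
          for day in range(days):
              rates = np.zeros(N)
              rates[10 * day:10 * (day + 1)] = 1.0   # a different ensemble fires each day
              for _ in range(100):                   # repeated reactivations within a day
                  y = w_out @ rates                  # output neuron: weighted sum of rates
                  w_out += 0.05 * y * rates          # Hebbian step toward active cells
                  w_out /= w_out.sum()               # normalization bounds the weights
              print(f"day {day}: readout = {w_out @ rates:.3f}")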

      Strengths:

      The authors are clear in their description of the network they construct and in their results. They convincingly show that when they change their "intrinsic excitability term", upon stimulation, the Hebbian synapses in their network gradually evolve, and the combined synaptic connectivity and altered excitability result in drifting patterns of activity in response to an unchanging input (Fig. 1, Fig. 2a). Furthermore, their classification analyses (Fig. 2) show that information is preserved in the network, and their readout neuron successfully tracks the active cells (Fig. 3). Finally, the observation that only a specific range of excitability bias values permits decoding of the temporal structure of the history of intrinsic excitability (Fig. 2f and Figure S1) is interesting, and as the authors point out, not trivial.

      Weaknesses:

      1. The way the network is constructed, there is no formal difference between what the authors call "input", Δ(t), and what they call "intrinsic excitability" Ɛ_i(t) (see Equation 3). These are two separate terms that are summed (Eq. 3) to define the rate dynamics of the network. The authors could have switched the names of these terms: Δ(t) could have been considered a global "intrinsic excitability term" that varied with time and Ɛ_i(t) could have been the external input received by each neuron i in the network. In that case, the paper would have considered the consequence of "slow fluctuations of external input" rather than "slow fluctuations of intrinsic excitability", but the results would have been the same. The difference is therefore semantic. The consequence is that this paper is not necessarily about "intrinsic excitability", rather it considers how a Hebbian network responds to changes in excitatory drive, regardless of whether those drives are labeled "input" or "intrinsic excitability".
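
      To make the symmetry explicit: a generic rate equation of the kind described (this is a paraphrase of the structure the reviewer attributes to Eq. 3, not a quotation of it) sums both terms inside the same nonlinearity,

      $$\tau \dot{r}_i = -r_i + \phi\Big(\sum_j W_{ij}\, r_j + \Delta(t) + \varepsilon_i(t)\Big),$$

      so swapping the labels of $\Delta(t)$ and $\varepsilon_i(t)$ leaves the dynamics unchanged; only the biological interpretation differs.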

      This is a very good point. We performed further simulations to model "slope-based", instead of "threshold-based", changes in excitability (Fig. S5a, Methods). In this new definition of excitability, we changed the slope of the activation function, which is initially sampled from a random distribution. When excitability was varied in this way, we found results very similar to those obtained when excitability was varied as the threshold of the activation function (Fig. S5b-d). We likewise found that the ensemble is informative about its temporal history (Fig. S5e) and that an output neuron can decode the ensemble continuously (Fig. S5f).
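
      The two definitions can be pictured as two knobs on the activation function; a minimal sketch (the tanh/rectified forms and values below are illustrative assumptions, not the forms in the paper's Methods):

      ```python
      import numpy as np

      def rate_threshold(x, theta=0.5):
          # "threshold-based" excitability: a more excitable neuron has a lower threshold
          return np.tanh(np.maximum(x - theta, 0.0))

      def rate_slope(x, gain=1.0):
          # "slope-based" excitability: a more excitable neuron has a steeper gain
          return np.tanh(gain * x)

      x = np.linspace(0.0, 2.0, 5)
      print(rate_threshold(x, theta=0.2))  # increased excitability via a lowered threshold
      print(rate_slope(x, gain=2.0))       # increased excitability via an increased slope
      ```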

      Author response image 5.

      Change of excitability modeled as a variable slope of the input-output function shows drifting behavior similar to that obtained with a change in the threshold. The same simulation protocol as Fig. 1 was used, while the excitability changes were modeled as a change in the activation function slope (Methods). a) Schema showing two different ways of defining excitability, as a threshold (top) or slope (bottom) of the activation function. Each line shows one neuron, and darker lines correspond to neurons with increased excitability. b) Firing rates of the neurons across time. The red traces correspond to neurons belonging to the first assembly, namely those that have a firing rate higher than the active threshold after the first stimulation. The black bars show the stimulation and the dashed line shows the active threshold. c) Recurrent weight matrices after each of the four stimuli show the drifting assembly. d) Correlation of the patterns of activity between the first day and every other day. e) Student's t-test t-value of the ordinal time decoder, for the real (blue) and shuffled (orange) data and for different amplitudes of excitability E. f) Center of mass of the distribution of the output weights (Methods) across days. d-f) Data are shown as mean ± s.e.m. for n = 10 simulations.

      1. Given how the learning rule that defines input to the readout neuron is constructed, it is trivial that this unit responds to the most active neurons in the network, more so than a neuron assigned random weights. What would happen if the network included more than one "memory"? Would it be possible to construct a readout neuron that could classify two distinct patterns? Along these lines, what if there were multiple, distinct stimuli used to drive this network, rather than the global input the authors employ here? Does the system, as constructed, have the capacity to provide two distinct patterns of activity in response to two distinct inputs?

      This is an interesting point. In order to model multiple memories, we introduced non-uniform feedforward inputs, defining different "contexts" (Methods). We adapted our model so that two contexts target two random sub-populations in the network. We also introduced a second output neuron to decode the second memory. The simulation protocol was adapted so that each of the two contexts is stimulated every day (Fig. S6a). We found that the network is able to store two ensembles that drift independently (Fig. S6 and S7a). We were also able to decode temporal information from the patterns of activity of both ensembles (Fig. S7b). Finally, both memories could be decoded independently using two output neurons (Fig. S7c and d).

      Author response image 6.

      Two distinct ensembles can be encoded and drift independently. a) and b) Firing rates of the neurons across time. The red traces in panel b) correspond to neurons belonging to the first assembly and the green traces to the second assembly on the first day. They correspond to neurons having a firing rate higher than the active threshold after the first stimulation of each assembly. The black bars show the stimulation and the dashed line shows the active threshold. c) Recurrent weight matrices after each of the eight stimuli, showing the drifting of the first (top) and second (bottom) assembly.

      Author response image 7.

      The two ensembles are informative about their temporal history and can be decoded using two output neurons. a) Correlation of the patterns of activity between the first day and every other day, for the first assembly (red) and the second assembly (green). b) Student's t-test t-value of the ordinal time decoder, for the first (red, left) and second ensemble (green, right) for different amplitudes of excitability E. Shuffled data are shown in orange. c) Center of mass of the distribution of the output weights (Methods) across days for the first (w1out, red) and second (w2out, green) ensemble. a-c) Data are shown as mean ± s.e.m. for n = 10 simulations. d) Output neuron firing rates across time for the first ensemble (y1, top) and the second ensemble (y2, bottom). The red and green traces correspond to the real output. The dark blue, light blue and yellow traces correspond to the cases where the output weights were randomly shuffled for every time point after presentation of the first, second and third stimulus, respectively.

      Impact:

      Defining the potential role of changes in intrinsic excitability in drift is fundamental. Thus, this paper represents a potentially important contribution. Unfortunately, given the way the network employed here is constructed, it is difficult to tease apart the specific contribution of changing excitability from changing input. This limits the interpretability and applicability of the results.

    1. It's low-code, performant, responsive, and open source. Because it's so easy to get started, it is the perfect solution for quick landing pages, business accelerators, and event sites.
    2. Neodigm 55 Low Code UX micro-library Scott C. Krause | Friday, Nov 25, 2022
    1. I code
      • performant,
      • functional,
      • testable,
      • future-proof, and
      • trusted

      what? - JavaScript, TypeScript, Vue, and SolidJS.

      I've enchanted the UX for Abbott Laboratories, AbbVie, CDW, Corporate Express, Hyundai, Microsoft, and Sears.

    1. “1 apple.”

      One way to solve the problem of the apples differing in size would be to measure their dimensions, taking into consideration each apple's circumference and diameter, or even to weigh them. However, for simplicity, we can establish a standard size for granny smith apples given out during snack time: 8 ounces per apple. This standardized approach ensures that each person receives about the same amount of apple per bite.

      We could connect this to programming using the min() and max() functions, using each apple's weight and assigning it a number based on the measurement.

      ```python
      apples = [8, 8, 9, 8, 7]

      maximum = max(apples)
      print(maximum)

      minimum = min(apples)
      print(minimum)

      filtered = [apple for apple in apples if apple == 8]
      print(filtered)
      ```

      This code finds the minimum and maximum apple sizes and then filters out any apple that doesn't weigh 8 ounces. It should help the teachers know which apples to keep or throw out; or, if you want to fool a kid, just grab an apple of about the same size and cut it into smaller pieces. Problem solved.

    2. As you can see in the apple example, any time we turn something into data, we are making a simplification.1 If we are counting the number of something, like apples, we are deciding that each one is equivalent.

      This is especially important once you begin to compile your data onto a single sheet. I remember in Java how we had to simplify our work down to debug each strand of code under each variable. This apple metaphor makes sense to me because of that experience.

    1. the specific metrics LinkedIn uses. Here are examples of those which the company focuses on:

      • Developer Net User Satisfaction (NSAT) measures how happy developers are overall with LinkedIn's development systems. It's measured on a quarterly basis.
      • Developer Build Time (P50 and P90) measures in seconds how long developers spend waiting for their builds to finish locally during development.
      • Code Reviewer Response Time (P50 and P90) measures how long it takes, in business hours, for code reviewers to respond to each code review update from the author.
      • Post-Commit CI Speed (P50 and P90) measures how long it takes, in minutes, for each commit to get through the continuous integration (CI) pipeline.
      • CI Determinism is the opposite of test flakiness. It's the likelihood a test suite's result will be valid and not a flake.
      • Deployment Success Rate measures how often deployments to production succeed.

      The metrics LinkedIn tracks.
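
      Since several of these metrics are reported as P50 and P90, here is a quick reminder of what those percentiles mean (a generic sketch with made-up build times, nothing LinkedIn-specific):

      ```python
      import numpy as np

      build_times_s = [42, 55, 48, 310, 51, 47, 60, 44, 95, 50]  # hypothetical local builds
      p50, p90 = np.percentile(build_times_s, [50, 90])
      print(f"P50 = {p50:.0f}s, P90 = {p90:.0f}s")
      ```

      P50 tracks the typical experience, while P90 exposes the slow tail that developers actually feel, which is why both are watched.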

    1. This involves a more substantial investment (time and code) than calling an API, which gives Mistral an edge in retaining users,

      is this true though? HF is making transformer access a standard. plus popularity of claude and gpt suggests people cannot be bothered to install stuff

    1. The highlighted code:

      ```python
      t_monthly = ds['t2m'].groupby('time.month').mean(dim='time') - 273.15
      max_temp = t_monthly.max()
      min_temp = t_monthly.min()
      plt.figure(figsize=(12, 24))
      for i in range(1, 13):
          ax = plt.subplot(6, 2, i, projection=ccrs.PlateCarree())
          t_monthly.sel(month=i).plot(ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False,
                                      vmin=min_temp, vmax=max_temp, cmap='RdBu_r')
          ax.coastlines()
          ax.set_title('Month: {}'.format(i))
      plt.tight_layout()
      plt.show()
      ```

      You have only plotted the monthly average temperature (t_monthly). It is nice that you used the same color scale for every plot (which helps to compare the months); however:

      • a colorbar legend would have been nice, and
      • an additional plot that directly shows max_temp - min_temp would have been great.

      Example code for that would be:

      ```python
      t2_cycle = ds.t2m.groupby('time.month').mean() - 273.15
      ax = plt.axes(projection=ccrs.Robinson())
      (t2_cycle.max(dim='month') - t2_cycle.min(dim='month')).plot(
          ax=ax, transform=ccrs.PlateCarree(), cbar_kwargs={'label': '$\overline{T}$ [K]'})
      ax.coastlines()
      ax.gridlines()
      ```

    1. In case you need to handle a text-only multipart form, you should use the .none() method:

      ```javascript
      const express = require('express')
      const app = express()
      const multer = require('multer')
      const upload = multer()

      app.post('/profile', upload.none(), function (req, res, next) {
        // req.body contains the text fields
      })
      ```

      Here's an example of how multer is used in an HTML form. Take special note of the enctype="multipart/form-data" and name="uploaded_file" fields:

      ```html
      <form action="/stats" enctype="multipart/form-data" method="post">
        <div class="form-group">
          <input type="file" class="form-control-file" name="uploaded_file">
          <input type="text" class="form-control" placeholder="Number of speakers" name="nspeakers">
          <input type="submit" value="Get me the stats!" class="btn btn-default">
        </div>
      </form>
      ```

      Then in your javascript file you would add these lines to access both the file and the body. It is important that you use the name field value from the form in your upload function. This tells multer which field on the request it should look for the files in. If these fields aren't the same in the HTML form and on your server, your upload will fail:

      ```javascript
      const multer = require('multer')
      const upload = multer({ dest: './public/data/uploads/' })

      app.post('/stats', upload.single('uploaded_file'), function (req, res) {
        // req.file is the name of your file in the form above, here 'uploaded_file'
        // req.body will hold the text fields, if there were any
        console.log(req.file, req.body)
      })
      ```

      Certainly! Let's break down the provided code and example in simpler terms:

      Using Multer for a Text-Only Multipart Form:

      1. HTML Form: If you're dealing with a text-only multipart form, where you don't have file uploads but only text fields, you should use upload.none().

      ```html
      <form action="/profile" enctype="multipart/form-data" method="post">
        <div class="form-group">
          <input type="file" class="form-control-file" name="uploaded_file">
          <input type="text" class="form-control" placeholder="Number of speakers" name="nspeakers">
          <input type="submit" value="Get me the stats!" class="btn btn-default">
        </div>
      </form>
      ```

      Note the enctype="multipart/form-data" in the form, even though there is no file input. This is required for forms that might include file inputs.

      2. Server-Side (Using Express and Multer): In your Node.js server code:

      ```javascript
      const express = require('express');
      const multer = require('multer');
      const upload = multer();

      const app = express();

      // Handling a text-only multipart form
      app.post('/profile', upload.none(), function (req, res, next) {
        // req.body contains the text fields
        console.log(req.body);
      });

      // Handling a text and file multipart form
      const fileUpload = multer({ dest: './public/data/uploads/' });
      app.post('/stats', fileUpload.single('uploaded_file'), function (req, res) {
        // req.file is the name of your file in the form, here 'uploaded_file'
        // req.body will hold the text fields, if any
        console.log(req.file, req.body);
      });

      app.listen(3000, () => {
        console.log('Server is running on port 3000');
      });
      ```

      Summary:

      • The HTML form includes both a file input (<input type="file" name="uploaded_file">) and a text input (<input type="text" name="nspeakers">).
      • Even if there's no file input, using enctype="multipart/form-data" is necessary when the form might include file inputs.
      • On the server side, upload.none() middleware is used for handling text-only multipart forms.
      • If there is a mix of text and file inputs, you can use multer with fileUpload.single('uploaded_file') to handle both files and text fields.
      • req.file will contain the uploaded file, and req.body will hold the text fields.
    1. res.json([body]): Sends a JSON response. This method sends a response (with the correct content-type) that is the parameter converted to a JSON string using JSON.stringify(). The parameter can be any JSON type, including object, array, string, Boolean, number, or null, and you can also use it to convert other values to JSON.

      ```javascript
      res.json(null)
      res.json({ user: 'tobi' })
      res.status(500).json({ error: 'message' })
      ```

      Certainly! Let's break down the res.json() method in simple terms:

      Purpose:

      The res.json() method in Express is used to send a JSON response to the client. It automatically sets the appropriate content-type header and converts the provided data into a JSON string using JSON.stringify().

      Syntax:

      ```javascript
      res.json([body]);
      ```

      Examples:

      1. Sending null as JSON: This example sends a JSON response with null as the content.

      ```javascript
      res.json(null);
      ```

      2. Sending an Object as JSON: This example sends a JSON response with an object.

      ```javascript
      res.json({ user: 'tobi' });
      ```

      3. Sending JSON with a Status Code: You can also set an HTTP status code along with the JSON response. This example sends a JSON response with an error message and sets the HTTP status code to 500 (Internal Server Error).

      ```javascript
      res.status(500).json({ error: 'message' });
      ```

      Summary:

      • res.json() is a convenient method in Express to send JSON responses.
      • It automatically handles the conversion of various data types (object, array, string, Boolean, number, or null) to a JSON string.
      • You can also chain it with res.status() to set an HTTP status code along with the JSON response.

      In simple terms, it's a way to tell the client that the server is responding with data in JSON format, making it easy for client applications to consume the data.

    1. Everything in Mongoose starts with a Schema. Each schema maps to a MongoDB collection and defines the shape of the documents within that collection.

      ```javascript
      import mongoose from 'mongoose';
      const { Schema } = mongoose;

      const blogSchema = new Schema({
        title: String, // String is shorthand for {type: String}
        author: String,
        body: String,
        comments: [{ body: String, date: Date }],
        date: { type: Date, default: Date.now },
        hidden: Boolean,
        meta: {
          votes: Number,
          favs: Number
        }
      });
      ```

      If you want to add additional keys later, use the Schema#add method. Each key in our code blogSchema defines a property in our documents which will be cast to its associated SchemaType. For example, we've defined a property title which will be cast to the String SchemaType and property date which will be cast to a Date SchemaType. Notice above that if a property only requires a type, it can be specified using a shorthand notation (contrast the title property above with the date property). Keys may also be assigned nested objects containing further key/type definitions like the meta property above. This will happen whenever a key's value is a POJO that doesn't have a type property. In these cases, Mongoose only creates actual schema paths for leaves in the tree (like meta.votes and meta.favs above); the branches do not have actual paths. A side-effect of this is that meta above cannot have its own validation. If validation is needed up the tree, a path needs to be created up the tree - see the Subdocuments section for more information on how to do this. Also read the Mixed subsection of the SchemaTypes guide for some gotchas.

      Sure, let's break it down in simpler terms.

      In Mongoose, a "Schema" is like a blueprint for how your data should be organized in a MongoDB database. Imagine it as a template for creating documents (records) in a collection (similar to a table in relational databases).

      Let's go through the example:

      ```javascript
      import mongoose from 'mongoose';
      const { Schema } = mongoose;

      // Define a schema for a blog post
      const blogSchema = new Schema({
        title: String,
        author: String,
        body: String,
        comments: [{ body: String, date: Date }],
        date: { type: Date, default: Date.now },
        hidden: Boolean,
        meta: {
          votes: Number,
          favs: Number
        }
      });
      ```

      Now, let's break it down:

      1. Basic Data Types: title, author, body, and hidden are properties of your blog document. String is the data type for title, author, and body, while Boolean is the data type for hidden.

      2. Array of Comments: comments is an array where each element is an object with body (String) and date (Date) properties. This allows you to store multiple comments in an array within your document.

      3. Default Date: date is a property with the type Date, and it has a default value of the current date and time (Date.now). This means if you don't provide a date when creating a blog post, it will default to the current date and time.

      4. Nested Meta Object: meta is a nested object within your document. It has two properties, votes (Number) and favs (Number). This allows you to store additional information in a structured way.

      Now, let's say you want to add a new property later, you can use the add method:

      ```javascript
      // Add a new property to the schema
      blogSchema.add({ tags: [String] });
      ```

      Here, we're adding a new property tags which is an array of strings.

      The main takeaway is that the schema defines the structure of your documents in MongoDB, including the types of data each property should have. It helps maintain consistency in your data and allows you to enforce certain rules or defaults.

    2. Instance methods: Instances of Models are documents. Documents have many of their own built-in instance methods. We may also define our own custom document instance methods.

      ```javascript
      // define a schema
      const animalSchema = new Schema(
        { name: String, type: String },
        {
          // Assign a function to the "methods" object of our animalSchema through schema options.
          // By following this approach, there is no need to create a separate TS type to define the type of the instance functions.
          methods: {
            findSimilarTypes(cb) {
              return mongoose.model('Animal').find({ type: this.type }, cb);
            }
          }
        }
      );

      // Or, assign a function to the "methods" object of our animalSchema
      animalSchema.methods.findSimilarTypes = function(cb) {
        return mongoose.model('Animal').find({ type: this.type }, cb);
      };
      ```

      Now all of our animal instances have a findSimilarTypes method available to them.

      ```javascript
      const Animal = mongoose.model('Animal', animalSchema);
      const dog = new Animal({ type: 'dog' });

      dog.findSimilarTypes((err, dogs) => {
        console.log(dogs); // woof
      });
      ```

      Overwriting a default mongoose document method may lead to unpredictable results. See this for more details. The example above uses the Schema.methods object directly to save an instance method. You can also use the Schema.method() helper as described here. Do not declare methods using ES6 arrow functions (=>). Arrow functions explicitly prevent binding this, so your method will not have access to the document and the above examples will not work.

      Certainly! Let's break down the provided code snippets:

      1. What is it and why is it used?

      In Mongoose, a schema is a blueprint for defining the structure of documents within a collection. When you define a schema, you can also attach methods to it. These methods become instance methods, meaning they are available on the individual documents (instances) created from that schema.

      Instance methods are useful for encapsulating functionality related to a specific document or model instance. They allow you to define custom behavior that can be executed on a specific document. In the given example, the findSimilarTypes method is added to instances of the Animal model, making it easy to find other animals of the same type.

      2. Syntax:

      Using methods object directly in the schema options:

      ```javascript
      const animalSchema = new Schema(
        { name: String, type: String },
        {
          methods: {
            findSimilarTypes(cb) {
              return mongoose.model('Animal').find({ type: this.type }, cb);
            }
          }
        }
      );
      ```

      Using methods object directly in the schema:

      ```javascript
      animalSchema.methods.findSimilarTypes = function(cb) {
        return mongoose.model('Animal').find({ type: this.type }, cb);
      };
      ```

      Using Schema.method() helper:

      ```javascript
      animalSchema.method('findSimilarTypes', function(cb) {
        return mongoose.model('Animal').find({ type: this.type }, cb);
      });
      ```

      3. Explanation in Simple Words with Examples:

      Why it's Used:

      Imagine you have a collection of animals in your database, and you want to find other animals of the same type. Instead of writing the same logic repeatedly, you can define a method that can be called on each animal instance to find similar types. This helps in keeping your code DRY (Don't Repeat Yourself) and makes it easier to maintain.

      Example:

      ```javascript
      const mongoose = require('mongoose');
      const { Schema } = mongoose;

      // Define a schema with a custom instance method
      const animalSchema = new Schema({ name: String, type: String });

      // Add a custom instance method to find similar types
      animalSchema.methods.findSimilarTypes = function(cb) {
        return mongoose.model('Animal').find({ type: this.type }, cb);
      };

      // Create the Animal model using the schema
      const Animal = mongoose.model('Animal', animalSchema);

      // Create an instance of Animal
      const dog = new Animal({ type: 'dog', name: 'Buddy' });

      // Use the custom method to find similar types
      dog.findSimilarTypes((err, similarAnimals) => {
        console.log(similarAnimals);
      });
      ```

      In this example, findSimilarTypes is a custom instance method added to the Animal schema. When you create an instance of the Animal model (e.g., a dog), you can then call findSimilarTypes on that instance to find other animals with the same type. The method uses the this.type property, which refers to the type of the current animal instance. This allows you to easily reuse the logic for finding similar types across different instances of the Animal model.

      Certainly! Let's go through each part and explain it in simple terms:

      1. this in Mongoose:

      • What is this? In JavaScript, this refers to the current context or object. In Mongoose, particularly within methods and middleware functions, this represents the instance (document) the function is currently operating on.
      • Why is it used? this is used to access and modify the properties of the current document. For example, in a Mongoose method, this allows you to refer to the fields of the specific document the method is called on.

      2. Example:

      Let's use userSchema.pre("save", ...), which is a Mongoose middleware, as an example:

      ```javascript
      userSchema.pre("save", async function (next) {
        if (!this.isModified("password")) {
          next();
        } else {
          this.password = await bcrypt.hash(this.password, 10);
          next();
        }
      });
      ```

      Explanation in simple words:

      • Imagine you have a system where users can sign up and set their password.
      • Before saving a new user to the database, you want to ensure that the password is securely encrypted (hashed) using a library like bcrypt.
      • userSchema.pre("save", ...) is a special function that runs automatically before saving a user to the database.
      • In this function, this.isModified("password") checks if the password field of the current user has been changed. If the password is not modified, the user is not updating their password, so it just moves on to the next operation (saving the user). If the password is modified, a new password is set or the existing one is changed; in this case, it uses bcrypt.hash to encrypt (hash) the password before saving it to the database.
      • The use of this here is crucial because it allows you to refer to the specific user document that's being saved. It ensures that the correct password is hashed for the current user being processed.

      In summary, this in Mongoose is a way to refer to the current document or instance, and it's commonly used to access and modify the properties of that document, especially in middleware functions like the one demonstrated here for password encryption before saving to the database.


    1. Author Response

      The following is the authors’ response to the original reviews.

      eLife assessment

      The study is an important advancement to the consideration of antimalarial drug resistance: the authors make use of both modelling results and supporting empirical evidence to demonstrate the role of malaria strain diversity in explaining biogeographic patterns of drug resistance. The theoretical methods and the corresponding results are convincing, with the novel model presented moving beyond existing models to incorporate malaria strain diversity and antigen-specific immunity. This work is likely to be interesting to malaria researchers and others working with antigenically diverse infectious diseases.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The paper is an attempt to explain a geographic paradox between infection prevalence and antimalarial resistance emergence. The authors developed a compartmental model that importantly contains antigenic strain diversity and in turn antigen-specific immunity. They find a negative correlation between parasite prevalence and the frequency of resistance emergence and validate this result using empirical data on chloroquine-resistance. Overall, the authors conclude that strain diversity is a key player in explaining observed patterns of resistance evolution across different geographic regions.

      The authors pose and address the following specific questions:

      1. Does strain diversity modulate the equilibrium resistance frequency given different transmission intensities?

      2. Does strain diversity modulate the equilibrium resistance frequency and its changes following drug withdrawal?

      3. Does the model explain biogeographic patterns of drug resistance evolution?

      Strengths:

      The model built by the authors is novel. As emphasized in the manuscript, many factors (e.g., drug usage, vectorial capacity, population immunity) have been explored in models attempting to explain resistance emergence, but strain diversity (and strain-specific immunity) has not been explicitly included and thus explored. This is an interesting oversight in previous models, given the vast antigenic diversity of Plasmodium falciparum (the most common human malaria parasite) and its potential to "drive key differences in epidemiological features".

      The model also accounts for multiple infections, which is a key feature of malarial infections, with individuals often infected with either multiple Plasmodium species or multiple strains of the same species. Accounting for multiple infections is critical when considering resistance emergence, as with multiple infections there is within-host competition which will mediate the fitness of resistant genotypes. Overall, the model is an interesting combination of a classic epidemiological model (e.g., SIR) and a population genetics model.

      In terms of major model innovations, the model also directly links selection pressure via drug administration with local transmission dynamics. This is accomplished by the interaction between strain-specific immunity, generalized immunity, and host immune response.

      R: We thank the reviewer for his/her appreciation of the work.

      Weaknesses:

      In several places, the explanation of the results (i.e., why are we seeing this result?) is underdeveloped. For example, under the section "Response to drug policy change", it is stated that (according to the model) low diversity scenarios show the least decline in resistant genotype frequency after drug withdrawal; however, it is not explained how this result emerges mechanistically. Without an explicit connection to the workings of the model, it can be difficult to gauge whether the result(s) seen are specific to the model itself or likely to be more generalizable.

      R: We acknowledge that the explanation of certain results needs to be improved. We have now added the explanation of why low diversity scenarios show the least decline in resistance frequency after drug withdrawal: “Two processes are responsible for the observed trend: first, resistant genotypes have a much higher fitness advantage in low diversity regions even with reduced drug usage because infected hosts are still highly symptomatic; second, due to low transmission potential in low diversity scenarios (i.e., longer generation intervals between transmissions), the rate of change in parasite populations is slower.” (L243-247). We also compared the drug withdrawal response to that of the generalized-immunity-only model (L268-271). The medium transmission region has the fastest reduction in resistance frequency, followed by the high and low transmission regions, which differs from the full model that incorporates strain-specific diversity.

      In addition, to provide the context of different biogeographic transmission zones, we now include a new figure (now Fig. 3) that presents the parameter space of transmission potential and strain diversity of different continents, which demonstrates that PNG and South America have less strain diversity than expected by transmission potential (L179-184 and L198-202). Therefore, these two regions have low disease prevalence and high resistance frequency.

      The authors emphasize several model limitations, including the specification of resistance by a single locus (thus not addressing the importance of recombination should resistance be specified by more than one locus); the assumption that parasites are independently and randomly distributed among hosts (contrary to empirical evidence); and the assumption of a random association between the resistant genotype and antigenic diversity. However, each of these limitations is addressed in the discussion.

      R: As pointed out by the referee, our model presents several limitations that have all been addressed in the discussion and considered for future extensions.

      Did the authors achieve their goals? Did the results support their conclusion?

      Returning to the questions posed by the authors:

      1. Does strain diversity modulate the equilibrium resistance frequency given different transmission intensities? Yes. The authors demonstrate a negative relationship between prevalence/strain diversity and resistance frequency (Figure 2).

      2. Does strain diversity modulate the equilibrium resistance frequency and its changes following drug withdrawal? Yes. The authors find that, under resistance invasion and some level of drug treatment, resistance frequency decreased with the number of strains (Figure 4). The authors also find that lower strain diversity results in a slower decline in resistant genotypes after drug withdrawal and higher equilibrium resistance frequency (Figure 6).

      3. Does the model explain biogeographic patterns of drug resistance evolution? Yes. The authors find that their full model (which includes strain-specific immunity) produces the empirically observed negative relationship between resistance and prevalence/strain diversity, while a model only incorporating generalised immunity does not (Figure 8).

      Utility of work to others and relevance within and beyond the field?

      This work is important because antimalarial drug resistance has been an ongoing issue of concern for much of the 20th century and now 21st century. Further, this resistance emergence is not equitably distributed across biogeographic regions, with South America and Southeast Asia experiencing much of the burden of this resistance emergence. Not only can widespread resistant strains be traced back to these two relatively low-transmission regions, but these strains remain at high frequency even after drug treatment ceases.

      Reviewer #2 (Public Review):

      Summary:

      The evolution of resistance to antimalarial drugs follows a seemingly counterintuitive pattern, in which resistant strains typically originate in regions where malaria prevalence is relatively low. Previous investigations have suggested that frequent exposures in high-prevalence regions produce high levels of partial immunity in the host population, leading to subclinical infections that go untreated. These subclinical infections serve as refuges for sensitive strains, maintaining them in the population. Prior investigations have supported this hypothesis; however, many of them excluded important dynamics, and the results cannot be generalized. The authors have taken a novel approach using a deterministic model that includes both general and adaptive immunity. They find that high levels of population immunity produce refuges, maintaining the sensitive strains and allowing them to outcompete resistant strains. While general population immunity contributed, adaptive immunity is key to reproducing empirical patterns. These results are robust across a range of fitness costs, treatment rates, and resistance efficacies. They demonstrate that future investigations cannot overlook adaptive immunity and antigenic diversity.

      R: We thank the reviewer for his/her appreciation of the work.

      Strengths:

      Overall, this is a very nice paper that makes a significant contribution to the field. It is well-framed within the body of literature and achieves its goal of providing a generalizable, unifying explanation for otherwise disparate investigations. As such, this work will likely serve as a foundation for future investigations. The approach is elegant and rigorous, with results that are supported across a broad range of parameters.

      Weaknesses:

      Although the title states that the authors describe resistance invasion, they do not support or even explore this claim. As they state in the discussion (line 351), this work predicts the equilibrium state and doesn't address temporal patterns. While refuges in partially immune hosts may maintain resistance in a population, they do not account for the patterns of resistance spread, such as the rapid spread of chloroquine resistance in Africa once it was introduced from Asia.

      R: We do agree that resistance invasion is not the focus of our manuscript. Rather we mainly investigate the maintenance and decline after drug withdrawal. Therefore, we changed the title to “Antigenic strain diversity predicts different biogeographic patterns of maintenance and decline of anti-malarial drug resistance” (L1-4).

      We did, however, present a fast initial invasion phase for the introduction of resistant genotypes regardless of transmission scenarios in Fig. 5 (now Fig. 6). Even though the focus of the manuscript is to investigate long term persistence of resistant genotypes, we did emphasize that the initial invasion phase and how that changes the host immunity profile are key to the coexistence of resistant and wild-type genotypes (L228-239).

      As the authors state in the discussion, the evolution of compensatory mutations that negate the cost of resistance is possible, and in vitro experiments have found evidence of such. It appears that their results are dependent on there being a cost, but the lower range of the cost parameter space was not explored.

      R: It is true that compensatory mutations might mitigate the negative fitness consequences. We didn’t add a no-cost scenario because in general if there is no cost but only benefit (survival through drug usage), then resistant haplotypes will likely be fixed in the population. This is contingent on the assumption that these compensatory mutations are in perfect linkage with resistant alleles, which is unlikely in high-transmission scenarios. Our model does not incorporate recombination, but earlier models (Dye & Williams 1997, Hastings & D’Alessandro 2000) have demonstrated that recombination will delay the fixation of resistant alleles in high-transmission.

      As suggested, we ran our model with costs equal 0 and 0.01 (Fig. 2C and L189-191). We found that resistant alleles almost always fix except for when diversity is extremely high, treatment/resistance efficacy is low. In these cases, additional benefits brought by more transmission from resistant alleles do not bring many benefits (as lower GI classes have a very small number of hosts). This finding does not contradict a wider range of coexistence between wild-type and resistant alleles when the cost is higher. We therefore added these scenarios to our updated results.

      Author response image 1.

      The use of a deterministic, compartmental model may be a structural weakness. This means that selection alone guides the fixation of new mutations on a semi-homogenous adaptive landscape. In reality, there are two severe bottlenecks in the transmission cycle of Plasmodium spp., introducing a substantial force of stochasticity via genetic drift. The well-mixed nature of this type of model is also likely to have affected the results. In reality, within-host selection is highly heterogeneous, strains are not found with equal frequency either in the population or within hosts, and there will be some linkage between the strain and a resistance mutation, at least at first. Of course, there is no recourse for that at this stage, but it is something that should be considered in future investigations.

      R: We thank the reviewer for their insightful comments on the constraints of the deterministic modeling approach. We’ve added these points to discussion in the paragraph discussing the second limitation of the model (L359-364).

      The authors mention the observation that patterns of resistance in high-prevalence Papua New Guinea seem to be more similar to Southeast Asia, perhaps because of the low strain diversity in Papua New Guinea. However, they do not investigate that parameter space here. If they did and were able to replicate that observation, not only would that strengthen this work, it could profoundly shape research to come.

      R: We appreciate the suggestion to investigate the parameter space of Papua New Guinea. We now include a new figure (now Fig. 3) that presents the parameter space of transmission potential and strain diversity of different continents, which demonstrates that PNG and South America have less strain diversity than expected by transmission potential (L179-184 and L198-202). This translates to low infectivity for most mosquito bites, and most infections only occur in hosts with lower generalized immunity. Therefore resistant genotypes will help ensure disease transmission in these symptomatic hosts and be strongly selected to be maintained.

      Reviewer #1 (Recommendations For The Authors):

      1. I found lines 41-49 difficult to follow. Please rephrase (particularly punctuation) for clarity.

      R: We have edited the lines to improve the writing (L41-50)):

      “Various relationships between transmission intensity and stable frequencies of resistance were discovered, each of which has some empirical support: 1) transmission intensity does not influence the fate of resistant genotypes [Models: Koella and Antia (2003); Masserey et al. (2022); Empirical: Diallo et al. (2007); Shah et al. (2011, 2015)]; 2) resistance first increases in frequency and slowly decreases with increasing transmission rates [Models: Klein et al. (2008, 2012)]; and 3) Valley phenomenon: resistance can be fixed at both high and low end of transmission intensity [Model: Artzy-Randrup et al. (2010); Empirical: Talisuna et al. (2002)]. Other stochastic models predict that it is harder for resistance to spread in high transmission regions, but patterns are not systematically inspected across the parameter ranges [Model: Whitlock et al. (2021); Model and examples in Ariey and Robert (2003)].”

      1. Line 65: There should be a space after "recombination" and before the citation.

      R: Thank you for catching the error. We’ve added the space (L64).

      1. I'm interested in the dependency of the results on the assumption that there is a cost to resistance via lowered transmissibility (lines 142-145). I appreciate that variation in the cost(s) of resistance in single and mixed infections is explored; however, from what I can tell the case of zero cost is not explored.

      R: As suggested, we have now added the no-cost scenario. Please see the response to Reviewer 2's weaknesses, paragraph 2.

      1. I felt the commentary/explanation of the response to drug policy change was a bit underdeveloped. I would have liked a walk-through of why in your model low diversity scenarios show the slowest decline in resistant genotypes after switching to different drugs.

      R: We acknowledge that the explanation of the response to drug policy change needs to be improved. We have now added the explanation of why low diversity scenarios show the least decline in resistance frequency after drug withdrawal: "Two processes are responsible for the observed trend: first, resistant genotypes have a much higher fitness advantage in low diversity regions even with reduced drug usage because infected hosts are still highly symptomatic; second, due to low transmission potential in low diversity scenarios (i.e., longer generation intervals between transmissions), the rate of change in parasite populations is slower." (L243-247). We also compared the drug withdrawal response to that of the generalized-immunity-only model. The medium transmission region has the fastest reduction in resistance frequency, followed by the high and low transmission regions, which differs from the full model that incorporates strain-specific diversity.

      1. Line 352: persistent drug usage?

      R: Yes, we meant persistent drug usage. We’ve clarified the writing (L389-391).

      1. The organisation of the manuscript would benefit from structuring around the focal questions so that the reader can easily find the answers to the focal questions within the results and discussion sections.

      R: This is a great suggestion. We modified the subheadings of results to provide answers to focal questions (L151, L179, L203-204, and L240).

      1. Line 353: Please remove either "shown" or "demonstrated".

      R: Thank you for catching the grammatical error, we’ve retained “shown” only for the sentence (L391-392).

      Reviewer #2 (Recommendations For The Authors):

      Overall, this was very nice work and a pleasure to read.

      Major:

      1. Please provide a much more thorough explanation of how resistance invasions are modeled. It is not clear from the text and could not be replicated.

      R: We have now added a section “drug treatment and resistance invasion” in Methods and Materials to explain how resistance invasions are modeled (L488-496):

      “Given each parameter set, we ran the ODE model six times until equilibrium with the following genotypic compositions: 1) wild-type only scenario with no drug treatment; 2) wild-type only scenario with 63.2% drug treatment (0.05 daily treatment rate); 3) wild-type only scenario with 98.2% drug treatment (0.2 daily treatment rate); 4) resistant-only scenario with no drug treatment; 5) resistance invasion with 63.2% drug treatment; 6) resistance invasion with 98.2% drug treatment. Runs 1-4 start with all hosts in G0,U compartment and ten parasites. Runs 5 and 6 (resistance invasion) start from the equilibrium state of 2 and 3, with ten resistant parasites introduced. We then followed the ODE dynamics till the next equilibrium.”
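
      The quoted protocol can be illustrated with a generic two-genotype ODE skeleton (the right-hand side below is a placeholder, not the paper's compartmental model; only the run-to-equilibrium-then-introduce-ten-resistant-parasites workflow follows the text above):

      ```python
      from scipy.integrate import solve_ivp

      def rhs(t, y, treat):
          # placeholder dynamics for wild-type (w) and resistant (r) parasite numbers
          w, r = y
          dw = 0.50 * w * (1 - (w + r) / 1e4) - treat * w  # treatment removes wild-type
          dr = 0.45 * r * (1 - (w + r) / 1e4)              # resistance carries a growth cost
          return [dw, dr]

      # analogue of runs 2/3: wild-type only, under treatment, run to (near) equilibrium
      eq = solve_ivp(rhs, (0, 5000), [10.0, 0.0], args=(0.05,), rtol=1e-8).y[:, -1]

      # analogue of runs 5/6: introduce ten resistant parasites at that equilibrium
      invaded = solve_ivp(rhs, (0, 5000), [eq[0], 10.0], args=(0.05,), rtol=1e-8)
      print(invaded.y[:, -1])  # genotype composition at the post-invasion equilibrium
      ```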

      1. Please make your raw data, code, and replicable examples that produce the figures in the manuscript available.

      R: We have added the data availability section, which provides the GitHub site with all the code for the model, data processing, and figures: All the ODE codes, numerically-simulated data, empirical data, and analysis scripts are publicly available at https://github.itap.purdue.edu/HeLab/MalariaResistance.

      1. Regarding the limitations described in the paragraph about the model in the public response, these results would be strengthened if there were separate compartments for strains which could be further divided into sensitive and resistant. Could you explore this for at least a subset of the parameter space?

      R: In our model, sensitive and resistant pathogens are always modeled as separate compartments (Fig. S1B and Appendix 1). In Results/Model structure, L135-136, we stated the setup:

      “The population sizes of resistant (PR) or sensitive (wild-type; PW) parasites are tracked separately in host compartments of different G and drug status.”

      1. To what extent do these results rely on a cost to resistance? Were lower costs explored? This would be worth demonstrating. If this cannot be maintained without cost, do you think this is because there is no linkage between strain and resistance?

      R: As suggested, we have now added the no-cost scenario (Fig. 2C and L189-191). Please see the response to Reviewer 2's weaknesses, paragraph 2. In sum, under a no-cost scenario, if the treatment rate is low, then wild-type alleles will still be maintained in high transmission scenarios; when the treatment rate is high, resistant alleles will always be fixed.

      Minor:

      1. "Plasmodium" should be italicized throughout. Ironically, italics aren't permitted in this form.

      R: We did italicize “Plasmodium” or “P. falciparum” throughout the text. If the reviewer is referring to “falciparum malaria”, the convention is not to italicize falciparum in this case.

      1. Fig 1A: the image is reversed for the non-infected host with prior exposure to strain A. Additionally, the difference between colors for WT and resistant is not visible in monochrome.

      R: Thank you for pointing out the problem of color choice in monochrome. We have modified the figure. The image in Fig 1A is not reversed for non-infected hosts with prior exposure to strain A. We now spell out “S” to be “specific immunity”, and explain it better in the figure legend.

      1. Fig 2B: add "compare to the pattern of prevalence shown in Fig 2A" or something similar to make the comparison immediately clear.

      R: We thank the reviewer’s suggestion. We’ve added a sentence to contrast Fig 2A and B in the Figure legend: “A comparison between the prevalence pattern in (A) and resistance frequency in (B) reveals that high prevalence regions usually correspond to low resistance frequency at the end of resistance invasion dynamics.”

      1. Figs 2B & C: Please thoroughly explain how you produced this data in the methods section and briefly describe it in the results sections.

      R: We agree that the modeling strategies need to be explained better. Since we explained the rationale for the parameter ranges and the prevalence patterns we observe in the results section “Appropriate pairing of strain diversity and vectorial capacity” (now “Impact of strain diversity and transmission potential on disease prevalence”), we added sentences in this section to explain how we run models until equilibrium for wild-only infections with or without drug treatment (L152-178). Then in the following section “Drug-resistance and disease prevalence” section, we explain how we obtained the resistance invasion data:

      “To investigate resistance invasion, we introduce ten resistant infections to the equilibrium states of drug treatment with wild-type only infections, and follow the ODE dynamics till the next equilibrium” (L180-181).

      1. Fig 3: The axis labels are not particularly clear. For the Y axis, please state in the label what it is the frequency of (either the mutation or the phenotype). In the X axis, it is better to spell that out in words, like "P. falciparum prevalence in children".

      R: Thank you for pointing this out. We’ve modified the axes labels of Fig. 3 (now Fig. 4): X-axis: “P. falciparum prevalence in children aged 2-10”; Y-axis: “Frequency of resistant genotypes (pfcrt 76T)”.

      1. Fig 4 and the rest of the figures of this nature: Showing an equilibrium-state timestep before treatment was introduced would improve the readers' understanding of the dynamics.

      R: We agree that the equilibrium state before treatment is important. In fact, we have those states in our figure 4 (now figure 5): the left panel- “Daily treatment rate 0” indicates the equilibrium-state timestep before treatment. We clarified this point in the caption.

      1. Fig 5 is very compelling, but the relationships in Fig 5 would be clearer if the Y axes were not all different. Consider using the same scale for the hosts, and the same scale for resistant parasites (both conditions) and WT parasites, 113 strains. It may be clearer to reference them if they are given as A-F instead of three figures each for A and B.

      R: We agree with the suggested changes and have modified figure 5 (now Fig. 6): we used one Y-axis scale for the hosts, and one Y-axis scale for the parasites. The wild-type one is very low for the low diversity scenario, thus we included one inset plot for that case.

      1. Fig 5 caption: High immune protection doesn't select against resistance. The higher relative fitness of the sensitive strain selects against resistance in a high-immunity environment.

      R: Thank you for pointing this out. Here we meant that a reduction in resistant population after the initial overshoot occurs in both diversity levels. We are not comparing resistant strains to sensitive ones. We’ve modified the sentence to: “The higher specific immunity reduces the infectivity of new strains, leading to a reduction of the resistant parasite population regardless of the diversity level”.

      1. Line 242: "keep" should be plural.

      R: We’ve corrected “keep” to “keeps” (L267).

      1. Line 360 and elsewhere: The strength of the results is somewhat overstated at times. This absolutely supports the importance of strain-specific immunity, but these results do not explain patterns of the origin of resistance and there are a number of factors that are not incorporated (a necessary evil of modeling to be sure).

      R: Thank you for pointing this out. We’ve modified discussion to remove the overstated strength of results:

      1) Original: “The inclusion of strain diversity in the model provides a new mechanistic explanation as to why Southeast Asia has been the original source of resistance to certain antimalarial drugs, including chloroquine.”

      Modified: “The inclusion of strain diversity in the model provides a new mechanistic explanation as to why Southeast Asia has persisting resistance to certain antimalarial drugs, including chloroquine, despite a lower transmission intensity than Africa. “ (L328-330)

      2) In sum, we show that strain diversity and associated strain-specific host immunity, dynamically tracked through the macroparasitic structure, can predict (rather than merely "explain") the complex relationship between transmission intensity and drug-resistance frequencies.

      1. The color palettes are not discernible in grayscale, especially the orange/blue/gray in Fig 2. The heatmaps appear to be in turbo, the only viridis palette that isn't grayscale-friendly. Just something to keep in mind for the accessibility of individuals with achromatopsia and most people who print out papers.

      R: Thank you for the visualization suggestions. We updated all the figures with the “viridis:magma” palette. As for the orange/blue/gray scale used in Fig 2C, it is difficult to pick nine colors that are discernable in brightness in grayscale. Currently, the four colors correspond to clonal genotype cost (i.e. green, red, grey, and blue), and the three-level brightness maps to mixed genotype cost.

    1. Third-party middleware: Use third-party middleware to add functionality to Express apps. Install the Node.js module for the required functionality, then load it in your app at the application level or at the router level. The following example illustrates installing and loading the cookie-parsing middleware function cookie-parser.

      ```bash
      $ npm install cookie-parser
      ```

      ```javascript
      const express = require('express')
      const app = express()
      const cookieParser = require('cookie-parser')

      // load the cookie-parsing middleware
      app.use(cookieParser())
      ```

      For a partial list of third-party middleware functions that are commonly used with Express, see: Third-party middleware.

      Let's break down the concept of third-party middleware in Express with a simple explanation and an example.

      What is Third-Party Middleware?

      Third-party middleware in Express refers to middleware functions that are not built into Express itself but are created by external developers to extend its functionality. These middleware functions can be added to your Express application to provide additional features or handle specific tasks.

      Example Using cookie-parser:

      1. Install the Module: use npm to install the cookie-parser module.

      ```bash
      $ npm install cookie-parser
      ```

      2. Load and Use the Middleware: in your Express application, require and load the cookie-parser middleware.

      ```javascript
      const express = require('express');
      const app = express();
      const cookieParser = require('cookie-parser');

      // Load the cookie-parsing middleware
      app.use(cookieParser());
      ```

      This middleware is now integrated into your Express application.

      3. Use the Middleware in Your Routes: you can now use the functionality provided by cookie-parser in your routes.

      ```javascript
      app.get('/', (req, res) => {
        // Access cookies using the middleware
        const userCookie = req.cookies.user;

        // Your route logic here
        res.send(`Welcome, ${userCookie || 'Guest'}!`);
      });
      ```

      • In this example, cookie-parser allows you to access cookies in the req.cookies object.

      Why Use Third-Party Middleware?

      • Extended Functionality: Third-party middleware adds specialized functionality to your Express application, such as parsing cookies, handling authentication, logging, etc. (see the logging sketch after this list).

      • Modularity: Using third-party middleware allows you to keep your code modular and focus on building features without reinventing the wheel for common tasks.

      • Community Contributions: Many third-party middleware modules are created and maintained by the community, ensuring reliable and well-tested solutions.
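
      To illustrate the logging use case mentioned above: here is a minimal sketch using the popular third-party logger morgan, assuming it has been installed with `npm install morgan` (the route and port are made up for the example):

      ```javascript
      const express = require('express');
      const morgan = require('morgan');

      const app = express();

      // Log every request in morgan's predefined 'dev' format,
      // e.g. "GET / 200 4.2 ms - 12"
      app.use(morgan('dev'));

      app.get('/', (req, res) => {
        res.send('Hello, logged world!');
      });

      app.listen(3000);
      ```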

      In summary, third-party middleware in Express enables you to enhance your application with additional features and functionalities developed by the broader community. Always check the documentation of each middleware module for usage details and customization options.

    2. Router-level middleware

       Router-level middleware works in the same way as application-level middleware, except it is bound to an instance of express.Router().

       ```javascript
       const router = express.Router()
       ```

       Load router-level middleware by using the router.use() and router.METHOD() functions. The following example code replicates the middleware system that is shown above for application-level middleware, by using router-level middleware:

       ```javascript
       const express = require('express')
       const app = express()
       const router = express.Router()

       // a middleware function with no mount path. This code is executed for every request to the router
       router.use((req, res, next) => {
         console.log('Time:', Date.now())
         next()
       })

       // a middleware sub-stack shows request info for any type of HTTP request to the /user/:id path
       router.use('/user/:id', (req, res, next) => {
         console.log('Request URL:', req.originalUrl)
         next()
       }, (req, res, next) => {
         console.log('Request Type:', req.method)
         next()
       })

       // a middleware sub-stack that handles GET requests to the /user/:id path
       router.get('/user/:id', (req, res, next) => {
         // if the user ID is 0, skip to the next router
         if (req.params.id === '0') next('route')
         // otherwise pass control to the next middleware function in this stack
         else next()
       }, (req, res, next) => {
         // render a regular page
         res.render('regular')
       })

       // handler for the /user/:id path, which renders a special page
       router.get('/user/:id', (req, res, next) => {
         console.log(req.params.id)
         res.render('special')
       })

       // mount the router on the app
       app.use('/', router)
       ```

       To skip the rest of the router’s middleware functions, call next('router') to pass control back out of the router instance. This example shows a middleware sub-stack that handles GET requests to the /user/:id path.

       ```javascript
       const express = require('express')
       const app = express()
       const router = express.Router()

       // predicate the router with a check and bail out when needed
       router.use((req, res, next) => {
         if (!req.headers['x-auth']) return next('router')
         next()
       })

       router.get('/user/:id', (req, res) => {
         res.send('hello, user!')
       })

       // use the router and 401 anything falling through
       app.use('/admin', router, (req, res) => {
         res.sendStatus(401)
       })
       ```

      Certainly! Let's break down the concept of router-level middleware in Express with simple explanations and examples.

      1. Basic Router-Level Middleware:

      • Router-level middleware works similarly to application-level middleware but is bound to an instance of express.Router().

      ```javascript
      const express = require('express');
      const app = express();
      const router = express.Router();

      // Middleware function without a mount path, executed for every request to the router
      router.use((req, res, next) => {
        console.log('Time:', Date.now());
        next();
      });

      app.use('/', router); // Mount the router on the app
      ```

      In this example, the middleware function logs the current time for every request to the router.

      2. Middleware Sub-Stack with Router-Level Middleware:

      • Define a middleware sub-stack for a specific path within the router.

      ```javascript
      router.use('/user/:id', (req, res, next) => {
        console.log('Request URL:', req.originalUrl);
        next();
      }, (req, res, next) => {
        console.log('Request Type:', req.method);
        next();
      });
      ```

      Here, the middleware sub-stack prints request info for any type of HTTP request to the '/user/:id' path.

      3. Router-Level Middleware with Different Routes:

      • Define middleware sub-stacks for different routes within the router.

      ```javascript
      router.get('/user/:id', (req, res, next) => {
        if (req.params.id === '0') next('route'); // Skip to the next route if user ID is '0'
        else next(); // Pass control to the next middleware
      }, (req, res, next) => {
        res.render('regular'); // Render a regular page
      });

      router.get('/user/:id', (req, res, next) => {
        res.render('special'); // Render a special page for the /user/:id path
      });
      ```

      In this example, the first middleware checks the user ID and either skips to the next route or renders a regular page. The second middleware renders a special page for the '/user/:id' path.

      4. Skipping Router Middleware:

      • Use next('router') to skip the rest of the router’s middleware functions and pass control back out of the router instance.

      ```javascript
      router.use((req, res, next) => {
        if (!req.headers['x-auth']) return next('router'); // Bail out if 'x-auth' header is missing
        next();
      });

      router.get('/user/:id', (req, res) => {
        res.send('hello, user!');
      });

      app.use('/admin', router, (req, res) => {
        res.sendStatus(401); // Send a 401 response for anything falling through
      });
      ```

      In this example, the router is predicated with a check, and if the 'x-auth' header is missing, it skips the rest of the router’s middleware functions.

      In summary, router-level middleware in Express allows you to organize and modularize your routes and their associated middleware. It provides a way to handle middleware specific to a router instance, making your code more modular and maintainable.
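
      To make that modularity concrete, here is a minimal sketch of extracting routes into their own router module. The file split and the names users.js and usersRouter are hypothetical, not from the quoted docs:

      ```javascript
      // users.js: a self-contained router module
      const express = require('express');
      const router = express.Router();

      router.get('/', (req, res) => {
        res.send('user list');
      });

      router.get('/:id', (req, res) => {
        res.send(`user ${req.params.id}`);
      });

      module.exports = router;
      ```

      ```javascript
      // app.js: mount the module under a path prefix
      const express = require('express');
      const usersRouter = require('./users');

      const app = express();
      app.use('/users', usersRouter); // GET /users and GET /users/:id now work
      app.listen(3000);
      ```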

    3. Application-level middleware

       Bind application-level middleware to an instance of the app object by using the app.use() and app.METHOD() functions, where METHOD is the HTTP method of the request that the middleware function handles (such as GET, PUT, or POST) in lowercase.

       This example shows a middleware function with no mount path. The function is executed every time the app receives a request.

       ```javascript
       const express = require('express')
       const app = express()

       app.use((req, res, next) => {
         console.log('Time:', Date.now())
         next()
       })
       ```

       This example shows a middleware function mounted on the /user/:id path. The function is executed for any type of HTTP request on the /user/:id path.

       ```javascript
       app.use('/user/:id', (req, res, next) => {
         console.log('Request Type:', req.method)
         next()
       })
       ```

       This example shows a route and its handler function (middleware system). The function handles GET requests to the /user/:id path.

       ```javascript
       app.get('/user/:id', (req, res, next) => {
         res.send('USER')
       })
       ```

       Here is an example of loading a series of middleware functions at a mount point, with a mount path. It illustrates a middleware sub-stack that prints request info for any type of HTTP request to the /user/:id path.

       ```javascript
       app.use('/user/:id', (req, res, next) => {
         console.log('Request URL:', req.originalUrl)
         next()
       }, (req, res, next) => {
         console.log('Request Type:', req.method)
         next()
       })
       ```

       Route handlers enable you to define multiple routes for a path. The example below defines two routes for GET requests to the /user/:id path. The second route will not cause any problems, but it will never get called because the first route ends the request-response cycle. This example shows a middleware sub-stack that handles GET requests to the /user/:id path.

       ```javascript
       app.get('/user/:id', (req, res, next) => {
         console.log('ID:', req.params.id)
         next()
       }, (req, res, next) => {
         res.send('User Info')
       })

       // handler for the /user/:id path, which prints the user ID
       app.get('/user/:id', (req, res, next) => {
         res.send(req.params.id)
       })
       ```

       To skip the rest of the middleware functions from a router middleware stack, call next('route') to pass control to the next route. NOTE: next('route') will work only in middleware functions that were loaded by using the app.METHOD() or router.METHOD() functions. This example shows a middleware sub-stack that handles GET requests to the /user/:id path.

       ```javascript
       app.get('/user/:id', (req, res, next) => {
         // if the user ID is 0, skip to the next route
         if (req.params.id === '0') next('route')
         // otherwise pass the control to the next middleware function in this stack
         else next()
       }, (req, res, next) => {
         // send a regular response
         res.send('regular')
       })

       // handler for the /user/:id path, which sends a special response
       app.get('/user/:id', (req, res, next) => {
         res.send('special')
       })
       ```

       Middleware can also be declared in an array for reusability. This example shows an array with a middleware sub-stack that handles GET requests to the /user/:id path.

       ```javascript
       function logOriginalUrl (req, res, next) {
         console.log('Request URL:', req.originalUrl)
         next()
       }

       function logMethod (req, res, next) {
         console.log('Request Type:', req.method)
         next()
       }

       const logStuff = [logOriginalUrl, logMethod]
       app.get('/user/:id', logStuff, (req, res, next) => {
         res.send('User Info')
       })
       ```

      Certainly! Let's break down the concepts of application-level middleware in Express with simple explanations and examples.

      1. Basic Application-level Middleware:

      • This middleware runs for every incoming request to your application.

      ```javascript
      const express = require('express');
      const app = express();

      app.use((req, res, next) => {
        console.log('Time:', Date.now());
        next();
      });
      ```

      In this example, every time a request is received, it logs the current time.

      2. Application-level Middleware with a Mount Path:

      • You can specify a path for the middleware to apply to.

      ```javascript
      app.use('/user/:id', (req, res, next) => {
        console.log('Request Type:', req.method);
        next();
      });
      ```

      Here, the middleware only runs for requests to paths starting with '/user/:id'.

      3. Route Handlers with Middleware:

      • Express allows you to define route handlers for specific HTTP methods.

      ```javascript
      app.get('/user/:id', (req, res, next) => {
        res.send('USER');
      });
      ```

      This handles GET requests to the '/user/:id' path and sends the response 'USER'.

      4. Middleware Sub-Stack with Mount Path:

      • You can create a sub-stack of middleware functions for a specific path.

      ```javascript
      app.use('/user/:id', (req, res, next) => {
        console.log('Request URL:', req.originalUrl);
        next();
      }, (req, res, next) => {
        console.log('Request Type:', req.method);
        next();
      });
      ```

      This example prints request info for any type of HTTP request to the '/user/:id' path.

      5. Route Handlers with Middleware Sub-Stack:

      • Define multiple middleware functions for a single route.

      ```javascript
      app.get('/user/:id', (req, res, next) => {
        console.log('ID:', req.params.id);
        next();
      }, (req, res, next) => {
        res.send('User Info');
      });
      ```

      In this case, the first middleware prints the user ID, and the second one sends the response 'User Info'.

      6. Skipping Middleware with next('route'):

      • You can skip the rest of the middleware functions using next('route') and pass control to the next route.

      ```javascript
      app.get('/user/:id', (req, res, next) => {
        if (req.params.id === '0') next('route');
        else next();
      }, (req, res, next) => {
        res.send('regular');
      });

      app.get('/user/:id', (req, res, next) => {
        res.send('special');
      });
      ```

      If the user ID is '0', it skips to the next route; otherwise, it sends a regular or special response.

      7. Middleware in an Array for Reusability:

      • Middleware can be declared in an array for reuse.

      ```javascript
      function logOriginalUrl(req, res, next) {
        console.log('Request URL:', req.originalUrl);
        next();
      }

      function logMethod(req, res, next) {
        console.log('Request Type:', req.method);
        next();
      }

      const logStuff = [logOriginalUrl, logMethod];
      app.get('/user/:id', logStuff, (req, res, next) => {
        res.send('User Info');
      });
      ```

      In this example, the middleware functions logOriginalUrl and logMethod are reusable and applied to the '/user/:id' route.

      In summary, application-level middleware in Express allows you to handle requests, modify them, and control their flow. You can use it for various tasks and organize your code effectively.

    4. Using middleware

       Express is a routing and middleware web framework that has minimal functionality of its own: An Express application is essentially a series of middleware function calls.

       Middleware functions are functions that have access to the request object (req), the response object (res), and the next middleware function in the application’s request-response cycle. The next middleware function is commonly denoted by a variable named next.

       Middleware functions can perform the following tasks:

       • Execute any code.
       • Make changes to the request and the response objects.
       • End the request-response cycle.
       • Call the next middleware function in the stack.

       If the current middleware function does not end the request-response cycle, it must call next() to pass control to the next middleware function. Otherwise, the request will be left hanging.

       An Express application can use the following types of middleware:

       • Application-level middleware
       • Router-level middleware
       • Error-handling middleware
       • Built-in middleware
       • Third-party middleware

       You can load application-level and router-level middleware with an optional mount path. You can also load a series of middleware functions together, which creates a sub-stack of the middleware system at a mount point.

      In Express, the sequence of middleware execution is crucial for controlling the flow of the request-response cycle. The order in which you define and use middleware determines how they are executed. Here's the general rule:

      1. Application-level Middleware: middleware defined using app.use() is executed in the order it's defined in your code.

      ```javascript
      // Example of application-level middleware
      app.use((req, res, next) => {
        console.log('Middleware 1');
        next();
      });

      app.use((req, res, next) => {
        console.log('Middleware 2');
        next();
      });
      ```

      In this example, "Middleware 1" will execute before "Middleware 2".

      2. Router-level Middleware: similar to application-level middleware, the order of middleware in the code defines the execution order.

      ```javascript
      // Example of router-level middleware
      const router = express.Router();

      router.use((req, res, next) => {
        console.log('Router Middleware 1');
        next();
      });

      router.use((req, res, next) => {
        console.log('Router Middleware 2');
        next();
      });
      ```

      In this example, "Router Middleware 1" will execute before "Router Middleware 2" for routes under this router.

      3. Error-handling Middleware: error-handling middleware should be defined last, after all other middleware and route handlers.

      ```javascript
      // Example of error-handling middleware
      app.use((err, req, res, next) => {
        console.error(err);
        res.status(500).send('Something went wrong!');
      });
      ```

      Error-handling middleware will only be executed if there is an error thrown during the request-response cycle.
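
      To show how a request actually reaches that handler: passing any value to next() (other than the special strings 'route' and 'router') tells Express to skip the remaining regular middleware and jump to the error-handling middleware. A minimal sketch, with a made-up /fail route and error message:

      ```javascript
      const express = require('express');
      const app = express();

      app.get('/fail', (req, res, next) => {
        // Passing an Error to next() skips normal middleware
        // and hands control to the error-handling middleware below
        next(new Error('Demo failure'));
      });

      // Error handlers are recognised by their four-argument signature
      app.use((err, req, res, next) => {
        console.error(err.message);
        res.status(500).send('Something went wrong!');
      });

      app.listen(3000);
      ```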

      4. Built-in Middleware: built-in middleware functions (such as express.json() and express.static()) run in the order you load them with app.use(), like any other middleware.

      ```javascript
      // Example of using built-in middleware
      app.use(express.json()); // Body parsing middleware
      app.use(express.static('public')); // Serving static files
      ```

      The order in which you invoke these built-in middleware functions matters.
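
      A common way this ordering bites in practice: a body-parsing middleware must be registered before any route that reads req.body. A minimal sketch (the /echo route is a hypothetical example, not from the quoted docs):

      ```javascript
      const express = require('express');
      const app = express();

      // Must come before the route below; otherwise req.body is undefined
      app.use(express.json());

      app.post('/echo', (req, res) => {
        // Echo back the parsed JSON body
        res.json(req.body);
      });

      app.listen(3000);
      ```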

      5. Third-party Middleware: third-party middleware is also executed in the order you apply it.

      ```javascript
      // Example of using third-party middleware
      const passport = require('passport');
      app.use(passport.initialize()); // Initialize passport middleware
      ```

      The order in which you load third-party middleware influences their execution.

      In summary, the sequence of middleware execution is based on the order in which you define and apply them. This allows you to control the flow of the request through the various stages of processing in your Express application.

      Sure, let's break down the concept of middleware in Express with simple words and examples.

      1. What is Middleware?

      Middleware is like a set of functions in Express that can do various tasks during the processing of a web request. These tasks can include modifying the request or response, executing some code, or controlling the flow of the request.

      2. Types of Middleware:

      a. Application-level Middleware:

      These are applied to the entire Express application. They are executed for every incoming request.

      ```javascript
      // Example of application-level middleware
      app.use((req, res, next) => {
        console.log('This runs for every request!');
        next(); // Move on to the next middleware
      });
      ```

      b. Router-level Middleware:

      Similar to application-level middleware but only applies to specific routes.

      ```javascript
      // Example of router-level middleware
      const router = express.Router();

      router.use((req, res, next) => {
        console.log('This runs for routes under this router!');
        next();
      });

      router.get('/route1', (req, res) => {
        // Route-specific logic
      });
      ```

      c. Error-handling Middleware:

      Special middleware to handle errors during the request-response cycle.

      ```javascript
      // Example of error-handling middleware
      app.use((err, req, res, next) => {
        console.error(err);
        res.status(500).send('Something went wrong!');
      });
      ```

      d. Built-in Middleware:

      Express has some built-in middleware for common tasks.

      ```javascript
      // Example of built-in middleware (parsing JSON in the request body)
      app.use(express.json());
      ```

      e. Third-party Middleware:

      Additional middleware created by third-party developers to extend Express functionality.

      ```javascript
      // Example of using third-party middleware (e.g., for handling authentication)
      const passport = require('passport');
      app.use(passport.initialize());
      ```

      3. How Middleware Works:

      • Each middleware function has access to the request (req) and response (res) objects.
      • They can perform tasks, modify the request or response, and decide whether to end the request-response cycle or pass control to the next middleware.
      • If a middleware function doesn't end the cycle, it must call next() to pass control to the next middleware in line (a minimal sketch of this rule follows this list).
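
      A minimal sketch of that rule, assuming an existing app object as in the earlier fragments; the logged message is illustrative only:

      ```javascript
      app.use((req, res, next) => {
        console.log('Checking something for every request...');
        // Without this call (and without sending a response),
        // the request would be left hanging and the client would wait forever
        next();
      });
      ```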

      4. Middleware Loading:

      • You can load middleware at the application or router level.
      • Middleware can be loaded with an optional mount path, defining where it should be applied.

      ```javascript
      // Example of loading middleware with a mount path
      app.use('/admin', adminMiddleware);
      ```

      This means that adminMiddleware will only be executed for routes starting with '/admin'.

      In summary, middleware in Express is like a chain of functions that can modify requests and responses. They're powerful for handling various tasks and keeping your code organized.

    1. Binary consisting of 0s and 1s make it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values.

      Booleans are what I learned in my CSE class too. This kind of code comes out as true or false in a statement. I have only seen it applied to math, but I didn't know what else it could do; the sketch below shows one answer.
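
      To answer that open question with an example: booleans are not just for math; they decide which parts of a program run at all. A minimal JavaScript sketch (the variable names are made up for illustration):

      ```javascript
      const isLoggedIn = true;
      const hasAdminRights = false;

      // Booleans combine with logical operators (&& means "and")
      const canEditSettings = isLoggedIn && hasAdminRights;

      // ...and control which branch of the program executes
      if (canEditSettings) {
        console.log('Showing settings page');
      } else {
        console.log('Access denied');
      }
      ```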

    1. WebTorrent is a streaming torrent client for node.js and the browser. YEP, THAT'S RIGHT. THE BROWSER. It's written completely in JavaScript – the language of the web – so the same code works in both runtimes.

      WebNative

      all you need is a browser
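
      The claim is easiest to appreciate in code. A minimal sketch based on WebTorrent's documented browser usage, assuming the webtorrent.min.js bundle has been loaded (e.g. from a CDN); the magnet URI is a placeholder you would replace:

      ```javascript
      // WebTorrent is available as a browser global once the bundle is loaded
      const client = new WebTorrent();

      const magnetURI = 'magnet:?xt=urn:btih:...'; // placeholder

      client.add(magnetURI, (torrent) => {
        // Find a video file in the torrent and stream it into the page
        const file = torrent.files.find((f) => f.name.endsWith('.mp4'));
        if (file) file.appendTo('body');
      });
      ```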

    1. Author Response

      The following is the authors’ response to the original reviews.

      eLife assessment:

      This important study represents a comprehensive computational analysis of Plasmodium falciparum gene expression, with a focus on var gene expression, in parasites isolated from patients; it assesses changes that occur as the parasites adapt to short-term in vitro culture conditions. The work provides technical advances to update a previously developed computational pipeline. Although the findings of the shifts in the expression of particular var genes have theoretical or practical implications beyond a single subfield, the results are incomplete and the main claims are only partially supported.

      The authors would like to thank the reviewers and editors for their insightful and constructive assessment. We particularly appreciate the statement that our work provides a technical advance of our computational pipeline given that this was one of our main aims. To address the editorial criticisms, we have rephrased and restructured the manuscript to ensure clarity of results and to support our main claims. For the same reason, we removed the var transcript differential expression analysis, as this led to confusion.

      Public Reviews:

      Reviewer #1:

      The authors took advantage of a large dataset of transcriptomic information obtained from parasites recovered from 35 patients. In addition, parasites from 13 of these patients were reared for 1 generation in vivo, 10 for 2 generations, and 1 for a third generation. This provided the authors with a remarkable resource for monitoring how parasites initially adapt to the environmental change of being grown in culture. They focused initially on var gene expression due to the importance of this gene family for parasite virulence, then subsequently assessed changes in the entire transcriptome. Their goal was to develop a more accurate and informative computational pipeline for assessing var gene expression and secondly, to document the adaptation process at the whole transcriptome level.

      Overall, the authors were largely successful in their aims. They provide convincing evidence that their new computational pipeline is better able to assemble var transcripts and assess the structure of the encoded PfEMP1s. They can also assess var gene switching as a tool for examining antigenic variation. They also documented potentially important changes in the overall transcriptome that will be important for researchers who employ ex vivo samples for assessing things like drug sensitivity profiles or metabolic states. These are likely to be important tools and insights for researchers working on field samples.

      One concern is that the abstract highlights "Unpredictable var gene switching..." and states that "Our results cast doubt on the validity of the common practice of using short-term cultured parasites...". This seems somewhat overly pessimistic with regard to var gene expression profiling and does not reflect the data described in the paper. In contrast, the main text of the paper repeatedly refers to "modest changes in var gene expression repertoire upon culture" or "relatively small changes in var expression from ex vivo to culture", and many additional similar assessments. On balance, it seems that transition to culture conditions causes relatively minor changes in var gene expression, at least in the initial generations. The authors do highlight that a few individuals in their analysis showed more pronounced and unpredictable changes, which certainly warrants caution for future studies but should not obscure the interesting observation that var gene expression remained relatively stable during transition to culture.

      Thank you for this comment. We were happy to modify the wording in the abstract to be consistent with the results presented, highlighting that modest but unpredictable var gene switching was observed while substantial changes were found in the core transcriptome. Moreover, any differences observed in the core transcriptome between ex vivo samples from naïve and pre-exposed patients are diminished after one cycle of cultivation, making inferences about parasite biology in vivo impossible.

      Therefore – in our opinion – the statement in the last sentence is well supported by the data presented.

      Line 43–47: “Modest but unpredictable var gene switching and convergence towards var2csa were observed in culture, along with differential expression of 19% of the core transcriptome between paired ex vivo and generation 1 samples. Our results cast doubt on the validity of the common practice of using short-term cultured parasites to make inferences about in vivo phenotype and behaviour.”

      Nevertheless, we would like to note that this study was in a unique position to assess changes at the individual patient level as we had successive parasite generations. This comparison is not done in most cross-sectional studies and therefore these small, unpredictable changes in the var transcriptome are missed.

      Reviewer #2:

      In this study, the authors describe a pipeline to sequence expressed var genes from RNA sequencing that improves on a previous one that they had developed. Importantly, they use this approach to determine how var gene expression changes with short-term culture. Their finding of shifts in the expression of particular var genes is compelling and casts some doubt on the comparability of gene expression in short-term culture versus var expression at the time of participant sampling. The authors appear to overstate the novelty of their pipeline, which should be better situated within the context of existing pipelines described in the literature.

      Other studies have relied on short-term culture to understand var gene expression in clinical malaria studies. This study indicates the need for caution in over-interpreting findings from these studies.

      The novel method of var gene assembly described by the authors needs to be appropriately situated within the context of previous studies. They neglect to mention several recent studies that present transcript-level novel assembly of var genes from clinical samples. It is important for them to situate their work within this context and compare and contrast it accordingly. A table comparing all existing methods in terms of pros and cons would be helpful to evaluate their method.

      We are grateful for this suggestion and agree that a table comparing the pros and cons of all existing methods would be helpful for the general reader and also highlight the key advantages of our new approach. A table comparing previous methods for var gene and transcript characterisation has been added to the manuscript and is referenced in the introduction (line 107).

      Author response table 1.

      Comparison of previous var assembly approaches based on DNA- and RNA-sequencing.

      Reviewer #3:

      This work focuses on the important problem of how to access the highly polymorphic var gene family using short-read sequence data. The approach that was most successful, and utilized for all subsequent analyses, employed a different assembler from their prior pipeline, and impressively, more than doubles the N50 metric.

      The authors then endeavor to utilize these improved assemblies to assess differential RNA expression of ex vivo and short-term cultured samples, and conclude that their results "cast doubt on the validity" of using short-term cultured parasites to infer in vivo characteristics. Readers should be aware that the various approaches to assess differential expression lack statistical clarity and appear to be contradictory. Unfortunately, there is no attempt to describe the rationale for the different approaches and how they might inform one another.

      It is unclear whether adjusting for life-cycle stage as reported is appropriate for the var-only expression models. The methods do not appear to describe what type of correction variable (continuous/categorical) was used in each model, and there is no discussion of the impact on var vs. core transcriptome results.

      We agree with the reviewer that the different methods and results of the var transcriptome analysis can be difficult to reconcile. To address this, we have included a summary table with a brief description of the rationale and results of each approach in our analysis pipeline.

      Author response table 2.

      Summary of the different levels of analysis performed to assess the effect of short-term parasite culturing on var and core gene expression, their rationale, method, results, and interpretation.

      Additionally, the var transcript differential expression analysis was removed from the manuscript, because this study was in a unique position to perform a more focused analysis of var transcriptional changes across paired samples, meaning the per-patient approach was more suitable. This allowed for changes in the var transcriptome to be identified that would have gone unnoticed in the traditional differential expression analysis.

      We thank the reviewer for his highly important comment about adjusting for life cycle stage. Var gene expression is highly stage-dependent, so any quantitative comparison between samples does need adjustment for developmental stage. All life cycle stage adjustments were done using the mixture model proportions to be consistent with the original paper, described in the results and methods sections:

      • Line 219–221: “Due to the potential confounding effect of differences in stage distribution on gene expression, we adjusted for developmental stage determined by the mixture model in all subsequent analyses.”

      • Line 722–725: “Var gene expression is highly stage dependent, so any quantitative comparison between samples needs adjustment for developmental stage. The life cycle stage proportions determined from the mixture model approach were used for adjustment.“

      The rank-expression analysis did not have adjustment for life cycle stage as the values were determined as a percentage contribution to the total var transcriptome. The var group level and the global var gene expression analyses were adjusted for life cycle stage by including it as an independent variable, as described in the results and methods sections.

      Var group expression:

      • Line 321–326: “Due to these results, the expression of group A var genes vs. group B and C var genes was investigated using a paired analysis on all the DBLα (DBLα1 vs DBLα0 and DBLα2) and NTS (NTSA vs NTSB) sequences assembled from ex vivo samples and across multiple generations in culture. A linear model was created with group A expression as the response variable, the generation and life cycle stage as independent variables and the patient information included as a random effect. The same was performed using group B and C expression levels.“

      • Line 784–787: “DESeq2 normalisation was performed, with patient identity and life cycle stage proportions included as covariates and differences in the amounts of var transcripts of group A compared with groups B and C assessed (Love et al., 2014). A similar approach was repeated for NTS domains.”

      Global var gene expression:

      • Line 342–347: “A linear model was created (using only paired samples from ex vivo and generation 1) (Supplementary file 1) with proportion of total gene expression dedicated to var gene expression as the response variable, the generation and life cycle stage as independent variables and the patient information included as a random effect. This model showed no significant differences between generations, suggesting that differences observed in the raw data may be a consequence of small changes in developmental stage distribution in culture.”

      • Line 804–806: “Significant differences in total var gene expression were tested by constructing a linear model with the proportion of gene expression dedicated to var gene expression as the response variable, the generation and life cycle stage as independent variables and the patient identity included as a random effect.“

      The analysis of the conserved var gene expression was adjusted for life cycle stage:

      • Line 766–768: “For each conserved gene, Salmon normalised read counts (adjusted for life cycle stage) were summed and expression compared across the generations using a pairwise Wilcoxon rank test.”

      And life cycle stage estimates were included as covariates in the design matrix for the domain differential expression analysis:

      • Line 771–773: “DESeq2 was used to test for differential domain expression, with five expected read counts in at least three patient isolates required, with life cycle stage and patient identity used as covariates.”

      Reviewer #1:

      1. In the legend to Figure 1, the authors cite "Deitsch and Hviid, 2004" for the classification of different var gene types. This is not the best reference for this work. Better citations would be Kraemer and Smith, Mol Micro, 2003 and Lavstsen et al, Malaria J, 2003.

      We agree and have updated the legend in Figure 1 with these references, consistent with the references cited in the introduction.

      1. In Figures 2 and 3, each of the boxes in the flow charts are largely filled with empty space while the text is nearly too small to read. Adjusting the size of the text would improve legibility.

      We have increased the size of the text in these figures.

      1. My understanding of the computational method for assessing global var gene expression indicates an initial step of identifying reads containing the amino acid sequence LARSFADIG. It is worth noting that VAR2CSA does not contain this motif. Will the pipeline therefore miss expression of this gene, and if so, how does this affect the assessment of global var gene assessment? This seems relevant given that the authors detect increased expression of var2csa during adaptation to culture.

      To address this question, we have added an explanation in the methods section to clarify our analysis. Var2csa was not captured in the global var gene expression analysis, but was analysed separately because of its unique properties (conservation, proposed role in regulating var gene switching, slightly divergent timing of expression, translational repression).

      • Line 802/3: “Var2csa does not contain the LARSFADIG motif, hence this quantitative analysis of global var gene expression excluded var2csa (which was analysed separately).”
      1. In Figures 4 and 7, panels a and b display virtually identical PCA plots, with the exception that panel A displays more generations. Why are both panels included? There doesn't appear to be any additional information provided by panel B.

      We agree and have removed Figure 7b for the core transcriptome PCA as it did not provide any new information. The var transcript differential analysis (displayed in Figure 4) has been removed from the manuscript.

      1. On line 560-567, the authors state "However, the impact of short-term culture was the most apparent at the var transcript level and became less clear at higher levels." What are the high levels being referred to here?

      We have replaced this sentence to make it clearer what the different levels are (global var gene expression, var domain and var type).

      • Line 526/7: “However, the impact of short-term culture was the most apparent at the var transcript level and became less clear at the var domain, var type and global var gene expression level.”

      Reviewer #2:

      The authors make no mention or assessment of previously published var gene assembly methods from clinical samples that focus on genomic or transcriptomic approaches. These include:

      https://pubmed.ncbi.nlm.nih.gov/28351419/

      https://pubmed.ncbi.nlm.nih.gov/34846163/

      These methods should be compared to the method for var gene assembly outlined by the co-authors, especially as the authors say that their method "overcomes previous limitations and outperforms current methods" (128-129). The second reference above appears to be a method to measure var expression in clinical samples and so should be particularly compared to the approach outlined by the authors.

      Thank you for pointing this out. We have included the second reference in the introduction of our revised manuscript, where we refer to var assembly and quantification from RNA-sequencing data. We abstained from including the first paper in this paragraph (Dara et al., 2017) as it describes a var gene assembly pipeline and not a var transcript assembly pipeline.

      • Line 101–105: “While approaches for var assembly and quantification based on RNA-sequencing have recently been proposed (Wichers et al., 2021; Stucke et al., 2021; Andrade et al., 2020; TonkinHill et al., 2018, Duffy et al., 2016), these still produce inadequate assembly of the biologically important N-terminal domain region, have a relatively high number of misassemblies and do not provide an adequate solution for handling the conserved var variants (Table S1).”

      Additionally, we have updated the manuscript with a table (Table S1) comparing these two methods plus other previously used var transcript/gene assembly approaches (see comment to the public reviews).

      But to address this particular comment in more detail, the first paper (Dara et al., 2017) is a var gene assembly pipeline and not a var transcript assembly pipeline. It is based on assembling var exon 1 from unfinished whole genome assemblies of clinical samples and requires a prior step for filtering out human DNA. The authors used two different assemblers, Celera for short reads (which is no longer maintained) and Sprai for long reads (>2000bp), but found that Celera performed worse than Sprai, and subsequently used Sprai assemblies. Therefore, this method does not appear to be suitable for assembling short reads from RNA-seq.

      The second paper (Stucke et al. 2021) focusses more on enriching for parasite RNA, which precedes assembly. The capture method they describe would complement downstream analysis of var transcript assembly with our pipeline. Their assembly pipeline is similar to our pipeline as they also performed de novo assembly on all P. falciparum mapping and non-human mapping reads and used the same assembler (but with different parameters). They clustered sequences using the same approach but at 90% sequence identity as opposed to 99% sequence identity using our approach. Then, Stucke et al. use 500nt as a cut-off as opposed to the more stringent filtering approach used in our approach. They annotated their de novo assembled transcripts with the known amino acid sequences used in their design of the capture array; our approach does not assume prior information on the var transcripts. Finally, their approach was validated only for its ability to recover the most highly expressed var transcript in 6 uncomplicated malaria samples, and they did not assess mis-assemblies in their approach.

      For the methods (619–621), were erythrocytes isolated by Ficoll gradient centrifugation at the time of collection or later?

      We have updated the methods section to clarify this.

      • Line 586–588: “Blood was drawn and either immediately processed (#1, #2, #3, #4, #11, #12, #14, #17, #21, #23, #28, #29, #30, #31, #32) or stored overnight at 4°C until processing (#5, #6, #7, #9, #10, #13, #15, #16, #18, #19, #20, #22, #24, #25, #26, #27, #33).”

      Was the current pipeline and assembly method assessed for var chimeras? This should be described.

      Yes, this was quantified in the Pf 3D7 dataset and also assessed in the German traveler dataset. For the 3D7 dataset it is described in the result section and Figure S1.

      • Line 168–174: “However, we found high accuracies (> 0.95) across all approaches, meaning the sequences we assembled were correct (Figure 2 – Figure supplement 1b). The whole transcript approach also performed the best when assembling the lower expressed var genes (Figure 2 – Figure supplement 1e) and produced the fewest var chimeras compared to the original approach on P. falciparum 3D7. Fourteen misassemblies were observed with the whole transcript approach compared to 19 with the original approach (Table S2). This reduction in misassemblies was particularly apparent in the ring-stage samples.” - Figure S1:

      Author response image 1.

      Performance of novel computational pipelines for var assembly on Plasmodium falciparum 3D7: The three approaches (whole transcript: blue, domain approach: orange, original approach: green) were applied to a public RNA-seq dataset (ENA: PRJEB31535) of the intra-erythrocytic life cycle stages of 3 biological replicates of cultured P. falciparum 3D7, sampled at 8-hour intervals up until 40 hrs post infection (hpi) and then at 4-hour intervals up until 48 hpi (Wichers et al., 2019). Boxplots show the data from the 3 biological replicates for each time point in the intra-erythrocytic life cycle: a) alignment scores for the dominantly expressed var gene (PF3D7_0712600), b) accuracy scores for the dominantly expressed var gene (PF3D7_0712600), c) number of contigs required to assemble the dominant var gene (PF3D7_0712600), d) alignment scores for a middle-ranking expressed var gene (PF3D7_0937800), e) alignment scores for the lowest expressed var gene (PF3D7_0200100). The first best blast hit (significance threshold = 1e-10) was chosen for each contig. The alignment score was used to evaluate each method; it represents √(accuracy × recovery), where accuracy is the proportion of bases that are correct in the assembled transcript and recovery reflects what proportion of the true transcript was assembled. Assembly completeness of the dominant var gene (PF3D7_0712600, length = 6648 nt) for the three approaches was assessed for each biological replicate: f) biological replicate 1, g) biological replicate 2, h) biological replicate 3. Dotted lines represent the start and end of the contigs required to assemble the var gene. Red bars represent assembled sequences relative to the dominantly expressed whole var gene sequence, where we know the true sequence (termed “reference transcript”).

      For the ex vivo samples, this has been discussed in the result section and now we also added this information to Table 1.

      • Line 182/3: “Remarkably, with the new whole transcript method, we observed a significant decrease (2 vs 336) in clearly misassembled transcripts with, for example, an N-terminal domain at an internal position.”

      • Table 1:

      Author response table 3.

      Statistics for the different approaches used to assemble the var transcripts. Var assembly approaches were applied to malaria patient ex vivo samples (n=32) from (Wichers et al., 2021) and statistics determined. Given are the total number of assembled var transcripts longer than 500 nt containing at least one significantly annotated var domain, the maximum length of the longest assembled var transcript in nucleotides and the N50 value, respectively. The N50 is defined as the sequence length of the shortest var contig, with all var contigs greater than or equal to this length together accounting for 50% of the total length of concatenated var transcript assemblies. Misassemblies gives the number of misassemblies for each approach. **The number of misassemblies was not determined for the domain approach due to its poor performance in other metrics.

      Line 432: "the core gene transcriptome underwent a greater change relative to the var transcriptome upon transition to culture." Can this be shown statistically? It's unclear whether the difference in the sizes of the respective pools of the core genome and the var genes may account for this observation.

      We found 19% of the core transcriptome to be differentially expressed. The per patient var transcript analysis revealed individually highly variable but generally rather subtle changes in the var transcriptome. The different methods for assessing this make it difficult to statistically compare these two different results.

      The feasibility of this approach for field samples should be discussed in the Discussion.

      In the original manuscript we reflected on this already several times in the discussion (e.g., line 465/6; line 471–475; line 555–568). We now have added another two sentences at the end of the paragraph starting in line 449 to address this point. It reads now:

      • Line 442–451: “Our new approach used the most geographically diverse reference of var gene sequences to date, which improved the identification of reads derived from var transcripts. This is crucial when analysing patient samples with low parasitaemia where var transcripts are hard to assemble due to their low abundancy (Guillochon et al., 2022). Our approach has wide utility due to stable performance on both laboratory-adapted and clinical samples. Concordance in the different var expression profiling approaches (RNA-sequencing and DBLα-tag) on ex vivo samples increased using the new approach by 13%, when compared to the original approach (96% in the whole transcript approach compared to 83% in Wichers et al., 2021). This suggests the new approach provides a more accurate method for characterising var genes, especially in samples collected directly from patients. Ultimately, this will allow a deeper understanding of relationships between var gene expression and clinical manifestations of malaria.”

      MINOR

      The plural form of PfEMP1 (PfEMP1s) is inconsistently used throughout the text.

      Corrected.

      404-405: statistical test for significance?

      Thank you for this suggestion. We have done two comparisons between the original analysis from Wichers et al., 2021 and our new whole transcript approach to test concordance of the RNA-seq approaches with the DBLα-tag approach using paired Wilcoxon tests. These comparisons suggest that our new approach has significantly increased concordance with DBLα-tag data and might be better at capturing all expressed DBLα domains than the original analysis (and the DBLα-tag approach), although the latter difference was not statistically significant. We describe this now in the results section.

      • Line 352–361: “Overall, we found a high agreement between the detected DBLα-tag sequences and the de novo assembled var transcripts. A median of 96% (IQR: 93–100%) of all unique DBLα-tag sequences detected with >10 reads were found in the RNA-sequencing approach. This is a significant improvement on the original approach (p= 0.0077, paired Wilcoxon test), in which a median of 83% (IQR: 79–96%) was found (Wichers et al., 2021). To allow for a fair comparison of the >10 reads threshold used in the DBLα-tag approach, the upper 75th percentile of the RNA-sequencing-assembled DBLα domains were analysed. A median of 77.4% (IQR: 61–88%) of the upper 75th percentile of the assembled DBLα domains were found in the DBLα-tag approach. This is a lower median percentage than the median of 81.3% (IQR: 73–98%) found in the original analysis (p= 0.28, paired Wilcoxon test) and suggests the new assembly approach is better at capturing all expressed DBLα domains.”

      Figure 4: The letters for the figure panels need to be added.

      The figure has been removed from the manuscript.

      Reviewer #3:

      It is difficult from Table S2 to determine how many unique var transcripts would have enough coverage to be potentially assembled from each sample. It seems unlikely that 455 distinct vars (~14 per sample) would be expressed at a detectable level for assembly. Why not DNA-sequence these samples to get the full repertoire for comparison to RNA? Why would so many distinct transcripts be yielded from fairly synchronous samples?

      We know from controlled human malaria infections of malaria-naive volunteers that most var genes present in the genomic repertoire of the parasite strain are expressed at the onset of the human blood phase (heterogeneous var gene expression) (Wang et al., 2009; Bachmann et al., 2016; Wichers-Misterek et al., 2023). This pattern shifts to a more restricted, homogeneous var expression pattern in semi-immune individuals (expression of few variants) depending on the degree of immunity (Bachmann et al., 2019).

      Author response image 2.

      In this cohort, 15 first-time infections are included, which should also possess more heterogeneous var gene expression in comparison to the pre-exposed individuals, and indeed such a trend is already seen in the number of different DBLα-tag clusters found in both patient groups (see figure panel from Wichers et al., 2021: blue = first-time infections; grey = pre-exposed). Moreover, Warimwe et al. 2013 have shown that asymptomatic infections have a more homogeneous var expression in comparison to symptomatic infections. Therefore, we expect that parasites from symptomatic infections have a heterogeneous var expression pattern with multiple var gene variants expressed, which we could assemble due to our high read depth and our improved var assembly pipeline for even low expressed variants.

      Moreover, the distinct transcripts found in the RNA-seq approach were confirmed with the DBLα-tag data. In our opinion, previous approaches may have underestimated the complexity of the var transcriptome in less immune individuals.

      Mapping reads to these 455 putative transcripts and using this count matrix for differential expression analysis seems very unlikely to produce reliable results. As acknowledged on line 327, many reads will be mis-mapped, and perhaps most challenging is that most vars will not be represented in most samples. In other words, even if mapping were somehow perfect, one would expect a sparse matrix that would not be suitable for statistical comparisons between groups. This is likely why the per-patient transcript analysis doesn't appear to be consistent. I would recommend the authors remove the DE sections utilizing this approach, or add convincing evidence that the count matrix is useable.

      We agree that this is a general issue of var differential expression analysis. Therefore, we have removed the var differential expression analysis from this manuscript as the per patient approach was more appropriate for the paired samples. We validated different mapping strategies (new Figure S6) and included a paragraph discussing the problem in the result section:

      • Line 237–255: “In the original approach of Wichers et al., 2021, the non-core reads of each sample used for var assembly were mapped against a pooled reference of assembled var transcripts from all samples, as a preliminary step towards differential var transcript expression analysis. This approach returned a small number of var transcripts which were expressed across multiple patient samples (Figure 3 – Figure supplement 2a). As genome sequencing was not available, it was not possible to know whether there was truly overlap in var genomic repertoires of the different patient samples, but substantial overlap was not expected. Stricter mapping approaches (for example, excluding transcripts shorter than 1500nt) changed the resulting var expression profiles and produced more realistic scenarios where similar var expression profiles were generated across paired samples, whilst there was decreasing overlap across different patient samples (Figure 3 – Figure supplement 2b,c). Given this limitation, we used the paired samples to analyse var gene expression at an individual subject level, where we confirmed the MSP1 genotypes and alleles were still present after short-term in vitro cultivation. The per patient approach showed consistent expression of var transcripts within samples from each patient but no overlap of var expression profiles across different patients (Figure 3 – Figure supplement 2d). Taken together, the per patient approach was better suited for assessing var transcriptional changes in longitudinal samples. It has been hypothesised that more conserved var genes in field isolates increase parasite fitness during chronic infections, necessitating the need to correctly identify them (Dimonte et al., 2020, Otto et al., 2019). Accordingly, further work is needed to optimise the pooled sample approach to identify truly conserved var transcripts across different parasite isolates in cross-sectional studies.” - Figure S6:

      Author response image 3.

      Var expression profiles across different mapping approaches. Different mapping approaches were used to quantify the var expression profiles of each sample (ex vivo (n=13), generation 1 (n=13), generation 2 (n=10) and generation 3 (n=1)). In the pooled sample approach, all significantly assembled var transcripts (at least 1500 nt and containing 3 significantly annotated var domains) across samples were combined into a reference and redundancy was removed using cd-hit (at sequence identity = 99%) (a–c). The non-core reads of each sample were mapped to this pooled reference using a) Salmon, b) bowtie2 filtering for uniquely mapping paired reads, and c) bowtie2 filtering for uniquely mapping paired reads with a MAPQ > 20. d) The per patient approach was applied. For each patient, the paired ex vivo and in vitro samples were analysed. The assembled var transcripts (at least 1500 nt and containing 3 significantly annotated var domains) across all the generations for a patient were combined into a reference, redundancy was removed using cd-hit (at sequence identity = 99%), and expression was quantified using Salmon. Pie charts show the var expression profile with the relative size of each slice representing the relative percentage of total var gene expression of each var transcript. Different colours represent different assembled var transcripts, with the same colour code used across a–d.

      For future cross-sectional studies a per patient analysis that attempts to group per patient assemblies on some unifying structure (e.g., domain, homology blocks, domain cassettes etc) should be performed.

      Line 304. I don't understand the rationale for comparing naïve vs. prior-exposed individuals at ex-vivo and gen 1 timepoints to provide insights into how reliable cultured parasites are as a surrogate for var expression in vivo. Further, the next section (per patient) appears to confirm the significant limitation of the 'all sample analysis' approach. The conclusion on line 319 is not supported by the results reported in figures S9a and S9b, nor is the bold conclusion in the abstract about "casting doubt" on experiments utilizing culture-adapted parasites.

      We have removed this comparison from the manuscript due to the inconsistencies with the var per patient approach. However, the conclusion in the abstract has been rephrased to reflect the fact that we observed 19% of the core transcriptome differentially expressed within one cycle of cultivation.

      Line 372/391 (and for the other LMM descriptions). I believe you mean to say response variable, rather than explanatory variable. Explanatory variables are on the right hand side of the equation.

      Thank you for spotting this inaccuracy, we changed it to “response variable” (line 324, line 343, line 805).

      Line 467. Similar to line 304, why would comparisons of naïve vs. prior-exposed be informative about surrogates for in vivo studies? Without a gold-standard for what should be differentially expressed between naïve and prior-exposed in vivo, it doesn't seem prudent to interpret a drop in the number of DE genes for this comparison in generation 1 as evidence that biological signal for this comparison is lost. What if the generation 1 result is actually more reflective of the true difference in vivo, but the ex vivo samples are just noisy? How do we know? Why not just compare ex vivo vs generation 1/2 directly (as done in the first DE analysis), and then you can comment on the large number of changes as samples are less and less proximal to in vivo?

      In the original paper (Wichers et al., 2021), there were differences between the core transcriptome of naïve vs previously exposed patients. However, these differences appeared to diminish in vitro, suggesting the in vivo core transcriptome is not fully maintained in vitro.

      We have added a sentence explaining the reasoning behind this analysis in the results section:

      • Lines 414–423: “In the original analysis of ex vivo samples, hundreds of core genes were identified as significantly differentially expressed between pre-exposed and naïve malaria patients. We investigated whether these differences persisted after in vitro cultivation. We performed differential expression analysis comparing parasite isolates from naïve (n=6) vs pre-exposed (n=7) patients, first between their ex vivo samples, and then between the corresponding generation 1 samples. Interestingly, when using the ex vivo samples, we observed 206 core genes significantly upregulated in naïve patients compared to pre-exposed patients (Figure 7 – Figure supplement 3a). Conversely, we observed no differentially expressed genes in the naïve vs pre-exposed analysis of the paired generation 1 samples (Figure 7 – Figure supplement 3b). Taken together with the preceding findings, this suggests one cycle of cultivation shifts the core transcriptomes of parasites to be more alike each other, diminishing inferences about parasite biology in vivo.”

      Overall, I found the many DE approaches very frustrating to interpret coherently. If not dropped in revision, the reader would benefit from a substantial effort to clarify the rationale for each approach, and how each result fits together with the other approaches and builds to a concise conclusion.

We agree that the manuscript contains many different complex layers of analysis and that it is therefore important to explain the rationale for each approach. We have therefore now included the summary Table 3 (see comment to public review). Additionally, we have removed the var transcript differential expression analysis due to its limitations, which we hope has already streamlined our manuscript.

    1. Author Response

      Reviewer 1 (Public Review):

      1. With respect to the predictions, the authors propose that the subjects, depending on their linguistic background and the length of the tone in a trial, can put forward one or two predictions. The first is a short-term prediction based on the statistics of the previous stimuli and identical for both groups (i.e. short tones are expected after long tones and vice versa). The second is a long-term prediction based on their linguistic background. According to the authors, after a short tone, Basque speakers will predict the beginning of a new phrasal chunk, and Spanish speakers will predict it after a long tone.

      In this way, when a short tone is omitted, Basque speakers would experience the violation of only one prediction (i.e. the short-term prediction), but Spanish speakers will experience the violation of two predictions (i.e. the short-term and long-term predictions), resulting in a higher amplitude MMN. The opposite would occur when a long tone is omitted. So, to recap, the authors propose that subjects will predict the alternation of tone durations (short-term predictions) and the beginning of new phrasal chunks (long-term predictions).

The problem with this is that subjects are also likely to predict the completion of the current phrasal chunk. In speech, phrases are seldom left incomplete. In Spanish it is very unlikely to hear a function word that is not followed by a content word (and the opposite happens in Basque). On the contrary, after the completion of a phrasal chunk, a speaker might stop talking and a silence might follow, instead of the beginning of a new phrasal chunk.

      Considering that the completion of a phrasal chunk is more likely than the beginning of a new one, the prior endowed to the participants by their linguistic background should make us expect a pattern of results actually opposite to the one reported here.

      Response: We acknowledge the plausibility of the hypothesis advanced by Reviewer #1. We would like to further clarify the rationale that led us to predict that the hypothesized long-term predictions should manifest at the onset of (and not within) a “phrasal chunk”. The hypothesis does not directly concern the probability of a short event to follow a long one (or the other way around), which to our knowledge has not been systematically quantified in previous cross-linguistic studies. Rather, it concerns how the auditory system forms higher-level auditory chunks based on the rhythmic properties of the native language, which is what the previous behavioral studies on perceptual grouping have addressed (e.g., Iversen 2008; Molnar et al. 2014; Molnar et al. 2016). When presented with sequences of two tones alternating in duration, Spanish speakers typically report perceiving the auditory stream as a repetition of short-long chunks separated by a pause, while speakers of Basque usually report the opposite long-short grouping bias. These results suggest that the auditory system performs a chunking operation by grouping pairs of tones into compressed, higher-level auditory units (often perceived as a single event). The way two constituent tones are combined depends on linguistic experience. Based on this background, we hypothesized the presence of (i) a short-term system that merely encodes a repetition of alternations rule and predicts transitions from one constituent tone to the other (a → b → a → b, etc.); (ii) a long-term system that encodes a repetition of concatenated alternations rule and predicts transitions from one high-level unit to the other (ab → ab, etc.). Under this view, we expect predictions based on the long-term system to be stronger at the onset of (rather than within) high-level units and therefore omissions of the first constituent tone to elicit larger responses than omissions of the second constituent tone.

      In other words, the omission of the onset tone would reflect the omission of the whole chunk. On the other hand, the omission of the internal tone would be better handled by the short-term system, involved in processing the low-level structure of our sequences.

      A similar concern was also raised by Reviewer #2. We will include the view proposed by Reviewer #1 and Reviewer #2 in the updated version of the manuscript.

2. The authors report an interaction effect that modulates the amplitude of the omission response, but caveats make the interpretation of this effect somewhat uncertain. The authors report a widespread omission response, which resembles the classical mismatch response (in MEG) with strong activations in sensors over temporal regions. Instead, the interaction found is circumscribed to four sensors that do not overlap with the peaks of activation of the omission response.

      Response: We appreciate that all three reviewers agreed on the robustness of the data analysis pipeline. The approach employed to identify the presence of an interaction effect was indeed conservative, using a non-parametric test on combined gradiometers data, no a priori assumptions regarding the location of the effect, and small cluster thresholds (cfg.clusteralpha = 0.05) to enhance the likelihood of detecting highly localized clusters with large effect sizes. This approach led to the identification of the cluster illustrated in Figure 2c, where the interaction effect is evident. The fact that this interaction effect arises in a relatively small cluster of sensors does not alter its statistical robustness. The only partial overlap of the cluster with the activation peaks might simply reflect the fact that distinct sources contribute to the generation of the omission-MMN, which has been demonstrated in numerous prior studies (e.g., Zhang et al., 2018; Ross & Hamm, 2020).

      Furthermore, the boxplot in Figure 2E suggests that part of the interaction effect might be due to the presence of two outliers (if removed, the effect is no longer significant). Overall, it is possible that the reported interaction is driven by a main effect of omission type which the authors report, and find consistently only in the Basque group (showing a higher amplitude omission response for long tones than for short tones). Because of these points, it is difficult to interpret this interaction as a modulation of the omission response.

      Response: The two participants mentioned by Reviewer #1, despite being somewhat distant from the rest of the group, are not outliers according to the standard Tukey’s rule. As shown in Author response image 1 below, no participant fell outside the upper (Q3+1.5xIQR) and lower whiskers (Q1-1.5xIQR) of the boxplot.

      Author response image 1.

      The presence of a main effect of omission type does not impact the interpretation of the interaction, especially considering that these effects emerge over distinct clusters of channels.

      The code to generate Author response image 1 and the corresponding statistics have been added to the script “analysis_interaction_data.R” in the OSF folder (https://osf.io/6jep8/).

      It should also be noted that in the source analysis, the interaction only showed a trend in the left auditory cortex, but in its current version the manuscript does not report the statistics of such a trend.

      Response: Our interpretation of the results for the present study is mainly driven by the effect observed on sensor-level data, which is statistically robust. The source modeling analyses (in non-invasive electrophysiology) provide a possible model of the candidate brain sources driving the effect observed at the sensor level. The source showing the interactive effect in our study is the left auditory cortex. More details and statistics will be provided in the reviewed version of the manuscript.

      Reviewer #2 (Public Review):

      1. Despite the evidence provided on neural responses, the main conclusion of the study reflects a known behavioral effect on rhythmic sequence perceptual organization driven by linguistic background (Molnar et al. 2016, particularly). Also, the authors themselves provide a good review of the literature that evidences the influence of long-term priors in neural responses related to predictive activity. Thus, in my opinion, the strength of the statements the authors make on the novelty of the findings may be a bit far-fetched in some instances.

Response: We will consider the suggestion of Reviewer #2 for the new version of the manuscript. Overall, we believe that the novelty of the current study lies in bridging together findings from two research fields - basic auditory neuroscience and cross-linguistic research - to provide evidence for a predictive coding model in the auditory system that uses long-term priors to make perceptual inferences.

2. Although the paradigm is well designed, I fail to see the grounding of the hypotheses laid out by the authors as framed under the predictive coding perspective. The study assumes that responses to an omission at the beginning of a perceptual rhythmic pattern will be stronger than at the end. I feel this is unjustified. If anything, omission responses should be larger when the gap occurs at the end of the pattern, as that would be where stronger expectations are placed: if in my language a short sound occurs after a long one, and I perceptually group tone sequences of alternating tone duration accordingly, when I hear a short sound I will expect a long one following; but after a long one, I don't necessarily need to expect a short one, as something else might occur.

      Response: A similar point was advanced by Reviewer #1. We tried to clarify our hypothesis (see above). We will consider including this interpretation in the updated version of the manuscript.

3. In this regard, it is my opinion that what is reflected in the data may be better accounted for (or at least, additionally) by a different neural response to an omission depending on the phase of an underlying attentional rhythm (in terms of Large and Jones rhythmic attention theory, for instance) and putative underlying entrained oscillatory neural activity (in terms of Lakatos' studies, for instance). Certainly, the fact that the aligned phase may differ depending on linguistic background is very interesting and would reflect the known behavioral effect.

      Response: We thank the reviewer for this comment, which is indeed very pertinent. Below are some comments highlighting our thoughts on this.

1) We will explore in more detail the possibility that the aligned phase may differ depending on linguistic background, which is indeed very interesting. However, we believe that even if a phase modulation by language experience is found, it would not negate the possibility that the group differences in the MMN are driven by different long-term predictions. Rather, since the hypothesized phase differences would be driven by long-term linguistic experience, phase entrainment may reflect a mechanism through which long-term predictions are carried. On this point, we agree with the Reviewer when they say that “this view would not change the impact of the results but add depth to their interpretation”.

2) Related to the point above: Although evoked responses and oscillations are often considered distinct electrophysiological phenomena, current evidence suggests that these phenomena are interconnected (e.g., Studenova et al., 2023). In our view, the hypotheses that the MMN reflects differences in phase alignment and long-term prediction errors are not mutually exclusive.

3) Despite the plausibility of the view proposed by reviewer #2, many studies in the auditory neuroscience literature putatively consider the MMN as an index of prediction error (e.g., Bendixen et al., 2012; Heilbron and Chait, 2018). There are good reasons to believe that in our study, too, the MMN reflects, at least in part, an error response.

      In the updated version of the manuscript, we will include a paragraph discussing the possibility that the reported group differences in the omission MMN might be partially accounted for by differences in neural entrainment to the rhythmic sound sequences.

      Reviewer #3 (Public Review):

The main weaknesses are the strength of the effects and generalisability. The sample size is also relatively small by today's standards, with N=20 in each group. Furthermore, the crucial effects are mostly in the .01 < P < .05 range, such as the crucial interaction (P = .03). It would be nice to see it replicated in the future, with more participants and other languages. It would also have been nice to see behavioural data that could be correlated with neural data to better understand the real-world consequences of the effect.

Response: We appreciate the positive feedback from Reviewer #3. Concerning the weakness highlighted: we agree with Reviewer #3 that it would be nice to see this study replicated in the future with larger sample sizes and a behavioral counterpart. Overall, we hope this work will lead to more studies using cross-linguistic/cultural comparisons to assess the effect of experience on neural processing. In the context of the present study, we believe that the lack of behavioral data does not undermine the main findings of this study, given the careful selection of the participants and the well-known robustness of the perceptual grouping effect (e.g., Iversen 2008; Yoshida et al., 2010; Molnar et al. 2014; Molnar et al. 2016). As highlighted by Reviewer #2, having Spanish- and Basque-dominant speakers "as a sample equates that in Molnar et al. (2016), and thus overcomes the lack of direct behavioral evidence for a difference in rhythmic grouping across linguistic groups. Molnar et al. (2016)'s evidence on the behavioral effect is compelling, and the evidence on neural signatures provided by the present study aligns with it."

      References

      1. Bendixen, A., SanMiguel, I., & Schröger, E. (2012). Early electrophysiological indicators for predictive processing in audition: a review. International Journal of Psychophysiology, 83(2), 120-131.

      2. Heilbron, M., & Chait, M. (2018). Great expectations: is there evidence for predictive coding in auditory cortex?. Neuroscience, 389, 54-73.

      3. Iversen, J. R., Patel, A. D., & Ohgushi, K. (2008). Perception of rhythmic grouping depends on auditory experience. The Journal of the Acoustical Society of America, 124(4), 2263-2271.

      4. Molnar, M., Lallier, M., & Carreiras, M. (2014). The amount of language exposure determines nonlinguistic tone grouping biases in infants from a bilingual environment. Language Learning, 64(s2), 45-64.

      5. Molnar, M., Carreiras, M., & Gervain, J. (2016). Language dominance shapes non-linguistic rhythmic grouping in bilinguals. Cognition, 152, 150-159.

      6. Ross, J. M., & Hamm, J. P. (2020). Cortical microcircuit mechanisms of mismatch negativity and its underlying subcomponents. Frontiers in Neural Circuits, 14, 13.

      7. Simon, J., Balla, V., & Winkler, I. (2019). Temporal boundary of auditory event formation: An electrophysiological marker. International Journal of Psychophysiology, 140, 53-61.

      8. Studenova, A. A., Forster, C., Engemann, D. A., Hensch, T., Sander, C., Mauche, N., ... & Nikulin, V. V. (2023). Event-related modulation of alpha rhythm explains the auditory P300 evoked response in EEG. bioRxiv, 2023-02.

      9. Yoshida, K. A., Iversen, J. R., Patel, A. D., Mazuka, R., Nito, H., Gervain, J., & Werker, J. F. (2010). The development of perceptual grouping biases in infancy: A Japanese-English cross-linguistic study. Cognition, 115(2), 356-361.

      10. Zhang, Y., Yan, F., Wang, L., Wang, Y., Wang, C., Wang, Q., & Huang, L. (2018). Cortical areas associated with mismatch negativity: A connectivity study using propofol anesthesia. Frontiers in Human Neuroscience, 12, 392.

1. async (recommended)

      ```javascript
      const bcrypt = require('bcrypt');
      const saltRounds = 10;
      const myPlaintextPassword = 's0/\/\P4$$w0rD';
      const someOtherPlaintextPassword = 'not_bacon';
      ```

      To hash a password:

      Technique 1 (generate a salt and hash on separate function calls):

      ```javascript
      bcrypt.genSalt(saltRounds, function(err, salt) {
        bcrypt.hash(myPlaintextPassword, salt, function(err, hash) {
          // Store hash in your password DB.
        });
      });
      ```

      Technique 2 (auto-gen a salt and hash):

      ```javascript
      bcrypt.hash(myPlaintextPassword, saltRounds, function(err, hash) {
        // Store hash in your password DB.
      });
      ```

      Note that both techniques achieve the same end-result.

      Certainly! The code you provided demonstrates how to use the bcrypt library in Node.js to hash passwords. The examples use both the technique of generating a salt and hashing in separate steps, as well as the technique of auto-generating a salt and hashing in a single step. Let's break it down:

      Technique 1: Generate a Salt and Hash on Separate Function Calls

```javascript
const bcrypt = require('bcrypt');
const saltRounds = 10;
const myPlaintextPassword = 's0/\/\P4$$w0rD';

// Step 1: Generate a salt
bcrypt.genSalt(saltRounds, function(err, salt) {
  if (err) {
    // Handle error
    console.error(err);
  } else {
    // Step 2: Hash the plaintext password with the generated salt
    bcrypt.hash(myPlaintextPassword, salt, function(err, hash) {
      if (err) {
        // Handle error
        console.error(err);
      } else {
        // Step 3: Store the hash in your password database
        // In a real application, you would typically store 'hash' in your database.
        console.log('Hashed Password:', hash);
      }
    });
  }
});
```

      Technique 2: Auto-generate a Salt and Hash

```javascript
const bcrypt = require('bcrypt');
const saltRounds = 10;
const myPlaintextPassword = 's0/\/\P4$$w0rD';

// Auto-generate a salt and hash the plaintext password
bcrypt.hash(myPlaintextPassword, saltRounds, function(err, hash) {
  if (err) {
    // Handle error
    console.error(err);
  } else {
    // Store the hash in your password database
    // In a real application, you would typically store 'hash' in your database.
    console.log('Hashed Password:', hash);
  }
});
```

      Explanation:

1. Generate a salt: bcrypt.genSalt(saltRounds, callback) generates a salt to be used during hashing. The saltRounds parameter defines the cost factor of the hashing process (a higher value means more secure but slower). The salt is then passed to the callback function.

      2. Hash with the generated salt: bcrypt.hash(myPlaintextPassword, salt, callback) hashes the plaintext password using the generated salt. The resulting hash is passed to the callback function.

      3. Auto-generate a salt and hash: bcrypt.hash(myPlaintextPassword, saltRounds, callback) auto-generates a salt and then immediately hashes the plaintext password with that salt. The resulting hash is passed to the callback function.

      Both techniques achieve the same end result: a hashed password. The auto-generate approach is more concise and is often preferred for simplicity, but it's essential to understand that either method is valid based on your application's needs. In a real-world scenario, you would typically store the resulting hash in your password database for later authentication checks.
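As an aside, recent versions of the bcrypt package also return a promise when the callback is omitted, which pairs naturally with async/await. A minimal sketch of Technique 2 in that style (assuming a promise-capable bcrypt version; the hashPassword wrapper is illustrative):

      ```javascript
      const bcrypt = require('bcrypt');

      const saltRounds = 10;
      const myPlaintextPassword = 's0/\/\P4$$w0rD';

      async function hashPassword(plaintext) {
        // With no callback supplied, bcrypt.hash returns a promise resolving to the hash
        const hash = await bcrypt.hash(plaintext, saltRounds);
        return hash; // Store this in your password DB
      }

      hashPassword(myPlaintextPassword)
        .then(hash => console.log('Hashed Password:', hash))
        .catch(err => console.error(err));
      ```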

2. A library to help you hash passwords. You can read about bcrypt in Wikipedia as well as in the following article: How To Safely Store A Password

      If You Are Submitting Bugs or Issues

      Please verify that the NodeJS version you are using is a stable version; unstable versions are currently not supported and issues created while using an unstable version will be closed. If you are on a stable version of NodeJS, please provide a sufficient code snippet or log files for installation issues. The code snippet does not require you to include confidential information. However, it must provide enough information so the problem can be replicable, or it may be closed without an explanation.

      Certainly! Let's cover the information you've requested:

      1. What is bcrypt and why is it used?

      • What is bcrypt? bcrypt is a library used for securely hashing passwords. Hashing is a one-way process that converts a plain-text password into a fixed-length string of characters. It's designed to be computationally intensive and slow, which makes it resistant to brute-force attacks.

      • Why is it used? Storing passwords as plain text is a security risk. If a database is compromised, attackers can easily access user passwords. Hashing passwords with bcrypt adds a layer of security by making it extremely difficult and time-consuming for attackers to reverse-engineer the original passwords. It includes features like salting (adding random data to each password before hashing) to further enhance security.
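To make the salting point concrete, the sketch below (illustrative values only) hashes the same password twice: because each call generates a fresh random salt, the two hash strings differ, yet bcrypt.compare still matches the password against either one, since the salt is stored inside the hash itself:

      ```javascript
      const bcrypt = require('bcrypt');

      const password = 'mySecurePassword'; // illustrative value

      bcrypt.hash(password, 10, function(err, hashA) {
        if (err) return console.error(err);
        bcrypt.hash(password, 10, function(err, hashB) {
          if (err) return console.error(err);
          // Each hash embeds its own random salt, so the strings differ
          console.log('Same strings?', hashA === hashB); // expected: false
          // compare reads the salt back out of the stored hash, so both still match
          bcrypt.compare(password, hashA, function(err, ok) {
            if (err) return console.error(err);
            console.log('Password matches hashA:', ok); // expected: true
          });
        });
      });
      ```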

      2. Syntax:

      • Installation: You need to install bcrypt using npm (Node Package Manager). Open your terminal and run: npm install bcrypt

• Usage in JavaScript/Node.js:

      ```javascript
      const bcrypt = require('bcrypt');

      // Example: Hashing a password
      const plainPassword = 'mySecurePassword';

      bcrypt.hash(plainPassword, 10, function(err, hash) {
        if (err) {
          console.error(err);
        } else {
          console.log('Hashed Password:', hash);
        }
      });

      // Example: Comparing a password with a hashed password
      const hashedPasswordFromDatabase = '$2b$10$...'; // Replace with an actual hashed password

      bcrypt.compare('userEnteredPassword', hashedPasswordFromDatabase, function(err, result) {
        if (err) {
          console.error(err);
        } else {
          console.log('Password Match:', result);
        }
      });
      ```

      • bcrypt.hash(plainPassword, saltRounds, callback): Hashes the plainPassword using the specified number of saltRounds and provides the result in the callback.

      • bcrypt.compare(userEnteredPassword, hashedPasswordFromDatabase, callback): Compares a user-entered password with a hashed password retrieved from the database and provides the result in the callback.

      3. Examples:

      Example 1: Hashing a Password

```javascript
const bcrypt = require('bcrypt');

const plainPassword = 'mySecurePassword';

bcrypt.hash(plainPassword, 10, function(err, hash) {
  if (err) {
    console.error(err);
  } else {
    console.log('Hashed Password:', hash);
  }
});
```

      Example 2: Comparing a Password

```javascript
const bcrypt = require('bcrypt');

const hashedPasswordFromDatabase = '$2b$10$...'; // Replace with an actual hashed password

bcrypt.compare('userEnteredPassword', hashedPasswordFromDatabase, function(err, result) {
  if (err) {
    console.error(err);
  } else {
    console.log('Password Match:', result);
  }
});
```

      In these examples, bcrypt.hash is used to hash a password, and bcrypt.compare is used to compare a user-entered password with a hashed password retrieved from the database. The callback functions handle errors and provide the results of the operations.
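Putting the two operations together, here is a hedged end-to-end sketch of a typical signup/login flow; registerUser, loginUser, and the in-memory db map are hypothetical stand-ins for a real storage layer:

      ```javascript
      const bcrypt = require('bcrypt');
      const saltRounds = 10;

      // Hypothetical in-memory "database" for illustration only
      const db = new Map();

      async function registerUser(username, plaintextPassword) {
        // Store only the hash, never the plaintext
        const hash = await bcrypt.hash(plaintextPassword, saltRounds);
        db.set(username, hash);
      }

      async function loginUser(username, plaintextPassword) {
        const storedHash = db.get(username);
        if (!storedHash) return false; // unknown user
        return bcrypt.compare(plaintextPassword, storedHash);
      }

      // Usage
      (async () => {
        await registerUser('alice', 'correct horse battery staple');
        console.log(await loginUser('alice', 'correct horse battery staple')); // true
        console.log(await loginUser('alice', 'wrong password')); // false
      })();
      ```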

1. Pre

      Pre middleware functions are executed one after another, when each middleware calls next.

      ```javascript
      const schema = new Schema({ /* ... */ });
      schema.pre('save', function(next) {
        // do stuff
        next();
      });
      ```

      In mongoose 5.x, instead of calling next() manually, you can use a function that returns a promise. In particular, you can use async/await.

      ```javascript
      schema.pre('save', function() {
        return doStuff()
          .then(() => doMoreStuff());
      });

      // Or, in Node.js >= 7.6.0:
      schema.pre('save', async function() {
        await doStuff();
        await doMoreStuff();
      });
      ```

      If you use next(), the next() call does not stop the rest of the code in your middleware function from executing. Use the early return pattern to prevent the rest of your middleware function from running when you call next().

      ```javascript
      const schema = new Schema({ /* ... */ });
      schema.pre('save', function(next) {
        if (foo()) {
          console.log('calling next!');
          // `return next();` will make sure the rest of this function doesn't run
          /* return */ next();
        }
        // Unless you comment out the `return` above, 'after next' will print
        console.log('after next');
      });
      ```

      Use Cases

      Middleware are useful for atomizing model logic. Here are some other ideas:

      • complex validation

      • removing dependent documents (removing a user removes all their blogposts)

      • asynchronous defaults

      • asynchronous tasks that a certain action triggers

      Errors in Pre Hooks

      If any pre hook errors out, mongoose will not execute subsequent middleware or the hooked function. Mongoose will instead pass an error to the callback and/or reject the returned promise. There are several ways to report an error in middleware:

      ```javascript
      schema.pre('save', function(next) {
        const err = new Error('something went wrong');
        // If you call `next()` with an argument, that argument is assumed to be
        // an error.
        next(err);
      });

      schema.pre('save', function() {
        // You can also return a promise that rejects
        return new Promise((resolve, reject) => {
          reject(new Error('something went wrong'));
        });
      });

      schema.pre('save', function() {
        // You can also throw a synchronous error
        throw new Error('something went wrong');
      });

      schema.pre('save', async function() {
        await Promise.resolve();
        // You can also throw an error in an `async` function
        throw new Error('something went wrong');
      });

      // later...

      // Changes will not be persisted to MongoDB because a pre hook errored out
      myDoc.save(function(err) {
        console.log(err.message); // something went wrong
      });
      ```

      Calling next() multiple times is a no-op. If you call next() with an error err1 and then throw an error err2, mongoose will report err1.

      Post

      Certainly! Let's break down the provided code snippets, explain the syntax, its use, and illustrate how to use them with examples.

      Syntax Explanation:

      Middleware Registration:

```javascript
schema.pre('save', function(next) {
  // middleware logic
  next();
});
```

      • schema.pre('save', ...): This registers a middleware function that runs before the 'save' operation on a Mongoose model.
      • function(next) {...}: This is the middleware function. It takes a next parameter, which is a function that you should call to proceed to the next middleware or the actual save operation.

      Using Promises:

```javascript
schema.pre('save', function() {
  return doStuff()
    .then(() => doMoreStuff());
});
```

      • Instead of manually calling next(), you can return a promise from the middleware. The next middleware or the save operation will proceed once the promise is resolved.

      Using async/await:

```javascript
schema.pre('save', async function() {
  await doStuff();
  await doMoreStuff();
});
```

      • With Node.js versions >= 7.6.0, you can use async/await to make asynchronous code more readable. Each await pauses execution until the asynchronous operation is completed.

      Early Return Pattern:

```javascript
schema.pre('save', function(next) {
  if (foo()) {
    console.log('calling next!');
    return next();
  }
  console.log('after next');
});
```

      • If a condition (foo()) is met, the return next(); statement prevents the rest of the middleware function from executing.

      Error Handling:

```javascript
schema.pre('save', function(next) {
  const err = new Error('something went wrong');
  next(err);
});

schema.pre('save', function() {
  return new Promise((resolve, reject) => {
    reject(new Error('something went wrong'));
  });
});

schema.pre('save', function() {
  throw new Error('something went wrong');
});

schema.pre('save', async function() {
  throw new Error('something went wrong');
});
```

      • Different ways to handle errors in middleware, including passing an error to next, returning a rejected promise, throwing synchronous errors, and throwing errors in an async function.

      Use in Simple Words with Examples:

      Middleware Example:

      Suppose you have a User model, and before saving a user, you want to hash their password.

```javascript
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const userSchema = new Schema({
  username: String,
  password: String,
});

userSchema.pre('save', async function() {
  // Assuming hashPassword is an asynchronous function
  this.password = await hashPassword(this.password);
});

const User = mongoose.model('User', userSchema);

const newUser = new User({
  username: 'john_doe',
  password: 'my_secure_password',
});

newUser.save(); // The 'save' middleware will automatically hash the password before saving.
```
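The example above assumes a hashPassword helper; a minimal sketch of such a helper (the name and cost factor are illustrative, not part of Mongoose) could reuse the bcrypt library covered earlier:

      ```javascript
      const bcrypt = require('bcrypt');

      // Hypothetical helper assumed by the pre-save hook above
      async function hashPassword(plaintext) {
        const saltRounds = 10;
        return bcrypt.hash(plaintext, saltRounds); // resolves to the salted hash
      }
      ```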

      Error Handling Example:

      Suppose you want to validate the email format before saving a Post model.

```javascript
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const postSchema = new Schema({
  title: String,
  content: String,
  email: String,
});

postSchema.pre('save', function(next) {
  const emailRegex = /\S+@\S+\.\S+/;
  if (!emailRegex.test(this.email)) {
    const err = new Error('Invalid email format');
    return next(err); // If the email format is invalid, the save operation will not proceed.
  }
  next(); // Proceed with the save operation if the email format is valid.
});

const Post = mongoose.model('Post', postSchema);

const newPost = new Post({
  title: 'Introduction to Middleware',
  content: 'Middleware is awesome!',
  email: 'invalid_email', // This will trigger the error handling in the 'save' middleware.
});

newPost.save(function(err) {
  console.log(err.message); // Outputs: Invalid email format
});
```

      In summary, Mongoose middleware allows you to execute logic before or after certain operations (e.g., saving to the database) on your models. It's a powerful tool for organizing and encapsulating your application's logic in a clean and modular way.
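The quoted documentation excerpt above ends at its "Post" heading; for completeness, post middleware is the mirror image of pre middleware and runs after the hooked operation. A minimal sketch (the auditSchema model is hypothetical; the schema.post API itself is standard Mongoose):

      ```javascript
      const mongoose = require('mongoose');
      const Schema = mongoose.Schema;

      const auditSchema = new Schema({ action: String });

      // Post middleware receives the document produced by the hooked operation
      auditSchema.post('save', function(doc) {
        console.log('Document was saved:', doc._id);
      });
      ```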

    1. Author Response

      The following is the authors’ response to the previous reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      The proposed study provides an innovative framework for the identification of muscle synergies taking into account their task relevance. State-of-the-art techniques for extracting muscle interactions use unsupervised machine-learning algorithms applied to the envelopes of the electromyographic signals without taking into account the information related to the task being performed. In this work, the authors suggest including the task parameters in extracting muscle synergies using a network information framework previously proposed. This allows the identification of muscle interactions that are relevant, irrelevant, or redundant to the parameters of the task executed.

      The proposed framework is a powerful tool to understand and identify muscle interactions for specific task parameters and it may be used to improve man-machine interfaces for the control of prostheses and robotic exoskeletons.

      With respect to the network information framework recently published, this work added an important part to estimate the relevance of specific muscle interactions to the parameters of the task executed. However, the authors should better explain what is the added value of this contribution with respect to the previous one, also in terms of computational methods.

      It is not clear how the well-known phenomenon of cross-talk during the recording of electromyographic muscle activity may affect the performance of the proposed technique and how it may bias the overall outcomes of the framework.

      We thank reviewer 1 for their useful commentary on this manuscript.

      Reviewer #2 (Public Review):

This paper is an attempt to extend or augment muscle synergy and motor primitive ideas with task measures. The authors' idea is to use information metrics (mutual information, co-information) in 'synergy' constraint creation that includes task information directly. By using task-related information and muscle information sources and then sparsification, the methods construct task-relevant network communities among muscles, together with task-redundant communities, and task-irrelevant communities. This process of creating network communities may then constrain and help to guide subsequent synergy identification using the authors' published sNM3F algorithm to detect spatial and temporal synergies.

      The revised paper is much clearer and examples are helpful in various ways. However, figure 2 as presented does not convincingly show why task muscle mutual information helps in separating synergies, though it is helpful in defining the various network communities used in the toy example.

The impact of the information theoretic constraints developed as network communities on subsequent synergy separation is posited to be benign and to improve over other methods (e.g., NNMF). However, not fully addressed are the possible impacts of the methods on compositionality links with physiological bases, and the possibility remains of the methods sometimes instead leading to modules that represent more descriptive ML frameworks that may not support physiological work easily. Accordingly, there is a caveat. This is recognized and acknowledged by the authors in their rebuttal of the prior review. It will remain for other work to explore this issue, likely through testing on detailed high degree of freedom artificial neuromechanical models and tasks. This possible issue with the strategy here likely needs to be fully acknowledged in the paper.

      The approach of the methods seeks to identify task relevant coordinative couplings. This is a meta problem for more classical synergy analyses. Classical analyses seek compositional elements stable across tasks. These elements may then be explored in causal experiments and generative simulations of coupling and control strategies. However, task-based understanding of synergy roles and functional uses is significant and is clearly likely to be aided by methods in this study.

      Information based separation has been used in muscle synergy analyses using infomax ICA, which is information based at core. Though linear mixing of sources is assumed in ICA, minimized mutual information among source (synergy) drives is the basis of the separation and detects low variance synergy contributions (e.g., see Yang, Logan, Giszter, 2019). In the work in this paper, instead, mutual information approaches are used to cluster muscles and task features into network communities preceding the SNM3F algorithm use for separation, rather than using minimized information in separation. This contrast of an accretive or agglomerative mutual information strategy here used to cluster into networks, versus a minimizing mutual information source separation used in infomax ICA epitomizes a key difference in approach here.

      Physiological causal testing of synergy ideas is neglected in the literature reviews in the paper. Although these are only in animal work (Hart and Giszter, 2010; Takei and Seki, 2017), the clear connection of muscle synergy analysis choices to physiology is important, and eventually these issues need to be better managed and understood in relation to the new methods proposed here, even if not in this paper.

      Analyses of synergies using the methods the paper has proposed will likely be very much dependent on the number and quality of task variables included and how these are managed, and the impacts of these on the ensuing sparsification and network communities used prior to SNM3F. The authors acknowledge this in their response. This caveat should likely be made very explicit in the paper.

      It would be useful in the future to explore the approach described with a range of simulated data to better understand the caveats, and optimizations for best practices in this approach.

A key component of the reviewers’ arguments here is their reductionist view of muscle synergies vs the emergentist view presented in our work here. In the reductionist lens, muscle groupings are the units (‘building blocks’) of coordinated movement and thus the space of intermuscular interactions is of particular interest for understanding movement construction. On the other hand, the emergentist view suggests that muscle groupings emerge from interactions between constituent parts (as quantified here using information theory, synergistic information is the information found when both activities are observed together). This is in line with recent work in the field showing modular control at the intramuscular level, exemplifying a scale-free phenomenon. Nonetheless, we consider these approaches to muscle synergy research as complementary and beneficial for the field overall going forward.

      Reviewer #3 (Public Review):

      In this study, the authors developed and tested a novel framework for extracting muscle synergies. The approach aims at removing some limitations and constraints typical of previous approaches used in the field. In particular, the authors propose a mathematical formulation that removes constraints of linearity and couples the synergies to their motor outcome, supporting the concept of functional synergies and distinguishing the task-related performance related to each synergy. While some concepts behind this work were already introduced in recent work in the field, the methodology provided here encapsulates all these features in an original formulation providing a step forward with respect to the currently available algorithms. The authors also successfully demonstrated the applicability of their method to previously available datasets of multi-joint movements.

Preliminary results positively support the scientific soundness of the presented approach and its potential. The added value of the method should be documented more in future work to understand how the presented formulation relates to previous approaches and what novel insights can be achieved in practical scenarios, and to confirm/exploit the potential of the theoretical findings.

      In their revision, the authors have implemented major revisions and improved their paper. The work was already of good quality and now it has improved further. The authors were able to successfully:

      • improve the clarity of the writing (e.g.: better explaining the rationale and the aims of the paper);

      • extend the clarification of some of the key novel concepts introduced in their work, like the redundant synergies;

      • show a scenario in which their approach might be useful for increasing the understanding of motor control in patients with respect to traditional algorithms such as NMF. In particular, their example illustrates why considering the task space is a fundamental step forward when extracting muscle synergies, improving the practical and physiological interpretation of the results.

      We thank reviewer 3 for their constructive commentary on this manuscript.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Figure 3 should report the distances between reaching points in panel A and the actual length distances of the walking paths in panel C.

The caption of fig.3 concerning the experimental setup of the datasets analysed has been updated with the following for dataset 1: “(A) Dataset 1 consisted of participants executing table-top point-to-point reaching movements (40cm distance from starting point P0) across four targets in forward (P1-P4) and backwards (P5-P8) directions at both fast and slow speeds (40 repetitions per task) [25]. The muscles recorded included the finger extensors (FE), brachioradialis (BR), biceps brachii (BI), medial-triceps (TM), lateral-triceps (TL), anterior deltoid (AD), posterior deltoid (PD), pectoralis major (PE), latissimus dorsi (LD) of the right, reaching arm.” For dataset 3, to the best of the authors' knowledge, this information was not given in the original paper.

      Figure 4, what is the unit of the data shown?

The unit of bits is now mentioned in the toy example figure caption and in the caption of fig. 5.

      Figure 4, the characteristics of the interactions are not fully clear, and the graphical representation should be improved.

We have taken steps to improve the clarity of the figures presented.

      For dataset 3, τ was the movement kinematics, but it is not specified how the task parameters were formulated. Did the authors use the data from all 32 kinematic markers, 4 IMUs, and force plates? If yes, it should be specified why all these signals were used. For sure, there will be signals included that are not relevant to the specific task. Did the authors select specific signals based on their relevance to the task (e.g., ankle kinematics)?

      We have now clarified this in the text as follows: “For datasets 1 and 2, we determine the MI between vectors with respect to several discrete task parameters representing specific task attributes (e.g. reaching direction, speed etc.), while for dataset 3 we determined the task-relevant and -irrelevant muscles couplings in an unassuming way by quantifying them with respect to all available kinematic, dynamic and inertial motion unit (IMU) features.”

      How did the authors endure that crosstalk did not affect their analysis, particularly between, e.g., finger extensors and brachioradialis and posterior deltoid and anterior deltoid (dataset 1)?

      We have addressed this point in the previous round of reviews and made an explicit statement regarding cross-talk in the discussion section: “Although distinguishing task-irrelevant muscle couplings may capture artifacts such as EMG crosstalk, our results convey several physiological objectives of muscles including gross motor functions [66], the maintenance of internal joint mechanics and reciprocal inhibition of contralateral limbs [19,51].”

      It would be informative to add some examples of not trivial/obvious task-related synergistic muscle combinations that have been extracted in the three datasets. Most of the examples reported in the manuscript are well-known biomechanically and quite intuitive, so they do not improve our understanding of synergistic muscle control in humans.

      Our framework improves our understanding of synergistic motor control by enabling the formal quantification of synergistic muscle interactions, a capability not present among current approaches. Regarding the implications of this advance in terms of concrete examples, we have further clarified our examples presented in the results section, for example:

“Across datasets, many of the muscle networks could be characterised by the transmission of complementary task information between functionally specialised muscle groups, many of which were identified among the task-redundant representations (Fig.9-10 and Supp. Fig.2). The most obvious example of this is the S3 synergist muscle network of dataset 2 (Fig.11), which captures the complementary interaction between task-redundant submodules identified previously (S3 (Fig.9)).”

      The description shows how our framework can extract the cross-module interactions that align with the higher-level objectives of the system, here the synergistic connectivity between the upper and lower body modules. Current approaches can only capture redundant and task-irrelevant interactions. Thus our framework provides additional insight into movement control.

The number of participants in dataset 2 is very limited and should be increased.

      We appreciate the reviewer's comment and would like to point out that for dataset 2 our aim was to increase the number of muscles (30), tasks (72) and trials for each task (30), which produced a very large dataset for each participant. This came at the expense of a low number of participants; however, all our statistical analyses here can be performed at the single-participant level. Furthermore, dataset 3 includes 25 participants and enables us to demonstrate the reliability of the findings across participants.

      Reviewer #2 (Recommendations For The Authors):

      I believe it is important in the future to explore the approach proposed with a range of simulation data and neuromechanical models, to explore the issues I have raised and that you have acknowledged, though I agree it is likely out of scope for the paper here.

      We agree with the reviewer that this would be valuable future work and indeed plan to do this in our future research.

The GitHub code for this paper should likely include the various data sets used in the paper and figures, appropriately anonymized, in order to allow the data to be explored, analyses replicated, and the package demonstrated and exercised fully by a new user.

We thank the reviewer for this suggestion. Dataset 3 is already available online at https://doi.org/10.1016/j.jbiomech.2021.110320. We will also make the other two datasets publicly available on our lab website very soon. Until then, as stated in the manuscript, we will make them available to anyone upon reasonable request.

      Reviewer #3 (Recommendations For The Authors):

      I have the following open points to suggest to the authors:

      First, I recommend improving the quality of the figures: in the pdf version I downloaded, some writings are impossible to read.

We fully agree with the reviewer and note that in the pdf version of the paper, the figures are a lot worse than in the submitted Word document. Nevertheless, we will make further improvements to the figures as requested.

      Even though the manuscript has improved, I still feel that some points were not addressed or were only partially addressed. In particular:

      • The proposed comparison with NMF helps understanding why incorporating the task space is useful (and I fully agree with the authors about this point as the main reason to propose their contribution). However, the comparison does not help the reader to understand whether the synergies incorporating the task space are biased by the introduction of the task variables.

This question can also be reformulated as: are muscle synergies modified when task space variables are incorporated? Is the "weight" on task coefficients affecting the composition of muscle synergies? If so, is the added interpretational power achieved at the cost of losing the information regarding the neural substrate of synergies? I understand this point is not immediate to show, but it would increase the quality of the work.

      • Reference to previous approaches that aimed at including task variables into synergy extraction are still missing in the paper. Even though it is not required to provide quantitative comparisons with other available approaches, there are at most 2-3 available algorithms in the literature (kinematics-EMG; force-EMG), that should not be neglected in this work. What did previous approaches achieve? What was improved with this approach? What was not improved?

      Previous attempts of extracting synergies with non-linear approaches could also be described more.

      In the latest version of the manuscript, we have referenced both the mixed NMF and autoencoders based algorithms. In both the introduction and discussion section of the manuscript, we also specify that our framework quantifies and decomposes muscle interactions in a novel way that cannot be done by other current approaches. In the results section we use examples from 3 different datasets to make this point clear, providing intuition on the use cases of our framework.

    1. Author Response

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      In this manuscript, Butkovic et al. perform a genome-wide association (GWA) study on Arabidopsis thaliana inoculated with the natural pathogen turnip mosaic virus (TuMV) in laboratory conditions, with the aim to identify genetic associations with virus infection-related parameters. For this purpose, they use a large panel of A. thaliana inbred lines and two strains of TuMV, one naïve and one pre-adapted through experimental evolution. A strong association is found between a region in chromosome 2 (1.5 Mb) and the risk of systemic necrosis upon viral infection, although the causative gene remains to be pinpointed.

      This project is a remarkable tour de force, but the conclusions that can be reached from the results obtained are unfortunately underwhelming. Some aspects of the work could be clarified, and presentation modified, to help the reader.

      (Recommendations For The Authors):

      • It is important to note that viral accumulation and symptom development do not necessarily correlate, and that only the former is a proxy for "virus performance". These concepts need to be clear throughout the text, so as not to mislead the reader.

This has been explained better in lines 118-120. The term "virus performance" has been removed.

      • Sadly, only indirect measures of the viral infection (symptoms) are used, and not viral accumulation. It is important to note that viral accumulation and symptom development do not necessarily correlate and that only the former is a proxy for "virus performance". These concepts need to be clear throughout the text, so as not to mislead the reader. The mention of "virus performance" in line 143 is therefore not appropriate, nor is the reference to viral replication and movement in the Discussion section.

      "Virus performance" was removed. Also, the reference to viral replication and movement in the Discussion section has been removed.

      Now we mention: “We did not measure viral accumulation, but note this is significantly correlated with intensity of symptoms within the Col-0 line (Corrêa et al. 2020), although it is not clear if this correlation occurs in all lines.”

      • Since symptoms are at the center of the screen, images representing the different scores in the arbitrary scales should ideally be shown.

Different Arabidopsis lines would look different, and this could mislead a reader not familiar with the lines. In order to represent our criteria for establishing the symptom scores, we believe that a schematic representation is clearer to interpret. Here are some pictures of different lines showing varying symptoms:

      Author response image 1.

      • Statistical analyses could be added to the figures, to ease interpretation of the data presented.

Statistical analyses can be found in the Methods. We prefer to keep the figure legends as short as possible.

      • The authors could include a table with the summary of the phenotypes measured in the panel of screened lines (mean values, range across the panel, heritability, etc.).

      These data are plotted in Fig. 1. We believe that repeating this information in tabular form would not contribute to the main message of the work. Phenotype data and the code to reproduce figure 1 are available at GitHub (as stated in Data Availability), anyone interested can freely explore the phenotypes of the screened lines.

      • The definition of the association peak found in chromosome 2 could be explained further: is the whole region (1.5 Mb) in linkage disequilibrium? How many genes are found within this interval, and how were the five strong candidates the authors mention in line 161 selected? It is also not clear which are these 5 candidates, apart from AT2G14080 and DRP3B - and among those in Table 1 (which, by the way, is cited only in the Discussion and not in the Results section)? Why were AT2G14080 and DRP3B in particular chosen?

We have replaced Table 1 with an updated Table S1 listing all genes found within the range of significant SNPs for each peak. We now highlight a subset of these genes as candidate genes if they have functions related to disease resistance or defence, and mention them explicitly in the text (lines 173-179). We have explicitly described how this table was constructed in the methods (lines 525-538).

      • Concerning the validation of the association found in chromosome 2 (line 169 and onward): the two approaches followed cannot be considered independent validations; wouldn't using independent accessions, or an independent population (generated by the cross between two parental lines, showing contrasting phenotypes, for example) have been more convincing?

      We aim to compare the hypothesis that the association is due to a causal locus to the null hypothesis that the observed association is a fluke due to, for example, the small number of lines showing necrosis. If this null hypothesis is true then we would not expect to see the association if we run the experiment again using the same lines. An alternative hypothesis is that the genotype at the QTL and disease phenotypes are not directly causally linked, but are both correlated with some other factor, such as another QTL, or maternal effects. We agree that an independent sample would be required to exclude the latter hypothesis, but argue that the former is the more pertinent. We have edited the text to be explicit about the hypothesis we are testing, and altered the language to shift the focus from ‘validation’ to ‘confirming the robustness’ of the association (line 182).

      • Regarding the identification of the transposon element in the genomic region of AT2G14080: is the complementation of the knock-out mutant with the two alleles (presence/absence of the transposon) possible to confirm its potential role in the observed phenotype?

      This could be feasible but we cannot do it as none of the researchers can continue this project.

      • On the comparison between naïve and evolved viral strains: is the evolved TuMV more virulent in those accessions closer to Col-0?

      This is not something we have looked at but would certainly be an interesting follow-up investigation.

      • The Copia-element polymorphism is identified in an intron; the potential functional consequences of this insertion could be discussed. In the example the authors provide, the transposable element is inserted into the protein-coding sequence instead.

      We now state explicitly that such insertions are expected to influence expression; beyond that we can only speculate. We have removed the reference to the insertion in the coding sequence.

      • The authors state in line 398 that "susceptibility is unquestionably deleterious" - is this really the case? Are the authors considering susceptibility as the capacity to be infected, or to develop symptoms? Viral infections in nature are frequently asymptomatic, and plant viruses can confer tolerance to other stresses.

We have toned down the expression and clarified our wording: “Given that potyvirus outbreaks are common in nature (Pagán et al., 2010) and susceptibility to symptomatic infection can be deleterious”

      Additional minor comments:

      • In Table 1, Wu et al., 2018 should refer to DRP2A and 2B, not 3B.

      We have removed Table 1 altogether.

      • Line 126: a 23% increase in symptom severity is mentioned, but how is this calculated, considering that severity is measured in four different categories?

This is the change in mean symptom severity between the two viral isolates.
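To make that calculation concrete, here is a minimal sketch with entirely hypothetical scores, assuming severity is coded on an ordinal 0-3 scale (the actual scale and data are the authors'):

```python
# Hypothetical per-plant severity scores on an ordinal 0-3 scale.
ancestral = [0, 1, 1, 2, 2, 2, 3]
evolved = [1, 1, 2, 2, 3, 3, 3]

mean_anc = sum(ancestral) / len(ancestral)
mean_evo = sum(evolved) / len(evolved)
pct_increase = 100 * (mean_evo - mean_anc) / mean_anc
print(f"{pct_increase:.0f}% increase in mean severity")  # ~36% for these toy data
```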

      • Figure 1F: "...symptoms"

      Fixed.

      • Line 179: "...suggesting an antiviral role..."

      Changed.

      • Lines 288-300: This paragraph does not fit into the narrative and could be omitted.

      It has been removed and some of the info moved to the last paragraph of the Intro, when the two TuMV variants were presented.

      • Lines 335-337: The rationale here is unclear since DRP2B will also be in the background - wouldn't DRPB2B and 3B be functionally redundant in the viral infection?

Our results suggest that DRP3B is redundant with DRP2B for the ancestral virus but not for the evolved viral strain. We speculate that the evolved viral isolate may have acquired the capacity to recruit DRP3B for its replication and hence produces fewer symptoms when the plant protein is missing.

We have spotted a mistake that may have added to the confusion. Originally the text said “In contrast, loss of function of DRP3B decreased symptoms relative to those in Col-0 in response to the ancestral, but not the evolved virus”. The correct statement is “In contrast, loss of function of DRP3B decreased symptoms relative to those in Col-0 in response to the evolved, but not the ancestral virus.”

      Reviewer #2 (Public Review):

      The manuscript presents a valuable investigation of genetic associations related to plant resistance against the turnip mosaic virus (TuMV) using Arabidopsis thaliana as a model. The study infects over 1,000 A. thaliana inbred lines with both ancestral and evolved TuMV and assesses four disease-related traits: infectivity, disease progress, symptom severity, and necrosis. The findings reveal that plants infected with the evolved TuMV strain generally exhibited more severe disease symptoms than those infected with the ancestral strain. However, there was considerable variation among plant lines, highlighting the complexity of plant-virus interactions.

      A major genetic locus on chromosome 2 was identified, strongly associated with symptom severity and necrosis. This region contained several candidate genes involved in plant defense against viruses. The study also identified additional genetic loci associated with necrosis, some common to both viral isolates and others specific to individual isolates. Structural variations, including transposable element insertions, were observed in the genomic region linked to disease traits.

      Surprisingly, the minor allele associated with increased disease symptoms was geographically widespread among the studied plant lines, contrary to typical expectations of natural selection limiting the spread of deleterious alleles. Overall, this research provides valuable insights into the genetic basis of plant responses to TuMV, highlighting the complexity of these interactions and suggesting potential avenues for improving crop resilience against viral infections.

      Overall, the manuscript is well-written, and the data are generally high-quality. The study is generally well-executed and contributes to our understanding of plant-virus interactions. I suggest that the authors consider the following points in future versions of this manuscript:

      1. Major allele and minor allele definition: When these two concepts are mentioned in the figure, there is no clear definition of the two words in the text. Especially for major alleles, there is no clear definition in the whole text. It is recommended that the author further elaborate on these two concepts so that readers can more easily understand the text and figures.

      We agree that the distinction between major/minor alleles and major/minor associations in our previous manuscript may have been confusing. In the current manuscript we now define the minor allele at a locus as the less-common allele in the population (line 167). We have removed references to major/minor associations, and instead refer to strong/weak associations.
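As a small illustration of that definition (our example, not the authors' code), the minor allele at a biallelic locus is simply the less common allele across the genotyped lines:

```python
from collections import Counter

# Hypothetical alleles at one biallelic SNP across ten inbred lines.
alleles = ["A", "A", "T", "A", "A", "T", "A", "A", "A", "T"]

counts = Counter(alleles)
minor_allele, minor_count = counts.most_common()[-1]  # least common allele
maf = minor_count / len(alleles)                      # minor allele frequency
print(minor_allele, f"MAF = {maf:.2f}")               # T, MAF = 0.30
```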

2. Possible confusion caused by three terms (major locus / major association / major allele): Because there is no explanation of the major allele in the text, readers may be confused by these two places in the text when trying to interpret the meaning of major allele: major locus (line 149) / the major association with disease phenotypes (line 183).

      See our response to the previous comment.

3. Discussion: The authors could provide a more detailed discussion of how the research findings might inform crop protection strategies or breeding programs.

We would prefer to refrain from speculating about future applications in breeding programs.

      (Recommendations For The Authors):

      1. Stacked bar chart for the Fig 1F. It is recommended that the author use the form of a stacked bar chart to display the results of Fig 1F. On the one hand, it can fit in with the format of Fig 1D/E/G, on the other hand, it can also display the content more clearly.

      We think the results are easier to interpret without the stacked bar chart.

2. Language Clarity: While there are no apparent spelling errors, some sentences could be rewritten for greater clarity, especially when explaining the results in Figure 1 and Figure 2.

      We have reviewed these sections and attempted to improve clarity where that seemed appropriate.

There are some possibilities to explore in the future, for example, the clarity of mechanisms. While the study identifies genetic associations, it lacks an in-depth exploration of the underlying molecular mechanisms. Elaborating on the mechanistic aspects would enhance the scientific rigor and practical applicability of the findings.

      Yes, digging into the molecular mechanisms is an ongoing task and will be published elsewhere. It was out of the scope of this already dense manuscript.  

      Reviewer #3 (Public Review):

      Summary of Work

This paper conducts the largest GWAS study of A. thaliana in response to a viral infection. The paper identifies a 1.5 Mb region on chromosome 2 associated with disease, including SNPs, structural variation, and transposon insertions. The authors further validate the association experimentally with a separate infection procedure using several lines and specific T-DNA mutants. Finally, the paper presents a geographic analysis of the minor disease allele and the major association. The major take-home message of the paper is that structural variants and not only SNPs are important changes associated with disease susceptibility. The manuscript also makes a strong case for negative frequency-dependent selection maintaining a disease susceptibility locus at low frequency.

      Strengths and Weaknesses

      A major strength of this manuscript is the large sample sizes, careful experimental design, and rigor in the follow-up experiments. For instance, mentioning non-infected controls and using methods to determine if geographic locus associations were due to chance. The strong result of a GWAS-detected locus is impressive given the complex interaction between plant genotypes and strains noted in the results. In addition to the follow-up experiments, the geographic analysis added important context and broadened the scope of the study beyond typical lab-based GWAS studies. I find very few weaknesses in this manuscript.

      Support of Conclusions

      The support for the conclusions is exceptional. This is due to the massive amount of evidence for each statement and also due to the careful consideration of alternative explanations for the data.

      Significance of Work

      This manuscript will be of great significance in plant disease research, both for its findings and its experimental approach. The study has very important implications for genetic associations with disease beyond plants.

      (Recommendations For The Authors):

      Line 41 - Rephrase, not clear "being the magnitude and sign of the difference dependent on the degree of adaptation of the viral isolate to A. thaliana."

      Now it reads: “When inoculated with TuMV, loss-of-function mutant plants of this gene exhibited different symptoms than wild-type plants, where the scale of the difference and the direction of change between the symptomatology of mutant and wild-type plants depends on the degree of adaptation of the viral isolate to A. thaliana.”

      Line 236 - typo should read: "and 21-fold"

      Changed.

1. Even if you are not familiar with this system, we will guide you through using it as you go, giving you the direct link where the code can be found, so don't panic!

thank you!

    1. Author Response

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Recommendations For The Authors):

1. Experiments regarding the inducible expression of MukBEF: The authors should provide western blots or RT-qPCR for MukBEF expression at 40 min and 2 h.

We now provide a western blot of MukB in non-induced and induced conditions as Figure 1-figure supplement 1D.

2. Experiments with RiTer and LiTer constructs:

a. Authors compare the mukB deletion against wild type (Fig. 2C). It would be additionally informative if these comparisons are made for matP deletion and wild type as well. This will strengthen the conclusion that long-range interactions in ter do increase in the absence of matP.

We agree that the matP mutant may help the reader to compare the effect of the translocation in different backgrounds and have added it to the figure. This strengthens the conclusion that long-range interactions in ter do increase in the absence of matP in a rearranged chromosome, as observed in the WT configuration (Lioy et al., 2018).

      b. Additionally, in Fig. 2C, it appears that there is some decrease in long-range interactions in the absence of mukB in ter1 (Riter). Is this a significant change?

      The change observed is not significant. The results shown in Fig. 2C have been obtained using a 3C approach, which generated slightly more variability than Hi-C. Furthermore, we measured the range of contacts for the segment corresponding to Ter1 in RiTer (matS12-matS28), in different genetic contexts and different configurations. The results show that this level of variation is not significant (see graph below reporting two independent experiments).

      Author response image 1.

      Range of interactions measured on the interval matS12-matS18 in different genetic contexts and different configurations (MG1655 WT(1 and 2), ∆mukB, RiTer, RiTer ∆mukB).

3. Experiments with various matS organizations: These experiments are interesting and an important part of the paper. However, it is rather hard to visualize the chromosome conformations in the strains after transposition. To aid the reader (particularly with panel E), authors can provide schematics of the chromosome conformations and anticipated/observed chromosomal interactions. Circular interaction plots would be useful here.

      We thank the reviewer for this interesting remark; we have tried in the past to represent these interactions using a circular representation (see for example the web site of Ivan Junier; https://treetimc.github.io/circhic/index.html). However, this representation is not trivial to apprehend for nonspecialists, especially in strains with a rearranged chromosome configuration. Nonetheless, we have added graphical circular representations of the chromosome configurations to help the reader.

4. ChIP experiments:

a. This section of the manuscript needs to be further strengthened. It is not clear whether the ChIP signal observed is significant (for example, at T10 or T20 min, the peak value does not appear to go above 1.1-fold). Can the authors be sure that this small increase is not simply a consequence of an increase in copy number of the loci around the origin, as replication has initiated?

The basal value of the ChIP on the non-replicated sequences (between 0-3.5 Mb for 10 minutes and 0-3 Mb for 20 minutes) is 0.8 and 0.7, respectively, whereas the mean value of the replicated sequence is 1.6 and 1.45. So the enrichment observed for these two points is about 2-fold, not 1.1, and it is 4-fold at t = 40 min. These values were obtained by dividing the number of normalized reads in the ChIP (the number of reads at each position divided by the total number of reads) by the normalized reads of the input. Therefore, the increase in copy number is accounted for in the calculation. Furthermore, we added a supplementary figure (Supplementary Figure 9) in which we performed a ChIP without tags on synchronized cells, and in this case, we did not observe any enrichment triggered by replication.
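To make the normalization explicit, a minimal sketch of the calculation described above (our reconstruction, not the authors' pipeline): each track is divided by its total read count, and the depth-normalized ChIP is then divided by the depth-normalized input, so copy-number gains from replication cancel out of the ratio.

```python
import numpy as np

def enrichment(chip_reads, input_reads):
    """Per-position enrichment: depth-normalized ChIP / depth-normalized input.
    Replication increases copy number in the input as well as the ChIP sample,
    so the gain divides out of the ratio."""
    chip_norm = chip_reads / chip_reads.sum()
    input_norm = input_reads / input_reads.sum()
    return chip_norm / input_norm

# Toy check of the fold change quoted above: a basal ratio of 0.8 on
# unreplicated DNA versus 1.6 on replicated DNA is a 2-fold enrichment.
print(1.6 / 0.8)  # 2.0
```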

      b. Authors make a conclusion that MukB loads behind the replication fork. However, the time resolution of the presented experiments is not sufficient to be certain of this. Authors would need to perform more time-resolved experiments for the same.

      Reviewer 1 is correct; we attempted to discriminate whether the observed enrichment is (i) associated with the replication fork since we observed a decrease in the center of the enrichment at oriC as the maximum enrichment moves away with the replication fork after 20 and 40 minutes, or (ii) associated with the newly replicated sequence. To investigate this, we attempted to induce a single round of replication by shifting the cells back to 40°C after 10 minutes at 30°C. Unfortunately, replication initiation is not immediately halted by shifting the cells to 40°C, and we were unable to induce a single round of replication. To clarify our conclusions, we modified our manuscript to

      “Altogether, these findings indicate that MukBEF is loaded into regions newly replicated either at the replication fork or even further behind it, except in the Ter region from which it would be excluded.”

      c. Authors conclude that in the LiTer7 strain, MukB signal is absent from Ter2. However, when compared with the ChIP profiles by eye across panels in A and B, this does not seem to be significant. In the same results sections, authors state that there is a 3-fold increase in MukB signal in other regions. The corresponding graph does not show the same.

Rather than relying solely on the enrichment levels, which can be challenging to compare across different strains due to slight variations in replication levels, we believe there is a clear disruption in this profile that corresponds to the Ter2 sequence. Furthermore, this discontinuity in enrichment relative to the replication profile is also observable in the WT configuration. At 40 min, MukB ChIP-seq signals halt at the Ter boundary, even though Ter is actively undergoing replication, as evidenced by observations in the input data.

      Regarding the fold increase of MukB, Reviewer 1 is correct; we overestimated this enrichment in the text and have now corrected it.

      d. Authors should provide western blot of MukB-Flag.

      We have added Supplementary Figure 1 D, which contains a Western blot of MukB-Flag.

5. The bioinformatic analysis of matS site distribution is interesting, but this is not followed up on. The figure (Fig 5) is better suited to the supplement and used only as a discussion point.

      We acknowledge the reviewer's point, but we used this section to attempt to extend our findings to other bacteria and emphasize the observation that even though a few matS sites are necessary to inhibit MukBEF, the Ter domains are large and centered on dif even in other bacteria.

6. The discussion section is lacking many references and key papers have not been cited (paragraph 1 of the discussion, for example, has no references).

The possibility that SMC-ScpAB and MukBEF can act independently of replication has been suggested previously, but this is not cited or discussed. Similarly, there is some evidence for SMC-ScpAB association with newly replicated DNA (PMID 21923769).

We have added references to the suggested paragraph and highlighted the fact that MukBEF's activity independent of replication was already known. However, we believe that the situation is less clear for SMC-ScpAB in B. subtilis or C. crescentus. In a similar manner, we found no clear evidence that SMC-ScpAB is associated with newly replicated DNA in the referenced studies.

      To clarify and enrich the discussion section, we have added a paragraph that provides perspective on the loading mechanisms of SMC-ScpAB and MukBEF.

7. There are minor typographical errors that should be corrected. Some are highlighted here:

      a. Abstract: L5: "preferentially 'on' instead of 'in'"

      b. Introduction: Para 1 L8: "features that determine"

      c. Introduction: Para 2 L1: please check the phrasing of this line

      d. Results section 2: L1: Ter "MD" needs to be explained

      e. Page 8: Para 2: L6: "shows that 'a'"

      g. Page 13: Para 2: "MukBEF activity...". This sentence needs to be fixed.

      i. Figure 4: "input" instead of "imput"

      We thank Reviewer 1 for pointing out all these grammatical or spelling mistakes. We have corrected them all.

f. Page 12: Para 2: "Xer" instead of "XDS"?

We added a reference to clarify the term.

      h. Methods: ChIP analysis: Authors state "MatP peaks", however, reported data is for MukB

      This description pertains to the matP peak detection shown in Supplementary Figure 3. We have incorporated this clarification into the text.

      j. Supplementary figure legends need to be provided (currently main figure legends appear to be pasted twice)

      Supplementary figure legends are provided at the end of the manuscript, and we have edited the manuscript to remove one copy of the figure legends.

      k. Authors should ensure sequencing data are deposited in an appropriate online repository and an accession number is provided.

We waited for the appropriate point in the editing process to upload our data, which we have now done. Additionally, we have added a data availability section to the manuscript, including sequence accession references in the NCBI.

      Reviewer #2 (Recommendations For The Authors):

      The authors largely avoid speculation on what might be the physiological relevance of the exclusion of MukBEF (and Smc-ScpAB) from the replication termination region (and the coordination with DNA replication). At this stage it would be helpful to present possible scenarios even if not yet supported by data. The authors should for example consider the following scenario: loop extrusion of a dif site in a chromosome dimer followed by dimer resolution by dif recombination leads to two chromosomes that are linked together by MukBEF (equivalent to cohesin holding sister chromatids together in eukaryotes but without a separase). This configuration (while rare) will hamper chromosome segregation. Is MatP particularly important under conditions of elevated levels of chromosome dimers? Could this even be experimentally tested? Other scenarios might also be entertained.

Even though we prefer to avoid speculation, we agree that we can propose some hypotheses to the reader. To do so, we have added a few sentences at the end of our discussion. “We may speculate, based on in vitro observations (Kumar et al., 2022), that MukBEF could interfere with TopIV activity and delay potential chromosome decatenation. Another possibility is that chromosome dimers resolved at the dif site may become trapped in loops formed by MukBEF, thus delaying segregation. But none of these possible scenarios are supported by data yet, and a major challenge for the future is to determine whether and how MukBEF may interfere with one or both of these processes.”

The manuscript text is well written. However, the labeling of strains in figures and text is sometimes inconsistent, which can be confusing (LiTer, Liter, liter; e.g., RiTer in Fig 2C). For consistency, always denote the number of matS sites in LiTer strains and also in the RiTer strain. The scheme denoting LiTer and RiTer strains should indicate the orientation of DNA segments so it is clear that the engineering does not involve inversion (correct?). Similarly, use uniform labelling for time points: see T40mn vs 40mn vs T2H vs 2H.

      We have reviewed the manuscript to standardize our labeling. Additionally, we have included a schema in Figure 2, indicating the matS numbers at the Ter border to emphasize that the transposition events do not involve inversion.

matS sites do not have identical sequences and bind different levels of MatP (suppl fig 3). Does this possibly affect the interpretation of some of the findings (when altering a few or only a single matS site)? Maybe a comment on this possibility can be added.

      We agree with the referee; we do not want to conclude too strongly about the impact of matS density, so we have added this sentence at the end of the section titled 'matS Determinants to Prevent MukBEF Activity':

      “Altogether, assuming that differences in the matS sequences do not modify MatP's ability to bind to the chromosome and affect its capacity to inhibit MukBEF, these results suggested that the density of matS sites in a small chromosomal region has a greater impact than dispersion of the same number of matS sites over a larger segment”

      Figure 5: show selected examples of matS site distribution in addition to the averaged distribution (as in supplemental figure)?

      Figure 5 shows the median of the matS distribution based on the matS positions of 16 species as displayed in the supplementary figure. We believe that this figure is interesting as it represents the overall matS distribution across the Enterobacterales, Pasteurellales, and Vibrionales.

How do authors define 'background levels' (page 9) in their ChIP-Seq experiments? Please add a definition or reword.

      We agree that the term 'background level' here could be confusing, so we have modified it to 'basal level' to refer to the non-replicating sequence. The background level can be observed in Supplementary Figure 9 in the ChIP without tags, and, on average, the background level is 1 throughout the entire chromosome in these control experiments.

      This reviewer would naively expect the normalized ChIP-Seq signals to revolve around a ratio of 1 (Fig. 4)? They do in one panel (Figure 4B) but not in the others (Figure 4A). Please provide an explanation.

We thank the referee for this pertinent observation. An error was made during the smoothing of the data in Figure 4A, which resulted in an underestimation of the input values. This mistake does not alter the profile of the ChIP (it is a division by a constant) or our conclusions. We provide a revised version of the figure.

      Inconsistent axis labelling: e.g Figure 4

      Enterobacterals should be Enterobacterales (?)

      KB should be kb

      MB should be Mb

      Imput should be Input

      FlaG should be Flag

      We have made the suggested modifications to the text.

      'These results unveiled that fluorescent MukBEF foci previously observed associated with the Ori region were probably not bound to DNA' Isn't the alternative scenario that MukBEF bound to distant DNA segments colocalize an equally likely scenario? Please rephrase.

      Since we lack evidence regarding what triggers the formation of a unique MukB focus associated with the origin and what this focus could represent, we have removed this sentence.

      Reviewer #3 (Recommendations For The Authors):

      The text is well-written and easy to follow, but I would suggest several improvements to make things clearer:

1. Many plots are missing labels or legends. (I) All contact plots such as Fig. 1C should have a color legend. It is not clear how large the signal is and whether the plots are on the same scale. (II) Ratiometric contact plots such as in Fig. 1D should indicate what values are shown. Is this a log ratio?

As indicated in the materials and methods section, the ratio presented in this manuscript was calculated for each point on the map by dividing the number of contacts in one condition by the number of contacts in the other condition. The Log2 of the ratio was then plotted using a Gaussian filter.
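For readers wanting to reproduce this kind of panel, a minimal sketch of the computation (our code, not the authors'; the smoothing width and pseudocount are placeholders, not the values used in the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def log2_ratio_map(contacts_a, contacts_b, sigma=1.0, pseudocount=1e-9):
    """Point-by-point ratio of two contact maps, log2-transformed and then
    smoothed with a Gaussian filter, as described in the methods."""
    ratio = (contacts_a + pseudocount) / (contacts_b + pseudocount)
    return gaussian_filter(np.log2(ratio), sigma=sigma)
```

The pseudocount is our addition to keep the division and log defined in sparsely sampled bins.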

2. Genotypes and strain names are often inconsistent. Sometimes ΔmukB, ΔmatP, ΔmatS is used, other times it is just mukB, matP, matS; there are various permutations of LiTer, Liter, liter etc.

      These inconsistencies have been corrected.

3. The time notation is unconventional. I recommend using 0 min, 40 min, 120 min etc. instead of T0, T40mn, T2H.

      As requested, we have standardized and used conventional annotations.

4. A supplemental strain table listing detailed genotypes would be helpful.

      A strain table has been added, along with a second table recapitulating the positions of matS in the different strains.

5. Fig. 1A: Move the IPTG labels to the top? It took me a while to spot them.

      We have moved the labels to the top of the figure and increased the font size to make them more visible.

6. Fig 1C: Have these plots been contrast adjusted? If so, this should be indicated. The background looks very white and the transitions from diagonal to background look quite sharp.

      No, these matrices haven't been contrast-adjusted. They were created in MATLAB, then exported as TIFF files and directly incorporated into the figure. Nevertheless, we noticed that the color code of the matrix in Figure 3 was different and subsequently adjusted it to achieve uniformity across all matrices.

7. Fig 1C: What is the region around 3 Mb and 4 Mb? It looks like the contacts there are somewhat MukBEF-independent.

The referee is right. In the presence of the plasmid pPSV38 (carrying the MukBEF operon or not), we repeatedly observed an increase of long-range contacts around 3 Mb. The origin of these contacts is unknown.

8. Fig 1D: Have the log ratios been clipped at -1 and 1 or was some smoothing filter applied? I would expect the division of small and noisy numbers in the background region to produce many extreme values. This does not appear to be the case.

The referee is right; dividing two matrices generates a ratio with extreme values. To avoid this, the Log2 of the ratio is plotted with a Gaussian filter, as described before (Lioy et al., 2018).

9. Fig 1E: I recommend including a wild-type reference trace as a point of reference.

      We have added the WT profile to the figure.

10. Fig 2: I feel the side-by-side cartoon from Supplemental Fig. 2A could be included in the main figure to make things easier to grasp.

      We added a schematic representation of the chromosome configuration on top of the matrices to aid understanding.

11. Fig. 2C: One could put both plots on the same y-axis scale to make them comparable.

      We have modified the axes as required.

12. Fig. 3C: The LiTer4 ratio plot has two blue bands in the 3-4.5 Mb region. I was wondering what they might be. These long-range contacts seem to be transposition-dependent and suppressed by MatP, is that correct?

      The referee is right. This indicates that in the absence of MatP, one part of the Ter was able to interact with a distal region of the chromosome, albeit with a low frequency. The origin is not yet known.

13. Fig. 3E: It is hard to understand what is a strain label and what is the analyzed region of interest. The plot heading and figure legend say Ter2 (but then, there are different Ter2 variants), some labels say Ter, others say Ter2, sometimes it doesn't say anything, some labels say ΔmatS or ΔmatP, others say matS or matP, and so on.

We have unified our notation and added more description to the legend to clarify this figure:

      “Ter” corresponds to the range of contacts over the entire Ter region, in the WT strain (WT Ter) or in the ΔmatP strain (ΔmatP Ter). The column WT matSX-Y corresponds to the range of contacts between the designated matS sites in the WT configuration. This portion of the Ter can be compared with the same Ter segment in the transposed strain (Ter2). Additionally, the matS20-28 segment corresponds to Ter2 in LiTer9, just as matS22-28 corresponds to Ter2 in LiTer7, and matS25-28 to Ter2 in LiTer4. The range of contacts of this segment was also measured in a ΔmatP or ΔmatS background.”

14. Fig. 4 and p.9: "Normalized ChIP-seq experiments were performed by normalizing the quantity of immuno-precipitated fragments to the input of MukB-Flag and then divide by the normalized ChIP signals at t0 to measure the enrichment trigger by replication."

      This statement and the ChIP plots in Fig. 4A are somewhat puzzling. If the data were divided by the ChIP signal at t0, as stated in the text, then I would expect the first plot (t0) to be a flat line at value 1. This is not the case. I assume that normalized ChIP is shown without the division by t0, as stated in the figure legend.

      The referee is right. This sentence has been corrected, and as described in the Methods section, Figure 4 shows the ChIP normalized by the input.

      If that's true and the numbers were obtained by dividing read-count adjusted immunoprecipitate by read-count adjusted input, then I would expect an average value of 1. This is also not the case. Why are the numbers so low? I think this needs some more details on how the data was prepared.

      The referee is right; we thank him for this remark. Our data are processed using the following method: the value of each read is divided by the total number of reads. A sliding window of 50 kb is applied to these normalized values to smooth the data. Then, the resulting signal from the ChIP is divided by the resulting signal from the input. This is what is shown in Figure 4. Unfortunately, for some of our results, the sliding window was not correctly applied to the input data. This did not alter the ChIP profile but did affect the absolute values. We have resolved this issue and corrected the figure.
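In outline, the corrected processing reduces to the sketch below (ours, not the authors' released code; the handling of bin size and window edges is our assumption):

```python
import numpy as np

def smooth(track, window_bins):
    """Sliding-window (moving-average) smoothing, e.g. 50 kb worth of bins."""
    kernel = np.ones(window_bins) / window_bins
    return np.convolve(track, kernel, mode="same")

def chip_over_input(chip_reads, input_reads, window_bins):
    # 1) normalize each track by its total read count,
    # 2) apply the same sliding window to both tracks,
    # 3) divide the smoothed ChIP by the smoothed input.
    chip = smooth(chip_reads / chip_reads.sum(), window_bins)
    inp = smooth(input_reads / input_reads.sum(), window_bins)
    return chip / inp
```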

      Another potential issue is that it's not clear what the background signal is and whether it is evenly distributed. The effect size is rather small. Negative controls (untagged MukB for each timepoint) would help to estimate the background distribution, and calibrator DNA could be used to estimate the signal-to-background ratio. There is the danger that the apparent enrichment of replicated DNA is due to increased "stickiness" rather than increased MukBEF binding. If any controls are available, I would strongly suggest to show them.

To address this remark, a ChIP experiment with a non-tagged strain under comparable synchronization conditions has been performed. The results are presented as Supplementary Figure 9; they reveal that the enrichment shown in Figure 4 is not attributable to nonspecific antibody binding or 'stickiness'.

15. Fig. 4A, B: The y-axes on the right are unlabeled and the figure legends mention immunoblot analysis, which is not shown.

      We labeled the y-axes as 'anti-Flag ChIP/input' and made corrections to the figure legend.

16. Fig. 4B: This figure shows a dip in enrichment at the Ter2 region of LiTer7, which supports the authors' case. Having a side-by-side comparison with WT at 60 min would be good, as this time point is not shown in Fig. 4A.

      Cell synchronization can be somewhat challenging, and we have observed that the timing of replication restart can vary depending on the genetic background of the cells. This delay is evident in the case of LiTer7. To address this, we compared LiTer7 after 60 minutes to the wild type strain (WT) after 40 minutes of replication. Even though the duration of replication is 20 minutes longer in LiTer7, the replication profiles of these two strains under these two different conditions (40 minutes and 60 minutes) are comparable and provide a better representation of similar replication progression.

17. Fig. 4C: Highlighting the position of the replication origin would help to interpret the data.

We highlighted the oriC position with a red dashed line.

18. Fig. 4C: One could include a range-of-contact plot that compares the three conditions (similar to Fig. 1E).

We have added this quantification to Supplemental Figure 8.

19. Supplemental Fig. 2A: In the LiTer15 cartoon, the flanking attachment sites do not line up. Is this correct? I would also recommend indicating the direction of the Ter1 and Ter2 regions before and after recombination.

In this configuration, attB and attR, as well as attL and attB', should be aligned, but the remaining attR and attL may not be. We have corrected this misalignment. To clarify the question of sequence orientation, we have included in the figure legend that all transposed sequences maintain their original orientation.

20. Supplemental Fig. 3: One could show where the deleted matS sites are.

      We added red asterisks to the ChIP representation to highlight the positions of the missing matS.

21. Supplemental Fig. 3B: The plot legend is inconsistent with panel A (What is "WT2")?

      We have corrected it.

22. Supplemental Fig. 3C: The E-value notation is unusual. Is this 8.9 x 10^-61?

The value is 8.9 x 10^-61; we have modified the annotation.

23. Abstract: "While different features for the activity of the bacterial canonical SMC complex, SmcScpAB, have been described in different bacteria, not much is known about the way chromosomes in enterobacteria interact with their SMC complex, MukBEF."

      Could this be more specific? What features are addressed in this manuscript that have been described for Smc-ScpAB but not MukBEF? Alternatively, one could summarize what MukBEF does to capture the interest of readers unfamiliar with the topic.

      We modified these first sentences.

      1. p.5 "was cloned onto a medium-copy number plasmid under control of a lacI promoter" Is "lacI promoter" correct? My understanding is that the promoter of the lacI gene is constitutive, whereas the promoter of the downstream lac operon is regulated by LacI. I would recommend providing an annotated plasmid sequence in supplemental material to make things clearer.

We modified it and replaced “lacI promoter” with the correct annotation, pLac.

25. p. 5 heading "MukBEF activity does not initiate at a single locus" and p. 6 "Altogether, the results indicate that the increase in contact does not originate from a specific position on the chromosome but rather appears from numerous sites". Although this conclusion is supported by the follow-up experiments, I felt it is perhaps a bit too strong at this point in the text. Perhaps MukBEF loads slowly at a single site, but then moves away quickly? Would that not also lead to a flat increase in the contact plots? One could consider softening these statements (at least in the section header), and then be more confident later on.

We used 'indicate' and 'suggesting' at the end of this results section, and we feel that we have not overreached in our conclusions at this point. While it is true that we can consider other hypotheses, we believe that, at this stage, our suggestion that MukBEF is loaded over the entire chromosome is the simplest and most likely explanation.

      1. p.7: "[these results] also reveal that MukBEF does not translocate from the Ori region to the terminus of the chromosome as observed with Smc-ScpAB in different bacteria."

      This isn't strictly true for single molecules, is it? Some molecules might translocate from Ori to Ter. Perhaps clarify that this is about the bulk flux of MukBEF?

At this point, our conclusion that MukBEF does not travel from the ori to Ter is global and refers to the results described in this section. However, the referee is correct in pointing out that we cannot exclude the possibility that in a WT configuration (without a Ter in the middle of the right replichore), a specific MukBEF complex can be loaded near Ori and travel all along the chromosome until the Ter. To clarify our statement, we have revised it to 'reveal that MukBEF does not globally translocate from the Ori region to the terminus of the chromosome.' This change is intended to highlight the fact that we are drawing a general conclusion about the behavior of MukBEF and to facilitate its comparison with Smc-ScpAB in B. subtilis.

27. p. 10: The section title "Long-range contacts correlate with MukBEF binding" and the concluding sentence "Altogether, these results indicate that MukBEF promotes long-range DNA contacts independently of the replication process even though it binds preferentially in newly replicated regions" seem to contradict each other. I would rephrase the title as "MukBEF promotes long-range contacts in the absence of replication" or similar.

      We agree with this suggestion and have used the proposed title.

28. p. 13: I recommend reserving the name "condensin" for the eukaryotic condensin complex and using "MukBEF" throughout.

      We used MukBEF throughout.

1. Do you have to know how to code [to be a digital humanist]? I’m a tenured professor of digital humanities and I say ‘yes.’ . . . Personally, I think Digital Humanities is about building things.

      I find this stance to be a form of gatekeeping, and reductive. Quality is of course important, but I find it very frustrating that as soon as something becomes associated with "academics" or "scholarship," people immediately begin limiting who can do it and why, as if severely restricting its practice will keep it "pure" or "better."

    2. how to code

I wonder how long ago this was; there are many ways to create digitally now that don't require that skill. That, or he's just being a gatekeeper.

1. I think social media and bots can coexist more fluidly in the future; the current shortcomings stem from code not being able to distinguish bots with malicious intent from bots that help us. Big corporations don't have the ability to capitalize on useful bots yet and, as a result, usually punish those using them even when no harm is intended (using a bot to delete posts from a third-party app can get you penalized on Instagram).

    1. These pauses may come in handy when posting tweets, to make it look like your bot is taking time to type in the text.

In a more complex bot, I could see this working wonders. I wonder how useful it is if you're typing your responses in your own code already. Wouldn't that just make you look like a slow writer? I'm curious to see if there's any way to make the bot type with a certain tone or on specific topics.
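For what it's worth, a pause like this is usually just a randomized sleep before the API call; a minimal sketch (the `post_tweet` callable is hypothetical, standing in for whatever client function the bot uses):

```python
import random
import time

def post_with_pause(text, post_tweet):
    """Wait a human-ish, randomized interval before posting."""
    typing_delay = random.uniform(2.0, 6.0) + 0.2 * len(text)  # rough typing speed
    time.sleep(typing_delay)
    post_tweet(text)
```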

1. In order to understand how a bot is built and can work, we will now look at the different ways computer programs can be organized. We will cover a bunch of examples quickly here, to hopefully give you an idea of many options for how to write a program. Don’t worry if you don’t follow all of it, as we will go back over these one at a time in more detail throughout the book. In this section, we will not show actual Python computer programs (that will be in the next section). Instead, here we will focus on what programmers call “pseudocode,” which is a human language outline of a program. Pseudocode is intended to be easier to read and write. Pseudocode is often used by programmers to plan how they want their programs to work, and once the programmer is somewhat confident in their pseudocode, they will then try to write it in actual programming language code.

I think it's interesting that the primary goal of pseudocode is to focus on the algorithm's fundamental logic without getting bogged down in the syntax of a specific programming language. It acts as a bridge between the problem-solving and coding phases.
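A tiny example of that bridge, written by us rather than taken from the book: the plan is kept as pseudocode comments, then translated into Python beneath them.

```python
# Pseudocode plan:
#   for each word in the sentence:
#       if the word is too long, shorten it
#       add the word to the tweet text

tweet_words = []
for word in "this is a demonstration sentence".split():
    if len(word) > 7:        # "too long" threshold chosen arbitrarily
        word = word[:7]
    tweet_words.append(word)
tweet = " ".join(tweet_words)
print(tweet)  # this is a demonst sentence
```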

2. Sometimes in programming, we want to group several steps (i.e., statements) together. When we group these steps together we call it a code “block.” These blocks of code are often used with conditionals (e.g., if this condition is true, do these five steps), and with loops (e.g., for each of these items, do these five steps).

      In the context of conditionals, code blocks provide a clear structure to articulate actions based on specific conditions. For example, if a particular condition is met, executing a series of steps encapsulated within a code block becomes more readable and maintainable. Likewise, in the case of loops, code blocks facilitate the repetition of a set of statements for each iteration. This not only streamlines the code but also contributes to its comprehensibility. Understanding the significance of code blocks is essential for proficient programming, as it allows developers to write more organized, modular, and comprehensible code, contributing to better software design and maintenance.
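A short illustration of both uses of a block (our example, not from the book):

```python
numbers = [3, 8, 1, 12]

for n in numbers:        # loop: the indented block runs once per number
    doubled = n * 2      # statements at the same indent are grouped as a block
    if doubled > 10:     # conditional: its block runs only when the test is true
        print(n, "doubles to something large:", doubled)
    else:
        print(n, "stays small when doubled:", doubled)
```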

    1. 21 CFR Section

      This is the Code of Federal Regulations that governs food and drugs within the United States for the FDA, the Drug Enforcement Administration (DEA), and the Office of National Drug Control Policy (ONDCP).

    1. You should take care, however, to make sure that your individual objects can stand alone as much as possible. Tightly coupled objects are objects that rely so heavily on each other that removing or changing one will mean that you have to completely change another one - a real bummer.

      Isn't there a conflict between this principle and code reusability?
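One way to see the distinction is a toy Python sketch (our example): reuse survives as long as objects depend on small, stable interfaces rather than on each other's internals.

```python
# Tightly coupled: Report reaches into Database's internals, so changing
# how Database stores rows forces a change in Report as well.
class Database:
    def __init__(self):
        self.rows = [{"name": "Ada"}, {"name": "Grace"}]

class Report:
    def render(self, db):
        return ", ".join(row["name"] for row in db.rows)

# Loosely coupled: LooseReport depends only on a small method contract, so
# any object providing .names() can be reused or swapped in independently.
class NamesSource:
    def names(self):
        return ["Ada", "Grace"]

class LooseReport:
    def render(self, source):
        return ", ".join(source.names())
```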

    1. Reviewer #2 (Public Review):

      Summary:<br /> This study from Bamgbose et al. identifies a new and important interaction between H4K20me and Parp1 that regulates inducible genes during development and heat stress. The authors present convincing experiments that form a mostly complete manuscript that significantly contributes to our understanding of how Parp1 associates with target genes to regulate their expression.

      Strengths:<br /> The authors present 3 compelling experiments to support the interaction between Parp1 and H4K20me, including:

1) PR-Set7 mutants remove all H4K20me and phenocopy Parp mutant developmental arrest and defective heat shock protein induction.

      2) PR-Set7 mutants have dramatically reduced Parp1 association with chromatin and reduced poly-ADP ribosylation.

      3) Parp1 directly binds H4K20me in vitro.

      Weaknesses:<br /> 1) The histone array experiment in Fig1 strongly suggests that PARP binds to all mono-methylated histone residues (including H3K27, which is not discussed). Phosphorylation of nearby residues sometimes blocks this binding (S10 and T11 modifications block binding to K9me1, and S28P blocks binding to K27me1). However, H3S3P did not block H3K4me1, which may be worth highlighting. The H3K9me2/3 "blocking effect" is not nearly as strong as some of these other modifications, yet the authors chose to focus on it. Rather than focusing on subtle effects and the possibility that PARP "reads" a "histone code," the authors should consider focusing on the simple but dramatic observation that PARP binds pretty much all mono-methylated histone residues. This result is interesting because nucleosome mono-methylation is normally found on nucleosomes with high turnover rates (Chory et al. Mol Cell 2019)- which mostly occurs at promoters and highly transcribed genes. The author's binding experiments could help to partially explain this correlation because PARP could both bind mono-methylated nucleosomes and then further promote their turnover and lower methylation state.

      2) The RNAseq analysis of Parp1/PR-Set7 mutants is reasonable, but there is a caveat to the author's conclusion (Line 251): "our results indicate H4K20me1 may be required for PARP-1 binding to preferentially repress metabolic genes and activate genes involved in neuron development at co-enriched genes." An alternative possibility is that many of the gene expression changes are indirect consequences of altered development induced by Parp1 or PR-Set7 mutants. For example, Parp1 could activate a transcription factor that represses the metabolic genes that they mention. The authors should consider discussing this possibility.

3) The section on the inducibility of heat shock genes is interesting but missing an important control that might significantly alter the author's conclusions. Hsp23 and Hsp83 (group B genes) are transcribed without heat shock, which likely explains why they have H4K20me without heat shock. The authors made the reasonable hypothesis that this H4K20me would recruit Parp-1 upon heat shock (line 270). However, they observed a decrease of H4K20me upon heat shock, which led them to conclude that "H4K20me may not be necessary for Parp1 binding/activation" (line 275). However, their RNA expression data (Fig 4A) argues that both Parp1 and H4K20me are important for activation. An alternative possibility is that group B genes indeed recruit Parp1 (through H4K20me) upon heat shock, but then Parp1 promotes H3/H4 dissociation from group B genes. If Parp1 depletes H4, it will also deplete H4K20me1. To address this possibility, the authors should also do a ChIP for total H4 and plot both the raw signal of H4K20me1 and total H4 as well as the ratio of these signals. The authors could also note that Group A genes may similarly recruit Parp1 and deplete H3/H4 but with different kinetics than Group B genes because their basal state lacks H4K20me/Parp1. To test this possibility, the authors could measure Parp association, H4K20 methylation, and H4 depletion at more time points after heat shock at both classes of genes.

1. Thank you all for the detailed feedback. I want to emphasize we value your feedback. We believe it is a huge advantage for us to be able to create GitLab transparently here in the community, with your contributions, whether they be ideas and analysis here or in actual code. Thank you for your excitement and passion.
    2. (I grant that many of us contribute code to Gitlab, and would also like to participate as members of the development team in guiding the implementations, but clearly the core team has to have the final say in what direction that takes... unless someone wants to create and maintain their own fork of Gitlab ;) )
    1. SEO optimization is a marketing discipline focused on increasing the visibility of a website in search engines such as google, yandex, bing search engines. SEO includes both the technical and creative elements necessary to improve website rankings, customer traffic, and visibility in search engines. There are many aspects to improve SEO, site optimization.

SEO optimization is a marketing discipline focused on increasing the visibility of a website in search engines, such as Google, Yandex and Bing. SEO includes both the technical and the creative elements needed to improve website rankings, customer traffic, and visibility in search engines. There are many aspects to SEO enhancement and site optimization. Some of them include:

* Site loading speed
* Heading separation
* Image optimization, title and markup
* Site code structure and minimization
* Keywords and word cloud
* Using Google, Yandex and Bing provided tools

    1. Need to click on the highlighted item in the outline docs.lansa.com_14_en_lansa015_index.htm

      LANSA's RDML (Rapid Development and Maintenance Language) allows you to create a High Level Application Definition or abstraction of your business application. This definition is independent of the operating platform. Defining logic with a 4GL makes your programs independent of the 3GL layer. A 4GL will eventually generate some type of 3GL code which is compiled and executed, but you never need to modify the generated 3GL. In fact, it should not matter which 3GL is used. A High Level Application Definition should allow you to generate or regenerate the same applications in different languages to make your applications portable.

      This is why you use a 4GL/RDML:

      • High Productivity (Reduce Backlog): A single RDML command can replace hundreds of lines of 3GL code.
      • Reduce Maintenance Effort: High level constructs and less code make maintenance much easier.
      • Platform Independent Definition: RDML can be generated into multiple languages to support multiple platforms.
      • Portable & Durable: RDML is platform independent and can be extended, without impacting development, as 3GL languages improve and change.
      • Focus on Business Application Logic: RDML is easy to use so that you can focus on the business logic instead of the programming language.

      As your business and technology changes, LANSA's RDML protects your application programming efforts.

    1. Author Response

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      The manuscript by Wagstyl et al. describes an extensive analysis of gene expression in the human cerebral cortex and the association with a large variety of maps capturing many of its microscopic and macroscopic properties. The core methodological contribution is the computation of continuous maps of gene expression for >20k genes, which are being shared with the community. The manuscript is a demonstration of several ways in which these maps can be used to relate gene expression with histological features of the human cortex, cytoarchitecture, folding, function, development and disease risk. The main scientific contribution is to provide data and tools to help substantiate the idea of the genetic regulation of multi-scale aspects of the organisation of the human brain. The manuscript is dense, but clearly written and beautifully illustrated.

      Main comments

      The starting point for the manuscript is the construction of continuous maps of gene expression for most human genes. These maps are based on the microarray data from 6 left human brain hemispheres made available by the Allen Brain Institute. By technological necessity, the microarray data is very sparse: only 1304 samples to map all the cortex after all subjects were combined (a single individual's hemisphere has ~400 samples). Sampling is also inhomogeneous due to the coronal slicing of the tissue. To obtain continuous maps on a mesh, the authors filled the gaps using nearest-neighbour interpolation followed by strong smoothing. This may have two potentially important consequences that the authors may want to discuss further: (a) the intrinsic geometry of the mesh used for smoothing will introduce structure in the expression map, and (b) strong smoothing will produce substantial, spatially heterogeneous, autocorrelations in the signal, which are known to lead to a significant increase in the false positive rate (FPR) in the spin tests they used.

Many thanks to the reviewer for their considered feedback. We have addressed these primary concerns in point-by-point responses below. The key conclusions from our new analyses are: (i) while the intrinsic geometry of the mesh had not originally been accounted for in sufficient detail, the findings presented in this manuscript are not driven by mesh-induced structure, and (ii) the spin test null models used in this manuscript [including a modified version introduced in response to (i)] are currently the most appropriate way to mitigate against inflated false positive rates when making statistical inferences on smooth, surface-based data.

      a. Structured smoothing

      A brain surface has intrinsic curvature (Gaussian curvature, which cannot be flattened away without tearing). The size of the neighbourhood around each surface vertex will be determined by this curvature. During surface smoothing, this will make that the weight of each vertex will be also modulated by the local curvature, i.e., by large geometric structures such as poles, fissures and folds. The article by Ciantar et al (2022, https://doi.org/10.1007/s00429-022-02536-4) provides a clear illustration of this effect: even the mapping of a volume of pure noise into a brain mesh will produce a pattern over the surface strikingly similar to that obtained by mapping resting state functional data or functional data related to a motor task.

      Comment 1

      It may be important to make the readers aware of this possible limitation, which is in large part a consequence of the sparsity of the microarray sampling and the necessity to map that to a mesh. This may confound the assessments of reproducibility (results, p4). Reproducibility was assessed by comparing pairs of subgroups split from the total 6. But if the mesh is introducing structure into the data, and if the same mesh was used for both groups, then what's being reproduced could be a combination of signal from the expression data and signal induced by the mesh structure.

      Response 1

The reviewer raises an important question regarding the potential for interpolation and smoothing on a cortical mesh to induce a common/correlated signal due to the intrinsic mesh structure. We have now generated a new null model to test this idea, which indicates that intrinsic mesh structure is not inflating reproducibility in interpolated expression maps. This new null model spins the original samples prior to interpolation, smoothing and comparison between triplet splits of the six donors, with the spins independent between triplets (a single spin shared within each triplet). For computational tractability we took one pair of triplets and regenerated the dataset for each triplet using 10 independent spins. We used these to estimate gene-gene null reproducibility for 90 independent pairwise combinations of these 10 spins. Across these 90 permutations, the average median gene-gene correlation was R=0.03, whereas in the unspun triplet comparisons this was R=0.36. These results indicate that the primary source of the gene-level triplet reproducibility is the underlying shared gene expression pattern rather than interpolation-induced structure.

In Methods 2a: "An additional null dataset was generated to test whether the intrinsic geometry of the cortical mesh, and its impact on interpolation, influenced the benchmarking analyses of DEMs and gradients (Fig S1d, Fig S2d, Fig S3c). In these analyses, the original samples were rotated on the spherical surface prior to subsequent interpolation, smoothing and gradient calculation. Due to computational constraints the full dataset was recreated only for 10 independent spins. These are referred to as the “spun+interpolated null”."

      Author response image 1.

Figure S1d, Gene predictability was higher across all triplet-triplet pairs than when compared to the spun+interpolated null.
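In schematic form, the null procedure looks like the sketch below (our reconstruction with placeholder nearest-neighbour interpolation and no smoothing, not the authors' pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def spun_interpolated_null(sample_xyz, sample_expr, vertex_xyz):
    """One 'spun+interpolated' null map: randomly rotate the sample
    locations on the sphere *before* interpolation, then interpolate to
    the mesh vertices by nearest neighbour (smoothing omitted here)."""
    rotated = Rotation.random().apply(sample_xyz)
    nearest = cKDTree(rotated).query(vertex_xyz)[1]
    return sample_expr[nearest]

# Null reproducibility is then the gene-gene correlation between maps
# built from two independently spun donor subsets; a near-zero median
# indicates the shared mesh is not manufacturing reproducibility.
```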

      Comment 2

      It's also possible that mesh-induced structure is responsible in part for the "signal boost" observed when comparing raw expression data and interpolated data (fig S1a). How do you explain the signal boost of the smooth data compared with the raw data otherwise?

      Response 2

We thank the reviewer for highlighting this issue of mesh-induced structure. We first sought to quantify the impact of mesh-induced structure through the new null model, in which the data are spun prior to interpolation. The new figures S1d, S2d and S3c all show that the main findings are not driven by interpolation over a common mesh structure, but rather originate in the underlying expression data.

Specifically, for the original Figure S1a, the reviewer highlights a limitation: we compared intersubject predictability of raw-sample to raw-sample and interpolated-to-interpolated. In this original formulation, improved prediction scores for interpolated-to-interpolated (the "signal boost") could be driven by mesh-induced structure being applied to both the input and predicted maps. We have updated this so that we are now comparing raw-to-raw and interpolated-to-raw, i.e. whether interpolated values are better estimations of the measured expression values. The new Fig S1a&b (see below) shows a signal boost in gene-level and vertex-level prediction scores (delta R = +0.05), which we attribute to the minimisation of location and measurement noise in the raw data, improving the intersubject predictability of expression levels.

      In Methods 2b: "To assess the effect of data interpolation in DEM generation we compared gene-level and vertex-level reproducibility of DEMs against a “ground truth” estimate of these reproducibility metrics based on uninterpolated expression data. To achieve a strict comparison of gene expression values between different individuals at identical spatial locations we focused these analyses on the subset of AHBA samples where a sample from one subject was within 3 mm geodesic distance of another. This resulted in 1097 instances (spatial locations) with measures of raw gene expression of one donor, and predicted values from the second donor’s un-interpolated AHBA expression data and interpolated DEM. We computed gene-level and vertex-level reproducibility of expression using the paired donor data at each of these sample points for both DEM and uninterpolated AHBA expression values. By comparing DEM reproducibility estimates with those for uninterpolated AHBA expression data, we were able to quantify the combined effect of interpolation and smoothing steps in DEM generation. We used gene-level reproducibility values from DEMs and uninterpolated AHBA expression data to compute a gene-level difference in reproducibility, and we then visualized the distribution of these difference values across genes (Fig S1a). We used gene-rank correlation to compare vertex-level reproducibility values between DEMs and uninterpolated AHBA expression data (Fig S1b)."

      Author response image 2.

Figure S1. Reproducibility of Dense Expression Maps (DEMs) interpolated from spatially sparse postmortem measures of cortical gene expression. a, Signal boost in the interpolated DEM dataset vs. spatially sparse expression data. Restricting to samples taken from approximately the same cortical location in pairs of individuals (within 3mm geodesic distance), there was an overall improvement in intersubject spatial predictability in the interpolated maps. Furthermore, genes with lower predictability in the interpolated maps were also less predictable in the raw dataset, suggesting these genes exhibit higher underlying biological variability rather than methodologically introduced bias. b, Similarly, at the paired sample locations, gene-rank predictability was generally improved in DEMs vs. sparse expression data (median change in R from sparse samples to interpolated for each pair of subjects, +0.5).

Comment 3

How do you explain that despite the difference in absolute value the combined expression maps of genes with and without cortical expression look similar? (fig S1e: in both cases there are high values in the dorsal part of the central sulcus, the occipital pole and the temporal pole, and low values in the precuneus and close to the angular gyrus). Could this also reflect mesh-smoothing-induced structure?

      Response 3

As with comment 1, this is an interesting perspective that we had not fully considered. We would first like to clarify that non-cortical expression is defined from independent datasets, including the "cortex" tissue class of the Human Protein Atlas and genes identified as markers for cortical layers or cortical cells in previous studies. This likely underestimates the true set of cortically expressed genes, as some of these "non-cortical genes" had high intersubject reproducibility scores. Nevertheless we think it appropriate to use a measure of brain expression independent of anything included in the other analyses in this paper. These considerations are part of the reason we provide all gene maps with accompanying uncertainty scores for user discretion rather than simply filtering them out.

      In terms of the spatially consistent pattern of the gene ranks of Fig S1f, this consistent spatial pattern mirrors Transcriptomic Distinctiveness (r=0.52 for non-cortical genes, r=0.75 for cortical genes), so we think that as the differences in expression signatures become more extreme, the relative ranks of genes in that region are more reproducible/easier to predict.

To assess whether mesh-smoothing-induced structure is playing a role, we applied the new null model introduced in response to Comment 1 and asked whether the per-vertex gene-rank reproducibility of independently spun subgroup triplets showed a similar structure to that in our original analyses. Across the 90 permutations, the median correlation between vertex reproducibility and TD was R=0.10. We also recalculated the TD maps for the 10 spun datasets, and the mean correlation with the original TD did not significantly differ from zero (mean R = 0.01, p=0.2, nspins = 10). These results indicate that folding morphology is not the major driver of local or large-scale patterning in the dataset. We have included this as a new Figure S3c.

      We have updated the text as follows:

      In Methods 3a: "Third, to assess whether the covariance in spatial patterning across genes could be a result of mesh-associated structure introduced through interpolation and smoothing, TD maps were recomputed for the spun+interpolated null datasets and compared to the original TD map (Fig S3c)."

      In Results: "The TD map observed from the full DEMs library was highly stable between all disjoint triplets of donors (Methods, Fig S3a, median cross-vertex correlation in TD scores between triplets r=0.77) and across library subsets at all deciles of DEM reproducibility (Methods, Fig S3b, cross-vertex correlation in TD scores r>0.8 for the 3rd-10th deciles), but was not recapitulated in spun null datasets (Fig S3c)."

      Author response image 3.

      Figure S3c, Correlations between TD and TD maps regenerated on datasets spun using two independent nulls, one where the rotation is applied prior to interpolation and smoothing (spun+interpolated) and one where it is applied to the already-created DEMs. In each null, the same rotation matrix is applied to all genes.

      Comment 4

      Could you provide more information about the way in which the nearest-neighbours were identified (results p4). Were they nearest in Euclidean space? Geodesic? If geodesic, geodesic over the native brain surface? over the spherically deformed brain? (Methods cite Moresi & Mather's Stripy toolbox, which seems to be meant to be used on spheres). If the distance was geodesic over the sphere, could the distortions introduced by mapping (due to brain anatomy) influence the geometry of the expression maps?

      Response 4

      We have clarified in the Methods that the mapping is to nearest neighbors on the spherically-inflated surface.

The new null model we have introduced in response to Comments 1 & 3 preserves any mesh-induced structure alongside any smoothing-induced spatial autocorrelation, and the additional analyses above indicate that the main results are not induced by systematic mesh-related interpolation signal. In response to an additional suggestion from the reviewer (Comment 13), we also assessed whether local distortions due to the mesh could be creating apparent border effects in the data, for instance at the V1-V2 boundary. At the V1-V2 border, which coincides anatomically with the calcarine sulcus, we identified the 10 genes with the highest expression gradient along this boundary in the actual dataset and in the spun+interpolated null. The median expression gradient along this border was higher in the actual dataset than in any of the spun datasets, indicating that these boundary effects are not explained by interpolation and cortical geometry effects on the data (new Fig S2d). The text has been updated as follows:

      In Methods 1: "For cortical vertices with no directly sampled expression, expression values were interpolated from their nearest sampled neighbor vertex on the spherical surface (Moresi and Mather, 2019) (Fig 1b)."

      In Methods 2: "We used the spun+interpolated null to test whether high gene gradients could be driven by non-uniform interpolation across cortical folds. We quantified the average gradient for all genes along the V1-V2 border in the atlas, as well as for 10 iterations of the atlas where the samples were spun prior to interpolation. We computed the median gradient magnitude for the 20 top-ranked genes for each (Fig S2d)."
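A compact sketch of this empirical border test is shown below, assuming per-vertex gradient magnitudes have already been computed for the real DEMs and for each spun null; the function and argument names are illustrative.

```python
import numpy as np

def border_gradient_pvalue(grad_real, grad_nulls, border_idx, top_k=20):
    """grad_real: (n_vertices, n_genes) gradient magnitudes from the real
    DEMs; grad_nulls: list of equivalent arrays from spun+interpolated
    nulls; border_idx: indices of vertices along the V1-V2 border."""
    def border_stat(grad):
        per_gene = grad[border_idx].mean(axis=0)      # mean border gradient
        return np.median(np.sort(per_gene)[-top_k:])  # median of top-k genes
    obs = border_stat(grad_real)
    nulls = np.array([border_stat(g) for g in grad_nulls])
    p = (np.sum(nulls >= obs) + 1) / (len(nulls) + 1)  # one-sided empirical p
    return obs, p
```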

      Author response image 4.

      Figure S2d Mean of gradient magnitudes for 20 genes with largest gradients along V1-V2 border, compared to values along the same boundary on the spun+interpolated null atlas. Gradients were higher in the actual dataset than in all spun version indicating this high gradient feature is not primarily due to the effects of calcarine sulcus morphology on interpolation

      Comment 5

      Could you provide more information about the smoothing algorithm? Volumetric, geodesic over the native mesh, geodesic over the sphere, averaging of values in neighbouring vertices, cotangent-weighted laplacian smoothing, something else?

      Response 5

We used surface-based smoothing (geodesic over the white surface) as described in Glasser et al., 2013 and implemented in the HCP Connectome Workbench toolbox (https://www.humanconnectome.org/software/connectome-workbench). We have updated the methods to clarify this.

In Methods 1: "Surface expression maps were smoothed using the Connectome Workbench toolbox (Glasser et al. 2013) with a 20mm full-width at half maximum Gaussian kernel, selected to be consistent with this sampling density (Fig 1c)."

      Comment 6

      Could you provide more information about the method used for computing the gradient of the expression maps (p6)? The gradient and the laplacian operator are related (the laplacian is the divergence of the gradient), which could also be responsible in part for the relationships observed between expression transitions and brain geometry.

      Response 6

We used Connectome Workbench's metric gradient command (Glasser et al., 2013), as used in the HCP pipeline. The source code for the gradient calculation can be found here: https://github.com/Washington-University/workbench/blob/131e84f7b885d82af76ebe21adf2fa97795e2484/src/Algorithms/AlgorithmMetricGradient.cxx

In Methods 2: "For each of the resulting 20,781 gene-level expression maps, the orientation and magnitude of gene expression change at each vertex (i.e. the gradient) was calculated for folded, inflated, spherical and flattened mesh representations of the cortical sheet using Connectome Workbench's metric gradient command (Glasser et al. 2013)."
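For orientation, a thin Python wrapper around this step might look as follows; the filenames are placeholders and the exact flags should be verified against the `wb_command -metric-gradient` help text.

```python
import subprocess

def metric_gradient(surface, metric_in, metric_out, vectors_out=None):
    """Run Connectome Workbench's metric gradient on one surface/metric
    pair. All paths are hypothetical examples."""
    cmd = ["wb_command", "-metric-gradient", surface, metric_in, metric_out]
    if vectors_out is not None:
        cmd += ["-vectors", vectors_out]  # also save per-vertex orientation
    subprocess.run(cmd, check=True)

# e.g. for one gene map on the folded mesh (hypothetical filenames):
# metric_gradient("midthickness.surf.gii", "gene_dem.func.gii",
#                 "gene_grad.func.gii", "gene_gradvec.func.gii")
```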

b. Potentially inflated FPR for spin tests on autocorrelated data.

      Spin tests are extensively used in this work and it would be useful to make the readers aware of their limitations, which may confound some of the results presented. Spin tests aim at establishing if two brain maps are similar by comparing a measure of their similarity over a spherical deformation of the brains against a distribution of similarities obtained by randomly spinning one of the spheres. It is not clear which specific variety of spin test was used, but the original spin test has well known limitations, such as the violation of the assumption of spatial stationarity of the covariance structure (not all positions of the spinning sphere are equivalent, some are contracted, some are expanded), or the treatment of the medial wall (a big hole with no data is introduced when hemispheres are isolated).

      Another important limitation results from the comparison of maps showing autocorrelation. This problem has been extensively described by Markello & Misic (2021). The strong smoothing used to make a continuous map out of just ~1300 samples introduces large, geometry dependent autocorrelations. Indeed, the expression maps presented in the manuscript look similar to those with the highest degree of autocorrelation studied by Markello & Misic (alpha=3). In this case, naive permutations should lead to a false positive rate ~46% when comparing pairs of random maps, and even most sophisticated methods have FPR>10%.

Comment 7

There are currently several researchers working on testing spatial similarity, and the readers would benefit from being made aware of the problem of the spin test and potential solutions. There are also packages providing alternative implementations of spin tests, such as BrainSMASH and BrainSpace, which could be mentioned.

      Response 7

We thank the reviewer for raising the issue of null models. First, with reference to the false positive rate of 46% when maps exhibit spatial autocorrelation, we absolutely agree that this is an issue that must be accounted for, and we address it using the spin test. We acknowledge there has been other work on null models such as BrainSMASH and BrainSpace. Nevertheless, in the Markello and Misic paper to which the reviewer refers, the BrainSMASH null models perform worse with smoother maps (with false positive rates approaching 30% in panel e below), whereas the spin test maintains false positive rates below 10%.

      Author response image 5.

      We have added a brief description of the challenge and our use of the spin test.

In Methods 2a: "Cortical maps exhibit spatial autocorrelation that can inflate the false positive rate, for which a number of methods have been proposed (Alexander-Bloch et al. 2018; Burt et al. 2020; Vos de Wael et al. 2020). At higher degrees of spatial smoothness, this inflated false positive rate is most effectively mitigated using the spin test (Alexander-Bloch et al. 2018; Markello and Misic 2021; Vos de Wael et al. 2020). In the following analyses, when generating a test statistic comparing two spatial maps, we computed 1000 independent spins of the cortical surface using https://netneurotools.readthedocs.io and applied them to the first map whilst keeping the second map unchanged. The test statistic was then recomputed for each spin, generating a null distribution of values one might observe by chance if the maps shared no common organizational features. This is referred to throughout as the "spin test" and the derived p-values as pspin."
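A minimal sketch of this procedure using netneurotools (whose `gen_spinsamples` function the authors cite below) is shown here; argument names follow the netneurotools documentation, but versions differ, so treat this as illustrative rather than the exact published code.

```python
import numpy as np
from netneurotools import stats as nnstats

def spin_test(map_a, map_b, coords, hemiid, n_rotate=1000):
    """map_a, map_b: (n_vertices,) cortical maps; coords: (n_vertices, 3)
    spherical-surface coordinates; hemiid: (n_vertices,) hemisphere labels."""
    spins = nnstats.gen_spinsamples(coords, hemiid,
                                    n_rotate=n_rotate, seed=1234)
    observed = np.corrcoef(map_a, map_b)[0, 1]
    null = np.array([np.corrcoef(map_a[spins[:, i]], map_b)[0, 1]
                     for i in range(n_rotate)])
    # two-sided empirical p-value, counting the observed value itself
    p_spin = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_rotate + 1)
    return observed, p_spin
```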

      Comment 8

      Could it be possible to measure the degree of spatial autocorrelation?

      Response 8

      We agree this could be a useful metric to generate for spatial cortical maps. However, there are multiple potential metrics to choose from and each of the DEMs would have their own value. To address this properly would require the creation of a set of validated tools and it is not clear how we could summarize this variety of potential metrics for 20k genes. Moreover, as discussed above the spin method is an adequate null across a range of spatial autocorrelation degrees, thus while we agree that in general estimation of spatial smoothness could be a useful imaging metric to report, we consider that it is beyond the scope of the current manuscript.
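For illustration, one of the simpler candidate metrics alluded to here is Moran's I; a minimal sketch is below, assuming a vertex-adjacency weight matrix W built from the mesh triangulation (W may be dense or scipy.sparse). This is an illustrative example of one possible metric, not part of the released pipeline.

```python
def morans_i(values, W):
    """Moran's I for one cortical map. values: (n_vertices,) array;
    W: (n_vertices, n_vertices) spatial weight matrix, e.g. binary
    vertex adjacency; matrix @ vector works for dense or sparse W."""
    z = values - values.mean()
    num = len(z) * (z @ (W @ z))   # spatially weighted covariance
    den = W.sum() * (z @ z)        # normalisation
    return num / den               # near 0 if random; toward 1 if autocorrelated
```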

      Comment 9

      Could you clarify which version of the spin test was used? Does the implementation come from a package or was it coded from scratch?

      Response 9

As Markello & Misic note, at the vertex level the various implementations of the spin test become roughly equivalent to the 'original' Alexander-Bloch et al. implementation. We took the code for the 'original' version implemented in Python here: https://netneurotools.readthedocs.io/en/latest/_modules/netneurotools/stats.html#gen_spinsamples.

      This has been updated in the methods (see Response 7).

      Comment 10

      Cortex and non-cortex vertex-level gene rank predictability maps (fig S1e) are strikingly similar. Would the spin test come up statistically significant? What would be the meaning of that, if the cortical map of genes not expressed in the cortex appeared to be statistically significantly similar to that of genes expressed in the cortex?

      Response 10

      Please see response to comment 3, which also addresses this observation.

      Reviewer #2 (Public Review):

      The authors convert the AHBA dataset into a dense cortical map and conduct an impressively large number of analyses demonstrating the value of having such data.

      I only have comments on the methodology.

      Comment 1

First, the authors create dense maps by simply using nearest neighbour interpolation followed by smoothing. Since one of the main points of the paper is the use of a dense map, I find it quite light in assessing the validity of this dense map. The reproducibility values they calculate by taking subsets of subjects are hugely under-powered, given that there are only 6 brains, and they don't inform on local, vertex-wise uncertainties. I wonder if the authors would consider using Gaussian process interpolation. It is really tailored to this kind of problem and can give local estimates of uncertainty in the interpolated values. For hyperparameter tuning, they could use leave-one-brain-out for that.

      I know it is a lot to ask to change the base method, as that means re-doing all the analyses. But I think it would strengthen the paper if the authors put as much effort in the dense mapping as they did in their downstream analyses of the data.

      Response 1

We thank the reviewer for the suggestion to explore Gaussian process interpolation. We have implemented this for our dataset and compared it with our original method using the following three tests: (i) inter-triplet reproducibility of individual gene maps; (ii) microscale validations: areal marker genes; (iii) macroscale validations: biological patterns from Figure 1.

Overall, compared to our original nearest-neighbor interpolation method, GP regression (i) did not substantially improve gene-level reproducibility of expression maps (median correlation increase of R=0.07, which was greater for genes without documented protein expression in cortex); (ii) substantially worsened performance in predicting areal marker genes; and (iii) showed similar but slightly worse performance at predicting the macroscale patterns from Figure 1.

      Given the significantly poorer performance on one of our key tests (ii) we have opted not to replace our original database, but we do now include code for the alternative GP regression methodology in the github repository so others can reproduce/further develop these methods.
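A minimal sketch of the kind of GP interpolation evaluated here is given below, using scikit-learn with 3D sphere coordinates as features (a Euclidean approximation to geodesic distance). The kernel choice, length-scale and noise level are illustrative assumptions and would be tuned, e.g. by the leave-one-brain-out scheme the reviewer suggests; this is not the exact implementation released on the repository.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_dense_map(sample_xyz, sample_expr, vertex_xyz):
    """Fit one gene at a time. sample_xyz: (n_samples, 3) coordinates of
    AHBA samples; sample_expr: (n_samples,) expression for one gene;
    vertex_xyz: (n_vertices, 3) mesh coordinates."""
    kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(sample_xyz, sample_expr)
    mean, std = gpr.predict(vertex_xyz, return_std=True)
    return mean, std  # dense map plus the per-vertex uncertainty the reviewer asks for
```

One practical consideration is cost: fitting ~20k genes independently makes the GP pipeline substantially heavier than nearest-neighbour interpolation.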

      Author response image 6.

ii) Genes ranked by mean expression gradient from current DEMs (left) and Gaussian process-derived interpolation maps (right). Established human and macaque markers are consistently higher-ranked in DEM maps. iii) Figure 1 patterns: interpolated DEMs vs GP regression.

      Author response table 1.

      Comment 2

It is nice that the authors share some code and a notebook, but I think it is rather light. It would be good if the code were better documented, and if the user could have access to the non-smoothed data, in case they want to produce their own dense maps. I was only wondering why the authors didn't share the code that reproduces the many analyses/results in the paper.

      Response 2

      We thank the reviewer for this suggestion. In response we have updated the shared github repository (https://github.com/kwagstyl/magicc). This now includes code and notebooks to reproduce the main analyses and figures.

      Reviewer #1 (Recommendations For The Authors):

      Minor comments

      Comment 11

p4 mentions Fig S1h, but the supp figures only go from S1a to S1g

      Response 11

We thank the reviewer for catching this error. Following renumbering of the supplementary figures, the citation now correctly refers to Fig S1h and has been updated.

      Comment 12

      It would be important that the authors share all the code used to produce the results in the paper in addition to the maps. The core methodological contribution of the work is a series of continuous maps of gene expression, which could become an important tool for annotation in neuroimaging research. Many arbitrary (reasonable) decisions were made, it would be important to enable users to evaluate their influence on the results.

      Response 12

      We thank both reviewers for this suggestion. We have updated the github to be able to reproduce the dense maps and key figures with our methods.

      Comment 13

      p5: Could the sharp border reflect the effect of the geometry of the calcarine sulcus on map smoothing? More generally, could there be an effect of folds on TD?

      Response 13

Please see our response to Reviewer 1, Comment 1 above, where we introduce the new null models now analyzed to test for effects of mesh geometry on our findings. These new null models - where original source data were spun prior to interpolation - suggest that neither the sharp V1/V2 border nor the TD map is an effect of mesh geometry. Specifically: (i) the magnitudes of gradients along the V1/V2 boundary from the null models were notably smaller than those in our original analyses (see new Fig S2d), and (ii) TD maps computed from the new null models showed no correlation with the TD map from our original analyses (new Fig S3c, mean R = 0.01, p=0.2, nspins = 10).

      Comment 14

      p5: Similar for the matching with the areas in Glasser's parcellation: the definition of these areas involves alignment through folds (based on freesurfer 'sulc' map, see Glasser et al 2016). If folds influence the geometry of TDs, could that influence the match?

      Response 14

      We note that Fig S3c provided evidence that folding was not the primary driver of the TD patterning. However, it is true that Glasser et al. use both neuroanatomy (folding, thickness and myelin) and fMRI-derived maps to delineate their cortical areas. As such Figure 2 f & g aren’t fully independent assessments. Nevertheless the reason that these features are used is that many of the sulci in question have been shown to reliably delineate cytoarchitectonic boundaries (Fischl et al., 2008).

      In Results: "A similar alignment was seen when comparing gradients of transcriptional change with the spatial orientation of putative cortical areas defined by multimodal functional and structural in vivo neuroimaging(Glasser et al., 2016) (expression change running perpendicular to area long-axis, pspin<0.01, Fig 2g, Methods)."

      Comment 15

      p6: TD peaks are said to overlap with functionally-specialised regions. A comment on why audition is not there, nor language, but ba 9-46d is? Would that suggest a lesser genetic regulation of those functions?

      Response 15

The reviewer raises a valid point, and this was a result that we were also surprised by. The finding that the auditory cortex is not as microstructurally distinctive as, say, V1 is consistent with other studies applying dimensionality-reduction techniques to multimodal microstructural receptor data (e.g. Zilles et al., 2017; Goulas et al., 2020). These studies found that auditory microstructure is not as extreme as that of either visual or somatomotor areas. From a methodological viewpoint, the primary auditory cortex is significantly smaller than both visual and somatomotor areas, and is therefore captured by fewer independent samples, which could reduce the detail with which its structure is mapped in our dataset.

For the frontal areas, we would note that: i) the frontal peak is the smallest of all peaks found and was more strongly characterised by low z-score genes than by high z-score genes; ii) the anatomical areas in the frontal cortex are much more variable with respect to folding morphology (e.g. Rajkowska 1995). The anatomical label of ba9-46d (and indeed all other labels) was automatically generated as a localiser rather than a strict area label. We have clarified this in the text as follows:

      In Methods 3a: "Automated labels to localize TD peaks were generated based on their intersection with a reference multimodal neuroimaging parcellation of the human cortex(Glasser et al., 2016). Each TD was given the label of the multimodal parcel that showed greatest overlap (Fig 2b)."

      Comment 16.

p7: The proposition that "there is a tendency for cortical sulci to run perpendicular to the direction of fastest transcriptional change" could also be "there is a tendency for the direction of fastest transcriptional change to run perpendicular to cortical sulci"? More pragmatically, this could result from the geometry of transcriptional maps being influenced by sulcal geometry in their construction.

      Response 16

      Please see our response to Reviewer 1, Comment 1 above, where we introduce the new null models now analyzed to test for effects of mesh geometry on our findings. These models indicate that the topography of interpolated gene expression maps do not reflect influences of sulcal geometry on their construction.

      Comment 17

      p7: TD transitions are indicated to precede folding. This is based on a consideration of folding development based on the article by Chi et al 1977, which is quite an old reference. In that paper, the authors estimated the tempo of human folding development based on the inspection of photographs, which may not be sufficient for detecting the first changes in curvature leading to folds. The work of the Developing Human Connectome consortium may provide a more recent indication for timing. In their data, by PCW 21 there's already central sulcus, pre-central, post-central, intra-parietal, superior temporal, superior frontal which can be detected by computing the mean curvature of the pial surface (I can only provide a tweet for reference: https://twitter.com/R3RT0/status/1617119196617261056). Even by PCW 9-13 the callosal sulcus, sylvian fissure, parieto-occipital fissure, olfactory sulcus, cingulate sulcus and calcarine fissure have been reported to be present (Kostovic & Vasung 2009).

      Response 17

Our field lacks the data necessary to provide a comprehensive empirical test of the temporal ordering of regional transcriptional profiles and the emergence of folding. Our results show that transcriptional identities of V1 and TGd are - at least - present at the very earliest stages of sulcation in these regions. In response to the reviewer's comment, we have updated the text with a citation to a similar fetal mapping project, which shows evidence of these folds between weeks 17-21, and made the language around directionality more cautious.

      In Results: "The observed distribution of these angles across vertices was significantly skewed relative to a null based on random alignment between angles (pspin<0.01, Fig 2f, Methods) - indicating that there is indeed a tendency for cortical sulci and the direction of fastest transcriptional change to run perpendicular to each other (pspin<0.01, Fig 2f).

As a preliminary probe for causality, we examined the developmental ordering of regional folding and regional transcriptional identity. Mapping the expression of high-ranking TD genes in fetal cortical laser dissection microarray data (Miller et al., 2014) from 21 PCW (Post Conception Weeks) (Methods) showed that the localized transcriptional identity of V1 and TGd regions in adulthood is apparent during the fetal periods when folding topology begins to emerge (Chi et al. 1977; Xu et al. 2022) (Fig S2d)."

In Discussion: "By establishing that some of these cortical zones are evident at the time of cortical folding, we lend support to a "protomap"-like model (Rakic 1988; O'Leary 1989; O'Leary et al. 2007; Rakic et al. 2009) where the placement of some cortical folds is set up by rapid tangential changes in the cyto-laminar composition of the developing cortex (Ronan et al., 2014; Toro and Burnod, 2005; Van Essen, 2020). The DEMs are derived from fully folded adult donors, and therefore some of the measured genetic-folding alignment might also be induced by mechanical distortion of the tissue during folding (Llinares-Benadero and Borrell 2019; Heuer and Toro 2019). However, no data currently exist to conclusively assess the directionality of this gene-folding relationship."

      Comment 18

      p7: In my supplemental figures (obtained from biorxiv, because I didn't find them among the files submitted to eLife) there's no S2j (only S2a-S2i).

      Response 18

      We apologize, this figure refers to S3k (formerly S3j), rather than S2j. We have updated the main text.

Comment 19

p7: It is not clear from the methods (section 3b) how the adult and fetal brains were compared. Maybe using MSM (Robinson et al 2014)?

      Response 19

      We have now clarified this in Methods text as reproduced below.

In Methods 3b: "We averaged scaled regional gene expression values between donors per gene, and filtered for genes in the fetal LDM dataset that were also represented in the adult DEM dataset - yielding a single final 20,476 × 235 gene-by-sample matrix of expression values for the human cortex at 21 PCW. Each TD peak region was then paired with the closest matching cortical label within the fetal regions. This matrix was then used to test if each TD expression signature discovered in the adult DEM dataset (Fig 2, Table 3) was already present in similar cortical regions at 21 PCW."

      Comment 20

      p7: WGCNA is used prominently, could you provide a brief introduction to its objectives? The gene coexpression networks are produced after adjusting the weight of the network edges to follow a scale-free topology, which is meant to reflect the nature of protein-protein interactions. Soft thresholding increases contrast, but doesn't this decrease a potential role of infinitesimal regulatory signals?

      Response 20

We agree with the reviewer that the introduction to WGCNA needed additional details and have amended the Results (see below). One limitation of WGCNA-derived associations is that it will downweight the role of smaller relationships, including potentially important regulatory signals; WGCNA methods have been titrated to capture strong relationships. This is an inherent limitation of all co-expression-driven methods, which leads to an incomplete characterisation of the molecular biology. Nevertheless we feel these stronger relationships are still worth capturing and interrogating. We have updated the text to introduce WGCNA and acknowledge this potential weakness in the approach.

In Results: "Briefly, WGCNA constructs a connectivity matrix by quantifying pairwise co-expression between genes, raising the correlations to a power (here 6) to emphasize strong correlations while penalizing weaker ones, and creating a Topological Overlap Matrix (TOM) to capture both pairwise similarities in expression and connectivity. Modules of highly interconnected genes are identified through hierarchical clustering. The resultant WGCNA modules enable topographic and genetic integration because they each exist as both (i) a single expression map (eigenmap) for spatial comparison with neuroimaging data (Fig 3a,b, Methods) and, (ii) a unique gene set for enrichment analysis against marker genes systematically capturing multiple scales of cortical organization, namely: cortical layers, cell types, cell compartments, protein-protein interactions (PPI) and GO terms (Methods, Table S2 and S4)."
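To make the steps named in this description concrete, here is a minimal sketch of the soft-thresholding, TOM and clustering stages; this is a generic illustration of the WGCNA procedure under stated assumptions (power 6, average linkage, a pre-chosen module count), not the authors' released code, and in practice it would be run on a gene subset, since full 20k × 20k matrices are memory-heavy.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def wgcna_modules(expr, beta=6, n_modules=16):
    """expr: (n_locations, n_genes) expression matrix.
    Returns one module label per gene."""
    corr = np.corrcoef(expr.T)           # gene-by-gene co-expression
    A = np.abs(corr) ** beta             # soft thresholding (power beta)
    np.fill_diagonal(A, 0.0)             # no self-connections
    k = A.sum(axis=0)                    # per-gene connectivity
    shared = A @ A                       # shared-neighbour weight l_ij
    tom = (shared + A) / (np.minimum.outer(k, k) + 1 - A)  # topological overlap
    np.fill_diagonal(tom, 1.0)
    dissim = 1.0 - tom                   # TOM dissimilarity
    n = dissim.shape[0]
    Z = linkage(dissim[np.triu_indices(n, k=1)], method="average")
    return fcluster(Z, t=n_modules, criterion="maxclust")
```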

      Comment 21

      WGCNA modules look even more smooth than the gene expression maps. Are these maps comparable to low frequency eigenvectors? Autocorrelation in that case should be very strong?

      Response 21

These modules are smooth because they are indeed eigenvectors, which likely smooth out some of the more detailed but less common features seen in individual gene maps. They do exhibit high degrees of autocorrelation; nevertheless, we apply the spin test, which is currently the appropriate null model for spatially autocorrelated cortical maps (Response 7).

      Comment 22

      If the WGCNA modules provide an orthogonal basis for surface data, is it completely unexpected that some of them will correlate with low-frequency patterns? What would happen if random low frequency patterns were generated? Would they also show correlations with some of the 16 WGCNA modules?

      Response 22

We agree with the reviewer that if we used a generative model like BrainSMASH, we would likely see similar low-frequency patterns. However, the figure from Markello & Misic inserted in Response 7 provides evidence that BrainSMASH is not as conservative a null as the spin test when data exhibit high spatial autocorrelation. The spatial enrichment tests carried out on the WGCNA modules all use the spin test.

      Comment 23

      In part (a) I commented on the possibility that brain anatomy may introduce artifactual structure into the data that's being mapped. But what if the relationship between brain geometry and brain organisation were deeper than just the introduction of artefacts? The work of Lefebre et al (2014, https://doi.org/10.1109/ICPR.2014.107; 2018, https://doi.org/10.3389/fnins.2018.00354) shows that clustering based on the 3 lowest frequency eigenvectors of the Laplacian of a brain hemisphere mesh produce an almost perfect parcellation into lobes, with remarkable coincidences between parcel boundaries and primary folds and fissures. The work of Pang et al (https://doi.org/10.1101/2022.10.04.510897) suggests that the geometry of the brain plays a critical role in constraining its dynamics: they analyse >10k task-evoked brain maps and show that the eigenvectors of the brain laplacian parsimoniously explain the activity patterns. Could brain anatomy have a downward effect on brain organisation?

      Response 23

      The reviewer raises a fascinating extension of our work identifying spatial modes of gene expression. We agree that these are low frequency in nature, but would first like to note that the newly introduced null model indicates that the overlaps with salient neuroanatomical features are inherent in the expression data and not purely driven by anatomy in a methodological sense.

      Nevertheless we absolutely agree there is likely to be a complex multidirectional interplay between genetic expression patterns through development, developing morphology and the “final” adult topography of expression, neuroanatomical and functional patterns.

We think that the current manuscript contains extensive in-depth analyses of these expression data, but agree that a more extensive modeling analysis of how expression might pattern or explain functional activation would be a fascinating follow-on, especially in light of these studies from Pang and Lefèvre. Nevertheless we think that this must be left for a future modeling paper integrating these modes of microscale, macroscale and functional anatomy.

In Discussion: "Indeed, future work might find direct links between these module eigenvectors and the similar low-frequency eigenvectors of cortical geometry that have been used as basis functions to segment the cortex (Lefèvre et al. 2018) and to explain complex functional activation patterns (Pang et al. 2023)."

      Comment 24

      On p11: ASD related to rare, deleterious mutations of strong effect is often associated with intellectual disability (where the social interaction component of ASD is more challenging to assess). Was there some indication of a relationship with that type of cognitive phenotype?

      Response 24

Across the two ABIDE cohorts, the total number of participants with ASD and IQ < 70, the clinical threshold for intellectual disability, was n=10, which unfortunately did not allow us to conduct a meaningful test of whether ID impacts the relationship between imaging changes in ASD and the expression maps of genes implicated in ASD by rare variants.

      Comment 25

      Could you clarify if the 6 donors were aligned using the folding-based method in freesurfer?

      Response 25

The 6 donors were aligned using MSMsulc (Robinson et al., 2014), a folding-based method from the HCP group. This is now clarified in the Methods.

      In Methods 1: "Cortical surfaces were reconstructed for each AHBA donor MRI using FreeSurfer(Fischl, 2012), and coregistered between donors using surface matching of individuals’ folding morphology (MSMSulc) (Robinson et al., 2018)."

      Comment 26

The authors make available a rich resource and a series of tools to facilitate their use. They have paid attention to encode their data in standard formats, and their code was made in Python using freely accessible packages instead of proprietary alternatives such as matlab. All this should greatly facilitate the adoption of the approach.

I think it would be important to state more explicitly the conceptual assumptions that the methodology brings. In the same way that a GWAS approach relies on a Mendelian idea that individual alleles encode for phenotypes, what is the idea about the organisation of the brain implied by the orthogonal gene expression modules? Is it that phenotypes - micro and macro - are encoded by linear combinations of a reduced number of gene expression patterns? What would be the role of the environment? The role of non-genic regulatory regions? Some modalities of functional organisation do not seem to be encoded by the expression of any module. Is it just for lack of data or should this be seen as the sign for a different organisational principle? Likewise, what about the aspects of disorders that are not captured by expression modules? Would that hint, for example, to stronger environmental effects? What about linear combinations of modules? Nonlinear? Overall, the authors adopt implicitly, en passant, a gene-centric conceptual standpoint, which would benefit from being more clearly identified and articulated.

There are citations to Rakic's protomap idea (I would also cite the original 1988 paper, and O'Leary's 1989 "protocortex" paper stressing the role of plasticity), which proposes that a basic version of brain cytoarchitecture is genetically determined and transposed from the proliferative ventricular zone regions to the cortical plate through radial migration. In p13 the authors indicate that their results support Rakic's protomap. Additionally, in p7 the authors suggest that their results support a causal arrow going from gene expression to sulcal anatomy. The reviews by O'leary et al (2007), Ronan & Fletcher (2014, already cited), Llinares-Benadero & Borrell (2019) could be considered, which also advocate for a similar perspective.

For nuances on the idea that molecular signals provide positional information for brain development, the article by Sharpe (2019, DOI: 10.1242/dev.185967) is interesting. For nuances on the gene-centric approach of the paper the articles by Rockmann (2012, DOI: 10.1111/j.1558-5646.2011.01486.x) but also from the ENCODE consortium showing the importance of non-genic regions of the genome ("Perspectives on ENCODE" 2020 DOI: 10.1038/s41586-021-04213-8) could be considered. I wouldn't ask to cite ideas from the extended evolutionary synthesis about different inheritance systems (as reviewed by Jablonka & Lamb, DOI: 10.1017/9781108685412) or the idea of inherency (Newman 2017, DOI: 10.1007/978-3-319-33038-9_78-1), but the authors may find them interesting. Same goes for our own work on mechanical morphogenesis which expands on the idea of a downward causality (Heuer and Toro 2019, DOI: 10.1016/j.plrev.2019.01.012)

      Response 26

      We thank the reviewer for recommending these papers, which we enjoyed reading and have deepened our thinking on the topic. In addition to toning down some of the language with respect to causality that our data cannot directly address, we have included additional discussion and references as follows:

In Discussion: "By establishing that some of these cortical zones are evident at the time of cortical folding, we lend support to a "protomap"-like model (Rakic 1988; O'Leary 1989; O'Leary et al. 2007; Rakic et al. 2009) where the placement of some cortical folds is set up by rapid tangential changes in the cyto-laminar composition of the developing cortex (Ronan et al., 2014; Toro and Burnod, 2005; Van Essen, 2020). The DEMs are derived from fully folded adult donors, and therefore some of the measured genetic-folding alignment might also be induced by mechanical distortion of the tissue during folding (Llinares-Benadero and Borrell 2019; Heuer and Toro 2019). However, no data currently exist to conclusively assess the directionality of this gene-folding relationship."

      Overall, the manuscript is very interesting and a great contribution. The amount of work involved is impressive, and the presentation of the results very clear. My comments indicate some aspects that could be made more clear, for example, providing additional methodological information in the supplemental material. Also, making aware the readers and future users of MAGICC of the methodological and conceptual challenges that remain to be addressed in the future for this field of research.

      Reviewer #2 (Recommendations For The Authors):

      Comment 1

      The supplementary figures seem to be missing from the eLife submission (although I was able to find them on europepmc)

      Response 1

      We apologize that these were not included in the documents sent to reviewers. The up-to-date supplementary figures are included in this resubmission and again on biorxiv.

    1. Author Response

      The following is the authors’ response to the original reviews.

      eLife assessment

This important study combines genetically barcoded rabies viruses with spatial transcriptomics in vivo in the mouse brain to decode the connectivity of neural circuits. The data generated by the combination of these approaches in this new way are mostly convincing, as the authors provide validation and proof-of-concept that the approach can be successful. While this new combination of established techniques has promise for elucidating brain connectivity, some nuances and caveats to the interpretation of the results are still lacking, especially with regard to unexpected barcodes arising either from unexpected/novel connections or from unexpected rabies spread.

      In this revised manuscript, we added a new control experiment and additional analyses to address two main questions from the reviewers: (1) How the threshold of glycoprotein transcript counts used to identify source cells was determined, and (2) whether the limited long-range labeling was expected in the trans-synaptic experiment. The new experiments and analyses validated the distribution of source cells and presynaptic cells observed in the original barcoded transsynaptic tracing experiment and validated the choice of the threshold of glycoprotein transcripts. As the reviewers suggested, we also included additional discussion on how future experiments can improve upon this study, including strategies to improve source cell survival and minimizing viral infection caused by leaky expression of TVA. We also provided additional clarification on the analyses for both the retrograde labeling experiment and the trans-synaptic tracing experiment. We modified the Results and Discussion sections on the trans-synaptic tracing experiment to improve clarity to general readers. Detailed changes to address specific comments by reviewers are included below.

      Public Reviews:

      Reviewer #1 (Public Review):

      In this preprint, Zhang et al. describe a new tool for mapping the connectivity of mouse neurons. Essentially, the tool leverages the known peculiar infection capabilities of Rabies virus: once injected into a specific site in the brain, this virus has the capability to "walk upstream" the neural circuits, both within cells and across cells: on one hand, the virus can enter from a nerve terminal and infect retrogradely the cell body of the same cell (retrograde transport). On the other hand, the virus can also spread to the presynaptic partners of the initial target cells, via retrograde viral transmission.

      Similarly to previously published approaches with other viruses, the authors engineer a complex library of viral variants, each carrying a unique sequence ('barcode'), so they can uniquely label and distinguish independent infection events and their specific presynaptic connections, and show that it is possible to read these barcodes in-situ, producing spatial connectivity maps. They also show that it is possible to read these barcodes together with endogenous mRNAs, and that this allows spatial mapping of cell types together with anatomical connectivity.

      The main novelty of this work lies in the combined use of rabies virus for retrograde labeling together with barcoding and in-situ readout. Previous studies had used rabies virus for retrograde labeling, albeit with low multiplexing capabilities, so only a handful of circuits could be traced at the same time. Other studies had instead used barcoded viral libraries for connectivity mapping, but mostly focused on the use of different viruses for labeling individual projections (anterograde tracing) and never used a retrograde-infective virus.

      The authors creatively merge these two bits of technology into a powerful genetic tool, and extensively and convincingly validate its performance against known anatomical knowledge. The authors also do a very good job at highlighting and discussing potential points of failure in the methods.

      We thank the reviewer for the enthusiastic comments.

      Unresolved questions, which more broadly affect also other viral-labeling methods, are for example how to deal with uneven tropism (ie. if the virus is unable or inefficient in infecting some specific parts of the brain), or how to prevent the cytotoxicity induced by the high levels of viral replication and expression, which will tend to produce "no source networks", neural circuits whose initial cell can't be identified because it's dead. This last point is particularly relevant for in-situ based approaches: while high expression levels are desirable for the particular barcode detection chemistry the authors chose to use (gap-filling), they are also potentially detrimental for cell survival, and risk producing extensive cell death (which indeed the authors single out as a detectable pitfall in their analysis). This is likely to be one of the major optimisation challenges for future implementations of these types of barcoding approaches.

      As the reviewer suggested, we included additional discussion about tropism and cytotoxicity in the revised Discussion. Our sensitivity for barcode detection is sufficient, since we estimated (based on manual proofreading) that most barcoded neurons had more than ten counts of a barcode in the trans-synaptic tracing experiment. The high sensitivity may potentially allow us to adapt next-generation rabies virus with low replication, such as the third generation ΔL rabies virus (Jin et al, 2022, biorxiv) in future optimizations.

      Overall the paper is well balanced, the data are well presented and the conclusions are strongly supported by the data. Impact-wise, the method is definitely going to be useful for the neurobiology research community.

      We thank the reviewer for her/his enthusiasm.

      Reviewer #2 (Public Review):

Although the trans-synaptic tracing method mediated by the rabies virus (RV) has been widely utilized to infer input connectivity across the brain to a genetically defined population in mice, the analysis of labeled pre-synaptic neurons in terms of cell-type has been primarily reliant on classical low-throughput histochemical techniques. In this study, the authors made a significant advance toward high-throughput transcriptomic (TC) cell typing by both dissociated single-cell RNAseq and the spatial TC method known as BARseq to decode a vast library of molecularly labeled ("barcoded") RV vectors. First, they demonstrated that a barcoded-RV vector can be employed as a simple retrograde tracer akin to AAVretro. Second, they provided a theoretical classification of neural networks at the single-cell resolution that can be attained through barcoded-RV and concluded that the identification of the vast majority (ideally 100%) of starter cells (the origin of RV-based trans-synaptic tracing) is essential for the inference of single-cell resolution neural connectivity. Taking this into consideration, the authors opted for the BARseq-based spatial TC that could, in principle, capture all the starter cells. Finally, they demonstrated the proof-of-concept in the somatosensory cortex, including inferred connectivity from 381 putative pre-synaptic partners to 31 uniquely barcoded starter cells, as well as many insightful estimations of input convergence at the cell-type resolution in vivo. While the manuscript encompasses significant technical and theoretical advances, it may be challenging for the general readers of eLife to comprehend. The following comments are offered to enhance the manuscript's clarity and readability.

      We modified the Results and Discussion sections on the trans-synaptic tracing experiment to improve clarity to general readers. We separated out the theoretical discussion about barcode sharing networks as a separate subsection, explicitly stated the rationale of how different barcode sharing networks are distinguished in the in situ trans-synaptic tracing experiment, and added additional discussion on future optimizations. Detailed descriptions are provided below.

      Major points:

      1. I find it difficult to comprehend the rationale behind labeling inhibitory neurons in the VISp through long-distance retrograde labeling from the VISal or Thalamus (Fig. 2F, I and Fig. S3) since long-distance projectors in the cortex are nearly 100% excitatory neurons. It is also unclear why such a large number of inhibitory neurons was labeled at a long distance through RV vector injections into the RSP/SC or VISal (Fig. 3K). Furthermore, a significant number of inhibitory starter cells in the somatosensory cortex was generated based on their projection to the striatum (Fig. 5H), which is unexpected given our current understanding of the cortico-striatum projections.

      The labeling of inhibitory neurons can be explained by several factors in the three different experiments.

      (1) In the scRNAseq-based retrograde labeling experiment (Fig. 2 and Fig. S3), the injection site VISal is adjacent to VISp. Because we dissected VISp for single-cell RNAseq, we may find labeled inhibitory neurons at the VISp border that extend short axons into VISal. We explained this in the revised Results.

      (2) In the in situ sequencing-based retrograde labeling experiment (Fig. 3,4), the proximity between the two injection sites VISal and RSP/SC, and the sequenced areas (which included not only VISp but also RSP) could also contribute to labeling through local axons of inhibitory neurons. Furthermore, because we also sequenced midbrain regions, inhibitory neurons in the superior colliculus could pick up the barcodes through local axons. We included an explanation of this in the revised Results.

(3) In the trans-synaptic tracing experiment, we speculate that low-level leaky expression from the TREtight promoter led to non-Cre-dependent expression in many neurons. To test this hypothesis, we first performed a control injection in which we saw that the fluorescent protein expression was indeed restricted to layer 5, as expected from corticostriatal labeling. Based on the labeling pattern, we estimated that about 12 copies of the glycoprotein transcript per cell would likely be needed to achieve fluorescent protein expression. Since many source cells in our experiment were below this threshold, these results support the hypothesis that the majority of source cells with low-level expression of the glycoprotein were likely Cre-independent. Because these cells could still contribute to barcode sharing networks, we could not exclude them as in a conventional bulk trans-synaptic tracing experiment. In future experiments, we can potentially reduce this population by improving the helper AAV viruses used to express TVA and the glycoprotein. We included this explanation in Results and more detailed analysis in Supplementary Note 2, and discussed potential future optimizations in the Discussion. This new analysis in Supplementary Note 2 is also related to the Reviewer's question regarding the threshold used for determining source cells (see below).

2. It is unclear why the authors did not perform an analysis of the barcodes in Fig. 2. Given that the primary objective of this manuscript is to evaluate the effectiveness of multiplexing barcoded technology in RV vectors, I would strongly recommend that the authors provide a detailed description of the barcode data here, including any technical difficulties or limitations encountered, which will be of great value in the future design of RV-barcode technologies. In case the barcode data are not included in Fig. 2, I would suggest that the authors consider excluding Fig. 2 and Fig. S1-S3 in their entirety from the manuscript to enhance its readability for general readers.

In the single-cell RNAseq-based retrograde tracing, all recovered barcodes matched known barcodes in the corresponding library. We included a short description of these results in the revised manuscript.

3. Regarding the trans-synaptic tracing utilizing a barcoded RV vector in conjunction with BARseq decoding (Fig. 5), which is the core of this manuscript, I have a few specific questions/comments. First, the rationale behind defining cells with only two rolony counts of rabies glycoprotein (RG) as starter cells is unclear. Why did the authors not analyze the sample based on the colocalization of GFP (from the AAV) and mCherry (from the RV) proteins, which is a conventional method to define starter cells? If this approach is technically difficult, the authors could provide an independent histochemical assessment of the detection stringency of GFP-positive cells based on two or more rolonies of RG.

      In situ sequencing does not preserve fluorescent protein signals, so we used transcript counts to determine which cells expressed the glycoprotein. We have added new analyses in the Results and in Supplementary Note 2 to determine the transcript counts that were equivalent to cells that had detectable BFP expression. We found that BFP expression is equivalent to ~12 counts of the glycoprotein transcript per cell, which is much higher than the threshold we used. However, we could not solely rely on this estimate to define the source cells, because cells that had lower expression of the glycoprotein (possibly from leaky Cre-independent expression) may still pass the barcodes to presynaptic cells. This can lead to an underestimation of double-labeled and connected-source networks and an overestimation of single-source networks and can obscure synaptic connectivity at the cellular resolution. We thus used a very conservative threshold of two transcripts in the analysis. This conservative threshold will likely overestimate the number of source cells that shared barcodes and underestimate the number of single-source networks. Since this is a first study of barcoded transsynaptic tracing in vivo, we chose to err on the conservative side to make sure that the subsequent analysis has single-cell resolution. Future characterization and optimization may lead to a better threshold to fully utilize data.
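To make the classification logic above concrete, here is a hypothetical sketch of how barcode-sharing networks can be labeled from per-cell glycoprotein (RG) transcript counts; the function and data layout are our own illustrative choices, not the published analysis code.

```python
from collections import defaultdict

def classify_networks(cells, rg_threshold=2):
    """cells: iterable of (barcode, rg_count) tuples, one per barcoded cell;
    rg_threshold: minimum RG transcript count for a cell to be treated as a
    potential source (the conservative threshold discussed above)."""
    counts = defaultdict(lambda: [0, 0])   # barcode -> [n_cells, n_sources]
    for barcode, rg_count in cells:
        counts[barcode][0] += 1
        counts[barcode][1] += int(rg_count >= rg_threshold)
    labels = {}
    for barcode, (n_cells, n_sources) in counts.items():
        if n_sources == 0:
            labels[barcode] = "no-source or lost-source"
        elif n_sources == 1:
            labels[barcode] = "single-source"
        else:
            # counts alone cannot separate double-labeled networks from
            # connected-source networks
            labels[barcode] = "double-labeled or connected-source"
    return labels
```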

      Second, it is difficult to interpret the proportion of the 2,914 barcoded cells that were linked to barcoded starter cells (single-source, double-labeled, or connected-source) and those that remained orphan (no-source or lost-source). A simple table or bar graph representation would be helpful. The abundance of the no-source network (resulting from Cre-independent initial infection of the RV vector) can be estimated in independent negative control experiments that omit either Cre injection or AAV-RG injection. The latter, if combined with BARseq decoding, can provide an experimental prediction of the frequency of double-labeled events since connected-source networks are not labeled in the absence of RG.

We have added Table 2, which breaks down the 2,914 barcoded cells based on whether they are presynaptic or source cells, and which type of network they belong to. We agree with the reviewer that additional Cre- or RG-omitted control experiments performed in parallel would allow an independent estimate of the double-labeled networks and the no-source networks. We have added a discussion of possible controls to further optimize the trans-synaptic tracing approach in future studies to the Discussion.

      Third, I would appreciate more quantitative data on the putative single-source network (Fig. 5I and S6) in terms of the distribution of pre- and post-synaptic TC cell types. The majority of labeling appeared to occur locally, with only two thalamic neurons observed in sample 25311842 (Fig. S6). How many instances of long-distance labeling (for example, > 500 microns away from the injection site) were observed in total? Is this low efficiency of long-distance labeling expected based on the utilized combinations of AAVs and RV vectors? A simple independent RV tracing solely detecting mCherry would be useful for evaluating the labeling efficiency of the method. I have experienced similar "less jump" RV tracing when RV particles were prepared in a single step, as this study did, rather than multiple rounds of amplification in traditional protocols, such as Osakada F et al Nat Protocol 2013.

      We imaged an animal that was injected in parallel to assess labeling (now included in Supplementary Note 2 and Supp. Fig. S5). The labeling pattern in the newly imaged animal was largely consistent with the results from the barcoded experiment: most labeled neurons were seen in the vicinity of the injection site, and sparser labeling was seen in other cortical areas and the thalamus. We further found that most neurons that were labeled in the thalamus were about 1 mm posterior to the center of the injection site, and thus would not have been sequenced in the in situ sequencing experiment (in which we sequenced about 640 µm of tissue spanning the injection site).

      In addition, we found that the bulk of the cells that expressed mCherry from the rabies virus only partially overlapped with the area that contained cells co-expressing BFP with the rabies glycoprotein. Moreover, very few cells co-expressed mCherry and BFP, which would be considered source cells in a conventional mono-synaptic tracing experiment. The small numbers of source cells likely also contributed to the sparseness of long-range labeling in the barcoded experiment.

      These interpretations and comparisons to the barcoded experiment are now included in Supplementary Note 2.

      Reviewer #3 (Public Review):

      The manuscript by Zhang and colleagues attempts to combine genetically barcoded rabies viruses with spatial transcriptomics in order to genetically identify connected pairs. The major shortcoming with the application of a barcoded rabies virus, as reported previously by two groups, is that with the high dropout rate inherent in single-cell procedures, it is difficult to definitively identify connected pairs. By combining the two methods, they are able to establish a platform for doing that, and provide insight into connectivity, as well as the pros and cons of their method, which is well thought out and balanced.

      Overall the manuscript is well-done, but I have a few minor considerations about tone and accuracy of statements, as well as some limitations in how experiments were done. First, the idea of using rabies to obtain broader tropism than AAVs isn't really accurate - each virus has its own set of tropisms, and it isn't clear that rabies is broader (or can be made to be broader).

      As the reviewer suggested, we toned down this claim and stated that rabies virus has different tropism to complement AAV.

      Second, rabies does not label all neurons that project to a target site - it labels some fraction of them.

      We meant to say that retrograde labeling is not restricted to neurons from a certain brain region. We have clarified this in the text.

      Third, the high rate of rabies virus mutation should be considered - if it is, or is not a problem in detecting barcodes with high fidelity, this should be noted.

      Our analysis showed that sequencing 15 bases was sufficient to tolerate a small number of mismatches in the barcode sequences and could distinguish real barcodes from random sequences (Fig. 4A). Thus, we can tolerate mutations in the barcode sequence. We have clarified this in the text.
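
      To make the mismatch-tolerance argument concrete, below is a minimal sketch of error-tolerant barcode matching. This is not the authors' actual pipeline: the 15-nt barcode length follows the text, while the mismatch cutoff, function names, and example sequences are hypothetical.

      ```python
      # Hypothetical illustration: match sequenced barcodes against a known barcode
      # list while tolerating a small number of mismatches (e.g., from RV mutations).

      def hamming(a: str, b: str) -> int:
          """Number of mismatched positions between two equal-length sequences."""
          return sum(x != y for x, y in zip(a, b))

      def match_barcode(read: str, known_barcodes: set[str], max_mismatch: int = 2):
          """Return the unique known barcode within max_mismatch of the read, else None."""
          hits = [bc for bc in known_barcodes if hamming(read, bc) <= max_mismatch]
          return hits[0] if len(hits) == 1 else None  # ambiguous or no hit: drop the read

      # A 15-nt read carrying one mutation-induced error still maps uniquely.
      known = {"ACGTACGTACGTACG", "TTTTGGGGCCCCAAA"}
      print(match_barcode("ACGTACGTACGTACC", known))  # ACGTACGTACGTACG
      ```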

      Fourth, there are a number of implicit assumptions in this manuscript, not all of which are equally backed up by data. For example, it is not clear that all rabies virus transmission is synapse-specific; in fact, quite a few studies argue that it is not (e.g., detection of rabies transcripts in glial cells). Thus, the arguments about lost-source networks, and the idea that losing a cell from the network will stop synaptic transmission, are not clear. There is also the very real possibility that the sicker a starter cell gets, the more non-specific spread of virus (e.g., via necrosis) occurs.

      We agree with the reviewer that how strictly virus transmission is restricted to synapses remains a hotly debated question in the field, and this question is relevant not only to techniques based on barcoded rabies tracing, but to all trans-synaptic tracing experiments. A barcoding-based approach can generate single-cell data that enable direct comparison to other data modalities that measure synaptic connectivity, such as multi-patch and EM. These future experiments may provide additional insights into the questions that the reviewer raised. We have included additional discussion in the Discussion about how non-synaptic transmission of barcodes due to the necrosis of source cells may affect the analysis.

      Regarding the scenario in which the source cell dies, we agree with the reviewer and have clarified in the revised manuscript.

      Fifth, in the experiments performed in Figure 5, the authors used a FLEx-TVA expressed via a retrograde Cre, and followed this by injection of their rabies virus library. The issue here is that there will be many (potentially thousands of) local infection events near the injection site that are TVA-mediated but Cre-independent (i.e., off-target expression of TVA in the absence of Cre). This is a major confound in interpreting the labeling of these cells. They may express very low levels of TVA, but still have infection be mediated by TVA. The authors did not clearly explore how expression of TVA related to rabies virus infection of cells near the rabies injection site. A modified version of TVA, such as 66T, should have been used to mitigate this issue. Otherwise, it is impossible to determine connectivity locally. The authors do not go to great lengths to interpret the findings of these observations, so I am not sure this is a critical issue, but it should be pointed out by the authors as a caveat to their dataset.

      We agree with the reviewer that this type of infection could potentially be a major contributor to no-source networks, which were abundant in our experiment. Because small no-source networks were excluded from our analyses, and large no-source networks were only included for barcodes with low frequency (i.e., it would be nearly impossible statistically to generate such large no-source networks from independent infections), we believe that the effect of independent infections on our analyses was minimized. We have added a control experiment in Fig S5 and Supplementary Note 2, which further supported the hypothesis that there were many independent infections. We also included additional discussion about how this can be assessed and optimized in future studies in the Discussion.
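
      The statistical reasoning behind retaining only large no-source networks with low-frequency barcodes can be illustrated with a simple binomial model. This is an assumed model for illustration, not the authors' exact test, and the numbers below are hypothetical.

      ```python
      # Assumed model: if a barcode has frequency p in the virus library and n cells
      # were infected independently, the number of cells carrying that barcode is
      # ~ Binomial(n, p), so a large network built on a rare barcode is very unlikely
      # to arise from independent infections alone.
      from scipy.stats import binom

      def prob_network_by_chance(k: int, n: int, p: float) -> float:
          """P(at least k of n independently infected cells share a barcode of frequency p)."""
          return binom.sf(k - 1, n, p)

      # Hypothetical numbers: a 10-cell network from a barcode at frequency 1e-4
      # among 3,000 infected cells is essentially impossible by chance (~1e-12).
      print(prob_network_by_chance(10, 3000, 1e-4))
      ```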

      Sixth, the authors are making estimates of rabies spread by comparison to a set of experiments that was performed quite differently. In the two studies cited (Liu et al., done the standard way, and Wertz et al., tracing from a single cell), the authors were likely infecting with a rabies virus using a high multiplicity of infection, which likely yields higher rates of viral expression in these starter cells and higher levels of input labeling. However, in these experiments, the authors need to infect with a low MOI, and explicitly exclude cells with >1 barcode. Having only a single virion trigger infection of starter cells will likely reduce the #s of inputs relative to starter neurons. Thus, the stringent criteria for excluding small networks may not be entirely warranted. If the authors wish to only explore larger networks, this caveat should be explicitly noted.

      In the trans-synaptic labeling experiment, we actually used a high rabies titer (200 nL, 7.6e10 iu/mL) that was comparable to conventional rabies tracing experiments. We did not exclude cells with multiple barcodes (as opposed to barcodes in multiple source cells), because we could resolve multiple barcodes in the same cell and indeed found many cells with multiple barcodes. We have clarified this in the text.

      Overall, if the caveats above are noted and more nuance is added to some of the interpretation and discussion of results, this would greatly help the manuscript, as readers will be looking to the authors as the authority on how to use this technology.

      In addition to addressing the specific concerns of the reviewer as described above, we modified the Results and Discussion sections on the trans-synaptic tracing experiment to improve clarity to general readers and expanded the discussion on future optimizations.

      Reviewer #1 (Recommendations For The Authors):

      The scientific problem is clearly stated and well laid out, the data are clearly presented, and the experiments are well justified and nicely discussed. It was overall a very enjoyable read. The figures are generally nice and clear; however, I find the legends excessively concise. A bit too often, they just sort of introduce the title of the panel rather than properly explaining what is depicted. A clear case is visible in Fig 2, for example, where the description of the panels is minimal, but this is a general trend of the manuscript. This makes the figures a bit hard to follow as self-contained entities, without having to continuously go back to the main text. I think this could be improved with longer and more helpful descriptions.

      We have revised all figure legends to make them more descriptive.

      Other minor things:

      In the cDNA synthesis step for in-situ sequencing, I believe the authors might have forgotten one detail: the addition of aminoallyl dUTP to the RT reaction. If I recall correctly this is done in BARseq. The fact that the authors crosslink with BS-PEG on day 2, makes me suspect they spike in these nucleotides during the RT but this is not specified in the relevant step. Perhaps this is a mistake that needs correction.

      The RT primers we used have an amine group at the 5’ end, which directly allows crosslinking. Thus, we did not need to spike in aminoallyl dUTP in the RT reaction. We have clarified this in the Methods.

      Reviewer #2 (Recommendations For The Authors):

      Throughout the manuscript, there are frequent references to the "Methods" section for important details. However, it can be challenging to determine which specific section of the Methods the authors are referring to, and in some cases, a thorough examination of the entire Methods section fails to locate the exact information needed to support the authors' claims. Below are a few specific examples of this issue. The authors are encouraged to be more precise in their references to the Methods section.

      In the revised manuscript, we numbered each subsection of Methods and updated pointers and associated hyperlinks in the main text to the subsection numbers.

      • On page 7, line 14, it is unclear how the authors compared the cell marker gene expression with the marker gene expression in the reference cell type.

      We have clarified this in the revised manuscript.

      • On page 7, line 33, the authors note that some barcodes may have been missed during the sequencing of the rabies virus libraries, but the Methods section lacked a convincing explanation on this issue (see my point 2 above).

      We included a separate subsection in the Methods on the sequencing of rabies libraries and the analysis of sequencing depth. In this new subsection, we further clarified our reasoning for identifying insufficient sequencing depth as a reason for missing barcodes, especially in comparison to the sequencing depth required to establish exact molecule counts in the established MAPseq and BARseq techniques with Sindbis libraries.

      • On page 9, line 44, the authors state that they considered a barcode to be associated with a cell if they found at least six molecules of that barcode in a cell, as detailed in the Methods section. However, the rationale behind this level of stringency is not provided in the Methods.

      We initially chose this threshold based on visual inspection of the sequencing images of the barcoded cells. Because the labeled cell types were consistent with our expectations (Fig. 4E-G), we did not further optimize the threshold for detecting retrogradely labeled barcoded cells.

      • I have noticed that some important explanations of figure panels are missing in the legends, making it challenging to understand the figures. Below are typical examples of this issue.

      In addition to the examples that the reviewer mentioned below, we also revised many other figure panels to make them clear to the readers.

      • In Fig. 2, "RV into SC" in panel C does not make sense, as RV was injected into the thalamus. There is no explanation of the images in this panel C.

      We have corrected the typo in the revision.

      • In Fig. 3, information on the endogenous gene panel for cell type classification (Table S3) could be mentioned in the legend or corresponding text.

      We now cite Table S3 both in Fig 3 legend and in the main text. We also included a list of the 104 cell type marker genes we used in Table S3.

      • In panel J, it is unclear why the total number of BC cells is 2,752, and not 4,130 as mentioned in the text.

      This is a typo. We have corrected this in the revision. The correct number (3,746) refers to the number of cells that did not belong to either of the two categories at the bottom of the panel, and not the total number of neurons. To make this clear, we now also include the total number of barcoded cells at the top of the panel.

      • In Fig. 4, the definitions of "+" and "−" symbols in panels K and L are unclear. Also, it seems that the second column from the left in panel K should read "T −."

      We corrected the typo in K, further clarified the “Area” labels, and changed the “S” label in 4K to “−”. This change does not change the original meaning of the figure: when considering the variance explained in L4/5 IT neurons, considering the subclass compositional profile is equivalent to not using the compositional profiles of cell types, because L4/5 IT neurons all belong to the same subclass (L4/5 IT subclass). Although operationally we simply considered subclass-level compositional profiles when calculating the variance explained, we think that changing this to “−” is clearer for the readers.

      • In Fig. 5, panel E is uninterpretable.

      We revised the main text and the figure to clarify how we manually proofread cells to determine the QC thresholds for barcoded cells. These plots showed a summary of the proofreading. We also revised the figures to indicate that they showed the fraction of barcoded cells that were considered real after proofreading. In the revised version, we moved these plots to Fig. S5.

      • In Fig. S1, I do not understand the identity of the six samples on the X-axis of panel A (given that only two animals were described in the main text) and what panel B shows, including the definition of map_cluster_conf and map_cluster_corr.

      In the revised Fig. S1, we made it more explicit that the six animals include both animals used for retrograde tracing (2 animals) and those used for trans-synaptic tracing (4 animals). We updated the y axis labels to be more readable and cited the relevant Methods section for definitions.

      • In Fig. S2, please provide the definitions of blue and red dots and values in panel A, as well as the color codes and size of the circles in panel B. My overall impression from panel B is that there is no significant difference between RV-infected and non-infected cells. The authors should provide more quantitative and statistical support for the claim that "RV-infected cells had higher expression of immune response-related genes."

      We toned down the statement to “Consistent with previous studies […], some immune response related genes were up-regulated in virus-infected cells compared to non-infected cells.” Because the main point of the single-cell RNAseq analysis was that rabies did not affect the ability to distinguish transcriptomic types, the change in immune response-related genes was not essential to the main conclusions. We clarified the red and blue dots in panel A and changed panel B to show the top up-regulated immune response-related genes in the revised manuscript.

      • In Fig. S3, the definitions of the color code and circle size are missing.

      We have added the legends in Fig. S3.

    1. Response to the ethical debt article: it mentions that tech came of age in an era of speed and growth. It arrived swiftly with progress in mind, hand in hand with historical capitalistic values. The consequence is technology designed around those principles, creating a lasting echo of its original intentions. In other words, the reason there are so many adverse effects and vulnerabilities, like Zoom-bombing, is that ethics weren't part of the conversation originally, because that wouldn't have allowed a programmer or engineer to finish their work faster. Ethics and other such questions complicate the binary question of how to code a specific program by introducing a subjective element. I wonder whether any theories comment on whether a computer can practice ethics.

    1. Reviewer #3 (Public Review):

      Summary: This paper presents evidence from three behavioral experiments that causal impressions of "launching events", in which one object is perceived to cause another object to move, depend on motion direction-selective processing. Specifically, the work uses an adaptation paradigm (Rolfs et al., 2013), presenting repetitive patterns of events matching certain features to a single retinal location, then measuring subsequent perceptual reports of a test display in which the degree of overlap between two discs was varied, and participants could respond "launch" or "pass". The three experiments report results of adapting to motion direction, motion speed, and "object identity", and examine how the psychometric curves for causal reports shift in these conditions depending on the similarity of the adapter and test. While causality reports in the test display were selective for motion direction (Experiment 1), they were not selective for adapter-test speed differences (Experiment 2) nor for changes in object identity induced via color swap (Experiment 3). These results support the notion that causal perception is computed (in part) at relatively early stages of sensory processing, possibly even independently of or prior to computations of object identity.

      Strengths: The setup of the research question and hypotheses is exceptional. The experiments are carefully performed (appropriate equipment, and careful control of eye movements). The slip adaptor is a really nice control condition and effectively mitigates the need to control motion direction with a drifting grating or similar. Participants were measured with sufficient precision, and a power curve analysis was conducted to determine the sample size. Data analysis and statistical quantification are appropriate. Data and analysis code are shared on publication, in keeping with open science principles. The paper is concise and well-written.

      Weaknesses: The biggest uncertainty I have in interpreting the results is the relationship between the task and the assumption that the results tell us about causality impressions. The experimental logic assumes that "pass" reports are always non-causal impressions and "launch" reports are always causal impressions. This logic is inherited from Rolfs et al (2013) and Kominsky & Scholl (2020), who assert rather than measure this. However, other evidence suggests that this assumption might not be solid (Bechlivanidis et al., 2019). Specifically, "[our experiments] reveal strong causal impressions upon first encounter with collision-like sequences that the literature typically labels "non-causal"" (Bechlivanidis et al., 2019) -- including a condition that is similar to the current "pass". It is therefore possible that participants' "pass" reports could also involve causal experiences.

      Furthermore, since the only report options are "launch" or "pass", it is also possible that "launch" reports are not indications of "I experienced a causal event" but rather "I did not experience a pass event". It seems possible to me that different adaptation transfer effects (e.g. selectivity to motion direction, speed, or color-swapping) change the way that participants interpret the task, or the uncertainty of their impression. For example, it could be that adaptation increases the likelihood of experiencing a "pass" event in a direction-selective manner, without changing causal impressions. Increases in "pass" impressions (or at least, uncertainty around what was experienced) would produce a leftward shift in the PSE as reported in Experiment 1, but this does not necessarily mean that experiences of causal events changed. Thus, changes in the PSEs between the conditions in the different experiments may not directly reflect changes in causal impressions. I would like the authors to clarify the extent to which these concerns call their conclusions into question.

      Leaving these concerns aside, I am also left wondering about the functional significance of these specialised mechanisms. Why would direction matter but speed and object identity not? Surely object identity, in particular, should be relevant to real-world interpretations and inputs of these visual routines? Is color simply too weak an identity?

      References:

      Bechlivanidis, C., Schlottmann, A., & Lagnado, D. A. (2019). Causation without realism. Journal of Experimental Psychology: General, 148(5), 785-804. https://doi.org/10.1037/xge0000602

      Kominsky, J. F., & Scholl, B. J. (2020). Retinotopic adaptation reveals distinct categories of causal perception. Cognition, 203, 104339.

      Rolfs, M., Dambacher, M., & Cavanagh, P. (2013). Visual Adaptation of the Perception of Causality. Current Biology, 23(3), 250-254. https://doi.org/10.1016/j.cub.2012.12.017

    1. Look at the result in the CodePen P2C2b.

      Small error in the CodePen's HTML: extra-light and light are both set to 200. This is also the case in "Jouez avec les différentes propriétés..." ("Play with the different properties...").

    1. As a positive example of where this works well: our VS Code GitLab Workflow extension allows users not only to see comments that were written inside the GitLab UI, but also to respond to those comments right from the IDE, the tool where they actually have to make the changes.
    1. Author Response

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      The authors focused on genetic variability in relation to insulin resistance. They used genetically distinct lines of mice and exposed them to the same diet. They found that genetic predisposition impacts the overall outcome of metabolic disturbances. This work provides a fundamentally novel view on the role of genetics in insulin resistance.

      Reviewer #2 (Public Review):

      Summary:

      In the present study, van Gerwen et al. perform deep phosphoproteomics on muscle from saline or insulin-injected mice from 5 distinct strains fed a chow or HF/HS diet. The authors follow these data by defining a variety of intriguing genetic, dietary, or gene-by-diet phospho-sites that respond to insulin, accomplished through the application of correlation analyses, linear mixed models, and a module-based approach (WGCNA). These findings are supported by validation experiments by intersecting results with a previous profile of insulin-responsive sites (Humphrey et al, 2013) and importantly, mechanistic validation of Pfkfb3, where overexpression in L6 myotubes was sufficient to alter fatty acid-induced impairments in insulin-stimulated glucose uptake. To my knowledge, this resource provides the most comprehensive quantification of muscle phospho-proteins that occur as a result of diet in strains of mice where genetic and dietary effects can be quantifiably attributed in an accurate manner. Utilization of this resource is strongly supported by the analyses provided highlighting the complexity of insulin signaling in muscle, exemplified by contrasts to the "classically-used" C57BL6/J strain. As it stands, I view this exceptional resource as comprehensive with compelling strength of evidence behind the mechanism explored. Therefore, most of my comments stem from curiosity about pathways within this resource, many of which are likely well beyond the scope of incorporation in the current manuscript. These include the integration of previous studies investigating these strains for changes in transcriptional or proteomic profiles and intersections with available human phospho-protein data, many of which have been generated by this group.

      Strengths:

      Generation of a novel resource to explore genetic and dietary interactions influencing the phospho-proteome in muscle. This is accompanied by the elegant application of in silico tools to highlight the utility.

      Weaknesses:

      Some specific aspects of integration with other data among the same fixed strains could be strengthened and/or discussed.

      Reviewer #3 (Public Review):

      Summary:

      The authors aimed to investigate how genetic and environmental factors influence the muscle insulin signaling network and its impact on metabolism. They utilized mass spectrometry-based phosphoproteomics to quantify phosphosites in the skeletal muscle of genetically distinct mouse strains in different dietary environments, with and without insulin stimulation. The results showed that genetic background and diet both affected insulin signaling, with almost half of the insulin-regulated phosphoproteome being modified by genetic background on an ordinary diet, and high-fat high-sugar feeding affecting insulin signaling in a strain-dependent manner.

      Strengths:

      The study uses state-of-the-art phosphoproteomics workflow allowing quantification of a large number of phosphosites in skeletal muscle, providing a comprehensive view of the muscle insulin signaling network. The study examined five genetically distinct mouse strains in two dietary environments, allowing for the investigation of the impact of genetic and environmental factors on insulin signaling. The identification of coregulated subnetworks within the insulin signaling pathway expanded our understanding of its organization and provided insights into potential regulatory mechanisms. The study associated diverse signaling responses with insulin-stimulated glucose uptake, uncovering regulators of muscle insulin responsiveness.

      Weaknesses:

      Different mouse strains have huge differences in body weight on normal and high-fat high-sugar diets, which makes comparison between the models challenging. The muscle proteome across different strains is bound to differ, but the influence of protein abundance changes on phosphosite changes was not assessed. The authors do get around this by calculating the 'insulin response', because short insulin treatment should not affect protein abundance. The limitations acknowledged by the authors, such as the need for larger cohorts and the inclusion of female mice, suggest that further research is needed to validate and expand upon the findings.

      Reviewer #1 (Recommendations For The Authors):

      I would suggest further discussion of the potential differences between males and females of the various strains.

      In the revised manuscript we have included a more detailed discussion of the potential differences between male and female mice in the "Limitations of this study" section on lines 455-459. In particular, a landmark study of HFD-fed inbred mouse strains found that insulin sensitivity, as inferred from the proxy HOMA-IR, was affected by interactions between sex and strain despite generally being greater in female mice (10.1016/j.cmet.2015.01.002). Furthermore, a recent phosphoproteomics study of human induced pluripotent stem-cell derived myoblasts identified groups of insulin-regulated phosphosites affected by donor sex, and by interactions between sex and donor insulin sensitivity (10.1172/JCI151818). Based on these results, we anticipate that both soleus insulin sensitivity and phosphoproteomic insulin responses would differ between male and female mice through interactions with strain and diet, adding yet another layer of complexity to what we observed in this study. This will be an important avenue for future research to explore.

      Reviewer #2 (Recommendations For The Authors):

      The following are comments to authors - many, if not all are suggestions for extended discussion and beyond the scope of the current elegant study.

      In the discussion section (line 428) the authors make a key point in that the genetic, dietary, and interacting patterns of variation of phospho-sites could be due to changes in total protein and/or transcript levels across strains. For example, given that increased expression of Pfkfb3 was sufficient to impact glucose uptake, the transcript levels of the gene might also show a similar correlation with insulin responsiveness as in Fig 6b. Undoubtedly, phospho-proteomics analyses will provide unique information on top of more classical omics layers and uncover what would be an important future direction. Therefore, I would suggest adding to the discussion some guidance on performing similar applications to datasets from, at least some, of the strains used where RNA-seq and proteomics are available.

      We thank the reviewer for this suggestion. To address this, we mined recently published total proteomics data collected from soleus muscles of seven CHOW or HFD-fed inbred mouse strains, three of which were in common with our study (C57Bl6J, BXH9, BXD34; 10.1016/j.cmet.2021.12.013). In this study ex vivo soleus glucose uptake was measured and correlation analysis was performed, so we directly extracted the resulting glucose uptake-protein associations and compared them to the glucose uptake-phosphoprotein associations identified in our study. Indeed, we found that only a minority of proteins correlated at both the phosphosite and total protein levels, highlighting the utility of phosphoproteomics to provide orthogonal information to more classical omics layers. We have included this analysis in lines 303-311.

      Relevant to this, the authors might want to consider depositing scripts to analyze some aspects of the data (ex. WGCNA on P-protein data or insulin-regulated anova) in a repository such as github so that these can be applied easily to other datasets.

      We refer the reviewer to the "Code availability" section on lines 511-513, where we deposited all code used to analyse the data on GitHub.

      In contrast to the points above, I feel that the short time-course of insulin stimulation was one important aspect of the experimental design that was not emphasized enough as a strength. It was mentioned as a limitation in that other time points could provide more info, yes. But given that the total abundance of proteins and transcripts likely doesn't shift tremendously in this time frame, this provides an important appeal to the analysis of phospho-proteomic data. I would suggest highlighting the insulin-stimulated response analysis here as something that leverages the unique nature of phosphoproteomics.

      We are grateful for the reviewer's positivity regarding this aspect of our experimental design. We have reiterated the value of the 10 min insulin stimulation - that it temporally segregates phosphoproteomic and total proteomic changes - in the "Limitations of this study" section on lines 477-481.

      While I recognize the WGCNA analysis as an instrumental way to highlight global patterns of phospho-peptide abundance co-regulation, the analysis currently seems somewhat underdeveloped. For example, Fig 5f-h shows a lot of overlap between kinase substrates and pathways among modules. Clearly, there are informative differences based on the intersection with Humphrey 2013 and the correlation with Pfkfb3. To highlight the specific membership of these modules, most people rank-order module members by correlation with the eigen-gene (or P-peptide) and then perform pathway enrichments on these. Alternatively, it looks like all data was used to generate modules across conditions. One consideration would be to perform WGCNA on relevant comparison data separately (ex. chow mice only and HFHS only) and then compare modules whose membership is retained or shifts between the two. Or even look at module representation for genes that show large correlations with insulin-responsiveness. This might also be a good opportunity to suggest readers intersect module members with muscle eQTLs which colocalize to glucose or insulin to prioritize some potential key drivers.

      We thank the reviewer for their helpful suggestions, which we feel have substantially improved the WGCNA analysis. To probe specific functional differences between subnetworks, we performed rank-based enrichment using phosphopeptide module membership scores. Interestingly, this did reveal pathways that were enriched only in certain modules. However, we found that after p-value adjustment, virtually all enriched pathways lost statistical significance, hence we interpret these results as suggestive only. We have made this analysis available to readers in Fig S4b-d and lines 263-265: "To further probe functional differences we analysed phosphopeptide subnetwork membership scores, which revealed additional pathways enriched in individual subnetworks. However, these results were not significant after p-value adjustment and hence are suggestive only (Fig. S4b-d)". We also visualised module representation for glucose-uptake correlated phosphopeptides. This agreed with our existing analysis in Fig. 6f, where the eigenpeptides of modules V and I were correlated with glucose uptake. We have incorporated this new analysis in Fig. S6b-c and lines 324-325: "Examining the subnetwork membership scores for glucose-uptake correlated phosphopeptides also revealed a preference for clusters V and I, supporting this analysis (Fig. S6b-c)." Finally, in the discussion we have presented the integration of genetic data, such as muscle-specific eQTLs, as a future direction (lines 398-401): "Alternatively, one could overlap subnetworks with genetic information, such as genes associated with glucose homeostasis and other metabolic traits in human GWAS studies, or muscle-specific eQTLs or pQTLs genetically colocalised with similar traits, to further prioritise subnetwork-associated phenotypes and identify potential drivers within subnetworks."
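
      For readers who wish to reproduce this style of analysis, below is a minimal sketch of computing module membership scores, assuming standard WGCNA conventions (the eigenpeptide as the first principal component of a module, and membership as each peptide's correlation with it). This is an illustration with toy data, not the authors' code.

      ```python
      # Sketch under standard WGCNA-style assumptions: eigenpeptide = first principal
      # component of a (samples x peptides) module matrix; module membership = each
      # peptide's correlation with that eigenpeptide. Scores can feed rank-based enrichment.
      import numpy as np

      def eigenpeptide(module: np.ndarray) -> np.ndarray:
          """First principal component of a (samples x peptides) module matrix."""
          centered = module - module.mean(axis=0)
          u, s, vt = np.linalg.svd(centered, full_matrices=False)
          return u[:, 0] * s[0]  # per-sample summary profile of the module

      def module_membership(module: np.ndarray) -> np.ndarray:
          """Correlation of each peptide with the module eigenpeptide."""
          eig = eigenpeptide(module)
          return np.array([np.corrcoef(module[:, j], eig)[0, 1]
                           for j in range(module.shape[1])])

      rng = np.random.default_rng(0)
      X = rng.normal(size=(20, 50))             # toy data: 20 samples x 50 peptides
      print(module_membership(X).round(2)[:5])  # membership scores for 5 peptides
      ```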

      Have the authors considered estimating heritability and GxE for the module eigenpeptides? To my knowledge, this has never been performed and might provide useful information, as the co-regulated P-protein structure arises as a result of the relevant contexts.

      In the revised manuscript we have now analysed eigenpeptides with the same statistical tests used to identify Strain and Diet effects in insulin-regulated phosphopeptides. We have displayed the statistical results in Fig. S4a, and have explicitly mentioned examples of StrainxDiet effects on lines 245-247: "For example, HFD-feeding attenuated the insulin response of subnetwork I in CAST and C57Bl6J strains (t-test adjusted p = 0.0256, 0.0365), while subnetwork II was affected by HFD-feeding only in CAST and NOD (Fig. 5e, Fig. S4a, t-test adjusted p = 0.00258, 0.0256)."

      The integration of modules with adipocyte phosphoproteomic data from the authors' 2013 Cell Metab paper seems like an important way to highlight the integration of this resource to define critical cellular signaling mechanisms. To assess the conservation of signaling mechanisms and relationships to additional key contexts (e.g., exercise), the intersection of the insulin-stimulated P-peptides with human datasets generated by this group (e.g., Cell Metab 2015, Nature Biotech 2022) seems like an obvious future direction to prioritize targets. Figure S3B shows a starting point for these types of integrations.

      To demonstrate the value of integrating our results with related phosphoproteomics data, we have incorporated the reviewer's advice of comparing insulin-regulated phosphosites to exercise-regulated phosphosites from Needham et al., Nature Biotechnology 2022 and Hoffman et al., Cell Metabolism 2015. We identified a small subset of commonly regulated phosphosites (8 across all three studies). Given insulin and exercise both promote GLUT4 translocation, these sites may represent conserved regulatory mechanisms. This analysis is presented in Fig. S3d, Table S2, and lines 129-135: "In addition to insulin, exercise also promotes GLUT4 translocation in skeletal muscle. We identified a small subset of phosphosites regulated by insulin in this study that were also regulated by exercise in two separate human phosphoproteomics studies (Fig. S3d, Table S2, phosphosites: Eef2 T57 and T59, Mff S129 and S131, Larp1 S498, Tbc1d4 S324, Svil S300, Gys1 S645), providing a starting point for exploring conserved signalling regulators of GLUT4 translocation."

      For the Pfkfb3 overexpression system, are there specific P-peptides that are increased/decreased upon insulin stimulation? This might be an interesting future direction to mention in order to link signaling mechanisms.

      We assessed whether canonical insulin signalling was affected by Pfkfb3 overexpression by immunoblotting. Insulin-stimulated phosphorylation of Akt S473, Akt T308, Gsk3a/b S21/S9, and PRAS40 T246 differed little across conditions, with only a weak, statistically insignificant trend towards increased pT308 Akt, pS21/S9 Gsk3a/b, and pT246 PRAS40 in palmitate-treated Pfkfb3-overexpressing cells. Hence, as the reviewer has suggested, an interesting future direction will be to perform phosphoproteomics to characterise more deeply the effects of palmitate and Pfkfb3 overexpression on insulin signalling. We have modified the manuscript to reflect these findings and suggested future directions on lines 362-365: "immunoblotting of canonical insulin-responsive phosphosites on Akt and its substrates GSK3α/β and PRAS40 revealed minimal effect of palmitate treatment and Pfkfb3 overexpression (Fig. S7e-f), hence more detailed phosphoproteomics studies are needed to clarify whether Pfkfb3 overexpression restored insulin action by modulating insulin signalling."

      Reviewer #3 (Recommendations For The Authors):

      This remarkable contribution by the esteemed research group has significantly enriched the field of metabolism. The extensive dataset, intertwined with a sophisticated research design, promises to serve as an invaluable resource for the scientific community. I offer a series of suggestions aimed at potentially elevating the manuscript to an even higher standard.

      Mouse Weight Variation and Correlation Analysis: The pronounced variances in mouse body weights pose a challenge to meaningful comparisons (Fig S1). Could the disparities in the phosphoproteome between basal and insulin-stimulated conditions be attributed to differences in body weight? Consider performing a correlation analysis. Furthermore, does the phosphoproteome of these mouse strains evolve comparably over time? Do these mice age similarly? Kindly incorporate this information.

      We thank the reviewer for the suggested analysis. We found there was a significant correlation between the phosphopeptide insulin response and mouse body weight, either in CHOW-fed mice (Strain effects) or across both diets (Diet effects), for ~25% of phosphopeptides that exhibited a Strain or Diet effect. Hence, while there is a clear effect of body weight on insulin signalling, this influences only a small proportion of the entire insulin-responsive phosphoproteome. Notably, insulin was dosed according to mouse lean mass to ensure equivalent dosage received by the soleus muscle, hence any insulin signalling differences associated with body weight are unlikely due to differences in dosing. As the reviewer also alludes to, different strains could have different lifespans. This may result in mice having different biological ages at the time of experimentation, and this in turn could influence insulin signalling. This possibility is challenging to assess in a quantitative manner because lifespan data is not available for most strains used. However, it is worth noting that female CAST mice live 77% as long as C57Bl6J mice (median lifespan of 671 vs 866 days (10.1073/pnas.1121113109); data are not available for male mice or the other three strains), and substantial differences in insulin signalling were observed between these two strains. Ultimately, regardless of whether body weight and/or lifespan altered insulin signalling, such differences would still have arisen solely from the distinct genetic backgrounds and diets of the mice, hence we believe they are meaningful results that should not be dismissed. We have added this analysis to the revised manuscript in the "Limitations of this study" section on lines 471-477: "We were also unable to determine the extent to which signalling changes arose from muscle-intrinsic or extrinsic factors. For instance, body weight varied substantially across mice and correlated significantly with 25% of Strain and Diet-affected phosphopeptides (Fig. S8c), suggesting obesity-related systemic factors likely impact a subset of the muscle insulin signalling network. Furthermore, genetic differences in lifespan could alter the "biological age" of different strains and their phosphoproteomes, though we could not assess this possibility since lifespan data are not available for most strains used."
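
      As a rough illustration of the correlation analysis described above, the sketch below computes a per-phosphopeptide Pearson correlation with body weight and counts significant associations after multiple-testing adjustment. The data layout and numbers are hypothetical, not the authors' pipeline, and `false_discovery_control` requires SciPy >= 1.11.

      ```python
      # Hypothetical illustration: correlate each phosphopeptide's insulin response
      # with body weight across mice, then apply Benjamini-Hochberg adjustment.
      import numpy as np
      from scipy.stats import pearsonr, false_discovery_control

      rng = np.random.default_rng(1)
      body_weight = rng.normal(30, 5, size=40)   # 40 mice (toy values, grams)
      responses = rng.normal(size=(500, 40))     # 500 phosphopeptides x 40 mice
      responses[:100] += 0.08 * body_weight      # plant some body-weight signal

      pvals = np.array([pearsonr(r, body_weight).pvalue for r in responses])
      qvals = false_discovery_control(pvals)     # BH-adjusted p-values
      print(f"{(qvals < 0.05).mean():.0%} of peptides correlate with body weight")
      ```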

      Soleus Muscle Data and Bias Considerations: Were measurements taken for lean mass and soleus muscle weight? If so, please present the corresponding data.

      Measurements for lean mass and the mass of soleus muscle after grinding have been included in Supplementary Figure S1 (panels c-d).

      As outlined in the methods section, the variation in protein yield from the soleus muscle across each strain is substantial. Notably, the distinct peptide input for phospho enrichment introduces biases, given that muscles with lower input may exhibit reduced identification (Fig S2). This bias might also manifest in the PCA plot (S2C). Ideally, adopting a uniform protein/peptide input would have been advantageous. Address this concern and contemplate moving the PCA plot to the main figure. It's prudent to reconsider the sentence stating, "Samples from animals of the same strain and diet were highly correlated and generally clustered together, implying the data are highly reproducible (Fig. S2b-d)," particularly if the input and total IDs were not matched.

      The reviewer highlights an important point. As the reviewer comments, it would have been our preference to use the same amount of protein material for all samples. However, as there was a wide range in the mass of the soleus muscle across mouse strains (in particular much lower in CAST mice), it was not appropriate to use the same amount of material for all strains. This is indeed evident in the PCA plot (Figure S2c), whereby samples cluster in the second component (PC2) based on the amount of protein material. However, this clustering is not observed in the hierarchical clustering (Figure S2d), and nor are the number of phosphopeptides quantified in each sample substantially impacted by these differences (Figure S2a) as implied by the reviewer. Indeed, the number of phosphopeptides quantified did not noticeably vary when comparing BXH9/BXD34 to C57Bl6J/NOD despite 32.3% less material used, and there were only 12.4% fewer phosphopeptides (average 13891.56 vs 15851.29) in CAST compared to C57Bl6J/NOD strains, despite 51.8% less material used. To further emphasise the minimal effect that input material had on phosphopeptide quantification, we have additionally plotted the number of phosphopeptides quantified in each sample following the filtering steps we employed prior to statistical analysis of the dataset (i.e. ANOVA). This plot (Author response image 1) shows that there is even less variation in the number of quantified phosphopeptides between strains, with only 9.12% fewer phosphopeptides quantified and filtered on average in CAST compared to C57Bl6J/NOD (average 9026.722 vs 9932.711). From a quantitative perspective, in both the PCA (Principal Component 1) and hierarchical clustering analyses, samples are additionally clustered by individual strains, and in the latter they also cluster generally by diet, implying that biological variation between samples remains the primary variation captured in our data. We have modified the manuscript so that these observations are forefront (lines 103-106): "Furthermore, while different strains clustered by the amount of protein material used in the second component of the PCA (Figure S2c), samples from animals of the same strain and diet were highly correlated and generally clustered together, indicating that our data are highly reproducible". To ensure that readers are aware of our decision to alter protein starting material and its implications, we have moved the description of this from the methods to the results, and we have highlighted the impact on phosphopeptide quantification in CAST mice (lines 99-103): "Due to the range in soleus mass across strains (Fig. S1D) we altered the protein material used for EasyPhos (C57Bl6J and NOD: 755 µg, BXH9 and BXD34: 511 µg, CAST: 364 µg), though phosphopeptide quantification was minimally affected, with only 12.4% fewer phosphopeptides quantified on average in CAST compared to C57Bl6J/NOD (average 13891.56 vs 15851.29; Fig. S2a)."

      Author response image 1.

      Phosphopeptide quantification following filtering. a) The number of phosphopeptides quantified in each sample after filtering prior to statistical analysis.

      Phosphosite Quantification Filtering: The quantified phosphosites have been dropped from 23,000 to 10,000. Could you elucidate the criteria employed for filtering and provide a concise explanation in the main text?

      We thank the reviewer for drawing this ambiguity to our attention. Before testing for insulin regulation, we performed a filtering step requiring phosphopeptides to be quantified well enough for comparisons across strains and diets. Specifically, phosphopeptides were retained if they were quantified well enough to assess the effect of insulin in more than eight strain-diet combinations (≥ 3 insulin-stimulated values and ≥ 3 unstimulated values in each combination). We have now included this explanation of the filtering in the main text on lines 108-114.
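
      The stated filtering rule translates directly into code. Below is a minimal sketch with hypothetical column names (the authors' actual data layout may differ): a phosphopeptide passes if, in more than eight strain-diet combinations, it has at least three insulin-stimulated and three unstimulated quantified values.

      ```python
      # Sketch of the stated filtering rule, with hypothetical column names.
      import pandas as pd

      def passes_filter(df: pd.DataFrame, min_per_arm: int = 3, min_combos: int = 9) -> bool:
          """df: one phosphopeptide's measurements, with columns
          ['strain', 'diet', 'insulin', 'value'] where insulin is True/False."""
          counts = (df.dropna(subset=["value"])
                      .groupby(["strain", "diet", "insulin"])["value"].size()
                      .unstack("insulin", fill_value=0)
                      .reindex(columns=[True, False], fill_value=0))
          ok = (counts[True] >= min_per_arm) & (counts[False] >= min_per_arm)
          return int(ok.sum()) >= min_combos  # "more than eight" combinations

      # Toy check: 5 strains x 2 diets, 3 values per arm -> 10 combinations, passes.
      rows = [(s, d, ins, 1.0) for s in "ABCDE" for d in ("chow", "hfd")
              for ins in (True, False) for _ in range(3)]
      demo = pd.DataFrame(rows, columns=["strain", "diet", "insulin", "value"])
      print(passes_filter(demo))  # True
      ```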

      ANOVA Choice Clarification: In Figure 4, there's a transition from one-way ANOVA in B to two-way ANOVA in C. Could you expound on the rationale for selecting these distinct methods?

      In panel B, we first focussed on kinase regulation differences between strains in the absence of a dietary perturbation. Hence, we performed one-way ANOVAs only within the CHOW-fed mice. In panel C, we then consider the effect of perturbation with the HFD. We perform two-way ANOVAs, allowing us to identify effects of the HFD that are uniform across strains (Diet main effect) or variable across strains (Strain-by-diet interaction).

      Cell Line Selection for Functional Experiments: Could you elucidate the rationale behind opting for L6 cells of rat origin over C2C12 mouse cells for functional experiments?

      We acknowledge that C2C12 cells have the benefit of being of mouse origin, which aligns with our mouse-derived phosphoproteomics data. However, they are unsuitable for glucose uptake experiments as they lack an insulin-responsive vesicular compartment even upon GLUT4 overexpression, and undergo spontaneous contraction when differentiated, resulting in confounding non-insulin-dependent glucose uptake (10.1152/ajpendo.00092.2002, 10.1007/s11626-999-0030-8). In contrast, L6 cells readily express insulin-responsive GLUT4, and cannot contract (10.1113/JP281352, 10.1007/s11626-999-0030-8). Therefore, they are a superior model for studying insulin-dependent glucose transport. We have added a justification of L6 cells over C2C12 cells in the revised manuscript, on lines 352-354: "While L6 cells are of rat origin, they are preferable to the popular C2C12 mouse cell line since the latter lack an insulin-responsive vesicular compartment and undergo spontaneous contraction, resulting in confounding non-insulin dependent glucose uptake."

      It's intriguing that while a phosphosite was modulated on Pfkfb2, functional assays were conducted on a different isoform (Pfkfb3) wherein the phosphosite was not detected.

      The correlation between Pfkfb2 S469 phosphorylation and insulin-stimulated glucose uptake suggests that F2,6BP production, and subsequent glycolytic activation, positively regulate insulin responsiveness. There are several ways of testing this: 1) Knock down endogenous Pfkfb2, and re-express either wild-type protein or a S469A phosphomutant. If S469 phosphorylation positively regulates insulin responsiveness, then knockdown should decrease insulin responsiveness and re-expression of wild-type Pfkfb2, but not S469A, should restore it. 2) Induce insulin resistance (e.g. through palmitate treatment), and overexpress phosphomimetic S469D or S469E Pfkfb2 to enhance F2,6BP production. Under our hypothesis, this should reverse insulin resistance. 3) There is some evidence that dual phosphorylation of S469 and S486, another activating phosphosite on Pfkfb2, enhances F2,6BP production through 14-3-3 binding (10.1093/emboj/cdg363). Hence, we may expect that introduction of an R18 sequence into Pfkfb2, which causes constitutive 14-3-3 binding (10.1074/jbc.M603274200), would increase Pfkfb2-driven F2,6BP production, and under our hypothesis this should reverse insulin resistance. 4) The paralog Pfkfb3 lacks Akt regulatory sites and has substantially higher basal activity than Pfkfb2. Thus, overexpression of Pfkfb3 should mimic the effect of phosphorylated Pfkfb2, and hence reverse insulin resistance under our hypothesis. While approaches 1), 2), and 3) directly target Pfkfb2, they have drawbacks. For example, 1) may not work if Pfkfb2 knockdown is compensated for by other Pfkfb isoforms, 2) may not work since D/E phosphomimetics often do not recapitulate the molecular effects of S/T phosphorylation (10.1091/mbc.E12-09-0677), and 3) may not work if S469 phosphorylation does not operate through 14-3-3 binding. Hence we performed 4) as it seemed to be the most robust and cleanest experiment to test our hypothesis. We have revised the manuscript to further clarify the challenges of directly targeting Pfkfb2 and the benefits of targeting Pfkfb3 on lines 342-349: "Since Pfkfb2 requires phosphorylation by Akt to produce F2,6BP substantially, increasing F2,6BP production via Pfkfb2 would require enhanced activating site phosphorylation, which is difficult to achieve in a targeted fashion, or phosphomimetic mutation of activating sites to aspartate/glutamate, which often does not recapitulate the molecular effects of serine/threonine phosphorylation. By contrast, the paralog Pfkfb3 has high basal production rates and lacks an Akt motif at the corresponding phosphosites. We therefore rationalised that overexpressing Pfkfb3 would robustly increase F2,6BP production and enhance glycolysis regardless of insulin stimulation and Akt signalling."

      Insulin-Independent Action of Pfkfb3: The functionality of Pfkfb3 unfolds in an insulin-independent manner, yet it restores insulin action (Fig 6h). Could you shed light on the mechanism underpinning this phenomenon? Consider measuring F2,6BP concentrations or assessing kinase activity upon overexpression.

      Pfkfb3 overexpression increased the glycolytic capacity of L6 myotubes in the absence of insulin stimulation, as inferred by extracellular acidification rate (Fig. S7c). This is indeed consistent with Pfkfb3 enhancing glycolysis through increased F2,6BP concentration in an insulin-independent manner. To shed light on the mechanism connecting this to insulin action, we performed immunoblotting experiments to assess the kinase activity of Akt, a master regulator of the insulin response. Indeed, this experimental direction has precedent as we previously observed that Pfkfb3 overexpression enhanced insulin-stimulated Akt signalling in HEK293 cells, while small-molecule inhibition of Pfkfb kinase activity reduced Akt signalling in 3T3-L1 adipocytes (10.1074/jbc.M115.658815). However, insulin-stimulated phosphorylation of Akt S473, Akt T308, Gsk3a/b S21/S9, and PRAS40 T246 differed little across conditions, with only a weak, statistically insignificant trend towards increased pT308 Akt, pS21/S9 Gsk3a/b, and pT246 PRAS40 in palmitate-treated Pfkfb3-overexpressing cells. Hence, a more detailed phosphoproteomics study will be needed to assess whether Pfkfb3 restores insulin action by modulating insulin signalling. We have described these immunoblotting experiments in lines 361-365 and Fig. S7e-f. We also discussed potential mechanisms through which Pfkfb3-enhanced glycolysis could connect to insulin action in the discussion (lines 427-434).

      Figure 6h Statistical Analysis: For the 2DG uptake in Figure 6h, a conventional two-way ANOVA might be more appropriate than a repeated measures ANOVA.

      On reflection, we agree that a conventional ANOVA is more appropriate. Furthermore, for simplicity and conciseness we have decided to analyse and present only insulin-stimulated/unstimulated 2DG uptake fold change values in Figure 6h. We have presented all unstimulated and insulin-stimulated values in Figure S7d.

    1. None of them have a marketing department gaslighting their customers or plastering ads everywhere.

      Viewing a marketing department as wasteful is dangerous. Marketing -- ideally -- exists to identify product-market fit: the customer niches a company can cater to, and to push all parts of the company to fulfill those needs more carefully.

      Is this what happens in practice? Not at many large companies. Is it still necessary to yell louder than paid products to find user adoption? Absolutely.

      Having the time and knowledge to explore pioneering, ethical software is a privilege and perspective that not many people have. Mastodon never caught on with non-technical people during the past year's migration of users. Ask someone who doesn't code how they found the tools they use, and they'll probably say they heard about them from a friend who saw an ad on the internet.

    1. programmers not fully understanding all the possible states their code may execute in

      "Out of the Tar Pit" https://curtclifton.net/papers/MoseleyMarks06a.pdf explains it well

    1. When someone wants a computer to perform a task (that hasn’t already been programmed), a human programmer will act as a translator to translate that task into a programming language. Next, a compiler (or interpreter) program will translate the programming language code into the binary code that the computer runs. In this set-up, the programming language acts as an intermediate language the way that French did in my earlier analogy.

      Since I don't have any knowledge about how computers operate, at first I thought the programming language was the computer's language. However, it's very interesting that a compiler or interpreter translates the programming language to binary.
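
      To see one such translation step in action, CPython can display the intermediate instructions (bytecode) it compiles source code into. Note that bytecode is an interpreter-level intermediate form rather than raw machine binary, but it illustrates the same idea of translating a programming language into lower-level instructions.

      ```python
      # Peek at the translation: CPython compiles source text into bytecode
      # instructions, an intermediate form the interpreter executes.
      import dis

      source = "result = (2 + 3) * 4"
      code = compile(source, "<example>", "exec")  # programming language -> bytecode
      dis.dis(code)                                # human-readable instruction listing
      ```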

    1. SSL Certs. Threat actors generally use: ● The same cert. ● The same cert generation tool or algorithm, especially if they wrote it. ● The same pool of certs.

      SSL cert hunting: 1. Why it works: human laziness and path dependence. 2. How: different infrastructure uses the same cert, or certs drawn from a shared pool; the certs are generated with the same tool or algorithm (e.g., the cert-generation algorithm bundled with MSF mentioned in the text, which can be treated as a tool fingerprint).

      SSH tunnel detection: 1. Detecting a reverse shell forwarded over an SSH proxy: for a typed character in a normal SSH session, packet length = SSH header + 1 byte char code + padding + HMAC; this could be 36, 40, 48 bytes or so. When SSH tunnels another SSH session, packet length = SSH header + [previous SSH pkt] + HMAC; this could be 76, 84, 98 bytes and so on. Note: exact sizes depend on the cipher block size and the HMAC algorithm and implementation on the client and server (https://www.trisul.org/blog/traffic-analysis-of-secure-shell-ssh/).
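
      The packet-length heuristic above maps directly to a simple classifier. The sketch below uses the byte ranges from the note; a real detector must account for the negotiated cipher block size and HMAC, so these exact sizes are illustrative only.

      ```python
      # Heuristic sketch: flag an SSH session whose interactive packets fall in the
      # "tunneled SSH" size range rather than the "typed character" range.
      KEYSTROKE_SIZES = {36, 40, 48}   # SSH header + 1-byte char + padding + HMAC
      TUNNELED_SIZES = {76, 84, 98}    # SSH header + [inner SSH packet] + HMAC

      def looks_tunneled(packet_lengths: list[int], ratio: float = 0.5) -> bool:
          """Does a session's interactive traffic look like SSH-in-SSH?"""
          interactive = [n for n in packet_lengths
                         if n in KEYSTROKE_SIZES | TUNNELED_SIZES]
          if not interactive:
              return False
          return sum(n in TUNNELED_SIZES for n in interactive) / len(interactive) >= ratio

      print(looks_tunneled([76, 84, 76, 40, 84, 98]))  # True: mostly tunnel-sized
      ```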

    1. You can do this with recursive descent, but it’s a chore.

      Jonathan Blow recently revisited this topic with Casey Muratori. (They last talked about this 3 years ago.)

      What's a little absurd is that (a) the original discussion is something like 3–6 hours long and doesn't use recursive descent—instead they descended into some madness about trying to work out from first principles how to special-case operator precedence—and (b) they start out in this video poo-pooing people who speak about "recursive descent", saying that it's just a really obnoxious way to say writing ordinary code—again, all this after they went out of their way three years ago to not "just" write "normal" code—and (c) they do this while launching into yet another 3+ hour discussion about how to do it right—in a better, less confusing way this time, with Jon explaining that he spent "6 or 7 hours" working through this "like 5 days ago". Another really perverse thing is that when he talks about Bob's other post (Parsing Expressions) that ended up in the Crafting Interpreters book, he calls it stupid because it's doing "a lot" for something so simple. Again: this is to justify spending 12 hours to work out the vagaries of precedence levels and reviewing a bunch of papers instead of just spending, I dunno, 5 or 10 minutes or so doing it with recursive descent (the cost of which mostly comes down to just typing it in).

      So which one is the real chore? Doing it the straightforward, fast way, or going off and attending to one's unrestrained impulse that you for some reason need to special-case arithmetic expressions (and a handful of other types of operations) like someone is going to throw you off a building if you don't treat them differently from all your other ("normal") code?

      Major blind spots all over.
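      For comparison, a minimal sketch of what "just typing it in" amounts to: a recursive-descent expression parser with one function per precedence level. This is a generic sketch, not code from either Blow's streams or Nystrom's book:

      ```python
      import re

      def tokenize(src):
          # Numbers, parentheses, and the four arithmetic operators.
          return re.findall(r"\d+|[()+\-*/]", src)

      class Parser:
          def __init__(self, tokens):
              self.tokens = tokens
              self.pos = 0

          def peek(self):
              return self.tokens[self.pos] if self.pos < len(self.tokens) else None

          def eat(self):
              tok = self.peek()
              self.pos += 1
              return tok

          def expr(self):                      # expr := term (('+'|'-') term)*
              node = self.term()
              while self.peek() in ("+", "-"):
                  node = (self.eat(), node, self.term())
              return node

          def term(self):                      # term := atom (('*'|'/') atom)*
              node = self.atom()
              while self.peek() in ("*", "/"):
                  node = (self.eat(), node, self.atom())
              return node

          def atom(self):                      # atom := NUMBER | '(' expr ')'
              if self.peek() == "(":
                  self.eat()
                  node = self.expr()
                  self.eat()                   # consume the closing ')'
                  return node
              return int(self.eat())

      print(Parser(tokenize("1+2*(3-4)")).expr())
      # ('+', 1, ('*', 2, ('-', 3, 4)))
      ```

      Precedence falls out of which function calls which; adding a level means adding a function.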

    1. See (1) for additional discussion of literate programming. 1. D. E. Knuth, Literate programming. Comput. J. 27, 97–111 (1984).

      Remove this please.

    1. I dove into the source code. Rick was right: no-one could possibly understand what Rick had created. Except for Rick. It was a reflection of the workings of his own mind. Some of it was very clever, a lot of it was copy-pasta, it was all very idiosyncratic, and it was not at all documented.

      I used to work on such a project :)

    1. Divine Command Theory

      This is one of the most dangerous and insidious belief systems known to man, in my opinion. When I was growing up, the Catholic Church taught that pre-marital sex was a sin and we would all go to hell if we tried it. In many cultures today it is taught that 'god' doesn't condone homosexuality. For example, in Iran, homosexual acts are punishable by flogging and even death in some cases because of this belief in a judgmental 'god.' I don't believe God made this planet, let alone created a moral code for all to live by. I think we made it so we could project our guilt, shame, and fears onto others. I believe that projection is perception.

    1. Author Response

      Reviewer #1 (Public Review):

      This paper combines a number of cutting-edge approaches to explore the role of a specific mouse retinal ganglion cell type in visual function. The approaches used include calcium imaging to measure responses of RGC populations to a collection of visual stimuli and CNNs to predict the stimuli that maximally activate a given ganglion cell type. The predictions about feature selectivity are tested and used to generate a hypothesized role in visual function for the RGC type identified as interesting. The paper is impressive; my comments are all related to how the work is presented.

      We thank the reviewer for appreciating our study and for the interesting comments.

      Is the MEI approach needed to identify these cells?

      To briefly summarize the approach, the paper fits a CNN to the measured responses to a range of stimuli, extracts the stimulus (over time, space, and color) that is predicted to produce a maximal response for each RGC type, and then uses these MEIs to investigate coding. This reveals that G28 shows strong selectivity for its own MEI over those of other RGC types. The feature of the G28 responses that differentiates it appears to be its spatially-coextensive chromatic opponency. This distinguishing feature, however, should be relatively easy to discover using more standard approaches.

      The concern here is that the paper could be read as indicating that standard approaches to characterizing feature selectivity do not work and that the MEI/CNN approach is superior. There may be reasons why the latter is true that I missed or were not spelled out clearly. I do think the MEI/CNN approach as used in the paper provides a very nice way to compare feature selectivity across RGC types - and that it seems very well suited in this context. But it is less clear that it is needed for the initial identification of the distinguished response features of the different RGC types. What would be helpful for me, and I suspect for many readers, is a more nuanced and detailed description of where the challenges arise in standard feature identification approaches and where the MEI/CNN approaches help overcome those challenges.

      Thank you for the opportunity for clarification. In fact, the MEI (or an alternative nonlinear approach) is strictly necessary to discover this selectivity: as we show above (response #1 to editorial summary), the traditional linear filter approach does not reveal the color opponency. We realize that this fact was not made sufficiently clear in the initial submission. In the revised manuscript, we now include this analysis. Moreover, throughout the manuscript, we added explanations on the differences between MEIs and standard approaches and more intuitions about how to interpret MEIs. We also added a section to the discussion dedicated to explaining the advantages and limitations of the MEI approach.
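      For readers who want the mechanics: an MEI is typically found by gradient ascent on a fitted model's predicted response. The following is a minimal sketch only; the stand-in architecture, stimulus shapes, target cell, and norm constraint are placeholders, not the authors' actual model or hyperparameters.

      ```python
      import torch
      import torch.nn as nn

      # Stand-in for the fitted CNN: maps a (batch, color, time, y, x) stimulus
      # to predicted responses of all recorded cells. Shapes are made up.
      model = nn.Sequential(
          nn.Flatten(),
          nn.Linear(2 * 50 * 18 * 16, 32), nn.Tanh(),
          nn.Linear(32, 60),
      )
      cell_idx, norm_budget = 27, 30.0  # target cell and stimulus-norm constraint

      stim = torch.randn(1, 2, 50, 18, 16, requires_grad=True)
      optimizer = torch.optim.SGD([stim], lr=1.0)
      for _ in range(200):
          optimizer.zero_grad()
          response = model(stim)[0, cell_idx]
          (-response).backward()            # ascend the cell's predicted response
          optimizer.step()
          with torch.no_grad():             # project back onto the norm constraint
              stim.mul_(min(1.0, norm_budget / stim.norm().item()))
      mei = stim.detach()                    # the most exciting input (MEI)
      ```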

      Interpretation of MEI temporal structure

      Some aspects of the extracted MEIs look quite close to those that would be expected from more standard measurements of spatial and temporal filtering. Others - most notably some of the temporal filters - do not. In many of the cells, the temporal filters oscillate much more than linear filters estimated from the same cells. In some instances, this temporal structure appears to vary considerably across cells of the same type (Fig. S2). These issues - both the unusual temporal properties of the MEIs and the heterogeneity across RGCs of the same type - need to be discussed in more detail. Related to this point, it would be nice to understand how much of the difference in responses to MEIs in Figure 4d is from differences in space, time, or chromatic properties. Can you mix and match MEI components to get an estimate of that? This is particularly relevant since G28 responds quite well to the G24 MEI.

      One advantage of the MEI approach is that it allows one to distinguish between transient and sustained cells in a way that is not possible with the linear filter approach: because we seek to maximize activity over an extended period of time, transient cells need to be stimulated repetitively, whereas sustained cells will also respond in the absence of multiple contrast changes. In the revised manuscript, we add a section explaining this, together with Figure 3-supplement 2, which illustrates this point by showing that the oscillations disappear when we optimize the MEI for a short time window. The benefit of a longer time window lies in the increased discriminability between transient and sustained cells, which is also shown in the new supplementary figure.

      Regarding the heterogeneity of MEIs, this is most likely due to heterogeneity within the RGC group: “The mixed non-direction-selective groups G17 and G31 probably contain more than one type, as supported by multiple distinct morphologies and genetic identities (for example, G31,32, Extended Data Fig. 5) or response properties (for example, G17, see below)” (Baden et al. Nature 2016). We added a paragraph in the Results section.

      Concerning the reviewer’s last point: We agree that it is important to know whether the defining feature - i.e., the selectivity for chromatic contrast - is robust against variations in other stimulus properties. New electrophysiological data included in the manuscript (Fig. 6e,f) offers some insights here. We probed G28/tSbC cells with full-field flashed stimuli that varied in chromatic contrast. Despite not matching the cell’s preferred spatial and temporal properties, this stimulus still recovered the cell’s preference for chromatic contrast. While we think it is an interesting direction to systematically quantify the relative importance of temporal, spatial and chromatic MEI properties for an RGC type’s responses, we think this is beyond the scope of this manuscript.

      Explanation of RDM analysis

      I really struggled with the analysis in Figure 5b-c. After reading the text several times, this is what I think is happening. Starting with a given RGC type (#20 in Figure 5b), you take the response of each cell in that group to the MEI of each RGC type, and plot those responses in a space where the axes correspond to responses of each RGC of this type. Then you measure euclidean distance between the responses to a pair of MEIs and collect those distances in the RDM matrix. Whether correct or not, this took some time to arrive at and meant filling in some missing pieces in the text. That section should be expanded considerably.

      We appreciate the reviewer's efforts to understand this analysis and confirm that they interpreted it correctly. However, we decided to remove the analysis. The point we were trying to make with it is that the transformation implemented by G28/tSbC cells "warps" stimulus space and increases the discriminability of stimuli with characteristics similar to the cell's MEI. We now make this point in a, we think, more accessible manner through the new analysis of the nonlinearity of G28/tSbC cells' color opponency (see above).

      Centering of MEIs

      How important is the lack of precise centering of the MEIs when you present them? It would be helpful to have some idea about that - either from direct experiments or using a model.

      In the electrophysiological experiments, the MEIs were centered precisely (now Fig. 5 in revised manuscript) and these experiments yielded almost identical results to the 2P imaging experiments, where the MEIs were presented on a grid to approach the optimal position for the recorded cells. Additionally, all model simulations work with perfectly centered MEIs. We hence conclude that our grid-approach at presenting stimuli provided sufficient precision in stimulus positioning.

      We added this information to the revised manuscript.

      Reviewer #2 (Public Review):

      This paper uses two-photon imaging of mouse ganglion cells responding to chromatic natural scenes along with convolutional neural network (CNN) models fit to the responses of a large set of ganglion cells. The authors analyze CNN models to find the most effective input (MEI) for each ganglion cell as a novel approach to identifying ethological function. From these MEIs they identify chromatic opponent ganglion cells, and then further perform experiments with natural stimuli to interpret the ethological function of those cells. They conclude that a type of chromatic opponent ganglion cell is useful for the detection of the transition from the ground to the sky across the horizon. The experimental techniques, data, and fitting of CNN models are all high quality. However, there are conceptual difficulties with both the use of MEIs to draw conclusions about neural function and the ethological interpretations of experiments and data analyses, as well as a lack of comparison with standard approaches. These bear directly both on the primary conclusions of the paper and on the utility of the new approaches.

      We thank the reviewer for the detailed comments.

      1) Claim of feature detection.

      The color opponent cells are cast as a "feature detector" and the term 'detector' is in the title. However insufficient evidence is given for this, and it seems likely a mischaracterization. An example of a ganglion cell that might qualify as a feature detector is the W3 ganglion cell (Zhang et al., 2012). These cells are mostly silent and only fire if there is differential motion on a mostly featureless background. Although this previous work does not conduct a ROC analysis, the combination of strong nonlinearity and strong selectivity are important here, giving good qualitative support for these cells as participating in the function of detecting differential motion against the sky. In the present case, the color opponent cells respond to many stimuli, not just transitions across the horizon. In addition, for the receiver operator characteristic (ROC) analysis as to whether these cells can discriminate transitions across the horizon, the area under the curve (AUC) is on average 0.68. Although there is not a particular AUC threshold for a detector or diagnostic test to have good discrimination, a value of 0.5 is chance, and values between 0.5 and 0.7 are considered poor discrimination, 'not much better than a coin toss' (Applied Logistic Regression, Hosmer et al., 2013, p. 177). The data in Fig. 6F is also more consistent with a general chromatic opponent cell that is not highly selective. These cells may contribute information to the problem of discriminating sky from ground, but also to many other ethologically relevant visual determinations. Characterizing them as feature detectors seems inappropriate and may distract from other functional roles, although they may participate in feature detection performed at a higher level in the brain.

      The reviewer apparently uses a rather narrow definition of a feature detector. We, however, argue for a broader definition, which, in our view, better captures the selectivities described for RGCs in the literature. For example, while W3 cells have been studied quite extensively, one can probably agree that only a fraction of the possible stimulus space has been explored so far. Therefore, it cannot be excluded that W3 cells also respond to features other than small dark moving dots, yet we (like the reviewer) still refer to them as feature detectors. Likewise, direction-selective (DS) RGCs are commonly considered feature detectors (i.e., responsive to a specific motion direction), although they also respond to flashes and spike when null-direction motion is paused (Barlow & Levick J Physiol 1965).

      The G28/tSbC cells’ selectivity for full-field changes in chromatic contrast enables them to encode ground-sky horizon transitions reliably across stimulus parameters (e.g., see new Fig. 7i panel). This cell type is thus well-suited to contribute to detecting context changes, as elicited by ground-sky transitions.

      Therefore, we think that the G28/tSbC RGC can be considered a feature detector and as such, could be used at a higher level in the brain to quickly detect changes in visual context (see also Kerschensteiner Annu Rev Vis Sci 2022). Still, their signals may also be useful for other computations (e.g., defocus, as discussed in our manuscript).

      Regarding the ROC analysis, we acknowledge that an average AUC of 0.68 may seem comparatively low; however, this value is based on the temporally downsampled information (i.e., by way of Ca2+ imaging) gathered from the activity of a single cell. A downstream area would have access to the activity of a local population of cells. This AUC value should therefore be considered a lower bound on the discrimination performance of a downstream area. We now comment on this in the manuscript.
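      To make the AUC figure concrete, here is a small illustration (with made-up numbers, not the authors' data) of how such a single-cell AUC is computed: responses to snippets with and without a ground-to-sky transition are pooled, and AUC measures how well a threshold on the response separates the two classes (0.5 is chance, 1.0 is perfect).

      ```python
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      labels = np.r_[np.ones(100), np.zeros(400)]          # 1 = transition frame
      responses = np.r_[rng.normal(0.7, 1.0, 100),         # responses on transitions
                        rng.normal(0.0, 1.0, 400)]         # responses otherwise
      print(roc_auc_score(labels, responses))              # ~0.69 for this overlap
      ```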

      2) Appropriateness of MEI analysis for interpretations of the neural code.

      There is a fundamental incompatibility between the need to characterize a system with a complex nonlinear CNN and then characterizing cells with a single MEI. MEIs represent the peak in a complex landscape of a nonlinear function, and that peak may or may not occur under natural conditions. For example, MEIs do not account for On-Off cells, On-Off direction selectivity, nonlinear subunits, object motion sensitivity, and many other nonlinear cell properties where multiple visual features are combined. MEIs may be a useful tool for clustering and distinguishing cells, but there is not a compelling reason to think that they are representative of cell function. This is an open question, and thus it should not be assumed as a foundation for the study. This paper potentially speaks to this issue, but there is more work to support the usefulness of the approach. Neural networks enable a large set of analyses to understand complex nonlinear effects in a neural code, and it is well understood that the single-feature approach is inadequate for a full understanding of sensory coding. A great concern is that the message that the MEI is the most important representative statistic directs the field away from the primary promise of the analysis of neural networks and takes us back to the days when only a single sensory feature is appreciated, now the MEI instead of the linear receptive field. It is appropriate to use MEI analyses to create hypotheses for further experimental testing, and the paper does this (and states as much) but it further takes the point of view that the MEI is generally informative as the single best summary of the neural code. The representation similarity analysis (Fig. 5) acts on the unfounded assumption that MEIs are generally representative and conveys this point of view, but it is not clear whether anything useful can be drawn from this analysis, and therefore this analysis does not support the conclusions about changes in the representational space. Overall this figure detracts from the paper and can safely be removed. In addition, in going from MEI analysis to testing ethological function, it should be made much more clear that MEIs may not generally be representative of the neural code, especially when nonlinearities are present that require the use of more complex models such as CNNs, and thus testing with other stimuli are required.

      The reviewer correctly characterizes MEIs as representing the peak of a nonlinear loss landscape that, in this case, describes the neurons' tuning. As such, the MEI approach is indeed capable of characterizing nonlinear neuronal feature selectivities that are captured by a nonlinear model, such as the CNN we used here. We therefore disagree with the suggestion that MEIs should not be used "when nonlinearities are present that require the use of more complex models such as CNNs". It is unclear what other "analysis of neural networks" the reviewer refers to; MEIs are one approach to analyzing the predictive neural network.

      We also want to clarify that, while the reviewer is correct in stating that the MEI approach as used here only identifies a single peak, this does not mean that it cannot capture neuronal selectivities for a combination of features, as long as this combination of features can be described as a point in high-dimensional stimulus space. In fact, this is demonstrated in our manuscript for the case of G28/tSbC cell’s selectivity for large or full-field, sustained changes in chromatic contrast (a combination of spatial, temporal, and chromatic features). While approaches similar to the one used here generate several diverse exciting inputs (Ding et al. bioRxiv 2023) and could therefore also fully capture On-Off selectivities, we pointed out the limitation of MEIs when describing On-Off cells in the manuscript (both original and revised).

      Regarding the reviewer's concern that "[...] the message that the MEI is the most important representative statistic [...] takes us back to the days when only a single sensory feature is appreciated": it was certainly not our intention to proclaim MEIs as the ultimate representation of a cell's response features, and we have clarified this in the revised manuscript. However, we also think that (i) by applying a nonlinear method to extract chromatic, temporal, and spatial response properties from natural movie responses, we go beyond many characterizations that use linear methods to extract spatial-only or temporal-only, achromatic response properties from static, white-noise stimuli. This said, we agree that (ii) expanding around the peak is desirable, and we do so in an additional analysis (new Fig. 6); but reducing complexity to a manageable degree (at least at first) is useful and even necessary when discovering novel response properties.

      Concerning the representational similarity analysis (RSA): the point we were trying to make with this analysis is that the transformation implemented by G28 "warps" stimulus space and increases the discriminability of stimuli with characteristics similar to the cell's MEI. We now make this point in a more accessible fashion through the above-mentioned analysis, where we extended the estimate around the peak. We therefore agree to remove the RSA from the paper.

      In the revised manuscript, we (a) discuss the advantages and limitations of the MEI approach in more detail (in Results and Discussion; see also our reply #1) and (b) replaced the RSA analysis.

      3) Usefulness of MEI approach over alternatives. It is claimed that analyzing the MEI is a useful approach to discovering novel neural coding properties, but to show the usefulness of a new tool, it is important to compare results to the traditional technique. The more standard approach would be to analyze the linear receptive field, which would usually come from the STA of white noise measurement, but here this could come from the linear (or linear-nonlinear) model fit to the natural scene response, or by computing an average linear filter from the natural scene model. It is important to assess whether the same conclusion about color opponency can come from this standard approach using the linear feature (average effective input), and whether the MEIs are qualitatively different from the linear feature. The linear feature should thus be compared to MEIs for Fig. 3 and 4, and the linear feature should be compared with the effects of natural stimuli in terms of chromatic contrast (Fig. 6b). With respect to the representation analysis (Fig. 5), although I don't believe this is meaningful for MEIs, if this analysis remains it should also be compared to a representation analysis using the linear feature. In fact, a representation analysis would be more meaningful when performed using the average linear feature as it summarizes a wider range of stimuli, although the most meaningful analysis would be directly on a broader range of responses, which is what is usually done.

      We agree that the comparison with a linear model is an important validation. Therefore, we performed an additional analysis (see also reply #1, as well as Fig. 6 and corresponding section in the manuscript) which demonstrates that an LN model does not recover the chromatic feature selectivity. This finding supports our claims about the usefulness of the MEI approach over linear approaches.

      Regarding the comment on the representation analysis, as mentioned above, we consider it replaced by the analysis comparing results from an LN model and a nonlinear CNN.

      4) Definition of ethological problem. The ethological problem posed here is the detection of the horizon. The stimuli used do not appear to relate to this problem as they do not include the horizon and only include transitions across the horizon. It is not clear whether these stimuli would ever occur with reasonable frequency, as they would only occur with large vertical saccades, which are less common in mice. More common would be smooth transitions across the horizon, or smaller movements with the horizon present in the image. In this case, cells which have a spatial chromatic opponency (which the authors claim are distinct from the ones studied here) would likely be more important for use in chromatic edge detection or discrimination. Therefore the ethological relevance of any of these analyses remains in question.

      It is further not clear if detection is even the correct problem to consider. The horizon is always present, but the problem is to determine its location, a conclusion that will likely come from a population of cells. This is a distinct problem from detecting a small object, such as a small object against the background of the sky, which may be a more relevant problem to consider.

      Thank you for giving us the opportunity to clear these things up. First, we would like to clarify that we propose that G28/tSbC cells contribute to detecting context changes, such as transitions across the horizon from ground to sky, not to detecting the horizon itself. We acknowledge that we were not clear enough about this in the manuscript and have corrected this. To back up our hypothesis that G28 RGCs contribute to detecting context changes, we performed an additional simulation analysis, which is described in our reply #3 (see above).

      5) Difference in cell type from those previously described. It is claimed that the chromatic opponent cells are different from those previously described based on the MEI analysis, but we cannot conclude this because previous work did not perform an MEI analysis. An analysis should be used that is comparable to previous work, the linear spatiotemporal receptive field should be sufficient. However, there is a concern that because linear features can change with stimulus statistics (Hosoya et al., 2005), a linear feature fit to natural scenes may be different than those from previous studies even for the same cell type. The best approach would likely be presenting a white noise stimulus to the natural scenes model to compute a linear feature, which still carries the assumption that this linear feature from the model fit to a natural stimulus would be comparable to previous studies. If the previous cells have spatial chromatic opponency and the current cells only have chromatic opponency in the center, there should be both types of cells in the current data set. One technical aspect relating to this is that MEIs were space-time separable. Because the center and surround have a different time course, enforcing this separability may suppress sensitivity in the surround. Therefore, it would likely be better if this separability were not enforced in determining whether the current cells are different than previously described cells. As to whether these cells are actually different than those previously described, the authors should consider the following uncited work; (Ekesten Gouras, 2005), which identified chromatic opponent cells in mice in approximate numbers to those here (~ 2%). In addition, (Yin et al., 2009) in guinea pigs and (Michael, 1968) in ground squirrels found color-opponent ganglion cells without effects of a spatial surround as described in the current study.

      First of all, we did not intend to claim to have discovered a completely new type of color-opponent tuning in general; what we were trying to say is that tSbC cells display spatially co-extensive color opponency, a feature selectivity not previously described in this mouse RGC type, and one which may be used to signal context changes as elicited by ground-sky transitions.

      Concerning the reviewer’s first argument about a lack of comparability of our results to results previously obtained with a different approach: We think that this is now addressed by the new analysis (new Fig. 6), where we show why linear methods are limited in their capability to recover the type of color opponency that we discovered with the MEI approach.

      Regarding the argument about center-surround opponency, we agree that “if the previous cells have spatial chromatic opponency and the current cells only have chromatic opponency in the center, there should be both types of cells in the current data set”. We did not focus on analyzing center-surround opponency in the present study, but from the MEIs, it is visible that many cells have a stronger antagonistic surround in the green channel compared to the UV channel (see Fig. 4a, example RGCs of G21, G23, G24; Figure 3-supplement 1 example RGCs of G21, G23, G24, G31, G32). Importantly, the MEIs shown in Fig. 4a were also shown in the verification experiment, and had G28 RGCs preferred this kind of stimulus, they would have responded preferentially to these MEIs, which was not the case (Fig. 4f).

      It should also be noted here that, while the model's filters were space-time separable, we did not restrict the MEIs to be space-time separable during optimization. However, we analyzed only the rank-1 components of the MEIs (see Methods section Validating MEIs experimentally), since our analysis focused on aspects of retinal processing not contingent on spatiotemporal interactions in the stimulus.

      In summary, we are convinced that our finding of center-opponency in G28 is not an artifact of the methodology.

      We discuss this in the manuscript and add the references mentioned by the reviewer to the respective part of the Discussion.

      Reviewer #3 (Public Review):

      This study aims to discover ethologically relevant feature selectivity of mouse retinal ganglion cells. The authors took an innovative approach that uses large-scale calcium imaging data from retinal ganglion cells stimulated with both artificial and natural visual stimuli to train a convolutional neural network (CNN) model. The resulting CNN model is able to predict stimuli that maximally excite individual ganglion cell types.

      The authors discovered that modeling suggests that the "transient suppressed-by-contrast" ganglion cells are selectively responsive to Green-Off, UV-On contrasts, a feature that signals the transition from the ground to the sky when the animal explores the visual environment. They tested this hypothesis by measuring the responses of these suppressed-by-contrast cells to natural movies, and showed that these cells are preferentially activated by frames containing ground-to-sky transitions and exhibit the highest selectivity of this feature among all ganglion cell types. They further verified this novel feature selectivity by single-cell patch clamp recording.

      This work is of high impact because it establishes a new paradigm for studying feature selectivity in visual neurons. The data and analysis are of high quality and rigor, and the results are convincing. Overall, this is a timely study that leverages rapidly developing AI tools to tackle the complexity of both natural stimuli and neuronal responses and provides new insights into sensory processing.

      We thank the reviewer for appreciating our study.

    1. Experts without accountability start acting in their own interests rather than that of their customers/users. And we don’t know how to hold programmers accountable without understanding the code they write.
    1. If you already have computer-based hobbies such as gaming or a social-media addiction, why not learn to run your own services as a hobby? Note, this is very different from learning to code. It's more about learning to be your own sysadmin and tech support, rather than being a programmer.
    1. that the Designer + Product-marketing duo is the new force to reckon with compared to the traditional Product Manager-Engineer duo when you want to roll out amazing products to your customers

      25 years ago the 1st round of funding was spent building our servers, racks and data centers plus some foundational code.

      Then you could get to work building a product. I call this "showing up".

      Today you can build a product before you even raise any money and get distribution globally in an instant. Now you need to not just show up, but STAND OUT.

      Given this shift, the hardest challenge has moved from building the technical foundations for a product to getting attention and adoption in the market.

    1. Author Response

      Reviewer #1 (Public Review):

      In this work George et al. describe RatInABox, a software system for generating surrogate locomotion trajectories and neural data to simulate the effects of a rodent moving about an arena. This work is aimed at researchers that study rodent navigation and its neural machinery.

      Strengths:

      • The software contains several helpful features. It has the ability to import existing movement traces and interpolate data with lower sampling rates. It allows varying the degree to which rodents stay near the walls of the arena. It appears to be able to simulate place cells, grid cells, and some other features.

      • The architecture seems fine and the code is in a language that will be accessible to many labs.

      • There is convincing validation of velocity statistics. There are examples shown of position data, which seem to generally match between data and simulation.

      Weaknesses:

      • There is little analysis of position statistics. I am not sure this is needed, but the software might end up more powerful and the paper higher impact if some position analysis was done. Based on the traces shown, it seems possible that some additional parameters might be needed to simulate position/occupancy traces whose statistics match the data.

      Thank you for this suggestion. We have added a new panel to figure 2 showing a histogram of the time the agent spends at positions of increasing distance from the nearest wall. As you can see, RatInABox is a good fit to the real locomotion data: positions very near the wall are under-explored (in the real data this is probably because whiskers and physical body size block positions very close to the wall) and positions just away from but close to the wall are slightly over explored (an effect known as thigmotaxis, already discussed in the manuscript).

      As you correctly suspected, fitting this warranted a new parameter that controls the strength of the wall repulsion, which we call "wall_repel_strength". The motion model hasn't changed mathematically; all we did was take a parameter that was originally a fixed constant of 1, unavailable to the user, and make it a variable that can be changed (see methods section 6.1.3 for the maths). The curves fit best when wall_repel_strength ~= 2. Methods and parameters table have been updated accordingly. See Fig. 2e.
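      A minimal sketch of how this would be set, assuming the params-dict pattern RatInABox uses for its other Agent parameters (exact import paths and defaults may differ from the released API):

      ```python
      from ratinabox import Environment, Agent

      env = Environment()
      # wall_repel_strength = 1 reproduces the previous fixed behaviour;
      # ~2 gave the best fit to the real locomotion data (Fig. 2e).
      agent = Agent(env, params={"wall_repel_strength": 2.0})

      for _ in range(1000):
          agent.update()
      ```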

      • The overall impact of this work is somewhat limited. It is not completely clear how many labs might use this, or have a need for it. The introduction could have provided more specificity about examples of past work that would have been better done with this tool.

      At the point of publication we, like yourself, also didn't know to what extent there would be a market for this toolkit; however, we were pleased to find that there was. In its initial 11 months RatInABox has accumulated a growing, global user base, over 120 stars on GitHub, and north of 17,000 downloads through PyPI. We have accumulated a list of testimonials[5] from users of the package vouching for its utility and ease of use, four of which are abridged below. These testimonials come from a diverse group of 9 researchers spanning 6 countries across 4 continents and varying career stages, from pre-doctoral researchers with little computational exposure to tenured PIs. Finally, the community not only uses RatInABox but is also building it: at the time of writing RatInABox has logged 20 GitHub "Issues" and 28 "pull requests" from external users (i.e. those who aren't authors on this manuscript), ranging from small discussions and bug-fixes to significant new features, demos and wrappers.

      Abridged testimonials:

      ● “As a medical graduate from Pakistan with little computational background…I found RatInABox to be a great learning and teaching tool, particularly for those who are underprivileged and new to computational neuroscience.” - Muhammad Kaleem, King Edward Medical University, Pakistan

      ● “RatInABox has been critical to the progress of my postdoctoral work. I believe it has the strong potential to become a cornerstone tool for realistic behavioural and neuronal modelling” - Dr. Colleen Gillon, Imperial College London, UK

      ● “As a student studying mathematics at the University of Ghana, I would recommend RatInABox to anyone looking to learn or teach concepts in computational neuroscience.” - Kojo Nketia, University of Ghana, Ghana

      ● “RatInABox has established a new foundation and common space for advances in cognitive mapping research.” - Dr. Quinn Lee, McGill, Canada

      The introduction continues to include the following sentence, highlighting examples of past work which relied on generating artificial movement and/or neural data and which, by implication, could have been done better (or at least accelerated and standardised) using our toolbox.

      “Indeed, many past[13, 14, 15] and recent[16, 17, 18, 19, 6, 20, 21] models have relied on artificially generated movement trajectories and neural data.”

      • Presentation: Some discussion of case studies in Introduction might address the above point on impact. It would be useful to have more discussion of how general the software is, and why the current feature set was chosen. For example, how well does RatInABox deal with environments of arbitrary shape? T-mazes? It might help illustrate the tool's generality to move some of the examples in supplementary figure to main text - or just summarize them in a main text figure/panel.

      Thank you for this question. Since the initial submission of this manuscript RatInABox has been upgraded and environments have become substantially more “general”. Environments can now be of arbitrary shape (including T-mazes), boundaries can be curved, they can contain holes and can also contain objects (0-dimensional points which act as visual cues). A few examples are showcased in the updated figure 1 panel e.

      To further illustrate the tool's generality beyond the structure of the environment, we continue to summarise the reinforcement learning example (Fig. 3e) and the neural decoding example in section 3.1. In addition to this we have added three new panels to figure 3 highlighting new features which, we hope you will agree, make RatInABox significantly more powerful and general, and satisfy your suggestion of clarifying utility and generality in the manuscript directly.

      On the topic of generality, we wrote the manuscript in such a way as to demonstrate the rich variety of ways RatInABox can be used without providing an exhaustive list of potential applications. For example, RatInABox can be used to study neural decoding and it can be used to study reinforcement learning -- not because it was purpose-built with these use-cases in mind, but because it contains a set of core tools designed to support spatial navigation and neural representations in general. For this reason we would rather keep the demonstrative examples as supplements and implement your suggestion of further raising attention to the large array of tutorials and demos provided on the GitHub repository by modifying the final paragraph of section 3.1 to read:

      “Additional tutorials, not described here but available online, demonstrate how RatInABox can be used to model splitter cells, conjunctive grid cells, biologically plausible path integration, successor features, deep actor-critic RL, whisker cells and more. Despite including these examples we stress that they are not exhaustive. RatInABox provides the framework and primitive classes/functions from which highly advanced simulations such as these can be built.”

      Reviewer #3 (Public Review):

      George et al. present a convincing new Python toolbox that allows researchers to generate synthetic behavior and neural data specifically focusing on hippocampal functional cell types (place cells, grid cells, boundary vector cells, head direction cells). This is highly useful for theory-driven research where synthetic benchmarks should be used. Beyond just navigation, it can be highly useful for novel tool development that requires jointly modeling behavior and neural data. The code is well organized and written and it was easy for us to test.

      We have a few constructive points that they might want to consider.

      • Right now the code only supports X,Y movements, but Z is also critical and opens new questions in 3D coding of space (such as grid cells in bats, etc). Many animals effectively navigate in 2D, as a whole, but they certainly make a large number of 3D head movements, and modeling this will become increasingly important and the authors should consider how to support this.

      Agents now have a dedicated head direction variable (before head direction was just assumed to be the normalised velocity vector). By default this just smoothes and normalises the velocity but, in theory, could be accessed and used to model more complex head direction dynamics. This is described in the updated methods section.

      In general, we try to tread a careful line. For example, we embrace certain aspects of physical and biological realism (e.g. modelling environments as continuous, or fitting motion to real behaviour) and avoid others (such as the biophysics/biochemistry of individual neurons, or the mechanical complexities of joint/muscle modelling). It is hard to decide where to draw the line, but we have a few guiding principles:

      1. RatInABox is best suited to normative modelling and to neuroAI-style questions probing the level of behaviour and representations. We consciously avoid unnecessary complexities that do not directly contribute to these domains.

      2. Compute: To best accelerate research we think the package should remain fast and lightweight. Certain features are ignored if computational cost outweighs their benefit.

      3. Users: If, and as, users require complexities e.g. 3D head movements, we will consider adding them to the code base.

      For now we believe proper 3D motion is out of scope for RatInABox. Calculating motion near walls is already surprisingly complex, and doing this in 3D would be challenging. Furthermore, all cell classes would need to be rewritten too. This would be a large undertaking, probably requiring rewriting the package from scratch or making a new package RatInABox3D (BatInABox?) altogether, something which we don't intend to undertake right now. One option, if users really needed 3D trajectory data, would be to simulate a 2D Environment (X,Y) and a 1D Environment (Z) independently. With this method (X,Y) and (Z) motion would be entirely independent, which is of course unrealistic but, depending on the use case, may well be sufficient.

      Alternatively, as you said that many agents effectively navigate in 2D but show complex 3D head and other body movements, RatInABox could interface with and feed data downstream to other software (for example Mujoco[11]) which specialises in joint/muscle modelling. This would be a very legitimate use-case for RatInABox.

      We’ve flagged all of these assumptions and limitations in a new body of text added to the discussion:

      “Our package is not the first to model neural data[37, 38, 39] or spatial behaviour[40, 41], yet it distinguishes itself by integrating these two aspects within a unified, lightweight framework. The modelling approach employed by RatInABox involves certain assumptions:

      1. It does not engage in the detailed exploration of biophysical[37, 39] or biochemical[38] aspects of neural modelling, nor does it delve into the mechanical intricacies of joint and muscle modelling[40, 41]. While these elements are crucial in specific scenarios, they demand substantial computational resources and become less pertinent in studies focused on higher-level questions about behaviour and neural representations.

      2. A focus of our package is modelling experimental paradigms commonly used to study spatially modulated neural activity and behaviour in rodents. Consequently, environments are currently restricted to being two-dimensional and planar, precluding the exploration of three-dimensional settings. However, in principle, these limitations can be relaxed in the future.

      3. RatInABox avoids the oversimplifications commonly found in discrete modelling, predominant in reinforcement learning[22, 23], which we believe impede its relevance to neuroscience.

      4. Currently, inputs from different sensory modalities, such as vision or olfaction, are not explicitly considered. Instead, sensory input is represented implicitly through efficient allocentric or egocentric representations. If necessary, one could use the RatInABox API in conjunction with a third-party computer graphics engine to circumvent this limitation.

      5. Finally, focus has been given to generating synthetic data from steady-state systems. Hence, by default, agents and neurons do not explicitly include learning, plasticity or adaptation. Nevertheless, we have shown that a minimal set of features, such as parameterised function-approximator neurons and policy control, enables a variety of experience-driven changes in behaviour and cell responses[42, 43] to be modelled within the framework.”

      • What about other environments that are not "Boxes" as in the name - can the environment only be a Box, what about a circular environment? Or Bat flight? This also has implications for the velocity of the agent, etc. What are the parameters for the motion model to simulate a bat, which likely has a higher velocity than a rat?

      Thank you for this question. Since the initial submission of this manuscript RatInABox has been upgraded and environments have become substantially more “general”. Environments can now be of arbitrary shape (including circular), boundaries can be curved, they can contain holes and can also contain objects (0-dimensional points which act as visual cues). A few examples are showcased in the updated figure 1 panel e.

      Whilst we don't know the exact parameters for bat flight, users could fairly straightforwardly figure these out themselves and set them using the motion parameters, as shown in the table below. We would guess that bats have a higher average speed (speed_mean) and a longer decoherence time due to increased inertia (speed_coherence_time), so the following code might roughly simulate a bat flying around in a 10 x 10 m environment. Author response image 1 shows all Agent parameters which can be set to vary the random motion model.
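      A sketch of what such code might look like; the speed values are illustrative guesses rather than fitted bat statistics, and parameter names follow the Agent params table referenced above:

      ```python
      from ratinabox import Environment, Agent

      # A 10 x 10 m arena ("scale" sets the Environment's extent in metres).
      env = Environment(params={"scale": 10})

      # Guessed flight-like statistics: faster mean speed, longer coherence time.
      bat = Agent(env, params={
          "speed_mean": 5.0,            # m/s -- illustrative guess
          "speed_coherence_time": 3.0,  # s  -- illustrative guess
      })

      for _ in range(600):
          bat.update(dt=0.1)            # one minute of simulated flight
      ```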

      Author response image 1.

      • Semi-related, the name suggests limitations: why Rat? Why not Agent? (But its a personal choice)

      We came up with the name “RatInABox” when we developed this software to study hippocampal representations of an artificial rat moving around a closed 2D world (a box). We also fitted the random motion model to open-field exploration data from rats. You’re right that it is not limited to rodents but for better or for worse it’s probably too late for a rebrand!

      • A future extension (or now) could be the ability to interface with common trajectory estimation tools; for example, taking in the (X, Y, (Z), time) outputs of animal pose estimation tools (like DeepLabCut or such) would also allow experimentalists to generate neural synthetic data from other sources of real-behavior.

      This is actually already possible via our "Agent.import_trajectory()" method. Users can pass an array of time stamps and an array of positions into the Agent class, which will be loaded and smoothly interpolated along, as shown in Fig. 3a and as demonstrated in two new papers[9, 10] that used RatInABox by loading in behavioural trajectories.
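      A minimal sketch of that workflow; the keyword names follow the method as described here and may differ slightly from the released API, and a clipped random walk stands in for real tracking data so the example is self-contained:

      ```python
      import numpy as np
      from ratinabox import Environment, Agent

      env = Environment()
      agent = Agent(env)

      # Stand-in for (time, position) data exported from e.g. DeepLabCut.
      times = np.arange(0, 60, 0.1)  # seconds
      positions = np.clip(
          0.5 + np.cumsum(np.random.randn(len(times), 2) * 0.01, axis=0), 0.05, 0.95
      )

      agent.import_trajectory(times=times, positions=positions)

      # Subsequent updates interpolate smoothly along the imported trajectory.
      for _ in range(100):
          agent.update()
      ```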

      • What if a place cell is not encoding place but is influenced by reward or encodes a more abstract concept? Should a PlaceCell class inherit from an AbstractPlaceCell class, which could be used for encoding more conceptual spaces? How could their tool support this?

      In fact PlaceCells already inherit from a more abstract class (Neurons) which contains basic infrastructure for initialisation, saving data, and plotting data etc. We prefer the solution that users can write their own cell classes which inherit from Neurons (or PlaceCells if they wish). Then, users need only write a new get_state() method which can be as simple or as complicated as they like. Here are two examples we’ve already made which can be found on the GitHub:

      Author response image 2.

      Phase precession: PhasePrecessingPlaceCells(PlaceCells)[12] inherit from PlaceCells and modulate their firing rate by multiplying it by a phase dependent factor causing them to “phase precess”.

      Splitter cells: Perhaps users wish to model PlaceCells that are modulated by the recent history of the Agent, for example which arm of a figure-8 maze it just came down. This is observed in hippocampal "splitter cells". In this demo[1], SplitterCells(PlaceCells) inherit from PlaceCells and modulate their firing rate according to which arm was last travelled along.
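      In the same spirit, a hypothetical user-defined class (not one of the shipped examples), showing that only get_state() needs writing. This assumes the Neurons API as described in the demos: a self.Agent attribute, a params dict with "n", and a rate array returned by get_state().

      ```python
      import numpy as np
      from ratinabox import Environment, Agent
      from ratinabox.Neurons import Neurons

      class GoalDistanceCells(Neurons):
          """Hypothetical cells whose firing rate decays with distance to a goal."""

          def __init__(self, agent, params={}):
              params = dict(params)  # don't mutate the caller's dict
              self.goal = np.array(params.pop("goal", [0.5, 0.5]))
              params.setdefault("n", 1)
              super().__init__(agent, params)

          def get_state(self, evaluate_at="agent", **kwargs):
              # Rate evaluated at the Agent's position (other evaluation modes,
              # e.g. a grid of positions for rate maps, are omitted here).
              pos = self.Agent.pos if evaluate_at == "agent" else kwargs["pos"]
              d = np.linalg.norm(np.atleast_2d(pos) - self.goal, axis=-1)
              return np.exp(-(d ** 2) / (2 * 0.1 ** 2)).reshape(self.n, -1)

      env = Environment()
      agent = Agent(env)
      cells = GoalDistanceCells(agent, params={"goal": [0.8, 0.2]})
      for _ in range(100):
          agent.update()
          cells.update()
      ```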

      • This a bit odd in the Discussion: "If there is a small contribution you would like to make, please open a pull request. If there is a larger contribution you are considering, please contact the corresponding author3" This should be left to the repo contribution guide, which ideally shows people how to contribute and your expectations (code formatting guide, how to use git, etc). Also this can be very off-putting to new contributors: what is small? What is big? we suggest use more inclusive language.

      We’ve removed this line and left it to the GitHub repository to describe how contributions can be made.

      • Could you expand on the run time for BoundaryVectorCells, namely, for how long of an exploration period? We found it was on the order of 1 min to simulate 30 min of exploration (which is of course fast, but mentioning relative times would be useful).

Absolutely. How long it takes to simulate BoundaryVectorCells will depend on the discretisation timestep and how many neurons you simulate. Assuming you used the default values (dt = 0.1, n = 10), the motion model should dominate compute time. This is evident from our analysis in Figure 3f, which shows that the update time for n = 100 BVCs is on par with that of the random motion model; for only n = 10 BVCs, then, the motion model should dominate.

So how long should this take? Fig. 3f shows the motion model takes ~10⁻³ s per update. One hour of simulation corresponds to 3600/dt = 36,000 updates, which would therefore take about 36,000 × 10⁻³ s = 36 seconds. So your estimate of 1 minute is in the right ballpark and consistent with the data we show in the paper.
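For readers who want to check this on their own hardware, a rough timing sketch (assuming default parameters) would be:

```python
import time
from ratinabox import Environment, Agent
from ratinabox.Neurons import BoundaryVectorCells

Env = Environment()
Ag = Agent(Env)
BVCs = BoundaryVectorCells(Ag, params={"n": 10})

n_updates = 36_000  # one hour of simulated time at dt = 0.1 s
start = time.time()
for _ in range(n_updates):
    Ag.update(dt=0.1)
    BVCs.update()
print(f"1 h of simulated exploration took {time.time() - start:.1f} s of wall-clock time")
```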

Interestingly, this estimate corroborates the results in a new inset panel in which we calculated the total time for cell and motion model updates for a PlaceCell population of increasing size (from n = 10 to 1,000,000 cells). It shows that the motion model dominates compute time up to approximately n = 1000 PlaceCells (for BoundaryVectorCells it is probably closer to n = 100), beyond which cell updates dominate and the total time scales linearly.

      These are useful and non-trivial insights as they tell us that the RatInABox neuron models are quite efficient relative to the RatInABox random motion model (something we hope to optimise further down the line). We’ve added the following sentence to the results:

“Our testing (Fig. 3f, inset) reveals that the combined time for updating the motion model and a population of PlaceCells scales sublinearly, approximately O(1), for small populations (n < 1000), where updating the random motion model dominates compute time, and linearly for large populations (n > 1000). PlaceCells, BoundaryVectorCells and the Agent motion model update times will be additionally affected by the number of walls/barriers in the Environment. 1D simulations are significantly quicker than 2D simulations due to the reduced computational load of the 1D geometry.”

      And this sentence to section 2:

“RatInABox is fundamentally continuous in space and time. Position and velocity are never discretised but are instead stored as continuous values and used to determine cell activity online, as exploration occurs. This differs from other models which are either discrete (e.g. “gridworld” or Markov decision processes) or approximate continuous rate maps using a cached list of rates precalculated on a discretised grid of locations. Modelling time and space continuously more accurately reflects real-world physics, making simulations smooth and amenable to fast or dynamic neural processes which are not well accommodated by discretised motion simulators. Despite this, RatInABox is still fast; simulating 100 PlaceCells for 10 minutes of random 2D motion (dt = 0.1 s) takes about 2 seconds on a consumer-grade CPU laptop (or 7 seconds for BoundaryVectorCells).”

Whilst this would be very interesting, it would likely represent quite a significant edit, requiring a rewrite of almost all the geometry-handling code. We’re happy to consider changes like these according to (i) how simple they would be to implement, (ii) how disruptive they would be to the existing API, and (iii) how many users would benefit from the change. If many users of the package request this, we will consider ways to support it.

      • In general, the set of default parameters might want to be included in the main text (vs in the supplement).

We also considered this but decided to leave them in the methods for now. The exact values of these parameters are subject to change in future versions of the software. Also, we’d prefer the main text to provide a low-detail, high-level description of the software, with the methods providing a place for keen readers to dive into the mathematical and coding specifics.

      • It still says you can only simulate 4 velocity or head directions, which might be limiting.

      Thanks for catching this. This constraint has been relaxed. Users can now simulate an arbitrary number of head direction cells with arbitrary tuning directions and tuning widths. The methods have been adjusted to reflect this (see section 6.3.4).

      • The code license should be mentioned in the Methods.

      We have added the following section to the methods:

6.6 License RatInABox is currently distributed under an MIT License, meaning users are permitted to use, copy, modify, merge, publish, distribute, sublicense and sell copies of the software.

    1. Author Response

      The following is the authors’ response to the original reviews.

      eLife assessment

This important study reports an investigation of the dynamics of PKA at the single-cell level in vitro and in epithelia in vivo. Using different fluorescent biosensors and optogenetic actuators, the authors dissect the signaling pathway responsible for PKA waves, finding that PKA activation is a consequence of PGE2 release, which in turn is triggered by calcium pulses and requires high ERK activity. The evidence supporting the claims is solid. At this stage the work is still partly descriptive in nature, and additional measurements would increase the strength of the mechanistic insights and physiological relevance.

      We deeply appreciate Dr. Alejandro San Martín and Dr. Jonathan Cooper and the reviewers. Each comment is valuable and reasonable. We will revise our paper as much as possible.

We have described our planned responses to the reviewers’ comments one by one in the section below.

      Reviewer #1 (Recommendations For The Authors):

1. Even though the phenomenon of PGE2 signal propagation is elegantly demonstrated and well described, the whole paper is mostly of a descriptive nature - the PGE2 signal is propagated via intercellular communication and requires Ca transients as well as MAPK activity; however, the function of these RSPAs in dense epithelium is not taken into consideration. What is the function of these RSPAs in cellular crowding? Does it promote cell survival or initiate apoptosis? Does it feed into epithelial reorganization during cellular crowding? Still something else? The authors discuss possible roles of this phenomenon in the cell competition context, but show no experimental or statistical efforts to answer this question. I believe some additional analysis or a simple experiment would help to shed some light on the functional aspect of RSPAs and increase the importance of all the elegant demonstrations and precise experimental setups that the manuscript is rich in. Monolayer experiments using perturbations that challenge the steady state of epithelial homeostasis - drug treatments/serum deprivation/osmotic stress - combined with live-cell imaging and statistical methods that take into account local cell density might provide important answers to these questions. The authors could consider following some of these ideas to improve the overall value of the manuscript.

We would like to thank the reviewer for this comment. Although we have tried intensively to identify the physiological relevance of RSPA, we have not been able to detect a function at present.

In the case of MDCK cells, treatment with NSAIDs, which abolishes RSPA, did not affect cell growth, ERK wave propagation during collective migration, migration velocity, cell survival, or apoptosis. In mouse epidermis, the frequency of RSPA was not affected by inflammation or collective cell migration, evoked by TPA treatment and wounding, respectively.

Notably, RSPA also occurs in the normal epidermis, implying its relevance to homeostasis. Nevertheless, at the current stage, we believe that the PGE2 dynamics and their regulatory mechanism in the normal epidermis are worth reporting to researchers in the field.

1. In lines 82-84, the authors claim: "We found that the pattern of cAMP concentration change is very similar to the activity change of PKA, indicating that a Gs protein-coupled receptor (GsPCR) mediates RSPA". In our opinion, this conclusion is not well supported by the results. The authors should at least show that some measurements of the two patterns show correlation. Are the patterns of cAMP of the same size as the patterns of PKA? Do they have the same size dependence on cell density? Do they occur at the same frequency as the PKA patterns, depending on the cell density? Do they have all-or-nothing activation like PKA, or does their activation shade with distance from the source?

We have modified the text (line 85):

“Although the increment of the FRET ratio was not as remarkable as that of Booster-PKA, we found that the pattern of cAMP concentration change is very similar to the activity change of PKA, indicating that a Gs protein-coupled receptor (GsPCR) mediates RSPA. This discrepancy may be partially explained by the difference in the dynamic ranges for cAMP signaling of each FRET biosensor (Watabe 2020).”

1. In general, the absolute radius of the waves is not a good measurement for single-cell biology studies, especially when comparing different densities or in vivo vs in vitro experiments. We suggest the authors add the measurement of the number of cells involved in the waves (or the radius expressed in number of cells).

We appreciate the reviewer’s comment. We have analyzed our results to show the number of cells involved, as in Fig. 2E, which should be easier for readers to understand.

1. In 6D, the authors should also show the single-cell trajectories to understand better the correlation between PKA and ERK peaks. Is the large variability in ERK activity ratio due to different peak times or different ERK activity levels in different cells? The authors should show both the variability in time and in intensity.

      We have added a few representative results as Fig. S4.

1. In lines 130-132, the authors write, "This observation indicates that the amount of PGE2 secretion is predetermined and that there is a threshold of the cytoplasmic calcium concentration for the triggered PGE2 secretion". How could the authors exclude that the amount of PGE2 is also regulated in its intensity? There is certainly a threshold effect regarding calcium, but this does not mean that PGE2 secretion cannot be further regulated, e.g., by further increasing calcium concentration or by other mechanisms.

      We agree with the reviewer’s comment. We have modified the text.

      1. The manuscript shows that not all calcium transients are followed by RSPAs. Does the local cell density/crowding increase the probability of overlap between calcium transients and RSPAs?

We appreciate the reviewer’s comment. We had also hypothesized this. However, we did not see the correlation that the reviewer points out. Currently, the increased frequency of RSPA at high density is partially explained by the increased frequency of calcium transients.

      Reviewer #2 (Recommendations For The Authors):

      1. The work is hardly conclusive as to the actual biological significance of the phenomenon. It would be interesting to know more under which physiological and pathological conditions PGE2 triggers such radial PKA activity changes. It is not well explained in which tissues and organs and under what conditions this type of cell-to-cell communication could be particularly important.

      The greatest weakness of the study seems to be that the biological significance of the phenomenon is not clearly clarified. Although it can be deduced that PKA activation has many implications for cell signaling and metabolism, the work lacks the actual link to physiological or pathological significance.

We deeply appreciate the reviewer’s comment. As in our response to Reviewer #1: although we have tried intensively to identify the physiological relevance of RSPA, we have not been able to detect a function.

On the other hand, we believe that the PGE2 dynamics and their regulatory mechanism in the normal epidermis are worth reporting to researchers in the field.

      1. The authors do not explain further why in certain cells of the cell clusters Ca2+ signals occur spontaneously and thus trigger the phenomenon. What triggers these Ca2+ changes? And why could this be linked to certain cell functions and functional changes?

At this moment, we do not have a clear answer or model for this comment, although calcium transients have been reported in the epidermis (https://doi.org/10.1038/s41598-018-24899-7). Further studies are needed, and we will pursue this issue as a next project.

1. What explains the radius and the time span of the radial signal continuation? To what extent are these factors also related to the degradation of PGE2? The work would be stronger if such questions and their answers were experimentally integrated and discussed.

We agree with the reviewer’s comment. Although we have studied this point intensively, we omitted the results because of their complexity. In HeLa cells, but not MDCK cells, we demonstrate the meaning of the radius of RSPA (https://pubmed.ncbi.nlm.nih.gov/37813623/).

      1. The authors could consider whether they could investigate the subcellular translocation of cPLA2 in correlation with cytosolic Ca2+ signals using GFP technology and high-resolution fluorescence microscopy with their cell model.

Actually, we tried to monitor cPLA2 translocation using GFP-tagged cPLA2. However, the translocation of GFP-cPLA2 was detected only when the cells were stimulated with a calcium ionophore. At this point, we have concluded that quantitative analysis of cPLA2 translocation would be difficult.

      Reviewer #3 (Recommendations For The Authors):

      1. "The cell density in the basal layer is approximately 2x106 cells cm-2, which is markedly higher than that in MDCK cells (Fig. 2D). It is not clear whether this may be related to the lower frequency (~300 cm-2 h-1) and smaller radius of RSPA in the basal layer cells compared to MDCK cells (Fig. 2E)." Wasn't the relationship with cell density the opposite, higher density higher frequency? Isn't then this result contradicting the "cell density rule" that the authors argue is there in the in vitro system? The authors need to revise their interpretation of the data obtained.

We agree with the reviewer’s comment. Currently, we do not find a "cell density rule" in mouse epidermis, and it would be difficult to identify rules common to mouse epidermis and MDCK cells. However, although descriptive, we believe the comparison with the MDCK results is worthwhile at this moment.

      1. Similarly, the authors over conclude on the explanation of lack of change in the size of RSPA size when the change in fluorescence for the calcium reporter surpasses a threshold by saying that "This observation indicates that the amount of PGE2 secretion is predetermined and that there is a threshold of the cytoplasmic calcium concentration for the triggered PGE2 secretion." First, the study does not really measure directly PGE2 secretion. Hence, there is no way that they can argue that the level of PGE2 secreted is "predetermined". Instead, there could be an inhibitory mechanism that is triggered to limit further activation of PGE2 signaling/PKA in neighboring cells.

We agree with the reviewer’s comment and have removed this statement from the text.

1. To rule out a transcription-dependent mechanism in the apparent cell density-regulated sensitivity to PGE2, the authors need to inhibit transcription.

We agree that our RNA-seq analysis does not 100% rule out a transcription-dependent mechanism. However, we believe that shutting down all transcription would have severe off-target effects, indirectly affecting the calcium transients and the PGE2-synthetase pathway. Therefore, our conclusion is limited.

      4) EGF is reported to increase the frequency of RSPA but the change shown in Fig. 6F is not statistically significant, hence, EGF does not increase RSPA frequency in their experiments.

We have toned down the claim that EGF treatment increases the frequency (line 172).

      "Accordingly, the addition of EGF faintly increased the frequency of RSPA in our experiments, while the MEK and EGFR inhibitors almost completely abrogated RSPA (Fig. 6F), representing that ERK activation or basal ERK activity is essential for RSPA.“

      1. The Discussion section is at times redundant with the results section. References to figures should be kept in the Results section.

We would respectfully disagree with this comment. We believe that references to figures in the Discussion are helpful to readers. However, if eLife recommends removing references from the Discussion section, we will follow the publication policy.

      1. "Notably, the propagation of PKA activation, ~100 μm/min (Fig. 1H), is markedly faster than that of ERK activation, 2-4 μm/min (Hiratsuka et al., 2015)." The 2 kinase reporters are based on different molecular designs. Thus, it does not seem appropriate to compare the kinetics of both reporters as a proxy of the comparison of the kinetics of propagation of both kinases.

We think the comparison of activity propagation between ERK and PKA is worth discussing. First, among the many protein kinases, only ERK and PKA activities have been shown to spread across epithelial cells. Second, both pathways are considered to mediate intercellular communication. Finally, crosstalk between these two pathways has been reported in several cell types and organs.

      1. In Figure 1E it is unclear what is significantly different from what. Statistical analysis should be added and reporting of the results should reflect the results from that analysis.

2. In Figure 3F and G the color coding is confusing: in F, pink is radius and black is GCaMP6, while in G the colors indicate RSPA+ and RSPA- cells. The authors should change the colors to avoid ambiguity in the code.

      We have amended the panels.

      1. In Fig. 5C, how do they normalize per cell density if they are measuring radius of the response?

In Fig. 5C, we simply measured the increment of the FRET ratio over the fields of view.

      1. In Fig. 5D, what is the point of having a label for PTGER3 if data were not determined (ND)?

      We have added what N.D. means.

      “N.D. represents Not Detected.”

1. It is important to assess whether ERK activation depends on PGE2 signaling to better place ERK in the proposed signaling pathway. In fact, the authors argue that "ERK had a direct effect on the production of PGE2." But it could be that ERK is downstream of PGE2 signaling instead.

It could be possible in other experimental conditions via the EP1 and/or EP3 pathways. However, we never detected an effect of RSPA on ERK activity in our imaging analyses. In addition, treatment with NSAIDs or COX-2 depletion, which completely abolishes RSPA, did not affect ERK wave propagation. Thus, in our context, we conclude that ERK is not downstream of PGE2. This notion is also supported by the NGS results in Fig. 5D.

      We have refrained from discussing the pathway of PGE2-dependent ERK activation because it would be redundant.

      1. The authors need to explain better what they mean by "AND gate" if they want to reach a broad readership like that of eLife

We have modified the legend to explain the “AND gate” as clearly as possible (line 639).

      “Figure 7: Models for PGE2 secretion.

The frequency of calcium transients depends on cell density, whereas the ERK activation wave is present under both conditions. Because both a calcium transient and ERK activation are required for RSPA, the probability of PGE2 secretion is regulated as an “AND gate”.”
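As a purely illustrative way to state this logic (a toy sketch, not a quantitative model from the paper):

```python
def pge2_secreted(calcium_transient: bool, erk_active: bool) -> bool:
    # RSPA requires BOTH a calcium transient AND ERK activity ("AND gate")
    return calcium_transient and erk_active

assert pge2_secreted(True, True)        # both inputs present -> secretion
assert not pge2_secreted(True, False)   # a calcium transient alone is not sufficient
assert not pge2_secreted(False, True)   # ERK activity alone is not sufficient
```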

      1. In Fig. 5D, "The average intensity of the whole view field of mKate2 or mKOκ, at 20 to 30 min after the addition of PGE2, was applied to calculate the mKate2/mKOκ ratio." But this means that overlapping/densely plated cells in high density will show stronger changes in fluorescence. This should be done per cell not per field of view. It is obvious that the higher density will have more dense/brighter signal in a given field of view.

We are sorry for the confusion. Cell density does not affect the FRET ratio, although the brightness may change; a typical example is Fig. 1D. Thus, we are confident that our procedure reports the PKA activity of the plated cells.

      1. In Fig. 6B the authors need to explain how were the "randomly set positions" determined.

We have modified the legend section as below (line 618).

      “The ERK activities within 10 µm from the center of RSPA and within 10 µm from randomly set positions with a random number table generated by Python are plotted in the left panel. Each colored dot represents an average value of an independent experiment.”

      1. Sentences 314-318 are repeated in 318-322.

We deeply appreciate the reviewer’s comment and have amended the text.

    1. Writing this type of program would be, in my opinion, an imprudent use of my time. The fact that an LLM can do it for me certainly speeds up my ability to produce code in a rather direct way: I can focus on the things that matter.

      That's the promise, isn't it?

    1. Author Response

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

In this study, single neurons were recorded, using tetrodes, from the parahippocampal cortex of 5 rats navigating a double-Y maze (in which each arm of a Y-maze forks again). The goal was located at any one of the 4 branch terminations, and rats were given partial information in the form of a light cue that indicated whether the reward was on the right or left side of the maze. The second decision point was uncued and the rat had no way of knowing which of the two branches was correct, so this phase of the task was more akin to foraging. Following the outbound journey, with or without reward, the rat had to return (inbound journey) to the start of the maze to begin again.

      Neuronal activity was assessed for correlations with multiple navigation-relevant variables including location, head direction, speed, reward side, and goal location. The main finding is that a high proportion of neurons showed an increase in firing rate when the animal made a wrong turn at the first branch point (the one in which the correct decision was signalled). This increase, which the authors call rate remapping, persisted throughout the inbound journey as well. It was also found that head direction neurons (assessed by recording in an open field arena) in the same location in the room were more likely to show the rate change. The overall conclusion is that "during goal-directed navigation, parahippocampal neurons encode error information reflective of an animal's behavioral performance" or are "nodes in the transmission of behaviorally relevant variables during goal-directed navigation."

      Overall I think this is a well-conducted study investigating an important class of neural representation: namely, the substrate for spatial orientation and navigation. The analyses are very sophisticated - possibly a little too much so, as the basic findings are relatively straightforward and the analyses take quite a bit of work to understand. A difficulty with the study is that it was exploratory (observational) rather than hypothesis-driven. Thus, the findings reveal correlations in the data but do not allow us to infer causal relationships.

We would like to clarify that this report consists of hypothesis-driven experiments, with post-hoc exploratory analyses. We have now made the hypotheses more explicit in the text, and pointed out that the follow-up analyses were performed to understand how these effects came to be. We thank the reviewer for pointing out that our hypotheses were not explicit in the introduction. We believe our results open the door for investigating the causal role of these regions in the propagation or generation of error signals during navigational behavior. Those types of experiments are, however, outside the scope of the current work.

      That said, the observation of increased firing in a subset of neurons following an erroneous choice is potentially interesting. However, the effect seems small. What were the actual firing rate values in Hz, and what was the effect size?

We thank the reviewer for the opportunity to clarify the effect-size question. As there are multiple neurons in the analyses, differences in firing rate necessarily need to be normalized by a neuron's mean activity. For example, a difference of 1 spk/s is less meaningful when a neuron's base rate is 50 spk/s than when it is 10 spk/s. Furthermore, our reports are for population-level analyses, at which point comparing raw firing rates and differences becomes more challenging. Nonetheless, we are including these raw metrics in two new supplemental figures (Figure 2 - figure supplement 4,5), where firing-rate changes in individual neurons can be up to 15 spk/s. Additionally, the patterns and statistical results observed in the main text are preserved, with outbound Right Cue minus Left Cue showing a left>stem>right pattern (indicating error coding), and RW minus NRW showing negative values across all segments, indicating NRW>RW, i.e., higher activity on inbound unrewarded trials. Statistics follow the corresponding main-text results (Cue: segment LRT = 71.70; RW: segment LRT = 45.80).
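One standard normalization of this kind (shown here for illustration; the manuscript's exact metric may differ) divides the rate difference by the mean rate across conditions:

$$\Delta r_{\text{norm}} = \frac{r_{\text{incorrect}} - r_{\text{correct}}}{\tfrac{1}{2}\,(r_{\text{incorrect}} + r_{\text{correct}})}$$

so that a 1 spk/s difference on a ~10 spk/s base rate (Δr ≈ 0.1) counts for more than the same difference on a ~50 spk/s base rate (Δr ≈ 0.02).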

      I also feel we are lacking information about the underlying behavior that accompanies these firing rate effects. The authors say "one possibility is that the head-direction signal in the parahippocampal region reflects a behavioral state related to the navigational choice or the lack of commitment to a particular navigational route" which is a good thought and raises the possibility that on error trials, rats are more uncertain and turn their heads more (vicarious trial and error) and thus sample the preferred firing direction more thoroughly. Another possibility is that they run more slowly, which is associated with a higher firing rate in these cells. I think we, therefore, need a better understanding of how behavior differed between error trials in terms of running speed, directional sampling, etc.

      In terms of running speed, there was a small effect of mean running speed between correct and incorrect trials (across subjects LMEM: Cue correct>incorrect Z=2.3, p=0.02; RW Z=2.15, p=0.03). In most neurons, increases in speed will be accompanied by increases in firing rate. Thus, the differences in running speed cannot explain the observed results, as higher speed during correct trials would predict higher activity, which is the opposite of what we found.

      A few good, convincing raw-data plots showing a remapping neuron on an error trial and a correct trial on the same arm would also be helpful (the spike plots were too tiny to get a good sense of this: fewer, larger ones would be more helpful).

      Additional plots for individual units have been added, Figure 2 - figure supplement 3.

It would be useful to know at what point the elevated response returned to baseline - was it when the next trial began, and was the drop gradual (suggesting perhaps a more neurohumoral response) or sudden?

Due to the experimental design, this question cannot be addressed fully. Concretely, error trials incur a time penalty in which the rats need to wait an additional 10 seconds before the next trial, while a new trial would start immediately when the animal nose-poked the home well after a correct trial. Nonetheless, the data on Reward provide insight into this question. The magnitude of the responses on the left and right segments of the maze was larger than on the stem for Unrewarded (NRW) vs Rewarded (RW) trials on inbound trajectories (Fig. 4c). This suggests that while activity is still elevated post-error throughout the maze, across units this effect is smaller on the stem segment. Additionally, the analyses indicate that in the transition from outbound to inbound trajectories (Figure 4 - figure supplement 3), activity patterns are sustained (incorrect>correct). Together, these results indicate that the elevated "error-like" signals are slow to return to baseline.

      Reviewer #2 (Public Review):

      This work recorded neurons in the parahippocampal regions of the medial entorhinal cortex (MEC) and pre- and para-subiculum (PrS, PaS) during a visually guided navigation task on a 'tree maze'. They found that many of the neurons reflected in their firing the visual cue (or the associated correct behavioral choice of the animal) and also the absence of reward in inbound passes (with increased firing rate). Rate remapping explained best these firing rate changes in both conditions for those cells that exhibited place-related firing. This work used a novel task, and the increased firing rate at error trials in these regions is also novel. The limitation is that cells in these regions were analyzed together.

We acknowledge this limitation of our study, and we believe there might be interesting differences between these regions. Unfortunately, the post-mortem extraction of the recording implant micro-drive used for these experiments generated too much tissue damage for exact localization of the tetrodes. Nonetheless, given that the patterns were observed in all subjects, we are confident that at least the major finding of "error-like" signaling is present across the parahippocampal regions. At the same time, the distributions of functional cell types as defined in the open field differ across the PaS, PrS and MEC, leaving open the possibility of a more nuanced error-coding scheme by region.

      Reviewer #3 (Public Review):

      The authors set out to explore how neurons in the rodent parahippocampal area code for environmental and behavioral variables in a complex goal-directed task. The task required animals to learn the association between a cue and a spatial response and to use this information to guide behavior flexibly on a trial-by-trial basis. The authors then used a series of sophisticated analytical techniques to examine how neurons in this area encode spatial location, task-relevant cues, and correct vs. incorrect responding. While these questions have been addressed in studies of hippocampal place cells, these questions have not been addressed in these upstream parahippocampal areas.

      Strengths:

      1) The study presents data from ensembles of simultaneously recorded neurons in the parahippocampal region. The authors use a sophisticated method for ensuring they are not recording from the same neurons in multiple sessions and yet still report impressive sample sizes.

      2) The use of the complex behavioral task guards against stereotyped behavior as rats need to continually pay attention to the relevant cue to guide behavior. The task is also quite difficult ensuring rats do not reach a ceiling level of performance which allows the authors to examine correct and incorrect trials and how spatial representations differ between them.

      3) The authors take the unusual approach of not pre-processing the data to group neurons into categories based on the type of spatial information that they represent. This guards against preconceived assumptions as to how certain populations of neurons encode information.

      4) The sophisticated analytical tools used throughout the manuscript allow the authors to examine spatial representations relative to a series of models of information processing.

      5) The most interesting finding is that neurons in this region respond to situations where rewards are not received by increasing their firing rates. This error or mismatch signal is most commonly associated with regions of the basal ganglia and so this finding will be of particular interest to the field.

      Weaknesses:

      1) The histological verification of electrode position is poor and while this is acknowledged by the authors it does limit the ability to interpret these data. Recent advances have enabled researchers to look at very specific classes of neurons within traditionally defined anatomical regions and examine their interactions with well-defined targets in other parts of the brain. The lack of specificity here means that the authors have had to group MEC, PaS, and PrS into a functional group; the parahippocampus. Their primary aim is then to examine these neurons as a functional group. Given that we know that neurons in these areas differ in significant ways, there is not a strong argument for doing this.

      See response to Reviewer 2.

      2) The analytical/statistical tools used are very impressive but beyond the understanding of many readers. This limits the reader's ability to understand these data in reference to the rest of the literature. There are lots of places where this applies but I will describe one specific example. As noted above the authors use a complex method to examine whether neurons are recorded on multiple consecutive occasions. This is commendable as many studies in the field do not address this issue at all and it can have a major impact as analyses of multiple samples of the same neurons are often treated as if they were independent. However, there is no illustration of the outputs of this method. It would be good to see some examples of recordings that this method classifies as clearly different across days and those which are not. Some reference to previously used methods would also help the reader understand how this new method relates to those used previously.

We have added an additional supplemental figure (Figure 7 - figure supplement 1) that showcases the waveform-matching approach. In the original manuscript, Fig. 7a provided an example output of the method.

      3) The effects reported are often subtle, especially at the level of the single neuron. Examples in the figures do not support the interpretations from the population-level analysis very convincingly.

Additional plots for individual units have been added, Figure 2 - figure supplement 3. However, the effects, though small at the single-unit level, are consistent across neurons and subjects.

      The authors largely achieve their aims with an interesting behavioral task that rats perform well but not too well. This allows them to examine memory on a trial-by-trial basis and have sufficient numbers of error trials to examine how spatial representations support memory-guided behavior. They report ensemble recordings from the parahippocampus which allows them to make conclusions about information processing within this region. This aim is relatively weak though given that this collection of areas would not usually be grouped together and treated as a single unitary area. They largely achieve their aim of examining the mechanisms underlying how these neurons code task-relevant factors such as spatial location, cue, and presence of reward. The mismatch or error-induced rate remapping will be a particularly interesting target for future research. It is also likely that the analytical tools used in this study could be used in future studies.

      Reviewer #1 (Recommendations For The Authors):

      1) Typo: "attempted to addresses these challenges"

      We thank the reviewer for pointing out, this has been fixed.

      2) "classified using tuning curve based metrics" - what does "tuning curve" mean in this context?

      We have clarified this sentence in the main text.

      3) "MEC neurons encode time-elapsed" should be "MEC neurons encode time elapsed" (no hyphen)

      We thank the reviewer for pointing out, this has been fixed.

      4) "a phenomenon referred to as 'global remapping'." - I dislike this term because it has two meanings in the literature. On the one hand, it is used to contrast with rate remapping: that is, it refers to a change in the location of place fields. On the other hand, it refers to the remapping of the whole population of cells at once, as contrasted with partial remapping. I suggest calling them location remapping (vs. rate) and complete remapping (vs. partial)

      We agree that this is an overloaded term in the field. We have added 'location remapping' in the intro as a more specific term for global remapping.

      5) " tasks with no trial-to-trial predictability or experimenter-controlled cues and goals in the same environment." - ambiguously worded as it isn't clear whether the "no" refers to one or both of what follows. Also needs a hyphen after experimenter.

      We thank the reviewer for pointing out, this sentence has been reworded for clarity.

      6) " neurons changed their firing activity as a function of cue identity" - this is confounded by behavior and reward contingency, both linked to cue identity, so the statement is slightly misleading.

We thank the reviewer for pointing this out; however, we disagree that this wording is misleading. Neurons changed their activity as a function of cue identity and reward contingencies. Why neurons change their activity in such a manner is a different, more nuanced question, which we addressed through our analyses by converging on the "error"-like signal that these neurons seem to carry.

      7) "remapping" - I am not fully comfortable with the use of this term in this context. It derives from the original reports of change in the firing location of place cells, and the proposal that these cells form a "map" with the change in activity reflecting recruitment of a new map. With observations of rate changes in some place cells, the new term "rate remapping" was introduced, and now the authors are using "rate remapping" to mean firing rate changes in non-spatial cells. The meaning is thus losing its value. "Re-coding" might be slightly better, although we can argue about whether "code" is much better than "map"

      While we agree with the reviewer that "remapping" has been coerced into a grab-all term, these are the accepted semantics in the field. Thus, we are disinclined to change the language.

8) Figure 1 - it would be useful to indicate somehow that one of the decision points was cued and one a free choice with a random outcome - it took me a while to work this out. Also, the choice of colors for the cues needs explaining - my understanding is that rats are very insensitive to these wavelengths. And what does Pse mean? I didn't understand those scatterplots at all.

      The section "Tree-Maze behavior and electrophysiological recordings" under Results go into the details of the task. A reference and additional context for the selection of cues is now included in the "Behavioral Training" methods section. Rats possess dichromatic vision systems. Caption of Figure 1, 2 includes what Pse means, the performance of a subject for a given session. The scatter plots relate remapping to performance.

      9) Also on Figure 1 - in the examples shown, it looks like the animals always checked the two end arms in the same order. Was this position habit typical?

We have added additional context in the "Behavioral Training" methods section. Well-trained rats do exhibit stereotyped behaviors (e.g., going to one well and then the other).

      10) "...we hypothesized that the cue remapping score would be related to a subject's performance in the task." I am struggling to see why this doesn't follow trivially from the observation that remapping occurred on error trials.

We thank the reviewer for pointing out that this could use further clarity. We have added that it is the magnitude of remapping that should relate to performance. To further clarify: remapping does not "occur on error trials"; remapping, as operationally defined in this work, is the difference between spatial maps as a function of Cue identity or Reward contingency. Thus, as a difference metric, remapping arises because there is a difference in activity between correct and incorrect trials. The magnitude of that difference need not relate to how the subject performed on the task.

      11) "With this approach, found that incorrect coding units were more likely to overlap between cue and RW coding units than correct." Missing "we". I didn't understand this sentence - what does "overlap" mean?

      We have added a sentence to further clarify this point.

      12) "We found that incorrect>correct activity levels on outbound trajectories predicted incorrect>correct activity levels on inbound trajectories" - I don't understand how this can be the case given that many of these units were head direction tuned and therefore shouldn't even have been active in both directions.

As seen in Figure 7b, we were able to match 217 units across tasks. Of those, "Cluster 0", with 98 units, showed strong head-direction coding. While "Cluster 0" units showed strong remapping effects, many other units could have contributed to the "incorrect>correct" pattern across (out/in)bound segments. Further, head-direction coding is defined in the open-field environment, and there is no constraint on what these neurons do in the Tree-Maze task.

      13). " Error or mismatch signals conform a fundamental computation" - should be "perform"

Wording slightly changed, but "conform", as in "act in accordance with", is what we intend here.

      14) " provides it with the required stiffness and chemical resistivity"- what does "chemical resistivity" mean? To what chemicals?

      This is mostly in reference to rat waste and cleaning products (alcohol). We changed the wording to durability for simplicity.

      15) Supp Fig 1 shows that behavioral performance was very distinctly different for one of the animals. Was its neural data any different? What happens to the overall effect if this animal is removed from the analysis?

Unless otherwise stated, all analyses were performed using linear mixed-effects models with "subject" as a random effect. Thus, the effects of individual subjects are accounted for.
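For reference, this style of analysis corresponds to something like the following (a sketch using statsmodels with hypothetical column names, not the authors' actual analysis code):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per unit, with its remapping score,
# the maze segment it was scored on, and the subject it came from
df = pd.read_csv("unit_scores.csv")  # columns: uz_score, segment, subject (hypothetical)

# Fixed effect of segment, with a random intercept per subject
model = smf.mixedlm("uz_score ~ segment", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```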

      16) Histology - it would be useful to have a line drawing from the atlas alongside the micrographs to enable easier anatomical understanding.

There was variability in the medial-lateral location of the tetrodes across animals in the histological images provided, and thus we felt that a single line drawing would not be useful, as it would not encompass or apply to all histology photos.

      17) Supp. Fig. 5/6 I didn't understand what Left, Stem, and Right mean at the top. Also, the color keys are too tiny to be noticed

      An additional sentence has been added to the caption to clarify that left, stem, right refer to what segment was selected via the ranking of scores.

      Reviewer #2 (Recommendations For The Authors):

      Was there a particular reason why cells in these regions were analyzed together? Can some of the results be tested for cells of a particular region, especially the MEC? One major limitation of these results is that it is unclear which regions it applies to, e.g., one cannot be certain that data shows here that MEC cells have these firing properties.

      Damage due to the extraction of the recording tetrode bundle was extensive and we were not able to parcelate out individual regions. We have added additional details on this in the "Histology" section of the methods.

It is unclear how many cells in each region are included in each analysis. There is supplementary fig 3, but it is not clear how many of these met the criteria to be included in a given analysis, and it does not differentiate regions. Also, was any of the MUA included in the analyses?

Isolated single units were included in all analyses, but we did not differentiate which region each unit came from. Analyses that include MUA are separate from the main findings and are included in supplemental figures for reference.

      Was the error trial defined as a trial when the animal did not make the right light-guided choice or did it also include cases in which the light-related arm choice was correct, but the animal first went to the unrewarded end arm? Nomenclature in the results is not explained well - what is an unrewarded trial or unrewarded trajectory or an error trial?

      We have added a new paragraph in the methods under Behavioral Training that address trial nomenclature. This methods section is now referenced twice in the initial paragraphs of the results section.

      Were any grid cells included in the data, especially could any cross-matched across the open field and the maze runs?

This was indeed a question of interest to us; however, the number of grid cells recorded was not adequate for meaningful statistical inference. We also sought to avoid tuning-curve-based functional classifications of units.

      In general, the results section is difficult to read, and its accessibility could be improved.

We thank the reviewer for this accessibility point. We hope that the small tweaks resulting from this revision will improve the readability of the manuscript. We tried to state a major takeaway for each result, but the nature of the analyses necessarily makes the text somewhat dense.

      Minor:

      One of the Figure 3f references should be Figure 3g and later, Figure 44 should be corrected.

      We thank the reviewer for noting this, it has been fixed.

      Reviewer #3 (Recommendations For The Authors):

      There are a number of issues that I think could be addressed to improve the manuscript:

      1) The figure could make it clearer where the LED panel is. Are the authors confident the rats see the cue on each trial?

We have added a new supplemental figure to address this question (Figure 1 - figure supplement 1). The new figure shows the 3D geometry of the maze and the location of the cue panel. The rats were able to see the cue; otherwise, task performance would have remained at chance levels.

      2) The same maze has been used in a series of studies of hippocampal place cells by Paul Dudchenko's group. They also went on to examine how these representations are affected in a very similar cued spatial response task. These studies should be acknowledged.

      We thank the reviewer for pointing out this oversight. We have added the Ainge et al. citation ( https://doi.org/10.1523/JNEUROSCI.2011-07.2007) when first introducing the maze and in the methods.

      3) In a number of supplementary figures, the authors present neurons that are selective for different properties such as segment, cue, reward, and direction. However, the number of spatially and cue-selective cells and the criteria by which cells are designated as selective are not reported. The analyses of spatial remapping and response to cues are done at the population level so I'm not sure how these cells are classified or selected for the figures.

The procedure for selection is included in the figure captions. Each unit is ranked based on the Uz score by segment, as originally shown in Figures 2 and 4.

      4) Related to this, the example cells on the figures do not clearly represent the effects presented. For example, given the title of Figure 2, I assume that the cells in 2B significantly remap. However, they don't look like they remap - the cells in the top row show rate remapping in one segment of the maze while the cells in the bottom do not show clear rate remapping responses. I suspect that traditional rate map-based analyses using maps based on consistently sized pixels rather than large segments would show only very modest changes in correlations or rates across these different types of trials. It is important to report the findings in this way as the authors interpret their data relative to the rate-remapping studies which have used these analyses. Readers who do not have the time or expertise to examine the methods in detail will conclude that the effects reported here are the same as previous rate remapping studies which the examples suggest is not the case.

Additional plots for individual units have been added to the supplement, Figure 2 - figure supplement 3. However, the effects, though small at the single-unit level, are consistent across neurons and subjects (Figure 2 - figure supplement 5).

      5) Why is there a bias on the stem in 2C? This is of similar size to the effect on the right size and so deserves discussion.

The analysis in question is the across-unit bias in cue coding by maze segment. The left segment shows elevated Right-Cue coding, while the right segment shows elevated Left-Cue coding. There was one reported statistical result: the main effect of segment in the linear mixed-effects model. We expand this result in the following two ways:

      1. Individual statistical results by segment

a. Left Segment (Uz Coef. Estimate = 0.5, CI95%=[0.26, 0.75]; p<1e-4)

      b. Stem Segment (Uz Coef. Estimate = 0.22, CI95%=[-0.01, 0.47]; p=0.06)

      c. Right Segment (Uz Coef. Estimate = -0.27, CI95%=[-0.51, -0.03], p=0.03)

2. Reporting the joint hypothesis test of left > stem > right by unit.

      a. X2=90.45, p=2.28e-20

      b. The comparison of left>stem by unit:

      i. coefficient estimate = 0.28, CI95%=[0.11, 0.44], p=0.0008

Although the reviewer is correct in pointing out the similarity in effect sizes, the appropriate statistical comparisons within and across units support the stated conclusions. In terms of systematic coding bias, there is a small bias across units (60% of units) and animals (4 out of 5) for the Right Cue. Although interesting, this effect is orthogonal to the comparisons of interest (within-unit differences). To highlight this point, we have added the statistics of the joint hypothesis test of left>stem>right to the main manuscript.

    2. eLife assessment

      In this study, neurons were recorded and combined across the parahippocampal area while rats performed a memory-guided spatial navigation task. Sophisticated analytical tools were used to provide convincing evidence that neuronal populations in these areas show behavior-related changes that might indicate the encoding of errors by the system. The valuable results suggest that rate remapping is a likely mechanism to support changes in representations that support memory-guided behavior in these regions, most interestingly in neurons that code head direction.